MPLS and the Evolving Telecom Landscape

The idea of breaking a data stream or data set into pieces, putting the pieces into packets, and attaching an address to each packet arose early in the history of computer networking, for reasons of scale and economy.  The justification was simple: if a packet carries an address, it can be sent to any recipient on the network, so communication does not require a dedicated connection between sender and recipient.  Such dedicated connections are used for applications such as telephony and fax, which rely on elaborate electronic and mechanical switches to create them.  Connectionless communication, which switches packets rather than circuits, offers many advantages when real-time applications are not involved: simplified hardware, lower cost, greater flexibility, and ease of adding new participants.  Packet switching is thus an alternative to traditional connection-oriented networks that works well for certain applications.  It is, however, “best effort”: packets in the same data stream can take different paths through the network, arriving out of order, at somewhat random times, or not at all.  This has important implications when packet-based networks are used to carry voice or video, something never envisioned when packet switching was originally conceived, and it suggests that reintroducing some form of connection orientation may be useful.

Early History of Packet Switching

In the early days, of course, packet switching ran over slow lines, often just dial-up speed at 300 bits/second, and the available network hardware was also slow.  It was therefore suitable only for low data rate applications such as text email; this was the case in the 1970s.  In the 1980s speeds began to increase, and new technology allowed higher speed communications over traditional phone lines, at least for access.  Dedicated lines became common for network backbones, the first of which belonged to the ARPANet, an experimental network funded by the Department of Defense to see if, and how well, computers could communicate.  The theoretical maximum data rate over ordinary phone lines is roughly 56-64K bits/second, though in practice even that is rarely realized; better coding technology gradually allowed modems to reach speeds in the 50K bits/second range.  This was, however, quite marginal for any type of graphical or video application.  Meanwhile networking hardware was increasing in speed, and fiber optic links were becoming more widely available.  The development of optical amplifiers in the late 1980s enabled dense wavelength division multiplexing (sending information simultaneously over many wavelengths of light on a single fiber), which caused available bandwidth to explode, and by the late 1990s routers that could switch at 10 Gbits/second were available.

By the mid-1980s it was realized that the “old” method of sending packets through a network, which involved searching a large table at each hop to find the next hop (routing), was very inefficient for the network core, where large numbers of packets all followed the same path yet each required a separate routing table search.  So new technologies were developed, the first of which was Frame Relay (FR), in the early 1990s.  These technologies used the idea of “tag” or “label” switching, whereby packets were assigned a special label when they entered the network and were then switched at each hop based on an indexed lookup of that label; the full routing decision had to be made only once for a given path.  If large numbers of packets followed a particular path, all carried the same label, and significant efficiencies were realized, since an indexed lookup in a small table is much faster than searching a large one.  This meant introducing special paths through the network, corresponding to the labels: in effect a return to a type of connection orientation.  Increasing packet network speeds and decreasing hardware costs made the choice to use a single network for all types of communication (voice, video, and data) an obvious one, albeit more difficult in practice than in theory because of the differing requirements of these services.  Voice (or audio) is relatively low bandwidth, but its packets must arrive in order and within a limited time; otherwise the voice on the other end becomes unintelligible, and even relatively short delays of a few hundred milliseconds are very disconcerting.  Video has similar requirements, though most video can tolerate larger delays provided that its packets arrive in the correct order, which allows buffering to smooth over some network problems.  Data packets generally have the fewest constraints and can continue to be handled as “best effort”.
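
To make the efficiency argument concrete, here is a minimal sketch in Python contrasting the two lookups.  The table contents, interface names, and label values are invented for illustration and do not correspond to any real router implementation.

```python
import ipaddress

# Conventional routing: every packet's destination is matched against a
# (potentially large) table using longest-prefix match.
ROUTING_TABLE = {
    ipaddress.ip_network("10.0.0.0/8"): "if-1",
    ipaddress.ip_network("10.1.0.0/16"): "if-2",
    ipaddress.ip_network("192.168.0.0/24"): "if-3",
}

def route_lookup(dst: str) -> str:
    """Longest-prefix match: scan the table and keep the most specific hit."""
    addr = ipaddress.ip_address(dst)
    best = None
    for prefix, next_hop in ROUTING_TABLE.items():
        if addr in prefix and (best is None or prefix.prefixlen > best[0].prefixlen):
            best = (prefix, next_hop)
    if best is None:
        raise LookupError(f"no route to {dst}")
    return best[1]

# Label switching: the ingress router assigns a label once; every subsequent
# hop does a single indexed lookup in a small table keyed by that label.
LABEL_TABLE = {
    17: ("if-1", 42),   # incoming label 17 -> (outgoing interface, outgoing label)
    23: ("if-3", 99),
}

def label_switch(label: int) -> tuple[str, int]:
    """Exact-match (indexed) lookup instead of a search of the whole table."""
    return LABEL_TABLE[label]

print(route_lookup("10.1.2.3"))   # longest-prefix match -> 'if-2'
print(label_switch(17))           # indexed lookup -> ('if-1', 42)
```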

ATM: Technology to Integrate Disparate Types of Traffic

FR was not designed to integrate multiple types of traffic smoothly and efficiently, so a new technology, Asynchronous Transfer Mode (ATM), was developed to facilitate this type of integration.  ATM was designed from the ground up to handle all three types of traffic over a single network.  Given the hardware capabilities of the early 1990s, its packets (called “cells”) were made small enough to be switched in the hardware of the day, chiefly to accommodate voice traffic.  A compromise was reached: 48 bytes of data and 5 bytes of header, for a total of 53 bytes.  ATM networks were also designed to operate at layer 2, meaning that they operate within a single network where the address of each machine is known.  Communication is over virtual circuits (VCs), which are set up in advance so that a cell carrying its assigned label can be quickly switched through the network.  The virtual circuits actually implemented were Permanent Virtual Circuits (PVCs), intended for destinations that communicated regularly with high volumes of traffic.  Circuits that could be quickly set up and torn down, called Switched Virtual Circuits (SVCs), were part of the specification but were never deployed.  ATM did allow for expedited flows of time-critical data, so ATM networks could successfully carry voice, video, and data, something that ordinary IP networks could not do well unless their utilization was very low.
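
As a rough illustration of the 53-byte cell format, the Python sketch below packs the standard UNI header fields (GFC, VPI, VCI, PT, CLP, HEC) in front of a 48-byte payload.  The field widths come from the ATM standard; everything else, including the simplified HEC computation, is illustrative only.

```python
def build_atm_cell(vpi: int, vci: int, payload: bytes,
                   gfc: int = 0, pt: int = 0, clp: int = 0) -> bytes:
    """Assemble a 53-byte cell: 5-byte header + 48-byte payload."""
    if len(payload) != 48:
        raise ValueError("ATM payload must be exactly 48 bytes")
    # Pack GFC(4) | VPI(8) | VCI(16) | PT(3) | CLP(1) into the first 32 bits.
    word = (gfc << 28) | (vpi << 20) | (vci << 4) | (pt << 1) | clp
    header = word.to_bytes(4, "big")
    hec = crc8(header)   # header error control over the first 4 header bytes
    return header + bytes([hec]) + payload

def crc8(data: bytes, poly: int = 0x07) -> int:
    """CRC-8 with polynomial x^8+x^2+x+1, as used for the ATM HEC.
    (The standard additionally XORs the remainder with a fixed pattern,
    omitted here for simplicity.)"""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

cell = build_atm_cell(vpi=1, vci=100, payload=bytes(48))
print(len(cell))   # 53
```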

ATM: Victim of Its Own Success

ATM networks were very successful and widely deployed during the late 1990s and into the early 2000s, though ATM hardware was complex and rather expensive.  However, ATM was slowly being overtaken by events, events catalyzed by its own success in delivering fast packet performance.  First, commercialization of the old ARPANet, now called the Internet, meant growing interest in web pages, i.e., lots of data, so voice traffic, if sent at all, was a smaller and smaller portion of overall network traffic.  Even email could now carry long text, audio, or video attachments.  The result was that the small cell size was becoming an obstacle (tens of thousands of cells could be required for a graphic, even more for video), and the high cost of the hardware made network upgrades very expensive.  Setting up PVCs was complex, and, as a layer 2 technology, one ATM network did not natively know about other ATM networks.  In addition, the interface between ATM networks and IP networks was rather clumsy.  At the same time, the cost of IP-based hardware, such as routers, was falling sharply while its capacity was growing.  Moreover, IP was becoming the technology of choice for more and more applications, and it integrated well with Ethernet (a layer 2 technology).

Rise of MPLS

This led to the idea of doing with IP (a layer 3 technology) some of the same things as ATM, and specifically, of using label switching to get some of ATM’s benefits.  The technology developed to do this was called Multiprotocol Label Switching, or MPLS.  MPLS has proved very capable and flexible, able to do natively many of the things that ATM was designed to do.  Specifically, MPLS has its own version of ATM PVCs, called “Label Switched Paths” (LSPs), which are easier to set up than PVCs and operate at layer 3, so they know about other networks.  LSPs can be used to reserve bandwidth across a network, which makes possible service-level guarantees, essential for traffic such as voice or video.  LSPs also enable Virtual Private Networks (VPNs), which allow secure private networks for an organization to be set up over public networks, such as the Internet, at far lower cost than dedicated links would require.  These VPNs can be rapidly modified to accommodate new users, something that is very slow and expensive for most networks built from dedicated physical links.  MPLS VPNs can operate in two modes: overlay or peer.  In the overlay model, the service provider supplies only dedicated links (LSPs) between locations specified by the customer.  In the peer model, the customer’s routers connect to the nearest service provider router, and the service provider manages the VPN.  MPLS can also provide new types of service, such as Virtual Private Wire Service, or pseudowires, which emulate dedicated layer 2 links, again at far lower cost than actual dedicated links.  Such pseudowires can be set up to carry Time Division Multiplexed (TDM) data streams, either preserving the TDM frame structure (Circuit Emulation Service over Packet network, or CESoP) or ignoring it (Structure-Agnostic TDM over Packet, or SAToP).  By using digitization, pseudowires can even emulate layer 1 (hard-wired) connectivity between sites, i.e., look like an analog connection.
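
For readers who want to see what the label machinery looks like at the bit level, the sketch below encodes the 32-bit MPLS label stack entry defined in RFC 3032 (20-bit label, 3-bit traffic class, 1-bit bottom-of-stack flag, 8-bit TTL) and performs a single label swap of the kind a label switching router does along an LSP.  The forwarding table contents and interface names are invented for illustration.

```python
def encode_label_entry(label: int, tc: int = 0, s: int = 1, ttl: int = 64) -> bytes:
    """Pack one MPLS label stack entry into 4 bytes (RFC 3032 layout)."""
    word = (label << 12) | (tc << 9) | (s << 8) | ttl
    return word.to_bytes(4, "big")

def decode_label_entry(entry: bytes) -> dict:
    """Unpack a 4-byte label stack entry into its fields."""
    word = int.from_bytes(entry, "big")
    return {
        "label": word >> 12,
        "tc": (word >> 9) & 0x7,
        "s": (word >> 8) & 0x1,
        "ttl": word & 0xFF,
    }

# A toy label-forwarding table for one router along an LSP:
# incoming label -> (outgoing interface, outgoing label).
LFIB = {100: ("ge-0/0/1", 200), 101: ("ge-0/0/2", 201)}

def swap(entry: bytes) -> tuple[str, bytes]:
    """Swap the top label and decrement the TTL, as a transit router would."""
    fields = decode_label_entry(entry)
    out_if, out_label = LFIB[fields["label"]]
    new_entry = encode_label_entry(out_label, fields["tc"], fields["s"], fields["ttl"] - 1)
    return out_if, new_entry

entry = encode_label_entry(100, ttl=64)
print(swap(entry))   # ('ge-0/0/1', <entry carrying label 200, TTL 63>)
```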

By 2010 both ATM and FR were in fairly steep decline, relegated to the status of legacy technologies.  MPLS usage, by contrast, continues to grow and new applications for it keep emerging, making it the technology of choice for most large-scale networks.  Nowadays the great buzzword is “cloud computing”, in versions such as Software as a Service (SaaS: Gmail, Google Maps, YouTube, Facebook) and Platform as a Service (PaaS: Amazon Elastic Compute Cloud, Windows Azure, Salesforce).  All of these run over MPLS-based networks.  Now emerging is Infrastructure as a Service (IaaS), which may be enabled by such MPLS functions as pseudowires, reserved bandwidth, and VPNs.  Stay tuned: MPLS will continue to grow in importance!

Editor’s Note: Dr. Tom Fowler, a Principal Member of our Telecommunications Faculty, has 25+ years of engineering, R&D, consulting, and teaching/training experience. He teaches courses on Optical Technologies and IP-Based Networks.  He has worked extensively on the development of large-scale US Government telecom networks and is now involved in planning the next generation of those networks.  He is the author of 100+ articles, papers, and reviews.  He has published a book and translated two books.  He has presented courses and papers in the United States, Canada, Mexico, South America, and Europe. He serves as editor of The Telecommunications Review, a widely respected annual review of trends, issues, and topics in telecommunications.  His PhD in electrical engineering is from The George Washington University; his MSEE is from Columbia University; and his BSEE is from the University of Maryland, where he also completed a BA degree.

 
