Inverse Multiplexing



Introduction

Over the past ten years, local area networks (LANs) have become increasingly popular, to the point that they are now an indispensable part of a company’s infrastructure. LANs have grown not only in popularity but also in size: from networks of only a few computers, most companies have moved to a computer on every desk, each one part of the local network. As LANs have grown in size and in the number of attached computers, they have also seen a large growth in the bandwidth they use; it is not uncommon for a LAN backbone to run at 100 Mbps or even 1 Gbps. This increase in bandwidth is not only a natural result of larger networks, but also a consequence of modern applications that have evolved to require it. In addition, today’s LANs have experienced a convergence of the applications they must support. Local area networks are now being used to transport voice and video traffic alongside the data traffic they have traditionally carried. And voice and video applications not only need more bandwidth, they also need guaranteed levels of service, because they are very sensitive to latency and jitter.

Added to the bandwidth requirements of the LAN are those of the WAN (wide area network). Most medium to large sized companies have enterprise networks consisting of many LANs tied together with various WAN links. However, unlike the LAN infrastructure, which has generally been able to grow to meet these increased bandwidth requirements, most WAN connections run at relatively low data rates. Most companies have only 56 or 64 Kbps links between their LANs, while some larger companies can afford full T1 pipes of 1.544 Mbps. But even at a full T1 data rate, a large gap still exists between the bandwidth available on the LAN and on the WAN. Granted, not all of the traffic on the LAN needs to traverse the WAN link, but in most cases companies have not had the luxury of balancing their local bandwidth needs against their wide area bandwidth needs. Many are forced to make do with less bandwidth than they require because they cannot afford a full T3 pipe (45 Mbps) or because such service is not available to them; others have had to pay the premium rates of a T3 circuit. In some cases, companies have deployed multiple parallel T1 circuits, but this has only added complexity without really providing the increased bandwidth they need. As you can see, enterprise networks face a number of challenges in trying to support today’s modern applications and high-volume traffic.

In order to help bridge the gap between the LAN and the WAN, a number of technologies have emerged. A few of these, which will be highlighted in this paper, include:

  • T1 Inverse Multiplexing
  • Multiple T1 Load Sharing
  • Inverse Multiplexing over ATM

But how do these technologies work? And how do they help bridge the gap between the LAN and the WAN? What are the advantages and disadvantages of using these technologies? These questions will be answered in the sections that follow.

T1 Inverse Multiplexing – What is it?

While the idea of inverse multiplexing may be new to many IT administrators, traditional multiplexing is not. Therefore, we will first consider how traditional multiplexing works before describing inverse multiplexing. In traditional multiplexing, multiple streams of data are combined into one single, larger data pipe; at the other end of the pipe, the combined stream is demultiplexed back into the original streams of data. There are a number of ways this is accomplished, depending on whether a signal is analog or digital. Generally, analog circuits use frequency division multiplexing (FDM), while digital circuits use time division multiplexing (TDM). In frequency division multiplexing, multiple streams of data are transported simultaneously because each occupies a different frequency band. In time division multiplexing, multiple streams of data are combined by assigning them alternating time slots. For optical networks, a newer form of multiplexing has emerged: wavelength division multiplexing (WDM) uses different wavelengths of light to transport multiple streams of data over the same fiber. In every case, traditional multiplexing is based on the concept of combining multiple streams of data into a single, larger data stream.
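
To make the contrast with inverse multiplexing concrete, the short sketch below (written in Python purely as an illustration, not as any particular product’s implementation) shows the time division multiplexing idea: bytes from several input streams are interleaved into one higher-rate stream by giving each input a repeating time slot, and the far end pulls them back apart.

    # Simplified illustration of time division multiplexing (TDM):
    # bytes from several input streams are interleaved into one output
    # stream by giving each input a repeating time slot.

    def tdm_multiplex(streams):
        """Combine equal-length byte streams into one stream, slot by slot."""
        combined = bytearray()
        for slot in range(len(streams[0])):
            for stream in streams:      # one time slot per input, in order
                combined.append(stream[slot])
        return bytes(combined)

    def tdm_demultiplex(combined, num_streams):
        """Recover the original streams at the far end of the pipe."""
        return [combined[i::num_streams] for i in range(num_streams)]

    inputs = [b"AAAA", b"BBBB", b"CCCC"]
    trunk = tdm_multiplex(inputs)              # b"ABCABCABCABC"
    assert tdm_demultiplex(trunk, 3) == inputs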

Inverse multiplexing (or imuxing), on the other hand, is exactly the opposite of traditional multiplexing. Instead of combining multiple streams of data onto a single circuit, inverse multiplexing combines multiple circuits into a single logical data pipe. A single large stream of data is split up and spread across multiple T1 circuits, then recombined into a single data stream at the other end. The data is spread across the T1 circuits in round-robin fashion, meaning that each bit of data is sent to the next T1 in a repeating, circular order. To the application or DTE device using this bandwidth, however, there appears to be only a single logical channel whose capacity equals the total aggregate bandwidth of all the individual T1s combined. This can help relieve the bottleneck that is so often experienced at the WAN link.
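
The striping and recombination just described can be pictured with the following sketch. It is only a conceptual model: it stripes whole bytes rather than individual bits, and it ignores the differential delay compensation that real inverse multiplexers must perform across the member circuits.

    # Conceptual sketch of inverse multiplexing: one data stream is striped
    # round robin across several T1 circuits and recombined at the far end.
    # Real equipment works at the bit level and compensates for differing
    # delays on each circuit; both are omitted here for clarity.

    def imux_split(data, num_t1s):
        """Spread one stream across num_t1s circuits, round robin."""
        circuits = [bytearray() for _ in range(num_t1s)]
        for i, byte in enumerate(data):
            circuits[i % num_t1s].append(byte)
        return [bytes(c) for c in circuits]

    def imux_combine(circuits):
        """Rebuild the original stream from the individual circuits."""
        rebuilt = bytearray()
        for i in range(max(len(c) for c in circuits)):
            for c in circuits:
                if i < len(c):
                    rebuilt.append(c[i])
        return bytes(rebuilt)

    payload = b"A single large stream headed across the WAN"
    t1s = imux_split(payload, 3)           # three parallel T1 circuits
    assert imux_combine(t1s) == payload    # the DTE sees one logical channel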

According to a white paper by Techguide.com, “inverse multiplexing provides a uniquely scalable solution… [because] as network bandwidth requirements increase over time, inverse multiplexing facilitates the incremental growth of WAN links by allowing the addition of more T1 or E1 circuits as required” (10). A company may decide to inverse multiplex two T1 circuits at first and, as its needs grow, simply add another circuit to the link. In most parts of the United States, multiple T1s prove more economical than a full T3 when fewer than 8 T1s are required (a rough illustration of this crossover appears after the list below). This makes inverse multiplexing an effective solution when bandwidth requirements fall between a single T1 and a full T3 and are expected to grow steadily over time. When a company reaches the point where it is spending as much on T1s as a full T3 circuit would cost, the solution remains scalable: all it needs to do is upgrade to a T3, and the end users and applications using the link will not be affected. At that point, the company can also protect its original investment by moving its inverse multiplexing equipment to another location, perhaps one where only a single T1 is in use. That location can then take advantage of inverse multiplexing, slowly being built up until it matures to a full T3, at which point the cycle can start all over at yet another location. In this way, inverse multiplexing offers a number of advantages to end customers as well as to service providers. End customers with enterprise networks can use the technology to meet growing bandwidth requirements between remote sites efficiently and economically, and service providers can use inverse multiplexing to maintain their ability to flexibly meet the needs of their customers. Some of the benefits of inverse multiplexing are highlighted below:

  • Scalable bandwidth
  • Carrier class fault tolerance – if one T1 in the bundle fails, the link falls back to the remaining circuits and stays up
  • Cost efficient use of existing T1 infrastructure
  • Lower cost than T3 service
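
To see why the economics tend to work out this way, the short calculation below compares hypothetical monthly circuit prices. The dollar figures are placeholders chosen only for illustration (actual tariffs vary widely by region, carrier, and distance), but they show how the crossover point between N T1s and a full T3 is found.

    # Hypothetical cost comparison: N inverse-multiplexed T1s versus one T3.
    # The monthly prices below are illustrative placeholders only; real
    # tariffs vary by region, carrier, and distance.

    T1_MONTHLY = 400.0    # assumed price of one T1 (1.544 Mbps)
    T3_MONTHLY = 3000.0   # assumed price of one T3 (45 Mbps)

    for n in range(1, 10):
        nxt1_cost = n * T1_MONTHLY
        bandwidth = n * 1.544            # aggregate Mbps presented to the DTE
        cheaper = "NxT1" if nxt1_cost < T3_MONTHLY else "T3"
        print(f"{n} x T1: {bandwidth:6.2f} Mbps, ${nxt1_cost:7.2f}/mo -> {cheaper}")

    # With these assumed prices the crossover lands near eight T1s, the point
    # at which migrating the link to a full T3 starts to make sense.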

In addition to inverse multiplexing, load sharing has also been used in order to combine multiple T1 circuits. How does load sharing work? How does it differ from inverse multiplexing? And, which technology is better?

Multiple T1 Load Sharing

Load sharing is very similar to inverse multiplexing in that it also combines multiple T1 lines between two locations, but there are a number of differences between the two technologies. Whereas inverse multiplexing creates a single logical channel that is the aggregate of all the T1s combined, load sharing distributes whole packets over multiple parallel T1 links. One method used to accomplish this is called “route caching,” which assigns each session to a particular T1 link (T1 Imuxing, 2). So although load sharing also provides additional bandwidth between sites, it does so by presenting multiple links to the DTE and balancing the load among them, whereas inverse multiplexing presents only a single aggregate link to the DTE, or router. This means that with load sharing, the bandwidth available to a single application is limited to the bandwidth of a single T1 circuit, or 1.544 Mbps. With inverse multiplexing, applications do not see the individual T1 circuits, and therefore a larger amount of bandwidth is available to a single application (Techguide, 17).
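
A minimal sketch of the route caching idea, under the assumption that a session is identified by its source and destination addresses (the addresses below are made up for illustration), looks like this: each session is hashed to one of the parallel T1 links and stays there, which is exactly why no single session can ever exceed one T1’s worth of bandwidth.

    # Sketch of load sharing via "route caching": each session is pinned to
    # one of the parallel T1 links, so a single session never gets more than
    # one T1 (1.544 Mbps) of bandwidth. The addresses are made-up examples.

    import zlib

    NUM_T1_LINKS = 4

    def link_for_session(src_ip, dst_ip):
        """Hash the session identifier to one T1 link and keep using it."""
        return zlib.crc32(f"{src_ip}->{dst_ip}".encode()) % NUM_T1_LINKS

    sessions = [
        ("10.0.0.5", "172.16.1.9"),      # e.g. a file transfer
        ("10.0.0.7", "172.16.1.9"),      # e.g. a voice call
        ("10.0.0.5", "172.16.2.20"),     # another application
    ]

    for src, dst in sessions:
        print(f"{src} -> {dst} pinned to T1 link {link_for_session(src, dst)}")

    # Contrast with inverse multiplexing, where every session's traffic is
    # striped across all of the T1s and can use the full aggregate bandwidth.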

Another difference between inverse multiplexing and load sharing is the way each handles a link failure. When a single T1 circuit fails on an inverse multiplexed link, end-to-end delivery of the network traffic is still guaranteed, because the traffic is shared among the remaining T1 circuits. However, because load sharing with route caching generally dedicates a single T1 circuit to a particular session, if that circuit fails the application may time out. Another method of load sharing overcomes this by monitoring each T1 link and distributing traffic based on availability, but it still does not overcome the per-application bandwidth limitation. This form of load sharing also increases the processing load on the routers and can add to the total latency (T1 Imuxing, 3).

Another disadvantage of load sharing is that each T1 circuit must be assigned its own IP address and managed separately. With inverse multiplexing, there is only a single connection from the imux equipment to the router port, which uses a single IP address and simplifies management. It also reduces the amount of equipment necessary, since an inverse mux replaces multiple CSU/DSUs and requires fewer serial ports on the router. So, although load sharing is another means of increasing bandwidth between sites, it does not offer the same robust features and advantages that inverse multiplexing does. Inverse multiplexing also offers a unique advantage for networks that use ATM (Asynchronous Transfer Mode) for their backbone. This is accomplished by means of Inverse Multiplexing over ATM, or IMA. How does this technology work, and what are its advantages?

Inverse Multiplexing over ATM

ATM networks use a cell based technology that supports voice, video and data at a wide range of transmission speeds (Techguide, 19). Because ATM can support these various types of traffic simultaneously while guaranteeing different levels of quality (quality of service, or QoS), it has grown in popularity for LAN backbones and, in some situations, even to the desktop. However, the availability of ATM on the WAN has been very limited, and where it has been available, its use has generally been limited by prohibitively high costs. Therefore, in 1997 the ATM Forum defined a standard for Inverse Multiplexing over ATM, or IMA (Techguide, 20). Inverse Multiplexing over ATM defines a new UNI, or User to Network Interface, that specifies how a stream of ATM cells is spread across multiple T1 circuits. This new UNI rides on top of the existing T1 ATM PHY (the physical interface, which defines how ATM cells map onto existing physical layer media) and uses the IMA Control Protocol (ICP) to perform the inverse multiplexing. Basically, IMA works the same way as regular inverse multiplexing, using a round-robin cyclic approach, except that it spreads the data out cell by cell, which makes sense since the native unit of data in an ATM network is the cell. In this way, IMA offers all of the advantages of an ATM network and, at the same time, all of the advantages of multiple T1 inverse multiplexing. As a result, companies using ATM backbones have found IMA to be a very cost efficient and practical way to transport their ATM traffic across the WAN over existing and generally widely available T1 circuits. And because ATM can support voice, video and data, network managers are also able to use IMA to transparently transport different types of traffic with varying levels of QoS across the WAN (IMA, 2-6).
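
The cell-by-cell striping that IMA performs can be pictured with the simplified sketch below. The ICP handling is only a placeholder (real ICP cells carry IMA frame sequence numbers, link status, and delay compensation information defined by the ATM Forum specification, none of which is modeled here); the point is simply that whole 53-byte cells, rather than bits or bytes, are dealt round robin onto the member T1s.

    # Simplified picture of IMA: whole 53-byte ATM cells are dealt round
    # robin onto the member T1 links, and each link periodically carries an
    # ICP (IMA Control Protocol) cell. The ICP cell below is a bare
    # placeholder, not the real ATM Forum format.

    CELL_SIZE = 53          # bytes in an ATM cell
    IMA_FRAME_LEN = 128     # cells per IMA frame (an assumed, common value)

    def ima_stripe(cells, num_links):
        """Deal ATM cells onto the T1 links in round-robin order."""
        links = [[] for _ in range(num_links)]
        for i, cell in enumerate(cells):
            links[i % num_links].append(cell)
        return links

    def insert_icp(link_cells):
        """Insert a placeholder ICP cell once per IMA frame on one link."""
        framed = []
        for i, cell in enumerate(link_cells):
            if i % IMA_FRAME_LEN == 0:
                framed.append(b"ICP" + bytes(CELL_SIZE - 3))  # placeholder
            framed.append(cell)
        return framed

    cells = [bytes([n % 256]) * CELL_SIZE for n in range(300)]  # dummy cells
    links = [insert_icp(lc) for lc in ima_stripe(cells, num_links=3)]
    print([len(lc) for lc in links])    # each link: 100 data cells + 1 ICP cell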

Because IMA defines a new UNI, however, it will probably take time for the specification to mature, even though it is expected to gain wide acceptance. Its implementation will likely be slow at first, since most companies cannot afford to roll out a technology that is still subject to changes in hardware and protocols. So, despite the benefits of IMA, most will probably wait for the dust to settle before implementing it. A lower cost and lower risk alternative is available today through bit-based ATM inverse multiplexing, which is discussed briefly in the next section.

Bit-Based ATM Inverse Multiplexing

Bit-based ATM inverse multiplexing uses the ATM Forum’s Cell-Based Transmission Convergence sublayer, which specifies how an ATM cell stream can be transported at the bit level instead of at the cell level. After determining the “start of cell,” cells are transported across the media bit by bit (Bit by Bit, 4). This approach can use the DS3 or OC3c UNI interfaces already available on a company’s ATM switch; these ATM interfaces serve as the DTE port, from which the data is transported over the WAN using multiple T1s. A bit-based ATM imux converts the traffic to the lower NxT1 rate using buffers. This gives network managers an ATM link over the WAN without requiring a new UNI or changes to their existing ATM equipment (Bit by Bit, 6).
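
The bit-level idea can be sketched in the simplest possible terms as follows: once cell alignment is known, the ATM stream is treated as a flat sequence of bits, and successive bits are dealt onto the T1 circuits in rotation. Cell delineation, buffering, and rate adaptation, which a real bit-based imux must handle, are all omitted.

    # Bit-level sketch of bit-based ATM inverse multiplexing: after the
    # "start of cell" is found, the cell stream is treated as a plain
    # sequence of bits and those bits are striped round robin across the
    # T1 circuits. Delineation, buffering, and rate adaptation are omitted.

    def to_bits(data):
        """Expand a byte string into a list of bits, most significant first."""
        return [(byte >> (7 - k)) & 1 for byte in data for k in range(8)]

    def stripe_bits(bits, num_t1s):
        """Send each successive bit to the next T1 circuit in rotation."""
        return [bits[i::num_t1s] for i in range(num_t1s)]

    def merge_bits(lanes):
        """Rebuild the original bit stream at the far end."""
        merged = []
        for i in range(max(len(lane) for lane in lanes)):
            for lane in lanes:
                if i < len(lane):
                    merged.append(lane[i])
        return merged

    cell_stream = b"\x00\x00\x00\x01\x55" * 4   # stand-in for aligned ATM cells
    bits = to_bits(cell_stream)
    lanes = stripe_bits(bits, 3)                # three T1 circuits
    assert merge_bits(lanes) == bits            # far end rebuilds the bit stream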

In addition, a network that still carries traditional TDM based, non-ATM traffic does not need to convert all of its WAN access to ATM. The ATM traffic can be combined with non-ATM traffic over a channelized DS3 circuit (Bit by Bit, 7). This means that some of the 28 T1s available on a DS3 can be used to transparently transport the ATM traffic, while the rest are used for other traffic.

Therefore, bit-based ATM inverse multiplexing provides a means for ATM and TDM to co-exist, and it takes advantage of existing T1 circuits to provide the benefits of ATM across the WAN. There is no doubt that bit-based ATM inverse multiplexing will play a large role in transporting ATM over the WAN as the newly defined IMA specification continues to mature.

Transparent LAN Service

One of the benefits of inverse multiplexing is its ability to enable Transparent LAN Service over existing T1 circuits. A service provider can offer transparent LAN service as a convenience to customers who need to interconnect various locations at native LAN speeds but who do not have experience with WAN products or protocols. The service provider accomplishes this by installing and maintaining inverse multiplexers at the customer’s sites and then scaling the bandwidth to the customer’s needs.

Alternatively, a company with the in-house knowledge and experience can install and maintain its own inverse multiplexing equipment. Either way, this is an example of how inverse multiplexing can benefit customers, providing higher bandwidth at a lower cost and allowing LANs to be connected transparently.

Conclusion

As networks continue to grow, and as applications continue to demand more and more bandwidth, there is no doubt that inverse multiplexing will offer an effective solution that is both scalable and cost efficient. Although load sharing can be implemented with many of today’s routers, it is not nearly as scalable and robust a solution as inverse multiplexing. Therefore, for companies looking to bridge the gap between their LAN and WAN connection speeds, inverse multiplexing offers a number of advantages with very little risk. And for those who have already begun to experience the benefits of ATM in their LAN, Inverse Multiplexing over ATM and bit-based ATM inverse multiplexing both offer a means for ATM to be transparently transported over the WAN using existing T1 facilities.

ComTest Technologies can assist you with your Inverse Multiplexing needs. Just visit some of the Additional Resources below, or call (808) 831-0601.


Additional Resources

Inverse Multiplexing Info at Larscom's site

References

This page was written by Will Twiggs, an associate with ComTest Technologies, Inc. For more information or if you have questions about this material, you can contact the author at william@comtest.com

"Inverse Multiplexing over ATM." www.3com.com/technology/tech_net/white_papers/500642.html.

"Inverse Multiplexing – Scalable Solutions for the WAN." http://www.techguide.com.

"T1 Inverse Multiplexing: Getting Started on the Road to ATM." http://www.larscom.com/lib/wp_t1imux.htm.

Langdon, Robin D., Imuxing ATM, Bit by Bit. Larscom, 1997.

