


A7med Baraka
04-15-2009, 08:27 PM
Cell Switching (ATM)

Now that we have discussed some of the general issues associated with switched networks, it is time to focus on one particular switching technology: asynchronous transfer mode (ATM). ATM has become a tremendously important technology in recent years for a variety of reasons, not least of which is that it has been embraced by the telephone industry, which has historically been less than active in data communications except as a supplier of links on top of which other people have built networks. ATM also happened to be in the right place at the right time, as a high-speed switching technology that appeared on the scene just when shared media like Ethernet and FDDI were starting to look a bit too slow for many users of computer networks.
ATM is a connection-oriented packet-switched network, which is to say, it uses virtual circuits very much in the manner described in Section . In ATM terminology, the connection setup phase is called signalling. At the time of this writing, the ATM Forum (the standards body that governs ATM) is still hammering out the details of an ATM signalling protocol known as Q.2931. In addition to discovering a suitable route across an ATM network, Q.2931 is also responsible for allocating resources at the switches along the circuit. This is done in an effort to ensure that the circuit receives a particular quality of service. We return to this topic in Chapter , where we discuss it in the context of similar efforts to implement QoS.
The thing that makes ATM really unusual is that the packets that are switched in an ATM network are of fixed length. That length happens to be 53 bytes---5 bytes of header followed by 48 bytes of payload---a rather interesting choice discussed in more detail below. To distinguish these fixed length packets from the more common variable length packets normally used in computer networks, they are given a special name: cells. ATM may be thought of as the canonical example of cell switching.

Cells

All the packet switching technologies we have looked at so far have used variable length packets. Variable length packets are normally constrained to fall within some bounds. The lower bound is set by the minimum amount of information that needs to be contained in the packet, which is typically a header with no optional extensions. The upper bound may be set by a variety of factors; the maximum FDDI packet size, for example, determines how long each station is allowed to transmit without passing on the token, and thus, determines how long a station might have to wait for the token to reach it. Cells, in contrast, are both fixed length and small in size. While this seems like a simple enough design choice, there are actually a lot of factors involved, as explained in the following paragraphs.

Cell Size

Variable length packets have some nice characteristics. If you only have one byte to send (e.g. to acknowledge the receipt of a packet), you put it in a minimum-sized packet. If you have a large file to send, however, you break it up into as many maximum-sized packets as you need. You do not need to send any extraneous padding in the first case, and in the second, you drive down the ratio of header to data bytes, thus increasing bandwidth efficiency. You also minimize the total number of packets sent, thereby minimizing the total processing incurred by per-packet operations. This can be particularly important in obtaining high throughput, since many network devices are limited not by how many bits per second they can process, but rather by the number of packets per second.
So, why fixed-length cells? One of the main reasons was to facilitate the implementation of hardware switches. When ATM was being created in the mid-to-late 1980s, 10-Mbps Ethernet was the cutting-edge technology in terms of speed. To go much faster, most people thought in terms of hardware. Also, in the telephone world, people think big when they think of switches---telephone switches often serve tens of thousands of customers. Fixed-length packets turn out to be a very helpful thing if you want to build fast, highly scalable switches. There are two main reasons for this:



it is easier to build hardware to do simple jobs, and the job of processing packets is simpler when you already know how long each one will be;
if all packets are the same length, then you can have lots of switching elements all doing much the same thing in parallel, each of them taking the same time to do its job.

This second reason, enabling parallelism, greatly improves the scalability of switch designs. We will examine a highly scalable, parallel switch in Section . It would be overstating the case to say that fast, parallel hardware switches can only be built using fixed-length cells. However, it is certainly true that cells ease the task of building such hardware, and that there was a lot of knowledge about how to build cell switches in hardware at the time ATM standards were being defined.
Another nice property of cells relates to the behavior of queues. Queues build up in a switch when traffic from several inputs may be heading for a single output. In general, once you extract a packet from a queue and start transmitting it, you need to continue until the whole packet is transmitted; it is not practical to preempt the transmission of a packet. (Recall that this idea is at the heart of statistical multiplexing.) The longest time that a queue output can be tied up is thus equal to the time it takes to transmit a maximum-sized packet. Fixed-length cells mean that a queue output is never tied up for more than the time it takes to transmit one cell, which is almost certainly less than the time to transmit a maximum-sized packet on a variable length packet network. Thus, if tight control over the latency experienced by cells when they pass through a queue is important, cells provide some advantage. Of course, long queues can still build up and there is no getting around the fact that some cells will have to wait their turn. What you get from cells is not much shorter queues, but potentially finer control over the behavior of queues.
An example will help to clarify this idea. Imagine a network with variable length packets, where the maximum packet length is 4KB and the link speed is 100 Mbps. The time to transmit a maximum-sized packet is 4096 × 8 / 100 = 327.68 μs. Thus, a high priority packet that arrives just after the switch starts to transmit a 4KB packet will have to sit in the queue 327.68 μs waiting for access to the link. In contrast, the longest wait if the switch were forwarding 53-byte cells would be 53 × 8 / 100 = 4.24 μs. This may not seem like a big deal, but the ability to control delay, and especially its variation with time (jitter), can be important for some applications.
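To check these numbers yourself, here is a minimal sketch in Python; the 100 Mbps link speed and the 4KB and 53-byte sizes are the ones from the example above.

    # Worst-case wait behind one packet already on the wire.
    LINK_BPS = 100e6                      # 100 Mbps link

    def tx_time_us(nbytes, bps=LINK_BPS):
        """Time to transmit nbytes, in microseconds."""
        return nbytes * 8 / bps * 1e6

    print(tx_time_us(4096))               # 4KB packet   -> 327.68 us
    print(tx_time_us(53))                 # 53-byte cell ->   4.24 us
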
Queues of cells also tend to be a little shorter than queues of packets for the following reason. When a packet begins to arrive in an empty queue, it is typical for the switch to wait for the whole packet to arrive before it can start transmitting the packet on an outgoing link. This means that the link sits idle while the packet arrives. However, if you imagine a large packet being replaced by a `train' of small cells, then as soon as the first cell in the train has entered the queue, the switch can transmit it. Imagine in the example above what would happen if two 4KB packets arrived in a queue at about the same time. The link sits idle for 327.68 μs while these two packets arrive, and at the end of that period we have 8KB in the queue. Only then can the queue start to empty. If those same two packets were sent as trains of cells, then transmission of the cells could start 4.24 μs after the first train starts to arrive. At the end of 327.68 μs, the link has been active for a little over 323 μs and there are just over 4KB of data left in the queue, not 8KB as before. Shorter queues mean less delay for all the traffic.
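The following sketch reproduces that arithmetic, ignoring cell-header overhead just as the example in the text does:

    # Two 4KB packets arrive simultaneously on two inputs; the
    # output link runs at the same 100 Mbps as each input.
    ARRIVAL_US = 327.68          # time for each 4KB packet to arrive
    CELL_US = 4.24               # time to transmit one 53-byte cell

    # Whole packets: the output link is idle during the arrival,
    # leaving 8KB queued when it finally starts to drain.
    idle_us, queued = ARRIVAL_US, 2 * 4096

    # Trains of cells: the link starts after just one cell time.
    active_us = ARRIVAL_US - CELL_US       # ~323.44 us of activity
    drained = active_us * 100e6 / 8 / 1e6  # ~4KB already sent
    left = 2 * 4096 - drained              # just over 4KB remains
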
Having decided to use small fixed-length packets, the next question is, what is the right length to fix them at? If you make them too short, then the amount of header information that needs to be carried around relative to the amount of data that fits in one cell gets larger, so the percentage of link bandwidth that is actually used to carry data goes down. Even more seriously, if you build a device that processes cells at some maximum number of cells per second, then as cells get shorter, the total data rate drops in direct proportion to cell size. An example of such a device might be a network adapter that reassembles cells into larger units before handing them up to the host. The performance of such a device directly depends on cell size. On the other hand, if you make cells too big, then there is a problem of wasted bandwidth caused by the need to pad transmitted data to fill a complete cell. If the cell payload size is 48 bytes and you want to send 1 byte, you'll need to send 47 bytes of padding. If this happens a lot, then the utilization of the link will be very low.
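As a rough illustration of this trade-off, the sketch below computes link utilization for a given message size and cell payload size; the 5-byte header is the ATM value, while the message sizes are arbitrary.

    import math

    def utilization(msg_bytes, payload=48, header=5):
        """Fraction of link bandwidth carrying actual data."""
        cells = math.ceil(msg_bytes / payload)  # last cell is padded
        return msg_bytes / (cells * (payload + header))

    print(utilization(1))       # 1-byte message -> ~1.9% (mostly padding)
    print(utilization(4096))    # 4KB message    -> ~89.9%
    print(48 / 53)              # hard ceiling   -> ~90.6%
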
Efficient link utilization is not the only factor that influences cell size. For example, cell size has a particular effect on voice traffic, and since ATM grew out of the telephony community, one of the major concerns was that it be able to carry voice effectively. The standard digital encoding of voice is done at 64 Kbps (8-bit samples taken at 8 KHz). To maximize efficiency, you want to collect a full cell's worth of voice samples before transmitting a cell. A sampling rate of 8 KHz means that one byte is sampled every 125 μs, so the time it takes to fill an n-byte cell with samples is n × 125 μs. If cells are, say, 1000 bytes long, it would take 125 ms just to collect a full cell of samples before you even start to transmit it to the receiver. That sort of latency starts to be quite noticeable to a human listener. Even considerably smaller latencies create problems for voice, particularly in the form of echoes. Echoes can be eliminated by devices called echo cancellers, but these add cost to a telephone network that many network operators would rather avoid.
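Here is the same packetization-delay calculation as a small function; the 125 μs per byte follows directly from the 8 KHz sampling rate above.

    def fill_time_ms(cell_payload_bytes):
        """Time to fill a cell with 64 Kbps voice samples."""
        return cell_payload_bytes * 125 / 1000   # 125 us/byte -> ms

    print(fill_time_ms(48))      # ATM payload -> 6 ms
    print(fill_time_ms(1000))    # 1000 bytes  -> 125 ms, clearly audible
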
All of the above factors caused a great deal of debate in the international standards bodies when ATM was being standardized, and the fact that no length was perfect in all cases was used by those opposed to ATM to argue that fixed-length cells were a bad idea in the first place. As is so often the case with standards, the end result was a compromise that pleased almost no one: 48 bytes was chosen as the length for the ATM cell payload. Probably the greatest tragedy of this choice was that it is not a power of two, which means that it is quite a mismatch to most things that computers handle, like pages and cache lines. Rather less controversially, the header was fixed at 5 bytes. The format of an ATM cell is shown in Figure .

Sidebar: A Compromise of 48 Bytes



The explanation for why the payload of an ATM cell is 48 bytes is an interesting one, and makes an excellent case for studying the process of standardization. As the ATM standard was evolving, the US telephone companies were pushing a 64-byte cell size, while the European companies were advocating 32-byte cells. The Europeans wanted the smaller size because the countries they served were small enough that, if the latency induced by generating a complete cell could be kept low, they would not have to install echo cancellers; 32-byte cells were adequate for this purpose. In contrast, the US is a large enough country that the phone companies had to install echo cancellers anyway, and so the larger cell size reflected a desire to improve the header-to-payload ratio.
It turns out that averaging is a classic form of compromise---48 bytes is simply the average of 64 bytes and 32 bytes. So as not to leave the false impression that this use of compromise-by-averaging is an isolated incident, we note that the 7-layer OSI model was actually a compromise between 6 and 8 layers.


Cell Format

The ATM cell actually comes in two different formats, depending on where you look in the network. The one shown in Figure is called the UNI (user-network interface) format; the alternative is the NNI (network-network interface) format. The UNI format is used when transmitting cells between a host and a switch, while the NNI format is used when transmitting cells between switches. The only difference is that the NNI format replaces the GFC field with four extra bits of VPI. Clearly, understanding all the three-letter acronyms (TLAs) is a key part of understanding ATM.

Figure: ATM cell format at the UNI


Starting from the leftmost byte of the cell (which is the first one transmitted), the UNI cell has 4 bits for `Generic Flow Control' (GFC). The use of these bits is not well defined at the time of writing, but they are intended to have local significance at a site and may be overwritten in the network. The basic idea behind the GFC bits is to provide a means to arbitrate access to the link if the local site uses some shared medium to connect to ATM.
The next 24 bits contain an 8-bit Virtual Path Identifier (VPI) and a 16-bit Virtual Circuit Identifier (VCI). The difference between the two is explained below, but for now it is adequate to think of them as a single 24-bit identifier that is used to identify a virtual connection, just as in Section . Following the VPI/VCI is a 3-bit Type field that has eight possible values. Four of them, used when the first bit in the field is set, relate to management functions. When that bit is clear, the cell contains user data. In this case, the second bit is the `explicit forward congestion indication' (EFCI) bit and the third is the `user signalling' bit. The former can be set by a congested switch to tell an end node that it is congested; it has its roots in the DECbit described in Section and, although the standards are not yet firm on this point, is intended to be used similarly. The latter is used primarily in conjunction with ATM Adaptation Layer 5 to delineate frames, as discussed below.
Next is a bit to indicate `Cell Loss Priority' (CLP); a user or network element may set this bit to indicate cells that should be dropped preferentially in the event of overload. For example, a video coding application could set this bit for cells that, if dropped, would not dramatically degrade the quality of the video. A network element might set this bit for cells that have been transmitted by a user in excess of the amount that was negotiated.
The last byte of the header is an 8-bit CRC, known as the `Header Error Check' (HEC). It uses the CRC-8 polynomial given in Section , and provides error detection and single-bit error correction capability on the cell header only. Protecting the cell header is particularly important because an error in the VCI will cause the cell to be misdelivered.
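Pulling these fields together, here is a sketch of parsing a UNI cell header. The field layout follows the description above; the final XOR with 0x55 when computing the HEC is the coset conventionally added to the CRC-8 remainder in ATM (an assumption worth verifying against the standard).

    def crc8(data, poly=0x07):            # x^8 + x^2 + x + 1
        crc = 0
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = ((crc << 1) ^ poly if crc & 0x80 else crc << 1) & 0xFF
        return crc

    def parse_uni_header(hdr: bytes):
        """Split a 5-byte UNI header into its fields and check the HEC."""
        word = int.from_bytes(hdr[:4], "big")
        return {
            "gfc":  (word >> 28) & 0xF,     # 4-bit Generic Flow Control
            "vpi":  (word >> 20) & 0xFF,    # 8-bit Virtual Path Identifier
            "vci":  (word >> 4) & 0xFFFF,   # 16-bit Virtual Circuit Identifier
            "type": (word >> 1) & 0x7,      # 3-bit Type field
            "clp":  word & 0x1,             # Cell Loss Priority bit
            "hec_ok": hdr[4] == (crc8(hdr[:4]) ^ 0x55),
        }
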

Segmentation and Reassembly

Up to this point, we have assumed that a low-level protocol could just accept the packet handed down to it by a high-level protocol, attach its own header, and pass the packet on down. This is not possible with ATM, however, since the packets handed down from above are often larger than 48 bytes and thus will not fit in the payload of an ATM cell. The solution to this problem is to fragment the high-level message into low-level packets at the source, transmit the individual low-level packets over the network, and then reassemble the fragments at the destination. This general technique is usually called fragmentation and reassembly. In the case of ATM, however, it is often called segmentation and reassembly.

Figure: Segmentation and reassembly in ATM.


Segmentation is not unique to ATM, but it is much more of a problem than in a network with a maximum packet size of, say, 1500 bytes. To address the issue, a protocol layer was added that sits between ATM and the variable length packet protocols that might use ATM, such as IP. This layer is called the ATM Adaptation Layer (AAL), and to a first approximation, the AAL header simply contains the information needed by the destination to reassemble the individual cells back into the original message. The relationship between the AAL and ATM is illustrated in Figure .
Because ATM was designed to support all sorts of services, including voice, video, and data, it was felt that different services would have different AAL needs. Thus, four adaptation layers were originally defined: 1 and 2 were designed to support applications, like voice, that require guaranteed bit rates, while 3 and 4 were intended to provide support for packet data running over ATM. The idea was that AAL3 would be used by connection-oriented packet services (such as X.25) and AAL4 would be used by connectionless services (such as IP). Eventually, the reasons for having different AALs for these two types of service were found to be insufficient, and the two were merged into one that is inconveniently known as AAL3/4. Meanwhile, some perceived shortcomings in AAL3/4 caused a fifth AAL to be proposed, called AAL5. Thus there are now four AALs: 1, 2, 3/4, and 5.
The two that support computer communications are described below.

ATM Adaptation Layer 3/4


Figure: ATM Adaptation Layer 3/4 packet format


The main function of AAL3/4 is to provide enough information to allow variable length packets to be transported across the ATM network as a series of fixed-length cells. That is, the AAL supports the segmentation and reassembly process. Since we are now working at a new layer of the network hierarchy, convention requires us to introduce a new name for a packet---at this layer, packets are called protocol data units (PDUs). The task of segmentation/reassembly involves two different packet formats. The first of these is the `convergence sublayer protocol data unit' (CS-PDU), as depicted in Figure . The CS-PDU defines a way of encapsulating variable length PDUs prior to segmenting them into cells. The PDU passed down to the AAL layer is encapsulated by adding a header and a trailer, and the resultant CS-PDU is segmented into ATM cells.
The CS-PDU format begins with an 8-bit `common part indicator' (CPI), which is like a version field, indicating which version of the CS-PDU format is in use. Only the value zero is currently defined. The next 8 bits contain the `beginning tag' (Btag), which is supposed to match the `end tag' (Etag) for a given PDU. This protects against the situation where the loss of the last cell of one PDU and the first cell of another causes two PDUs to be inadvertently joined into a single PDU and passed up to the next layer in the protocol stack. The BAsize field (Buffer Allocation size) is not necessarily the length of the PDU (which appears in the trailer); it is supposed to be a hint to the reassembly process as to how much buffer space to allocate for the reassembly. The reason for not including the actual length here is that the sending host might not have known how long the CS-PDU was when it transmitted the header.
The CS-PDU trailer contains the Etag, the real length of the PDU, and a padding byte of zeroes.
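A minimal sketch of the CS-PDU encapsulation just described; the field order follows the figure in the text, the Btag value chosen here is arbitrary, and BAsize is simply set to the actual length even though the text notes it is only a hint.

    import struct

    def make_cs_pdu(pdu: bytes, tag: int = 0x5A) -> bytes:
        """Wrap a PDU in the AAL3/4 CS-PDU header and trailer."""
        pad = (-len(pdu)) % 4          # pad the body to a 4-byte multiple
        header = struct.pack("!BBH", 0, tag, len(pdu))   # CPI, Btag, BAsize
        trailer = struct.pack("!BBH", 0, tag, len(pdu))  # pad byte, Etag, Length
        return header + pdu + b"\x00" * pad + trailer
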

Figure: ATM cell format for AAL3/4


The second part of AAL3/4 is the header and trailer that are carried in each cell, as depicted in Figure . Thus, the CS-PDU is actually segmented into 44-byte chunks; an AAL3/4 header and trailer are attached to each one, bringing it up to 48 bytes, which is then carried as the payload of an ATM cell.
The first two bits of the AAL3/4 header contain the Type field, which indicates if this is the first cell of a CS-PDU, the last cell of a CS-PDU, a cell in the middle of a CS-PDU, or a single-cell PDU (in which case it is both first and last). The official names for these four conditions are shown in Table , along with the bit encodings.

Table: AAL3/4 type field.


Next is a 4-bit sequence number (SEQ), which is intended simply to detect cell loss or misordering so that reassembly can be aborted. Clearly, a sequence number this small can miss cell losses if the number of lost cells is large enough. This is followed by a multiplexing identifier (MID), which can be used to multiplex several PDUs onto a single connection. The 6-bit Length field contains the number of bytes of PDU that are contained in the cell; it must equal 44 for BOM and COM cells. Finally, a 10-bit CRC is used to detect errors anywhere in the 48-byte cell payload.
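Putting the pieces together, here is a sketch of the segmentation side of AAL3/4. The BOM/COM/EOM/SSM bit encodings are assumptions (the table above was not reproduced here), and the per-cell CRC-10 is omitted for brevity.

    BOM, COM, EOM, SSM = 0b10, 0b00, 0b01, 0b11   # assumed encodings

    def segment(cs_pdu: bytes, mid: int = 0):
        """Chop a CS-PDU into 44-byte chunks with AAL3/4 cell fields."""
        chunks = [cs_pdu[i:i + 44] for i in range(0, len(cs_pdu), 44)]
        cells = []
        for seq, chunk in enumerate(chunks):
            if len(chunks) == 1:
                ctype = SSM                # single-cell PDU
            elif seq == 0:
                ctype = BOM                # beginning of message
            elif seq == len(chunks) - 1:
                ctype = EOM                # end of message
            else:
                ctype = COM                # continuation of message
            hdr = (ctype << 14) | ((seq % 16) << 10) | (mid & 0x3FF)
            cells.append((hdr, chunk.ljust(44, b"\x00"), len(chunk)))
        return cells
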
One thing to note about AAL3/4 is that it exacerbates the fixed per-cell overhead that we discussed above. With 44 bytes of data to 9 bytes of header and trailer (4 bytes of AAL3/4 overhead plus the 5-byte ATM cell header), the best possible bandwidth utilization falls to 44/53, or 83%.

ATM Adaptation Layer 5

One thing you may have noticed in the discussion of AAL3/4 is that it seems to take a lot of fields, and thus a lot of overhead to perform the conceptually simple function of segmentation and reassembly. This observation was, in fact, made by several people in the early days of ATM, and numerous competing proposals arose for an AAL to support computer communications over ATM. There was a movement known informally as `Back the Bit' which argued that, if we could just have one bit in the ATM header (as opposed to the AAL header) to delineate the end of a frame, then segmentation and reassembly could be accomplished without using any of the 48-byte ATM payload for segmentation/reassembly information. This movement eventually led to the definition of the user signalling bit described above, and to the standardization of AAL5.
What AAL5 does is replace the 2-bit Type field of AAL3/4 with 1 bit of framing information in the ATM cell header. By setting that one bit, we can identify the last cell of a PDU; the next cell is assumed to be the first cell of the next PDU, and subsequent cells are assumed to be COM cells until another cell is received with the user signalling bit set. All the protection against lost, corrupt, or misordered cells that AAL3/4 provides, including detection of a lost EOM cell, is instead provided by the AAL5 CS-PDU packet format, depicted in Figure .

Figure: ATM Adaptation Layer 5 packet format


The AAL5 CS-PDU consists simply of the data portion (the PDU handed down by the higher layer protocol) and an 8-byte trailer. To make sure that the trailer always falls at the tail end of an ATM cell, there may be up to 47 bytes of padding between the data and the trailer. The first two bytes of the trailer are currently reserved and must be zero. The Length field is the number of bytes carried in the PDU, not including the trailer or any padding before the trailer. Finally, there is a 32-bit CRC.
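A minimal sketch of AAL5 encapsulation under those rules; zlib's crc32 is used here as a stand-in (the real AAL5 CRC-32 uses the same generator polynomial but a different bit ordering, and is computed over the padded PDU plus the first four trailer bytes).

    import struct, zlib

    def make_aal5_pdu(payload: bytes) -> bytes:
        """Pad so the 8-byte trailer ends exactly on a 48-byte boundary."""
        pad = (-(len(payload) + 8)) % 48       # 0..47 bytes of padding
        trailer = struct.pack("!HHI",
                              0,               # two reserved zero bytes
                              len(payload),    # Length, excluding pad/trailer
                              zlib.crc32(payload))
        return payload + b"\x00" * pad + trailer
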
Somewhat surprisingly, AAL5 provides almost the same functionality as AAL3/4 without using 4 bytes out of every cell. For example, the CRC-32 detects lost or misordered cells, as well as bit errors in the data. In fact, having a checksum over the entire PDU rather than doing it on a per-cell basis as in AAL3/4 provides stronger protection. For example, it protects against the loss of 16 consecutive cells, an event that would not be picked up by the sequence number checking of AAL3/4. Also, a 32-bit CRC protects against longer burst errors than a 10-bit CRC.
The main feature missing from AAL5 is the ability to provide an additional layer of multiplexing onto one virtual circuit using the MID. It is not clear whether this is a significant loss. For example, if one is being charged for every virtual circuit that one sets up across a network, then multiplexing traffic from lots of different applications onto one connection might be a plus. However, this approach has the drawback that all applications will have to live with whatever quality of service (e.g., delay and bandwidth guarantees) has been chosen for that one connection, which may mean that some applications are not receiving appropriate service. Certainly the large (24-bit) space available for VCI/VPI combinations suggests that it should be possible for a host to open many virtual connections and avoid multiplexing at this level, in which case the MID is of little value.
In general, AAL5 has been wholeheartedly embraced by the computer communications community (at least, by that part of the community that has embraced ATM at all). For example, it is the preferred AAL in the IETF for transmitting IP datagrams over ATM. Its more efficient use of bandwidth and simple design are the main things that make it more appealing than AAL3/4.

Virtual Paths

As mentioned above, ATM uses a 24-bit identifier for virtual circuits, and these circuits operate almost exactly like the ones described in Section . The one twist is that the 24-bit identifier is split into two parts: an 8-bit Virtual Path Identifier (VPI) and a 16-bit Virtual Circuit Identifier (VCI). What this does is provide some hierarchy in the identifier, just as there may be hierarchy in addresses as we discussed in Section . To understand how a hierarchical virtual circuit identifier might be used, consider the following example. (We ignore the fact that at a network-network interface (NNI) the VPI is four bits longer; just assume that 8-bit VPIs are used everywhere.)
Suppose that a corporation has two sites that connect to a public ATM network, and that at each site the corporation has a network of ATM switches. We could imagine establishing a virtual path between the two sites using only the VPI field. Thus, the switches in the public network would use the VPI as the only field on which to make forwarding decisions. From their point of view, this is a virtual circuit network with 8-bit circuit identifiers. The 16-bit VCI is of no interest to these public switches, and they neither use it for switching nor remap it. Within the corporate sites, however, the full 24-bit space is used for switching. Any traffic that needs to flow between the two sites is routed to a switch that has a connection to the public network, and its top 8 bits (the VPI) are mapped onto the appropriate value to get the data to the other site. This idea is illustrated in Figure . Note that the virtual path acts like a fat pipe that contains a bundle of virtual circuits, all of which have the same 8 bits in their most significant byte.
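In code, a public switch's forwarding step might look like this sketch (the table contents and port names are hypothetical): only the VPI is looked up and remapped, while the VCI passes through untouched.

    def split(vpvc: int):
        """Break a 24-bit identifier into (VPI, VCI)."""
        return (vpvc >> 16) & 0xFF, vpvc & 0xFFFF

    # One entry per virtual path, not per virtual circuit.
    vp_table = {42: ("port3", 7)}     # VPI 42 -> (output port, new VPI)

    def forward(vpvc: int):
        vpi, vci = split(vpvc)
        port, new_vpi = vp_table[vpi]
        return port, (new_vpi << 16) | vci   # VCI neither used nor remapped
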

Figure: Example of Virtual Path


The advantage of this approach is clear: although there may be thousands or millions of virtual connections across the public network, the switches in the public network behave as if there is only one connection. This means that there can be much less connection state stored in the switches, avoiding the need for big, expensive tables of per-VCI information.

Physical Layers for ATM

While the layered approach to protocol design might lead you to think that we do not need to worry about what type of point-to-point link ATM runs on top of, this turns out not to be the case. From a simple pragmatic point of view, when you buy an ATM adaptor for a workstation or an ATM switch, it comes with some physical medium over which ATM cells will be sent. Of course, this is true for other networking protocols such as FDDI and Ethernet. Like these protocols, ATM can also run over several physical media and physical layer protocols.
From early in the process of standardizing ATM, it has been assumed that ATM will run on top of a SONET physical layer (see Section ). Some people even get ATM and SONET confused because they have been so tightly coupled for so long. While it is true that standard ways of carrying ATM cells inside a SONET frame have been defined, and that you can now buy ATM-over-SONET products, the two are entirely separable. For example, one could lease a SONET link from a phone company and send whatever one wants over it, including variable length packets. Also, one can send ATM cells over many other physical layers instead of SONET, and standards have been (or are being) defined for these encapsulations.
When you send ATM cells over some physical medium, the main issue is how to find the boundaries of the ATM cells; this is exactly the framing problem described in Chapter . With SONET, there are two easy ways to do this. One of the overhead bytes in the SONET frame can be used as a pointer into the SONET payload to the start of an ATM cell. Having found the start of one cell, the next cell starts 53 bytes further on in the SONET payload, and so on. In theory, you only need to read this pointer once, but in practice, it makes sense to read it every time the SONET overhead goes by so that you can detect errors or resynchronize if needed.
The other way to find the edges of ATM cells takes advantage of the fact that every cell has a CRC in the 5th byte of the cell. Thus, if you run a CRC calculation over the last 5 bytes received and the answer comes out to indicate no errors, then it is probably true that you just read an ATM header. If this happens several times in a row at 53 byte intervals, you can be pretty sure you have found the cell boundary.
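That hunting procedure can be sketched as follows, reusing the CRC-8 with the 0x55 offset assumed earlier: slide through the byte stream until a 5-byte window checks out, then confirm at 53-byte intervals.

    def crc8(data, poly=0x07):                 # x^8 + x^2 + x + 1
        crc = 0
        for b in data:
            crc ^= b
            for _ in range(8):
                crc = ((crc << 1) ^ poly if crc & 0x80 else crc << 1) & 0xFF
        return crc

    def hec_ok(window: bytes) -> bool:
        return window[4] == (crc8(window[:4]) ^ 0x55)

    def find_cell_boundary(stream: bytes, confirmations: int = 6):
        """Return the offset of a probable cell boundary, or None."""
        last = len(stream) - 53 * (confirmations - 1) - 5
        for off in range(max(last + 1, 0)):
            if all(hec_ok(stream[off + 53*k: off + 53*k + 5])
                   for k in range(confirmations)):
                return off
        return None
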


Sidebar: ATM in the LAN



As we mentioned above, ATM grew out of the telephony community, who envisioned it being used as a way to build large public networks that could transport voice, video, and data traffic. However, it was subsequently embraced by the computer and data communications industries as a technology to be used in LANs---a replacement for Ethernet and FDDI. Its popularity in this realm can be attributed to many factors, most notably the fact that it offered significantly higher bandwidth than Ethernet and that, unlike FDDI, the bandwidth is switched rather than shared, meaning that, in theory, every host can send or receive at the full link speed.
The problem with running ATM in a LAN, however, is that it doesn't look like a `traditional' LAN. Because most LANs (i.e., Ethernets and token rings) are shared media networks---every node on the LAN is connected to the same link---it is easy to implement things like broadcast (sending to everybody) and multicast (sending to a group). Thus, lots of the protocols that people depend on in their LANs---e.g., the Address Resolution Protocol (ARP) described in Section ---depend in turn on the ability of all LANs to support multicast and broadcast. However, because of its connection-oriented and switched nature, ATM behaves rather differently from a shared media LAN. For example, how can you broadcast to all nodes on an ATM LAN if you don't know all their addresses? There are two possible solutions to this problem, and both of them have been explored. One is to redesign the protocols that assume things about LANs which are not in fact true of ATM. Thus, for example, there is a new protocol called ATMARP which, unlike traditional ARP, does not depend on broadcast. The alternative is to make ATM behave more like a shared media LAN, in the sense of supporting multicast and broadcast---without losing the performance advantages of a switched network. This approach has been developed by the ATM Forum under the title `LAN Emulation' (which might be more correctly titled `Shared Media Emulation'). This approach aims to add functionality to ATM LANs so that anything that runs over a shared media LAN can operate over an ATM LAN.