Forward and Filter Decision Switching
There are three distinct functions of layer 2 switching: address learning, forward/filter decisions, and loop avoidance.
- Address learning: Layer 2 switches and bridges remember the source MAC address of each frame received on an interface and enter this information into a MAC table called a forward/filter table.
- Forward/filter decisions: When a frame is received on an interface, the switch looks up the destination MAC address and finds the exit interface in the MAC table. The frame is forwarded only out the specified destination port.
- Loop avoidance: If multiple connections between switches are created for redundancy purposes, network loops can occur. Spanning Tree Protocol (STP) is used to stop network loops while still permitting redundancy.
When a switch is first powered on, the MAC table is empty. When a frame is received on a port, the source MAC address is placed in the MAC address table, along with the port ID of the port on which it was received. If the MAC address was already in the table, its associated aging countdown timer is reset (300 seconds by default). Then the MAC address table is searched using the destination MAC address to determine which action to take.
- Forward: If the destination MAC address was learned on a different port of the switch, the frame is sent out that port for transmission.
- Flood: If the destination MAC address is not in the MAC address table, the frame is flooded, that is, sent out all ports except the port on which it arrived. This action is known as unicast flooding.
- Filter: If the destination MAC address was learned on the same port on which the frame arrived (in other words, the source and destination MAC addresses share the same exit port), there is no need to forward the frame, and it is discarded.
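To make the learning and decision steps concrete, here is a minimal Python sketch of a forward/filter table. The class name, the simplified frame representation, and the explicit aging sweep are illustrative assumptions for the example, not how any particular switch implements its table.

```python
import time

AGING_SECONDS = 300  # default aging timer mentioned above

class MacTable:
    """Toy forward/filter table: MAC address -> (port, last-seen timestamp)."""

    def __init__(self):
        self.entries = {}

    def handle_frame(self, src_mac, dst_mac, in_port, now=None):
        """Learn the source address, then decide: forward, flood, or filter."""
        now = now if now is not None else time.time()

        # Address learning: record (or refresh) the source MAC and its port.
        self.entries[src_mac] = (in_port, now)

        # Drop entries whose aging timer has expired.
        self.entries = {mac: (port, seen)
                        for mac, (port, seen) in self.entries.items()
                        if now - seen < AGING_SECONDS}

        entry = self.entries.get(dst_mac)
        if entry is None:
            # Unknown destination: flood out every port except the arrival port.
            return ("flood", None)
        out_port, _ = entry
        if out_port == in_port:
            # Destination sits on the same segment the frame came from: filter it.
            return ("filter", None)
        # Known destination on another port: forward only out that port.
        return ("forward", out_port)

# Example: host A (port 1) sends to host B before B has been learned.
table = MacTable()
print(table.handle_frame("aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb", in_port=1))  # ('flood', None)
print(table.handle_frame("bb:bb:bb:bb:bb:bb", "aa:aa:aa:aa:aa:aa", in_port=2))  # ('forward', 1)
```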
Cut-Through Switching
In addition to large numbers of interfaces, support for multitudes of physical media types and transmission rates, and enticing network management features, Ethernet switch manufacturers often tout that their switches use cut-through switching rather than the store-and-forward packet switching used by routers and bridges. The difference between store-and-forward and cut-through switching is subtle. To understand this difference, consider a packet that is being forwarded through a packet switch (i.e., a router, a bridge, or an Ethernet switch). The packet arrives at the switch on an inbound link and leaves the switch on an outbound link. When the packet arrives, there may or may not be other packets in the outbound link's output buffer. When there are packets in the output buffer, there is absolutely no difference between store-and-forward and cut-through switching. The two switching techniques only differ when the output buffer is empty.
When a packet is forwarded through a store-and-forward packet switch, the packet is first gathered and stored in its entirety before the switch begins to transmit it on the outbound link. If the output buffer becomes empty before the whole packet has arrived at the switch, this gathering generates a store-and-forward delay at the switch, a delay that contributes to the total end-to-end delay. An upper bound on this delay is L/R, where L is the length of the packet and R is the transmission rate of the inbound link. Note that a packet incurs a store-and-forward delay only if the output buffer becomes empty before the entire packet arrives at the switch.
With cut-through switching, if the buffer becomes empty before the entire packet has arrived, the switch can start to transmit the front of the packet while the back of the packet continues to arrive. Of course, before transmitting the packet on the outbound link, the portion of the packet that contains the destination address must first arrive. (This small delay is inevitable for all types of switching, as the switch must determine the appropriate outbound link.) In summary, with cut-through switching a packet does not have to be fully "stored" before it is forwarded; instead the packet is forwarded through the switch when the output link is free. If the output link is shared with other hosts (e.g., the output link connects to a hub), then the switch must also sense the link as idle before it can "cut-through" a packet.
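As a rough illustration of this timing difference, the sketch below models a single packet and computes the earliest moment each kind of switch could begin transmitting it. The start_of_transmission helper is hypothetical, and it assumes the inbound and outbound links run at the same rate, so the outbound link never catches up with the bits still arriving during cut-through.

```python
def start_of_transmission(L_bits, R_in_bps, header_bits, buffer_drain_time, cut_through):
    """Earliest time (seconds after the packet's first bit arrives) at which
    the switch can begin sending the packet on the outbound link."""
    full_arrival = L_bits / R_in_bps          # last bit of the packet is in
    header_arrival = header_bits / R_in_bps   # destination address is in

    if cut_through:
        # May start once the destination address has arrived and the link is free.
        return max(header_arrival, buffer_drain_time)
    # Store-and-forward: must wait for the entire packet and for the link.
    return max(full_arrival, buffer_drain_time)

# A 1,500-byte packet arriving on a 100 Mbps link (14-byte Ethernet header assumed).
L, R, H = 1500 * 8, 100e6, 14 * 8

# Empty output buffer: cut-through starts transmitting about 0.12 msec earlier.
print(start_of_transmission(L, R, H, buffer_drain_time=0.0, cut_through=False))  # ~1.2e-4 s
print(start_of_transmission(L, R, H, buffer_drain_time=0.0, cut_through=True))   # ~1.1e-6 s

# Output buffer busy for 2 msec: the two techniques behave identically.
print(start_of_transmission(L, R, H, buffer_drain_time=0.002, cut_through=False))  # 0.002 s
print(start_of_transmission(L, R, H, buffer_drain_time=0.002, cut_through=True))   # 0.002 s
```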
A cut-through switch can reduce a packet's end-to-end delay, but by how much? As we mentioned above, the maximum store-and-forward delay is L/R, where L is the packet size and R is the rate of the inbound link. The maximum delay is approximately 1.2 msec for 10 Mbps Ethernet and 0.12 msec for 100 Mbps Ethernet (corresponding to a maximum-size Ethernet packet). Thus, a cut-through switch reduces the delay by at most 0.12 to 1.2 msec, and this reduction occurs only when the outbound link is lightly loaded. How significant is this delay? Probably not very significant in most practical applications, so you may want to think twice about selling the family house before investing in the cut-through feature.
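These figures are easy to verify; the short calculation below assumes a 1,518-byte frame as the "maximum size Ethernet packet."

```python
L = 1518 * 8  # maximum-size Ethernet frame, in bits (assumed 1,518 bytes)

for rate_bps, label in [(10e6, "10 Mbps"), (100e6, "100 Mbps")]:
    # Upper bound on the store-and-forward delay, and hence on the maximum
    # saving from cut-through switching when the outbound link is idle.
    delay_ms = L / rate_bps * 1000
    print(f"{label}: L/R = {delay_ms:.2f} ms")

# Output:
# 10 Mbps: L/R = 1.21 ms
# 100 Mbps: L/R = 0.12 ms
```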
Figure: Comparison of the typical features of popular interconnection devices.