
Cut-through switching mode with VLAN tag

toefte
Level 1

Hello,

Suppose, theoretically, my switch is configured to work in cut-through switching mode. How can it read an 802.1Q VLAN tag? Cut-through forwards the frame as soon as the switch has received the destination MAC address, so the switch can't yet see whether there is a VLAN tag or whether the destination port is allowed to send out the frame.

How does this work on a real system? How does the switch handle this issue?

Thanks


14 Replies

Cut-through switching applies to the frame's payload, not to its L2 header.

So yes, the switch will still read the VLAN tag.

MHM

The summary in @MHM Cisco World's reference says it well, and concisely: "In cut-through switching mode, switches receive only a fraction of the frame and immediately start making a forwarding decision."

What that fraction is can vary based on need!

If, as that reference also mentioned, there's an ACL, the frame cannot be forwarded until the ACL information is seen.  I.e. for such a case, we cannot forward based just on having obtained the destination MAC.
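The idea that the needed "fraction" varies can be sketched in a few lines of Python. This is a toy model, not any vendor's actual hardware logic, and `bytes_needed` is a hypothetical helper invented for illustration:

```python
# Toy sketch: how many leading bytes of an Ethernet frame a cut-through
# switch must buffer before it can pick an egress port, depending on
# whether the forwarding decision also needs the VLAN ID.

def bytes_needed(frame: bytes, need_vlan: bool = True) -> int:
    """Return how many leading bytes must arrive before forwarding can start."""
    needed = 6  # destination MAC occupies bytes 0-5: the bare minimum
    if need_vlan and len(frame) >= 14:
        ethertype = int.from_bytes(frame[12:14], "big")
        if ethertype == 0x8100:  # 802.1Q tag present
            needed = 16          # dst MAC (6) + src MAC (6) + 4-byte tag
    return needed

# Untagged frame: decision is possible after the destination MAC alone.
untagged = bytes.fromhex("ffffffffffff" "aabbccddeeff" "0800") + b"\x00" * 46
# Tagged frame: TPID 0x8100, then a TCI carrying VLAN ID 100.
tagged = bytes.fromhex("ffffffffffff" "aabbccddeeff" "8100" "0064" "0800") + b"\x00" * 42

print(bytes_needed(untagged))  # 6
print(bytes_needed(tagged))    # 16
```

An ACL on L3/L4 fields would push `needed` even further into the frame, which is the point of the reference's "fraction can vary" remark.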

Joseph W. Doherty
Hall of Fame

Possibly it's as simple as this: the switch realizes that for VLAN-tagged frames it needs to see the VLAN ID so the correct egress port can be selected. Sort of a variation of fragment-free cut-through switching; i.e. having just the destination MAC is insufficient for VLAN-tagged frames.

Effectively, it would add an additional 80 bit times (the 10 extra bytes of source MAC plus 802.1Q tag beyond the destination MAC).
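The 80-bit-times figure can be checked with a few lines of arithmetic (link speeds are illustrative):

```python
# To see the 802.1Q VLAN ID, the switch must read past the source MAC and
# the 4-byte tag, i.e. 10 bytes beyond the destination MAC.

SRC_MAC = 6       # bytes
DOT1Q_TAG = 4     # bytes: TPID (2) + TCI (2, carries the 12-bit VLAN ID)

extra_bits = (SRC_MAC + DOT1Q_TAG) * 8
print(extra_bits)  # 80 bit times

# What 80 bit times costs at common link speeds, in nanoseconds:
for name, bps in [("10M", 10e6), ("100M", 100e6), ("1G", 1e9), ("10G", 10e9)]:
    print(name, extra_bits / bps * 1e9, "ns")
```

At gigabit speeds that is 80 ns, which helps explain why the extra wait is a non-issue in practice.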

toefte
Level 1

I found this post which deals with my question:

https://community.cisco.com/t5/switching/understanding-cut-through-switching-mode/td-p/2622635

Is this answer correct? I don't understand why cut-through switching is defined to forward the frame after receiving the destination MAC address (14 bytes including the preamble), when in reality it has to be implemented differently because of the VLAN tag problem.

"I don't understand why cut-through switching is defined, that it forwards the frame after receiving the destination MAC address (14 byte including the preamble), but in reality it has to be implemented different because of the VLAN tag problem."

Possibly your issue is that you're locked into the idea that cut-through is always done immediately after the destination MAC is acquired.

You believe it's "defined" to always happen immediately after the destination MAC is acquired, correct?  If so, what standard requires this?  What about fragment-free or adaptive cut-through, which both work a little differently?  And why wait on receiving the whole destination MAC?  (I have production experience using network devices that would begin forwarding an Ethernet frame before the whole destination MAC was received!  Do you know what we called those?)

I assume you understand why cut-through waits for the whole destination MAC?  I.e. without it, the switch cannot determine a specific egress port.

Likewise, if the ingress port is receiving VLAN-tagged frames, cut-through cannot determine a specific egress port from the destination MAC alone.

Oh, another wrinkle to using cut-through: what if you want to use L2 or L3 QoS?  Those tags come after the MACs.

What if multiple ingress ports resolve to the same egress port?

I recall when some switches first introduced cut-through.  Interestingly, another network technology pretty much negated the need for cut-through.

For the reasons above, rather than believing cut-through is "defined" to go into action as soon as the destination MAC is received, and that VLAN tags cause a problem, if you really think about it, either what you read is incorrect or you mistook a general description for a "definition".

Lastly, the real goal of cut-through isn't so much avoiding an extra 10 bytes of delay, but avoiding the delay of the following 1.5 KB to 9 KB of frame.

toefte
Level 1

Thank you for your reply.
Yes, I thought it was a standard. Some trustworthy sources told me there are these "predefined" switching modes and described how they operate.
Now I know more.
"I have production experience using network devices that would begin forwarding an Ethernet frame before the whole destination MAC was received! Do you know what we called those?"
That is very interesting. What is it called, and how does it work?

"That is very interesting. How is it called and how does this work?"

They were usually called Ethernet "hubs".  As a frame started to ingress on one port, its bits could be replicated on all the other ports.

When switches started to replace them, one notable negative was the additional latency added compared to a hub.  Up to, roughly, 12,000 (yes, 12,000) times longer, per hop!  Cut-through could hugely narrow that, down to only about 47 times longer.

The technology that mitigated the need for cut-through was Fast Ethernet.

As FE only cuts the time by 10x (so, for example, the 12,000 becomes 1,200), it may be unclear why cut-through wasn't still better.  That's because the 1,200 is a worst case, and other latency issues are usually much more of an issue.

Cut-through has come back, but it's generally only used where some critical need justifies its additional hardware expense.  Also, as you may now see, it has its own usage issues.
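The rough ratios above can be reproduced with a little arithmetic. The assumptions are mine, not from the post: a hub adds on the order of 1 bit time per hop, store-and-forward must buffer a whole 1518-byte maximum-size frame, and plain cut-through waits for the 6-byte destination MAC; real devices add extra fixed pipeline delay on top of this buffering:

```python
# Per-hop buffering delay, measured in bit times, under the assumptions above.

HUB_BITS = 1                 # a hub repeats bits almost immediately
CUT_THROUGH_BITS = 6 * 8     # wait for the destination MAC: 48 bit times
MAX_FRAME_BITS = 1518 * 8    # store-and-forward: whole max-size frame

print(MAX_FRAME_BITS // HUB_BITS)   # 12144: the rough "12,000 times" per hop
print(CUT_THROUGH_BITS // HUB_BITS) # 48: close to the "about 47 times longer"
```

The 10x speed-up of Fast Ethernet shrinks all three absolute delays by the same factor, which is why the ratios alone don't tell you whether cut-through is worth paying for.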

rvallejr
Level 1

Hello.

Taking advantage of this discussion, I'd like to ask about cut-through compared to store-and-forward for optimised VoIP use. The Netacad material says store-and-forward is the indicated method, but a controversy has been raised in which cut-through would be ideal because it reduces latency.

I think the question here is whether the critical element for VoIP is latency or the quality of the frames arriving at the destination. Still based on the Netacad material, I wonder if fragment-free cut-through, a mix of the advantages of store-and-forward and cut-through, isn't the reason for the argument that cut-through is more interesting for VoIP traffic. As Joseph mentions in his reply, the characteristic of cut-through is that it forwards the frame immediately after reading the destination MAC, and in this case fragment-free cut-through reads and checks for errors in the first 64 bytes. I would be grateful if colleagues could clarify this issue.

Well, consider: decades ago I was doing VoIP across FE LANs and fractional (as low as 256 Kbps) international T1/E1 WANs (with other traffic), all of which were store-and-forward, without issue.  So the latency reduction from cut-through, for VoIP, perhaps only benefits the sellers of more expensive cut-through switches.

BTW, what I was doing decades ago possibly isn't the best explanation of why cut-through is very unlikely to benefit VoIP.

So, consider how big a VoIP packet is.  A quick Internet search suggests a typical VoIP packet is less than 256 bytes.  So the latency to receive the whole packet is 256 bytes * 8 divided by the link "speed".  If our link speed were 1 Mbps, it would take (256 * 8 / 1,000,000) 2.048 ms.  As end-to-end latency under 150 ms is generally acceptable, there's not much need for cut-through for VoIP.  (This is also why I didn't have VoIP issues on the lower-bandwidth connections of decades past.)

Also, consider: at 10 Mbps the latency is .2 ms; at 100 Mbps, .02 ms; at gig, .002 ms, or 2 microseconds.
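The arithmetic above, spelled out (the 256-byte packet size is the rough figure from the post):

```python
# Serialization delay of a ~256-byte VoIP packet at various link speeds.

PACKET_BYTES = 256

def serialization_ms(link_bps: float) -> float:
    """Milliseconds to receive the whole packet at the given link speed."""
    return PACKET_BYTES * 8 / link_bps * 1000

for name, bps in [("1M", 1e6), ("10M", 10e6), ("100M", 100e6), ("1G", 1e9)]:
    print(f"{name}: {serialization_ms(bps):.4f} ms")
# 1M: 2.0480 ms, 10M: 0.2048 ms, 100M: 0.0205 ms, 1G: 0.0020 ms
```

Even the worst case here is a tiny fraction of the 150 ms end-to-end budget.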

Notwithstanding the above, a whole queue of other packets ahead of the VoIP packet can be problematic, both decades ago and today, and cut-through doesn't help much there; QoS does, which is why QoS is almost always mandated for VoIP, but cut-through is not.
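To put a number on why queueing dwarfs the store-and-forward buffering delay, here is a small illustration. The 1500-byte packet size and 100 Mbps link are my assumed example values, not from the post:

```python
# Delay added by N full-size packets already queued ahead of a VoIP packet.

LINK_BPS = 100e6              # assumed: 100 Mbps link
FULL_PACKET_BITS = 1500 * 8   # assumed: full-size packets ahead in the queue

def queue_delay_ms(packets_ahead: int) -> float:
    """Milliseconds the VoIP packet waits behind the queued packets."""
    return packets_ahead * FULL_PACKET_BITS / LINK_BPS * 1000

print(queue_delay_ms(1))   # 0.12 ms: one packet ahead
print(queue_delay_ms(50))  # 6.0 ms: a modest queue already dwarfs any
                           # microsecond-scale cut-through saving
```

QoS attacks exactly this term by letting the VoIP packet jump the queue; cut-through does not.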

Also, don't take the above to mean cut-through is worthless; it's not, but the need for it is often over-hyped by hardware vendors, just as with all-port wire-speed switches and/or 10g, 25g, 40g and 100g ports.  Again, all can meet a real need, but hardware vendors are in the business of selling hardware, so they hype new/better regardless of whether it's a cost-effective benefit for you.

(Oh, a side issue almost all of us are sensitive to: we don't like to work in a technology "backwater"; we want to work with state-of-the-art equipment.  So, @rvallejr, for those saying we should use cut-through to support VoIP, see if they can actually make a good argument for why.)

(Laugh, in one company I worked at, experienced engineers considered me a "mad scientist", and junior engineers a "stick in the mud".  The former because I would consider using any technology, and would.  The latter because any technology had to serve a business purpose, be "better", and actually work correctly.)

Hello, Joseph.
I believe pragmatism is recommended when dealing with technologies, and it is important to keep an open mind to deliver solutions that solve everyday problems, always in favor of the organization's business. Don't worry, the boring and crazy people are experts at this (lol), because they don't accept the new just because it is new, without questioning it.
Thank you for the clarifications, I learned much more than I was looking for with your arguments and your experience.

Well said.

BTW, if you like "controversy", for other latency-sensitive cases beyond VoIP, you might consider high-def video conferencing, like Cisco's (past?) Telepresence, jumbo Ethernet and/or QoS. You might also consider optical switching.