04-01-2008 08:39 AM - edited 03-05-2019 10:06 PM
What is it with QoS and flowcontrol that you cannot do both at the same time on a 3550 switch? Can anyone tell me why? I cannot really understand what they have to do with each other.
This flowcontrol, is it the GigE flowcontrol, as in flowcontrol receive on? Or is it something else they are talking about?
Does this restriction apply only to the 3550 platform, or is it universal? I have 4500s in my Gig core. I have phone traffic sharing links with bulk data transfers, so I need both QoS and flow control. Is the 4500 affected by this issue?
Kevin Dorrell
Luxembourg
04-01-2008 08:12 PM
"This flowcontrol, is it the GigE flowcontrol, as in flowcontrol receive on? Or is it something else they are talking about?"
Yes, according to the configuration guide. It is mentioned in both the QoS chapter and the interface chapter; the latter has: "Note You must not configure both IEEE 802.3z flowcontrol and quality of service (QoS) on a switch. Before configuring flowcontrol on an interface, use the no mls qos global configuration command to disable QoS on the switch."
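In practice I read that as: if you really want 802.3z flow control on a 3550, you have to turn QoS off globally first, along these lines (the interface name is just for illustration, and worth verifying against your software release):

configure terminal
! QoS must be globally disabled before flowcontrol is allowed
no mls qos
! example interface only
interface gigabitethernet0/1
 flowcontrol receive on
end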
"What is it with QoS and flowcontrol that you cannot do both at the same time on a 3550 switch? Can anyone tell me why? I cannot really understand what they have to do with each other."
I suspect the thinking is that with the additional QoS features, you no longer need the simpler back-pressure model provided by Ethernet flow control. Also consider that Ethernet flow control is an all-or-nothing mechanism. E.g. if a host is sending traffic of different priorities, how do you give the high-priority traffic precedence over the low-priority traffic just by stopping the host from sending?
The solution for your phone and bulk traffic sharing a link is to place the former in the 3550 expedite queue and the latter in another queue, along the lines of the sketch below.
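From memory, the egress side of that looks roughly like this on a 3550 (interface, trust setting and CoS value are examples only, so please check the 3550 QoS configuration guide before using it):

mls qos
!
interface gigabitethernet0/1
! trust the DSCP markings arriving from the phones / upstream switch
 mls qos trust dscp
! enable the egress expedite (strict-priority) queue, which is queue 4
 priority-queue out
! map CoS 5 (voice) into queue 4
 wrr-queue cos-map 4 5

The bulk traffic then falls into the other WRR queues and is only serviced when the expedite queue is empty.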
04-01-2008 09:33 PM
Hello Kevin,
I am not an expert, but I came across a document which discusses why flow control is not compatible with QoS.
===========Excerpt===========
While flow control offers the clear benefit of no packet loss, it also introduces a problem for quality of service. When a source port receives an Ethernet flow control signal, all microflows originating at that port, well behaved or not, are halted. A single packet destined for a congested output can block other packets destined for uncongested outputs. The resulting head-of-line blocking phenomenon means that quality of service cannot be assured with high confidence when flow control is enabled.
==================
Document Link
http://assets.zarlink.com/AN/ZLAN_44_AppNote_Dec04.pdf
HTH
Padmanabhan
04-02-2008 12:52 AM
Thanks for participating in this discussion guys, both comments are useful, although I am still not convinced.
I understand now that we are talking about the PAUSE feature defined in 802.3x (100 Mbps) and 802.3z (Gig). (I temporarily got confused with 802.1x, which of course is something else entirely.)
I understand that flow control can mean the switch is no longer able to guarantee QoS, but I still don't see why the two features shouldn't work in harmony. What I am after is flow control so that I don't drop any frames, but when the flow control is released, the scheduler should still look to the expedited queue before servicing any others. I don't really see why that should cause any head-of-line blocking.
Does anyone know how much jitter can be introduced by flow control, assuming the expedited queue is the first to be serviced once the PAUSE is released?
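Doing some arithmetic on the standard (please correct me if I have misread it): the PAUSE frame carries a 16-bit pause time expressed in units of 512 bit times, so the worst case for a single PAUSE at gigabit speed is 65535 x 512 ns, roughly 33.6 ms, and at 100 Mbps about ten times that, roughly 335 ms. I would expect switches to request far shorter pauses, or to send a zero-time PAUSE to resume early, but that seems to be the ceiling per PAUSE frame.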
Looking at the Zarlink chip, it seems to say that when flow control is enabled, it has only one queue, and therefore has the head-of-line blocking problem. I can understand that. But I am not sure whether that applies to the queueing schemes on the 4500 switches.
Or is it saying that the flow control applies only to the lowest-priority queue? If that is the case, then I actually want QoS and flow control at the same time. My first priority is the expedited queue. The bulk data can suffer packet loss, but if I can avoid most of that loss through flow control, so much the better. And I can still accept a small amount of packet loss on the bulk queue if that is the price of having QoS and flow control at the same time.
This is a very useful discussion, so if you have any more information, please come back to me with it. I might also open a TAC case in parallel, and I'll post any more information here.
Currently I have QoS on my core links, but no flow control, so I have packet drops that could perhaps be avoided. I need to decide before my scheduled downtime on Saturday whether to introduce flow control on my core links.
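(If I do, I will check what each side has actually negotiated, with show flowcontrol or show interfaces <id> flowcontrol, whichever form the platform supports, before and after the change, and keep an eye on the output drop counters.)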
Kevin Dorrell
Luxembourg
04-02-2008 05:00 AM
My understanding of Ethernet flow control is that it was really intended to pause the sending host, not to manage buffers between switches. Only the actual sending host can block ideally to avoid drops, since it can indicate to the individual sourcing application that the path is blocked. (Actually not ideal when you consider real-time apps.)
Consider a congested link between distribution and core, i.e. typical aggregation. Which downstream access link on the distribution switch do you issue the pause to? Unless you're tracking the actual impact of each flow, the distribution switch would have to send a pause to all of its access downlinks, which is unlikely to be what we want. In an aggregation situation with a larger-bandwidth uplink, we would likely need to pause multiple downstream access links. But whether it's one downstream access link, several, or all of them, all traffic from those links is blocked. This creates a head-of-line blocking issue for all the other access flows that are not contributing to the congestion.
You mention you want to avoid drops. Without source quench or ECN, drops shouldn't be avoided, they should be managed. Without some type of source flow control, oversubscribed networks can easily reach congestion collapse, where the network is full of traffic but very little is effectively being delivered. A simple example of this is what happens on shared Ethernet under high load. That case is especially interesting since some switches, as Padmanabhan's attachment also mentions, can provide back pressure to half-duplex connected devices by making the link appear busy.
Today, most often the best way to indicate congestion to flows is to drop packets such that all the available bandwidth is utilized without overdriving it. This does incur some loss from the drops, but it provides the best "goodput". In the near future (within 5 years?), ECN should accomplish the same without packet loss. Some interim approaches (if they are indeed interim) are the newer TCP stacks that infer congestion from high-resolution ACK timings; Microsoft's Compound TCP (CTCP) in Vista does this.