229 Views, 2 Helpful, 7 Replies

QoS behaviour during congestion

Mitrixsen
Level 1

Hello, everyone.

If someone configures QoS classes and prioritizes different types of traffic and let’s say that they configure CBWFQ for any non-voice data and LLQ for voice data, these shouldn’t take effect unless the traffic amount exceeds the link’s or the device’s bandwidth capacities, right? Or in other words, the configured traffic prioritization and queuing and such doesn’t happen until we have congestion, correct?

So then it’s correct to say that if there is no congestion, the devices will forward the received traffic using the FIFO mechanism, right? So in whatever order the packets arrive, in the same order they are forwarded. Since if there is available bandwidth for everyone, why should anyone be given preferential treatment?

My second question is, if we connect two devices using a 1-gigabit link, it’s not exactly right to say that congestion will occur only once we go over 1 gigabit, right? Because we should also factor the devices’ performance into this. Even if the link is 1000 Mbps, congestion can occur earlier if the device can only handle, say, 250 Mbps, right?

Thank you.
David

1 Accepted Solution

Accepted Solutions

Joseph W. Doherty
Hall of Fame

"If someone configures QoS classes and prioritizes different types of traffic and let’s say that they configure CBWFQ for any non-voice data and LLQ for voice data, these shouldn’t take effect unless the traffic amount exceeds the link’s or the device’s bandwidth capacities, right?"

It depends on the QoS policy and device's architecture.

For example, a CBWFQ policy with embedded policer(s) can drop packets even when there's ample bandwidth. Embedded shapers can queue traffic, again even when there is ample physical bandwidth.
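To make that concrete, here's a toy Python sketch of the token-bucket logic behind a policer. This is an illustrative model, not IOS internals; the class and parameter names are made up. The point it demonstrates: a policer enforces its configured rate regardless of whether the link itself is congested.

```python
# Toy token-bucket policer: drops packets that exceed the configured
# rate, even when the physical link is otherwise idle.
class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0      # refill rate in bytes per second
        self.burst = burst_bytes        # bucket depth (max burst)
        self.tokens = burst_bytes       # bucket starts full
        self.last = 0.0                 # timestamp of last update (seconds)

    def conform(self, now, pkt_bytes):
        # Refill tokens for the elapsed time, capped at the burst depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True                 # conforms: transmit
        return False                    # exceeds: policer drop

# 1 Mbps policer with a 1500-byte burst: two back-to-back 1500-byte
# packets at t=0 exceed the bucket, so the second is dropped even
# though a gig link could carry both with room to spare.
tb = TokenBucket(rate_bps=1_000_000, burst_bytes=1500)
print(tb.conform(0.0, 1500))   # True  (first packet conforms)
print(tb.conform(0.0, 1500))   # False (bucket empty, dropped)
```

A shaper uses the same bucket arithmetic but queues the non-conforming packet for later instead of dropping it, which is why shaped traffic can sit in a queue on an uncongested link.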

A device's internal processing and bandwidth capacity is its own can of worms.

"So then it’s correct to say that if there is no congestion, the devices will forward the received traffic using the FIFO mechanism, right?"

Well, yes and no. Both CBWFQ and FIFO queue traffic; if there's no congestion, there are no queues, but traffic possibly still passes through the same logic (for instance, CBWFQ stats should still count packets that logically transited the class even if such traffic was never queued).

"So in whatever order the packets arrive, in the same order they are forwarded. Since if there is available bandwidth for everyone, why should anyone be given preferential treatment?"

Well, maybe, or maybe not; it depends on the policy. Say you have two classes with active traffic, one with a shaper, and adequate bandwidth for all the active traffic, yet that one class is queued due to the shaper, so the other class's traffic might be transmitted before earlier-received shaped traffic.

BTW, QoS isn't just about "preferential treatment", it's about providing certain service levels, i.e. a Quality of Service.  For example, perhaps we desire to insure two hosts get a guarantee to half the available bandwidth, is either "preferred"?

"Even if the link is 1000 Mbps, congestion can occur earlier if the device can only handle, say, 250 Mbps, right?"

Yes, it would be congestion, but often the device offers little to no QoS support for such congestion.

Consider a router whose PPS rate cannot sustain wire rate from ingress to egress: say gig ingress, gig egress, 500 Mbps of ingress traffic, and a PPS rate that only supports 400 Mbps. So what happens? Possibly the ingress interface starts to drop received frames because they are not removed often enough. Or the device hardware can buffer ingress traffic until the buffer overflows. But what if that traffic was a bunch of FTP with interleaved VoIP? (Remember, the combined ingress rate is only 500 Mbps on a gig link.) How is the VoIP protected from NIC overruns and/or ingress buffer drops? (BTW, some devices can do some "built-in" ingress queue management, some with very, very limited configuration options.)
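A quick back-of-the-envelope calculation shows how fast that scenario goes wrong. The offered and forwarding rates are from the example above; the ingress buffer size is a made-up illustrative figure.

```python
# 500 Mbps offered, but the PPS limit sustains only 400 Mbps of
# forwarding, so backlog grows at 100 Mbps until the ingress
# buffering (assumed 4 MB here, purely illustrative) overflows.
offered_bps  = 500e6
forward_bps  = 400e6
buffer_bytes = 4e6                              # assumed ingress buffering

excess_bps  = offered_bps - forward_bps         # 100 Mbps of backlog growth
fill_time_s = buffer_bytes * 8 / excess_bps     # seconds until overflow
print(round(fill_time_s, 3))                    # 0.32 s, then drops begin
```

Less than a third of a second of headroom, and the drops that follow fall on whatever arrives next, FTP and VoIP alike, unless the device has some ingress queue management.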

BTW, traditionally congestion is taught as a QoS consideration for two cases. First, a faster ingress interface (e.g. gig) to a slower egress interface (e.g. FE). Second, multiple ingress interfaces whose aggregate bandwidth (e.g. 2 FEs) exceeds the egress bandwidth (e.g. 1 FE).

You've touched on another issue, where the device cannot meet the performance needs of its interfaces. This is usually addressed by ensuring the hardware is adequate for the need, but it's certainly a QoS consideration.

Likewise, another QoS consideration that can be overlooked, as bumping into it is rare: can 5 FE ingress interfaces be a problem when there's a gig egress and the device can easily handle gig at wire rate? Yes, they can, because . . .

Spoiler
what if all 5 FE interfaces deliver a frame at the same time? Only 1 can be transmitted immediately; the other 4 will need to be (very briefly) queued. Should there be any prioritization across the 5 frames?


7 Replies

M02@rt37
VIP

Hello @Mitrixsen 

You are correct that QoS mechanisms such as CBWFQ and LLQ only take effect during periods of congestion. When there is sufficient bandwidth for all traffic, packets are forwarded using the default FIFO mechanism, where packets are processed in the order they arrive. This is because, without congestion, there is no need to prioritize or queue traffic; all packets can be transmitted immediately as they come in. QoS prioritization ensures that critical traffic, like voice or video, is handled preferentially only when the link is oversubscribed, maintaining service quality for sensitive applications during high utilization.
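The effect of LLQ under congestion can be sketched in a few lines of Python. This is a toy model (strict priority plus one default queue, illustrative names); it just shows how, once a backlog forms, voice overtakes data that arrived earlier, whereas with no backlog the departure order is simply the arrival order.

```python
# Toy egress scheduler: a strict-priority (LLQ) queue plus one
# default queue.  With no backlog, departure order == arrival order
# (FIFO); with a backlog, the LLQ class is always serviced first.
from collections import deque

llq, default = deque(), deque()

def enqueue(pkt):
    # Crude classifier: anything named "voice*" goes to the LLQ.
    (llq if pkt.startswith("voice") else default).append(pkt)

def dequeue():
    # The priority queue is always drained first when it has traffic.
    q = llq if llq else default
    return q.popleft() if q else None

# Congestion: three packets backlogged at once.
for p in ["data1", "data2", "voice1"]:
    enqueue(p)
order = [dequeue() for _ in range(3)]
print(order)   # ['voice1', 'data1', 'data2'] -- voice overtakes earlier data
```

(Real LLQ also polices the priority queue so it cannot starve the other classes; that part is omitted here for brevity.)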

Regarding congestion thresholds, you are correct also. Congestion is not solely defined by the link's bandwidth capacity. While a 1 Gbps link theoretically supports up to 1 Gbps of traffic, the actual threshold for congestion can depend on the performance capabilities of the devices connected to the link. For example, if a device has a forwarding or queuing capacity of only 250 Mbps, congestion will occur when traffic exceeds this limit, even though the link itself can handle more. This demonstrates the importance of considering both link bandwidth and the performance characteristics of network devices when designing and managing a network. 

Best regards
.ı|ı.ı|ı. If This Helps, Please Rate .ı|ı.ı|ı.

Adding to this: when dealing with QoS and attempting to ensure a valid QoS outcome, you need to remember that certain traffic flows are heavier in one direction than the other, and to consider when/where to police or shape traffic. Web traffic is usually 20/80 (TX/RX), in which the bulk of your data is returning from a relatively small request, whereas voice/video streams are typically standardized and symmetric.


This is a very well formulated response, thank you very much.

I haven't seen the spoiler tag here on these forums before. Was this added by you?


If so, the queuing would be very brief, so there doesn't really need to be any prioritization, as such queuing would be a matter of microseconds, no?

"I haven't seen the spoiler tag here on these forums before. Was this added by you?"

I'm not the first to use it; in fact, seeing it used on another post is how I "discovered" the feature (found in the additional items at the top of the text reply box [although not when using my phone's browser]).

"If so, the queuing would be very brief so there doesn't really need to be any prioritization as such queuing would be a matter of microseconds, no?"

Usually not, but if the situation is one where something like cut-through switching is needed, which is also in the realm of saving microseconds, possibly it might be.
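For scale, here's the arithmetic behind "a matter of microseconds" in the five-FE example, assuming worst-case full-size 1500-byte frames on the gig egress:

```python
# Worst case in the five-FE example: four 1500-byte frames queued
# behind the one currently being serialized on the gig egress.
frame_bits = 1500 * 8
gig_bps = 1e9

per_frame_us = frame_bits / gig_bps * 1e6   # serialization delay per frame
print(round(per_frame_us, 1))               # 12.0 us per frame
print(round(4 * per_frame_us, 1))           # 48.0 us wait for the last frame
```

So tens of microseconds at worst, which only matters in the same latency regime where cut-through switching is worth pursuing.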


 

Ramblin Tech
Spotlight

"Or in other words, the configured traffic prioritization and queuing and such doesn’t happen until we have congestion, correct?"

At the risk of belaboring the point… consider a switch/router with an NPU which applies all egress QoS policies at *ingress*. That is, at the ingress interface the NPU runs the packet header through the ingress QoS policy, looks up the next-hop & egress interface, and then classifies the packet according to the egress policy and queues it in the appropriate hardware VOQ for its class. All this happens without a priori knowledge of the current congestion state of the egress interface.

It’s the NPU’s scheduler that then serializes for transmission all the packets in all the VOQs for that interface. If there is no congestion, then the scheduler essentially operates FIFO. If there is congestion, then the scheduler serializes according to egress QoS queueing policy. In either event, the packets are still classified and queued for prioritization by the scheduler irrespective of any egress interface congestion. 
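The VOQ arrangement described above can be sketched as a toy Python model (class names, priorities, and the interface name are illustrative, not any vendor's actual implementation): packets are queued at ingress into per-(egress, class) VOQs with no knowledge of egress congestion, and the scheduler drains those VOQs by class.

```python
# Toy VOQ model: classify and queue at ingress into per-(egress
# interface, class) virtual output queues; the egress scheduler then
# drains them in strict class-priority order.
from collections import deque

PRIORITY = ["voice", "video", "best-effort"]      # high to low (illustrative)
voq = {("gig0/1", c): deque() for c in PRIORITY}

def ingress(pkt, egress, cls):
    # Queued regardless of the egress interface's congestion state.
    voq[(egress, cls)].append(pkt)

def schedule(egress):
    # Scheduler serializes VOQ contents by class priority.
    for cls in PRIORITY:
        if voq[(egress, cls)]:
            return voq[(egress, cls)].popleft()
    return None

ingress("p1", "gig0/1", "best-effort")
ingress("p2", "gig0/1", "voice")
print(schedule("gig0/1"))   # p2 -- the voice VOQ is drained first
print(schedule("gig0/1"))   # p1
```

With no backlog, each call finds at most one queued packet and the behavior degenerates to FIFO, exactly as described above.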

 

Disclaimer: I am long in CSCO
