01-04-2025 08:38 AM
Hello, everyone.
I am studying WRED from NetworkLessons.com and they have this graph
I understand that WRED is a mechanism that drops packets, but why is the queue depth measured in packets? Why are we monitoring the number of packets in the queue instead of how much bandwidth that queue is using at that specific moment? After all, 10 TCP packets of 1 KB each sitting in the queue are not the same as 10 TCP packets of 5 KB each.
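To make the comparison concrete, here is a quick back-of-the-envelope calculation (Python, with a 10 Mb/s link rate and 1,000/5,000-byte packet sizes picked purely for illustration):

# Same packet count, very different byte occupancy and drain time.
LINK_RATE_BPS = 10_000_000                      # assumed 10 Mb/s link, for illustration only

for pkt_size_bytes in (1_000, 5_000):           # "1 KB" vs "5 KB" packets
    queued_bytes = 10 * pkt_size_bytes          # 10 packets sitting in the queue
    drain_time_ms = queued_bytes * 8 / LINK_RATE_BPS * 1000
    print(f"10 x {pkt_size_bytes} B = {queued_bytes} B queued, ~{drain_time_ms:.0f} ms to drain")
# 10 x 1000 B = 10000 B queued, ~8 ms to drain
# 10 x 5000 B = 50000 B queued, ~40 ms to drain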
Thank you.
David
01-04-2025 10:29 AM
Probably because, at the time Dr. Floyd published RED, packets were easy to count and were already being queued as "items". Traditionally, even a basic FIFO queue counts what?
In later IOS versions, there are now options to define queue capacity in bytes or as a maximum ms of delay.
I also might mention that RED queue depth is a moving average, so smaller packets will both increase and decrease the queue depth faster, i.e. RED may not "see" the higher instantaneous queue depths, and so may not drop those smaller packets as it would larger packets. (NB: there are other issues that go along with using a moving average, which I won't go into here, but again, really using RED effectively is more complicated than the way this technology is often presented.)
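As a rough sketch of the moving-average idea (this is the textbook RED calculation in simplified form, not any vendor's implementation; the weight and thresholds are made-up illustrative values):

def red_drop_probability(avg_depth, instant_depth,
                         weight=0.002, min_th=20, max_th=40, max_prob=0.10):
    """Return (new_avg_depth, drop_prob) for one arriving packet.

    avg_depth is an exponentially weighted moving average (EWMA) of the
    queue's packet count; weight, min_th, max_th and max_prob are
    illustrative values, not platform defaults.
    """
    avg_depth = (1 - weight) * avg_depth + weight * instant_depth
    if avg_depth < min_th:
        return avg_depth, 0.0          # below min threshold: never drop
    if avg_depth >= max_th:
        return avg_depth, 1.0          # above max threshold: drop everything
    # linear ramp of the drop probability between the two thresholds
    return avg_depth, max_prob * (avg_depth - min_th) / (max_th - min_th)

Because the weight is small, a short burst of small packets barely moves avg_depth, which is exactly the "RED doesn't see the instantaneous spike" effect mentioned above.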
01-04-2025 10:59 AM
I also happened to find this
So I suppose that the queue depth can be either in bytes or in packets, depending on the platform
01-04-2025 11:21 AM
Yup, and again, as I recall, some platforms, for some queuing, also offer a ms delay option.
Why these options now, and not then? Advancements in hardware. Even today, on many software-based routers, you'll often see a drop in forwarding capacity when using QoS features.
BTW, a good way to learn more about a non-vendor-specific networking technology is to search the Internet and/or Wikipedia on the subject. Technically, Wikipedia often does a fair job of explaining networking technology, but where it can be especially nice is in its "see also", "references", "external links", etc. sections.
For example, you're wondering about the impact of different-sized packets on RED; well, you're possibly not the first to wonder that. Using Wikipedia, I quickly found: "Effect of different packet sizes on RED performance".
01-06-2025 06:51 AM
There's one more thing that I would like to ask about.
Say we reserve 20% of our link's BW for some class. Given that some platforms use a queue depth in packets, doesn't this mean that if enough small packets fill up the queue, tail drop will occur despite maybe having a certain % of the bandwidth still available?
01-06-2025 08:33 AM
"Say we reserve 20% of our link's BW for some class. Given that some platforms use a queue depth in packets, doesn't this mean that if enough small packets fill up the queue, taildrop will occur despite maybe having a certain % of bandwidth still available?"
Sure! But basically, it shouldn't (much) happen if that class doesn't exceed its 20% allocation.
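To put some (purely illustrative) numbers on your scenario:

# A packet-count queue limit can be exhausted by small packets while the
# byte (bandwidth) budget is still mostly unused.
QUEUE_LIMIT_PKTS = 64                     # assumed packet-based queue limit
MTU_BYTES = 1500
SMALL_PKT_BYTES = 64                      # e.g. TCP ACKs or small control packets

queued_pkts = QUEUE_LIMIT_PKTS            # queue is "full" by packet count
queued_bytes = queued_pkts * SMALL_PKT_BYTES
full_size_bytes = QUEUE_LIMIT_PKTS * MTU_BYTES

print(f"packets queued: {queued_pkts}/{QUEUE_LIMIT_PKTS} -> tail drop on the next arrival")
print(f"bytes queued:   {queued_bytes} vs {full_size_bytes} if the same limit held MTU-sized packets "
      f"({100 * queued_bytes / full_size_bytes:.0f}%)")

So yes, with a packet-based limit a flood of small packets can trigger tail drop while the class is nowhere near its byte/bandwidth allocation; a bytes- or delay-based queue-limit (where supported) avoids that particular mismatch.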
Remember, it takes less time to transmit smaller packets, so, with RED's moving average, RED doesn't "see" a short-time burst of small packets; it sees a packet count based on an average volume, regardless of packet sizes. As I recall, this is the primary reason for using the moving average.
Actually, it's a very sharp way to maintain high utilization while trying to maximize goodput. What it's not really ideal for is various service-level guarantees, other than longer-term averages.
For example, consider we have VoIP and FTP sharing a queue, with WRED you provide very high drop probability to FTP. Do you think VoIP will always work well? If not, why not? Further, what's the impact to the FTP traffic, even when there's no VoIP traffic? Good, bad, neither? Why?
You're touching upon an important area of QoS (statistical probabilities), often neglected when teaching QoS, which, when not understood, can lead to many unexpected and undesired results.
I always liked the quotation "there are lies, damned lies, and statistics", which appears true when you don't understand statistics.
Best-effort networking might be equated with playing roulette, but how it plays out can be very much "influenced" by how you bet and how many consecutive times you play. (Like roulette, we might "rig" the game [QoS] to better the odds or even guarantee a desired result.)
01-04-2025 02:31 PM
correct
queue-limit 5000 bytes/packets <<- this command switches between the two modes
01-04-2025 10:31 AM - edited 01-04-2025 10:31 AM
Hello,
Take a look at this document from Cisco:
https://www.cisco.com/c/en/us/td/docs/ios/qos/configuration/guide/queue_depth.html#wp1054046
Especially the part where it mentions that queue depth is measured in packets.
I would assume that's because a packet count is definitive, whereas, as you mention, a packet can have a wide range of sizes. It could be a way of having a standard. It's wise to understand the packets in your network, as this will help you develop QoS policies. That, and configurable values like MTU, help with the maximum packet sizes traversing the links.
Hope this helps
-David
01-05-2025 02:47 AM
Hello @Mitrixsen
Thanks for opening this post with that question.
In my view, the reason queue depth is measured in packets rather than bandwidth is rooted in simplicity and in alignment with how TCP and other transport protocols handle congestion. Counting packets is straightforward for network devices because it avoids the computational overhead of tracking and summing varying packet sizes in real time. In high-speed networks where queues are managed at scale, this simplicity ensures efficient processing without introducing unnecessary delays.
Another important factor is that TCP's congestion control algorithms, such as Additive Increase / Multiplicative Decrease, called "AIMD" [ https://www.geeksforgeeks.org/aimd-algorithm/ ], operate at the packet level. TCP responds to packet drops or acknowledgments, not directly to bandwidth usage. Measuring queue depth in packets allows WRED to align with TCP's behavior, ensuring effective signaling of congestion to senders and helping maintain stable network performance.
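As a rough sketch of the AIMD behavior being described (simplified; real TCP stacks add slow start, fast retransmit/recovery, etc., and the numbers are illustrative):

def aimd_update(cwnd, packet_lost):
    """One simplified AIMD step: grow by one segment per RTT, halve on loss."""
    if packet_lost:
        return max(cwnd / 2.0, 1.0)       # multiplicative decrease
    return cwnd + 1.0                     # additive increase

cwnd = 10.0
for lost in (False, False, True, False):
    cwnd = aimd_update(cwnd, lost)        # 11 -> 12 -> 6 -> 7

The point for WRED is that a single dropped packet is the congestion signal the sender reacts to, so counting and dropping packets lines up directly with that behavior.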
Additionally, using packets as the metric provides a predictable way to manage queue utilization. While packet sizes can vary significantly, traffic patterns in most networks tend to balance out over time, with a mix of small control packets and large data packets. This averaging effect makes packet-based measurement a reliable approximation for queue depth in typical scenarios.
In contrast, bandwidth-based measurements can be more variable and may lead to instability in congestion management.
Finally, bandwidth measurement introduces practical challenges. Real-time bandwidth usage can fluctuate rapidly, making it harder to implement stable and responsive congestion-avoidance mechanisms. Packet-based metrics offer a simpler and more consistent alternative. For cases where bandwidth-specific control is necessary, mechanisms like traffic shaping or policing are used alongside WRED to provide comprehensive traffic management. Together, these approaches balance simplicity, performance, and alignment with protocol behavior.