01-04-2025 08:38 AM
Hello, everyone.
I am studying WRED from NetworkLessons.com and they have this graph
I understand that WRED is a mechanism that drops packets, but why is the queue depth measured in packets? Why are we monitoring the number of packets in the queue instead of how much bandwidth that queue is using at that specific moment? After all, 10 TCP packets of 1 KB each in the queue are not the same as 10 TCP packets of 5 KB each.
Thank you.
David
Solved! Go to Solution.
01-04-2025 10:29 AM
Probably because, at the time Dr. Floyd published RED, packets were easy to count and were already being queued as "items". Traditionally, even a basic FIFO queue counts what?
In later IOS versions, there are now options to specify queue capacity in bytes or as a maximum delay in ms.
I should also mention that RED queue depth is a moving average, so smaller packets will both increase and decrease the queue depth faster, i.e. RED may not "see" the higher instantaneous queue depths, and so not drop those smaller packets as it would larger packets. (NB: there are other issues that go along with using a moving average, which I won't go into here, but again, really using RED effectively is more complicated than the way this technology is often presented.)
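To make that moving average concrete, here is a minimal Python sketch of the classic RED calculation. The thresholds, weight, and function names are illustrative assumptions, not vendor defaults or any actual IOS implementation:

```python
# Sketch of RED's averaged queue depth and drop probability.
# All constants (w, min_th, max_th, max_p) are illustrative assumptions.

def update_avg(avg, instantaneous_depth, w=0.002):
    """EWMA of queue depth: avg = (1 - w) * avg + w * current depth.
    A small w smooths out short bursts, so a brief spike of small
    packets barely moves the average."""
    return (1 - w) * avg + w * instantaneous_depth

def drop_probability(avg, min_th=20, max_th=40, max_p=0.1):
    """Linear RED drop probability between the two thresholds (in packets)."""
    if avg < min_th:
        return 0.0          # below the minimum threshold: never drop
    if avg >= max_th:
        return 1.0          # above the maximum threshold: tail-drop behavior
    return max_p * (avg - min_th) / (max_th - min_th)
```

For example, with `w=0.002`, a momentary burst that pushes the instantaneous depth to 100 packets only nudges an empty average up to 0.2, which is why RED may never "see" a short-lived spike of small packets.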
01-04-2025 10:31 AM - edited 01-04-2025 10:31 AM
Hello,
Take a look at this document from Cisco:
https://www.cisco.com/c/en/us/td/docs/ios/qos/configuration/guide/queue_depth.html#wp1054046
Especially the part where it mentions that queue depth is measured in packets.
I would assume that's because a packet count is definitive, whereas, as you mention, a packet can have many possible sizes. It could be a way of having a standard. It's wise to understand the packets in your network, as this will help you develop QoS policies. That, and configurable values like MTU, help with the maximum packet sizes traversing the links.
Hope this helps
-David
01-04-2025 10:59 AM
I also happened to find this
So I suppose that the queue depth can be either in bytes or in packets, depending on the platform
01-04-2025 11:21 AM
Yup, and again, as I recall, some platforms, for some queuing, also offer a ms delay option.
Why these options, now, and not then? Advancements in hardware. Even today, often on many software based routers, you'll see a drop in forwarding capacity if using QoS features.
BTW, a good way to learn more about a non-vendor-specific networking topic is to search the Internet and/or Wikipedia on the subject. Often, Wikipedia does a fair job of explaining networking technology, but where it can be especially nice is for its "see also", "references", "external links", etc.
For example, you're wondering about the impact of different-sized packets on RED; well, you're possibly not the first to wonder that. Using Wikipedia, I quickly found: Effect of different packet sizes on RED performance
01-06-2025 06:51 AM
There's one more thing that I would like to ask about.
Say we reserve 20% of our link's BW for some class. Given that some platforms use a queue depth in packets, doesn't this mean that if enough small packets fill up the queue, taildrop will occur despite maybe having a certain % of bandwidth still available?
01-06-2025 08:33 AM
"Say we reserve 20% of our link's BW for some class. Given that some platforms use a queue depth in packets, doesn't this mean that if enough small packets fill up the queue, taildrop will occur despite maybe having a certain % of bandwidth still available?"
Sure! Although, basically, it shouldn't (much) happen if that class doesn't exceed its 20% allocation.
Remember it takes less time to transmit smaller packets, so, with RED moving average, RED doesn't "see" a short time burst in small packets, it sees a packet count based on an average volume, regardless of packet sizes. I recall this is the primary reason for using the moving average.
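To illustrate the tail-drop scenario you describe, here is a toy Python sketch (all numbers, including the queue limit and packet sizes, are assumptions for illustration, not values from any platform):

```python
# Toy illustration: a queue limited by packet count can fill up with small
# packets while using only a fraction of the byte capacity that the same
# limit would permit for full-size packets. All constants are assumptions.

QUEUE_LIMIT_PKTS = 64        # hypothetical per-class queue limit, in packets
MTU = 1500                   # bytes, a full-size packet

def enqueue_all(packet_sizes, limit=QUEUE_LIMIT_PKTS):
    """Enqueue until the packet-count limit is reached.
    Returns (packets queued, packets tail-dropped, bytes in queue)."""
    queued, dropped, byte_depth = [], 0, 0
    for size in packet_sizes:
        if len(queued) < limit:
            queued.append(size)
            byte_depth += size
        else:
            dropped += 1     # tail drop: the count limit is hit; size is irrelevant
    return len(queued), dropped, byte_depth

# A burst of 100 small (64-byte) packets arrives:
q, d, b = enqueue_all([64] * 100)
# q == 64 queued, d == 36 tail-dropped, b == 4096 bytes --
# only about 4% of the 96000 bytes that 64 MTU-size packets would occupy.
```

So yes, with a packet-count limit, small packets can trigger tail drop long before the class's byte (bandwidth) allocation is anywhere near exhausted, which is one motivation for the byte-based and delay-based queue-limit options mentioned earlier in the thread.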
Actually, it's a very sharp way to maintain a high utilization loading while trying to maximize goodput. What it's not really too ideal for is various service level guarantees, other than longer term averages.
For example, consider we have VoIP and FTP sharing a queue, with WRED you provide very high drop probability to FTP. Do you think VoIP will always work well? If not, why not? Further, what's the impact to the FTP traffic, even when there's no VoIP traffic? Good, bad, neither? Why?
You're touching upon an important area of QoS (statistical probabilities), often neglected in teaching QoS, which, when not understood, can lead to many unexpected and undesired results.
I always liked the quotation: "there are lies, damned lies, and statistics", which appears true when you don't understand statistics.
Best-effort networking might be equated with playing roulette, but how it plays out can be very much "influenced" by how you bet and how many consecutive times you play. (As with roulette, we might "rig" the game [QoS] to better the odds or even guarantee a desired result.)
01-04-2025 02:31 PM
Correct.
queue-limit 5000 [bytes | packets] <<- this command switches between the two modes
01-05-2025 02:47 AM
Hello @Mitrixsen
Thanks for opening this thread with that question.
The reason queue depth is measured in packets rather than bandwidth is rooted in simplicity and alignment with how TCP and other transport protocols handle congestion, from my point of view. Counting packets is straightforward for network devices because it avoids the computational overhead of tracking and summing varying packet sizes in real-time. In high-speed networks where queues are managed at scale, this simplicity ensures efficient processing without introducing unnecessary delays.
Another important factor is that TCP's congestion control algorithms, such as Additive Increase - Multiplicative Decrease, called "AIMD" [ https://www.geeksforgeeks.org/aimd-algorithm/ ], operate at the packet level. TCP responds to packet drops or acknowledgments, not directly to bandwidth usage. Measuring queue depth in packets allows WRED to align with TCP's behavior, ensuring effective signaling of congestion to senders and helping maintain stable network performance.
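The AIMD behavior referenced above can be sketched in a few lines of Python. This is a deliberately minimal model of the window update rule, with illustrative constants, not a real TCP implementation:

```python
# Minimal sketch of AIMD (Additive Increase, Multiplicative Decrease).
# One call models one RTT; constants are illustrative, not from any RFC.

def aimd_step(cwnd, drop_detected, add=1.0, mult=0.5):
    """Return the next congestion window.
    On loss: multiplicative decrease (halve the window, floor of 1 segment).
    Otherwise: additive increase (grow by one segment per RTT)."""
    if drop_detected:
        return max(1.0, cwnd * mult)
    return cwnd + add
```

For example, a sender with a window of 10 segments that experiences one drop falls back to 5, then climbs to 6, 7, 8... on clean RTTs. A single dropped packet is the congestion signal, which is why WRED works by dropping (or marking) packets rather than by reporting bandwidth figures to senders.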
Additionally, using packets as the metric provides a predictable way to manage queue utilization. While packet sizes can vary significantly, traffic patterns in most networks tend to balance out over time, with a mix of small control packets and large data packets. This averaging effect makes packet-based measurement a reliable approximation for queue depth in typical scenarios.
In contrast, bandwidth-based measurements can be more variable and may lead to instability in congestion management.
Finally, bandwidth measurement introduces practical challenges. Real-time bandwidth usage can fluctuate rapidly, making it harder to implement stable and responsive congestion-avoidance mechanisms. Packet-based metrics offer a simpler and more consistent alternative. For cases where bandwidth-specific control is necessary, mechanisms like traffic shaping or policing are used alongside WRED to provide comprehensive traffic management. Together, these approaches balance simplicity, performance, and alignment with protocol behavior.
01-05-2025 04:48 AM - edited 01-05-2025 05:13 AM
M02@rt37, (truly) a very thoughtful and well-reasoned reply, which I believe is mostly incorrect when it comes to "why" much was done as it was.
Today we have smart wrist watches that may have hardware resources much beyond the multi-million (in dollars then) mainframes that I programmed on when I entered the industry.
When I entered the industry, the folks I worked with had been using computers in the 60s, some in the 50s, and possibly even the 40s, and just like listening to someone like me today, I got to hear first-hand how primitive computer systems were in prior years or decades compared to the then state-of-the-art 70s. Why, for example, we can type up punched-card data on CRT terminals. Dig it, man!
I believe in hindsight, it's very difficult to appreciate how technology evolves when you haven't yet found a way to accomplish something, and you have multiple possibilities. Heck, even IP was just one networking candidate approach.
Basically, IMO, much technology isn't created with a grand scheme based on concepts like simplicity, performance, protocol alignments, but on a possible approach based much on necessity. Something works, or it doesn't, or it doesn't work well. Improvements come along with knowledge gained, and improvement might be just refinements to something already being effectively used, or a totally different approach.
Another way of saying the above: consider the difference in personally travelling from point A to point B between a daily commute and a first-time trip. For the daily commute, you can ponder improvements, like simplicity, less time, etc., pretty easily, but for a first-time trip, your primary focus is how to get to your destination at all, especially if no one has done it before.
Today, we have a science base at the South Pole that we fly people and supplies in and out of. But say you wanted to go to the South Pole over a hundred years ago; how would you get there? (Laugh, how do you even know when you're there?) Decades ago, I read a very interesting book that detailed the first two expeditions to reach the South Pole, and, if not known, the second to reach it didn't make it back.
I'm unsure whether the above conveys my issue with M02@rt37's reply at all, because on its face it's an excellent reply. It's just that, having personally lived through, and contributed a very, very small part to, technical development over half a century, I can say such development isn't as clear-cut as it's so often viewed in hindsight.
01-05-2025 05:21 AM
You're absolutely correct that hindsight often oversimplifies the complex and iterative nature of technological development.
Much of what we take for granted in technology today—like protocols, architecture, and design principles—was born from necessity rather than premeditated elegance. Early engineers and developers didn’t have the luxury of our modern tools, abundant resources, or refined theories. They were often operating under tight constraints, using the tools and knowledge available at the time to solve immediate problems. The choices made were less about aligning with grand schemes and more about "making it work." In the process, they laid foundations that later generations refined, sometimes retaining the original designs simply because they worked well enough or had gained widespread adoption...
Your analogy of traveling to the South Pole is particularly apt. The first expeditions weren't about efficiency or optimization, they were about survival and discovery. Similarly, early computing pioneers were navigating uncharted territory, experimenting with approaches that sometimes worked by chance as much as by design. The iterative nature of progress means that today's "best practices" are often yesterday's ad hoc solutions, polished and contextualized by hindsight.
When I framed aspects like simplicity, performance, and alignment with protocols in my previous reply, it reflected the polished view of how we analyze and teach technology today. But you're right—those concepts often emerge after something has been built and proven functional. The clarity we ascribe to historical decisions is frequently a retrospective narrative applied to what was, at the time, a series of pragmatic, trial-and-error steps.
Understanding technology requires more than technical knowledge; it demands an appreciation for the human ingenuity, limitations, and sheer determination that shaped it.
01-05-2025 06:36 AM
Very nicely written.
Thank you!
BTW, hopefully other readers will not misunderstand me to mean that no planning goes into trying to accomplish something for the first time; rather, the effectiveness of that planning is often only "obvious" in hindsight.
Regarding the first two expeditions to the South Pole, the two things that struck me were how really difficult it was to accomplish at the time (sort of like, or possibly even more difficult than, the moon landing), and the shared yet vastly different approaches the two expeditions used.