05-14-2011 11:04 AM - edited 03-04-2019 12:24 PM
I'm trying to understand the relationship between bandwidth and interface queueing. Do full queues mean 100% bandwidth, or does 100% bandwidth mean full queues? Always? I'm looking at a network graph that reports 15 minute averages and the traffic never goes above 100 Mbps on a Gig link, but I see tail drops in the egress FIFO queue. I assume that means somewhere during that 15 minute sample, bandwidth went to 100%, causing the drops, but the event wasn't long enough (bursty traffic) to affect the graphs much. Then I started wondering if you could fill queues without actually hitting 100% bandwidth. I guess I'm not sure what bandwidth REALLY means.
Thanks.
05-16-2011 07:07 PM
Disclaimer
The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk
Liability Disclaimer
In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.
Posting
While an interface queue contains packets, interface utilization should be 100%. Note, I didn't state the interface queue is full.
Interface bandwidth can be at 100% without any queued packets (discounting queuing for serialization).
Interface bandwidth consumption is really a measure of bits transmitted divided by bits that could have been transmitted over some time period. So, for example, if you had a 100 Mbps interface, you can transmit 100,000,000 bits in one second. If during a measured second you only transmitted 10,000,000 bits, you would have 10% bandwidth utilization. If the same 10,000,000 bits were transmitted during a 10-second measurement, you would have 1% utilization.
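The division above can be sketched in a few lines of Python (the function name `utilization` is just for illustration):

```python
# Utilization = bits actually sent / bits the link could have sent in the interval.
LINK_BPS = 100_000_000  # a 100 Mbps interface

def utilization(bits_sent, interval_seconds, link_bps=LINK_BPS):
    """Fraction of available bandwidth used over the measurement interval."""
    return bits_sent / (link_bps * interval_seconds)

print(utilization(10_000_000, 1))   # 0.1  -> 10% in a 1-second sample
print(utilization(10_000_000, 10))  # 0.01 -> 1% in a 10-second sample
```

Same bits sent, but a longer measurement window dilutes the reported percentage.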
As you've discovered, bandwidth utilization graphs can be misleading. 100 Mbps on a gig link, averaged across 15 minutes, means only that about 10% of available bandwidth was utilized over those 15 minutes. However, during some of that 15 minutes, if you have more bandwidth upstream of your gig egress, demand can exceed downstream bandwidth, leading to queuing that can exceed queue limits.
Imagine you had a 10 gig upstream link sending at 100% for 0.15 minutes (9 seconds). It would take a gig link 1.5 minutes to forward this traffic, and if the queue could not hold 0.15 minutes' worth of 10 gig traffic, you'll see queue drop(s), yet the stats would show only about 100 Mbps utilization for the 15 minute period.
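Working through the burst arithmetic from that example (all numbers taken from the scenario above):

```python
# A 9-second (0.15 min) burst at 10 Gbps draining through a 1 Gbps egress,
# averaged over a 15-minute graph sample.
burst_bits = 10_000_000_000 * 9              # 9e10 bits arrive in 9 s
drain_seconds = burst_bits / 1_000_000_000   # time to forward them at 1 Gbps
avg_bps = burst_bits / (15 * 60)             # average over the 15-minute window

print(drain_seconds)  # 90.0 s = 1.5 minutes
print(avg_bps)        # 100000000.0 -> the graph shows ~100 Mbps, not a full link

# Meanwhile the egress would have to buffer the arrival/departure difference:
peak_backlog_bits = (10_000_000_000 - 1_000_000_000) * 9
print(peak_backlog_bits)  # 8.1e10 bits -- far beyond any interface queue, so drops
```

So the 15-minute average stays at roughly 10% utilization even though the egress queue overflowed during the burst.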
05-15-2011 12:58 AM
Hi,
You have to consider the difference between INTERFACE capacity (1 Gbps) and your existing bandwidth. When you say bandwidth you should think of the "practical" throughput between two end points. Your interface is connected to another piece of equipment's interface. What flows between those two interfaces over some time can be categorised as "bandwidth".
About queues: there will always be a hardware queue and possibly many more software queues. The capacity and number of SW queues depend on the QoS queueing technology applied on the equipment, and from that, bandwidth is related to the total capacity of those queues. But bear in mind that a user can configure those queues, so the available BW for a given queue can be just a percentage (30-60%) of the total.
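The tail-drop behavior behind those output drops can be sketched as a fixed-depth FIFO (a minimal sketch; the limit of 40 packets mirrors the classic IOS default output hold-queue, but your platform's value may differ):

```python
from collections import deque

QUEUE_LIMIT = 40  # packets; assumed default output hold-queue depth

queue, drops = deque(), 0
for pkt in range(100):              # 100 packets arrive while the link is busy,
    if len(queue) < QUEUE_LIMIT:    # so nothing dequeues during the burst
        queue.append(pkt)
    else:
        drops += 1                  # tail drop: queue full, packet discarded

print(drops)  # 60 -> counted as "output drops" on the interface
```

The drop counter climbs only while the queue is full, which is exactly the condition where the wire is also running at 100%.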
If possible start a lab, and we can talk on real configs.
05-16-2011 07:43 PM
Thanks, so if I understand correctly, if I see drops on a "sh int" I can be sure that at some point in time the interface reached 100% utilization (bits were being placed on the wire as fast as possible). If it doesn't register on network graphs it just means the nature of the traffic is bursty (relative to the sample rate).
05-16-2011 07:55 PM
Technically, every time a frame/packet is transmitted, the interface/wire is 100% utilized.
As long as there are any packets in the queue, the interface/wire should be 100% utilized.
Dropped packets indicate a queue overflow. While the queue had packet(s), the interface/wire should be at 100% (as noted above).
When there's nothing in the queue, the interface/wire should be 0% utilized.
The interface/wire is either at 100% or 0%, all or nothing; bandwidth utilization is just a time measurement between these two states, although drops are usually not counted toward utilization.
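That all-or-nothing view reduces "utilization" to the fraction of the window the wire spent in the sending state (a sketch; the function name is illustrative):

```python
# The wire is either transmitting (100%) or idle (0%); reported utilization
# is just the share of the sample window spent transmitting.
def utilization_from_busy_time(busy_seconds, window_seconds):
    return busy_seconds / window_seconds

# e.g. the wire actually transmitted for 90 s of a 15-minute (900 s) window:
print(utilization_from_busy_time(90, 900))  # 0.1 -> graphed as 10%
```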