06-29-2017 12:40 AM - edited 03-08-2019 11:08 AM
Hi All
What is the finest granularity at which we can monitor bandwidth through an interface? Is it 5 seconds?
That obviously isn't fine enough to capture microbursts, etc.
How do people get this detailed info when they need it?
06-29-2017 05:32 AM
Why do you believe the interface measurement limit is 5 seconds?
What do you consider a microburst? Assuming you had no limitations, what would be the criteria you would use to call some traffic a microburst?
"How do people get this detailed info when they need it?"
You generally don't, but there are stats that imply microbursts, and there are some ways to get additional stats to confirm their existence. However, I'll defer discussing either until we confirm we're on the same page concerning what a microburst is.
06-29-2017 07:00 AM
Sorry, to correct myself: I believe it's 10 seconds by default.
That's too long, because the traffic gets averaged out and the graphs produced look smooth; the longer the polling interval, the smoother they appear.
Can you get down to 1-second intervals?
Is there a hidden command that allows the switches to recompute the SNMP stats every 1 second?
>snmp-server hc poll 100
Is this correct?
What are these stats that imply microbursts?
I am guessing you look at interface output drops, which indicate the buffers filling up, etc.?
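For what it's worth, I imagine 1-second polling would normally be done from the management host rather than on the switch itself (whether the switch refreshes its counters that fast is platform-dependent). A rough sketch of what that might look like, assuming the net-snmp snmpget CLI is installed; the host, community, and ifIndex below are placeholders:

# Minimal sketch: poll IF-MIB::ifHCOutOctets once per second and print Mbps.
# Assumes the net-snmp "snmpget" CLI is installed on the polling host.
# HOST, COMMUNITY and IFINDEX are placeholders for your own device/interface.
import subprocess
import time

HOST = "192.0.2.1"      # placeholder management address
COMMUNITY = "public"    # placeholder read-only community
IFINDEX = 10            # placeholder ifIndex of the port to watch
OID = "IF-MIB::ifHCOutOctets.%d" % IFINDEX

def get_octets():
    out = subprocess.check_output(
        ["snmpget", "-v2c", "-c", COMMUNITY, "-Oqv", HOST, OID], text=True)
    return int(out.strip())

prev = get_octets()
while True:
    time.sleep(1)
    cur = get_octets()
    mbps = (cur - prev) * 8 / 1_000_000   # delta octets over ~1 s -> Mbps
    print(time.strftime("%H:%M:%S"), "%8.2f Mbps" % mbps)
    prev = cur

Even at 1-second granularity this is still an average, so I'd guess a burst lasting tens of milliseconds could still hide inside an unremarkable-looking sample.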
06-29-2017 07:36 AM
I still don't know what you understand a microburst to be. Without knowing that, it's difficult to discuss how we would recognize one.
06-29-2017 08:09 AM
To my knowledge, a microburst is a sub-second burst of network traffic that we don't see in monitoring when polling at 1-second intervals, for example, because the second's worth of info we get is, at best, averaged over the whole second.
....to my knowledge
06-29-2017 10:17 AM
Ok, that's pretty much on the mark for a microburst, but "seeing" it, or measuring it, via bandwidth usage alone can be problematic.
Envision a gig ingress port and an FE egress port. The former can accept data 10x faster than the egress can transmit it. However, assuming the gig port isn't accepting more than an "average" of 100 Mbps, the egress port should be able to transmit it, but even if we have an instantaneous bandwidth measurement of the egress port, we could not detect microbursting.
For example, assume the gig ingress port is accepting packets, with delays between them, such that as each packet is forwarded to the FE egress port, the prior packet has just completed its transmission. The FE port would show 100% utilization over any time interval, yet there's no bursting. Or, consider the gig port accepting packets for 100 ms, then nothing for 900 ms. The FE port, assuming there's egress queuing, would initially buffer most of the "burst" and transmit at 100% utilization, again over any time interval. At the end of a second, the egress queue would be empty.
In both of the prior cases, bandwidth utilization on the egress port would not "show" the burst, although sub-second bandwidth measurement of the ingress port could see the "bursts".
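To put rough numbers on that second case, here's a small sketch that steps the 100 ms burst through 1 ms slices (rates are illustrative only, not taken from any particular platform). The one-second average on the FE egress comes out to 100 Mbps, i.e. 100% utilization, while the egress queue briefly peaks around 11 MB, which is exactly what the average hides:

# Sketch of the example above: a gig ingress sends for 100 ms, then nothing
# for 900 ms, while an FE egress drains at 100 Mbps. Stepped in 1 ms slices.
GIG_BITS_PER_MS = 1_000_000_000 // 1000   # ingress rate while bursting
FE_BITS_PER_MS = 100_000_000 // 1000      # egress drain rate

queue_bits = 0
sent_bits = 0
peak_queue_bits = 0

for ms in range(1000):                    # one second of simulation
    arriving = GIG_BITS_PER_MS if ms < 100 else 0   # the 100 ms "burst"
    queue_bits += arriving
    drained = min(queue_bits, FE_BITS_PER_MS)
    queue_bits -= drained
    sent_bits += drained
    peak_queue_bits = max(peak_queue_bits, queue_bits)

print("1-second average egress rate: %.0f Mbps" % (sent_bits / 1_000_000))
print("peak egress queue depth: %.2f MB" % (peak_queue_bits / 8 / 1_000_000))

The one-second average just says the FE port is "full"; it's the queue depth, or an ingress measurement at sub-second granularity, that actually exposes the burst.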
What you can use, if the device supports it, is an egress policer that forwards all packets but whose stats can be used to "see" bursts to the egress port. Policers generally support millisecond timers, so we can obtain some real visibility into microbursts.
Again, since port bandwidth alone cannot be used to detect a burst, monitoring the current egress queue size over time is another way to see bursts. Unfortunately, as with port bandwidth measurements, you're probably not going to be able to obtain a short enough time interval for in-depth microburst analysis, but somewhat like NetFlow sampling, longer-term queue-depth measurements will give some indication of bursty traffic. Further, if you're using WRED, it too gives a view into bursts, based on its parameter settings and its dropped-packet stats.
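As a rough illustration of using those kinds of stats, a sketch along these lines could poll a burst-indicating object (a policer's exceeded-packet counter, or a current queue-depth gauge where the platform exposes one) at short intervals and flag any movement between polls. The OID below is just a placeholder, since the exact object depends on the platform and the QoS config, and this again assumes the net-snmp snmpget CLI as in the earlier sketch:

# Rough sketch: watch a burst-indicating SNMP object and flag any increase
# between polls. BURST_OID is a placeholder -- substitute the policer exceed
# counter or queue-depth gauge your platform/QoS configuration exposes.
import subprocess
import time

HOST = "192.0.2.1"        # placeholder management address
COMMUNITY = "public"      # placeholder read-only community
BURST_OID = "1.3.6.1.4.1.9.9.x.y.z"   # placeholder OID, platform-dependent

def get_value():
    out = subprocess.check_output(
        ["snmpget", "-v2c", "-c", COMMUNITY, "-Oqv", HOST, BURST_OID], text=True)
    return int(out.strip())

prev = get_value()
while True:
    time.sleep(0.5)       # poll as fast as the device comfortably allows
    cur = get_value()
    if cur > prev:
        print(time.strftime("%H:%M:%S"), "burst evidence: counter rose by", cur - prev)
    prev = cur

Any counter movement between closely spaced polls is indirect evidence of bursting, even though the polls themselves still can't resolve the individual millisecond-scale bursts.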