
Line Rate - Reading Data at millisecond level

Hi all,

 

Hope this finds everyone well!

 

If possible I would like to have your expertise on the following. Today I got into an argument with someone regarding line rate and the amount of data we can pass over a 100Mbps link, where I was stating that in 100ms we can pass at most 10Mb, and in one second 100Mb. This argument started because I was seeing the load on the equipment (radios, in this case) at 10Mbps, and the same on the switches. But these radios have an option that allows us to sample the load every 0.1 seconds, and at that sampling rate they show spikes of around 30Mbps and sometimes 40Mbps per 0.1s. And we started to disagree, because how can you have a load of 45Mbps (the "per second" is in the name!) when you measure over 0.1 seconds? On a 1Gbps interface yes, but on a 100Mbps one?

He gave me the explanation below, but I don't seem to understand it; I leave my commentaries below in bold:

 

"the switches are passing data to the radios at a very high ‘rate’ for a very short period of time. Theoretically, a 100Mbps ethernet port can transfer data at a ‘rate’ of 100Mbps for any ‘period’ of time. Regardless of whether this is for 1 second or 0.1 second – the ‘rate’ is still the same. Obviously, the ‘amount’ of data transferred in this time period depends not only on the ‘rate’ of transfer but also on the ‘duration’ of transfer.

My point, which can be demonstrated by the load meter of the radios, is that an ethernet port connected at 100Mbps can pass traffic at a rate of 100Mbps… regardless of the duration that is measured over. (agreed) The load meter can measure the throughput of the ethernet port over varying periods of time. If we set this sampling time to 0.1s it reports a much higher throughput rate, for example today we saw 45Mbps (how can we have 45Mbps in a 0.1s when the max is 10Mb in a 100Mb link?). If, for example, each 0.1s interval of time measured 45Mbps of traffic for a total duration of 1s (10 samples)  – there would have been a total ‘amount’ of data transferred of 45Mb but the rate at any 0.1s interval would still be 45Mbps. (10 samples at 45Mbps would give us 450Mbps!) The ‘amount’ of data transferred at each 0.1s interval would be 4.5Mb but the rate is still 45Mbps (don't understand this). We can only add the amount of data transferred in each interval together to give us a total amount of data, we cannot add the rate of data transfer together for each interval"
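A quick back-of-the-envelope check of the arithmetic in the quoted explanation (a minimal Python sketch, using the 45Mbps / 0.1s figures from the post):

```python
# Rate vs. amount, using the 45Mbps / 0.1s figures from the discussion.
rate_mbps = 45.0     # rate reported in one 0.1s sample
window_s = 0.1       # sampling interval

# Amount moved in one window = rate * duration.
amount_mb = rate_mbps * window_s
print(amount_mb)     # 4.5 Mb -- well under the 10 Mb a 100Mbps link can carry in 0.1s

# Ten consecutive windows, each at 45Mbps:
total_mb = 10 * amount_mb
print(total_mb)      # 45 Mb transferred over the full second

# Average rate over that second = total amount / total time.
avg_rate_mbps = total_mb / 1.0
print(avg_rate_mbps) # 45.0 -- the rates average out; they do not add up to 450Mbps
```

So a reported 45Mbps over a 0.1s window only means 4.5Mb actually moved in that window, which a 100Mbps link can easily do; the "Mbps" is always normalized back to per-second units regardless of how short the sample is.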

 

Could you please lend your expertise to explain the above?

 

Thank you


7 Replies

Hi

This is not a very productive discussion, but I'll tell you what I think. The interface is able to handle 100Mb of data per second, and that holds at any time, but it doesn't mean the data flow will be the same during the whole second. Maybe that's why you may see a difference when sampling over a smaller interval. The fact is that the speed will never be greater than 100Mbps.

 

Thank you for your reply Flávio.

Like with everything, this is about money. Basically this discussion is due to the radios having a QoS policy that limits the throughput to, let's say, 50Mbps. When watching the data per second we see 25Mbps, and if it goes higher than that it starts dropping, but if checked at 0.1s it shows 5xMbps. I say that the QoS policy in these radios is 25Mbps up and 25Mbps down, which gives 50Mbps. But the person I was discussing this with says it's adaptive, yet I only see 25Mbps going up before it starts dropping, unless the checks are done at 0.1s, and at 0.1s I can't understand the 50-plus Mbps...

And this discussion started with this... 

 

Leo Laohoo
Hall of Fame

About ten years ago, we had this sort of "discussion".  And it started like this:  Someone got a puny 1800 router and the logic was "the WAN port is FastEthernet, ergo, the router can support 100 Mbps".  

The radio might have a Fast-, Gigabit-, Ten-, TwentyFive-Gig, etc. interface, but that does not mean it has the CPU power to process traffic at that rate. This argument is very much the same with 802.11ac/ax wireless and my smartphone.

There are several reasons why radios have FastEthernet ports (instead of Ethernet) and one of them is supply: Ethernet (10 Mbps) ports are no longer being manufactured.

The best way to settle this impasse is to get a bandwidth monitoring software and monitor the traffic going in and out of the radio or switch port.  

 

Joseph W. Doherty
Hall of Fame

Hmm, I would say Occam's razor response might be the radio bandwidth load measuring is likely inaccurate at high sample rates.

However, for a more complicated possible reason you're seeing what you're seeing, since wireless can be a strange beast: suppose the wireless link can go much higher than your 100 Mbps, but its actual bps rate is also highly variable, i.e. one moment it can only transmit, effectively, at 50 Mbps, but the next moment it can do 200 Mbps. However, it can, over time, sustain/guarantee an effective/average transfer rate of 100 Mbps. In this latter case, the 100 ms measurements might be correct, but over time they too should average out to whatever the actual "average" transfer rate is.

BTW, a similar issue arises when trying to explain why a gig link with a 100 Mbps shaper behaves differently from a FastEthernet interface running at 100 Mbps, especially over small (like ms) time intervals.
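The variable-rate scenario above can be sketched numerically (all per-window figures invented for illustration): individual 0.1s samples can exceed the sustained average without the average ever doing so.

```python
# Hypothetical per-0.1s throughput samples (Mbps) on a bursty link.
samples_mbps = [200, 50, 0, 150, 100, 0, 200, 100, 150, 50]
window_s = 0.1

# Data moved in each window (Mb) = sampled rate * window duration.
amounts_mb = [r * window_s for r in samples_mbps]

total_mb = sum(amounts_mb)                        # 100 Mb over the whole second
avg_mbps = total_mb / (len(samples_mbps) * window_s)
print(max(samples_mbps), avg_mbps)                # peaks hit 200, the average is 100.0
```

Amounts add; rates average. A load meter sampling at 0.1s reports the peaks, while the 1s view reports only the average.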

Thank you all for the replies, it's extremely appreciated.

So this is going to be a difficult one to deal with. For sure wireless is a different beast, but this is a weird one, and to put this to rest I'm going to mirror a port and watch the bandwidth going into and coming out of the radios.

Just to give a little bit of background, this started with the following. We have an installation where the switches are connected to radios, but the manufacturer of these radios applies a QoS policy to the radio and sells the licenses. If we want 20Mbps we have to pay, if we want 50Mbps we have to pay more, and then the next license "opens" the radio for whatever it can achieve.

Now I had one installation where the radios had 20Mbps and another where the radios had 50. And I was seeing the 20Mbps radios stop passing traffic at around 10Mbps, and the 50Mbps ones stop passing traffic at around 25Mbps. I took this to the wireless guy and the whole discussion started. He says it's microbursting and said the above, but I said it's too much of a coincidence that in both radios the maximum bandwidth I see them pass is always half of the license... Then all this discussion started, because I think we are being ripped off, but he comes back with that explanation that at 0.1s we get roughly 25Mbps and it drops, while at 1 second we see roughly 10Mbps...

It will only be with the correct tools that we'll get to the bottom of this, I guess.

Ah, vendor QoS that's enforcing a bandwidth limit, eh?

In my original post, I had a BTW about how shapers might not behave like a real interface of similar bandwidth. Microbursting, if happening, doesn't do well with commonly set default shaping (or policing) parameters, resulting in less throughput than expected. Actually, this is also true of microbursting interactions with "real" interfaces. It boils down to how well a real interface buffers bursts, versus how a shaper (or policer) processes bursts (according to its parameters). Often the default buffers on "real" interfaces handle bursting better than many (default) shaping/policing parameters.
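To illustrate the policing point, here is a toy single-rate token-bucket policer (illustrative numbers only, not any particular vendor's implementation): two streams carry roughly the same 1-second average, but the bursty one loses traffic to a small burst allowance.

```python
# Toy token-bucket policer (illustrative, not a vendor implementation).
# Traffic arriving faster than tokens refill is dropped, so microbursts
# lose throughput even when the 1s average is under the contracted rate.
def police(packets, rate_bps, burst_bytes):
    """packets: list of (arrival_time_s, size_bytes); returns bytes passed."""
    tokens = float(burst_bytes)
    last_t = 0.0
    passed = 0
    for t, size in packets:
        # Refill tokens for the time elapsed, capped at the burst size.
        tokens = min(burst_bytes, tokens + (t - last_t) * rate_bps / 8)
        last_t = t
        if size <= tokens:
            tokens -= size
            passed += size     # conforming packet: forward it
        # else: non-conforming packet is dropped
    return passed

RATE = 25_000_000   # 25Mbps contracted rate
BURST = 32_000      # small burst allowance (bytes), as a default might be

# Both streams offer ~20Mb of 1500-byte packets within one second.
smooth = [(i / 1666, 1500) for i in range(1666)]        # evenly paced
bursty = [(b * 0.1 + i * 12e-6, 1500)                   # 10 bursts, each
          for b in range(10) for i in range(166)]       # arriving at ~1Gbps

print(police(smooth, RATE, BURST))   # 2499000 -- every byte conforms
print(police(bursty, RATE, BURST))   # far fewer -- bursts overflow the bucket
```

Both streams average well under the 25Mbps contract, yet the bursty one is heavily dropped: that is the microbursting-vs-policer interaction in miniature, and why a larger burst allowance (or real interface buffering) changes the outcome.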

One simple test you might try is to obtain a traffic generator that can send at a specified data rate (using UDP, not TCP). Set the transmission rate below, then at, and then above the "contracted" rate, and see what bandwidth is delivered on the "far" side. As such traffic generators don't "burst", you should be able to (easily) determine whether your contracted rate can be obtained.
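As a rough stand-in for such a traffic generator, a minimal paced UDP sender can be sketched in Python (the destination address and rates below are placeholders; a real tool such as iperf in UDP mode is the better choice):

```python
import socket
import time

def send_paced(dest, rate_mbps, duration_s, payload_bytes=1400):
    """Send UDP datagrams to dest at roughly rate_mbps for duration_s.
    Returns the total bytes handed to the socket."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Seconds between datagrams to hit the target rate.
    interval = payload_bytes * 8 / (rate_mbps * 1_000_000)
    data = b"\x00" * payload_bytes
    sent = 0
    start = time.monotonic()
    next_tx = start
    while time.monotonic() - start < duration_s:
        now = time.monotonic()
        if now >= next_tx:
            sock.sendto(data, dest)
            sent += payload_bytes
            next_tx += interval   # pace the next transmission
        else:
            time.sleep(next_tx - now)
    sock.close()
    return sent

# Example: offer 20Mbps of UDP toward a listener on the far side, then
# compare what actually arrives ("192.0.2.10" is a placeholder address).
# send_paced(("192.0.2.10", 5001), rate_mbps=20, duration_s=5)
```

Because the sender paces every datagram, it does not microburst; if the delivered rate on the far side matches the offered rate up to the contract, the enforcement itself is honoring the contract and the losses come from bursting.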

If it can, then either your wireless bandwidth provider's enforcement doesn't allow much for your microbursting, and/or your applications are bursting "too much" (e.g. ingress at 100 Mbps or gig going to a wireless egress of less than 100 Mbps).

If the wireless vendor won't or cannot tune their bandwidth enforcement to better handle your bursting (and, to be fair to them, if a UDP bandwidth test shows you can obtain the contracted bandwidth, they can argue they are providing what's contracted), then you need to "handle" your bursting before it's handed off to your wireless vendor.

Indeed, enforcement by the manufacturer. And the thing is, when we check the QoS of the radio we see two channels, one for upload and another for download, and they split whatever license the radio has: if it's 20Mbps, it's 10Mbps for each channel. But the radio guy states that internally the QoS manages this so as to remove bandwidth from one side and give it to the other if required. We don't see it happening, though, and I'm a little dubious of this.

Without a doubt the best approach, as Joseph proposed, will be to generate the data myself with something like iperf and see what we get.

 

Thank you very much
