03-28-2016 04:34 PM - edited 03-08-2019 05:08 AM
A bit confused...
Two switches may have the same number of ports and claim to operate at line rate, but their latency numbers may differ. Not sure how that can be.
For example, if 2 switches have 48 x 10Gbps ports and they operate at line speed, that means there is 960Gbps of full-duplex traffic flowing into and out of the switch at any given time. If that is the case, the time it takes for a bit/byte/packet to flow into the switch and back out of it is a constant for all switches. You just do the math.
For a 64-byte packet, it comes out to 25.6 nanoseconds (25.6 x 10 to the -9 seconds). I arrived at that number by considering one flow between two hosts connected to 2 different 10G ports. Host "A" will send 10G of traffic to Host "B" and Host "B" will send the same to Host "A". So, at any given port, you have 20Gbps flowing through it (in and out). Let's focus on one port. That means 2.5GBytes of data is traversing that port every second. Divide 1 sec by 2.5GB and you get the number of seconds for a single byte to traverse that port. Multiply that number by 64 and you get the time it takes for a 64-byte frame. If 2 switches operate at the same speed (10G at line rate), then the latency numbers should be the same. Why aren't they?
Take note that I am not considering HoL blocking or any other situation that would cause resource contention. Imagine I have 24 pairs of hosts, each pair talking only to each other, sending traffic uniformly.
Now, I know that no such 10G switch offers 25 nanoseconds of nominal latency. The Broadcom Trident2 chipset offers about 500 nanoseconds, I believe. So, why the disparity between doing the math in a straightforward manner and the real numbers advertised by the vendor, or even between vendors of different chipsets?
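For reference, the arithmetic in the question can be sketched in a few lines of Python (the numbers are the ones from the post, not vendor figures):

```python
# Sketch of the serialization-time arithmetic from the question.
# A port running full duplex at 10 Gb/s carries 10 Gb/s in each
# direction, i.e. 20 Gb/s total = 2.5 GBytes/s through the port.

port_rate_bps = 10e9             # 10 Gb/s, one direction
duplex_bps = 2 * port_rate_bps   # 20 Gb/s, in + out
bytes_per_sec = duplex_bps / 8   # 2.5e9 bytes/s

frame_bytes = 64
t = frame_bytes / bytes_per_sec  # seconds for one 64-byte frame
print(f"{t * 1e9:.1f} ns")       # prints "25.6 ns"
```

This is the 25.6 ns figure the question arrives at; as the replies below note, it measures serialization time, not the switch's internal forwarding delay.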
03-28-2016 05:02 PM
Part of the latency is the processing time. How long does it take the device to figure out which destination port to send it out (or all ports potentially if it is a broadcast and needs to be replicated).
03-28-2016 05:08 PM
Sorry, but that doesn't tell me much at all.
What I am saying is that, given the math, if a 10Gbps switch port is really operating at line rate, it means that 20Gbps is traversing that port. That equals 2.5GBytes per second. So, a 64-byte frame would have to take 25.6 nanoseconds to traverse the interfaces if those interfaces were indeed operating at line rate. But we know that latency is usually in the 700 ns or more range for a 64-byte frame, so how can that be?
03-28-2016 05:15 PM
Your assumption is wrong.
Let's say traffic arrived at 10Gb/s, line rate. Let's say the switch took 1s to process it, and then spat it out at 10Gb/s.
If you then offered 10Gb/s of input load, then after 1s the switch would be forwarding 10Gb/s at full line rate, with a latency of just over 1s.
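The point that throughput and latency are independent can be illustrated with a small Python sketch (the 1s processing delay is deliberately exaggerated, as in the reply above):

```python
# Illustration: a switch can forward at full line rate while still
# adding a fixed internal delay to every frame. Frames arrive
# back-to-back; each spends `processing_ns` inside the switch.

serialization_ns = 51.2   # 64-byte frame at 10 Gb/s: 64 * 8 / 10e9 s
processing_ns = 1e9       # exaggerated 1 s internal delay

arrival = [i * serialization_ns for i in range(5)]    # ingress times
departure = [t + processing_ns for t in arrival]      # egress times

# The egress inter-frame gap equals the ingress gap, so the switch
# still emits 10 Gb/s, even though each frame is delayed by ~1 s.
gaps = [departure[i + 1] - departure[i] for i in range(4)]
print([round(g, 1) for g in gaps])  # each gap is ~51.2 ns
```

Line rate describes the sustained gap between frames; latency describes how long any one frame spends inside the box. The two numbers do not constrain each other.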
03-28-2016 06:18 PM
Ah, good way to think about it. Still trying to wrap my head around it, but I think I got it. I think I may have been thinking about it more deeply than it deserved. lol...Let me think about it and get back to you, Phil. Thank you.
03-29-2016 07:27 AM
Disclaimer
The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.
Liability Disclaimer
In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.
Posting
As Philip described, line rate just means the switch has the capacity to accept and forward traffic at full line rate. It doesn't address delay within the switch moving from ingress to egress.
Besides the forwarding processing time that Philip mentioned, part of a switch's delay is its store-and-forward delay. (This is a delay not seen on hubs.)
As the packet/frame size increases, so does this latency. As transmission rate bandwidth increases, this latency decreases.
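That relationship (latency grows with frame size, shrinks with link speed) is just the store-and-forward delay formula; a quick Python sketch with illustrative sizes and rates:

```python
# Store-and-forward delay: the switch must receive the entire frame
# before it can begin transmitting it, so the minimum added delay per
# hop is frame_size / link_rate (illustrative numbers, not vendor specs).

def store_and_forward_ns(frame_bytes: int, rate_bps: float) -> float:
    """Time to clock one frame in (or out) at the given link rate, in ns."""
    return frame_bytes * 8 / rate_bps * 1e9

for size in (64, 1500, 9000):           # min, standard MTU, jumbo
    for rate in (1e9, 10e9, 40e9):      # 1G, 10G, 40G
        print(f"{size:>5} B @ {rate / 1e9:>4.0f} Gb/s: "
              f"{store_and_forward_ns(size, rate):>9.1f} ns")
```

For example, a 64-byte frame at 10 Gb/s takes 51.2 ns just to be received in full, and a 1500-byte frame takes 1200 ns, before any lookup or queuing delay is added.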
Some switches support some kind of "cut through" switching to decrease the store-and-forward latency.
Internally, a switch might store-and-forward more than once, which would increase latency. Also, different switches' actual forwarding latencies may vary based on hardware performance. Together, these factors likely account for the different internal forwarding latencies seen between different switch architectures.
As some applications are looking for minimum latency, some switches are designed to minimize internal processing/forwarding latency. The already mentioned cut-through is often such a feature provided by "ultra low latency" switches.