09-19-2025 07:59 AM - edited 09-19-2025 08:00 AM
I have a question I'm hoping somebody can explain in layman's terms for me regarding the inner workings of line rate speeds.
I've attached a very basic drawing of an example camera network. A PoE camera connects via Ethernet to a local switch at 1 Gbps and passes 3 Mbps worth of data > local switch connects to distro switch with a 1 Gbps trunk > distro connects back to core via a 10 Gbps trunk, recording servers connect to core also with 10 Gbps links.
Now, as that 3 Mbps camera passes data back to the recorder, each time it goes to the next switch, I would assume that the speed gets faster - my understanding is that line rate is the maximum speed at which data can pass. So 1 Gbps > 1 Gbps > 10 Gbps > 10 Gbps in this example. But what happens on the way back to a viewing client when the speeds go from 10 Gbps down to 100 Mbps? Assuming the client is only viewing a handful of cameras and not overtaxing the connection, is there going to be buffering as the speed drops at each connection?
In my head, I'm imagining the 10 Gbps links as a major highway, the 1Gbps link as an exit from that and then the 100 Mbps as a smaller road off of that. If that is an appropriate illustration, then what happens at these exits? Is there buffering by dropping to links with slower speeds? Internally, how does the switch handle these transitions? Ultimately, I'm just trying to better understand what's happening in the background of my network.
Any help would be greatly appreciated. Thank you
09-19-2025 12:45 PM - edited 09-20-2025 06:30 AM
In your specific case, assuming none of the interfaces are otherwise congested, the video stream will effectively flow at 3 Mbps, both up and down.
For "up", if the camera is generating a 3 Mbps stream, it will effectively flow at 3 Mbps.
For "down", the server could transmit at 10 Gbps, but if a video client app is involved, the client is likely interacting with the server and having it effectively send the video stream at 3 Mbps, or possibly less (if a lower resolution has been requested).
However, if the video client just wants a particular chunk of the (pre-recorded) video stream, then it would likely be sent at an effective 100 Mbps, the client's link rate.
As a side note, laymen often misunderstand bandwidth. Using your highway metaphor, all the highways have the same speed limit. However, some highways allow vehicles with greater carrying capacity, and vehicles with greater carrying capacity can deliver more cargo in less time.
Your 3 Mbps camera stream, on a gig link, is transmitted on a gig-capacity vehicle, but only at 0.3% of that vehicle's capacity. When it transfers to a 10 Gbps link, it consumes only 0.03% of that link's capacity.
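To put numbers on that metaphor, here is a quick back-of-the-envelope sketch (just illustrative arithmetic, using the figures from this thread):

```python
# Utilization of a 3 Mbps camera stream on progressively faster links.
stream_mbps = 3

for name, link_mbps in [("1 Gbps", 1_000), ("10 Gbps", 10_000)]:
    utilization = stream_mbps * 100 / link_mbps  # percent of link capacity
    print(f"{name} link: {utilization}% utilized")
```

This prints 0.3% for the gig link and 0.03% for the 10 Gbps link: the stream occupies the same "vehicle" either way, only the fraction of its capacity changes.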
Effectively, the live 3 Mbps video stream would take about the same time to record on the server across 10 Mbps, 100 Mbps, 1 Gbps, or 10 Gbps links.
However, pre-recorded video transferred as quickly as possible (not being watched in real time) would take one tenth the time across 100 Mbps vs. 10 Mbps; gig would take 1/100 the time, and 10 Gbps 1/1000 the time.
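For the pre-recorded case, the scaling is just size divided by rate. A minimal sketch (the 1 GB clip size is an assumption, not from the post):

```python
# Idealized transfer time for a 1 GB pre-recorded clip: size / rate,
# ignoring protocol overhead, congestion, and disk speed.
clip_bits = 8 * 10**9  # 1 GB expressed in bits

for name, bps in [("10 Mbps", 10**7), ("100 Mbps", 10**8),
                  ("1 Gbps", 10**9), ("10 Gbps", 10**10)]:
    print(f"{name}: {clip_bits / bps:,.1f} s")
```

That works out to 800 s, 80 s, 8 s, and 0.8 s: each step up in link rate cuts the transfer time tenfold, as described above.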
If you use telnet (literally) from one side of the world to the other, while typing, unless you're a very fast typist, you will see little difference between running across 64 Kbps vs. 10 Gbps, end-to-end. However, if you run a command that displays lots of output, then you should see a huge difference.
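The telnet point is really latency vs. throughput. A rough sketch with assumed numbers (250 ms round trip, a 64-byte keystroke packet, 100 MB of command output):

```python
# Interactive traffic is dominated by round-trip time; bulk output is
# dominated by link rate. All figures here are assumptions for illustration.
rtt_s = 0.25                   # assumed round trip across the world
keystroke_bits = 64 * 8        # one tiny packet carrying a keystroke
output_bits = 100 * 2**20 * 8  # 100 MB of command output

for name, bps in [("64 Kbps", 64_000), ("10 Gbps", 10**10)]:
    echo = rtt_s + keystroke_bits / bps   # time to see your keystroke echoed
    bulk = rtt_s + output_bits / bps      # time to receive the big output
    print(f"{name}: keystroke echo ~{echo:.3f} s, bulk output ~{bulk:,.1f} s")
```

The keystroke echo barely moves (about 0.258 s vs. 0.250 s), while the bulk output drops from hours to about a third of a second.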
09-25-2025 10:52 AM
Short answer: yes, the switch will need to buffer those frames and then serialize them out the slower link. And buffers are finite, so there will always be a chance of output drops.
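A toy model of that step-down (an illustration I'm adding, not how any particular switch is implemented): frames arrive in a burst at the fast ingress rate, the egress drains them much more slowly, and whatever doesn't fit in the buffer becomes an output drop.

```python
def drops_for_burst(burst_frames, buffer_frames, ingress_bps, egress_bps):
    """Count tail drops for a back-to-back burst through a finite buffer."""
    drain_per_arrival = egress_bps / ingress_bps  # frames drained per arrival
    depth = 0.0
    drops = 0
    for _ in range(burst_frames):
        depth = max(0.0, depth - drain_per_arrival)  # egress keeps draining
        if depth + 1 <= buffer_frames:
            depth += 1            # frame fits in the buffer
        else:
            drops += 1            # buffer full: output drop
    return drops

# 10 Gbps in, 100 Mbps out: a long enough burst overruns a small buffer...
print(drops_for_burst(150, 100, 10**10, 10**8))
# ...while a deep enough buffer absorbs the whole burst with zero drops.
print(drops_for_burst(150, 1000, 10**10, 10**8))  # 0
```

The 100x rate mismatch means the egress barely drains during the burst, so buffer depth, not link speed, decides whether anything is lost.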
Check this recent post from Daniel talking about this topic.
09-25-2025 02:54 PM
@Htonieto wrote:
Short answer: yes, the switch will need to buffer those frames and then serialize them out the slower link. And buffers are finite, so there will always be a chance of output drops.
"Always"? No, because the traffic can be deterministic. The first case in your reference is an example, as explained there, where drops shouldn't occur.
@Htonieto wrote:
Check this recent post from Daniel talking about this topic.
I did. He's correct that faster-to-slower transitions can cause drops, but his focus on the interframe gap (IFG) being an issue is bogus (unless you're dealing with NICs not adhering to the IEEE 802.3 standard). He also doesn't describe the uncommon case where multiple ingress interfaces, whose combined aggregate bandwidth is LESS than the egress bandwidth, can also encounter drops, although that might be covered by ". . . or from multiple interfaces in to one interface out, you risk having output drops." However, this uncommon case can be 100% avoided by proper buffer allocation, unlike the more usual cases shown in his second and third examples.
He poses a "thought experiment" on what impact increasing an interface's bandwidth has on its buffers (incidentally, with insufficient information to answer), but an even more basic question would be: what is the optimal buffer allocation for an interface?
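For what it's worth, one classic starting point for that more basic question is the bandwidth-delay product: buffer roughly one round trip's worth of the egress rate. This is a rule of thumb, not a definitive answer, and the figures below are assumptions:

```python
def bdp_bytes(egress_bps, rtt_s):
    """Bandwidth-delay product: bits in flight during one RTT, in bytes."""
    return egress_bps * rtt_s / 8

# Assumed example: 1 Gbps egress carrying flows with ~10 ms round trips.
print(f"{bdp_bytes(10**9, 0.010):,.0f} bytes")  # 1,250,000 bytes
```

Later buffer-sizing research argues for far less than this on heavily multiplexed links, which is part of why "optimal" has no single answer.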
Honestly, for someone with both CCIE and CCDE certifications, I'm disappointed by his post.
Laugh, but if you like "thought experiments": did you know an RS-232 byte has a start and stop bit, and the stop element might be a single bit, two bits, or even one and a half bits? I mention this because if you understand how you can have 1.5 stop bits, an Ethernet frame's IFG might be a little easier to understand.
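To make that concrete: both "1.5 stop bits" and Ethernet's interframe gap are durations measured in bit times, not extra symbols on the wire. A quick calculation (the 96-bit-time IFG is from IEEE 802.3; the 9600 baud rate is just an example):

```python
def bit_time_s(rate_bps):
    """Duration of one bit at a given line rate, in seconds."""
    return 1 / rate_bps

# RS-232 at 9600 baud: "1.5 stop bits" is just 1.5 bit times of idle line.
print(f"1.5 stop bits @ 9600 baud: {1.5 * bit_time_s(9600) * 1e6:.2f} us")

# Ethernet's IFG is 96 bit times, so it shrinks as the line rate rises.
for name, bps in [("100 Mbps", 10**8), ("1 Gbps", 10**9), ("10 Gbps", 10**10)]:
    print(f"IFG @ {name}: {96 * bit_time_s(bps) * 1e9:.1f} ns")
```

At 9600 baud a bit lasts about 104 µs, so half a bit time of extra idle is perfectly sensible; the IFG is the same idea at nanosecond scale (960 ns at 100 Mbps, 96 ns at gig, 9.6 ns at 10 Gbps).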