866 Views, 2 Helpful, 10 Replies

How does a server determine at which network speed it can send traffic?

NetworkingGeek1
Level 1

Hello Community,

I've always wondered how a server determines at which network speed it can send traffic. For example, a user in Australia wants to download a file from a server in Canada, and there are a lot of links with different bandwidths between the user's PC and the server: the link between the server and the switch is 100 Gbps, the link between the switch and a router is 10 Gbps, the link between that router and another router connected to the user is 5 Gbps, and the link between that router and the user is 1 Gbps. So the speed at which the server can send traffic to the user can't be more than 1 Gbps, but how can the server determine that if it's connected to the switch with a 100 Gbps link? In other words, how does a server determine at which network speed it can send traffic to a particular user?

Accepted Solution

Joseph W. Doherty
Hall of Fame

Usually most hosts don't determine available "speed", they just hand off data to the network software as quickly as that software accepts the data.

How the network software determines "speed" varies depending on what network software is being used. A very common and widely used network software, for data transfers, is TCP.

Traditionally, TCP determines end-to-end "speed" by starting slow and increasing its transmission rate (rapidly) until it determines packets are being lost.  It assumes lost packets are due to sending too quickly.  It then slows its transmission rate to half and then increases the transmission rate, slowly.  If lost packets are again detected, it repeats the cycle, i.e. slows to half the current transmission rate, etc.
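As a rough illustration of that saw-tooth behaviour, here is a minimal sketch of the additive-increase/multiplicative-decrease idea; the window sizes, bottleneck capacity and loss model are made-up illustration values, not real TCP code:

```python
# Toy illustration of TCP-style rate probing (not a real TCP implementation).
# The sender grows its congestion window until a "loss" is detected, then
# halves it and grows again slowly - the saw-tooth pattern described above.

def simulate_aimd(bottleneck_pkts=100, rounds=40):
    cwnd = 1.0          # congestion window, in packets per round trip
    ssthresh = 64.0     # slow-start threshold
    history = []
    for _ in range(rounds):
        if cwnd > bottleneck_pkts:      # pretend the bottleneck queue overflowed
            ssthresh = cwnd / 2         # remember half the rate at which loss occurred
            cwnd = ssthresh             # multiplicative decrease
        elif cwnd < ssthresh:
            cwnd *= 2                   # slow start: grow quickly
        else:
            cwnd += 1                   # congestion avoidance: grow slowly
        history.append(round(cwnd, 1))
    return history

print(simulate_aimd())
```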

There's even more to TCP's flow rate control, which you can read about if you search that subject on the Internet.

There are other protocols too, that do flow rate control, all usually using some form of end-to-end monitoring.

Again, all this is usually "hidden" from the network host, such as some kind of "server".


10 Replies

M02@rt37
VIP

Hello @NetworkingGeek1 

When a server sends traffic to a client, it doesn't directly control the network speed between them. Instead, the network speed is determined by the capacity and performance of the network infrastructure between the server and the client.

The speed of the network connection between the server and the client depends on various factors such as the quality of the physical cables, the capacity of the routers and switches along the path, and any congestion or limitations imposed by ISPs or network providers. Data packets travel through multiple network devices, including routers and switches, to reach their destination. The routing decisions made by these devices can affect the speed and efficiency of the data transmission.

The available bandwidth on each link along the path determines the maximum speed at which data can be transmitted.
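To make the "slowest link wins" point concrete, here is a tiny sketch using the hypothetical link speeds from the original question:

```python
# The achievable end-to-end rate can never exceed the slowest link on the path.
# Link speeds below are the hypothetical values from the original question, in Gbps.
path_links_gbps = {
    "server -> switch": 100,
    "switch -> router A": 10,
    "router A -> router B": 5,
    "router B -> user": 1,
}

bottleneck = min(path_links_gbps.values())
slowest_hop = min(path_links_gbps, key=path_links_gbps.get)
print(f"End-to-end ceiling: {bottleneck} Gbps (set by {slowest_hop})")
```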

Network congestion can occur when there is more demand for bandwidth than available capacity, leading to slower speeds and potential packet loss. Congestion can occur at various points in the network, including at the server's ISP, the client's ISP, or any intermediate network nodes.

 

Best regards
.ı|ı.ı|ı. If This Helps, Please Rate .ı|ı.ı|ı.

M02@rt37  Hello.

Thanks, but you didn't answer my question. For example: the link between the server and the switch is 100 Gbps, the link between the switch and a router is 10 Gbps, the link between that router and another router connected to the user is 5 Gbps, and the link between that router and the user is 1 Gbps. So the speed at which the server can send traffic to the user can't be more than 1 Gbps, but how can the server determine that if it's connected to the switch with a 100 Gbps link?

@NetworkingGeek1 

The server doesn't actively determine the speed at which it can send traffic to a particular user. Instead, the speed is determined by the network infrastructure and the limitations imposed by the slowest link in the data path between the server and the user. The server simply sends data at the maximum speed supported by its connection to the network, and the network infrastructure handles the routing and transmission of data based on the capabilities of the devices and links involved.
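To illustrate that point, a minimal sketch of a sender that simply writes into a TCP socket and lets the operating system's TCP stack pace the actual transmission; the host name, port and file path are made up for illustration:

```python
import socket

# A sender typically just writes data into a TCP socket as fast as the OS accepts it.
# How quickly the bytes actually leave the host is decided by the kernel's TCP stack
# (congestion window, receiver window), not by this application code.

def send_file(host: str, port: int, path: str) -> None:
    with socket.create_connection((host, port)) as sock, open(path, "rb") as f:
        while chunk := f.read(64 * 1024):
            sock.sendall(chunk)   # blocks when TCP's send buffer is full - implicit pacing

# Example (made-up endpoint and file):
# send_file("example.net", 9000, "big_file.bin")
```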

 

Best regards
.ı|ı.ı|ı. If This Helps, Please Rate .ı|ı.ı|ı.

M02@rt37  So, for example, the server can send traffic at a maximum speed of 10 Gbps. The server will send it at 10 Gbps, but the user can receive only at 1 Gbps. Does that mean that 9 Gbps of traffic is discarded by the network devices?

Usually not, if using software that supports flow rate control.  If not supported, yes that can happen, or worse, along with adverse impact to other traffic sharing path/link/bandwidth.

@Joseph W. Doherty  Hello. Thank you for the reply. I have a couple of questions: As far as I understand, UDP doesn't support flow control; it just sends the traffic. So, in the case of UDP, traffic will be sent from server to client at full speed, and in that case traffic loss can happen if there is a network speed bottleneck somewhere in the middle between the server and the client, or if the client itself can't handle such an amount of traffic. Is my assumption correct?

"Usually not, if using software that supports flow rate control. If not supported, yes that can happen, or worse, along with adverse impact to other traffic sharing path/link/bandwidth." - was it reply to my answer that about hypothetical traffic loss? As my understanding, if software works on top of TCP, then that situation with such packet loss with 9Gbps which bring as an example can't happen, because TCP's flow control will handle it regardless of software itself. This situation with huge packet loss can happen with software which runs on top of UDP and doesn't have other in-build flow rate control mechanism. Are all my above assumptions correct?

Yes, UDP, itself, doesn't provide transmission rate control, so your hypothetical traffic loss isn't always hypothetical (in fact, I use traffic generators, sending UDP, to intentionally stress links).

That said, applications that use UDP either send at a fixed transmission rate, often well below typical link bandwidths, with the assumption that bandwidth is always available (e.g. VoIP G.711 UDP/RTP about 90 Kbps on Ethernet), or the application using UDP does its own transmission rate management (but how well?).
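As a rough sketch of the first approach, here is an application that paces its own UDP packets at a fixed rate and gets no feedback from the network; the destination address, packet size and rate are made-up illustration values, loosely modelled on a G.711-style stream:

```python
import socket
import time

# Sketch of an application that sends UDP at a fixed, self-imposed rate,
# trusting that the path always has enough bandwidth (as a G.711 VoIP stream does).
# UDP itself provides no feedback, so nothing here reacts to loss or congestion.

def send_fixed_rate(dest=("192.0.2.10", 5004), payload_bytes=160, packets_per_second=50):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    interval = 1.0 / packets_per_second       # 50 pps x 160 B ~= 64 kbps of payload
    payload = b"\x00" * payload_bytes
    for _ in range(packets_per_second * 5):   # send for roughly 5 seconds
        sock.sendto(payload, dest)
        time.sleep(interval)                  # crude pacing; real apps use timers/RTP clocks

# send_fixed_rate()
```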

BTW, every now and then, someone decides to build their own Frankenstein, i.e. a faster data transfer method than TCP, which can often disrupt the network path.

@NetworkingGeek1 

The excess traffic beyond the capacity of a downstream link is typically buffered temporarily in network devices. However, if the buffers fill up or congestion persists, packets may be dropped, and flow control mechanisms may be invoked to regulate the transmission rate. The network devices work together to manage congestion and ensure efficient data delivery while minimizing packet loss.
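As a toy model of that behaviour, the sketch below buffers a burst arriving faster than the outgoing link can drain it and tail-drops whatever no longer fits; the queue depth, drain rate and arrival pattern are arbitrary illustration values:

```python
from collections import deque

# Toy model of an output interface: it can forward 'drain_rate' packets per tick,
# buffers a burst up to 'queue_limit' packets, and tail-drops anything beyond that.

def run_interface(arrivals, drain_rate=2, queue_limit=10):
    queue, dropped, forwarded = deque(), 0, 0
    for burst in arrivals:
        for _ in range(burst):                 # packets arriving this tick
            if len(queue) < queue_limit:
                queue.append(object())         # buffered
            else:
                dropped += 1                   # buffer full -> tail drop
        for _ in range(min(drain_rate, len(queue))):
            queue.popleft()                    # transmitted on the slower downstream link
            forwarded += 1
    return forwarded, dropped, len(queue)

# Bursty input, steady 2-packets-per-tick output:
print(run_interface(arrivals=[10, 0, 0, 10, 0, 0]))
```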

Best regards
.ı|ı.ı|ı. If This Helps, Please Rate .ı|ı.ı|ı.


liviu.gheorghe
Spotlight

In addition to what M02@rt37 said, it really depends on how data is transferred in terms of Transport Layer protocols. For IP there are two major protocols: the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP).

TCP is used when the data that needs to be transported requires a reliable connection between the hosts - it implements mechanisms for acknowledgement and retransmission. UDP is connectionless and doesn't care about reliability - it just sends data.

Another mechanism implemented by TCP is called windowing - if the transmission medium is reliable, it increases the number of TCP segments it sends before waiting for a confirmation of successful transmission from the other side. In this way, under ideal conditions, it can transmit at almost the speed of the physical link between the host and the first network device.
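A quick back-of-the-envelope calculation shows how large that window has to be to keep a long-distance link full; the 1 Gbps access link is from the original question, and the 200 ms round-trip time is an assumed value for an Australia-Canada path:

```python
# How large the TCP window must be to keep a link full: window >= bandwidth x RTT.
link_bps = 1_000_000_000          # 1 Gbps access link from the original example
rtt_s = 0.200                     # assumed round-trip time for a very long path

bdp_bytes = link_bps / 8 * rtt_s  # bandwidth-delay product
print(f"Window needed to fill the link: {bdp_bytes / 1_000_000:.0f} MB in flight")
# ~25 MB - far beyond the classic 64 KB window, hence TCP window scaling (RFC 7323).
```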

In your example, the user, who is the initiator of the communication, is connected to the switch with a 1 Gbps physical link. Under ideal conditions (no congestion, no QoS configured, no packet drops on the network path between the server and the user, no congestion on the server, etc.) the highest download speed would be 1 Gbps minus the protocol overhead. In my opinion you would get a real download speed of at most 800-900 Mbps.
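A rough estimate of where that number comes from, assuming a standard 1500-byte MTU and typical Ethernet/IPv4/TCP header sizes:

```python
# Rough per-packet overhead on a standard 1500-byte-MTU Ethernet link
# (typical IPv4/TCP header sizes assumed; TCP options ignored).
mtu = 1500
tcp_payload = mtu - 20 - 20                  # minus IPv4 header and basic TCP header
wire_bytes = mtu + 14 + 4 + 8 + 12           # + Ethernet header, FCS, preamble, inter-frame gap

efficiency = tcp_payload / wire_bytes
print(f"Theoretical TCP goodput on 1 Gbps: ~{efficiency * 1000:.0f} Mbps")
# ~949 Mbps in theory; TCP options, retransmissions and real-world conditions
# typically push measured download speeds toward the 800-900 Mbps range.
```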

Hope it helps.

Regards, LG
*** Please Rate All Helpful Responses ***