04-19-2007 04:29 AM - edited 03-03-2019 04:37 PM
I am trying to wrap my head around exactly what the bandwidth delay product says about TCP performance.
Bandwidth-delay product is defined as the capacity of a pipe = bandwidth (bits/sec) * RTT (sec), where that capacity matters to TCP as a by-product of how the protocol itself operates.
Assume we have a 15Mb pipe with 100 ms of latency. The BW delay product is 15,000,000 * .1 = 1,500,000 b/s.
So this says that theoretically, under optimal conditions, on a 15Mb pipe, a single TCP session cannot consume more than 1.5 Mbps (note theoretical). HOWEVER...
I have also seen the bandwidth-delay product compared to the TCP receive window size. In TCP/IP Illustrated, Stevens seems to suggest that if the bandwidth-delay product is larger than the TCP receive window, then the TCP receive window is the limiting factor.
So if we convert our previous example from bits to bytes, we get 187,500 bytes. The default TCP receive window is 65,535 bytes. So our bandwidth-delay product is 2.86 times greater than the TCP receive window size.
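As a quick sanity check, the numbers above can be reproduced with a short calculation (a sketch; the 15 Mb/s and 100 ms figures are just this thread's example values):

```python
# Bandwidth-delay product for the example in this thread.
bandwidth_bps = 15_000_000   # 15 Mb/s link
rtt_s = 0.100                # 100 ms round-trip time

bdp_bits = bandwidth_bps * rtt_s    # bits "in flight" needed to fill the pipe
bdp_bytes = bdp_bits / 8

default_window = 65_535             # classic default TCP receive window, in bytes

print(bdp_bits)                     # -> 1500000.0 (bits)
print(bdp_bytes)                    # -> 187500.0 (bytes)
print(bdp_bytes / default_window)   # -> ~2.86
```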
Now to my question:
In this scenario, with my current understanding, even if I increase bandwidth, and even if I stretch the pipe out by adding latency, a single session's throughput can never exceed what the TCP receive window permits. Ergo, the TCP receive window size is the limiting factor.
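Put concretely, when the receive window is the bottleneck, the throughput ceiling for one session is window size / RTT, independent of link bandwidth. A sketch using this thread's example values:

```python
# Maximum single-session TCP throughput when the receive window is the
# bottleneck: throughput <= window_bytes * 8 / RTT, regardless of pipe size.
window_bytes = 65_535   # default receive window from the example
rtt_s = 0.100           # 100 ms path

max_throughput_bps = window_bytes * 8 / rtt_s
print(max_throughput_bps / 1e6)   # ~5.24 Mb/s, well below the 15 Mb/s pipe
```

Note that adding latency lowers this ceiling further (doubling RTT to 200 ms halves it to roughly 2.62 Mb/s), which is why the window, not the bandwidth, governs the result.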
Is this a correct statement? If not, can you please describe the relationship between the TCP receive window size and the bandwidth-delay product?
Thank you!
04-19-2007 07:21 AM
Jeremy,
Here's a good definition of delay-bandwidth product:
The delay-bandwidth product of a transmission path defines the amount of data TCP should have within the transmission path at any one time, in order to fully utilize the available channel capacity.
The TCP window size must be large enough to allow the sender to keep the pipe full while waiting for ACKs from the receiver. In your example the window size needs to be larger than 1.5 Mbits, or about 187.5 KBytes, in order for a single TCP session to utilize the full bandwidth.
Please rate helpful posts.
Dave
04-19-2007 07:58 AM
This seems to confirm what I was saying. So in my example, the answer to increasing throughput is increasing the TCP receive window (buffer) on the end stations.
04-19-2007 08:10 AM
Yes, you are correct. This is sometimes called the long, fat pipe problem. The solution is bigger TCP window sizes.
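Since the window field in the TCP header is only 16 bits (max 65,535), windows bigger than that require the window scale option from RFC 1323. A sketch of how the required scale shift could be worked out (the helper is hypothetical, for illustration only):

```python
import math

def window_scale_shift(desired_window_bytes: int) -> int:
    """Window scale shift count needed to advertise a window this large.
    Per RFC 1323, the effective window is the 16-bit field << shift."""
    if desired_window_bytes <= 65_535:
        return 0
    return math.ceil(math.log2(desired_window_bytes / 65_535))

# This thread's example needs ~187,500 bytes in flight:
print(window_scale_shift(187_500))   # -> 2 (allows windows up to 65535 << 2 = 262140 bytes)
```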
Dave
04-19-2007 08:11 AM
OK, this is good. It means my understanding is correct!
Thank you for your input.
07-02-2015 05:49 PM
In the original posting, both of these show the wrong units:
1,500,000 b/s.
a single TCP session cannot consume more then 1.5 Mbps
The units of the bandwidth x delay product are bits (or bytes), not bits/sec:
(bits/sec) x sec = bits
...the seconds cancel each other out.