In TCP, the sender increases the congestion window gradually and slightly, which means that as time elapses, the amount of data being sent should grow slowly. BUT, if the sender detects a packet loss, it cuts the congestion window in half, so the amount of data in flight should drop by half. I think this graph describes TCP communication very well. If you believe this graph accurately shows the relationship between time and the amount of data being sent, please let me know. If not :(, please explain the relationship between time and amount of data being sent in TCP communication.
Here is the link: https://imgur.com/a/0sOmA
Just in case, the citation: http://www.potaroo.net/ispcol/2005-06/faster.html
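The behavior described above (grow gradually, halve on loss) is the AIMD — additive increase, multiplicative decrease — pattern, and it produces exactly the sawtooth shape in that kind of graph. Here is a minimal sketch that simulates it; the capacity of 16 segments and the "loss whenever cwnd reaches capacity" rule are simplifying assumptions for illustration, not part of the actual protocol:

```python
# Minimal AIMD (additive increase, multiplicative decrease) sketch of the
# TCP congestion-avoidance sawtooth.
# Assumptions (for illustration only): cwnd grows by 1 segment per RTT,
# and a loss is "detected" whenever cwnd reaches a hypothetical link
# capacity; on loss, cwnd is halved, Reno-style.

def aimd_trace(rounds=40, capacity=16, start=1):
    """Return the congestion window (in segments) for each RTT round."""
    cwnd = start
    trace = []
    for _ in range(rounds):
        trace.append(cwnd)
        if cwnd >= capacity:
            cwnd = max(1, cwnd // 2)  # multiplicative decrease: cut in half
        else:
            cwnd += 1                 # additive increase: +1 segment per RTT
    return trace

if __name__ == "__main__":
    print(aimd_trace())
```

Plotting the returned list against the round index gives the classic sawtooth: a linear ramp up, a sharp drop to half, then another ramp. (Real TCP also has a slow-start phase where the window doubles each RTT until the first loss or threshold, which this sketch leaves out.)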
Believe it or not, I've been kicking around a similar approach for TCP in my mind for some time, but Google published first.
"Classical" TCP flow management leaves much to be desired. Its primary purpose is to avoid network congestion collapse, which it does, but otherwise, its far from optimal. The new BBR looks very, very nice.