09-12-2017 08:36 PM - edited 03-08-2019 12:00 PM
In TCP communication, the sender increases the congestion window gradually and slightly, which means that as time elapses, the amount of data being sent should increase gradually as well. BUT, if the sender detects packet loss, it cuts the congestion window in half, so the amount of data being sent should also be cut in half. I think this graph describes TCP communication very well. If you agree that this graph describes the relationship between time and the amount of data being sent, please let me know. If not :(, please explain the relationship between time and the amount of data being sent in TCP communication.
Here is the link: https://imgur.com/a/0sOmA
In case it's needed, here is the citation: http://www.potaroo.net/ispcol/2005-06/faster.html
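To make the sawtooth shape in that graph concrete, here is a minimal sketch (not from the original post) of the additive-increase/multiplicative-decrease behavior described above: the window grows by roughly one segment per round trip and is halved when a loss is detected. It deliberately ignores slow start, timeouts, and fast recovery, and the loss events are random rather than driven by a modeled queue, so it only illustrates the shape of the curve, not real TCP dynamics. The function name and parameters are just illustrative.

```python
import random


def aimd_sawtooth(rounds=100, loss_probability=0.05, seed=1):
    """Simplified AIMD model: +1 MSS per RTT, halve the window on loss.

    Returns the congestion window (in MSS units) at each round trip,
    which traces out the classic TCP sawtooth when plotted over time.
    """
    random.seed(seed)
    cwnd = 1.0
    history = []
    for _ in range(rounds):
        history.append(cwnd)
        if random.random() < loss_probability:
            cwnd = max(cwnd / 2.0, 1.0)   # multiplicative decrease on loss
        else:
            cwnd += 1.0                    # additive increase per RTT
    return history


if __name__ == "__main__":
    for rtt, window in enumerate(aimd_sawtooth()):
        print(f"RTT {rtt:3d}: cwnd = {window:5.1f} MSS")
```

Plotting `history` against the round-trip index should give the same "ramp up, drop by half, ramp up again" pattern as the linked graph.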
09-14-2017 03:03 AM
Believe it or not, I've been kicking around a similar approach for TCP in my mind for some time, but Google published first.
"Classical" TCP flow management leaves much to be desired. Its primary purpose is to avoid network congestion collapse, which it does, but otherwise it's far from optimal. The new BBR looks very, very nice.