Hello all (bit of a concept question),
I have been asked to provide statistics on how long a TCP session, for certain clients in various regions of our global network, will remain open before the session times out while data is in transit. This all runs on the Microsoft TCP stack: every client and server machine is Windows 2000, with TcpMaxDataRetransmissions set to 5. It is for a very time-sensitive pricing application.
TCP RTO Timer. (RFC2988)
TCP retransmissions are based on RTO calculations.
At TCP start-up, before any RTT measurement has been made, the RTO is set to 3 seconds.
Once measurements exist, the RTO calculation is based on the SRTT and RTTVAR values: "RTO = SRTT + max (G, K*RTTVAR)"
(2.2) When the first RTT measurement R is made, the host MUST set
SRTT <- R
RTTVAR <- R/2
RTO <- SRTT + max (G, K*RTTVAR)
where K = 4 and G is the clock granularity.
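The excerpt above, together with the per-sample update the RFC gives in section 2.3 (RTTVAR and SRTT smoothed with beta = 1/4 and alpha = 1/8), can be sketched as follows. K, alpha and beta are the RFC's constants; the value chosen for G here (100 ms) is purely an assumption for illustration, since G is exactly what my second question is about.

```python
# Sketch of the RFC 2988 RTO computation. G = 0.1 s is an assumed clock
# granularity, not a value taken from any real stack.
K, ALPHA, BETA = 4, 1 / 8.0, 1 / 4.0
G = 0.1  # assumed clock granularity, in seconds

def first_measurement(r):
    """Section 2.2: initialise SRTT/RTTVAR from the first RTT sample R."""
    srtt = r
    rttvar = r / 2.0
    rto = srtt + max(G, K * rttvar)
    return srtt, rttvar, rto

def update(srtt, rttvar, r):
    """Section 2.3: fold a subsequent RTT sample R' into SRTT/RTTVAR."""
    rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - r)
    srtt = (1 - ALPHA) * srtt + ALPHA * r
    rto = srtt + max(G, K * rttvar)
    return srtt, rttvar, rto

srtt, rttvar, rto = first_measurement(0.2)     # first RTT sample: 200 ms
print(round(rto, 3))                           # 0.2 + 4 * 0.1 = 0.6
srtt, rttvar, rto = update(srtt, rttvar, 0.3)  # next RTT sample: 300 ms
```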
Two questions:
1. How often is the RTO calculated within a TCP session? I assume the RTO changes with every TCP ACK back from the receiver to the sender. Could someone please clarify this?
2. What is G set to in most TCP implementations? Surely it has to be 1 or more? Can someone explain this component for me?
Many thanks for your help.
Ken