11-18-2008 11:35 AM - edited 03-04-2019 12:23 AM
Hi All,
We have 2 MPLS PE nodes (Cisco 7600). Server 1 is connected to PE1 and Server 2 is connected to PE2. We have a latency of around 20 ms between these two PE nodes. When we tried to transfer data from Server 1 to Server 2 via FTP, we could only achieve somewhere around 20 Mbps as peak utilization for that FTP session. We then understood that the BDP (bandwidth-delay product) comes into play and that the throughput depends on the latency. So, in order to increase the throughput, we tried increasing the window buffer sizes on Server 1 and Server 2 to more than 64 Kbytes, but we couldn't see much difference in the throughput.
Meanwhile, we are trying to understand the ip tcp window-size command on the PE router and whether it has any relation to the above scenario. Also, what does this command actually do on the router, and if it is not set, what is its default value? Any help would be really appreciated.
Also, when we increased the buffer sizes from 32k to 64k, there was a significant increase in throughput, but not when we increased them above 64k.
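For reference, here is the rough throughput-vs-window math we were working from (just a back-of-the-envelope sketch; the 20 ms figure is our measured RTT and the window sizes are the ones we tried):

# Rough TCP throughput ceiling: window size / round-trip time
rtt_s = 0.020                               # 20 ms round-trip time
for window_kbytes in (32, 64, 128):
    window_bits = window_kbytes * 1024 * 8
    mbps = window_bits / rtt_s / 1e6
    print(f"{window_kbytes} KB window -> about {mbps:.1f} Mbps max")
# 32 KB -> ~13 Mbps, 64 KB -> ~26 Mbps, 128 KB -> ~52 Mbps

The ~20 Mbps we measured corresponds to an effective window of roughly 50 KB at 20 ms, which is why we expected going above 64 KB to make a visible difference.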
Thanks
Regards
Anantha Subramanian Natarajan
11-19-2008 05:41 AM
10 gig WAN - don't see many of those!
1 gig to 10 gig won't be a problem for a single host. However, I didn't mention the issue of multiple hosts.
Assuming the hosts have gig links, but there's 10 gig or more bandwidth leading to your WAN router, we could see congestion at the aggregation bottleneck. You'll need to consider that multiple hosts could transmit at the same instant, so allow for sufficient queue space at the aggregation bottleneck. Worst case, the queue would need to support the path's BDP.
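As a rough sketch of what "support the path's BDP" means in numbers (assuming a 1 Gbps host link, the 20 ms RTT from your earlier post, and 1500-byte packets; plug in your actual figures):

# Worst-case queue depth at the aggregation bottleneck: about one path BDP per flow
link_bps = 1_000_000_000        # 1 Gbps host link
rtt_s = 0.020                   # 20 ms round-trip time
packet_bytes = 1500             # MTU-sized packet
bdp_bytes = link_bps * rtt_s / 8
print(f"Path BDP: {bdp_bytes/1e6:.1f} MB, about {bdp_bytes/packet_bytes:.0f} packets")
# -> roughly 2.5 MB, or ~1667 packets, in the worst case for one flow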
What you have to keep in mind is that standard TCP at the host doesn't "meter" its transmission rate the way a shaper would; it sends all the packets in its send window at full bandwidth (NB: the send window won't exceed the receiver's receive window). When standard TCP adjusts its send rate, it is really adjusting the size of its send window. This is why data traffic is "bursty", which is one reason we need to queue.
If TCP gets up to full rate and the window matches the BDP, it will "self-clock", sending new packets as returning ACKs release them. At that point you shouldn't see any queuing on the router (assuming a single flow) beyond a packet or two (assuming one ACK per two sent packets).
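To put numbers on that burstiness (a sketch, assuming a 1 Gbps sender, a 64 KB window and the 20 ms RTT; below the BDP the sender blasts the whole window at line rate and then sits idle waiting for ACKs):

# How bursty a window-limited TCP sender looks on the wire
link_bps = 1_000_000_000                        # 1 Gbps sender link
window_bytes = 64 * 1024                        # 64 KB send window
rtt_s = 0.020                                   # 20 ms round-trip time
burst_s = window_bytes * 8 / link_bps           # time to send the whole window
idle_s = max(rtt_s - burst_s, 0)                # time spent waiting for ACKs
avg_mbps = window_bytes * 8 / rtt_s / 1e6       # average rate over the RTT
print(f"burst {burst_s*1000:.2f} ms, idle {idle_s*1000:.2f} ms, average {avg_mbps:.1f} Mbps")
# -> ~0.5 ms at line rate, then ~19.5 ms idle: line-rate bursts, modest average

Once the window grows to the BDP, the bursts and returning ACKs interleave (the self-clocking above) and the queue stays close to empty.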
11-19-2008 07:17 AM
Hi Joseph,
Thank you very much. Do you know the command to determine the queue size available on the router?
Thanks
Regards
Anantha Subramanian Natarajan
11-19-2008 07:46 AM
How you see the queue allocation varies. On interfaces running simple FIFO, show interface should show it.
e.g.
FastEthernet0/1 is up, line protocol is up
.
.
Queueing strategy: fifo
Output queue: 0/40 (size/max)
But for other queuing, you might need to use other commands.
e.g.
ATM0/IMA0 is up, line protocol is up
.
.
Queueing strategy: Per VC Queueing
For the ATM example, there's an attached service policy, and various queues attached to its classes.
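If I recall correctly, for an interface with a service policy attached, something like the following should show each class's queue depth, queue limit and drops (exact command and output vary by platform and IOS version, so verify on yours):
e.g.
show policy-map interface ATM0/IMA0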
11-19-2008 08:02 AM
Hi Joseph,
Thank you very much
Regards
Anantha Subramanian Natarajan