09-16-2014 10:06 AM - edited 03-07-2019 08:46 PM
We have a branch office with about 220 ms of latency between it and head office. The branch has about 15 client machines (Windows 7 desktops and laptops), and we have been struggling with application performance across this link. The branch office has a 2 Mbps MPLS connection and head office has a 50 Mbps MPLS connection. Because of the latency and the bandwidth-delay product (BDP), we get horrible throughput and rarely reach 2 Mbps. I don't think increasing the bandwidth to this site would have much effect. One suggestion was to get a second 2 Mbps MPLS connection and route certain traffic over each pipe, the idea being that we can double the throughput by having two separate pipes rather than one larger pipe.
My question is, does anyone have any experience with this sort of setup? Will this improve overall performance to the site?
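For context, here is a quick back-of-the-envelope sketch (using the figures from the post above, and assuming a classic 64 KB receive window for illustration) of why a high-RTT link is hard for a single TCP flow to fill:

```python
# Rough BDP and single-flow TCP throughput estimate for this link.
# Figures from the post: 2 Mbps branch link, ~220 ms RTT.
link_bps = 2_000_000          # branch MPLS link, bits per second
rtt_s = 0.220                 # measured round-trip time, seconds

# Bandwidth-delay product: bytes that must be "in flight" to fill the pipe.
bdp_bytes = link_bps / 8 * rtt_s
print(f"BDP: {bdp_bytes:.0f} bytes (~{bdp_bytes/1024:.0f} KB)")

# If the receiver advertises only a classic 64 KB window (no window
# scaling), the best a single TCP flow can do is window / RTT,
# regardless of how fast the link is.
rwin_bytes = 65_535
max_bps = rwin_bytes * 8 / rtt_s
print(f"Single-flow ceiling with 64 KB RWIN: {max_bps/1e6:.2f} Mbps")
```

With these numbers the BDP is about 55 KB, so a 64 KB window barely covers one link's worth of in-flight data; any loss or a smaller effective window drops throughput well below 2 Mbps.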
09-16-2014 12:12 PM
Hello
Before you go and purchase another link or upgrade your existing line, you need to work out what is utilising the current connection.
Do you have any monitoring in place?
Is QoS applied (especially priority queuing)?
You can apply some IOS monitoring if applicable, such as NBAR protocol discovery, which can help you discover what is actually running over that link.
Regards,
Paul
09-16-2014 03:26 PM
We use NetFlow to monitor traffic on the lines. The problem isn't so much that the lines are saturated; it's that the latency is high enough that we struggle with BDP.
09-16-2014 06:44 PM
So you are saying that you have high latency at all times of the day, including evenings and weekends? If the answer is yes, then you'll need to talk to your provider. Adding a second link is not a solution.
Have you checked the interfaces of your router for any line errors?
09-17-2014 12:28 PM
The sites are roughly 7,000 miles apart, so I expect latency. We have spoken to our service provider about the latency and they said there is nothing we can do to reduce it; it's a product of the distance between the two sites.
The router does not show any significant errors on the line. The 220 ms latency is consistent throughout the day, night, and weekend.
09-17-2014 09:11 PM
This is path latency and cannot be reduced. Throughput depends on the TCP windowing method, and latency also has its effect. If packets are being dropped along the path, throughput will be reduced. Even though link utilisation is not crossing 2 Mbps, check whether there are any output drops on the link or elsewhere in the path. Adding a second link should not help.
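To see why even a little loss matters at 220 ms, the well-known Mathis approximation (throughput ≲ MSS / (RTT · √p)) can be sketched with hypothetical loss rates (the loss figures below are illustrative assumptions, not measurements from this thread):

```python
import math

# Mathis et al. approximation of loss-limited TCP throughput:
#   throughput <= (MSS / RTT) * (1 / sqrt(p))
# where p is the packet loss probability. Loss rates here are
# hypothetical, chosen to show the effect at a 220 ms RTT.
mss_bytes = 1460   # typical Ethernet-path MSS
rtt_s = 0.220      # RTT measured in the thread

for loss in (0.0001, 0.001, 0.01):
    bps = (mss_bytes * 8 / rtt_s) * (1 / math.sqrt(loss))
    print(f"loss {loss:.4%}: ~{bps/1e6:.2f} Mbps ceiling")
```

Even 0.1% loss caps a single flow well under 2 Mbps at this RTT, which is why checking for output drops matters so much here.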
09-18-2014 02:16 AM
Can you run a traceroute from the source to the destination IP and check whether there is a big jump in time (msec) at any hop along the way to the destination?
09-18-2014 08:51 AM
from head office to remote office
HO-Cisco#traceroute 172.16.108.21
Type escape sequence to abort.
Tracing the route to SAC-cisco (172.16.108.21)
VRF info: (vrf in name/id, vrf out name/id)
1 van-sprint (172.17.1.5) 8 msec 0 msec 12 msec
2 10.255.253.174 [AS 812] 4 msec 0 msec 4 msec
3 10.255.253.173 [AS 812] 12 msec 12 msec 12 msec
4 172.17.1.110 [AS 1803] 208 msec * 208 msec
HO-Cisco#ping 172.16.108.21
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.16.108.21, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 204/206/208 ms
HO-Cisco#
09-18-2014 09:57 AM
do
HO-Cisco#ping 172.16.108.21 rep 30 val df
HO-Cisco#ping 172.16.108.21 rep 30 val df siz 1000
HO-Cisco#ping 172.16.108.21 rep 30 val df siz 1300
HO-Cisco#ping 172.16.108.21 rep 30 val df siz 1500
09-18-2014 10:17 AM
HO-Cisco#ping 172.16.108.21 rep 30 val df
Type escape sequence to abort.
Sending 30, 100-byte ICMP Echos to 172.16.108.21, timeout is 2 seconds:
Packet sent with the DF bit set
Reply data will be validated
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Success rate is 100 percent (30/30), round-trip min/avg/max = 204/207/208 ms
HO-Cisco#ping 172.16.108.21 rep 30 val df siz 1000
Type escape sequence to abort.
Sending 30, 1000-byte ICMP Echos to 172.16.108.21, timeout is 2 seconds:
Packet sent with the DF bit set
Reply data will be validated
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Success rate is 100 percent (30/30), round-trip min/avg/max = 212/215/216 ms
HO-Cisco#ping 172.16.108.21 rep 30 val df siz 1300
Type escape sequence to abort.
Sending 30, 1300-byte ICMP Echos to 172.16.108.21, timeout is 2 seconds:
Packet sent with the DF bit set
Reply data will be validated
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Success rate is 100 percent (30/30), round-trip min/avg/max = 216/221/308 ms
HO-Cisco#ping 172.16.108.21 rep 30 val df siz 1500
Type escape sequence to abort.
Sending 30, 1500-byte ICMP Echos to 172.16.108.21, timeout is 2 seconds:
Packet sent with the DF bit set
Reply data will be validated
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Success rate is 100 percent (30/30), round-trip min/avg/max = 216/219/220 ms
HO-Cisco#
09-18-2014 10:48 PM
It looks like a good data link: no packet loss.
09-18-2014 07:51 AM
Yes, I have lots of experience (supporting international WAN links).
No, I don't see a second pipe being better than one twice as large. In fact, it can be detrimental. However, the killer in these situations is the distance-based latency, which isn't "fixed" by adding bandwidth.
For distance based latency, there's no 100% "fix" beyond "just don't do it". If possible, use local resources especially for "interactive" network access. WAAS/WAFS, with their "tricks", can mitigate much of the distance latency impact (basically the most benefit comes from local caching which avoids the WAN's RTT).
For long-distance throughput, you want to avoid any drops (because error recovery is slow at this RTT), and for TCP apps the receiving host's RWIN needs to cover the (available) BDP. This is difficult to optimize manually, although often, on older hosts, you need to increase their RWIN to allow a TCP flow to utilize all available bandwidth. Again, WAAS/WAFS or dynamic traffic shapers can do "tricks" to optimize throughput (basically they often spoof the connection across the WAN link so you don't need to adjust a host's RWIN).
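The RWIN sizing mentioned above can be sketched as simple arithmetic (the target rates below are illustrative; only the 220 ms RTT comes from this thread):

```python
# Receive window needed to sustain a target rate at a given RTT:
#   RWIN >= (target_bps / 8) * RTT
rtt_s = 0.220  # RTT measured in the thread

for target_mbps in (2, 10, 50):
    rwin = target_mbps * 1e6 / 8 * rtt_s   # bytes
    print(f"{target_mbps} Mbps needs >= {rwin/1024:.0f} KB of window")
```

So filling even the 2 Mbps branch link needs roughly 55 KB of window per flow, and anything beyond a classic 64 KB window requires TCP window scaling on both ends.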
QoS can keep bulk transfers from disrupting interactive applications, which can make a noticeable improvement.
Bulk transfers, w/o WAN acceleration, can be faster if concurrent flows (not multiple links) are used. This is because the high distance-based RTT slows how quickly a TCP flow initially ramps up its flow rate and (especially) how quickly it recovers its transmission rate when there's packet loss.
PS:
BTW, different hosts, with different "vintage" TCP stacks and/or file processing logic, can fare quite differently across a high latency network.