
Cisco ASA site to site VPN slow

slee
Level 1

Hello, our organization uses two Cisco ASA 5520s as site-to-site VPN endpoints.  We recently upgraded the link between the sites so that both sides have 50 up / 50 down fiber connections.  However, whenever I run iperf tests over the VPN tunnel, throughput tops out at about 5 Mbps.  We also copy our Veeam backups over the link, and each job runs at about 5 Mbps, whether one job is being copied or 2-3.  I've looked, but I don't see any class maps or policy maps on the ASAs.  Is there anything else that could be causing this?  Should I post the sh run, or the sh tech?
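
For reference, the iperf tests were run roughly like this (the address below is just a placeholder for the far-side server; a single TCP stream with default settings unless -P is given):

  iperf -s                           (on the remote server)
  iperf -c 192.0.2.10 -t 30          (single stream from the local server)
  iperf -c 192.0.2.10 -t 30 -P 4     (four parallel streams)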

6 Replies

Marvin Rhoads
Hall of Fame

The most common cause is an MTU mismatch causing unnecessary fragmentation across the VPN. The easiest way to check is to send some pings with increasing packet sizes and the DF (Don't Fragment) bit set and see where it breaks. Typically you need to set the end systems down to somewhere around 1470-1492 bytes MTU to avoid making the ASA fragment the frames and incur the overhead that slows down your data transfers. It's not only the fragmentation that slows things down but also the resulting potential for TCP resets and a Maximum Segment Size (MSS) that doesn't optimize well.
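
For example, from a Windows host on one side you can ping a host on the far side of the tunnel with the DF bit set and step the payload size down until it gets through (the address is just a placeholder):

  ping -f -l 1472 192.168.20.10
  ping -f -l 1400 192.168.20.10
  ping -f -l 1300 192.168.20.10

On Linux the equivalent is ping -M do -s <size> <host>. Keep in mind the -l/-s value is only the ICMP payload, so add 28 bytes of IP and ICMP header to get the actual packet size on the wire.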

There are several papers, both on Cisco's site and elsewhere, describing the issue in a lot of detail. Do a search on "Cisco VPN MTU" if you want to read more.
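
Once you know what size makes it through, you can also have the ASA clamp the TCP MSS so end hosts never build segments that need fragmenting after the IPsec overhead is added. A rough example, assuming a value around 1350 suits your path (the ASA default is 1380):

  sysopt connection tcpmss 1350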

I thought an MTU mismatch wouldn't have that big of an effect on bandwidth?  We should be getting 50 Mbps, so only getting about 5-10 Mbps seems like too big a drop for fragmentation alone.

 

Also, for the ping test, I ran it to www.google.com and found an MTU of 1270.  However, when I run the test over our VPN tunnel, it doesn't work until I set the size down to 990.  I've never seen it that low, and when I set it to 1480 and up, it tells me the packet needs to be fragmented, but from 1000-1470 it just times out.  Does this point to something else?

You definitely have some issues with your path. I might even suspect something as low level as a duplex mismatch somewhere.

I'd break it down into pieces and check out the characteristics of each segment.
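
Since duplex is one suspect, it's worth glancing at the interface counters on each hop, something along these lines on an IOS device (the interface name is just an example):

  show interfaces GigabitEthernet0/1 | include duplex|errors|drops

A duplex mismatch usually shows up as late collisions on one side and CRC/runt errors on the other; steadily climbing input errors or drops on an otherwise clean link point at something else.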

Hi Marvin,

I took your suggestion and have been running ping tests to the various network equipment and segments.  From what I've seen, everything on our server switches pings with a packet size of 1470, both to the internet and internally; from our user VLAN it's about 1270, which is something I can narrow down at a later time.

 

What I've found on our remote site, however, is that all packets to the internet need to be kept to about 980 bytes.  When I ping our inside router interface (we have a router-on-a-stick configuration), the ping fails unless the size is set to 980.  To our firewall interface, however, as well as other internal equipment, ping works with the size set to 1480.  This points to something on that router, but there is no duplex mismatch, and the MTU on every interface is set to 1500 (this includes both the firewall and the router).  I'm stumped; could it be a bug?
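
For what it's worth, this is roughly how I'm verifying the MTU and duplex settings (the interface names here are just examples from my configs):

  Router:  show interfaces GigabitEthernet0/1
  Router:  show run interface GigabitEthernet0/1
  ASA:     show interface GigabitEthernet0/0
  ASA:     show run mtu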

Hmm. Sounds like we're narrowing down the problem domain - that's always a good sign. :)

It could be that the router upstream of your remote-site router-on-a-stick is restricting the path to a 1000-byte MTU, leaving about 980 bytes once packet headers are factored out. Can you check by pinging from the remote site router to its gateway (i.e. towards your main site)?
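
From the router CLI you can set the DF bit and try a few sizes directly, something like this (the gateway address is a placeholder):

  ping 203.0.113.1 size 1000 df-bit
  ping 203.0.113.1 size 1400 df-bit
  ping 203.0.113.1 size 1500 df-bit

If 1000 succeeds but the larger sizes fail, that points squarely at the upstream link.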

Hey Marvin,

 

When I ping from the other site's router to the gateway of the WAN link that our VPN connection rides on, it's successful up to a packet size of 18024 (the maximum allowed).

 

However, from the firewall to that same IP, it fails unless I make the size less than 1100.  It really seems like it's either the outside interface of the FW or the inside interface of the router.  The MTU is set to 1500 on both interfaces, full duplex, 1 Gbps.  I do see a lot of input errors on the inside interface of the router; they are all counted as "ignored".  That counter is going up by several hundred a second.  No other types of errors, though.  The input queue drops are also climbing, although not as fast.  Could those be the packets that are being dropped for being the wrong size?
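
In case it helps, this is roughly how I'm watching those counters (the interface name is just an example from my side); I clear them first so I can see how fast they climb:

  clear counters GigabitEthernet0/1
  show interfaces GigabitEthernet0/1 | include input errors|ignored|drops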

 

The outside interface has the same pattern, although the error counters are not going up as fast.

 

I really appreciate all your help Marvin!