2686 Views · 0 Helpful · 10 Replies

Internet Speeds Slow Over DMVPN

johnwoods
Level 1

We have approximately 20 remote sites running DMVPN in a typical hub and spoke configuration. Hub is a 4431 and spokes are 4331s. We have recently moved web filtering to a centralized appliance at our data center and, as such, I need to backhaul internet traffic to the data center. I have configured route-maps on all spokes to set the next hop address to the hub's tunnel address.
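The spoke-side policy routing looks roughly like this (the ACL, interface names, and the hub tunnel address 172.16.0.1 below are placeholders, not our actual values):

```
! Match internet-bound traffic only; internal traffic follows normal routing
ip access-list extended INET-TRAFFIC
 deny   ip 10.0.0.0 0.255.255.255 10.0.0.0 0.255.255.255
 permit ip 10.0.0.0 0.255.255.255 any
!
! Set next hop to the hub's tunnel address (placeholder)
route-map BACKHAUL-INET permit 10
 match ip address INET-TRAFFIC
 set ip next-hop 172.16.0.1
!
! Apply to the client-facing LAN interface
interface GigabitEthernet0/0/1
 ip policy route-map BACKHAUL-INET
```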

 

When I do this, we experience a major drop in internet speeds at the spoke sites, from approximately 100 Mbps to a fairly consistent 10-15 Mbps.

 

Any ideas would be greatly appreciated. Am I backhauling traffic in an incorrect way?

10 Replies

Giuseppe Larosa
Hall of Fame

Hello John,

if previously each remote site used the DMVPN only to reach the central site subnets or other spoke subnets, and each remote site had independent internet access, the comparison is the following:

a) with independent internet access at each remote site, only NAT is performed locally, and traffic to and from the internet travels without any encryption and without the MTU reduction caused by DMVPN overhead (IPsec + GRE)

 

b) when using the DMVPN from each remote site to access the internet, the following happens:

NAT is performed later, at the central site, before or after the centralized web filtering appliance.

Traffic to and from the internet needs to be encrypted using the DMVPN encapsulation, adding GRE and IPsec overhead.

There are two likely reasons for such reduced performance:

1) the need to encrypt all traffic to/from the internet

2) the reduced MTU on the DMVPN cloud can cause heavy fragmentation, especially downstream (from the internet to the users): 1500-byte packets coming from web servers need to be fragmented before being sent over the DMVPN, and after decryption the fragments may travel all the way to the client at the remote site.
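As a rough overhead calculation (exact numbers depend on the cipher, GRE options, and tunnel vs. transport mode):

```
Packet from web server:                  1500 bytes
+ GRE encapsulation (new IP hdr + GRE):   ~24 bytes
+ IPsec ESP (header, IV, padding, ICV):   ~30-60 bytes
----------------------------------------------------
Encapsulated packet:                     ~1554-1584 bytes
=> exceeds a 1500-byte path MTU, so it must be fragmented
```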

 

Check the MTU on the physical interfaces used to reach the DMVPN. Check for fragmentation activity on the hub router in the downstream direction. This is probably the main cause of the reduced performance.
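On IOS/IOS-XE, a quick way to check is something like the following (interface names are examples; under load the fragmentation counters should not be growing rapidly):

```
! Global IP fragmentation/reassembly counters:
! look at "fragmented", "couldn't fragment", "reassembled"
! under the IP statistics section
show ip traffic

! Confirm the MTU in effect on the tunnel
show interfaces Tunnel0
show ip interface Tunnel0
```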

To avoid fragmentation, the device performing NAT at the central site should use a reduced IP MTU and TCP MSS in order to take into account the overhead on the DMVPN section of the network.

Also, the remote site router interfaces facing the clients should use a reduced MTU and TCP MSS, but this is probably already configured.
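As a sketch, the clamping on a client-facing interface usually looks like this (1400/1360 are the common values for GRE over IPsec, not mandates, and the interface name is a placeholder):

```
interface GigabitEthernet0/0/1
 description Client-facing LAN
 ip mtu 1400
 ip tcp adjust-mss 1360
```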

 

Hope to help

Giuseppe

 

Hello Giuseppe,

Thank you for the reply. All remote sites (spokes) were originally routed (no NAT) over MPLS to our data center. However, internet access for each individual site was centralized through the MPLS provider. There was a filtering appliance at each remote site allowing us to bring only private traffic back to our data center.

We have now moved away from MPLS and each site is connected via broadband internet access and a DMVPN tunnel from each remote site. We stayed with the filtering appliances local to each site for a while but now desire to bring all internet traffic back to our data center to be filtered through one appliance.

A few things to note:

1) we are presently NATing all traffic at each individual remote site

2) the hub DMVPN router is behind a Juniper firewall stack that is also NATing at the central data center

Is this a bad/incorrect design?

As far as MTU and TCP-MSS, they are set to 1400 and 1360 respectively at both hub and spokes.

Any thoughts?

Thank you,

John

Hello John,

so you previously had MPLS links between the remote sites and the central site, and now you have DMVPN over internet access.

However, there are still some aspects of your topology that I do not understand:

1) you perform NAT locally at each remote site before sending traffic over the DMVPN

2) The DMVPN hub router is behind a cluster of Juniper SRX firewalls that are doing also NAT.

Here my doubts arise. As far as I know, DMVPN is a Cisco proprietary solution, and if I remember correctly the Juniper SRX does not support IPsec-protected GRE. So my question is: what are the SRXs supposed to NAT?

As I see it, they cannot inspect inside the DMVPN packets and change the payload. They may be able to NAT the DMVPN endpoints to make the remote site IP addresses appear different to the DMVPN hub.

Can you clarify what the Juniper SRXs are performing NAT on?

 

3) As far as MTU and TCP-MSS, they are set to 1400 and 1360 respectively at both hub and spokes.

This is good news, but it may not be enough to avoid fragmentation; see the following point.

 

4) Where is the centralized web filtering appliance located, and what is its operation mode?

I would expect the centralized web filtering appliance to sit after the DMVPN hub router and to process decrypted traffic coming from the remote sites.

In this way the appliance can see the clear-text traffic and make decisions on client connection requests.

However, it is important to find out the operational mode of the web filtering appliance. If the appliance works as a proxy, each client TCP session is terminated by the appliance, and the appliance starts another TCP session with the real internet-based web server.

If this is how the appliance works, it will see the reduced IP MTU and TCP MSS on the appliance-to-client TCP session, but on the other TCP session, to the real web server, it might use standard values for IP MTU and TCP MSS, and this will cause fragmentation of downstream traffic.

 

Hope to help

Giuseppe

 

 

 

Sorry for the confusion Giuseppe,

Our Juniper stack is NATing the (public) NBMA address of the hub into the (private) address assigned to the tunnel source interface.

John

Hello John,

I have seen that you have opened a new thread asking for design suggestions.

 

However, if you want, you can try to troubleshoot further by looking at the centralized web appliance's operation.

 

Does it act as a proxy?

If so, two TCP sessions are interconnected through it:

One TCP session between the client and the web appliance.

One TCP session between the web appliance and the true web server on the internet.

What are the settings for IP MTU and MSS on the appliance?

Can you check these and possibly use lower values on the internal and external interfaces of the web appliance?

 

I think the low performance is caused by excessive fragmentation, and this can be caused by the web appliance if it does not reflect the reduced IP MTU and MSS of its internal side on its external, internet-facing side.

 

Hope to help

Giuseppe

 

Thank you for the reply Giuseppe,

I opened the separate thread hoping to get advice and recommendations on how others are "properly" doing this.

As far as troubleshooting my existing configuration, I do not think the issues have anything to do with the filter appliance considering that we have several subnets at the same site as the appliance that are being filtered through it with absolutely no performance/speed issues.

I am just thinking that including internet traffic in the IPSec tunnels is unnecessary and probably not "best practice". This is why I opened the other thread and am looking forward to seeing how others are doing this.

Thank you,

John

Hello John,

>> As far as troubleshooting my existing configuration, I do not think the issues have anything to do with the filter appliance considering that we have several subnets at the same site as the appliance that are being filtered through it with absolutely no performance/speed issues.

 

The question remains open for the remote site subnets, which, when using the DMVPN, face a lower IP MTU and TCP MSS (1400 bytes and 1360 bytes respectively, according to your settings).

I agree with you that, with default settings, the web filtering appliance works well for users in central site IP subnets whose traffic does not cross an IPsec/GRE path like the one DMVPN provides.

 

Hope to help

Giuseppe

 

Hello,

 

Not sure if this has already been mentioned, or if you have already configured it, but you could try setting 'tunnel path-mtu-discovery' on the tunnel interfaces.

 

Post the full configs of both the hub and one of the spokes, maybe we can spot something else...

Georg,

Thank you for the reply. Would the "tunnel path-mtu-discovery" command replace the static MTU and TCP-MSS commands on both the hub and spokes or would it coexist with these existing commands?

John

Hello,

 

No, leave both the MTU and MSS settings as they are (I assume 1400 and 1360, respectively). Use path MTU discovery in addition to these settings.
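So the tunnel interface would end up along these lines (other tunnel parameters such as source, NHRP, and the IPsec protection profile omitted):

```
interface Tunnel0
 ip mtu 1400
 ip tcp adjust-mss 1360
 tunnel path-mtu-discovery
```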
