Unbelievably slow DMVPN connection.

gmulvaney1
Level 1

Hi, I've been working on this for 3 days with no resolution. Hopefully some smarter folks here will throw me a bone!

I have a hub-and-spoke Phase 3 DMVPN topology that works great except for speed. Internet (DIA) speeds are as expected: the spoke is on a 100M link and the hub is on a 1 Gig link.

The hub is an old Cisco 2911 running 15.7(3)M5; the spoke is a small ISR (881W, wireless disabled) running 15.4(3)M8.

 

Actual bandwidth through the tunnel (measured with iperf3) averages only 2.2-3 Mbits/s. I'm pulling out what little hair I have left over that.

Configs are below. I've played with the tcp adjust-mss settings and the MTU, and it didn't make much difference. I also removed the tunnel protection as a test, and throughput in both tests was actually slightly slower, but by a negligible amount.
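A quick sanity check worth noting here: a DF-bit ping at the tunnel MTU across the tunnel, plus one at the full 1500 bytes between the public addresses, helps separate an MTU problem from plain circuit loss. This is only a sketch, run from the spoke; 192.168.185.2 is the hub's tunnel address and 158.93.X.X its (masked) public address, per the configs below:

ping 192.168.185.2 repeat 10 size 1400 df-bit
ping 158.93.X.X repeat 100 size 1500 df-bit

If the 1400-byte tunnel ping fails while smaller pings succeed, fragmentation is the likely culprit; if the public 1500-byte ping shows scattered loss, the circuit itself is suspect.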

Hub Config:

interface Tunnel1
bandwidth 1000000
ip vrf forwarding labnetwork
ip address 192.168.185.2 255.255.255.0
no ip redirects
ip mtu 1400
no ip next-hop-self eigrp XXXX
no ip split-horizon eigrp XXXX
ip nhrp authentication XXXX

ip nhrp network-id 69
ip nhrp holdtime 450
ip nhrp redirect
ip virtual-reassembly in max-reassemblies 1000
no ip split-horizon
ip tcp adjust-mss 1365
delay 1000
tunnel source GigabitEthernet0/0.1
tunnel mode gre multipoint
tunnel key XXX
tunnel path-mtu-discovery
tunnel bandwidth transmit 1000000
tunnel bandwidth receive 1000000
tunnel protection ipsec profile ROME
end

 

Spoke config:

interface Tunnel1
bandwidth 1000000
bandwidth receive 100000
ip address 192.168.185.1 255.255.255.0
no ip redirects
ip mtu 1400
ip nat inside
ip nhrp authentication XXXX
ip nhrp map 192.168.185.2 158.93.X.X
ip nhrp map multicast 158.93.X.X
ip nhrp network-id 69
ip nhrp holdtime 450
ip nhrp nhs 192.168.185.2
ip nhrp shortcut
ip nhrp redirect
ip virtual-reassembly in
ip tcp adjust-mss 1365
delay 1000
tunnel source FastEthernet4
tunnel mode gre multipoint
tunnel key XXX
tunnel path-mtu-discovery
tunnel bandwidth transmit 100000
tunnel bandwidth receive 100000
tunnel protection ipsec profile ROME
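
A few read-only checks that are often useful when DMVPN throughput is far below line rate; exact availability and output vary by platform and IOS release:

show dmvpn detail
show interface Tunnel1
show crypto ipsec sa
show crypto engine accelerator statistic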

 

If anyone has a solution, the beer is on me the next time you're in Augusta, GA.

 

Thanks!

Jerry

26 Replies

Hi Joseph,

 

There's virtually no internet traffic over the physical interface; it is used solely for incoming DMVPN connections. There are only two hubs, and both have this same issue.

 

As for the bandwidth my ISP is supposed to be providing, it's 100M. I switched the 881 ISR for a 2921 this morning and my internet (non-tunneled) bandwidth did increase a bit, which I expected, but tunnel throughput remained the same. I plan (Tuesday) on removing the VLAN tagging on the Gi0/0.1 interface and just using the Gi0/0 interface connected directly to the gateway as my tunnel source. At least that would eliminate the core trunk as an issue, although it doesn't show any errors.
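A minimal sketch of what that change on the hub might look like, using the interface names from the config above; actually moving the public addressing and dot1q settings off Gi0/0.1 is site-specific and only hinted at in the comment:

interface GigabitEthernet0/0
 ! public addressing moves here from GigabitEthernet0/0.1 (site-specific)
interface Tunnel1
 tunnel source GigabitEthernet0/0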

Can we see "show interface summary" and "show proc cpu history" outputs from all sides? A "show crypto engine accelerator statistics" from each may also be telling, if you could share them.

Sure!

Hub:


AB-Lab-2911-Gateway#sh int sum

*: interface is up
IHQ: pkts in input hold queue IQD: pkts dropped from input queue
OHQ: pkts in output hold queue OQD: pkts dropped from output queue
RXBS: rx rate (bits/sec) RXPS: rx rate (pkts/sec)
TXBS: tx rate (bits/sec) TXPS: tx rate (pkts/sec)
TRTL: throttle count

Interface IHQ IQD OHQ OQD RXBS RXPS TXBS TXPS TRTL
-----------------------------------------------------------------------------------------------------------------
Em0/0 0 0 0 0 0 0 0 0 0
* GigabitEthernet0/0 0 0 0 0 12000 20 5000 2 0
* GigabitEthernet0/0.1 - - - - - - - - -
* Gi0/0.500 - - - - - - - - -
* GigabitEthernet0/1 0 0 0 0 4000 1 1000 1 0
* GigabitEthernet0/2 0 0 0 0 0 0 0 0 0

Interface IHQ IQD OHQ OQD RXBS RXPS TXBS TXPS TRTL
-----------------------------------------------------------------------------------------------------------------
* Loopback1 0 0 0 0 0 0 0 0 0
* NVI0 0 0 0 0 0 0 0 0 0
Tunnel0 0 0 0 0 0 0 0 0 0
* Tunnel1 0 0 0 34149 1000 1 4000 1 0
NOTE:No separate counters are maintained for subinterfaces
Hence Details of subinterface are not shown


AB-Lab-2911-Gateway# sh proc cpu hist

AB-Lab-2911-Gateway 04:48:00 PM Sunday Jan 19 2020 est

 

 

[CPU% per second (last 60 seconds): 1-4%]
[CPU% per minute (last 60 minutes): maxima mostly 4-7%, with occasional peaks of 14-17%]
[CPU% per hour (last 72 hours): maxima mostly 15-45%, with one spike to roughly 80%]

 

Spoke 1 (this location)

 


*: interface is up
IHQ: pkts in input hold queue IQD: pkts dropped from input queue
OHQ: pkts in output hold queue OQD: pkts dropped from output queue
RXBS: rx rate (bits/sec) RXPS: rx rate (pkts/sec)
TXBS: tx rate (bits/sec) TXPS: tx rate (pkts/sec)
TRTL: throttle count

Interface IHQ IQD OHQ OQD RXBS RXPS TXBS TXPS TRTL
-----------------------------------------------------------------------------------------------------------------
Em0/0 0 0 0 0 0 0 0 0 0
* GigabitEthernet0/0 1 0 0 2 3093000 337 61000 85 2
* GigabitEthernet0/1 0 0 0 0 59000 85 3061000 272 0
Serial0/0/0 0 0 0 0 0 0 0 0 0
* Loopback7 0 0 0 0 0 0 0 0 0
* NVI0 0 0 0 0 0 0 0 0 0

Interface IHQ IQD OHQ OQD RXBS RXPS TXBS TXPS TRTL
-----------------------------------------------------------------------------------------------------------------
* Tunnel1 0 0 0 9517 3000 0 0 0 0

CaesarNorth_GW 04:51:11 PM Sunday Jan 19 2020 est

 

 

[CPU% per second (last 60 seconds): 2-7%]
[CPU% per minute (last 60 minutes): maxima mostly 6-9%, with one peak in the low 20s]
[CPU% per hour (last 72 hours, only ~8 hours of data): maxima 12-53%, with one spike to roughly 100%]

 

 

Uptime on the spoke has only been 8 hours; the spike was likely when it was spinning up. I did have an 881W ISR here and swapped it this morning for a 2921, thinking maybe the extra horsepower would make at least a slight difference. Nope.

 

Thanks for the response!

Hello,

 

Taking into consideration that you are taking a massive hit in throughput, I think something fundamental must not be configured correctly. I have made some changes to the spoke configuration (marked with "-->").

 

! Last configuration change at 09:20:51 est Sat Jan 18 2020 by mulvanj
! NVRAM config last updated at 09:10:04 est Sat Jan 18 2020 by mulvanj
!
version 15.4
service timestamps debug datetime msec
service timestamps log datetime localtime
no service password-encryption
!
hostname CaesarNorth_GW
!
boot-start-marker
boot-end-marker
!
logging count
logging userinfo
logging buffered 132000
enable secret 5 XXXX
!
aaa new-model
!
aaa authentication login default local
!
aaa session-id common
clock timezone est -5 0
clock summer-time EST recurring
service-module wlan-ap 0 bootimage autonomous
!
ip vrf JOANNE
rd 300:1
!
ip domain name ThirteenthLegion.org
ip host CISCO-CAPWAP-CONTROLLER 192.168.194.10
ip name-server 8.8.8.8
ip name-server 192.168.184.1
ip multicast-routing
ip cef
login on-failure log
login on-success log
ipv6 unicast-routing
ipv6 cef
!
multilink bundle-name authenticated
!
cts logging verbose
license udi pid C881W-A-K9 sn FTX162985WE
!
vtp mode transparent
vtp version 2
username mulvanj privilege 15 secret 5 XXXX
username gmulvaney privilege 15 secret 5 XXXX
!
vlan 2
!
vlan 3
name Datacenter
!
vlan 4
name GatewaySVI
!
crypto isakmp policy 10
hash md5
authentication pre-share
crypto isakmp key XXXX address 0.0.0.0
crypto isakmp invalid-spi-recovery
!
crypto ipsec security-association replay window-size 1024
!
crypto ipsec transform-set LAB esp-des esp-md5-hmac
mode transport
crypto ipsec df-bit clear
!
crypto ipsec profile ROME
set transform-set LAB
!
interface Loopback7
ip address 7.7.7.7 255.255.255.255
ipv6 address 2007::2222/64
ipv6 ospf 1 area 2
!
interface Tunnel0
no ip address
!
interface Tunnel1
ip address 192.168.185.1 255.255.255.0
no ip redirects
ip mtu 1400
ip nhrp authentication XXXX
ip nhrp map 192.168.185.2 158.93.7.197
ip nhrp map multicast 158.93.7.197
ip nhrp network-id 69
ip nhrp holdtime 300
ip nhrp nhs 192.168.185.2
ip nhrp shortcut
ip virtual-reassembly in
ip tcp adjust-mss 1360
tunnel source FastEthernet4
tunnel mode gre multipoint
tunnel key XXXX
tunnel protection ipsec profile ROME
!
interface FastEthernet0
switchport mode trunk
no ip address
!
interface FastEthernet1
no ip address
!
interface FastEthernet2
no ip address
!
interface FastEthernet3
no ip address
!
interface FastEthernet4
ip address dhcp
ip nat outside
--> no ip nat enable
ip virtual-reassembly in
duplex auto
speed auto
!
interface Wlan-GigabitEthernet0
no ip address
!
interface wlan-ap0
no ip address
!
interface Vlan1
no ip address
!
interface Vlan4
ip address 192.168.184.1 255.255.255.0
ip nat inside
--> no ip nat enable
ip virtual-reassembly in
!
router eigrp CaesarNorth
!
address-family ipv4 unicast autonomous-system 7576
!
af-interface FastEthernet4
passive-interface
exit-af-interface
!
af-interface Vlan4
passive-interface
exit-af-interface
!
topology base
redistribute ospf 30909 metric 10000000 100 255 1 1400
exit-af-topology
network 75.0.0.0
network 192.168.184.0
network 192.168.185.0
metric rib-scale 255
exit-address-family
!
router ospf 30909
network 192.168.84.0 0.0.0.255 area 0
network 192.168.184.0 0.0.0.255 area 0
network 192.168.185.0 0.0.0.255 area 0
default-information originate always
!
ip forward-protocol nd
no ip http server
no ip http secure-server
!
ip nat inside source list 1 interface FastEthernet4 overload
ip route 0.0.0.0 0.0.0.0 FastEthernet4 dhcp
!
ip ssh version 2
!
ip access-list extended Limit_SSH
permit tcp 192.168.0.0 0.0.255.255 any eq 22
deny ip any any log
ip access-list extended VPN
permit tcp 192.168.0.0 0.0.255.255 any
!
ip prefix-list DEF seq 5 permit 0.0.0.0/0
ipv6 router ospf 1
!
access-list 1 permit 192.168.184.0 0.0.0.255
!
access-list 30 permit any
access-list 30 permit 192.168.84.0 0.0.0.255
access-list 30 permit 192.168.0.0 0.0.255.255
access-list 40 deny 172.19.0.0
!
access-list 130 permit ip 192.168.84.0 0.0.0.255 any
!
control-plane
!
mgcp behavior rsip-range tgcp-only
mgcp behavior comedia-role none
mgcp behavior comedia-check-media-src disable
mgcp behavior comedia-sdp-force disable
!
mgcp profile default
!
vstack
!
line con 0
exec-timeout 0 0
no modem enable
line aux 0
line 2
no activation-character
no exec
transport preferred none
stopbits 1
line vty 0 4
access-class Limit_SSH in
exec-timeout 60 0
password 7 08221C400A0B000317
transport input ssh
line vty 5 189
access-class Limit_SSH in
exec-timeout 60 0
password 7 08221C400A0B000317
transport input ssh
!
scheduler allocate 20000 1000
ntp update-calendar
ntp server time2.google.com
ntp server 2.us.pool.ntp.org
!
end

Thanks - some of those were leftover configs from past testing; I cleaned them up along with some of the old routing info that's no longer needed. I've also changed this router from the original 881 to a 2921 and gotten rid of the VLAN stuff completely - it's now L3 to the access switch at the hub.

 

The next step, tomorrow, is to eliminate the VLAN tagging on the external interface on the hub. I think I can get a connection directly to the outside edge pretty easily; that will rule it out as an issue.

Only two hubs, eh? Have you tried any performance tests using a point-to-point GRE tunnel or an Internet traffic performance server?
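For reference, a throwaway point-to-point GRE tunnel for an A/B test might look something like this on the hub; Tunnel99, the 10.99.99.0/30 addressing, and 203.0.113.10 (standing in for the spoke's public address) are placeholders, and the spoke side would mirror it with the addresses swapped:

interface Tunnel99
 ip address 10.99.99.1 255.255.255.252
 ip mtu 1400
 ip tcp adjust-mss 1360
 tunnel source GigabitEthernet0/0.1
 tunnel destination 203.0.113.10

If iperf3 over the plain GRE tunnel is just as slow, the problem is in the transport path rather than in NHRP or IPsec.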

I would ignore the previous person who told you to point a default route at an interface. Routing to an interface instead of an IPv4 next-hop IP is not best practice, especially for a default route: you'll ARP for every IP you reach via that default route. Good for labs and CCNA study, but not for real-world, mature networks.
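For illustration, the next-hop form on the spoke would look like this, where 203.0.113.1 stands in for the actual provider gateway (on a DHCP-addressed interface, the DHCP-learned gateway normally fills this role):

no ip route 0.0.0.0 0.0.0.0 FastEthernet4 dhcp
ip route 0.0.0.0 0.0.0.0 203.0.113.1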

 

That said, is there a reason the spoke would be advertising a default route into OSPF towards the hub, given this line in your spoke's config: default-information originate always? I am guessing the intent was to advertise the default below the router, like a stub area. Are you able to ping across the tunnel with 1400-byte packets and the DF bit set?

 

From spoke:

ping 192.168.185.2 repeat 10 size 1400 df-bit

 

881s are slow little routers. I would be surprised if you got more than 25 Mbps over DMVPN on one, but I would expect more than 7 Mbps if it were otherwise idle. I agree with a previous poster that I would test with all NAT removed. Also, delete your post with the configs attached if those vty line passwords are production-sensitive: those are type 7, which anyone can decode with no effort. Scrub them and repost without the credentials.
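As a sketch of scrubbing them on the box, given that "aaa authentication login default local" and the "username ... secret" entries are already there to gate vty access, the line passwords can simply be removed:

line vty 0 4
 no password
line vty 5 189
 no password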

Hey, thanks! Good info. The default route was from an old test propagating a default from the spoke; it's not needed and is now removed. And thanks for the vty note - I missed that when I posted the configs; it's now changed on the boxes.

 

I did swap the 881 for a 2921 to see if that would affect the tunnel performance. No difference, but I did see about a 20% improvement on the internet default route.

 

Good catch on the MTU question - I ran the ping test 10 times; 8 runs were fine, and 2 of them dropped 1 packet. I would have thought tunnel path-mtu-discovery would have prevented that.

I'm interested in the low-level packet loss. MTU issues would surface as 100% loss with the DF bit set, so this makes me wonder if one of your circuits is lossy, which would manifest as slowness. What do you see with a long public-to-public ping, something like this from the spoke:

 

ping 158.93.7.197 repeat 100 size 1500 df-bit

 

Were you able to resolve your low-level packet loss issues?

Hey, thanks for following up. Not yet; we're moving some data center DR equipment this week (last minute, as always), and it's been eating up all my time. I do plan on picking this up this weekend, though. Appreciate it!

Haven't yet. I did remove the crypto on the tunnel entirely for a test, but it made minimal difference; oddly enough, the average was actually slightly slower with no crypto.
