MTU / MSS doesn't return to normal after testing

corycandia
Level 1

Experts,

 

Weird situation here with a 2921 (15.4): 

 

During testing, I adjusted the IP MTU of an mGRE tunnel interface (used for DMVPN) down from 'ip mtu 1400' to something lower, like 'ip mtu 1300'.

 

With Wireshark running live, we could see the packet length of the tunnel traffic decrease from 1360 to 1314 bytes.  That's expected.

 

I have returned the tunnel interface configuration to what it was before (ip mtu 1400 / MSS 1360) and shut/no shut the interface, but the packet length of all the tunnel traffic remains at 1314 instead of returning to 1360 (MSS from before).  I can confirm this with Wireshark.

 

I figure that with features like 'tunnel path-mtu-discovery', other routers might make changes of their own, but I don't understand why the packet lengths stay at the reduced size instead of adjusting back up.

 

1. Anyone have any ideas? 

2. Do the sending/receiving hosts need to be rebooted? The sender is a FreeBSD-based NAS; the receiver is a Debian 10 VM on ESXi 6.5.
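
For reference, here's the back-of-the-envelope math I've been using. It's only a sketch, assuming plain IPv4/TCP with no options and standard Ethernet framing, and the 1314 reading is my own interpretation rather than anything confirmed in the captures:

# Sketch: how tunnel IP MTU, TCP MSS, and captured frame sizes relate.
# Assumes IPv4 (20 B) + TCP (20 B, no options) + Ethernet II (14 B) headers.
IP_HDR, TCP_HDR, ETH_HDR = 20, 20, 14

def mss_for(ip_mtu):
    # MSS a host derives from an IP MTU (also what 'ip tcp adjust-mss' clamps to)
    return ip_mtu - IP_HDR - TCP_HDR

def frame_len(mss):
    # Ethernet frame length Wireshark shows for a full-sized TCP segment
    return mss + TCP_HDR + IP_HDR + ETH_HDR

print(mss_for(1400))    # 1360 -> matches 'ip tcp adjust-mss 1360'
print(mss_for(1300))    # 1260 -> what a host uses while a 1300-byte path MTU is cached
print(frame_len(1260))  # 1314 -> the size that keeps showing up after the test

If that reading is right, the hosts are still honoring the 1300-byte path MTU they learned during the test, which would explain why nothing I do on the router brings the sizes back up.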

 

I've already:

Reloaded the 2921 the change was made on

cleared crypto used by the DMVPN on both sides

shut and no shut the tunnel on both sides

shut and no shut some downstream 3560 switch ports / VLANs in the path to the hosts

 

Here's the actual tunnel config:

interface Tunnel0
description DMVPN
bandwidth 100000
bandwidth qos-reference 100000
ip address 172.20.254.2 255.255.255.0
no ip redirects
ip mtu 1400
no ip next-hop-self eigrp 1
no ip split-horizon eigrp 1
ip pim dr-priority 10
ip pim nbma-mode
ip pim sparse-dense-mode
ip nat inside
ip nhrp authentication CNDMNTCS
ip nhrp group DMVPN-QOS_100MBPS
ip nhrp map group DMVPN-QOS_100MBPS service-policy output POLICYMAP_DMVPN-100MBPS-LINE-RATE
ip nhrp map group DMVPN-QOS_60MBPS service-policy output POLICYMAP_DMVPN-60MBPS-LINE-RATE
ip nhrp map group DMVPN-QOS_GERMANY service-policy output POLICYMAP_DMVPN-GERMANY-LINE-RATE
ip nhrp network-id 1
ip nhrp holdtime 300
ip virtual-reassembly in
zone-member security LAN
ip summary-address eigrp 1 172.20.16.0 255.255.248.0
ip tcp adjust-mss 1360
delay 100
qos pre-classify
tunnel source GigabitEthernet0/1
tunnel mode gre multipoint
tunnel key 1
tunnel path-mtu-discovery
tunnel protection ipsec profile IPSEC-PROFILE_DMVPN
end

 

5 Replies

corycandia
Level 1

In case anyone finds this helpful, it took a host reboot. (OMG, I hate rebooting hosts to fix a problem.  FreeBSD allows selective restarting of networking, but FreeNAS customization makes rebooting required.)

 

After rebooting the host, its MSS went from the smaller size (Picture 1) to the proper size (Picture 2).

 

Now, both hosts are using 1360.
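
For next time, on the Debian side at least, a cached path MTU can probably be checked and flushed without a reboot. This is only a sketch, assuming iproute2 and root access; 172.20.16.10 is a placeholder for the far-end host, not an address from this thread. I don't have an equivalent handy for the FreeNAS box.

# Sketch: inspect and clear a cached path MTU on a Linux host (run as root).
# "172.20.16.10" is a placeholder peer address, not one from this thread.
import subprocess

peer = "172.20.16.10"
# Show the route decision for the peer; a cached PMTU typically appears as "mtu ..." here.
subprocess.run(["ip", "route", "get", peer], check=True)
# Drop cached routing/PMTU exceptions so the next transfer rediscovers the path MTU.
subprocess.run(["ip", "route", "flush", "cache"], check=True)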

corycandia
Level 1

Lastly, can anyone talk specifically about tuning a tunnel's MTU for distant end requirements?

 

I have a distant end that uses German Telekom, which requires PPPoE over fiber Ethernet.

 

Question 1: With the branch using PPPoE, do I need to plan on them losing 8 bytes to the PPPoE header? In other words, should we limit the DMVPN tunnel to an MTU below 1400 so our encapsulated packets come out smaller than the 1494 bytes seen in Picture 3?  (Picture 3 shows the IPsec GRE packets headed to the branch that uses PPPoE.)
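
To sanity-check the sizes involved, here's a rough calculation using only numbers from these captures. The ~94-byte figure is just (observed outer 1494) minus (inner MTU 1400) for this particular GRE-with-key plus IPsec setup; ESP overhead varies with cipher and padding, so it isn't a universal constant.

# Sketch: sizing the tunnel IP MTU around PPPoE, using numbers from this thread.
PPPOE_MTU = 1500 - 8             # 1492: what PPPoE leaves on a 1500-byte Ethernet link
observed_overhead = 1494 - 1400  # ~94 bytes of GRE + tunnel key + ESP in these captures

max_inner_mtu = PPPOE_MTU - observed_overhead
print(max_inner_mtu)             # 1398: largest inner IP MTU that avoids fragmentation
                                 # toward the PPPoE branch, given this exact overhead

print(1300 + observed_overhead)  # 1394: with 'ip mtu 1300' the outer packet stays
                                 # comfortably below 1492

If the ~94 bytes of overhead in these captures is representative, an inner MTU of 1400 is a couple of bytes too big for the PPPoE leg, so trimming the tunnel MTU (and the adjust-mss value with it, MTU minus 40) seems reasonable.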

 

Question 2:  If we send the branch too large a packet, the ISP's router will fragment it, correct?

 

Lastly, a real conundrum: 

 

Connectivity between the US-based HQ site and the German branch site has reliability problems.  It works well at night, and reaches the old hardware's limit of about 30 Mbps over 100 Mbps-rated lines.

 

Connectivity during the day seems to be unreliable and crap.  There's about 15% packet loss between the routers using:

ping X.X.X.X repeat 100 tos 50 size 1492 df-bit

With that kind of loss, it makes sense that the tunnel throughput slows down to almost nothing, 2-3 Mbps.
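
To put a rough number on that, here's the well-known Mathis approximation for loss-limited TCP throughput. The MSS of 1360 comes from the tunnel config; the 100 ms round-trip time is my assumption for a US-to-Germany path, not something measured here.

# Sketch: why 15% loss crushes TCP throughput.
# Mathis et al. approximation:  rate ~= (MSS / RTT) * (1.22 / sqrt(loss))
import math

def mathis_rate_bps(mss_bytes, rtt_s, loss):
    return (mss_bytes * 8 / rtt_s) * (1.22 / math.sqrt(loss))

print(mathis_rate_bps(1360, 0.100, 0.15) / 1e6)   # ~0.34 Mbps per TCP flow at 15% loss
print(mathis_rate_bps(1360, 0.100, 0.001) / 1e6)  # ~4.2 Mbps per flow at 0.1% loss

So a handful of parallel flows adding up to 2-3 Mbps is roughly what you'd expect at that loss rate.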

 

Capturing packets on the external interface of the German branch router showed incoming IPsec GRE packets with a length of 1460, which is much smaller than the 1494 we are sending, and smaller even than the 1492 you get if you start with 1500 and remove 8 for PPPoE.  I know there's some extra header overhead beyond that, but 1460 seemed extra small, so I'm curious whether the ISP fragments packets differently depending on network conditions.

Question 3: Has anyone seen this before?

 

Question 4: What should we set our tunnel MTU to if we're trying to avoid some hateful ISP fragmentation?

 

Thanks

Hello,

 

For DMVPN, 'ip mtu 1400' and 'ip tcp adjust-mss 1360' work in most cases. Looking at the throughput/performance problem between your US and German sites: first of all, you are running an IOS release (15.4) that is at least five years old; how old is the router? You might want to upgrade to the recommended release (Release 15.7.3M7 MD).

 

Also, if you do a 'show interfaces x' for the tunnel and physical interfaces, do you see any drops/errors on the interfaces?

 

The routers are very old, yes.

 

I do have output drops, but I have to use QoS to limit the tunnels to no more than 30 Mbps, as that's the limit of the CPU on those old devices.

 

HQ:

CARM-GATEWAY#show int tun 0
Tunnel0 is up, line protocol is up
Hardware is Tunnel
Description: DMVPN
Internet address is 172.20.254.2/24
MTU 17912 bytes, BW 100000 Kbit/sec, DLY 1000 usec,
reliability 255/255, txload 2/255, rxload 1/255
Encapsulation TUNNEL, loopback not set
Keepalive not set
Tunnel linestate evaluation up
Tunnel source 199.27.251.53 (GigabitEthernet0/1)
Tunnel Subblocks:
src-track:
Tunnel0 source tracking subblock associated with GigabitEthernet0/1
Set of tunnels with source GigabitEthernet0/1, 4 members (includes iterators), on interface <OK>
Tunnel protocol/transport multi-GRE/IP
Key 0x1, sequencing disabled
Checksumming of packets disabled
Tunnel TTL 255, Fast tunneling enabled
Path MTU Discovery, ager 10 mins, min MTU 92
Tunnel transport MTU 1472 bytes
Tunnel transmit bandwidth 8000 (kbps)
Tunnel receive bandwidth 8000 (kbps)
Tunnel protection via IPSec (profile "IPSEC-PROFILE_DMVPN")
Last input 00:00:00, output never, output hang never
Last clearing of "show interface" counters 11:31:44
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 13187
Queueing strategy: fifo (QOS pre-classification)
Output queue: 0/0 (size/max)
5 minute input rate 39000 bits/sec, 38 packets/sec
5 minute output rate 888000 bits/sec, 77 packets/sec
14966616 packets input, 1169113940 bytes, 0 no buffer
Received 0 broadcasts (8516 IP multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
30061743 packets output, 3606540686 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 unknown protocol drops
0 output buffer failures, 0 output buffers swapped out

 

Branch:

GERM-GATEWAY#show int tun 0
Tunnel0 is up, line protocol is up
Hardware is Tunnel
Description: DMVPN
Internet address is 172.20.254.3/24
MTU 17912 bytes, BW 30000 Kbit/sec, DLY 1000 usec,
reliability 255/255, txload 1/255, rxload 7/255
Encapsulation TUNNEL, loopback not set
Keepalive not set
Tunnel linestate evaluation up
Tunnel source 79.199.216.254 (Dialer1)
Tunnel Subblocks:
src-track:
Tunnel0 source tracking subblock associated with Dialer1
Set of tunnels with source Dialer1, 2 members (includes iterators), on interface <OK>
Tunnel protocol/transport multi-GRE/IP
Key 0x1, sequencing disabled
Checksumming of packets disabled
Tunnel TTL 255, Fast tunneling enabled
Path MTU Discovery, ager 10 mins, min MTU 92
Tunnel transport MTU 1472 bytes
Tunnel transmit bandwidth 8000 (kbps)
Tunnel receive bandwidth 8000 (kbps)
Tunnel protection via IPSec (profile "IPSEC-PROFILE_DMVPN")
Last input 00:00:00, output never, output hang never
Last clearing of "show interface" counters 1w1d
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 542
Queueing strategy: fifo (QOS pre-classification)
Output queue: 0/0 (size/max)
5 minute input rate 897000 bits/sec, 85 packets/sec
5 minute output rate 59000 bits/sec, 44 packets/sec
271102248 packets input, 373867107535 bytes, 0 no buffer
Received 0 broadcasts (22990 IP multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
139556152 packets output, 25810472993 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 unknown protocol drops
0 output buffer failures, 0 output buffers swapped out

 

To illustrate the packet size difference, I pasted some packet captures together showing GRE IPsec traffic being sent and received as it appears on both routers.  I'm just not understanding the extra 14 bytes.  It's probably a simple answer, but I can't track it down.

 

Packet Captures.jpg

The picture shows simultaneous captures of the same traffic on both sides.
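
Here's the per-layer accounting I've been trying to reconcile. The ESP figure is my rough estimate (it depends on cipher, IV, and padding), and whether a capture includes the link-layer header depends on where the capture was taken.

# Sketch: common per-layer sizes when comparing captures from different interface types.
# A capture on an Ethernet interface includes the 14-byte Ethernet II header in the
# frame length; a capture on a Dialer/PPPoE interface may not, so the same packet can
# legitimately show different "lengths" on the two routers.
overheads = {
    "Ethernet II header": 14,
    "PPPoE + PPP headers": 8,
    "Outer IPv4 header": 20,
    "GRE header (4) + tunnel key (4)": 8,
    "ESP header/IV/trailer/ICV": "varies with cipher and padding",
}
for layer, size in overheads.items():
    print(f"{layer}: {size}")

The 14 happens to match an Ethernet II header, so maybe one capture includes the link header and the other doesn't, but I haven't confirmed that.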

Q1 - Maybe, maybe not.  The Cisco-recommended tunnel IP MTU of 1400 is often a tad smaller than needed, so you might be all right for 8 bytes.

Q2 - It should.  Otherwise the packet would be dropped.

Q3 - I haven't, but WANs with different vendors can be an "interesting" environment.

Perhaps part of the congestion drops and/or the smaller received MTU is caused by a different path being used during the day, one that also has a smaller MTU.  (Also, I have seen ISPs misconfigure their equipment.)  Further, routers can be loaded down by having to fragment packets, so if that is happening, it might also contribute to the higher drop rate you noticed.

As path MTU on WANs can vary, it can be helpful to enable PMTUD on your router (off by default, I believe).
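
To see what the path is actually passing at different times of day, you could also probe it from a host. This is only a sketch under a few assumptions: a Linux host with iputils ping (so '-M do' sets the DF bit and forbids fragmentation), and 203.0.113.1 as a placeholder address rather than anything from this thread.

# Sketch: probe the usable path MTU with DF-bit pings from a Linux host.
# Total IP packet size = ICMP payload + 28 (20 IP + 8 ICMP).
# On a lossy path you may want more probes per size than shown here.
import subprocess

def df_ping_ok(host, ip_size):
    payload = ip_size - 28
    r = subprocess.run(
        ["ping", "-c", "3", "-W", "1", "-M", "do", "-s", str(payload), host],
        capture_output=True,
    )
    return r.returncode == 0  # 0 if at least one reply came back un-fragmented

def path_mtu(host, lo=1200, hi=1500):
    # Binary search for the largest IP packet that passes without fragmentation.
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if df_ping_ok(host, mid):
            best, lo = mid, mid + 1
        else:
            hi = mid - 1
    return best

# Example (placeholder address): path_mtu("203.0.113.1"), run during the day and
# again at night, to see whether the usable path MTU actually changes.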

What you might also consider, if the drops are due to congestion, is a later DMVPN feature, Adaptive QoS, which can dynamically shape the tunnel's rate (if supported on your router).

Q4 - try 1300.