04-30-2024 06:17 AM
I have created a new VTI and I see output drops incrementing. I migrated from a 100M circuit to a 1G circuit. The old interface also had output drops, which I attributed to the old 100M circuit, but the new interface has output drops as well. How do I troubleshoot this? I don't have support on this old 2921.
OLD:
interface GigabitEthernet0/2.602
description To ISP
bandwidth 100000
encapsulation dot1Q 602
ip address x.x.x.x 255.255.255.252
no cdp enable
crypto map CRYPTO-MAP
end
interface Tunnel0
ip address 192.168.191.2 255.255.255.252
load-interval 30
tunnel source x.x.x.x
tunnel destination y.y.y.y
end
Tunnel0 is up, line protocol is up
Hardware is Tunnel
Internet address is 192.168.191.2/30
MTU 17916 bytes, BW 100 Kbit/sec, DLY 50000 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation TUNNEL, loopback not set
Keepalive not set
Tunnel linestate evaluation up
Tunnel source x.x.x.x, destination y.y.y.y
Tunnel protocol/transport GRE/IP
Key disabled, sequencing disabled
Checksumming of packets disabled
Tunnel TTL 255, Fast tunneling enabled
Tunnel transport MTU 1476 bytes
Tunnel transmit bandwidth 8000 (kbps)
Tunnel receive bandwidth 8000 (kbps)
Last input 00:00:02, output 00:00:02, output hang never
Last clearing of "show interface" counters 1y35w
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 1856875
Queueing strategy: fifo
Output queue: 0/0 (size/max)
30 second input rate 0 bits/sec, 0 packets/sec
30 second output rate 0 bits/sec, 0 packets/sec
3703103592 packets input, 2831901639 bytes, 0 no buffer
Received 0 broadcasts (0 IP multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
1691109591 packets output, 470049187 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 unknown protocol drops
0 output buffer failures, 0 output buffers swapped out
NEW:
interface GigabitEthernet0/0
description TO another_switch Te2/0/3
ip address a.a.a.a 255.255.255.240
duplex auto
speed auto
end
interface Tunnel2
description Primary tunnel to somewhere
ip address 192.168.191.10 255.255.255.252
load-interval 30
tunnel source a.a.a.a
tunnel mode ipsec ipv4
tunnel destination b.b.b.b
tunnel protection ipsec profile IPSEC_PROF
end
Tunnel2 is up, line protocol is up
Hardware is Tunnel
Description: Primary tunnel to somewhere
Internet address is 192.168.191.10/30
MTU 17878 bytes, BW 100 Kbit/sec, DLY 50000 usec,
reliability 255/255, txload 255/255, rxload 255/255
Encapsulation TUNNEL, loopback not set
Keepalive not set
Tunnel linestate evaluation up
Tunnel source a.a.a.a, destination b.b.b.b
Tunnel protocol/transport IPSEC/IP
Tunnel TTL 255
Tunnel transport MTU 1438 bytes
Tunnel transmit bandwidth 8000 (kbps)
Tunnel receive bandwidth 8000 (kbps)
Tunnel protection via IPSec (profile "IPSEC_PROF")
Last input never, output never, output hang never
Last clearing of "show interface" counters 1w0d
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 13052
Queueing strategy: fifo
Output queue: 0/0 (size/max)
30 second input rate 113000 bits/sec, 163 packets/sec
30 second output rate 7370000 bits/sec, 664 packets/sec
90784069 packets input, 2451758946 bytes, 0 no buffer
Received 0 broadcasts (0 IP multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
80191952 packets output, 2446789603 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 unknown protocol drops
0 output buffer failures, 0 output buffers swapped out
04-30-2024 06:39 AM
You may want to start by configuring the MTU, MSS, and a QoS policy with a shaper on the tunnel interface that matches the bandwidth at the remote site.
interface Tunnel2
description Primary tunnel to somewhere
ip address 192.168.191.10 255.255.255.252
ip mtu 1400
ip tcp adjust-mss 1360
load-interval 30
tunnel source a.a.a.a
tunnel mode ipsec ipv4
tunnel destination b.b.b.b
tunnel protection ipsec profile IPSEC_PROF
service-policy output <policy-map name>
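As a sanity check on those two values (my own arithmetic, not from the thread): the clamped MSS is derived from the tunnel IP MTU by subtracting the 20-byte IPv4 header and the 20-byte TCP header, which is why ip mtu 1400 pairs with ip tcp adjust-mss 1360.

```python
# Sketch: derive the TCP MSS clamp that fits a chosen tunnel IP MTU.
# Assumes IPv4 with no IP options and a standard 20-byte TCP header.
IP_HEADER = 20
TCP_HEADER = 20

def mss_for_mtu(ip_mtu: int) -> int:
    """MSS that lets a full TCP segment fit inside the given IP MTU."""
    return ip_mtu - IP_HEADER - TCP_HEADER

print(mss_for_mtu(1400))  # 1360, matching "ip tcp adjust-mss 1360"
```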
04-30-2024 07:09 AM
MTU 17878 <- this is the issue.
You need to hardcode the MTU under the tunnel:
ip mtu 1400
instead of letting the tunnel discover the MTU in the path.
Do that and check the output drops.
MHM
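For context on why a fixed ip mtu 1400 is comfortable here (my own arithmetic, not from the post): the show interface output reports a tunnel transport MTU of 1438 bytes, i.e. 62 bytes of IPsec overhead against a 1500-byte physical MTU, so an inner MTU of 1400 leaves headroom even if the overhead grows slightly (ESP padding, different transforms).

```python
# Sketch: check a candidate tunnel "ip mtu" against the transport MTU
# reported by "show interface Tunnel2" (1438 bytes in this thread).
PHYSICAL_MTU = 1500
TRANSPORT_MTU = 1438            # from the show interface output above

overhead = PHYSICAL_MTU - TRANSPORT_MTU
print(overhead)                 # 62 bytes of IPsec/tunnel overhead

candidate_ip_mtu = 1400
fits = candidate_ip_mtu <= TRANSPORT_MTU
print(fits)                     # True: 1400-byte inner packets avoid fragmentation
```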
04-30-2024 11:24 AM
Hello,
Do you see output drops on the physical interface as well?
Post the output of:
show buffers
04-30-2024 03:49 PM
Ditto on physical interface stats.
As tunnel interfaces are virtual interfaces (much like SVIs), what might cause a tunnel interface drop?
Well, one potential cause is a too-large packet arriving at the tunnel interface with its DF bit set. See the reply in this thread; I suggest trying the things mentioned there to confirm. (Of course, it's hard to see how the tunnel's logical 17K MTU could be exceeded, but the tunnel's transport MTU could often be exceeded, though likely not too often by packets with DF set.)
Another possibility, I suspect, would be transient overtaxing of the CPU of a 2921, which Cisco recommends only for about 50 Mbps of WAN traffic. What do your CPU history stats look like? (BTW, software-based routers doing fragmentation may significantly load up the CPU.)
BTW, I compute your old tunnel's drop rate as about 0.1% and your new tunnel's drop rate as about 0.02%. Possibly your drop problem is not impacting your tunnel traffic too adversely.
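For reference, those percentages come straight from the counters in the show interface output above (total output drops divided by packets output):

```python
# Drop-rate arithmetic from the "show interface" counters in this thread.
old_drops, old_out = 1_856_875, 1_691_109_591   # Tunnel0 (old)
new_drops, new_out = 13_052, 80_191_952         # Tunnel2 (new)

old_rate = old_drops / old_out
new_rate = new_drops / new_out
print(f"{old_rate:.2%}")  # ~0.11%
print(f"{new_rate:.2%}")  # ~0.02%
```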
A very worthwhile Cisco TechNote to review; although it doesn't explicitly address VTIs, the same principles apply.
A very, very useful tunnel feature is "ip tcp adjust-mss", mentioned in the TechNote. However, it only works on TCP traffic, and only if the TCP setup handshake is "seen" by the router.
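To illustrate what the feature does (a simplified sketch of the behavior, not IOS code): the router rewrites the MSS option in TCP SYN/SYN-ACK packets transiting the tunnel, lowering any advertised MSS above the configured clamp; flows whose handshake the router never sees keep their original MSS.

```python
# Simplified model of "ip tcp adjust-mss": clamp the MSS option seen
# in a transiting TCP SYN. Real IOS rewrites the option bytes in place
# and fixes up the TCP checksum; this only shows the decision logic.
def adjust_mss(advertised_mss: int, clamp: int = 1360) -> int:
    """Return the MSS the far end will see after clamping."""
    return min(advertised_mss, clamp)

print(adjust_mss(1460))  # 1360: a LAN host's typical MSS gets lowered
print(adjust_mss(1200))  # 1200: already small enough, left untouched
```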