Transmit discards - Total output drops on Tunnel

gajanangavli
Level 1

Hi,

I have EIGRP running over a GRE tunnel. While monitoring the tunnel through SolarWinds, I observed a large number of transmit discards on the tunnel below.

sh int tu0

Tunnel0 is up, line protocol is up
  Hardware is Tunnel
  Internet address is 10.254.11.98/30
  MTU 1514 bytes, BW 10000 Kbit, DLY 100 usec,
     reliability 255/255, txload 179/255, rxload 248/255
  Encapsulation TUNNEL, loopback not set
  Keepalive set (10 sec), retries 3
  Tunnel protocol/transport GRE/IP
    Key disabled, sequencing disabled
    Checksumming of packets disabled
  Tunnel TTL 255
  Fast tunneling enabled
  Tunnel transmit bandwidth 8000 (kbps)
  Tunnel receive bandwidth 8000 (kbps)
  Last input 00:00:00, output 00:00:00, output hang never
  Last clearing of "show interface" counters 01:49:58
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 674
  Queueing strategy: fifo (QOS pre-classification)
  Output queue: 0/0 (size/max)
  5 minute input rate 9806000 bits/sec, 2019 packets/sec
  5 minute output rate 7035000 bits/sec, 2019 packets/sec
     10376324 packets input, 1355200320 bytes, 0 no buffer
     Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
     10334112 packets output, 3480887620 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 output buffer failures, 0 output buffers swapped out
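As a sanity check on these counters, IOS reports txload/rxload as n/255 of the interface's configured bandwidth, so the load fields can be converted back to bits per second. A minimal arithmetic sketch (not from the thread):

```python
# Sketch: IOS txload/rxload are n/255 of the configured "bandwidth"
# (BW 10000 Kbit on this tunnel), so the fraction times BW gives bps.
def load_to_bps(load_numerator: int, bw_kbit: int) -> float:
    """Approximate throughput implied by an n/255 load counter."""
    return load_numerator / 255 * bw_kbit * 1000

# txload 179/255 -> ~7.0 Mbps, matching the 7035000 bits/sec output rate
print(load_to_bps(179, 10000))   # ~7019608 bps
# rxload 248/255 -> ~9.7 Mbps, matching the 9806000 bits/sec input rate
print(load_to_bps(248, 10000))   # ~9725490 bps
```

Both values line up with the 5-minute rates in the output above, so the counters are internally consistent.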

The link has 10 Mbps capacity; the configuration is as below.

interface Tunnel0
 bandwidth 10000
 ip address x.x.x.x
 ip load-sharing per-packet
 ip tcp adjust-mss 1380
 delay 10
 qos pre-classify
 keepalive 10 3
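An aside on the `ip tcp adjust-mss 1380` line: the clamped MSS has to leave room for the GRE encapsulation on the path. A rough sketch of the arithmetic, assuming a 1500-byte path MTU (the MTU value is an assumption, not stated in the thread):

```python
# Sketch: MSS headroom under plain GRE, assuming a 1500-byte path MTU.
PATH_MTU = 1500
IP_TCP_HEADERS = 40   # 20-byte inner IP + 20-byte TCP
GRE_OVERHEAD = 24     # 20-byte outer IP + 4-byte GRE header

# Largest MSS that avoids fragmenting the GRE-encapsulated packet:
max_mss = PATH_MTU - GRE_OVERHEAD - IP_TCP_HEADERS
print(max_mss)   # 1436
```

The configured 1380 sits safely below this 1436 ceiling, leaving extra headroom (commonly reserved for additional overhead such as IPsec).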

Can somebody help resolve this issue?

4 Replies

Edison Ortiz
Hall of Fame

I remember your previous posting.

You are shaping to 9Mbps on the FastEthernet interface that is facing your MPLS provider.

If the tunnel is sending more than 9Mbps, your FE interface will buffer that traffic up to a point. Any excess traffic over 9Mbps can potentially drop and that's what's occurring with the tunnel. Looking at your output rate, it looks like you are hovering around that amount.

You need more bandwidth from the MPLS Service Provider or you need to identify who is creating the bandwidth demand and address the problem at the source.

--

Edison.
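Edison's explanation can be illustrated with a toy model: a shaper releases traffic at the shaped rate and buffers bursts up to a queue limit; once the queue is full, anything arriving above the shaped rate is dropped, which surfaces as output drops on the tunnel. A hedged Python sketch (the 9 Mbps rate is from the reply; the model is deliberately simplistic):

```python
# Toy model of a 9 Mbps shaper with a finite queue: offered load
# above the shaped rate first fills the queue, then gets dropped.
def shaped_drops_bps(offered_bps: float, shape_bps: float) -> float:
    """Steady-state drop rate once the shaper's queue is full."""
    return max(0.0, offered_bps - shape_bps)

# ~7 Mbps offered (this capture's output rate): no steady-state drops
print(shaped_drops_bps(7_035_000, 9_000_000))   # 0.0
# A 10 Mbps burst against the 9 Mbps shaper: ~1 Mbps has to drop
print(shaped_drops_bps(10_000_000, 9_000_000))  # 1000000.0
```

This matches the observation that the 5-minute averages can look fine while short bursts above the shaped rate still increment the drop counter.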

Thanks for the reply.

This link is different from the one mentioned earlier.

Though link utilization is normal, around 3 to 4 Mbps, and there is no packet loss in ping, the output drops in "sh int tu0" increase continuously.

3 to 4Mbps?

I see more like 7 to 8Mbps based on the output posted.

What's the source interface from the tunnel? What's the bandwidth end-to-end?

Any QoS?

We need more info.

Tunnel interfaces are virtual interfaces, and drops that occur on a tunnel are often caused by the underlying physical interface.

Hi,

The tunnel source and destination are physical interfaces. I have a fresh capture at 3 to 4 Mbps in which the tunnel shows output drops but the physical interface does not.

sh int tu0

Tunnel0 is up, line protocol is up
  MTU 1514 bytes, BW 10000 Kbit, DLY 100 usec,
     reliability 255/255, txload 43/255, rxload 85/255
  Encapsulation TUNNEL, loopback not set
  Keepalive set (10 sec), retries 3
  Tunnel protocol/transport GRE/IP
    Key disabled, sequencing disabled
    Checksumming of packets disabled
  Tunnel TTL 255
  Fast tunneling enabled
  Tunnel transmit bandwidth 8000 (kbps)
  Tunnel receive bandwidth 8000 (kbps)
  Last input 00:00:00, output 00:00:00, output hang never
  Last clearing of "show interface" counters 00:13:56
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 82
  Queueing strategy: fifo (QOS pre-classification)
  Output queue: 0/0 (size/max)

sh int fa1/1

FastEthernet1/1 is up, line protocol is up
  Full-duplex, 100Mb/s, 100BaseTX/FX
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 00:00:01, output 00:00:00, output hang never
  Last clearing of "show interface" counters 00:12:24
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: Class-based queueing
  Output queue: 0/1000/64/0 (size/max total/threshold/drops)

Also, no drops are observed in the QoS policy on that physical interface.

sh policy-map interface fastEthernet 1/1

 FastEthernet1/1

  Service-policy output: VOICE-VC

    Class-map: CLASS-EF (match-all)
      26140 packets, 5812960 bytes
      5 minute offered rate 8000 bps, drop rate 0 bps
      Match: access-group name VOICE-EF
      Queueing
        Strict Priority
        Output Queue: Conversation 264
        Bandwidth 20 (%)
        Bandwidth 20000 (kbps) Burst 500000 (Bytes)
        (pkts matched/bytes matched) 0/0
        (total drops/bytes drops) 0/0

    Class-map: CLASS-SIG (match-all)
      0 packets, 0 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: access-group name VOICE-SIG
      Queueing
        Output Queue: Conversation 265
        Bandwidth 50 (kbps) Max Threshold 64 (packets)
        (pkts matched/bytes matched) 0/0
        (depth/total drops/no-buffer drops) 0/0/0

    Class-map: QOS-VC-CLASS (match-all)
      0 packets, 0 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: access-group name QOS-VC-VOICE
      Queueing
        Output Queue: Conversation 266
        Bandwidth 2048 (kbps) Max Threshold 64 (packets)
        (pkts matched/bytes matched) 0/0
        (depth/total drops/no-buffer drops) 0/0/0

    Class-map: class-default (match-any)
      1058141 packets, 254149904 bytes
      5 minute offered rate 2406000 bps, drop rate 0 bps
      Match: any
