
Output drops / Transmit Discards

Aqe@2016
Level 1

Hello Folks, 


One of our branch routers is facing latency issues. Upon checking, I found the circuit clean and utilization normal; however, there are a lot of output drops. This is a 6M MPLS circuit. 

SolarWinds NMS is also showing transmit discards on the WAN-facing interface. The applied QoS policy was shaping to 100%; I changed it to 85%, but output drops are still increasing. For example, around 8:00 PM CST I cleared the counters before applying the 85% policy, and at the moment (1:27 AM) I can see 520 output drops on the interface. The NMS is also showing transmit discards. Can anyone please help? It is a Cisco 2901 router. 

Here is the output with the 100% shaper, before the 85% shaping policy was applied: 

show int gigabitEthernet 0/1
GigabitEthernet0/1 is up, line protocol is up
Hardware is CN Gigabit Ethernet, address is 00f6.6384.2181 (bia 00f6.6384.2181)
MTU 1500 bytes, BW 6000 Kbit/sec, DLY 100 usec,
reliability 255/255, txload 4/255, rxload 2/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full Duplex, 100Mbps, media type is RJ45
output flow-control is unsupported, input flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:00, output 00:00:00, output hang never
Last clearing of "show interface" counters 4w6d
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 121037
Queueing strategy: Class-based queueing
Output queue: 0/1000/0 (size/max total/drops)
30 second input rate 52000 bits/sec, 25 packets/sec
30 second output rate 107000 bits/sec, 25 packets/sec
96300307 packets input, 38294522 bytes, 0 no buffer
Received 302429 broadcasts (154240 IP multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 302429 multicast, 0 pause input
93874826 packets output, 86768059 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 pause output
0 output buffer failures, 0 output buffers swapped out

Here is the interface configuration with 100% shaping: 

show runn int GigabitEthernet0/1
Building configuration...

Current configuration : 277 bytes
!
interface GigabitEthernet0/1
bandwidth 6000
ip address x.x.x.x 255.255.255.252
ip flow ingress
ip pim sparse-mode
load-interval 30
duplex full
speed 100
history BPS
service-policy output SHAPE_100

----------------------------------------------------------------------
policy-map SHAPE_100
class class-default
shape average percent 100
service-policy QUEUE
--------------------------------------------------------------------
policy-map QUEUE
class EF
priority percent 50
class AF41
bandwidth remaining percent 15
class AF31
bandwidth remaining percent 60
class AF21
bandwidth remaining percent 20
class AF11
bandwidth remaining percent 3
class BE
bandwidth remaining percent 1
---------------------------------------------------------------------

Here is the show int gi0/1 output after SHAPE_85 was applied: 

#show int gigabitEthernet 0/1
GigabitEthernet0/1 is up, line protocol is up
Hardware is CN Gigabit Ethernet, address is 00f6.6384.2181 (bia 00f6.6384.2181)
MTU 1500 bytes, BW 6000 Kbit/sec, DLY 100 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full Duplex, 100Mbps, media type is RJ45
output flow-control is unsupported, input flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:00, output 00:00:00, output hang never
Last clearing of "show interface" counters 04:48:49
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 562
Queueing strategy: Class-based queueing
Output queue: 0/1000/562 (size/max total/drops)
30 second input rate 15000 bits/sec, 14 packets/sec
30 second output rate 47000 bits/sec, 15 packets/sec
201041 packets input, 39694711 bytes, 0 no buffer
Received 1736 broadcasts (871 IP multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 1736 multicast, 0 pause input
233482 packets output, 82459145 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 pause output
0 output buffer failures, 0 output buffers swapped out

--------------------------------
Interface configuration with the 85% shaper: 

show runn int gigabitEthernet 0/1
Building configuration...

Current configuration : 276 bytes
!
interface GigabitEthernet0/1
bandwidth 6000
ip address x.x.x.x 255.255.255.252
ip flow ingress
ip pim sparse-mode
load-interval 30
duplex full
speed 100
history BPS
service-policy output SHAPE_85
end



--------------------------------
policy-map SHAPE_85
class class-default
shape average percent 85
service-policy QUEUE

--------------------------------------------------------------------
policy-map QUEUE
class EF
priority percent 50
class AF41
bandwidth remaining percent 15
class AF31
bandwidth remaining percent 60
class AF21
bandwidth remaining percent 20
class AF11
bandwidth remaining percent 3
class BE
bandwidth remaining percent 1
---------------------------------------------------------------------

Router is using following image:
c2900-universalk9-mz.SPA.154-3.M3.bin


Help please... 

3 Replies

Joseph W. Doherty
Hall of Fame

Why are the drops a question?

Is it because your average utilization is less than 6 Mbps?

If so: if your LAN side supports gig, it's not very hard for microbursts to congest 6 Mbps.

If that's the problem, you could see whether your router supports increasing the shaping buffers.  You could also consider adjusting your Bc, but before doing that you would ideally want to know what Bc your MPLS provider supports.

BTW, it's my belief that most Cisco shapers don't appear to "count" L2 overhead, but providers usually do.  I.e., your 85% would be a good value to account for "average" L2 overhead.
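
To put rough numbers on that 85% (my own back-of-envelope arithmetic, assuming the provider counts the full Ethernet frame including preamble and inter-packet gap, which varies by carrier): L2 overhead is about 38 bytes per packet (14 header + 4 FCS + 8 preamble + 12 IPG).

For 500-byte IP packets:  38 / 538 = ~7% overhead
  85% of 6 Mbps = 5.1 Mbps of L3 traffic = ~5.46 Mbps on the wire, safely under 6 Mbps
For 64-byte IP packets:   38 / 102 = ~37% overhead
  5.1 Mbps of L3 traffic = ~7 Mbps on the wire, i.e. over the provider's rate

So a small-packet-heavy mix (VoIP, ACKs) can still overrun the circuit even at 85%.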

JosephDoherty  

Thanks for the response; sorry, my bad that I couldn't reply earlier. 

As far as utilization is concerned, most of the time it is less than 50%; only occasionally does the transmit bar go above 66%. Output drops keep climbing during and after business hours. My LAN interface is a Gig interface connected to a Gig port on a Cisco 2960 switch. Could you please point me to where I can check the shaping buffer capacity limit the router supports? 

Please also have a look at the following output of show process cpu history: 

#Show process cpu history
05:14:55 AM Monday Jan 23 2017 CST





222666662222222222222222222222222222222222211111555552222222
100
90
80
70
60
50
40
30
20
10 ***** *****
0....5....1....1....2....2....3....3....4....4....5....5....6
0 5 0 5 0 5 0 5 0 5 0
CPU% per second (last 60 seconds)




1
555555455655565565565535456655555555555555555554455555555655
100
90
80
70
60
50
40
30
20
10 ****** ***************** ********************** ***********
0....5....1....1....2....2....3....3....4....4....5....5....6
0 5 0 5 0 5 0 5 0 5 0
CPU% per minute (last 60 minutes)
* = maximum CPU% # = average CPU%




11214111121 21111 112221121252111111111112122212222152212111312223332121
940878847148277969493525928033288629696460800457220238398885972202548545
100
90
80
70
60
50 * * *
40 * * * * *
30 * * * * ** * * ****
20 * ***** ** ***** *********** *** **** *********** ********************
10 ************************************************************************
0....5....1....1....2....2....3....3....4....4....5....5....6....6....7..
0 5 0 5 0 5 0 5 0 5 0 5 0
CPU% per hour (last 72 hours)
* = maximum CPU% # = average CPU%

Also see the output of the same command when run on the switch. 

show processes cpu history

2222222222222222222222222222222222222222222222222222222222
1111111111111111110000011110000011111111111111100000666660
100
90
80
70
60
50
40
30 *****
20 **********************************************************
10 **********************************************************
0....5....1....1....2....2....3....3....4....4....5....5....
0 5 0 5 0 5 0 5 0 5
CPU% per second (last 60 seconds)

2222222222222222222222222222222222222222222222222222222222
6665877766674766566587475766766567766667666766677677676557
100
90
80
70
60
50
40
30 ************ ********* ***********************************
20 ##########################################################
10 ##########################################################
0....5....1....1....2....2....3....3....4....4....5....5....
0 5 0 5 0 5 0 5 0 5
CPU% per minute (last 60 minutes)
* = maximum CPU% # = average CPU%

2222323232223322222222322322232233323722422233242222322222332324333222
8878373808980198879987798398809740084398098933849888787779018492132789
100
90
80
70 *
60 *
50 *
40 * * * * * *
30 **********************************************************************
20 ######################################################################
10 ######################################################################
0....5....1....1....2....2....3....3....4....4....5....5....6....6....7.
0 5 0 5 0 5 0 5 0 5 0 5 0
CPU% per hour (last 72 hours)
* = maximum CPU% # = average CPU%

I will get a capture of today's CPU process history as well. 

Please also have a look at the utilization graph. The green bar represents average transmit and the blue bar shows average receive. 

Please advise how to proceed further. 

Some IOS versions allow you to set a shaper's queue resources.  The easiest way to find out whether yours does is to see if that's a configuration option.  If the option is supported, there's probably some limit, like 4,096 packets.  Again, what's supported varies.
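
As a sketch of what to look for (the queue-limit command and its maximum vary by platform and IOS release; 512 is only an illustrative value, and BE is one of the class names from your QUEUE policy):

show policy-map interface GigabitEthernet0/1 output
! the per-class queue-limit and drop counters show which class is actually dropping
!
policy-map QUEUE
 class BE
  queue-limit 512

Checking show policy-map interface first tells you which class the drops are landing in before you grow any queues.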

Even though your usage stats don't show a heavy link load, if you're not already aware, you might want to research "microbursts".  If that's what's happening, as noted in my earlier post, you might be able to reduce drops by increasing the shaper's Bc and/or the shaper's queue depth.
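
For the Bc part, percent-based shapers on newer IOS take the committed burst as a time interval rather than in bits; something along these lines (exact syntax, defaults, and supported intervals vary by release, and 20 ms is only an illustrative value that should be coordinated with your provider's policer Bc):

policy-map SHAPE_85
 class class-default
  shape average percent 85 20 ms
  service-policy QUEUE

A larger Bc lets the shaper pass bigger bursts before queueing or dropping them, at the risk of the provider's policer dropping them instead if its own Bc is smaller.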
