Total output drops incrementing - ASR1004

Navin_Natarajan
Level 1

Can someone assist to identify the reasons for the "Total Output Drops" counter incrementing when there are no other output errors?

This is an SP-facing interface with 6 subinterfaces, but I'm unable to identify which subinterface the drops are occurring on. Please assist.

GigabitEthernet0/0/2 is up, line protocol is up

.

.

Full Duplex, 1000Mbps, link type is force-up, media type is SX
output flow-control is on, input flow-control is on
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:02:11, output 00:00:05, output hang never
Last clearing of "show interface" counters never
Input queue: 0/375/0/0 (size/max/drops/flushes); Total output drops: 134927138
Queueing strategy: Class-based queueing
Output queue: 0/40 (size/max)
5 minute input rate 36772000 bits/sec, 7423 packets/sec
5 minute output rate 76211000 bits/sec, 13112 packets/sec
73193437309 packets input, 39418789122864 bytes, 0 no buffer
Received 70470 broadcasts (0 IP multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 0 multicast, 0 pause input
121475920140 packets output, 85200669659695 bytes, 0 underruns
0 output errors, 0 collisions, 2 interface resets
0 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 pause output
0 output buffer failures, 0 output buffers swapped out

12 Replies

balaji.bandi
Hall of Fame

Can you post the same output from the subinterfaces as well?

Check these troubleshooting tips:

https://www.cisco.com/c/en/us/support/docs/routers/asr-1000-series-aggregation-services-routers/110531-asr-packet-drop.html
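
That document centers on finding where the QFP forwarding path is dropping packets, e.g. with commands along these lines (a sketch; exact keywords and output vary by IOS XE release):

show platform hardware qfp active statistics drop
show platform hardware qfp active interface if-name GigabitEthernet0/0/2 statistics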

BB

***** Rate All Helpful Responses *****

How to Ask The Cisco Community for Help

Navin_Natarajan
Level 1

Thank you for the prompt response. Below are the requested details.

GigabitEthernet0/0/2.11 is up, line protocol is up

  Hardware is SPA-8X1GE-V2,

  .

  MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,

     reliability 255/255, txload 16/255, rxload 4/255

  Encapsulation 802.1Q Virtual LAN, Vlan ID  11.

  ARP type: ARPA, ARP Timeout 04:00:00

  Keepalive not supported

  Last clearing of "show interface" counters never

 

 

GigabitEthernet0/0/2.12 is up, line protocol is up

  Hardware is SPA-8X1GE-V2,

  .

  MTU 1500 bytes, BW 200000 Kbit/sec, DLY 10 usec,

     reliability 255/255, txload 16/255, rxload 4/255

  Encapsulation 802.1Q Virtual LAN, Vlan ID  12.

  ARP type: ARPA, ARP Timeout 04:00:00

  Keepalive not supported

  Last clearing of "show interface" counters never

 

 

GigabitEthernet0/0/2.16 is up, line protocol is up

  Hardware is SPA-8X1GE-V2,

  . 

  MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,

     reliability 255/255, txload 16/255, rxload 4/255

  Encapsulation 802.1Q Virtual LAN, Vlan ID  16.

  ARP type: ARPA, ARP Timeout 04:00:00

  Keepalive not supported

  Last clearing of "show interface" counters never

 

 

GigabitEthernet0/0/2.31 is up, line protocol is up

  Hardware is SPA-8X1GE-V2,

  .

  MTU 1500 bytes, BW 70000 Kbit/sec, DLY 10 usec,

     reliability 255/255, txload 16/255, rxload 4/255

  Encapsulation 802.1Q Virtual LAN, Vlan ID  31.

  ARP type: ARPA, ARP Timeout 04:00:00

  Keepalive not supported

  Last clearing of "show interface" counters never

 

 

GigabitEthernet0/0/2.42 is up, line protocol is up

  Hardware is SPA-8X1GE-V2,

  .

  MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,

     reliability 255/255, txload 16/255, rxload 4/255

  Encapsulation 802.1Q Virtual LAN, Vlan ID  42.

  ARP type: ARPA, ARP Timeout 04:00:00

  Keepalive not supported

  Last clearing of "show interface" counters never

 

This is a bottleneck issue: traffic arrives for that egress interface faster than it can be transmitted, so the queue has to drop the packets it cannot forward.

Hello


@Navin_Natarajan wrote:

GigabitEthernet0/0/2.
Last clearing of "show interface" counters never


int gig0/0/2
load-interval 30
exit

clear counters 

Please post the interface statistics thereafter.


Please rate and mark as an accepted solution if you have found any of the information provided useful.
This then could assist others on these forums to find a valuable answer and broadens the community’s global network.

Kind Regards
Paul

Joseph W. Doherty
Hall of Fame

Generally, as @MHM Cisco World also noted, traffic destined to egress that interface arrives faster than it can be transmitted, so it needs to be queued; when the queue overflows, packets are dropped.

I was going to just further write:

On a gig interface, an egress queue size of 40 is very likely woefully undersized.  (BTW, an egress queue of 40 seems to be a long, long standing default for Cisco on many interfaces, and by my calculation is, more or less, almost an ideal value for 10 Mbps Ethernet or a 1.5 Mbps T-1.)

Try increasing your egress queue size to something like 4K and see whether drops decrease, without any other noticeable adverse impact on your network applications.
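
For example, a minimal sketch of the two usual knobs (the values are hypothetical and <your egress policy> is a placeholder; where CBWFQ is applied, the per-class queue-limit is normally what matters rather than the interface hold queue):

interface GigabitEthernet0/0/2
 hold-queue 4096 out
!
policy-map <your egress policy>
 class class-default
  queue-limit 4096 packets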

But then, I noticed your interface notes it's using "Queueing strategy: Class-based queueing", so please provide interface and all sub-interface configs, and we'll go from there.

I will not recommend changing the queue depth; I would prefer that he put a full QoS plan in place for the ASR.

 

interface GigabitEthernet0/0/2

 description *****

 no ip address

 no ip proxy-arp

 no negotiation auto

 

interface GigabitEthernet0/0/2.2

 description *****

 encapsulation dot1Q 2

 vrf forwarding vrf1

 ip address *****

 no ip proxy-arp

 service-policy output *****

 

interface GigabitEthernet0/0/2.3

 description *****

 bandwidth 200000

 encapsulation dot1Q 3

 vrf forwarding vrf2

 ip address *****

 no ip proxy-arp

 ip flow monitor ***** input

 ip flow monitor ***** output

 service-policy output *****

 

interface GigabitEthernet0/0/2.4

 description *****

 encapsulation dot1Q 4

 ip vrf forwarding vrf3

 ip address *****

 no ip proxy-arp

 service-policy output *****

 

interface GigabitEthernet0/0/2.5

 description *****

 bandwidth 70000

 encapsulation dot1Q 5

 vrf forwarding vrf4

 ip address *****

 no ip proxy-arp

 ip flow monitor ***** input

 ip flow monitor ***** output

 service-policy output *********

 

interface GigabitEthernet0/0/2.6

 description******

 encapsulation dot1Q 6

 ip address *****

 no ip proxy-arp

 service-policy output *********

Ah, okay, as expected, you're using CBWFQ egress policies.

Since you've redacted so much detail from your posting, I'm assuming you're very sensitive about what you post.

In that case, if you know how to obtain and analyze CBWFQ interface stats, where your drops are happening should be revealed in those stats.

If you need help with the analysis, you'll need to post those stats; the amount of help possible will be closely tied to the amount of information you provide.

Further, for additional help/recommendations on possible changes to your CBWFQ policy, we'll need to see the details of your current config and what your QoS goals are.

Thank you for the update, Joseph. I can see the TailDrop and Wred counters incrementing in the global drop stats. Can we assume the output packets are being dropped due to congestion?

Yesterday's stats:

Global Drop Stats                         Packets                  Octets 

-------------------------------------------------------------------------

BadIpChecksum                                 376                   23688 

Disabled                                        1                      64 

IpTtlExceeded                            19619242              1390638279 

Ipv4NoRoute                                 10834                 5684713 

Ipv4Null0                              1027972687             92809940484 

QosPolicing                                    30                   42540 

TailDrop                                 76633932             89913367358 

Wred                                     58327320             35305186208

 

Today's stats:

 

Global Drop Stats                         Packets                  Octets

-------------------------------------------------------------------------

BadIpChecksum                                 376                   23688

Disabled                                        1                      64

IpTtlExceeded                            19749726              1400948276

Ipv4NoRoute                                 10834                 5684713

Ipv4Null0                              1030840628             93078221489

QosPolicing                                    30                   42540

TailDrop                                 76904613             90222554000

Wred                                     58428087             35365163598

The stats you'll want to see are:

show policy-map interface GigabitEthernet0/0/2.2   ! and likewise for .3 through .6

Thank you, Joseph. I can see some drops in class-default; is that something I need to focus on?

 

147/28-RO01#sh policy-map interface gigabitEthernet 0/0/2.2

 

GigabitEthernet0/0/2.2

 

  Service-policy output: Canada_SHAPE

 

    Class-map: class-default (match-any)

      95296841783 packets, 71908649154521 bytes

      5 minute offered rate 45021000 bps, drop rate 41000 bps

      Match: any

      Queueing

      queue limit 783 packets

      (queue depth/total drops/no-buffer drops) 0/87336055/0

      (pkts output/bytes output) 95208936719/71841801570422

      shape (average) cir 188000000, bc 752000, be 752000

      target shape rate 188000000

 

      Service-policy : Canada_QOS

 

        queue stats for all priority classes:

          Queueing

          queue limit 512 packets

          (queue depth/total drops/no-buffer drops) 0/0/0

          (pkts output/bytes output) 117955802/80012935574

 

        Class-map: Canada_NETWORK (match-any)

          93116350 packets, 10716455538 bytes

          5 minute offered rate 4000 bps, drop rate 0000 bps

          Match:  dscp cs6 (48) cs7 (56)

          Match: ip precedence 6  7

          Queueing

          queue limit 783 packets

          (queue depth/total drops/no-buffer drops) 0/0/0

          (pkts output/bytes output) 93116350/10716455538

          bandwidth remaining 2%

 

        Class-map: Canada_VOIP (match-any)

          117955802 packets, 80012935574 bytes

          5 minute offered rate 62000 bps, drop rate 0000 bps

          Match: access-group name Canada_VZ_SIP_GW

          Match:  dscp ef (46)

          Priority: 50000 kbps, burst bytes 1250000, b/w exceed drops: 0

 

          QoS Set

            dscp ef

              Packets marked 117955802

 

        Class-map: Canada_VOIP-SIGNAL (match-any)

          848398536 packets, 490362439469 bytes

          5 minute offered rate 88000 bps, drop rate 0000 bps

          Match: protocol rtcp

          Match: protocol skinny

          Match: protocol mgcp

          Match:  dscp cs3 (24) af31 (26)

          Match: protocol sip

          Queueing

          queue limit 783 packets

          (queue depth/total drops/no-buffer drops) 0/0/0

          (pkts output/bytes output) 848398536/490362439469

          QoS Set

            dscp cs3

              Packets marked 848398536

          bandwidth remaining 5%

 

        Class-map: REAL_TIME_INTERACTIVE (match-any)

          0 packets, 0 bytes

          5 minute offered rate 0000 bps, drop rate 0000 bps

          Match: access-group name -Canada-TP

          Queueing

          queue limit 783 packets

          (queue depth/total drops/no-buffer drops) 0/0/0

          (pkts output/bytes output) 0/0

          QoS Set

            dscp cs4

              Packets marked 0

          bandwidth remaining 20%

 

        Class-map: MULTIMEDIA_CONF (match-any)

          10308744 packets, 4829972798 bytes

          5 minute offered rate 0000 bps, drop rate 0000 bps

          Match:  dscp af41 (34) af42 (36) af43 (38)

          Queueing

          queue limit 783 packets

          (queue depth/total drops/no-buffer drops) 0/0/0

          (pkts output/bytes output) 10308744/4829972798

          bandwidth remaining 30%

            Exp-weight-constant: 4 (1/16)

            Mean queue depth: 1 packets

            dscp       Transmitted         Random drop      Tail drop          Minimum        Maximum     Mark

                    pkts/bytes            pkts/bytes       pkts/bytes          thresh         thresh     prob

 

            af41    10303562/4827246425      0/0              0/0                341           391  1/10

            af42        5160/2724825         0/0              0/0                292           391  1/10

            af43          22/1548            0/0              0/0                243           391  1/10

 

          Service-policy : MULTIMEDIA_CONF_PM

 

            Class-map: LyncVideo-CM (match-any)

              18 packets, 8219 bytes

              5 minute offered rate 0000 bps, drop rate 0000 bps

              Match: access-group name LyncVideo

              QoS Set

                dscp af42

                  Packets marked 18

 

            Class-map: LyncAudio-CM (match-any)

              0 packets, 0 bytes

              5 minute offered rate 0000 bps, drop rate 0000 bps

              Match: access-group name LyncAudio

              QoS Set

                dscp af41

                  Packets marked 0

 

            Class-map: LyncData-CM (match-any)

              0 packets, 0 bytes

              5 minute offered rate 0000 bps, drop rate 0000 bps

              Match: access-group name LyncData

              QoS Set

                dscp af43

                  Packets marked 0

 

            Class-map: class-default (match-any)

              10308726 packets, 4829964579 bytes

              5 minute offered rate 0000 bps, drop rate 0000 bps

              Match: any

 

        Class-map: CriticalAPP-CM (match-any)

          78415430 packets, 93643947236 bytes

          5 minute offered rate 26000 bps, drop rate 0000 bps

          Match: access-group name BusinessApplication

          Queueing

          queue limit 783 packets

          (queue depth/total drops/no-buffer drops) 0/0/0

          (pkts output/bytes output) 78415430/93643947236

          QoS Set

            dscp af21

              Packets marked 78415430

          bandwidth remaining 10%

            Exp-weight-constant: 4 (1/16)

            Mean queue depth: 1 packets

            dscp       Transmitted         Random drop      Tail drop          Minimum        Maximum     Mark

                    pkts/bytes            pkts/bytes       pkts/bytes          thresh         thresh     prob

 

            af21    78415430/93643947236      0/0              0/0                341           391  1/10

 

        Class-map: BULK_DATA_CM (match-all)

          0 packets, 0 bytes

          5 minute offered rate 0000 bps, drop rate 0000 bps

          Match: none

          Queueing

          queue limit 783 packets

          (queue depth/total drops/no-buffer drops) 0/0/0

          (pkts output/bytes output) 0/0

          bandwidth remaining 30%

            Exp-weight-constant: 4 (1/16)

            Mean queue depth: 0 packets

            dscp       Transmitted         Random drop      Tail drop          Minimum        Maximum     Mark

                    pkts/bytes            pkts/bytes       pkts/bytes          thresh         thresh     prob

 

 

          Service-policy : BULK_DATA_PM

 

            Class-map: Exchange-CM (match-any)

              0 packets, 0 bytes

              5 minute offered rate 0000 bps, drop rate 0000 bps

              Match: access-group name Exchange

              QoS Set

                dscp af12

                  Packets marked 0

 

            Class-map: ActiveSync_CM (match-any)

              0 packets, 0 bytes

              5 minute offered rate 0000 bps, drop rate 0000 bps

              Match: access-group name ActiveSync

              QoS Set

                dscp af13

                  Packets marked 0

 

            Class-map: class-default (match-any)

              0 packets, 0 bytes

              5 minute offered rate 0000 bps, drop rate 0000 bps

              Match: any

 

        Class-map: class-default (match-any)

          94148646877 packets, 71229083390611 bytes

          5 minute offered rate 44831000 bps, drop rate 42000 bps

          Match: any

          Queueing

          queue limit 1248 packets

          (queue depth/total drops/no-buffer drops/flowdrops) 0/87336055/0/29494388

          (pkts output/bytes output) 94060741857/71162235819807

          QoS Set

            dscp default

              Packets marked 94148063760

            Exp-weight-constant: 4 (1/16)

            Mean queue depth: 3 packets

            class       Transmitted      Random drop      Tail/Flow drop Minimum Maximum Mark

                        pkts/bytes       pkts/bytes      pkts/bytes   thresh  thresh  prob

 

            0       94060741857/71162235819807 15050472/9649245142 42791195/24807698023        312           624  1/10

            1               0/0               0/0              0/0                351           624  1/10

            2               0/0               0/0              0/0                390           624  1/10

            3               0/0               0/0              0/0                429           624  1/10

            4               0/0               0/0              0/0                468           624  1/10

            5               0/0               0/0              0/0                507           624  1/10

            6               0/0               0/0              0/0                546           624  1/10

            7               0/0               0/0              0/0                585           624  1/10

          Fair-queue: per-flow queue limit 312 packets

 

"I could see some drop number in class default, is it something i need to focus on?"

Well, if you want to try to reduce them, knowing where they are happening is step one.  Then we can try to determine why they are happening.  That said, the overall drop percentage is only about 0.09%, but that's an overall average.  You might see a much higher drop percentage during brief periods.  (To determine which, you would need a management tool to graph the drop rate over time at much shorter intervals.)
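
(From the posted stats on the .2 policy: 87,336,055 total drops out of 95,296,841,783 offered packets ≈ 0.0009, i.e. roughly 0.09%.)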

The .2 subinterface is the only one showing drops?

I see you're shaping .2; are you shaping all the other subinterfaces too?  For the other subinterfaces, can the aggregate of all of them together exceed the physical bandwidth of the parent interface?
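
(Purely as an illustration: if each of the six subinterfaces were shaped to, say, 200 Mbps, their 1.2 Gbps aggregate would exceed the 1 Gbps parent, so congestion and drops could occur even when no single subinterface exceeds its own shaped rate.)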

Is there a reason why you're using WRED and FQ together in class-default?  (BTW, I generally strongly recommend that those who are not QoS experts not use WRED at all.  Using WRED and FQ together is, in my not so humble opinion, a very bad thing.)

Hmm, the WRED stats list "Tail/Flow drop".  Not sure how to interpret that, as there can be both FQ flow drops and WRED tail drops.  I.e., if the two are actually combined, that somewhat conceals what's happening.

If you don't have a real need for WRED while using FQ, I would suggest removing WRED.
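
A minimal sketch of that change, using the policy names from your output (adjust to however random-detect was actually configured in your policy):

policy-map Canada_QOS
 class class-default
  no random-detect
  ! fair-queue stays in place; only WRED is removed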

Any idea what your BDP (bandwidth delay product) is for this subinterface?  (I've found that setting up buffer resources for about half the BDP often works fairly well.)
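
For example, with purely illustrative numbers: the .2 shaper runs at 188 Mbps, so with an assumed 50 ms RTT (measure yours), BDP ≈ 188,000,000 b/s × 0.05 s ÷ 8 ≈ 1.2 MB; half of that is about 600 KB, or very roughly 400-700 packets at typical packet sizes, which is the same ballpark as the current 783/1248-packet queue limits.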
