ASR9006 A9K-8T-L Bundle-Ether input drops

peteroseneff (Level 1)

Hello,

How can I determine the source of the input drops? The relevant configuration and counters are below.

interface Bundle-Ether2

description ISKRA-CORE

mtu 9216

bundle minimum-active links 1

interface TenGigE0/0/0/2

description ISKRA-CORE-BUNDLE-2-1

bundle id 2 mode active

cdp

transceiver permit pid all

interface TenGigE0/0/0/3

description ISKRA-CORE-BUNDLE-2-2

bundle id 2 mode active

cdp

transceiver permit pid all

show interfaces bundle-ether 2

Mon Aug 15 15:25:24.792 MSD

Bundle-Ether2 is up, line protocol is up

  Interface state transitions: 23

  Hardware is Aggregated Ethernet interface(s), address is f025.72a6.5112

  Description: ISKRA-CORE

  Internet address is Unknown

  MTU 9216 bytes, BW 20000000 Kbit (Max: 20000000 Kbit)

     reliability 255/255, txload 48/255, rxload 82/255

  Encapsulation ARPA,  loopback not set,

  ARP type ARPA, ARP timeout 04:00:00

    No. of members in this bundle: 2

        TenGigE0/0/0/2       Full-duplex      10000Mb/s    Active

        TenGigE0/0/0/3       Full-duplex      10000Mb/s    Active

  Last input 00:00:00, output 00:00:00

  Last clearing of "show interface" counters 03:04:17

  5 minute input rate 6438521000 bits/sec, 840063 packets/sec

  5 minute output rate 3839071000 bits/sec, 671273 packets/sec

     8862456791 packets input, 8542257864525 bytes, 46144565 total input drops

     19006 drops for unrecognized upper-level protocol

     Received 1546760 broadcast packets, 1066715008 multicast packets

              0 runts, 23 giants, 0 throttles, 0 parity

     807 input errors, 631 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort

     7055446493 packets output, 5076223926722 bytes, 0 total output drops

     Output 617340 broadcast packets, 16415285 multicast packets

     0 output errors, 0 underruns, 0 applique, 0 resets

     0 output buffer failures, 0 output buffers swapped out

     0 carrier transitions

show interfaces TenGigE0/0/0/2

Mon Aug 15 15:28:20.805 MSD

TenGigE0/0/0/2 is up, line protocol is up

  Interface state transitions: 25

  Hardware is TenGigE, address is f025.72a6.5112 (bia 108c.cf17.a5e2)

  Layer 1 Transport Mode is LAN

  Description: ISKRA-CORE-BUNDLE-2-1

  Internet address is Unknown

  MTU 9216 bytes, BW 10000000 Kbit (Max: 10000000 Kbit)

     reliability 250/255, txload 20/255, rxload 83/255

  Encapsulation ARPA,

  Full-duplex, 10000Mb/s, LR, link type is force-up

  output flow control is off, input flow control is off

  loopback not set,

  ARP type ARPA, ARP timeout 04:00:00

  Last input 00:00:00, output 00:00:00

  Last clearing of "show interface" counters 03:04:16

  5 minute input rate 3288038000 bits/sec, 429126 packets/sec

  5 minute output rate 786022000 bits/sec, 133124 packets/sec

     3596474548 packets input, 3436690262422 bytes, 9914616 total input drops

     3 drops for unrecognized upper-level protocol

     Received 611691 broadcast packets, 436106754 multicast packets

              1 runts, 94 giants, 0 throttles, 0 parity

     1790 input errors, 1208 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort

     1160027585 packets output, 859300872067 bytes, 0 total output drops

     Output 259477 broadcast packets, 172558 multicast packets

     0 output errors, 0 underruns, 0 applique, 0 resets

     0 output buffer failures, 0 output buffers swapped out

     2 carrier transitions

show interfaces TenGigE0/0/0/3

Mon Aug 15 15:28:34.960 MSD

TenGigE0/0/0/3 is up, line protocol is up

  Interface state transitions: 57

  Hardware is TenGigE, address is f025.72a6.5112 (bia 108c.cf17.a5e3)

  Layer 1 Transport Mode is LAN

  Description: ISKRA-CORE-BUNDLE-2-2

  Internet address is Unknown

  MTU 9216 bytes, BW 10000000 Kbit (Max: 10000000 Kbit)

     reliability 251/255, txload 78/255, rxload 82/255

  Encapsulation ARPA,

  Full-duplex, 10000Mb/s, LR, link type is force-up

  output flow control is off, input flow control is off

  loopback not set,

  ARP type ARPA, ARP timeout 04:00:00

  Last input 00:00:00, output 00:00:00

  Last clearing of "show interface" counters 02:06:20

  5 minute input rate 3224956000 bits/sec, 416334 packets/sec

  5 minute output rate 3081579000 bits/sec, 543474 packets/sec

     3118867582 packets input, 3035698603580 bytes, 24569671 total input drops

     0 drops for unrecognized upper-level protocol

     Received 549814 broadcast packets, 360615879 multicast packets

              0 runts, 0 giants, 0 throttles, 0 parity

     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort

     3959745159 packets output, 2801126805691 bytes, 0 total output drops

     Output 202506 broadcast packets, 11144551 multicast packets

     0 output errors, 0 underruns, 0 applique, 0 resets

     0 output buffer failures, 0 output buffers swapped out

     0 carrier transitions


9 Replies

Alexei Kiritchenko (Cisco Employee)

Hello,

Have a look at the following packet troubleshooting guidelines; they might help you find out more about the packet losses.

https://supportforums.cisco.com/docs/DOC-15552
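
As a first pass (a sketch; replace the location with the line card that hosts your bundle members, 0/0/CPU0 in your case), you can dump the NP counters and look at the DROP entries:

show controllers np counters all location 0/0/CPU0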

Regards,

/A

akiritch,

Thanks for the link.

According to DOC-15552, the ASR9K NPs are experiencing the following types of drops:

Offset  Counter                                                              FrameValue   Rate (pps)

31       PARSE_INGRESS_DROP_CNT                           15435715166        2549

439     RESOLVE_VPLS_REFLECTION_FILTER_DROP_CNT         862544686107      101616

565     UIDB_TCAM_MISS_AGG_DROP                          15376608187        2549

Hi,

For easy drop tracking, we can clear the NP statistics with "clear controller np counters all location 0/0/CPU0" and confirm whether these drop counters are still increasing.
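
For instance (a sketch of that sequence; same location as above), clear the counters and re-check a short while later, filtering on the drop entries:

clear controller np counters all location 0/0/CPU0
show controllers np counters all location 0/0/CPU0 | include DROP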

RESOLVE_VPLS_REFLECTION_FILTER_DROP_CNT - an ingress unicast VPLS frame whose source XID matches its destination XID. These frames are dropped since they cannot be sent back out the source port.

Reflection filtering happens when the MAC lookup finds that the egress interface would be the same as the ingress interface. We drop those packets because they are destined for hosts that sit on the same interface on which we received them.

UIDB_TCAM_MISS_AGG_DROP - no interface handling structure exists for the arriving packets. A likely cause for these drop counts is tagged traffic that is sent into the physical port but doesn't match the encapsulation statement of any subinterface. The parent/main interface is an untagged L3 interface and will reject any tagged traffic that fails classification against all of its subinterfaces/children.
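
In other words, every VLAN tag arriving on that port needs a matching L2 subinterface/EFP. A minimal sketch of what such a subinterface would look like on the ASR9K side (Bundle-Ether2 is from your config; VLAN 1120 is only an example ID):

interface Bundle-Ether2.1120 l2transport
 encapsulation dot1q 1120
!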

Regards,

/A

Hi,

I've just cleared the counters:

**********************************

Tue Aug 16 17:01:24.958 MSD

counters or files successfully cleared

Tue Aug 16 17:01:27.370 MSD

                Node: 0/0/CPU0:

----------------------------------------------------------------

Show global stats counters for NP1, revision v3

Read 20 non-zero NP counters:

Offset  Counter                                         FrameValue   Rate (pps)

-------------------------------------------------------------------------------

  22  PARSE_ENET_RECEIVE_CNT                               1046948      434238

  23  PARSE_FABRIC_RECEIVE_CNT                             1678389      696138

  24  PARSE_LOOPBACK_RECEIVE_CNT                              3684        1528

  29  MODIFY_FABRIC_TRANSMIT_CNT                           1041955      432167

  30  MODIFY_ENET_TRANSMIT_CNT                             1419596      588800

  31  PARSE_INGRESS_DROP_CNT                                  4985        2068

  34  RESOLVE_EGRESS_DROP_CNT                               258725      107310

  38  MODIFY_MCAST_FLD_LOOPBACK_CNT                           3683        1528

  56  PARSE_LC_INJECT_TO_PORT_CNT                                1           0

  57  PARSE_FAB_INJECT_IPV4_CNT                                  7           3

  69  RESOLVE_INGRESS_IFIB_PUNT_CNT                              1           0

  74  RESOLVE_LEARN_FROM_NOTIFY_CNT                              6           2

170  PUNT_IFIB                                                  1           0

224  PUNT_STATISTICS                                            9           4

439  RESOLVE_VPLS_REFLECTION_FILTER_DROP_CNT               258739      107316

469  RESOLVE_MAC_NOTIFY_CTRL_DROP_CNT                           6           2

490  RESOLVE_EGR_LAG_NOT_LOCAL_DROP_CNT                        79          33

565  UIDB_TCAM_MISS_AGG_DROP                                 4982        2066

566  UIDB_TCAM_MISS_PORT0_DROP_FOR_HOST                      4982           0

601  PARSE_FAB_MACN_RECEIVE_CNT                                 6           2

As we can see, the pps rate of RESOLVE_VPLS_REFLECTION_FILTER_DROP_CNT is huge.

I'm trying to find the source of these drops.
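
For reference, a sketch of how to check which interfaces sit behind NP1 (the NP-to-port mapping):

show controllers np ports all location 0/0/CPU0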

sh ver

Cisco IOS XR Software, Version 4.0.1[Default]

Copyright (c) 2010 by Cisco Systems, Inc.

As an option, an upgrade to 4.1.1 is available.

Hi,

Am I correct in assuming you run l2vpn with a bridge-domain (BD) configuration?

RESOLVE_VPLS_REFLECTION_FILTER_DROP_CNT - as we know, this counter shows drops of ingress unicast VPLS frames whose source XID matches the destination XID.

The counter indicates that we are in flooding mode. In that mode we send the packets to all the NPs that are part of that bridge-domain, and we drop the copy on the NP that originally received the packet. (Packets flooded in the bridge-domain land back on the NP that originally received them on its Ethernet port, so we drop them there since they don't need to be forwarded out of that NP/interface again.)

For example, Bundle-Ether 2 receives a packet for an unknown destination and therefore floods it to every interface that is part of this BD. Bundle-Ether 2 is part of the BD too, so the fabric sends the same packet back to the NP that handles Bundle-Ether 2. Since the source XID matches the destination XID, the packet is dropped.
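
As a sketch of how to confirm this (adjust the location to your line card), you can check whether the destination MACs are actually being learned in the bridge-domain, since unknown-unicast flooding only happens for addresses missing from the MAC table:

show l2vpn bridge-domain detail
show l2vpn forwarding bridge-domain mac-address location 0/0/CPU0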

Flooding reasons can be found here

http://www.cisco.com/en/US/products/hw/switches/ps700/products_tech_note09186a00801d0808.shtml

regards,

/A

Hi,

You are correct regarding the BD configuration:

l2vpn

bridge group 1120

  bridge-domain 1120

   interface Bundle-Ether1.1120

   !

   interface Bundle-Ether2.1120

   !

  !

!

bridge group 1121

  bridge-domain 1121

   interface Bundle-Ether1.1121

   !

   interface Bundle-Ether2.1121

   !

  !

!

bridge group 1122

  bridge-domain 1122

   interface Bundle-Ether1.1122

   !

   interface Bundle-Ether2.1122

   !

  !

!

bridge group 1123

  bridge-domain 1123

   interface Bundle-Ether1.1123

   !

   interface Bundle-Ether2.1123

   !

  !

!

...

The source of the input drops has been found: it is the other side of the Bundle-Ether 2 link, the Port-channel 2 configuration on the 7600.

I added the switchport trunk allowed vlan X,X,X,X,... command to allow only the VLANs that need to be switched towards the ASBR (ASR9006).

interface Port-channel2

description ASBR

switchport

switchport trunk encapsulation dot1q

switchport trunk allowed vlan 86,87,100,143,156,212-214,841,1120-1129,1131

switchport trunk allowed vlan add 1132,1135,1140-1173,1177

switchport mode trunk

switchport nonegotiate

mtu 9216

logging event link-status

mls qos trust cos

no vtp

spanning-tree portfast trunk

spanning-tree bpdufilter enable

end
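
For verification (a sketch on the 7600/IOS side), the VLANs that are allowed and actually forwarding on the trunks, including Port-channel2, can be listed with:

show interfaces trunk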

Hi,

Good to hear it is solved.

Yes, you are correct; that was indicated by the UIDB_TCAM_MISS_AGG_DROP counter, as we found earlier.

There might still be RESOLVE_VPLS_REFLECTION_FILTER_DROP_CNT drops due to unicast flooding, which might be OK or not, depending on your network topology and traffic flows.

/A

Hi,

Yes, drops still exist on NP1 (TenGigE0/0/0/3), for instance:

439  RESOLVE_VPLS_REFLECTION_FILTER_DROP_CNT          15547132736      102135

I'm going to investigate this more deeply later, for example by watching that NP's counters as sketched below.
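
For reference, a sketch of watching just that NP (the np name and location are taken from the output above):

show controllers np counters np1 location 0/0/CPU0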

Thanks for your help.
