
7204VXR NPE-G1 Consistent Packet-Loss

gordonmarquis
Level 1

Hello All...

We have an issue with two 7204VXRs, each with the same revision C NPE-G1 supervisor card, which connect to one another over a circuit supplied by our provider.

When pinging the 7206VXR at the other end of the circuit, we lose about 1 ping in every 700 or so, whether tagged as EF voice or sent natively.

However, if we source from loopback0 then repeat this test there is no packet loss!

Troubleshooting steps completed:

1) Service provider has done two full circuit tests, which have shown no loss on the circuit

2) We have removed all QoS and user traffic from the circuit; packet loss is still the same

3) Have worked with TAC on adding a shaper to our QoS policy, but this shaper gives inconsistent statistical output... TAC are at a loss, and as a last resort we will change the supervisor in each router soon. This seems to be a software bug/problem, not hardware, in my opinion.

4) Our service provider has shown us graphs of our bandwidth usage; we generally use only 8 to 10% of the 100 Mb circuit speed

5) The gigabit controllers show no problem with ring size or output-queue software drops.

6) There are "Total output drops" on the interface, but the rate of increase does not match the level of loss, although it too is regular:

  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 2477


We had worked with TAC in changing the input and output queue hold sizes - no help. As mentioned, this circuit does not appear to be congested from either our or the service provider's point of view.

7) Show controller gigabitX/X shows no drops or loss, etc

8) We have upgraded IOS from the latest T1 to T4: both sides now on C7200-SPSERVICESK9-M, Version 12.4(24)T4

9) Router CPU usage around 10% or so

Could we be bursting without this registering?
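Bursting below the polling resolution is quite plausible: a short line-rate burst all but disappears in a 30-second load-interval average. A rough sketch of the arithmetic (the 100 ms burst length is an assumed figure, not a measurement):

```python
# Illustrative arithmetic only: how a 30-second average can hide
# line-rate microbursts. The burst length here is hypothetical.
link_bps = 100e6          # 100 Mbps circuit
burst_seconds = 0.1       # assumed single burst at full line rate
window_seconds = 30       # matches "load-interval 30" on the interface

bits_in_burst = link_bps * burst_seconds       # 10 Mbit in the burst
average_bps = bits_in_burst / window_seconds   # what the counters show

print(f"30-second average: {average_bps / 1e3:.0f} kbps "
      f"({100 * average_bps / link_bps:.1f}% of line rate)")
```

So an interface can drop packets during a momentary queue overflow while the averaged counters still report a near-idle link.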

Ping loss is not huge:

Success rate is 99 percent (93230/93333), round-trip min/avg/max = 20/23/96 ms
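For reference, the statistics quoted above work out to roughly one lost ping in 900, the same order of magnitude as the 1-in-700 estimate:

```python
# Loss ratio implied by the ping statistics quoted above.
sent, received = 93333, 93230
lost = sent - received            # 103 packets lost
loss_ratio = lost / sent
print(f"lost {lost} of {sent}: {loss_ratio:.4%}, "
      f"about 1 in {round(sent / lost)}")
```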


But we are attempting to run other services, such as Cisco's circuit emulation product, over this circuit, and such regular drops cause major problems with the A and B end.


Short of testing this core circuit with other routers, has anybody heard of any bug or packet-loss-related issue that could be causing what we see? I wonder why packet loss is eliminated, in any traffic class, if we source our pings from the loopback address, yet pinging natively causes packet loss. The loss is real and is noticeable in other services running through, so it's not just the router dropping pings.

I do not trust the stats output on the 72XX platform, due to the way the platform is designed.

Thanks in advance for any input...

15 Replies

paolo bevilacqua
Hall of Fame

For reliable packet-loss measurement, you must use separate endpoints, not the routers themselves.

Routers can drop pings for a variety of reasons too long to explain here.

Also, the IOS you are using is not the best choice for robust operation of a 7200.

You would be better choosing from the "service provider" releases.

Also, please include complete "show interfaces" and "show controllers".

Hi there,

we are removing the circuit tonight and testing with a combination of other equipment. The provider has tested the router OK, though...

General interface config, the same at each end:

interface GigabitEthernet0/3
ip address 192.168.1.1 255.255.255.254
ip flow ingress
ip ospf network point-to-point
ip ospf cost 10
load-interval 30
duplex full
speed 100
media-type rj45
no negotiation auto
mpls label protocol ldp
mpls ip
end

THUS-ABZ-CORE02#sh controller gigabit0/3
Interface GigabitEthernet0/3 (idb 0x658A575C)
Hardware is BCM1250 Internal MAC (Revision B2/B3)
  network link is up
  Config is 100Mbps, Full Duplex
  Selected media-type is RJ45
  GBIC is not present
MAC Registers:
ds->rx_all_multicast = 0x0
  mac_hash_0_0 =   0x0000000000000000
  mac_hash_1_0 =   0x0000000000000000
  mac_hash_2_0 =   0x0000000400000002
  mac_hash_3_0 =   0x0000000000000000
  mac_hash_4_0 =   0x0000000000000000
  mac_hash_5_0 =   0x0000000000000000
  mac_hash_6_0 =   0x2001000000000000
  mac_hash_7_0 =   0x0000000000000000
  mac_admask_0 =     0x0000000000000F28, mac_admask_1  = 0x0000000000000000
  mac_cfg =          0x000000C4000A0176, mac_thrsh_cfg = 0x0000080400084004
  mac_vlantag =      0x0000000000000000, mac_frame_cfg = 0x05F4400000284500
  mac_adfilter_cfg = 0x0000000000000F28, mac_enable    = 0x0000000000000C11
  mac_status =       0x0000000000000000, mac_int_mask  = 0x00004F0000C300C3
  mac_txd_ctl =      0x000000000000000F, mac_eth_addr  = 0x000019FAB9D70E00
  mac_fifo_ptrs =    0x05F4400000284500, mac_eopcnt    = 0x0000440024242424
  MAC RX is enabled  RX DMA - channel 0 is enabled, channel 1 is disabled
  MAC TX is enabled  TX DMA - channel 0 is enabled, channel 1 is disabled
  Device status = 100 Mbps, Full-Duplex
PHY Registers:
  PHY is Marvell 88E1011S (Rev 1.3)
  Control                = 0x2100           Status                 = 0x794D
  PHY ID 1               = 0x0141           PHY ID 2               = 0x0C62
  Auto Neg Advertisement = 0x0001           Link Partner Ability   = 0x0000
  Auto Neg Expansion     = 0x0004           Next Page Tx           = 0x2001
  Link Partner Next Page = 0x0000           1000BaseT Control      = 0x0000
  1000BaseT Status       = 0x0000           Extended Status        = 0x3000
  PHY Specific Control   = 0x0008           PHY Specific Status    = 0x6D00
  Interrupt Enable       = 0x6C00           Interrupt Status       = 0x0000
  Ext PHY Spec Control   = 0x0C64           Receive Error Counter  = 0x0000
  LED Control            = 0x4100
  Ext PHY Spec Control 2 = 0x006A           Ext PHY Spec Status    = 0x801F
  PHY says Link is UP, Speed 100Mbps, Full-Duplex [FIXED Speed/Duplex]
  Physical Interface - RJ45
Internal Driver Information:
  lc_ip_turbo_fs = 0x600635B0, ip_routecache = 0x11 (dfs = 0/mdfs = 0)
  rx cache size = 1000, rx cache end = 872
  max_mtu = 1524
Software MAC address filter(hash:length/addr/mask/hits):
need_af_check = 0
  0x00:  0  ffff.ffff.ffff  0000.0000.0000         0
  0x5B:  0  0100.5e00.0005  0000.0000.0000         0
  0x5C:  0  0100.5e00.0002  0000.0000.0000         0
  0xC0:  0  0180.c200.0002  0000.0000.0000         0
  0xC0:  1  0100.0ccc.cccc  0000.0000.0000         0
  0xCE:  0  000e.d7b9.fa11  0000.0000.0000         0
  ring sizes: RX = 128, TX = 256
  rx_particle_size: 512
Rx Channel 0:
  dma_config0 =     0x0000000000800888, dma_config1 =    0x002D000000600021
  dma_dscr_base =   0x000000000E2A3240, dma_dscr_cnt =   0x0000000000000080
  dma_cur_dscr_a =  0x000010000E2A4D72, dma_cur_dscr_b = 0x0148000000000001
  dma_cur_daddr  =  0x000080000E2A3480
  rxring = 0x0E2A3240, shadow = 0x65875748, head = 23 (0x0E2A33B0)
  rx_overrun=0, rx_nobuffer=0, rx_discard=0
  Error Interrupts: rx_int_dscr = 0, rx_int_derr = 0, rx_int_drop = 0
Tx Channel 0:
  dma_config0 =     0x0000000001001088, dma_config1 =    0x00B6000000000010
  dma_dscr_base =   0x000000000E2A3A80, dma_dscr_cnt =   0x0000000000000000
  dma_cur_dscr_a =  0x800013000E460006, dma_cur_dscr_b = 0x0948000000000003
  dma_cur_daddr  =  0x000000000E2A4750
  txring = 0x0E2A3A80, shadow = 0x658A6E18, head = 222, tail = 222, tx_count = 0
  Error Interrupts: tx_int_dscr = 0, tx_int_derr = 0, tx_int_dzero = 0
  chip_state = 2, ds->tx_limited = 1
  throttled = 0, enabled = 0, disabled = 0
  reset=4(init=1, restart=3), auto_restart=5
  tx_underflow = 0, tx_overflow = 0
  rx_underflow = 0, rx_overflow = 0, filtered_pak=0
  descriptor mismatch = 0, fixed alignment = 4993
  bad length = 0 dropped, 0 corrected
  unexpected sop = 0
Address Filter:
  Promiscuous mode OFF
  Multicast software filter needed: 1
  Exact match table (for unicast, maximum 8 entries):
    Entry 0 MAC Addr = 000e.d7b9.fa11
    (All other entries are empty)
  Hash match table (for multicast, maximum 8 entries):
    Entry 0 MAC Addr = 0100.5e00.0002
    Entry 1 MAC Addr = 0180.c200.0002
    Entry 2 MAC Addr = 0100.0ccc.cccc
    Entry 3 MAC Addr = 0100.5e00.0005
    (All other entries are empty)
Statistics:
  Rx Bytes                220188910055   Tx Bytes                278519197306
  Rx Good Packets            697866088   Tx Good Packets            715419988
  Rx Multicast                  633246
  Rx Broadcast                       3

  Rx Bad Pkt Errors                  0   Tx Bad Pkt Errors                  0
  Rx FCS Errors                      0   Tx FCS Errors                      0
  Rx Runt Errors                     0   Tx Runt Errors                     0
  Rx Oversize Errors                 0   Tx Oversize Errors                 0
  Rx Length Errors                   0   Tx Collisions                      0
  Rx Code Errors                     0   Tx Late Collisions                 0
  Rx Dribble Errors                  0   Tx Excessive Collisions            0
                                         Tx Abort Errors                    0

GigabitEthernet0/3 is up, line protocol is up
  Hardware is BCM1250 Internal MAC, address is 000e.d7b9.fa11 (bia 000e.d7b9.fa11)
  Internet address is 89.251.209.91/31
  MTU 1500 bytes, BW 100000 Kbit/sec, DLY 100 usec,
     reliability 255/255, txload 6/255, rxload 3/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 100Mb/s, media type is RJ45
  output flow-control is unsupported, input flow-control is XON
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 00:00:00, output 00:00:00, output hang never
  Last clearing of "show interface" counters never
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 2483
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  30 second input rate 1458000 bits/sec, 651 packets/sec
  30 second output rate 2561000 bits/sec, 708 packets/sec
     697770115 packets input, 2648621038 bytes, 0 no buffer
     Received 631297 broadcasts, 0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog, 633263 multicast, 0 pause input
     0 input packets with dribble condition detected
     715447516 packets output, 619938328 bytes, 0 underruns
     5 output errors, 0 collisions, 3 interface resets
     0 unknown protocol drops
     0 babbles, 0 late collision, 0 deferred
     5 lost carrier, 0 no carrier, 0 pause output
     0 output buffer failures, 0 output buffers swapped out

What is the input traffic that gets transmitted out of Gi0/3?

Output drops indicate it sometimes gets congested at the relatively low speed of 100 Mbps.

It may be due to bursts, which you can alleviate by increasing the output hold-queue to e.g. 150.
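As a toy illustration (not a model of real IOS queueing), a deeper hold-queue only helps when drops are caused by bursts that overflow the default 40-packet output queue; the burst size below is hypothetical:

```python
# Toy FIFO: a burst arrives faster than the interface can drain, so the
# queue must absorb it. Compares the default 40-packet output queue with
# a deepened one (e.g. "hold-queue 150 out"). Purely illustrative.
def drops_for_burst(burst_packets: int, queue_depth: int) -> int:
    """Packets dropped when a burst hits an initially empty FIFO."""
    return max(0, burst_packets - queue_depth)

burst = 120  # hypothetical microburst size, in packets
for depth in (40, 150):
    print(f"queue depth {depth:3}: {drops_for_burst(burst, depth)} dropped")
```

The trade-off is that a deeper queue converts those drops into extra queueing latency, which matters for voice traffic.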

However, the drop ratio is still negligible, at about 4 per million.
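The counters shown earlier do come out at that order of magnitude:

```python
# Drop ratio from the "show interfaces" counters quoted above.
total_output_drops = 2483
packets_output = 715447516
per_million = total_output_drops / packets_output * 1e6
print(f"{per_million:.1f} output drops per million packets")
```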

There are a number of interfaces carrying transit MPLS traffic on both boxes. The actual total traffic is not much at all, really.

Any manipulative troubleshooting commands were removed from the interface, to remove the possibility of those causing further issues.

We have already set the output hold-queue to various values, in multiple combinations - alone, as well as with input-queue manipulations. I agree the 'total output drops' value is low, and I have seen this value increment on various 7200s in the past, so it doesn't concern me too much.

I should stress that pinging while sourcing from the loopback address presents no packet loss, which is what I do not understand.

If sourcing from a loopback address entails different encapsulation, with loopback-routed MPLS traffic set at maximum priority, could that mean any routing-based traffic, including traffic sourced from a loopback, gets ultimate priority - and that is why we do not see drops when sourcing from a loopback?

Extra information:

* If we source from one particular customer router, the problem also exists: packet loss at the same rate as we would normally see when pinging natively point-to-point

* If we source from our own office, with no special setup, there is no packet loss - similar results to sourcing from a loopback address!

 


As mentioned above, pinging from or to a router interface is not a valid testing methodology for pinpointing minor or occasional packet loss.

If you have no packet loss between end devices, there is no issue and you're good to go.

That's the problem, though; this is a production network. It takes time to bring things down for maintenance, particularly at the core.

End to end there are noticeable problems; that is why we have been trying to resolve this. As I mentioned, TAC are at a loss as to a solution. I really believe changing supervisor hardware will not rectify this in any way.

We have a window to try some other hardware, I will post with an update after this is complete.

If you have end-to-end packet loss attributable to the routers, then there is an issue and TAC must be able to deal with it.

In my opinion, replacing hardware is unlikely to help; switching to a service provider release may produce better results.

"TAC at a loss" is not an acceptable position for Cisco. You should ask for team escalation and keep the case at P2 until resolved. You can also discuss the issue with your local Cisco sales office and ask to be considered a "Critical Account".

Update:

After two months, the service provider has changed its stance and stated that it does see an issue with the circuit overall. This contrasts with the 100% availability IP test conducted a while ago, which is particularly irritating.

For now, it's safe to assume this thread is null and void as a result of the above.

That is normal for telcos and SPs worldwide.

The only tests that you can believe are the ones you see with your own eyes.

Thanks for letting us know how it ended.

bcoverstone
Level 1

I know this is an old thread, but it shows up high on Google for NPE-G1 and rx-ring. So this is for those people who are experiencing legitimate packet loss with the NPE-G1.

 

When using the NPE-G1/G2 platforms with the gigabit interfaces, you will always get packet loss on incoming packets during high-speed transmission at a full gigabit.

 

The only way I've found to fix this is to turn on flow control on the device feeding the router (the NPE-Gs send a LOT of pause frames). There can still be packet loss during the time when the router pauses the upstream device and the packets build up. So if you control the device feeding the router, set a large outgoing queue. If it is a switch, this can be mitigated by setting up queue-set 2 to reserve and grow ALL buffers for the appropriate outgoing queue (i.e. mls qos queue-set output 2 threshold 2 3200 3200 100 3200) and limiting the speed to 200 Mbps (i.e. srr-queue bandwidth limit 20). This won't completely eliminate the upstream packet loss on the switch, but it will drastically reduce the drops.

I would agree: if a 7200 cannot sustain an ingress rate, using flow control to throttle the sending device would take the pressure off the 7200, and may indeed avoid some packet drops.

However, it should be understood that by using general pause frames, all ingress traffic is blocked, so you may be trading lower drops for additional latency.

Additionally, if a 7200 is actually showing ingress drops, increasing the interface's ingress queue might avoid these drops.

It should also be understood that unidirectional gigabit Ethernet, at the minimum packet size (64 bytes), requires 1.488 Mpps for wire rate. As the NPE-G1 is rated at only 1 Mpps, it cannot guarantee 1 Gbps. As the NPE-G2 is rated at 2 Mpps, it cannot guarantee 1 Gbps duplex. Also keep in mind that NPE CPU cycles are used for "control plane" needs, and some traffic, depending on the configuration, will require more than the minimal CPU cycles to forward a packet. I.e., when you want a 7200 to guarantee 1 Gbps, you need something more powerful than an NPE-G1/G2 (such as the NSE-1).
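The 1.488 Mpps figure falls straight out of the per-frame overhead on the wire:

```python
# Wire-rate packet rate for minimum-size Ethernet frames at 1 Gbps.
# Each 64-byte frame costs an extra 20 bytes on the wire:
# 7B preamble + 1B start-of-frame delimiter + 12B inter-frame gap.
line_rate_bps = 1e9
wire_bytes_per_frame = 64 + 7 + 1 + 12      # 84 bytes per frame
pps = line_rate_bps / (wire_bytes_per_frame * 8)
print(f"{pps / 1e6:.3f} Mpps")              # ~1.488 Mpps
```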

Random packet drops are deadly, especially for voice. I want the router to choose the packets that are dropped. Unfortunately, the ingress queue in the Marvell hardware is stuck at 128 rx buffers and cannot be changed (that I am aware of - if you know a secret command, I would be quite happy!).

 

While the NPE-G1 can handle 1 Mpps, the gigabit hardware can be overwhelmed in a couple of milliseconds at full gigabit speed.

 

Given an internet speed of 50 Mbps being received via gigabit Ethernet, the line can transmit at gigabit speed 1/20th of the time (50 Mbps = 0.05 Gbps = 50 ms of transmit time per second). Unless those microbursts are buffered and spread out, the NPE-G router will register quite a few drops.
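To put rough numbers on how quickly such a burst can overwhelm the fixed ring: assuming minimum-size frames and, pessimistically, no draining at all, the 128-descriptor RX ring mentioned earlier fills in well under a millisecond:

```python
# Worst-case fill time of a 128-descriptor RX ring at line-rate gigabit,
# assuming minimum-size frames (84 bytes on the wire, i.e. 672 ns each)
# and, pessimistically, no draining by the CPU at all.
ring_descriptors = 128
frame_time_s = 84 * 8 / 1e9                 # 672 ns per frame
fill_time_s = ring_descriptors * frame_time_s
print(f"ring fills in ~{fill_time_s * 1e6:.0f} microseconds")
```

Real fill time depends on packet size and how fast the driver drains the ring, but the scale supports the "couple of milliseconds" point above.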

 

"Random packet drops are deadly, especially for voice."

Yes, they are, but queuing latency that impacts jitter, or the overall minimal delay requirements for VoIP, isn't too good either - i.e., the risk of pausing all traffic on an interface.

"I want the router to choose the packets that are dropped."

Yes, indeed, if you can select what gets dropped first, you do want that, but that's the problem with general pauses too, i.e. no granular control. If you had DCB Ethernet for pause control, using pauses would make much more sense.

"Unfortunately the ingress queue in the Marvell hardware is stuck at 128 rx buffers and cannot be changed (that I am aware of, if you know a secret command I would be quite happy!)."

Yes, but then the question arises: can we drain the hardware RX queue fast enough to get packets into the software input queue? I suspect the hardware is capable of accepting even sustained gig input and storing it in RAM (i.e. the RX ring of 128, in itself, might not be an issue). The more likely issues are whether there is a place to store RX packets in RAM ("no buffer" failures are something to be avoided) and whether the NPE has the capacity to process the packets (which it may not). The latter is an issue we cannot escape if there's just too much volume for the NPE to deal with.

However, assuming we're dealing with bursts, we agree we want to avoid dropping packets. I touched on buffers, avoiding "no buffer" failures, but we also want a sufficient ingress "queue" so as not to drop packets. However, just doing that moves the upstream device's egress queue to the 7200's ingress queue, i.e. we still end up trading latency to avoid drops. That said, it's possible a 7200 might support a larger ingress queue than an upstream device supports for its egress queue.

Again, though, if VoIP traffic were part of our ingress, we both agree we would like to be selective about what gets dropped. That's possible if you take advantage of the 7200's SPD (Selective Packet Discard) feature: have the VoIP packets marked with IP Precedence 6 on the upstream device's egress.

"While the NPE-G1 can handle 1Mpps, the gigabit hardware can be overwhelmed in a couple of millisecond at full gigabit speed."

Perhaps or perhaps not. Much depends on the size of the packets, and what you're going to "do" to them. As noted above, I suspect the 7200 hardware can capture gig ingress, but the NPE may not be able to keep up.

BTW, in your 1st post, if you "shape" a gig port to less than gig, I agree that may avoid drops on the 7200, especially if the 7200 hasn't been "tuned" to deal with gig bursts. (Also BTW, I've sometimes suggested running a gig interface at 100 Mbps to accomplish the same.) Further, by queuing data on the switch, you may have more control over what gets dropped and/or delayed. (Unfortunately, what you also suggest in your 1st post may short other ports of buffers. [Which I've also suggested doing, if you understand the risk.])

So, in many ways, whether you slow egress with a shaper, with pause frames, or with a sufficient ingress queue, they all should reduce drops (for bursty traffic). However, what using pause frames or ingress queuing on the 7200 adds is that throughput dynamically adjusts to what the 7200 can handle moment-to-moment (which is lost with a "shaper" or a lower physical bandwidth setting). Further, again, using general pause frames blocks all traffic, but with SPD you can at least ensure very important traffic is less likely to be dropped while not being delayed.

 

Aha, I forgot about SPD!  I had set it for 900/1000, but I just raised it to 2000/4000 to see what happens.  I'll let you know if that solves the drops!

 

Realistically, how much can flow control delay packets? I see as many as 100 pause frames per second, with the average being 1 or 2 per second. I would assume a pause frame is triggered when the rx ring rises above a max threshold, and transmission resumes after the rx ring drops below a min threshold, but I'm not an expert on this subject.
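For scale: an IEEE 802.3x PAUSE frame carries a 16-bit pause_time counted in quanta of 512 bit times, so at gigabit a single maximum-value pause can halt ingress for about 33.6 ms (and the sender can also un-pause early with a zero-quanta frame):

```python
# Longest single 802.3x pause at 1 Gbps: pause_time is a 16-bit count
# of 512-bit-time quanta, and one bit time at gigabit is 1 ns.
bit_time_s = 1 / 1e9
quantum_s = 512 * bit_time_s                # 512 ns per quantum
max_pause_s = 0xFFFF * quantum_s
print(f"max single pause at 1 Gbps: {max_pause_s * 1e3:.1f} ms")
```

In practice the quanta value a device requests is usually far smaller, so the observed per-pause delay would typically be microseconds to low milliseconds.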
