
3750G "show platfrom port-asic stats drop" showing some packet drop

mzhecisco

#show platform port-asic stats drop

The output shows some packet drops. What is the implication of this result? Should I be concerned about anything? I have not cleared the counters for two months.

Thanks,

Port-asic Port Drop Statistics - Summary
========================================
RxQueue 0 Drop Stats: 0
RxQueue 1 Drop Stats: 0
RxQueue 2 Drop Stats: 0
RxQueue 3 Drop Stats: 0
Port 0 TxQueue Drop Stats: 0
Port 1 TxQueue Drop Stats: 0
Port 2 TxQueue Drop Stats: 3298
Port 3 TxQueue Drop Stats: 0
...............................

Port 2 TxQueue Drop Statistics
  Queue 0
    Weight 0 Frames 0
    Weight 1 Frames 0
    Weight 2 Frames 0
  Queue 1
    Weight 0 Frames 0
    Weight 1 Frames 0
    Weight 2 Frames 0
  Queue 2
    Weight 0 Frames 0
    Weight 1 Frames 0
    Weight 2 Frames 0
  Queue 3
    Weight 0 Frames 0
    Weight 1 Frames 0
    Weight 2 Frames 3298

8 Replies

Should you be concerned? It depends....

Check which DSCP and CoS values map to threshold 3 of the fourth queue, as that is the only one that has dropped any frames (show mls qos maps). I haven't got a 3560/3750 to hand, so I don't know what this is by default, although I have a feeling that by default nothing maps to the third threshold of any queue.
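For example, to see which DSCP and CoS values land in which egress queue and threshold on a 3750 (output omitted here):

  show mls qos maps dscp-output-q
  show mls qos maps cos-output-q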

3298 frames over two months doesn't sound like a lot to me; however, if the network is not loaded at any time, then it is possible the queue thresholds/sizes need tuning.

Andy

Actually, he should be concerned. I am not a QoS expert, but a misconfigured QoS can cause ASIC-level drops. You may try disabling QoS for the interfaces that are using this ASIC.

After that, please check whether the drops are still incrementing or not.

If it's not because of QoS, then this is a hardware issue, an ASIC limitation.
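For what it's worth, on the 3750 QoS is toggled globally rather than per interface, so the check and (if you decide to try it) the disable would look roughly like this; re-check the drop counters afterwards:

  show mls qos                        ! is QoS enabled globally?
  configure terminal
   no mls qos                         ! disable QoS globally
   end
  show platform port-asic stats drop  ! see whether the counters still increment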

It may be the desired behaviour, i.e. congestion avoidance. The 3560/3750 uses congestion avoidance (random dropping) when thresholds are being reached in queues; this can be desirable, as it slows down flows if TCP traffic is mapped to that queue/threshold.
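The buffer allocation and drop thresholds behind that behaviour can be inspected with something like the following (queue-set 1 is the default; the interface name is just an example):

  show mls qos queue-set 1
  show mls qos interface gigabitethernet1/0/1 buffers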

Andy

We had a similar issue due to misconfigured QoS and had to tweak the thresholds to rectify it.

Check whether the counters are increasing, as you cannot clear them without a reload.
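E.g., take two samples some minutes apart and compare the per-queue counts:

  show platform port-asic stats drop
  ! wait a while under normal load, then repeat:
  show platform port-asic stats drop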

HTH

Narayan

Hi

We don't have any QoS in use, but I see a high number of drops when the load increases to over 600 Mbit/s on that interface. See:

show platform port-asic stats drop g1/0/26

  Interface Gi1/0/26 TxQueue Drop Statistics
    Queue 0
      Weight 0 Frames 0
      Weight 1 Frames 0
      Weight 2 Frames 0
    Queue 1
      Weight 0 Frames 0
      Weight 1 Frames 0
      Weight 2 Frames 0
    Queue 2
      Weight 0 Frames 0
      Weight 1 Frames 0
      Weight 2 Frames 0
    Queue 3
      Weight 0 Frames 0
      Weight 1 Frames 0
      Weight 2 Frames 3529272
    Queue 4
      Weight 0 Frames 0
      Weight 1 Frames 0
      Weight 2 Frames 0
    Queue 5
      Weight 0 Frames 0
      Weight 1 Frames 0
      Weight 2 Frames 0
    Queue 6
      Weight 0 Frames 0
      Weight 1 Frames 0
      Weight 2 Frames 0
    Queue 7
      Weight 0 Frames 0
      Weight 1 Frames 0
      Weight 2 Frames 0

And here is the interface:


sh int g1/0/26
GigabitEthernet1/0/26 is up, line protocol is up (connected)
  Hardware is Gigabit Ethernet, address is 0026.9981.e11a (bia 0026.9981.e11a)
  MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
     reliability 255/255, txload 17/255, rxload 2/255
  Encapsulation ARPA, loopback not set
  Keepalive not set
  Full-duplex, 1000Mb/s, link type is auto, media type is 1000BaseSX SFP
  input flow-control is off, output flow-control is unsupported
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 00:00:04, output 00:00:00, output hang never
  Last clearing of "show interface" counters 00:19:07
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 150
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 10629000 bits/sec, 6922 packets/sec
  5 minute output rate 67813000 bits/sec, 10457 packets/sec
     16633696 packets input, 2137400609 bytes, 0 no buffer
     Received 111580 broadcasts (56109 multicasts)
     0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog, 56109 multicast, 0 pause input
     0 input packets with dribble condition detected
     28212625 packets output, 35494347176 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 unknown protocol drops
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier, 0 pause output
     0 output buffer failures, 0 output buffers swapped out

sh mls qos
  QoS is disabled globally

It's actually in a port channel with a second interface, which doesn't show any errors.
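If only one member of a port channel congests, it is also worth checking how flows are hashed across the members; a hash method that matches your traffic mix (the src-dst-ip value below is just an example) spreads the load more evenly:

  show etherchannel load-balance
  configure terminal
   port-channel load-balance src-dst-ip
   end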

Could that be broken hardware?

It's a stack consisting of two WS-C3750E-24TD with software 12.2(58)SE1.

Thanks

Interestingly, we have the same build version on a 3750-X stack and are observing a crazy amount of packet discards on some interfaces.

Without QoS enabled?

Even without QoS, I suspect the 3750 likely does its interface buffer management much as it does with QoS, but probably without reservations for per-interface egress queues.

I've had some fantastic success with QoS enabled and after buffer tuning. I.e., I had a couple of SAN ports taking multiple drops per second; after buffer tuning, just a few drops per day.

What I did was place almost all the buffers into the shared/common pool, i.e. I stopped buffers being reserved per interface egress queue. I also increased what an interface was allowed to dynamically allocate from the shared/common pool. (Of course, understand that this better supports a few ports with large microbursts while the other ports don't need the buffers.)
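A minimal sketch of that kind of tuning on a 3750, assuming QoS is enabled; the percentages here are illustrative and release-dependent, so check them against your software version before applying:

  mls qos
  ! spread the buffers evenly across the four egress queues of queue-set 1
  mls qos queue-set output 1 buffers 25 25 25 25
  ! per queue: drop-threshold1 drop-threshold2 reserved maximum (percent)
  ! a low "reserved" value returns buffers to the common pool; a high
  ! "maximum" lets a bursting queue borrow well beyond its allocation
  mls qos queue-set output 1 threshold 1 3100 3100 10 3200
  mls qos queue-set output 1 threshold 2 3100 3100 10 3200
  mls qos queue-set output 1 threshold 3 3100 3100 10 3200
  mls qos queue-set output 1 threshold 4 3100 3100 10 3200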

Thanks for the great info and reply.

Do you think updating the firmware to a more recent version will fix this?
