3750E Output Drops when interface not full.

Net Admins
Level 1

I have a switch with constant output drops, even though the bandwidth usage on the interface is low.

This is just a trunk interface. What could be causing the switch to drop so many packets? Is there a limit on the number of packets per interface?

sw3750e-235#sh int gi1/0/24
GigabitEthernet1/0/24 is up, line protocol is up (connected)
  Hardware is Gigabit Ethernet, address is 001d.7066.f898 (bia 001d.7066.f898)
  Description: "sw253 Gi1/0/45"
  MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
     reliability 255/255, txload 17/255, rxload 16/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX
  input flow-control is off, output flow-control is unsupported
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 00:00:01, output 00:00:02, output hang never
  Last clearing of "show interface" counters 1y23w
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 1247484253
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  30 second input rate 63739000 bits/sec, 16377 packets/sec
  30 second output rate 70402000 bits/sec, 14699 packets/sec
     923325352621 packets input, 384250695979115 bytes, 0 no buffer
     Received 1800925272 broadcasts (1527218218 multicasts)
     0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog, 1527218218 multicast, 0 pause input
     0 input packets with dribble condition detected
     631932585697 packets output, 352187254570142 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier, 0 PAUSE output
     0 output buffer failures, 0 output buffers swapped out

sw3750e-235#sh int gi1/0/24
GigabitEthernet1/0/24 is up, line protocol is up (connected)
  Hardware is Gigabit Ethernet, address is 001d.7066.f898 (bia 001d.7066.f898)
  Description: "sw253 Gi1/0/45"
  MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
     reliability 255/255, txload 18/255, rxload 29/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX
  input flow-control is off, output flow-control is unsupported
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 00:00:01, output 00:00:06, output hang never
  Last clearing of "show interface" counters 1y23w
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 1247489999
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  30 second input rate 113891000 bits/sec, 18977 packets/sec
  30 second output rate 73515000 bits/sec, 15639 packets/sec
     923327052223 packets input, 384251849315561 bytes, 0 no buffer
     Received 1800929232 broadcasts (1527221362 multicasts)
     0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog, 1527221362 multicast, 0 pause input
     0 input packets with dribble condition detected
     631933974629 packets output, 352188051573131 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier, 0 PAUSE output
     0 output buffer failures, 0 output buffers swapped out

interface GigabitEthernet1/0/24
 description "sw253 Gi1/0/45"
 switchport trunk encapsulation dot1q
 switchport mode trunk
 load-interval 30
 spanning-tree mst 0 cost 1200
end

12 Replies

Hello,

Is this a stack member or a standalone switch?

It's a standalone switch doing L2 switching only.

I am inclined to say that the interface buffers on this device are no longer sufficient for the amount of packets traversing this interface.

Hello,

There is a bug that causes the interface to show an incorrect output drop value.

When you issue the command:

show platform port-asic stats drops

do you see the same drops?

Yes, it seems the frames in queue 3, weight 2 are rising every time I run the command:

sw3750e-235#show platform port-asic stats drop gi1/0/24


  Interface Gi1/0/24 TxQueue Drop Statistics
    Queue 0
      Weight 0 Frames 0
      Weight 1 Frames 0
      Weight 2 Frames 0
    Queue 1
      Weight 0 Frames 789076
      Weight 1 Frames 21
      Weight 2 Frames 2
    Queue 2
      Weight 0 Frames 0
      Weight 1 Frames 0
      Weight 2 Frames 0
    Queue 3
      Weight 0 Frames 0
      Weight 1 Frames 0
      Weight 2 Frames 1352479902
    Queue 4
      Weight 0 Frames 0
      Weight 1 Frames 0
      Weight 2 Frames 0
    Queue 5
      Weight 0 Frames 0
      Weight 1 Frames 0
      Weight 2 Frames 0
    Queue 6
      Weight 0 Frames 0
      Weight 1 Frames 0
      Weight 2 Frames 0
    Queue 7
      Weight 0 Frames 0
      Weight 1 Frames 0
      Weight 2 Frames 0

Hello,

Do you have QoS enabled on the switch? If not, enable it with the 'mls qos' global command.

Once enabled, post the output of:

sh mls qos int gi1/0/24 statistics

sh mls qos maps dscp-output-q

sh mls qos queue-set

Depending on the output, we can try changing the queue-set buffers...

QoS is not enabled on the device.

I understand what is probably happening: for this kind of traffic, the hardware queue on this interface is always full. Is there a way to give this queue more buffer while sacrificing a little from the other queues?

My other question: what kind of traffic lands in the other queues?

#sh mls qos int gi1/0/24 statistics
GigabitEthernet1/0/24 (All statistics are in packets)

  dscp: incoming  
-------------------------------

  0 -  4 :    74558130     12048319   3906982273      1637836    272519512  
  5 -  9 :      882498      5673512       534401       574851        16680  
 10 - 14 :       18649           66        18839         7671          266  
 15 - 19 :         968       358227           12        99375            8  
 20 - 24 :       24479           14           12           20        83786  
 25 - 29 :          17   2280477888           10         2421            7  
 30 - 34 :          12            7          327           14       106982  
 35 - 39 :           9           16           10           23            3  
 40 - 44 :     1735553            9          211           14          119  
 45 - 49 :          13     93712898           13   2686244510         1159  
 50 - 54 :       10722           75         1276           57          160  
 55 - 59 :         115      4353657           10           10           13  
 60 - 64 :           8           14           14            7  
  dscp: outgoing
-------------------------------

  0 -  4 :  3819663708       378674    856647444       184420    418599055  
  5 -  9 :      132241       146961        18075        90565          917  
 10 - 14 :         880            5           13            1           12  
 15 - 19 :           0        31436            2         1538            1  
 20 - 24 :          23            5            3            3          852  
 25 - 29 :           3       302434            1         2121            2  
 30 - 34 :           2            1           42            1       106377  
 35 - 39 :           2            3            2            5            0  
 40 - 44 :      157612            2            4            3            3  
 45 - 49 :           3         5925            3   2491840645        13849  
 50 - 54 :     1647439          259        20039         3794        10877  
 55 - 59 :         654        98325            0            1            2  
 60 - 64 :           3            2            3            2  
  cos: incoming  
-------------------------------

  0 -  4 :  3028725714            0       416701           80            0  
  5 -  7 :        5588   1804712692     27812539  
  cos: outgoing
-------------------------------

  0 -  4 :  4230299207            0            0           30            0  
  5 -  7 :         298     14834609     12180378  
  output queues enqueued:
 queue:    threshold1   threshold2   threshold3
-----------------------------------------------
 queue 0:           2           0           0
 queue 1:  1686003605    62706982    23358854
 queue 2:           0           0           0
 queue 3:           0           0  2485322918

  output queues dropped:
 queue:    threshold1   threshold2   threshold3
-----------------------------------------------
 queue 0:           0           0           0
 queue 1:      789076          21           2
 queue 2:           0           0           0
 queue 3:           0           0  1355017441

Policer: Inprofile:            0 OutofProfile:            0

#sh mls qos maps dscp-output-q
   Dscp-outputq-threshold map:
     d1 :d2    0     1     2     3     4     5     6     7     8     9
     ------------------------------------------------------------
      0 :    02-01 02-01 02-01 02-01 02-01 02-01 02-01 02-01 02-01 02-01
      1 :    02-01 02-01 02-01 02-01 02-01 02-01 03-01 03-01 03-01 03-01
      2 :    03-01 03-01 03-01 03-01 03-01 03-01 03-01 03-01 03-01 03-01
      3 :    03-01 03-01 04-01 04-01 04-01 04-01 04-01 04-01 04-01 04-01
      4 :    01-01 01-01 01-01 01-01 01-01 01-01 01-01 01-01 04-01 04-01
      5 :    04-01 04-01 04-01 04-01 04-01 04-01 04-01 04-01 04-01 04-01
      6 :    04-01 04-01 04-01 04-01

#sh mls qos queue-set
Queueset: 1
Queue     :       1       2       3       4
----------------------------------------------
buffers   :      25      25      25      25
threshold1:     100     200     100     100
threshold2:     100     200     100     100
reserved  :      50      50      50      50
maximum   :     400     400     400     400
Queueset: 2
Queue     :       1       2       3       4
----------------------------------------------
buffers   :      25      25      25      25
threshold1:     100     200     100     100
threshold2:     100     200     100     100
reserved  :      50      50      50      50
maximum   :     400     400     400     400

Hello.

Here is what I have come up with to reconfigure your interface queues. Make sure you have QoS globally enabled ('mls qos'):

sw3750e-235(config)#mls qos queue-set output 2 buffers 10 30 30 30
sw3750e-235(config)#mls qos queue-set output 2 threshold 2 3100 100 100 100
sw3750e-235(config)#mls qos queue-set output 2 threshold 3 3100 100 100 100
sw3750e-235(config)#mls qos queue-set output 2 threshold 4 3100 100 100 100

sw3750e-235#conf t
sw3750e-235(config)#int gi1/0/24
sw3750e-235(config-if)#queue-set 2
sw3750e-235(config-if)#end

Make sure queue set 2 is assigned to your interface:

sw3750e-235#sh run int gi1/0/24
interface GigabitEthernet1/0/24
switchport mode access
mls qos trust dscp
queue-set 2

Check if you still see output drops.

Another option would be to configure priority queueing on the interface:

sw3750e-235(config)#int gi1/0/24
sw3750e-235(config-if)#priority-queue out
sw3750e-235(config-if)#end

Then, remap the DSCP values carrying the bulk of the traffic (0, 2, 26, and 48, per your statistics) to other output queues:

sw3750e-235(config)#mls qos srr-queue output dscp-map queue 2 threshold 1 0 2 26 48
sw3750e-235(config)#mls qos srr-queue output dscp-map queue 3 threshold 1 0 2 26 48
sw3750e-235(config)#mls qos srr-queue output dscp-map queue 4 threshold 1 0 2 26 48
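If you go the remapping route, you can verify the result with the same show commands used earlier in this thread. Keep in mind that the 'show' output numbers the queues 0-3 while the configuration commands number them 1-4:

sw3750e-235#sh mls qos maps dscp-output-q
sw3750e-235#sh mls qos int gi1/0/24 statistics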

Joseph W. Doherty
Hall of Fame
Hall of Fame

The 3750 series is a bit infamous for egress port drops.  Bursty traffic is often dropped due to lack of interface buffering, even when overall bandwidth utilization is low.

Port assignments can influence drop rates.  The 3750 provides 2 MB of buffer RAM per group of 24 copper ports, and a separate pool for the uplink ports.

Also, if QoS is enabled, by default the 3750 uses a buffer allocation scheme that reserves half of the buffer RAM to ports and further reserves that per hardware queue.  (I've had some wonderful decreases in egress drops [from several drops a second to 1 or 2 drops a day] by queue-set settings.  [I see one of George's post changes queue-set settings too, although my approach is pretty different from his.])

Thanks for the help, Joseph and George.

I have got one question though: what about a situation where it's not just bursty traffic, and the switch constantly shows output discards because of the sheer volume of packets that have to be sent out?

Will changing the QoS settings help in that case? And since, according to what you said, the buffer RAM is shared, could splitting the traffic across multiple interfaces also have a positive effect?

If you're dealing with long-term oversubscription, the common long-term fix is to add bandwidth.  Another solution, which may work both short term and long term, is an advanced drop-management strategy.  Other solutions involve 3rd-party traffic management devices and/or host changes.

As advanced drop management is often implemented using advanced QoS techniques, yes, QoS might help.

Also yes, using multiple interfaces may help, provided they don't share the same physical buffer and you're not dealing with long-term oversubscription.
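As a sketch of the multiple-interfaces idea (the second port number here is hypothetical and assumes a 48-port model, so that the two members sit in different 24-port buffer groups; the far-end switch needs a matching channel configuration), an EtherChannel between the two switches might look like:

sw3750e-235(config)#interface range GigabitEthernet1/0/24, GigabitEthernet1/0/25
sw3750e-235(config-if-range)#channel-group 1 mode active
sw3750e-235(config-if-range)#exit
sw3750e-235(config)#interface Port-channel1
sw3750e-235(config-if)#switchport trunk encapsulation dot1q
sw3750e-235(config-if)#switchport mode trunk

Note that EtherChannel load-balances per flow, so a single large flow will still use (and possibly congest) one member link.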

I was just reading a previous post of yours:

On the 3750 series, when dealing with bursty traffic, I've had great success by tuning the hardware buffers.  Basically, rather than using the defaults, which reserve 50% of your buffer space to interfaces (and such reservation is split across each egress queue [this all assuming QoS is enabled]), I reduce the interface reservations (putting more RAM into the common pool) and increase how much buffer RAM an interface can dynamically have.

Can you share what kind of values you use?

mls qos queue-set output A threshold B C C D C

A=1 or 2; NB: ports by default use queue-set 1

B=1, 2, 3, 4; the queue number. I normally use the same values for all four queues.

C=tail-drop value; I normally use the same value for all three. I start at 400, which is the max value for older IOS versions; later versions support up to 3200, so I might use 800, 1600, or 3200.

D=buffer reservation; I lower it, using perhaps 25, 10, or even 1.
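Putting that template together, one concrete reading (a sketch, not a prescription: A=2 so the defaults stay intact for other ports, C=3200 assuming a later IOS, D=10, then the interface assigned to queue-set 2) would be:

sw3750e-235(config)#mls qos queue-set output 2 threshold 1 3200 3200 10 3200
sw3750e-235(config)#mls qos queue-set output 2 threshold 2 3200 3200 10 3200
sw3750e-235(config)#mls qos queue-set output 2 threshold 3 3200 3200 10 3200
sw3750e-235(config)#mls qos queue-set output 2 threshold 4 3200 3200 10 3200
sw3750e-235(config)#interface GigabitEthernet1/0/24
sw3750e-235(config-if)#queue-set 2

This lowers the per-queue reservation (putting more RAM into the common pool) and raises how much buffer a queue can dynamically claim, per the approach described above.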
