
2960 and 3560 high output drops, IOS version 12.2(53)SE2

bob brennan
Level 1

I have been experiencing a large number of output queue drops on the 2960 and 3560 switch ports.

Here are some of the things I have tried and the conclusions I have reached.

  1. I changed the queuing on one of the 2960 switches numerous times to the following settings:

srr-queue bandwidth share 1 75 1 25 (original setting – the 75% is for queue 1 call signaling and 25% for queue 3 best effort )

srr-queue bandwidth share 1 50 1 50

srr-queue bandwidth share 1 25 1 75

            After changing the queue setting I still experienced drops. In most cases there were fewer drops as the percentage for queue 3 increased; in some cases the drops stayed the same or even increased.

  2. I removed QoS from the switch I was testing and from two 3560 switches (see the QoS on/off sketch after this list). The 3560 switches were experiencing a lot of drops as well; on some ports the drop rate over a 1-hour period was as high as 5%.

After removing QoS I saw no drops on the 3560 switches, but on the 2960 switch I still saw a small number of drops on a few ports. With QoS enabled, all 11 of the 10/100 ports had drops, as high as 1000 over the same time period. With QoS removed, only 2 of the ports experienced drops, and only a small number (1 and 91 drops).

Once I re-enabled QoS on the switches, the drops started occurring again on all 3 switches at a rapid rate.

  3. This leads me to conclude that we may have multiple issues:
    1. Almost all the ports experiencing drops are 100 Mb ports (about 95%). Even when QoS was totally removed I saw drops on some ports. I believe at this point that the 100 Mb ports are experiencing bursts of traffic they can't handle, and packets get dropped. We need to install 1 Gb cards in the clients.
    2. The current IOS we are running on the switches has numerous issues, such as high CPU utilization and output packet drops, that have been seen by others. When QoS was removed completely, the drops disappeared or decreased significantly. I believe these are issues with the IOS that need to be addressed by Cisco. According to other users, the current upgrades have not resolved the issue, and a fix is not scheduled until 3/11.
    3. We will need to fine-tune the QoS configuration on our switches once we implement VoIP.
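As a minimal sketch of the QoS on/off test described in item 2 above (the interface name fa0/6 is a placeholder; disabling QoS globally affects all ports on the switch):

! Disable QoS globally (ports revert to a single large egress queue)
configure terminal
 no mls qos
end
! Reset the interface counters, then watch the drop counter over the test window
clear counters fa0/6
show interfaces fa0/6 | include output drops
! Re-enable QoS after the test
configure terminal
 mls qos
end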

I have read numerous posts on this issue, so I know I am not the only one seeing it. I contacted Cisco TAC about the issue and they continue to say that bursty traffic is causing all the drops, but what they can't account for is that when QoS is removed, almost all the drops go away.

Has anyone else experienced this, and if so, were you able to find a resolution? Thank you.

I have also attached a show tech-support file.


29 Replies

Lei Tian
Cisco Employee

Hi Bob,

I think the reason you experience egress packet drops after enabling QoS is that the buffer size has changed. Without QoS, there is a fixed-size buffer reserved for one queue, and all transit packets pass through that queue. That relatively larger buffer lets the switch handle bursty traffic better. After enabling QoS, each queue has only a small amount of reserved buffer; once the reserved buffer is full, you will see drops on that particular queue.

You can use the command "show platform port-asic stats drops fa x/x" to check which queue is dropping packets, and then use "mls qos queue-set output 1 buffers" to allocate more buffers to that queue. Be aware that system queue 0 maps to queue 1 in the config.
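As a minimal sketch of those two steps (fa0/6 and the buffer percentages are placeholders chosen for this example, not values prescribed above):

! Check which egress queue on the port is dropping (queues are numbered 0-3 in this output)
show platform port-asic stats drop fa0/6

! Give more of the port's buffer to the dropping queue; the four values are percentages
! for queues 1-4 and must total 100 (ASIC queue 0 above corresponds to queue 1 here)
configure terminal
 mls qos queue-set output 1 buffers 25 25 5 45
end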

HTH,

Lei Tian

Hi Lei Tian,

First of all, thank you very much for your post. I used the sh platform port-asic stats drop and sh mls qos int x/x statistics commands to determine that queue 3, threshold 3 was experiencing the drops; this queue and threshold handles best effort traffic. I had the following queuing and buffers set up:

mls qos map cos-dscp 0 8 16 26 32 46 48 56
mls qos srr-queue output cos-map queue 1 threshold 3 5
mls qos srr-queue output cos-map queue 2 threshold 1 4 6 7
mls qos srr-queue output cos-map queue 2 threshold 2 3
mls qos srr-queue output cos-map queue 4 threshold 3 0 1 2
mls qos srr-queue output dscp-map queue 1 threshold 3 46
mls qos srr-queue output dscp-map queue 2 threshold 1 32 48 56
mls qos srr-queue output dscp-map queue 2 threshold 2 26
mls qos srr-queue output dscp-map queue 4 threshold 3 0 1 2 3 4 5 6 7
mls qos srr-queue output dscp-map queue 4 threshold 3 8 9 10 11 12 13 14 15
mls qos srr-queue output dscp-map queue 4 threshold 3 16 17 18 19 20 21 22 23
mls qos srr-queue output dscp-map queue 4 threshold 3 24 25 27 28 29 30 31 33
mls qos srr-queue output dscp-map queue 4 threshold 3 34 35 36 37 38 39 40 41
mls qos srr-queue output dscp-map queue 4 threshold 3 42 43 44 45 47 49 50 51
mls qos srr-queue output dscp-map queue 4 threshold 3 52 53 54 55 57 58 59 60
mls qos srr-queue output dscp-map queue 4 threshold 3 61 62 63
mls qos queue-set output 1 threshold 2 70 100 100 100
mls qos

interface

srr-queue bandwidth share 1 75 1 25

priority-queue out
mls qos trust dscp

FastEthernet0/6
The port is mapped to qset : 1
The allocations between the queues are : 25 25 25 25

I added the following command:

mls qos queue-set output 1 buffers 25 25 5 45

hw-comm-cs356048ps-1#sh mls qos int fa0/6 buffers
Load for five secs: 7%/0%; one minute: 9%; five minutes: 8%
Time source is NTP, 06:30:08.646 arizona Fri Jan 14 2011

FastEthernet0/6
The port is mapped to qset : 1
The allocations between the queues are : 25 25 5 45

hw-comm-cs356048ps-1#sh mls qos int fa0/6 statistics
Load for five secs: 7%/0%; one minute: 7%; five minutes: 9%
Time source is NTP, 06:26:56.084 arizona Fri Jan 14 2011

FastEthernet0/6 (All statistics are in packets)

  dscp: incoming
-------------------------------

  0 -  4 :      730013            0            0            0            0
  5 -  9 :           0            0            0            0            0
10 - 14 :           0            0            0            0            0
15 - 19 :           0            0            0            0            0
20 - 24 :           0            0            0            0            0
25 - 29 :           0           38            0            0            0
30 - 34 :           0            0            0            0            0
35 - 39 :           0            0            0            0            0
40 - 44 :           0            0            0            0            0
45 - 49 :           0            0            0           43            0
50 - 54 :           0            0            0            0            0
55 - 59 :           0            0            0            0            0
60 - 64 :           0            0            0            0
  dscp: outgoing
-------------------------------

  0 -  4 :      582795            0            0            0            0
  5 -  9 :           0            0            0            0            0
10 - 14 :           0            0            0            0            0
15 - 19 :           0            0            0            0            0
20 - 24 :           0            0            0            0            0
25 - 29 :           0          118            0            0            0
30 - 34 :           0            0            0            0            0
35 - 39 :           0            0            0            0            0
40 - 44 :           0            0            0            0            0
45 - 49 :           0            0            0        25263            0
50 - 54 :           0            0            0            0            0
55 - 59 :           0            0            0            0            0
60 - 64 :           0            0            0            0
  cos: incoming
-------------------------------

  0 -  4 :      735162            0            0           38            0
  5 -  7 :           0            0            0
  cos: outgoing
-------------------------------

  0 -  4 :      646983            0            0          118            0
  5 -  7 :       35760        25263            0
  output queues enqueued:
queue:    threshold1   threshold2   threshold3
-----------------------------------------------
queue 0:           0           0       35760
queue 1:       25263         118       64689
queue 2:           0           0           0
queue 3:           0           0      650095

  output queues dropped:
queue:    threshold1   threshold2   threshold3
-----------------------------------------------
queue 0:            0            0            0
queue 1:            0            0            0
queue 2:            0            0            0
queue 3:            0            0           38

Policer: Inprofile:            0 OutofProfile:            0

hw-comm-cs356048ps-1#sh int fa0/6
Load for five secs: 6%/0%; one minute: 7%; five minutes: 9%
Time source is NTP, 06:27:04.867 arizona Fri Jan 14 2011

FastEthernet0/6 is up, line protocol is up (connected)
  Hardware is Fast Ethernet, address is 3cdf.1e20.e608 (bia 3cdf.1e20.e608)
  MTU 1500 bytes, BW 100000 Kbit, DLY 100 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 100Mb/s, media type is 10/100BaseTX
  input flow-control is off, output flow-control is unsupported
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 00:00:16, output 00:00:01, output hang never
  Last clearing of "show interface" counters 16:14:05
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 38
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 2000 bits/sec, 3 packets/sec
     735207 packets input, 831316921 bytes, 0 no buffer
     Received 5058 broadcasts (1948 multicasts)
     0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog, 1948 multicast, 0 pause input
     0 input packets with dribble condition detected
     772911 packets output, 129407141 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier, 0 PAUSE output
     0 output buffer failures, 0 output buffers swapped out
hw-comm-cs356048ps-1#

As you can see, I increased the buffers from 25 to 45 and I am still seeing drops, but a lot fewer than I was seeing before. I can increase the buffers even more, but the more I give to queue 3, the less the other queues have. Since we have not implemented VoIP at this point, this could become a big issue when we do. Any further advice you have on this issue would be greatly appreciated. Thanks.

Hi Bob,

Glad to see the improvement. I see you have 'srr-queue bandwidth share 1 75 1 25' configured under the interface; are you expecting to see a lot of video traffic on that interface? If so, you might want to allocate more buffer to queue 2 to avoid drops on bursty video traffic. You might also want to adjust the max drop threshold to a higher number, so the queue can borrow more buffer from the common buffer pool before it drops packets. Please see another thread for some general guidelines on buffer tuning:

https://supportforums.cisco.com/message/3260855#3260855
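As a rough sketch of those two knobs (the buffer split is a placeholder chosen for this example; the threshold arguments are drop-threshold1, drop-threshold2, reserved, and maximum for queue 2):

configure terminal
 ! Give queue 2 a larger share of the port buffer (percentages for queues 1-4, totalling 100)
 mls qos queue-set output 1 buffers 15 40 15 30
 ! Raise queue 2's drop thresholds and its maximum so it can borrow more from the common pool
 mls qos queue-set output 1 threshold 2 150 300 50 400
end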

HTH,

Lei Tian

Hi Lei,

Thanks again for the great info; it was really helpful. I went ahead and made the following changes to the switch QoS:

mls qos queue-set output 1 threshold 1 150 300 100 400
mls qos queue-set output 1 threshold 2 150 300 50 400
mls qos queue-set output 1 threshold 3 150 300 50 400
mls qos queue-set output 1 threshold 4 200 300 100 400
mls qos queue-set output 1 buffers 10 20 10 60

interfaces

srr-queue bandwidth share 10 20 10 60

Since the changes were made I have seen no drops, except on one port. I tracked down the user whose PC is connected to this port; the PC was off when I called him and had not been on today (he has since turned it on). The reason I bring this up is because the number of drops was over 1,500,000. Here is the show int:

hw-comm-cs356048ps-1#sh int fa0/25
Load for five secs: 6%/0%; one minute: 7%; five minutes: 7%
Time source is NTP, 10:34:55.557 arizona Fri Jan 14 2011

FastEthernet0/25 is up, line protocol is up (connected)
  Hardware is Fast Ethernet, address is 3cdf.1e20.e61d (bia 3cdf.1e20.e61d)
  MTU 1500 bytes, BW 100000 Kbit, DLY 100 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 100Mb/s, media type is 10/100BaseTX
  input flow-control is off, output flow-control is unsupported
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input never, output 00:00:00, output hang never
  Last clearing of "show interface" counters 01:13:57
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 1503464
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 15000 bits/sec, 8 packets/sec
  5 minute output rate 40000 bits/sec, 15 packets/sec
     8162 packets input, 1748977 bytes, 0 no buffer
     Received 81 broadcasts (12 multicasts)
     0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog, 12 multicast, 0 pause input
     0 input packets with dribble condition detected
     468176 packets output, 653422876 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier, 0 PAUSE output
     0 output buffer failures, 0 output buffers swapped out

As you can see, it's showing more drops than packets output, which leads me to believe there could be an issue with how the IOS determined this. Any thoughts on this?

Thanks again for all your help; it is greatly appreciated.

Hi Bob,

I am glad that I can help. The interface fa0/25's output doesn't seem right. If the drop counter keeps increasing, you can use the same 'show platform port-asic stats drop fa0/25' command to check whether the packets are being dropped by the ASIC.

Please rate if that helps.

Regards,

Lei Tian

Hi Lei,

Thanks again for everything. Here is the output from the fa0/25 port:

hw-comm-cs356048ps-1#sh platform port-asic stats drop fa0/25
Load for five secs: 7%/0%; one minute: 11%; five minutes: 13%
Time source is NTP, 12:24:41.909 arizona Fri Jan 14 2011


  Interface Fa0/25 TxQueue Drop Statistics
    Queue 0
      Weight 0 Frames 0
      Weight 1 Frames 0
      Weight 2 Frames 0
    Queue 1
      Weight 0 Frames 0
      Weight 1 Frames 0
      Weight 2 Frames 0
    Queue 2
      Weight 0 Frames 0
      Weight 1 Frames 0
      Weight 2 Frames 0
    Queue 3
      Weight 0 Frames 0
      Weight 1 Frames 0
      Weight 2 Frames 1858964
hw-comm-cs356048ps-1#

The drops are there; not sure what's going on.

By the way, how do you rate?

Hi Bob,

Looks like the drops still keep increasing on that particular port. You can set up a SPAN session and monitor that port to see if that gives you some hints on what traffic is being sent to that interface. Does that user report any performance issues?
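As a minimal SPAN sketch (the session number and the destination port fa0/1, where a capture PC would be connected, are placeholders):

configure terminal
 ! Mirror traffic sent to and from the problem port
 monitor session 1 source interface fa0/25 both
 ! Send the mirrored traffic to the port with the capture station attached
 monitor session 1 destination interface fa0/1
end
show monitor session 1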

Thanks for the great rating!

Regards,

Lei Tian

Hey Lei,

The user connected to that port is experiencing no issues at all. I am going to try the new QoS commands on some switches where I know the users are experiencing slowness, but I believe that is due to our rollout of Active Directory rather than switch issues. I will keep you posted. Thank you so much for all your help.

Hi Bob,

Thanks for the update. Since that's the only port that still has drops after tuning the buffers, I feel it is caused by something other than QoS. To troubleshoot it further, SPANning the port and capturing the packets might help.

Please keep us updated, and good luck!

Regards,

Lei Tian

Hi Lei,

There have been no further drops on that port and the user is reporting no issues. I'm not sure what caused the high drop rate; at this point I believe it to be a non-issue.

Since adjusting the QoS on the 2960-TC, 2960-S, and 3560 switches, I have noticed no drops, or just a few, on the 2960-TC and 3560, but I am still seeing drops on the 2960-S switches, and only on ports connected at 100 Mb; there are no drops on 1 Gb connected ports. The number of drops is still small: 124 drops out of 170,000 packets and 119 drops out of 132,000 packets. I could tune the QoS further to try to reduce the number of drops, but I'm not sure that would help much. What I do find interesting is that the 2960-TC and 3560 use the same QoS configuration and I see hardly any drops on them, but those switches only support 10/100.

What would you suggest?

Thank you.

Hi Bob,

I don't have exact numbers, but I think that might be because of the architecture difference between the 2960S and the other platforms. Both platforms use a shared-buffer architecture, which means all ports mapped to the same ASIC share the buffer space. However, the buffer size is different on each platform; the 2960S has a different buffer size than the 2960G. So I would increase the max threshold to 1000 first, before changing the buffer reserved for the queue.

HTH,
Lei Tian

Hi Lei,

I will increase the max threshold just for that queue and see what happens. If that doesn't work, I will try adjusting the buffers a little.

Here is the command I used:

mls qos queue-set output 1 threshold 4 200 300 100 1000

Here is the buffer adjustment that I will make if the above doesn't resolve the issue:

mls qos queue-set output 1 buffers 10 20 10 60 (current)

mls qos queue-set output 1 buffers 10 15 5 70 (proposed)

Thanks again for all your help.

Hi Bob,

You are welcome! Thanks for the great rating!

Yes, that action plan looks good. Please keep us updated.

Regards,

Lei Tian

Hi Lei,

I have posted the results of the QoS changes in a text file on my last post. Increasing the max threshold to 1000 on queue 4 worked like a charm. Most of my drop issues have been resolved as a result of your recommendations. Thank you so much for all your help; it is greatly appreciated.
