
Catalyst 2960 drops packets after enabling QoS

HyeonCheol Cho
Level 1

Hello there,

Could someone help explain why enabling QoS causes ports on 2960 switches to drop packets even though there is no bandwidth contention?

[Image: In.Out.Bits.per.Sec.jpg (output bits/second of the client's switchport)]
[Image: QoSQueueDrops.jpg (packet drops at threshold 1, queue 2; queue 3 in the config)]

We are using Catalyst 2960 switches and are experiencing issues with QoS. After enabling QoS, we see a lot of packet drops, which makes the network slow. The slow performance appears to be related to the egress queuing on each port.

Cisco IOS Software, C2960 Software (C2960-LANBASEK9-M), Version 12.2(50)SE5

WS-C2960-48TT-L

switch_06#show inter fa0/1 | in drops|queue|output

  input flow-control is off, output flow-control is unsupported

  Last input never, output 00:00:01, output hang never

  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 10512

  Output queue: 0/0 (size/max)

  30 second output rate 1000 bits/sec, 1 packets/sec

     1970587 packets output, 838210463 bytes, 0 underruns

     0 output errors, 0 collisions, 2 interface resets

     0 lost carrier, 0 no carrier, 0 PAUSE output

     0 output buffer failures, 0 output buffers swapped out

switch_06#show mls qos

QoS is enabled

QoS ip packet dscp rewrite is enabled

switch_06#show mls qos inter fa0/1 queueing

FastEthernet0/1

Egress Priority Queue : enabled

Shaped queue weights (absolute) :  25 0 0 0

Shared queue weights  :  1 50 49 1

The port bandwidth limit : 100  (Operational Bandwidth:100.0)

The port is mapped to qset : 1

switch_06#show mls qos queue-set 1

Queueset: 1

Queue     :    1     2     3     4
----------------------------------

buffers   :   25    25    25    25

threshold1:  100   100   100   100

threshold2:  100   100   100   100

reserved  :   50    50    50    50

maximum   :  400   400   400   400

switch_06#show platform port-asic stats drop fa0/1

  Interface Fa0/1 TxQueue Drop Statistics

    Queue 0

      Weight 0 Frames 0

      Weight 1 Frames 0

      Weight 2 Frames 0

    Queue 1

      Weight 0 Frames 0

      Weight 1 Frames 0

      Weight 2 Frames 0

    Queue 2

      Weight 0 Frames 10431

      Weight 1 Frames 0

      Weight 2 Frames 0

    Queue 3

      Weight 0 Frames 81

      Weight 1 Frames 0

      Weight 2 Frames 0

4 Replies

Joseph W. Doherty
Hall of Fame

Disclaimer

The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.

Liability Disclaimer

In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.

Posting

What's likely happening: when you enable QoS, the switch splits the buffer resources it had for a single queue across four queues, which often causes premature drops.
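For example, the per-queue buffer split can be adjusted with the queue-set commands. The values below are purely illustrative, not a recommendation:

```
! Illustrative only: give more of the buffer pool to the queue that is
! dropping (queue 3 in the config, shown as "Queue 2" in the zero-based
! port-asic drop statistics).
! Syntax: mls qos queue-set output <qset-id> buffers <q1> <q2> <q3> <q4>
mls qos queue-set output 1 buffers 15 25 45 15
```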

Hi Joseph,

Thank you for your reply. However, I don't think it is right for the switch to drop packets while bandwidth is still available; that defeats the purpose of deploying QoS.

We deploy QoS to guarantee a configured amount of bandwidth to identified applications at times of congestion.

In addition, if it is true that the switch drops packets because the same amount of buffer resource is now shared across 4 queues, is there any clear methodology for calculating the right SRR weights, buffer sizes, and threshold values for each queue?

If Cisco is not going to fix it, then every site would have to come up with its own numbers for SRR weights, buffer sizes, and thresholds, since every site has a different mix of applications, and that would make the whole thing very challenging.


Posting

I recall (?) that in the 3750 series Cisco changed the defaults in later IOS images to mitigate a similar issue.  I don't know whether the same was done for the 2960s.  Your IOS version isn't very old, so if such a change exists, I would expect your version to have it.

I also see: "Shared queue weights  :  1 50 49 1".  Is this something you've configured, or the default?

Regarding clear methodologies, none that I'm aware of beyond monitoring and "tuning" as necessary.

If the problem is limited to just this one port, and you're not using queue set 2, you might assign the port to that queue set and experiment with different QoS settings.  The ones of interest will be the SRR settings, buffer allocations and threshold limits.

Looking at your current stats, you might first just try increasing the threshold limits.
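As a sketch of that approach, assuming the drops are in configuration queue 3 and leaving queue-set 1 untouched, the numbers here are illustrative only:

```
! Raise the drop thresholds for queue 3 in queue-set 2, then move the
! affected port onto queue-set 2 so other ports keep the defaults.
! Syntax: mls qos queue-set output <qset-id> threshold <queue-id>
!         <drop-thr1> <drop-thr2> <reserved> <maximum>
mls qos queue-set output 2 threshold 3 400 400 100 400
!
interface FastEthernet0/1
 queue-set 2
```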

jkeeffe
Level 2

We had the same problem on a 3750 switch a couple of years ago. Our PBX was connected to the switch, and the idea was to enable QoS to prioritize marked voice traffic.  Unfortunately, we didn't know enough about the switch, so when we enabled QoS and auto-QoS on the PBX ports, we started getting dropped phone calls and lots of dropped packets.

Come to find out that auto-QoS (if my memory is correct) set up the priority queue at 25% of bandwidth, which is 25 Mb/s on a 100 Mb/s port. We had over 300 IP phone calls going at the time (G.711), which exceeded the 25 Mb/s of the queue. The default behavior was to police the queue, so everything over 25 Mb/s was dropped. We found that adding the keyword 'priority' to the port configuration gave 100% of the queue to priority traffic. There doesn't have to be congestion in a queue for priority traffic to be policed. Adding 'priority' solved our dropped-packet problem.

Could this be happening with you?
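If it is, the egress expedite queue on these platforms is enabled per interface. A minimal sketch, with the interface name only as an example:

```
! Enable the egress priority (expedite) queue so queue 1 is serviced
! ahead of the SRR-weighted queues rather than being capped by a
! shaped weight.  The interface name here is an example only.
interface GigabitEthernet0/1
 priority-queue out
```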

