
Total output drops

MarioLieb
Level 1

Hi @all 

 

Six months ago we changed our switches from 2960 to 2960X. The 2960X switches are connected as a stack of up to 6 switches, with an uplink of a 2x10 Gbit EtherChannel. Since we changed the hardware, we have the issue that the access ports show output drops. I have no idea why this happens or how I can fix this problem. Any idea?

====================================================================================================

GigabitEthernet5/0/40 is up, line protocol is up (connected)
  Hardware is Gigabit Ethernet, address is 189c.5dd8.22a8 (bia 189c.5dd8.22a8)
  MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX
  input flow-control is off, output flow-control is unsupported
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input never, output 00:00:03, output hang never
  Last clearing of "show interface" counters never
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 1988762
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 202000 bits/sec, 61 packets/sec
  5 minute output rate 245000 bits/sec, 68 packets/sec
     500131403 packets input, 182892336075 bytes, 0 no buffer
     Received 424177 broadcasts (256028 multicasts)
     0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog, 256028 multicast, 0 pause input
     0 input packets with dribble condition detected
     593398679 packets output, 201759850373 bytes, 0 underruns
     0 output errors, 0 collisions, 1 interface resets
     0 unknown protocol drops
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier, 0 pause output
     0 output buffer failures, 0 output buffers swapped out

=========================================================================================

Gi5/0/40                     connected    217        a-full a-1000 10/100/1000BaseTX

=========================================================================================

Switch Ports Model              SW Version            SW Image
------ ----- -----              ----------            ----------
     1 52    WS-C2960X-48TD-L   15.0(2)EX4            C2960X-UNIVERSALK9-M
     2 52    WS-C2960X-48TD-L   15.0(2)EX4            C2960X-UNIVERSALK9-M
     3 52    WS-C2960X-48TD-L   15.0(2)EX4            C2960X-UNIVERSALK9-M
     4 52    WS-C2960X-48TD-L   15.0(2)EX4            C2960X-UNIVERSALK9-M
     5 52    WS-C2960X-48TD-L   15.0(2)EX4            C2960X-UNIVERSALK9-M
*    6 52    WS-C2960X-48TD-L   15.0(2)EX4            C2960X-UNIVERSALK9-M

8 Replies

InayathUlla Sharieff
Cisco Employee

Hi Mario,

What is connected at the other end?

1. Can you clear the counters and check how often the drops are increasing? (Example commands are sketched after this list.)

2. Also check whether any burst traffic is occurring that is causing the output drops to increase.

3. Is QoS enabled on the box?

4. Can you try changing the port, moving the device to a port on a different ASIC?
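For point 1, a minimal sketch of the commands involved (using the interface from the output above; the last command only reports per-queue statistics when QoS is enabled globally with "mls qos"):

clear counters GigabitEthernet5/0/40
show interfaces GigabitEthernet5/0/40 | include output drops
show mls qos interface GigabitEthernet5/0/40 statistics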

 

HTH

 

Hi insharie,

The end device is a Windows 7 client (HP 6000 Pro).

1. I will monitor the counter and tell you when it increases.

2. We will analyze that.

3. QoS is not enabled.

4. This behavior occurs on different ports and switches in the stack.

=========================================================================

interface GigabitEthernet5/0/40
 description Test Gi5/0/40
 switchport mode access
 authentication host-mode multi-host
 authentication port-control auto
 authentication periodic
 authentication timer reauthenticate 7200
 authentication violation protect
 mab
 dot1x pae authenticator
 dot1x timeout quiet-period 1
 dot1x timeout tx-period 5
 dot1x max-reauth-req 3
 spanning-tree portfast

 

Hello, now I have seen that the problem also occurs on a 2960+, so maybe there is a problem with the buffers?! Below are my QoS settings for a 2960+. As you can see, the drops are only on Q2 T3; only packets with DSCP 48 & 56 should land in this queue. Does anyone have an idea? On another switch with the same problem I took a Wireshark capture; the only packets I saw there with DSCP 48 were HSRP and OSPF hello packets...
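A standard way to confirm which output queue and WTD threshold each DSCP value maps to (the same map family configured below):

show mls qos maps dscp-output-q

One caveat when comparing outputs: "show mls qos interface ... statistics" numbers the output queues 0 through 3, while the "dscp-map queue" configuration numbers them 1 through 4.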

===========================================

Switch Ports Model              SW Version            SW Image
------ ----- -----              ----------            ----------
*    1 52    WS-C2960+48PST-L   15.0(2)SE6            C2960-LANBASEK9-M

===========================================

mls qos map cos-dscp 0 8 16 24 32 46 48 56
mls qos srr-queue input bandwidth 70 30
mls qos srr-queue input threshold 1 80 90
mls qos srr-queue input buffers 70 30
mls qos srr-queue input priority-queue 2 bandwidth 30
mls qos srr-queue input dscp-map queue 1 threshold 2 24
mls qos srr-queue input dscp-map queue 1 threshold 3 48 56
mls qos srr-queue input dscp-map queue 2 threshold 3 32 40 46
mls qos srr-queue output dscp-map queue 1 threshold 3 32 40 46
mls qos srr-queue output dscp-map queue 2 threshold 1 16 18 20 22 26 28 30 34
mls qos srr-queue output dscp-map queue 2 threshold 1 36 38
mls qos srr-queue output dscp-map queue 2 threshold 2 24
mls qos srr-queue output dscp-map queue 2 threshold 3 48 56
mls qos srr-queue output dscp-map queue 3 threshold 3 0
mls qos srr-queue output dscp-map queue 4 threshold 1 8
mls qos srr-queue output dscp-map queue 4 threshold 2 10 12 14
mls qos queue-set output 1 threshold 1 100 100 100 100
mls qos queue-set output 1 threshold 2 80 90 100 400
mls qos queue-set output 1 threshold 3 100 100 100 400
mls qos queue-set output 1 threshold 4 60 100 100 400
mls qos queue-set output 1 buffers 15 30 35 20
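For readers unfamiliar with the queue-set syntax above, an annotated reading (comments only, not configuration):

! mls qos queue-set output <qset> buffers <q1> <q2> <q3> <q4>
!   - splits the output buffer pool across queues 1-4, in percent
! mls qos queue-set output <qset> threshold <queue> <thr1> <thr2> <reserved> <maximum>
!   - thr1/thr2: WTD drop thresholds, in percent of the queue's buffers
!   - reserved: portion of the allocation held exclusively for this queue
!   - maximum: how far the queue may grow by borrowing from the common pool
! e.g. "threshold 2 80 90 100 400" drops threshold-1 traffic once queue 2 is
! 80% full and threshold-2 traffic at 90%, while allowing growth to 400%.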

class-map match-all CM-DEFAULT-Traffic
 match access-group name DEFAULT
class-map match-all CM-VOIP-Signal
 match ip dscp cs3
class-map match-all CM-VOIP-Traffic
 match ip dscp ef
!
policy-map PM-AccessPort
 class CM-VOIP-Traffic
  set dscp ef
  police 128000 8000 exceed-action drop
 class CM-VOIP-Signal
  set dscp cs3
  police 32000 8000 exceed-action drop
 class CM-DEFAULT-Traffic
  set dscp default
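A note on the policer lines above, in case the units are unfamiliar (annotation only):

! police <rate-bps> <burst-bytes> exceed-action drop
! "police 128000 8000 exceed-action drop" = 128 kbps committed rate with an
! 8000-byte burst allowance for the EF voice class; excess traffic is dropped.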

interface FastEthernet0/1
 switchport access vlan 239
 switchport mode access
 switchport voice vlan 158
 srr-queue bandwidth share 1 30 35 5
 priority-queue out
 mls qos trust dscp
 spanning-tree portfast
 service-policy input PM-AccessPort

==========================================================

interface FastEthernet0/14
 switchport access vlan 239
 switchport mode access
 switchport voice vlan 158
 srr-queue bandwidth share 1 30 35 5
 priority-queue out
 mls qos trust dscp
 spanning-tree portfast
 service-policy input PM-AccessPort
end

==================================================================

nunw-30-01-097#sh inter fa 0/14
FastEthernet0/14 is up, line protocol is up (connected)
  Hardware is Fast Ethernet, address is 689c.e28c.8c8e (bia 689c.e28c.8c8e)
  MTU 1500 bytes, BW 100000 Kbit/sec, DLY 100 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 100Mb/s, media type is 10/100BaseTX
  input flow-control is off, output flow-control is unsupported
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input never, output 00:00:00, output hang never
  Last clearing of "show interface" counters 3d20h
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 219322
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 9000 bits/sec, 7 packets/sec
  5 minute output rate 21000 bits/sec, 15 packets/sec
     19844021 packets input, 7793279618 bytes, 0 no buffer
     Received 25273 broadcasts (13441 multicasts)
     0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog, 13441 multicast, 0 pause input
     0 input packets with dribble condition detected
     32898556 packets output, 36209591913 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 unknown protocol drops
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier, 0 pause output
     0 output buffer failures, 0 output buffers swapped out

===========================================================

FastEthernet0/14

  dscp: incoming
-------------------------------
  0 -  4 :    19834972            0            0            0            0
  5 -  9 :           0            0            0            0            0
 10 - 14 :           0            0            0            0            0
 15 - 19 :           0            0            0            0            0
 20 - 24 :           0            0            0            0            0
 25 - 29 :           0            0            0            0            0
 30 - 34 :           0            0            0            0            0
 35 - 39 :           0            0            0            0            0
 40 - 44 :           0            0            0            0            0
 45 - 49 :           0            0            0            0            0
 50 - 54 :           0            0            0            0            0
 55 - 59 :           0            0            0            0            0
 60 - 64 :           0            0            0            0

  dscp: outgoing
-------------------------------
  0 -  4 :    31235685            0         2648            0            0
  5 -  9 :           0            0            0            0            0
 10 - 14 :           0            0            0            0            0
 15 - 19 :           0            0            0          732            0
 20 - 24 :           0            0            0            0            0
 25 - 29 :           0         8579            0            0            0
 30 - 34 :           0            0            0            0         2353
 35 - 39 :           0            0            0            0            0
 40 - 44 :           0            0            0            0            0
 45 - 49 :           0            0            0      1086925            0
 50 - 54 :           0            0            0            0            0
 55 - 59 :           0         5594            0            0            0
 60 - 64 :           0            0            0            0

  cos: incoming
-------------------------------
  0 -  4 :    19842897            0            0            0            0
  5 -  7 :           0            0            0

  cos: outgoing
-------------------------------
  0 -  4 :    31417398            0          732         8579         2353
  5 -  7 :           0      1086925            0

  output queues enqueued:
 queue:    threshold1   threshold2   threshold3
-----------------------------------------------
 queue 0:           0            0            0
 queue 1:       14344         4673      1468682
 queue 2:           0            0     31285357
 queue 3:           0            0       149593

  output queues dropped:
 queue:    threshold1   threshold2   threshold3
-----------------------------------------------
 queue 0:           0            0            0
 queue 1:           0            0            0
 queue 2:           0            0       219322
 queue 3:           0            0            0

Policer: Inprofile:            0 OutofProfile:            0

================================================================

nunw-30-01-097#sh mls qos queue-set
Queueset: 1
Queue     :       1       2       3       4
----------------------------------------------
buffers   :      15      30      35      20
threshold1:     100      80     100      60
threshold2:     100      90     100     100
reserved  :     100     100     100     100
maximum   :     100     400     400     400
Queueset: 2
Queue     :       1       2       3       4
----------------------------------------------
buffers   :      25      25      25      25
threshold1:     100     200     100     100
threshold2:     100     200     100     100
reserved  :      50      50      50      50
maximum   :     400     400     400     400

Joseph W. Doherty
Hall of Fame


The issue might be that you now have more bandwidth feeding a port than before.  Did you have dual 10G uplinks before?  If not, a 10G burst can micro-congest a gig edge port.  Or, now that you're stacked, the stack ports might deliver a micro burst to a gig edge port.

If this is the cause of your issue, it might be resolvable with a custom QoS configuration, which could configure buffers to be more "shared".

But before embarking on QoS tuning, you might want to determine whether your existing drops are truly detrimental to your applications.  I.e. 1988762 / 593398679 = 0.335%, and often anything below 1% is acceptable.
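If you do go down the tuning path, a minimal sketch of what more "shared" buffers could look like (the values are illustrative only, not a recommendation, and must be validated against your traffic; config queue 3 here corresponds to "queue 2" in the 0-based statistics output):

mls qos queue-set output 1 buffers 15 25 40 20
mls qos queue-set output 1 threshold 3 100 100 50 400

Lowering "reserved" frees buffers into the common pool, and a high "maximum" lets a bursting queue borrow from it.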

Do you mean that we have to configure QoS on the Gig interface? At the moment I have the counter in our monitoring, and I have configured QoS, so we will see ;-)

interface GigabitEthernet5/0/40
 description Test Gi5/0/40
 switchport mode access
 srr-queue bandwidth share 1 30 35 5
 priority-queue out
 authentication host-mode multi-host
 authentication port-control auto
 authentication periodic
 authentication timer reauthenticate 7200
 authentication violation protect
 mab
 mls qos trust dscp
 dot1x pae authenticator
 dot1x timeout quiet-period 1
 dot1x timeout tx-period 5
 dot1x max-reauth-req 3
 auto qos trust dscp
 spanning-tree portfast
end
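A quick way to watch, per queue, whether these changes help (standard show commands):

show mls qos interface GigabitEthernet5/0/40 statistics
show mls qos queue-set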

 

Mario

 


No, I meant you need to tune the QoS buffer settings, which are global.  However, as buffer settings are relative, you might also need to adjust interface QoS settings too.

BTW, you can configure either without enabling QoS (globally), but then they don't do anything.

Generally, on 2960 and 3560/3750 switches, default QoS results in interface drops not seen when QoS is disabled.  However, I've found (on 3560/3750s; it should be true on 2960s too) that you can sometimes tune buffers, with QoS enabled, to get even fewer drops than when QoS is disabled.
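One way to experiment without disturbing every port is to tune the second queue-set and assign only the affected interfaces to it (a sketch; the buffer and threshold values are placeholders to adapt):

mls qos queue-set output 2 buffers 15 25 40 20
mls qos queue-set output 2 threshold 3 100 100 50 400
!
interface GigabitEthernet5/0/40
 queue-set 2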

MarioLieb
Level 1

Hello @all,

Yesterday the output drops on the interface increased heavily, reaching "Total output drops: 4310536" within a time range of 10 minutes! On the monitoring system I can see that the traffic on the uplinks also increased at this time, so I think the reason for these drops is the peak on the uplink. But how can I avoid the drops?
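As a small aid for catching shorter peaks in the counters, the rate-averaging interval can be reduced from the default 5 minutes (a standard interface command, though even 30 seconds is far coarser than a real microburst):

interface GigabitEthernet5/0/40
 load-interval 30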

 

 

 

 


How to avoid drops?  Well, there are several approaches, which are not mutually exclusive.

If they are due to transient microbursts, sufficient buffering/queuing will absorb the bursts.  I've already described how this might be done with QoS tuning.  Or, you use a different device with more buffer resources.

If the cause is oversubscription, you might mitigate it by increasing bandwidth.  (Difficult to do for gig edge ports.)

Or, you tune hosts to stop sending such large bursts, and/or (if TCP) use a better stack that slows traffic when it detects a jump in latency.

Or, you use a "smarter" method of drop management.  This doesn't eliminate drops, but can reduce them.
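On these switches the drop-management knob is weighted tail drop (WTD): traffic mapped to a lower threshold is dropped earlier as a queue fills, sparing traffic mapped to higher thresholds. A hypothetical illustration (treating DSCP 8 as a scavenger class is an example, not something from this thread):

! drop DSCP 8 first, once queue 2 passes 40% full
mls qos srr-queue output dscp-map queue 2 threshold 1 8
mls qos queue-set output 1 threshold 2 40 90 100 400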

Or, you use a special traffic-shaping appliance that can spoof TCP and rate-control it without dropping packets.
