C3850 High Output Drops After Auto QOS

DJX995
Level 3

Cisco IOS XE Software, Version 16.09.04

 

After enabling auto QOS on my 3850, I'm showing high output drops and experiencing dropped packets streaming data from servers attached to a LACP group.

 

The only QOS setting on the interfaces is auto qos trust dscp.

From some quick research on the Internet, some people said I should apply this command: qos queue-softmax-multiplier 1200. I have applied it, with slightly worse results.

 

I had a similar issue on my 3750X and was able to work around it by adding this command to each interface: queue-set 2
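
For reference, this is roughly how I entered both commands; the interface range is only an example from my setup:

! 3850 (IOS XE 16.x): global soft-buffer multiplier, 1200 is the value I tried
configure terminal
 qos queue-softmax-multiplier 1200
end

! 3750X: the per-interface queue-set workaround I used
configure terminal
 interface range GigabitEthernet1/0/3 - 4
  queue-set 2
end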

 

I'm open to any and all suggestions.

Thanks!

18 Replies

etoromol
Cisco Employee
Hi FratianiD,

I would like to give you a hand. Please share the following outputs from the interfaces where QoS has been applied:

- show run interface x y/z
- show interface x y/z
- show policy-map interface x y/z

Regards.
Eduardo

 

interface GigabitEthernet1/0/3
 description LACP
 switchport access vlan 10
 switchport mode access
 auto qos trust dscp
 channel-group 2 mode active
 spanning-tree portfast
end

interface GigabitEthernet1/0/4
 description LACP
 switchport access vlan 10
 switchport mode access
 auto qos trust dscp
 channel-group 2 mode active
 spanning-tree portfast
end
GigabitEthernet1/0/3 is up, line protocol is up (connected) 
  Hardware is Gigabit Ethernet, address is 00bc.608b.7303 (bia 00bc.608b.7303)
  Description: LACP
  MTU 9198 bytes, BW 1000000 Kbit/sec, DLY 10 usec, 
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX
  input flow-control is on, output flow-control is unsupported 
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 00:00:08, output 00:00:00, output hang never
  Last clearing of "show interface" counters never
  Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 11741028
  Queueing strategy: Class-based queueing
  Output queue: 0/40 (size/max)
  5 minute input rate 8000 bits/sec, 9 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
     97900862 packets input, 86450995582 bytes, 0 no buffer
     Received 135967 broadcasts (132184 multicasts)
     0 runts, 0 giants, 0 throttles 
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog, 132184 multicast, 0 pause input
     0 input packets with dribble condition detected
     35955912 packets output, 24640138851 bytes, 0 underruns
     0 output errors, 0 collisions, 2 interface resets
     0 unknown protocol drops
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier, 0 pause output
     0 output buffer failures, 0 output buffers swapped out

GigabitEthernet1/0/4 is up, line protocol is up (connected) 
  Hardware is Gigabit Ethernet, address is 00bc.608b.7304 (bia 00bc.608b.7304)
  Description: LACP
  MTU 9198 bytes, BW 1000000 Kbit/sec, DLY 10 usec, 
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX
  input flow-control is on, output flow-control is unsupported 
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 00:00:21, output 00:00:00, output hang never
  Last clearing of "show interface" counters never
  Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 66792
  Queueing strategy: Class-based queueing
  Output queue: 0/40 (size/max)
  5 minute input rate 9000 bits/sec, 10 packets/sec
  5 minute output rate 17000 bits/sec, 29 packets/sec
     99534526 packets input, 86701044576 bytes, 0 no buffer
     Received 92882 broadcasts (91910 multicasts)
     0 runts, 0 giants, 0 throttles 
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog, 91910 multicast, 0 pause input
     0 input packets with dribble condition detected
     119548160 packets output, 146257621985 bytes, 0 underruns
     0 output errors, 0 collisions, 2 interface resets
     0 unknown protocol drops
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier, 0 pause output
     0 output buffer failures, 0 output buffers swapped out
GigabitEthernet1/0/3 

  Service-policy input: AutoQos-ppm-Trust-Dscp-Input-Policy

    Class-map: class-default (match-any)  
      354505429 packets
      Match: any 
      QoS Set
         dscp dscp table AutoQos-4.0-Trust-Dscp-Table

  Service-policy output: AutoQos-ppm-Output-Policy

    queue stats for all priority classes:
      Queueing
      priority level 1
      
      (total drops) 0
      (bytes output) 116173912

    Class-map: AutoQos-ppm-Output-Priority-Queue (match-any)  
      0 packets
      Match:  dscp cs4 (32) cs5 (40) ef (46)
      Match: cos  5 
      Priority: 30% (300000 kbps), burst bytes 7500000, 
      
      Priority Level: 1 

    Class-map: AutoQos-ppm-Output-Control-Mgmt-Queue (match-any)  
      0 packets
      Match:  dscp cs2 (16) cs3 (24) cs6 (48) cs7 (56)
      Match: cos  3 
      Queueing
      
      queue-limit dscp 16 percent 80
      queue-limit dscp 24 percent 90
      queue-limit dscp 48 percent 100
      queue-limit dscp 56 percent 100
      (total drops) 0
      (bytes output) 2637888
      bandwidth remaining 10%
      
      queue-buffers ratio 10

    Class-map: AutoQos-ppm-Output-Multimedia-Conf-Queue (match-any)  
      0 packets
      Match:  dscp af41 (34) af42 (36) af43 (38)
      Match: cos  4 
      Queueing
      
      (total drops) 0
      (bytes output) 0
      bandwidth remaining 10%
      queue-buffers ratio 10

    Class-map: AutoQos-ppm-Output-Trans-Data-Queue (match-any)  
      0 packets
      Match:  dscp af21 (18) af22 (20) af23 (22)
      Match: cos  2 
      Queueing
      
      (total drops) 0
      (bytes output) 33039381
      bandwidth remaining 10%
      queue-buffers ratio 10

    Class-map: AutoQos-ppm-Output-Bulk-Data-Queue (match-any)  
      0 packets
      Match:  dscp af11 (10) af12 (12) af13 (14)
      Match: cos  1 
      Queueing
      
      (total drops) 0
      (bytes output) 2197703
      bandwidth remaining 4%
      queue-buffers ratio 10

    Class-map: AutoQos-ppm-Output-Scavenger-Queue (match-any)  
      0 packets
      Match:  dscp cs1 (8)
      Queueing
      
      (total drops) 0
      (bytes output) 21188
      bandwidth remaining 1%
      queue-buffers ratio 10

    Class-map: AutoQos-ppm-Output-Multimedia-Strm-Queue (match-any)  
      0 packets
      Match:  dscp af31 (26) af32 (28) af33 (30)
      Queueing
      
      (total drops) 0
      (bytes output) 18244817
      bandwidth remaining 10%
      queue-buffers ratio 10

    Class-map: class-default (match-any)  
      0 packets
      Match: any 
      Queueing
      
      (total drops) 8660816
      (bytes output) 17683781953
      bandwidth remaining 25%
      queue-buffers ratio 25

GigabitEthernet1/0/4 

  Service-policy input: AutoQos-ppm-Trust-Dscp-Input-Policy

    Class-map: class-default (match-any)  
      354508609 packets
      Match: any 
      QoS Set
         dscp dscp table AutoQos-4.0-Trust-Dscp-Table

  Service-policy output: AutoQos-ppm-Output-Policy

    queue stats for all priority classes:
      Queueing
      priority level 1
      
      (total drops) 0
      (bytes output) 114878282

    Class-map: AutoQos-ppm-Output-Priority-Queue (match-any)  
      0 packets
      Match:  dscp cs4 (32) cs5 (40) ef (46)
      Match: cos  5 
      Priority: 30% (300000 kbps), burst bytes 7500000, 
      
      Priority Level: 1 

    Class-map: AutoQos-ppm-Output-Control-Mgmt-Queue (match-any)  
      0 packets
      Match:  dscp cs2 (16) cs3 (24) cs6 (48) cs7 (56)
      Match: cos  3 
      Queueing
      
      queue-limit dscp 16 percent 80
      queue-limit dscp 24 percent 90
      queue-limit dscp 48 percent 100
      queue-limit dscp 56 percent 100
      (total drops) 0
      (bytes output) 2399886
      bandwidth remaining 10%
      
      queue-buffers ratio 10

    Class-map: AutoQos-ppm-Output-Multimedia-Conf-Queue (match-any)  
      0 packets
      Match:  dscp af41 (34) af42 (36) af43 (38)
      Match: cos  4 
      Queueing
      
      (total drops) 0
      (bytes output) 0
      bandwidth remaining 10%
      queue-buffers ratio 10

    Class-map: AutoQos-ppm-Output-Trans-Data-Queue (match-any)  
      0 packets
      Match:  dscp af21 (18) af22 (20) af23 (22)
      Match: cos  2 
      Queueing
      
      (total drops) 0
      (bytes output) 43749079
      bandwidth remaining 10%
      queue-buffers ratio 10

    Class-map: AutoQos-ppm-Output-Bulk-Data-Queue (match-any)  
      0 packets
      Match:  dscp af11 (10) af12 (12) af13 (14)
      Match: cos  1 
      Queueing
      
      (total drops) 0
      (bytes output) 0
      bandwidth remaining 4%
      queue-buffers ratio 10

    Class-map: AutoQos-ppm-Output-Scavenger-Queue (match-any)  
      0 packets
      Match:  dscp cs1 (8)
      Queueing
      
      (total drops) 0
      (bytes output) 712948
      bandwidth remaining 1%
      queue-buffers ratio 10

    Class-map: AutoQos-ppm-Output-Multimedia-Strm-Queue (match-any)  
      0 packets
      Match:  dscp af31 (26) af32 (28) af33 (30)
      Queueing
      
      (total drops) 0
      (bytes output) 58812248
      bandwidth remaining 10%
      queue-buffers ratio 10

    Class-map: class-default (match-any)  
      0 packets
      Match: any 
      Queueing
      
      (total drops) 66792
      (bytes output) 144084318466
      bandwidth remaining 25%
      queue-buffers ratio 25

 

 

@etoromol
Any thoughts?
Thanks!

Hi FratianiD,

Please find my inputs below.

1. Considering these interfaces are members of the same port-channel, it is interesting
that one of them shows a far higher drop count than the other. It may be a good
idea to double-check the EtherChannel load-balancing algorithm (see the commands after point 4 below).

2. Let's focus on interface GigabitEthernet1/0/3, which has the higher drop count.

GigabitEthernet1/0/3 is up, line protocol is up (connected)
Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX
input flow-control is on, output flow-control is unsupported
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 11741028
35955912 packets output, 24640138851 bytes, 0 underruns

3. The output-queue drop rate is equal to 32.653% of the total output traffic
flowing through this interface (11,741,028 drops against 35,955,912 packets output).
If we take a look at the policy in place, we have the following:

GigabitEthernet1/0/3
Service-policy output: AutoQos-ppm-Output-Policy
Class-map: class-default (match-any)
0 packets
Match: any
Queueing

(total drops) 8660816
(bytes output) 17683781953
bandwidth remaining 25%
queue-buffers ratio 25

4. We can conclude that QoS, specifically the class-default queue, is causing 73.765% of the
packet drops in the global drop counter mentioned before (8,660,816 of 11,741,028).
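
As a reference for point 1, these commands show the current hashing method and, if you decide to change it, set a different one. The src-dst-ip value below is only an example; choose whatever best distributes your server traffic:

show etherchannel load-balance
show etherchannel 2 summary
!
configure terminal
 port-channel load-balance src-dst-ip
end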

Please test the following configuration and share with me the result:

configure terminal
auto qos srnd4
interface range 1/0/3 - 4
auto qos trust dscp
priority-queue out

I will touch base on this during the weekend and suggest a more thorough analysis
of your QoS configuration.

Regards,
Eduardo

1. Load balancing algorithm is default (no config)

interface Port-channel2
 description VLAN 10
 switchport access vlan 10
 switchport mode access
 spanning-tree portfast

 I tried your other commands but "auto qos srnd4" and "priority-queue out" are not commands on my switch.

3850(config)#auto qos ?
  global  Global keyword

3850(config)#auto qos global ?
  compact  Compact AutoQoS generated configurations

3850(config)#auto qos global compact ?
  <cr>  <cr>

3850(config-if-range)#p?
pagp power ptp

Hi FratianiD,

First of all, on the interface where you have enabled QoS, what kind of traffic are you actually carrying?

QoS means classification and prioritization.

Classification of voice, video, and data; prioritization of voice over video over data.

So if you are enabling QoS on that interface, and in auto mode at that, on a single link with a switch or other device connected behind it, the first question is what you are enabling it for. If your organization carries voice and video and the link has only the minimum bandwidth required for those services, you will need to provision proper bandwidth so that you do not face a drops issue.

Otherwise, if your organization only carries data, I would suggest you disable QoS and then observe your drops and link speed.

Thank you.

 

I'm having trouble understanding your English, but I don't believe this should be happening. On a 1 Gbps link (actually 2 Gbps here with LACP) with no other traffic, unclassified data should not be dropped, since there is nothing of higher priority to preempt it. My understanding is that QoS should only take effect when the link is saturated, and that is not the case here. I am seeing drops on an unsaturated link just because the data is not DSCP tagged.

jalejand
Cisco Employee

Which IOS version are you running?

If 16.x.x, please attach:

#show pla hard fed sw active qos queue config interface gig 1/0/3
#show pla hard fed sw active qos queue stats interface gig 1/0/3

If 3.x
#show platform qos queue config gi 1/0/3
#show platform qos queue stats gi 1/0/3


I rebooted the switch late last night to see if "qos queue-softmax-multiplier 1200" needed a reboot to take effect, so the counters have been reset as a consequence. It probably won't show many drops until later today, but here are the outputs in the meantime.

 

1. show pla hard fed sw active qos queue config interface gig 1/0/3

DATA Port:27 GPN:1352 AFD:Disabled FlatAFD:Disabled QoSMap:3 HW Queues: 216 - 223
  DrainFast:Disabled PortSoftStart:4 - 5400
----------------------------------------------------------
   DTS  Hardmax  Softmax   PortSMin  GlblSMin  PortStEnd
  ----- --------  --------  --------  --------  ---------
 0   1  7    45   9    45   5     0   0     0   6  7200
 1   1  4     0  10   360   9    80   4    30   6  7200
 2   1  4     0  11  1440   9    80   4    30   6  7200
 3   1  4     0  11  1440   9    80   4    30   6  7200
 4   1  4     0  11  1440   9    80   4    30   6  7200
 5   1  4     0  11  1440   9    80   4    30   6  7200
 6   1  4     0  11  1440   9    80   4    30   6  7200
 7   1  4     0  12  3600  10   200   5    75   6  7200
 Priority   Shaped/shared   weight  shaping_step  sharpedWeight
 --------   -------------   ------  ------------   -------------
 0      1     Shaped           850         255           0
 1      7     Shared           125           0           0
 2      7     Shared           125           0           0
 3      7     Shared           125           0           0
 4      7     Shared           312           0           0
 5      7     Shared          1250           0           0
 6      7     Shared           125           0           0
 7      7     Shared            50           0           0

   Weight0 Max_Th0 Min_Th0 Weigth1 Max_Th1 Min_Th1  Weight2 Max_Th2 Min_Th2
   ------- ------- ------- ------- ------- -------  ------- ------- ------
 0       0      71       0       0      80       0       0      90       0
 1       0     286       0       0     320       0       0     360       0
 2       0    1147       0       0    1282       0       0    1440       0
 3       0    1147       0       0    1282       0       0    1440       0
 4       0    1147       0       0    1282       0       0    1440       0
 5       0    1147       0       0    1282       0       0    1440       0
 6       0    1147       0       0    1282       0       0    1440       0
 7       0    2868       0       0    3206       0       0    3600       0

2. show pla hard fed sw active qos queue stats interface gig 1/0/3

DATA Port:27 Enqueue Counters
---------------------------------------------------------------------------------------------
Q Buffers          Enqueue-TH0          Enqueue-TH1          Enqueue-TH2             Qpolicer
  (Count)              (Bytes)              (Bytes)              (Bytes)              (Bytes)
- ------- -------------------- -------------------- -------------------- --------------------
0       0                    0                  128              5726106                    0
1       0                    0                    0                    0                    0
2       0                    0                    0                    0                    0
3       0                    0                    0                    0                    0
4       0                    0                    0                    0                    0
5       0                    0                    0                    0                    0
6       0                    0                    0                    0                    0
7       0                    0                    0               287220                    0
DATA Port:27 Drop Counters
-------------------------------------------------------------------------------------------------------------------------------
Q             Drop-TH0             Drop-TH1             Drop-TH2             SBufDrop              QebDrop         QpolicerDrop
               (Bytes)              (Bytes)              (Bytes)              (Bytes)              (Bytes)              (Bytes)
- -------------------- -------------------- -------------------- -------------------- -------------------- --------------------
0                    0                    0                    0                    0                    0                    0
1                    0                    0                    0                    0                    0                    0
2                    0                    0                    0                    0                    0                    0
3                    0                    0                    0                    0                    0                    0
4                    0                    0                    0                    0                    0                    0
5                    0                    0                    0                    0                    0                    0
6                    0                    0                    0                    0                    0                    0
7                    0                    0                    0                    0                    0                    0

We're seeing a similar issue on a 9300, with total output drops incrementing once auto QoS is enabled. We need QoS for WebEx and VoIP, but we see exactly the same behavior, with traffic being dropped from the class-default queue. The softmax multiplier is adjusted to 1200. My TAC engineer suggested removing auto QoS (8 queues reduced to 2) and creating a policy to allocate a certain amount of bandwidth/priority to certain traffic (for us, WebEx is AF41 and EF) and letting the rest of the traffic in that queue fall where it may. I am reluctant to do so since WebEx and VoIP are working as expected. Have you been able to set up a SPAN session to see what traffic is getting dropped? Is there a business requirement to have auto QoS enabled? We are setting up a SPAN/RSPAN soon to monitor that dropped traffic. Thanks.
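
If it helps, the general shape of what our TAC engineer described would be something like the sketch below. The class and policy names and the 30% priority figure are my placeholders rather than TAC's exact policy, and since an interface takes only one output service-policy, applying this would replace the auto QoS output policy:

class-map match-any WEBEX-RT
 match dscp ef
 match dscp af41
!
policy-map CUSTOM-OUTPUT
 class WEBEX-RT
  priority level 1 percent 30
! everything else remains in class-default and shares the leftover bandwidth
!
interface GigabitEthernet1/0/1
 service-policy output CUSTOM-OUTPUT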

All unclassified traffic is getting affected. I'm not sure why, because I thought QoS was only supposed to kick in when there was no bandwidth left. I tested this on a dedicated 1 Gbps link and it still throttles unclassified traffic and drops it.

QoS will help when there is congestion, to ensure certain traffic is prioritized, but it is in use and queuing your traffic at all times. If there are no business requirements, I'd remove auto QoS and use softmax multiplier 1100 on the port. See if you can create a custom service policy for CS0 traffic and allocate 20% (200 Mbps) of interface bandwidth to see if that helps.
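
Roughly what I mean for the CS0 idea, as a sketch only; the names are placeholders, the 20% is just the figure above and would need validating against your real traffic, and it would replace whatever output policy is already on the port:

class-map match-any CS0-DATA
 match dscp default
!
policy-map PROTECT-CS0
 class CS0-DATA
  bandwidth percent 20
!
interface GigabitEthernet1/0/3
 service-policy output PROTECT-CS0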

What if I just want the switch to not strip the DSCP tags? How do I tell it to keep the tags?

I think you'd have to create class-maps, define the traffic/services there, and attach them to a service-policy on the interface, which doesn't seem scalable and would be difficult to support. If the main goal is to preserve the tags and you're concerned about dropped best-effort traffic, I'm not sure how to move forward. Is there any way to inspect the traffic to see what the source of the drops is? Are any other ports on the switch having issues like this as well? Is auto QoS enabled throughout the network? Setting up QoS took quite some time for us and a lot of tinkering. Thanks.