6053 Views | 35 Helpful | 40 Replies

Cisco 3850 | 16.6.8 | Total output drops

Alan.A
Level 1

I have a 3850 switch running version 16.6.8, and a few interfaces are showing packet drops, for example the connections going to access points. After a bit of research I came up with the configuration below. Is this correct? These 3850 switches are used only for end users (wired, phones, and wireless; no servers).

conf t
!
qos queue-softmax-multiplier 1200
!
ip access-list extended ACL_QUEUE1
permit ip any any
!
class-map match-any CMAPQUEUE1
match access-group name ACL_QUEUE1
!
policy-map QUEUE1
class class-default
bandwidth percent 100


To be applied on interfaces going to access points and the uplink to the core:
interface GigabitEthernetX/0/X
service-policy output QUEUE1
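If you do apply this, one way to confirm the policy is attached and to watch whether the drop counter keeps growing is the sketch below (the interface number is just an example; substitute your AP-facing port):

```
show running-config | include softmax
show policy-map interface gigabitEthernet 1/0/48 output
show interfaces gigabitEthernet 1/0/48 | include Total output drops
```

Running the last command a few minutes apart shows whether drops are still incrementing after the change.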

 

This is a sample of an interface with an AP connected:

GigabitEthernet1/0/48 is up, line protocol is up (connected)
Hardware is Gigabit Ethernet, address is 00a5.bfdf.0b30 (bia 00a5.bfdf.0b30)
Description: ***AP8 ***
MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX
input flow-control is on, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input never, output 00:00:00, output hang never
Last clearing of "show interface" counters 8w2d
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 20083437
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 2000 bits/sec, 2 packets/sec
5 minute output rate 2000 bits/sec, 2 packets/sec
303584396 packets input, 167930756495 bytes, 0 no buffer
Received 5931092 broadcasts (5219146 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 5219146 multicast, 0 pause input
0 input packets with dribble condition detected
680183326 packets output, 528204738590 bytes, 0 underruns
0 output errors, 0 collisions, 1 interface resets
0 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 pause output
0 output buffer failures, 0 output buffers swapped out

40 Replies

Reza Sharifi
Hall of Fame

What happens if you remove the service-policy under the interface?

no service-policy output QUEUE1

Clear the counter after doing this command and test again.
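As a sketch of that test (using interface 1/0/48 as an example):

```
conf t
 interface GigabitEthernet1/0/48
  no service-policy output QUEUE1
 end
clear counters GigabitEthernet1/0/48
! later, re-check whether the counter has moved:
show interfaces GigabitEthernet1/0/48 | include Total output drops
```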

HTH

 

Hi Reza,

I have not applied it yet on the switch, but if the config looks correct I will try it tomorrow, excluding the "no service-policy output QUEUE1" under the interface, and observe the drops.

regards

Hi Alan,

OK, so if you have not applied it yet, the issue can't be the policy.

Based on the numbers below from the interface, it does not appear that the interface is congested. So I'm not sure the policy will help, but it is worth a try.

5 minute input rate 2000 bits/sec, 2 packets/sec
5 minute output rate 2000 bits/sec, 2 packets/sec

 

"Based on the below numbers from the interface, it does not appear that the interface is congested."

Possibly true at the time of that interface-stats "snapshot". However, drop statistics, especially if they are incrementing, generally indicate congestion at some points in time.

Also, BTW, you can have drops even while bandwidth rates are "low". For the reason why, you might want to read my recent reply to a post asking how they could be having drops when their usage rate was low. See https://community.cisco.com/t5/switching/identifying-a-10g-port/m-p/4762502
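The usual culprit at low average rates is microbursts: traffic arriving faster than the port can drain for a few milliseconds, filling the egress queue even though the 5-minute average stays tiny. If your release supports it, the per-queue drop counters can show which egress queue is actually discarding (interface number is an example):

```
show platform hardware fed switch active qos queue stats interface gigabitEthernet 1/0/48
```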

show platform qos queue config gigabitEthernet  <<- first I need to see this

then

queue-buffers ratio 100  <<- this will make the queue large enough; the bandwidth command is used to weight the traffic

show platform qos queue config gigabitEthernet  <<- last I need to see this, to see the effect of the command above

The command "show platform qos queue config gigabitEthernet" is not available in 16.X.X,

so I used this command instead: show platform hardware fed switch active qos queue config interface gigabitEthernet 1/0/48


XXXXXXXXX#show platform hardware fed switch active qos queue config interface gigabitEthernet 1/0/48
DATA Port:26 GPN:48 AFD:Disabled QoSMap:0 HW Queues: 208 - 215
DrainFast:Disabled PortSoftStart:2 - 1080
----------------------------------------------------------
DTS Hardmax Softmax PortSMin GlblSMin PortStEnd
----- -------- -------- -------- -------- ---------
0 1 5 120 2 480 6 320 0 0 4 1440
1 1 4 0 6 720 3 480 2 180 4 1440
2 1 4 0 5 0 5 0 0 0 4 1440
3 1 4 0 5 0 5 0 0 0 4 1440
4 1 4 0 5 0 5 0 0 0 4 1440
5 1 4 0 5 0 5 0 0 0 4 1440
6 1 4 0 5 0 5 0 0 0 4 1440
7 1 4 0 5 0 5 0 0 0 4 1440
Priority Shaped/shared weight shaping_step
-------- ------------- ------ ------------
0 0 Shared 50 0
1 0 Shared 75 0
2 0 Shared 10000 0
3 0 Shared 10000 0
4 0 Shared 10000 0
5 0 Shared 10000 0
6 0 Shared 10000 0
7 0 Shared 10000 0

Weight0 Max_Th0 Min_Th0 Weigth1 Max_Th1 Min_Th1 Weight2 Max_Th2 Min_Th2
------- ------- ------- ------- ------- ------- ------- ------- ------
0 0 478 0 0 534 0 0 600 0
1 0 573 0 0 641 0 0 720 0
2 0 0 0 0 0 0 0 0 0
3 0 0 0 0 0 0 0 0 0
4 0 0 0 0 0 0 0 0 0
5 0 0 0 0 0 0 0 0 0
6 0 0 0 0 0 0 0 0 0
7 0 0 0 0 0 0 0 0 0

policy-map QUEUE1
class class-default
bandwidth percent 100
priority level 1 percent 90 <<- add this command; this will ensure your traffic gets 90% of the hard buffer.

then show me the output of: show platform hardware fed switch active qos queue config interface gigabitEthernet

Just want to confirm, this is the complete configuration?

*******************************************

qos queue-softmax-multiplier 1200
!
ip access-list extended ACL_QUEUE1
permit ip any any
!
class-map match-any CMAPQUEUE1
match access-group name ACL_QUEUE1
!
policy-map QUEUE1
class class-default
bandwidth percent 100
priority level 1 percent 90

!To be applied on interfaces going to APs and the uplink to the core
interface GigabitEthernetX/0/X
service-policy output QUEUE1

*******************************************

 

Go ahead and show me the queue after adding the command.

One more point:

qos queue-softmax-multiplier 1200 <<- for a 3860 switch I think this value is wrong; you need 400 instead.

This is the result after changing from "qos queue-softmax-multiplier 1200" to "qos queue-softmax-multiplier 400".

BTW, this is a 3850 switch.


XXXXXXXXXXXXX#show platform hardware fed switch active qos queue config interface gigabitEthernet 1/0/48
DATA Port:26 GPN:48 AFD:Disabled QoSMap:0 HW Queues: 208 - 215
DrainFast:Disabled PortSoftStart:3 - 4320
----------------------------------------------------------
DTS Hardmax Softmax PortSMin GlblSMin PortStEnd
----- -------- -------- -------- -------- ---------
0 1 5 120 7 1920 6 320 0 0 5 5760
1 1 4 0 8 2880 3 480 2 180 5 5760
2 1 4 0 5 0 5 0 0 0 5 5760
3 1 4 0 5 0 5 0 0 0 5 5760
4 1 4 0 5 0 5 0 0 0 5 5760
5 1 4 0 5 0 5 0 0 0 5 5760
6 1 4 0 5 0 5 0 0 0 5 5760
7 1 4 0 5 0 5 0 0 0 5 5760
Priority Shaped/shared weight shaping_step
-------- ------------- ------ ------------
0 0 Shared 50 0
1 0 Shared 75 0
2 0 Shared 10000 0
3 0 Shared 10000 0
4 0 Shared 10000 0
5 0 Shared 10000 0
6 0 Shared 10000 0
7 0 Shared 10000 0

Weight0 Max_Th0 Min_Th0 Weigth1 Max_Th1 Min_Th1 Weight2 Max_Th2 Min_Th2
------- ------- ------- ------- ------- ------- ------- ------- ------
0 0 1625 0 0 1816 0 0 2040 0
1 0 2295 0 0 2565 0 0 2880 0
2 0 0 0 0 0 0 0 0 0
3 0 0 0 0 0 0 0 0 0
4 0 0 0 0 0 0 0 0 0
5 0 0 0 0 0 0 0 0 0
6 0 0 0 0 0 0 0 0 0
7 0 0 0 0 0 0 0 0 0

Should I leave the configuration like this and observe for the next couple of days whether I still see any drops?

Did your 3850 initially accept 1200? If so, use that. Basically, use whatever maximum your device accepts. (1200 might only be possible on later IOS versions.)
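To see what maximum your particular IOS version accepts, the context-sensitive help can be used (the reported range varies by platform and release):

```
conf t
qos queue-softmax-multiplier ?
```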

Oh, and what is the actual policy you've applied?

Yes, it was accepted. I will use 1200 then.