09-26-2023 07:19 AM
Hey guys,
I am seeing a lot of Total output drops. Can somebody have a look at this and help me understand whether this could be a buffer issue or not?
GigabitEthernet1/1/1 is up, line protocol is up (connected)
Hardware is Gigabit Ethernet, address is 1281.90a4.14b2
Description: switch
MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 17/255, rxload 3/255
Encapsulation ARPA, loopback not set
Keepalive not set
Full-duplex, 1000Mb/s, link type is auto, media type is 10/100/1000BaseTX SFP
input flow-control is on, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:00, output 00:00:01, output hang never
Last clearing of "show interface" counters 07:52:58
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 38566227
Queueing strategy: Class-based queueing
Output queue: 0/40 (size/max)
5 minute input rate 13472000 bits/sec, 3707 packets/sec
5 minute output rate 69291000 bits/sec, 6898 packets/sec
172109311 packets input, 139212434085 bytes, 0 no buffer
Received 916423 broadcasts (691158 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 691158 multicast, 0 pause input
0 input packets with dribble condition detected
224922164 packets output, 256013181663 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 pause output
0 output buffer failures, 0 output buffers swapped out
show platform hardware fed switch 1 qos queue stats interface gi1/1/1
DATA Port:0 Enqueue Counters
-------------------------------
Queue Buffers Enqueue-TH0 Enqueue-TH1 Enqueue-TH2
----- ------- ----------- ----------- -----------
0 0 0 10454549135 1647570361
1 0 0 0 19285688704
2 0 0 0 245398
3 0 0 0 1059894842
4 0 0 0 0
5 0 0 0 23371419879
6 0 0 0 0
7 0 0 0 0
DATA Port:0 Drop Counters
-------------------------------
Queue Drop-TH0 Drop-TH1 Drop-TH2 SBufDrop QebDrop
----- ----------- ----------- ----------- ----------- -----------
0 0 0 0 0 0
1 0 0 0 0 0
2 0 0 0 0 0
3 0 0 991388 0 0
4 0 0 0 0 0
5 0 0 2867654416 0 0
6 0 0 0 0 0
7 0 0 0 0 0
show platform hardware fed switch 1 qos queue config interface gi1/1/1
DATA Port:0 GPN:49 AFD:Disabled QoSMap:3 HW Queues: 0 - 7
DrainFast:Disabled PortSoftStart:3 - 306
----------------------------------------------------------
DTS Hardmax Softmax PortSMin GlblSMin PortStEnd
----- -------- -------- -------- -------- ---------
0 1 6 51 7 51 5 0 0 0 5 408
1 1 6 51 8 204 7 136 0 0 5 408
2 1 4 0 8 204 7 136 3 51 5 408
3 1 4 0 8 204 7 136 3 51 5 408
4 1 4 0 9 192 8 128 4 48 5 408
5 1 4 0 9 192 8 128 4 48 5 408
6 1 4 0 5 0 5 0 0 0 5 408
7 1 4 0 5 0 5 0 0 0 5 408
Priority Shaped/shared weight shaping_step
-------- ------------- ------ ------------
0 1 Shared 440 0
1 2 Shared 440 0
2 7 Shared 50 0
3 7 Shared 244 0
4 7 Shared 244 0
5 7 Shared 75 0
6 0 Shared 10000 0
7 0 Shared 10000 0
Weight0 Max_Th0 Min_Th0 Weigth1 Max_Th1 Min_Th1 Weight2 Max_Th2 Min_Th2
------- ------- ------- ------- ------- ------- ------- ------- ------
0 0 81 0 0 90 0 0 102 0
1 0 203 0 0 227 0 0 255 0
2 0 162 0 0 181 0 0 204 0
3 0 162 0 0 181 0 0 204 0
4 0 153 0 0 171 0 0 192 0
5 0 153 0 0 171 0 0 192 0
6 0 0 0 0 0 0 0 0 0
7 0 0 0 0 0 0 0 0 0
09-26-2023 09:49 AM
Sw(config)#qos queue-softmax-multiplier 400
Do this and monitor the output drops.
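For completeness, the full sequence might look like the following (the interface name is taken from the output above; on the 3650/3850 this command is global, so it affects all ports; clearing the counters just makes new drops easy to spot):

```
Sw# configure terminal
Sw(config)# qos queue-softmax-multiplier 400
Sw(config)# end
Sw# clear counters GigabitEthernet1/1/1
! ...then re-check the drop counter after some time:
Sw# show interfaces GigabitEthernet1/1/1 | include Total output drops
```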
09-26-2023 09:45 AM
- FYI : https://www.frameip.com/wp-content/uploads/dos-cisco-queue-drops.pdf
M.
09-26-2023 10:06 AM
I would suggest trying the value of 1200, which Cisco often recommends.
https://community.cisco.com/t5/switching/qos-queue-softmax-multiplier-1200/td-p/3048991
NB: 3650 and 3850 share the same architecture.
09-27-2023 04:04 AM
Also, if the 1200 doesn't solve your drops (enough), we can get into buffer tuning.
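As a rough sketch of what that buffer tuning could look like on this platform (the policy-map name here is hypothetical, and note this example collapses egress queuing to class-default with the full buffer share, which would replace whatever egress policy produced the 8-queue config shown above — check it against your existing QoS policy before applying):

```
Sw(config)# policy-map TUNE-BUFFERS             ! hypothetical name
Sw(config-pmap)# class class-default
Sw(config-pmap-c)# queue-buffers ratio 100      ! per-queue share of the port's buffers
Sw(config-pmap-c)# exit
Sw(config-pmap)# exit
Sw(config)# interface GigabitEthernet1/1/1
Sw(config-if)# service-policy output TUNE-BUFFERS
```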
10-03-2023 05:23 AM - edited 10-03-2023 05:51 AM
I am experiencing this on both sides of the trunk port. Should I use this command on both switches?
qos queue-softmax-multiplier
And what impact might it have on the overall performance of both switches?
These switches are running in production; one of them is our core L3 switch, which is trunked to an L2 access switch. Both trunk ports are showing a lot of packet drops.
10-03-2023 06:38 AM
I recommend adding it only on the switch where you see the output drops.
Does it affect performance? Not at all; this command only lets the switch use otherwise free memory. If there is no free memory, it will still drop packets until some is freed again.
So it will not impact performance.
10-03-2023 08:28 AM
"And what impact might it have on the overall performance of both switches?"
It would most impact the switch the command is applied to.
As to potential impact, there may be none, or it may make things worse or better. Hopefully the latter, but there is insufficient information to actually say (i.e. guarantee) what the result will be.
If the issue is drops due to transient burst congestion, and if you have buffers not being used only because you're bumping into logical queue size limits, this change might yield a dramatic decrease in drops. (This is often the case.)
If the issue is drops due to sustained congestion, and if you have buffers not being used only because you're bumping into logical queue size limits, there may be no change or perhaps even more drops.
For either of the two prior issues, if there are no free buffers, i.e. the logical and physical limits are the same, there will be no change.
Or, again for those same two issues, if there are free buffers but other interfaces occasionally make use of them, those buffers might no longer be available to them; the results for the currently congested interface might be as noted above, but the other interfaces may also start to drop packets.
In my first-hand experience, and from suggesting similar adjustments to others on these forums going back to the 3750 series, I have so far not seen or heard of anything beyond a positive result, with no negative side effects. Again, even Cisco recommends this.
I highly recommend at least trying it, but do monitor the results. If the overall impact is "worse", I would NOT expect it to be much worse, just slightly worse.
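A simple way to monitor is to compare the counters already shown earlier in this thread before and after the change (both are cumulative, so watch the deltas over time):

```
Sw# show interfaces GigabitEthernet1/1/1 | include Total output drops
Sw# show platform hardware fed switch 1 qos queue stats interface gi1/1/1
```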
Again, if there's no positive result, or not as positive as hoped for, there's still more that might be done to "tune" buffers, or other QoS "tuning" that may decrease drops on those interfaces. And/or providing additional bandwidth (often the first "fix" by many) might be considered too, e.g. 10G or EtherChannel.