10-20-2011 08:52 AM - edited 03-07-2019 02:57 AM
I have a stack of 4 Cisco WS-C2960S-48FPS-L switches running c2960s-universalk9-mz.122-58.SE1 code. One of our network monitoring tools is indicating discards on a certain port on the switch. Upon further investigation I am seeing the Total output drops values change in a very odd manner.
The numbers seem to go from 573 to 1146 to 1719 then back down to 573 and it starts the same pattern over:
RRSO-A#sh int g4/0/20 | i Total output drops
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 573
RRSO-A#sh int g4/0/20 | i Total output drops
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 1146
RRSO-A#sh int g4/0/20 | i Total output drops
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 1719
RRSO-A#sh int g4/0/20 | i Total output drops
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 1719
RRSO-A#sh int g4/0/20 | i Total output drops
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 573
RRSO-A#
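Incidentally, the three cycling values are exact multiples of the first one (573 × 1, × 2, × 3), which a quick check confirms. On a stack, that pattern would be consistent with the same per-port count being summed a varying number of times across members, though that is only speculation on my part:

```python
# The cycling drop counts from the output above.
drops = [573, 1146, 1719]

# Each value is an exact multiple of the smallest one.
multiples = [d // 573 for d in drops]
remainders = [d % 573 for d in drops]
print(multiples)    # [1, 2, 3]
print(remainders)   # [0, 0, 0]
```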
The port utilization is quite low; the highest I've seen over the past 7 days is 3.5%, with Statseeker polling every 30 seconds. Yet the discards are bouncing all over the place.
I've searched through the bugs for 12.2(58)SE1 and didn't see anything.
Has anyone else seen behavior like this and been able to find the cause?
Regards,
Dave
10-20-2011 09:20 AM
Can you paste just the "show interface gigx/x" output of one of the interfaces here, along with "show interface gigx/x counter error" and the config of that port?
Do you see these counters incrementing?
Regards,
Ranganath
10-20-2011 09:40 AM
Hello Ranganath,
Thank you for your reply. Here is the output you requested:
RRSO-A#show interface gig4/0/20
GigabitEthernet4/0/20 is up, line protocol is up (connected)
Hardware is Gigabit Ethernet, address is b8be.bf76.0314 (bia b8be.bf76.0314)
MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX
input flow-control is off, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input never, output 00:00:22, output hang never
Last clearing of "show interface" counters never
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 1146
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
4603072 packets input, 2564855058 bytes, 0 no buffer
Received 31817 broadcasts (541 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 541 multicast, 0 pause input
0 input packets with dribble condition detected
14065070 packets output, 3566033153 bytes, 0 underruns
0 output errors, 0 collisions, 1 interface resets
0 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 pause output
0 output buffer failures, 0 output buffers swapped out
RRSO-A#
RRSO-A#show interfaces gig4/0/20 counter error
Port Align-Err FCS-Err Xmit-Err Rcv-Err UnderSize OutDiscards
Gi4/0/20 0 0 0 0 0 573
Port Single-Col Multi-Col Late-Col Excess-Col Carri-Sen Runts Giants
Gi4/0/20 0 0 0 0 0 0 0
RRSO-A#
RRSO-A#sh run int g4/0/20
Building configuration...
Current configuration : 437 bytes
!
interface GigabitEthernet4/0/20
switchport access vlan 2
switchport mode access
no logging event link-status
srr-queue bandwidth share 1 25 70 5
srr-queue bandwidth shape 3 0 0 0
priority-queue out
no snmp trap link-status
storm-control broadcast level 80.00 50.00
storm-control multicast level 80.00 50.00
spanning-tree portfast
spanning-tree bpduguard enable
service-policy input QOS-Conditionally-Trusted-Endpoint
end
RRSO-A#
I do see the drop numbers changing, but only in the pattern indicated in my first post.
Regards,
Dave
10-20-2011 09:56 AM
The "show inter gigx/x counters error" output shows that the UnderSize counter is incrementing, which is fairly self-explanatory: the packets are being dropped because they are smaller than the expected standard size.
However, you can still check this link for the exact explanation of the UnderSize counter:
http://www.cisco.com/en/US/products/hw/switches/ps700/products_tech_note09186a008015bfd6.shtml
So my suggestion would be to identify the device that is connected to this port.
Run a sniffer on this port to see the actual size of those packets.
From the capture, learn more about the packets and check whether you can do anything at the source to stop them being sent.
Hope this helps you troubleshoot the issue further.
Regards,
Ranganath
(Cisco)
10-20-2011 10:02 AM
Hello Ranganath,
The forums here alter copied and pasted text somewhat; that value is really under the OutDiscards column.
Regards,
Dave
10-20-2011 10:08 AM
Oh okay. Well, then these OutDiscards counters indicate that the switch could not buffer all the packets leaving that port at that moment in time, which happens when the traffic is bursty in nature. For example, if the output buffer can hold 100 packets and 120 packets hit that buffer at an instant, the port can store only 100; the rest are dropped and counted as OutDiscards.
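The buffer arithmetic described above can be sketched as a toy model (the numbers are purely illustrative, not actual 2960S buffer sizes):

```python
# Toy model of an output buffer that can hold BUFFER_SIZE packets.
# A burst larger than the buffer causes the excess to be dropped and
# counted as OutDiscards, even though average utilization stays low.
BUFFER_SIZE = 100

def enqueue_burst(burst_size, buffer_size=BUFFER_SIZE):
    """Return (packets_queued, packets_discarded) for one burst."""
    queued = min(burst_size, buffer_size)
    discarded = burst_size - queued
    return queued, discarded

# A 120-packet burst against a 100-packet buffer: 20 OutDiscards.
print(enqueue_burst(120))  # (100, 20)
```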
I also see you have QoS enabled. Can you do a "show mls qos int gigx/x stat" on that port and see on which queue the packets are getting dropped.
Edited: also get the output of "show mls qos".
Regards,
Ranganath
10-20-2011 10:13 AM
Hello Ranganath,
What I'm finding odd is that the values in the show interface output keep changing like that, cycling through the same values over and over. I've seen this exact behavior on other WS-C2960S switches we're using, but never on any other Cisco device. Aren't the Total output drops numbers supposed to just keep climbing? What is the rollover value there?
Here is the output of the command you requested:
Regards,
Dave
10-20-2011 10:30 AM
So you can see that you are getting packet drops on queue 2 at threshold 3, i.e. QoS is dropping the packets on the interface, and I don't see many packets going through queue 3. You could increase the buffer size on queue 2 and see whether that helps and stops the QoS drops. Do you know what traffic is enqueued on queue 2? "show mls qos maps dscp-output-q" will show you the DSCP mappings to the queues and thresholds.
Also check the buffer allocation on the ports by running "show mls qos queue-set 1". If possible, change the buffer allocation in queue-set 2, apply queue-set 2 to this interface, and monitor.
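If the drops really are on queue 2, the change described above might look roughly like this (the buffer percentages are purely illustrative values, not a tested recommendation; verify the syntax for your platform and release):

```
! Illustrative only: give queue 2 a larger share of the output buffers
! (the four values are percentages and must total 100)
mls qos queue-set output 2 buffers 15 40 30 15
!
interface GigabitEthernet4/0/20
 queue-set 2
```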
This is a good document to start with on troubleshooting this problem:
Hope these recommendations help.
Regards,
Ranganath.
10-20-2011 06:15 PM
I am sorry, I overlooked your question: "Aren't the Total output drops numbers supposed to just keep climbing? What is the rollover value there?" On this point I agree with you; this looks like a bug, and you should contact TAC about it.
Regards,
Ranganath
02-01-2012 12:05 PM
I am having a similar issue, except that my interface counter is varying much more wildly. Did anyone ever determine the root cause of this behaviour? I have seen it vary from 0 to 47 billion within seconds of the interface being reset.
We will be opening a TAC case to look into this, but I suspect we will likely just upgrade the IOS.
In case it matters, the switch is a WS-C3750V2-24TS-E stack running 12.2(58)SE2.
I cleared the counters and then ran 'sh int fa1/0/2 | include Total output drops' a few dozen times within a minute; the output was:
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 61070
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 61070
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 61070
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 61070
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 61070
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 4237326964
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 4237326964
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 4237326964
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 4237326964
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 4237326964
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 4237326964
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 61250
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 61250
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 61250
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 4237326994
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 4237326994
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 4237326994
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 61250
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 61250
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 61250
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 61250
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 61250
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 4237327227
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 4237327227
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 4237327227
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 4237327227
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 4237327227
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 4237327227
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 4237327227
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 4237327227
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 4237327227
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 61680
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 61680
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 61680
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 61680
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 61680
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 61680
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 61680
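One quick way to show these readings cannot be a genuine cumulative counter (a hypothetical check a monitoring script might apply, not part of Statseeker or any Cisco tool) is to test successive polls for monotonicity; a counter that was just cleared should never decrease:

```python
def is_monotonic(samples):
    """A cumulative drop counter should never decrease between polls
    (absent a manual clear or a rollover past the 32-bit maximum)."""
    return all(b >= a for a, b in zip(samples, samples[1:]))

# A few of the polled values from the output above.
polls = [61070, 4237326964, 61250, 4237326994, 61250, 61680]
print(is_monotonic(polls))      # False: the counter bounces instead of accumulating
print(max(polls) < 2 ** 32)     # True: even the huge values fit in an unsigned 32-bit counter
```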
02-02-2012 05:47 AM
Yes, that's a bug. You should still upgrade to 15.0(1)SE.
Regards,
Ranganath
02-02-2012 06:07 AM
Can you provide a link to the bug ID?
Dave
Sent from Cisco Technical Support iPhone App
02-02-2012 06:11 AM
CSCtq86186 - the bug ID.
02-02-2012 07:41 AM
Finally, a bug! I hadn't been able to find that until just now (though I hadn't checked in a while). Odd that Cisco is only fixing it in the 15.x code. I work for a healthcare organization, and we don't just upgrade across major code versions like this. Once we find code that works without major issues, we continue to use it because the network stays up, and the 12.2 code was problematic, to say the least, until the version we now run on the 2960s.
To me, the 15.x code is still too new and untested in the real world to roll out across our entire state-wide network.
I'll continue to press our Cisco sales rep on this. Thank you for the bug ID!
Dave