Total Output drops (c4500) high value on switchport

GigabitEthernet3/33 is up, line protocol is up (connected)
Hardware is Gigabit Ethernet Port, address is 78da.6ef8.1c98 (bia 78da.6ef8.1c98)
MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s, link type is auto, media type is 10/100/1000-TX
input flow-control is on, output flow-control is on
Auto-MDIX on (operational: on)
ARP type: ARPA, ARP Timeout 04:00:00
Last input 01:04:18, output never, output hang never
Last clearing of "show interface" counters never
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 10472
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 50000 bits/sec, 3 packets/sec
5 minute output rate 230000 bits/sec, 31 packets/sec
22937057 packets input, 10515380394 bytes, 0 no buffer
Received 150224 broadcasts (148147 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 input packets with dribble condition detected
49328226 packets output, 33190098381 bytes, 0 underruns
0 output errors, 0 collisions, 4 interface resets
0 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier
0 output buffer failures, 0 output buffers swapped out

The Windows PC connected to this switchport is experiencing latency issues. The PC does have storage/memory issues, but could this Total output drops value mean something is wrong from the switch's perspective? See the attached.

5 Replies

ammahend
VIP Alumni

That's a 0.045% drop rate; generally, up to a 1% drop rate is acceptable for TCP traffic. For the sake of clarification, clear the counters and monitor the output drops again to see whether they exceed 1% of input packets. If not, then the output drops are most likely not the cause of the PC latency.

 

-hope this helps-
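
Just to show where that figure comes from, here's a minimal back-of-the-envelope sketch in Python using the counters pasted in the question (nothing below comes from the switch itself; the ratio against output packets is simply an alternative view):

# Minimal sketch: drop percentage from the counters in the "show interface" output above.
# The values are copied from the pasted output; nothing here queries the switch.
total_output_drops = 10_472
packets_input = 22_937_057      # denominator used in the reply above (drops vs. input packets)
packets_output = 49_328_226     # alternative denominator: drops vs. packets actually sent

drop_pct_vs_input = 100 * total_output_drops / packets_input
drop_pct_vs_output = 100 * total_output_drops / packets_output

print(f"drops vs. input packets:  {drop_pct_vs_input:.3f}%")   # ~0.046%
print(f"drops vs. output packets: {drop_pct_vs_output:.3f}%")  # ~0.021%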

Although what @ammahend wrote is 100% correct, I want to emphasize his "generally" and "most likely"; both mean your drops could, in fact, be a problem even at this low average percentage.

The cited 1% acceptable drop rate is close to an ideal ongoing drop rate for an oversubscribed (too large RWIN) TCP flow in congestion avoidance mode.

In this case, we have a percentage calculated since the device started, which usually allows less busy periods to bring the average drop rate percentage down, often by very, very large amounts.

Ideally, what you want to see is the drop rate, millisecond by millisecond, when drops occur, along with bandwidth utilization for the same time interval.  Unfortunately, it's difficult and/or impractical (for multiple reasons) to obtain such data directly from network devices.  Often that level of analysis requires packet captures and off-line analysis of the captured data.
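
As a coarser, practical approximation of that idea, one can difference counter snapshots taken at short, known intervals. A minimal sketch, with hypothetical snapshot values and a hypothetical 60-second interval (only the first snapshot reuses the counters from the question):

# Minimal sketch of interval-based drop monitoring: instead of the lifetime average,
# compare two counter snapshots taken a known time apart.
# The second snapshot and the 60-second interval are hypothetical, not from the thread.
from dataclasses import dataclass

@dataclass
class Snapshot:
    seconds: float          # when the snapshot was taken (e.g. time.time())
    output_drops: int       # "Total output drops" counter
    packets_output: int     # "packets output" counter

def interval_drop_rate(prev: Snapshot, curr: Snapshot) -> tuple[float, float]:
    """Return (drops per second, drop percentage) for the interval between snapshots."""
    dt = curr.seconds - prev.seconds
    drops = curr.output_drops - prev.output_drops
    sent = curr.packets_output - prev.packets_output
    drops_per_sec = drops / dt if dt > 0 else 0.0
    drop_pct = 100 * drops / (sent + drops) if (sent + drops) > 0 else 0.0
    return drops_per_sec, drop_pct

# Hypothetical snapshots 60 seconds apart: a burst of 300 drops in one busy minute
# is significant even though it barely moves the since-boot average.
before = Snapshot(seconds=0.0,  output_drops=10_472, packets_output=49_328_226)
after  = Snapshot(seconds=60.0, output_drops=10_772, packets_output=49_330_226)

rate, pct = interval_drop_rate(before, after)
print(f"{rate:.1f} drops/sec, {pct:.2f}% of packets offered in this interval")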

BTW, the reason I note the above is that you wrote "experiencing latency issues", which can be caused by drops.  However, there's really insufficient information to determine whether those drops are the issue.

Also, I noticed Ethernet flow control is enabled.  This too can cause latency issues.  (Interestingly, flow control attempts to minimize drops by increasing latency - i.e. it tries to queue rather than drop.)

To summarize, I agree with @ammahend that the drop percentage is so low that those drops, alone, might not be causing any performance issues, but there's not sufficient information to guarantee those drops are irrelevant (again, @ammahend isn't guaranteeing they are not).

You also mention the storage/memory issues on the PC with the latency issues, and those might even be the sole cause.

Also, if WANs/MANs are involved, the impact of distance-based latency is often ignored.

As to the switch itself being a potential issue, that's possible too.  For the Catalyst 4500 series, much depends on the particular chassis, sup, and line cards being used.  I recall (?) a port might see drops not because that specific port is oversubscribed, but because an ASIC serving multiple ports, or the slot itself, is oversubscribed.

Unfortunately, running down a performance issue isn't always easy.  Laugh, and because it's not, it's not uncommon for one service group to blame another service group.

Thank you, Joseph, for the insight.  We found out that the computer had maxed out its storage and was low on memory, and the temp folder was too large.  After the field tech cleared it, the computer continued to crash and remained unstable, so we are going to replace the PC.  I also cleared the counters and will continue to monitor.

Thank you for the follow up information.

I was thinking of adding another reply asking whether the issue was seen with only one particular host; if so, for troubleshooting it's often best to focus initially on just that host (for problems unique to that host, such as you initially noted and confirmed in the follow-up).

If a performance issue is more widespread, affecting many or all hosts whose traffic transits a particular network device or link, that points more to a network issue.

But, even with "hints", like the forgoing, some issues can be a combination of the network device and specific hosts, such as flow control, because adjacent devices both must have it enabled to be actively used (and a adjacent device can be a host).  As device device defaults can differ, it can sometimes be unclear that it takes a combination of devices, with specific default configurations, to cause a problem, and, don't forget, something like flow control might be ideal for some neighboring device combinations, so, again, it's easy to miss the cause of some issues.

Also, again, glad to read that, it seems, the problem is just on a single host.

Laugh, often my network troubleshooting has been used to show the network isn't the problem, and to show where the problem looks to be.  It's much easier to just say the problem is due to "x" (like sun spots, laugh) than to do the work to run down the problem.

Networking engineers often get used to the network being blamed for all performance problems when it so often isn't; they get used to, more or less, "wasting" their time to "prove", once again, that it's not a network issue.

But, sometimes, it is a non-obvious network problem.  (My favorite story, which I like to tell: we had a MetroE hand-off to a tier-one service provider.  I noticed, and complained to the SP, that I didn't believe it was working quite right, as I seemed to be losing some packets across their cloud.  They checked, double-checked, and triple-checked: it's working just fine, there are no drops, you just don't understand how it's supposed to work [as the physical hand-off had more bandwidth than the CIR].  I stated, oh, I think I do understand, and I still believe it's not quite right.  Since I kept complaining, they went to my networking manager and complained that I kept complaining.  My network manager backed me: if Joe says it's not quite right, it probably isn't.  After 3 months, they finally discovered out-of-date firmware on one of the line cards in use, which had a minor bug: when utilization surpassed 97%, there were some/few additional, needless drops, which were also not recorded on their interface.  They updated the firmware and [surprise] now it worked as I expected.)

Thank you for this insight, it helps a lot.  I cleared the counters and will continue to monitor them; appreciate the effort.  We are also going to swap out the PC since it's already crashing.  Appreciate your info.