06-17-2025 01:27 PM - edited 06-17-2025 01:34 PM
Hi,
On a Cisco Catalyst 9200L switch, under the interface connected to a modern Access Point, I’m observing some packet drops as shown in the output below.
Question:
Is it a common behavior for all the ports to drop packets like this, and could we tune any settings to reduce or prevent these drops?
-----
#show interfaces Gi3/0/44
GigabitEthernet3/0/44 is up, line protocol is up (connected)
MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec
Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 736
Queueing strategy: fifo
5 minute input rate 932000 bits/sec, 320 packets/sec
5 minute output rate 1383000 bits/sec, 393 packets/sec
24832456 packets input
0 runts, 938 giants, 0 throttles
938 input errors, 0 CRC
52196632 packets output
Thank you!
06-17-2025 02:13 PM
736 / (736 + 52,196,632) = about 0.00141% - probably not anything to be overly concerned about.
Is it a common behavior for all the ports to drop packets like this, and could we tune any settings to reduce or prevent these drops?
Common? - Hard to say, as drops can be very variable.
Possible to reduce? - Maybe, but given the very low overall percentage you have, so far, it's unlikely to be worth pursuing.
Your giants are a more interesting issue. Possibly someone is occasionally sending you jumbo frames, but those too are rather rare, so running that down might not be worthwhile either.
What you might want to monitor is daily percentages, watching for any jumps.
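One simple way to track that (just a sketch of the idea, using standard IOS/IOS-XE exec commands): record the totals once a day, compute the day's percentage from the deltas, then reset the counters so the next reading covers a single day.
-----
! Record the current totals (output drops and packets output)
show interfaces GigabitEthernet3/0/44 | include drops|packets output
! Reset so tomorrow's reading covers just one day
clear counters GigabitEthernet3/0/44
-----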
06-17-2025 02:41 PM - edited 06-17-2025 02:43 PM
Output drops are common in networks where links are oversubscribed, which basically describes every non-trivial, stat-muxed, packet-switched network. The real question is whether the drops are negatively impacting the SLAs for your network users' applications [Digression: all networks have SLAs, whether explicit or implicit]. In your specific case, this particular interface dropped 736 packets while successfully transmitting 52196632 packets. The loss rate is 736 / (736 + 52196632), or 0.0014%; so no, the loss rate on this interface is having negligible impact on your users' apps.
What if the loss rate were higher? Let's say it is 1%, for the sake of discussion; would this have a negative impact? Different types of apps tolerate loss rates differently, so it could very well be having a negative impact on the more sensitive apps. This differentiation in app sensitivity to transport characteristics such as loss rate, latency, and jitter (variation in latency) leads to dividing apps into different classes with different transport requirements, which in turn leads to explicitly expressing the delivery of app traffic classes as SLAs. How, then, do you engineer an oversubscribed network to deliver on multi-class SLAs? QoS. If you don't have a QoS policy in your network now, it is time to start considering one.
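For illustration only - the class and policy names below are invented, and the exact queuing options available vary by platform and IOS XE release - a minimal MQC egress policy on a Catalyst 9000-series port might look something like this:
-----
class-map match-any REALTIME
 match dscp ef
!
policy-map AP-EGRESS
 class REALTIME
  priority level 1
 class class-default
  bandwidth remaining percent 100
!
interface GigabitEthernet3/0/44
 service-policy output AP-EGRESS
-----
The priority queue services marked real-time traffic first during bursts, which is exactly when tail drops occur; everything else shares the remaining bandwidth.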
What about the received giants? The ratio is 938 giants / 24832456 received packets, or 0.0038%; again, not an issue for this interface per se. Some device(s) are very occasionally sending packets larger than this interface expects.
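If you want a quick look at what the switch expects (nothing more than a sanity check), the configured MTU is visible with:
-----
show system mtu
show interfaces GigabitEthernet3/0/44 | include MTU
-----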
[Edit: While I was typing, Joe was posting a perfectly good answer]
06-17-2025 08:30 PM
[Edit: While I was typing, Joe was posting a perfectly good answer]
Thank you, Jim. If your initial post was before you saw mine, it's interesting how much we overlap. ; )
Regardless, Jim, you make some excellent (additional) points.
One subtle difference, though: although we come up with the same overall drop percentage, I say it's probably not an issue, while Jim says the impact is negligible.
Why did I say "probably"?
Because we don't know anything about the apps being used, and we don't know how these drops were "distributed" over time.
In theory, all those drops could have happened in the few seconds before you obtained the interface stats. If so, the overall average remains the same very low drop percentage, but the loss could have been very high, and very impactful, for those few seconds. I think that's very, very (very) unlikely to be what happened, but we really don't know. So, Jim is very, very (very) likely correct that any impact was negligible, but that's just probably the case.
I mention this because, if you don't understand that drops can be very impactful, even at low overall averages, you may not believe users complaining about sporadic poor performance. (If only I had the proverbial nickel for every time I heard something like: our users cannot be having performance issues, because our bandwidth monitoring shows our peak usage never even exceeds 30%. [Also: we need to obtain more bandwidth, because our bandwidth monitoring shows we pretty often hit 80%.])
Jim mentions considering QoS. QoS can "manage" traffic, providing much better consistency in app performance.
BTW, since you mention the port is feeding a wireless AP: wireless makes wired performance considerations trivial in comparison.
To recap, from the numbers you posted, both the egress drops and ingress giants are probably negligible. ; )
06-17-2025 02:48 PM
@bravealikhan wrote: 938 giants
That's MTU being received by the switch.
Not sure if PMTDU is supported on 9200 switch or not.
06-17-2025 08:39 PM
That's MTU being received by the switch.
Possibly better stated like:
That's a too large MTU being received by the switch.
Not sure if PMTDU is supported on 9200 switch or not.
Suspect the foregoing was a simple typo; believe PMTUD was intended.
If so, I would expect it to be supported, but it's unclear, to me, how it's applicable in this instance.
06-18-2025 04:25 AM
A mismatched MTU setting can lead to these output drops and giants.
MHM
06-18-2025 07:07 AM
A mismatched MTU setting can lead to these output drops and giants.
So, you're saying too-large egress frames are also counted on the interface output drop counter? If so, it makes sense, but I was unable to, just now, find supporting documentation. Do you have any documentation references, or is this something you've seen first hand?
I don't doubt you, but it's something I've never considered. : (
Also, you're not able to distinguish between congestion drops and too-large frame drops? (Possibly congestion drops can be seen in the ASIC QoS stats.)
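On the Catalyst 9000 series, per-queue enqueue/drop counters can be read from the forwarding ASIC; a sketch (exact keywords can vary slightly by release):
-----
show platform hardware fed switch active qos queue stats interface GigabitEthernet3/0/44
-----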
BTW, if all the drops are MTU related (interestingly, the two drop counts are in the same ballpark), either the other devices need to use the correct MTU or you need to configure the network to support the larger MTU.
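If supporting the larger MTU is the route taken, the change would be along these lines (only a sketch; 9198 is the usual Catalyst 9000 maximum):
-----
! System MTU is global on these switches; depending on platform and
! release, a reload may be needed before it takes effect.
configure terminal
 system mtu 9198
end
-----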