02-18-2025 07:11 AM
Hi All
We are having an issue whereby, when we plug a 10G SFP into our 9200L-48P-4X switch, we see severe performance loss; if we put in a 1G SFP, the connection is fine.
We have tried multiple SFPs, different ports, different fibers, a software upgrade, etc., and we still get a lot of packet loss; this is backed up by Wireshark captures.
Software is IOS XE 17.09.05 (Cupertino).
Any known issues here?
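For context, the sort of interface-level checks involved (TenGigabitEthernet1/1/1 is only a placeholder for the uplink in question):
  show interfaces TenGigabitEthernet1/1/1                     ! rates and total output drops
  show interfaces TenGigabitEthernet1/1/1 counters errors     ! CRC/FCS errors vs. a clean link
  show interfaces TenGigabitEthernet1/1/1 transceiver detail  ! optic light levels and alarm thresholds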
02-21-2025 06:57 AM
We will stick with auto QoS for now. We have tested different values on the softmax multiplier command: rather than 1200, we managed to achieve the same performance with 300, which suggests individual ports do not need to be able to take all of the space out of the shared buffer.
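For reference, the only change tested was the global multiplier, along these lines (300 is simply the value that gave us the same result as 1200):
  configure terminal
   qos queue-softmax-multiplier 300
  end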
Can anyone clarify? I have read that each port gets its own dedicated buffer and can also use the global shared buffer during microbursts.
On the 9K series, from what I can see, the dedicated buffer (hardmax) is for the priority queue; the other queues then use the global shared buffer for all other traffic.
Can anyone confirm my understanding is correct?
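For anyone checking the same thing, the per-queue hardmax/softmax values can be viewed with a command along these lines (interface name is only an example; exact syntax may vary by platform and release):
  show platform hardware fed switch active qos queue config interface TenGigabitEthernet1/1/1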
Many thanks
03-09-2025 05:07 AM - edited 03-10-2025 04:58 PM
I hope the attached article linked below is useful for configuring a QoS service-policy, policy-map, and traffic classes.
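As a generic skeleton only (not the article's exact configuration; class and policy names are placeholders):
  class-map match-any VOICE
   match dscp ef
  policy-map UPLINK-OUT
   class VOICE
    priority level 1
   class class-default
    bandwidth remaining percent 100
  interface TenGigabitEthernet1/1/1
   service-policy output UPLINK-OUT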
02-23-2025 03:38 AM
To add to this, the strange thing is the effect it has on our download speeds. If we put a gig uplink in, we get good speed; as soon as we swap to the 10G link, it's terrible. I get that the switch will drop traffic, since it arrives much faster than the 1 Gig link can handle, but we are talking terrible performance: the transfer literally drops to kbps, not Mbps. I would have thought the congestion-control side of TCP would handle this much better.
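As a rough illustration of the scale of the mismatch (the buffer figure is only an assumption for the sake of the arithmetic): traffic arriving at 10 Gbps and draining at 1 Gbps leaves a 9 Gbps excess, so if roughly 6 MB of shared buffer were available to the egress port, it would absorb only about (6 MB x 8) / 9 Gbps, roughly 5 ms of burst, before tail drops start; anything sustained beyond that becomes straight packet loss and repeated TCP back-off.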
02-23-2025 04:40 AM
". . . but we are talking terrible performance, the transfer literally drops to kbps not mbps, I would have thought that the congestion control side of tcp would handle this much better."
The impact TCP drops have on flow rate can vary greatly based on several variables. For starters, there are different "flavors" of TCP (the flavors differ mainly in how they manage the sending rate). Then there's a huge difference between a half-speed back-off and being driven back into slow start. If working with large TCP buffers, lots of bandwidth might be wasted on unnecessary retransmissions unless SACK is enabled. These issues are often magnified on LFNs (long [distance, high-latency] fat [high-bandwidth] networks).
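If you want to rule the host side in or out, on a Linux sender the congestion-control flavor and SACK setting can be checked with standard sysctls (values will of course vary per system):
  sysctl net.ipv4.tcp_congestion_control
  sysctl net.ipv4.tcp_available_congestion_control
  sysctl net.ipv4.tcp_sack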
I've often encountered the situation where app users are complaining about poor network performance, but the network team will say it's not the network 'cause our bandwidth usage stats show low (even very low) utilization and drop percentage. Of course, such stats are also often based on a 5-minute average for bandwidth usage and an overall, to-date drop percentage, which can completely hide short bursts of congestion.
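On the Cisco side, one way to make those stats less misleading is to shorten the interface load interval and watch output drops directly (interface name is only an example):
  configure terminal
   interface TenGigabitEthernet1/1/1
    load-interval 30
  end
  show interfaces TenGigabitEthernet1/1/1 | include rate|drops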
Yours is a textbook case of the counter-intuitive result where increasing bandwidth on a link decreases performance.