Cisco 9200 packet loss when using 10G SFP

carl_townshend
Spotlight

Hi All

We are having an issue whereby, when we plug a 10G SFP into our 9200L-48P-4X switch, we get severe performance loss; if we put in a 1G SFP the connection is fine.

We have tried multiple SFPs, different ports, different fibers, a software upgrade, etc., and we still get lots of packet loss; this is backed up by some Wireshark testing.

Software is 17.09.05 (Cupertino).

Any known issues here?
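
(For anyone hitting the same thing, a quick way to confirm whether the drops are happening on the switch itself, rather than on the optic or fiber, might look like the following; the interface name is just an example.)

show interfaces TenGigabitEthernet1/1/1 transceiver detail
show interfaces TenGigabitEthernet1/1/1 counters errors
show platform hardware fed switch active qos queue stats interface TenGigabitEthernet1/1/1

Healthy optical levels and no input errors, combined with climbing per-queue drop counters in the last command, would point at egress buffering rather than a bad SFP or fiber.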

33 Replies

carl_townshend
Spotlight

We will stick with auto QoS for now. We have tested different settings on the softmax multiplier command: rather than 1200, we managed to achieve the same performance with 300, which means individual ports cannot take all of the buffer space out of the shared buffer.
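
For reference, a minimal sketch of that tuning on IOS-XE (the interface in the verification command is just an example; as I understand it the multiplier is global, 100 is the default, 1200 the maximum, and it does not apply to the hardmax/priority buffer):

conf t
 qos queue-softmax-multiplier 300
end
show platform hardware fed switch active qos queue config interface TenGigabitEthernet1/1/1

The last command shows the resulting hardmax/softmax allocation per queue, which is an easy way to sanity-check what each setting actually buys you.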

Can anyone clarify? I read that each port gets its own dedicated buffer and can also draw on the global shared buffer during microbursts.

On the 9K series, from what I can see, the dedicated buffer is the priority queue (hardmax); the other queues then use the global shared buffer for all other traffic.

Can anyone confirm my understanding is correct?

Many thanks

I hope the article linked below is useful for configuring QoS service policies, policy maps, and traffic classes.

https://community.cisco.com/t5/networking-knowledge-base/implementation-of-qos-of-mqc-architechture-framework-and-syntax/ta-p/5268906
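
To give a flavour of the MQC pieces that article walks through, here is a generic sketch (the class names, markings, and percentages are placeholders, not a recommendation):

class-map match-any VOICE
 match dscp ef
policy-map UPLINK-OUT
 class VOICE
  priority level 1
 class class-default
  bandwidth remaining percent 100
interface TenGigabitEthernet1/1/1
 service-policy output UPLINK-OUT

The class-maps classify traffic, the policy-map attaches actions (priority, bandwidth) to each class, and the service-policy binds it to an interface in a given direction.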

 

Sisira

carl_townshend
Spotlight

To add to this, the strange thing is the effect it has on our download speeds. If we put a gig uplink in we get good speed; as soon as we swap in the 10G link, it's terrible. I get that the switch will drop traffic, as it will be coming in much faster than the 1G link can handle, but we are talking terrible performance: the transfer literally drops to kbps, not Mbps. I would have thought TCP's congestion control would handle this much better.

". . . but we are talking terrible performance, the transfer literally drops to kbps not mbps, I would have thought that the congestion control side of tcp would handle this much better."

The impact of TCP drops on flow rate can vary greatly based on several variables. For starters, there are different "flavors" of TCP (the flavors differ in how they manage rate). Then there's a huge difference between a half-speed back-off and being driven back into slow start. If working with large TCP buffers, lots of bandwidth might be wasted on unnecessary retransmissions unless SACK is enabled. These issues are often magnified on LFNs (long [distance, high-latency] fat [high-bandwidth] networks).
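
To put rough numbers on that, the classic Mathis et al. approximation for a single loss-limited TCP flow is rate ≈ MSS / (RTT × √p). A back-of-the-envelope example with assumed values (MSS 1460 bytes, RTT 100 ms):

p = 0.1% loss: 1460 / (0.1 × 0.0316) ≈ 462 kB/s ≈ 3.7 Mb/s
p = 1% loss:   1460 / (0.1 × 0.1)    ≈ 146 kB/s ≈ 1.2 Mb/s
p = 10% loss:  1460 / (0.1 × 0.316)  ≈ 46 kB/s  ≈ 0.37 Mb/s

Once drops get heavy enough to trigger retransmission timeouts and repeated slow starts, an individual transfer can easily end up in the kbps range described here.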

I've often encountered the situation where app users complain about poor network performance, but the network team says it's not the network because the bandwidth usage stats show low (even very low) utilization and drop percentages. Of course, such stats are often based on a 5-minute average for bandwidth usage and an overall, to-date drop percentage, neither of which will show a short burst of congestion.
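
On IOS-XE you can at least narrow the averaging window and watch the drop counter directly (the interface name is illustrative):

interface TenGigabitEthernet1/1/1
 load-interval 30

show interfaces TenGigabitEthernet1/1/1 | include rate|output drops

A 30-second load interval is still too coarse to catch a microburst, but a steadily climbing "Total output drops" counter alongside low average utilization is usually the giveaway.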

Yours is a textbook counterintuitive case, i.e. increasing bandwidth on a link decreases performance.