
Another case of slow performance due to oversubscription (3850 - 2960x)

l.buschi
Level 2

Hello,

My network is made of 3850 switches with 10G interfaces at the core, and 2960X switches with 10G fiber uplinks and 1G user-facing interfaces.

Communication from a 10G server to a client is very slow, while the reverse direction is fine (the servers are Cisco UCS). If I set the servers' NICs to 1G, everything is OK. My customers are complaining about this.

On the user-facing interfaces I notice many output drops.

I think this is an oversubscription problem. I opened a case with Cisco TAC for the network, without any result.

What can I do in this case?

I have no QoS implemented at the moment.

What is the standard way to proceed in these cases?

 

Thanks

Johnny

 

7 Replies

Joseph W. Doherty
Hall of Fame
Cisco access switches, like the 2960 series, often have insufficient interface buffers to handle bursts, especially at their default settings. From what you've described, the 10G links allow packet bursts that overrun the 2960's port-buffer capacity.

If the 3850 supports shaping, it could be used to slow bursts and avoid overrunning the 2960's buffer capacity. (An advantage of shaping: you might find that although the 2960 cannot handle 10G bursts, it can handle more than 1G.)
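
As a rough sketch (the policy name, interface, and percentage here are only illustrative, and assume the 3850's MQC supports shaping on egress):

! Hypothetical shaper applied outbound on a 3850 downlink toward a 2960:
policy-map SHAPE-TOWARD-2960
 class class-default
  shape average percent 20
!
interface TenGigabitEthernet1/0/1
 service-policy output SHAPE-TOWARD-2960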

I believe that with QoS enabled, a 2960 supports buffer tuning. Such tuning might handle 10G bursts much better.

Thanks Joseph,

do you have any example of how to configure QoS to handle bursts?

(Keep in mind that at the moment I don't have any QoS applied to my network.)

 

I believe the 2960 series is much like the 3560/3750 series.

If so, after you enable QoS, I would first suggest pushing all the logical tail-drop limits to their maximum values. If that doesn't mitigate the drop issue enough, then next (assuming you don't want to use four egress queues), map all traffic to one queue and give it all the "standard" buffer resources. If that's still not enough, you can adjust how the 2960 shares its buffers between interfaces.

(I'm sure you would appreciate sample configs for the above, but I've recently retired and no longer have access to the Cisco equipment and templates I once used, so I would need to reread the QoS documentation to reconstruct the exact config statements. [One of my prior posts might have config samples for a 3750.])
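
From memory, though, the first step (maxing the tail-drop limits) would look roughly like the following; treat it as an untested sketch:

mls qos
! Queue-set 1 is the default on every port. Raise both WTD drop
! thresholds and the maximum threshold for each of the four egress
! queues to the platform maximum (3200%), leaving the reserved
! threshold at 100%:
mls qos queue-set output 1 threshold 1 3200 3200 100 3200
mls qos queue-set output 1 threshold 2 3200 3200 100 3200
mls qos queue-set output 1 threshold 3 3200 3200 100 3200
mls qos queue-set output 1 threshold 4 3200 3200 100 3200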

Thanks again, very helpful.

Do you think there is anything I can do from the server side?

 

Yes, if the server supports some form of traffic shaping, that should work. Again, you might find you can push faster than 1G, but not at 10G.

The other option might be to see if the server and your network devices support Ethernet flow control.
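
For example (a sketch; the interface is hypothetical, and whether a given platform can send PAUSE frames, or only honor them, varies by model):

! Server-facing port: let the switch honor PAUSE frames from the NIC.
! Many Catalyst ports support only the receive direction.
interface TenGigabitEthernet1/0/2
 flowcontrol receive desired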

I worked with TAC.

We looked at the interface counters and noticed a lot of drops.

So the TAC engineer suggested I proceed as follows:

 

mls qos queue-set output 2 threshold 1 3200 3200 100 3200
mls qos queue-set output 2 threshold 2 3200 3200 100 3200
mls qos queue-set output 2 threshold 3 3200 3200 100 3200
mls qos queue-set output 2 threshold 4 3200 3200 100 3200
mls qos queue-set output 2 buffers 5 80 5 10
mls qos

 

and then apply 

 

queue-set 2

 

on interfaces with drops.
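
For reference, the per-interface application looks like this (the port name here is just an example):

interface GigabitEthernet1/0/5
 queue-set 2

In the queue-set commands above, each "threshold" line sets, for one of the four egress queues, the two WTD drop thresholds, the reserved threshold, and the maximum threshold as percentages of that queue's allocated memory, while the "buffers" line splits the port's buffer pool 5/80/5/10 percent across queues 1 through 4.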

Now my network is working fine.

 

TAC suggested I apply queue-set 2 to the interfaces where I noticed drops.

I wonder if it would be possible to apply it to every interface, even those that don't show any drops.

That would be easier and more scalable.

 

What do you think about it?

Thanks

Johnny

 

I often suggest bumping queue logical limits to max, so I agree with that part of what TAC suggested.

Mucking with the queue reservation percentages and buffer ratios the way TAC did is probably okay for now, as long as your traffic is mostly best-effort and you're only applying this policy on "problem" interfaces. You might run into issues again if you applied it to all interfaces (one of your next questions) and/or really wanted to take advantage of QoS using all four egress queues.

That said, you could try setting it on all interfaces, or change the settings for queue-set 1, which is the default for all interfaces (see the sketch below). If doing so doesn't cause any problems, then sure, you could do that too.
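
Mirroring TAC's settings on the default queue-set would look something like this (an untested sketch); since queue-set 1 is the default, no per-interface command would be needed:

mls qos queue-set output 1 threshold 1 3200 3200 100 3200
mls qos queue-set output 1 threshold 2 3200 3200 100 3200
mls qos queue-set output 1 threshold 3 3200 3200 100 3200
mls qos queue-set output 1 threshold 4 3200 3200 100 3200
mls qos queue-set output 1 buffers 5 80 5 10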