
3750 QoS - Output Drops - When all traffic is equal priority

swhyde
Level 1

We have a WS-C3750V2-24PS and we are seeing output drops.

 

It's buffer-related: the drops only started after we turned on flow control so that our ASA firewall would stop seeing overruns.
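For reference, this is roughly how we are watching the drops and how flow control was turned on (the interface name below is just an example, not our exact port):

show interfaces fastethernet1/0/1 | include output drops
! "Total output drops" keeps incrementing on the port facing the ASA

interface fastethernet1/0/1
 flowcontrol receive on
! honor pause frames from the ASA so it stops seeing overruns; the backpressure then shows up as output drops on this switch port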

 

We have QoS disabled, but I don't know whether enabling it will help, because I have found little information on how the egress buffers work when QoS is not enabled. The documentation just says traffic is delivered best effort. That makes sense as far as it goes, but can we get a bit more detail on how that compares to the QoS buffer settings?

 

For instance, with QoS enabled we can configure a queue-set that gives thresholds 1 and 2 of a specified queue access to up to 3200 percent of its allocated buffers, drawing on the common pool, but that may not be the default. Without QoS, is there any limit on how much of the common memory pool an interface can use, or can any interface draw on it as much as it wants, just as if we had applied a queue-set with maximum access to that interface?

 

If, by default, all interfaces already have maximum access to the common memory pool, and all of the traffic flowing through the switch is the same priority (security video traffic in our case), can QoS help with output drops at all?
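For context, these appear to be the commands that expose the values I am asking about once QoS is enabled (listed just so we are all looking at the same knobs; exact output varies by IOS version, and the interface name is only an example):

show mls qos
! shows whether QoS is globally enabled or disabled
show mls qos queue-set 1
! buffer allocations and WTD thresholds for queue-set 1 (the default queue-set)
show mls qos interface fastethernet1/0/1 buffers
! per-port buffer view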

 

Thank you in advance for your thoughts.


3 Replies

Joseph W. Doherty
Hall of Fame

"Without QoS, is there any limit on how much of the common memory pool an interface can use, or can any interface draw on it as much as it wants, just as if we had applied a queue-set with maximum access to that interface?"

The public information on 3750 buffer usage when QoS is disabled is lacking (especially compared to the documentation for when QoS is enabled, although even that can be difficult to fully understand when it comes to 3750 QoS buffer allocation).  So I'm unable to answer questions about buffer allocation with QoS inactive.

"If, by default, all interfaces already have maximum access to the common memory pool, and all of the traffic flowing through the switch is the same priority (security video traffic in our case), can QoS help with output drops at all?"

Sometimes QoS buffer configuration can help, very much.  As an example, I had a 3750 dropping multiple packets per second; after tuning the buffer settings, the drop rate fell to a few per day.

BTW, by default, port egress queues do not have access to all of the common pool.  Further, by default, half of the buffer space is reserved to individual ports.

I found there are three steps you can take.  First, logically set the egress limits, including the thresholds, to their maximums.  Second, decrease the port reservation pool to enlarge the common pool (I've even gone to the minimum port reservation settings).  Lastly, adjust the port egress queue buffer allocation percentages, if needed.  I recall calculating that the maximum memory one port's egress queue could acquire was about 2/3 (?) of the total buffer space.

e.g.:

1) mls qos queue-set output 1|2 threshold 1..4 3200 3200 100 3200 !using max thresholds is fairly safe
2) mls qos queue-set output 1|2 threshold 1..4 3200 3200 1 3200 !reducing reserve to minimum, a bit more risky
3) mls qos queue-set output 1|2 buffers 0 100 0 0 !this assumes all traffic directed to Q2 - much more risky
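To make those three steps concrete, here is a minimal sketch of how a tuned queue-set might be applied and then checked (port numbers are placeholders, queue-set 2 is used only so queue-set 1's defaults are left alone, and this assumes QoS has been enabled globally):

mls qos
! after adjusting queue-set 2 with commands like the above, assign it to the ports carrying the heavy traffic
interface range fastethernet1/0/1 - 24
 queue-set 2
!
show mls qos queue-set 2
show mls qos interface fastethernet1/0/5 statistics
! the "output queues dropped" counters show which queue and threshold are dropping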

Also BTW, Cisco did publish a TechNote on how the physical buffer RAM is provided.  It's the same physical allocation size for each bank of 24 edge ports or for the uplink ports, i.e. ". . . while 3750-X has 2MB for each set of 24 downlink ports and 2MB for uplinks."  (Ideally, if you have multiple ports needing much buffer space, you should try to balance them across the physical banks.)

Hi Joseph,

 

Thank you for your response.

 

I have tried a bit of options 1 and 3 that you recommended, and would also offer this video to anyone looking for more details: https://www.youtube.com/watch?v=6UJZBeK_JCs

 

It's hard to quantify whether there is any improvement. I ran the following and found the drops were now appearing in queue 2, threshold 1:

mls qos queue-set output 2 threshold 1 3200 3200 100 3200
mls qos queue-set output 2 threshold 2 3200 3200 100 3200
mls qos queue-set output 2 threshold 3 3200 3200 100 3200
mls qos queue-set output 2 threshold 4 3200 3200 100 3200
mls qos queue-set output 2 buffers 2 94 2 2
mls qos
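For reference, I was reading the per-queue drop counters with something like this (the interface is just an example; note the queue-set 2 values only apply on ports that have queue-set 2 assigned under their interface configuration):

show mls qos interface fastethernet1/0/10 statistics
! the "output queues dropped" table lists drops per queue (rows) and threshold (columns)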

 

Since the drops were occurring in threshold 1, I checked and found the traffic was marked CoS 0. It is still all security video traffic; it's just that QoS markings were never being used, so everything is CoS 0. I ran the following command:

 

mls qos srr-queue output cos-map queue 2 threshold 3 0
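To double-check where CoS 0 now lands, the resulting CoS-to-output-queue mapping can be displayed with (just for reference):

show mls qos maps cos-output-q
! CoS 0 should now map to queue 2, threshold 3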

 

After the above, I now see potentially fewer output drops, and when they do occur they are in threshold 3, so that should be better. Unfortunately, since the issue is bandwidth related and the cameras do not send consistent traffic but are very bursty in nature, it will take time to see whether there is any difference. I still hope someone will be able to give us more details on the buffer allocations with QoS disabled because I think we need to know how the system should/will behave in these conditions.

 

Sincerely,

Shaun

Joseph W. Doherty
Hall of Fame
(Accepted Solution)

"I still hope someone will be able to give us more details on the buffer allocations with QoS disabled because I think we need to know how the system should/will behave in these conditions."

Probably only Cisco could provide that information, and my guess is they are unlikely to do so.  First, because you can pretty much do whatever you want if you enable QoS and, second, because those switches are EoL.  (From working in a shop that was a very, very large customer of Cisco's, I could sometimes obtain such information, if we really pressed them for it, but usually under NDA.)

However, continuing to guess, chances are possibly better than 50/50 that disabled QoS buffer management works like the default QoS configuration, but with one queue rather than four, and w/o WTD.

Again, if it's just one problem port, consider, if possible, how you use your edge ports vs. uplink ports.  Fewer uplink ports in use means each of those ports effectively has more buffer RAM to utilize.  (If the problem port is copper, you can use a copper transceiver on an uplink port.)

Also again, option 3 might only be used, as I or you posted it, when the other queues are not being used, i.e. when they shouldn't need any buffers.

Lastly, the intent of option 2 is to release port "reserved" buffer space so that the available size of the common pool increases (providing more buffers that ports can draw upon, as dynamically needed).  If you also only use one queue, then, just as with option 3, we should be able to safely have all the other queues release their unused queue reservations.

E.g.:
mls qos queue-set output 1 threshold 1 3200 3200 1 3200
mls qos queue-set output 1 threshold 2 3200 3200 100 3200
mls qos queue-set output 1 threshold 3 3200 3200 1 3200
mls qos queue-set output 1 threshold 4 3200 3200 1 3200
mls qos queue-set output 1 buffers 0 100 0 0
(BTW, Cisco documentation isn't clear about exactly how mixing queue-sets 1 and 2 impacts sharing of the physical buffer space. So, I would suggest using the same queue-set for all the ports that share the same physical memory buffer space.)
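A minimal sketch of that suggestion in practice (the port range is only a placeholder for whichever ports share one physical buffer bank), plus a couple of counters for confirming where the drops occur:

interface range fastethernet1/0/1 - 24
 queue-set 1
!
show mls qos interface fastethernet1/0/24 statistics
! per-queue/threshold egress drop counters
show platform port-asic stats drop fastethernet1/0/24
! lower-level ASIC drop counters, if your IOS version supports this command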
