10-05-2015 04:59 AM - edited 03-08-2019 02:03 AM
I am replacing existing 3750 switches with 3850s in our DMZ. The DMZ will have only server and firewall ports, and I want to optimize throughput, account for bursty traffic, and minimize any output drops.
What is the best approach to configuring MQC, and what are the best practices for configuring the 3850? Any examples would be greatly appreciated. Today our 3750s are not configured for any port policing, and I get constant output drops. We are hoping that with the right MQC policy we can take full advantage of what the 3850 can offer.
Thanks!
10-05-2015 05:40 AM
Disclaimer
The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.
Liability Disclaimer
In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.
Posting
I haven't had any "hands-on" with the 3850 (or 3650), but I recall that when I reviewed their buffering specs, they didn't appear to be much better than the 3750-X series.
The 3750 series is very prone to dropping bursty traffic, both because of its limited hardware resources and because of its default buffer configuration. I've found that, in some situations, it's possible to dramatically reduce bursty-traffic egress drops by tuning those settings. Perhaps the 3850 might benefit from similar tuning; one possibility is sketched below. (Again, though, I cannot say for sure for the 3850.)
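If the 3850 does offer similar tuning, I believe it is done with a global soft-maximum multiplier rather than the 3750's per-queue-set thresholds; take the following as an unverified sketch on my part, not a tested recommendation:
! 3850 global configuration (unverified by me): raises how much of the shared
! buffer pool an egress queue may borrow (the multiplier defaults to 100 and
! can be raised as high as 1200).
qos queue-softmax-multiplier 1200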
10-06-2015 05:15 AM
Thanks for your input, Joseph, but when I did my research I did see a difference between the two. The 3750 has a 32 Gbps switching fabric, a 38.7 Mpps forwarding rate, and a shared 2 MB buffer for each set of 24 downlink ports, whereas the 3850 has a 480 Gbps switching fabric, a 130.95 Mpps forwarding rate, and a shared 12 MB buffer for 48 ports.
This post, however, was not meant to discuss the switches themselves so much as the QoS on the switch. I was thinking about using the configuration below, which is part of the CVD. Since it will be in a DMZ, there is no need for voice/video classes, so I am hoping this configuration will provide optimal performance.
class-map match-any NETWORK_CONTROL-CLASS
 match dscp cs6
class-map match-any TRANSACTIONAL-CLASS
 match dscp af22
 match dscp af23
 match dscp af21
class-map match-any SCAVENGER-CLASS
 match dscp cs1
class-map match-any BULK-CLASS
 match dscp af11
 match dscp af12
 match dscp af13
!
policy-map 2P6Q3T
 class NETWORK_CONTROL-CLASS
  bandwidth remaining percent 5
  queue-buffers ratio 10
 class TRANSACTIONAL-CLASS
  bandwidth remaining percent 50
  queue-buffers ratio 10
  queue-limit dscp af21 percent 100
  queue-limit dscp af22 percent 90
  queue-limit dscp af23 percent 80
 class SCAVENGER-CLASS
  bandwidth remaining percent 5
  queue-buffers ratio 10
 class BULK-CLASS
  bandwidth remaining percent 15
  queue-buffers ratio 10
  queue-limit dscp af11 percent 100
  queue-limit dscp af12 percent 90
  queue-limit dscp af13 percent 80
 class class-default
  bandwidth remaining percent 25
  queue-buffers ratio 25
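For completeness, here is how I was planning to attach the policy; the interface name and description are just examples for one of the server-facing ports:
interface GigabitEthernet1/0/1
 description DMZ server port (example)
 service-policy output 2P6Q3T
The same "service-policy output 2P6Q3T" line would go on each server- and firewall-facing port.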
10-07-2015 06:59 AM
Ah, it's been a long time since I compared a 3750 vs. a 3850; I now think the resource improvement, or lack thereof, I was concerned about was TCAM (as we're implementing IPv6). However, having confused buffer RAM with TCAM, please don't hold me to an accurate comparison between those two either, since I got the buffer RAM difference so wrong. (I.e., I believe 12 MB of buffers for 48 ports is better than the 3750's 2 MB per 24 copper ports.)
Regarding the other performance differences between the original 3750 series and the 3850, you're spot on, and they can become issues. But in my experience with the 3750 series, you're more likely to see output drops (as you mention in your OP), and those are often caused by the 3750's limited buffer RAM and the way it manages that buffer RAM by default. (When I noted I've had some success with 3750 buffer tuning: for a SAN device, I went from multiple drops a second to a couple per day.)
In your OP, you were concerned about avoiding the interface drops you're seeing on the existing 3750s, and in your follow-up post you're now hoping to use QoS to optimize performance.
Well, as to the latter, you can do some very interesting things with QoS, and some of those things do include improving performance. However, I cannot comment on whether your proposed QoS implementation will help or hinder, as I don't know what you're trying to accomplish or what your traffic is really like.
I will say, since your OP principally expressed concern about minimizing drops: although almost everyone likes zero or few drops, drop management is an important but often overlooked part of a QoS policy. I suspect you realize this, as your new post has WTD in two classes, but, again, whether what you've defined will help or hinder, I cannot say.
10-07-2015 08:37 AM
Thanks for your response, Joseph. I don't know if the configuration I posted is right or not. You mentioned, "When I noted I've had some success with 3750 buffer tuning: for a SAN device, I went from multiple drops a second to a couple per day." That sounds like what I am trying to do, and although my traffic is not SAN, I would be curious to know what you did.
10-07-2015 10:04 AM
The 3750 can hold egress interface buffers in a "common pool", or they can be reserved per interface egress queue. What I did was minimize all of the interface egress queue reservations. I also maximized how much memory each interface queue can obtain from the common pool. Lastly, I changed the interface egress queue buffer ratios to favor the BE queue (which was what the SAN traffic was using).
10-07-2015 02:17 PM
Hi Joseph, I'm getting the same drops as Bret for my SAN and BE traffic.
Can you show us what the config looks like for how you allocated more memory to the queues, adjusted the buffer ratios, and minimized the queue reservations?
Cheers
10-08-2015 05:27 AM
(First, judging from your other posting, you'll need to globally enable QoS.)
mls qos queue-set output 1 threshold 1 3200 3200 1 3200
mls qos queue-set output 1 threshold 2 3200 3200 1 3200
mls qos queue-set output 1 threshold 3 3200 3200 1 3200
mls qos queue-set output 1 threshold 4 3200 3200 1 3200
mls qos queue-set output 1 buffers 5 10 75 10
The "1" in the threshold statement sets the minimum amount of buffer space reservation for the egress queue.
The "3200" (if your IOS version supports; older versions only allow up to 400) sets the maximum tail drop limits.
The "buffer" statement, sets the proportion between the 4 egress queues. It would impact the "1"(%) allocation, but more importantly for us, it also determines what the "3200" is relative to. You might try the default of 25 25 25 25, and adjust (toward the above) if the threshold settings, alone, don't appear to help enough.
Understand, the above allows Q3 for any one interface to grab a huge chunk of the common pool, which is great if there's no contention between ports at exactly the same time.
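To round it out, the remaining pieces look something like the below (the interface name is just an example, and queue-set 1 is already the default on every port, so applying it explicitly is optional):
mls qos
!
interface GigabitEthernet1/0/1
 queue-set 1
Afterwards, "show mls qos interface GigabitEthernet1/0/1 statistics" shows the per-queue enqueue and drop counters, so you can see whether the drops actually fall off.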