12-05-2016 02:15 AM - edited 03-08-2019 08:26 AM
I've been trying to figure out the impact to port throughput if the corporate QoS policy is applied to edge ports which only receive DSCP 0 traffic.
I've reviewed the current QoS config and have written a couple of paragraphs on ingress/egress policy and the potential impact to traffic.
Could you be kind enough to sanity-check my thoughts on this?
QoS policy (for 3750 / 2960)
mls qos map cos-dscp 0 8 16 26 34 46 48 56
mls qos srr-queue input bandwidth 70 30
mls qos srr-queue input threshold 1 80 90
mls qos srr-queue input priority-queue 2 bandwidth 30
mls qos srr-queue input cos-map queue 1 threshold 1 1 2
mls qos srr-queue input cos-map queue 1 threshold 2 0
mls qos srr-queue input cos-map queue 1 threshold 3 6 7
mls qos srr-queue input cos-map queue 2 threshold 2 3
mls qos srr-queue input cos-map queue 2 threshold 3 4 5
mls qos srr-queue input dscp-map queue 1 threshold 1 9 10 11 12 13 14 15 16
mls qos srr-queue input dscp-map queue 1 threshold 1 17 18 19 20 21 22 23
mls qos srr-queue input dscp-map queue 1 threshold 2 0 1 2 3 4 5 6 7
mls qos srr-queue input dscp-map queue 1 threshold 2 28 30
mls qos srr-queue input dscp-map queue 1 threshold 3 48 49 50 51 52 53 54 55
mls qos srr-queue input dscp-map queue 1 threshold 3 56 57 58 59 60 61 62 63
mls qos srr-queue input dscp-map queue 2 threshold 2 24 25 26 27 29 31 35 36
mls qos srr-queue input dscp-map queue 2 threshold 2 37 38 39
mls qos srr-queue input dscp-map queue 2 threshold 3 32 33 34 40
mls qos srr-queue input dscp-map queue 2 threshold 3 41 42 43 44 45 46 47
mls qos srr-queue output cos-map queue 1 threshold 3 4 5
mls qos srr-queue output cos-map queue 2 threshold 1 2
mls qos srr-queue output cos-map queue 2 threshold 2 3
mls qos srr-queue output cos-map queue 2 threshold 3 6 7
mls qos srr-queue output cos-map queue 3 threshold 3 0
mls qos srr-queue output cos-map queue 4 threshold 3 1
mls qos srr-queue output dscp-map queue 1 threshold 3 32 33 34 40 41 42 43 44
mls qos srr-queue output dscp-map queue 1 threshold 3 45 46 47
mls qos srr-queue output dscp-map queue 2 threshold 1 16 17 18 19 20 21 22 23
mls qos srr-queue output dscp-map queue 2 threshold 1 27 28 29 30 31 35 36 37
mls qos srr-queue output dscp-map queue 2 threshold 1 38 39
mls qos srr-queue output dscp-map queue 2 threshold 2 24 26
mls qos srr-queue output dscp-map queue 2 threshold 3 25 48 49 50 51 52 53 54
mls qos srr-queue output dscp-map queue 2 threshold 3 55 56 57 58 59 60 61 62
mls qos srr-queue output dscp-map queue 2 threshold 3 63
mls qos srr-queue output dscp-map queue 3 threshold 3 0 1 2 3 4 5 6 7
mls qos srr-queue output dscp-map queue 4 threshold 1 8 9 11 13 15
mls qos srr-queue output dscp-map queue 4 threshold 2 10 12 14
mls qos queue-set output 1 threshold 1 100 100 100 700
mls qos queue-set output 1 threshold 2 125 125 50 400
mls qos queue-set output 1 threshold 3 100 100 100 400
mls qos queue-set output 1 threshold 4 60 150 10 200
mls qos queue-set output 1 buffers 50 5 40 5
mls qos
Additional interface commands:
srr-queue bandwidth share 10 10 75 5
srr-queue bandwidth shape 10 0 0 0
priority-queue out
mls qos trust dscp
Here are my thoughts
If a port has the LAN QoS policy applied, what will be the impact to traffic entering the port tagged only with the default QoS marking, DSCP 0?
The pertinent config for CoS/DSCP 0 is listed below:
mls qos map cos-dscp 0 8 16 26 34 46 48 56
mls qos srr-queue input bandwidth 70 30
mls qos srr-queue input threshold 1 80 90
mls qos srr-queue input priority-queue 2 bandwidth 30
mls qos srr-queue input dscp-map queue 1 threshold 2 0 1 2 3 4 5 6 7
mls qos srr-queue output dscp-map queue 3 threshold 3 0 1 2 3 4 5 6 7
mls qos queue-set output 1 threshold 3 100 100 100 400
mls qos queue-set output 1 buffers 50 5 40 5
srr-queue bandwidth shape 10 0 0 0
srr-queue bandwidth share 10 10 75 5
priority-queue out
For ingress queuing (where applicable), default QoS markings (DSCP/CoS 0) are mapped to queue 1 threshold 2. Queue 1 is allocated 90% of the port's ingress buffers and an SRR bandwidth weight of 70 (versus 30 for queue 2); threshold 2 sits at 90%, which is the queue depth at which the switch will commence Weighted Tail Drop (WTD) once queue 1's buffer allocation fills to that level. Bandwidth is guaranteed but not capped, so each queue can scavenge any bandwidth left unused by the other ingress queue. A rough worked example follows Fig 1.
Queue      :    1    2
----------------------
buffers    :   90   10
bandwidth  :   70   30
priority   :    0   30
threshold1 :   80  100
threshold2 :   90  100
Fig 1. Ingress Queue Settings
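To make the arithmetic above concrete, here is a rough sketch in Python (purely illustrative, nothing the switch runs). The queue/threshold figures come from Fig 1; the priority-queue maths is my reading of the configuration guide, i.e. the priority queue is serviced for its 30% first and the remainder is split by the 70/30 weights, so treat those numbers as an assumption rather than gospel.

# Rough sketch of ingress handling for DSCP 0 on this config.
# Values come from Fig 1; the bandwidth maths reflects my reading of the
# 3750 configuration guide, not a definitive statement of platform behaviour.

# mls qos srr-queue input dscp-map queue 1 threshold 2 0 1 2 3 4 5 6 7
DSCP0_QUEUE, DSCP0_THRESHOLD = 1, 2

INGRESS = {
    # queue: buffers %, SRR weight, WTD threshold1 %, WTD threshold2 %
    1: {"buffers": 90, "srr_weight": 70, "t1": 80, "t2": 90},
    2: {"buffers": 10, "srr_weight": 30, "t1": 100, "t2": 100},
}
PRIORITY_BW = 30   # mls qos srr-queue input priority-queue 2 bandwidth 30

q = INGRESS[DSCP0_QUEUE]

# WTD threshold 2 kicks in once queue 1 reaches 90% of its own allocation,
# and queue 1 holds 90% of the port's ingress buffers.
wtd_point = q["buffers"] * q["t2"] / 100                      # 81.0% of port buffers

# Assumption: queue 2 is serviced for its 30% first, then the remaining 70%
# is shared between the two queues by the 70/30 weights (guaranteed, not capped).
remaining = 100 - PRIORITY_BW
guaranteed_bw = remaining * q["srr_weight"] / (70 + 30)       # 49.0%

print(f"DSCP 0 -> ingress queue {DSCP0_QUEUE}, threshold {DSCP0_THRESHOLD}")
print(f"WTD starts at ~{wtd_point:.0f}% of the port's ingress buffer space")
print(f"Guaranteed (but not capped) bandwidth share: ~{guaranteed_bw:.0f}%")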
Egress queueing allows the use of 2 queue-sets; each port is assigned to one of them, and each queue-set defines the buffer allocation and WTD threshold settings for the four egress queues. The ports here use queue-set 1 (Fig 2).
The Cisco Catalyst 3750 egress queueing does not support Low Latency Queueing (LLQ); it supports priority queueing instead. When you configure priority-queue out, queue 1 is always serviced whenever it has a packet. Configuring this command affects the SRR weight and queue size ratios because there is one fewer queue participating in SRR: weight1 in the srr-queue bandwidth shape and srr-queue bandwidth share commands is ignored. A sketch of the resulting shares follows Fig 2.
Queueset: 1
Queue      :    1    2    3    4
--------------------------------
buffers    :   50    5   40    5
threshold1 :  100  125  100   60
threshold2 :  100  125  100  150
reserved   :  100   50  100   10
maximum    :  700  400  400  200
Fig 2. Egress Queueset 1 Settings
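As a sanity check on the egress side, below is a small Python sketch (again illustrative only) of how I understand the SRR shares to work out once priority-queue out removes queue 1 from SRR; the assumption that weight1 is simply ignored and the remaining queues split the leftover bandwidth by their share weights is mine, based on the configuration guide wording above.

# Sketch of egress scheduling for DSCP 0 with the interface config:
#   srr-queue bandwidth share 10 10 75 5
#   srr-queue bandwidth shape 10 0 0 0
#   priority-queue out
# Assumption: queue 1 becomes the expedite queue and weight1 in both the
# shape and share commands is ignored, so the remaining queues split the
# leftover bandwidth by their share weights.

SHARE_WEIGHTS = {1: 10, 2: 10, 3: 75, 4: 5}
PRIORITY_QUEUE = 1        # priority-queue out
DSCP0_QUEUE = 3           # mls qos srr-queue output dscp-map queue 3 threshold 3 0 ...

srr_queues = {q: w for q, w in SHARE_WEIGHTS.items() if q != PRIORITY_QUEUE}
total_weight = sum(srr_queues.values())     # 10 + 75 + 5 = 90

for q, w in srr_queues.items():
    share = 100 * w / total_weight
    note = "  <- DSCP 0 lands here" if q == DSCP0_QUEUE else ""
    print(f"queue {q}: ~{share:.1f}% of the non-priority bandwidth{note}")

# Queue 3 (DSCP 0) therefore gets roughly 75/90, about 83%, of whatever
# bandwidth the priority queue leaves behind, and it is not capped at that.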
To summarise: for a port that will only see default QoS markings (DSCP/CoS 0), both the ingress and egress configurations would allow roughly 90% of the available bandwidth/buffer space to be used before WTD starts.
12-05-2016 05:22 AM
The key to understanding 2960/3560/3750 QoS's impact: by default, enabling it reserves buffers and sets WTD thresholds. If you only have one class of marked traffic, the default reservations tend to cause earlier drops under congestion, because of the decreased per-queue buffer allocations. Unless you've enabled shaping, a single class of traffic should still be able to acquire 100% of the port bandwidth.
If you search these forums, you'll find lots of posts on decreased (3750) throughput, due to increased drop rates, when QoS is enabled, often seen even when all traffic still carries DSCP BE markings.
BTW, in my experience, I've found that by "tuning" QoS settings, specifically by pushing all, or almost all, buffers to the shared pool and setting WTD and buffer settings to logical maximums, I obtain selective class bandwidth guarantees without increasing drop rates (in fact, I've obtained drop rates lower than with QoS disabled). However, your mileage might differ.
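To illustrate what I mean by buffer reservations, here's a back-of-the-envelope Python sketch, just arithmetic, nothing the switch runs. The first calculation uses your queue-set 1 numbers; the "tuned" values are only a made-up example of the direction I push things, not a recommendation for your network.

# Back-of-the-envelope arithmetic: how much of a port's egress buffer space
# is reserved per queue vs. left in the common (shared) pool.
# Reserved share of port buffers = (queue buffers %) x (reserved threshold %).

def reserved_vs_common(buffers_pct, reserved_pct):
    reserved = [b * r / 100 for b, r in zip(buffers_pct, reserved_pct)]
    return reserved, 100 - sum(reserved)

# Your queue-set 1 (Fig 2): buffers 50/5/40/5, reserved 100/50/100/10
reserved, common = reserved_vs_common([50, 5, 40, 5], [100, 50, 100, 10])
print("corporate:", [f"{r:g}%" for r in reserved], f"| common pool ~{common:g}%")
# -> 50 / 2.5 / 40 / 0.5 = 93% of the port buffers reserved, ~7% left to share

# Hypothetical "tuned" direction (example values only): small reservations,
# with queues borrowing from a large common pool via high WTD/maximum values.
reserved, common = reserved_vs_common([25, 25, 25, 25], [10, 10, 10, 10])
print("tuned    :", [f"{r:g}%" for r in reserved], f"| common pool ~{common:g}%")
# -> 2.5% reserved per queue, ~90% of the port buffers in the shared pool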
12-05-2016 07:06 AM
Thanks for the response, Joseph. I think I was on the right lines: from what I can gather, WTD will start to take effect when a port reaches 90% of bandwidth/buffer usage with the policy I am running.
I'm hoping that someone can clarify my assumptions?
12-06-2016 07:54 AM
Sorry, I could probably work it out, but the 3750 computation is a bit tedious when you need to compute percentages of percentages: when you change a buffer allocation percentage, you also change the queue percentages.
12-07-2016 06:07 AM
Do you think my summary, i.e. 90% saturation of buffer/bandwidth before WTD starts with the QoS policy applied and only DSCP 0 traffic present on the port, is accurate?
12-18-2016 03:35 AM
Sorry for the belated reply.
If all the traffic is BE, WTD doesn't help.
The purpose of 3750 WTD is to drop some (less important) traffic from a queue in order to keep (more important) traffic in the same queue from being dropped.
Generally, my QoS approach manages drops per queue, not within a queue; I generally don't need that level of QoS.
An example of the two approaches.
If you have transaction traffic in one queue and backup traffic in another, I would likely want the transaction traffic to have priority both for queuing and for avoiding drops.
However, if you had two backup flows in the same queue, and one was more "important", you might use WTD to drop one backup flow's traffic before the other.
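If it helps, here's a toy Python sketch of that within-queue behaviour; the threshold percentages are made up for the example and are not taken from your queue-set.

# Toy illustration of Weighted Tail Drop within a single queue.
# Threshold percentages are invented for the example, not from your config.

THRESHOLDS = {1: 40, 2: 70, 3: 100}   # drop threshold as % of the queue's buffers

def wtd_accept(queue_fill_pct, mapped_threshold):
    """Accept a frame only while the queue is below the frame's mapped threshold."""
    return queue_fill_pct < THRESHOLDS[mapped_threshold]

# Two backup flows share one queue; the less important flow is mapped to
# threshold 1, the more important flow to threshold 3.
for fill in (30, 50, 80, 100):
    less = wtd_accept(fill, 1)
    more = wtd_accept(fill, 3)
    print(f"queue {fill:>3}% full: less-important accepted={less}, more-important accepted={more}")

# At 50% fill the threshold-1 flow is already being tail-dropped while the
# threshold-3 flow is still being queued; both drop only once the queue fills.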