
Configuring 3560 QoS

soginy
Level 1

I have a couple of questions about QoS and LAN QoS configuration:

 

1. I have read in a Cisco text that configuring the bandwidth command under a class does not reserve bandwidth, and that the bandwidth is available for other traffic if not in use. My question is, let's say for example I have configured 30% for one set of traffic and 70% for another: if the group configured for 30% is only using 25%, can the other group configured for 70% use this 5%? Or is the bandwidth in the 30% queue only available when no traffic is being transmitted in it?

 

2. I would like to configure a scenario on a Cisco 3560 switch where I have two queues with different bandwidths configured. I understand that the QoS configuration on the switches is different, and you cannot configure bandwidth with class maps as on a router. What I want is the equivalent of doing this on a router:

 

class-map XX
   match <traffic XX>

class-map YY
   match <traffic YY>

policy-map ZZ
 class XX
   bandwidth percent 30
 class YY
   bandwidth percent 70

 

So basically what I need is how to configure two queues with the same priority and the same drop precedence. I basically want them to behave like FIFO queues; I don't want either queue to have priority over the other. I just want to reserve different bandwidth values and keep everything else the same. I only want two queues, and I want to map different types of traffic to the two queues. I do not know if this is possible, as it seems different queues have different default properties.


13 Replies

Joseph W. Doherty
Hall of Fame
"1. I have a read in cisco text that configuring bandwidth command under class-map doesnot reserve bandwidth and bandwidth is available for other traffic if not in use. My question is lets say for example i have configured 30% for one set of traffic and 70% for another. if the group is have configured for 30% is only using 25% can the other group configured for 70% use this 5%. Or the only time the bandwidth in the 30% queue is available is when there is no traffic been transmitted. "

A "bandwidth" class, without a policer or shaper, can use whatever additional bandwidth is available.

The best way, I believe, to think of bandwidth percentages is that they really assign sharing ratios. So, if you only have two classes, set to 30 and 70 percent, and assuming all interface bandwidth is being used, they will share the bandwidth 30:70, or 3:7. I.e., you would get the same sharing if you set the bandwidth percentages as 3 and 7 percent, or 9 and 21 percent, etc.

These ratios hold for more classes. For example, say you have three classes assigned 20, 30 and 50 percent (or 4, 6 and 10 percent, etc.). Further assume only the first and third classes want all the bandwidth; they would share it in the ratio 2:5.
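To make that concrete, here's the arithmetic for a hypothetical 100 Mbps link where only the 20% and 50% classes are active:

20% class: 100 Mbps x 2/(2+5) = ~28.6 Mbps
50% class: 100 Mbps x 5/(2+5) = ~71.4 Mbps

The idle 30% class's share is simply redistributed in proportion to the remaining weights.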

For your second question, when you assign different bandwidth ratios, you're creating different priorities.

That said, yes, what you want to do is possible. You can use just two of the four queues and configure their bandwidth ratios as 3:7. You would direct the traffic of your choice to only those two queues.

On the 3560, you might use an interface ingress policy (much like what you could do on a router) to identify and "tag" the traffic into the two classes. (The foregoing assumes the traffic isn't already tagged.) The 3560 hardware will direct traffic to egress queues based on L2 or L3 ToS tags.
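A rough sketch of what that could look like (the ACL names, DSCP values, queue numbers and SRR weights are illustrative assumptions, not a tested configuration):

mls qos
!
! Classify and mark the two traffic types on ingress
! (XX-ACL / YY-ACL are placeholder ACLs you would define)
class-map match-all XX
 match access-group name XX-ACL
class-map match-all YY
 match access-group name YY-ACL
!
policy-map MARK-IN
 class XX
  set dscp af21
 class YY
  set dscp af31
!
interface GigabitEthernet0/1
 service-policy input MARK-IN
!
! Map the two DSCP values (AF21 = 18, AF31 = 26) to egress queues 3 and 4
mls qos srr-queue output dscp-map queue 3 threshold 3 18
mls qos srr-queue output dscp-map queue 4 threshold 3 26
!
! Shared-mode SRR weights give queues 3 and 4 a 3:7 ratio
interface GigabitEthernet0/2
 srr-queue bandwidth share 1 1 30 70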

Thank you for the quick response. Is this the same behaviour when I use the bandwidth or priority commands instead?

A "bandwidth" class, without a policer or shaper, can use whatever additional bandwidth is available.

The best way, I believe, to think of bandwidth percentages is that they really assign sharing ratios. So, if you only have two classes, set to 30 and 70 percent, and assuming all interface bandwidth is being used, they will share the bandwidth 30:70, or 3:7. I.e., you would get the same sharing if you set the bandwidth percentages as 3 and 7 percent, or 9 and 21 percent, etc.

On a router, CBWFQ's "priority" command is quite different from the "bandwidth" class command.

The former uses a dedicated LLQ, which is dequeued before all other queues. It's also special in that the priority command creates an implicit policer (which on many platforms only "triggers" when there's congestion - i.e. w/o congestion, LLQ traffic might exceed the priority bandwidth).

BTW, on most platforms there's only one LLQ regardless of the number of classes using the priority command. (In such cases, the other priority classes allow different implicit policers.)
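To illustrate the difference, a minimal router-side sketch (the class and policy names are made up for the example):

policy-map WAN-OUT
 class VOICE
  ! LLQ: always dequeued first, with an implicit policer at 30%
  priority percent 30
 class BULK
  ! CBWFQ: a 70% sharing weight, no policing, can use spare bandwidth
  bandwidth percent 70

Under congestion, VOICE is serviced first but policed to 30% of the interface rate, while BULK simply competes for whatever is left according to its weight.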

So does this also mean that configuring a bandwidth class without a shaper just sets the ratio at which a class is serviced, irrespective of whether you use the bandwidth, bandwidth percentage, or priority command? Which means just configuring a bandwidth class does not prevent congestion.

". . . use bandwidth, bandwidth percentage . . ."

Yes.

". . . or priority command."

No.

"Which means just configuring a bandwidth class does not prevent congestion."

Correct. It's used to determine how you manage congestion, i.e. how its packets will be dequeued relative to other queues.
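If you actually wanted to cap a class rather than just weight it, you could add a policer (or a shaper) to the class. A hypothetical sketch reusing the earlier names:

policy-map ZZ
 class XX
  bandwidth percent 30
  ! optional: police the class so it cannot borrow spare bandwidth
  police cir percent 30
 class YY
  bandwidth percent 70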

Hi, I have got a couple of questions:

1. I have read in a couple of journals that separating flows into two separate queues has a different effect on delay than using a single queue. My thought is that this should have no effect whatsoever, as no scheduling mechanism has been configured. This is the excerpt from the journal:

'''

We experimented with two ways to queue the packets as they cross the switch-to-switch link: (i) in one case, we queue packets belonging to the two flows separately in two queues (i.e., each flow gets its own queue), each configured at a maximum rate of 50 Mbps (ii) in the second case, we queue packets from both flows in the same queue configured at a maximum rate of 100 Mbps.

1) The per-packet average delay increases in both cases as traffic send rate approaches the configured rate of 50 Mbps. This is an expected queue-theoretic outcome and motivates the need for slack allocations for all applications in general. For example, if an application requires a bandwidth guarantee of 1 Mbps, it should be allocated 1.1 Mbps for minimizing jitter.
2) The case with separate queues experiences lower average per-packet delay when flow rates approach the maximum rates. We observed this effect even more strongly in the case of software switches (Appendix B). This indicates that when more than one flow uses the same queue, there is interference caused by both flows to each other. This becomes a source of unpredictability and eventually may cause the end-to-end delay guarantees for the flows to be not met or perturbed significantly.

'''

Any explanation for this behaviour?

 

2) Say you configured a priority queue on a switch with 30 percent of bandwidth for the priority queue and 70 percent for best effort. I understand a priority queue is policed only during congestion and can use unused bandwidth. What happens if the priority queue is transmitting at 100 percent of link capacity and a best-effort flow comes in? Is the priority queue policed back to 30 percent even if the best-effort flow is not using up to 70 percent?

#1 "Any explanation to this behaviour ?"

Yes, but not something to explain in a paragraph or two.

I will say, such behavior is one of the reasons I often suggest using FQ, when it's available.

#2 "What happens if priority queue is transmitting at 100percent link capacity and a best effort flow comes in is the priority queue policed back to 30percent even if the best effort flow is not using up to 70 percent."

PQ, I believe, will be policed.
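For reference, on the 3560 the egress expedite queue is queue 1 and is enabled per interface; a sketch (the weights are illustrative):

interface GigabitEthernet0/2
 ! Queue 1 becomes the strict-priority (expedite) queue; its SRR
 ! weight is then ignored, and the remaining weights share whatever
 ! bandwidth the priority queue leaves behind
 priority-queue out
 srr-queue bandwidth share 1 30 70 1

As I understand the 3560's behaviour, this expedite queue is strict priority rather than a policed 30% guarantee, so any cap on the priority traffic itself would have to be applied separately, e.g. with an ingress policer.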

So you mean there would be no difference using two queues for different flows if you were using FIFO queuing?

Sorry, I don't understand your question. Could you rephrase it?

You said in your previous reply to my question that such behaviour is one of the reasons you often suggest using FQ. So I was wondering if, with FQ, you won't get the behaviour explained in the experiment below. I am assuming FQ means FIFO queue here.

 

We experimented with two ways to queue the packets as they cross the switch-to-switch link: (i) in one case, we queue packets belonging to the two flows separately in two queues (i.e., each flow gets its own queue), each configured at a maximum rate of 50 Mbps (ii) in the second case, we queue packets from both flows in the same queue configured at a maximum rate of 100 Mbps. 

1) The per-packet average delay increases in both cases as traffic send rate approaches the configured rate of 50 Mbps. This is an expected queue-theoretic outcome and motivates the need for slack allocations for all applications in general. For example, if an application requires a bandwidth guarantee of 1 Mbps, it should be allocated 1.1 Mbps for minimizing jitter.
2) The case with separate queues experiences lower average per-packet delay when flow rates approach the maximum rates. We observed this effect even more strongly in the case of software switches (Appendix B). This indicates that when more than one flow uses the same queue, there is interference caused by both flows to each other. This becomes a source of unpredictability and eventually may cause the end-to-end delay guarantees for the flows to be not met or perturbed significantly.



Regards

 

 

Seyi

Ah, sorry, I should have noted what FQ stands for, which is fair queuing.

Ideally it's one FIFO queue for every flow. Each active queue obtains the same share of available bandwidth (NB: sometimes FQ can provide some queues a larger percentage of bandwidth).

On Cisco devices, the device creates a number of FIFO queues and then hashes flows into them. Also, ideally there will only be one flow in any one queue, but flows can hash-collide into the same queue.
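If you want to experiment with FQ on a router, a minimal sketch (the interface name is just an example; the 3560's hardware egress queuing doesn't support this):

policy-map FQ-OUT
 class class-default
  ! per-flow fair queuing: each flow is hashed into its own FIFO queue
  fair-queue
!
interface GigabitEthernet0/0
 service-policy output FQ-OUT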

Hi, is this assumption right? "...that the end-to-end delays are lower and more stable when separate queues are provided to each critical flow – especially as the rates for the flows get closer to their maximum assigned rates"

 

I have seen this in a few papers now and I cannot find any evidence to back it. I do not see any reason why using separate queues would reduce end-to-end delay, as no scheduling mechanism has been configured.

 

Can you please point me to any material that proves this?

 

Thanks

I wouldn't agree that separate queues automatically guarantee lower end-to-end delays unless the "critical" flows get precedence over other flows. (Which might explain why you cannot find any papers that provide such evidence without giving some flows precedence.)

When you have multiple queues, often they are scheduled in some form of round-robin, but not always. Cisco's (very) old PQ would be an example of queues that are not scheduled that way.

I would agree that separate queues (if equally scheduled) do generally provide less variance in end-to-end delays.

As to flows getting closer to their maximum assigned rates, I don't understand what that means, as queuing happens when there's congestion, and a flow getting close to its maximum rate may, or may not, experience congestion.