05-02-2018 04:46 AM - edited 03-08-2019 02:52 PM
I have a couple of questions about QoS and LAN QoS configuration:
1. I have read in Cisco documentation that configuring the bandwidth command under a class in a policy-map does not reserve bandwidth, and that the bandwidth is available to other traffic when not in use. My question: say I have configured 30% for one set of traffic and 70% for another. If the class configured for 30% is only using 25%, can the class configured for 70% use the spare 5%? Or is the bandwidth in the 30% queue only available when that class is transmitting no traffic at all?
2. I would like to configure a scenario on a Cisco 3560 switch with two queues, each with a different bandwidth configured. I understand that QoS configuration on these switches is different and that you cannot configure bandwidth under class-maps as you would on a router. What I want is the equivalent of doing this on a router:
class-map XX
 match traffic XX
class-map YY
 match traffic YY
policy-map ZZ
 class XX
  bandwidth percent 30
 class YY
  bandwidth percent 70
So basically what I need is how to configure two queues with the same priority and the same drop precedence. I don't want either queue to have priority over the other; essentially they should behave like FIFO queues, identical in every respect except that each is assigned a different bandwidth value. I only want two queues, and I want to map different types of traffic to each. I don't know whether this is possible, since different queues seem to have different default properties.
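For reference, a hedged sketch of what this might look like on a 3560 (exact syntax varies by IOS version; the DSCP values 10 and 18 and the interface are placeholders, not from the thread). The 3560 has four fixed egress queues, so the closest approximation of two equal-priority weighted queues is SRR in shared mode, with traffic mapped onto two of the queues and "priority-queue out" left disabled:

```
! Sketch only - map two traffic classes (here by DSCP) to egress
! queues 1 and 2, then weight those queues 30:70 with shared-mode SRR.
! Shared mode means unused bandwidth is redistributed, and neither
! queue has strict priority as long as "priority-queue out" is not set.
mls qos
mls qos srr-queue output dscp-map queue 1 threshold 3 10
mls qos srr-queue output dscp-map queue 2 threshold 3 18
!
interface GigabitEthernet0/1
 srr-queue bandwidth share 30 70 1 1
```

The trailing "1 1" gives the unused queues 3 and 4 a minimal weight, since the platform requires a weight for all four queues.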
05-02-2018 06:59 AM
05-02-2018 07:37 AM
Thank you for the quick response. Is the behaviour the same whether I use the bandwidth or the priority command?
A "bandwidth" class, without a policer or shaper, can use whatever additional bandwidth is available.
The best way, I believe, to think of bandwidth percentages is that they really assign sharing ratios. So, if you have only two classes, set to 30 and 70 percent, and all interface bandwidth is being used, they will share the bandwidth 30:70, i.e. 3:7. You would get the same sharing if you set the bandwidth percentages to 3 and 7, or 9 and 21, etc.
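That sharing-ratio behaviour can be illustrated with a small sketch. This is an idealized progressive-filling model, not actual IOS scheduler internals; the class names and rates are made up:

```python
def share(link, weights, demands):
    """Split link capacity among classes in proportion to their weights.
    A class that needs less than its share releases the remainder, which
    is then split among the still-backlogged classes in the same ratio."""
    alloc = {c: 0.0 for c in weights}
    active = set(weights)
    while active:
        remaining = link - sum(alloc.values())
        if remaining <= 1e-9:
            break
        total_w = sum(weights[c] for c in active)
        done = set()
        for c in active:
            offer = remaining * weights[c] / total_w
            take = min(offer, demands[c] - alloc[c])
            alloc[c] += take
            if demands[c] - alloc[c] <= 1e-9:
                done.add(c)
        if not done:          # every active class is still backlogged
            break
        active -= done

    return alloc

# Both fully backlogged: 30/70 and 3/7 give the same 30:70 split.
print(share(100.0, {"A": 30, "B": 70}, {"A": 1e9, "B": 1e9}))
print(share(100.0, {"A": 3, "B": 7}, {"A": 1e9, "B": 1e9}))
# A only needs 25 of its 30 share: B absorbs the unused 5.
print(share(100.0, {"A": 30, "B": 70}, {"A": 25.0, "B": 1e9}))
# -> {'A': 25.0, 'B': 75.0}
```

This matches the earlier point that a "bandwidth" class without a policer or shaper can use whatever additional bandwidth other classes leave idle.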
05-02-2018 07:59 AM - edited 05-02-2018 08:00 AM
On a router, CBWFQ's "priority" is quite different from the "bandwidth" class command.
The former uses a dedicated LLQ, which is dequeued before all other queues. It's also special in that the priority command creates an implicit policer (which on many platforms only "triggers" when there's congestion - i.e. without congestion, LLQ traffic might exceed the priority bandwidth).
BTW, on most platforms there's only one LLQ regardless of the number of classes using the priority command. (In such cases, each priority class still gets its own implicit policer.)
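To make the distinction concrete, a minimal router policy-map sketch (the policy and class names are made up for illustration):

```
policy-map WAN-OUT
 class VOICE
  ! LLQ: dequeued ahead of every other queue; the priority command
  ! also creates an implicit policer that takes effect under congestion
  priority percent 30
 class DATA
  ! CBWFQ bandwidth class: a sharing ratio / minimum guarantee with no
  ! policer - it can use any spare bandwidth
  bandwidth percent 70
```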
05-02-2018 07:47 AM
So does this mean that configuring a bandwidth class without a shaper just sets the ratio at which the class is serviced, irrespective of whether you use the bandwidth, bandwidth percent, or priority command? In other words, configuring a bandwidth class by itself does not prevent congestion.
05-02-2018 08:03 AM
05-14-2018 10:11 AM
Hi, I have a couple of questions:
1. I have read in a couple of journal papers that separating flows into two separate queues affects delay compared with using a single queue. My thought is that this should have no effect whatsoever, since no scheduling mechanism has been configured. This is the excerpt from the journal:
'''
We experimented with two ways to queue the packets as they cross the switch-to-switch link: (i) in one case, we queue packets belonging to the two flows separately in two queues (i.e., each flow gets its own queue), each configured at a maximum rate of 50 Mbps (ii) in the second case, we queue packets from both flows in the same queue configured at a maximum rate of 100 Mbps.
1) The per-packet average delay increases in both cases as traffic send rate approaches the configured rate of 50 Mbps. This is an expected queue-theoretic outcome and motivates the need for slack allocations for all applications in general. For example, if an application requires a bandwidth guarantee of 1 Mbps, it should be allocated 1.1 Mbps for minimizing jitter.
2) The case with separate queues experiences lower average per-packet delay when flow rates approach the maximum rates. We observed this effect even more strongly in the case of software switches (Appendix B). This indicates that when more than one flow uses the same queue, there is interference caused by both flows to each other. This becomes a source of unpredictability and eventually may cause the end-to-end delay guarantees for the flows to be not met or perturbed significantly.
'''
Any explanation for this behaviour?
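Point 1 of the excerpt is the standard queueing-theory result that delay blows up as utilization approaches 1. A small sketch (not from the thread; the packet size and rates are illustrative assumptions) using the M/M/1 mean time in system, W = 1 / (mu - lambda):

```python
def mm1_delay(mu_pps, lam_pps):
    """M/M/1 mean time in system, W = 1 / (mu - lam), where mu is the
    service rate and lam the arrival rate, both in packets per second."""
    if lam_pps >= mu_pps:
        raise ValueError("unstable queue: utilization >= 1")
    return 1.0 / (mu_pps - lam_pps)

# Illustrative numbers: a 50 Mbps queue with 1250-byte (10,000-bit)
# packets services mu = 50e6 / 10e3 = 5000 packets per second.
mu = 5000.0
print(mm1_delay(mu, 2500.0))  # 50% load -> 0.4 ms
print(mm1_delay(mu, 4500.0))  # 90% load -> 2 ms (5x higher)
print(mm1_delay(mu, 4950.0))  # 99% load -> 20 ms (50x higher)
```

This is why the excerpt recommends allocating slack (e.g. 1.1 Mbps for a 1 Mbps guarantee): keeping utilization away from 1 keeps the delay on the flat part of the curve.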
2) Suppose you configure a priority queue on a switch with, say, 30 percent of bandwidth for the priority queue and 70 percent for best effort. I understand a priority queue is policed only during congestion and can otherwise use unused bandwidth. What happens if the priority queue is transmitting at 100 percent of link capacity and a best-effort flow comes in: is the priority queue policed back to 30 percent even if the best-effort flow is not using its full 70 percent?
05-15-2018 03:34 AM
05-15-2018 04:23 AM
So you mean there would be no difference in using two queues for different flows if you were using FIFO queuing?
05-16-2018 02:56 AM
05-16-2018 03:31 AM
You said in your previous reply that such behaviour is one of the reasons you often suggest using FQ. So I was wondering whether, with FQ, you would avoid the behaviour described in the experiment I quoted earlier in this thread (the journal excerpt comparing two separate 50 Mbps queues against a single 100 Mbps queue). I am assuming FQ means FIFO queue here.
Regards
Seyi
05-16-2018 04:21 AM
04-07-2019 02:49 PM
Hi, is this assumption correct? "...that the end-to-end delays are lower and more stable when separate queues are provided to each critical flow – especially as the rates for the flows get closer to their maximum assigned rates".
I have seen this in a few papers now and I cannot find any evidence to back it. I do not see any reason why using separate queues would reduce end-to-end delay, since no scheduling mechanism has been configured.
Can you please point me to any material that supports this?
Thanks
04-07-2019 03:59 PM