02-02-2017 01:58 PM - edited 03-05-2019 07:58 AM
Hi all,
I am setting up a lab to experiment with some bandwidth classes using CBWFQ. My lab scenario is very simple and contains two classes, critical data and scavenger.
Basically, I would like either class to be able to take full advantage of the link when no other class is using it. During heavy utilization, however, I want critical data to be able to use 90% of the link while scavenger is scaled back to 5%, so the critical data can pass through without issues.
I am planning a more complex configuration for our production network, with an additional priority class for voice and a business class for data, with percentages adjusted accordingly. This lab is just to prove that one class can't choke out the other or have to fight it for bandwidth.
I set up two Cisco 2901 routers and set the bandwidth command to reflect 100 Mbps on the "WAN" link. I fired up an iperf session between a client and a server to test throughput, and I was able to achieve a little over 90 Mbps from each class while there was NO other utilization from other classes on the link, so that was a good test to prove each class can use the full link when it is available.
However, when I ran iperf sessions from both classes at the same time to simulate maximum congestion, my critical data class achieved around 45 Mbps and my scavenger class around 40 Mbps. My understanding was that critical data should have been able to achieve close to 90 Mbps while scavenger would be limited to 5 Mbps.
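(For reference, a sketch of how the two marked streams can be generated; the thread doesn't show the exact invocation, so the server address and duration below are illustrative. iperf's -S/--tos flag takes the ToS byte, i.e. the DSCP value shifted left two bits.)
iperf -c 10.17.1.2 -S 0x68 -t 60    # ToS 0x68 = DSCP AF31 (critical data)
iperf -c 10.17.1.2 -S 0x28 -t 60    # ToS 0x28 = DSCP AF11 (scavenger)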
See the command output below for the policy:
class-map match-any SCAVENGER-QOS
 match dscp af11
class-map match-any CRITICAL-DATA-QOS
 match dscp af31
!
policy-map RATE_LIMIT_TEST
 class CRITICAL-DATA-QOS
  bandwidth percent 90
 class SCAVENGER-QOS
  bandwidth percent 5
!
interface GigabitEthernet0/0
 bandwidth 100000
 ip address 10.17.1.1 255.255.255.252
 load-interval 30
 duplex auto
 speed auto
 service-policy output RATE_LIMIT_TEST
I examined the policy map during the test as well... how does this make sense? The command output clearly says critical data should get 90000 kbps and scavenger 5000 kbps, yet both classes are getting nearly the same amount of bandwidth (40-45 Mbps). Am I misunderstanding a concept or missing a configuration?
PAS_LAB_RTR#show policy-map int g0/0
 GigabitEthernet0/0

  Service-policy output: RATE_LIMIT_TEST

    Class-map: CRITICAL-DATA-QOS (match-any)
      271043 packets, 338373438 bytes
      30 second offered rate 45042000 bps, drop rate 0000 bps
      Match: dscp af31 (26)
        271043 packets, 338373438 bytes
        30 second rate 45040002 bps
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 0/0/0
      (pkts output/bytes output) 271044/338374776
      bandwidth 90% (90000 kbps)

    Class-map: SCAVENGER-QOS (match-any)
      291463 packets, 435690258 bytes
      30 second offered rate 42684000 bps, drop rate 0000 bps
      Match: dscp af11 (10)
        291463 packets, 435690258 bytes
        30 second rate 42684000 bps
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 1/0/0
      (pkts output/bytes output) 291462/435688762
      bandwidth 5% (5000 kbps)

    Class-map: class-default (match-any)
      58 packets, 4509 bytes
      30 second offered rate 0000 bps, drop rate 0000 bps
      Match: any
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 0/0/0
      (pkts output/bytes output) 58/4759
02-02-2017 06:07 PM
Stephen,
The interface bandwidth command affects QoS calculations, but it doesn't actually lower the interface's throughput. The QoS bandwidth amounts are affected, but since these only set guaranteed minimums and the interface can still push a full gig, you aren't causing drops. Try either pushing a full gig worth of traffic to force some drops, or use policing to force the real allowed throughput down to 100 Mbps.
HTH,
Jeremy
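(A minimal sketch of the policing idea Jeremy describes, as an illustration rather than a tested config: the policy name and CIR below are assumptions. The policer is shown standing alone, since child queueing under a parent generally requires shaping rather than policing, and only one output service-policy can be attached at a time, so it would replace RATE_LIMIT_TEST.)
policy-map POLICE_100MB
 class class-default
  ! drop everything beyond 100 Mbps (conform-action transmit / exceed-action drop are the defaults)
  police cir 100000000
!
interface GigabitEthernet0/0
 no service-policy output RATE_LIMIT_TEST
 service-policy output POLICE_100MB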
02-03-2017 05:31 AM
To add to Jeremy's comments, what's the bandwidth of the link from the iperf source? I suspect iperf is treating both sessions equally and, as Jeremy notes, your "WAN" interface isn't congesting.
With your current setup, you might try physically running your "WAN" interface at 10 Mbps.
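(A sketch of that change, assuming both ends of the link support being forced to 10 Mbps full duplex:)
interface GigabitEthernet0/0
 speed 10
 duplex full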
02-03-2017 06:59 AM
Hello
Just like to add to the others' comments.
Try shaping the overall link to a specified value and test again.
policy-map RATE_LIMIT_TEST
 class SCAVENGER-QOS
  bandwidth percent 5
  fair-queue
!
policy-map WAN_Parent
 class class-default
  shape average 10240000
  service-policy RATE_LIMIT_TEST
!
interface GigabitEthernet0/0
 service-policy output WAN_Parent
res
Paul
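(For completeness, a sketch of the full hierarchy with the critical class carried over from the original child policy; the class names and percentages come from the thread, and the shaper value follows Paul's example:)
policy-map RATE_LIMIT_TEST
 class CRITICAL-DATA-QOS
  bandwidth percent 90
 class SCAVENGER-QOS
  bandwidth percent 5
  fair-queue
!
policy-map WAN_Parent
 class class-default
  shape average 10240000
  service-policy RATE_LIMIT_TEST
!
interface GigabitEthernet0/0
 no service-policy output RATE_LIMIT_TEST
 service-policy output WAN_Parent
With the parent shaper in place, the child bandwidth percentages are taken from the shaped rate, so congestion (and therefore the 90/5 split) occurs at the shaper instead of at the physical gigabit interface.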