QoS "bandwidth" command policy counter "match" shows "0".

Wesker
Level 1

Hi guys, I'm trying to get proof that my queueing policy works. I have a Catalyst 9300, so I applied the marking policy to one of its end-host interfaces and the queueing policy to the uplink. I do see the counters increasing on the marking policy, but the queueing policy shows "0" packets matched. As I understand it, the "bandwidth" command only kicks in when the link is saturated, so I ran iperf from the computer connected to switch interface g2/0/46 to a server connected to another switch, so the traffic traverses uplink 1/0/48. In the Linux output I see iperf reporting 10 Mbps of transfer speed, and at the same time I'm pinging the server from the laptop, but the queueing policy match counter still shows "0". I also did a few web-browser speedtests. How can I show the customer that marked packets are getting a minimum of 2% of the link bandwidth?

The aim is to guarantee bandwidth when the uplink is congested.

 

The switch functions as L2; no routing is configured, etc., if this matters.

Config:

1)

class-map match-any QoS-Queueing
match dscp cs2

class-map match-all QoS-Marking
match access-group 101

2)

sh access-lists
Extended IP access list 101
10 permit tcp any any eq 3389
20 permit tcp any eq 3389 any
30 permit icmp any any echo
40 permit icmp any any echo-reply

3)

show policy-map:
Policy Map QoS-Queueing-Policy
Class QoS-Queueing
bandwidth 2 (%)

Policy Map QoS-Marking-Policy
Class QoS-Marking
set dscp cs2

 

 

4)

sh policy-map interface 
GigabitEthernet1/0/48                                                   <-------UPLINK

Service-policy output: QoS-Queueing-Policy

Class-map: QoS-Queueing (match-any)
0 packets                                                                    <------------ Counter in question
Match: dscp cs2 (16)
Queueing

(total drops) 0                                                             <---------- drops also stay at 0
(bytes output) 549563059
bandwidth 2% (200 kbps)

Class-map: class-default (match-any)
0 packets
Match: any

(total drops) 634524
(bytes output) 1191818120

 

 

GigabitEthernet2/0/46   <------------------------------ interface connected to laptop

Service-policy input: QoS-Marking-Policy

Class-map: QoS-Marking (match-all)
26023 packets                                                       <-------- packets are being marked
Match: access-group 101
QoS Set
dscp cs2

Class-map: class-default (match-any)
904817 packets
Match: any

5)

interface GigabitEthernet2/0/46
switchport access vlan 613
switchport mode access
service-policy input QoS-Marking-Policy

 

interface GigabitEthernet1/0/48
description uplink 
switchport mode trunk
speed 10                                                              <------------ I limited its speed to 10 trying to see drops
channel-group 48 mode auto
service-policy output QoS-Queueing-Policy
end

I'm using this guide:

https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst9300/software/release/17-9/configuration_guide/qos/b_179_qos_9300_cg/configuring_qos.html#information_about_qos

9 Replies

M02@rt37
VIP

Hello @Wesker 

The bandwidth command in a QoS policy on Cisco devices is used to allocate a minimum bandwidth when there is congestion. If the uplink interface is not congested (meaning the interface is not saturated), the queueing mechanism will not engage, and therefore, the counters will not increment.

Your Cisco switch applies the queueing mechanism only when the traffic exceeds the available bandwidth. Since you have set the uplink speed to 10 Mbps and are running iperf, it should trigger the queueing policy if you are indeed generating enough traffic.

Also, instead of just using the bandwidth command, try using a shaping policy. Shaping will impose an upper limit on the traffic rate and can be more effective in triggering queueing mechanisms:

policy-map QoS-Queueing-Policy
class QoS-Queueing
shape average 200000
bandwidth 2

--Apply this shaping policy to the uplink and observe if this makes a difference.
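For reference, a minimal sketch of making that change and re-checking the counters, reusing the policy and interface names from the original post (treat it as an illustration; some releases may not accept shape and bandwidth together in the same class, so check what the parser accepts):

configure terminal
 policy-map QoS-Queueing-Policy
  class QoS-Queueing
   shape average 200000
end
! the policy is already attached to Gi1/0/48, so push traffic and re-check:
show policy-map interface GigabitEthernet1/0/48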

 

Best regards
.ı|ı.ı|ı. If This Helps, Please Rate .ı|ı.ı|ı.

Joseph W. Doherty
Hall of Fame

Two possibilities, I think.

First, over the years, I've seen switches often not correctly reflect hardware stats in the IOS interface counters.  You sometimes need to do a show of the ASIC stats.
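On the Catalyst 9300, those ASIC-level counters can be pulled with something like the following (the same command shows up later in this thread; the output format varies by release):

show platform hardware fed switch active qos queue stats interface gigabitEthernet 1/0/48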

Second, I see a byte count (does it increment during your testing?), so possibly the output match count is only against packets that both matched AND needed to be queued in that class queue.

Wesker
Level 1

By the way, the goal is to prioritize ICMP traffic, so I assume that even if the uplink is not congested with traffic (let's say downloading/uploading a file at less than the full speed of this link), if the egress queue (buffer) is full there will be packet drops and the "bandwidth" percentage reservation won't kick in?

 

Here I see drops are increasing (also see the screenshot, please):

#show platform hardware fed switch active qos queue stats interface gigabitEthernet 1/0/48
...
Drop-TH2 ... 11876832
...

along with another drop counter:

#sh interfaces g1/0/48
...
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 110502


@Wesker wrote:

By the way, the goal is to prioritize ICMP traffic, so I assume even if uplink is not congested with traffic (let's say  downloading/uploading the file on the full speed of this link), if the egress queues(buffer) is full there will be packet drops and "bandwidth" percentage reservation won't kick in?

I'm confused by that question.

On switches, on the 9Ks, I'm unsure whether there's a separate interface transmission queue that overflows into the 8 egress queues.

Your stats show bytes being enqueued within the first two queues: Q0 at two different drop levels, while Q1 only shows enqueued and dropped bytes.

Anyway, the way it works, packets are enqueued within one of 8 interface egress queues, using up to 3 different drop thresholds; packets overflowing a particular queue's drop threshold (again, one of 3 possible) are dropped.  What you should see is an enqueued count higher than a dropped count.
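If you want to see which of the 8 queues and which threshold DSCP cs2 lands in, the per-queue configuration can be checked with a command along these lines (form assumed by analogy with the stats command used elsewhere in this thread; output varies by release):

show platform hardware fed switch active qos queue config interface gigabitEthernet 1/0/48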

Your show platform stats do show drops in the one queue.  Your earlier policy map stats also show drops in the default queue.  The two numbers don't agree, but I recall (?) the platform stats only clear upon reload.

BTW, for testing purposes, I've used a traffic generator to send UDP traffic at 110% (or more) of interface bandwidth to one queue while pinging or using telnet within another queue.  Ideally, you should see little difference in the ping or telnet whether the traffic generator is running or not.  Compare that with having ping or telnet share the same queue as the traffic generator; there the generator's impact is usually quite noticeable when it's running vs. not running.
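Since iperf and ping are already in play here, a rough way to reproduce that kind of test from the laptop on Gi2/0/46 (the server address is a placeholder; UDP is used because TCP backs off under loss and may never keep the 10 Mbps uplink congested):

iperf -c <server-ip> -u -b 15M -t 120     <---- unmarked UDP flood, lands in class-default
ping <server-ip>                          <---- ICMP, marked cs2 by ACL 101

Compare the ping latency/loss with the flood running and with it stopped.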

Wesker
Level 1

"Second, I see a byte count (does it increment during your testing?)" - yes, the byte counter is increasing.

 

Maybe I was not clear enough in the previous post, so:

Situation 1) the switch uplink is fully utilized, 10 Mbps out of 10 Mbps, by downloading one big file from an external server.

 

Situation 2) the switch uplink is utilized at 30%, but multiple ICMP packets pass through the interface, which results in output queue buffer overflow.

 

 

As I understand it, packet drops can happen in both situations: in "1)" due to full interface utilization/congestion, and in "2)" due to buffer overflow even though bandwidth is not fully utilized.

 

In situation "2)", the bandwidth reservation policy won't kick in, as there's no full interface bandwidth utilization.

Am I right?

Situation 1 does not lead to congestion on your interface's EGRESS, which is where you have control of bandwidth management.

In situations where your traffic is two way, like ping and its reply, a fully loaded interface TO you may, or may not, cause packets TO you to be delayed or dropped.  Ideally, the other side of the link would use QoS to ensure your overall QoS strategy is effective.

(BTW, an interface running at 100% may not be congested, and an interface running at even 10% might have horrible transient congestion.)

If you have no say/control over the far side's QoS, there are "things" that can be done to, somewhat, regulate the traffic TO you, but their effectiveness varies based on the type of traffic.  Further, one of the most effective ways to regulate some traffic TO you is not a feature available to you.  Also, the methods that are available TO you are not very "precise".

In situation 2, if utilized at 30% (ingress? egress? both? aggregate?) you may, or may not, have transient congestion.  So, QoS, may, or may not, trigger regardless of egress utilization (well, unless you continuously send at over 100%).

Consider you have a gig ingress port which will egress a 10 Mbps port.

The ingress port accepts just 1 second of data.  If bandwidth utilization is measured across 60 seconds, the utilization is only 1.67%.

The 10 Mbps port, being 100x slower, will take 100 seconds to transmit the received data.  So for the first minute its utilization is 100% and for the second minute only 66.7%.

Suppose, though, the 10 Mbps port doesn't have the buffers to store all 100 seconds of data.  If the interface could only buffer (queue) 30 seconds' worth, it would drop 70% of the received data, and show 50% utilization for the first minute and 0% utilization for the second minute.

Can you start to see that bandwidth utilization percentages, alone, don't tell us what's happening?

Do you have in mind a good QoS policy strategy/method for situations 1 & 2? We have an NMS (it uses SNMP and ICMP) which falsely reports devices down from time to time because, I assume, of egress/ingress port buffer overflow. Seems like "bandwidth" is not the best option here...

Ingress buffer overflow is possible but very rare.

For traffic that needs some form of SLA, you use egress QoS to try to meet its service needs.

On something like a switch, you would send such traffic to its own class with sufficient resource guarantees to meet its service needs.

BTW, ToS markings are just an efficient/fast way to determine what a packet's QoS class is.

Also, although the "book" calls for QoS, end-to-end, often it's only needed at bottlenecks.

Oh, guess I should add, the QoS "bandwidth" statement might be a fine answer.

What you might need to do is provide a much higher bandwidth statement than the actual bandwidth needed.  This should "prioritize" dequeuing.

policy-map Absolute_priority
 class Foreground
  priority percent 10
 class class-default
  bandwidth remaining percent 100

policy-map Relative_priority
 class Foreground
  bandwidth percent 99
  police percent 10
 class class-default
  bandwidth percent 1

The above two policies should provide similar dequeuing ratios for the two classes, although there are some subtle differences.

So why not use the priority (LLQ) statement?  Because at some point in the future, you may want to use it for something like VoIP.
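To tie this back to the NMS case, a sketch of slotting that traffic into the Foreground class, following the same mark-on-ingress / queue-on-egress pattern already used in this thread, might look like this (the SNMP ACL lines and the egress class definition are assumptions; ICMP is already matched by ACL 101, the NMS traffic is assumed to be marked cs2 on ingress the same way the laptop traffic is, and attaching Absolute_priority to the uplink would replace QoS-Queueing-Policy):

ip access-list extended 101
 permit udp any any eq snmp
 permit udp any eq snmp any

class-map match-any Foreground
 match dscp cs2

interface GigabitEthernet1/0/48
 service-policy output Absolute_priority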
