Both bandwidth and buffer are utilized simultaneously when there is congestion on the port. The bandwidth setting determines how many packets are processed from each queue in every round-robin cycle, while the buffer setting determines the maximum number of packets that queue can hold.
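On the 3750/3560, for example, these two settings are configured independently; a minimal sketch, assuming a 4-queue egress port (the interface and values are illustrative):
sw(config)# interface gigabitethernet1/0/1
sw(config-if)# srr-queue bandwidth share 10 10 60 20
! SRR services each output queue in proportion to its weight in every cycle
sw(config-if)# exit
sw(config)# mls qos queue-set output 1 buffers 25 25 25 25
! splits the port's buffer space (in percent) across the 4 egress queues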
The following commands can be used to check the existing mappings:
sh mls qos maps cos-output-q >>> map of cos values to output queue
sh mls qos maps dscp-output-q >>> map of dscp values to output queue
sh mls qos maps cos-input-q >>> map of cos values to input queue
sh mls qos maps dscp-input-q >>> map of dscp values to input queue
Unless changed via configuration, the values seen through the above commands are the default (standard) values, which are also the recommended ones. However, you can change them as per your requirements with the commands below:
sw(config)# mls qos srr-queue input cos-map queue 1 threshold 3 0
! this will map cos 0 to input queue 1 threshold 3.
sw(config)# mls qos srr-queue input dscp-map queue 1 threshold 2 46
! this will map dscp 46 to input queue 1 threshold 2.
In a similar way, you can change the mappings for the output queues.
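For example, the output-queue equivalents follow the same pattern (the values here are illustrative):
sw(config)# mls qos srr-queue output cos-map queue 3 threshold 3 5
! this will map cos 5 to output queue 3 threshold 3.
sw(config)# mls qos srr-queue output dscp-map queue 1 threshold 3 46
! this will map dscp 46 to output queue 1 threshold 3.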
As far as I know, there is no command to show the packet count transmitted per queue on the 6500. However, we can check the number of packets dropped in each queue using the 'sh queueing interface gix/y' command. We can also check the packets transmitted after hitting a particular class map (if applied via a policy map on an interface) using the 'sh mls qos ip' command.
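As a sketch, counting matched packets that way would look something like the following; the class and policy names are hypothetical:
sw(config)# class-map match-all VOICE
sw(config-cmap)# match ip dscp 46
sw(config)# policy-map INBOUND
sw(config-pmap)# class VOICE
sw(config-pmap-c)# set ip dscp 46
sw(config)# interface gigabitethernet1/1
sw(config-if)# service-policy input INBOUND
sw# sh mls qos ip gigabitethernet 1/1
! shows per-class packet statistics for the policy applied on this interface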
Do you see any future developments in bringing a more unified approach to deploying QoS on Catalyst switches? I understand the hardware used between models is very different, which makes deploying QoS on a 6500 very different from deploying QoS on a 3750. Are there plans to perhaps have the priority queue numbered as queue number X on all Catalyst platforms?
As far as I know, for existing platforms there are no plans to change the prevailing architecture. This again comes from the fact that the ASICs used in different platforms have been designed in different ways. This however may be a possibility for upcoming products.
Below are two routers and two switches connected.
There are 4 devices in the network, located at the call center and accessing the CIC servers. I don't know how to attach the configuration files; please suggest whether I can paste the configurations here for all the devices.
C3900-SPE100/K9 Router A
C3900-SPE100/K9 Router B
CIC Switch 4948 with flash bootflash:cat4500-entservices-mz.122-53.SG5.bin
Thanks for reaching out. Just wanted to let you know that we are focusing on Catalyst QoS in this dedicated discussion, and end-to-end QoS configuration assistance is outside its scope. If you have any specific questions about a feature or topic in Catalyst QoS, we will be more than glad to answer them for you.
For QoS configuration assistance and best practices, I would suggest following the Campus QoS design guide located at
I want to use QoS over QinQ tunnels in Cat3750 (not E or X).
(I posted details of my issue here:
I want L2 QoS only. By default the switch copies the fields of incoming traffic to the outer 802.1q header of the tunnel. What I want is to preserve the marking of the tunneled traffic (I have to be as transparent as possible) but apply my own QoS to the tunnel (which is associated with a specific customer). Is it possible to keep the QoS information of the tunneled traffic and apply my own rules to a specific tunnel?
I couldn't find the answer in the documentation.
If I use on my ingress interface something like
mls qos cos 1 override
will it destroy the incoming QoS info and set the .1p field of the outer header to 1?
I will have 2 tunnels (one for each customer) and my final goal is to provide 2% of the bandwidth to customer A and 10% to customer B.
You can use the "set cos" or "set ip dscp" policy map commands which change the cos/dscp value of only the outer tag of the encapsulated packet.
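As a sketch for the two-customer setup, something along these lines could mark the outer tag on each tunnel port; the names and values are illustrative, and the exact options available depend on the 3750 software version:
sw(config)# policy-map CUST-A-TUNNEL
sw(config-pmap)# class class-default
sw(config-pmap-c)# set cos 1
! rewrites only the outer (service-provider) tag; the customer's inner tag is preserved
sw(config)# interface gigabitethernet1/0/10
sw(config-if)# switchport mode dot1q-tunnel
sw(config-if)# service-policy input CUST-A-TUNNEL
Once each customer's traffic carries a distinct outer CoS, the bandwidth split could then be approximated on the egress side by mapping those CoS values to separate output queues and adjusting the SRR weights accordingly.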
Though I assume that 'mls qos cos 1 override' will also affect only the outer (service-provider) tag, I am not 100% sure about this.
I'll try to trust DSCP and override CoS; if I can at least keep DSCP intact and enforce my QoS at Layer 2, that would be OK.
Is there any guide about QinQ-related QoS somewhere? I couldn't find anything except the documentation and the configuration examples page:
Seems to be far from being simple !
The following guide talks about QoS with QinQ, though on a different platform. I believe it will still be useful for understanding how it works.