08-12-2011 03:38 PM - edited 03-07-2019 01:41 AM
With Shashank Singh
Welcome to the Cisco Support Community Ask the Expert conversation. This is an opportunity to get an update on QoS on Catalyst 2960, 3550, 3560, 3750, 4500 and 6500 series switches with Cisco expert Shashank Singh. Shashank graduated in 2009 with a bachelor's degree in Computer Science and Engineering from VIT University, Vellore, India. Prior to joining Cisco he worked as a software engineer at General Electric. He joined the Cisco Technical Assistance Center (TAC) as an engineer in October 2009 and has been working on LAN Switching technologies there since. Shashank also holds a CCNP certification, and QoS on Catalyst switches is one of his areas of interest.
Remember to use the rating system to let Shashank know if you have received an adequate response.
Shashank might not be able to answer every question due to the volume expected during this event. Remember that you can continue the conversation in the Network Infrastructure, LAN Switching discussion forum shortly after the event. This event lasts through August 26, 2011. Visit this forum often to view responses to your questions and the questions of other community members.
08-25-2011 03:37 AM
Hi Shashank,
I see:
Switch(config)#int gi1/0/45
Switch(config-if)#auto qos voip trust
Switch(config-if)#
*Mar 1 00:05:52.221: auto qos srnd4
*Mar 1 00:05:52.258: mls qos trust cos
*Mar 1 00:05:52.274: no queue-set 1
*Mar 1 00:05:52.279: queue-set 1
*Mar 1 00:05:52.284: priority-queue out
*Mar 1 00:05:52.284: srr-queue bandwidth share 1 30 35 5
That's on a switch stack:
Switch Ports Model SW Version SW Image
------ ----- ----- ---------- ----------
1 52 WS-C2960S-48FPS-L 15.0(1)SE C2960S-UNIVERSALK9-M
* 2 52 WS-C2960S-48FPS-L 15.0(1)SE C2960S-UNIVERSALK9-M
Any idea what might be causing the difference between the queue-set commands we see here and those shown in the SBA guides?
Thanks
Lee Flight
08-25-2011 09:59 PM
Hi Lee,
I just confirmed that the behavior changed in 12.2(58)SE. IOS versions older than 12.2(58)SE generate the queue-set 2 commands, while 12.2(58)SE and later do not.
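In case it helps when you compare against the SBA guides, you can still confirm which queue-set the port actually ends up mapped to with the 'queueing' keyword. A sketch against your example interface (the output values here are illustrative; your defaults may differ):
Switch#show mls qos interface gi1/0/45 queueing
GigabitEthernet1/0/45
Egress Priority Queue : enabled
Shaped queue weights (absolute) : 25 0 0 0
Shared queue weights : 1 30 35 5
The port bandwidth limit : 100 (Operational Bandwidth:100.0)
The port is mapped to qset : 1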
Cheers,
Shashank
08-26-2011 12:29 AM
Hi Shashank,
thank for resolving this; perhaps you could pass this information back to the owners of the SBA guides for update in
any later revisions of those documents?
A QoS on Catalyst Ask the Expert would be a very useful event to have scheduled again (regularly).
Thank you for your diligence in reproducing this issue and your activity in this forum.
Lee Flight
08-24-2011 11:02 AM
Shashank,
I am still investigating whether the priority queue, without congestion, will actually "override" FIFO and forward priority traffic before the other queues.
08-24-2011 11:34 AM
Hi Douglas,
At least on Catalyst switches, this is the behavior: the priority queue is serviced until it is empty, and the link does not need to be congested for this to happen.
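For reference, a minimal sketch of enabling the egress priority queue on a 3750/2960 port (the interface and the shared weights are just examples):
Switch(config)#interface gigabitEthernet 1/0/1
Switch(config-if)#priority-queue out
Switch(config-if)#srr-queue bandwidth share 1 30 35 5
Switch(config-if)#end
Once 'priority-queue out' is configured, SRR services queue 1 exhaustively before the other queues, and the first shared weight is ignored.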
Cheers,
Shashank
08-24-2011 12:17 PM
Shashank,
Thanks. Also, what show commands can you use to see this?
08-24-2011 12:56 PM
Hi Douglas,
You can use "sh mls qos int gix/y statistics" to check the markings getting enqueued and dropped on all four queues.
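For illustration, the egress portion of that output looks something like this (the counters here are made up; note that the statistics output numbers the egress queues 0 to 3, which correspond to queues 1 to 4 in the configuration):
3700#sh mls qos int gi3/0/1 statistics
(output trimmed to the egress queue counters)
  output queues enqueued:
 queue:    threshold1   threshold2   threshold3
-----------------------------------------------
 queue 0:        1500           0           0
 queue 1:           0           0       25000
 queue 2:           0           0           0
 queue 3:           0           0        3200

  output queues dropped:
 queue:    threshold1   threshold2   threshold3
-----------------------------------------------
 queue 0:           0           0           0
 queue 1:           0           0          12
 queue 2:           0           0           0
 queue 3:           0           0           0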
Note that queue 1 will always be the priority queue on 3750/2960 switches once you configure 'priority-queue out'. To check whether the priority queue is enabled, you can use the following command:
3700#sh mls qos int gi3/0/1 queueing
GigabitEthernet3/0/1
Egress Priority Queue : enabled
Shaped queue weights (absolute) : 25 0 0 0
Shared queue weights : 1 30 35 5
The port bandwidth limit : 100 (Operational Bandwidth:100.0)
The port is mapped to qset : 1
Cheers,
Shashank
08-25-2011 04:53 AM
Hello Shashank,
This might sound like a silly question, but I couldn't find a definitive answer for it.
What is considered congestion? Does it mean one packet sitting in the queue? 100% of the bandwidth being utilized?
Regards,
Bruno Silva.
08-25-2011 05:42 PM
Hi Bruno,
"Conceptually, congestion is defined by the Cisco IOS software configuration guide as: "During periods of transmit congestion at the outgoing interface, packets arrive faster than the interface can send them."
In other words, congestion typically occurs when a fast ingress interface feeds a relatively slow egress interface.
Functionally, congestion is defined as filling the transmit ring on the interface. A ring is a special buffer control structure. Every interface supports a pair of rings: a receive ring for receiving packets and a transmit ring for transmitting packets."
Taken from http://www.cisco.com/en/US/tech/tk543/tk760/technologies_tech_note09186a0080108e2d.shtml#topic1
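In practice, on a switch port you can look for evidence of congestion in the output-drop counters, for example (interface and counter values are illustrative):
Switch#show interfaces gi1/0/1 | include output drops
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 1345
A steadily increasing 'Total output drops' counter, or drops in the 'output queues dropped' section of 'show mls qos interface statistics', indicates that the egress queues are filling faster than the port can drain them.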
Hope that helps,
Shashank
08-25-2011 02:33 PM
Shashank,
Have you come across a problem where applying QoS queuing on the 10G line cards in a 6509 (WS-X6704-10GE with CFC and WS-X6708-10GE with DFC3C) prevents packets larger than 718 bytes from getting through?
Sample queuing configuration:
interface TenGigabitEthernet1/1
no ip address
logging event link-status
load-interval 30
udld port aggressive
wrr-queue cos-map 1 1 2
wrr-queue cos-map 2 1 3
wrr-queue cos-map 3 1 1 6
wrr-queue cos-map 4 1 0 7
priority-queue cos-map 1 4 5
rcv-queue cos-map 1 1 4 5
rcv-queue cos-map 1 2 1 2 6
rcv-queue cos-map 1 3 3
rcv-queue cos-map 1 4 0 7
mls qos trust dscp
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 500
switchport trunk allowed vlan 200,500
switchport mode trunk
Enabling it on one side or on both ends of the 10G trunk causes this problem.
IRTest6509-01#debug qm error
QM error debugging is on
IRTest6509-01#debug qm error event port-err
IRTest6509-01#debug qm port-error
Port ASIC QoS error debugging is on
IRTest6509-01#debug qm port-error event
Port ASIC QoS event debugging is on
IRTest6509-01#debug qm port-eventint
QM interface events debugging is on
IRTest6509-01#
IRTest6509-01#conf t
Enter configuration commands, one per line. End with CNTL/Z.
IRTest6509-01(config)#end
IRTest6509-01#
IRTest6509-01#
IRTest6509-01#ping
IRTest6509-01#ping 10.255.90.2 si 10000
Type escape sequence to abort.
Sending 5, 10000-byte ICMP Echos to 10.255.90.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 8/11/20 ms
IRTest6509-01#conf t
Enter configuration commands, one per line. End with CNTL/Z.
IRTest6509-01(config)#mls qos
IRTest6509-01(config)#
02:23:33: HWIF QOS:hwif_qos_set_portqos_state: enqueueing port_qos_enable = 1, earl_qos_enable = 1
02:23:33: qos_enable_maccos(1): failed
02:23:33: QM enqueue global state change event for event:6, slot 0, up ENABLED
IRTest6509-01(config)#do ping 10.255.90.2 si 718
Type escape sequence to abort.
Sending 5, 718-byte ICMP Echos to 10.255.90.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/4 ms
IRTest6509-01(config)#do ping 10.255.90.2 si 719
Type escape sequence to abort.
Sending 5, 719-byte ICMP Echos to 10.255.90.2, timeout is 2 seconds:
.....
Success rate is 0 percent (0/5)
IRTest6509-01(config)#no mls qos
IRTest6509-01(config)#
02:24:39: HWIF QOS:hwif_qos_set_portqos_state: enqueueing port_qos_enable = 0, earl_qos_enable = 0
02:24:39: qos_enable_prec2cos(0): failed
02:24:39: qos_enable_maccos(0): failed
02:24:39: QM enqueue global state change event for event:6, slot 0, up DISABLED
02:24:40: hwif_qos_cos_mut_map_qos_statechange: TenGigabitEthernet1/1
02:24:40: hwif_qos_cos_mut_set_in_hardware: TenGigabitEthernet1/1
02:24:40: hwif_qos_cos_mut_set_in_hardware: Setting cos-mutation for TenGigabitEthernet1/1
IRTest6509-01(config)#do ping 10.255.90.2 si 10000
Type escape sequence to abort.
Sending 5, 10000-byte ICMP Echos to 10.255.90.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 8/10/12 ms
IRTest6509-01(config)#
Would appreciate it.
Cheers.
Erwyn
08-25-2011 05:55 PM
Hi Erwyn,
I am not familiar with this issue. Please open a TAC case for further analysis.
Cheers,
Shashank
08-25-2011 06:27 PM
Thanks, we have. Just trying to see if someone has already come across this, as there is no specific TAC case or IOS bug mentioned around it.
Cheers,
Erwyn
05-11-2015 07:37 PM
100 Mbps link with QoS vs. 100 Mbps link without QoS:
Both links are less than 50% utilized.
Both links have the same source and the same destination.
The router is a decent model with 10% CPU utilization.
The traffic is voice, so jitter matters.
Really a simple question:
Will there be a difference?
05-12-2015 04:49 AM
Disclaimer
The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.
Liability Disclaimer
In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.
Posting
"It depends" on the nature of your traffic.
50% utilization tells us almost nothing. You might desperately need QoS at 1% utilization or not need it at 100% utilization.
QoS is useful for managing congestion, including transient congestion, that is adverse to the service needs of your traffic. This also assumes that such QoS congestion management can actually help.
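If you do decide QoS is warranted for the voice traffic, a minimal MQC sketch on the router might look like the following. This assumes the voice traffic is already marked EF; the class and policy names, the interface, and the 20 percent figure are purely illustrative, not a recommendation:
class-map match-all VOICE
 match dscp ef
!
policy-map WAN-EDGE
 class VOICE
  ! Low-latency queue for voice, limited to 20 percent of the link during congestion
  priority percent 20
 class class-default
  ! Everything else gets flow-based fair queueing
  fair-queue
!
interface FastEthernet0/0
 ! Apply the policy outbound on the 100 Mbps link
 service-policy output WAN-EDGE
Whether this changes anything in practice comes back to the point above: it only makes a difference if voice packets ever queue behind other traffic, even transiently, on the egress interface.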