QoS configuration on 2960X seems to have no impact

Dear network people

We have been having issues with DECT base stations that have a very strict delay requirement: communication between two devices should not exceed 0.5 ms (500 µs). After trying lots of different QoS configurations, we decided to test the setup with some ping statistics. So our setup is like this:

 

DeviceA - - - - SwitchA - - - - SwitchB - - - - SwitchC - - - - SwitchD - - - - DeviceB

We ran a 1-hour ping from DeviceA, which is a Windows Server 2016 VM on ESXi, to DeviceB, which is just a Windows 10 PC. First we ran a test without any QoS active as a reference. Then we marked the ICMP traffic as DSCP 18 so it gets its own queue. This is the policy configuration on SwitchA and SwitchD:

 

 

ip access-list extended QOS_ICMP_COLORING
 permit icmp host 172.16.144.35 any
 permit icmp any host 172.16.144.35

class-map match-any QOS_ICMP_COLORING
 match access-group name QOS_ICMP_COLORING

policy-map QOS
 class QOS_ICMP_COLORING
  set dscp af21

SwitchA
interface GigabitEthernet1/0/37
description DeviceA with IP 172.16.144.35
switchport trunk native vlan 10
switchport mode trunk
power inline port 2x-mode
mls qos trust dscp
mls qos dscp-mutation HIDE
spanning-tree portfast trunk
service-policy input QOS
end

SwitchD
interface GigabitEthernet1/0/5
description DeviceB with IP 172.16.144.243
 switchport access vlan 10
switchport mode access
power inline port 2x-mode
mls qos trust dscp
mls qos dscp-mutation HIDE
spanning-tree portfast
service-policy input QOS
end

Example uplink interface (same on all switches)
interface GigabitEthernet1/0/47
description uplink
switchport trunk native vlan 10
switchport mode trunk
power inline port 2x-mode
mls qos trust dscp
mls qos dscp-mutation HIDE
end
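Just to show how we sanity-check the ports: something like the following should confirm that global QoS is on and that the trust state, the mutation map and the ingress policy are actually attached (interface numbers are only examples).

! global QoS must be enabled, otherwise none of the port settings take effect
SwitchA#show mls qos

! access port: should list the attached ingress policy-map QOS, trust dscp and the HIDE mutation map
SwitchA#show mls qos interface gigabitEthernet 1/0/37

! uplink: no policy expected here, only trust dscp and the HIDE mutation map
SwitchA#show mls qos interface gigabitEthernet 1/0/47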

Since there is a fixed DSCP-to-queue mapping on the 2960-X, we use a dscp-mutation map to overwrite all unwanted markings in the network, letting only the desired markings through. Here is the mls qos part of the configuration:

 

mls qos

mls qos map dscp-mutation HIDE 1 2 3 4 5 6 7 8 to 0
mls qos map dscp-mutation HIDE 9 10 11 12 13 14 15 16 to 0
mls qos map dscp-mutation HIDE 17 19 20 21 22 23 25 26 to 0
mls qos map dscp-mutation HIDE 27 28 29 30 31 33 34 35 to 0
mls qos map dscp-mutation HIDE 36 37 38 39 40 41 42 43 to 0
mls qos map dscp-mutation HIDE 44 45 47 48 49 50 51 52 to 0
mls qos map dscp-mutation HIDE 53 54 55 56 57 58 59 60 to 0
mls qos map dscp-mutation HIDE 61 62 63 to 0
SwitchA#show mls qos maps dscp-mutation HIDE
   Dscp-dscp mutation map:
   HIDE:
     d1 :  d2 0  1  2  3  4  5  6  7  8  9
     ---------------------------------------
      0 :    00 00 00 00 00 00 00 00 00 00
      1 :    00 00 00 00 00 00 00 00 18 00
      2 :    00 00 00 00 24 00 00 00 00 00
      3 :    00 00 32 00 00 00 00 00 00 00
      4 :    00 00 00 00 00 00 46 00 00 00
      5 :    00 00 00 00 00 00 00 00 00 00
      6 :    00 00 00 00

We did not change the DSCP-to-queue mapping, so it remains at the default on all ports, which is:

 

 queue 0:  DSCP 40–47         Voice RTP
 queue 1:  DSCP 00–15         Class-default
 queue 2:  DSCP 16–31         Voice Signaling
 queue 3:  DSCP 32–39,48–63   Citrix & STP

Since we do not use voice signaling (DSCP 24) at this location, the ICMP traffic (DSCP 18) can use the otherwise unused queue 2. In the show mls qos statistics we can see the DSCP 18 markings. I verified the whole path and everything looks good so far. We use the default queue-set settings.
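If I read the platform documentation correctly, the egress queues are numbered 1 to 4 in the configuration commands but 0 to 3 in the statistics output, so "queue 2" above corresponds to queue 3 in the CLI. As a sketch, something like this should display the active DSCP-to-output-queue map and, if we ever wanted to, pin DSCP 18 to that queue explicitly instead of relying on the default (queue and threshold values are only examples):

! show which egress queue/threshold each DSCP value is mapped to
SwitchA#show mls qos maps dscp-output-q

! explicitly map DSCP 18 to egress queue 3 (shown as queue 2 in the statistics), threshold 3
SwitchA(config)#mls qos srr-queue output dscp-map queue 3 threshold 3 18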

 

 

SwitchA#show mls qos interface gigabitEthernet 1/0/37 statistics
GigabitEthernet1/0/37 (All statistics are in packets)

  dscp: incoming
-------------------------------
  0 -  4 :  1066030       0       0       0       0
  5 -  9 :        0       0       0       0       0
 10 - 14 :        0       0       0       0       0
 15 - 19 :        0       0       0       0       0
 20 - 24 :        0       0       0       0       0
 25 - 29 :        0       0       0       0       0
 30 - 34 :        0       0       0       0       0
 35 - 39 :        0       0       0       0       0
 40 - 44 :        0       0       0       0       0
 45 - 49 :        0       0       0       0       0
 50 - 54 :        0       0       0       0       0
 55 - 59 :        0       0       0       0       0
 60 - 64 :        0       0       0       0

  dscp: outgoing
-------------------------------
  0 -  4 :  2038575       0       0       0       0
  5 -  9 :        0       0       0       0       0
 10 - 14 :        0       0       0       0       0
 15 - 19 :        0       0       0   68618       0
 20 - 24 :        0       0       0       0       0
 25 - 29 :        0       0       0       0       0
 30 - 34 :        0       0  679380       0       0
 35 - 39 :        0       0       0       0       0
 40 - 44 :        0       0       0       0       0
 45 - 49 :        0  974219       0       0       0
 50 - 54 :        0       0       0       0       0
 55 - 59 :        0       0       0       0       0
 60 - 64 :        0       0       0       0

output queues enqueued:
queue: threshold1 threshold2 threshold3
-----------------------------------------------
queue 0: 974219 0 0
queue 1: 6166478 1226 544798
queue 2: 72375 0 0
queue 3: 679900 0 0

output queues dropped:
queue: threshold1 threshold2 threshold3
-----------------------------------------------
queue 0: 0 0 0
queue 1: 8 0 0
queue 2: 0 0 0
queue 3: 0 0 0

SwitchA#show mls qos queue-set 1
Queueset: 1
Queue : 1 2 3 4
----------------------------------------------
buffers : 25 25 25 25
threshold1: 100 200 100 100
threshold2: 100 200 100 100
reserved : 50 50 50 50
maximum : 400 400 400 400
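For reference only: if the drop counters above were climbing, the queue-set could be tuned to give that queue a bigger share of the buffers, roughly like this (the numbers are just an example, not a recommendation). In our case the drops are practically zero, so I don't expect this to change the maximums.

! give egress queue 3 (queue 2 in the statistics) a larger share of the buffer pool
SwitchA(config)#mls qos queue-set output 1 buffers 15 25 40 20

! per-queue thresholds: queue-id threshold1 threshold2 reserved maximum
SwitchA(config)#mls qos queue-set output 1 threshold 3 100 100 100 400

! check what the port actually uses
SwitchA#show mls qos interface gigabitEthernet 1/0/37 buffers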

What seems strange to me is that the DSCP 18 count in the mls qos statistics (dscp: outgoing) does not match the output queues enqueued counter for queue 2. But maybe this is normal. Sadly, our ping statistics haven't improved. So we changed the policy-map to set DSCP 46 and activated the priority queue:

policy-map QOS
 class QOS_ICMP_COLORING
  set dscp ef

interface GigabitEthernet1/0/5
 description DeviceB with IP 172.16.144.243
 switchport access vlan 10
 switchport mode access
 power inline port 2x-mode
 priority-queue out
 mls qos trust dscp
 mls qos dscp-mutation HIDE
 service-policy input QOS
end
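The uplinks were changed along the same lines, roughly like this (the srr-queue bandwidth share values are only an example of how the remaining queues could be weighted, and as far as I know the weight of queue 1 is ignored once priority-queue out is enabled):

interface GigabitEthernet1/0/47
 description uplink
 switchport trunk native vlan 10
 switchport mode trunk
 power inline port 2x-mode
 mls qos trust dscp
 mls qos dscp-mutation HIDE
 priority-queue out
 srr-queue bandwidth share 1 30 35 5
end

! verify that the expedite queue is really active on the port
SwitchA#show mls qos interface gigabitEthernet 1/0/47 queueing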

The service policies have only been applied to the interfaces where the end devices are attached. The priority queue has been enabled on all interfaces along the path (uplinks included), roughly as sketched above. But this did not affect the ping statistics either. This is a network with 21 switches, most of them in a star topology. We ran the test from the same server to 3 PCs simultaneously to get more reliable data, but we saw the same picture everywhere. Here are the results:

TC0686CQ (swDAI15)      no QoS    QoS (DSCP 18)   QoS (DSCP 46) + priority
More than 1 ms          2872      4872            1870
More than 2 ms          1012      2080            735
More than 3 ms          123       160             449
More than 4 ms          75        246             39
More than 5 ms          64        59              29
Average (ms)            0.768     0.935           0.873
Maximum (ms)            26.5      23              26.3
Timeouts                0         0               0

TC0686JZ (swDAI12)      no QoS    QoS (DSCP 18)   QoS (DSCP 46) + priority
More than 1 ms          2507      2891            746
More than 2 ms          865       1012            385
More than 3 ms          150       894             41
More than 4 ms          106       64              19
More than 5 ms          87        73              34
Average (ms)            0.866     0.736           0.679
Maximum (ms)            48.5      59.3            27.9
Timeouts                1         0               1

TC0868KD (swDAI08)      no QoS    QoS (DSCP 18)   QoS (DSCP 46) + priority
More than 1 ms          3145      1010            406
More than 2 ms          1095      171             154
More than 3 ms          145       134             27
More than 4 ms          63        62              11
More than 5 ms          70        43              27
Average (ms)            0.737     0.717           0.703
Maximum (ms)            43.7      40.3            55.7
Timeouts                1         1               2

We always tested for 1 hour with a ping every 50 ms, that is 20 pings per second, or 72'000 pings per hour. The average is fine for me all the way through; maybe it cannot be improved any further. What bugs me are the maximums, which we can't get rid of even with the priority queue activated. Is there something we missed in our configuration? The only explanation I have is that the delay comes from the end devices. We cross-referenced the values to see whether the server had some load spikes that delayed all the pings at the same time, but that was not the case. Thanks for having a look at this long post.
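For the next test run we will probably also collect something like this on every uplink along the path while the pings are running, to rule out the switches before blaming the end devices (interface numbers are only placeholders):

! per-queue egress drop counters on the uplink
SwitchB#show mls qos interface gigabitEthernet 1/0/47 statistics | begin output queues dropped

! interface-level output drops and current load
SwitchB#show interfaces gigabitEthernet 1/0/47 | include drops|rate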

 

 

 
