ICMP Latencies in a QoS lab not as expected

alvarezp2000
Level 1

Hi. I just set up a QoS lab, trying to verify the "priority" statement on a couple of Cisco 871s.

This is the diagram: http://blog.alvarezp.org/imagenes/qos-lab-icmp.png

I connected the two 871s back-to-back on their Fa4 (WAN) interfaces and set them to 10/Auto (both negotiated full duplex).

I set up two VLANs in each router: one simulates data traffic that saturates the WAN link, and the other simulates prioritized voice traffic. I chose ICMP for both the saturation and the prioritized traffic because that's the tool I have available that lets me measure RTT.

I then applied the service-policy to both Fa4 interfaces, expecting low latency for the prioritized traffic. By "low" I mean about the same as when the link is free: 1 to 2 ms.

The "saturation" traffic is about 9.3 Mbps +- 0.2 Mbps ICMP traffic. I'm actually using 4 different physical devices for traffic simulation, to remove any kind of CPU and NIC-induced variables into the lab.

I have tried several combinations for the service policies so far (rough sketches follow the list):

1. One simple ACL-based class with priority percent 50 and the rest in fair-queue.

2. Hierarchical, with the default class shaped to 10 Mbps and a child service-policy with priority percent 50.

3. Both combined (that's why you see some lines beginning with ! in the diagram)
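Roughly, options 1 and 2 look like this (the class and policy names here are just placeholders; the real ACL is the PRIORIZADO-OUT one in the configs further down, and the exact percentages varied between tests):

class-map match-any PRIO
 match access-group name PRIORIZADO-OUT
!
! Option 1: one LLQ class, fair-queue for everything else
policy-map OPTION1
 class PRIO
  priority percent 50
 class class-default
  fair-queue
!
! Option 2: hierarchical; class-default shaped to 10 Mbps with a child LLQ policy
policy-map CHILD-LLQ
 class PRIO
  priority percent 50
policy-map OPTION2
 class class-default
  shape average 10000000
  service-policy CHILD-LLQ
!
! either one is then attached outbound on the WAN interface
interface FastEthernet4
 service-policy output OPTION1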

So, the problem: when the WAN link is saturated I get oscillating values, but they usually settle somewhere around 30 to 60 ms. To make sure this isn't just a normal value, I tried removing the service-policy, but I got values in the same range. If the service-policy were actually working, I would have gotten different RTTs, right?

The only combination that has given me somewhat good results latency-wise is having the default class policed or shaped to a value lower than the full link rate, but even slight policing (say, to 9 Mbps) gives me about 40 to 50% data loss for the saturated traffic.

Also, I could not apply the service-policy to the Vlan1 and Vlan2 interfaces, because that would imply queueing on input, so no surprise there.

Is this expected? Is the 871 not good enough for service policies? Is there something wrong in the lab setup or configuration? Wrong assumptions? What am I missing?

7 Replies

Joseph W. Doherty
Hall of Fame

Disclaimer

The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.

Liability Disclaimer

In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.

Posting

Do you see queues forming on the 10 Mbps egress?  Reason I ask: if your saturation traffic is only 9.3 Mbps and priority traffic 0.2 Mbps, and CBR, there would be little egress queuing.

All the other links are running at 100 Mbps?

Have you made any adjustment to tx-ring-limit?

You've verified you're getting priority class matches?
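(Roughly, the commands I have in mind for checking the above, though exact syntax and output vary a bit by IOS release:

show interfaces FastEthernet4 | include [Qq]ueue
show policy-map interface FastEthernet4
show controllers FastEthernet4 | include ring

and, to experiment with the ring size:

interface FastEthernet4
 tx-ring-limit 2

The first shows queue depth and output drops, the second shows per-class matched packets and drops, and the third shows the ring sizes actually in effect.)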

Hi, and thanks for your answer.

> Do you see queues forming on the 10 Mbps egress? Reason I ask: if your saturation traffic is only 9.3 Mbps and priority traffic 0.2 Mbps, and CBR, there would be little egress queuing.

Uhmmm... I'm not sure (because I hadn't used queues before), but it looks like it isn't:

leftie#show queueing 
Current fair queue configuration:
Current DLCI priority queue configuration:
Current priority queue configuration:
Current custom queue configuration:
Current random-detect configuration:
Current per-SID queue configuration:

leftie#show queueing int fa4
Interface FastEthernet4 queueing strategy: none

leftie#sh queue fa4
'Show queue' not supported with FIFO queueing.

leftie#show int fa4 | inc [Qq]ueu
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 418
  Queueing strategy: fifo
  Output queue: 0/4096 (size/max)

I tried increasing my saturation traffic to the point of getting drops, but I still get the same result: FIFO, apparently no queue. I also tried finding an "optimal" point (most traffic, no drops), but same result.

> All the other links are running at 100 Mbps?

Yes.

> Have you made any adjustment to tx-ring-limit?

Not before the original post, but now that you mentioned it, I tried measuring with tx-ring-limit 1 as well as with the default. I see no difference: values in the range of 20 to 30 ms, sometimes more, for both settings. (I notice, though, that show controller Fa4 | inc ring entries gives rx:64 and tx:128 regardless of the value in the running configuration, but I don't know whether that's OK or not.)

> You've verified you're getting priority class matches?

The ACL is getting matches, if that's what you mean. 1 packet per second, as expected (the prioritized traffic generator is a Windows machine running ping).

The above shows default FIFO?

Please post your config.

Hi again.

So, in the previous reply I found out I was still using FIFO, which led me to think my setup was incorrect. Also, your questions on queues and rings made me go back and read about that and try different settings and combinations. Thus, I had to change the router configurations. Now I think FIFO was correct after all because I was shaping traffic.

Anyway, I can confirm I'm now using class-based queueing, as shown in the router outputs.

Physical setup is exactly the same. Routers are named "leftie" and "rightie".

I also changed my saturation technique. Instead of using a stream of 20-kbyte ICMP packets with 0 interval from a single command, I chose to use simultaneous ping commands (currently 11), each sending 0-interval, 1400-byte ICMP packets. This is just to avoid any kind of fragmentation processing that may be interfering at some point in the setup. Pings are sent in 1000-count batches so I can periodically measure packet loss.

The prioritized traffic is a 1-second-interval, 300-byte ping, 50-count. None of the pings are addressed to the routers themselves, to make sure I'm exercising only their data plane.
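In case the exact commands matter, they are roughly like this (Linux ping for the saturation streams, since that's where iftop runs, and the Windows ping for the prioritized one; the destination addresses are just placeholders for hosts behind the far router):

One of the 11 concurrent saturation streams (Linux, run as root so the 0 interval is allowed; -s is the ICMP payload size):

ping -i 0 -s 1400 -c 1000 192.168.2.100

The prioritized stream from the Windows box (default 1-second interval):

ping -n 50 -l 300 10.200.1.100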

I think I have a better and more predictable setup now. However, I'm still not getting the result I expect. I tried two scenarios:

The first one, with hold-queue 4096 in and hold-queue 1 out on all interfaces. I started to get some packet loss at 9 concurrent ping commands (5 out of 1000). Average RTT for the saturating streams is 10 ms. The prioritized traffic's average RTT was also 10 ms; I was expecting a significantly lower RTT for it.

The second (and current) one has hold-queue 4096 in both directions on all interfaces. I'm now running 11 concurrent ping commands with no packet loss. Average RTT is 14 ms for both the saturating and the prioritized traffic. Removing the service-policy command made no difference at all.

So, in the router output I include show running as requested, but also show queue Fa4, show queueing, and show policy-map interface Fa4. The commands were run with saturating traffic flowing, and after some prioritized traffic had been sent, but not during prioritized traffic.

I promise not to change my configurations anymore. :-)

=== LEFTIE ===

leftie#show running

Building configuration...

Current configuration : 1788 bytes

!

version 12.4

no service pad

service timestamps debug datetime msec

service timestamps log datetime msec

no service password-encryption

!

hostname leftie

!

boot-start-marker

boot-end-marker

!

logging buffered 10000

!

no aaa new-model

!

!

!

!

ip cef

!

!

no ip dhcp use vrf connected

!

ip dhcp pool vlan1

network 192.168.1.0 255.255.255.0

default-router 192.168.1.1

!

ip dhcp pool vlan2

network 192.168.3.0 255.255.255.0

default-router 192.168.3.1

!

!

no ip domain lookup

!

multilink bundle-name authenticated

!

!

archive

log config

hidekeys

!

!

!

class-map match-any priorizado-out

match access-group name PRIORIZADO-OUT

!

!

policy-map fa4-out

class priorizado-out

priority percent 1

!

!

!

!

interface FastEthernet0

hold-queue 4096 in

hold-queue 1 out

!

interface FastEthernet1

hold-queue 4096 in

hold-queue 4096 out

!

interface FastEthernet2

!

interface FastEthernet3

switchport access vlan 2

hold-queue 4096 in

hold-queue 4096 out

!

interface FastEthernet4

ip address 10.10.10.2 255.255.255.0

load-interval 30

tx-ring-limit 1

tx-queue-limit 1

duplex auto

speed 10

max-reserved-bandwidth 100

service-policy output fa4-out

hold-queue 4096 in

hold-queue 4096 out

!

interface Vlan1

ip address 192.168.1.1 255.255.255.0

hold-queue 4096 in

hold-queue 4096 out

!

interface Vlan2

ip address 192.168.3.1 255.255.255.0

hold-queue 4096 in

hold-queue 4096 out

!

ip route 10.200.1.0 255.255.255.0 10.10.10.1

ip route 192.168.2.0 255.255.255.0 10.10.10.1

!

!

no ip http server

no ip http secure-server

!

ip access-list extended PRIORIZADO-OUT

permit ip 192.168.3.0 0.0.0.255 10.200.1.0 0.0.0.255

!

!

!

!

!

control-plane

!

!

line con 0

logging synchronous

no modem enable

line aux 0

line vty 0 4

login

!

scheduler max-task-time 5000

!

webvpn cef

end

leftie#show queue Fa4

Input queue: 0/4096/4/0 (size/max/drops/flushes); Total output drops: 752919

Queueing strategy: Class-based queueing

Output queue: 0/4096/64/646016 (size/max total/threshold/drops)

Conversations  0/2/256 (active/max active/max total)

Reserved Conversations 0/0 (allocated/max allocated)

Available Bandwidth 9900 kilobits/sec

leftie#show queueing

Current fair queue configuration:

Interface           Discard    Dynamic  Reserved  Link    Priority

threshold  queues   queues    queues  queues

FastEthernet4       64         256      256       8       1

Current DLCI priority queue configuration:

Current priority queue configuration:

Current custom queue configuration:

Current random-detect configuration:

Current per-SID queue configuration:

leftie#show policy-map interface Fa4

FastEthernet4

Service-policy output: fa4-out

Class-map: priorizado-out (match-any)

20 packets, 6840 bytes

30 second offered rate 0 bps, drop rate 0 bps

Match: access-group name PRIORIZADO-OUT

20 packets, 6840 bytes

30 second rate 0 bps

Queueing

Strict Priority

Output Queue: Conversation 264

Bandwidth 1 (%)

Bandwidth 100 (kbps) Burst 2500 (Bytes)

(pkts matched/bytes matched) 6/2052

(total drops/bytes drops) 0/0

Class-map: class-default (match-any)

125986 packets, 181660899 bytes

30 second offered rate 9730000 bps, drop rate 0 bps

Match: any

leftie#

=== RIGHTIE ===

rightie#show running

Building configuration...

Current configuration : 1670 bytes

!

version 12.4

no service pad

service timestamps debug datetime msec

service timestamps log datetime msec

no service password-encryption

!

hostname rightie

!

boot-start-marker

boot-end-marker

!

!

no aaa new-model

!

!

!

!

ip cef

!

!

no ip dhcp use vrf connected

!

ip dhcp pool vlan1

network 192.168.2.0 255.255.255.0

default-router 192.168.2.1

!

!

no ip domain lookup

!

multilink bundle-name authenticated

!

!

archive

log config

hidekeys

!

!

!

class-map match-any priorizado-out

match access-group name PRIORIZADO-OUT

!

!

policy-map fa4-out

class priorizado-out

priority percent 1

!

!

!

!

interface FastEthernet0

hold-queue 4096 in

hold-queue 1 out

!

interface FastEthernet1

hold-queue 4096 in

hold-queue 4096 out

!

interface FastEthernet2

!

interface FastEthernet3

switchport access vlan 2

hold-queue 4096 in

hold-queue 4096 out

!

interface FastEthernet4

ip address 10.10.10.1 255.255.255.0

load-interval 30

tx-ring-limit 1

tx-queue-limit 1

duplex auto

speed 10

max-reserved-bandwidth 100

service-policy output fa4-out

hold-queue 4096 in

hold-queue 4096 out

!

interface Vlan1

ip address 192.168.2.1 255.255.255.0

hold-queue 4096 in

hold-queue 4096 out

!

interface Vlan2

ip address 10.200.1.2 255.255.255.0

hold-queue 4096 in

hold-queue 4096 out

!

ip route 192.168.1.0 255.255.255.0 10.10.10.2

ip route 192.168.3.0 255.255.255.0 10.10.10.2

!

!

no ip http server

no ip http secure-server

!

ip access-list extended PRIORIZADO-OUT

permit ip 10.200.1.0 0.0.0.255 192.168.3.0 0.0.0.255

!

!

!

!

!

control-plane

!

!

line con 0

logging synchronous

no modem enable

line aux 0

line vty 0 4

!

scheduler max-task-time 5000

!

webvpn cef

end

rightie#show queue Fa4

Input queue: 0/4096/0/0 (size/max/drops/flushes); Total output drops: 107887

Queueing strategy: Class-based queueing

Output queue: 0/4096/64/0 (size/max total/threshold/drops)

Conversations  0/2/256 (active/max active/max total)

Reserved Conversations 0/0 (allocated/max allocated)

Available Bandwidth 9900 kilobits/sec

rightie#show queueing

Current fair queue configuration:

Interface           Discard    Dynamic  Reserved  Link    Priority

threshold  queues   queues    queues  queues

FastEthernet4       64         256      256       8       1

Current DLCI priority queue configuration:

Current priority queue configuration:

Current custom queue configuration:

Current random-detect configuration:

Current per-SID queue configuration:

rightie#show policy-map interface Fa4

FastEthernet4

Service-policy output: fa4-out

Class-map: priorizado-out (match-any)

19 packets, 6498 bytes

30 second offered rate 0 bps, drop rate 0 bps

Match: access-group name PRIORIZADO-OUT

19 packets, 6498 bytes

30 second rate 0 bps

Queueing

Strict Priority

Output Queue: Conversation 264

Bandwidth 1 (%)

Bandwidth 100 (kbps) Burst 2500 (Bytes)

(pkts matched/bytes matched) 16/5472

(total drops/bytes drops) 0/0

Class-map: class-default (match-any)

234654 packets, 338350391 bytes

30 second offered rate 9833000 bps, drop rate 0 bps

Match: any

rightie#

What ping times do you get with no side load?

With side load as seen in your last post?  (10 ms?)

With side load, say at 110%?

> What ping times do you get with no side load?

2 ms when the line is empty.

> With side load as seen in your last post? (10 ms?)

15 ms average with the posted configurations.

(10 ms was with the hold-queue 1 setup; due to packet loss I had to reduce the side load, which reduced the RTT.)

> With side load, say at 110%?

I can't tell for sure. I'm not sure I'm able to control that exactly. Also, the raw injected side load is one thing, and the actual side load that gets through the WAN link is another.

The router says this:

reliability 255/255, txload 250/255, rxload 250/255

I can increase the number of simultaneous pings, but I can't make iftop go beyond 9.3 Mbps TX without packet loss (and RX never goes beyond 9.3 Mbps).

For example: right now I added two more pings. Now I have 13 simultaneous and I'm getting 7 to 10% packet drop in the side load. I'm also getting some packet loss on the priority traffic (about 15% according to Windows). iftop reports TX=10.3 Mbps, but RX stays at 9.29 Mbps. Average RTT is about 65 ms for both the side and the priority traffic. This is what the router says (txload oscillates from 252 to 255, and sometimes it says 1):

The side-traffic-sending laptop is at 25% CPU, so that's not the cause of the packet loss.

leftie#sh int Fa4 | inc load

reliability 255/255, txload 253/255, rxload 250/255

leftie#sh queue fa4

Input queue: 0/4096/4/0 (size/max/drops/flushes); Total output drops: 752919

Queueing strategy: Class-based queueing

Output queue: 0/4096/64/646016 (size/max total/threshold/drops)

Conversations  0/2/256 (active/max active/max total)

Reserved Conversations 0/0 (allocated/max allocated)

Available Bandwidth 9900 kilobits/sec

leftie#sh queueing

Current fair queue configuration:

Interface           Discard    Dynamic  Reserved  Link    Priority

threshold  queues   queues    queues  queues

FastEthernet4       64         256      256       8       1

Current DLCI priority queue configuration:

Current priority queue configuration:

Current custom queue configuration:

Current random-detect configuration:

Current per-SID queue configuration:

leftie#show policy-map int fa4

FastEthernet4

Service-policy output: fa4-out

Class-map: priorizado-out (match-any)

170 packets, 58140 bytes

30 second offered rate 0 bps, drop rate 0 bps

Match: access-group name PRIORIZADO-OUT

170 packets, 58140 bytes

30 second rate 0 bps

Queueing

Strict Priority

Output Queue: Conversation 264

Bandwidth 1 (%)

Bandwidth 100 (kbps) Burst 2500 (Bytes)

(pkts matched/bytes matched) 43/14706

(total drops/bytes drops) 0/0

Class-map: class-default (match-any)

11193971 packets, 16141464079 bytes

30 second offered rate 10681000 bps, drop rate 0 bps

Match: any

leftie#

You're correct; from what you've posted, it does appear your priority traffic isn't being prioritized.

I haven't carefully checked your config and diagram to ensure your test priority packets should be placed in the LLQ, but you do show a very low volume of LLQ matches. For instance, in your second-to-last post you note your priority packets are sent in 50-count sets, yet the LLQ stats only show offered counts of 20 and 19 packets for your two instances.

If I get the chance, I'll study your setup (from your diagram) and your configurations more carefully, although at a glance they look correct.

Several things to try (independently):

Remove the explicit 4096 queue depths.  (I'm wondering if this is somehow confusing the router.)

Remove the LLQ class and add an explicit class class-default with FQ defined.  (FQ should isolate the individual flows; see the sketch after this list.)

Remove the policy from the interfaces and try a traffic-shape at 5 Mbps.  (Traffic shaping uses WFQ; also sketched after this list.)

See if any of the above shows a major difference in ping times for your high-priority traffic with side load at 110%.
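For example (using your existing names; adjust as needed), the FQ-only variant would be something like:

policy-map fa4-out
 no class priorizado-out
 class class-default
  fair-queue

and the shaping test could use the legacy interface command instead of the MQC policy:

interface FastEthernet4
 no service-policy output fa4-out
 traffic-shape rate 5000000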

PS:

It's expected that when the side load is at 110%, the transmit load won't exceed 100% and there will be many drops for that traffic, but what shouldn't be seen is an adverse impact on your low-rate test stream.
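(For example, offering roughly 11 Mbps into a 10 Mbps link means about 1 Mbps, or around 9% of the side load, has to be dropped, while a 300-byte ping once per second is only a few kbps and should fit comfortably in the priority class.)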