
Issues with QoS on 1841 routers

mattyackley
Level 1

Hi all,

I have run into what appears to be an issue with my QoS settings on some new 1841 routers. I used the same settings as on our older 2600 and 3600 routers, but the configuration does not seem to work on the 1841s.

My router is an 1841 running IPBASE 12.4(1b). The WAN connection is two T1s bundled on a multilink interface. After running tests that pushed as much traffic as the link would handle, I ran a "sh policy-map" for this interface, and the results for my VoIP class look a bit strange:

Class-map: VoIP (match-all)
  38075 packets, 55940389 bytes
  5 minute offered rate 916000 bps, drop rate 0 bps
  Match: access-group 120
  Queueing
    Strict Priority
    Output Queue: Conversation 264
    Bandwidth 768 (kbps) Burst 19200 (Bytes)
    (pkts matched/bytes matched) 2/3004
    (total drops/bytes drops) 0/0

The queue's matched-packet count doesn't line up with the number of class-map matched packets. Also, instead of being limited to 768 kbps, the priority traffic was able to reach 1.43 Mbps; a second instance of the test pushing traffic on a non-priority IP range took up the other 1.47 Mbps.

Here are some of the configs:

class-map match-all VoIP
 match access-group 120
class-map match-all Vision
 match access-group 121

policy-map CoS
 class VoIP
  priority 768
 class Vision
  bandwidth 384
 class class-default
  fair-queue

interface Multilink1
 description T1 MLPPP Bundle
 bandwidth 3088
 ip address x.x.109.250 255.255.255.252
 no ip route-cache cef
 no cdp enable
 ppp multilink
 ppp multilink fragment disable
 ppp multilink group 1
 service-policy output CoS

interface Serial0/0/0
 no ip address
 encapsulation ppp
 no fair-queue
 service-module t1 timeslots 1-24
 ppp multilink
 ppp multilink group 1

interface Serial0/1/0
 no ip address
 encapsulation ppp
 no fair-queue
 service-module t1 timeslots 1-24
 ppp multilink
 ppp multilink group 1

I see similar stats from another new 1841 router. Class-map matched packets were over 19 million, but the queue stats for that policy-map only show 2 million matched packets.

Our older 36xx and 26xx routers running 12.2 show the class-map matches and the queue matches with the same number of matched packets when the link is under heavy load.

Any ideas what I may have done wrong or missed?

Thanks

5 Replies

pkhatri
Level 11

Hi Matt,

There is a subtle difference between the two counters that you should be aware of:

- the class-map counter counts all packets that were classified to that class-map

- the 'pkts matched' queue counter only shows the number of packets that were actually queued.

The difference is that, under non-congestion conditions, packets bypass the software queue altogether and are placed straight into the hardware tx-ring. Therefore, these packets do not count under the queue counter.

As you mentioned, under heavily loaded conditions, the counters match, because most packets are being software-queued.

So the behaviour you are seeing is quite correct.
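
If you want to watch this on the router, a couple of commands help (just a sketch, assuming the policy stays attached to Multilink1 as in your config):

show policy-map interface Multilink1   (class-map counters vs. the 'pkts matched' queue counter)
show interfaces Multilink1             (queueing strategy and current output queue depth)

Roughly speaking, as long as the output queue depth in 'show interfaces' stays at zero, packets are going straight to the tx-ring and the 'pkts matched' counter will barely move.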

Hope that helps - pls rate the post if it does.

Paresh

Hi Paresh,

When I was running the tests the network was under 95-100% load. The test site has two T1 circuits set up as a multilink bundle. During the test phase I was pushing out a total of about 3 Mbps of traffic: using iperf, I had two hosts sending TCP data as fast as they could out of the site with the 1841. One host was set up to match the "VoIP" ACL and the other host was set up to match the default class.
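
For reference, the test traffic was generated along these lines (illustrative commands only; the receiver address x.x.x.x and the 300-second duration are placeholders rather than the exact values used):

iperf -s                   (on a receiver host outside the site)
iperf -c x.x.x.x -t 300    (on the host matching the "VoIP" ACL)
iperf -c x.x.x.x -t 300    (on the host matching the default class)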

Under full load I would expect two things.

1. Some of the data matching the "VoIP" ACL should have been queued.

2. Under full load the priority 768 limit should have capped the data sent through the "VoIP" queue.

What I saw was that the two hosts were each able to send roughly the same amount of data, and the 'pkts matched' queue counter recorded only 2 packets being queued.

When I reverse this test and send data out from a site that has a 3660 running 12.2(16a), I get the expected results: packets in the VoIP queue match the class-map, the host in the "VoIP" ACL is limited to 0.76 Mbps of throughput, and the host in the default class is able to push through 1.87 Mbps.

Matt

Hi Matt,

I see now what you are getting at...

Would you be able to post the output of the following at the time you are running the test:

- show policy-map interface multilink1

- show interface - for the interfaces which are pushing data in (from your two hosts).

Paresh

I'll need to run some more tests to get the output from the FastEthernet interface, but I do have a copy of the "show policy-map" output from my earlier test. Counters were cleared shortly before the test was run, and the test lasted 5 minutes.

"sh policy-map interface multilink1 output" results

Multilink1

  Service-policy output: CoS

    Class-map: VoIP (match-all)
      38075 packets, 55940389 bytes
      5 minute offered rate 916000 bps, drop rate 0 bps
      Match: access-group 120
      Queueing
        Strict Priority
        Output Queue: Conversation 264
        Bandwidth 768 (kbps) Burst 19200 (Bytes)
        (pkts matched/bytes matched) 2/3004
        (total drops/bytes drops) 0/0

    Class-map: Vision (match-all)
      0 packets, 0 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: access-group 121
      Queueing
        Output Queue: Conversation 265
        Bandwidth 384 (kbps) Max Threshold 64 (packets)
        (pkts matched/bytes matched) 0/0
        (depth/total drops/no-buffer drops) 0/0/0

    Class-map: class-default (match-any)
      46287 packets, 58159363 bytes
      5 minute offered rate 969000 bps, drop rate 0 bps
      Match: any
      Queueing
        Flow Based Fair Queueing
        Maximum Number of Hashed Queues 256
        (total queued/total drops/no-buffer drops) 0/0/0

Matt

Hi Matt,

You might want to configure 'load-interval 30' on that interface to get more accurate stats...

The total traffic hitting the output interface is 916000 + 969000 = 1885000 bps, which is still less than the 2xT1 bundle (about 3.088 Mbps). That means the interface is not congested and your stats are correct. To observe congestion, you will need to offer traffic well in excess of 3 Mbps; maybe 3.5 Mbps would be a good test...
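
For example, something along these lines (just a sketch; the receiver address is a placeholder and the UDP rates are only one way to push the bundle past its roughly 3.088 Mbps capacity):

On the router:

interface Multilink1
 load-interval 30

On each of the two test hosts:

iperf -c x.x.x.x -u -b 1750k -t 300

With both hosts offering roughly 1.75 Mbps of UDP, the aggregate is about 3.5 Mbps, so Multilink1 should actually congest and the VoIP class should be held to its 768 kbps priority bandwidth.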

Paresh