04-16-2013 08:47 AM - edited 03-04-2019 07:36 PM
Hi All
I've an ASR1001 with 15.1(2)S code on it connected to our ISP. We've been getting some complaints about performance and I'm seeing drops on the output policy. Checking the bandwidth consumption, we have plenty spare when the drops are occurring; the link is shaped to 300 Mb/s. Details below, any suggestions gratefully received.
Thanks
Patrick
The policy is to guarantee the following bandwidth:
Outbound policy:
class 1 => 50% guaranteed
class 2 => 8% guaranteed
class 3 => 1% guaranteed
class 4 => 5% guaranteed
class 5 => 1% guaranteed
class 6 => 5% guaranteed
class 7 => 7% guaranteed
class 8 => 7% guaranteed
class default => not configured
Config:
policy-map priority
 class 1
  priority percent 50
 class 2
  priority percent 8
 class 3
  priority percent 1
 class 4
  priority percent 5
 class 5
  priority percent 1
 class 6
  priority percent 5 2000000
 class 7
  priority percent 7
 class 8
  priority percent 7
 class class-default
policy-map outbound-internet
 class class-default
  shape average 300000000
  service-policy priority
and the drops are seen:
Service-policy output: outbound-internet
Class-map: class-default (match-any)
157386235086 packets, 142277415785757 bytes
30 second offered rate 158203000 bps, drop rate 0000 bps
Match: any
Queueing
queue limit 1250 packets
(queue depth/total drops/no-buffer drops) 0/83753649/0
(pkts output/bytes output) 157302481432/142155881947373
shape (average) cir 300000000, bc 1200000, be 1200000
target shape rate 300000000
Service-policy : priority
queue stats for all priority classes:
Queueing
queue limit 512 packets
(queue depth/total drops/no-buffer drops) 0/83753649/0
(pkts output/bytes output) 149622194962/136919622255672
Class-map: 1 (match-all)
38746601707 packets, 44785520236642 bytes
30 second offered rate 47228000 bps, drop rate 0000 bps
Match: access-group name web-farm
Priority: 50% (150000 kbps), burst bytes 3750000, b/w exceed drops: 22745
Class-map: 2 (match-all)
32261813730 packets, 23350595823948 bytes
30 second offered rate 11610000 bps, drop rate 0000 bps
Match: access-group name webfarm-VPN
Priority: 8% (24000 kbps), burst bytes 600000, b/w exceed drops: 3959031
Class-map: 3 (match-all)
258399421 packets, 267198408224 bytes
30 second offered rate 0000 bps, drop rate 0000 bps
Match: access-group name web-farm-mail
Priority: 1% (3000 kbps), burst bytes 75000, b/w exceed drops: 87883
Class-map: 4 (match-all)
17160556172 packets, 4209151495211 bytes
30 second offered rate 18982000 bps, drop rate 0000 bps
Match: access-group name corporate-browsing
Priority: 5% (15000 kbps), burst bytes 375000, b/w exceed drops: 129850
Class-map: 5 (match-all)
239923 packets, 42322375 bytes
30 second offered rate 0000 bps, drop rate 0000 bps
Match: access-group name newscientist-media
Priority: 1% (3000 kbps), burst bytes 75000, b/w exceed drops: 0
Class-map: 6 (match-all)
34710766533 packets, 49678200430642 bytes
30 second offered rate 61048000 bps, drop rate 0000 bps
Match: access-group name corporate-DR-VPN
Priority: 5% (15000 kbps), burst bytes 2000000, b/w exceed drops: 79015405
Class-map: 7 (match-all)
14845163520 packets, 8016844416864 bytes
30 second offered rate 7632000 bps, drop rate 0000 bps
Match: access-group name corporate-VPN
Priority: 7% (21000 kbps), burst bytes 525000, b/w exceed drops: 40342
Class-map: 8 (match-all)
11721867776 packets, 6733254903938 bytes
30 second offered rate 10804000 bps, drop rate 0000 bps
Match: access-group name corporate
Priority: 7% (21000 kbps), burst bytes 525000, b/w exceed drops: 498393
Class-map: class-default (match-any)
7680247559 packets, 5236234911120 bytes
30 second offered rate 897000 bps, drop rate 0000 bps
Match: any
queue limit 1250 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 7680286470/5236259691701
04-17-2013 01:08 AM
Hello
The queue-limits of the classes don't look like they are the defaults. I think I read (NOT 100% certain) that the total of the class queue-limits cannot exceed the physical interface's hold-queue limit (default 1000); if so, tail drop occurs before the class queue limit is reached.
Maybe someone else can jump in here to clarify this?
Regarding your QoS config, you could try not performing LLQ on all classes, and keep the priority queue only for low-latency traffic such as voice/video.
eg:
class 1
 priority xx
class 2
 bandwidth percent xx
etc......
class class-default
!
policy-map outbound-internet
 class class-default
  shape average 300000000
  service-policy priority
Also
sh policy-map int xxxx
sh queue xxx(int)
res
Paul
Please don't forget to rate any posts that have been helpful.
Thanks.
04-17-2013 01:51 AM
Thanks for this, I was a little curious about the LLQ on all the queues as we don't have any voice or video on this.
How do I check the class queue limits vs the interface hold queue?
Thanks
04-17-2013 02:08 AM
Hello
Can you post your config
and also
sh policy-map int xxxx
sh queue xxx(int)
res
Paul
Please don't forget to rate any posts that have been helpful.
Thanks.
04-17-2013 02:59 AM
Hi Paul
Config and sh policy-map output already posted, have I missed something? The show queue command isn't supported on this box, it's IOS-XE, 15.1.
Thanks
04-17-2013 04:38 AM
Hello Patrick,
I can see an aggregate queue limit of 512, which is for all 8 classes: 8 * 64 = 512 (64 packets is the default per traffic class).
Do any of your physical interfaces have hold-queue xx in/out or queue-limit xx configured?
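To compare the two you can check both from the CLI, for example (GigabitEthernet0/0/0 is just a placeholder for your ISP-facing interface):
sh interfaces GigabitEthernet0/0/0 | include queue     <- input/output hold-queue sizes
sh policy-map interface GigabitEthernet0/0/0 | include queue limit     <- per-class queue limits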
res
Paul
Please don't forget to rate any posts that have been helpful.
Thanks.
04-17-2013 05:33 AM
Hi Paul
No, neither is configured. Can you explain the aggregate queue limit? I'm doing some reading on this subject now too.
Thanks
04-17-2013 05:55 AM
Hello Patrick
Going back to what I first stated regarding defining a queue-limit value on each traffic class in hierarchical QoS: these values together (the aggregate) should not exceed the hold-queue limit (default 1000) of the physical interface, otherwise drops could occur.
http://www.cisco.com/en/US/prod/collateral/iosswrel/ps6537/ps6558/sol_ov_c22-708224.html
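If you did want to set these values explicitly, the commands look roughly like this (GigabitEthernet0/0/0 and the sizes are just examples, not a recommendation):
interface GigabitEthernet0/0/0
 hold-queue 2000 out
!
policy-map outbound-internet
 class class-default
  shape average 300000000
  queue-limit 2000 packets
  service-policy priority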
res
Paul
Please don't forget to rate any posts that have been helpful.
Thanks.
04-17-2013 06:55 AM
Hi Paul
I was also reading this doc, it's very good. So if I have 8 low-latency queues, do these share the 512 limit?
Looks like I need to redefine the queues as non-LLQs as a starting point anyway.
Regards
Patrick
04-17-2013 07:10 AM
Hello Patrick,
Was also reading this doc it's very good. So if I have 8 low latency queues do these share the 512 limit? - Yes, they total to 512 packets, that's what we can see in your output.
Try not to use LLQ if you don't need to; give each class a bandwidth percentage instead, and maybe apply WRED to them as well for congestion avoidance. In your output the per-class b/w exceed drops add up to the 83753649 total drops, with class 6 taking most of them (around 61 Mb/s offered against a 15 Mb/s priority allocation), which fits with everything being in the priority queue.
class 1
 bandwidth percent xx
 random-detect
class 2
 etc..
class 3
 etc..
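As a sketch only, keeping your existing class names and percentages and assuming none of these classes really need low-latency treatment, the child policy could look something like this (the parent outbound-internet shaper stays as it is):
policy-map priority
 class 1
  bandwidth percent 50
  random-detect
 class 2
  bandwidth percent 8
  random-detect
 class 3
  bandwidth percent 1
  random-detect
 class 4
  bandwidth percent 5
  random-detect
 class 5
  bandwidth percent 1
  random-detect
 class 6
  bandwidth percent 5
  random-detect
 class 7
  bandwidth percent 7
  random-detect
 class 8
  bandwidth percent 7
  random-detect
 class class-default
Unlike priority percent, bandwidth percent is only a minimum guarantee, so a class can borrow spare bandwidth during quiet periods instead of being policed down to its percentage (the b/w exceed drops in your output) whenever the shaper is congested.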
res
Paul
Please don't forget to rate any posts that have been helpful.
Thanks.