02-01-2012 03:57 AM - edited 03-04-2019 03:05 PM
Hi Everyone
I have an issue with an ASR 1001. The problem occurs with MQC shaping applied to a gigabit interface in the outbound direction. The CIR from our provider is 100Mb/sec, so we are shaping to that value. However, when reported traffic levels are only about 60Mb/sec, we see a steady increase in output drops.
Here is the relevant config:
policy-map QOS-OUT
 class PREMIUM
  priority 6144
 class ENHANCED-1
  bandwidth 1024
 class class-default
  set dscp af11
!
policy-map OUT
 class class-default
  shape average 100000000
  service-policy QOS-OUT
!
interface GigabitEthernet0/0/0
 no ip address
 no ip redirects
 no ip proxy-arp
 load-interval 30
 no negotiation auto
!
interface GigabitEthernet0/0/0.101
 bandwidth 100000
 encapsulation dot1Q 101
 ip address x.x.x.x x.x.x.x
 no ip redirects
 no ip proxy-arp
 service-policy output OUT
The physical interface shows the following (note: we are using dot1q sub-interfaces but have only one in use, so the stats on the physical interface relate only to this one):
sh int g0/0/0
GigabitEthernet0/0/0 is up, line protocol is up
Hardware is ASR1001, address is 649e.f329.3d80 (bia 649e.f329.3d80)
Description:
MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 17/255, rxload 6/255
Encapsulation 802.1Q Virtual LAN, Vlan ID 1., loopback not set
Keepalive not supported
Full Duplex, 1000Mbps, link type is force-up, media type is SX
output flow-control is on, input flow-control is on
ARP type: ARPA, ARP Timeout 04:00:00
Last input 02:20:54, output 02:20:54, output hang never
Last clearing of "show interface" counters 00:03:11
Input queue: 0/375/0/0 (size/max/drops/flushes); Total output drops: 267
Queueing strategy: Class-based queueing
Output queue: 0/40 (size/max)
30 second input rate 26851000 bits/sec, 12150 packets/sec
30 second output rate 60529000 bits/sec, 18885 packets/sec
2330446 packets input, 709862871 bytes, 0 no buffer
Received 10 broadcasts (0 IP multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 0 multicast, 0 pause input
3541955 packets output, 1634851452 bytes, 0 underruns
The policy-map shows:
snip
Class-map: class-default (match-any)
3217551 packets, 1484943980 bytes
30 second offered rate 60418000 bps, drop rate 4000 bps
Match: any
Queueing
queue limit 416 packets
(queue depth/total drops/no-buffer drops) 0/267/0
(pkts output/bytes output) 3535758/1630202127
shape (average) cir 100000000, bc 1000000, be 1000000
target shape rate 100000000
I have also tried increasing the hold queue; however, this does not help with the drops. Increasing the shaping rate to 200Mb/sec gets rid of almost all the drops (not quite all!), but that is not what we need.
I would expect some drops when rates spike above 100Mb/sec; however, the number we are seeing seems excessive.
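For what it's worth, here is my rough back-of-the-envelope on the burst behaviour (assuming ~1500-byte packets; the exact figures depend on the traffic mix):

```
! Shaper interval:    Tc = Bc / CIR = 1,000,000 b / 100,000,000 b/s = 10 ms
! Per-interval quota: Bc = 1,000,000 bits = ~83 x 1500-byte packets
! Queue capacity:     416 pkts x 1500 B x 8 = ~5 Mbits = ~50 ms at 100 Mb/s
!
! Traffic arrives from the LAN side at gigabit speed, so a burst above the
! CIR can fill the 416-packet queue in roughly 5 ms and tail-drop, even
! while the 30-second average rate still reads ~60 Mb/s.
```

If the drops are burst-related, one thing I could try is a larger queue-limit on the class holding the queue (1024 is just an illustrative value, and a deeper queue trades latency for fewer drops):

```
policy-map QOS-OUT
 class class-default
  set dscp af11
  queue-limit 1024 packets
```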
Does anyone have any views on this?
Thanks
Mike
02-01-2012 04:13 AM
Hello
You have 267 drops; I don't see that as excessive.
Here is an example from one of the devices I have:
Class-map: class-default (match-any)
5155683605 packets, 2284544032554 bytes
30 second offered rate 466000 bps, drop rate 4000 bps
Match: any
Queueing
Output Queue: Conversation 139
Bandwidth 55 (%)
Bandwidth 2200 (kbps)
(pkts matched/bytes matched) 860129821/2284338175242
(depth/total drops/no-buffer drops) 0/801807/0
exponential weight: 9
mean queue depth: 0
Of course the bandwidth here is different (4Mbps), but that is not the point. The drops in this example come from overload of the line.
How fast are the drops increasing? If you have 100Mbps contracted from the provider by SLA, then you should not see drops at 60Mbps. But this also depends on the contract type.
Nevertheless, from my experience (just as an FYI), I'm more comfortable using percentages in the classes, and I also add a bandwidth percent for class-default, just so I know how much is assigned to it.
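As a sketch of what I mean (the percent values here are only illustrative, and depending on the platform you may need bandwidth remaining percent instead):

```
policy-map QOS-OUT
 class PREMIUM
  priority percent 30
 class ENHANCED-1
  bandwidth percent 10
 class class-default
  bandwidth percent 40
```

This way the allocations scale with the parent shaped rate, and the explicit percent on class-default makes it obvious how much is left for best-effort traffic.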
HTH,
Calin