
QoS fails to capture packets at interface (ISR1000)

maya davidson
Level 1

Hello All, 

I am applying QoS on an interface that has traffic flowing through it:
BPE_LAB_C1111x_0002#sh interface g0/1/6
GigabitEthernet0/1/6 is up, line protocol is up (connected)
Hardware is C1111X-ES-8, address is ac7a.5622.9a0e (bia ac7a.5622.9a0e)
Description: R3-Cisco 9200_0001 Gi1/0/4
MTU 1500 bytes, BW 100000 Kbit/sec, DLY 100 usec,
reliability 255/255, txload 226/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive not supported
Full-duplex, 100Mb/s, link type is auto, media type is 10/100/1000BaseTX
input flow-control is off, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:01, output 00:00:01, output hang never
Last clearing of "show interface" counters 6d20h
Input queue: 0/100/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: Class-based queueing
Output queue: 0/1000 (size/max)
5 minute input rate 3000 bits/sec, 3 packets/sec
5 minute output rate 88795000 bits/sec, 7523 packets/sec
5591106892 packets input, 1366629555455 bytes, 0 no buffer
Received 4 broadcasts (393448 multicasts)
0 runts, 0 giants, 0 throttles
1 input errors, 1 CRC, 0 frame, 1 overrun, 0 ignored
0 watchdog, 393448 multicast, 0 pause input
0 input packets with dribble condition detected
5933116700 packets output, 1657779980854 bytes, 0 underruns
Output 10 broadcasts (97312 multicasts)
0 output errors, 0 collisions, 5 interface resets
0 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
2 lost carrier, 0 no carrier, 0 pause output
0 output buffer failures, 0 output buffers swapped out
BPE_LAB_C1111x_0002#

There is a VLAN on this interface to which most of the traffic belongs:
BPE_LAB_C1111x_0002#sh interface vlan 12
Vlan12 is up, line protocol is up , Autostate Enabled
Hardware is Ethernet SVI, address is ac7a.5622.9a74 (bia ac7a.5622.9a74)
Internet address is 1.0.9.1/24
MTU 1500 bytes, BW 100000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 228/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive not supported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:03, output 00:00:00, output hang never
Last clearing of "show interface" counters 7w0d
Input queue: 0/1000/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/1000 (size/max)
5 minute input rate 3000 bits/sec, 2 packets/sec
5 minute output rate 89531000 bits/sec, 7574 packets/sec
8636434310 packets input, 11527650903302 bytes, 0 no buffer
Received 0 broadcasts (0 IP multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
22822131072 packets output, 28360138920378 bytes, 0 underruns
Output 0 broadcasts (0 IP multicasts)
0 output errors, 1 interface resets
0 unknown protocol drops
0 output buffer failures, 0 output buffers swapped out
BPE_LAB_C1111x_0002#


I am applying QoS on the interface, but the policy fails to capture any packets:

BPE_LAB_C1111x_0002#sh policy-map interface g0/1/6
GigabitEthernet0/1/6

Service-policy output: ppot

Class-map: allp (match-all)
0 packets, 0 bytes
5 minute offered rate 0000 bps, drop rate 0000 bps
Match: vlan 12
Queueing
queue limit 1000 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 0/0
shape (average) cir 100000000, bc 400000, be 400000
target shape rate 100000000


Class-map: class-default (match-any)
23 packets, 9155 bytes
5 minute offered rate 0000 bps, drop rate 0000 bps
Match: any
Queueing
queue limit 1000 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 0/0
shape (average) cir 100000000, bc 400000, be 400000
target shape rate 100000000

BPE_LAB_C1111x_0002#

and the configs for the interfaces are:

BPE_LAB_C1111x_0002#sh run interface g0/1/6
Building configuration...

Current configuration : 238 bytes
!
interface GigabitEthernet0/1/6
description R3-Cisco 9200_0001 Gi1/0/4
switchport access vlan 12
switchport mode access
speed 100
isis network point-to-point
service-policy output ppot
hold-queue 100 in
hold-queue 1000 out
end

BPE_LAB_C1111x_0002#sh run interface vlan 12
Building configuration...

Current configuration : 133 bytes
!
interface Vlan12
bandwidth 100000
ip address 1.0.9.1 255.255.255.0
ip ospf cost 1
hold-queue 1000 in
hold-queue 1000 out
end

BPE_LAB_C1111x_0002#
BPE_LAB_C1111x_0002#sh policy-map ppot
Policy Map ppot
Class allp
Average Rate Traffic Shaping
cir 100000000 (bps)
queue-limit 1000 packets
Class class-default
Average Rate Traffic Shaping
cir 100000000 (bps)
queue-limit 1000 packets
BPE_LAB_C1111x_0002#

BPE_LAB_C1111x_0002#sh class-map allp
Class Map match-all allp (id 11)
Match vlan 12

BPE_LAB_C1111x_0002#

10 Replies

Joseph W. Doherty
Hall of Fame

Cannot say for sure, but full-blown router QoS has often been restricted to "WAN" ports. Possibly the fact that you're applying it on a switch port is the issue.

With your current setup, have you tried the policy on the V12 SVI?
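
E.g., something along these lines (just to illustrate the suggestion; whether the platform accepts it on an SVI is a separate question):

interface Vlan12
service-policy output ppot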

Yes, it says the queueing policy is not supported on a VLAN interface.

Could this be the reason why this router experiences high delay?
I am still stuck on the previous problem; this one has delay more than 4 times higher.
Even when applying traffic above the bandwidth, the total output drops stay at 0, which is weird.

Cannot speak to the architecture of the C1111, but the mini L2 switches within a router often have bandwidth constraints between L3 and L2. I.e., sometimes, internally, you have a router on a stick.

Over the years, I've seen several cases where processing handled in hardware doesn't update the higher-level stat counters. E.g., ASIC stats might not be updating the show interface stats.

As to still being stuck on your previous problem, again, if you don't provide the detailed information requested, that may preclude someone from identifying the cause of the issue, let alone providing a solution. (Ever examine the output of a show tech? There's a reason it's so voluminous.)

FYI, I just found a CiscoLive breakout presentation on the C1111 architecture and performance. CEF IMIX: 1.7 Gbps; adding HQoS, about half that; IPsec, about a quarter of that. Bandwidth between the router and the internal switch: 1 Gbps for the 4-LAN-port models, 2.5 Gbps for the 8-LAN-port models.

https://www.ciscolive.com/c/dam/r/ciscolive/emea/docs/2020/pdf/BRKARC-2005.pdf

The foregoing is the reference I drew the stats from. (NB: my prior reply was via my phone, and some things are difficult to do there, like copying this reference link. But easy to do on my PC.)

Lots of other interesting information in the presentation.

Giuseppe Larosa
Hall of Fame

Hello @maya davidson ,

The L2 port is an access switch port:

>> switchport access vlan 12

As a result, user traffic is received without any 802.1Q tag.

Your service policy uses a class-map definition that matches on VLAN ID 12:

Class-map: allp (match-all)
0 packets, 0 bytes
5 minute offered rate 0000 bps, drop rate 0000 bps
>>> Match: vlan 12

My guess is that this is why the class map shows 0 packets: all received packets have no 802.1Q VLAN tag; they are untagged frames.

If the port were an 802.1Q trunk with VLAN 12 allowed, you could have a match.
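
Something along these lines, for example (only a sketch; the peer 9200 port would also need to be configured as a trunk, and VLAN 12 must not be the native VLAN, or its frames would still be untagged):

interface GigabitEthernet0/1/6
description R3-Cisco 9200_0001 Gi1/0/4
switchport mode trunk
switchport trunk allowed vlan 12
speed 100
service-policy output ppot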

Hope to help

Giuseppe


An interesting observation, which I passed over because of what caught my attention (although I didn't mention it earlier): class class-default doesn't appear to show the traffic either. I.e., even if the explicit class isn't matching the traffic, the traffic should show up there. Since it doesn't, I suspect a deeper issue. Possibly the deeper issue is due to using VLAN matching.

That said, you might try matching on something other than VLAN, or remove the explicit class altogether, and see whether class-default then shows the traffic.
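
For example, a stripped-down test policy like this (just a sketch; the name ppot-test is only a placeholder) would show whether class-default alone starts counting packets:

policy-map ppot-test
class class-default
shape average 100000000
queue-limit 1000 packets
!
interface GigabitEthernet0/1/6
no service-policy output ppot
service-policy output ppot-test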

Hello @Joseph W. Doherty ,

In the first post by the OP @maya davidson we see:

>> Class-map: class-default (match-any)
>> 23 packets, 9155 bytes

at the top of the post, after the class allp.

The allp class should instead match using an IP ACL based on the source/destination subnets of the offered traffic.

(If this is still the lab setup with the IXIA ports connected to different network devices; but the note is also valid if the setup is different.)
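
Something like this, for example (only a sketch; the ACL name V12_TRAFFIC is a placeholder, and the 1.0.9.0/24 subnet is taken from the Vlan12 SVI, adjust it to the actual offered traffic):

ip access-list extended V12_TRAFFIC
permit ip any 1.0.9.0 0.0.0.255
permit ip 1.0.9.0 0.0.0.255 any
!
class-map match-all allp
no match vlan 12
match access-group name V12_TRAFFIC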

Hope to help

Giuseppe


Yup, I too noticed the 23 packet matches, but the OP says "I am applying QoS on an interface that has traffic going," and then look at the interface stats the OP posted. It seems there should be more, a lot more, than 23 packets; the packet count should be increasing by about 7.5 Kpps.


Oh, if your C1111 has two WAN ports, try using just those.

Traditionally, design-wise, a small ISR's L3 capacity was meant for dealing with a WAN link; intra-LAN traffic was expected to stay entirely at L2.

Having performance issues would not be surprising if you were pushing even just one gig link at capacity, but I wouldn't expect performance issues on a C1111 at 100 Mbps.
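
For example (only a sketch; assuming the routed WAN ports on your unit are Gi0/0/0 and Gi0/0/1, and assuming the allp class is changed to match on something other than VLAN, as discussed above):

interface GigabitEthernet0/0/0
service-policy output ppot
!
and then verify with:
show policy-map interface GigabitEthernet0/0/0 output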
