QoS

eXPlosion
Level 1

Hi,

I see many dropped packets, and the count keeps increasing:

sh policy-map int f0/17

Class-map: class-default (match-any)
395605418 packets
Match: any
Tail Packets Drop: 352028
Queue Limit
queue-limit 544 (packets)
Common buffer pool: 679
Current queue depth: 0
Current queue buffer depth: 0
Output Queue:
Max queue-limit default threshold: 544
Tail Packets Drop: 352028

1# sh controllers fas 0/17 utilization
Receive Bandwidth Percentage Utilization : 0
Transmit Bandwidth Percentage Utilization : 1

Uplink utilization:

1# sh controllers gigabitEthernet 0/1 utilization
Receive Bandwidth Percentage Utilization : 17
Transmit Bandwidth Percentage Utilization : 0

Uplink traffic is about 170 Mbps, but f0/17 is only transmitting at about 1 Mbps and is dropping packets. Where could the problem be?

10 Replies

Bob Bagheri
Level 1

Can you please paste your QoS configuration: class-maps, policy-maps, ACLs, etc.? It doesn't look like anything other than the default class is being matched. Please include the interface config for f0/17.

In addition, if this is a Fast Ethernet port, I doubt you will see more than 100 Mbps.

Where do you see the transmit rate of 1 Mbps? Please add a "show interface f0/17"; you may have a duplex mismatch.
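
If you just want a quick look at the negotiated speed/duplex, something like this should do (standard Catalyst CLI; I have not verified the exact output on the ME platform):

show interfaces fastEthernet 0/17 status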


Bob

interface FastEthernet0/17
description Client1
switchport access vlan 121
switchport block multicast
switchport block unicast
switchport port-security maximum 8
switchport port-security
switchport port-security aging time 20
switchport port-security violation restrict
switchport port-security aging type inactivity
ip arp inspection limit rate 14
ip access-group 101 in
duplex full
small-frame violation-rate 1000
service-policy output client-output
ip igmp max-groups 10
ip igmp max-groups action replace
ip igmp filter 1
ip verify source port-security
ip dhcp snooping limit rate 15
end

interface GigabitEthernet0/1
description Link to Backbone
port-type nni
switchport mode trunk
ip arp inspection trust
logging event spanning-tree
service-policy input all-input
ip igmp max-groups 100
ip dhcp snooping trust

class-map match-all tcp-syn
match access-group 100
class-map match-all multicast
match ip dscp af41
class-map match-all vod
match ip dscp af42

policy-map all-input
class multicast
set ip dscp af41
class vod
set ip dscp af42

policy-map client-output
class multicast
bandwidth percent 50
class vod
bandwidth percent 20
queue-limit 544
class class-default
queue-limit 544

I see 1 Mbps with:

1# sh controllers fasteth 0/17 utilization

Transmit Bandwidth Percentage Utilization : 1

which is 1% of 100 Mbps, but I checked our monitoring and it shows about 4.5 Mbps utilization on f0/17.

For the Gigabit uplink, monitoring and this command show the same utilization.

Well, since you are only reserving bandwidth for the multicast and VoD classes, all other traffic is going to be treated as the "default" class. This only matters when the queues are full and QoS kicks in.
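
Purely as an illustration (I have not tested this on an ME3400, and class-default may have platform restrictions), giving the default class an explicit guarantee would look something like:

! example only - adjust the percentage to your traffic mix
policy-map client-output
 class class-default
  bandwidth percent 25
  queue-limit 544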

What does ACL 101 do on port f0/17?


Can you include a "show interface" of f0/17, so we can eliminate a duplex mismatch possibility?


What kind of switch is this? 


Thanks,
Bob

1#sh int f0/17
FastEthernet0/17 is up, line protocol is up (connected)
Hardware is Fast Ethernet, address is 001b.0d94.b193 (bia 001b.0d94.b193)
Description: Mironaites 14-24
MTU 1500 bytes, BW 100000 Kbit/sec, DLY 100 usec,
reliability 255/255, txload 3/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 100Mb/s, link type is auto, media type is 100BaseBX-10D SFP
input flow-control is off, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:26, output 00:00:26, output hang never
Last clearing of "show interface" counters never
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 1724160
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 43000 bits/sec, 71 packets/sec
5 minute output rate 1202000 bits/sec, 113 packets/sec
202601059 packets input, 45745043577 bytes, 0 no buffer
Received 14215263 broadcasts (434645 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 434645 multicast, 743 pause input
0 input packets with dribble condition detected
4567902727 packets output, 6127465167447 bytes, 0 underruns
0 output errors, 0 collisions, 1 interface resets
0 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 pause output
0 output buffer failures, 0 output buffers swapped out

The ACL denies routing traffic and permits certain IGMP IP ranges.

It's a Cisco ME3400:

BOOTLDR: ME340x Boot Loader (ME340x-HBOOT-M) Version 12.2(40r)SE1, RELEASE SOFTWARE (fc1)

I see the input queue is using FIFO and has lots of drops:

Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 1724160
Queueing strategy: fifo

Have you tried looking at the hardware QoS capabilities of this port on the 3400? 


You may also want to remove the policy on f0/17 and test again.
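
Something along these lines (standard IOS commands, but double-check them on your image) would let you compare the drop counters with and without the policy:

conf t
 interface FastEthernet0/17
  no service-policy output client-output
 end
clear counters FastEthernet0/17
show interfaces FastEthernet0/17 | include drops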

Bob

Those are output drops, so I think we should look at the output queue: 0/40 (size/max).

Yes, it is important to understand all the hardware components in the path of the packets and fine-tune the queues. Oftentimes I find my clients are not even using the priority queue, or the switch is configured for the wrong purpose. For example, I see auto-QoS for voice deployed on switches that are connected to servers.
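
For example, if the VoD class were latency-sensitive, moving it into the priority queue would look roughly like the sketch below in plain MQC terms. I have not verified that the ME3400 accepts "priority" in an output policy, so treat it only as an illustration:

! illustration only - not verified on the ME3400
policy-map client-output
 class vod
  no bandwidth
  priority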

HTH,

Bob

Yes, it is very important, and those components are not easy to understand. But QoS only comes into play when traffic hits 100% utilization, whether in long or short bursts. In this case it may be very short bursts, because I have no other explanation for it.

Actually, I see those drops on almost all of our switches and on most ports.
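
If it really is microbursts, the only knob in this policy is the queue depth. Raising it (at the cost of added latency, and only as far as the shared buffer pool allows; the exact maximum is platform-dependent) would look something like:

! example value only - the CLI should reject it if the buffers cannot support it
policy-map client-output
 class class-default
  queue-limit 800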

eXPlosion
Level 1

Just noticed the same thing on another Cisco switch:

SW1#sh policy-map int f0/13

Class-map: class-default (match-any)
567844733 packets
Match: any
Tail Packets Drop: 90
Queue Limit
queue-limit 544 (packets)
Output Queue:
Max queue-limit default threshold: 544
Tail Packets Drop: 90

The uplink is running at about 20% of Gigabit, and client traffic on f0/13 is about 1-5 Mbps.

There is also a strange ARP response on this port:

Jan 27 17:07:45.964 GMT: %SW_DAI-4-INVALID_ARP: 1 Invalid ARPs (Res) on Fa0/13, vlan 115.([xxxx.xxxx.xxxx/x.x.5.96/0018.742d.54xx/255.255.255.255/17:07:45 GMT Wed Jan 27 2016])

Isn't 255.255.255.255 supposed to be the default gateway IP there?