507 Views · 5 Helpful · 18 Replies
Beginner

Packet loss across Catalyst

Looking for a bit of guidance.

 

I have a customer that is using a Catalyst as their CPE. It also appears they are using this device as one of their core switches, as there are over 20 VLANs connected to it through a number of trunks. The customer is reporting slow performance, which is evident from a large number of drops on the output queue of one of the trunks. The cable has been replaced, but the issue keeps recurring.

 

I conducted a number of PingPlotter traces. They all take the same path, though traces 3 and 4 obviously have an additional hop. All were run at the same time. The four traces were:

1. To the WAN IP of the CPE

2. To the IP of one of the SVIs on the CPE

3. To a host IP on one of the VLANs

4. To another host IP on a different VLAN.

 

I ran these traces persistently over an hour or so. For traces 3 and 4 I was seeing up to 25% packet loss, and the loss appeared to occur at the WAN IP hop. Yet at the same time, traces 1 and 2 showed no packet loss at that same WAN IP, and no loss overall. How can that be?

 

I need to find some sort of distinction here. My peers think the Catalyst may be low on resources because it is operating as both a switch and a router, i.e. some sort of contention for resources, but I need to provide evidence to support this.

 

If anyone can shed some light as to what may be happening that would be appreciated.

 


18 Replies
VIP Mentor

Re: Packet loss across Catalyst

Hello,

 

Which type/model is the switch? Output queue drops usually mean there is simply too much traffic for the interface to handle. Does the switch have any quality of service configured?

If possible, post the output of 'show version' as well as the full config of the switch...

Beginner

Re: Packet loss across Catalyst

Hey Georg,

 

Thanks for taking the time.

 

So there is QoS configured; I turned it off and reran the traces, but I get the same results. Here is the config:

 

System returned to ROM by power-on
System restarted at 07:42:05 UTC Fri Mar 23 2018
System image file is "flash:/c3560e-universalk9-mz.122-55.SE10/c3560e-universalk9-mz.122-55.SE10.bin"


This product contains cryptographic features and is subject to United
States and local country laws governing import, export, transfer and
use. Delivery of Cisco cryptographic products does not imply
third-party authority to import, export, distribute or use encryption.
Importers, exporters, distributors and users are responsible for
compliance with U.S. and local country laws. By using this product you
agree to comply with applicable laws and regulations. If you are unable
to comply with U.S. and local laws, return this product immediately.

A summary of U.S. laws governing Cisco cryptographic products may be found at:
http://www.cisco.com/wwl/export/crypto/tool/stqrg.html

If you require further assistance please contact us by sending email to
export@cisco.com.

License Level: ipservices
License Type: Permanent
Next reload license Level: ipservices

cisco WS-C3560X-24 (PowerPC405) processor (revision R0) with 262144K bytes of memory.
Last reset from power-on
21 Virtual Ethernet interfaces
1 FastEthernet interface
28 Gigabit Ethernet interfaces
2 Ten Gigabit Ethernet interfaces


512K bytes of flash-simulated non-volatile configuration memory.

Model number : WS-C3560X-24T-E

 

Switch Ports Model SW Version SW Image
------ ----- ----- ---------- ----------
* 1 30 WS-C3560X-24 12.2(55)SE10 C3560E-UNIVERSALK9-M


Configuration register is 0xF

 

!
version 12.2
no service pad
service timestamps debug datetime msec
service timestamps log datetime msec
no service password-encryption
!

!
boot-start-marker
boot-end-marker
!
logging buffered 8192

!

!
!
!
aaa new-model
!
!
!
!
!
aaa session-id common
system mtu routing 1500
ip routing

mls qos map cos-dscp 0 8 16 24 32 41 46 48
mls qos srr-queue input priority-queue 1 bandwidth 10
mls qos
!

!
!

spanning-tree mode pvst
spanning-tree extend system-id
!
!
!
!
vlan internal allocation policy ascending
!
!
class-map match-all EF
match ip dscp ef
!
!
!
!
interface FastEthernet0
no ip address
no ip route-cache cef
no ip route-cache
no ip mroute-cache
!
interface GigabitEthernet0/1
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1-8,11,22,33,44,55,77,88,102-106,203
switchport mode trunk
mls qos trust dscp
!
interface GigabitEthernet0/2
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1-8,11,22,33,44,55,77,88,102-106,203
switchport mode trunk
mls qos trust dscp
!
interface GigabitEthernet0/3
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1-8,11,22,33,44,55,77,88,102-106,203
switchport mode trunk
mls qos trust dscp
!
interface GigabitEthernet0/4
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1-8,11,22,33,44,55,77,88,102-106,203
switchport mode trunk
mls qos trust dscp
!
interface GigabitEthernet0/5
shutdown
!
interface GigabitEthernet0/6
shutdown
!
interface GigabitEthernet0/7
shutdown
!
interface GigabitEthernet0/8
shutdown
!
interface GigabitEthernet0/9
!
interface GigabitEthernet0/10
shutdown
!
interface GigabitEthernet0/11
shutdown
!
interface GigabitEthernet0/12
shutdown
!
interface GigabitEthernet0/13
!
interface GigabitEthernet0/14
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1-8,11,22,33,44,55,77,88,102-106,203
switchport mode trunk
mls qos trust dscp
!
interface GigabitEthernet0/15
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1-8,11,22,33,44,55,77,88,102-106,203
switchport mode trunk
mls qos trust dscp
!
interface GigabitEthernet0/16
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1-8,11,22,33,44,55,77,88,102-106,203
switchport mode trunk
mls qos trust dscp
!
interface GigabitEthernet0/17
shutdown
!
interface GigabitEthernet0/18
shutdown
!
interface GigabitEthernet0/19
shutdown
!
interface GigabitEthernet0/20
shutdown
!
interface GigabitEthernet0/21
shutdown
!
interface GigabitEthernet0/22
shutdown
!
interface GigabitEthernet0/23
shutdown
!
interface GigabitEthernet0/24
shutdown
!
interface GigabitEthernet1/1
description WAN interface
no switchport
ip address
priority-queue out
mls qos trust dscp
!
interface GigabitEthernet1/2
!
interface GigabitEthernet1/3
!
interface GigabitEthernet1/4
!
interface TenGigabitEthernet1/1
!
interface TenGigabitEthernet1/2
!
interface Vlan12
ip address 192.168.1.1 255.255.255.0
ip access-group 110 in
!
interface Vlan21
ip address 192.168.2.1 255.255.255.0
ip access-group 125 in
!
interface Vlan31
ip address 192.168.3.1 255.255.255.0
ip access-group 135 in
!
interface Vlan41
ip address 192.168.8.1 255.255.255.0
ip access-group 145 in
!
interface Vlan51
ip address 192.168.5.1 255.255.255.0
ip access-group 155 in
!
interface Vlan61
ip address 192.168.6.1 255.255.255.0
ip access-group 165 in
!
interface Vlan72
ip address 10.0.104.1 255.255.255.0 secondary
ip address 10.0.102.1 255.255.254.0
!
interface Vlan84
ip address 172.16.172.1 255.255.255.0
!
interface Vlan114
ip address 10.66.11.1 255.255.255.0
ip access-group 115 in
!
interface Vlan221
ip address 10.66.22.1 255.255.255.0
!
interface Vlan332
ip address 10.66.33.1 255.255.255.0
!
interface Vlan443
ip address 10.66.44.1 255.255.255.0
!
interface Vlan554
ip address 10.66.55.1 255.255.255.0
ip access-group 175 in
!
interface Vlan771
ip address 10.66.77.1 255.255.255.0
ip access-group 185 in
!
interface Vlan881
ip address 10.66.88.1 255.255.255.0
!
interface Vlan112
ip address 10.66.102.1 255.255.255.0
!
interface Vlan113
ip address 10.66.103.1 255.255.255.0
!
interface Vlan114
ip address 10.66.104.1 255.255.255.0
!
interface Vlan115
ip address 10.66.105.1 255.255.255.0
!
interface Vlan116
ip address 10.66.106.1 255.255.255.0
!
interface Vlan213
ip address 10.66.203.1 255.255.255.0

 

I get a bunch of 'encapsulation failed', 'no adjacency' & 'no route' drops under 'show ip traffic'; not sure how pertinent that is.

 

Ben

VIP Mentor

Re: Packet loss across Catalyst

Hello,

 

can you post the output of 'show interfaces GigabitEthernet1/1'?

 

Your 3560X essentially functions as a router. Is this the full configuration, that is, there are no (static) routes configured?

Beginner

Re: Packet loss across Catalyst

Hey Georg,

 

Sorry about that, I removed the BGP config; there is a BGP-learnt default route. I also omitted the access lists that permit traffic on the VLANs, as they were just permitting the /24 subnets for the VLANs.

 

 

 

GigabitEthernet1/1 is up, line protocol is up (connected)
Hardware is Gigabit Ethernet, address is
Description: WAN interface
Internet address is
MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive not set
Full-duplex, 1000Mb/s, link type is auto, media type is 1000BaseSX SFP
input flow-control is off, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:00, output 00:00:00, output hang never
Last clearing of "show interface" counters 3d21h
Input queue: 1/75/0/0 (size/max/drops/flushes); Total output drops: 2
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 347000 bits/sec, 123 packets/sec
5 minute output rate 390000 bits/sec, 120 packets/sec
581276871 packets input, 403227429391 bytes, 0 no buffer
Received 0 broadcasts (0 IP multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 0 multicast, 0 pause input
0 input packets with dribble condition detected
532501331 packets output, 355693860967 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 PAUSE output
0 output buffer failures, 0 output buffers swapped out

 

 

VIP Mentor

Re: Packet loss across Catalyst

Hello,

 

the uplink looks good...

 

Can you post the output of 'show interface x', where 'x' is the trunk you see the output queue drops on?

Can you also post the output of 'show buffers'?

 

Sorry for all the questions, but it can be a lot of things causing this...

Beginner

Re: Packet loss across Catalyst

Hey Georg,

 

You are helping me out, man :)

 

So there are three trunks, and these two present with output drops:

 

GigabitEthernet0/4 is up, line protocol is up (connected)
Hardware is Gigabit Ethernet, address is
MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
reliability 255/255, txload 3/255, rxload 2/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX
input flow-control is off, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:01, output 00:00:00, output hang never
Last clearing of "show interface" counters 3d22h
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 1679282
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 11374000 bits/sec, 2712 packets/sec
5 minute output rate 11971000 bits/sec, 2722 packets/sec
7370174760 packets input, 9133545064110 bytes, 0 no buffer
Received 12874477 broadcasts (7418514 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 7418514 multicast, 0 pause input
0 input packets with dribble condition detected
5437296649 packets output, 5383261631841 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 PAUSE output
0 output buffer failures, 0 output buffers swapped out

GigabitEthernet0/2 is up, line protocol is up (connected)
Hardware is Gigabit Ethernet, address is
MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX
input flow-control is off, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:05, output 00:00:00, output hang never
Last clearing of "show interface" counters 3d22h
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 44749
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 143000 bits/sec, 41 packets/sec
5 minute output rate 485000 bits/sec, 110 packets/sec
1037704745 packets input, 437182238895 bytes, 0 no buffer
Received 2830824 broadcasts (1849195 multicasts)
0 runts, 5822 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 1849195 multicast, 0 pause input
0 input packets with dribble condition detected
3090556028 packets output, 4294017794788 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 PAUSE output
0 output buffer failures, 0 output buffers swapped out
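
For scale, the counters above work out to tiny percentage drop rates, which is the pattern you'd expect from momentary buffer exhaustion (microbursts) rather than a saturated link. A minimal sketch of the arithmetic, using the numbers pasted above:

```python
def drop_pct(total_output_drops, packets_output):
    """Fraction of egress packets dropped on an interface, as a percentage."""
    return 100.0 * total_output_drops / packets_output

# Counters taken from the 'show interface' outputs above
gi0_4 = drop_pct(1_679_282, 5_437_296_649)   # GigabitEthernet0/4
gi0_2 = drop_pct(44_749, 3_090_556_028)      # GigabitEthernet0/2

print(f"Gi0/4: {gi0_4:.4f}%  Gi0/2: {gi0_2:.4f}%")
# → Gi0/4: 0.0309%  Gi0/2: 0.0014%
```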


#sh buffers
Buffer elements:
1061 in free list (500 max allowed)
540052843 hits, 0 misses, 1024 created

Public buffer pools:
Small buffers, 104 bytes (total 50, permanent 50, peak 116 @ 7w0d):
43 in free list (20 min, 150 max allowed)
264842676 hits, 47 misses, 141 trims, 141 created
0 failures (0 no memory)
Middle buffers, 600 bytes (total 25, permanent 25, peak 91 @ 7w0d):
22 in free list (10 min, 150 max allowed)
7015069 hits, 41 misses, 123 trims, 123 created
0 failures (0 no memory)
Big buffers, 1536 bytes (total 50, permanent 50, peak 77 @ 7w0d):
50 in free list (5 min, 150 max allowed)
22349184 hits, 232 misses, 696 trims, 696 created
0 failures (0 no memory)
VeryBig buffers, 4520 bytes (total 15, permanent 10, peak 17 @ 7w0d):
1 in free list (0 min, 100 max allowed)
894 hits, 9 misses, 703 trims, 708 created
0 failures (0 no memory)
Large buffers, 5024 bytes (total 1, permanent 0, peak 2 @ 7w0d):
1 in free list (0 min, 10 max allowed)
103 hits, 1 misses, 692 trims, 693 created
0 failures (0 no memory)
Huge buffers, 18024 bytes (total 1, permanent 0, peak 5 @ 7w0d):
1 in free list (0 min, 4 max allowed)
2675 hits, 9 misses, 706 trims, 707 created
0 failures (0 no memory)

Interface buffer pools:
Syslog ED Pool buffers, 600 bytes (total 132, permanent 132):
100 in free list (132 min, 132 max allowed)
33899 hits, 0 misses
FastEthernet0-Physical buffers, 1524 bytes (total 32, permanent 32):
8 in free list (0 min, 32 max allowed)
24 hits, 0 fallbacks
8 max cache size, 8 in cache
0 hits in cache, 0 misses in cache
RxQFB buffers, 2040 bytes (total 300, permanent 300):
292 in free list (0 min, 300 max allowed)
10433408 hits, 158342 misses
RxQ1 buffers, 2040 bytes (total 128, permanent 128):
1 in free list (0 min, 128 max allowed)
3360194 hits, 26953 fallbacks
RxQ2 buffers, 2040 bytes (total 12, permanent 12):
0 in free list (0 min, 12 max allowed)
12 hits, 0 fallbacks, 0 trims, 0 created
0 failures (0 no memory)
RxQ3 buffers, 2040 bytes (total 128, permanent 128):
6 in free list (0 min, 128 max allowed)
102116265 hits, 3064739 fallbacks
RxQ4 buffers, 2040 bytes (total 64, permanent 64):
0 in free list (0 min, 64 max allowed)
2634354 hits, 14951349 misses
RxQ5 buffers, 2040 bytes (total 0, permanent 0):
0 in free list (0 min, 0 max allowed)
0 hits, 0 misses
RxQ6 buffers, 2040 bytes (total 128, permanent 128):
0 in free list (0 min, 128 max allowed)
31817035 hits, 31861980 misses
RxQ7 buffers, 2040 bytes (total 192, permanent 192):
63 in free list (0 min, 192 max allowed)
22267264 hits, 0 misses
RxQ8 buffers, 2040 bytes (total 64, permanent 64):
0 in free list (0 min, 64 max allowed)
118462464 hits, 391623805 misses
RxQ9 buffers, 2040 bytes (total 1, permanent 1):
0 in free list (0 min, 1 max allowed)
1 hits, 0 misses
RxQ10 buffers, 2040 bytes (total 64, permanent 64):
1 in free list (0 min, 64 max allowed)
46598742 hits, 7364796 fallbacks
RxQ11 buffers, 2040 bytes (total 16, permanent 16):
1 in free list (0 min, 16 max allowed)
2928951102 hits, 2932341703 misses
RxQ12 buffers, 2040 bytes (total 16, permanent 16):
0 in free list (0 min, 16 max allowed)
16 hits, 0 misses
RxQ13 buffers, 2040 bytes (total 16, permanent 16):
0 in free list (0 min, 16 max allowed)
16 hits, 0 misses
RxQ15 buffers, 2040 bytes (total 4, permanent 4):
0 in free list (0 min, 4 max allowed)
134196090 hits, 134196086 misses
IPC buffers, 2048 bytes (total 300, permanent 300):
296 in free list (150 min, 500 max allowed)
898744 hits, 0 fallbacks, 0 trims, 0 created
0 failures (0 no memory)
Jumbo buffers, 9240 bytes (total 200, permanent 200):
200 in free list (0 min, 200 max allowed)
0 hits, 0 misses

Header pools:

 

 

Ben

VIP Mentor

Re: Packet loss across Catalyst

Hello,

 

let's try some generic tuning first: enter the global commands below on your switch, and add the last two lines (srr-queue bandwidth share and priority-queue out) to interface GigabitEthernet0/4:

 

mls qos queue-set output 1 threshold 1 100 100 50 200
mls qos queue-set output 1 threshold 2 125 125 100 400
mls qos queue-set output 1 threshold 3 100 100 100 400
mls qos queue-set output 1 threshold 4 60 150 50 200
mls qos queue-set output 1 buffers 15 25 40 20

 

interface GigabitEthernet0/4
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1-8,11,22,33,44,55,77,88,102-106,203
switchport mode trunk
mls qos trust dscp
srr-queue bandwidth share 1 30 35 5
priority-queue out

 

Clear the interface counters and check if the output drops decrease. If not, post the output of:

 

show mls qos inter GigabitEthernet0/4 stat
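
One easy mistake with the queue-set buffers command: as I understand the 3560 syntax, the four values are percentages of the port's egress buffer pool and must total exactly 100 (15 + 25 + 40 + 20 above). A hypothetical helper to catch a bad allocation before pasting it in (the function is mine, not an IOS feature):

```python
def check_queue_set_buffers(q1, q2, q3, q4):
    """Validate 'mls qos queue-set output <set> buffers q1 q2 q3 q4':
    the four values are percentages of the port's egress buffer pool
    and must sum to exactly 100."""
    total = q1 + q2 + q3 + q4
    if total != 100:
        raise ValueError(f"buffer percentages sum to {total}, expected 100")
    return total

check_queue_set_buffers(15, 25, 40, 20)  # the allocation suggested above
```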

 

Beginner

Re: Packet loss across Catalyst

Hey Georg, sorry for the delay. I tuned the queues as per your suggestions, but unfortunately I am still seeing increasing drops.


#sh mls qos interface gigabitEthernet 0/4 statistics
GigabitEthernet0/4 (All statistics are in packets)

dscp: incoming
-------------------------------

0 - 4 : 469492867 4128 292263 0 1005201
5 - 9 : 0 0 0 0 0
10 - 14 : 0 0 0 0 0
15 - 19 : 0 0 0 0 0
20 - 24 : 0 0 0 0 0
25 - 29 : 0 0 0 0 0
30 - 34 : 0 0 0 0 0
35 - 39 : 0 0 0 0 0
40 - 44 : 378658 0 0 0 0
45 - 49 : 0 76877 0 2299162 0
50 - 54 : 0 0 0 0 0
55 - 59 : 0 15641085 0 0 0
60 - 64 : 0 0 0 0
dscp: outgoing
-------------------------------

0 - 4 : 2771805945 617 244464 0 860632
5 - 9 : 0 0 0 0 0
10 - 14 : 0 0 0 0 0
15 - 19 : 0 38 0 0 0
20 - 24 : 1259 0 0 0 0
25 - 29 : 0 0 0 0 0
30 - 34 : 0 0 0 0 8701
35 - 39 : 0 0 0 0 0
40 - 44 : 1 0 0 0 0
45 - 49 : 0 859704 0 3006413 0
50 - 54 : 0 0 0 0 0
55 - 59 : 0 67393 0 0 0
60 - 64 : 0 0 0 0
cos: incoming
-------------------------------

0 - 4 : 1113654791 0 0 0 0
5 - 7 : 0 0 0
cos: outgoing
-------------------------------

0 - 4 : 2784114474 0 1297 0 8701
5 - 7 : 859012 3005244 29746165
output queues enqueued:
queue: threshold1 threshold2 threshold3
-----------------------------------------------
queue 0: 859015 0 0
queue 1: 2769521191 10364697 30186163
queue 2: 1297 0 0
queue 3: 2573947 0 8377677

output queues dropped:
queue: threshold1 threshold2 threshold3
-----------------------------------------------
queue 0: 0 0 0
queue 1: 7034609 600 0
queue 2: 0 0 0
queue 3: 0 0 0

Policer: Inprofile: 0 OutofProfile: 0

 

Accepted Solution
VIP Mentor

Re: Packet loss across Catalyst

Hello,

 

based on the output, configure the below on your interface GigabitEthernet0/4:

 

mls qos queue-set output 1 buffers 5 85 5 5
srr-queue bandwidth share 5 85 5 5

mls qos queue-set output 1 threshold 1 100 100 50 100
mls qos queue-set output 1 threshold 2 800 200 200 800
mls qos queue-set output 1 threshold 3 80 90 50 100
mls qos queue-set output 1 threshold 4 80 90 50 100

 

There is a certain way to calculate these values; basically, it comes down to assigning higher values to the queues with higher traffic and more drops...

 

Give this a try and check if the drops decrease...
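
That "higher values to queues with more traffic" heuristic can be sketched numerically. In the statistics above, queue 1 carries roughly 99.5% of the enqueued traffic and essentially all of the drops; giving every queue a small floor and splitting the rest proportionally reproduces the suggested 5/85/5/5 split. The function and the 5% floor are illustrative assumptions, not Cisco's documented formula:

```python
def suggest_buffer_shares(enqueued, floor=5):
    """Illustrative heuristic only: give every egress queue a small
    floor percentage, then split the remaining buffer space in
    proportion to each queue's enqueued traffic."""
    remaining = 100 - floor * len(enqueued)
    total = sum(enqueued)
    raw = [floor + remaining * q / total for q in enqueued]
    shares = [round(x) for x in raw[:-1]]
    shares.append(100 - sum(shares))   # force the shares to sum to 100
    return shares

# Per-queue enqueued totals summed from the statistics output above
print(suggest_buffer_shares([859_015, 2_810_072_051, 1_297, 10_951_624]))
# → [5, 85, 5, 5] -- matching the accepted 'buffers 5 85 5 5'
```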

Beginner

Re: Packet loss across Catalyst

Hey Georg,

 

Thanks for your time on this one. The customer reports that their experience has improved significantly.

VIP Mentor

Re: Packet loss across Catalyst

Hello,

 

for the sake of verification, can you post the output of:

 

sh mls qos interface gigabitEthernet 0/4 statistics

 

again? I am curious to know what effect the changes had...

Beginner

Re: Packet loss across Catalyst

There are still packets being dropped, but at a rate of 100 or so a day, compared with 100k to 1 million before! I can live with that, and apparently so can the customer.

Thank you Georg
VIP Mentor

Re: Packet loss across Catalyst

Sounds good!

 

If you want to know how these values are calculated, below is a link that explains it quite well...

 

http://www.kerschercomputing.com/in-the-news/catalyst296035603750outputdropspart2

VIP Expert

Re: Packet loss across Catalyst

BTW, if you want to look at some excellent documentation on the 3560 queuing architecture, you might look at: https://supportforums.cisco.com/t5/network-infrastructure-documents/egress-qos/ta-p/3122802