11-11-2011 02:26 PM - edited 03-07-2019 03:21 AM
Greetings,
I have spent some time looking through the forums and documentation online, but I still have a few questions. We are in the middle of a migration of in-house software and have had some performance bottlenecks in HTTP requests between clients and hosts on different servers that route through a 2960G switch bridged to another switch in an RSTP setup. While most of the issues have been resolved by repartitioning on the Linux hosts and filers, I spent some time early on looking at the switch. I am a GNU/Linux guy and only recently got thrown into switching when our comms group was retasked, so bear with me.
The switchport is connected to a Linux host running RHEL 5u5 with a bonded interface in transmit load-balance mode, so only one MAC is visible to the switch at any time. The switchport config is pretty basic, as is the rest of the switch config:
!
interface GigabitEthernet0/28
description dx1
switchport mode access
spanning-tree portfast
I noticed that the txload and rxload are relatively small and stay that way, but the Total output drops counter increments pretty fast and with regularity. I have read that this can often be fine for various reasons, but I was wondering if there is a way to sniff which packets are being dropped, or whether that counter is all the information there is. Is this something I should be concerned about? On the same hardware and setup running our legacy software, the total output drops stay at 0-5, as opposed to numbers like the below across the board on the new software, which admittedly generates a lot more network traffic since it is more ESB/SOA based than the legacy software.
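I did wonder whether a SPAN session would help; something roughly like the below (g0/1 is just a hypothetical spare port here) would mirror the port's transmitted traffic to a capture box, but as far as I can tell that would only show me what actually goes out the wire, not the frames being counted as drops:
monitor session 1 source interface GigabitEthernet0/28 tx
monitor session 1 destination interface GigabitEthernet0/1
Anyway, here is the interface output: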
hsw1-awcn#show interfaces g0/28
GigabitEthernet0/28 is up, line protocol is up (connected)
Hardware is Gigabit Ethernet, address is e804.62b3.4c1c (bia e804.62b3.4c1c)
Description: dx1
MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
reliability 255/255, txload 7/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX
input flow-control is off, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input never, output 00:00:01, output hang never
Last clearing of "show interface" counters never
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 163167301
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 4670000 bits/sec, 1793 packets/sec
5 minute output rate 29639000 bits/sec, 2992 packets/sec
40329003402 packets input, 19801050808406 bytes, 0 no buffer
Received 1330378 broadcasts (79 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 79 multicast, 0 pause input
0 input packets with dribble condition detected
54443241034 packets output, 55818849960705 bytes, 0 underruns
0 output errors, 0 collisions, 1 interface resets
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 PAUSE output
0 output buffer failures, 0 output buffers swapped out
We don't have any QoS set up, but I was also wondering what the output below is telling me, and whether it is an issue:
hsw1-awcn#show platform port-asic stats drop GigabitEthernet0/28
Interface Gi0/28 TxQueue Drop Statistics
Queue 0
Weight 0 Frames 0
Weight 1 Frames 0
Weight 2 Frames 0
Queue 1
Weight 0 Frames 0
Weight 1 Frames 0
Weight 2 Frames 0
Queue 2
Weight 0 Frames 0
Weight 1 Frames 0
Weight 2 Frames 0
Queue 3
Weight 0 Frames 0
Weight 1 Frames 0
Weight 2 Frames 115263540
Here is some other output from the switchport that might be of use to someone with more knowledge than I have.
hsw1-awcn#show interfaces g0/28 counters
Port InOctets InUcastPkts InMcastPkts InBcastPkts
Gi0/28 19801847031738 40331456565 79 1330324
Port OutOctets OutUcastPkts OutMcastPkts OutBcastPkts
Gi0/28 55820903755360 54134963247 298941448 13486895
hsw1-awcn#show interfaces g0/28 counters err
Port Align-Err FCS-Err Xmit-Err Rcv-Err UnderSize OutDiscards
Gi0/28 0 0 0 0 0 163167306
Port Single-Col Multi-Col Late-Col Excess-Col Carri-Sen Runts Giants
Gi0/28 0 0 0 0 0 0 0
hsw1-awcn#sh mls qos maps cos-output-q
Cos-outputq-threshold map:
cos: 0 1 2 3 4 5 6 7
------------------------------------
queue-threshold: 2-1 2-1 3-1 3-1 4-1 1-1 4-1 4-1
hsw1-awcn#show mls qos maps dscp-output-q
Dscp-outputq-threshold map:
d1 :d2 0 1 2 3 4 5 6 7 8 9
------------------------------------------------------------
0 : 02-01 02-01 02-01 02-01 02-01 02-01 02-01 02-01 02-01 02-01
1 : 02-01 02-01 02-01 02-01 02-01 02-01 03-01 03-01 03-01 03-01
2 : 03-01 03-01 03-01 03-01 03-01 03-01 03-01 03-01 03-01 03-01
3 : 03-01 03-01 04-01 04-01 04-01 04-01 04-01 04-01 04-01 04-01
4 : 01-01 01-01 01-01 01-01 01-01 01-01 01-01 01-01 04-01 04-01
5 : 04-01 04-01 04-01 04-01 04-01 04-01 04-01 04-01 04-01 04-01
6 : 04-01 04-01 04-01 04-01
hsw1-awcn#show interfaces g0/28 switching
GigabitEthernet0/28 dx1
Throttle count 0
Drops RP 0 SP 0
SPD Flushes Fast 0 SSE 0
SPD Aggress Fast 0
SPD Priority Inputs 0 Drops 0
Protocol Path Pkts In Chars In Pkts Out Chars Out
Other Process 0 0 2132504 128043660
Cache misses 0
Fast 0 0 0 0
Auton/SSE 0 0 0 0
Spanning Tree Process 0 0 10465630 627937800
Cache misses 0
Fast 0 0 0 0
Auton/SSE 0 0 0 0
CDP Process 0 0 354589 146047840
Cache misses 0
Fast 0 0 0 0
Auton/SSE 0 0 0 0
hsw1-awcn#show interfaces g0/28 stats
GigabitEthernet0/28
Switching path Pkts In Chars In Pkts Out Chars Out
Processor 0 0 12999502 956834048
Route cache 0 0 0 0
Total 0 0 12999502 956834048
hsw1-awcn#show queueing interface g0/28
Interface GigabitEthernet0/28 queueing strategy: none
Transmit GigabitEthernet0/28 Receive
1763188559 Bytes 3735544227 Bytes
3812006753 Unicast frames 3727259303 Unicast frames
406848156 Multicast frames 32316 Multicast frames
43306196 Broadcast frames 27011280 Broadcast frames
0 Too old frames 1844580102 Unicast bytes
0 Deferred frames 6268671 Multicast bytes
0 MTU exceeded frames 1884695228 Broadcast bytes
0 1 collision frames 0 Alignment errors
0 2 collision frames 0 FCS errors
0 3 collision frames 0 Oversize frames
0 4 collision frames 0 Undersize frames
0 5 collision frames 0 Collision fragments
0 6 collision frames
0 7 collision frames 2440117432 Minimum size frames
0 8 collision frames 2187857298 65 to 127 byte frames
0 9 collision frames 3488687700 128 to 255 byte frames
0 10 collision frames 1591178981 256 to 511 byte frames
0 11 collision frames 2555603644 512 to 1023 byte frames
0 12 collision frames 80792437 1024 to 1518 byte frames
0 13 collision frames 0 Overrun frames
0 14 collision frames 0 Pause frames
0 15 collision frames
0 Excessive collisions 1 Symbol error frames
0 Late collisions 0 Invalid frames, too large
0 VLAN discard frames 0 Valid frames, too large
0 Excess defer frames 0 Invalid frames, too small
1181878413 64 byte frames 0 Valid frames, too small
3009065386 127 byte frames
3273911113 255 byte frames 0 Too old frames
1981929200 511 byte frames 0 Valid oversize frames
3554245931 1023 byte frames 0 System FCS error frames
4146032950 1518 byte frames 0 RxPortFifoFull drop frame
0 Too large frames
0 Good (1 coll) frames
0 Good (>1 coll) frames
hsw1-awcn#show mls qos interface g0/28
GigabitEthernet0/28
QoS is disabled. When QoS is enabled, following settings will be applied
trust state: not trusted
trust mode: not trusted
trust enabled flag: ena
COS override: dis
default COS: 0
DSCP Mutation Map: Default DSCP Mutation Map
Trust device: none
qos mode: port-based
11-13-2011 08:00 PM
Hello,
I see that all of the drops are OutDiscards. Those happen when the port's TX buffer is full and the switch can't send the frame out. The other thing is that all of your traffic is going out through queue 3. Even without QoS configured on the port, the switch applies some default queueing if QoS is configured globally, so that one queue may not get the whole port buffer, and a large number of small packets can fill it. Possibly QoS should be applied to solve this. Can you please also capture these commands so I can take a quick look?
show platform port-asic stATS Drop
show queueing interface g0/28 ---> (the output pasted above under this command is not show queueing output; it looks like it came from show controllers)
Nik
11-14-2011 03:12 AM
Thank you for the reply. I have cleared the counters on the switchport to monitor the situation a little more and see what the ratio of drops to total packets is over time.
There is no QoS configured on the switchport either; thanks for clarifying those two commands. Most of the literature I have read online, from Cisco staff replies in forums and from whitepapers, seems to offer, as you noted, two solutions: [1] increase bandwidth on the port and/or [2] implement QoS. I have also read a lot about how output drop counts are sometimes normal and that the ratio is what matters more. While the ratio here seems to be in the low single digits or lower, I am still slightly concerned, as more traffic will be hitting the end device on this switchport in the future.
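For what it is worth, going by the counters I captured above before clearing them, the ratio works out to roughly 163,167,306 OutDiscards against 54,443,241,034 packets output, which is about 0.3%.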
Thanks for any further information and that which you have provided thus far!
hsw1-tbw3#show queueing interface g0/28
Interface GigabitEthernet0/28 queueing strategy: none
And below is the output of the drop statistics:
hsw1-tbw3#show platform port-asic stATS Drop
Port-asic Port Drop Statistics - Summary
========================================
RxQueue 0 Drop Stats: 0
RxQueue 1 Drop Stats: 0
RxQueue 2 Drop Stats: 0
RxQueue 3 Drop Stats: 0
Port 0 TxQueue Drop Stats: 0
Port 1 TxQueue Drop Stats: 0
Port 2 TxQueue Drop Stats: 0
Port 3 TxQueue Drop Stats: 1630
Supervisor TxQueue Drop Statistics
Queue 0: 0
Queue 1: 0
Queue 2: 0
Queue 3: 0
Queue 4: 0
Queue 5: 0
Queue 6: 0
Queue 7: 16
Queue 8: 252025
Queue 9: 0
Queue 10: 0
Queue 11: 0
Queue 12: 0
Queue 13: 0
Queue 14: 0
Queue 15: 0
Port-asic Port Drop Statistics - Details
========================================
RxQueue Drop Statistics
Queue 0
Weight 0 Frames: 0
Weight 1 Frames: 0
Weight 2 Frames: 0
Queue 1
Weight 0 Frames: 0
Weight 1 Frames: 0
Weight 2 Frames: 0
Queue 2
Weight 0 Frames: 0
Weight 1 Frames: 0
Weight 2 Frames: 0
Queue 3
Weight 0 Frames: 0
Weight 1 Frames: 0
Weight 2 Frames: 0
Port 0 TxQueue Drop Statistics
Queue 0
Weight 0 Frames 0
Weight 1 Frames 0
Weight 2 Frames 0
Queue 1
Weight 0 Frames 0
Weight 1 Frames 0
Weight 2 Frames 0
Queue 2
Weight 0 Frames 0
Weight 1 Frames 0
Weight 2 Frames 0
Queue 3
Weight 0 Frames 0
Weight 1 Frames 0
Weight 2 Frames 0
Port 1 TxQueue Drop Statistics
Queue 0
Weight 0 Frames 0
Weight 1 Frames 0
Weight 2 Frames 0
Queue 1
Weight 0 Frames 0
Weight 1 Frames 0
Weight 2 Frames 0
Queue 2
Weight 0 Frames 0
Weight 1 Frames 0
Weight 2 Frames 0
Queue 3
Weight 0 Frames 0
Weight 1 Frames 0
Weight 2 Frames 0
Port 2 TxQueue Drop Statistics
Queue 0
Weight 0 Frames 0
Weight 1 Frames 0
Weight 2 Frames 0
Queue 1
Weight 0 Frames 0
Weight 1 Frames 0
Weight 2 Frames 0
Queue 2
Weight 0 Frames 0
Weight 1 Frames 0
Weight 2 Frames 0
Queue 3
Weight 0 Frames 0
Weight 1 Frames 0
Weight 2 Frames 0
Port 3 TxQueue Drop Statistics
Queue 0
Weight 0 Frames 0
Weight 1 Frames 0
Weight 2 Frames 0
Queue 1
Weight 0 Frames 0
Weight 1 Frames 0
Weight 2 Frames 0
Queue 2
Weight 0 Frames 0
Weight 1 Frames 0
Weight 2 Frames 0
Queue 3
Weight 0 Frames 0
Weight 1 Frames 0
Weight 2 Frames 1630
Supervisor TxQueue Drop Statistics
Queue 0: 0
Queue 1: 0
Queue 2: 0
Queue 3: 0
Queue 4: 0
Queue 5: 0
Queue 6: 0
Queue 7: 16
Queue 8: 252025
Queue 9: 0
Queue 10: 0
Queue 11: 0
Queue 12: 0
Queue 13: 0
Queue 14: 0
Queue 15: 0
11-14-2011 05:04 AM
ok,
This can be really traffic specific. Since the port has no QoS on it, the output queue is FIFO, but the buffer for it is still limited. The OutDiscards we see mean there were no egress buffers available, so the packet gets dropped. Bursty traffic can overwhelm the queue and cause those drops. If this really depends on the type of traffic, QoS can be a solution, since we can set thresholds that let the port request more buffers from the common pool when it is starved. A rough sketch of what I mean is below.
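Something along these lines is the general idea; treat it only as a sketch. The buffer and threshold numbers are just starting values to tune, and if the queue numbering in the port-asic output is zero-based, the dropping queue there would be egress queue 4 in the config (check that mapping on your IOS version). Also keep in mind that mls qos is a global command, so it changes queueing behaviour on every port of the switch:
mls qos
mls qos queue-set output 2 buffers 10 10 20 60
mls qos queue-set output 2 threshold 4 400 400 100 400
!
interface GigabitEthernet0/28
 queue-set 2
This gives that queue the largest share of the port buffer and raises its drop thresholds, instead of leaving everything at the defaults.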
Another thing to try is simply to move to a different port (ideally on the other side of the switch). Sometimes ASIC-level problems or limitations can affect particular ports.
Nik
11-14-2011 05:52 AM
Hi,
Try configuring the command below under the interface. This sets the queue size to the maximum, so there should be no drops on the interface.
hold-queue 4096 in
Please rate the helpful posts.
Regards,
Naidu.
11-14-2011 06:53 AM
I assume you mean hold-queue 4096 out, since my drops above are on the output queue? If you do mean to increase the size of the input queue, can you explain why that would help output drops?
I am also a little hesitant to increase the queue sizes, as I don't want to introduce further complications with retransmissions if packets sit too long in the output queue. But the output queue size here is only 40, which seems a little low, and most things I have looked at seem to suggest 1024 for gigabit connections, so perhaps that is a simple solution. What I am considering applying, assuming the out keyword is what was intended, is below.
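Roughly just this, and then another clear of the counters so I can watch the drop ratio again for a while:
interface GigabitEthernet0/28
 hold-queue 1024 out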
11-14-2011 01:53 PM
Try this:
1. Hardcode the interface to "speed 100";
2. Clear the counters; and
3. Observe if the drops are evident.
I know it's not something anyone likes to do, but there are some high-end servers that, when misconfigured, can't really handle 1 Gb of traffic.
Once they are configured correctly, the output drops go away. We have had several cases like that, and we get into heated arguments with the server guys about it, so this is what we did and the server people got the message. A rough example is below.
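Something like this; I also pin the duplex when I pin the speed, and the server NIC has to be set to match, otherwise you just trade output drops for a duplex mismatch (this is only an illustration, adjust to your environment):
interface GigabitEthernet0/28
 speed 100
 duplex full
Then from enable mode:
clear counters GigabitEthernet0/28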
11-14-2011 11:42 PM
Hi,
We have found that adding the following configuration to the interfaces on the router helps a lot: "hold-queue 4096 out". 1G interfaces can benefit from "hold-queue 1024 out" as well. This can be configured on input too, e.g. "hold-queue 1024 in" in the same place in the configuration. Looking at the output of "show interface" will tell you the size of the interface queues; check before you make changes, since some interfaces default to a 2000-packet input queue.
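For example, these lines from the output posted earlier show the current sizes on that port (the input queue max is 75 and the output queue max is 40):
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 163167301
Output queue: 0/40 (size/max)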
Please rate the helpful posts.
Regards,
Naidu.