11-02-2019 12:27 PM
I am totally lost on this one. A port connected to a backup server on Hyper-V is losing packets.
In Wireshark I am seeing lots of retransmissions, including fast retransmissions and duplicate ACKs.
The MTU on the switch was set to 9000; however, the NIC is not set for jumbo frames and neither are the switch's neighbours, and the rest of the network is at 1500 MTU.
I have since reset the MTU.
QoS is not enabled, and "QoS ip packet dscp rewrite" is enabled.
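For reference, a minimal sketch of the MTU reset (assuming a 2960S, where the MTU is a global setting rather than per-interface and a change only takes effect after a reload, which may be why the interface below still reports MTU 9000):
configure terminal
 ! Revert the Gigabit ports from jumbo back to the 1500-byte default;
 ! on this platform the MTU cannot be set per interface.
 no system mtu jumbo
 end
! Shows the current value and the value that will apply after the next reload
show system mtu
! The new MTU only takes effect once the switch is reloaded
reload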
swname#sh interfaces gi1/0/18
GigabitEthernet1/0/18 is up, line protocol is up (connected)
Hardware is Gigabit Ethernet, address is ###############
Description: **portname**
MTU 9000 bytes, BW 1000000 Kbit, DLY 10 usec,
reliability 255/255, txload 56/255, rxload 56/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX
input flow-control is off, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input never, output 00:00:00, output hang never
Last clearing of "show interface" counters 2d04h
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 181583573
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 221583000 bits/sec, 25590 packets/sec
5 minute output rate 220007000 bits/sec, 20299 packets/sec
2013505432 packets input, 2125310644085 bytes, 0 no buffer
Received 44515 broadcasts (17545 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 17545 multicast, 0 pause input
0 input packets with dribble condition detected
3478770942 packets output, 4922453886651 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 PAUSE output
0 output buffer failures, 0 output buffers swapped out
#sh mls qos interface gi1/0/18 statistics
GigabitEthernet1/0/18 (All statistics are in packets)
dscp: incoming
-------------------------------
0 - 4 : 2029735768 0 0 0 93
5 - 9 : 0 0 0 0 0
10 - 14 : 0 0 0 0 0
15 - 19 : 0 0 0 0 0
20 - 24 : 0 0 0 0 0
25 - 29 : 0 0 0 0 0
30 - 34 : 0 0 0 0 0
35 - 39 : 0 0 0 0 0
40 - 44 : 0 0 0 0 0
45 - 49 : 0 0 0 222 0
50 - 54 : 0 0 0 0 0
55 - 59 : 0 0 0 0 0
60 - 64 : 0 0 0 0
dscp: outgoing
-------------------------------
0 - 4 : 472463323 0 104 0 0
5 - 9 : 2 0 0 2 0
10 - 14 : 1 0 0 0 0
15 - 19 : 0 0 0 0 0
20 - 24 : 0 0 0 0 0
25 - 29 : 0 0 0 0 0
30 - 34 : 0 0 0 0 1
35 - 39 : 0 0 0 257 0
40 - 44 : 0 0 0 0 0
45 - 49 : 0 80 0 11182517 0
50 - 54 : 0 0 0 0 0
55 - 59 : 0 126899 0 0 0
60 - 64 : 0 0 0 0
cos: incoming
-------------------------------
0 - 4 : 2029748162 0 0 0 0
5 - 7 : 0 0 0
cos: outgoing
-------------------------------
0 - 4 : 3476926286 223510 330116 6989 12472
5 - 7 : 2815 10620740 3461204
output queues enqueued:
queue: threshold1 threshold2 threshold3
-----------------------------------------------
queue 0: 0 0 0
queue 1: 13722696 200861 2962300
queue 2: 0 0 0
queue 3: 0 0 3474904355
output queues dropped:
queue: threshold1 threshold2 threshold3
-----------------------------------------------
queue 0: 0 0 0
queue 1: 405 0 1
queue 2: 0 0 0
queue 3: 0 0 183458504
Policer: Inprofile: 0 OutofProfile: 0
11-02-2019 12:49 PM
Hi,
Configuring jumbo frames usually does not cause packet loss. Is this a Catalyst 9k or a Nexus 9k? What is the exact model number?
HTH
11-02-2019 12:54 PM
Thank you
The switch is a C2960S on v12.2(55)
11-02-2019 12:58 PM - edited 11-02-2019 01:03 PM
Hi,
I am not sure whether this is a buffer issue or not, but the 2960S-series switches have a small amount of buffer memory and are designed for the user/campus edge environment, not so much for VM, server, and storage connectivity.
HTH
11-02-2019 01:07 PM
Hello,
there is a way to increase the relevant queue threshold values. Enable 'mls qos' and then post the output of:
show mls qos int gi1/0/18 statistics
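(A quick sketch of the enable step, for what it's worth; turning on "mls qos" globally changes the default queueing behaviour on every port, not just this one, so it is best done in a maintenance window:)
configure terminal
 ! Enable QoS globally; egress ports move from a single queue to the
 ! 4-queue SRR model whose per-queue statistics we want to see
 mls qos
 end
! Should now report "QoS is enabled"
show mls qos
show mls qos interface gi1/0/18 statistics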
If you are interested in the procedure to calculate these values, check the document below:
11-03-2019 01:36 AM
Thank you very much for the support, it is fantastic! Running iperf between my Hyper-V host and the backup server across this port, I am getting 200-300 Mbps over a gig port. The uplink has capacity: the source is a WSSD (fast) server with a 40 Gb link to the core and a 10 Gb backbone down to this switch with the problematic port.
I will look at enabling QoS and increasing the buffers, and will post the results.
The full switch version is WS-C2960S-24PD-L running 12.2(55)SE7 C2960S-UNIVERSALK9-M.
sh interfaces gi1/0/18 controller
GigabitEthernet1/0/18 is up, line protocol is up (connected)
Hardware is Gigabit Ethernet, address is ##########
Description: **portname**
MTU 9000 bytes, BW 1000000 Kbit, DLY 10 usec,
reliability 255/255, txload 237/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX
input flow-control is off, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input never, output 00:00:00, output hang never
Last clearing of "show interface" counters 2d16h
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 259908789
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 3455000 bits/sec, 5152 packets/sec
5 minute output rate 931338000 bits/sec, 77774 packets/sec
2892281653 packets input, 3063906956668 bytes, 0 no buffer
Received 54934 broadcasts (21435 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 21435 multicast, 0 pause input
0 input packets with dribble condition detected
4672171974 packets output, 6600112271104 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 PAUSE output
0 output buffer failures, 0 output buffers swapped out
Transmit GigabitEthernet1/0/18 Receive
3468719563 Bytes 1920926614 Bytes
356974883 Unicast frames 2896343977 Unicast frames
19894496 Multicast frames 21550 Multicast frames
3885720 Broadcast frames 33578 Broadcast frames
0 Too old frames 1899228740 Unicast bytes
0 Deferred frames 3302491 Multicast bytes
0 MTU exceeded frames 9270389 Broadcast bytes
0 1 collision frames 0 Alignment errors
0 2 collision frames 0 FCS errors
0 3 collision frames 0 Oversize frames
0 4 collision frames 0 Undersize frames
0 5 collision frames 0 Collision fragments
0 6 collision frames
0 7 collision frames 426284912 Minimum size frames
0 8 collision frames 487533102 65 to 127 byte frames
0 9 collision frames 2137414 128 to 255 byte frames
0 10 collision frames 1728420 256 to 511 byte frames
0 11 collision frames 1142827 512 to 1023 byte frames
0 12 collision frames 1977051660 1024 to 1518 byte frames
0 13 collision frames 0 Overrun frames
0 14 collision frames 0 Pause frames
0 15 collision frames
0 Excessive collisions 0 Symbol error frames
0 Late collisions 0 Invalid frames, too large
0 VLAN discard frames 520770 Valid frames, too large
0 Excess defer frames 0 Invalid frames, too small
232363327 64 byte frames 0 Valid frames, too small
74819426 127 byte frames
6220784 255 byte frames 0 Too old frames
4302141 511 byte frames 0 Valid oversize frames
3679006 1023 byte frames 0 System FCS error frames
58555217 1518 byte frames 0 RxPortFifoFull drop frame
815198 Too large frames
0 Good (1 coll) frames
0 Good (>1 coll) frames
11-03-2019 02:06 AM
If you need help with reconfiguring the queues, also post the output of:
show mls qos maps dscp-output-q
11-03-2019 02:12 AM
Thank you very much.
I have yet to switch on QoS and get my head around the documentation.
Here is the output:
show mls qos maps dscp-output-q
Dscp-outputq-threshold map:
d1 :d2 0 1 2 3 4 5 6 7 8 9
------------------------------------------------------------
0 : 02-01 02-01 02-01 02-01 02-01 02-01 02-01 02-01 02-01 02-01
1 : 02-01 02-01 02-01 02-01 02-01 02-01 03-01 03-01 03-01 03-01
2 : 03-01 03-01 03-01 03-01 03-01 03-01 03-01 03-01 03-01 03-01
3 : 03-01 03-01 04-01 04-01 04-01 04-01 04-01 04-01 04-01 04-01
4 : 01-01 01-01 01-01 01-01 01-01 01-01 01-01 01-01 04-01 04-01
5 : 04-01 04-01 04-01 04-01 04-01 04-01 04-01 04-01 04-01 04-01
6 : 04-01 04-01 04-01 04-01
11-03-2019 04:31 AM
Hello,
based on your output, I have come up with two options and the values below. (Reading the map: "02-01" means queue 2, threshold 1, so almost all of your traffic, which is DSCP 0, lands in queue 2.)
Option 1
Globally configure:
mls qos queue-set output 2 buffers 10 70 10 10
and on the interface:
interface GigabitEthernet1/0/18
 queue-set 2
Option 2
Globally configure:
mls qos srr-queue output dscp-map queue 2 threshold 1 (followed by the list of DSCP values, 0-63, that you want mapped to that queue and threshold)
and on the interface:
interface GigabitEthernet1/0/18
 priority-queue out
Both options should reduce the output drops; you will need to check by how much...
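Put end to end, a minimal sketch of Option 1 (assuming "mls qos" is already enabled globally; the 10 70 10 10 split hands 70% of the port's buffer pool to queue 2, where the DSCP 0 bulk traffic lands):
configure terminal
 ! Queue-set 2: allocate 10/70/10/10 percent of the buffers to egress queues 1-4
 mls qos queue-set output 2 buffers 10 70 10 10
 interface GigabitEthernet1/0/18
  ! Attach the port to queue-set 2 (ports belong to queue-set 1 by default)
  queue-set 2
 end
! Re-check after some traffic: are the drop counters still climbing?
show mls qos interface gi1/0/18 statistics
show interfaces gi1/0/18 | include output drops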
11-03-2019 06:17 AM
Hello,
on a side note, also post the output of:
show buffers
Buffer tuning sometimes helps as well...
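(For what it's worth, a hedged sketch of classic IOS buffer tuning using the standard global "buffers" commands, assuming the 2960S accepts them; note these public pools serve process-switched (CPU) traffic, so tuning them will not by itself cure egress drops on a transit port. The numbers are placeholders, not recommendations:)
configure terminal
 ! Example only: keep more Big buffers permanently allocated and raise the
 ! free-list floor so bursts cause fewer misses/creates/trims
 buffers big permanent 60
 buffers big min-free 10
 end
show buffers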
11-04-2019 02:19 AM
A little confused this morning about which direction to take. Hopefully this will help:
show buffers
Buffer elements:
1067 in free list (500 max allowed)
3159241 hits, 0 misses, 1024 created
Public buffer pools:
Small buffers, 104 bytes (total 50, permanent 50, peak 122 @ 18:46:27):
49 in free list (20 min, 150 max allowed)
1552388 hits, 24 misses, 72 trims, 72 created
0 failures (0 no memory)
Middle buffers, 600 bytes (total 25, permanent 25, peak 52 @ 18:46:27):
24 in free list (10 min, 150 max allowed)
22884 hits, 9 misses, 27 trims, 27 created
0 failures (0 no memory)
Big buffers, 1536 bytes (total 59, permanent 50, peak 86 @ 18:45:47):
44 in free list (5 min, 150 max allowed)
213263 hits, 23 misses, 60 trims, 69 created
0 failures (0 no memory)
VeryBig buffers, 4520 bytes (total 10, permanent 10):
10 in free list (0 min, 100 max allowed)
1 hits, 0 misses, 0 trims, 0 created
0 failures (0 no memory)
Large buffers, 5024 bytes (total 0, permanent 0):
0 in free list (0 min, 10 max allowed)
0 hits, 0 misses, 0 trims, 0 created
0 failures (0 no memory)
Huge buffers, 18024 bytes (total 4, permanent 0, peak 7 @ 10:06:49):
4 in free list (0 min, 4 max allowed)
12616 hits, 995 misses, 1983 trims, 1987 created
0 failures (0 no memory)
Interface buffer pools:
HRPCRespFallback buffers, 1400 bytes (total 80, permanent 80):
80 in free list (0 min, 80 max allowed)
0 hits, 0 misses
IPC buffers, 2048 bytes (total 300, permanent 300):
298 in free list (150 min, 500 max allowed)
2 hits, 0 fallbacks, 0 trims, 0 created
0 failures (0 no memory)
RxQFB buffers, 2176 bytes (total 904, permanent 904):
904 in free list (0 min, 904 max allowed)
750 hits, 0 misses
RxQ14 buffers, 2176 bytes (total 64, permanent 64):
16 in free list (0 min, 64 max allowed)
48 hits, 0 fallbacks, 0 trims, 0 created
0 failures (0 no memory)
RxQ0 buffers, 2176 bytes (total 1200, permanent 1200):
700 in free list (0 min, 1200 max allowed)
500 hits, 0 misses
RxQ2 buffers, 2176 bytes (total 153, permanent 153):
25 in free list (0 min, 153 max allowed)
128 hits, 0 fallbacks, 0 trims, 0 created
0 failures (0 no memory)
RxQ1 buffers, 2176 bytes (total 153, permanent 153):
25 in free list (0 min, 153 max allowed)
1451827 hits, 55 fallbacks
RxQ4 buffers, 2176 bytes (total 153, permanent 153):
25 in free list (0 min, 153 max allowed)
28860 hits, 0 misses
RxQ15 buffers, 2176 bytes (total 4, permanent 4):
0 in free list (0 min, 4 max allowed)
1347348 hits, 1347344 misses
RxQ10 buffers, 2176 bytes (total 76, permanent 76):
12 in free list (0 min, 76 max allowed)
4202685 hits, 682 fallbacks
RxQ12 buffers, 2176 bytes (total 115, permanent 115):
19 in free list (0 min, 115 max allowed)
96 hits, 0 misses
RxQ7 buffers, 2176 bytes (total 192, permanent 192):
63 in free list (0 min, 192 max allowed)
8104 hits, 0 misses
RxQ8 buffers, 2176 bytes (total 76, permanent 76):
12 in free list (0 min, 76 max allowed)
1163459 hits, 0 misses
RxQ5 buffers, 2176 bytes (total 128, permanent 128):
64 in free list (0 min, 128 max allowed)
64 hits, 0 misses
RxQ6 buffers, 2176 bytes (total 153, permanent 153):
25 in free list (0 min, 153 max allowed)
128 hits, 0 misses
RxQ3 buffers, 2176 bytes (total 153, permanent 153):
25 in free list (0 min, 153 max allowed)
5001646 hits, 0 fallbacks
RxQ11 buffers, 2176 bytes (total 19, permanent 19):
3 in free list (0 min, 19 max allowed)
16 hits, 0 misses
Jumbo buffers, 9368 bytes (total 200, permanent 200):
200 in free list (0 min, 200 max allowed)
0 hits, 0 misses
Header pools:
11-03-2019 03:21 PM - edited 11-03-2019 03:25 PM
It's possible, though I don't know for certain, that while the switch's MTU was 9K, the way the switch reserves buffers was affected; there may be fewer buffers available when the MTU is configured to support larger frames. If so, after you reset the MTU to 1500, a switch reboot might be required to readjust the buffer allocations. Again, I really don't know whether buffer allocations are affected just by having the larger MTU configured. Certainly, if you had larger frames you would exhaust your buffer space sooner, but from what you described, you weren't actually using larger frames.
As the other posters have noted, a 2960S doesn't have much in the way of buffer resources (it was really designed as a user edge device). By default, you're generally less likely to have buffer issues with QoS disabled than with QoS enabled using the default settings (on that switch). However, I believe the best buffer allocation is likely obtained with QoS enabled and a "custom" QoS configuration.
Georg's referenced Cisco technote should be taken with a "grain of salt"; i.e. addressing such issues is often not as "black and white" as the technote seems to imply.
For example, the technote describes increasing the QoS bandwidth allocation to the queue that's suffering drops. Yes, that might help on a 2960, because with QoS enabled, port queues "reserve" buffers for themselves; i.e. while port queue X is running short of buffers, port queue Y has "extra". So, if you allocate more bandwidth to X so that its queue drains faster, queue Y will likely grow deeper. If you don't exceed the "extra" buffers queue Y had, you're golden (well, except for also adding latency to queue Y's traffic), but if you do exceed queue Y's buffers, you will then have drops there rather than in queue X. (QoS often determines which traffic will "suffer"; it can be a zero-sum game.)
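As a concrete illustration of the bandwidth shifting the technote describes (hypothetical weights, not a recommendation; with shared SRR weights, raising one queue's share necessarily lowers the others', which is exactly the zero-sum effect described above):
configure terminal
 interface GigabitEthernet1/0/18
  ! SRR shared weights for egress queues 1-4; a queue's share is its weight
  ! divided by the sum of the weights (here queue 2 gets 60/100 of the port)
  srr-queue bandwidth share 10 60 20 10
 end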
Several years ago, I had a 3750 (similar QoS architecture, but more RAM for buffers) dropping multiple packets a second on two iSCSI "server" ports. I reconfigured the QoS to behave much like the 3550's QoS did, and my iSCSI ports reduced their drops to a few per day! Other ports did not increase their drops.
The prior only describes what might be accomplished, but if you're thinking "can I just try what he did?", well, yes, but again, understand that what I did was suitable for my traffic and our corporate QoS policy. As QoS often pushes in here and then pops out somewhere else, what I did might make things worse for you.
Doing QoS "right" needs a good understanding of your QoS goals/requirements. It also requires monitoring, with possible adjustment. Unfortunately, on switches like the 2960 series, one size often does not fit all.