03-30-2011 02:32 AM - edited 03-06-2019 04:21 PM
Hi everyone,
Here is our (very simple) network topology:
- 3 switches: A, B, and C
A is the main switch (2970G), to which B (2970G) and C (2960S) are connected.
- Between A & B: 500/600 Mbps peak bandwidth
- Between A & C: 400/500 Mbps peak bandwidth
The problem is that, when running wget tests between servers (on the same VLAN), we get very poor performance on machines connected to the 2960S only.
Test results :
- Host connected to A gets a 1 GB file from a host connected to B: 100 MBps, which is good.
- Host connected to B gets a 1 GB file from a host connected to A: 83 MBps, which is not too bad.
And now, let's test hosts on the 2960S:
- Host connected to C gets a 1 GB file from a host connected to A: 110 MBps, which is good.
- Host connected to A gets a 1 GB file from a host connected to C: 2 to 5 MBps, with many stalled states.
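For reference, these MBps figures can be derived from a timed wget; the sketch below shows the arithmetic (the server name and file path are placeholders, not taken from this thread):

```shell
# Hypothetical test: fetch a 1 GB file and convert bytes/seconds to MBps.
# e.g.  time wget -O /dev/null http://server-a/test-1gb.bin
BYTES=$((1024 * 1024 * 1024))   # 1 GB test file
SECS=10                         # example elapsed time reported by 'time'
awk -v b="$BYTES" -v s="$SECS" \
    'BEGIN { printf "%.1f MBps\n", b / s / 1024 / 1024 }'
```

With a 10-second transfer this prints 102.4 MBps, i.e. roughly a saturated gigabit link, which matches the "good" results above.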
Troubleshooting so far:
- The cable was changed, so cabling is ruled out.
- Ports are forced to 1000/full duplex and appear to be running at 1000/full.
- Switch C reports lots of output drops:
#sh interfaces gigabitEthernet 0/48
GigabitEthernet0/48 is up, line protocol is up (connected)
Hardware is Gigabit Ethernet, address is a40c.c3b7.74b0 (bia a40c.c3b7.74b0)
Description: *** SW1101 ***
MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
reliability 255/255, txload 84/255, rxload 2/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX
input flow-control is off, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:00, output 00:00:01, output hang never
Last clearing of "show interface" counters 4d22h
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 31444194
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 9417000 bits/sec, 12249 packets/sec
5 minute output rate 332242000 bits/sec, 27847 packets/sec
5330787705 packets input, 391268847429 bytes, 0 no buffer
Received 4760590 broadcasts (3847052 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 3847052 multicast, 0 pause input
0 input packets with dribble condition detected
11296428851 packets output, 16854517232017 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 PAUSE output
0 output buffer failures, 0 output buffers swapped out
Would anyone have a brilliant idea on how to figure out this issue?
Thanks for reading and, maybe, helping!
Gaëtan
03-30-2011 03:48 AM
If you copy between hosts on the same switch, i.e. switch C, do you get the same problem?
cheers
Ben
03-30-2011 05:42 AM
Hi Ben,
When I copy files from/to hosts on the same switch (2960S), I can reach around 100 MBps whichever way it is done.
This makes me think the switch backplane is fine.
03-30-2011 07:03 AM
By the way, "show buffers" on the 2960S reports some very interesting things:
Interface buffer pools:
IPC buffers, 2048 bytes (total 300, permanent 300):
299 in free list (150 min, 500 max allowed)
1 hits, 0 fallbacks, 0 trims, 0 created
0 failures (0 no memory)
RxQFB buffers, 2168 bytes (total 150, permanent 150):
150 in free list (0 min, 150 max allowed)
4 hits, 0 misses
RxQ1 buffers, 2168 bytes (total 153, permanent 153):
25 in free list (0 min, 153 max allowed)
7809762 hits, 0 fallbacks
RxQ4 buffers, 2168 bytes (total 76, permanent 76):
12 in free list (0 min, 76 max allowed)
1887291 hits, 0 misses
RxQ15 buffers, 2168 bytes (total 4, permanent 4):
0 in free list (0 min, 4 max allowed)
60096797 hits, 60096793 misses
RxQ10 buffers, 2168 bytes (total 19, permanent 19):
3 in free list (0 min, 19 max allowed)
13379624 hits, 0 fallbacks
RxQ12 buffers, 2168 bytes (total 19, permanent 19):
3 in free list (0 min, 19 max allowed)
16 hits, 0 misses
RxQ7 buffers, 2168 bytes (total 96, permanent 96):
27 in free list (0 min, 96 max allowed)
408502 hits, 0 misses
RxQ8 buffers, 2168 bytes (total 38, permanent 38):
6 in free list (0 min, 38 max allowed)
4964544 hits, 0 misses
RxQ6 buffers, 2168 bytes (total 76, permanent 76):
12 in free list (0 min, 76 max allowed)
64 hits, 0 misses
RxQ3 buffers, 2168 bytes (total 19, permanent 19):
3 in free list (0 min, 19 max allowed)
18310268 hits, 4 fallbacks
RxQ11 buffers, 2168 bytes (total 4, permanent 4):
0 in free list (0 min, 4 max allowed)
4 hits, 0 misses
Jumbo buffers, 9368 bytes (total 200, permanent 200):
200 in free list (0 min, 200 max allowed)
0 hits, 0 misses
#sh platform port-asic stats drop gigabitEthernet 0/48
Interface Gi0/48 TxQueue Drop Statistics
Queue 0
Weight 0 Frames 0
Weight 1 Frames 0
Weight 2 Frames 0
Queue 1
Weight 0 Frames 0
Weight 1 Frames 0
Weight 2 Frames 0
Queue 2
Weight 0 Frames 0
Weight 1 Frames 0
Weight 2 Frames 0
Queue 3
Weight 0 Frames 0
Weight 1 Frames 0
Weight 2 Frames 51366043 (incrementing a lot)
Queue 4
Weight 0 Frames 0
Weight 1 Frames 0
Weight 2 Frames 0
Queue 5
Weight 0 Frames 0
Weight 1 Frames 0
Weight 2 Frames 0
Queue 6
Weight 0 Frames 0
Weight 1 Frames 0
Weight 2 Frames 0
Queue 7
Weight 0 Frames 0
Weight 1 Frames 0
Weight 2 Frames 0
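Those Queue 3 drops, combined with the RxQ15 misses above, typically point at egress buffer exhaustion on the port rather than a physical problem. A common mitigation on this platform is to tune the output queue-set buffer allocation and thresholds; the fragment below is only a sketch (the values are illustrative and would need tuning for the actual traffic mix, and MLS QoS must be enabled for queue-sets to apply):

```
! Sketch only - check which queue-set the port uses first:
!   show mls qos interface gigabitEthernet 0/48 queueing
conf t
 mls qos queue-set output 1 buffers 15 30 35 20
 mls qos queue-set output 1 threshold 3 3100 3100 100 3200
end
```

Conversely, testing with "no mls qos" (a single default queue) can help confirm whether the drops are queueing-related at all.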
03-30-2011 08:00 AM
Are you using the same uplink ports on each switch? Have you tried different ports to see if the problem follows?
03-30-2011 09:35 AM
No, I'm not using the same uplink port on each switch.
Yes, I've tried another port, but no luck.
03-30-2011 11:05 AM
Are both ports 1000 Mbps full duplex? If you set them to 100 Mbps full, does it run OK? Not sure if this helps, but it may help nail it down.
03-30-2011 11:12 AM
Ports on both sides are forced to 1000/full duplex.
I can't change to 100/full since the link already carries 400/500 Mbps.
03-30-2011 11:50 AM
Does it run OK at 100 Mbps, or do you get the same low speed?
03-31-2011 12:32 AM
Hi Pete,
I can't say. It's always been configured at 1000/full, and it's too late to drop back to 100/full (there's too much traffic on it already).
03-31-2011 02:49 AM
Can you do a "show interface" on the other switch where the uplink connects and post it, please?
03-31-2011 02:58 AM
#sh interfaces gigabitEthernet 0/18
GigabitEthernet0/18 is up, line protocol is up (connected)
Hardware is Gigabit Ethernet, address is 0013.c3d7.f512 (bia 0013.c3d7.f512)
Description: *** Baie 4 ***
MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 88/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX
input flow-control is off, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:27, output 00:00:00, output hang never
Last clearing of "show interface" counters never
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 348747000 bits/sec, 29219 packets/sec
5 minute output rate 7612000 bits/sec, 13030 packets/sec
40993769387 packets input, 60874292793398 bytes, 0 no buffer
Received 572684 broadcasts (0 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 524923 multicast, 0 pause input
0 input packets with dribble condition detected
16352075795 packets output, 1346744462632 bytes, 0 underruns
0 output errors, 0 collisions, 11 interface resets
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 PAUSE output
0 output buffer failures, 0 output buffers swapped out
#sh platform port-asic stats drop gigabitEthernet 0/18
Interface Gi0/18 TxQueue Drop Statistics
Queue 0
Weight 0 Frames 0
Weight 1 Frames 0
Weight 2 Frames 0
Queue 1
Weight 0 Frames 0
Weight 1 Frames 0
Weight 2 Frames 0
Queue 2
Weight 0 Frames 0
Weight 1 Frames 0
Weight 2 Frames 0
Queue 3
Weight 0 Frames 0
Weight 1 Frames 0
Weight 2 Frames 0
The IOS on the 2960S was upgraded to the latest version last night, but unfortunately that didn't change anything...
03-31-2011 03:09 AM
Do you use multicast?
03-31-2011 03:12 AM
There might be a couple of multicast packets between our firewalls for redundancy (BSD).
03-31-2011 03:14 AM
The reason I asked is that one side of the link shows multicast packets, while the other side shows only broadcasts and no multicast.
Could different switch models have different default IGMP settings?
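One way to check that theory would be to compare the IGMP snooping state on both switches; these are standard IOS show commands, so the comparison is non-disruptive:

```
show ip igmp snooping
show ip igmp snooping querier
show ip igmp snooping groups
```

If snooping or querier behavior differs between the 2970G and the 2960S, unconstrained multicast flooding on one side could explain the asymmetric counters.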