09-23-2014 08:17 AM - edited 03-07-2019 08:51 PM
I have a point-to-point 100 Mbps circuit that I use to route traffic between a 6506 in Covina, CA and a 3850 stack in Raleigh, NC. The performance of VDI terminals in NC connecting to CA has been poor. I used iperf to check the bandwidth for a single connection at various window sizes.
NC side is a Windows Server 2008 machine
CA side is a CentOS 6.5 machine
Winsize is the TCP window size in KB -- both sides were set to the same value
RTT is the round-trip time measured using ping from NC
Calculated value is (TCP window size)/RTT in Mbps (see the worked example below)
NC2CA is the measured bandwidth sending from NC to CA in Mbps
CA2NC is the measured bandwidth sending from CA to NC in Mbps
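To make the calculated column concrete (illustrative numbers, not my measured ones): with a 64 KB window and a 70 ms RTT, a single TCP connection tops out at about 64 * 1024 * 8 / 0.070 ≈ 7.5 Mbps, no matter how big the circuit is. Turning it around, filling the full 100 Mbps at a 70 ms RTT would need a window of roughly 100,000,000 * 0.070 / 8 ≈ 875,000 bytes (~875 KB).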
The queuing strategy for the ports at either end of the point-to-point and to the respective servers is set to FIFO.
Time Warner insists their circuit is working correctly, so I am trying to find something wrong on the 6506 and/or the 3850 switches. Problem is, everything looks right to me. However, I am no expert, so I hope someone here can tell me where to start looking.
Port on the 6506
#show interfaces gi1/3/9
GigabitEthernet1/3/9 is up, line protocol is up (connected)
Hardware is C6k 1000Mb 802.3, address is 70ca.9b65.3bc0 (bia 70ca.9b65.3bc0)
MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s, media type is 10/100/1000BaseT
input flow-control is off, output flow-control is on
Clock mode is auto
ARP type: ARPA, ARP Timeout 04:00:00
Last input 19w4d, output never, output hang never
Last clearing of "show interface" counters 4d19h
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 719000 bits/sec, 567 packets/sec
5 minute output rate 3524000 bits/sec, 632 packets/sec
53678124 packets input, 9273723417 bytes, 0 no buffer
Received 214987 broadcasts (214983 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 0 multicast, 0 pause input
0 input packets with dribble condition detected
81201059 packets output, 77569738141 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 PAUSE output
0 output buffer failures, 0 output buffers swapped out
Port on the 3850
#show interface gi 1/0/1
GigabitEthernet1/0/1 is up, line protocol is up (connected)
Hardware is Gigabit Ethernet, address is 189c.5dda.6b81 (bia 189c.5dda.6b81)
MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX
input flow-control is off, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:39, output never, output hang never
Last clearing of "show interface" counters never
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 2931000 bits/sec, 545 packets/sec
5 minute output rate 651000 bits/sec, 520 packets/sec
158108261 packets input, 2247787225 bytes, 0 no buffer
Received 18501 broadcasts (17419 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 17419 multicast, 0 pause input
0 input packets with dribble condition detected
114985635 packets output, 1843157338 bytes, 0 underruns
0 output errors, 0 collisions, 1 interface resets
0 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 pause output
0 output buffer failures, 0 output buffers swapped out
09-24-2014 01:49 PM
I once had a similar problem; hardcoding the speed on the local Ethernet interfaces cleared the issue -- something to do with clocking with the carrier. Don't ask me why, but it's worth a try.
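Something along these lines on the interfaces at each end (port name taken from your output above; whether a copper gig port accepts a forced 1000 depends on the platform):
interface GigabitEthernet1/3/9
 speed 1000
 duplex full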
09-24-2014 02:20 PM
Thanks.
That was the first thing TW asked me to do. I set the interfaces at both ends to 1000/full with no success.
The only change so far is that TW did manage to throttle the 900 Mbps connection back down to under 100 Mbps.