Low iPerf TCP results over a 1gb circuit

nzChris
Level 1

Hi Guys,

I need some advice here: we have a 1 Gb circuit between site A and site B (500 km apart) which we are told is underperforming. Through iPerf testing I have found that a standard TCP test with no extra parameters only hits a maximum of 160 Mbits/sec.

iperf3 -c x.x.x.x
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 139 MBytes 116 Mbits/sec sender
[ 4] 0.00-10.00 sec 139 MBytes 116 Mbits/sec receiver

This is a point-to-point fibre circuit, terminated on a Nexus 9K at one end and on an ISR 4431 with a 1 Gb throughput license at the remote end (I do understand that the throughput license is a cumulative/aggregate limit).

Pings across the /30 interfaces have a 10 ms response time, which seems reasonable for the distance.
If I send 10 parallel streams or set a window size of 2 MB, we can get around 650-800 Mbits/sec.

iperf3 -c x.x.x.x -P 10
[ ID] Interval Transfer Bandwidth

[SUM] 0.00-10.00 sec 896 MBytes 751 Mbits/sec sender
[SUM] 0.00-10.00 sec 895 MBytes 751 Mbits/sec receiver

iperf3 -c x.x.x.x -w 2000000
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 926 MBytes 777 Mbits/sec sender
[ 4] 0.00-10.00 sec 925 MBytes 776 Mbits/sec receiver


UDP iPerf transfers will not get above 500 Mbits/sec:

iperf3 -c x.x.x.x -u -b 1000m

[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 4] 0.00-10.00 sec 555 MBytes 465 Mbits/sec 0.050 ms 15182/71009 (21%)

The same test to a site 5km (5ms ping) away will get speeds of 900 Mbits/sec from a single stream without the need to manipulate the window size or add parallel streams.

We also have two 200 Mb circuits at this location (500 km away), and they also see iPerf TCP results of up to 160 Mbits/sec, but that is close to the full bandwidth of 200 Mb.

 

So my questions are: should we be able to get faster transmission with a single stream over this distance? Does 160 Mbits/sec seem low for a 1 Gb line? Are our expectations too high?


Cheers,

Chris

6 Replies

Hello,

 

Just to be sure the ISR 4431 actually has the throughput level correctly configured, can you post the output of:

 

ISR4431#show platform hardware throughput level

#show platform hardware throughput level
The current throughput level is 1000000 kb/s

 

cheers

Hello,

 

The throughput level is good (1 Gbit).

 

To find out where the bottleneck actually is, I would get in touch with the ISP, have them do an end-to-end test, and verify that the bandwidth is not throttled somewhere in their path...

Joseph W. Doherty
Hall of Fame

Not all TCP variants do well with LFNs (long fat networks). I'm not sure 1 Gbps over 10 ms qualifies as an LFN, but from your stats, your TCP appears to be behaving much like a variant that doesn't handle one well. If it is . . .

For TCP to fully consume all available bandwidth, the receiving host's RWIN (receive window) has to be large enough to cover the BDP (bandwidth delay product). The fact that your bandwidth utilization jumps (much) when you use multiple streams or a larger RWIN highlights this.
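
As a rough worked example (assuming the 10 ms you measured is the round-trip time): the BDP here is about 1 Gbps x 0.010 s = 10,000,000 bits, i.e. roughly 1.25 MB, so the 2 MB window you tried comfortably covers it, which is why that test nearly fills the pipe. Conversely, a 160 Mbits/sec single-stream ceiling implies an effective window of only about 160 Mbps x 0.010 s / 8, roughly 200 KB, which points at the receiver's default/auto-tuned window, not the circuit, as the limit.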

With higher latency, TCP also takes longer to ramp up. This means a "too small" data transfer will often have an overall lower transfer rate.
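
If you want to separate slow-start ramp-up from steady-state behaviour, a longer run that ignores the first few seconds helps; one possible form, using iperf3's -t (duration) and -O (omit the first n seconds from the results):

iperf3 -c x.x.x.x -t 30 -O 5     (30 second run, first 5 seconds of ramp-up excluded from the averages)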

Also with higher latency, if TCP loses any packets, it takes much longer for it to ramp back up to full speed.

If you're using Linux, you might try something like the TCP BBR variant; otherwise, you'll need a correctly sized RWIN or multiple streams to fully utilize your gig 10 ms link.
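
For example, on a reasonably recent Linux box (kernel 4.9 or later with the tcp_bbr module available), something along these lines would switch congestion control and let the socket buffers grow past the ~1.25 MB BDP; the buffer sizes below are only a starting point, not a tuned recommendation:

modprobe tcp_bbr
sysctl -w net.ipv4.tcp_congestion_control=bbr
sysctl -w net.core.rmem_max=8388608
sysctl -w net.core.wmem_max=8388608
sysctl -w net.ipv4.tcp_rmem="4096 131072 8388608"
sysctl -w net.ipv4.tcp_wmem="4096 131072 8388608"

Then rerun the single-stream iperf3 test; if the window was the limit, a single stream should get much closer to your parallel-stream numbers.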

Notwithstanding the above, I'm surprised by the low throughput of the UDP test. Does it get gig rate to the site that's only 5 km away?

Does your iPerf output record/show any packet loss to the site 500 km away?

What kind of link is the 500 km gig circuit?

Hi Joseph,

Thanks for your response; to answer your questions:
We are running a Windows VM in our core and at each branch site.

UDP to the 5 km site is also quite low, although we can get 900 Mb with TCP.
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 4] 0.00-10.00 sec 449 MBytes 376 Mbits/sec 0.114 ms 0/57430 (0%)

If I do a UDP test in reverse to the 5 km site I get a ton of out-of-order packets and lost traffic.
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 4] 0.00-10.01 sec 982 MBytes 823 Mbits/sec 152.103 ms 116433/125507 (93%)
[SUM] 0.0-10.0 sec 2255 datagrams received out-of-order

UDP from our data centre to the 500 km site shows very little lost traffic:
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 4] 0.00-10.00 sec 456 MBytes 382 Mbits/sec 0.119 ms 167/58360 (0.29%)

If I do the reverse towards our data centre, there are lost packets.
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 4] 0.00-10.00 sec 477 MBytes 400 Mbits/sec 0.043 ms 14386/61014 (24%)

All of the circuits are Carrier Ethernet.

Shouldn't we expect to see lost traffic as this is UDP?

I'm not current on the TCP stacks on all versions of Windows, but I recall that if you're running CTCP, it used to be on by default on servers and off by default on clients. It also seems Server 2019 now supports TCP CUBIC. So how TCP performs under Windows might depend much on the version of Windows and other default settings.
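
If you want to see what the Windows VMs are actually using, something like this should show it (PowerShell on Server 2012 and later; the exact fields and the netsh output vary a bit by Windows version):

Get-NetTCPSetting | Select-Object SettingName, CongestionProvider, AutoTuningLevelLocal
netsh interface tcp show global     (older versions list the congestion provider and autotuning level here)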

The mention of out-of-order and/or lost packets is of concern, as either can greatly reduce performance.

As to losing traffic on UDP, that's certainly possible if you oversubscribe the path's bandwidth. However, if the hosts have gig interfaces, that shouldn't be possible unless other hosts are concurrently using the link.
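
One way to tell sender oversubscription apart from loss in the carrier path is to step the offered UDP rate up across runs until loss appears, e.g. (iperf3, with -l 1400 keeping datagrams under a typical MTU):

iperf3 -c x.x.x.x -u -b 200M -l 1400 -t 20
iperf3 -c x.x.x.x -u -b 400M -l 1400 -t 20
iperf3 -c x.x.x.x -u -b 600M -l 1400 -t 20

If loss starts well below line rate, that points at policing/shaping or congestion in the carrier path rather than at the end hosts.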
