Mac having trouble sending more than 1 Gbps with iperf

carl_townshend
Spotlight

Hi guys, I am looking for some ideas on how to troubleshoot an issue.

We have a powerful Apple Mac used for design work. This machine has a 10G copper connection to a 9200L, which has a 10G fiber uplink back to a comms room where the server is located, also on 10G.

We have done some performance testing using iperf and are unable to get much past 1 Gbps.

Any ideas why we wouldn't get more than this, and where to look?

Should I start with CPU utilisation on the Mac?

Are there any network card settings I should look at - TCP offload, MSS, maybe jumbo frames?

Also, when connecting to the server, who decides the window size? If I send a SYN to the server, it sends its window size in the SYN-ACK; is that what we start with and then ramp up from, or can the client use its own window size?

Cheers

Carl

 

5 Replies

Joseph W. Doherty
Hall of Fame

Is your iperf test using TCP?  If so, try a UDP test.  This might quickly identify if you're dealing with a hardware performance issue or a TCP performance issue.

To your questions: the TCP receiver tells the sender how much data can be sent to it (its receive window), and the sender will not send more than that. Each side advertises its own receive window, so the window in the server's SYN-ACK governs what the client may send to the server, and the client's own advertised window governs the reverse direction. However, the TCP sender ramps up toward that value, using either slow start or congestion avoidance.
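For example, a minimal sketch of both tests with iperf3 (the server address 10.0.0.10 is a placeholder for your server, which should be running "iperf3 -s"):

iperf3 -c 10.0.0.10 -t 30            # plain TCP test for 30 seconds
iperf3 -c 10.0.0.10 -u -b 5G -t 30   # UDP test at a 5 Gbit/s offered rate

If the UDP run moves well past 1 Gbit/s but TCP does not, that points at TCP tuning (window size, etc.) rather than the hardware or the path.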

balaji.bandi
Hall of Fame

Is this a test environment?

I would test as below:

1. What version of iperf are you using? I use iperf3 for a good outcome.

2. Jumbo frames help here.

3. Mac to server back to back - are you able to get 10 Gbps, or near it, when you run iperf?

4. Then test via the switch, with jumbo frames enabled and also with MTU 1500.

5. Try multiple streams and see if the aggregate gets near 10 Gbps (see the example commands after this list).

6. By default it is 1MB only - change to a higher size like -b 10G or 100G.
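For item 5, a minimal sketch of a multi-stream run with iperf3 (10.0.0.10 is a placeholder server address):

iperf3 -s                          # on the server
iperf3 -c 10.0.0.10 -P 4 -t 30     # on the Mac: 4 parallel TCP streams, aggregate reported at the end

If 4 streams together get close to 10 Gbit/s while a single stream stalls around 1 Gbit/s, the bottleneck is most likely per-flow TCP windowing rather than the link or the switch.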

 

 

BB


Hi there

What do you mean by "by default it is 1MB only"?

Hmm, suggestion #6 is a bit unclear to me too.

BTW, concise documentation for iperf3 can be found at https://iperf.fr and https://iperf.fr/iperf-doc.php (both pages are in English). The TCP and UDP tuning sections in the second reference, especially the one on TCP, are worth a read.

Some comments...

"2. jumbo frames helps here."

Certainly possible.

Personally, I'm always reluctant to use jumbo frames because they are NOT standard and it's easy to accidentally trip over an issue with them because of that.

The two things jumbo frames do provide are: data payload efficiency increases slightly, from about 97% to 99% (not a huge gain), and the PPS needed drops significantly. The latter is possibly the most significant aid to hardware struggling to meet a higher bits/sec rate. (BTW, you used to see the same issue decades ago between sending minimum-sized frames vs. 1500-byte frames.)
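As a rough worked example of the PPS point (assuming a 9000-byte jumbo MTU and counting the full on-wire overhead of preamble, header, FCS and inter-frame gap, about 38 bytes per frame):

10,000,000,000 bit/s / (1538 bytes x 8) ≈ 813,000 frames/s with a 1500-byte MTU
10,000,000,000 bit/s / (9038 bytes x 8) ≈ 138,000 frames/s with a 9000-byte jumbo MTU

So at line rate the NIC and driver handle roughly 6x fewer packets per second with jumbo frames, which is where the real relief for struggling hardware comes from.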

"4. Then test via switch and enabling jumbo frames and also MTU 1500."

To be clear, @balaji.bandi is suggesting separate test runs using jumbo MTU vs. 1500 MTU.  Again, as jumbo is not standard, you should ensure the sender doesn't send jumbo frames larger than all downstream devices can process.
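A quick sanity check for that, sketched with placeholder values (9000-byte MTU end to end, 10.0.0.10 as the server), is to ping with the don't-fragment bit set and a payload just under the jumbo MTU (9000 minus 28 bytes of IP/ICMP headers):

ping -D -s 8972 10.0.0.10      # macOS: -D sets the don't-fragment bit
ping -M do -s 8972 10.0.0.10   # Linux equivalent, e.g. from the server side

If any device in the path can't carry the jumbo frame, these pings fail rather than silently fragmenting.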

"5. Try multi stream and see aggrigate you get near by 10GB."

Also possible.

If true, that mostly indicates the per-flow RWIN is not optimally sized for the BDP. If a flow's RWIN is too small, the sender will keep pausing transmission. If a flow's RWIN is too large, the sender will send too fast and excess packets get queued; if they are dropped, that triggers congestion avoidance or possibly slow start. Multiple concurrent flows, in aggregate, can better maintain 100% utilization, but if they overrun the bandwidth you'll see drops and/or delay for packets in those flows. Also, with drops, you're losing bandwidth to retransmissions.
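As a rough illustration of sizing RWIN against BDP (the 1 ms RTT is just an assumed example figure, not a measurement from this network):

BDP = 10 Gbit/s x 0.001 s = 10,000,000 bits ≈ 1.25 MB

So a single flow needs a receive window of at least ~1.25 MB to keep a 10 Gbit/s pipe full at that RTT. With iperf3 you can force a larger socket buffer per stream to test this, e.g.:

iperf3 -c 10.0.0.10 -w 4M -t 30    # 10.0.0.10 is a placeholder; -w sets the socket buffer/window size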

"6. by default it is 1MB only s- change to higher size like -b 10G or 100G"

Another worthwhile suggestion that may help, especially for a performance test.

Also, though, even if you see a big transfer rate improvement, multiple concurrent hosts might be negatively impacted if you optimize everything for optimal single-host performance.

Also, with real hosts at 10G or better, host storage components might struggle at such rates.

Also, host NIC drivers and TCP stacks need to be capable of dealing with 10G, or better, optimally.

Effective/optimal high-speed networking can be more complex than just slapping in a high-speed NIC.
