LAG Speed question

jani80k
Level 1

Dear all,

 

I know there have been many discussions on this topic but I still have one open point I am not sure about.


According to other discussions on these forums, LACP/LAG is only beneficial when there is more than one connection/session.

 

So I did the following: I started two iperf3 servers on a Linux-based machine that is connected via LAG to an SG300-20 switch, each server on its own dedicated port.

 

Then I started two iperf3 clients on a Windows-based machine with an Intel I350-T2 NIC team.
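
For reference, I drive the two clients roughly like this (a minimal sketch; the address and ports are placeholders for my setup):

```python
# Minimal sketch of my test driver (the address and ports are placeholders;
# assumes two iperf3 servers are already listening on ports 5201 and 5202
# on the Linux box).
import subprocess

SERVER_IP = "192.168.1.10"   # placeholder: the Linux server's address
PORTS = [5201, 5202]         # one iperf3 server instance per port

# Launch both clients at once so the two streams run concurrently.
clients = [
    subprocess.Popen(["iperf3", "-c", SERVER_IP, "-p", str(port), "-t", "30"])
    for port in PORTS
]
for proc in clients:
    proc.wait()
```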

 

I still only get a total throughput equivalent to 1 Gb/s even though I am running two clients and two servers. Should I not get past the 1 Gb/s mark with this setup?

 

Best regards,

Jani

9 Replies

Philip D'Ath
VIP Alumni

You will only get more throughput if the clients actually end up on different LAG links. Without knowing the hashing algorithm configured, there is roughly a 50% chance of a particular stream landing on a particular LAG member (when there are 2 members), so with two streams there is also about a 50% chance they both hash to the same member.

 

So yes, it is quite possible both of your streams only went down one pipe.

As Philip mentions, a "good" hashing algorithm choice will divide the traffic evenly; a poor choice, less so, or not at all.

Cisco devices don't all offer the same hashing algorithms, and even when they overlap, they often use different defaults for the algorithm choice.

Further, many hash algorithms are based on MAC or host IP addresses, so if you're testing between a single pair of devices, most hashing algorithms will choose the same link for every flow. A few platforms provide hashing algorithms that can use TCP/UDP port numbers, but many Cisco platforms do not.
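
To illustrate the point (a simplified sketch, not any particular switch's real hash):

```python
# Simplified illustration only (not the SG300's actual algorithm): a MAC- or
# IP-based hash looks only at the address pair, so every flow between the
# same two hosts lands on the same LAG member.
def pick_member(src_mac: str, dst_mac: str, num_links: int) -> int:
    src = int(src_mac.replace(":", ""), 16)
    dst = int(dst_mac.replace(":", ""), 16)
    return (src ^ dst) % num_links

# Two different TCP streams between the same pair of hosts always get the
# same answer, because the hash input never changes:
print(pick_member("00:11:22:33:44:55", "66:77:88:99:aa:bb", 2))
print(pick_member("00:11:22:33:44:55", "66:77:88:99:aa:bb", 2))
```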

I am using two SG300 switches which both only offer MAC or MAC/IP based load-balancing algorithms.

Both switches were left in L2 mode.

My setup is as follows:

 

Linux Server <---LACP(2xCat6a)---> SG300-20 <---LACP(2xCat6a)---> SG300-10 <---LACP(2xCat6a)--->Win10(Intel I350-T2)

 

Could the bottleneck be the connection between both switches? 

If you're doing L2 end-to-end, from one Linux server to one Win10 host, the MACs and IPs will be the same for all flows between them. If so, as noted, all such flows will hash to the same link.

Is there any way to increase the connection speed with my equipment?

Use multiple hosts (on either or both sides).

So I would connect both the Linux server's and the Windows 10 client's NICs as individual connections with individual IPs.

Since they are connected to separate Cisco switches, should I connect the switches to each other via LACP or without any aggregation?

I am contemplating using SMB Multichannel, which seems to work well if both the client and the server are connected to the same switch with multiple individually configured NICs.

You could try that, or if your host(s) support multiple IPs on the same interface, you might try that instead.

As to using separate paths rather than a bundle, the former can be more deterministic, although it may lack redundancy. Remember, bundles don't account for link usage, so it's possible heavy flows will hash to the same link even when another link is lightly loaded. Generally, figure a dual-link bundle will average (with good hashing) about a 50% increase in bandwidth capacity.
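
Here's a quick back-of-the-envelope sketch of where that ~50% figure comes from (my own simplified model, assuming every flow could fill a link by itself and flows hash uniformly at random):

```python
# Back-of-the-envelope model (a simplification): each flow could fill a whole
# link on its own, and flows are hashed uniformly at random across the bundle
# members. Aggregate throughput is then the number of distinct links hit.
import random

def expected_throughput(num_flows: int = 2, links: int = 2, trials: int = 100_000) -> float:
    total = 0
    for _ in range(trials):
        used = {random.randrange(links) for _ in range(num_flows)}
        total += len(used)   # each distinct link contributes one link's worth
    return total / trials

print(expected_throughput())  # ~1.5, i.e. about a 50% gain over a single link
```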

I just tested all possible combinations.

The most stable and performant combination is:

Windows machine <-LAG-> Switch 1 <-LAG-> Switch 2 <- NIC 1 of Linux machine

Windows machine <-LAG-> Switch 1 <-LAG-> Switch 2 <- NIC 2 of Linux machine

This gives me almost 2 Gb/s from the Windows machine to the Linux machine: 1 Gb/s connecting to each individual IP of the Linux machine, unfortunately only when using iperf.

So LAG works and the switches are doing their job.

Under SMB, even with Multichannel enabled, I could not get past the 1 Gb/s mark, but that is not a question I should ask on a Cisco forum.

Since the Cisco equipment is doing its job impressively well and what remains is most likely a Linux configuration/stability issue (SMB has not been marked stable under Linux yet), I consider this problem solved from a Cisco perspective.

Thanks to everyone for their inputs!