02-07-2018 11:41 AM - edited 03-08-2019 01:45 PM
Dear all,
I know there have been many discussions on this topic but I still have one open point I am not sure about.
According to other discussions on these forums, LACP/LAG will only be beneficial when there is more than one connection/session.
So I did the following: I started two iperf3 servers on a Linux-based machine that is connected via LAG to an SG300-20 switch, each server listening on its own dedicated port.
Then I started two iperf3 clients on a Windows-based machine with an Intel I350-T2 NIC team.
I still only get a total throughput equivalent to 1 Gb/s even though I am running two clients and two servers. Should I not get past the 1 Gb/s mark with this setup?
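The commands were along these lines (the port numbers and the <linux-ip> placeholder are just examples):

    # Linux box: two servers, one per TCP port (5201/5202 chosen arbitrarily)
    iperf3 -s -p 5201 &
    iperf3 -s -p 5202 &

    # Windows box: one client per server, started in two separate terminals
    iperf3 -c <linux-ip> -p 5201 -t 30
    iperf3 -c <linux-ip> -p 5202 -t 30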
Best regards,
Jani
02-07-2018 11:53 AM
You will only get more throughput if the clients actually end up on different LAG links. Without knowing the hashing algorithm configured, there is roughly a 50% chance of a particular stream landing on a particular LAG member (when there are 2 members).
So yes, it is quite possible both of your streams only went down one pipe.
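To illustrate the idea (the real hash is platform-specific, and the MAC bytes below are made up): with MAC-based balancing a switch typically XORs low-order bits of the source and destination MAC addresses and takes the result modulo the number of members:

    # hypothetical last bytes of src/dst MAC: 0x5E and 0x3A, 2-member LAG
    echo $(( (0x5E ^ 0x3A) % 2 ))    # prints 0 -> this flow uses member link 0

Also note that in your test both streams run between the same two hosts, so the src/dst MAC (and IP) pair is identical for both; under MAC- or MAC/IP-based balancing they will therefore always hash to the same member.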
02-07-2018 01:28 PM
I am using two SG300 switches, which both only offer MAC- or MAC/IP-based load-balancing algorithms.
Both switches were left in L2 mode.
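For reference, the balancing mode is a global setting in the SG300 CLI; if I am reading the CLI guide correctly, it is switched like this:

    ! SG300 CLI, global config; modes are src-dst-mac (default) or src-dst-mac-ip
    switch(config)# port-channel load-balance src-dst-mac-ip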
My setup is as follows:
Linux Server <---LACP(2xCat6a)---> SG300-20 <---LACP(2xCat6a)---> SG300-10 <---LACP(2xCat6a)--->Win10(Intel I350-T2)
Could the bottleneck be the connection between both switches?
02-08-2018 03:06 AM
Is there any way to increase the connection speed with my equipment?
02-08-2018 03:32 AM
So I would connect both the Linux server's and the Windows 10 client's NICs as individual connections with individual IPs.
Since they are both connected to separate Cisco switches, do I connect the switches to each other via LACP or without any aggregation?
I am contemplating using SMB Multichannel, which seems to work well when both the client and the server are connected to the same switch with multiple individually configured NICs.
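On the Linux side that would look roughly like this (interface names and addresses are placeholders):

    # give each NIC its own address instead of bonding them (examples only)
    ip addr add 192.168.1.10/24 dev enp3s0
    ip addr add 192.168.1.11/24 dev enp3s1
    ip link set enp3s0 up
    ip link set enp3s1 up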
02-08-2018 06:19 AM - edited 02-08-2018 07:53 AM
You could try that, or if your host(s) support multiple IPs on the same interface, you might try that instead.
As to using different paths rather than a bundle, the former can be more deterministic, although it might lack redundancy. Remember, bundles don't account for link usage, so it's possible for heavy flows to hash onto the same link even when another link is lightly loaded. Generally, figure a dual-link bundle will average (with good hashing) about a 50% increase in bandwidth capacity.
02-08-2018 07:28 AM - edited 02-08-2018 07:29 AM
I just tested all possible combinations.
The most stable and performant combination is:
Windows machine <-LAG-> Switch 1 <-LAG-> Switch 2 <- NIC 1 of Linux machine
Windows machine <-LAG-> Switch 1 <-LAG-> Switch 2 <- NIC 2 of Linux machine
This gives me almost 2 Gb/s from the Windows machine to the Linux machine: 1 Gb/s connecting to each individual IP of the Linux machine; unfortunately, only using iperf...
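That is, two clients in parallel, one per server address (using the placeholder IPs from above):

    # run simultaneously, in two terminals on the Windows machine
    iperf3 -c 192.168.1.10 -t 30
    iperf3 -c 192.168.1.11 -t 30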
So LAG works and the switches are doing their job.
Under SMB, even with multichannel enabled, I could not get past the 1 Gb/s mark... but that is not a question I should ask on a Cisco forum.
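For the record, assuming the share is served by Samba, multichannel is switched on with this smb.conf option (still marked experimental at the time of writing):

    # smb.conf, [global] section (experimental in Samba when this was written)
    server multi channel support = yes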
Since the Cisco equipment is doing its job impressively well and what is left is most likely a Linux configuration/stability issue (SMB multichannel support has not been marked stable under Linux yet), I consider this problem solved from a Cisco perspective.
Thanks to everyone for their inputs!