03-04-2010 12:52 PM - edited 03-06-2019 09:59 AM
We created a NIC team on a Dell PowerEdge R710 server. The two NICs connect to a Cisco Catalyst 3560G switch without any issues. If we unplug one cable, the server still works and stays connected to the network. Do we need to configure EtherChannel on those two ports?
Here is the current configuration:
interface GigabitEthernet0/16
 switchport trunk encapsulation dot1q
 switchport trunk native vlan 200
 switchport mode trunk
 spanning-tree portfast
interface GigabitEthernet0/17
 switchport trunk encapsulation dot1q
 switchport trunk native vlan 200
 switchport mode trunk
 spanning-tree portfast
03-05-2010 12:13 PM
The servers support LACP, so can you change channel-group 1 mode on to channel-group 1 mode active? Also do the same on the server side and make sure LACP is turned on. Did you add switchport mode trunk and switchport trunk native vlan 200 to the physical ports as well?
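As a rough sketch of the switch side, assuming the two ports are bundled into Port-channel 1 (implied by the channel-group 1 command above), the configuration would look something like this:
interface Port-channel1
 switchport trunk encapsulation dot1q
 switchport trunk native vlan 200
 switchport mode trunk
!
interface GigabitEthernet0/16
 switchport trunk encapsulation dot1q
 switchport trunk native vlan 200
 switchport mode trunk
 ! active = the switch initiates LACP negotiation with the server team
 channel-group 1 mode active
!
interface GigabitEthernet0/17
 switchport trunk encapsulation dot1q
 switchport trunk native vlan 200
 switchport mode trunk
 channel-group 1 mode active
The trunk settings on the physical ports have to match the port-channel; otherwise the members get suspended from the bundle.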
HTH
Reza
03-05-2010 12:51 PM
I can't see where to turn on LACP. How do you turn on LACP on the Dell server?
03-05-2010 01:02 PM
I am not a server guy, but I have seen server guys doing it. When you create NIC teaming using the Broadcom software on a Microsoft OS, there is an option to choose LACP or some other Microsoft link aggregation, and you want to pick LACP. If you are using an ESX server, I think it has its own software for NIC teaming.
Reza
03-06-2010 05:43 AM
I can't see where to turn on LACP. How do you turn on LACP on the Dell server?
Hi,
Check out the link below from HP on adapter teaming at the server level; hope that helps.
ftp://ftp.compaq.com/pub/products/servers/networking/TeamingWP.pdf
Ganesh.H
03-08-2010 10:34 AM
I re-created the teaming on the Dell by selecting LACP. Now I can access the network. But so what? I don't see any difference. Let me go back to my original question.
My original teaming on the Dell was set up to use Smart Load Balancing and Failover, without adding channel-group 1 mode active. It worked and the speed showed as 1 Gbps. Now I have configured the teaming to use LACP with channel-group 1 mode active, and the speed still shows 1 Gbps.
1. Can I get double the speed if I create NIC teaming? Or does it just give me failover?
2. If it does give me more speed, how can we tell when the Dell connection still shows 1 Gbps? Or is there a tool we can use to test it?
3. What is the difference between load balance/failover and LACP? Which is better?
4. If the LACP setting doesn't give me more speed, should we go back to the original configuration without LACP and channel-group 1 mode active?
03-08-2010 10:50 AM
Each individual NIC can only speak at 1 Gbps, so your connection "speed" will always show 1 Gbps on the Dell. That is the physical connection speed. However, by using LACP you are actually rolling those two NICs together into a single pipe and therefore getting a full 2 Gbps of aggregate throughput.
The cool thing about LACP (and EtherChannel, for that matter) is that you get link aggregation while also getting redundancy. It kills the bandwidth bottleneck and the redundancy problem with one solution.
But yeah, don't expect to see your NIC speed show 2 Gbps. It's a logical 2 Gbps connection, not a physical one.
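One caveat: any single flow still hashes onto one physical member link, so it tops out at 1 Gbps; only the aggregate of multiple flows approaches 2 Gbps. On the 3560G the hash input is a global setting, and you can change it if the default MAC-based hashing doesn't spread your traffic well. A rough sketch (the exact keywords depend on the IOS version):
! global configuration mode: hash on source/destination IP instead of MAC
port-channel load-balance src-dst-ip
! exec mode: confirm the method currently in use
show etherchannel load-balance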
You would have to test it to prove it, although I am sure Dell already has a white paper out there where someone has done the testing for you.
On a quick search I came across this document: http://www.cisco.com/en/US/prod/collateral/switches/ps6746/ps8742/ps8764/white_paper_c07-443792.pdf where it says:
LACP based teaming extends the functionality by allowing the team to receive load-balanced traffic from the network. This requires that the switch can load balance the traffic across the ports connected to the server NIC team. LACP based load balancing is done on the L2 address. The team of NICs looks like a larger single NIC to the switch, much like an EtherChannel looks between switches. Redundancy is built into the protocol. The Cisco Catalyst Blade Switch 3130 supports the IEEE 802.3ad standard and Gigabit port channels. Servers can now operate in Active/Active configurations. This means that each server team can provide 2 Gigabit of Ethernet Connectivity to the Switching fabric. Failover mechanisms are automatically built into the LACP protocol. The pair of CBS3130s must be in the same ring for the server to support LACP connections. In other words, the server must see the same switch on both interfaces. Otherwise, the user most likely will use the SLB mode.
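To see whether the bundle actually formed on the switch side, a few standard Catalyst show commands are enough (a quick sketch; the interface and group numbers assume the Port-channel 1 from earlier in the thread):
! Po1 should list Gi0/16 and Gi0/17 with the P (bundled) flag
show etherchannel summary
! per-port LACP state and partner (server) information
show lacp neighbor
! the aggregate logical interface, including its bandwidth
show interfaces port-channel 1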
Hope that helps.
James
03-08-2010 12:01 PM
James, thank you for the detailed information. After reading your post, I have decided to keep the LACP configuration. What are the disadvantages of using LACP?
This server is our GIS server running SQL. It comes with 4 NICs. Should I create one team with all 4 NICs, with two connecting to one switch and the other two connecting to another Cisco switch of the same model, with the same port configuration?
We have a lot of HP servers. When we create a NIC team with two NICs on an HP server, it always shows 2 Gb. But the Dell engineer said that is misleading or marketing; you won't really get 2 Gb.
03-08-2010 12:39 PM
I'd do all four NICs, but it really depends on your network design.
If you trunk the VLANs across two switches and the link between the switches is greater than 1 Gbps, then I would set up the aggregation across all four NICs. If you just have two switches with a 1 Gbps link between them, then maybe you do two different aggregations, one aggregation (two NICs) per switch. Of course, if you have two different aggregations, then you would need two different IP addresses (one per aggregation), and your application would need to know about both.
Again, the right answer depends on your design. If you can trunk the VLAN across two separate switches at more than 1 Gbps, then use all four. If you are limited to a 1 Gbps trunk, then I would probably have two different aggregations and adjust the application accordingly.
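A rough sketch of what the two-aggregation option could look like; the second switch's interface numbers are purely illustrative:
! Switch 1 - existing team on Gi0/16-17 (Port-channel 1, as configured earlier)
!
! Switch 2 - second two-NIC team
interface Port-channel1
 switchport trunk encapsulation dot1q
 switchport trunk native vlan 200
 switchport mode trunk
!
interface range GigabitEthernet0/18 - 19
 switchport trunk encapsulation dot1q
 switchport trunk native vlan 200
 switchport mode trunk
 channel-group 1 mode active
Each aggregation shows up on the server as its own logical NIC with its own IP address.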
James