
Cisco UCS link aggregation

BVC
Level 1

I'm currently investigating link aggregation on Cisco UCS. From my understanding it is supported, but I'm having trouble working out how to configure it. In Cisco IMC I try to select the Networking option from the drop-down menu (I assume aggregation is configured there), but nothing happens when I click it. The UCS box only has one Intel 1Gb network adapter. Do I need a Cisco VIC or another special interface card to access the Networking option in IMC and to create link aggregation?

Accepted Solution

Kirk J
Cisco Employee

LACP can be configured from the OS for the Intel NICs (and the VICs).

This is typically the 'mode=4' statement in the bond config.

From RedHat's definition: "Sets an IEEE 802.3ad dynamic link aggregation policy. Creates aggregation groups that share the same speed and duplex settings. Transmits and receives on all slaves in the active aggregator. Requires a switch that is 802.3ad compliant."

 

Kirk...


8 Replies

Kirk J
Cisco Employee

The ports on VIC adapters can be configured to create a port-channel via LACP, in coordination with the upstream switch(es).

An LACP port-channel can also be configured at the OS level. If you are only using your 1Gb LOM ports, then you would configure it at the OS level via bonding (Linux) or teaming (ESXi) settings.
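On the Linux side, for example, an 802.3ad bond over the two LOM ports could be built with NetworkManager's nmcli roughly as sketched below; the bond0/eno1/eno2 names and the addressing are placeholders, not taken from this thread:

-------nmcli bond sketch (placeholders)-------

# create the bond with LACP (802.3ad) and fast LACPDUs
nmcli connection add type bond con-name bond0 ifname bond0 bond.options "mode=802.3ad,miimon=100,lacp_rate=fast"
# enslave the two LOM ports (interface names are placeholders)
nmcli connection add type bond-slave con-name bond0-port1 ifname eno1 master bond0
nmcli connection add type bond-slave con-name bond0-port2 ifname eno2 master bond0
# static addressing (replace with your own)
nmcli connection modify bond0 ipv4.method manual ipv4.addresses 192.168.1.10/24 ipv4.gateway 192.168.1.1
nmcli connection up bond0

The upstream switch ports must be placed in a port-channel running LACP (on Cisco switches, "channel-group ... mode active"), otherwise the bond will not negotiate.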

 

Kirk...

Hi Kirk, 

 

Thank you very much for your answer; I'm looking into Linux bonding then. Just to confirm: is LACP only available on a Cisco VIC, and not on something like the Intel i350 that came with the UCS, or the LOM ports?

Kirk J
Cisco Employee

LACP can be configured from the OS for the Intel NICs (and the VICs).

This is typically the 'mode=4' statement in the bond config.

From RedHat's definition: "Sets an IEEE 802.3ad dynamic link aggregation policy. Creates aggregation groups that share the same speed and duplex settings. Transmits and receives on all slaves in the active aggregator. Requires a switch that is 802.3ad compliant."
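For illustration, a minimal ifcfg-style sketch of such a bond on RHEL/CentOS (interface names and addressing are placeholders):

-------ifcfg-bond0 (sketch)-------

DEVICE=bond0
NAME=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=4 miimon=100 lacp_rate=1"
BOOTPROTO=none
IPADDR=xxxxxxxxx
NETMASK=xxxxxxxx
GATEWAY=xxxxxxxxx
ONBOOT=yes

-------ifcfg-eno1 (sketch; repeat for each member port)-------

DEVICE=eno1
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
ONBOOT=yes

mode=4 is the same as mode=802.3ad, and lacp_rate=1 requests fast LACPDUs; the switch ports at the other end must be in an LACP port-channel for the negotiation to come up.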

 

Kirk...

Vasyl
Level 1

Hello,

Has anyone succeeded and can share configs for both the Linux and Cisco Nexus sides? I have CentOS on a UCS C3260 M4 with VICs on one side and a Cisco Nexus 9396PX on the other. No luck with the bonding/802.3ad configuration: the Nexus marks both 40G interfaces as "suspended" as soon as I put them into the port-channel. Without the port-channel and bonding configs, everything works fine.

I have another message thread with my configs included:

https://community.cisco.com/t5/unified-computing-system-discussions/cisco-ucs-c3260-m4-and-lacp-bonding-802-3ad-with-linux/td-p/4788807

I have managed to get OL 7.5 working. Below is the config for the team interface and the physical interfaces; repeat the physical interface config for each interface you want in the team. Unfortunately I don't have the config for the Nexus switches.

-------team interface config-------

IPADDR=xxxxxxxxx
NETMASK=xxxxxxxx
GATEWAY=xxxxxxxxx
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
NAME="Team connection 1"
DEVICE=nm-team
ONBOOT=yes
DEVICETYPE=Team
TEAM_CONFIG="{\"device\": \"team0\", \"runner\": {\"name\": \"lacp\", \"fast_rate\": true, \"tx_hash\": [\"eth\", \"ipv4\", \"ipv6\", \"l4\"]}, \"link_watch\": {\"name\": \"ethtool\"}, \"ports\": {\"eno1\": {}, \"eno2\": {}}}"

-------physical interface config-------

DEVICE=eno1
DEVICETYPE=TeamPort
ONBOOT=yes
TEAM_MASTER=nm-team
TEAM_PORT_CONFIG='{"prio": 100}'
NAME="System eno1"

Vasyl
Level 1

Thanks.

I have downgraded my CentOS to 7.9 and even tried configuring without NetworkManager, using pure udev/ifupdown scripts. Still no luck: both interfaces are marked as suspended on the Cisco side. I applied your config to match 100%, except for the interface names; I have enp9s0 and enp10s0 in my setup. No luck.

Could this be down to some specific configuration on the Nexus side? I have had no problems in the past with 1Gb and 10Gb links between Nexus switches and Linux boxes; other configurations used with Nexus worked well, without any special settings. For example, I have a working config on 10Gb ports with a Nexus 5010, even without any "spanning-tree port type" settings.

Now I have problems with 40G links both to Linux and to 40G HPE VC 20/40 F8 modules. The HPE VC 20/40 activates only one interface in the port-channel group and keeps the second as standby. Could this be related to the 40G Ethernet Expansion Module (N9K-M12PQ)? I assume the N9K-C9396PX switch model supports 802.3ad/LACP configuration, yet I was unable to get it working with two other systems.

Any other help? Thanks in advance.
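A few standard checks that might help narrow down where the LACP negotiation is failing (the device and port-channel names below are examples and will differ in your setup):

-------verification (sketch)-------

On the Nexus:
  show port-channel summary
  show lacp neighbor
  show interface port-channel 10

On the Linux host:
  teamdctl nm-team state           # teamd/NetworkManager team
  cat /proc/net/bonding/bond0      # kernel bonding driver

If "show lacp neighbor" lists no partner, the host is not sending LACPDUs (wrong runner or bond mode, or the wrong physical ports are in the team); if a partner shows up but the members stay suspended, compare the member-port and port-channel settings on the switch.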

 

I'm sorry, but I've never worked with Nexus switches before. The Linux server I'm using LACP on is connected to a 9300 switch.

Vasyl
Level 1

I will try to stack two C9396PX switches together with dual 40G links and do the LACP configuration across them. I will update on the outcome once completed.

Besides that, still no luck for Linux <=> C9396PX on dual 40G with LACP, nor for HP VC 20/40 F8 <=> C9396PX on dual 40G with active/active.
