
Port Channel and IBM Blade Centers

Level 1

Hi all,

I am looking to deploy the Nexus 1000v to our ESX hosts, but I have a question about port channels in our environment.

Our configuration:

vSphere 4.1

IBM Blade Center H

HS22 blades (one 1 Gb connection to each Cisco switch module)

Cisco Catalyst Switch Module for IBM BladeCenter - 4 modules per BladeCenter

Upstream Cisco 6513 with multiple gigabit line cards

Each switch module has four 1 Gb connections to the Cisco 6513, configured as an LACP port-channel with the ports spread across different line cards in the 6513.  Each BladeCenter therefore has four port-channels connecting to the 6513.
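For reference, the LACP uplink bundle on each switch module might look roughly like this sketch (the interface and channel-group numbers are assumptions, not taken from this environment):

```
! One blade switch module's four uplinks to the 6513 (numbers assumed)
interface range GigabitEthernet0/17 - 20
 description Uplink to 6513 (spread across line cards)
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 1 mode active    ! LACP
!
interface Port-channel1
 switchport trunk encapsulation dot1q
 switchport mode trunk
```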

On the ESX host, I currently have the standard VMware vSwitch configuration:

vSwitch0 - vmnic0 - vMotion and Management

vSwitch1 - vmnic1/2/3 - VMs (multiple VLANs), FT, and redundant Management

vSwitch1 is configured with three VLANs and port groups

With vSwitch1, I am aggregating the three 1 Gb connections for bandwidth and redundancy.

Reading the Getting Started Guide for the 1000v, Release 4.2(1)SV1(4), I found the following:


The server administrator should not assign more than one uplink on the same VLAN without port channels.  Assigning more than one uplink on the same host is not supported for the following:

- A profile without port channels

- Port profiles that share one or more vLANs


With my hardware there is no way to establish a port channel from the ESX host to the Cisco Catalyst Switch Module, since I have only one link to each module.

Does the above entry in the guide mean that I will not be able to aggregate the three 1 Gb connections to maximize bandwidth and provide redundancy?


8 Replies

Robert Burns
Cisco Employee


You definitely can aggregate your traffic with your hardware.  What this is saying is that if you use multiple uplink port profiles on the 1000v, you can't use the same VLAN across different uplink port profiles.  The 1000v wouldn't know how to choose an uplink in that case, so follow that rule.  Here's how it would look.

1000v Side

Uplink Port Profile 1 - Management Uplink

VLANs allowed: VMotion & Management

Channeling method: MAC pinning

Uplink Port Profile 2 - VM Uplink

VLANs allowed: all VLANs, excluding VMotion & Management

Channeling method: static PC (mode on) or LACP (mode active)

3120 Chassis Blade Switch Side

vmnic0's port - configured as a trunk with PortFast enabled, allowing only the VMotion & Management VLANs

vmnic1/2/3's ports - each host should have these NICs configured in a static (mode on) or LACP (mode active) channel group, with PortFast enabled, allowing all VLANs excluding VMotion & Management.

With port channeling it's imperative that both sides match (either both static or both LACP).
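On the 1000v side, the two uplink port profiles described above could be sketched roughly like this (the profile names and VLAN IDs are illustrative assumptions, not taken from the thread):

```
port-profile type ethernet Mgmt-Uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 202-203     ! VMotion & Management (assumed IDs)
  channel-group auto mode on mac-pinning    ! MAC pinning - no upstream channel needed
  no shutdown
  system vlan 202
  state enabled

port-profile type ethernet VM-Uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 105         ! VM VLANs (assumed ID)
  channel-group auto mode active            ! LACP - must match the switch side
  no shutdown
  state enabled
```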

This would give you an aggregated 3 Gb connection between the 1000v and the 3120 blade switch stack.  As for the 4 uplinks between the 3120 and the 6513, you could also channel that interconnection with LACP or static PCs.

Let me know if you have any questions.



Thank you for the reply.

The Cisco Catalyst Switch Module is the 3012 option, not the 3110G or 3110X.  The 3012 model does not allow you to stack the switches.

Since I cannot stack the switches, I believe I am still out of luck.

Do you agree?


In this case you will need to use mac-pinning for both uplink port profiles.  Though the three links in your VM uplink port profile will not be aggregated, all links will be utilized for forwarding.
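With mac-pinning the upstream 3012 ports stay as plain (un-channeled) trunks; only the 1000v profile changes.  A rough sketch, with the profile name and VLAN ID assumed:

```
port-profile type ethernet VM-Uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 105        ! assumed VM VLAN
  channel-group auto mode on mac-pinning   ! each vEth is pinned to one member NIC
  no shutdown
  state enabled
```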



Then it seems to me this would have the same configuration/result as using a standard vSwitch with three ports assigned to it.

Is there any advantage to trying to swap out my 3012 modules for the 3110Gs that support a switch stack and EtherChannel?  According to VMware, at least with a standard or distributed vSwitch, each VM's NIC will choose one and only one uplink NIC to forward through.  What is the advantage of the EtherChannel, then?  I am not referring to the uplink ports from the 30xx to the upstream 6513.

Going back to the question of the port profiles, I currently have two IP addresses assigned within VLAN 202.  These are for primary and backup Management.  I am allowing VLAN 202 to be forwarded over both groups (vmnic0 and vmnic1/2/3).  Based on what you wrote above, I will not be able to use VLAN 202 for both port profiles.  I will then need to have the backup Management IP in a different VLAN on the vmnic1/2/3 group.  Does that sound correct?




Yes - Mac-pinning is essentially the same as the vSwitch default hashing method (route based on virtual port ID).

If you swap your current modules for the 31xx modules, you will be able to utilize EtherChannel for both your vSwitch/vDS/1000v uplinks and the chassis-to-6513 uplinks.  By default the vSwitch is not configured to work with an aggregated link such as an EtherChannel.  In order to benefit from a multi-NIC vSwitch uplink, you need to change the hashing to "Route based on IP hash".  An EtherChannel will always give you far better load balancing across your member NICs than MAC pinning or the default vSwitch hashing can provide.  The added benefit of the 1000v with EtherChannel is that it can also utilize 802.3ad (LACP) negotiation (mode active).  This provides superior link monitoring versus a static port channel (mode on), the only EtherChannel type a VMware vSwitch/vDS can support.
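If you did move to stackable 31xx modules and the vSwitch "Route based on IP hash" policy, the matching switch side would be a static (mode on) channel, sketched below with assumed interface and channel-group numbers:

```
! Static EtherChannel toward vmnic1/2/3 - the only type a vSwitch/vDS supports
interface range GigabitEthernet1/0/1 - 3
 switchport mode trunk
 channel-group 2 mode on     ! static; no LACP/PAgP negotiation with an IP-hash vSwitch
```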

In regards to your dual management interfaces - correct - you can't use the same VLAN across multiple uplink port profiles.  Might I suggest you add a secondary/backup Management interface in your VMotion VLAN.  This interface is only used if the primary one fails.  Keep in mind you can only have one default gateway, so using the backup Management interface would require it to be on the same Layer 2 subnet.

As for your suggested NIC configuration, that looks good.  One thing to be careful of is your VMotion interface.  A vMotion will essentially flood the NIC momentarily while it is occurring.  This can cause a loss of management heartbeats between your host and vCenter.  If you're not using any FT-enabled VMs (FT is still a work in progress, in my opinion), I would swap your FT & VMotion.  This would give you more bandwidth where it's needed.




Thanks for the information.

I only have 4 NICs available.  If I assign two to management and FT, then I only have two left for the VM traffic.

Why not just create one profile with all four NICs?  Then I would have redundancy for Management, FT, VMotion and the VM traffic.

Would there be any issues with that?




You could definitely do that as well.  That would be the simplest way of doing things: one uplink port profile allowing all VLANs.

Do ensure you've configured the upstream 3xxx ports with PortFast.
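A single all-VLAN uplink profile plus a matching blade switch port could be sketched as follows (VLAN IDs, profile name, and interface number are assumptions):

```
! 1000v side - one uplink profile carrying everything
port-profile type ethernet All-Uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 105,202-204
  channel-group auto mode on mac-pinning
  no shutdown
  system vlan 202,204                  ! mgmt + control/packet VLANs
  state enabled

! 3012 side - server-facing trunk with PortFast for a trunk port
interface GigabitEthernet0/1
 switchport mode trunk
 switchport trunk allowed vlan 105,202-204
 spanning-tree portfast trunk
```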



Also what would you recommend for the uplink groups?

VMware recommends having a dedicated NIC for VMotion, FT and management; however, I only have the 4 NICs to work with.

In addition, I believe I need more than one NIC for the VSM to communicate with the host for fault tolerance.

I am thinking of setting up this configuration:


- VMotion, Management - VLAN 202

- Management backup - VLAN 203

- FT - VLAN 203

- VM traffic - VLAN 105

- N1000v system (control/packet/mgmt) - VLAN 204
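The VLAN plan above would map onto vEthernet port profiles roughly like this sketch (profile names are assumptions):

```
port-profile type vethernet VM-Traffic
  vmware port-group
  switchport mode access
  switchport access vlan 105
  no shutdown
  state enabled

port-profile type vethernet N1kv-System
  vmware port-group
  switchport mode access
  switchport access vlan 204
  system vlan 204              ! control/packet must be system VLANs
  no shutdown
  state enabled
```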

What do you think or suggest?