vPC Host Mode Pinning for Nexus 1000v


Introduction

vPC Host Mode on the Nexus 1000v allows the Virtual Ethernet Module (VEM) on a host to group two or more physical uplinks into an EtherChannel bundle, with those uplinks connected to two different upstream physical switches. With vPC Host Mode on the Nexus 1000V, the upstream switches do not need the multi-chassis EtherChannel (MEC) capability that would normally be required to support Active-Active host connectivity. To accomplish this, vPC Host Mode pins traffic from a VM to one (and only one) of the available uplinks in the vPC Host Mode port-channel.

 

vPC Host Mode

When vPC-HM is used, each vEthernet interface on the VEM is mapped to one of two subgroups in round-robin fashion. All traffic from the vEthernet interface uses the assigned subgroup unless that subgroup becomes unavailable, in which case the vEthernet interface fails over to the remaining subgroup. When the original subgroup becomes available again, traffic shifts back to it. Traffic within each subgroup is then balanced according to the configured hashing algorithm. Do not configure vPC-HM on the Cisco Nexus 1000V when the upstream switch ports that connect to the VEMs already have vPC configured; in that case, connectivity can be interrupted or disabled.

There are currently three methods of pinning VM traffic to an uplink:

  • MAC Pinning
  • Subgroup Pinning based on CDP
  • Manual Subgroup Pinning
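
Regardless of which method is chosen, the resulting port-channel and pinning state can be checked from the VSM, and on the VEM itself. The following is a minimal verification sketch; command availability and output format vary by release, so verify against the command reference for your version:

```
! On the VSM: overall port-channel state and member interfaces
n1000v# show port-channel summary
n1000v# show interface port-channel 1

! On the ESX host: the VEM's view of each port, including its
! subgroup (SGID) assignment
~ # vemcmd show port
```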

 

MAC Pinning

 

MAC Pinning simply pins VM traffic in a round-robin fashion to each uplink based on the MAC address of the VM. Service Console and/or VMkernel traffic is also pinned round-robin to an uplink based on the source MAC address. This allows all uplinks to be utilized, but is non-deterministic as to which uplink a particular VM will use. MAC Pinning is the easiest method to deploy and is the recommended channeling option when connecting to upstream switches that do not support MEC. In the diagram below, with MAC Pinning, VM1 might be pinned to the first uplink, VM2 to the second uplink, and so on.

 

Figure 1 (vpcv1.jpg): MAC Pinning, with VM traffic pinned round-robin to individual uplinks

 

To configure MAC Pinning, the channel group must be set to ON and the mac-pinning keyword must be specified as shown below:

 

port-profile type ethernet CNA-Uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10-11,110
  channel-group auto mode on mac-pinning
  no shutdown
  system vlan 10-11
  state enabled

 

 

Subgroup Pinning based on CDP

 

Subgroup Pinning based on CDP is an option that can be used when several uplinks connect to two separate upstream physical switches. CDP discovers which upstream physical switch each uplink is connected to, and all uplinks connected to the same switch are assigned the same subgroup ID. VM traffic is then assigned to a subgroup in round-robin fashion based on the source MAC address. Traffic within a subgroup can additionally be hashed on L2/L3/L4 header information to achieve more efficient load balancing between the links in each subgroup. The following illustrates an example of pinning based on CDP:

 

Figure 2 (vpcv2.jpg): Subgroup Pinning based on CDP, with uplinks grouped into subgroups SG0 and SG1

 

In the example above, VM1 might be pinned to SG0 and traffic from VM1 would be hashed on both of the links in SG0 based on L2/L3/L4 information. Likewise, VM2 might be pinned to SG1 and traffic from VM2 would be hashed on both links in SG1.
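
The hashing algorithm applied within a subgroup is set globally on the VSM with the port-channel load-balance ethernet command. A sketch, assuming a source/destination IP and port hash is desired instead of the default source-MAC method (the exact keyword list varies by release, so check the command reference for your version):

```
n1000v(config)# port-channel load-balance ethernet source-dest-ip-port
n1000v(config)# exit
n1000v# show port-channel load-balance
```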

 

Pinning based on CDP information is configured by setting the mode to ON and specifying sub-group cdp as shown below:

 

port-profile type ethernet CNA-Uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10-11,110
  channel-group auto mode on sub-group cdp
  no shutdown
  system vlan 10-11
  state enabled

 

 

Manual Subgroup Pinning

 

Release 4.0(4)SV1(2) introduced the ability to specify which member uplink in a vPC-HM port-channel VM traffic takes when exiting the VEM. This is done by specifying, within the port-profile configuration, which subgroup a particular port-profile should be pinned to. You can also specify which member uplink Control/Packet VLAN traffic takes, using the pinned-sgid {control-vlan-pinned-sgid | packet-vlan-pinned-sgid} sub-group_id command.

 

The subgroup ID can be assigned automatically via CDP or, as of the 4.0(4)SV1(2) release, manually. Manual assignment is useful when the upstream switch does not support CDP. With automatic detection based on CDP, all links connecting to the same upstream physical switch are assigned the same subgroup ID. In Figure 2 above, suppose we want to pin VM1 and VM2 to SG0 and VM3 and VM4 to SG1, and we also want to make sure Control traffic always prefers SG0. In this case, we create two VM port-profiles and specify the 'pinning id [sg-id]' command in each, pinning all VMs associated with that port-profile to the specified subgroup. In the example configuration below, a VM assigned to the VM-Traffic-SG0 port-profile will use the member uplinks in subgroup 0, and a VM assigned to VM-Traffic-SG1 will use the member uplinks in subgroup 1.

 

port-profile type vethernet VM-Traffic-SG0
  vmware port-group
  switchport mode access
  switchport access vlan 110
  pinning id 0
  no shutdown
  state enabled

 

port-profile type vethernet VM-Traffic-SG1
  vmware port-group
  switchport mode access
  switchport access vlan 110
  pinning id 1
  no shutdown
  state enabled

 

In addition, Control VLAN traffic is pinned to subgroup 0:

 

port-profile type ethernet CNA-Uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10-11,110
  channel-group auto mode on sub-group cdp
  no shutdown
  system vlan 10-11
  pinned-sgid control-vlan-pinned-sgid 0
  state enabled

 

The above example uses subgroup information assigned via CDP. If CDP is not available on the upstream switches, you can configure channel-group auto mode on sub-group manual in the uplink port-profile and then specify sub-group-id [sg-id] under each physical Ethernet interface.
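
As a sketch, the manual variant of the uplink port-profile from the earlier examples might look as follows, with each physical interface then tagged with its subgroup. The interface names Ethernet3/2 and Ethernet3/3 are placeholders, and the exact sub-group-id syntax should be checked against the command reference for your release:

```
port-profile type ethernet CNA-Uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10-11,110
  channel-group auto mode on sub-group manual
  no shutdown
  system vlan 10-11
  state enabled

! Placeholder interface names: links to the left-hand upstream
! switch go in subgroup 0, links to the right-hand switch in 1
interface Ethernet3/2
  sub-group-id 0
interface Ethernet3/3
  sub-group-id 1
```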

 

 

Related Information

 

Install and Configure Nexus 1000v on VMware

Nexus 1000v ERSPAN Configuration

Comments
Beginner

In the manual and auto subgroup-id options, isn't a separate static port-channel configuration needed on the upstream switches?

Like a static port-channel on the left upstream switch, and a SEPARATE static port-channel on the right upstream switch.

Otherwise, for example, the top-left switch would see the same source MAC (VM1) on two different ports (both belonging to subgroup ID 0) when L2/L3/L4 hash-based traffic distribution occurs within that subgroup.

Beginner

I guess it should. I just found this deployment guide:

Cisco Nexus 1000V Series Switches Deployment Guide Version 2 (guide_c07-556626_v2.pdf)

It states:

"When multiple uplinks are attached to the same subgroup, the upstream switch needs to be configured in a PortChannel, bundling those links together. The PortChannel needs to be configured with the option mode on."

Beginner

Question 1

If CDP is enabled on the upstream switches (in my case, the two Fabric Interconnects, FI A and FI B), can I still configure manual sub-groups on the 1000v, going against the CDP information from the upstream switches?

Question 2

My FIs are in End-Host Mode. What is the default CDP behavior on the FI for northbound and southbound traffic? And is there any option in UCS Manager or the CLI to globally enable or disable CDP?

Question 3

If I apply the manual sub-group command on the 1000v and the upstream switch has CDP enabled, which takes priority: the CDP information from the upstream switch, or the manual configuration on the 1000v? Or will the manual sub-groups be reconfigured automatically by upstream CDP?
