
Nexus1000V load balancing

Hello,

Could someone help me clarify this...

In our environment we have a Nexus 1000V. The VEMs are connected to two uplink switches. At this point neither mac-pinning nor vPC-HM is being used, and the Nexus is running the default load-balancing mechanism (source MAC). I see MAC flapping on the uplink switches for servers in vCenter. If source-MAC load balancing were in use, shouldn't the MAC address of a VM be persistent on a specific switch, assuming it is not moved to another ESXi host?

We are planning to change our uplink port-profiles to either mac-pinning or vPC-HM. The documentation states that, in this case, the VMs are associated with an uplink in a round-robin fashion. So what is the use of load balancing in this case? Does load balancing have an effect only if a proper LACP bundle is formed (stackable switches, etc.)?

One last question:

If mac-pinning is used and a link fails, all VM traffic will be sent to the second link. If the first link comes up again, will the traffic for the VMs that were associated with the first link be moved back to it, or will it continue to flow on the second one?

 

Thank you in advance,

Katerina

 

 


7 Replies

Joe LeBlanc
Cisco Employee

Hi Katerina,

Load-balancing mechanisms aren't used outside of a port-channel, unless it is L3 ECMP or FabricPath.

When using MAC pinning, the VMs are associated with the uplinks in a round-robin fashion. If you have 2 uplinks and 8 VMs, 4 VMs are pinned to uplink A and 4 VMs are pinned to uplink B. Hence, load balancing. If you use LACP, the traffic can instead be hashed across all active links in the bundle.
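For reference, here's a minimal sketch of the two variants on an uplink port-profile (the profile names are made up; the channel-group commands are per the N1kV port-channel configuration guides):

port-profile type ethernet Uplink-MacPinning
  channel-group auto mode on mac-pinning
  ! each physical uplink remains a standalone sub-group; no upstream config needed

port-profile type ethernet Uplink-LACP
  channel-group auto mode active
  ! LACP bundle; all upstream member ports must terminate on one logical switch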

If the first link comes back in a MAC-pinning bundle, traffic will be distributed equally across the links again. However, the VMs originally associated with the first link might not be moved back to it. That is to say, the 8 VMs will again be distributed equally across the 2 uplinks, but while VM A was using uplink A originally, it might continue to use uplink B after the link is restored, while some other VM is moved to uplink A instead.
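As an aside, if you ever need certain VMs to stay on a specific uplink deterministically, the configuration guides also describe static pinning on the vEthernet port-profile. A sketch, with a hypothetical profile name (the sub-group ID must correspond to an existing uplink sub-group):

port-profile type vethernet VM-Data
  pinning id 0
  ! statically pin these vNICs to sub-group 0; they still fail over if it goes down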

HTH,

Joe

Hi Joe,

I now understand how mac-pinning will work when a link fails and comes back up again, but I still don't understand how load balancing works when no mac-pinning is in place (our current situation).

In our case we have two uplinks (going to different switches), but they appear as a port-channel to the Nexus 1000V. There is no real LACP. The VEMs are in an active-active state. Shouldn't a VM's MAC address be persistent on one link if load balancing is done on source MAC? I am seeing MAC flapping on the uplink switches (3120 blade switches) about every 20 minutes. Why is this happening? Is it associated with the Nexus or the ESXi configuration?

 

Katerina

Hi Katerina,

If there is no port-channel or channel-group configuration on the Nexus 1000V side, you should expect to see MAC flapping. If a VM wants to send traffic out an uplink, it will choose any uplink that carries the allowed VLAN. In your case, this is either switch A or switch B.

For this reason, we need to make sure that any N1k uplink port-profile with multiple uplinks has a channel-group configuration.
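When the two uplinks go to different switches that cannot be clustered, the channel-group has to account for that. Besides MAC pinning (sketched above), the other documented option is vPC-HM with CDP sub-groups; a minimal sketch with a hypothetical profile name:

port-profile type ethernet Uplink-vPC-HM
  channel-group auto mode on sub-group cdp
  ! sub-groups are learned via CDP from the two upstream switches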

Here's a note on MAC pinning from the SV2(2.1) Interface Configuration Guide:

MAC pinning divides the uplinks from your server into standalone links and pins the MAC addresses to those links in a round-robin method to ensure that the MAC address of a virtual machine is never seen on multiple upstream switch interfaces. Therefore, no upstream configuration is required to connect the VEM to upstream switches.

HTH,

Joe

The thing is that the "uplink" port-profile does have channel-group configured. In fact, this is the configuration:

port-profile type ethernet Main_Uplink
  vmware port-group
  switchport mode trunk
  switchport trunk native vlan 135
  switchport trunk allowed vlan 6,106-109,135-136,150-152,254,551-552
  channel-group auto
  no shutdown
  system vlan 135-136,254
  state enabled

 

So channel-group is configured! As I said earlier, the uplinks go to different switches. Sorry, but there is something I am still not understanding as far as the MAC flapping is concerned. Do you mean that if channel-group is enabled on the Nexus (with no mac-pinning) and the uplinks go to different switches, the VM will see the links as two independent links and choose one to send traffic on, based on its own load-balancing algorithm?

 

I know that mac-pinning will solve my problems, that is why I want to implement it, but I am still trying to figure out the current behavior...

 

Sorry for all the questions...

 

Katerina

Hi Katerina,

I configured my lab for "channel-group auto" and the two links are in a port-channel.

The VEM views the two uplinks as the same interface.

n1k# module vem 4 execute vemcmd show port
  LTL   VSM Port  Admin Link  State  PC-LTL  SGID  Vem Port  Type
   19     Eth4/3     UP   UP    F/B*   1039     0    vmnic2
   20     Eth4/4     UP   UP    F/B*   1039     0    vmnic3
   49      Veth9     UP   UP    FWD       0     0      vmk1

 

* SGID denotes Sub Group ID

Based on the output, vmk1 traffic can take vmnic2 or vmnic3. The N1k sees this port-channel as one outgoing interface. In order to avoid MAC flapping, we would need to configure the two upstream switchports into one logical interface.
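For completeness, a rough sketch of what that upstream side could look like, assuming the two ports could be brought onto one logical switch (e.g. a 3120 stack; IOS syntax, interface numbers hypothetical):

interface range GigabitEthernet1/0/10 - 11
  switchport mode trunk
  channel-group 10 mode active
  ! both member ports must live on the same logical switch, which is exactly
  ! what two standalone upstream switches cannot provide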

Now, with MAC pinning configured, we run the same command:

n1k# module vem 4 execute vemcmd show port
  LTL   VSM Port  Admin Link  State  PC-LTL  SGID  Vem Port  Type
   19     Eth4/3     UP   UP    F/B*   1040     2    vmnic2
   20     Eth4/4     UP   UP    F/B*   1040     3    vmnic3
   49      Veth9     UP   UP    FWD       0     2      vmk1

vmnic2 and vmnic3 are treated as two different outgoing interfaces. No upstream switchport configuration is required.
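Based on the configuration you posted, the change would be a sketch along these lines (verify the syntax against your software release, and be prepared for a brief traffic interruption while the channel-group mode changes):

port-profile type ethernet Main_Uplink
  channel-group auto mode on mac-pinning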

HTH,

Joe

Wow!!!!

Thank you so much for your assistance! This really clarifies things for me. Since my upstream switches aren't stackable, I cannot configure them into one logical interface, so mac-pinning is the only solution.

I just needed to understand why there was MAC flapping, and now I know! :)

Really appreciate your help!

Katerina

 

Walter Dey
VIP Alumni

Hi Katerina

The answer is yes!

The general theory is that the Cisco Nexus 1000V Series VEM will assign traffic to the group of uplink interfaces that pass the correct VLAN and it will assign the vNIC of a given virtual machine to one of the uplinks on a per-vNIC basis. The upstream switches (in this case, the Cisco UCS 6100 Series Fabric Interconnects) are not aware of the configuration on the Cisco Nexus 1000V Series Switch and see only the appropriate MAC addresses on the appropriate blades. If a link fails, the traffic fails over to the surviving link to mitigate the outage, and the traffic is returned to the original link (after an appropriate timeframe to ensure stability) after the link is restored.

http://www.cisco.com/c/en/us/products/collateral/switches/nexus-1000v-switch-vmware-vsphere/white_paper_c11-558242.html
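One way to observe this behavior in practice is to re-run the command Joe used above while failing and restoring a link, and watch the SGID column to see which sub-group each port is pinned to at any given moment (module number as in Joe's lab; yours may differ):

n1k# module vem 4 execute vemcmd show port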

Cheers

Walter.

 
