
vMotion fails between two VM clusters that have different Nexus 1000v domains

Level 1

I am running ESXi 5.1 with two (2) Nexus 1000v domains (VSM-A in domain ID 100 and VSM-B in domain ID 200). There is only a single DC and vCenter server. I cannot vMotion between the two clusters that are managed by the different Nexus 1000v systems, although all the port-group VLANs exist in both N1k systems. I may have misunderstood: I assumed that as long as the vMotion network spans layer 2 across those two clusters, vMotion should work. Furthermore, the vMotion network on all servers uses a standard vSwitch instead of a vDS. Can someone advise if this is the correct scenario? Thank you

10 Replies

Level 1

You can't vMotion between two Nexus 1000v switches (even if they are part of the same DC etc.). It is the same as with any other distributed virtual switch and is not a Cisco limitation.



Thanks for your reply. My customer requires both VSM HA and vMotion at the same time. It looks like I have to choose one or the other now.


The VSMs are HA regardless of their location - they are always active & standby.  They can reside on ESX in the same cluster, different clusters, different datacenters or elsewhere in your network such as on a different hypervisor or the Nexus 1110 appliance.
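As a sketch of what the HA pair shares, both VSMs must be configured with the same domain ID under `svs-domain`; the domain ID and VLAN numbers below are placeholders, and the control/packet VLANs only apply when running in layer 2 mode:

```
! On each VSM of the HA pair (values must match on primary and standby)
svs-domain
  domain id 100        ! same domain ID on both VSMs
  control vlan 260     ! placeholder control VLAN (L2 mode only)
  packet vlan 261      ! placeholder packet VLAN (L2 mode only)
  svs mode L2
```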


Matt, Do you mean we can have VSM primary and secondary in different vDC?

This is correct.  The VSMs are just virtual machines and can be located on any host as long as they are layer 2 reachable.  Not sure why you would want them in different vDCs, as this could add confusion.

Gavin Lodge
Level 1


If you're running HA VSMs (active/standby), they must use the same domain ID. Why does your customer want to run multiple instances of the VSM? If you are running them in standalone mode, you will need to change them to primary and secondary if you migrate to an HA solution.
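To verify the domain and redundancy state described above, you can run the following on each VSM (output fields are indicative only):

```
switch# show svs domain                  ! domain ID and control/packet config
switch# show system redundancy status    ! active/standby role of each VSM
```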


Sent from Cisco Technical Support iPhone App

You can't vMotion between two Nexus 1000v switches (even if they are part of the same DC etc). It is the same like any other virtual switch and is not a Cisco limitation.

Can you please elaborate? I can't see the point.

Regarding vMotion failure, I guess the physical switches are not set to handle jumbo frames. That's necessary for vMotion to work.

My customer basically wants more redundancy. We have set up an HA system, but what happens if both VM hosts that run the VSM primary and standby fail? I know one redundant VSM is usually enough, but my customer is looking for a more flexible, higher-redundancy option.

If the concern is to mitigate the risk of the VSM HA pair failing together, you may want to explore or suggest the backup and restore procedures. Whether the VSM runs on a VEM (as a VM) or as a virtual service blade on the Nexus 1010/1110, there are tried and tested procedures for this.

VMware assigns a unique identifier to each DVS.  The behavior is the same for the N1k and the VMware DVS.  This unique ID prevents vMotion between different DVSes.

vMotion works fine with 1500-byte frames.  However, if you have modified the vmk's MTU, then yes, all upstream network components need jumbo frames enabled.  On the N1k, jumbo is enabled by default on all vethernet port-profiles.  You will need to set 'mtu 9000' on the ethernet port-profile used for vMotion traffic.
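As a minimal sketch of that last step, assuming the uplink port-profile carrying vMotion traffic is named `vmotion-uplink` (a placeholder name):

```
! N1k: enable jumbo frames on the ethernet (uplink) port-profile for vMotion
port-profile type ethernet vmotion-uplink
  mtu 9000
```

On the ESXi host you can then confirm the vmk interface's MTU with `esxcfg-vmknic -l`, which lists the MTU per VMkernel interface.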