I have ESXi 5.1 with two (2) Nexus 1000v domains (VSM-A in domain ID 100 and VSM-B in domain ID 200). There is only a single DC and vCenter server. I cannot vMotion between the two clusters that are managed by the different Nexus 1000v systems, even though all the port-group VLANs exist on both N1k systems. I may have misunderstood: I assumed that as long as the vMotion network is spanned at layer 2 across those two clusters, vMotion should work. Furthermore, the vMotion network on all servers uses a standard vSwitch instead of a vDS. Can someone advise if this is the correct scenario? Thank you
If you're running HA VSMs (active/standby) you should be using the same domain ID. Why does your customer want to run multiple instances of the VSM? If you are running them in standalone mode, you will need to change them to Primary and Secondary if you migrate to an HA solution.
You can't vMotion between two Nexus 1000v switches (even if they are part of the same DC, etc.). It is the same as with any other distributed virtual switch and is not a Cisco-specific limitation.
Can you please elaborate? I don't see why that would be the case.
Regarding vMotion failure, I guess the physical switches are not set to handle jumbo frames. That's necessary for vMotion to work.
VMware assigns a unique identifier to each DVS. The behavior is the same for the N1k and the VMware DVS. This unique ID prevents vMotion between different DVS instances.
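You can see this per-DVS identifier from the ESXi host itself. A minimal sketch (the output shown is illustrative, and the switch names are hypothetical):

```
~ # esxcli network vswitch dvs vmware list
VSM-A-DVS
   Name: VSM-A-DVS
   VDS ID: <unique UUID assigned by vCenter>
   ...
```

Two hosts backed by different DVS instances (here, two separate N1k VSMs) report different VDS IDs, which is why vCenter refuses the vMotion even when the VLANs match.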
vMotion works fine with 1500-byte frames. However, if you have modified the vmk's MTU, then yes, all upstream network components need jumbo frames enabled. On the N1k, jumbo is enabled by default on all vEthernet port-profiles. You will need to set 'mtu 9000' on the Ethernet port-profile used for vMotion traffic.
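For reference, a minimal N1k config sketch of that last step; the port-profile name UPLINK-VMOTION is hypothetical, so substitute the Ethernet (uplink) port-profile actually carrying your vMotion VLAN:

```
n1000v# configure terminal
n1000v(config)# port-profile type ethernet UPLINK-VMOTION
n1000v(config-port-prof)# mtu 9000
```

This is only needed if you raised the vmk MTU above 1500; the physical upstream switches must also have jumbo frames enabled end to end.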