vMotion failed between two VM clusters that have different Nexus 1000v domains
11-12-2013 11:17 PM
I am running ESXi 5.1 with two Nexus 1000v domains (VSM-A in domain ID 100 and VSM-B in domain ID 200). There is a single DC and vCenter server. I cannot vMotion between the two clusters, which are managed by the different Nexus 1000v systems, even though all the port-group VLANs exist on both N1k systems. I may have misunderstood; I assumed that as long as the vMotion network is spanned at layer 2 across the two clusters, vMotion should work. Furthermore, the vMotion network on all servers uses a standard vSwitch instead of a vDS. Can someone advise whether this is the correct scenario? Thank you
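For reference, the domain IDs can be confirmed from each VSM with the standard show commands; a minimal check (hostnames below are placeholders):

    ! Confirm the SVS domain ID and the vCenter connection on each VSM
    VSM-A# show svs domain
    VSM-A# show svs connections
    VSM-B# show svs domain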
Labels: Server Networking
11-12-2013 11:33 PM
You can't vMotion between two Nexus 1000v switches (even if they are part of the same DC, etc.). It is the same as with any other virtual switch and is not a Cisco limitation.
Regards,
Shankar
11-13-2013 12:44 AM
Thanks for your reply. My customer requires both VSM HA and vMotion at the same time. It looks like we have to choose one or the other now.

11-13-2013 01:57 PM
Robin,
The VSMs are HA regardless of their location - they are always active & standby. They can reside on ESX in the same cluster, different clusters, different datacenters or elsewhere in your network such as on a different hypervisor or the Nexus 1110 appliance.
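As a quick sanity check (hostname is a placeholder), the HA role and state can be verified from the VSM CLI:

    ! Verify the active/standby roles and that both VSMs and the VEMs are present
    VSM# show system redundancy status
    VSM# show module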
-mw
11-17-2013 02:07 PM
Matt, do you mean we can have the primary and secondary VSMs in different vDCs?

11-18-2013 05:37 AM
This is correct. The VSMs are just virtual machines and can be located on any host as long as they are layer 2 reachable. I'm not sure why you would want them in different vDCs, as this could add confusion.
11-13-2013 02:28 PM
Robin
If you're running HA VSMs (active/standby), you should be using the same domain ID. Why does your customer want to run multiple instances of the VSM? If you are running them in standalone mode, you will need to change them to primary and secondary when you migrate to an HA solution.
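As a rough sketch of that change (domain ID 100 and the hostnames are only examples), the shared domain and the HA roles are set on each VSM:

    ! First VSM: shared domain ID, primary role
    VSM-1# configure terminal
    VSM-1(config)# svs-domain
    VSM-1(config-svs-domain)# domain id 100
    VSM-1(config-svs-domain)# exit
    VSM-1(config)# system redundancy role primary
    VSM-1(config)# exit
    VSM-1# copy running-config startup-config

    ! Second VSM: same domain ID, secondary role
    VSM-2(config)# svs-domain
    VSM-2(config-svs-domain)# domain id 100
    VSM-2(config-svs-domain)# exit
    VSM-2(config)# system redundancy role secondary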
Gavin
Sent from Cisco Technical Support iPhone App
11-17-2013 01:37 PM
"You can't vMotion between two Nexus 1000v switches (even if they are part of the same DC, etc.). It is the same as with any other virtual switch and is not a Cisco limitation."
Can you please elaborate? I don't see the point.
Regarding the vMotion failure, I guess the physical switches are not set to handle jumbo frames. That's necessary for vMotion to work.
11-17-2013 02:06 PM
My customer basically wants more redundancy. We have set up the HA system, but what happens if both VM hosts that run the primary and standby VSMs fail? I know one redundant VSM is usually enough, but my customer is looking for a more flexible, higher-redundancy option.
11-18-2013 06:17 AM
If the concern is to mitigate the risk of the VSM HA pair failing together, you may want to explore or suggest the backup and restore procedures. Whether the VSM runs on a VEM (as a VM) or as a virtual service blade on the Nexus 1010/1110, there are tried and tested procedures.
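As one example of the configuration side (the TFTP server and filename are placeholders), the running config can be saved and copied off-box so it can be restored onto a freshly deployed VSM:

    ! Save the config and keep an off-box copy for restore
    copy running-config startup-config
    copy running-config tftp://192.0.2.10/vsm-backup.cfg

The full documented procedures also cover bringing up a replacement VSM or virtual service blade; the copy above only preserves the configuration.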

11-18-2013 05:40 AM
VMware assigns a unique identifier to each DVS, and the behavior is the same for the N1k and the VMware DVS. This unique ID prevents vMotion between different DVSs.
vMotion works fine with 1500-byte frames. However, if you have modified the vmk's MTU, then yes, all upstream network components need jumbo frames enabled. On the N1k, jumbo frames are enabled by default on all vEthernet port-profiles. You will need to set 'mtu 9000' on the Ethernet port-profile used for vMotion traffic.
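A minimal sketch of that Ethernet port-profile (the profile name and VLAN are assumptions for illustration):

    ! Uplink port-profile carrying the vMotion VLAN, with jumbo frames enabled
    port-profile type ethernet VMOTION-UPLINK
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 100
      mtu 9000
      no shutdown
      state enabled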
