
Required Configuration to support multiple vCenter Data Centres

Level 1

We have 2 x VMware vCenter 5.x servers (one in each physical site).

Site 1 vCenter manages our Production and Test environments (which are separate). It is implemented on a single vCenter Server but has two vCenter data centres (Production and Test).

Site 2 vCenter manages just Production and is also implemented on a single vCenter Server, with a single vCenter data centre.

We are looking to implement the Cisco Nexus 1000V and are trying to ascertain the best configuration. The Site 2 data centre is straightforward; it's the Site 1 data centre where we have a few questions:

Do we need to implement a VSM pair for Production, target the Production vCenter data centre during the install and config, and allocate a domain ID of, for example, 100?

Do we need to implement a second VSM pair for Test, target the Test vCenter data centre during the install and config, and allocate a domain ID of, for example, 200?

If the above config is valid, we will run all VSMs in the Production cluster (as it has higher levels of availability and performance).

Can the VSM pairs for Test share the Management, Control and Packet VLANs we create for Production? (I assume so, as we use separate domain IDs?)

Thanks in advance


4 Replies

Cisco Employee

Hello Sean. For your Site 1, since you have one vCenter Server (i.e. one database) with two logical Data Centers, it will require two separate N1KV instances. The Nexus 1000V can only be linked to one vCenter Server and one logical Data Center, so what you describe is correct: create two instances, one for Production and one for Test. Since you are doing L2 mode with a different domain ID for each data center, you can use the same Control and Packet VLANs for both N1KV instances, as you stated, because they use different domain IDs.
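As a rough sketch, the L2-mode domain separation described above would look like this on each VSM pair (the domain IDs 100 and 200 come from the thread; the shared control/packet VLAN number here is a placeholder):

```
! Production VSM pair (Site 1)
svs-domain
  domain id 100
  control vlan 175   ! shared control VLAN (placeholder number)
  packet vlan 175    ! shared packet VLAN (placeholder number)
  svs mode L2

! Test VSM pair (Site 1) - same control/packet VLANs, different domain ID
svs-domain
  domain id 200
  control vlan 175
  packet vlan 175
  svs mode L2
```

The differing domain IDs are what keep the two instances' VSM-to-VEM traffic apart on the shared VLANs.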

Not sure where you are in your process of testing/deploying N1KV in your environment, but moving forward we will be recommending L3 mode over L2 mode for VSM-to-VEM communication. The reasoning is that L3 mode is simpler to troubleshoot than L2. With close to 5,000 customers deploying N1KV now, we find that miscommunication between the two teams (server and network) causes confusion in configuring the system VLANs. Because the control VLAN is critical to VSM-to-VEM communication, misconfiguring it tends to cause many issues. With L3 mode, as long as there is a vmkernel interface controlled by the N1KV, VSM-to-VEM communication is done via L3. The server admin can simply do a vmkernel ping to the management interface of the VSM for verification. There is no need for control or packet VLANs.
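For reference, switching a VSM's control transport to L3 over its management interface is a small change under svs-domain (the domain ID here is a placeholder):

```
! On the VSM: carry VSM-to-VEM traffic over L3 via mgmt0
svs-domain
  domain id 100              ! placeholder domain ID
  svs mode L3 interface mgmt0
```

The vmkernel ping check mentioned above is then simply "vmkping <vsm-mgmt0-ip>" run from the ESXi host's shell.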

I don't want to go into the whole background history of this, but we will be providing more documentation regarding this change. In a future software release, we will default to L3 mode when installing new instances of the N1KV. If you are already running L2 mode in production, there is no need to change it to L3. Hope this helps rather than confuses you.



Thanks for your reply; it has confirmed how we were intending to configure the Nexus 1000V VSMs.

We were intending to set up the system in L3 mode, but on reading the documentation again we are slightly confused by the terminology: Management (ESXi Service Console) versus Mgmt (Nexus management interface).

This is our (proposed) configuration:

vCenter Server: VLAN 174
ESXi Management (Service Console): VLAN 175
Nexus Control: VLAN 175
Nexus Packet (not used): VLAN 175
Nexus Mgmt: VLAN 174

In this scenario our VSM management interface and VEMs are on different VLANs, so does this satisfy the requirements for an L3 configuration?

Thanks in advance

Hi Sean. When deploying the Nexus 1000V in L3 mode, the control and packet VLANs are no longer needed and become irrelevant. If your vCenter is on VLAN 174 (subnet X) and your ESXi management (Service Console) is on VLAN 175 (subnet Y), then you can put the VSM management interface (Nexus mgmt0) on VLAN 174 (subnet X), just like the vCenter Server. Communication between the VSM and the ESXi hosts (VEMs) will then go over L3 via the router in your network. The Control and Packet VLANs can be set back to the default (VLAN 1). One requirement of L3 mode is that the ESXi management (Service Console) vmkernel interface must be controlled by the Nexus 1000V, so you will need to create a vEthernet-type port-profile like the example below:

Nexus1000V# configure terminal
Nexus1000V(config)# port-profile type vethernet L3vmkernel
Nexus1000V(config-port-profile)# switchport mode access
Nexus1000V(config-port-profile)# switchport access vlan 175
Nexus1000V(config-port-profile)# vmware port-group
Nexus1000V(config-port-profile)# no shutdown
Nexus1000V(config-port-profile)# capability l3control
Nexus1000V(config-port-profile)# system vlan 175
Nexus1000V(config-port-profile)# state enabled

This example port-profile brings the ESXi Service Console interface under N1KV control. Once that is done, VSM-to-VEM communication happens over L3.
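Once a host's vmkernel interface has been migrated to that port-profile, a couple of show commands on the VSM confirm the setup:

```
Nexus1000V# show svs domain
! should report the domain ID and SVS mode L3

Nexus1000V# show module
! each ESXi host whose VEM has registered appears as a module
```

If a host does not appear under show module, the vmkernel-to-mgmt0 reachability (the vmkping check above) is the first thing to verify.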


Hello Cuong! I am also looking into deploying the Nexus 1000V in our data center. I am doing a POC as we speak, and the plan is to move all ESX hosts from the vSwitch over to the Nexus 1000V. Can you share some design best practices for a green-field deployment?

Currently I have the vCenter server on VLAN 8 and ESX (Service Console) on VLAN 7. For L3 mode I would have the VSM management interface (Nexus 1000V mgmt0) on VLAN 8, correct? Does the VSM mgmt0 have to be on the same VLAN as vCenter? I also have VMs on VLAN 8 as well; do you see any issues/concerns with that? Should I create a separate VLAN for vCenter and the VSM mgmt0?

Thanks Cuong !!!!