Migrating fully to the Nexus 1000v

This may be a dumb question and maybe I'm missing something simple....

Let's say I have a vSphere server with 2 NICs.  I set up the server as normal, with a VMkernel port group and Service Console on the first NIC.  I then add this host to the Nexus 1000v dvSwitch and uplink via the second NIC.  What's the most seamless way to migrate the first NIC?  I know I can do a Migrate Port Group with the dvSwitch, but every time I do that the migration fails on the Service Console, since that is the management channel.  Do I first set up a second SC via the dvSwitch and point vCenter to that?  Then you get into issues of the DNS name pointing to one IP while I now have a second...  What am I missing?


Robert Burns
Cisco Employee

Sounds like you have the procedure correct.   The host you're trying to migrate NICs on - is it running your VSM or vCenter server?

When migrating all your NICs to the DVS, ensure you've done the following first:

- Create all necessary port groups - VMotion, SC, NFS, etc.

- Create all the VLANs used by the above port groups on the DVS and your upstream switch.  I'd also test them if possible with a VM (a simple ping test between two VMs on each newly created VLAN would suffice).

- Add the VMkernel & SC VLANs to your DVS as "System VLANs".

- Ensure any newly created VLANs on your DVS are allowed on the appropriate uplink port profile.
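To illustrate the system-VLAN step above, a vEthernet port profile for the Service Console might look something like this on the VSM (VLAN 10 here is purely illustrative - use your actual management VLAN):

port-profile dvs_service-console
  vmware port-group dvs_service-console
  switchport mode access
  switchport access vlan 10
  no shutdown
  system vlan 10
  state enabled

Marking the VLAN as a system VLAN is what keeps SC/VMkernel traffic forwarding even before the VEM has received its full configuration from the VSM.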

Then use the Migration Wizard in vCenter to migrate your VMkernel and SC ports to the DVS.  You may momentarily lose connection to your ESX host (3-5 sec), but it should come right back online, assuming it can communicate over your management VLAN.

Once your vSwitch is empty and the ESX host is connected via its SC hosted by the DVS, you can remove the network adapter from the vSwitch and then remove the vSwitch entirely (assuming it's no longer being used).   Next, in vCenter, go to the Network Configuration of the ESX host, click on the Distributed Virtual Switch view, click "Manage Physical Adapters", and add the freed network adapter to one of your uplink port groups.
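If you'd rather do the vSwitch cleanup from the service console instead of vCenter, the same unlink/delete can be sketched with esxcfg-vswitch (vmnic0 and vSwitch0 are illustrative defaults - substitute your own names):

esxcfg-vswitch -U vmnic0 vSwitch0    # unlink the physical NIC from the old vSwitch
esxcfg-vswitch -d vSwitch0           # delete the now-empty vSwitch

The freed NIC can then be added to an uplink port group through "Manage Physical Adapters" as described above.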

Hope this helps. If not let me know and we'll investigate further.


Thanks Robert!  Very good info.  Does this change if I do have the VSM and/or vCenter on the server I'm switching over?  That will be the case on two servers.

On the ESX 4 host I've created NEXUS-Mgmt, NEXUS-Control, and NEXUS-Packet port groups; these PGs are connected to physical NICs.

If I now want to move all physical NICs fully to the Nexus 1000v, do I have to create these port groups on the Nexus as port profiles (and if so, what type of port profile?) and connect the three adapters of the VSM to those port profiles?

Or am I misunderstanding something?

Thanks for help

You would create the port profiles just as non-uplink port profiles.  Essentially, all these port profiles do is assign the virtual ports to a VLAN.   Keep in mind that if you intend to migrate your Control & Packet VLANs to the DVS, you'll need to add them as system VLANs on your uplinks.

This configuration is fine.


Robert, where can I find the Migration Wizard?

1. select your esx host

2. go to configuration tab

3. select "networking"

4. choose "Distributed Virtual Switch"

5. click "Manage Virtual Adapters"

6. click "Add"

and then you can choose "Migrate existing virtual adapters"

I found this advice in the vmware community boards:

"Choose networking then choose the Distributed Virtual Switch button. Select the "manage virtual adapters" link at the top of the page and select "add" in the new pop-up. The next screen will allow you to migrate existing interfaces from the vSwitch to the DVS."

and this worked great for my ESX VMkernel management interface.  But it did not allow me to migrate my remaining VM port groups - the 1000v Control and Packet groups.  Is there a trick to migrating these?

You will first need to create the port groups on the 1000v.  Then you can migrate your VMware port groups to the DVS.  This prevents a server admin from crossing the network admin boundary.

There's a doc attached here; you'll have to overlook the formatting.  Going from PDF to DOC didn't fare well.



Hi robert,

thanks for the doc.  I'm using ESXi 4.0, so there is no 'service console' per se, but I was able to migrate the VMkernel management interface with no issues.  What I need to know now is how to migrate my 1000v_Control and 1000v_Packet port groups from the vSwitch to the 1000v dVS.

You can't migrate a port group, you can only migrate virtual adapters.   As this thread details, you need to create Control, Packet, and Management (if required) port profiles on the 1000v DVS.  Set them as system VLANs, add them to the appropriate system uplink (allow them on the trunk), and then just change the network interface bindings on your 1000v VSM VM from the vSwitch port groups to the 1000v port groups.

Tip:  When creating port profiles on the 1000v, make use of the "vmware port-group xxxx" command to properly name your port profiles.  This dictates how they appear in vCenter to your VMs.  As a best practice I always prefix my 1000v port-group names with "dvs_".  This way, from vCenter I can distinguish between vSwitch & DVS port groups.


port-profile system-uplink
  capability uplink
  vmware port-group dvs_system-uplink
  switchport mode trunk
  switchport trunk allowed vlan 19,25,99,200,3001-3002
  channel-group auto mode on sub-group cdp
  no shutdown
  system vlan 19,25,99,200,3001-3002
  state enabled

port-profile dvs_control_3001
  vmware port-group dvs_control
  switchport mode access
  switchport access vlan 3001
  no shutdown
  system vlan 3001
  state enabled

port-profile dvs_packet_3002
  vmware port-group dvs_packet
  switchport mode access
  switchport access vlan 3002
  no shutdown
  system vlan 3002
  state enabled

Hope this helps.


Ah yes - all I had to do was migrate the NICs on the VSM virtual machine to the appropriate 1000v port groups.  That worked great, and then I was able to delete the unused vSwitch.

Thanks Robert!

Hello, I'm trying to design a UCS system and 1000V with the Menlo adapter (only 2 NICs visible to hosts). I would go with ESXi, since VMware is saying it's the way of the future, but I have concerns about the 1000v, the 2-NIC limitation, and ESXi. Can anyone advise on this? Should I stick with ESX instead of ESXi?

Thank you,


I have been using ESXi for all my installs with UCS, and I'm actually using ESXi over ESX pretty much all the time.

The install is much faster and the footprint is way smaller. What are your concerns regarding the 2-NIC limitation with ESXi?

On the Nexus 1000V you should enable the MAC-PINNING feature for that.
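For reference, a MAC-pinning uplink port profile might be sketched like this (the VLAN lists are illustrative; the key line is the channel-group one):

port-profile system-uplink
  capability uplink
  vmware port-group dvs_system-uplink
  switchport mode trunk
  switchport trunk allowed vlan 10,20,30
  channel-group auto mode on mac-pinning
  no shutdown
  system vlan 10,20
  state enabled

With MAC pinning, each vEthernet interface is pinned to one uplink, so no port-channel configuration is needed on the upstream switch - which suits the UCS two-NIC case, where the fabric interconnects aren't clustered as a single switch.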


Pettori, thanks for the fast response!

My concern is the management NIC, and the fact that the 1000V DVS will take control of both NICs...

In essence, if I create a port channel/uplinks on the 1000v out of the two and only NICs I've got, what happens to ESXi management if something goes wrong with the 1000v config?

With ESX, at least I have the service console to fix the issue with esxcfg- commands on the console itself...
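For what it's worth, the kind of console-side recovery meant here might look like this on classic ESX (interface names, IP, and netmask are all illustrative):

esxcfg-vswitch -a vSwitch0                                                   # recreate a standard vSwitch
esxcfg-vswitch -L vmnic0 vSwitch0                                            # link a physical NIC to it
esxcfg-vswitch -A "Service Console" vSwitch0                                 # add a Service Console port group
esxcfg-vswif -a vswif0 -p "Service Console" -i 192.0.2.10 -n 255.255.255.0   # bring up the SC interface

On ESXi there is no equivalent service console, which is exactly the concern here; the fallbacks there are Tech Support Mode and the DCUI's management-network restore options.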

I may be missing something, but to run the CLI on ESXi, don't I need at least remote network access to the host?

Thank you,
