Moving VSM Guest to VEM host

hypnotoad
Level 3

I'm having trouble getting my VSM to work on a host with a VEM installed. I have 5 ESX 4U1 hosts. VEM is running on the first 4. The VSM is running alone on the 5th host without a VEM.

My procedure to move the VSM to another host is this:

1) Save running config on VSM

2) Power off VSM

3) Migrate VSM guest to other host

4) Edit three NICs to be in the proper VLANs (port groups)

5) Power on VSM

6) Run 'vem stop' and 'vem start' on each host.
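
After step 6, the checks I'm doing look roughly like this (just a sketch, not actual output):

On the VSM:

  show module
  ping <an address outside the VSM>

On each ESX host (standard VEM CLI):

  vem status -v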

I never get the management interface to ping anything outside the VSM, and I cannot ping the VSM from the outside. 'show module' never lists my VEMs.

I can migrate back to the original host and get it working.

Yes, the VLANs are defined correctly on both systems. Control, Management, and Packet are on different VLANs. The VLANs are the same on the VEM hosts and the last, non-VEM host.

What am I missing?

Thanks,

Patrick

8 Replies

mmehta
Cisco Employee

Hi Patrick,

Can you please provide the port-profile configuration being used for the control, management, and packet interfaces?

The VLANs need to be defined as "system vlans" in those vEthernet port-profiles, as well as in the uplink port-profile.

Thanks,

Munish.

Here is a snip from my config. I don't have them defined as system VLANs. Do I just add the 'system vlan xxx' command to each one?

port-profile type vethernet N1KV-Control
  vmware port-group
  switchport mode access
  switchport access vlan 111
  no shutdown
  state enabled
port-profile type vethernet N1KV-Management
  vmware port-group
  switchport mode access
  switchport access vlan 10
  no shutdown
  state enabled
port-profile type vethernet N1KV-Packet
  vmware port-group
  switchport mode access
  switchport access vlan 112
  no shutdown
  state enabled

--Patrick

All three of those port-profiles should have their VLANs defined as system VLANs, as Munish has detailed.

port-profile type vethernet N1KV-Control
  vmware port-group
  switchport mode access
  switchport access vlan 111
  no shutdown
  system vlan 111
  state enabled
port-profile type vethernet N1KV-Management
  vmware port-group
  switchport mode access
  switchport access vlan 10
  no shutdown
  system vlan 10
  state enabled
port-profile type vethernet N1KV-Packet
  vmware port-group
  switchport mode access
  switchport access vlan 112
  no shutdown
  system vlan 112
  state enabled

Then, in your uplink port-profile, these all need to be defined as system VLANs as well:

"system vlan 10, 111-112"

After this you should have no issue migrating your VSM from VEM host to VEM host.
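
A quick way to confirm it worked (a sketch; your module numbers will differ): on the VSM run

  show module

and each VEM host should now be listed as a Virtual Ethernet Module with status "ok".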

Robert

That fixed the problem. Thanks for the help.

I have a new problem: I need to bundle my uplinks into EtherChannel groups. I'll do that tomorrow.

I'll keep the VSM on the non-VEM host until I get that sorted.

Thanks again,

--Patrick

Creating auto port channels is very simple. Please refer to this page for setting it up.

http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_0_4_s_v_1_2/port_profile/configuration/guide/n1000v_portprof_5channel.html

Cheers,

Robert

Can I configure port channels one host at a time? My system-uplink port-profile is used by four ESX boxes, and they all have production VMs running on them. I can move the VMs off them one by one to set up my port channels.

My hosts connect to a pair of stacked 3750E switches. The system-uplink port channel has four pNICs, two going to each 3750E. I plan to configure cross-stack port channels on the 3750s.

It appears that I need to edit my system-uplink port-profile. This is what it looks like, minus the bottom line (the channel-group command, shown separately below). Should I just add that line, and can I do this on a live system?

port-profile type ethernet system-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10,111-112
  no shutdown
  system vlan 10,111-112
  state enabled

  channel-group auto mode on sub-group cdp

--Patrick

Patrick,

Per your message below, you have stacked 3750s, which act as a single clustered switch, right?

If it is a clustered switch, you need to configure a regular port channel on the N1K. If it is not a clustered switch (two independent switches), you need to configure a vPC-HM channel on the N1K.

To configure a regular port channel, you have two options; in either case you need to configure the matching channel type on the upstream switch too (an example follows the two options).

1. "channel-group auto mode <on>"

2. "channel-group auto mode <active/passive>" for LACP channels.

To configure a vPC-HM channel, it is recommended to use the following:

"channel-group auto mode on mac-pinning". Do not configure any port channels on the upstream switch ports that are connected to the N1K VEM.

When you add the channel configuration to the uplink profile, it is applied to all the ports using that profile. It will also bounce those ports, which will affect traffic on non-system VLANs.

If you want to do it without any traffic hit, follow these steps:

1. Migrate all the VMs from a host

2. Create a new system uplink port-profile with the channel configuration explained above (a sketch follows these steps).

3. In vCenter, go to Networking --> Hosts and remove the host from the DVS.

4. Add the host back to the N1K DVS, this time using the new system uplink profile created in step 2.

Repeat the same for the other hosts.
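
For reference, the new profile in step 2 might look roughly like this (the name "system-uplink-po" is only an example, and "mode active" assumes an LACP channel upstream; use "mode on" for a static channel):

port-profile type ethernet system-uplink-po
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10,111-112
  channel-group auto mode active
  no shutdown
  system vlan 10,111-112
  state enabled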

Thanks,

Naren

I finished my 1000V deployment. Thanks for the assist. I configured the port-channels manually.

--Patrick