
Nexus 1000v - VEM missing


After a fresh installation of Nexus 1000v version 4.2.1.SV1.5.1a, I have problems with connectivity between the VSM and the VEM. The VEM module shows up on the VSM as missing:

Nex-1# sh module vem

No Virtual Ethernet Modules found.

Nex-1# sh module vem missing

Mod  Server-IP        Server-UUID                           Server-Name

---  ---------------  ------------------------------------  --------------------

-     34313339-3239-4742-3836-343959443836  NA


What can I do to connect them?

BR,  M.

Cisco Employee


Are you using L2 or L3 mode for VEM-VSM connectivity?

If it is L3, can you ping the ESXi management IP address from the VSM?

If it is L2, can you make sure the control and packet VLANs are configured properly on all nodes between the VEM and the VSM?
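For reference, in L2 mode the control and packet VLANs are defined under the svs-domain configuration on the VSM, and those same VLANs must be allowed on every trunk between the VSM and the VEM uplinks. A minimal sketch (the VLAN numbers are only illustrative):

```
svs-domain
  domain id 1
  control vlan 260   ! must be allowed on all trunks between VSM and VEM
  packet vlan 261    ! same trunking requirement as the control VLAN
  svs mode L2
```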



Thanks for the reply, but we decided to start from the beginning, since we had some other issues and we would like a clean installation.

We tried with both L2 and L3.

I will write back if we hit the same or any other problem with this version.

BR,  M.


We have successfully installed the VSM, but we hit the same issue...

Now using L3 connectivity:

Everything is pingable from VSM:

ping mgmt (VSM) - ok

ping vDC - ok

ping esx host 1 - ok

ping esx host 2 - ok

Any idea how to solve the "VEM missing" issue? Where could the problem be?

BR, Marko

Hello Marko,

Do you have NICs assigned to the VEM?

Can you please provide the output of the following commands:


show svs domain

show svs nei


vemcmd show card


Nex-1# sh svs doma

SVS domain config:

  Domain id:    1  

  Control vlan: 1 

  Packet vlan:  1 

  L2/L3 Control mode: L3

  L3 control interface: mgmt0

  Status: Config push to VC successful.

Nex-1# sh svs nei

Active Domain ID: 1

AIPC Interface MAC: 0050-5697-45b9

Inband Interface MAC: 0050-5697-45bb

Src MAC           Type   Domain-id    Node-id     Last learnt (Sec. ago)


//hmm, I guess I should see VEM here...

~ # vemcmd show card

Card UUID type  2: 34313339-3239-4742-3836-343959443836

Card name: ritstesx1

Switch name: Nex-1

Switch alias: DvsPortset-0

Switch uuid: 77 a4 17 50 f2 aa f1 9c-e5 08 f7 68 40 97 9c 09

Card domain: 1

Card slot: 3

VEM Tunnel Mode: L3 Mode

L3 Ctrl Index: 0

VEM Control (AIPC) MAC: 00:02:3d:10:01:02

VEM Packet (Inband) MAC: 00:02:3d:20:01:02

VEM Control Agent (DPA) MAC: 00:02:3d:40:01:02

VEM SPAN MAC: 00:02:3d:30:01:02

Primary VSM MAC : 00:50:56:97:45:b9

Primary VSM PKT MAC : 00:50:56:97:45:bb

Primary VSM MGMT MAC : 00:50:56:97:45:ba

Standby VSM CTRL MAC : ff:ff:ff:ff:ff:ff

Management IPv4 address:

Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000

Secondary VSM MAC : 00:00:00:00:00:00

Secondary L3 Control IPv4 address:

Upgrade : Default

Max physical ports: 32

Max virtual ports: 216

Card control VLAN: 1

Card packet VLAN: 1

Card Headless Mode : Yes

       Processors: 8

  Processor Cores: 8

Processor Sockets: 4

  Kernel Memory:   33552236

Port link-up delay: 5s


Heartbeat Set: False

PC LB Algo: source-mac

Datapath portset event in progress : no

Licensed: Yes

BR,  M.

When trying to install the VEM on the secondary host, I get this error at the end of the installation:

INFO: VEM Install Summary: VEM Installation Complete

VEM Host / Status

        VEM #1: HA-Cluster/ Install status: Failure

Is it possible that there is a problem with DNS names, since I see the ESXi hosts listed by name rather than by IP?

Hello Marko,

Can you please share the output of "vem status -v" from the ESXi host?



When using L3 mode, you have to assign a VMkernel interface with 'capability l3control' to enable communication between the VSM and the VEM.

One clear problem is that this interface/configuration is missing. When it is assigned, 'vemcmd show card' would show something like this:

VEM Tunnel Mode: L3 Mode

L3 Ctrl Index: 50

L3 Ctrl VLAN: 10

VEM Control (AIPC) MAC: 00:02:3d:10:de:02

In your case, when this is not assigned:

VEM Tunnel Mode: L3 Mode

L3 Ctrl Index: 0

VEM Control (AIPC) MAC: 00:02:3d:10:01:02
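For completeness, the l3control capability is assigned through a vEthernet port-profile that the ESXi vmkernel interface is then attached to. A sketch of such a port-profile on the VSM (the profile name and VLAN number are only illustrative):

```
port-profile type vethernet L3-Control
  capability l3control        ! vmknics in this profile carry VSM-VEM L3 traffic
  vmware port-group
  switchport mode access
  switchport access vlan 10   ! VLAN of the vmkernel interface
  system vlan 10              ! keeps the port forwarding before VSM communication is up
  no shutdown
  state enabled
```

Once a vmkernel NIC is attached to this port-group, 'vemcmd show card' should report a non-zero L3 Ctrl Index and the matching L3 Ctrl VLAN.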




Thank you guys for your support. When I installed Nexus 1000v for the first time last year, I didn't have any problems with the installation and finished it in just a few minutes; that's why I was so frustrated by all these problems this time.

What we did to finally make it work:

Our colleague from the VMware team reinstalled the vCenter and we did a fresh installation of Nexus 1000v, but the problem was the same. What we noticed was that when migrating to the DVS, the vmkernel interface was migrated to the Nexus too, and at that point we lost the connection to the ESX host. We moved the vmkernel interface back to the standard switch and used another physical interface for the Nexus management interface. After that we got an error message on the Nexus that the VSM and VEM versions were not compatible (we had never seen this error before). After upgrading the VEM version, everything started working fine, with the ASA 1000v installed into the same lab in just a few minutes. (-:

I have a question. Is it a good thing that the vmkernel interface is also migrated to the Nexus by default? If anything goes wrong (as in my case), it becomes a problem to maintain or troubleshoot the whole situation.

BR,  M.


Keeping the vmkernel behind the vSwitch or the Nexus 1000v is entirely based on the design requirements. I wouldn't prefer using a physical NIC on the vSwitch just for the vmkernel. If you want to move it to the Nexus 1000v, the general practice would be to keep the VLAN as a system VLAN in both the vEthernet and the corresponding Ethernet uplink port-profiles. Some benefits of adding the vmkernel to the N1kv are discussed in the same white paper that I mentioned earlier:
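As a sketch of that general practice, the system VLAN would appear in both the vEthernet and the Ethernet uplink port-profiles (profile names and VLAN numbers here are only examples):

```
port-profile type vethernet vmk-mgmt
  vmware port-group
  switchport mode access
  switchport access vlan 10
  system vlan 10              ! vmkernel traffic forwards even when the VEM is headless
  no shutdown
  state enabled

port-profile type ethernet system-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10,20
  system vlan 10              ! must match the system VLAN on the vEthernet profile
  no shutdown
  state enabled
```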


