I'm unclear on the actual configuration needed for a Nexus 1000V.
Taken from another post on here:
You need to have at least one NIC configured for each VEM module. This is how each VEM communicates with the VSM; without an uplink, the VEM module can never get programmed by the VSM.
On one of my ESX hosts, I have two physical NICs (it's a blade). When setting up the VSM, we created the packet, control, and mgmt port groups on the standard local vSwitch. After getting everything installed and set up, we were able to see the VEM and the secondary VSM.
We then migrated vmk0 to the Nexus DVS using a port-profile we created, and we still had connectivity. After that, we migrated all vmnics over to the Nexus DVS and set the packet and control NICs to use port-profiles we had set up on the Nexus.
When we did, we lost connectivity to the VEM (it no longer appeared in show module on the VSM). Once we moved control and packet back to the local vSwitch and assigned a vmnic to it, everything came back.
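In case it helps, the Nexus-side port-profiles we assigned to control and packet looked roughly like this (VLAN IDs are placeholders, and we did the same for packet on its own VLAN, plus a trunk uplink profile carrying both VLANs for the vmnics):

```
port-profile type vethernet n1kv-control
  vmware port-group
  switchport mode access
  switchport access vlan 900
  no shutdown
  state enabled
```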
Are we required to keep the packet and control interfaces on the local vSwitch, dedicating a physical NIC to them, or can they live on the Nexus DVS itself?