I have a Nexus 1000v running in a datacentre, and when I use vCenter to view the port status on a port-group created in the VSM, some VM ports are listed with "Blocked": YES and "VLAN": 1 instead of the assigned port group for the VM NIC. I cannot find any documentation that describes how or why this happens, or how to fix it. The port profile is configured with the correct VLAN ID, and the SVS connection is up and active. I'm running SV1.2, and the ESX hosts are all up to date patch-wise and VEM-wise. I'm also running a full license on the VSM. The VM has its NICs assigned to the correct port-group, and I have tried deleting and re-adding both the NIC and the group. Still, the VM cannot ping a gateway through the VEM. The VEM diagram view in vCenter shows the VM port in the group greyed out and with an 'x' in the port symbol. Anybody have any clues on this? Your help would be greatly appreciated.
I'm having the same problem. At first, the only solution I found was to halt and then reboot the VM.
It's now becoming a recurrent issue, and that workaround no longer works...
I don't know the origin of the problem and would appreciate any information about it.
I'm seeing the same thing. I just upgraded to vSphere 4.1 and didn't have these problems before. For me, it is my vmkernel port for vMotion that is blocked. I tried removing and re-adding the vmkernel port, which worked on one host, but it is not working on the others.
N1KV: system: version 4.0(4)SV1(3a)
Looking for answers...
Well, we solved the problem thanks to the Cisco support team.
Here's the interesting part of the answer, hoping it helps:
When a Vethernet interface is created in a DVS, the VSM creates a corresponding Lveth interface to
save the configuration information for the Veth interface. When the Veth is moved out of the DVS,
the configuration information in the Lveth is not cleaned up properly; for a Veth in access mode,
the "primary vlan" value in the Lveth data structure will be the same as the "access vlan".
When another Veth interface is later created in PVLAN mode, it could reuse an Lveth that was
created for a previous interface, thereby inheriting a stale value of "primary vlan". If the old
"primary vlan" has been deleted, the bring-up of the Veth will fail, and it will stay in the down
state with the reason "primary vlan is down".
Users can avoid running into this issue by not deleting VLANs.
If a user does run into this problem, the following steps will clean up the Lveth properly:
1. Put the Veth in an access-mode port-profile.
2. Bring up the interface; when a Veth interface in access mode is brought up,
it automatically cleans up the "primary vlan" field in the Lveth data structure.
3. Put the Veth back into the PVLAN-mode port-profile.
4. Since the "primary vlan" is now cleaned up, this time the Veth will come up correctly in PVLAN mode.
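For reference, the steps above might look roughly like this on the VSM CLI. This is only a sketch: the port-profile names "access-temp" and "pvlan-prod" and the interface Vethernet 10 are made-up placeholders, and in many setups the port-profile change is actually performed by moving the vNIC between port groups in vCenter rather than at the CLI.

```
! Hypothetical example: Veth10, access-temp and pvlan-prod are placeholders
n1000v# configure terminal
n1000v(config)# interface vethernet 10
! 1. move the Veth into an access-mode port-profile
n1000v(config-if)# inherit port-profile access-temp
! 2. bring the interface up; this clears the stale "primary vlan" field
n1000v(config-if)# no shutdown
! 3. move the Veth back into the PVLAN-mode port-profile
n1000v(config-if)# inherit port-profile pvlan-prod
n1000v(config-if)# end
! 4. verify the Veth comes up
n1000v# show interface vethernet 10 brief
```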
It's also recommended to upgrade to code version 4.0(4)SV1(3a).
Alexis, thanks for posting your solution. It does not apply to my situation, so I think we were seeing a different kind of anomaly. I tried all sorts of bringing interfaces up and down to no avail. Ultimately I put the host in maintenance mode and rebooted it, and then the problem went away. Our interface was down with the reason "Not participating", so it did not match the VLAN reason you mentioned. We were also already running the latest code.
Did you ever get this resolved, Russell? I have the same problem. The Vethernet port is in "not participating" state. I'm able to delete this port, shut it and do unshut, but it is still blocked...
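In case it helps anyone searching later, the shut/no-shut attempt I mentioned was done on the VSM along these lines (Vethernet 5 is just an example interface number):

```
n1000v# configure terminal
n1000v(config)# interface vethernet 5
n1000v(config-if)# shutdown
n1000v(config-if)# no shutdown
n1000v(config-if)# end
! check the state and blocked/vlan columns again
n1000v# show interface vethernet 5 brief
```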
Russell or mtrcek, did either of you get this resolved? We just ran into this today. We're running ESX 5.0 (no recent upgrades), and a VM that was rebooted today can't get back on the network: its port state is blocked and it is in VLAN 1. We tried rebooting a second, non-critical VM and got the same issue.