
Removing Nexus 1000v - Stuck/partial Vethernet Configuration

Level 1



We have a Nexus 1000v environment running version 4.2(1)SV2(2.2).


I am trying to remove a host, but a vEthernet port has a partial configuration and I can't remove the host because the port is "in use".


A VM was shown as attached to the port, but in a different data center.


I removed the VM from inventory (and disk).


But, the vethernet port still has a partial configuration.


The configuration on the port is:

name = <not found>

state = link down
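For reference, the stuck port's state can be inspected from the VSM CLI before attempting any cleanup. A minimal sketch, assuming the affected interface is Vethernet 248 as reported in the error (substitute your own interface number):

```
n1kv# show interface virtual
n1kv# show running-config interface vethernet 248
n1kv# show interface vethernet 248
```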


I removed the port group profile, thinking I could get rid of the vethernet port that way.


The vethernet port was moved to the Unused_Or_Quarantine_Veth port group.


When I try to remove the host I get this error message:

Task - Reconfigure vSphere Distributed Switch

Error stack - vDS n1kp port 248 is still on host connected to <not-found> nic=4000 type=vmVnic


Any guidance to clearing this port configuration so that I can remove the host is appreciated.


2 Replies

Kirk J
Cisco Employee


Is the actual error from vCenter or the N1k/VSM?

Can you confirm the N1k/VSM config doesn't have any configuration for that guest VM or other VMK ports on the ESXi host at this point?

If the veth still exists from the VSM perspective, enter configuration mode (conf t) and issue "no interface vethernet <xyz>" to see if that clears it.
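That cleanup would look something like the following on the VSM; a sketch, assuming the stuck port is Vethernet 248 (the interface number is illustrative):

```
n1kv# configure terminal
n1kv(config)# no interface vethernet 248
n1kv(config)# end
n1kv# copy running-config startup-config
```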


I've seen a few cases where vCenter didn't clean up correctly and left some artifacts behind; those required actually removing the ESXi host from vCenter itself and then re-adding it, which cleared up the leftover links at the vCenter level.





Thanks for the response.

I thought that I had successfully removed the port on the n1kv side using the command line. But there was no change in vCenter.

It could very well be an issue with vCenter not being "clean".

We are eliminating the Nexus switches and going with a VMware dVS.

I have already added this host to the VMware dVS, so I would prefer not to remove it from vCenter.

VMware recommended that I use a procedure to forcefully remove the Nexus 1000v:

1. Shut down the secondary Nexus VSM

2. Shut down the primary Nexus VSM

3. Follow the steps in the document

4. Remove the Nexus dVS from vCenter

5. Delete the Nexus VSM VMs
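On the ESXi side, fully removing the VEM module from a host is typically done by uninstalling the Cisco VIB. A sketch, assuming SSH access to the host (the exact VIB name varies by VEM version, so list it first; the name below is a placeholder):

```
# list installed VIBs and find the Cisco VEM module
~ # esxcli software vib list | grep cisco-vem

# remove it using the name reported by the command above
~ # esxcli software vib remove -n <cisco-vem-vib-name>
```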


