Problem with MAC Pinning and new VLAN

simon.geary
Level 1

[Cross posting to Nexus 1000V and UCS forums]

Hi all, I have a working setup of UCS 1.4(3i) and Nexus 1000V 1.4 and, as per the best practice guide, am using 'channel-group auto mode on mac-pinning' on the Nexus uplinks. I am having trouble when adding a new VLAN into this environment, and it is a reproducible problem across two different installations.
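For reference, the uplink port profile is configured broadly like this (the profile name, allowed VLAN range and system VLAN below are placeholders, not the exact customer values):

port-profile type ethernet SYSTEM-UPLINK
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 1-100
  channel-group auto mode on mac-pinning
  no shutdown
  system vlan 10
  state enabled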

I go through the usual VLAN creation process on the Nexus, the upstream network and within UCS itself. I create the new vethernet port profile and set it as an access port in the new VLAN. However, when I attach a VM (either existing or new) to this new vethernet port profile within vCenter, the VM cannot communicate with anything. If I disable MAC pinning with 'no channel-group auto mode on mac-pinning', the VM instantly starts to talk to the outside world and the new VLAN is up and running. I can then turn MAC pinning back on again and everything continues to work. The Nexus-side steps I follow are sketched below.
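Roughly like this (the VLAN ID and profile names are examples; the VLAN is also created in UCS and on the upstream switches):

vlan 200
  name Servers-200

port-profile type vethernet VM-VLAN-200
  vmware port-group
  switchport mode access
  switchport access vlan 200
  no shutdown
  state enabled

port-profile type ethernet SYSTEM-UPLINK
  switchport trunk allowed vlan add 200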

So the question is: is this behaviour normal, or is there a problem? Disabling MAC pinning causes a brief interruption to the uplink, so it is not a viable long-term solution for the customer when they want to add new VLANs. Is there a way to add new VLANs in this scenario without any network downtime, however brief?

Thanks

2 Replies

Robert Burns
Cisco Employee

Full active thread will be maintained here:

https://supportforums.cisco.com/message/3420129#3420129

Solution will be posted here once resolved.

Robert

Closing the loop on this.  You're hitting bug CSCto00715.

Symptom:
New MAC addresses are not learned in the VEM's L2 table, even though the MAC address table has not yet overflowed. Running

vemcmd show l2-emergency-aging-stats | grep "Number of entries that could not be inserted:"

will show an extremely large number.


Conditions:
Nexus 1000V VEM running the SV1.4 release, on a host with two CPU cores. The issue is a race condition, so it may occur intermittently.

Workaround:
Reboot the ESX/ESXi host.

This is fixed in the 1.4a release.
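If you want to confirm you're hitting this before scheduling the reboot, you can also check whether the affected VM's MAC appears in the VEM's L2 table for the new VLAN, from the ESX/ESXi console (VLAN 200 here is just an example):

vemcmd show l2 200

If the VM's MAC is missing from that table while the emergency-aging counter above keeps climbing, you are almost certainly seeing this bug.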

Regards,

Robert
