With Vishal Mehta, Ali Haider and Gunjan Patel
Welcome to this Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about troubleshooting common features of Cisco UCS and Nexus 1000v with experts Vishal Mehta, Ali Haider and Gunjan Patel.
During the live webcast, Vishal covered all the features from a perspective of configuration, performance, best practices, and troubleshooting.
As TAC engineers, our experts constantly receive requests from customers to provide configuration and troubleshooting documentation on how to configure common features of both Cisco UCS and Nexus 1000v. Vishal tackled these issues with a primary focus on private VLAN, quality of service, multicast, load balancing, and other layer 2 configurations involving Cisco UCS and Nexus 1000v.
Vishal also covered common TAC cases that participants can use to design their networks more efficiently and finished his presentation with a short demo on troubleshooting scenarios between UCS and Nexus 1000v.
This is a continuation of the live webcast.
Vishal Mehta is a customer support engineer for Cisco's Data Center Server Virtualization TAC team based in San Jose, California. He has been working in the TAC for the past three years with a primary focus on data center technologies such as Cisco Nexus 5000, Cisco UCS, Cisco Nexus 1000v, and virtualization. He has presented at Cisco Live in Orlando 2013 and will present at Cisco Live Milan 2014 (BRKCOM-3003, BRKDCT-3444, and LABDCT-2333). He holds a master's degree from Rutgers University in electrical and computer engineering and has CCIE certification (number 37139) in routing and switching and service provider.
Ali Haider is a customer support engineer for Cisco's Data Center Server Virtualization TAC team based in San Jose, California. He has been working in the TAC for the past 14 months, with a primary focus on data center technologies such as Cisco UCS, Cisco Nexus 1000v, and virtualization. Ali completed his undergraduate degree from the University of Texas at Austin in management information systems and holds CCNA and VCP certifications.
Gunjan Patel is a TAC engineer in the San Jose Server Virtualization Team with a primary focus on Cisco Nexus 1000v and virtualization. He has been with the TAC for the past two years and presented at Cisco Live Milan 2014. Gunjan holds a BS degree in electrical engineering from Northwestern Polytechnic University and is currently working on achieving his MS degree in computer science and engineering at Santa Clara University.
Remember to use the rating system to let Vishal, Ali and Gunjan know if you have received an adequate response.
Our experts might not be able to answer each question due to the volume expected during this event. Remember that you can continue the conversation in Data Center, sub-community Server Networking discussion forum shortly after the event. This event lasts through February 28, 2014. Visit this forum often to view responses to your questions and the questions of other community members.
Thanks for an interesting session, and sorry to bother you, but I was very confused by your statement on p. 47:
UCS supports all 3 modes of MS-NLB: Unicast, Multicast, Multicast+IGMP
MS Unicast NLB is not supported on UCS in EHM. The issue is that MS-NLB relies on MAC flooding, which is not supported in EHM. You will need to use multicast mode or configure UCS in switching mode.
Can you please clarify?
Thanks and kind regards
Yes, you are correct about the traffic-forwarding logic of the Fabric Interconnect.
UCS does not support Unicast MS-NLB mode, and the detailed reason for that is listed below:
The MS-NLB default setting is unicast mode. In unicast mode, NLB replaces the actual MAC address of each server in the cluster with a common NLB MAC address, usually something like 02bf:xxxx:xxxx. When all the servers in the cluster have the same MAC address, any packet forwarded to that address is sent to all members of the cluster.
However, a problem with this configuration arises when the servers in the NLB cluster are connected to the same switch: you cannot have two ports on the switch register the same MAC address.
MS-NLB solves this problem by masking the cluster MAC address. It creates a "new" MAC address and assigns it to each server in the NLB cluster, based on the host ID of the member. These MACs are usually 0201:xxxx:xxxx, 0202:xxxx:xxxx, 0203:xxxx:xxxx, and so on for each node in the cluster. The 02bf address will never show up in the MAC address table of any switch.
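The masking scheme above can be sketched in a few lines of Python. This is illustrative only; `derive_nlb_macs` is a hypothetical helper, assuming the commonly documented unicast-mode convention where the cluster MAC is 02:bf followed by the four octets of the virtual IP, and each masked host MAC is 02:&lt;host ID&gt; followed by the same octets:

```python
# Illustrative sketch of MS-NLB unicast-mode MAC derivation.
# Assumed convention: cluster MAC = 02:bf:<VIP octets> (ARP payload only),
# per-host "masked" MAC = 02:<host ID>:<VIP octets> (registered on the switch).

def derive_nlb_macs(virtual_ip: str, host_ids: list[int]) -> dict:
    octets = [int(o) for o in virtual_ip.split(".")]
    vip_part = ":".join(f"{o:02x}" for o in octets)
    cluster_mac = f"02:bf:{vip_part}"      # never learned by any switch
    host_macs = {                          # what each node's NIC actually uses
        hid: f"02:{hid:02x}:{vip_part}" for hid in host_ids
    }
    return {"cluster": cluster_mac, "hosts": host_macs}

macs = derive_nlb_macs("10.1.1.100", [1, 2, 3])
print(macs["cluster"])    # 02:bf:0a:01:01:64
print(macs["hosts"][2])   # 02:02:0a:01:01:64
```

Note how the 02:bf cluster address exists only in the ARP reply, while each switch port learns a unique 02:01/02:02/02:03 address.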
Now the problem is how do clients get to 02bf if no device is registering that MAC on a switchport?
To get frames delivered to all nodes in the cluster, NLB relies on an ARP broadcast.
When the router sends an ARP request for the virtual IP address, the reply contains an ARP header with the actual NLB cluster MAC address, 02bf:xxxx:xxxx.
The clients use the MAC address in the ARP header, not the Ethernet header. The switch uses the MAC address in the Ethernet header, not the ARP header.
The issue arises when a client sends a packet to the NLB cluster with the destination MAC address set to the cluster MAC address 02bf:xxxx:xxxx: the switch looks up 02bf:xxxx:xxxx in its CAM table. Since no port is registered with the NLB cluster MAC address, the frame is delivered to all switch ports (flooding).
So MS-NLB unicast mode depends on the switch flooding packets destined to the unknown unicast cluster MAC address. Per the UCS End-Host Mode rule, unicast traffic coming in on an uplink destined to an unknown MAC address is dropped.
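The difference between a regular L2 switch and a Fabric Interconnect in End-Host Mode can be sketched as a simple forwarding decision. The function and port names here are hypothetical, a minimal model of the rule described above rather than actual FI logic:

```python
# Illustrative sketch: what happens to a frame whose destination MAC is
# not in the MAC (CAM) table. A regular L2 switch floods it; a Fabric
# Interconnect in End-Host Mode drops it when it arrives on an uplink,
# because EHM does not flood unknown unicast from uplinks.

def forward(mac_table: dict, dst_mac: str, ingress: str, end_host_mode: bool) -> str:
    if dst_mac in mac_table:
        return f"forward to {mac_table[dst_mac]}"   # known unicast
    if end_host_mode and ingress == "uplink":
        return "drop"                               # EHM rule: no unknown-unicast flooding
    return "flood"                                  # classic switch behavior

cam = {"02:01:0a:01:01:64": "Eth1/1"}               # only masked host MACs get learned
# Client traffic to the 02:bf cluster MAC never finds a CAM entry:
print(forward(cam, "02:bf:0a:01:01:64", "uplink", end_host_mode=False))  # flood
print(forward(cam, "02:bf:0a:01:01:64", "uplink", end_host_mode=True))   # drop
```

This is exactly why unicast MS-NLB works behind a switch (or UCS in switching mode) but breaks behind an FI in End-Host Mode.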
Hope this answer helps to clarify why MS-NLB Unicast mode is not supported and will not work on UCS in End-Host Mode.
Thank you for the question.
Thanks, Vishal, for this detailed explanation! Much appreciated!
P.S. The PDF still shows on p. 47 that NLB unicast is supported on UCS. Should this not be corrected?
Hello there experts,
Thank you for covering this subject. My question is: in my disjoint L2 setup, VLAN 1 is present on all uplinks. Is that going to be an issue? Please advise.
I had customers who used VLAN 1 for production traffic and therefore could not implement disjoint L2, for the reason you mention. They had to fix this first by moving off VLAN 1. Although we preach not using VLAN 1 (the same applies for VSAN 1), it happens all the time.
VLAN 1 is the default VLAN on all the uplinks, and it's also the default native VLAN for the uplink interface(s)/port-channel(s). It will be present on all the uplinks regardless of disjoint-l2 configuration.
VLAN 1 is also used for LACPDU exchange in some systems, so it cannot be removed from the uplink interfaces; however, you can remove it from the upstream switch trunk.
Walter brings up a good point: running production traffic on VLAN 1 (especially in a disjoint-L2 environment) is not a good idea.
Only one of the uplinks gets selected as the Designated Receiver for each VLAN. Since VLAN 1 is present on both uplinks, the FI will select only one of them at random, and your broadcast/multicast traffic might not work as expected.
Please note that ARP might not work in that case. That is why it is not recommended to use VLAN 1 for production traffic, especially in a disjoint-L2 environment.
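The Designated Receiver behavior above can be modeled in a short sketch. The helper name and port labels are hypothetical; the FI's real election logic is internal, and all this sketch assumes is the stated rule that exactly one uplink per VLAN receives broadcast/multicast:

```python
import random

# Illustrative sketch: one uplink per VLAN is elected Designated Receiver
# (DR), and broadcast/multicast for that VLAN is accepted only on the DR
# uplink. If VLAN 1 spans two disjoint L2 domains, whichever domain loses
# the election has its broadcasts (including ARP) silently dropped.

def elect_dr(uplinks_per_vlan: dict[int, list[str]]) -> dict[int, str]:
    # Model the selection as random among the uplinks carrying the VLAN.
    return {vlan: random.choice(links) for vlan, links in uplinks_per_vlan.items()}

dr_map = elect_dr({
    1:  ["Eth1/1 (prod L2)", "Eth1/2 (backup L2)"],  # VLAN 1 on both domains
    10: ["Eth1/1 (prod L2)"],                         # properly disjoint VLAN
})
print(dr_map)  # VLAN 1's DR lands on only ONE of the two L2 domains
```

VLAN 10 behaves predictably because only one uplink carries it; VLAN 1's DR could land on either domain, which is the failure mode described above.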
Please refer to the note in this white paper:
Hope that helps.