08-06-2013 09:43 AM
Hello,
Working on a project to deploy VXLAN for vCloud Director, which requires IP multicast for the overlay encapsulation. All host blades are running ESXi in UCS chassis. We've dedicated VLAN ID 4050 to the VXLAN multicast traffic; VLAN 4050 does not currently have an IP gateway (it is all Layer 2). Because of this, we needed an IGMP snooping querier, which we configured on the UCS FI.
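For reference, the querier lives in a UCSM multicast policy, roughly along these lines from the UCSM CLI (the policy name is made up here, and the exact keywords may differ by UCSM release):
UCS-A# scope org /
UCS-A /org # create mcast-policy vxlan-querier
UCS-A /org/mcast-policy* # set snooping enabled
UCS-A /org/mcast-policy* # set querier enabled
UCS-A /org/mcast-policy* # set querierip 192.168.179.3
UCS-A /org/mcast-policy* # commit-buffer
The policy is then referenced by VLAN 4050.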
However, the upstream Nexus switches do not see the FI as IGMP Querier:
N5K# show ip igmp snooping querier vlan 4050
Vlan  IP Address       Version  Expires   Port
UCSA(nxos)# show ip igmp snooping querier vlan 4050
Vlan  IP Address       Version  Expires   Port
4050  192.168.179.3    v3       00:00:51  Switch querier
This same Nexus5K does see the querier as the attached router interface for other networks where a router is attached. Here is an example:
N5K# show ip igmp snooping querier vlan 85
Vlan  IP Address       Version  Expires   Port
85    149.173.85.1     v2       00:04:02  port-channel207
The UCS FIs see this too:
UCS(nxos)# show ip igmp snooping querier vlan 85
Vlan  IP Address       Version  Expires   Port
85    149.173.85.1     v2       00:04:09  port-channel1
What limitations exist when the UCS FI is configured as the IGMP snooping querier? Should this work?
Thanks,
08-06-2013 03:18 PM
Hi,
If the FI is in end-host mode, it should not send IGMP queries to the upstream network (only southbound to the blades). You can try configuring the IGMP querier for VLAN 4050 on the 5k instead, or disable IGMP snooping on the FI, but then you might get a lot of unwanted multicast traffic.
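If you go the 5k route, a minimal sketch would be along these lines (the querier address is just an example; with no gateway on VLAN 4050, any otherwise unused address on that segment works):
N5K# configure terminal
N5K(config)# vlan configuration 4050
N5K(config-vlan-config)# ip igmp snooping querier 192.168.179.1
N5K(config-vlan-config)# end
N5K# show ip igmp snooping querier vlan 4050
Both FIs should then learn the 5k as the querier on their uplink port-channel, similar to what you see for VLAN 85.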
08-07-2013 06:17 AM
Thanks, Brian, great stuff.
Dumb question, when you configure the Fabric Interconnect as IGMP Querier, exactly where does the IGMP Querier process exist? On fabric interconnect A or B? Or Both?
Feel free to elaborate, sounds like you understand this stuff.
--Jim
08-07-2013 06:55 AM
Hi Jim,
Good question! The IGMP querier can exist on one FI or both, depending on how the VLANs are configured in UCSM. It is possible to create a VLAN on just one fabric interconnect and attach a multicast policy with an IGMP querier to that VLAN. The other option is to create the VLAN globally with the multicast policy, in which case the IGMP querier will run on both fabric interconnects independently. To me, the IGMP querier option in UCS end-host mode seems most useful in pod-type environments.
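Roughly, the two options look like this from the UCSM CLI (the VLAN and policy names here are just examples, and the exact keywords may differ by release):
Global VLAN (querier runs on both FIs):
UCS-A# scope eth-uplink
UCS-A /eth-uplink # create vlan vxlan-transport 4050
UCS-A /eth-uplink/vlan* # set mcastpolicy vxlan-querier
UCS-A /eth-uplink/vlan* # commit-buffer
VLAN on fabric A only (querier runs only on FI-A):
UCS-A# scope eth-uplink
UCS-A /eth-uplink # scope fabric a
UCS-A /eth-uplink/fabric # create vlan vxlan-transport-a 4050
UCS-A /eth-uplink/fabric/vlan* # set mcastpolicy vxlan-querier
UCS-A /eth-uplink/fabric/vlan* # commit-buffer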
08-07-2013 07:31 AM
Hey Brian, Great info. Thank you.
In our use case, VLAN 4050 is defined on both FIs, so the querier would be too.
This is confusing to me.
Given there is no Layer 2 transport between the FIs, multicast traffic from a source host vNIC on fabric A would need to traverse the upstream core to reach a receiver vNIC on fabric B.
But if we have a querier defined on both fabric interconnects, the IGMP joins from the receivers will never be forwarded toward the core; i.e., in my experience the querier, as the acting mrouter, would "eat" the join, right?
Am I missing something here? How would this ever work? In our use case, other than disabling IGMP snooping, how would receivers connected to fabric A ever hear sources on fabric B?
Happy to be educated.
--Jim
08-07-2013 10:04 AM
Hi Jim,
Correct. If the fabric interconnect is in end-host mode with a querier defined, it will not forward the joins to the core. However, if the fabric interconnect is in switch mode (which is not how it is normally configured), it will forward them to the core.
In your use case, a querier on the 5k should do what you are looking for. The querier on UCS might be more useful for something like a pod design where, say, blades 1 and 2 both have vNICs with fabric failover and are sending multicast traffic on the same FI.
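One way to sanity-check the behavior with standard NX-OS snooping show commands: look at which port each switch has learned as the mrouter port for VLAN 4050. With the querier on the 5k, the FIs should learn their uplink port-channel as the mrouter port and relay joins toward it (much like your VLAN 85 output); with the querier local to an end-host-mode FI, the joins stop at the FI as described above.
N5K# show ip igmp snooping mrouter vlan 4050
UCSA(nxos)# show ip igmp snooping mrouter vlan 4050
UCSA(nxos)# show ip igmp snooping groups vlan 4050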
08-07-2013 06:57 AM
The querier gets enabled wherever you've defined the VLAN, so it will be enabled on both FIs if the VLAN is defined on both FIs.
Thanks,
Shankar