26740 Views · 15 Helpful · 13 Replies

Multicast on UCS

Lukas Mazur
Level 1

Hi all,

Can anybody explain to me in detail how multicast traffic is forwarded between UCS blades?

MAC addresses are learned on server ports; those managed by the UCS won't age out.

Unknown unicast traffic is dropped on uplink ports.

Broadcast traffic is forwarded to all servers.

Regarding multicast traffic, I only found the following statement:

Server-to-server multicast and broadcast traffic is sent through all uplink ports in the same VLAN.

But two blades on UCS don't receive multicast traffic from each other by default.

Any ideas regarding this issue?

Thanks, Lukas

13 Replies

danielearena
Level 1

Hi Lukas,

We have the same problem.

We need to cluster two RedHat 5.4 hosts installed on two different B-Series UCS blades.

They heartbeat each other on VLAN 2300. The interfaces on the hosts are configured to be active on the same Fabric Interconnect, so the heartbeat traffic does not need to cross any other switch for the hosts to communicate.

Doing a tcpdump on eth0.2300 on each host, I can clearly see multicast packets leaving hostA that get blackholed: they never reach hostB.

Issuing a "show ip igmp snooping groups" on the Fabric Interconnect shows that, soon after RedHat boots, hostA and hostB register themselves to receive the multicast flow. While this entry is active, there are no flow problems for multicast between the hosts. When the entry quickly clears out, the multicast flow gets blackholed.

The hosts never register with the Fabric Interconnect again: they don't send another join message, so no snooping entry is created.

RedHat support suggests dealing with multicast problems on the switch, but the solution they give applies to Catalyst switches, and we are in a Nexus environment. It also seems impossible to insert static CAM entries or to disable IGMP snooping on the Fabric Interconnect, so there is no room for further testing.

One solution would be to change the heartbeat from multicast to broadcast, but RedHat clearly states this is not possible for version 5.4.

I have one question: isn't the Fabric Interconnect supposed to flood multicast packets out all available ports in VLAN 2300 when there is no snooping entry? That would be one solution to my problem.

If anyone has some answers, please help (:

Thank you, bye

I'm resurrecting this thread since I think I'm seeing the same issue while setting up an MS NLB cluster in IGMP multicast mode. Did anyone find a resolution for this?

Hi Harold,

Cisco TAC gave us a solution many days ago, but I forgot to write it here, sorry.

In our environment, the UCS system is connected upstream to a Nexus 7000. There was no way to change the multicast behaviour of RedHat (RedHat support didn't help much), so we had to work only on the Nexus (thank you Kartic!).

You have to constantly poll the hosts with multicast queries that trigger multicast reports, which in turn populate the IGMP snooping tables on the UCS and the upstream switches.

You can turn on the IGMP snooping querier feature on the Nexus 7000 by issuing these commands on the VLAN where you need multicast:

vlan 100
  ip igmp snooping querier 192.168.1.100    [subnet doesn't matter, use any IP you want]
  no ip igmp snooping link-local-groups-suppression    [this ensures that IGMP reports containing two groups can pass through switches successfully]
  name vlan-name

Now you should see multicast traffic flow through the entire broadcast domain.

Hope this helps, bye

One of our customers had the same issue: they didn't have any multicast router sending the membership queries, so after 3 minutes the membership was removed from the interconnect.

Their upstream switch couldn't send any snooping queries, so the customer wrote a simple Perl script that sends the query, which solved the issue.
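
The Perl script itself was never posted. As a rough sketch of the same idea in Python (assuming Scapy is installed and the script runs as root on a host in the affected VLAN; the interface name and interval below are placeholders), something like this could send a periodic IGMPv2 general query so the hosts keep answering with reports:

#!/usr/bin/env python3
# Hypothetical stand-in for the Perl script mentioned above: periodically emit an
# IGMPv2 general membership query so hosts answer with reports and the IGMP
# snooping tables on the Fabric Interconnect and upstream switches stay populated.
import time

from scapy.all import IP, IPOption_Router_Alert, send
from scapy.contrib.igmp import IGMP

IFACE = "eth0.2300"     # interface in the affected VLAN (assumption)
INTERVAL = 60           # seconds, well under the ~3 minute snooping timeout

def send_general_query():
    # General query: IGMP type 0x11, group 0.0.0.0, sent to 224.0.0.1
    # with TTL 1 and the Router Alert IP option.
    pkt = (IP(dst="224.0.0.1", ttl=1, options=[IPOption_Router_Alert()])
           / IGMP(type=0x11, gaddr="0.0.0.0"))
    send(pkt, iface=IFACE, verbose=False)

if __name__ == "__main__":
    while True:
        send_general_query()
        time.sleep(INTERVAL)

The proper fix, of course, is a real querier (or a PIM router) on the VLAN as described above; a script like this is only a stopgap.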

John

Hi Daniel,

Many thanks for your answer. You just saved my life :P

It seems the command syntax has changed in the newer Nexus versions.

The commands are:

vlan 100
  name vlan-name

vlan configuration 100
  ip igmp snooping querier 192.168.1.100
  no ip igmp snooping link-local-groups-suppression

 

Hope this will be useful for future reference.

Keep in mind that this line will cause the switch to stop flooding a link-local multicast group as soon as anything on the VLAN joins that group:

 no ip igmp snooping link-local-groups-suppression

This essentially puts you a single packet away from an outage for each link-local multicast group; at the very least it will drop that group's traffic to devices that behave normally and do not join the link-local multicast range.

chaausti
Cisco Employee

By default, most switches will suppress link-local (224.0.0.0/24) group joins and flood these groups like broadcasts, but this can be disabled with the following command:

no ip igmp snooping link-local-groups-suppression

With this command configured, the behavior is identical to normal until something joins a link-local group. Once a group has any member, the switch stops flooding that group's traffic and starts forwarding it only to the ports that joined the group. This was a good workaround for an old (and long since fixed) bug where the switch ignored an entire IGMPv3 join packet if it contained any link-local groups, but outside that specific scenario the command is almost never needed.

It is not necessary for a device to join a link-local group, because those groups are flooded by default, so this command is rarely needed. Make sure to thoroughly investigate whether it is in your upstream switch config, because you may be one join packet away from the network no longer flooding link-local multicast groups.

I'm posting a few slides that I prepared for a customer who had issues with multicast, highlighting the changes in UCS firmware after version 2.1.

Hi,

We have different multicast groups, such as 239.255.255.255 and 225.0.0.36, and different groups are used to receive multicast on different vNICs. All was working fine before updating to version 2.1. We recently upgraded the UCS to version 2.1, and after that we do not receive multicast traffic on the blades for group 225.0.0.36. We opened a case and TAC says "this issue is because the group's IP multicast MAC address is the same as that of a link-local address 224.0.0.x; when the middle octets contain zeroes (x.0.0.y), the address is classified as link-local". They say to change the multicast group, but we cannot change it. Can anyone suggest a solution here?
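
For what it's worth, the overlap TAC describes follows from the way IPv4 multicast addresses are mapped to Ethernet MAC addresses: only the low 23 bits of the group address are copied into the MAC, so 225.0.0.36 and the link-local group 224.0.0.36 share the same destination MAC. A small illustrative Python snippet (not Cisco tooling, just the arithmetic) shows the collision:

# Illustration of the 23-bit IPv4-multicast-to-MAC mapping that TAC refers to:
# 225.0.0.36 and the link-local group 224.0.0.36 end up with the same multicast MAC.
import ipaddress

def multicast_mac(group: str) -> str:
    """Return the Ethernet MAC address used for an IPv4 multicast group."""
    ip = int(ipaddress.IPv4Address(group))
    low23 = ip & 0x7FFFFF                  # only the low 23 bits of the group survive
    mac = 0x01005E000000 | low23           # 01:00:5e is the IPv4 multicast MAC prefix
    return ":".join(f"{(mac >> s) & 0xFF:02x}" for s in range(40, -1, -8))

print(multicast_mac("225.0.0.36"))       # 01:00:5e:00:00:24
print(multicast_mac("224.0.0.36"))       # 01:00:5e:00:00:24 -- same MAC, link-local range
print(multicast_mac("239.255.255.255"))  # 01:00:5e:7f:ff:ff -- no overlap

Any group of the form x.0.0.y collides with 224.0.0.y at the MAC layer, which is why TAC suggested moving to a group whose middle octets are not zero.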

Hi,

I have a similar issue... 

We have the FlexPod solution with FIs in end-host mode and N5Ks upstream. We cookie-cut VMware virtual environments for guest system engineering purposes. All routing for a given guest network is done by a Linux VM running GNS3-simulated routers (referred to as vRouters in the rest of this post). These vRouters run PIM dense mode for their respective subnets.

The problem is that no IGMP join reports make it to the vRouters unless the client is on the same ESXi host and an explicit join command is configured on the PIM port of the vRouter. The second a machine outside that ESXi host (another host or a physical machine) requests a multicast stream, no joy. If the vRouter is shut down and a physical router is brought up and configured on the VLAN instead, multicast works as expected.

I am very new to the fundamentals of multicast and your .pptx file is hard for me to understand.

What do you suggest as a solution?

You should probably open a support ticket and/or post a separate question regarding an IGMP querier running inside a blade.

Multicast works well on recent code when the IGMP querier is on the upstream network, but your question is about running PIM in a blade, which probably needs to be addressed separately.

Running PIM inside a blade sounds to me like it goes against what end-host mode aims to provide, and page 4 of the document says that the FI will not send the queries upstream. The document does not specify whether the UCS will drop all queries from being sent upstream or just the ones generated by the FI, so it is unclear whether this will work.

If you are able to test something, disabling IGMP snooping on the UCS might be worth a try. This will flood all multicast in the VLAN, so it is not going to work well if you have a large amount of multicast traffic. It is configured by applying a multicast policy to the VLAN.

ericleun
Cisco Employee

 

If I set "IGMP Snooping Querier State" to "Enabled" in the Multicast Policy, what IPv4 address should I use?

The IP address of the FI, or the upstream switch's IP?

You can use any IP address you like, even 1.2.3.4.

The IP only matters when you are doing PIM, which sends its own queries and makes configuring a querier on the switches unnecessary.
