Redundant Multicast switching

sjohns
Level 1

All, I have a customer with an L2 network with multiple VLANs, consisting of multiple access switches and two L2/L3 core switches.  Both core switches have an SVI for each VLAN and use HSRP to provide redundant default gateways.

The main core switch is a Cisco 6500 (running 12.2(33)SXJ6) and the backup core switch is a Cisco 4500.

One of the applications on this network uses multicast and needs the protection of the redundant core switches (this is critical public infrastructure, so if one switch fails the other must continue to support the service).

I initially tried configuring "ip pim dense-mode" on the VLAN interfaces (one of the solutions in Cisco doc #68131, which describes a problem and solutions for configuring multicast on an L2 network) to make the switch act as an "mrouter".  When I configure PIM on the applicable VLANs on one switch (the main core), the multicast applications work properly.  But when I then configure PIM on the applicable VLANs on the other (backup) switch, IGMP snooping seems to fail and all interfaces in the VLAN get the multicast traffic whether they have joined the group or not (effectively causing a DoS attack on the interfaces that haven't joined the group).
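Roughly, what I applied on the main core was along these lines (the VLAN number and addressing here are placeholders, not the real ones):

ip multicast-routing
!
! example SVI - VLAN number and addressing are placeholders
interface Vlan10
 ip address 192.168.10.2 255.255.255.0
 ip pim dense-mode
 ! existing HSRP default gateway for the VLAN
 standby 10 ip 192.168.10.1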

Another solution from document #68131 is to enable the IGMP snooping querier feature on the L2 switches (and, I assume, remove the PIM configuration).  This should make the switch act as an mrouter "proxy".

I have also read the chapter (chapter 38) in the IOS configuration guide on "Configuring IGMP Snooping", which has a section on configuring redundant IGMP snooping queriers.  I am thinking of trying the configuration that section suggests, where I would remove PIM and instead configure "ip igmp snooping querier" on the appropriate VLAN interfaces on both switches.  Unfortunately I do not have a lab to test this in, so I am currently limited to trying it on the actual network (scary!).
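In other words, on both core switches the change would be roughly as below (again, VLAN 10 is only a placeholder; I understand the exact location of the querier command can differ between the 6500 and the 4500, so I would need to double-check each platform's IGMP snooping chapter):

interface Vlan10
 no ip pim dense-mode
 ! let the switch originate IGMP general queries for the VLAN
 ip igmp snooping querier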

So, my questions.  First, and in general, does anyone have any words of wisdom for me?  Second, if my network has only mrouter "proxies" and no actual mrouter (as I believe will be the case if I am only using the querier configuration), will that cause any problems with the multicast applications?

I am under some immediate pressure to solve this redundancy issue so any help would be greatly appreciated.

7 Replies

Jon Marshall
Hall of Fame

Steve

Firstly, do you need to route the multicast stream between VLANs?  If you do then you have to use PIM.  The IGMP snooping querier is only used when you want multicast within a single VLAN and therefore do not have PIM enabled on any interface.

Secondly, when you enabled dense mode you say the 4500 was flooding it to all ports within the VLANs?  How are the 6500 and 4500 interconnected, i.e. is it a trunk and do they run HSRP between them for the same VLANs, or something different?

How are the access switches connected to the 4500 and 6500, i.e. do you uplink the access switches to both?

What IOS/supervisor is the 4500 running?

There is also the option of using sparse mode rather than dense mode for PIM, but it may be worth trying to see why the 4500 flooded the stream to all ports.
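If you did want to try sparse mode, a minimal sketch on each core would be something like the below; the RP address is purely an example and you would need to decide what (and where) the RP should be:

ip multicast-routing
! static RP - example address only
ip pim rp-address 10.1.1.1
!
interface Vlan10
 ip pim sparse-mode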

Finally, what is the actual group address of the multicast stream?

Jon

Jon,

Thank you for the quick response.

Yes, we will need to route the multicast streams; the senders are in one VLAN and the receivers are in another.  Can I have both PIM and the IGMP querier configurations on the same (routed) VLAN interface?

The details of the network: the two core switches, along with the access/edge switches, are all interconnected via an optical network made up of multiple ONS 15454 M6 chassis in a loop that is around 10 miles long.  All the switches connect, via dot1q trunks, to an interface on an Xponder card in a local M6 chassis, which provides L2 connectivity end to end.  For the optical transport the traffic is tagged Q-in-Q (SVLAN/CVLAN), but as far as the switches are concerned they are all connected via a dot1q trunk, including the two core switches.  The two core switches are the only switches with SVIs, but each has an SVI for every VLAN and they run HSRP for each VLAN.

Note that if I enable IP PIM on one switch the multicast works fine.  It is only when I try to enable it on both that the flooding to all interfaces starts.

The 4500 is running IOS-XE 03.02.00.SG, I will have to get back to you on the Sup.

I don't think sparse mode will work; first of all, I don't know what I would use for an RP (and it would also have to be redundant), and the only L3 devices I have in the network are the L2/L3 core switches.

Lastly the multicast groups are all in the 230.0.0.0/24 subnet.

Steve

Can I have both PIM and the IGMP querier configurations on the same (routed) VLAN interface?

You don't need to.  The IGMP querier function is only used when you don't have PIM enabled on the VLAN interface.  When you enable PIM on an L3 interface it sends out IGMP queries, and the switch listens to the responses with IGMP snooping so it can map the multicast MAC address to the correct ports.  If you don't have PIM enabled, something still needs to send those IGMP queries, otherwise the switch has nothing to listen for; that is what the IGMP snooping querier does.  So it's one or the other: with PIM enabled you do not need the querier.
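As a rough check, with PIM enabled you should be able to see who is doing the querying with something like the following (VLAN 10 is just an example):

show ip igmp interface vlan 10   - shows the IGMP querying router for the VLAN
show ip igmp snooping mrouter    - shows which ports the switch has learnt as mrouter ports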

To be honest I didn't understand a lot of what you said about your physical connectivity, other than each switch sees the other switches via trunk links.

So when you enable PIM on the 6500 only, multicast works properly for all clients on all switches.  When you enable it on both switches, multicast is then flooded to all interfaces on both switches?

Can you just clarify what you mean by all interfaces, i.e. do you mean all end devices on all the switches start seeing the multicast traffic?

When you enabled PIM on the 4500, did you enable it on all L3 interfaces at the same time?

I am just trying to get a picture of which switches were affected and how they connect back to the core switches.  Like I say, I did not really follow your setup because I have no experience of that.  Is each access switch in effect connected to both core switches, or does each connect to only one or the other?

Jon

Jon,

Once again thanks for the response.

Sorry that I wasn't clear on the carrier optical system, but for all intents and purposes you should visualize all of the edge and core switches as connected to one big shared L2 switch (with full MAC learning, flooding, etc.) via 802.1Q trunks.

The issue is that when I enable PIM on both core switches, all of the multicast traffic floods out all ports on all of the VLANs configured with PIM, and it is seen by the devices connected to those ports in those VLANs (for the devices that are not members of the group this constitutes a DoS type of attack).  This was confirmed by capturing with Wireshark to verify that the non-member ports are receiving the multicast traffic.  When I enable PIM only on the 6500 core switch, the flooding stops and only the ports that have joined the group get the traffic.

In this network there are around 10 VLANs but only three need multicast.  When I enable PIM it is on those three VLANs, all three on either one core switch (the working config) or both core switches (the flooding config).

I am thinking that using redundant PIM on both switches will not work (but is there any confirmation of that?).

So the advice I should give to my customer is either: one, use PIM with no redundancy but with the ability to route between VLANs; or two, use the IGMP snooping querier to get redundancy, but with all multicast limited to its own VLAN (so they would have to reconfigure some of their senders and receivers to be on the same VLAN).

Is that right?

Steve

There should be no reason why you cannot have PIM enabled on both switches for the same VLAN.  The PIM interfaces should work out between them which is responsible for sending the queries and which is responsible for forwarding the actual multicast stream onto the VLAN.

Without going into the specific election details, one of the PIM interfaces per VLAN will be responsible for sending the queries.  As long as IGMP snooping is enabled on all switches, each switch should be able to see the answers to these queries, because at L2 the VLAN exists on all switches.  And one PIM interface per VLAN will be forwarding the data; it could be the same interface or the other one ("interface" here being the L3 SVI for that VLAN).

So yes, you should be able to have PIM enabled on both switches and it should still work. It sounds more like an issue to do with IGMP snooping and the switch not seeing the responses.
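When you next get access, it would be worth checking on each core switch which one has actually won each role; something like this should show it (VLAN 10 is just an example):

show ip pim interface vlan 10    - shows the PIM DR for the VLAN
show ip igmp interface vlan 10   - shows which router is acting as the IGMP querier
show ip mroute                   - shows which (*,G)/(S,G) entries are actually forwarding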

What happens if you enable PIM only on the 4500, i.e. does it work properly?

Jon

Jon,

Again, thanks for the response.

Enabling PIM on both core switches should work.

Could the issue be the version of IGMP the different switches are running?  It appears all the edge switches, along with the 4500, are running v2, but I have some indications that the 6500 is running v3.  Could that be an issue?

Also, I noticed that all of the edge switches (which are running IGMP snooping, I checked) seem to be using "pim-dvmrp" for the multicast router learning mode.  Could that be part of my issue?
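For reference, these are the sorts of commands I have been using to check (VLAN 10 is just an example, and command names may vary slightly between platforms).  If the version mismatch does turn out to matter, I assume I could force the 6500 SVI back to v2 with "ip igmp version 2" under the VLAN interface:

show ip igmp interface vlan 10          - shows the IGMP version running on the SVI
show ip igmp snooping vlan 10           - shows the snooping state and the mrouter learning mode
show ip igmp snooping mrouter vlan 10   - shows which ports are being treated as mrouter ports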

I have not, in fact, tested running PIM on just the 4500 (which, by the way, is using a Sup 7-E).  My next chance to do that test will be after the new year.  I will keep you posted.

Steve

Could the issue be the version of IGMP the different switches are running?  It appears all the edge switches, along with the 4500, are running v2, but I have some indications that the 6500 is running v3.  Could that be an issue?

I don't think it is an issue with the different versions (although I could be wrong).  IGMP snooping is done on a per-switch basis as far as I know, i.e. there is no need for the switches to communicate IGMP snooping information between themselves.

Also, I noticed that all of the edge switches (which are running IGMP snooping, I checked) seem to be using "pim-dvmrp" for the multicast router learning mode.  Could that be part of my issue?

No, that is the correct learning mode.

I have not, in fact, tested running PIM on just the 4500 (which, by the way, is using a Sup 7-E).  My next chance to do that test will be after the new year.  I will keep you posted.

It would be worth doing this just as a sanity check.

Basically, each switch should be responsible for its own IGMP snooping, and as long as IGMP queries are being sent (and they would be with PIM enabled on the L3 interfaces), each switch should be able to map the multicast group MAC address to only those ports it has seen an IGMP report come from.

What we may need to do is start looking at the IGMP outputs when only the 6500 is set up and then when both are set up, and perhaps some debugging as well.
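When you get the chance, outputs along these lines from both core switches and one affected access switch would be a good starting point (VLAN 10 and the group address are just examples, use one of your real VLANs and 230.0.0.x groups, exact command names can differ slightly between the 6500, 4500 and the access switches, and be careful running debugs on a production box):

show ip igmp snooping groups vlan 10    - which ports each switch has mapped to each group
show ip igmp snooping mrouter vlan 10   - which ports are being treated as mrouter ports
show ip mroute 230.0.0.1                - the L3 forwarding entry on the core switches
debug ip igmp                           - watch the queries and reports arriving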

Jon
