
Multicast Routing Help

jhutter
Level 1

Hi,

I asked a similar question before but did not get a clear answer, so I am posing the question again with some additional information.

I am configuring multicast in an environment where I have a 4506 at each site (4 total) and a 6506 as the core.

Each 4506 is connected via layer 3 to the 6506.

I have a mix of 3560s, 3548s, and 2960s connected to the 4506s and the 6506 via layer 2 trunks.

I have multiple multicast sources and hosts communicating at a time (multiple cameras sending video / multiple computers receiving video). So this is not a scenario with 1 sender and many receivers; it is many senders (~50) and some receivers (~10).

Sample Diagram:

         +--> 3560
         |
6506 --> 4506 --> 3548
  |      |
  |      +--> 2960
  |
  +--> 4506 --> 2960
       |
       +--> 3548

I configured ip multicast-routing on each of the 4506s and on the 6506.

IGMP snooping is on by default on the 3560 and 2960 switches.

CGMP is on by default on the 3548 switches.

I set up PIM sparse-dense mode and IGMP version 3 on each of the layer 3 interfaces where the 4506s and the 6506 interconnect, and on each VLAN interface that is sending or receiving multicast.
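
For reference, each of those interfaces looks roughly like this (VLAN 199 and the address are just placeholders for one of my multicast VLANs):

interface Vlan199
 ! placeholder address on an example multicast VLAN
 ip address 10.1.99.1 255.255.255.0
 ! PIM sparse-dense mode on every L3 interface carrying multicast
 ip pim sparse-dense-mode
 ! forcing IGMPv3 on receiver-facing VLANs
 ip igmp version 3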

Multicast is working throughout the network; however, I am looking to verify the configuration as I scale this out to more clients on the network.

#1 - Is it correct to use sparse-dense mode in this configuration?

#2 - Do I need to configure a rendezvous point using Auto-RP? (ip pim send-rp-announce INTERFACE scope TTL) I am not sure whether I need to designate this and what to choose. Right now I do not have this and it is working, but the documentation seems to imply that I need to designate one.

#3 - Are there any other configuration settings I should be considering? It is hard to find real-world multicast configuration examples, or people who know multicast routing well.

Thanks in advance

3 Replies

Peter Paluch
Cisco Employee

Hello,

#1 - Is it correct to use sparse-dense mode in this configuration?

Personally, I would opt either for sparse-mode (not sparse-dense-mode) or for Bidir-PIM. In your case, as there are many senders to the same group, there will be an (S,G) source tree from each sender towards either the RP or towards individual end routers with directly connected receivers. While 50 senders is a negligible number from the viewpoint of maintaining routing records in multicast routers, the resources of your multicast-enabled routers may become strained as the number of sources increases. Bidir-PIM builds a single distribution tree for a multicast group, regardless of how many senders and receivers there are.

So far, using PIM-SM is fine. If you want to achieve immediate savings in router resources, configure each router with the ip pim spt-threshold infinity global configuration command. This command prevents routers with directly connected receivers from joining the shortest-path tree and instead forces them to always remain on the RP tree.

If the scalability is of paramount concern, consider using Bidir-PIM.
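
If you decide to try Bidir-PIM, a minimal sketch (the RP address 10.255.255.1 is only a placeholder) would be the following on every multicast router in the domain:

! enable Bidir-PIM processing on this router (all PIM routers must support it)
ip pim bidir-enable
! placeholder RP address; the bidir keyword makes the groups it serves bidirectional
ip pim rp-address 10.255.255.1 bidir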

#2 - Do I need to configure a rendezvous point using Auto-RP?

No, you don't. Your routers absolutely must know the address of the RP (otherwise they will fall back to dense-mode operation thanks to the sparse-dense-mode setting), but it does not matter how they learn it. You can configure the IP address of the RP statically, or you can use a dynamic mechanism to discover it - either AutoRP or BSR. In a small network, configuring the RP statically using ip pim rp-address is just fine (this command has to be used on all routers, including the RP itself). However, in a larger network, automatic RP discovery is called for, as manual configuration can be tedious. Both AutoRP and BSR provide this; AutoRP is a Cisco-proprietary mechanism. I personally vouch for open solutions, so I would suggest using BSR if you decide to go with automatic RP discovery.
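
For the static variant, the configuration is a single line on every router, including the RP itself (the address below is a placeholder for a reachable interface of the RP):

! same command on every multicast router in the domain
ip pim rp-address 10.255.255.1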

#3 - Are there any other configuration settings I should be considering?

Unless you know you need IGMPv3, you don't need it. The two main advantages of IGMPv3 are source-specific multicast (rarely deployed) and a dedicated destination IP address for IGMPv3 messaging, which makes life easier for IGMP Snooping-enabled switches. However, for most deployments that don't need source-specific multicast, IGMPv2 is a safe bet.

Using ip pim spt-threshold infinity can help prevent the network from creating source-based trees between the sources and the final recipients; however, the savings gained by sticking to the RP-rooted tree are difficult to quantify - they depend strongly on the location of senders and receivers. Also, as the traffic may flow through suboptimal paths (from the sources to the RP and from the RP to the final receivers), it is worth checking whether these paths have sufficient bandwidth to carry the streams.

Avoid using multicast group addresses of the form

<224-239>.<0 or 128>.<0>.<0-255>

because these multicast addresses map to link-local-scoped MAC addresses that are exempt from IGMP Snooping on Catalyst switches and are always flooded.
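
To illustrate: only the low 23 bits of the group address are copied into the multicast MAC address, so, for example (239.1.1.5 being an arbitrary safe group):

224.0.0.5   -> 0100.5e00.0005  (link-local range, always flooded)
239.0.0.5   -> 0100.5e00.0005  (same MAC, also flooded)
239.128.0.5 -> 0100.5e00.0005  (same MAC again)
239.1.1.5   -> 0100.5e01.0105  (IGMP Snooping applies normally)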

I hope other friends here will add their own best practices...

Best regards,

Peter

Wow, thank you for the detailed info, Peter. I have some follow-up questions, if you don't mind.

In regards to Rendezvous points:

OK, so since I have none configured, I am running in dense mode currently. The reason why I initially did not set an RP is that I thought all traffic would flow to that point. That is not true, correct? I recently read that traffic will initially flow through the RP in sparse mode, but will eventually use the source path if it is shorter.

I would then want to configure Auto-RP. How do I set this up correctly? Do I set an interface on each core (the 6500 and all 4500s) as an RP, or do I just set up a single RP on the 6500 for the entire network? I am not sure how I select which interface to use. I segregated the multicast data to specific VLANs.

i.e.

ip pim send-rp-announce vlan199 scope 16

ip pim send-rp-discovery scope 16

And what is your feeling on the scope level (16 vs 64 vs 128)?

In regards to IGMPv3 :

Thanks for this info. I have to look at some of my documentation, but I remember there was a reason I was forcing IGMP to v3. If not, I will drop to v2. I think it was because Windows Server 2008R2 defaults to IGMPv3.

In regards to ip pim spt-threshold infinity

I’m not sure I understand the benefits of forcing traffic through the RP rather than taking the shorter path. This would force the data to flow through the RP, correct? I can’t imagine that the load on the routers would outweigh the desire for a shorter path. Wouldn’t the extra data flow itself strain the router more than maintaining the source trees?

Thanks again

Hi,

OK, so since I have none configured, I am running in dense mode currently.

It definitely looks that way.

The reason why I initially did not set an RP is that I thought all traffic would flow to that point. That is not true, correct?

If a router does not know who the RP is then the traffic cannot flow "to that point" - because "that point" is not defined. When running a PIM-SM network, each router, including the RP itself, must know who the RP is. This knowledge is important for three primary reasons:

  1. Any router can become the first hop router of a sender (if the sender is directly attached). This first hop router is then required to "register" the source at the RP by tunneling its traffic to the RP in unicast PIM Register messages. The knowledge of the RP's address is crucial, as it is the destination of this tunneled traffic.
  2. When building a shared multicast distribution tree, which is by definition rooted at the RP, it is crucial to know who the RP is so that PIM Join messages can be sent out interfaces towards the RP.
  3. When replicating multicast traffic down the shared distribution tree rooted at the RP, each router performs an RPF check to verify that the flow arrives through the appropriate interface. In the case of a shared tree, the multicast traffic must arrive through the RPF interface towards the RP.

I recently read that traffic will initially flow through the RP in sparse mode, but will eventually use the source path if it is shorter.

Yes, PIM-SM starts with traffic flowing from the source to the RP and from the RP down the shared tree to the recipients - however, even for this, knowledge of the RP is crucial. You simply cannot run PIM-SM without knowledge of the RP. It is true that once a router connected to the final recipient learns about the sender of the multicast traffic, it can join the source-rooted tree, which can eventually provide a shorter path between the source and the receiver.

I would then want to configure Auto-RP.

Are you determined to stay with this proprietary protocol? Please note that AutoRP is spoken only by Cisco routers. If your network includes routers from other vendors, BSR is a better choice than AutoRP.

How do I set this up correctly? Do I set an interface on each core (the 6500 and all 4500s) as an RP, or do I just set up a single RP on the 6500 for the entire network?

The simplest setup is a single RP for the entire network. More complex setups include different RPs for different multicast groups, or multiple RPs for a single group, all for purposes of redundancy. However, the simplest setup just works - make a single device the RP for all multicast groups and for the entire network.

I am not sure how I select which interface to use.

It can be any interface of the RP router that is reachable from everywhere in your network. The ideal way is to have a loopback with a /32 mask configured on the RP that is advertised in your unicast routing protocol.

ip pim send-rp-announce vlan199 scope 16

ip pim send-rp-discovery scope 16

Using a Loopback interface instead of an SVI, this would be:

ip pim send-rp-announce Loopback0 scope 255

ip pim send-rp-discovery Loopback0 scope 255

Or, to use BSR:

ip pim rp-candidate Loopback0

ip pim bsr-candidate Loopback0

Usually, the router requires that ip pim sparse-mode is also configured on this Loopback - it's only a nuisance, though.
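
Putting it together, a minimal Auto-RP sketch for the RP router might look like this (the address is a placeholder). One caveat: if you ever move from sparse-dense-mode to pure sparse-mode, every router will also need ip pim autorp listener so that the Auto-RP groups 224.0.1.39 and 224.0.1.40 can still be flooded:

! on the RP router
interface Loopback0
 ! placeholder /32 address, advertised in the unicast routing protocol
 ip address 10.255.255.1 255.255.255.255
 ip pim sparse-mode
!
ip pim send-rp-announce Loopback0 scope 255
ip pim send-rp-discovery Loopback0 scope 255
!
! on every router, only when running pure sparse-mode
ip pim autorp listener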

And what is your feeling on the scope level (16 vs 64 vs 128)?

In your case, you want your entire network to know this single RP. Any TTL will do, provided it is set to at least the diameter of your routed network. So setting it all the way up to 255 is just fine in your case.

Thanks for this info. I have to look at some of my documentation, but I remember there was a reason I was forcing IGMP to v3. If not, I will drop to v2. I think it was because Windows Server 2008R2 defaults to IGMPv3.

Many OSes in fact default to IGMPv3. That alone is not important because these operating systems will fall back to whatever IGMP version is spoken by the router.

I’m not sure I understand the benefits of forcing traffic through the RP rather than taking the shorter path. This would force the data to flow through the RP, correct? I can’t imagine that the load on the routers would outweigh the desire for a shorter path. Wouldn’t the extra data flow itself strain the router more than maintaining the source trees?

This is a complicated question with no general answer, I am afraid. The key thing to keep in mind, though, is that on multilayer switches, data flows are handled outside the CPU in specialized forwarding hardware (ASICs), while the multicast states themselves must also be handled by the CPU before being programmed into the ASICs. As the number of multicast data flows increases, the switching hardware may become more and more loaded, but this does not affect the CPU. However, if the multicast states multiply, the CPU becomes more and more burdened with their maintenance. High CPU load is always a matter of concern.

You are welcome to ask further!

Best regards,

Peter
