IOSXR PIM / Multicast routing

divadko
Level 1

Hello,

I am new to IOS XR and I have to handle some TV streams from TV studios that are directly connected to my Cisco NCS 57C3-MOD. Each stream, with its own (different) multicast address, arrives in a separate VLAN. For example, 10 streams are connected as trunk VLANs 3000-3010 on the same uplink port, with multicast addresses 239.0.0.1-239.0.0.10.

I want to aggregate all these streams into one downlink VLAN.

What is the best way to do it?

Downlink VLAN 100 is connected to the headend switch, whose port is configured as an L3 VLAN interface with PIM-SM.

The second thing: is there any way to change the multicast IPs if there is a collision with existing multicast ranges in the future?

Thanks for the help!

dave

8 Replies

Ramblin Tech
Spotlight

IP multicast and PIM are strictly L3 and are agnostic to the VLAN tags. Your 10 mcast streams can all be carried over the same downstream VLAN 100 just as easily as over the 10 upstream VLANs. The downstream PIM neighbor will signal to your NCS 5700 that it wants to join groups and mcast state will be established for each of the groups signaled, with downstream VLAN 100 going on the OIL for each signaled group.
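As a sketch of what that can look like on the NCS side (the subinterface name, VLAN tag placement, and addressing below are assumptions for illustration, not taken from the thread):

! Hypothetical downstream L3 subinterface for VLAN 100; names/IPs are examples
interface Bundle-Ether1.100
 description downlink toward headend switch
 encapsulation dot1q 100
 ipv4 address 192.0.2.1 255.255.255.0
!
multicast-routing
 address-family ipv4
  interface Bundle-Ether1.100
   enable
 !
!
router pim
 address-family ipv4
  interface Bundle-Ether1.100
   enable

Once the headend switch forms a PIM adjacency on this subinterface, its joins pull each group down and Bundle-Ether1.100 lands on the OIL for that group.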

Is it too late for you to change from PIM-SM to PIM-SSM? SSM effectively eliminates group address collisions between existing mcast apps and newly added ones, as the addition of the source address makes (S,G) a unique tuple. When multiple mcast apps originate from a single source, each can have its own group address with no risk of colliding with apps from different sources.
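On IOS XR, mapping an existing group range into SSM is a small change; a minimal sketch, assuming an ACL named SSM-GROUPS (232/8 is already treated as SSM by default, so this is only needed for ranges like 239/8):

ipv4 access-list SSM-GROUPS
 10 permit ipv4 239.0.0.0/8 any
!
multicast-routing
 address-family ipv4
  ssm range SSM-GROUPS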

Disclaimer: I am long in CSCO

What kind of configuration will be needed on the NCS to be able to handle PIM and multicast requests from the headend switch?

It is not really too late to migrate to PIM-SSM.

divadko
Level 1

I have done some config like:

multicast-routing
 address-family ipv4
  interface Bundle-Ether1.19
   enable
  !
  interface TenGigE0/0/0/13.892
   enable
  !
  interface TenGigE0/0/0/13.2531
   enable
  !
 !
!
router igmp
 interface Bundle-Ether1.19
 !
 interface TenGigE0/0/0/13.892
  join-group 233.0.0.2 1.1.5.102
  join-group 233.0.0.4 1.1.5.104
 !
 interface TenGigE0/0/0/13.2531
  static-group 233.0.0.7 1.1.1.120
 !
!
router pim
 address-family ipv4
  auto-rp mapping-agent Bundle-Ether1.19 scope 10 interval 60
  auto-rp candidate-rp Bundle-Ether1.19 scope 10 group-list 224-4 interval 60
  interface Bundle-Ether1.19
   enable
  !
  interface TenGigE0/0/0/13.892
   enable
  !
  interface TenGigE0/0/0/13.2531
   enable
  !
 !
!

Interface Bundle-Ether1.19 has to be the outgoing interface; that is the VLAN 100 L3 interface.

The show mrib route output is:

(1.1.5.102,233.0.0.2) RPF nbr: 1.1.5.102 Flags: L RPF
  Up: 00:17:08
  Incoming Interface List
    TenGigE0/0/0/13.892 Flags: F A IC II LI, Up: 00:17:08
  Outgoing Interface List
    TenGigE0/0/0/13.892 Flags: F A IC II LI, Up: 00:17:08

(1.1.5.104,233.0.0.4) RPF nbr: 1.1.5.4 Flags: L RPF
  Up: 00:17:08
  Incoming Interface List
    TenGigE0/0/0/13.892 Flags: F A IC II LI, Up: 00:17:08
  Outgoing Interface List
    TenGigE0/0/0/13.892 Flags: F A IC II LI, Up: 00:17:08

(1.1.1.120,233.0.0.7) RPF nbr: 1.1.1.120 Flags: RPF
  Up: 00:47:05
  Incoming Interface List
    TenGigE0/0/0/13.2531 Flags: F A NS LI, Up: 00:47:05
  Outgoing Interface List
    TenGigE0/0/0/13.2531 Flags: F A NS LI, Up: 00:47:05

 

If I add a join for some groups in the IGMP config under "interface Bundle-Ether1.19", it adds the interface into the outgoing interface list too. But there is no traffic on interface Bundle-Ether1.19.

On the uplink ports I can see all the multicast traffic arriving on the subinterfaces without any issues.

The network structure is very simple.

Multicast sources on incoming L3 interfaces "te0/0/0/13.x" -------- the destination/outgoing interface should be the L3 switch where the headend is connected; I already have working PIM-SM on another VLAN.

 

Let's back up a little... which interface is the upstream toward the source(s)? Which interface is downstream toward the receivers? Are the receivers directly connected to this router? That is, is this their first-hop router?

If you want to migrate to PIM-SSM, then you can get rid of the RP configs, as SSM multicast distribution trees are rooted at their respective sources' first-hop routers, with no RPs. For SSM, you configure IGMPv3 (for IPv4; MLDv2 for IPv6) on the interface of the first-hop router for the receivers and then enable PIM on all the L3 interfaces/subinterfaces of the unicast topology from the receivers' first-hop routers back to the sources. If your receivers can signal group membership via IGMPv3, then there is no need to configure IGMP membership on interfaces statically (except possibly for troubleshooting purposes).
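On IOS XR, enabling IGMPv3 on the receiver-facing interface is a one-liner (IGMP version 3 is actually the IOS XR default, but pinning it makes the intent explicit; the interface name is taken from earlier in the thread):

router igmp
 interface Bundle-Ether1.19
  version 3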

The SSM source-rooted MDTs are built backwards, from the receivers' first-hop to the sources' first-hops. The receivers signal their membership requests via IGMPv3/MLDv2 to their first-hop routers, which adds state and OIL for the receiver's interface, and then transform the requests into PIM joins to their upstream neighbors according to the unicast best-path toward the source. The PIM joining and adding to the OIL is iterated back all the way to the source's first-hop routers, which has (S,G) state on the source's interface establish by virtue of the source multicasting into its first-hop router (no IGMPv3/MLDv2 membership signaling required for a source to commence sending).

BTW, PIM-SM and PIM-SSM can co-exist in the same network, as SSM is really a sparse-mode without an RP or shared trees.

Disclaimer: I am long in CSCO

As i wrote in my post the multicast source is: te0/0/0/13.xy

The downlink interface is Bundle-Ether1.19.

I have tried to connect my PC to interface Bundle-Ether1.19 with my config, but I can't see the streams via VLC.

Are there any active receivers for these groups (233.0.0.2, 233.0.0.4, 233.0.0.7)? Receivers will signal group membership requests to their first-hop routers via IGMPv2, which will then establish (*,G) state and OILs back to the RP. The first-hop routers for the sources will create (S,G) state for the groups and then encap mcast traffic in unicast PIM register packets and forward to the RP. If the RP has (*,G) state for receivers, the encap'ed mcast from the source is decap'ed and forwarded down the shared MDT (and a source MDT is initiated from the RP via PIM). If there is no (*,G) state for the group, the encap'ed packet is discarded.

Without active receivers (ie, (*,G) state at the RP), you will not be seeing native multicast traffic flowing from a source's first-hop router downstream toward the RP (or receivers), as the mcast will be encap'ed as PIM register messages.
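The state described above can be inspected with standard IOS XR show commands, for example:

show pim group-map           (group-to-RP / SSM range mapping)
show igmp groups             (memberships learned per interface)
show pim topology 233.0.0.2  (per-group PIM state)
show mrib route 233.0.0.2    (forwarding state actually programmed)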

Disclaimer: I am long in CSCO

Friend, you need to use the same RP for all three multicast VLANs.

The RP is in a different VLAN than the multicast VLANs.
Why do you use

candidate-rp 

and not a static RP?

MHM

 

I changed the RP to the IP of interface Bundle-Ether1.19 and some streams work now. But only if the source interface IP address is in the same subnet as the multicast source IP. If I change the IP on interface TenGigE0/0/0/13.892 to some other or dummy IP, the stream stops working.

I tried to add "rpf topology route-policy PASS" under router pim to pass every source IP, but that also does not help...

route-policy PASS
  pass    
end-policy

 

Any idea how to solve this issue?
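The symptom described (streams die when the interface IP leaves the source's subnet) is the classic signature of an RPF failure: PIM only accepts multicast whose source is reachable, per the unicast routing table, via the interface it arrived on. A route-policy cannot substitute for that route. One hedged option is a static route to the source subnet pointing at the incoming subinterface, plus a static RP as suggested above; a sketch using addresses from earlier in the thread (the RP address is an assumption):

router static
 address-family ipv4 unicast
  1.1.5.0/24 TenGigE0/0/0/13.892
!
router pim
 address-family ipv4
  rp-address 10.0.0.19

With the static route in place, the RPF check for sources in 1.1.5.0/24 resolves to TenGigE0/0/0/13.892 regardless of the subinterface's own IP addressing.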
