vin.marco
Beginner

Multicast VRF

Hello everybody,
I have a problem implementing multicast in a small provider network with some VRFs.
Attached is a diagram to help understand the topology.

 

Disegno1.jpg

 

As you can see, there are two types of routers: R1 runs IOS XR and R2 runs IOS XE, and both have the VRF XXX configured.
The two routers are connected over MPLS through a Cisco DWDM system.

The customer in VRF XXX needs to install a device that requires multicast traffic.

I configured the two nodes as per the extracts below, but in my opinion something is missing.

 

R1
---------------------
multicast-routing
 address-family ipv4
  interface Loopback86
   enable
  !
  interface TenGigE0/1/1/4
   enable
  !
 !
 vrf XXX
  address-family ipv4
   interface Loopback20
    enable
   !
   interface TenGigE0/1/1/1
    enable
   !
  !
 !
!

router igmp
 interface Loopback86
  version 2
 !
 interface TenGigE0/1/1/4
  version 2
 !
 vrf XXX
  interface Loopback20
   join-group 239.0.0.1
   version 2
  !
  interface TenGigE0/1/1/1
   version 2
  !
 !
!



R2
---------------------
ip multicast-routing distributed
ip multicast-routing vrf XXX distributed

interface Loopback20
 description Loopback vrf XXX
 ip vrf forwarding XXX
 ip address aaa.aaa.aaa.aaa 255.255.255.255
 ip pim sparse-dense-mode
!  
interface Loopback86
 description Loopback global
 ip address bbb.bbb.bbb.bbb 255.255.255.255
 ip pim sparse-dense-mode
!           
interface TenGigabitEthernet0/1/1
 mtu 9148 
 ip address ccc.ccc.ccc.ccc 255.255.255.252
 ip pim sparse-dense-mode
 ip ospf network point-to-point
 mpls ip  
 cdp enable 
!
ip pim vrf XXX rp-address <Loopback 20 R1 >

On R2 I can't see the route for group 239.0.0.1. Do you think I am missing something?

 

sh ip mroute vrf XXX 
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report, 
       Z - Multicast Tunnel, z - MDT-data group sender, 
       Y - Joined MDT-data group, y - Sending to MDT-data group, 
       G - Received BGP C-Mroute, g - Sent BGP C-Mroute, 
       N - Received BGP Shared-Tree Prune, n - BGP C-Mroute suppressed, 
       Q - Received BGP S-A Route, q - Sent BGP S-A Route, 
       V - RD & Vector, v - Vector, p - PIM Joins on route, 
       x - VxLAN group
Outgoing interface flags: H - Hardware switched, A - Assert winner, p - PIM Join
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.0.1.40), 00:55:59/00:02:30, RP 10.86.20.1, flags: SJCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Loopback20, Forward/Sparse-Dense, 00:55:59/00:02:30
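
(For reference, a few standard IOS XE show commands that help narrow down where the multicast state is missing; the group address here simply matches the one above:)

```
show ip pim vrf XXX neighbor       <-- PIM adjacencies inside the VRF
show ip pim vrf XXX rp mapping     <-- is the RP learned/configured for 239.0.0.1?
show ip mroute vrf XXX 239.0.0.1   <-- state for the specific group
show ip mfib vrf XXX               <-- forwarding entries actually programmed
```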

 

Thank you

 

3 Replies
vin.marco
Beginner

Hello

Can someone help me?

loris.marcellini
Beginner

"Do you think I am missing something?"

Yes! Quite a few things, I think. I would start by clarifying which MVPN profile you are using or intend to use. To get a better idea, you should start by looking at the IOS XR documentation and the IOS XE documentation. In general my feeling is that there is a bit of confusion here (IGMP join commands on the PEs, and the terminology is not entirely clear).

 

The following examples assume a working MPLS L3VPN environment where LDP is the protocol used to assign and distribute labels in the core. If that's not the case, then we should probably start by building the L3VPN backbone first.

 

When it comes to multicast VPNs (that's what you want, right?), let's assume you want PIM in the core. Here's the config of one PE (the other will be practically the same):

 

router bgp 123
 address-family ipv4 mdt
 !
 neighbor 2.2.2.2 <-- other PE or RR for the IPv4 mdt AF
  address-family ipv4 mdt
!
multicast-routing
 address-family ipv4 
  mdt source Loopback0 <-- loopback 0 is used for MPLS reachability and needs to be enabled
  interface all enable
!
 vrf XXX
 address-family ipv4 
  mdt data 232.0.1.0/24
  mdt default ipv4 232.0.0.1 
  interface all enable <-- enable all the interfaces in the vrf XXX
!
end
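
Since R2 is IOS XE, the equivalent on that side looks different in syntax. A rough sketch, assuming `vrf definition`-style configuration, the same AS 123 and neighbor 2.2.2.2 as above, and Loopback0 as the MDT source (the PIM-in-core / default-MDT model, often called profile 0):

```
ip multicast-routing distributed
ip multicast-routing vrf XXX distributed
!
vrf definition XXX
 address-family ipv4
  mdt default 232.0.0.1
  mdt data 232.0.1.0 0.0.0.255
 !
!
router bgp 123
 address-family ipv4 mdt
  neighbor 2.2.2.2 activate <-- neighbor already defined under the global BGP process
!
interface Loopback0
 ip pim sparse-mode <-- MDT source interface must be PIM-enabled
```

Note that with MDT groups in the 232/8 range, as in this example, the core typically also needs SSM enabled for that range (`ip pim ssm default` on the XE side).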

Now, just to give another example, let's assume you do not want PIM in the core but rather want to implement MLDP. Here's the config of one PE (the other will be practically the same):

!
vrf XXX
  vpn id 1:1
!
mpls ldp
 mldp
  logging notifications
  address-family ipv4
  !
 !
!
multicast-routing
  address-family ipv4
    interface lo0 enable
  !
  vrf XXX
    address-family ipv4
      mdt source lo0
      mdt default mldp ipv4 2.2.2.2
      interface all enable
    !
  !
!
router pim
 vrf XXX
  address-family ipv4
   rpf topology route-policy rpf-for-XXX
   !
  !
 !
!
route-policy rpf-for-XXX
  set core-tree mldp-default
end-policy
end
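
For the XE PE (R2), the matching mLDP default-MDT configuration is roughly the following (a sketch; `2.2.2.2` is the MP2MP root, matching the XR example above, and exact command availability depends on the XE release and platform):

```
ip multicast-routing distributed
ip multicast-routing vrf XXX distributed
!
vrf definition XXX
 vpn id 1:1 <-- must match the vpn id configured on the XR side
 address-family ipv4
  mdt default mldp mp2mp 2.2.2.2
 !
!
mpls mldp logging notifications
```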

I would rather skip the rest of the profile configuration examples, as each may differ slightly from the others based on specific requirements and scenarios.

 

PS.

The IGMP join should not be on the PEs but rather on the client itself; I'm not sure why you would have it on the PE, to be honest.
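
On the CE side, the static join is just a per-interface command, e.g. (the interface name here is only an example):

```
interface Vlan100
 ip igmp join-group 239.0.0.1 <-- CE itself answers queries and processes the traffic
```

If the CE should only pull the traffic down without processing it, `ip igmp static-group 239.0.0.1` is the usual alternative.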

 

Thanks, L.

vin.marco
Beginner

Hi Loris
Thanks for your help.
I confirm that we are working in a fully functional MPLS L3VPN network, and LDP is the protocol we use for label distribution in the core.

This scenario is a lab created in EVE-NG to test multicast; that's why I perform the join from the PE, but in production it will be on the CE (a C9400).

The production network is made up of 20 routers, each with a BGP session to the RR.

The purpose of my lab is to carry the multicast traffic over MLDP; I believe the profile is 6.
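
(For reference, profile 6, VRF mLDP in-band signaling, replaces the default-MDT lines on the XR side with in-band signaling. A rough, untested sketch, reusing the same route-policy approach shown above:)

```
route-policy rpf-for-XXX
 set core-tree mldp-inband
end-policy
!
multicast-routing
 vrf XXX
  address-family ipv4
   mdt mldp in-band-signaling ipv4
   interface all enable
!
router pim
 vrf XXX
  address-family ipv4
   rpf topology route-policy rpf-for-XXX
```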

I'll try the configuration and update you. Thanks again.
