08-07-2018 12:25 PM - edited 08-07-2018 12:27 PM
When the RPs are available in both multicast domains, they work as expected.
RP-1#show ip mroute | b \(
(*, 239.100.100.100), 14:43:09/00:03:11, RP 100.100.100.100, flags: S
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Ethernet0/1, Forward/Sparse, 14:43:09/00:03:11
(*, 239.5.5.5), 14:44:18/00:02:43, RP 100.100.100.100, flags: S
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Ethernet0/0, Forward/Sparse, 14:44:18/00:02:43
(*, 224.0.1.40), 14:46:36/00:02:17, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Loopback0, Forward/Sparse, 14:46:36/00:02:17
After sending some pings, I got the following:
R4#show ip pim rp
Group: 239.100.100.100, RP: 100.100.100.100, v2, uptime 02:07:08, expires 00:01:59
Group: 239.6.6.6, RP: 100.100.100.100, v2, uptime 02:07:08, expires 00:01:59
RP-2#show ip mroute | b \(
(*, 239.100.100.100), 16:17:32/00:02:50, RP 100.100.100.100, flags: SJC
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Ethernet0/0, Forward/Sparse, 02:07:33/00:02:50
(*, 239.6.6.6), 02:07:33/00:02:42, RP 100.100.100.100, flags: S
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Ethernet0/1, Forward/Sparse, 02:07:33/00:02:42
(4.4.4.4, 239.6.6.6), 00:00:54/00:02:05, flags: A
Incoming interface: Ethernet0/0, RPF nbr 172.16.1.4
Outgoing interface list:
Ethernet0/1, Forward/Sparse, 00:00:54/00:02:42
(172.16.1.4, 239.6.6.6), 00:00:54/00:02:05, flags: TA
Incoming interface: Ethernet0/0, RPF nbr 172.16.1.4
Outgoing interface list:
Ethernet0/1, Forward/Sparse, 00:00:54/00:02:42
(*, 224.0.1.40), 04:32:40/00:02:20, RP 0.0.0.0, flags: DPL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list: Null
Then I shut down lo100 on R2, and got the following on R1:
RP-1#show ip mroute | b \(
(*, 239.100.100.100), 14:47:45/00:03:21, RP 100.100.100.100, flags: S
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Ethernet0/2, Forward/Sparse, 00:00:10/00:03:21
Ethernet0/1, Forward/Sparse, 14:47:45/00:02:33
(*, 239.5.5.5), 14:48:53/00:03:04, RP 100.100.100.100, flags: S
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Ethernet0/0, Forward/Sparse, 14:48:53/00:03:04
(*, 239.6.6.6), 00:00:05/stopped, RP 100.100.100.100, flags: S
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Ethernet0/2, Forward/Sparse, 00:00:05/00:03:24
(172.16.1.4, 239.6.6.6), 00:00:05/00:02:54, flags: PMX
Incoming interface: Ethernet0/2, RPF nbr 140.142.1.2
Outgoing interface list: Null
(4.4.4.4, 239.6.6.6), 00:00:05/00:02:54, flags: PMX
Incoming interface: Ethernet0/2, RPF nbr 140.142.1.2
Outgoing interface list: Null
(*, 224.0.1.40), 14:51:11/00:02:42, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Loopback0, Forward/Sparse, 14:51:11/00:02:42
It seems the multicast routes failed over from R2 to R1. I can still ping 239.6.6.6 from R4:
R4#ping 239.6.6.6
Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 239.6.6.6, timeout is 2 seconds:
Reply to request 0 from 6.6.6.6, 23 ms
Reply to request 0 from 6.6.6.6, 50 ms
Some time later, R4 no longer has an RP mapping, and R1 no longer has the (*, G) entries from domain 65002.
RP-1#show ip mroute | b \(
(*, 239.100.100.100), 14:55:49/00:03:20, RP 100.100.100.100, flags: S
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Ethernet0/1, Forward/Sparse, 14:55:49/00:03:20
(*, 239.5.5.5), 14:56:58/00:02:51, RP 100.100.100.100, flags: S
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Ethernet0/0, Forward/Sparse, 14:56:58/00:02:51
(*, 224.0.1.40), 14:59:16/00:02:33, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Loopback0, Forward/Sparse, 14:59:16/00:02:33
What have I missed here?
thanks !!
08-07-2018 01:09 PM
Hello,
Is this a GNS3 project? If so, post the GNS3 file...
08-07-2018 01:54 PM
Hostname R3
ip multicast-routing
interface Loopback0
 ip address 3.3.3.3 255.255.255.255
 ip pim sparse-mode
 ip ospf 10 area 10
 ip igmp join-group 239.100.100.100
!
interface Ethernet0/1
 ip address 10.10.1.3 255.255.255.240
 ip pim sparse-mode
!
router ospf 10
 router-id 3.3.3.3
 network 10.10.0.0 0.0.255.255 area 10

Hostname R5
ip multicast-routing
interface Loopback0
 ip address 5.5.5.5 255.255.255.255
 ip ospf 10 area 10
 ip pim sparse-mode
 ip igmp join-group 239.5.5.5
!
interface Ethernet0/0
 ip address 10.10.2.5 255.255.255.240
 ip pim sparse-mode
!
router ospf 10
 router-id 5.5.5.5
 network 10.10.0.0 0.0.255.255 area 10

Hostname RP-1
ip multicast-routing
interface Loopback0
 ip address 1.1.1.1 255.255.255.255
 ip ospf 10 area 10
 ip pim sparse-mode
!
interface Ethernet0/0
 ip address 10.10.2.1 255.255.255.240
 ip pim sparse-mode
!
interface Ethernet0/1
 ip address 10.10.1.1 255.255.255.240
 ip pim sparse-mode
!
interface Ethernet0/2
 ip address 140.142.1.1 255.255.255.240
 ip pim sparse-mode
 ip pim bsr-border
!
router ospf 10
 router-id 1.1.1.1
 redistribute bgp 65001 subnets
 network 10.10.0.0 0.0.255.255 area 10
!
router bgp 65001
 bgp log-neighbor-changes
 no bgp default ipv4-unicast
 neighbor 140.142.1.2 remote-as 65002
 !
 address-family ipv4
  redistribute ospf 10
  neighbor 140.142.1.2 activate
  network 100.100.100.100 mask 255.255.255.255
 exit-address-family
!
interface Loopback100
 ip address 100.100.100.100 255.255.255.255
 ip pim sparse-mode
!
ip pim rp-candidate Loopback100
ip pim bsr Loopback100 0 100
ip msdp peer 2.2.2.2 connect-source lo0 remote-as 65002
ip msdp originator-id lo0

Hostname RP-2
ip multicast-routing
interface Loopback0
 ip address 2.2.2.2 255.255.255.255
 ip pim sparse-mode
 ip ospf 10 area 10
!
interface Ethernet0/0
 ip address 172.16.1.2 255.255.255.240
 ip pim sparse-mode
!
interface Ethernet0/1
 ip address 172.16.2.2 255.255.255.240
 ip pim sparse-mode
!
interface Ethernet0/2
 ip address 140.142.1.2 255.255.255.240
 ip pim sparse-mode
 ip pim bsr-border
!
router ospf 10
 router-id 2.2.2.2
 redistribute bgp 65002 subnets
 network 172.16.0.0 0.0.255.255 area 10
!
router bgp 65002
 bgp log-neighbor-changes
 no bgp default ipv4-unicast
 neighbor 140.142.1.1 remote-as 65001
 !
 address-family ipv4
  redistribute ospf 10
  neighbor 140.142.1.1 activate
  network 100.100.100.100 mask 255.255.255.255
 exit-address-family
!
interface Loopback100
 ip address 100.100.100.100 255.255.255.255
 ip pim sparse-mode
!
ip pim rp-candidate Loopback100
ip pim bsr Loopback100 0 100
ip msdp peer 1.1.1.1 connect-source lo0 remote-as 65001
ip msdp originator-id lo0

Hostname R4
ip multicast-routing
interface Loopback0
 ip address 4.4.4.4 255.255.255.255
 ip pim sparse-mode
 ip ospf 10 area 10
 ip igmp join-group 239.100.100.100
!
interface Ethernet0/0
 ip address 172.16.1.4 255.255.255.240
 ip pim sparse-mode
!
router ospf 10
 router-id 4.4.4.4
 network 172.16.0.0 0.0.255.255 area 10

Hostname R6
ip multicast-routing
interface Loopback0
 ip address 6.6.6.6 255.255.255.255
 ip ospf 10 area 10
 ip pim sparse-mode
 ip igmp join-group 239.6.6.6
!
interface Ethernet0/1
 ip address 172.16.2.6 255.255.255.240
 ip pim sparse-mode
!
router ospf 10
 router-id 6.6.6.6
 network 172.16.0.0 0.0.255.255 area 10
08-07-2018 01:11 PM - edited 08-07-2018 02:01 PM
It seems that bsr-border causes this behavior.
I would like to use bsr-border to prevent the multicast domains from interacting unless the RP in one domain fails. Is that possible?
thanks !!
08-07-2018 03:53 PM
Hello,
as far as I can tell, the BSR failover in your case only works when there are no border interfaces. When you remove 'ip pim bsr-border' from both RP-1 and RP-2, the failover works. The 'while' it takes for R4 to lose its RP mappings is just the expiration timer running out...
If you configure another BSR candidate either on R3/R5 or R4/R6, respectively, you can use the borders and still achieve failover...
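For example, an additional BSR candidate inside domain 65002 could be sketched on R4 like this (a minimal sketch, reusing R4's existing Loopback0; the priority value of 50 is my own assumption, chosen to be lower than the 100 used on the RPs so the backup only wins the BSR election when the border routers' candidates are gone):

! assumed backup BSR candidate on R4 (priority 50 < 100 on the RPs)
ip pim bsr-candidate Loopback0 0 50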
08-07-2018 04:40 PM
thanks for taking a look at it.
1. After failing over by shutting down lo100 on R2, all the source trees moved to R1, but the newly registered group did not register with R1.
2. After the RP mapping expired, traffic to the multicast group was still successful.
So I guess my lab is not appropriate for testing the Anycast RP concept, since I tried to achieve Anycast RP between two different multicast domains.
I will try to see whether static RP works.
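If the static RP route is tried, a minimal sketch would be the single line below on every PIM router in both domains (assuming the anycast address 100.100.100.100 stays configured on Loopback100 of both RPs, and that MSDP continues to sync the source information between them):

! assumed static mapping to the shared anycast RP address
ip pim rp-address 100.100.100.100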
08-08-2018 01:51 AM
Hello,
I have the exact same setup as you have. The 'ip pim bsr-border' command stops the BSR messages, which carry the candidate-RP information, from crossing the border interface. Multicast forwarding continues to work, but no RP information is shared between the domains. That is why, after the expiration timer expires, R4 has no RP mapping.
--> After RP mapping expired, the traffic to multicast group was successful.
So you are saying you can ping 239.6.6.6 from R4 with no RP mapping? It doesn't work in my lab...
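One way to watch this happen on R4 is with two standard IOS commands: the first shows the group-to-RP cache with its expiry timers, the second shows whether bootstrap messages are still arriving:

R4# show ip pim rp mapping
R4# debug ip pim bsr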
08-08-2018 08:18 AM - edited 08-08-2018 08:18 AM
thanks !
I double checked: after the RP mapping expired, no traffic was successful. Sorry about that.
After I removed bsr-border, anycast RP worked as expected.
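For reference, the removal is just the following on the inter-domain interface of both RPs (Ethernet0/2 in this lab):

interface Ethernet0/2
 no ip pim bsr-border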
08-08-2018 08:22 AM
My lab comes from <http://resources.intenseschool.com/gns3-lab-basic-msdp/>.
For the anycast section, I just wanted to try BSR instead of static RP.