08-21-2024 12:45 AM - edited 08-21-2024 12:49 AM
Hello,
I have 4 Nexus switches. I want one pair to be the RP for 239.30.0.0/16 and the backup RP for 224.100.0.0/16, and the other pair to do exactly the opposite.
This is the configuration I want to apply to the first pair of Nexus switches; the other pair would get the same config with the priorities swapped:
ip pim bsr bsr-candidate loopback0 priority 49
ip pim bsr rp-candidate loopback0 group-list 239.30.0.0/16 priority 10
ip pim bsr rp-candidate loopback0 group-list 224.100.0.0/16 priority 20
ip pim log-neighbor-changes
ip pim ssm range 232.0.0.0/8
ip pim bsr forward listen
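For reference, the other pair would get the mirrored rp-candidate lines, something like this (same group-lists, priorities swapped):
ip pim bsr rp-candidate loopback0 group-list 239.30.0.0/16 priority 20
ip pim bsr rp-candidate loopback0 group-list 224.100.0.0/16 priority 10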
The problem is, when I enter the second "ip pim bsr rp-candidate" line, it just removes the previous one. It only allows me to have a single statement, so I cannot achieve what I want. Is this expected behavior or is it a bug?
If this is how it works, how could I accomplish what I want?
For the moment, what I did is configure one pair with "ip pim bsr rp-candidate loopback0 group-list 224.0.0.0/4 priority 10" and the other pair with "ip pim bsr rp-candidate loopback0 group-list 239.30.0.0/16 priority 10" (the more specific /16 wins the group-to-RP mapping by longest match, and the /4 covers everything else). I do not like this approach.
Thanks, any help would be appreciated.
08-21-2024 12:59 AM
Hello @pzkqx6000. I haven’t worked extensively with Nexus switches yet, but I’m currently involved in a project that includes them, so I’ve been digging into the documentation and learning quite a bit.
Regarding your question, it seems that what you’re experiencing is expected behavior with Nexus switches. Nexus devices only allow a single ip pim bsr rp-candidate statement per interface, which means you can't have multiple group-lists on the same loopback interface. So, to accomplish what you're aiming for, where each pair of switches acts as the RP for different multicast groups and as a backup for others, you might consider the following options:
interface loopback0
ip address 1.1.1.1/32
ip pim bsr rp-candidate loopback0 group-list 239.30.0.0/16 priority 10
interface loopback1
ip address 2.2.2.2/32
ip pim bsr rp-candidate loopback1 group-list 224.100.0.0/16 priority 20
If your network design is more complex, MSDP could be a good approach. It allows different RPs on different routers to share multicast source information, providing redundancy and load balancing.
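A minimal sketch of that idea on NX-OS, assuming the two RPs sit at 10.1.1.1 and 10.1.1.2 (addresses illustrative), would be something like:
feature msdp
ip msdp originator-id loopback0
ip msdp peer 10.1.1.2 connect-source loopback0   <----- on the 10.1.1.1 box; mirror on the peer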
Another option could be to designate a single RP for all groups and use PIM Anycast or other methods to ensure failover and redundancy, although this might require rethinking your group-list strategy.
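For the Anycast option, NX-OS supports PIM Anycast-RP natively (no MSDP required); a rough sketch, with 10.99.99.99 as the shared RP address and 10.1.1.1/10.1.1.2 as the members (all addresses illustrative):
interface loopback1
  ip address 10.99.99.99/32
  ip pim sparse-mode
ip pim anycast-rp 10.99.99.99 10.1.1.1
ip pim anycast-rp 10.99.99.99 10.1.1.2
ip pim rp-address 10.99.99.99 group-list 224.0.0.0/4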
Your current setup with a broad group-list (224.0.0.0/4) is a practical workaround, though I understand it may not be your preferred solution. If you want to explore more advanced configurations, Cisco TAC could offer more targeted guidance.
And that’s all I can help with from what I’ve read and labbed until now. Hope this helps. Also, I would suggest using EVE-NG so you can test this setup there first.
-Enes
08-21-2024 01:10 AM
Thank you for your reply. I have just tried what you said, but it does not work either. I created a new loopback1, and as soon as I enter the new line, it just removes the previous one.
version 10.2(6) Bios:version 07.69
feature pim
ip pim bsr bsr-candidate loopback0 priority 199
ip pim bsr rp-candidate loopback1 group-list 239.30.0.0/16 priority 20
I could not find anywhere that says it is not supported.
08-21-2024 01:17 AM
Well, I’m currently labbing on EVE-NG, and as soon as I find something, I’ll let you know. I guess it’s labbing time! Hehehe (DOUBLE LOOL)
08-21-2024 12:02 PM
Hello
Same result for me also. You would most probably need to either have one Nexus pair be the active multicast MA/RP or go down the Anycast or SSM route.
08-21-2024 03:29 AM
Hello
You should be able to create the necessary resiliency for each group, but it will require two loopbacks that are reachable and have PIM sparse mode enabled. However, only a single BSR mapping agent will be active.
Nkx
int loopback 0/1
ip pim sparse-mode
ip pim bsr listen forward
ip pim bsr-candidate Loopback0 10
ip pim rp-candidate Loopback0 group-list xxxxx/16 priority 1
ip pim rp-candidate Loopback1 group-list yyyyy/16 priority 100
Nky
int loopback 0/1
ip pim sparse-mode
ip pim bsr listen forward
ip pim bsr-candidate Loopback0 <-----will be active BSR MA
ip pim rp-candidate Loopback0 group-list xxxxx/16 priority 100
ip pim rp-candidate Loopback1 group-list yyyyy/16 priority 1
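(If I read the BSR rules right, lower candidate-RP priority values are preferred, so priority 1 marks the primary RP and priority 100 the backup in the lines above.)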
08-21-2024 03:50 AM - edited 08-21-2024 03:50 AM
Then I have a bug, because I cannot have two ip pim rp-candidate <interface> statements in the config, no matter whether they use the same interface or not. I have tried everything and it does not work. Thank you @paul driver and @Enes Simnica, I will open a case.
08-22-2024 11:17 PM - edited 08-23-2024 12:16 AM
Regarding this: I have tested it and it works (again in IOS, not NX-OS), but I don't understand why different loopbacks are needed, beyond the fact that otherwise the command does not work. Is there some logic to it beyond a bad implementation?
It should be possible to use the same loopback as the RP for different multicast groups, or am I missing something?
Here is the RFC section that I think proves my point: https://datatracker.ietf.org/doc/html/rfc5059#section-4.2
It only says the addresses must be from the same address family (IPv4 or IPv6), I guess.
08-22-2024 05:18 AM
I have tried it in the lab using IOS and it works.
TAC says it is not possible in NX-OS. Wonderful Nexus world.
Thank you all.