Beginner

PIM-SM over DMVPN and NBMA-Mode

Hello all,

My main question is: when exactly do we need PIM NBMA mode?

Specifically, it seems obvious to me that when the sender and the receiver are behind different spokes, the RPF check will fail because the incoming interface and the OIL entry are the same hub DMVPN tunnel interface. Is that correct?

Furthermore, let's say the source is behind the hub and the receivers are behind the spokes. I tested this in the lab and it works.
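For context, a minimal hub-side setup for this kind of lab might look like the following sketch (all addresses, tunnel numbers, and interface names are hypothetical):

```
! Hub: multipoint GRE tunnel, source behind Gi0/1
interface Tunnel0
 ip address 10.0.0.1 255.255.255.0
 ip pim sparse-mode
 ip nhrp map multicast dynamic     ! replicate multicast to every registered spoke
 ip nhrp network-id 1
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
```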

What I would expect, though, is that once one spoke sends a Prune message towards the RP, this message would not be received by the other spokes (because of the RPF failure), leading to the traffic being pruned for everybody.

This is not the case, though. Spokes receive the Prune message as if they were on a LAN segment and override the Prune with an immediate Join.

Is my understanding of RPF checking of PIM messages incorrect?

Thank you

6 Replies
Enthusiast

Hi,

 

Specifically, it seems obvious to me that when the sender and the receiver are behind different spokes, the RPF check will fail because the incoming interface and the OIL entry are the same hub DMVPN tunnel interface. Is that correct?

Yes, that's correct. 

 

What I would expect, though, is that once one spoke sends a Prune message towards the RP, this message would not be received by the other spokes (because of the RPF failure), leading to the traffic being pruned for everybody.

Yes, that's also correct. Within the partial-mesh design, only the hub receives the Prune message.

 

Spokes receive the Prune message as if they were on a LAN segment and override the Prune with an immediate Join.

If the multicast routers are on a broadcast network, they will override a Prune they hear from another router.

In DMVPN, spokes use "ip nhrp map multicast 192.0.2.2" (for example, pointing at the hub's NBMA address), so only the hub will receive the Prune message.
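In other words, the spoke-side tunnel typically carries multicast (including PIM Joins/Prunes) only towards the hub. A sketch of what that spoke config might look like, with hypothetical overlay addressing and the hub NBMA address from above:

```
! Spoke: multicast is tunnelled only to the hub's NBMA address
interface Tunnel0
 ip address 10.0.0.2 255.255.255.0
 ip pim sparse-mode
 ip nhrp map 10.0.0.1 192.0.2.2    ! hub overlay address -> hub NBMA address
 ip nhrp map multicast 192.0.2.2   ! send multicast/PIM only to the hub
 ip nhrp nhs 10.0.0.1
 ip nhrp network-id 1
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
```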

 

One more piece of information for your reference: the 'ip pim nbma-mode' command also keeps track of the NBMA IP address plus the interface in the OIL, not just the interface. So it addresses the RPF issue.

 

Use the ip pim nbma-mode command in interface configuration mode. This command allows the router to track the IP address of each neighbor when a PIM Join message is received from that neighbor. The router can then track each neighbor individually in the outgoing interface list for the multicast groups that the neighbor joins, rather than just the interface.
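Enabling it is a single interface-level command on the hub's multipoint tunnel (interface name hypothetical); PIM sparse mode must already be configured on the interface:

```
! Hub: track each PIM neighbor's address in the OIL,
! so a Prune from one spoke does not stop traffic to the others
interface Tunnel0
 ip pim nbma-mode
```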

https://www.cisco.com/c/en/us/td/docs/ios/solutions_docs/ip_multicast/White_papers/frm_rlay.html#wp1029708

 

 

=======

 

Tbh, I did not pay much attention to multicast during my exam. The configuration is simple, but the theory behind it is horrible.

 

So, if I got anything wrong above, please kindly let me know. Thanks!


Hello,

We agree on everything you mentioned. This is what I believed would be the case.

The problem is that in practice, in the lab, this does not seem to be the case.

In DMVPN, when one spoke sends a Prune, the other spoke, which receives the same stream, also receives this Prune. It therefore sends a Join message and the stream continues to flow.

I would expect the Prune message NOT to reach the other spoke because of RPF failing at the hub!

 


Hi,

It looks like real hardware and virtual appliances yield different results.

Rene Molenaar said: 'I just redid this lab on some real hardware and I think this is some Cisco VIRL quirk. When I run it on VIRL, I don’t need to use PIM nbma mode. On real hardware, I do need it.'

https://forum.networklessons.com/t/multicast-pim-nbma-mode/983/17




I have seen this post. It is very interesting, but it talks about traffic between spokes, so I am not sure how it applies to the scenario (which I suppose is the most common one): the source being behind the hub and the spokes being the receivers.

My understanding is that indeed, once a spoke sends a Prune message, the other spokes will not see it, causing traffic loss on all of them until a Join is periodically sent again.

But, on the other hand, if this is the case, then in practice we would need PIM NBMA mode in ALL scenarios, right?



@CSCO11598534 wrote:

I have seen this post. It is very interesting, but it talks about traffic between spokes, so I am not sure how it applies to the scenario (which I suppose is the most common one): the source being behind the hub and the spokes being the receivers.


The Prune message is also multicast traffic, sent to 224.0.0.13 (all PIM routers), so I guess it's a similar case.

 


@CSCO11598534 wrote:

My understanding is that indeed, once a spoke sends a Prune message, the other spokes will not see it, causing traffic loss on all of them until a Join is periodically sent again.


I totally agree with you, according to my understanding of PIM.

 


@CSCO11598534 wrote:

But, on the other hand, if this is the case, then in practice we would need PIM NBMA mode in ALL scenarios, right?


No, you need it only on an NBMA network. In the other cases, spokes can talk to each other directly (e.g. point-to-point tunnels, sub-interfaces, a broadcast network, fully meshed PVCs, etc.).
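For example, with point-to-point sub-interfaces each spoke sits on its own PIM adjacency, so each one is a separate entry in the OIL and `ip pim nbma-mode` is unnecessary. A sketch of one such sub-interface (Frame Relay DLCI and addressing hypothetical):

```
! One point-to-point sub-interface per spoke; each is its own OIL entry
interface Serial0/0.101 point-to-point
 ip address 10.1.1.1 255.255.255.252
 ip pim sparse-mode
 frame-relay interface-dlci 101
```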

 

What is the effect of "ip pim nbma-mode"? It additionally keeps track of the NBMA address plus the outgoing interface in the OIL.

When do you need it? On a point-to-multipoint interface (not sub-interfaces) where the clients cannot talk to each other directly (a non-fully-meshed topology).

 

 


Yes, when I am talking about all cases, I am referring to all NBMA cases, like DMVPN.

I wonder if someone can verify that. I am not sure about the PIM Prune behavior...

By the way, all PIMv2 messages (except Register and Register-Stop, which are unicast) go to 224.0.0.13 with TTL=1.