
Multicast environment issue.

Hello all, hope you are doing great. I have a question related to an issue we are facing in our multicast VPN environment. While investigating a high CPU problem on one of our PE devices, we found multicast traffic being punted because those groups/sources are not being requested on this end (there is no entry in the mroute/MDT table), yet the packets are still reaching this PE.

Side note: this traffic was never supposed to reach this PE/region since the day it was deployed (which rules out stale cache entries).

 

We have performed several troubleshooting steps; the most important/relevant ones are below:

 

- From a netdr capture, we found (S,G) traffic (source and multicast group) that is not supposed to reach this particular PE. We separate traffic by region, and in this case EMEA traffic is reaching the APAC region even though there are no receivers for these multicast groups.

- There is no show ip mroute vrf entry for this (S,G) on the egress PE, as it is not joining any (data) MDT session for it (which is expected, since this device should not receive this traffic).

- We have cleared the mroute table for the affected VRF on the egress PE (the one receiving the undesired traffic).

- We have removed and re-added the MDT configuration within the VRF in order to get the tree rebuilt on the egress PE (the one receiving the undesired traffic).

After the steps above, we are still seeing the unwanted traffic in subsequent netdr captures.

One difference we did notice was a drastic drop in the number of packets transiting the IBC. Before clearing the mroute and rebuilding the MDT, show ibc showed more than 11k packets received; after the clear/MDT rebuild, that number dropped to about 2.5k. The commands we used for these steps are sketched below with placeholder names, since I can't share the real configuration.
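With placeholder names and addressing (CUST-A and 232.0.1.0/24 are not our real values), and assuming the classic ip vrf style MDT syntax (newer releases place the mdt commands under vrf definition / address-family ipv4), the clear/rebuild was along these lines:

! Clear the multicast routing table for the affected VRF on the egress PE
clear ip mroute vrf CUST-A *

! Remove and re-add the data MDT range to force the tree to be rebuilt
ip vrf CUST-A
 no mdt data 232.0.1.0 0.0.0.255
 mdt data 232.0.1.0 0.0.0.255 threshold 1

! Re-check how much traffic is being punted over the supervisor inband channel
show ibc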

 

Sorry that I did not include more specific information/configurations; I can't for security reasons.

 

Thanks in advance!

 

Regards!

3 Replies

Hello.

From the description it's not clear how you were troubleshooting multicast forwarding, but I would:

 - check whether the multicast traffic is being sent over the MDT default group or a data group;
 - if it's the default group -> the issue is expected;
 - if it's a data group -> check whether any data address is being reused;
 - if it's a data group and there is no reuse -> check whether we joined (S,G) or (*,G) for the group (global table) and how the traffic reaches the PE.

A sketch of the show commands I would use for these checks is below.
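A minimal sketch, assuming classic Rosen/GRE MVPN on IOS; CUST-A is a placeholder VRF name and 232.0.1.5 is just an example data MDT group:

! Data MDT groups this PE is sourcing; a ref_count above 1 on a group
! means that data address is being reused by more than one customer flow
show ip pim vrf CUST-A mdt send detail

! Data MDT groups this PE has joined as a receiver
show ip pim vrf CUST-A mdt receive detail

! Global-table state for the MDT group itself (who joined the (S,G) / (*,G))
show ip mroute 232.0.1.5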

Do you have a contact to submit ticket to Cisco TAC?

Other questions:
 - do you use SSM or regular sparse mode in the global table?
 - do you use SSM or sparse/dense mode in the VRF, and where is the RP located?
 - what is the PE IOS version and the core-facing linecard?

Could you share your VRF + PIM configuration, the output of show ip mroute for the groups in both the global and VRF tables, and show ip pim vrf ... mdt send/receive detail? The commands to collect this are sketched below.
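Roughly how I would collect that (CUST-A is a placeholder VRF name; adjust to your own setup):

show running-config | begin ip vrf
show ip pim vrf CUST-A rp mapping
show ip mroute
show ip mroute vrf CUST-A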

Hi Vasilii,

 

Thanks so much for your answer. For security reasons, I can't post my configuration here. But we did verify the traffic, and basically what we see is the MDT data address being used more than once at the source PE (ref_count = 4). Is my understanding correct that this would then be expected behavior (for the traffic to be flooded when this over-utilization occurs)?

Our MVPN setup is SSM (both in the global table and within the VRF).
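To illustrate what I mean by the data pool (placeholder addressing, not our actual configuration): with a definition like the one below, the wildcard mask allows 256 data MDT groups; as I understand it, once more customer flows cross the threshold than there are groups available, the same data address gets assigned to multiple flows, which is where the ref_count above 1 comes from.

ip vrf CUST-A
 mdt default 232.0.0.1
 ! 0.0.0.255 wildcard = 256 possible data MDT groups in this pool
 mdt data 232.0.1.0 0.0.0.255 threshold 1000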

We do have a TAC case open, but so far they have not been able to explain this behavior.

Thanks in advance!

Regards!

Hello.

If your data address is reused, then it is expected that undesired VRF multicast flows are received. This basically means that your MDT data pool is exhausted.

The workarounds could be:
- adjust the MDT data threshold to a higher value, so that more flows stay on the default group and only the "fat" ones switch over to a data MDT;
- extend your MDT data pool.

A rough config sketch for both options is below.
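A minimal sketch, assuming the classic ip vrf style syntax and placeholder addresses/values (adjust the range, wildcard and threshold to your own addressing plan; only one mdt data statement is active per VRF):

ip vrf CUST-A
 ! Option 1: raise the switchover threshold (kbps) so only heavy flows move to a data MDT
 mdt data 232.0.1.0 0.0.0.255 threshold 5000
 ! Option 2: widen the wildcard to enlarge the pool (0.0.3.255 = 1024 data groups)
 ! mdt data 232.0.1.0 0.0.3.255 threshold 1000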

Could you please share your TAC ticket number?
