2096 Views · 4 Helpful · 6 Replies

c7600 MVPN Issue

guru.prashant
Level 1

Hi guys,

I am witnessing a strange behaviour on one of my PEs, which seems to be breaking all conventional norms for multicast routing. Any input in diagnosing the root cause of this issue will be of great help. Let me give you some background info first.

I have an MVPN implemented with PIM SSM in the global context and also within the VPN. The network diagram is attached to this thread. MVPN is working OK between PE1, PE2 and PE4, but not via PE3. All Ps/PEs use the same hardware and software platform, c7609-S with 12.2(33)SRC2. PE3's configuration is similar to that of the other PEs.

The following discrepancies are observed on PE3:

1.      The incoming interface in the global-context mroute table does not list the links to P1 and P2.

PE3# sh ip mroute

IP Multicast Routing Table

--output deleted for brevity ------

(10.172.100.9, 232.172.0.5), 3d08h/00:03:18, flags: sT

  Incoming interface: Loopback0, RPF nbr 0.0.0.0, RPF-MFD

  Outgoing interface list:

    TenGigabitEthernet7/6, Forward/Sparse, 07:41:15/00:03:18, H

    TenGigabitEthernet6/6, Forward/Sparse, 3d08h/00:02:51, H

(10.172.100.10, 232.172.0.5), 3d08h/stopped, flags: sTIZ

  Incoming interface: Null, RPF nbr 0.0.0.0

  Outgoing interface list:

    MVRF V59:CCTV, Forward/Sparse, 3d08h/00:01:26

(10.172.100.4, 232.172.0.5), 3d08h/stopped, flags: sTIZ

  Incoming interface: Null, RPF nbr 0.0.0.0

  Outgoing interface list:

    MVRF V59:CCTV, Forward/Sparse, 3d08h/00:01:26

(10.172.100.3, 232.172.0.5), 3d08h/stopped, flags: sTIZ

  Incoming interface: Null, RPF nbr 0.0.0.0

  Outgoing interface list:

    MVRF V59:CCTV, Forward/Sparse, 3d08h/00:01:24

This is even though routes to the multicast sources (the PE loopbacks) exist in the global routing table:

PE3#sh ip route 10.172.100.3

---output deleted for brevity ----

    10.172.50.57, from 10.172.100.3, 07:44:54 ago, via TenGigabitEthernet7/6

      Route metric is 21, traffic share count is 1

  * 10.172.50.33, from 10.172.100.3, 07:44:54 ago, via TenGigabitEthernet6/6

      Route metric is 21, traffic share count is 1

PE3#sh ip route 10.172.100.4

Routing entry for 10.172.100.4/32

  Known via "ospf 1", distance 110, metric 21, type intra area

---output deleted for brevity ----

  * 10.172.50.57, from 10.172.100.4, 07:44:56 ago, via TenGigabitEthernet7/6

      Route metric is 21, traffic share count is 1

    10.172.50.33, from 10.172.100.4, 07:44:56 ago, via TenGigabitEthernet6/6

      Route metric is 21, traffic share count is 1

PE3#sh ip route 10.172.100.10

Routing entry for 10.172.100.10/32

  Known via "ospf 1", distance 110, metric 21, type intra area

  ---output deleted for brevity ----

  * 10.172.50.57, from 10.172.100.10, 07:44:58 ago, via TenGigabitEthernet7/6

      Route metric is 21, traffic share count is 1

    10.172.50.33, from 10.172.100.10, 07:44:58 ago, via TenGigabitEthernet6/6

      Route metric is 21, traffic share count is 1

2.      No PIM neighbour relationships with other PEs over the MTI (Tunnel3):

PE3#sh ip pim vrf V59:CCTV nei

---output deleted for brevity ----

Address                                                            Prio/Mode

10.163.0.130      Port-channel10.560       1w5d/00:01:36     v2    1 / DR

10.163.0.134      Port-channel12.561       1w5d/00:01:33     v2   1 / DR

10.163.0.10       Te8/1.112                2w5d/00:01:19     v2    1 / DR S P

3.      Despite having no PIM neighbours on the MTI, the VPN mroute table shows the MTI as the incoming interface, which is misleading.

woking-manpe01#sh ip mroute vrf V59:CCTV

---output deleted for brevity ----

Interface state: Interface, Next-Hop or VCD, State/Mode

(10.163.37.2, 232.2.2.2), 1w5d/00:03:09, flags: sT

  Incoming interface: Tunnel3, RPF nbr 10.172.100.4, RPF-MFD

  Outgoing interface list:

    Port-channel12.561, Forward/Sparse, 22:58:39/00:03:09, H

Thanks

6 Replies

Luc De Ghein
Cisco Employee

Hi Guru,

So MVPN is broken because multicast in the core is broken. Forget about multicast in the VRF for now; you'll need to fix multicast in the core first.

Check from PE3 towards the ingress PE:

- show ip pim neighbor

- show ip mroute

- show ip route (for S = loopback of the ingress PE)

- check that the SSM range (default or ACL) is configured on every PE and P router

- show ip bgp ipv4 mdt rd <>   on both the ingress and egress PE routers
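For reference, the SSM range check from that list would look something like this on each P/PE (a sketch only; the ACL name is made up, and the range is inferred from the MDT group 232.172.0.5 seen in this thread):

ip pim ssm default
! or, if a custom SSM range is in use:
ip access-list standard SSM-RANGE
 permit 232.0.0.0 0.255.255.255
ip pim ssm range SSM-RANGE

Note that "ip pim ssm default" covers 232.0.0.0/8, which already includes the MDT group above.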

Thanks,

Luc

Hi Luc,

First of all thanks for your posting.

Yes lets focus on the Core multicast routing first.

The global multicast table on PE3 does not list the links to its neighbour Ps as incoming interfaces for the core SSM trees, even though the unicast routes for the sources (the other PEs) point to the Ps as next hops.

The mroute and unicast routing tables are shown in the original posting. As can be seen in the mroute table, SSM is enabled for the MDT group (default range). Here are other details:

PE3#sh ip pim nei

PIM Neighbor Table

Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,

      P - Proxy Capable, S - State Refresh Capable

Neighbor          Interface                Uptime/Expires    Ver   DR

Address                                                            Prio/Mode

10.172.50.57      TenGigabitEthernet7/6    5d05h/00:01:33    v2    1 / S P

10.172.50.33      TenGigabitEthernet6/6    5d05h/00:01:30    v2    1 / S P

! On the Egress router

PE3#sh ip bgp ipv4 mdt rd 65535:51056

BGP table version is 35, local router ID is 10.172.100.9

Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,

              r RIB-failure, S Stale

Origin codes: i - IGP, e - EGP, ? - incomplete

   Network          Next Hop            Metric LocPrf Weight Path

Route Distinguisher: 65535:51056

* i10.172.100.4/32  10.172.100.4             0    100      0 ?

*>i                 10.172.100.4             0    100      0 ?

PE3#

PE3#sh ip pim mdt bgp

MDT (Route Distinguisher + IPv4)               Router ID         Next Hop

 

  MDT group 232.172.0.5

   65535:51055:10.172.100.3                    10.172.100.5      10.172.100.3

   65535:51056:10.172.100.4                    10.172.100.5      10.172.100.4

   65535:51059:10.172.100.10                   10.172.100.5      10.172.100.10

PE3#

!On the Ingress router

PE2#sh ip pim mdt bgp

MDT (Route Distinguisher + IPv4)               Router ID         Next Hop

  MDT group 232.172.0.5

   65535:51055:10.172.100.3                    10.172.100.5      10.172.100.3

   65535:51058:10.172.100.9                    10.172.100.5      10.172.100.9

   65535:51059:10.172.100.10                   10.172.100.5      10.172.100.10

PE2#sh ip bgp ipv4 mdt rd 65535:51058

BGP table version is 34, local router ID is 10.172.100.4

Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,

              r RIB-failure, S Stale

Origin codes: i - IGP, e - EGP, ? - incomplete

   Network          Next Hop            Metric LocPrf Weight Path

Route Distinguisher: 65535:51058

* i10.172.100.9/32  10.172.100.9             0    100      0 ?

*>i                 10.172.100.9             0    100      0 ?

Cheers

Guru

Hi Guru,

So, we saw that only (10.172.100.9, 232.172.0.5) has a valid RPF interface, and 10.172.100.9 is the local loopback of PE3.

PE3# sh ip mroute

IP Multicast Routing Table

--output deleted for brevity ------

(10.172.100.9, 232.172.0.5), 3d08h/00:03:18, flags: sT

  Incoming interface: Loopback0, RPF nbr 0.0.0.0, RPF-MFD

  Outgoing interface list:

    TenGigabitEthernet7/6, Forward/Sparse, 07:41:15/00:03:18, H

    TenGigabitEthernet6/6, Forward/Sparse, 3d08h/00:02:51, H

(10.172.100.10, 232.172.0.5), 3d08h/stopped, flags: sTIZ

  Incoming interface: Null, RPF nbr 0.0.0.0

  Outgoing interface list:

    MVRF V59:CCTV, Forward/Sparse, 3d08h/00:01:26

(10.172.100.4, 232.172.0.5), 3d08h/stopped, flags: sTIZ

  Incoming interface: Null, RPF nbr 0.0.0.0

  Outgoing interface list:

    MVRF V59:CCTV, Forward/Sparse, 3d08h/00:01:26

(10.172.100.3, 232.172.0.5), 3d08h/stopped, flags: sTIZ

  Incoming interface: Null, RPF nbr 0.0.0.0

  Outgoing interface list:

    MVRF V59:CCTV, Forward/Sparse, 3d08h/00:01:24

PE2 only has route 10.172.100.9/32:

PE2#sh ip bgp ipv4 mdt rd 65535:51058

BGP table version is 34, local router ID is 10.172.100.4

Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,

              r RIB-failure, S Stale

Origin codes: i - IGP, e - EGP, ? - incomplete

   Network          Next Hop            Metric LocPrf Weight Path

Route Distinguisher: 65535:51058

* i10.172.100.9/32  10.172.100.9             0    100      0 ?

*>i                 10.172.100.9             0    100      0 ?

Where are the other loopback IPv4 /32 prefixes in AFI/SAFI IPv4 MDT?

PE3 only has 10.172.100.4/32:

PE3#sh ip bgp ipv4 mdt rd 65535:51056

BGP table version is 35, local router ID is 10.172.100.9

Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,

              r RIB-failure, S Stale

Origin codes: i - IGP, e - EGP, ? - incomplete

   Network          Next Hop            Metric LocPrf Weight Path

Route Distinguisher: 65535:51056

* i10.172.100.4/32  10.172.100.4             0    100      0 ?

*>i                 10.172.100.4             0    100      0 ?

Again, the other ones are missing.

You're supposed to have one /32 prefix for every PE participating in the MVPN.

Even the local prefix is missing, the one generated by this PE router itself with next-hop address 0.0.0.0. It looks like PE4 and PE9 do have a local one and do advertise it.

So, what config do PE4 and PE9 have that the others do not?

Please check the configs: you should have PIM enabled on these loopback interfaces and a default MDT configured in this VRF on each PE router. Also, make sure that address-family ipv4 mdt is configured on every PE and RR and has the neighbors activated.
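That checklist amounts to something like the following on each PE (a sketch, not PE3's actual config; the VRF name, MDT group and AS number are taken from the outputs in this thread, while the RR address 10.172.100.5 is only an assumption based on the Router ID column above):

interface Loopback0
 ip pim sparse-mode
!
ip vrf V59:CCTV
 mdt default 232.172.0.5
!
router bgp 65535
 address-family ipv4 mdt
  neighbor 10.172.100.5 activate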

Thanks,

Luc

Hi Luc,

In my posting I have included the output of "sh ip pim mdt bgp", which shows that the PEs are all learning the loopbacks of all the other PEs for the MDT. MVPN is working fine between the other PEs (PE1, PE2 and PE4). PE3 has the same config as the others.

FYI, I have raised it with TAC now and will update on any developments on that front as well.

Regards

Guru

Guru,

Could you make sure you have a unique group configured for the MDT, along with PIM activated on the loopback which is the source.

If possible, post PE3 config here.

HTH

Mohamed

Got the solution now and the issue seems to be sorted.

The issue was related to the RPF check: the RPF check prioritises the route learnt with the better (lower) administrative distance over the longest-match route.

In my case, PE3 is learning a less specific route (10.0.0.0/8) via eBGP from the firewalls, while the loopback addresses of all remote PEs are learnt from the Ps via OSPF. By default, the RPF check prefers the eBGP route rather than the more specific OSPF route in the unicast routing table.
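A quick way to see this on the router: "show ip rpf" prints the route, interface and neighbour that multicast actually uses for its RPF check towards a given source, so on PE3 it would have shown the 10.0.0.0/8 eBGP route being selected instead of the OSPF /32:

PE3# show ip rpf 10.172.100.4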

To force the router to use the more specific route for its RPF decision, I have now configured the hidden Cisco command "ip multicast longest-match".

There are other workarounds as well, such as static mroutes, but for me they would be difficult to support, as I am using multicast multipath in the core.
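For anyone hitting the same problem, the fix and the static-mroute alternative look like this (the static mroute line is purely illustrative, reusing a source address and interface from the outputs earlier in this thread; one such entry would be needed per remote PE loopback):

ip multicast longest-match
! alternative workaround: a static mroute per remote PE loopback, e.g.
ip mroute 10.172.100.4 255.255.255.255 TenGigabitEthernet6/6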

Thanks to Cisco TAC for great support.

- Guru Prashant