12-30-2019 08:10 AM - edited 06-30-2021 05:43 PM
In this document, I will go over how to verify control-plane and data-plane programming for mVPN profile 14 on the NCS55xx.
This profile is becoming the most popular one on this platform because it uses a partitioned MDT and scales easily thanks to mLDP and BGP auto-discovery along with BGP C-multicast signalling.
BGP C-multicast is probably the biggest change from the other mVPN profiles supported on the NCS55xx, and while it can be a bit challenging to understand at first, I will attempt to simplify it as much as possible!
The topology I will use is as follows - very simple:
[Source in vrf] --- [Sansa] (PE - NCS5502-SE) --- [Oberyn] (PE - ASR9904) --- [Receiver in vrf]
Later in the document I will add another PE with a receiver to show how the verification works.
Also note, this can't possibly cover all of multicast theory - a whole book could be dedicated to that alone.
# configure a vrf
vrf mvpn_p14_vrf1400
 address-family ipv4 unicast
  import route-target
   65014:1
  !
  export route-target
   65014:1
  !
 !
# enable ipv4 mvpn for bgp for that vrf
router bgp 100
 vrf mvpn_p14_vrf1400
  rd auto
  address-family ipv4 mvpn
  !
# create a route-policy for rpf-check purposes. It is needed at the tail-end PE, but also used if the head-end PE is a tail-end PE at the same time
route-policy profile-14-rpl
set core-tree mldp-partitioned-p2mp
end-policy
!
# apply the route-policy to pim and specify bgp as the customer signalling for the mdt
router pim
 vrf mvpn_p14_vrf1400
  address-family ipv4
   rpf topology route-policy profile-14-rpl
   mdt c-multicast-routing bgp
   !
  !
 !
# enable partitioned mdt for the vrf and use bgp auto-discovery to discover the PE's participating in the tree
multicast-routing
 vrf mvpn_p14_vrf1400
  address-family ipv4
   mdt source Loopback0
   mdt partitioned mldp ipv4 p2mp
   interface all enable
   bgp auto-discovery mldp
   !
   accounting per-prefix
   rate-per-route
  !
 !
!
# enable mLDP for core tree signalling
mpls ldp
 mldp
  address-family ipv4
  !
 !
 session protection
!
# configure a sample interface where the source is connected
interface Hu0/0/0/46.1400
 encapsulation dot1q 1400
 vrf mvpn_p14_vrf1400
 ipv4 address 8.14.0.1/24
Note the config for the tail-end PE is virtually the same, provided you configured the route-policy. The only changes are of course related to the interfaces.
All you need in the core is mLDP configuration, same as the head-end.
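For reference, here is a minimal sketch of the core (P) router side - only the mLDP part matters, and the interface names below are hypothetical examples, not from my lab:

# hedged sketch: core (P) router only needs LDP/mLDP towards its neighbors
mpls ldp
 mldp
  address-family ipv4
  !
 !
 interface HundredGigE0/0/0/0
 !
 interface HundredGigE0/0/0/1
 !
!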
Let's talk about the signalling with examples. Remember that with PIM we have a join/prune message. BGP works the same way - we use route-type 7 (Source Tree Join).
We also have to signal the partitioned MDT and automatically discover the PEs participating in it, and that is done through route-type 1 (Intra-AS I-PMSI AD) and route-type 3 (S-PMSI AD).
Intra-AS Inclusive P-Multicast Service Interface Auto-Discovery route - that's the full name! Now you know why everybody uses the acronyms.
What is it used for? Basically to signal that PE 1.1.1.7 has that vrf enabled for multicast VPN, along with attributes like the RD and RT.
Important attributes of the route-type 1 are:
1. RD of the VRF from the advertising PE
2. Source PE IP
3. RT of the NLRI (route)
They are highlighted below. You can see two paths because I have a pair of route-reflectors and multipath/add-path enabled.
RP/0/RSP0/CPU0:II09-9904-Oberyn#show bgp ipv4 mvpn rd 1.1.1.7:0 [1][1.1.1.7]/40
BGP routing table entry for [1][1.1.1.7]/40, Route Distinguisher: 1.1.1.7:0
Versions:
  Process           bRIB/RIB  SendTblVer
  Speaker                613         613
Last Modified: Sep 30 16:19:46.142 for 1d04h
Paths: (2 available, best #1, not advertised to EBGP peer)
  Not advertised to any peer
  Path #1: Received by speaker 0
  Not advertised to any peer
  Local, (received & used)
    1.1.1.7 (metric 1000) from 1.1.1.1 (1.1.1.7)
      Origin IGP, localpref 100, valid, internal, best, group-best, import-candidate, not-in-vrf
      Received Path ID 1, Local Path ID 1, version 609
      Community: no-export
      Extended community: RT:65014:1400
      Originator: 1.1.1.7, Cluster list: 1.1.1.1
  Path #2: Received by speaker 0
  Not advertised to any peer
  Local, (received & used)
    1.1.1.7 (metric 1000) from 1.1.1.6 (1.1.1.7)
      Origin IGP, localpref 100, valid, internal, not-in-vrf
      Received Path ID 1, Local Path ID 0, version 0
      Community: no-export
      Extended community: RT:65014:1400
      Originator: 1.1.1.7, Cluster list: 1.1.1.6
Ok, so that's how we discover there's another PE that has this vrf and enabled for mvpn. Cool!
Also note - these are exchanged simply because we have "address-family ipv4 mvpn" under the BGP vrf configuration, as follows:
router bgp 100
 vrf mvpn_p14_vrf1400
  rd auto
  address-family ipv4 unicast
   redistribute connected
  !
  address-family ipv4 mvpn
Route-type 3 is a kind of complement to this signalling mechanism used to enable the P-MDT. A type 1 route lets us know which PE has ipv4 mvpn enabled on that vrf and *would* (but won't in this case) be used as a "default mdt" type route, while route-type 3 tells us the type of partitioned mdt that will be set up - S-PMSI in this case (selective PMSI, aka P-MDT). You can see the source and group information is all 0's because we are not creating a tree for each C-(S,G), but for the whole vrf in this case.
If we were using a profile that builds a default mdt with BGP, the PMSI flags would be seen in the route-type 1; but as you can see, only the route-type 3 has them, because we want a partitioned mdt to be built!
RP/0/RSP0/CPU0:II09-9904-Oberyn#show bgp ipv4 mvpn rd 1.1.1.7:0 [3][0][0.0.0.0][0][0.0.0.0][1.1.1.7]/120
BGP routing table entry for [3][0][0.0.0.0][0][0.0.0.0][1.1.1.7]/120, Route Distinguisher: 1.1.1.7:0
Versions:
  Process           bRIB/RIB  SendTblVer
  Speaker                612         612
Last Modified: Sep 30 16:19:46.142 for 1d04h
Paths: (2 available, best #1, not advertised to EBGP peer)
  Not advertised to any peer
  Path #1: Received by speaker 0
  Not advertised to any peer
  Local, (received & used)
    1.1.1.7 (metric 1000) from 1.1.1.1 (1.1.1.7)
      Origin IGP, localpref 100, valid, internal, best, group-best, import-candidate, not-in-vrf
      Received Path ID 1, Local Path ID 1, version 608
      Community: no-export
      Extended community: RT:65014:1400
      Originator: 1.1.1.7, Cluster list: 1.1.1.1
      PMSI: flags 0x00, type 2, label 0, ID 0x0600010401010107000701000400000001
      PPMP: label 24000
  Path #2: Received by speaker 0
  Not advertised to any peer
  Local, (received & used)
    1.1.1.7 (metric 1000) from 1.1.1.6 (1.1.1.7)
      Origin IGP, localpref 100, valid, internal, not-in-vrf
      Received Path ID 1, Local Path ID 0, version 0
      Community: no-export
      Extended community: RT:65014:1400
      Originator: 1.1.1.7, Cluster list: 1.1.1.6
      PMSI: flags 0x00, type 2, label 0, ID 0x0600010401010107000701000400000001 <<< note type 2 = p2mp mldp tree
      PPMP: label 24000 <<< This is the mldp local label from the source-PE (Sansa)
Also note that 00000001 is the global-id for the mldp tree. This value is unique per profile (type of mldp tree).
Just to be super clear, you can follow the NLRI format by looking at the rfc format for this route type:
+-----------------------------------+
|      RD   (8 octets)              |
+-----------------------------------+
| Multicast Source Length (1 octet) |
+-----------------------------------+
|  Multicast Source (variable)      |
+-----------------------------------+
|  Multicast Group Length (1 octet) |
+-----------------------------------+
|  Multicast Group (variable)       |
+-----------------------------------+
|   Originating Router's IP Addr    |
+-----------------------------------+
So remember, multicast source and group information is all 0's because it's p-mdt tree type.
And the type 2 above is just how BGP tells us, through that route, what kind of tree has to be built in the core. See all types below:
+ 0 - No tunnel information present
+ 1 - RSVP-TE P2MP LSP
+ 2 - mLDP P2MP LSP
+ 3 - PIM-SSM Tree
+ 4 - PIM-SM Tree
+ 5 - BIDIR-PIM Tree
+ 6 - Ingress Replication
+ 7 - mLDP MP2MP LSP
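If you ever want to sanity-check a PMSI tunnel identifier by hand, the mLDP P2MP FEC layout it carries is simple enough to decode in a few lines. The following is a quick Python sketch of my own (not an XR tool), applied to the ID seen in the route-type 3 output above:

```python
# Decode the mLDP P2MP FEC carried in a PMSI tunnel identifier.
# Layout: fec-type(1) | AFI(2) | addr-len(1) | root | opaque-len(2)
#         | opaque TLV: type(1) | length(2) | value (global-id)
PMSI_TUNNEL_TYPES = {
    0: "No tunnel information present",
    1: "RSVP-TE P2MP LSP",
    2: "mLDP P2MP LSP",
    3: "PIM-SSM Tree",
    4: "PIM-SM Tree",
    5: "BIDIR-PIM Tree",
    6: "Ingress Replication",
    7: "mLDP MP2MP LSP",
}

def decode_mldp_p2mp_fec(hex_id: str) -> dict:
    b = bytes.fromhex(hex_id)
    fec_type = b[0]                        # 0x06 = P2MP FEC element
    afi = int.from_bytes(b[1:3], "big")    # 1 = IPv4
    addr_len = b[3]
    root = ".".join(str(x) for x in b[4:4 + addr_len])
    off = 4 + addr_len
    opaque_len = int.from_bytes(b[off:off + 2], "big")
    tlv_len = int.from_bytes(b[off + 3:off + 5], "big")
    global_id = int.from_bytes(b[off + 5:off + 5 + tlv_len], "big")
    return {"fec_type": fec_type, "afi": afi, "root": root,
            "opaque_len": opaque_len, "global_id": global_id}

# The ID from the route-type 3 above: root 1.1.1.7, global-id 1
print(decode_mldp_p2mp_fec("0600010401010107000701000400000001"))
```

This matches what the router displays: the root is the source-PE (1.1.1.7) and the opaque value is the global-id 1.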
Route-type 7 will be our actual BGP join (same idea as a PIM join). If we get an IGMP join in that vrf, we convert it into a type 7 join and send it upstream. See below an example for one route in vrf mvpn_p14_vrf1400:
RP/0/RP0/CPU0:II10-5502SE-Sansa#show bgp ipv4 mvpn rd 1.1.1.7:0 [7][1.1.1.7:0][100][32][8.14.0.2][32][232.14.0.1]/184
BGP routing table entry for [7][1.1.1.7:0][100][32][8.14.0.2][32][232.14.0.1]/184, Route Distinguisher: 1.1.1.7:0
Versions:
  Process           bRIB/RIB  SendTblVer
  Speaker               1971        1971
Last Modified: Oct  1 11:48:03.780 for 09:49:41
Paths: (2 available, best #1)
  Not advertised to any peer
  Path #1: Received by speaker 0
  Not advertised to any peer
  Local, (received & used)
    1.1.1.10 (metric 1000) from 1.1.1.1 (1.1.1.10)
      Origin IGP, localpref 100, valid, internal, best, group-best, import-candidate, imported
      Received Path ID 1, Local Path ID 1, version 1772
      Extended community: RT:1.1.1.7:102 <<<<<<<<<<<<<<<< this RT is derived from the ipv4 uni advertisement from source-PE
      Originator: 1.1.1.10, Cluster list: 1.1.1.1
      Source AFI: IPv4 MVPN, Source VRF: mvpn_p14_vrf1400, Source Route Distinguisher: 1.1.1.7:0
  Path #2: Received by speaker 0
  Not advertised to any peer
  Local, (received & used)
    1.1.1.10 (metric 1000) from 1.1.1.6 (1.1.1.10)
      Origin IGP, localpref 100, valid, internal, import-candidate, imported
      Received Path ID 1, Local Path ID 0, version 0
      Extended community: RT:1.1.1.7:102
      Originator: 1.1.1.10, Cluster list: 1.1.1.6
      Source AFI: IPv4 MVPN, Source VRF: mvpn_p14_vrf1400, Source Route Distinguisher: 1.1.1.7:0
Note that the RT 1.1.1.7:102 is created at the source-PE (Sansa). It's called a "Route Import RT", and it's how the source-PE understands which VRF the join (route-type 7) is for. Without this info, the source-PE wouldn't know which vrf this join belongs to.
You can double check this value by going to the source-PE and run the following command:
RP/0/RP0/CPU0:II10-5502SE-Sansa#show bgp vrf mvpn_p14_vrf1400 ipv4 mvpn mvpn-rt
BGP MVPN RT list:
  RT:1.1.1.7:102
  RT:1.1.1.7:0
The "102" in this case is the VRF-ID. You can easily double check this:
RP/0/RP0/CPU0:II10-5502SE-Sansa#show rib tables | i "vrf1400|Table"
D - Table Deleted, C - Table Reached Convergence
   VRF/Table              SAFI   Table ID     PrfxLmt    PrfxCnt  TblVersion  N F D C
mvpn_p14_vrf1400/defa     uni    0xe0000066   10000000         3          35  N N N Y  <<<< 0x66 is 102 in decimal
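You can do the hex-to-decimal check yourself; a tiny Python sketch, assuming (as the output above suggests) that the VRF-ID sits in the low-order bits of the table ID:

```python
# 0xe0000066 -> the low-order bits carry the VRF-ID (assumption from
# the RIB output above; the high bits are address-family/flag bits).
table_id = 0xE0000066
vrf_id = table_id & 0xFFFF  # mask off the high-order bits
print(hex(vrf_id), vrf_id)  # 0x66 102
```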
You can see the NLRI contains the join info - basically the source and group the join is for. See below the IGMP join info on the LHR (last hop router):
RP/0/RSP0/CPU0:II09-9904-Oberyn#show igmp vrf mvpn_p14_vrf1400 groups 232.14.0.1 detail
Interface:      HundredGigE0/1/0/0.1400
Group:          232.14.0.1
Uptime:         09:51:40
Router mode:    INCLUDE
Host mode:      INCLUDE
Last reporter:  3.14.0.2
Suppress:       0
Group source list:
  Source Address   Uptime    Expires   Fwd  Flags
  8.14.0.2         09:51:40  00:01:25  Yes  Remote 4
So this is used to complete the signalling - if there were other hops between this PE and the FHR (first hop router), the BGP join would be translated back into a PIM join and keep following the path upstream - but in our case this is it! You then end up seeing the following in the PIM topology table on the source-PE:
RP/0/RP0/CPU0:II10-5502SE-Sansa#show pim vrf mvpn_p14_vrf1400 topology 232.14.0.1

IP PIM Multicast Topology Table
Entry state: (*/S,G)[RPT/SPT] Protocol Uptime Info
Entry flags: KAT - Keep Alive Timer, AA - Assume Alive, PA - Probe Alive
    RA - Really Alive, IA - Inherit Alive, LH - Last Hop
    DSS - Don't Signal Sources, RR - Register Received
    SR - Sending Registers, SNR - Sending Null Registers
    E - MSDP External, EX - Extranet
    MFA - Mofrr Active, MFP - Mofrr Primary, MFB - Mofrr Backup
    DCC - Don't Check Connected, ME - MDT Encap, MD - MDT Decap
    MT - Crossed Data MDT threshold, MA - Data MDT Assigned
    SAJ - BGP Source Active Joined, SAR - BGP Source Active Received,
    SAS - BGP Source Active Sent, IM - Inband mLDP, X - VxLAN
Interface state: Name, Uptime, Fwd, Info
Interface flags: LI - Local Interest, LD - Local Dissinterest,
    II - Internal Interest, ID - Internal Dissinterest,
    LH - Last Hop, AS - Assert, AB - Admin Boundary, EX - Extranet,
    BGP - BGP C-Multicast Join, BP - BGP Source Active Prune,
    MVS - MVPN Safi Learned, MV6S - MVPN IPv6 Safi Learned

(8.14.0.2,232.14.0.1)SPT SSM Up: 09:53:40 JP: Join(00:00:12)
RPF: HundredGigE0/0/0/46.1400,8.14.0.2* Flags: <<< rpf points to where the source is
  Lmdtmvpn/p14/vrf1400          09:53:40  fwd BGP <<< look at this OIF pointing to the mdt for the vpn
Note that type-7 routes are per-C-(S,G); that's how profile 14 scales so nicely, because it uses a protocol that was made to handle millions of NLRIs.
Some notes for people not used to partitioned-mdt concept:
1. If you no longer have joins coming from receivers, no mldp tree is built.
2. If 1 join comes from a vrf's CE, then the mldp tree is built.
3. If more than 1 join comes, that same tree is used - we don't do 1 tree/group.
4. Data MDT can be used if desired to build separate trees for groups that might carry more data and achieve better load distribution over the core - not needed in most scenarios.
What happens when a join comes from a receiver at the egress PE node? We have to signal this via mLDP. Here's what happens at the egress and ingress nodes seen via "debug mpls mldp sig":
Oberyn (Egress PE):
RP/0/RSP0/CPU0:2019 Oct 6 21:33:23.767 EDT: mpls_ldp[1300]: %ROUTING-MLDP-5-BRANCH_ADD : 0x0006B [global-id 1] P2MP 1.1.1.7, Add PIM MDT branch remote label no_label
RP/0/RSP0/CPU0:2019 Oct 6 21:33:23.767 EDT: mpls_ldp[1300]: %ROUTING-MLDP-5-BRANCH_ADD : 0x0006C [recursive 257:17235968] 1.1.1.7:[global-id 1] P2MP 1.1.1.7, Add Recursive 0x0006B branch remote label no_label
RP/0/RSP0/CPU0:2019 Oct 6 21:33:23.770 EDT: mpls_ldp[1300]: %ROUTING-MLDP-5-BRANCH_DELETE : 0x0006C [recursive 257:17235968] 1.1.1.7:[global-id 1] P2MP 1.1.1.7, Delete Recursive 0x0006B branch remote label no_label
RP/0/RSP0/CPU0:Oct 6 21:33:23.771 EDT: mpls_ldp[1300]: DBG-mLDP[9410], Peer(1.1.1.7:0)/Root(1.1.1.7): ldp_mldp_sig_peer_msg_tx: 'Label-Mapping' msg (type 0x400), label 24349, P2MP FEC { root 1.1.1.7, opaque-len 7 }, no MP-Status, status_code 0x0
What we can see from the above:
1. We allocate a global-id for the p2mp tree. No remote label is assigned because we are the decapsulation point.
2. We add a "forwarding recursive" entry because of the mLDP LFA FRR feature - don't worry about this one, it deserves a TOI by itself.
3. Last but not least, we transmit a "Label-Mapping" towards 1.1.1.7 as the root of the tree (because that's the PE to reach the source), and we inform what label that router will use to send multicast traffic over that tree (24349).
Sansa (Ingress PE):
RP/0/RP0/CPU0:Oct 6 21:33:23.773 EDT: mpls_ldp[1285]: DBG-mLDP[7552], Peer(1.1.1.10:0)/Root(1.1.1.7): ldp_mldp_sig_peer_msg_rx_nfn: 'Label-Mapping' msg (type 0x400), msg_id 27413, lbl 24349, P2MP FEC { root 1.1.1.7, opaque-len 7 }, no MP-Status, status_code 0x0
RP/0/RP0/CPU0:2019 Oct 6 21:33:23.774 EDT: mpls_ldp[1285]: %ROUTING-MLDP-5-BRANCH_ADD : 0x00001 [global-id 1] P2MP 1.1.1.7, Add LDP 1.1.1.10:0 branch remote label 24349
1. Sansa receives the label-mapping from Oberyn.
2. It then adds a branch to the newly created tree, which now has one branch going towards Oberyn using label 24349.
The easiest way to confirm the binding information for the tree is with the following CLI:
Ingress PE:
RP/0/RP0/CPU0:II10-5502SE-Sansa#show mpls mldp bindings | util egrep "24349" -B 3 <<< label here
LSP-ID: 0x00001 Paths: 2 Flags: Pk
  0x00001 P2MP 1.1.1.7 [global-id 1]
   Local Label: 24000 Remote: 1048577 Inft: Lmdtmvpn/p14/vrf1400 RPF-ID: 0 TIDv4/v6: 0xE0000066/0xE0800066
   Remote Label: 24349 NH: 1.1.1.10 Inft:
Of course, if you don't know the label yet, just run the command without the filter.
You can see we allocated local label 24000.
The interface uses this nice naming format: Lmdtmvpn/p14/vrf1400 (the vrf name is mvpn_p14_vrf1400).
Let's check now some more info on this lsm-id:
RP/0/RP0/CPU0:II10-5502SE-Sansa#show mpls mldp lsm-id 0x0001 details
mLDP database
LSM-ID: 0x00001 VRF: default Type: P2MP Uptime: 1w0d
FEC Root : 1.1.1.7 (we are the root) <<< source-PE is the root always
FEC Length : 12 bytes
FEC Value internal : 020100040000000101010107
Opaque length : 4 bytes
Opaque value : 01 0004 00000001 <<< global-id 1
Opaque decoded : [global-id 1]
Features : MBB RFEC RFWD Trace
Upstream neighbor(s) :
None
Downstream client(s):
LDP 1.1.1.10:0 Uptime: 01:37:52 <<< egress-PE is the downstream ldp neighbor (1.1.1.10)
Rec Next Hop : 1.1.1.10
Remote label (D) : 24349 <<< outgoing label for the tree
LDP MSG ID : 31884
PIM MDT Uptime: 1w0d
Egress intf : Lmdtmvpn/p14/vrf1400
Table ID : IPv4: 0xe0000066 IPv6: 0xe0800066
HLI : 0x00001
Ingress : Yes
Peek : Yes <<< typically set to yes for head-node for control-plane purposes (punt)
PPMP : Yes
Local Label : 24000 (internal) <<< our local label - use that to check the lfib
We can check the mldp forwarding db as well:
RP/0/RP0/CPU0:II10-5502SE-Sansa#show mpls mldp forwarding lsm-id 0x0001
mLDP MPLS forwarding database
 24000  LSM-ID: 0x00001  HLI: 0x00001  flags: In Pk
   Lmdtmvpn/p14/vrf1400, RPF-ID: 0, TIDv4: E0000066, TIDv6: E0800066
   24349, NH: 1.1.1.10, Intf:  Role: H, Flags: 0x4
Now check the LFIB:
RP/0/RP0/CPU0:II10-5502SE-Sansa#show mpls forwarding labels 24000
Local  Outgoing    Prefix             Outgoing     Next Hop        Bytes
Label  Label       or ID              Interface                    Switched
------ ----------- ------------------ ------------ --------------- ------------
24000  24349       mLDP/IR: 0x00001   BE17         1.10.0.10       0
Note the prefix/id here is the LSP-ID.
There are some pretty cool commands under the "show mvpn" umbrella that we can use for verification of control plane.
Source-PE:
RP/0/RP0/CPU0:II10-5502SE-Sansa#show mvpn vrf mvpn_p14_vrf1400 context

MVPN context information for VRF mvpn_p14_vrf1400 (0x5562db3ca928)

RD: 1.1.1.7:0 (Valid, IID 0x1), VPN-ID: 0:0
Import Route-targets : 2
  RT:1.1.1.7:0, BGP-AD
  RT:1.1.1.7:102, BGP-AD <<< here you can see the route import RT as well
BGP Auto-Discovery Enabled (I-PMSI added) , MS-PMSI sent
MLDP Core-tree data:
  MDT Name: Lmdtmvpn_p14_vrf1400, Handle: 0x8000674, idb: 0x5562db8f7898
  MTU: 1376, MaxAggr: 255, SW_Int: 30, AN_Int: 60
  RPF-ID: 205/0, C:0, O:1, D:0, CP:0
  MLDP Number of Roots: 0 (Local: 0), HLI: 0x00000, Rem HLI: 0x00000
  Partitioned MDT: Configured, P2MP (RD:Not added, ID:Added), HLI: 0x00001, Loc Label: 24000, Remote: None
    ID: 1 (0x5562da96b6d0), Ctrl Trees : 0/0/0, Ctrl ID: 0 (0x0), IR Ctrl ID: 0 (0x0), Ctrl HLI: 0x00000
  P2MP Def MDT ID: 0 (0x0), added: 0, HLI: 0x00000, Cfg: 0/0
Some very important information highlighted. Use the detail option to get some stats about invalid/errors.
Another very useful command is the "pe" variant of it.
Source-PE:
RP/0/RP0/CPU0:II10-5502SE-Sansa#show mvpn vrf mvpn_p14_vrf1400 pe

MVPN Provider Edge Router information

VRF : mvpn_p14_vrf1400

PE Address : 1.1.1.10 (0x5562dab90e58) <<< egress-PE learned
  RD: 0:0:0 (null), RIB_HLI 0, RPF-ID 405, Remote RPF-ID 0, State: 0, S-PMSI: 0
  PPMP_LABEL: 24000, MS_PMSI_HLI: 0x00000, Bidir_PMSI_HLI: 0x00000, MLDP-added: [RD 0, ID 0, Bidir ID 0, Remote Bidir ID 0],
  Counts(SHR/SRC/DM/DEF-MD): 0, 0, 0, 0, Bidir: GRE RP Count 0, MPLS RP Count 0
  RSVP-TE added: [Leg 0, Ctrl Leg 0, Part tail 0 Def Tail 0,
  IR added: [Def Leg 0, Ctrl Leg 0, Part Leg 0, Part tail 0, Part IR Tail Label 0
  bgp_i_pmsi: 1,0/0 , bgp_ms_pmsi/Leaf-ad: 1/0, bgp_bidir_pmsi: 0, remote_bgp_bidir_pmsi: 0,
  PMSIs: I 0x5562dab91028, 0x0, MS 0x5562dbadcd28, Bidir Local: 0x0, Remote: 0x0, BSR/Leaf-ad 0x0/0, Autorp-disc/Leaf-ad 0x0/0, Autorp-ann/Leaf-ad 0x0/0
  IIDs: I/6: 0x1/0x0, B/R: 0x0/0x0, MS: 0x1, B/A/A: 0x0/0x0/0x0
  Bidir RPF-ID: 406, Remote Bidir RPF-ID: 0
  I-PMSI: Unknown/None (0x5562dab91028)
  I-PMSI rem: (0x0)
  MS-PMSI: MLDP-P2MP, Opaque: [global-id 1] (0x5562dbadcd28)
  Bidir-PMSI: (0x0)
  Remote Bidir-PMSI: (0x0)
  BSR-PMSI: (0x0)
  A-Disc-PMSI: (0x0)
  A-Ann-PMSI: (0x0)
  RIB Dependency List: 0x0
  Bidir RIB Dependency List: 0x0
  Sources: 0, RPs: 0, Bidir RPs: 0
Once multiple PEs become part of the tree as joins come in, you should see them in the mvpn database as well.
When it comes to the RPF check, the only profile-14-specific difference is at the egress PE. Check it out:
Egress-PE:
RP/0/RSP0/CPU0:II09-9904-Oberyn#show pim vrf mvpn_p14_vrf1400 rpf 8.14.0.2
Table: IPv4-Unicast-default
* 8.14.0.2/32 [200/0]
    via Lmdtmvpn/p14/vrf1400 with rpf neighbor 1.1.1.7
    Connector: 1.1.1.7:0:1.1.1.7, Nexthop: 1.1.1.7
You can see the interface back to the source is the p-mdt interface. The RPF neighbor is of course the source-PE.
A data MDT is nothing more than a tree built for one or more specific groups, with the intent of moving higher-bandwidth flows off the default/partitioned MDT and achieving better load distribution over the core.
One of the main sources of confusion is the set of knobs needed to even start using data MDTs. Consider the following cheatsheet when you configure this for the first time:
Note: All the below commands are applied under "multicast-routing vrf <name> address-family ipv4".
1. “mdt immediate-switch” (hidden cli until 7.4.1)
By default, mRIB applies a 5s delay to switch from a default/partitioned MDT to a data MDT encap-ID. This knob removes that delay.
2. “mdt data <> immediate-switch”
Ensures the traffic will always be encapsulated only in a data MDT, and never sent over the default/partitioned MDT.
3. "mdt data <> threshold 0"
By default, this value is 1 (kbps). Setting it to 0 still allows traffic to be encapsulated over the default/partitioned MDT for the first packets, but the switchover to the data MDT happens as quickly as possible: the router detects that a source is active in the vrf and signals the S-PMSI via a BGP route-type 3.
As with almost anything, there's no one-size-fits-all in this case, but I almost want to say that option 3 is likely all you need and what most customers have been using.
Using option 2 may result in a longer delay before traffic flows, since no packets will ever be sent over the partitioned mdt tree.
Example:
multicast-routing
 vrf mvpn
  address-family ipv4
   mdt data mldp 200 threshold 0
From now on, we will be checking the source-PE programming. Let's start with the mrib mpls command - you can use either the LSM-ID or the local label, up to you.
RP/0/RP0/CPU0:II10-5502SE-Sansa#show mrib mpls forwarding head-lsm-id 0x0001 detail

LSP information (mLDP) :
  LSM-ID: 0x00001, Role: Head, Peek, Head LSM-ID: 0x00001
    Assoc-TIDs: 0xe0000066/0xe0800066, MDT: Lmdtmvpn/p14/vrf1400
    Incoming Label       : (24000)
    Transported Protocol : <unknown>
    Explicit Null        : None
    IP lookup            : enabled
    Platform information : FGID: 20394, Tunnel RIF: -1, RIF VRF: -1 <<< this is the core MCID / don't mind Tunnel RIF/RIF VRF at the head-node

    Outsegment Info #1 [T/Pop]:
      No info.
    Outsegment Info #2 [H/Push, Recursive]:
      OutLabel: 24349, NH: 1.1.1.10, ID: 0x1, Sel IF: Bundle-Ether17(V)
        UL IF: HundredGigE0/0/0/24, Node-ID: 0x8
      Backup Tunnel: Un:0x0 Backup State: Ready, NH: 0.0.0.0, MP Label: 0
        Backup Sel IF: Bundle-Ether78(V), UL IF: HundredGigE0/0/0/3, Node-ID: 0x0
We are mostly interested in the outgoing label and nexthop information.
Note there are two types of MCID for this route: the route MCID and the core MCID.
In order to find out the route mcid, use the following command.
RP/0/RP0/CPU0:II10-5502SE-Sansa#show mrib vrf mvpn_p14_vrf1400 route 8.14.0.2 232.14.0.1 detail

IP Multicast Routing Information Base
Entry flags: L - Domain-Local Source, E - External Source to the Domain,
    C - Directly-Connected Check, S - Signal, IA - Inherit Accept,
    IF - Inherit From, D - Drop, ME - MDT Encap, EID - Encap ID,
    MD - MDT Decap, MT - MDT Threshold Crossed, MH - MDT interface handle
    CD - Conditional Decap, MPLS - MPLS Decap, EX - Extranet
    MoFE - MoFRR Enabled, MoFS - MoFRR State, MoFP - MoFRR Primary
    MoFB - MoFRR Backup, RPFID - RPF ID Set, X - VXLAN
Interface flags: F - Forward, A - Accept, IC - Internal Copy,
    NS - Negate Signal, DP - Don't Preserve, SP - Signal Present,
    II - Internal Interest, ID - Internal Disinterest, LI - Local Interest,
    LD - Local Disinterest, DI - Decapsulation Interface
    EI - Encapsulation Interface, MI - MDT Interface, LVIF - MPLS Encap,
    EX - Extranet, A2 - Secondary Accept, MT - MDT Threshold Crossed,
    MA - Data MDT Assigned, LMI - mLDP MDT Interface, TMI - P2MP-TE MDT Interface
    IRMI - IR MDT Interface

(8.14.0.2,232.14.0.1) Ver: 0x6a09 RPF nbr: 8.14.0.2 Flags: RPF EID,
  FGID: 21138, Statistics enabled: 0x0
  Up: 01:57:24
  RPF-ID: 0, Encap-ID: 262145
  Incoming Interface List
    HundredGigE0/0/0/46.1400 Flags: A, Up: 01:57:24
  Outgoing Interface List
    Lmdtmvpn/p14/vrf1400 Flags: F LMI TR, Up: 01:57:24, Head LSM-ID: 0x00001
In this command we can see the route MCID 21138.
Also take note of the LMI Flag pointing to a mLDP MDT interface.
Basic rule of when we use route vs core MCID:
1. If there's a local receiver (LR) at the source-PE, we use the route MCID, which holds the collapsed membership of both the local receivers and the core receivers (for mldp).
2. If there's no local receiver, we only use the core MCID for the replication over the mldp tree.
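The rule above can be sketched as pseudologic in Python (my own illustrative naming, not any XR code):

```python
# Illustrative only: with a local receiver, replication uses the
# collapsed route MCID; otherwise only the core MCID of the mLDP
# tree is used.
def mcid_for_replication(has_local_receiver: bool,
                         route_mcid: int, core_mcid: int) -> int:
    return route_mcid if has_local_receiver else core_mcid

# Using the values from this document: route MCID 21138, core MCID 20394
print(mcid_for_replication(False, 21138, 20394))  # 20394
```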
RP/0/RP0/CPU0:II10-5502SE-Sansa#show mfib vrf mvpn_p14_vrf1400 route 8.14.0.2 232.14.0.1 detail

IP Multicast Forwarding Information Base
Entry flags: C - Directly-Connected Check, S - Signal, D - Drop,
  IA - Inherit Accept, IF - Inherit From, EID - Encap ID,
  ME - MDT Encap, MD - MDT Decap, MT - MDT Threshold Crossed,
  MH - MDT interface handle, CD - Conditional Decap,
  DT - MDT Decap True, EX - Extranet, RPFID - RPF ID Set,
  MoFE - MoFRR Enabled, MoFS - MoFRR State, X - VXLAN
Interface flags: F - Forward, A - Accept, IC - Internal Copy,
  NS - Negate Signal, DP - Don't Preserve, SP - Signal Present,
  EG - Egress, EI - Encapsulation Interface, MI - MDT Interface,
  EX - Extranet, A2 - Secondary Accept
Forwarding/Replication Counts: Packets in/Packets out/Bytes out
Failure Counts: RPF / TTL / Empty Olist / Encap RL / Other

(8.14.0.2,232.14.0.1),   Flags: EID
  Up: 02:02:31
  Last Used: never
  SW Forwarding Counts: 0/0/0
  SW Replication Counts: 0/0/0
  SW Failure Counts: 0/0/0/0/0
  Route ver: 0x6a09
  MVPN Info :-
    Associated Table ID : 0xe0000000
    MDT Handle: 0x0, MDT Probe:N [N], Rate:Y, Acc:Y
    MDT SW Ingress Encap V4/V6, Egress decap: 0 / 0, 0
    Encap ID: 262145, RPF ID: 0
    Local Receiver: False, Turnaround: False
  Lmdtmvpn/p14/vrf1400 Flags: F LMI TR, Up:02:02:31
  HundredGigE0/0/0/46.1400 Flags: A, Up:02:02:31

RP/0/RP0/CPU0:II10-5502SE-Sansa#show mfib vrf mvpn_p14_vrf1400 route 8.14.0.2 232.14.0.1 detail location 0/0/CPU0

IP Multicast Forwarding Information Base
Entry flags: C - Directly-Connected Check, S - Signal, D - Drop,
  IA - Inherit Accept, IF - Inherit From, EID - Encap ID,
  ME - MDT Encap, MD - MDT Decap, MT - MDT Threshold Crossed,
  MH - MDT interface handle, CD - Conditional Decap,
  DT - MDT Decap True, EX - Extranet, RPFID - RPF ID Set,
  MoFE - MoFRR Enabled, MoFS - MoFRR State, X - VXLAN
Interface flags: F - Forward, A - Accept, IC - Internal Copy,
  NS - Negate Signal, DP - Don't Preserve, SP - Signal Present,
  EG - Egress, EI - Encapsulation Interface, MI - MDT Interface,
  EX - Extranet, A2 - Secondary Accept
Forwarding/Replication Counts: Packets in/Packets out/Bytes out
Failure Counts: RPF / TTL / Empty Olist / Encap RL / Other

(8.14.0.2,232.14.0.1),   Flags: EID , FGID: 21138, -1, 21138,
  Mesh NPU Bitmap: 0x0, tun_rif: -1, Statistics Enabled: No ,
  Up: 02:10:48
  Last Used: never
  SW Forwarding Counts: 0/0/0
  SW Replication Counts: 0/0/0
  SW Failure Counts: 0/0/0/0/0
  Route ver: 0x6a09
  MVPN Info :-
    Associated Table ID : 0xe0000000
    MDT Handle: 0x0, MDT Probe:N [N], Rate:Y, Acc:Y
    MDT SW Ingress Encap V4/V6, Egress decap: 0 / 0, 0
    Encap ID: 262145, RPF ID: 0
    Local Receiver: False, Turnaround: False
  Lmdtmvpn/p14/vrf1400 Flags: F LMI TR, Up:02:10:48
  HundredGigE0/0/0/46.1400 Flags: A, Up:02:10:48
RP/0/RP0/CPU0:II10-5502SE-Sansa#show mfib vrf mvpn_p14_vrf1400 hardware route 8.14.0.2 232.14.0.1 detail location 0/0/CPU0

Route (8.14.0.2: 232.14.0.1)
   HAL PD context
      VRF ID: 102  Core MCID : 20394  Core backup MCID 20394 <<< Core-MCID has to match with mldp info
   HAL Ingress route context:
      Route FGID: 21138  RPF IF signal : not-set  Local receivers: not-set <<< route FGID not used in this case
      Encap ID flag: set, Encap ID: 262145
      Statistics enabled: not-set
      Ingress engine context:
         local_route: set, is_accept_intf_bvi: not-set  is_tun_rif_set:not-set
         VRF ID: 0  RPF ID:0 <<< vrf-id and rpf-id are not used in this case, only at the egress PE
   HAL Egress route context:
      RPF ID VALID: not-set, RPF ID: 0, Tunnnel UIDB 0
      Egress engine context:
         out_of_sync: not-set, local_intf: not-set
         encap_intf:not-set  bvi_count: 0
   DPA Route context:
      Number of OLE 0  VRF ID: 102  Route Flags 0x0 <<< you won't see any OLE's as the only OIF is the MDT intf
      Incoming interface : Hu0/0/0/46.1400  A_intf_id: 0x763  Merged flag 0 <<< check A-Intf is the IIF (show im database ifhandle 0x763)
      Tunnel RIF : 0x0
      FGID: 20394  Core FGID 0
      FEC ID : 0x2001f169  Punt action: 0x0  Punt sub action 0 <<< make sure the FEC-id is correct and points to the core MCID
      TCAM entry ID : 0x0  IPMC action: 0x4  FEC Accessed 1
      L3 Intf Refhandle : 0x308d9d7598  L3 interface ref key: 0x0
      Statistics enabled : not-set  Statistics activated : not-set
Now let's check the core mcid and ensure we got the right information.
Remember we have to check it on all NPUs in all locations. This box has only one location, but keep that in mind for modular systems.
RP/0/RP0/CPU0:II10-5502SE-Sansa#show controllers fia diagshell 0 "diag multicast egress 20394 members" location all
Node ID: 0/0/CPU0
Multicast ID: 20394 Ingress/Egress: Egress-list Members count: 3
---------------------------------------------------------------------------------
Member | Gport Port  | Gport Type | 1st Encap ID/CUD | 2nd Encap ID/CUD |
---------------------------------------------------------------------------------
   0   |  0x4000005  | Local_Port |     0x13998      |       0x0        |
   1   |  0x4000001  | Local_Port |     0x13998      |       0x0        |
   2   |  0x4000009  | Local_Port |     0x13998      |       0x0        |

RP/0/RP0/CPU0:II10-5502SE-Sansa#show controllers fia diagshell 1 "diag multicast egress 20394 members" location all
Node ID: 0/0/CPU0
Multicast ID: 20394 Ingress/Egress: Egress-list Members count: 4
---------------------------------------------------------------------------------
Member | Gport Port  | Gport Type | 1st Encap ID/CUD | 2nd Encap ID/CUD |
---------------------------------------------------------------------------------
   0   |  0x4000011  | Local_Port |     0x13998      |       0x0        |
   1   |  0x400000d  | Local_Port |     0x13998      |       0x0        |
   2   |  0x4000015  | Local_Port |     0x13998      |       0x0        |
   3   |  0x4000009  | Local_Port |     0x13998      |       0x0        |

RP/0/RP0/CPU0:II10-5502SE-Sansa#show controllers fia diagshell 2 "diag multicast egress 20394 members" location all
Node ID: 0/0/CPU0
Multicast group ID: 20394 does not exist in the specified configuration!

RP/0/RP0/CPU0:II10-5502SE-Sansa#show controllers fia diagshell 3 "diag multicast egress 20394 members" location all
Node ID: 0/0/CPU0
Multicast group ID: 20394 does not exist in the specified configuration!

RP/0/RP0/CPU0:II10-5502SE-Sansa#show controllers fia diagshell 4 "diag multicast egress 20394 members" location all
Node ID: 0/0/CPU0
Multicast ID: 20394 Ingress/Egress: Egress-list Members count: 1
---------------------------------------------------------------------------------
Member | Gport Port  | Gport Type | 1st Encap ID/CUD | 2nd Encap ID/CUD |
---------------------------------------------------------------------------------
   0   |  0x4000015  | Local_Port |     0x13997      |       0x0        |

RP/0/RP0/CPU0:II10-5502SE-Sansa#show controllers fia diagshell 5 "diag multicast egress 20394 members" location all
Node ID: 0/0/CPU0
Multicast group ID: 20394 does not exist in the specified configuration!

RP/0/RP0/CPU0:II10-5502SE-Sansa#show controllers fia diagshell 6 "diag multicast egress 20394 members" location all
Node ID: 0/0/CPU0
Multicast group ID: 20394 does not exist in the specified configuration!

RP/0/RP0/CPU0:II10-5502SE-Sansa#show controllers fia diagshell 7 "diag multicast egress 20394 members" location all
Node ID: 0/0/CPU0
Multicast group ID: 20394 does not exist in the specified configuration!
In my scenario I have four possible physical interfaces towards the core, so that's what we see in the MCID.
Starting with release 6.6.3, you can check the MCID much more easily with the following command (note: this is not the same MCID or router as above - it's only here to show the new CLI).
RP/0/RP0/CPU0:ios#show mfib hardware egress mcid 27749 npu all location all
0/1/CPU0 npu:1 pp_port:0x4000011 ecap1:0x7aa ecap2:0x0 sysport:717 interface:Hu0/1/0/8
0/1/CPU0 npu:1 pp_port:0x4000005 ecap1:0x7aa ecap2:0x0 sysport:705 interface:Hu0/1/0/10
0/1/CPU0 npu:1 pp_port:0x4000001 ecap1:0x7aa ecap2:0x0 sysport:701 interface:Hu0/1/0/11
0/1/CPU0 npu:1 pp_port:0x4000009 ecap1:0x7aa ecap2:0x0 sysport:709 interface:Hu0/1/0/9
0/1/CPU0 npu:2 pp_port:0x4000011 ecap1:0x7aa ecap2:0x0 sysport:817 interface:Hu0/1/0/14
0/1/CPU0 npu:2 pp_port:0x400000d ecap1:0x7aa ecap2:0x0 sysport:813 interface:Hu0/1/0/13
0/1/CPU0 npu:2 pp_port:0x4000015 ecap1:0x7aa ecap2:0x0 sysport:821 interface:Hu0/1/0/12
0/1/CPU0 npu:2 pp_port:0x4000005 ecap1:0x7aa ecap2:0x0 sysport:805 interface:Hu0/1/0/16
0/1/CPU0 npu:2 pp_port:0x4000001 ecap1:0x7aa ecap2:0x0 sysport:801 interface:Hu0/1/0/17
0/1/CPU0 npu:2 pp_port:0x4000009 ecap1:0x7aa ecap2:0x0 sysport:809 interface:Hu0/1/0/15
0/1/CPU0 npu:3 pp_port:0x4000011 ecap1:0x7aa ecap2:0x0 sysport:917 interface:Hu0/1/0/20
0/1/CPU0 npu:3 pp_port:0x400000d ecap1:0x7aa ecap2:0x0 sysport:913 interface:Hu0/1/0/19
0/1/CPU0 npu:3 pp_port:0x4000015 ecap1:0x7aa ecap2:0x0 sysport:921 interface:Hu0/1/0/18
0/1/CPU0 npu:3 pp_port:0x4000009 ecap1:0x7aa ecap2:0x0 sysport:909 interface:Hu0/1/0/21
As you can see, a single command now shows you the actual XR interfaces directly. Cool!
Don't forget to check the label programming as well.
RP/0/RP0/CPU0:II10-5502SE-Sansa#show mpls forwarding labels 24000 hardware ingress detail location 0/0/CPU0 Local Outgoing Prefix Outgoing Next Hop Bytes Label Label or ID Interface Switched ------ ----------- ------------------ ------------ --------------- ------------ 24000 mLDP/IR: 0x00001 (0x00001) Updated Oct 7 13:50:37.357 mLDP/IR LSM-ID: 0x00001, MDT: 0x8000674, Head LSM-ID: 0x00001 IPv4 Tableid: 0xe0000066, IPv6 Tableid: 0xe0800066 Flags:IP Lookup:set, Expnullv4:not-set, Expnullv6:not-set Payload Type v4:not-set, Payload Type v6:not-set, l2vpn:not-set Head:set, Tail:not-set, Bud:not-set, Peek:set, inclusive:not-set Ingress Drop:not-set, Egress Drop:not-set RPF-ID:0, Encap-ID:0 Disp-Tun:[ifh:0x0, label:-] Platform Data [28]: { 0 0 79 170 0 0 79 170 0 0 0 0 0 0 0 0 0 0 0 0 255 255 255 255 228 24 52 251 } mpls paths: 1, local mpls paths: 1, protected mpls paths: 1 24349 mLDP/IR: 0x00001 (0x00001) \ BE17 1.10.0.10 N/A Updated: Oct 7 13:50:37.340 My Nodeid:0x0 Interface Nodeids: [ 0x8 - - - - - - - - - ] Interface Handles: [ 0x260 - - - - - - - - - ] Backup Interface Nodeids: [ 0x0 - - - - - - - - - ] Backup Interface Handles: [ 0x1b8 - - - - - - - - - ] Packets Switched: 0 LEAF - HAL pd context : sub-type : MPLS_P2MP, ecd_marked:0, has_collapsed_ldi:0 collapse_bwalk_required:0, ecdv2_marked:0, HW Walk: LEAF: PI:0x308d5b7b48 PD:0x308d5b7be8 rev:84366 type: MPLS_P2MP (11) FEC key: 0 label action: MPLS_POP_NH_LKUP MOL: PD: 0x308f5ab758 rev: 84374 dpa-rev: 20110096 fgid: 20394 (0x4faa) LSM id: 1 ipmcolist DPA hdl: 0x308fbf00a8 <<< check core mcid matches is head: 1 is tail: 0 is bud: 0 drop_flag: 0 <<< is_head should be set to 1 num MPIs: 1 MPI: PD: 0x308ebf21d8 rev: 84371 p-rev: 84368 84369 76494 flags: 0xdb in-label: 24000 out-label: 24349 neighbor: 1.1.1.10 <<< out-label matches PRIMARY: mpls encap id: 0x40013997 mplsnh DPA handle: 0x308f831898 dpa-rev: 20110093 LDP local lbl: 24101 out lbl: 1048580 nh: 1.10.0.10 nh encap hdl: 0x308ed3d690 nh ifh: 0x80009a4 <<< 
points to be17, right interface incomplete: 0 NPU mask: 0x10 sysport: 201326592 BACKUP: mpls encap id: 0x40013998 mplsnh DPA handle: 0x308f8aab68 dpa-rev: 20110094 LDP backup out lbl: 24222 pq lbl: 1048577 nh: 1.10.0.14 nh encap hdl: 0x308ed3e7d0 nh ifh: 0x80009ac incomplete: 0 NPU mask: 0x3 sysport: 201326593 BKUP-FRR-P2MP: PI:0x308e92d5d8 PD:0x308e92d698 Rev: 84369 parent-rev: 84368 50603 dpa-rev: 20110092 DPA Handle: 0x308f4123e0, HW Id: 0x80000006, Status: BLK, npu_mask: 0x3, Bkup IFH: 0x80009ac PROT-FRR-P2MP: PI:0x308e92da38 PD:0x308e92daf8 Rev: 84368 parent-rev: 76494 dpa-rev: 20110091 FRR Active: 0, DPA Handle: 0x308f412100, HW Id: 0x80000006, Status: FWD, npu_mask: 0x10, Prot IFH: 0x80009a4
I will now add another egress PE to the topology. Once it sends the join, we will start seeing two egress labels on the source PE. This is the topology now:
From Sansa's point of view, we should verify that there are two egress labels. Let's check that first, to find out which ingress label the Jon-Snow router will expect.
RP/0/RP0/CPU0:II10-5502SE-Sansa#show mpls forwarding labels 24000 Local Outgoing Prefix Outgoing Next Hop Bytes Label Label or ID Interface Switched ------ ----------- ------------------ ------------ --------------- ------------ 24000 24213 mLDP/IR: 0x00001 BE78 1.10.0.14 0 <<< BE78 added, label 24213 24350 mLDP/IR: 0x00001 BE17 1.10.0.10 0
Now let's start the core verification. Note that on a mid-point node this is basically an MPLS swap operation, so the main commands are the show mpls ones.
RP/0/RP0/CPU0:II10-5504-Jon-Snow#show mrib mpls forwarding labels 24213 detail LSP information (mLDP) : LSM-ID: 0x0011E, Role: Mid <<<< role should be mid Incoming Label : 24213 Transported Protocol : <unknown> Explicit Null : None IP lookup : disabled Platform information : FGID: 42049, Tunnel RIF: -1, RIF VRF: -1 <<< grab fgid/mcid Outsegment Info #1 [M/Swap]: OutLabel: 24325, NH: 1.10.0.37, ID: 0x11E, IF: Bundle-Ether104(V) UL IF: HundredGigE0/0/0/33, Node-ID: 0x4 <<< underlying outgoing intf is located on 0/0/cpu0
RP/0/RP0/CPU0:II10-5504-Jon-Snow#show mpls forwarding labels 24213 Local Outgoing Prefix Outgoing Next Hop Bytes Label Label or ID Interface Switched ------ ----------- ------------------ ------------ --------------- ------------ 24213 24325 mLDP/IR: 0x0011e BE104 1.10.0.37 0
Just a generic look: we see this is an mLDP FEC and the LSP-ID is part of it. Let's double-check from the mLDP bindings.
RP/0/RP0/CPU0:II10-5504-Jon-Snow#show mpls mldp bindings 0x11e mLDP MPLS Bindings database LSP-ID: 0x0011E Paths: 2 Flags: 0x0011E P2MP 1.1.1.7 [global-id 1] Local Label: 24213 Active Remote Label: 24325 NH: 1.10.0.37 Inft: Bundle-Ether104
So that's what we expect.
Now our job is to verify that the label has the right forwarding chain.
Note that this command shows different information depending on where the egress interface is located.
In this lab the egress LC is 0/0/CPU0, so let me first show what the output looks like on an LC that has no egress interface for this tree.
RP/0/RP0/CPU0:II10-5504-Jon-Snow#show mpls forwarding labels 24213 hardware ingress detail location 0/1/CPU0 Local Outgoing Prefix Outgoing Next Hop Bytes Label Label or ID Interface Switched ------ ----------- ------------------ ------------ --------------- ------------ 24213 mLDP/IR: 0x0011e (0x0011e) Updated Oct 8 16:27:51.025 mLDP/IR LSM-ID: 0x0011e, MDT: 0x0 Flags:IP Lookup:not-set, Expnullv4:not-set, Expnullv6:not-set Payload Type v4:not-set, Payload Type v6:not-set, l2vpn:not-set Head:not-set, Tail:not-set, Bud:not-set, Peek:not-set, inclusive:not-set <<< make sure it's not marked as Head or Tail Ingress Drop:not-set, Egress Drop:not-set RPF-ID:0, Encap-ID:0 Disp-Tun:[ifh:0x0, label:-] Platform Data [28]: { 0 0 164 65 0 0 164 65 0 0 0 0 0 0 0 2 0 0 0 0 255 255 255 255 235 103 36 251 } mpls paths: 1, local mpls paths: 1, protected mpls paths: 0 24325 mLDP/IR: 0x0011e (0x0011e) \ BE104 1.10.0.37 N/A Updated: Oct 8 16:27:51.025 My Nodeid:0x100 Interface Nodeids: [ 0x4 - - - - - - - - - ] Interface Handles: [ 0x1d8 - - - - - - - - - ] Backup Interface Nodeids: [ - - - - - - - - - - ] Backup Interface Handles: [ - - - - - - - - - - ] Packets Switched: 0 LEAF - HAL pd context : sub-type : MPLS_P2MP, ecd_marked:0, has_collapsed_ldi:0 collapse_bwalk_required:0, ecdv2_marked:0, HW Walk: LEAF: PI:0x308c7b2e68 PD:0x308c7b2f08 rev:74350 type: MPLS_P2MP (11) FEC key: 0 label action: MPLS_POP_NH_LKUP MOL: PD: 0x3090905710 rev: 74349 dpa-rev: 0 fgid: 42049 (0xa441) LSM id: 0 ipmcolist DPA hdl: 0x0 <<< fgid matches mrib entry is head: 0 is tail: 0 is bud: 0 drop_flag: 0 num MPIs: 1 MPI: PD: 0x3090c55cd8 rev: 74346 p-rev: 19165 flags: 0x9 in-label: 24213 out-label: 24325 neighbor: 1.10.0.37 <<< outlabel matches from pi information PRIMARY: mpls encap id: 0 mplsnh DPA handle: (nil) dpa-rev: 0 <<< no encap-id or mplsnh programmed. 
this is expected LDP local lbl: 0 out lbl: 0 nh: 1.10.0.37 nh encap hdl: 0x308c56c600 nh ifh: 0x800006c <<< next-hop matches, and ifhandle points to be104 incomplete: 0 NPU mask: 0 sysport: 0
Now let's check the location 0/0/cpu0 where the member interface of bundle-e104 is located:
RP/0/RP0/CPU0:II10-5504-Jon-Snow#show mpls forwarding labels 24213 hardware ingress detail location 0/0/CPU0 Local Outgoing Prefix Outgoing Next Hop Bytes Label Label or ID Interface Switched ------ ----------- ------------------ ------------ --------------- ------------ 24213 mLDP/IR: 0x0011e (0x0011e) Updated Oct 8 16:27:51.030 mLDP/IR LSM-ID: 0x0011e, MDT: 0x0 Flags:IP Lookup:not-set, Expnullv4:not-set, Expnullv6:not-set Payload Type v4:not-set, Payload Type v6:not-set, l2vpn:not-set Head:not-set, Tail:not-set, Bud:not-set, Peek:not-set, inclusive:not-set Ingress Drop:not-set, Egress Drop:not-set RPF-ID:0, Encap-ID:0 Disp-Tun:[ifh:0x0, label:-] Platform Data [28]: { 0 0 164 65 0 0 164 65 0 0 0 0 0 0 0 2 0 0 0 0 255 255 255 255 235 103 36 251 } mpls paths: 1, local mpls paths: 1, protected mpls paths: 0 24325 mLDP/IR: 0x0011e (0x0011e) \ BE104 1.10.0.37 N/A Updated: Oct 8 16:27:51.030 My Nodeid:0x0 Interface Nodeids: [ 0x4 - - - - - - - - - ] Interface Handles: [ 0x1d8 - - - - - - - - - ] Backup Interface Nodeids: [ - - - - - - - - - - ] Backup Interface Handles: [ - - - - - - - - - - ] Packets Switched: 0 LEAF - HAL pd context : sub-type : MPLS_P2MP, ecd_marked:0, has_collapsed_ldi:0 collapse_bwalk_required:0, ecdv2_marked:0, HW Walk: LEAF: PI:0x308c332e68 PD:0x308c332f08 rev:71417 type: MPLS_P2MP (11) FEC key: 0 label action: MPLS_POP_NH_LKUP MOL: PD: 0x3090faf710 rev: 71416 dpa-rev: 5082564 fgid: 42049 (0xa441) LSM id: 0 ipmcolist DPA hdl: 0x308cfc90a8 <<< fgid matches is head: 0 is tail: 0 is bud: 0 drop_flag: 0 num MPIs: 1 MPI: PD: 0x30912aacd8 rev: 71413 p-rev: 12104 flags: 0x19 in-label: 24213 out-label: 24325 neighbor: 1.10.0.37 PRIMARY: mpls encap id: 0x4001380f mplsnh DPA handle: 0x308f14fd88 dpa-rev: 5082563 <<< here we have an mpls encap LDP local lbl: 0 out lbl: 0 nh: 1.10.0.37 nh encap hdl: 0x308c0041b0 nh ifh: 0x800006c <<< ifhandle matches be104 incomplete: 0 NPU mask: 0x4 sysport: 201326600 <<< npu mask is 0x4 (100), that is basically NPU2
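The NPU mask at the end of the output is a plain bitmask over NPU instances: bit n set means NPU n is in the replication list, which is why 0x4 maps to NPU 2 here (and, on Sansa earlier, 0x10 mapped to NPU 4). A minimal Python sketch of that decoding, assuming the bit-to-NPU convention just described:

```python
# Illustrative only -- decode an "NPU mask" value as printed in the
# hardware output (e.g. "NPU mask: 0x4") into NPU instance numbers.
# Assumption: bit n set => NPU instance n is in the replication list.
def decode_npu_mask(mask: int) -> list[int]:
    npus = []
    bit = 0
    while mask:
        if mask & 1:
            npus.append(bit)
        mask >>= 1
        bit += 1
    return npus

print(decode_npu_mask(0x4))   # bit 2 set -> [2], i.e. NPU 2
print(decode_npu_mask(0x3))   # bits 0 and 1 set -> [0, 1]
```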
To double-check the NPU mask, you can use the mrib fgid info command.
RP/0/RP0/CPU0:II10-5504-Jon-Snow#show mrib fgid info 42049 FGID information ---------------- FGID (type) : 42049 (Primary) Context : Label 24213 Members[ref] : 0/0/2[2] <<< you can see slot 0/0 and NPU2 are programmed into the FGID/MCID LineCard Slot : 0 :: Npu Instance 2 FGID bitmap : 0x0000000000000004 0x0000000000000000 0x0000000000000000 0x0000000000000000 0x0000000000000000 0x0000000000000000 0x0000000000000000 0x0000000000000000 0x0000000000000000 0x0000000000000000 0x0000000000000000 0x0000000000000000 0x0000000000000000 0x0000000000000000 0x0000000000000000 0x0000000000000000 FGID chkpt context valid : TRUE FGID chkpt context: Label 0x5e95 FGID chkpt info : 0x2b000000 Fgid in batch : NO Secondary node count : 0
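The FGID bitmap can be decoded the same way. This is a hypothetical decoder, not a Cisco tool; it assumes (consistent with this output, where Members is 0/0/2 and the first word is 0x...04) that the bitmap prints one 64-bit word per line-card slot, with bit n marking NPU instance n on that slot:

```python
# Illustrative sketch -- layout assumed from the output above:
# one 64-bit word per LC slot; bit n in a word = NPU instance n
# on that slot is a member of the FGID.
def decode_fgid_bitmap(words: list[int]) -> list[tuple[int, int]]:
    members = []
    for slot, word in enumerate(words):
        for npu in range(64):
            if word & (1 << npu):
                members.append((slot, npu))
    return members

bitmap = [0x4] + [0x0] * 15   # first word 0x...04, rest zero
print(decode_fgid_bitmap(bitmap))  # [(0, 2)] -> slot 0, NPU 2
```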
Now let's make sure the FGID has the right information. Note that from 6.6.3 onwards you can use the new CLI here as well.
RP/0/RP0/CPU0:II10-5504-Jon-Snow#show controllers fia diagshell 0 "diag multicast egress 42049 members" location all Node ID: 0/1/CPU0 Multicast group ID: 42049 does not exist in the specified configuration! Node ID: 0/0/CPU0 Multicast group ID: 42049 does not exist in the specified configuration! Node ID: 0/2/CPU0 Multicast group ID: 42049 does not exist in the specified configuration! Node ID: 0/3/CPU0 Multicast group ID: 42049 does not exist in the specified configuration! RP/0/RP0/CPU0:II10-5504-Jon-Snow#show controllers fia diagshell 1 "diag multicast egress 42049 members" location all Node ID: 0/1/CPU0 Multicast group ID: 42049 does not exist in the specified configuration! Node ID: 0/0/CPU0 Multicast group ID: 42049 does not exist in the specified configuration! Node ID: 0/2/CPU0 Multicast group ID: 42049 does not exist in the specified configuration! Node ID: 0/3/CPU0 RP/0/RP0/CPU0:II10-5504-Jon-Snow#show controllers fia diagshell 2 "diag multicast egress 42049 members" location all Node ID: 0/1/CPU0 Multicast group ID: 42049 does not exist in the specified configuration! Node ID: 0/0/CPU0 Multicast ID: 42049 Ingress/Egress: Egress-list Members count: 1 --------------------------------------------------------------------------------- Member | Gport Port | Gport Type | 1st Encap ID/CUD | 2nd Encap ID/CUD | --------------------------------------------------------------------------------- 0 | 0x4000009 | Local_Port | 0x1380f | 0x0 | <<< pp port 9 / encapid 0x1380f Node ID: 0/2/CPU0 Multicast group ID: 42049 does not exist in the specified configuration! Node ID: 0/3/CPU0 RP/0/RP0/CPU0:II10-5504-Jon-Snow#show controllers fia diagshell 3 "diag multicast egress 42049 members" location all Node ID: 0/1/CPU0 Multicast group ID: 42049 does not exist in the specified configuration! 
Node ID: 0/0/CPU0 Node ID: 0/2/CPU0 Node ID: 0/3/CPU0 RP/0/RP0/CPU0:II10-5504-Jon-Snow#show controllers fia diagshell 4 "diag multicast egress 42049 members" location all Node ID: 0/1/CPU0 Multicast group ID: 42049 does not exist in the specified configuration! Node ID: 0/0/CPU0 Node ID: 0/2/CPU0 Node ID: 0/3/CPU0 RP/0/RP0/CPU0:II10-5504-Jon-Snow#show controllers fia diagshell 5 "diag multicast egress 42049 members" location all Node ID: 0/1/CPU0 Multicast group ID: 42049 does not exist in the specified configuration! Node ID: 0/0/CPU0 Node ID: 0/2/CPU0 Node ID: 0/3/CPU0 RP/0/RP0/CPU0:II10-5504-Jon-Snow#show controllers fia diagshell 6 "diag multicast egress 42049 members" location all Node ID: 0/1/CPU0 Node ID: 0/0/CPU0 Node ID: 0/2/CPU0 Node ID: 0/3/CPU0 RP/0/RP0/CPU0:II10-5504-Jon-Snow#show controllers fia diagshell 7 "diag multicast egress 42049 members" location all Node ID: 0/1/CPU0 Node ID: 0/0/CPU0 Node ID: 0/2/CPU0 Node ID: 0/3/CPU0
So pp port 9 is the member programmed. Let's make sure it points to Hu0/0/0/33:
RP/0/RP0/CPU0:II10-5504-Jon-Snow#show controllers npu voq-usage interface hu0/0/0/33 instance 2 location 0/0/CPU0 ------------------------------------------------------------------- Node ID: 0/0/CPU0 Intf Intf NPU NPU PP Sys VOQ Flow VOQ Port name handle # core Port Port base base port speed (hex) type ---------------------------------------------------------------------- Hu0/0/0/33 1d8 2 0 9 209 1496 13328 local 100G <<< it does!
Alright, so everything seems good here! Let's check the TAIL now.
Let's make sure the MRIB MPLS forwarding info is correct at the PI level.
RP/0/RP0/CPU0:II10-5508-The-Hound#show mrib mpls forwarding labels 24325 detail LSP information (mLDP) : LSM-ID: 0x00067, Role: Tail, Peek <<< check mainly the role and peek flag RPF-ID: 0x000CD, Assoc-TIDs: 0xe0000066/0xe0800066, MDT: Lmdtmvpn/p14/vrf1400 Incoming Label : 24325 Transported Protocol : <unknown> Explicit Null : None IP lookup : enabled Platform information : FGID: 66799, Tunnel RIF: 7, RIF VRF: 0 <<< most important info is the Tunnel RIF Outsegment Info #1 [T/Pop]: No info.
The Tunnel RIF is key here. It is basically an ID allocated by GRID, hashed from two keys: the VRF-ID and the RPF-ID. Together they generate the tunnel RIF, which is the value used for the RPF check.
If the tunnel RIF in MRIB doesn't match the hardware entry, the RPF check will fail.
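The exact GRID hash is internal to the platform, but conceptually the (VRF-ID, RPF-ID) pair deterministically maps to one tunnel RIF, and MRIB and the hardware must agree on that value. A purely conceptual Python sketch (a dictionary-backed allocator standing in for GRID; not the real algorithm):

```python
# Conceptual sketch only -- the real GRID allocation is internal.
# The point: (VRF-ID, RPF-ID) maps deterministically to one tunnel
# RIF, so MRIB and hardware derive the same value, or RPF fails.
class GridAllocator:
    def __init__(self):
        self._table: dict[tuple[int, int], int] = {}
        self._next_rif = 1

    def tunnel_rif(self, vrf_id: int, rpf_id: int) -> int:
        key = (vrf_id, rpf_id)
        if key not in self._table:      # allocate once per key pair
            self._table[key] = self._next_rif
            self._next_rif += 1
        return self._table[key]

grid = GridAllocator()
rif = grid.tunnel_rif(vrf_id=102, rpf_id=205)
assert rif == grid.tunnel_rif(102, 205)  # same keys -> same RIF
assert rif != grid.tunnel_rif(102, 206)  # new RPF-ID -> new RIF
```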
Let's see what tunnel rif is in mrib.
RP/0/RP0/CPU0:II10-5508-The-Hound#show mrib vrf mvpn_p14_vrf1400 route 232.14.0.1 detail IP Multicast Routing Information Base Entry flags: L - Domain-Local Source, E - External Source to the Domain, C - Directly-Connected Check, S - Signal, IA - Inherit Accept, IF - Inherit From, D - Drop, ME - MDT Encap, EID - Encap ID, MD - MDT Decap, MT - MDT Threshold Crossed, MH - MDT interface handle CD - Conditional Decap, MPLS - MPLS Decap, EX - Extranet MoFE - MoFRR Enabled, MoFS - MoFRR State, MoFP - MoFRR Primary MoFB - MoFRR Backup, RPFID - RPF ID Set, X - VXLAN Interface flags: F - Forward, A - Accept, IC - Internal Copy, NS - Negate Signal, DP - Don't Preserve, SP - Signal Present, II - Internal Interest, ID - Internal Disinterest, LI - Local Interest, LD - Local Disinterest, DI - Decapsulation Interface EI - Encapsulation Interface, MI - MDT Interface, LVIF - MPLS Encap, EX - Extranet, A2 - Secondary Accept, MT - MDT Threshold Crossed, MA - Data MDT Assigned, LMI - mLDP MDT Interface, TMI - P2MP-TE MDT Interface IRMI - IR MDT Interface (8.14.0.2,232.14.0.1) Ver: 0x623b RPF nbr: 1.1.1.7 Flags: RPF RPFID, FGID: 66699, Statistics enabled: 0x0, Tunnel RIF: 7 <<< RIF is 7 Up: 20:16:48 RPF-ID: 205, Encap-ID: 0 <<< here you can see the RPF-ID Incoming Interface List Lmdtmvpn/p14/vrf1400 Flags: A LMI, Up: 20:16:48 Outgoing Interface List TenGigE0/6/0/4.1400 Flags: F NS LI, Up: 20:16:48
You can grab the VRF-ID from the rib tables command. It doesn't matter much in this case; the point is just to know how the RIF is built.
RP/0/RP0/CPU0:II10-5508-The-Hound#show rib tables | include "Table ID|vrf1400" VRF/Table SAFI Table ID PrfxLmt PrfxCnt TblVersion N F D C mvpn_p14_vrf1400/defa uni 0xe0000066 10000000 4 4 N N N Y <<< 0x66 is 102 in decimal
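As the annotation notes, 0x66 is 102 in decimal: the low byte of the table ID 0xe0000066 matches the decimal VRF ID 102 that appears as "tunnel vrf" in the hardware output. A one-liner to check the conversion:

```python
# Table ID for mvpn_p14_vrf1400, from "show rib tables" above.
table_id = 0xe0000066
print(table_id & 0xff)        # low byte -> 102 (the VRF ID)
print(hex(table_id & 0xff))   # -> '0x66'
```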
Now let's check the ingress lfib entry.
RP/0/RP0/CPU0:II10-5508-The-Hound#show mpls forwarding labels 24325 hardware ingress detail location 0/3/CPU0 Local Outgoing Prefix Outgoing Next Hop Bytes Label Label or ID Interface Switched ------ ----------- ------------------ ------------ --------------- ------------ 24325 Unlabelled mLDP/IR: 0x00067 <<< unlabelled outgoing as expected Updated Oct 8 16:27:51.026 mLDP/IR LSM-ID: 0x00067, MDT: 0x8000734 IPv4 Tableid: 0xe0000066, IPv6 Tableid: 0xe0800066 Flags:IP Lookup:set, Expnullv4:not-set, Expnullv6:not-set Payload Type v4:not-set, Payload Type v6:not-set, l2vpn:not-set Head:not-set, Tail:set, Bud:not-set, Peek:set, inclusive:not-set Ingress Drop:not-set, Egress Drop:not-set RPF-ID:205, Encap-ID:0 Disp-Tun:[ifh:0x0, label:-] Platform Data [28]: { 0 1 4 239 0 1 4 239 0 0 0 255 0 0 0 255 0 0 0 0 0 0 0 7 148 212 228 251 } mpls paths: 0, local mpls paths: 0, protected mpls paths: 0 LEAF - HAL pd context : sub-type : MPLS_P2MP, ecd_marked:0, has_collapsed_ldi:0 collapse_bwalk_required:0, ecdv2_marked:0, HW Walk: LEAF: PI:0x308c477528 PD:0x308c4775c8 rev:67903 type: MPLS_P2MP (11) FEC key: 0 label action: MPLS_POP_FIB_LKUP MOL: PD: 0x3090efcbf0 rev: 67902 dpa-rev: 0 fgid: 66799 (0x104ef) LSM id: 0 ipmcolist DPA hdl: 0x0 is head: 0 is tail: 1 is bud: 0 drop_flag: 0 <<< is_tail has to be set num MPIs: 0 tunnel vrf: 102 rpf: 205 rif: 7 <<< make sure rif matches mrib / "tunnel vrf" is the vrf-id master_lc: 255 master_np: 255 my_slot: 3 <<< don't worry about master_lc and master_np - only used in bud node
Now we need to verify the mfib hardware entry.
RP/0/RP0/CPU0:II10-5508-The-Hound#show mfib vrf mvpn_p14_vrf1400 hardware route 232.14.0.1 detail location 0/3/CPU0 Route (8.14.0.2: 232.14.0.1) HAL PD context VRF ID: 102 Core MCID : 0 Core backup MCID 0 HAL Ingress route context: Route FGID: 66699 RPF IF signal : not-set Local receivers: set <<< mcid is 66699 Encap ID flag: not-set, Encap ID: 0 Statistics enabled: not-set Ingress engine context: local_route: set, is_accept_intf_bvi: not-set is_tun_rif_set:set <<< make sure this is set VRF ID: 102 RPF ID:205 HAL Egress route context: RPF ID VALID: set, RPF ID: 0, Tunnnel UIDB 0 Egress engine context: out_of_sync: not-set, local_intf: not-set encap_intf:not-set bvi_count: 0 DPA Route context: Number of OLE 0 VRF ID: 102 Route Flags 0x0 <<< no OLE's as this is the ingress LC, so expected Incoming interface : None A_intf_id: 0x0 Merged flag 0 Tunnel RIF : 0x7 FGID: 66699 Core FGID 0 <<< this is the most important info to match FEC ID : 0x2001eedb Punt action: 0x0 Punt sub action 0 TCAM entry ID : 0x0 IPMC action: 0x0 FEC Accessed 1 L3 Intf Refhandle : 0x0 L3 interface ref key: 0x0 Statistics enabled : not-set Statistics activated : not-set
RP/0/RP0/CPU0:II10-5508-The-Hound#show mfib vrf mvpn_p14_vrf1400 hardware route 232.14.0.1 detail location 0/6/CPU0 Route (8.14.0.2: 232.14.0.1) HAL PD context VRF ID: 102 Core MCID : 0 Core backup MCID 0 HAL Ingress route context: Route FGID: 66699 RPF IF signal : not-set Local receivers: set Encap ID flag: not-set, Encap ID: 0 Statistics enabled: not-set Ingress engine context: local_route: set, is_accept_intf_bvi: not-set is_tun_rif_set:set VRF ID: 102 RPF ID:205 HAL Egress route context: RPF ID VALID: set, RPF ID: 0, Tunnnel UIDB 0 Egress engine context: out_of_sync: not-set, local_intf: set encap_intf:not-set bvi_count: 0 DPA Route context: Number of OLE 1 VRF ID: 102 Route Flags 0x0 <<< at the egress, we see the OLE programmed Incoming interface : None A_intf_id: 0x0 Merged flag 0 Tunnel RIF : 0x7 FGID: 66699 Core FGID 0 FEC ID : 0x2001eee2 Punt action: 0x0 Punt sub action 0 TCAM entry ID : 0x0 IPMC action: 0x0 FEC Accessed 1 L3 Intf Refhandle : 0x0 L3 interface ref key: 0x0 Statistics enabled : not-set Statistics activated : not-set Egress Route OLEs: NPU ID: 0 Outgoing intf Te0/6/0/4.1400 <<< this matches the mrib OLE Type : Main Interface outgoing port : 0xe2d cud: 0x0 is_bundle: 0 Sys_port : 0x0 mpls encap id: 0x0 LAG ID: 0 is_pw_access: 0 pw_encap_id:0x0 EFP-Visibility: not-set L3 intf refhndl : 0x3089d85c78 L3 intf refkey: 0x3000178 L2 Port refhandle : 0x3089dd0868 L2 Port refkey: 0x30000b0 MPLS nh refhandle : 0x0 MPLS nh refkey: 0x0 LAG port refhandle : 0x0 LAG port refkey: 0x0 Total fwd packets : 0 Total fwd bytes: 0
With that checked, we can go to the next step, the MCID check.
RP/0/RP0/CPU0:II10-5508-The-Hound#show controllers fia diagshell 0 "diag multicast egress 66699 members" location all Node ID: 0/5/CPU0 Multicast group ID: 66699 does not exist in the specified configuration! Node ID: 0/1/CPU0 Multicast group ID: 66699 does not exist in the specified configuration! Node ID: 0/6/CPU0 Multicast ID: 66699 Ingress/Egress: Egress-list Members count: 1 --------------------------------------------------------------------------------- Member | Gport Port | Gport Type | 1st Encap ID/CUD | 2nd Encap ID/CUD | --------------------------------------------------------------------------------- 0 | 0x400001d | Local_Port | 0x746 | 0x147ff | <<< 1d is 29, which matches with pp port of ten0/6/0/4 Node ID: 0/2/CPU0 Multicast group ID: 66699 does not exist in the specified configuration! Node ID: 0/3/CPU0 Multicast group ID: 66699 does not exist in the specified configuration! Node ID: 0/4/CPU0 Multicast group ID: 66699 does not exist in the specified configuration! RP/0/RP0/CPU0:II10-5508-The-Hound#show controllers fia diagshell 1 "diag multicast egress 66699 members" location all Node ID: 0/5/CPU0 Multicast group ID: 66699 does not exist in the specified configuration! Node ID: 0/1/CPU0 Multicast group ID: 66699 does not exist in the specified configuration! Node ID: 0/6/CPU0 Node ID: 0/2/CPU0 Multicast group ID: 66699 does not exist in the specified configuration! Node ID: 0/3/CPU0 Multicast group ID: 66699 does not exist in the specified configuration! Node ID: 0/4/CPU0 Multicast group ID: 66699 does not exist in the specified configuration! RP/0/RP0/CPU0:II10-5508-The-Hound#show controllers fia diagshell 2 "diag multicast egress 66699 members" location all Node ID: 0/5/CPU0 Multicast group ID: 66699 does not exist in the specified configuration! Node ID: 0/1/CPU0 Multicast group ID: 66699 does not exist in the specified configuration! Node ID: 0/6/CPU0 Node ID: 0/2/CPU0 Multicast group ID: 66699 does not exist in the specified configuration! 
Node ID: 0/3/CPU0 Multicast group ID: 66699 does not exist in the specified configuration! Node ID: 0/4/CPU0 Multicast group ID: 66699 does not exist in the specified configuration! RP/0/RP0/CPU0:II10-5508-The-Hound#show controllers fia diagshell 3 "diag multicast egress 66699 members" location all Node ID: 0/5/CPU0 Node ID: 0/1/CPU0 Multicast group ID: 66699 does not exist in the specified configuration! Node ID: 0/6/CPU0 Node ID: 0/2/CPU0 Multicast group ID: 66699 does not exist in the specified configuration! Node ID: 0/3/CPU0 Node ID: 0/4/CPU0 Multicast group ID: 66699 does not exist in the specified configuration! RP/0/RP0/CPU0:II10-5508-The-Hound#show controllers fia diagshell 4 "diag multicast egress 66699 members" location all Node ID: 0/5/CPU0 Node ID: 0/1/CPU0 Multicast group ID: 66699 does not exist in the specified configuration! Node ID: 0/6/CPU0 Node ID: 0/2/CPU0 Multicast group ID: 66699 does not exist in the specified configuration! Node ID: 0/3/CPU0 Node ID: 0/4/CPU0 Multicast group ID: 66699 does not exist in the specified configuration! RP/0/RP0/CPU0:II10-5508-The-Hound#show controllers fia diagshell 5 "diag multicast egress 66699 members" location all Node ID: 0/5/CPU0 Node ID: 0/1/CPU0 Multicast group ID: 66699 does not exist in the specified configuration! Node ID: 0/6/CPU0 Node ID: 0/2/CPU0 Multicast group ID: 66699 does not exist in the specified configuration! Node ID: 0/3/CPU0 Node ID: 0/4/CPU0 Multicast group ID: 66699 does not exist in the specified configuration! RP/0/RP0/CPU0:II10-5508-The-Hound#show controllers fia diagshell 6 "diag multicast egress 66699 members" location all Node ID: 0/5/CPU0 Node ID: 0/1/CPU0 Node ID: 0/6/CPU0 Node ID: 0/2/CPU0 Node ID: 0/3/CPU0 Node ID: 0/4/CPU0 RP/0/RP0/CPU0:II10-5508-The-Hound#show controllers fia diagshell 7 "diag multicast egress 66699 members" location all Node ID: 0/5/CPU0 Node ID: 0/1/CPU0 Node ID: 0/6/CPU0 Node ID: 0/2/CPU0 Node ID: 0/3/CPU0 Node ID: 0/4/CPU0
RP/0/RP0/CPU0:II10-5508-The-Hound#show controllers npu voq-usage interface ten0/6/0/4 instance all location 0/6/CPU0 ------------------------------------------------------------------- Node ID: 0/6/CPU0 Intf Intf NPU NPU PP Sys VOQ Flow VOQ Port name handle # core Port Port base base port speed (hex) type ---------------------------------------------------------------------- Te0/6/0/4 30000b0 0 1 29 3629 1072 6176 local 10G <<< pp port 29 is programmed correctly on mcid
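Across the diagshell outputs, the member Gport values appear to carry the local PP port in their low bits (0x4000009 on Jon-Snow mapped to PP port 9, 0x400001d here maps to 29), which is what the voq-usage command confirms. A small sanity-check sketch, assuming that low-bits encoding:

```python
# Assumption based on the outputs above: the local PP port number sits
# in the low bits of the Gport value, with flag bits (0x4000000) on top.
def gport_pp_port(gport: int) -> int:
    return gport & 0xFFFF

print(gport_pp_port(0x4000009))  # 9  -> matches Hu0/0/0/33 on Jon-Snow
print(gport_pp_port(0x400001d))  # 29 -> matches Te0/6/0/4 on The-Hound
```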
We have now confirmed that the MCID points to the right interface.
There you go! Hopefully you now have a better feel for how this profile works!
Bruno Novais
Hey Bruno
Great article. Question: for a typical IPTV multicast design with several source PEs and many receiver PE routers, will profile 14 work with just P2MP? Since each source PE has unique groups, this wouldn't be considered MP2MP, would it (e.g. profile 15)?
Thanks
Hi @Jon Berres
Thank you.
In a P2MP mLDP scenario, each PE that has source traffic going to other PEs will have a tree with itself as the root, i.e. five trees if you have five PEs with sources behind them in a single VRF. In summary, it works just fine to have multiple PEs with sources behind them.
In MP2MP you would have only one tree; the trick is that MP2MP trees allow leaves to send traffic upstream. That's basically it!
Cheers
Bruno Novais
Hi Jon
We don't currently support PIM signalling over the core on the NCS55xx for MVPN, hence you wouldn't be able to run BSR over the core network. PIM SM is generally only used when there's a hardware support limitation; I would use SSM mapping where possible to avoid having to run PIM SM all over the network because of a few legacy devices.
As far as compatibility goes, you could technically encapsulate your multicast traffic over both a P2MP and an MP2MP tree, but to be honest I don't see a need for it. MP2MP and P2MP are very similar, and running both tree types rarely makes sense; you're probably better off with one type only.
Cheers!
Bruno Novais
Thanks for the detailed analysis - a definite bookmark!
I notice that the NCS55xx mVPN configuration guide (e.g. https://www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/multicast/65X/b-multicast-cg-ncs5500-65x/b-multicast-cg-ncs5500-65x_chapter_010.html#concept_5291AB7C3FA447C6BBD3FB472322CAB3 ) states that the 55xx platforms only support profiles 6, 7, 8 & 10 as a PE; however, you clearly have profile 14 working here.
I have been successfully using profile 12 on an NCS55A1 in the lab, which is also not supported according to the configuration guide. Do you think the documentation is incorrect or is this a case of "works" != "supported"? Profile 12 support would be extremely useful for a project I'm working on right now.
Hi @FM_PM!
It's just missing from the documentation; from 6.6.3 onwards we support profile 14 on the NCS55xx.
Profile 12 has not been tested, but the main difference is default MDT vs. partitioned MDT, so technically not much difference at all. The difference lies in the platform-independent BGP code advertising the right mLDP tree type.
Note though that it is technically not supported, so would you be willing to move to profile 14? It's probably a better choice anyway, as partitioned MDT is more optimal than default MDT.
Feel free to reach out to me on brusilva@cisco.com and we can chat more!
Cheers!
Bruno Novais
Hi Bruno
Great post! I am trying to configure profile 14 and ran into some problems with using data MDTs. I want each group to have its own data MDT. I could not get this to work, but finally I tried this:
multicast-routing
 vrf MCAST-1
  mdt data 255 immediate-switch
 !
!
This works, and now every time a new group is requested a new MDT and mLDP binding is added. I could not find this in any documentation, so it was just trial and error. Is this the right way of doing it? This also introduced another problem: when a receiver joins a new group, it takes precisely 15 seconds (give or take 50 ms) before the multicast flow starts to stream. When I look at the control plane everything looks right — the BGP advertisement is triggered right away, the mLDP binding is added right away, and the same goes for the entries in the MRIB and MFIB — but the receiver PE does not receive any multicast flow until after 15 seconds. Is this by design? Or are there some timers to change to get this working faster?
@henriksenauit thank you!
So technically mdt data on ncs5500 is not yet supported for profile 14.
I do want to say though: it will work if you do it like this, or by using an RPL specifying which groups you want data MDTs for. The dynamic data MDT switchover is not going to work at all. Note that while it works, you may not get support for it, so use it with care! :)
It is hard to say what's happening here without details, but do open a TAC case if that's a concern and we can certainly look at whether there are optimizations to be made. There shouldn't be any timers to make this faster, but note we can only investigate partitioned-MDT performance, not data MDT yet, as that hasn't officially been made a supported feature, so its performance might not be the greatest yet.
Cheers!
Bruno Novais
Thank you very much for the fast answer!
Do you know if there is a planned release where data MDT will be supported?
Also, if I understand it correctly, without data MDTs won't all PEs in the network receive all multicast groups that have been joined anywhere in the network? E.g. if I have 100 groups and 100 PEs that have each joined a different group, then all 100 PEs will receive all 100 groups even though each has only joined one group?
Hello Bruno,
I'm testing the NCS-540 and profile 14 works fine; however, I'm not able to get inter-AS traffic to work (Option B).
I followed the guidance in the TAC document https://www.cisco.com/c/en/us/support/docs/ip/multicast/200512-Configure-mVPN-Profiles-within-Cisco-IOS.html#anc40
Any suggestions? I don't see any Type 2 AD route on the ASBR.
Also, my core is mainly composed of ASR9Ks, but we still have some 76xx boxes, and my preferred solution would be IR (profile 25). Is this supported on the NCS-540? Any other suggestion for moving low-bandwidth traffic (< 50M) across ASes without making big changes to a currently LDP-only core?
Best Regards,
Maurizio