Multipod over MPLS - Prefix Distribution?

dit network
Level 1

In our lab environment we're trying to build ACI multipod over an MPLS network with Nexus 7Ks acting as the IPN routers.  We've run into a number of problems, but the latest is that endpoints in Pod 2 can ping the anycast gateway address of their own bridge domain, but nothing else.

 

My hunch is that we're not distributing prefixes correctly into the multipod VRF context, because 1) there doesn't seem to be any great documentation on which prefixes need to be distributed, and 2) MPLS configuration in NX-OS is pretty new to me.

 

Any help would be greatly appreciated.

 

Here's our current prefix distribution list, along with descriptions of what each entry is in our setup:

 

===================================================
!-----ACI ROUTE PREFIXES FOR REDISTRIBUTION <BEGIN>-----!
!
!POD1 NODE TEP IP ADDRESS POOL!
ip prefix-list aci-multipod-prefix seq 1 permit 10.1.0.0/16
!POD2 NODE TEP IP ADDRESS POOL!
ip prefix-list aci-multipod-prefix seq 2 permit 10.2.0.0/16
!APIC IP ADDRESSES!
ip prefix-list aci-multipod-prefix seq 11 permit 10.1.0.1/32
ip prefix-list aci-multipod-prefix seq 12 permit 10.1.0.2/32
ip prefix-list aci-multipod-prefix seq 13 permit 10.1.0.3/32
!POD1 VTEP ANYCAST IP ADDRESS!
ip prefix-list aci-multipod-prefix seq 101 permit 172.30.6.1/32
!POD1 VTEP L3 NETWORKS!
ip prefix-list aci-multipod-prefix seq 110 permit 172.30.6.4/30
ip prefix-list aci-multipod-prefix seq 120 permit 172.30.6.8/30
ip prefix-list aci-multipod-prefix seq 130 permit 172.30.6.12/30
ip prefix-list aci-multipod-prefix seq 140 permit 172.30.6.16/30
!POD2 VTEP ANYCAST IP ADDRESS!
ip prefix-list aci-multipod-prefix seq 201 permit 172.30.6.129/32
!POD2 VTEP L3 NETWORKS!
ip prefix-list aci-multipod-prefix seq 210 permit 172.30.6.132/30
ip prefix-list aci-multipod-prefix seq 220 permit 172.30.6.136/30
ip prefix-list aci-multipod-prefix seq 230 permit 172.30.6.140/30
ip prefix-list aci-multipod-prefix seq 240 permit 172.30.6.144/30
!IPN AND SPINE MULTIPOD LOOPBACKS!
ip prefix-list aci-multipod-prefix seq 1001 permit 172.30.19.1/32
ip prefix-list aci-multipod-prefix seq 1002 permit 172.30.19.2/32
ip prefix-list aci-multipod-prefix seq 1003 permit 172.30.19.3/32
ip prefix-list aci-multipod-prefix seq 1004 permit 172.30.19.4/32
ip prefix-list aci-multipod-prefix seq 1005 permit 172.30.19.5/32
ip prefix-list aci-multipod-prefix seq 1006 permit 172.30.19.6/32
ip prefix-list aci-multipod-prefix seq 1007 permit 172.30.19.7/32
ip prefix-list aci-multipod-prefix seq 1008 permit 172.30.19.8/32
!MULTICAST PHANTOM RP-THIS IS DIFFERENT BETWEEN POD1 IPN and POD2 IPN
ip prefix-list aci-multipod-prefix seq 10000 permit 172.31.31.136/30  (<--/29 on POD2 IPN)
!
!-----ACI ROUTE PREFIXES FOR REDISTRIBUTION <END>-----!

route-map multipod-route-map permit 10
  match ip address prefix-list aci-multipod-prefix

=================================================================

 

Here's the VRF configuration on the Nexus side:

 

=====================================================

vrf context aci-multipod
  ip pim rp-address 172.31.31.136 group-list 225.0.0.0/8 bidir
  ip pim rp-address 172.31.31.136 group-list 239.0.0.0/8 bidir
  ip pim ssm range 232.0.0.0/8
  rd 1:411
  address-family ipv4 unicast
    route-target import 1:411
    route-target export 1:411

=================================================

 

And finally, our OSPF and BGP routing configuration:

 

=================================================

router ospf 1
  vrf aci-multipod
    router-id 172.30.19.8
    redistribute bgp 65500 route-map multipod-route-map
    log-adjacency-changes
router bgp 65500
  router-id 1.1.2.253
  log-neighbor-changes
  neighbor 1.1.1.254 remote-as 65500
    update-source loopback8
    address-family vpnv4 unicast
      send-community extended
  neighbor 1.1.2.254 remote-as 65500
    update-source loopback8
    address-family vpnv4 unicast
      send-community extended
  vrf aci-multipod
    address-family ipv4 unicast
      redistribute direct route-map multipod-route-map
      redistribute ospf 1 route-map multipod-route-map

=======================================================
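
A quick way to confirm whether the prefixes are actually making it across (a sketch; the VRF name and ASN are the ones from the configs above):

=================================================

! Confirm both VPNv4 sessions are established and exchanging prefixes
show bgp vpnv4 unicast summary

! On the pod2 IPN, verify the pod1 TEP pool and APIC /32s were imported into the VRF
show ip route 10.1.0.0/16 vrf aci-multipod

! Verify the OSPF adjacency toward the local spines is up in the VRF
show ip ospf neighbors vrf aci-multipod

=================================================

If the pod1 TEP pool is missing from the pod2 VRF table, the problem is in the redistribution/route-target chain rather than in the fabric itself.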

6 Replies

dit network
Level 1

Made a slight modification to the multicast configuration:

 

vrf context aci-multipod
  ip pim rp-address 172.31.31.138 group-list 225.0.0.0/8 bidir
  ip pim rp-address 172.31.31.138 group-list 239.0.0.0/8 bidir
  ip pim ssm range 232.0.0.0/8
  rd 1:411
  address-family ipv4 unicast
    route-target import 1:411
    route-target export 1:411

 

=====pod1-IPN=====

interface loopback11
  description ***ACI Phantom RP Supernet Primary***
  vrf member aci-multipod
  ip address 172.31.31.137/30
  ip ospf network point-to-point
  ip router ospf 1 area 0.0.0.0
  ip pim sparse-mode

 

=====pod2-IPN=====

interface loopback11
  description ***ACI Phantom RP Supernet Secondary***
  vrf member aci-multipod
  ip address 172.31.31.137/29
  ip ospf network point-to-point
  ip router ospf 1 area 0.0.0.0
  ip pim sparse-mode
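
For reference, the phantom RP approach relies on longest-prefix match: the RP address (172.31.31.138 here) is deliberately not configured on any device; each IPN advertises a loopback in the RP's subnet, and the router advertising the more specific prefix (/30 on the pod1 IPN vs /29 on the pod2 IPN) attracts the traffic and acts as the active RP. A couple of checks (a sketch using the names above):

=====

! Which device currently owns the best route toward the phantom RP
show ip route 172.31.31.138 vrf aci-multipod

! Confirm the RP is learned as bidir for the 225/8 and 239/8 ranges
show ip pim rp vrf aci-multipod

=====

While the pod1 IPN is up, its 172.31.31.136/30 route is more specific than pod2's /29, so traffic toward .138 follows the pod1 loopback; if pod1 fails, the /29 takes over.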

Hi,

On the Pod-to-Pod MPLS connection, what kind of VPN are you using? L2VPN?

Be aware that an L3VPN must support PIM BiDir in order to provide the necessary flooding mechanisms.
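
To check whether BiDir state is actually being built across the IPN, a sketch (VRF name from the configs above):

=====

! Group-to-RP mappings should show the 225/8 and 239/8 ranges as bidir
show ip pim group-range vrf aci-multipod

! Look for (*,G) entries for the bridge-domain GIPo groups (225.x.x.x by default)
show ip mroute vrf aci-multipod

=====

If BUM traffic isn't flooding between pods, endpoints can typically still reach their own BD anycast gateway but not remote endpoints, which matches the symptom described in the original post.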

 

Cheers.

I have the same problem running ACI multipod over MPLS. Just wondering whether you eventually managed to find the issue and implement this successfully in the live environment. What were the key points for getting this setup to work?

Thanks

Hi @abal,

The ACI IPN is a simple routed network that must provide end-to-end communication.

In its simplest form, it can be just two interconnected devices providing routing between the ACI spine switches in the Pods they are connected to.

The IPN must fulfill the following prerequisites for the ACI Multi-Pod architecture to work properly:

  • OSPF adjacency between the IPN switches and the ACI spine switches they connect to
  • 802.1Q tag 4 (VLAN 4) on the links between the IPN switches and the ACI spine switches
  • Multicast PIM BiDir end-to-end in the IPN
  • Jumbo frames end-to-end in the IPN (preferable; otherwise at least MTU 1600 to accommodate the VXLAN header overhead)
  • DHCP relay enabled on the IPN switches on the interfaces facing the spines, with one entry per APIC TEP address
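
Put together, a typical spine-facing interface on an NX-OS IPN device looks something like the sketch below (interface number, IP addressing, and DHCP relay targets are illustrative, borrowed from the addressing earlier in this thread; `feature dhcp` and `feature pim` must be enabled first, and depending on platform the jumbo MTU may need to be set on the parent interface or system-wide):

=====

interface Ethernet1/1.4
  description ***Link to Pod1 Spine***
  mtu 9150
  encapsulation dot1q 4
  vrf member aci-multipod
  ip address 172.30.6.5/30
  ip ospf network point-to-point
  ip router ospf 1 area 0.0.0.0
  ip pim sparse-mode
  ip dhcp relay address 10.1.0.1
  ip dhcp relay address 10.1.0.2
  ip dhcp relay address 10.1.0.3
  no shutdown

=====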

If you are using an MPLS network as the IPN, you should make sure these prerequisites are fulfilled.

I usually recommend that my customers review the document below when designing an IPN for ACI Multi-Pod:

Cisco Application Centric Infrastructure Multi-Pod Configuration White Paper

I hope this helps!

Has anyone managed to make this work? Can we know what additional changes might be required for the MPLS part?

randreetta
Level 1

"In our lab environment we're trying to build multipod over an MPLS network with Nexus 7k's acting as the IPN routers."

 

... so you're using the 7K as a PE? Where is the MPLS configuration? What are the core interfaces? What is the IGP in the core? Where is LDP? Where is the multicast configuration in the core MPLS network? If you want to transport multicast across an MPLS network (as an ISP does), you will need all of the above, not just a couple of BGP sessions. What do you use those BGP sessions for?
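
As a rough checklist, running the 7Ks as PEs means the whole MPLS toolchain has to be in place before the ACI pieces matter. Something along these lines (a sketch only; feature-set and licensing steps vary by NX-OS release, and the core interface and addressing are illustrative):

=====

! Enable the MPLS feature set and features (Nexus 7000)
install feature-set mpls
feature-set mpls
feature mpls l3vpn
feature mpls ldp

! Core-facing interface: core IGP plus LDP
interface Ethernet2/1
  ip address 192.0.2.1/30
  ip router ospf CORE area 0.0.0.0
  mpls ip
  no shutdown

=====

And on top of the unicast L3VPN, carrying the IPN's PIM BiDir traffic across the core requires a multicast VPN solution; a plain unicast L3VPN will not flood the BUM traffic between pods.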

 
