
SDA Multisite Underlay

packet2020
Level 1

Hi All,

I'm currently looking at another multisite SD-Access design for a small number of sites (~10) that all connect to a single hub/core site using metro-E WAN circuits. I'm looking to use SD-Access transit in this design, with a "hub" fabric site that will provide external fabric connectivity using IP transit. So all sites will be added to the SD-Access transit, with just the hub fabric site providing IP transit. The MTU etc. is suitable to support this.

My question is about best practices/recommendations for establishing the underlay for all of these sites. So far I have come up with three options:


1) Each fabric site will have their own local IGP (IS-IS/OSPF) for the underlay, with the fabric borders for each site connecting to the hub site using BGP. BGP advertises the fabric border loopbacks for each site (which is required for SD-Access transit communication) as well as a summary for each site's underlay (and the AP pools if the WLCs are hosted centrally). A rough sketch of this option follows the list below.

2) Each fabric site will have their own local IGP (again IS-IS/OSPF) for the underlay, with the fabric borders for each site connecting to the hub also using an IGP. This would be a different IGP or process to keep the internal and external underlay networks separate (so IS-IS for the site's local underlay and OSPF on the fabric borders and hub).

3) Use the same IGP for all sites, using a single area or multiple areas. For example, the fabric borders and hub will be in OSPF area 0, and the IGP for each site's local underlay will also be in OSPF area 0, or in separate areas (area 1 for site 1's underlay, area 2 for site 2's, etc.).
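For context, I'm picturing option 1 on a spoke site's border node looking roughly like this (all AS numbers, addresses and names are made-up placeholders, and the config CatC actually generates will differ):

! Spoke-site border node - option 1: local IS-IS underlay, eBGP handoff towards the hub
router isis
 net 49.0001.0100.0125.5001.00
 metric-style wide
!
ip route 10.1.0.0 255.255.0.0 Null0           ! anchor route for the underlay summary
!
router bgp 65001
 neighbor 192.0.2.0 remote-as 65000           ! hub/core WAN peer
 address-family ipv4
  network 10.1.255.1 mask 255.255.255.255     ! this BN's loopback0 (RLOC), needed for SDA transit
  network 10.1.0.0 mask 255.255.0.0           ! site underlay summary
  redistribute lisp metric 10                 ! INFRA_VN / AP pool prefixes (CatC automates this bit)
  neighbor 192.0.2.0 activate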

Thanks

12 Replies

I'm aware of that, as I posted it originally, but the associated project got delayed. I should have referenced that post and clearly stated that I'm after any updated best practices or practical examples/feedback from the community.

Have you implemented any multisite deployments, and if so, did you opt to go with a BGP handoff in the underlay or just use OSPF/IS-IS? There are lots of design documents on cisco.com and at Cisco Live that provide details of multisite deployments, but nothing that details the options and considerations for the underlay with pros/cons.

I did, with a case similar to yours. I used a single L2 IS-IS underlay, though the SDA-transit links were manually configured to summarize non-RLOC prefixes across the transit.

Thanks. This is where my understanding is lacking slightly. So if all of your fabric sites were part of a single L2 IS-IS underlay, how do INFRA_VN subnets get advertised outside of the fabric, such as the IP pools for APs? Typically these get advertised into BGP on the BNs in the GRT, but if you don't have BGP peering out anywhere, how are the AP subnets reached from outside the fabric?

Though INFRA_VN is merged with the GRT in the fabric, its prefixes don't populate the underlay IGP. Instead, their endpoints are LISP EIDs, like below:
router lisp
 instance-id 4097
  remote-rloc-probe on-route-change
  dynamic-eid AP-IPV4
   database-mapping 10.X.Y.128/25 locator-set <rloc-set>   ! AP pool registered as a dynamic EID
   exit-dynamic-eid
  service ipv4
   eid-table default                                       ! INFRA_VN maps to the GRT
On the BN, LISP gets redistributed into ipv4 unicast for the GRT (this is done automatically by CatC when BGP is used).
For IS-IS it's not done automatically.

OK, so if we used a single L2 IS-IS underlay, we would need to manually configure the BNs to redistribute LISP into IS-IS to allow the AP pools to be reachable outside of the fabric (which is needed for DHCP and if the WLCs are located centrally)?

Correct. But be aware that with either of the non-automated options (the last two), you'll need to do a lot more manual configuration.
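For example, something along these lines on the BN. This is only a sketch: whether redistribute lisp is accepted under the IS-IS process depends on platform/release, so verify it in a lab first, and the metric is just a placeholder.

! Border node - manually push LISP-learned prefixes (e.g. the INFRA_VN AP pool) into the shared IS-IS underlay
router isis
 metric-style wide                   ! wide metrics, as deployed by LAN automation
 redistribute lisp metric 10         ! confirm this keyword is supported on your release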

jedolphi
Cisco Employee


Hi,

The challenge with option 1 (not insurmountable, but easier if avoided) is that all sites connected to SDA Transit will originate LISP registrations for all other sites into BGP. In this case, it means all sites will originate all other sites' INFRA_VN routes in BGP. This complicates the WAN BGP routing table, which then needs to be decluttered with manual route filters.

The challenge with option 3 is that we don’t need Site 1 loopbacks to be known in Sites 2–10, which will happen automatically if it’s all one OSPF domain. It’s not a problem if there are only a small number of loopbacks and p2p links, but if it becomes thousands of /32s and /31s, then smaller fabric switches might become overwhelmed. You could make the non-zero areas totally stubby, but then later, if you want to inject an external route into an area, the totally stubby design would be a blocker.

Personally, I like option 2. To answer your second question: if only your hub site BNs are running a BGP IP Transit, then all LISP prefixes from all VNs at all sites (including INFRA_VN) will automatically be redistributed from LISP to BGP. The challenge will be the non-LISP prefixes, e.g., the loopback0 range from each site. I’d do as Andrii suggested and add summary routes for loopbacks into OSPF at each spoke site and then inject those summary routes into BGP at the hub site. BGP network statements at the hub site should do the trick.
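For example (prefixes made up, with 10.1.255.0/24 standing in for a spoke site's loopback0 range, and assuming the loopbacks are redistributed from the site's local IGP into OSPF at the spoke BN):

! Spoke-site border node - summarise the site's loopback0 range towards the hub
router ospf 1
 summary-address 10.1.255.0 255.255.255.0     ! external summary of the redistributed loopbacks
!
! Hub-site border node - inject the spoke summary into the BGP IP transit
router bgp 65000
 address-family ipv4
  network 10.1.255.0 mask 255.255.255.0       ! matches the OSPF-learned summary in the RIB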

HTH,
Jerome

Thanks for the response @jedolphi - So if we went with option 2, communication to/from the AP subnets at each of the spoke sites will be carried through LISP/SD-Access transit, entering and exiting through the hub site's BGP IP transit, and not natively in the underlay via the spoke sites' WAN links. Is that correct? Also what is the use case to inject the non-LISP prefixes (loopback0 ranges) into BGP at the hub site? I'm not quite following the need to do that. Thanks

" if we went with option 2, communication to/from the AP subnets at each of the spoke sites will be carried through LISP/SD-Access transit"
Communication to/from the AP subnets (i.e., the AP endpoints themselves) will be carried via VXLAN across the SDA transit; LISP is for signalling/control plane only.
"Also what is the use case to inject the non-LISP prefixes (loopback0 ranges) into BGP at the hub site? I'm not quite following the need to do that."
I assume you're asking about the RLOC IPs. They are injected into BGP with a network statement if you have the RLOC IPs in a contiguous subnet; otherwise there will be many /32 network statements. This is what CatC will automate for you, but you can aggregate the RLOCs with a less specific subnet if your underlay IP plan allows for it.
UPD: why is it needed? So that the GRT behind the BGP peer on the IP transit knows how to reach the BNs from the outside, at the very least :0)
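e.g. something like this on the hub BN if the RLOCs fit in one block (addresses are made up):

! One aggregate for the RLOC range instead of many /32 network statements
ip route 10.1.255.0 255.255.255.0 Null0       ! discard route to anchor the aggregate
router bgp 65000
 address-family ipv4
  network 10.1.255.0 mask 255.255.255.0
  ! per-/32 alternative (what CatC automates on an IP-transit BN):
  ! network 10.1.255.1 mask 255.255.255.255
  ! network 10.1.255.2 mask 255.255.255.255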

Thanks

I've tested option 2 with IS-IS and OSPF on a spoke site's border nodes, and I had to redistribute IS-IS into OSPF and then configure a summary in OSPF to advertise a summary of the spoke site's loopbacks to the core. I also had to redistribute OSPF back into IS-IS to advertise shared-services subnets to the spoke site's fabric edge nodes (the WLC subnet and /32s for multisite remote border nodes that we are planning on adding at a later phase). I had to put in extra controls to prevent issues with the mutual/two-way redistribution between IS-IS and OSPF. This worked OK but required a bit of manual config.
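The kind of loop-prevention config involved looks roughly like this (tag values and the summary prefix are placeholders, and it assumes wide metrics on IS-IS so route tags are carried):

! Spoke-site border node - two-way IS-IS <-> OSPF redistribution with tag-based filtering
route-map ISIS-TO-OSPF deny 10
 match tag 200                        ! don't re-advertise routes that originally came from OSPF
route-map ISIS-TO-OSPF permit 20
 set tag 100
!
route-map OSPF-TO-ISIS deny 10
 match tag 100                        ! don't re-advertise routes that originally came from IS-IS
route-map OSPF-TO-ISIS permit 20
 set tag 200
!
router ospf 1
 redistribute isis level-2 subnets route-map ISIS-TO-OSPF
 summary-address 10.1.255.0 255.255.255.0     ! summary of the site's loopbacks towards the core
!
router isis
 metric-style wide
 redistribute ospf 1 metric 100 route-map OSPF-TO-ISIS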

I also tested by adding an AP pool to the spoke site, and as stated above, the AP pool was advertised by the hub site's IP transit (using BGP). What was interesting is that traffic to the spoke site's AP pool from outside of the fabric flows in via the hub's IP transit and VXLAN to the spoke site; however, traffic back out from the AP pool flows via the spoke site's WAN link in the underlay (following a default learnt via OSPF). Without the default in OSPF, the AP pool was unreachable. This didn't cause an issue with DHCP or with allowing a test AP to join a central WLC, however it did create an asymmetric flow, which I think, based on the above, was expected. To force AP traffic to flow via the spoke site directly in the underlay, I had to apply a filter to the IP transit as well as redistribute LISP into OSPF on the spoke site's border nodes.
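For anyone else hitting the same thing, the extra bits to make the flow symmetric were along these lines (the AP pool prefix and peer address are placeholders, and redistribute lisp support under OSPF is worth confirming on your release):

! Spoke-site border node - advertise the LISP-learned AP pool into the OSPF underlay
router ospf 1
 redistribute lisp subnets metric 20
!
! Hub-site border node - stop advertising the spoke AP pool out of the IP transit
ip prefix-list IP-TRANSIT-OUT seq 10 deny 10.1.64.128/25    ! spoke site AP pool (placeholder)
ip prefix-list IP-TRANSIT-OUT seq 20 permit 0.0.0.0/0 le 32
router bgp 65000
 address-family ipv4
  neighbor 203.0.113.1 prefix-list IP-TRANSIT-OUT out       ! IP-transit peer (placeholder)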

In contrast, there was a lot less manual config with option 1 using BGP. IS-IS was automatically redistributed into BGP on the border nodes during LAN automation, as was the AP pool once it was provisioned. The only manual config needed was redistributing shared-services routes from BGP into IS-IS in the underlay and applying route filters on the core side of the BGP peers to only allow each site's respective border loopbacks, underlay summary and AP pool.
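The shape of the core-side filter was something like this (all prefixes are placeholders for a site's border loopbacks, underlay summary and AP pool):

! Hub/core WAN router - only accept the expected prefixes from a site's border nodes
ip prefix-list SITE1-IN seq 10 permit 10.1.255.1/32         ! site 1 BN1 loopback0
ip prefix-list SITE1-IN seq 20 permit 10.1.255.2/32         ! site 1 BN2 loopback0
ip prefix-list SITE1-IN seq 30 permit 10.1.0.0/16           ! site 1 underlay summary
ip prefix-list SITE1-IN seq 40 permit 10.1.64.128/25        ! site 1 AP pool
!
router bgp 65000
 neighbor 192.0.2.1 remote-as 65001                         ! site 1 border node
 address-family ipv4
  neighbor 192.0.2.1 activate
  neighbor 192.0.2.1 prefix-list SITE1-IN in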

Sounds good, though I don't quite see where OSPF comes into play. I assumed we were always talking about a scenario with an IGP between the spoke fabric and the aggregating BN in the underlay, and eBGP between the aggregating BN and the non-LISP network, so there is always only one IGP protocol in use.
Nevertheless, good to know you managed to do what you needed.