04-14-2020 11:18 AM
It seems that in early versions of ACI, spine switches connected to leaf switches only, but I am reading that this is no longer the case. Can spine switches connect out to the traditional network and basically take the place of border leafs? I am not quite understanding this article: https://www.mvankleij.nl/post/aci_topology_hardware/
04-14-2020 11:35 AM
OK, so spines connect to an IPN, which is strictly for multi-pod ACI deployments. I have also read that only OSPF is supported from the IPN switches across pods and sites. Not sure why only OSPF; it seems like an odd choice... Also it seems like multicast is a big player in multi-pod deployments. I get that the role of the spine is to route leaf-to-leaf traffic, but you would think the spines were the entry and exit point to the traditional network, not the border leafs. Just like back in the day, 2Ks would connect to 5Ks, 5Ks connected to 7Ks, and 7Ks connected to campus cores like 6500s.
04-14-2020 12:00 PM
It's not uncommon to look at it that way, but I like to tell my clients to look at ACI as one big 6500 switch: you have line cards (leaf switches in ACI) and you have a backplane (spine switches in ACI). It's not a traditional network, it's a fabric, right? Your external connections terminate on the line cards; they don't tap into the backplane directly. Why would it be different for Multi-Site? I like to think of Multi-Site as connecting two backplanes together.
As far as OSPF goes, you are not the first to think that. I always figured it has to do with supporting multi-vendor networks with a mature, non-proprietary routing protocol (yes, EIGRP is "open" now, but it has limitations in ACI), and EIGRP may not be supported if you are coming out of your ACI fabric into another vendor's router.
04-14-2020 01:04 PM
Hi Steven,
It's nice to see my own blog being referenced here. I hope you like it (even though the referenced article caused some confusion).
As to connecting stuff to the spines, currently there are four use cases (that I'm aware of). Except for remote leafs, these use cases have all been mentioned in this topic:
- Multi-Pod
- Multi-Site (Including connections into the cloud for ACI anywhere deployments)
- Remote (physical) Leaf
- GOLF
Multi-Pod is used to connect two (or more) 'datacenters' to each other, where both datacenters are part of the same fabric. This means that both datacenters will be managed by a single APIC cluster. To connect these datacenters together, you need an Inter-Pod Network (IPN). This network uses OSPF, PIM BIDIR and DHCP relay to enable communication between the pods. The IPN switches are directly connected to the spine switches.
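To make those IPN requirements concrete, here is a minimal, hypothetical NX-OS-style sketch of an IPN node facing a spine. Every name and address below (OSPF process, RP address, APIC relay target) is a made-up placeholder, and the exact syntax will vary by platform and version; it is only meant to show where OSPF, PIM BIDIR and DHCP relay come into play:

feature ospf
feature pim
feature dhcp
!
! Phantom RP for PIM BIDIR; the group range must cover the fabric's
! GIPo range (225.0.0.0/15 by default). The RP address is a placeholder.
ip pim rp-address 10.255.255.1 group-list 225.0.0.0/15 bidir
!
router ospf IPN
  router-id 10.0.0.254
!
interface Ethernet1/1
  no switchport
  mtu 9150
!
! The spine-facing OSPF adjacency rides a sub-interface on VLAN 4
interface Ethernet1/1.4
  encapsulation dot1q 4
  ip address 10.0.0.1/30
  ip router ospf IPN area 0.0.0.0
  ip pim sparse-mode
  ! Relay towards a (placeholder) APIC address so new pods can be discovered
  ip dhcp relay address 10.1.1.1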
Multi-Site is also used to connect two (or more) datacenters to each other. The big difference between Multi-Pod and Multi-Site is that a Multi-Site deployment is in fact two (or more) separate fabrics connected to each other. These fabrics are each managed by their own APIC cluster, and on top of these APIC clusters sits the Multi-Site Orchestrator. To connect sites together in a Multi-Site deployment you also need a network, this time called the Inter-Site Network (ISN). Alternatively, you can use back-to-back connections, in which you connect the spines in site 1 directly to the spines in site 2. The ISN has fewer requirements than the IPN, as it doesn't require PIM BIDIR or DHCP relay (but it still requires OSPF).
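By contrast, a hypothetical ISN node is simpler: the same VLAN 4 sub-interface and OSPF, but no PIM BIDIR and no DHCP relay. Again, all names and addresses in this sketch are invented:

feature ospf
!
router ospf ISN
  router-id 10.0.1.254
!
interface Ethernet1/1
  no switchport
  mtu 9150
!
! OSPF to the spine still has to ride a VLAN 4 sub-interface
interface Ethernet1/1.4
  encapsulation dot1q 4
  ip address 10.0.1.1/30
  ip router ospf ISN area 0.0.0.0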
Remote leafs enable you to extend your fabric into a satellite site where you don't want to deploy spine switches. The leaf switches in that case are connected to an IP network, which in turn is connected to the spines in the actual datacenter. This IP network again connects to the spines (and remote leaf switches) using OSPF, and it uses DHCP relay for the discovery of the leafs. Multicast isn't used in this case, so there is no requirement for PIM BIDIR.
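For illustration, the router upstream of a remote leaf could look roughly like the sketch below: OSPF for reachability plus DHCP relay towards the APICs for discovery, but no multicast configuration. Names and addresses are placeholders once more:

feature ospf
feature dhcp
!
router ospf RLEAF
  router-id 10.0.2.254
!
! Sub-interface facing the remote leaf: OSPF for reachability,
! DHCP relay so the leaf can be discovered by the APICs in the main pod
interface Ethernet1/2.4
  encapsulation dot1q 4
  ip address 10.0.2.1/30
  ip router ospf RLEAF area 0.0.0.0
  ip dhcp relay address 10.1.1.1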
GOLF is something else entirely. It is a scalability feature that enables you to create a single BGP session for all tenants and VRFs in the fabric. The VRFs in this case are extended into the regular network. The first image on this page shows it more clearly than I could ever explain in text: https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/2-x/L3_config/b_Cisco_APIC_Layer_3_Configuration_Guide/b_Cisco_APIC_Layer_3_Configuration_Guide_chapter_010010.html
In short, when using GOLF you could in fact forego border leafs.
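To give a feel for what "a single BGP session for all tenants and VRFs" means, here is a very rough NX-OS-style sketch of the fabric-facing side of a GOLF router. The ASN, the addresses and the assumption of iBGP are all invented, and a real GOLF deployment involves more than this (OpFlex for automated VRF provisioning, for instance):

feature bgp
nv overlay evpn
!
router bgp 65001
  router-id 10.0.3.254
  ! A single L2VPN EVPN session towards the spines carries the routes
  ! for all tenants/VRFs, instead of one L3Out per VRF on border leafs
  neighbor 10.0.3.1 remote-as 65001
    address-family l2vpn evpn
      send-community extended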
As to OSPF, I don't know why it was chosen here, but both Multi-Pod and Multi-Site have some odd design choices. For both, the IPN/ISN requires the use of VLAN 4 for the OSPF connections, and this can't be changed.
Please let me know if you have other questions, or if there is anything I can clarify on my website :-)
04-14-2020 11:15 PM
Hi Steven,
@Steven Williams wrote:
... so it would appeal to me to move all ingress/egress through the spines to the traditional network. I assume others have felt this way as well.
Unless you plan on using GOLF and having BGP EVPN on your WAN routers, the spines cannot be used to interconnect with your traditional network. You can use spines only for interconnecting:
- pods in a Multi-Pod deployment (via the IPN)
- fabrics in a Multi-Site deployment (via the ISN)
- remote leaf switches
- GOLF devices
Cheers,
Sergiu
04-15-2020 01:44 AM
It's still a work in progress. These posts are pre-published parts of a much larger series whose ultimate goal is to help people pass the DCACI exam. That is a lot of work and it's not done yet, but I'm still working on it.
As to whether leafs are a bottleneck for L3Outs: keep in mind that both leafs and spines are line-rate switches. They are capable of transporting far more data than most L3Outs will ever require. Furthermore, leafs are designed to have more policy applied to them. It may look like spines have more horsepower, but they have different goals to achieve.
03-30-2022 06:41 AM
nice one @mvknl