What can we connect to Spine switches?

Steven Williams
Level 4

It seems that in early versions of ACI, spines connected to leaf switches only, but I am reading that this is no longer the case. Can spine switches connect out to the traditional network and basically take the place of border leafs? I am not quite understanding this article: https://www.mvankleij.nl/post/aci_topology_hardware/

 

 

10 Replies

Steven Williams
Level 4

Ok, so spines connect to an IPN, which is strictly for Multi-Pod/Multi-Site ACI deployments. I have also read that only OSPF is supported from the IPN switches across sites. Not sure why only OSPF; seems limiting. It also seems like multicast is a big player in Multi-Pod deployments. I get that the role of the spine is to route leaf-to-leaf traffic, but you would think the spines would be the entry and exit point to the traditional network, not border leafs. Just like back in the day: 2Ks connected to 5Ks, 5Ks connected to 7Ks, and 7Ks connected to campus cores like 6500s.

Spines can connect to routers when using the GOLF feature, and they also connect to routers for the inter-pod/inter-site network in both Multi-Pod and Multi-Site. Spines can also connect directly to another site's spines in the case of Multi-Site (back-to-back).

OSPF is fairly simple to configure, and in this case we do not need anything more complicated, so why choose something more complex? The MP-BGP of the underlay is all configured for you. PIM is only required in the Multi-Pod case; it is not required when configuring Multi-Site.

So can they connect to your traditional network and forgo border leafs? I feel like the answer is no, but kind of yes. I assume they will connect to your traditional network at the WAN aggregation layer for Multi-Site/Multi-Pod, but cannot connect to something like your campus core.

Hi @Steven Williams

 

It's not uncommon to look at it that way, but I like to tell my clients to look at ACI as a big 6500 switch: you have line cards (leaf switches in ACI) and you have a backplane (spine switches in ACI). It's not a traditional network, it's a fabric, right? Your external connections are on the line cards (they don't tap into the backplane directly). Why is it different for Multi-Site? I like to think of Multi-Site as connecting two backplanes together.

As far as OSPF, you are not the first to think that. I always figured it has to do with supporting multi-vendor networks with a mature, non-proprietary routing protocol (yes, EIGRP is "open" now, but it has limitations in ACI and may not be supported if you are coming out of your ACI fabric to another vendor's router).

mvknl
Level 1

Hi Steven,

 

It's nice to see my own blog being referenced here. I hope you like it (even though the referenced article caused some confusion).

 

As to connecting things to the spines, there are currently four use cases (that I'm aware of). Except for remote leafs, these have all been mentioned in this topic:

 

- Multi-Pod

- Multi-Site (Including connections into the cloud for ACI anywhere deployments)

- Remote (physical) Leaf

- GOLF

 

 

Multi-Pod is used to connect two (or more) 'datacenters' to each other where both datacenters are part of the same fabric. This means that both datacenters will be managed by a single APIC cluster. To connect these datacenters together, we need an Inter-Pod Network (IPN). This network uses OSPF, PIM BIDIR, and DHCP relay to enable communication between the two datacenters. The IPN switches are directly connected to the spine switches.
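To make those IPN requirements concrete, here is a minimal NX-OS-style sketch of one IPN interface facing a spine. This is only an illustration: all names, addresses, and the RP address are hypothetical, and real designs typically use a phantom-RP scheme for the BIDIR RP (see Cisco's Multi-Pod white paper for the full picture).

```
! IPN switch, spine-facing side (sketch only; IPs/names hypothetical)
feature ospf
feature pim
feature dhcp

router ospf IPN
  router-id 198.51.100.1

! BIDIR RP for the fabric multicast (GIPo) range
ip pim rp-address 192.0.2.1 group-list 225.0.0.0/15 bidir

interface Ethernet1/1.4          ! sub-interface toward the spine
  encapsulation dot1q 4          ! ACI requires VLAN 4 here
  mtu 9150                       ! must carry VXLAN-encapsulated traffic
  ip address 203.0.113.1/30
  ip router ospf IPN area 0.0.0.0
  ip pim sparse-mode
  ip dhcp relay address 10.0.0.1 ! APIC address, used for pod discovery
```

The DHCP relay statements are what let spines and leafs in the second pod discover the APIC cluster across the IPN.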

 

Multi-Site is also used to connect two (or more) datacenters to each other. The big difference between Multi-Pod and Multi-Site is that a Multi-Site deployment is in fact two fabrics connected to each other. These fabrics are each managed by their own APIC cluster, and on top of these APIC clusters sits the Multi-Site Orchestrator. To connect sites together in a Multi-Site deployment you also need a network (this time called the Inter-Site Network, or ISN). You can also use back-to-back connections, in which you connect the spines in site 1 directly to the spines in site 2. The ISN has fewer requirements than the IPN, as it doesn't require PIM BIDIR or DHCP relay (but it still requires OSPF).
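By contrast, an ISN-facing interface only needs the OSPF piece. A minimal sketch (names and addresses hypothetical), with no PIM or DHCP relay configuration at all:

```
! ISN switch, spine-facing side (sketch only; IPs/names hypothetical)
feature ospf

router ospf ISN
  router-id 198.51.100.2

interface Ethernet1/1.4        ! sub-interface toward the spine, VLAN 4
  encapsulation dot1q 4
  mtu 9150
  ip address 203.0.113.5/30
  ip router ospf ISN area 0.0.0.0
```

The leaner requirements are why the ISN is generally easier to carry over an existing routed backbone than an IPN.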

 

Remote leafs enable you to extend your fabric into a satellite site where you don't want to deploy spine switches. The leaf switches in that case are connected to an IP network which is also connected to the spines in the actual datacenter. This IP network is again connected to the spines (and remote leaf switches) using OSPF, and it uses DHCP relay for the discovery of the leafs. Multicast isn't used in this case, so there is no requirement for PIM BIDIR.

 

GOLF is something else entirely. It is a scalability feature which enables you to create a single BGP session for all tenants and VRFs in the fabric. The VRFs in this case are extended into the regular network. The first image on this page shows it more clearly than I ever could in text: https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/2-x/L3_config/b_Cisco_APIC_Layer_3_Configuration_Guide/b_Cisco_APIC_Layer_3_Configuration_Guide_chapter_010010.html

In short, when using GOLF you could in fact forego border leafs.
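To illustrate what "a single BGP session for all tenants and VRFs" looks like from the WAN router's side: instead of one per-VRF peering per border leaf, the DCI/WAN router runs one MP-BGP session carrying the L2VPN EVPN address family toward the spines. A sketch only; the ASN and address below are hypothetical, and the actual GOLF router configuration has more to it:

```
! WAN/DCI router facing the spines (sketch only; ASN/IP hypothetical)
router bgp 65001
  neighbor 203.0.113.9
    remote-as 65001
    address-family l2vpn evpn
      send-community extended
```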

 

As to OSPF, I don't know why it was chosen here, but both Multi-Pod and Multi-Site have some odd design choices. For both, the IPN/ISN requires the use of VLAN 4 for the OSPF connections, and this can't be changed.

 

Please let me know if you have other questions and whether there is anything I can clarify on my website :-)

There was a portion in your blog that stated: "There are some requirements to the IPN, which we will cover in detail later. For now, remember that the IPN must support OSPF and PIM BIDIR."

Did you ever finish this post or explain it in more detail?

I would think border leafs are a slight bottleneck when you look at the horsepower of leafs vs. spines, so it would appeal to me to move all ingress/egress through the spines to the traditional network. I assume others have felt this way as well.

Hi Steven,


@Steven Williams wrote:
... so it would appeal to me to move all ingress/egress through the spines to the traditional network. I assume others have felt this way as well.

Unless you plan on using GOLF and running BGP EVPN on your WAN routers, the spines cannot be used to interconnect with your traditional network. You can use spines only for interconnecting:

  •  two or more ACI Pods over IPN (inter pod network)
  •  two or more ACI Sites over ISN (inter site network)
  •  local ACI Pod with a remote leaf over IPN
  •  local ACI Site with public cloud

Cheers,

Sergiu

It's still a work in progress. These posts are pre-published parts of a much larger series whose ultimate goal is to help people pass the DCACI exam. That is a lot of work and not yet done, but I'm still working on it.

 

As to whether leafs are a bottleneck for L3Outs: keep in mind that both leafs and spines are line-rate switches. They are capable of transporting far more data than most L3Outs will ever require. Furthermore, leafs are designed to have more policy applied on them. It may look like spines have more horsepower, but they have different goals to achieve.

Very well put @mvknl

nice one @mvknl 
