ACI to be or not to be. NEED ADVICE

D_Lebedev
Level 3

Hi everybody,

 

It's time to upgrade our core network, so I've narrowed my choice down to the Nexus 7K/9K solutions.

 

The Nexus 9K is available in "classic" and ACI modes. Which one would you advise?

Anyway, we're going to use 10G copper links with our server farm, but what practical advantages does ACI bring?

 

Are there any requirements for the server hardware? Sorry if this is a dummy question, but I wasn't able to find the answer myself.

 

Which server farm is preferred to get more advantage from ACI - a virtual farm or physical servers?

 

If possible, I would like to hear the opinion of someone who really uses it.

 

Thank you.

Accepted Solution

RedNectar
VIP

Hi @D_Lebedev,

See answers inline

The Nexus 9K is available in "classic" and ACI modes. Which one would you advise? ... what practical advantages does ACI bring?

As you have probably already realised, the Nexus 9K in either classic or ACI mode gives you a rich set of routing and switching features, and even a powerful API if you want to integrate your hardware with automation/orchestration suites such as OpenStack.

Whether you choose classic or ACI mode, you will probably want to design a data-centre topology that will allow expansion (possibly rapid expansion) in the future.  Current thinking on data centre topologies is to use a Spine and Leaf topology (sometimes referred to as a Clos topology after Charles Clos who wrote a paper about it in 1953). 

Now the problem with Spine and Leaf topologies is that the prevailing L2 topology-defining protocol - i.e. Spanning Tree - does not support ECMP (Equal-Cost Multi-Path) forwarding between the leaves and the spine switches, so no matter how many spine switches you deploy, there will be only one active path between any two leaf switches.

So to deploy ECMP, designers of data centre Spine/Leaf topologies have made use of two technologies, one old and one somewhat newer.  In the case of ACI, these choices are fixed.  If you decide to build your Leaf/Spine topology with regular Nexus 7K/9K or other switches, you will have to choose your own answers and build it yourself.

  1. Use a layer 3 routing protocol to provide ECMP
  2. Encapsulate L2 frames at the Leaf edge and route them across the L3 ECMP topology

To make #1 work, the links between the Leaf and Spine are deployed as point-to-point links at Layer 2, making it possible to simplify the L3 addressing of the Leaf and Spine devices to a single L3 address each, deployed as a loopback address.  Strictly speaking, the L3 protocol does not have to be IPv4 (it could be IPv6, or research FabricPath for a non-IP example), but ACI uses IPv4, and if you were designing your own I imagine you would too.  So, given that you are using IPv4 as the L3 protocol, all you need now is an L3 routing protocol that supports ECMP.  Your choices are BGP, EIGRP, IS-IS, OSPF and RIP.  For ACI, Cisco chose IS-IS.  If you are designing your own, it's up to you.
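To picture what ECMP buys you, here's a toy Python sketch (purely illustrative - not any vendor's hashing algorithm, and the spine names are made up): a hash of a flow's 5-tuple picks one of the equal-cost spine uplinks, so different flows spread across all spines while the packets of any one flow stay on a single path and arrive in order.

# Toy illustration of per-flow ECMP hashing across equal-cost spine uplinks.
# Not any vendor's real algorithm - it just shows how a 5-tuple hash picks
# one of N equal-cost next hops.
import zlib

spines = ["spine-201", "spine-202", "spine-203", "spine-204"]   # 4 equal-cost paths

def ecmp_next_hop(src_ip, dst_ip, proto, src_port, dst_port):
    flow = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return spines[zlib.crc32(flow) % len(spines)]

# Two different flows between the same pair of hosts may take different spines:
print(ecmp_next_hop("10.1.1.10", "10.1.2.20", "tcp", 49152, 443))
print(ecmp_next_hop("10.1.1.10", "10.1.2.20", "tcp", 49153, 443))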

To make #2 work, you need to choose an encapsulation method that supports multiple overlays and can encapsulate L2 frames.  The obvious choice is VXLAN, which is what Cisco chose for ACI, but you could, for example, choose NVGRE.
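If you're curious what the VXLAN encapsulation looks like on the wire, here's a small sketch using Scapy (the addresses and VNI are invented values, and note that ACI internally uses its own extended VXLAN header): the original L2 frame is wrapped in an outer IP/UDP (port 4789) header plus a VXLAN header carrying the 24-bit VNI, and the outer IP addresses belong to the leaf tunnel endpoints, which is what lets the frame be routed across the L3 Spine.

# Sketch of plain VXLAN encapsulation with Scapy (illustrative values only).
from scapy.all import Ether, IP, UDP
from scapy.layers.vxlan import VXLAN

# The original L2 frame from the server (the "inner" frame)
inner = Ether(src="00:50:56:aa:bb:01", dst="00:50:56:aa:bb:02") / \
        IP(src="192.168.10.11", dst="192.168.10.12")

# The leaf (VTEP) wraps it: outer IPs are the tunnel endpoints, UDP/4789, 24-bit VNI
vxlan_frame = Ether() / \
              IP(src="10.0.96.64", dst="10.0.96.66") / \
              UDP(sport=54321, dport=4789) / \
              VXLAN(vni=10001) / \
              inner

vxlan_frame.show()   # dumps outer Ether/IP/UDP/VXLAN, then the inner Ether/IP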

So if you are ready to move into the next generation of data centre topology design - i.e. Leaf/Spine - ACI has some distinct advantages, because it makes the choices of ECMP and encapsulation for you and keeps them completely transparent to the user; you don't have to configure a thing.  If you want to design and build your own Leaf/Spine, you will have to make your own choices and configure it yourself - although if you choose 9Ks you will find that there is still quite a bit of support/help out there.

However, ACI still has one more advantage in this scalable future.

Centralised control.

You do all the configuration of your topology in one place: on the APIC (Application Policy Infrastructure Controller). You no longer have to configure each switch individually. The APIC sees the switches as stateless devices, so should, say, a leaf switch fail, it could be replaced and assigned the same ID as the previous switch, and all configuration would be restored.
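As a small taste of that single point of control: everything the APIC does is exposed through a REST API. The sketch below (with a made-up APIC address and credentials, using the standard aaaLogin and fabricNode names from the APIC REST API) logs in and lists the switches registered in the fabric; the login cookie returned by the APIC is kept by the requests session.

# Minimal sketch: authenticate to the APIC REST API and list the fabric switches.
# The APIC address and credentials are placeholders - adjust for your own fabric.
import requests

APIC = "https://apic.example.com"      # hypothetical APIC address
session = requests.Session()
session.verify = False                 # lab only; use proper certificates in production

# Log in - the APIC returns a token/cookie that the session reuses on later calls
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login).raise_for_status()

# Query the fabricNode class: every leaf, spine and controller the APIC knows about
nodes = session.get(f"{APIC}/api/node/class/fabricNode.json").json()["imdata"]
for node in nodes:
    attrs = node["fabricNode"]["attributes"]
    print(attrs["id"], attrs["role"], attrs["name"])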

 

Are there any requirements for the server hardware? Sorry if this is a dummy question, but I wasn't able to find the answer myself.

Not a dummy question - but no, there are no requirements for server HW.  This indeed is one of ACI's strong advantages over many competing systems.  ACI doesn't care if you use Bare-Metal Hosts, VMware virtualised hosts running on any variety of VMware, or Microsoft virtualised hosts or any other Hypervisor. ACI is host/hypervisor agnostic. However...(see next answer)

Which server farm is preferred to get more advantage from ACI - a virtual farm or physical servers?

...there are distinct advantages if you integrate ACI with your virtual servers.  You can, for example, configure ACI to integrate with VMware or Microsoft so that when a new Application Profile and its associated End Point Groups are created in ACI, appropriate vSwitch configurations are pushed to VMware vCenter or Microsoft System Center (or both at the same time if using both) to make VM assignment to the appropriate security group (End Point Group) seamless.
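To give a flavour of what that integration looks like from the API side, here's a rough sketch (tenant, application profile, EPG and DVS names are all invented, and it assumes an authenticated requests session like the one in the login sketch above - verify the class names against your own APIC's object model): creating an EPG and attaching it to a VMware VMM domain is what triggers the APIC to push the corresponding port group out to vCenter.

# Rough sketch: create an EPG and bind it to a VMware VMM domain via the APIC REST API.
# Assumes the authenticated `session` and `APIC` from the earlier login sketch.
# Tenant/AP/EPG/domain names are hypothetical; check class names on your APIC.
payload = {
    "fvAEPg": {
        "attributes": {"name": "Web-EPG"},
        "children": [
            {"fvRsDomAtt": {"attributes": {
                # Reference to the VMM domain that represents the vCenter DVS "Prod-DVS"
                "tDn": "uni/vmmp-VMware/dom-Prod-DVS"
            }}}
        ]
    }
}

resp = session.post(
    f"{APIC}/api/mo/uni/tn-MyTenant/ap-MyApp/epg-Web-EPG.json",
    json=payload,
)
resp.raise_for_status()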

I hope this helps


Don't forget to mark answers as correct if they solve your problem. This helps others find the correct answer if they search for the same problem.


RedNectar aka Chris Welsh.
Forum Tips: 1. Paste images inline - don't attach. 2. Always mark helpful and correct answers, it helps others find what they need.


5 Replies

D_Lebedev
Level 3

Please share your experience.

jamie-nicol

I'm trying to build/implement an ACI fabric right now.

It's a very steep learning curve and it has some limitations.

Things do not always map simply from legacy to ACI. In fact, every step of the way so far has been painful for me.

 

From what I can see, ACI is not exactly a new technology. It is a smorgasbord of every routing protocol known to man, stitched together (very cleverly!) to form a fabric. The policy and object model is obviously new, but it has clearly been built upon older technologies (BGP, OSPF, VXLAN, IS-IS, etc.). This brings with it some limitations (for example, routes learned on one leaf via OSPF are redistributed across the fabric via BGP, so routing metrics are lost... but with transit routing on the same leaf, the same OSPF process is used, so the metrics are preserved).

 

In my humble opinion, ACI is not mature enough yet. It's also very "heavyweight" (e.g. APICs, Spines, IPNs, 40Gb-only connectivity) and I would say it's definitely not suited for small environments.

 

It does have potential, however; I think the software needs maturing.

RedNectar

Hi @jamie-nicol,

I'm trying to build/implement an ACI fabric right now.

It's a very steep learning curve and it has some limitations.

Indeed it does have a steep learning curve. And it is a Data Centre technology - if you try to use it for other purposes such as a Service Provider backbone, you will find it has some limitations.  Some of the early limitations have been overcome with the newer generation hardware.

Things do not always map simply from legacy to ACI. In fact, every step of the way so far has been painful for me.

Correct, things do not always map easily from legacy to ACI. That is part of the learning curve. The latest release of software (v3.2) has some features (such as black-list filters) that may make it a little easier. And I feel your pain.  I tell people that it took me about three months working with ACI on an almost daily basis before I became comfortable - and I'm still learning new things. 

From what I can see, ACI is not exactly a new technology. It is a smorgasbord of every routing protocol known to man, stitched together (very cleverly!) to form a fabric. The policy and object model is obviously new, but it has clearly been built upon older technologies (BGP, OSPF, VXLAN, IS-IS, etc.).

ACI is based on IS-IS and VXLAN. But you don't have to lift a finger to configure either.  Should you wish your servers to talk to the outside world, ACI needs a way of taking the routes of the outside world and distributing them within the ACI fabric within a VRF context.  There is only one protocol known to man that does this - MP-BGP - so with a simple one-step process (define an ASN and a route reflector) ACI configures all the BGP and MP-BGP elements for you.  All you have to do now is configure the fabric to talk to your routers.  So when you say "it is a smorgasbord of every routing protocol known to man", you are correct if what you mean is that it supports a rich set of external protocols (BGP, EIGRP, OSPF, OSPFv3) - but it doesn't actually support RIP.
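For what it's worth, that "one-step process" boils down to a single policy object on the APIC. The sketch below shows roughly what the REST payload for setting the fabric ASN and nominating a spine as route reflector might look like - the class names (bgpInstPol, bgpAsP, bgpRRP, bgpRRNodePEp) are quoted from memory and the ASN/node ID are only examples, so verify them against your APIC before relying on this. It assumes the authenticated session from the earlier login sketch.

# Rough sketch of the fabric BGP policy: define the ASN and a spine route reflector.
# Class names are from memory - verify against your APIC's object model.
# Assumes the authenticated `session` and `APIC` from the earlier login sketch.
payload = {
    "bgpInstPol": {
        "attributes": {"name": "default"},
        "children": [
            {"bgpAsP": {"attributes": {"asn": "65001"}}},          # fabric ASN (example)
            {"bgpRRP": {"children": [
                {"bgpRRNodePEp": {"attributes": {"id": "201"}}}    # spine node 201 as RR
            ]}}
        ]
    }
}

resp = session.post(f"{APIC}/api/mo/uni/fabric/bgpInstP-default.json", json=payload)
resp.raise_for_status()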

 

This brings with it some limitations (for example, routes learned on one leaf via OSPF are redistributed across the fabric via BGP, so routing metrics are lost... but with transit routing on the same leaf, the same OSPF process is used, so the metrics are preserved).

Correct - this is exactly the same as what would happen if you used transit routing across a service provider's network using MP-BGP.

In my humble opinion, ACI is not mature enough yet. It's also very "heavyweight" (e.g. APICs, Spines, IPNs, 40Gb-only connectivity) and I would say it's definitely not suited for small environments.

See my answer to the original question regarding the leaf/spine topology and the APICs - if your environment is not large enough or not planning to grow and you don't want to take advantage of centralised control, then your observation that ACI is not suited for small networks is correct.

Regarding connectivity - 1G, 10G, 40G and 100G connectivity options are supported.  

It does have potential, however; I think the software needs maturing.


Yes, ACI does have lots of potential - and the product will undoubtedly continue maturing, but it is certainly production ready today.  I'm not sure what more maturation you require!

Hope this clears up some of your confusion.

RedNectar aka Chris Welsh.
Forum Tips: 1. Paste images inline - don't attach. 2. Always mark helpful and correct answers, it helps others find what they need.

jamie-nicol

I tell people that it took me about three months working with ACI on an almost daily basis before I became comfortable - and I'm still learning new things.

I took delivery in March, going into production at the end of this month. I've been growing my nails so I have plenty to chew on as the time gets nearer!

 

ACI is based on IS-IS and VXLAN.

And MP-BGP. And OSPF (for Multipod). All stitched together.

 

you are correct if what you mean is that it supports a rich set of external protocols

No, I meant it uses all of those protocols internally, just to work!

 

Correct - this is exactly the same as what would happen if you used transit routing across a service provider's network using MP-BGP.

You keep mentioning service providers - I'm not a service provider and I don't use one. I'm talking about my company's own data centres.

 

Regarding connectivity - 1G, 10G, 40G and 100G connectivity options are supported.  

No. Multipod requires 40G from Spine to IPN. No other options are available. It's 40G or nothing. The IPN itself can be anything, but from Spine to IPN it's 40G.

 

Yes, ACI does have lots of potential - and the product will undoubtedly continue maturing, but it is certainly production ready today.  I'm not sure what more maturation you require!

I look forward to the maturation; it is being updated regularly.

The maturity I think is missing is mainly around ease of configuration. Multipod was a sod to set up, but it shouldn't be. I need to configure BGP route reflectors? Why? It's a Multipod, route reflectors are required - so do it for me! I need to configure a VLAN and an access policy for Spine interfaces for Multipod? Why? Do it for me! Give me a Multipod wizard.

 

I want to connect a couple of external routers - but I can't add 2 interfaces to the same leaf on the same subnet. Why? And then I need to set up an xyz profile and then an abc profile and then a thingamie profile and tie them all together with a widget profile. I mean jeez.

 

Maybe the answer is "flexibility", but a lot of the ACI configuration seems laborious and unnecessarily complicated, and after 5 months some of it still makes no sense to me whatsoever.
