01-22-2024 08:18 AM
Hello,
I have a couple of questions about moving to ACI for a greenfield deployment. For background, our environment is small in comparison to others: a single data center that houses approx. 340 VMs, plus several Azure environments totaling around 80 VMs. We run a lot of in-house custom applications, some off-the-shelf apps, and SaaS applications.

We are in the early stages of a redesign, with high-level design sessions to determine the path we want to take (traditional data center design or ACI). The problem is that none of our application flows are documented, so no one can say with certainty how these apps work or what they need to communicate with. Since ACI requires EPGs and contracts before EPGs can communicate with each other, I suspect this would be problematic in our environment. I want to make sure we choose the design that best meets our needs and provides optimal traffic flow while wasting as few hardware resources as possible. Lastly, for context on justifying the best path forward, we have already purchased hardware for this redesign: 4 Nexus 93108TC-FX3P, 2 Nexus 93180YC-FX3, and 2 Nexus 93600CD-GX switches. So, my questions are as follows:
1. How critical is understanding our application flows to deploying a successful ACI environment? And would it be a deal-breaker if we cannot identify the flows before we're ready to make a decision on the best path forward?
2. It's my understanding that ACI has matured since its inception, but how does one determine whether ACI is right for their environment? What questions should I ask to make sure we're not setting ourselves up for failure, or for a horrible and frustrating experience from a deployment and support standpoint?
Thanks!
Terence
01-25-2024 07:16 AM
Several insights from our ACI journey:
1. Greenfield. Use L3 to connect the ACI fabric to your existing/legacy DC. Never use L2.
2. You likely won't know the applications in great detail, which is totally fine: just host each application in its own EPG, like EPG-AD, EPG-DNS, EPG-NFS, EPG-HR, etc. Do not worry about creating many EPGs, but do ensure there won't be much traffic between EPGs.
3. For common-service EPGs (EPG-AD, EPG-DNS, EPG-NFS, etc.), leverage vzAny contracts.
4. Leverage VMM integration
5. Leverage automation (Ansible, Postman) to manage ACI, and leave the GUI for viewing and show commands.
6. The experience and workflow will be much smoother. The overall administration overhead is greatly reduced and the error margin is much smaller than before, but the damage domain becomes bigger, since you are effectively dealing with one single switch.
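To make points 2–5 a bit more concrete: the EPG-per-application and vzAny suggestions map onto a handful of APIC managed objects that you can push over the REST API from a script instead of the GUI. This is only a minimal sketch of building those JSON payloads; the tenant, application profile, bridge domain, and contract names (Prod, AP-Core, BD-Servers, C-Shared) are hypothetical placeholders, and the actual HTTP POSTs to your APIC are left out.

```python
import json

def login_payload(user: str, password: str) -> dict:
    """Body POSTed to /api/aaaLogin.json to obtain an APIC session token."""
    return {"aaaUser": {"attributes": {"name": user, "pwd": password}}}

def epg_payload(epg: str, bridge_domain: str) -> dict:
    """Managed-object tree creating one EPG bound to a bridge domain.
    POST under the application profile, e.g.
    /api/mo/uni/tn-Prod/ap-AP-Core.json (names are placeholders)."""
    return {
        "fvAEPg": {
            "attributes": {"name": epg},
            "children": [
                {"fvRsBd": {"attributes": {"tnFvBDName": bridge_domain}}}
            ],
        }
    }

def vzany_consume_payload(contract: str) -> dict:
    """Attach a contract as consumed by vzAny (i.e., every EPG in the VRF).
    POST to /api/mo/uni/tn-Prod/ctx-VRF1/any.json (names are placeholders);
    the provider side uses vzRsAnyToProv on the service EPG's contract."""
    return {"vzRsAnyToCons": {"attributes": {"tnVzBrCPName": contract}}}

# One EPG per roughly-known application, as suggested in point 2.
for name in ["EPG-AD", "EPG-DNS", "EPG-NFS", "EPG-HR"]:
    print(json.dumps(epg_payload(name, "BD-Servers")))
print(json.dumps(vzany_consume_payload("C-Shared")))
```

The same objects can be driven from the `cisco.aci` Ansible collection instead of raw REST; either way, keeping these definitions in version-controlled files is what makes the "leave the GUI for the show" workflow practical.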