09-06-2024 12:46 AM
Besides the ESXi hosts and blade chassis, we connect most or all of our servers to the UCS Fabric Interconnects. I believe that was because we want to manage them via UCSM for CIMC access, firmware updates, and service profiles, for ease of management. But which servers should just connect straight to the leaf switches? We have UC/Collab and CyberArk servers that I need to connect, and I thought, why not go straight to the ACI leaf? In what situation would I do that? A quick search cites latency or finer control over micro-segmentation, but that didn't feel like a reliable, real-world answer.
09-09-2024 01:25 AM
Hello @KVS7. Besides latency and vPC support, I'd say you can avoid a single point of failure, and you might want to keep infrastructure services separated.
09-09-2024 05:15 AM - edited 09-09-2024 05:27 AM
Thanks for responding. What do you mean by a single point of failure? When a server is connected straight to both leaf switches, you still have redundancy: a LAG if you're using ESXi, and you can still create a vPC via the APIC when the server is directly connected to the leaf switches.
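For reference, a vPC for a directly attached server can be defined through the APIC as REST XML along these lines: an explicit vPC protection group pairing the two leaves, plus a vPC interface policy group (`lagT="node"` is what makes it a vPC rather than a regular port channel). The names and node IDs here (`CyberArk-vPC`, `LACP-Active`, leaves 101/102) are just placeholders for illustration:

```xml
<polUni>
  <fabricInst>
    <!-- vPC domain: pair leaf 101 and leaf 102 as explicit vPC peers -->
    <fabricProtPol>
      <fabricExplicitGEp name="vpc-101-102" id="10">
        <fabricNodePEp id="101"/>
        <fabricNodePEp id="102"/>
      </fabricExplicitGEp>
    </fabricProtPol>
  </fabricInst>
  <infraInfra>
    <infraFuncP>
      <!-- vPC interface policy group; lagT="node" = vPC, lagT="link" = port channel -->
      <infraAccBndlGrp name="CyberArk-vPC" lagT="node">
        <infraRsLacpPol tnLacpLagPolName="LACP-Active"/>
      </infraAccBndlGrp>
    </infraFuncP>
  </infraInfra>
</polUni>
```

This is only a sketch of the relevant objects; in a real fabric you'd still associate the policy group with the server-facing ports via an interface profile/selector and bind an AAEP, which I've omitted here.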
09-09-2024 06:01 AM
If infrastructure services are needed by other systems in the DC that are not in the UCS domain, connecting those services behind the Fabric Interconnect leads to fate sharing.
For example, if you host your AAA server behind the Fabric Interconnect and for some reason you lose connectivity to the Fabric Interconnect, you will no longer be able to log in to the ACI fabric or any other systems that use that AAA server.
Hope this helps.
09-09-2024 07:26 AM
Oh I see, it's like adding an additional point of failure.
09-09-2024 09:24 AM
Yes, so you don't lose everything in case of a failure. Hope this is helpful.