03-23-2013 06:36 AM - edited 03-01-2019 10:56 AM
So we are having a debate surrounding the network configuration/design for an upcoming UCS implementation across 2 or 3 data centers. I think the count is 6-10 chassis per site (active/active) plus other related devices. I would like to know why, for such a small footprint, we can't plug directly into the 7Ks instead of going with the traditional core (7Ks) / access layer (5Ks) design. For those who have experience with such deployments (UCS/Nexus), what has been the best practice regarding the network design?
Thanks
P.S. Maybe I should post in the networking section...
03-23-2013 06:55 AM
HX,
Hopefully other customers will chime in with their experience, but I'll give you my take. You can definitely go directly into your core. Since the UCS system behaves as an end host, it's really an access layer device itself. I've seen customers go directly into their core when they're using a collapsed core design, or when they don't have scaling concerns (port count) on their core. One other reason some folks go through an intermediate layer before hitting their core is to hit services (firewall, NMS, ANS, etc.) along the way; network services are typically located before the core.
There's no one-size-fits-all design. The best design will depend on what services your DC is offering and what other devices you have deployed. Can you give us a better (more detailed) picture of your DC deployment?
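To make the "directly into the core" option concrete, here's a rough sketch of what the 7K side could look like, with each Fabric Interconnect dual-homed to the 7K pair via a vPC. All interface names, VLAN ranges, IPs, and the vPC domain/ID numbers below are made-up placeholders, not a recommendation for your environment:

```
! On each Nexus 7000 (hypothetical interfaces and IDs)
feature vpc
feature lacp

vpc domain 10
  peer-keepalive destination 192.168.1.2   ! peer 7K's mgmt IP (example)

! One vPC port-channel per Fabric Interconnect
interface port-channel101
  description vPC to UCS Fabric Interconnect A
  switchport mode trunk
  switchport trunk allowed vlan 100-110    ! example server VLANs
  vpc 101

interface Ethernet1/1
  description uplink from FI-A
  channel-group 101 mode active            ! LACP, matches FI port-channel
```

The FIs would stay in their default end-host mode, so from the 7Ks' perspective these links look like server-facing access ports, not another switch layer.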
Regards,
Robert
03-23-2013 07:01 AM
Thanks for the quick reply, Robert. Now that you mention it, the traffic will be hitting a slew of firewalls/security appliances before it reaches the UCS/VMs, so I guess we will need some sort of access layer before we hit the core 7Ks. I am not able to post a detailed design doc for various reasons...