Hello Community,
I need best-practice assistance. We have a small datacenter with a pair of Nexus N5K-C5548P switches in a vPC pair, along with two Nexus 2K FEXs. All our access switches are dual-homed to the 5Ks, and our servers connect to either the 5Ks or the 2Ks (fiber and copper Ethernet).
We are replacing our core switches (we use a collapsed-core design) with a pair of C9500-48Y4C switches, which I currently have set up as a StackWise Virtual (SVL) pair over four 40GbE links, and replacing the 2Ks with a pair of C9200L-48PXG-4X switches.
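For context, the SVL setup on the 9500s looks roughly like this (domain number and interface names are illustrative placeholders, not my exact config):

```
! On each C9500 (a reload is required after enabling StackWise Virtual)
stackwise-virtual
 domain 100
!
! The four 40G uplinks used as SVL members (interface names illustrative)
interface range HundredGigE1/0/49 - 52
 stackwise-virtual link 1
!
! Recommended: dual-active detection on a separate spare link
interface TwentyFiveGigE1/0/48
 stackwise-virtual dual-active-detection
```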
The C9200L-48PXG-4X switches are also capable of stacking, using the StackWise cables on the back of the switches. Should I stack these switches as well and then create port channels up to the core?
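If I did stack them, I assume the uplink would be a cross-stack LACP port channel, with one uplink per stack member landing on each SVL member at the core. A sketch of what I have in mind (interface and port-channel numbers are made up):

```
! On the 9200L stack: one uplink from each stack member in the bundle
interface range TenGigabitEthernet1/1/1, TenGigabitEthernet2/1/1
 channel-group 10 mode active
!
interface Port-channel10
 description Uplink to C9500 SVL core
 switchport mode trunk
!
! On the 9500 SVL pair, a matching port channel would span both
! physical chassis, since the pair is one logical switch.
```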
What best practice would the community recommend for migrating from the Nexus 5Ks to the C9500s/C9200Ls as our core datacenter switches?
I realize I may not have included enough info here, so I can certainly provide whatever further information is needed.
I have attached two network diagrams (our current design and the proposed design) as well as the running configs. We are using the typical SVIs on the 5Ks with HSRP, ip pim sparse-mode, EIGRP, and a handful of static routes. Pretty basic L3 switch settings, I believe, but I want to know best practice for using our new 9500s and 9200Ls, and a good migration strategy.
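To illustrate, here is the general shape of one of our 5K SVIs today and what I believe the IOS-XE equivalent would look like on the 9500s. The VLAN, addresses, and AS number are made-up examples, and since the SVL pair acts as a single logical switch, I'm assuming HSRP may not even be needed on the new core:

```
! Current NX-OS SVI on the 5Ks (example values)
interface Vlan10
  no shutdown
  ip address 10.1.10.2/24
  ip pim sparse-mode
  ip router eigrp 100
  hsrp 10
    preempt
    priority 110
    ip 10.1.10.1

! Rough IOS-XE equivalent on the C9500 SVL pair; with one logical
! switch, the SVI could simply own the old HSRP VIP directly
interface Vlan10
 ip address 10.1.10.1 255.255.255.0
 ip pim sparse-mode
!
router eigrp 100
 network 10.1.10.0 0.0.0.255
```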
Thank you,
Eric