I'm deploying a setup with a pair of 2232TM-E switches in a single rack (servers have the option to run data connections to both), which will then be uplinked with FET to a pair of 5548UPs near my 7K cores. My confusion comes from how the 2232s are uplinked to the 5548s. If each 2232 has 8x FET uplinks, is it 4x to each 5548, or all 8 to a single 5548? I would have assumed half to each 5548, but I've seen design guides that say each 2232 should only connect to a single upstream 5548?
Both options are valid. You can connect the Nexus 2232 FEX with all eight uplink ports to a single Nexus 5500 (called single-attached or straight-through), or with four ports to each of two Nexus 5500s (called dual-homed, or active-active).
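As a rough sketch of the straight-through option (the FEX number, description, and port ranges below are assumptions, not from your environment), the fabric-side configuration on a single Nexus 5500 looks something like this:

```
! Straight-through: all 8 fabric links from one 5500 to FEX 100
feature fex

fex 100
  description Rack1-FEX100

interface ethernet 1/1-8
  switchport mode fex-fabric
  fex associate 100
  channel-group 100

interface port-channel 100
  switchport mode fex-fabric
  fex associate 100

! Dual-homed (active-active) differs in that 4 fabric links per 5500
! are configured identically on BOTH switches, with "vpc 100" added
! under interface port-channel 100 (requires feature vpc and a
! configured vPC domain with peer-link and peer-keepalive).
```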
The design guide is a little dated now, as one of the limitations it describes, i.e. no support for host-facing port-channels with dual-homed FEX, was removed when Enhanced vPC was introduced for the Nexus 5500 series switches in NX-OS 5.1(3)N1(1). There are some more details on that in the Using Enhanced vPC guide.
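For illustration, with Enhanced vPC a port-channel to a dual-NIC server behind dual-homed FEX is configured the same way on both Nexus 5500s (the FEX and interface numbers here are assumptions); notably, no explicit vpc keyword is required on the host port-channel:

```
! On BOTH Nexus 5500s - FEX 101 and 102 are each dual-homed,
! and the server's two NICs connect one to each FEX
interface ethernet 101/1/1, ethernet 102/1/1
  channel-group 10 mode active

interface port-channel 10
  switchport access vlan 10
  ! no "vpc" command needed here - Enhanced vPC handles
  ! the host port-channel across both 5500s implicitly
```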
I personally prefer the single-attached option and go into some of the reasons in the forum post N2K dual homing to N5K. As I see it, the main reasons for using dual-homed FEX are when you have a large number of single-NIC servers, or when you want to ensure that east-west traffic flows traverse an equal number of hops irrespective of which NIC on a dual-NIC server is used to send or receive. This scenario is described in Ivan Pepelnjak's post vSphere Does Not Need LAG Bandaids – The Network Does.
I don't believe there are any special considerations as such. As I mentioned previously, both single and dual-homed FEX are valid options. It could simply be that dual homed FEX was chosen within the document to demonstrate migration in a slightly more complex environment.
To a large extent the downstream connectivity model from the Nexus 5K switches to the FEX is independent of the topology or features you use upstream from the Nexus 5K to the aggregation.
That said, the one area you might want to consider if using FabricPath upstream is whether to enable vPC on the Nexus 5K.
If you have an environment where the servers don't use port-channels, i.e. they use an active/active load-balancing mechanism such as VMware vSphere ESXi with "route based on originating virtual port ID", and you choose single-homed FEX, then you no longer need to enable vPC on the Nexus 5K. This really simplifies the Nexus 5K device configuration, as you have FabricPath ports facing upstream and "access ports" facing downstream.
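A minimal sketch of that simplified Nexus 5K configuration (the VLAN and interface numbers are assumptions):

```
! FabricPath must be installed and enabled first
feature-set fabricpath

vlan 10
  mode fabricpath

! Upstream to the aggregation: FabricPath core ports
interface ethernet 1/31-32
  switchport mode fabricpath

! Downstream: straight-through FEX, host interfaces as plain
! access ports - no vPC domain, no peer-link to configure
interface ethernet 100/1/1
  switchport access vlan 10
```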
If there's no vPC, then there's no vPC peer-link required, which frees up a number of ports on the Nexus 5K that can be reused for upstream connectivity. This does mean that east-west traffic between servers connected to the same Nexus 5K switch pair potentially goes all the way to the aggregation, but that's then the same as east-west traffic between pods, and so arguably a more deterministic traffic pattern across the whole environment.
I like simple from an operational point of view, and using single attached FEX with FabricPath connectivity upstream provides a way to operate a really simple environment.