UCS-X: network configuration

Anton88
Level 1

Hi! We have:
- UCSX-210C-M6 servers
- UCSX-9508 chassis
- UCS-FI-6454 fabric interconnects
We plan to deploy a vSphere-based cluster. What are the best practices for network configuration with this type of equipment? Below are two example options.
Variant 1
For example, for Management create one vNIC adapter and turn on Failover, then perform the following settings in vCenter:

1 variant.png

Variant 2
For example, for Management create two vNIC adapters and turn off Failover, then perform the following settings in vCenter:

2 variant.png
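
As a quick way to compare the two variants from the ESXi side, here is a minimal pyVmomi sketch (the vCenter/host names and credentials are placeholders, not values from this setup) that lists the vmnics a host sees and the current vSwitch teaming order: variant 1 presents a single vmnic for Management and relies on fabric failover, while variant 2 presents two vmnics that are then teamed in vCenter:

```python
# Minimal pyVmomi sketch -- vCenter/ESXi names and credentials below are
# placeholders, not values from this thread.
import ssl
from pyVim.connect import SmartConnect, Disconnect

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="***",
                  sslContext=ssl._create_unverified_context())
host = si.RetrieveContent().searchIndex.FindByDnsName(
    None, "esxi-01.example.com", False)

# Physical adapters as ESXi sees them: one vmnic per UCS vNIC in the service
# profile. Variant 1 shows a single vmnic (failover handled by the fabric),
# variant 2 shows two vmnics that must then be teamed in vCenter.
for pnic in host.config.network.pnic:
    speed = pnic.linkSpeed.speedMb if pnic.linkSpeed else "link down"
    print(pnic.device, pnic.mac, speed)

# Current vSwitch teaming order, for comparison with the screenshots above.
for vsw in host.config.network.vswitch:
    teaming = vsw.spec.policy.nicTeaming
    if teaming and teaming.nicOrder:
        print(vsw.name, "active:", teaming.nicOrder.activeNic,
              "standby:", teaming.nicOrder.standbyNic)

Disconnect(si)
```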

Is it possible to implement load balancing through the fabric so that both adapters are active?
Do I need to create a separate virtual switch for each service, e.g. vMotion, Management, Provisioning, VMs?

Accepted Solution

Kirk J
Cisco Employee

The fabric interconnects do not share a data plane, so a host's pair of links, one pinned to each FI, can't do LACP-type teaming.

If you look at the way HyperFlex installs set up networking, that might be a good place to start.

  • For specific L2 communication that you know stays between hosts connected to the same FIs, create a single vNIC pinned to one FI side and make it the 'active' vmnic in ESXi teaming. Make the second vNIC, pinned to the other FI side, the 'standby' vmnic. This design minimizes the number of L2 hops traffic has to make and keeps latency low, while redundancy still kicks in if the 'active' links/paths go down.
  • If you have multiple types of L2-adjacent src/dst traffic, you can use the same design but make some port groups active on FI-A and others active on FI-B to even out the traffic load between the two fabric interconnects.
  • For traffic that has to go northbound through an L3 gateway, set those port groups to have multiple active links (side A and side B) using the default VMware teaming option (originating virtual port ID), which distributes guest VM NIC outbound connections among the active vmnics available to that VM's port group. This design also has built-in redundancy, as traffic switches over immediately if a link goes down (see the sketch after this list).
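
A minimal pyVmomi sketch of the teaming above, against standard vSwitches; the vCenter/host names, port group names, VLAN IDs, and vmnic numbers are placeholder assumptions, not values from this setup. The vMotion port group gets one uplink active (FI-A side) and one standby (FI-B side), while the VM network port group keeps both uplinks active with 'Route based on originating virtual port ID' balancing:

```python
# Minimal pyVmomi sketch -- host names, port groups, VLAN IDs and vmnic numbers
# below are placeholder assumptions, not values from this thread.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def make_policy(active, standby):
    """Teaming policy using 'Route based on originating virtual port ID'
    with an explicit active/standby uplink order."""
    teaming = vim.host.NetworkPolicy.NicTeamingPolicy(
        policy="loadbalance_srcid",      # originating virtual port ID
        notifySwitches=True,
        rollingOrder=False,
        nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
            activeNic=active, standbyNic=standby))
    return vim.host.NetworkPolicy(nicTeaming=teaming)

def update_portgroup(net_sys, name, vswitch, vlan, policy):
    """Push an updated spec for an existing standard-vSwitch port group."""
    spec = vim.host.PortGroup.Specification(
        name=name, vswitchName=vswitch, vlanId=vlan, policy=policy)
    net_sys.UpdatePortGroup(name, spec)

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="***",
                  sslContext=ssl._create_unverified_context())
host = si.RetrieveContent().searchIndex.FindByDnsName(
    None, "esxi-01.example.com", False)
net_sys = host.configManager.networkSystem

# L2-adjacent traffic (e.g. vMotion): active on the FI-A vNIC, standby on FI-B,
# so it normally stays on one fabric and only fails over when a path drops.
update_portgroup(net_sys, "vMotion", "vSwitch0", 100,
                 make_policy(["vmnic2"], ["vmnic3"]))

# Northbound VM traffic through the L3 gateway: both vNICs active, balanced
# per guest VM virtual port across FI-A and FI-B.
update_portgroup(net_sys, "VM-Network", "vSwitch0", 200,
                 make_policy(["vmnic0", "vmnic1"], []))

Disconnect(si)
```

The same policies can also be set in the vSphere Client under each port group's 'Teaming and failover' settings.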

Kirk...


