Mapping out UCS traffic flows: vEths FI-A to/from vEths FI-B

CiscoMedMed
Level 1

I just finished a video on traffic engineering with UCS. It made the point that, left unreviewed, you could end up with traffic on a vEth on FI-A needing to talk to a vEth on FI-B via the upstream LAN. It would be better to keep it all within the UCS by putting the servers that need to talk to each other on the same FI, then using one vNIC at each ESX host and Fabric Failover at the UCS to deal with redundancy.

 

The issue is - I have no visibility at the moment into how much of an issue this actually is, i.e. how much traffic flows from vEths on one FI to vEths on the other via the LAN. What would be the best method to map this out? Are there any automated tools to check the traffic engineering design of an inherited UCS? Ideas appreciated.


3 Replies

RedNectar
VIP

Hi @CiscoMedMed ,

Cisco suffers a lot from competitors who point out "big picture" features as flaws by concentrating on the "little picture".  Your observation reminds me of Hub vendors furiously peddling the idea that Shared Ethernet Hubs were faster than Ethernet Switches because they didn't have to buffer the frame.  And in a network of two computers, they were absolutely right.

Similarly, in a network of a single server attached to a pair of FIs - your observation is absolutely correct.

But Cisco likes to work on the bigger picture and provide predictable, consistent performance as your network scales.  The idea in this case is that traffic from one vEth interface to another should behave consistently even if the infrastructure changes, such as a VM being vMotioned or, particularly in the case of UCS, a Service Profile being moved from one piece of hardware to another, possibly even to a different pair of Fabric Interconnects.

In other words, normal traffic shouldn't be dependent on a single FI.

There are exceptions to this of course, but in those cases it is quite easy to engineer a Service Profile so that, say, all xxx type traffic ALWAYS hits FI-A or FI-B first.

But unless you make ALL traffic hit one or the other FI ALL the time, you are going to end up in the situation where sometimes traffic has to go from FI-A to FI-B via an external network, and when it does, you want it to CONSISTENTLY do this.
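For what it's worth, that kind of pinning is just a property of the vNIC in the Service Profile. A rough UCS Manager CLI sketch (the Service Profile and vNIC names are invented for illustration, and I'm going from memory, so verify against your UCSM version):

  UCS-A# scope org /
  UCS-A /org # scope service-profile ESX-Host-01
  UCS-A /org/service-profile # scope vnic eth0
  UCS-A /org/service-profile/vnic # set fabric a-b
  UCS-A /org/service-profile/vnic # commit-buffer

set fabric a-b pins the vNIC to FI-A with failover to FI-B; b-a does the reverse, and plain a or b pins it with no failover.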

I hope this helps

 



Don't forget to mark answers as correct if it solves your problem. This helps others find the correct answer if they search for the same problem


RedNectar aka Chris Welsh.
Forum Tips: 1. Paste images inline - don't attach. 2. Always mark helpful and correct answers, it helps others find what they need.

So would you say it's no big issue if a lot of traffic is flowing from, say, vEth7000 on FI-A to vEth7001 on FI-B via the upstream LAN? Or should one try to get those common heavy traffic paths onto the same FI so the traffic doesn't have to flow upstream and back downstream?

There is one critical app in my environment where they want every microsecond they can get shaved off the response time. The ESXi hosts have six VMNics each, three to FI-A and three to FI-B: two for the storage VLANs, two for management, and two for server-to-server data flows. Would your advice be to just let UCS sort it out? Or would it be better to make sure the application server and the database server use only vEths on the same FI?

And lastly - how would you quantify how much FI to FI traffic flows via the upstream switch?

Hi @CiscoMedMed ,

What you are describing sounds exactly like Cisco's HyperFlex solution, and I'd configure it exactly the way the HyperFlex installer does - I've stolen a diagram from this document to show you:

[Image: HX VLAN Config.jpg - HyperFlex vSwitch and VLAN layout]

In your case, ignore the vMotion vSwitch, but I'd advise you to configure your 6 VMNics just like the diagram, which is (see the esxcli sketch after this list):

  • Configure each pair of VMNics so one connects to FI-A and the other to FI-B
  • Configure a vSwitch for the Storage pair, and set the ESXi Load Balancing to an explicit fail-over order, with the primary link going to, say, FI-B
  • Configure a vSwitch for the Management pair, and set the ESXi Load Balancing to an explicit fail-over order, with the primary link going to, say, FI-A
  • Configure a vSwitch for the Server-to-Server pair, and set the ESXi Load Balancing to Route based on the originating virtual port
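If you'd rather script that than click through the vSphere client, an esxcli sketch along these lines should get you close (the vSwitch and vmnic names are examples only - substitute your own, and double-check the options against your ESXi version):

  # Storage vSwitch: explicit failover order, active uplink on the FI-B side
  esxcli network vswitch standard policy failover set -v vSwitch-Storage -l explicit -a vmnic3 -s vmnic2
  # Management vSwitch: explicit failover order, active uplink on the FI-A side
  esxcli network vswitch standard policy failover set -v vSwitch-Mgmt -l explicit -a vmnic0 -s vmnic1
  # Server-to-Server vSwitch: route based on originating virtual port
  esxcli network vswitch standard policy failover set -v vSwitch-VM -l portid -a vmnic4,vmnic5

Run it per host (or push it via Host Profiles) so every host pins the same way.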

This diagram from this source may help show the vNIC and vmnic relationship.  That doc also describes the failover scenarios.

[Image: vNIC-to-vmnic relationship and failover diagram]

Don't get hung up on the fact that I've mentioned HyperFlex - I'm just trying to show you a case that works exactly like yours.

Re the "how would you quantify how much FI to FI traffic flows via the upstream switch?" question:

For Storage traffic - 0% unless there is a failover situation

For Management traffic - 0% unless there is a failover situation

For VM-to-VM traffic - 50% if the load is evenly distributed.  Remember, Cisco designed it this way so that if a VM moved to another cluster, or to another host on a different pair of FIs, it would get the same performance.  If you have only ONE pair of FIs and there are particular VMs you want to have better host-to-host communication, you can build another vSwitch and pair of VMNics for them just like the Storage or Management vSwitches (complicated), or create a portgroup for them on either the Management or Storage vSwitch (easier) - see the sketch below.
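If you want to put real numbers on that 50% rather than trust the theory, you can watch the counters from the FI's NX-OS shell. A rough sketch (vEth numbers borrowed from your earlier post, the uplink port-channel number is an example, and I'm going from memory, so confirm on your UCSM version):

  UCS-A# connect nxos a
  UCS-A(nxos)# show interface vethernet 7000
  UCS-A(nxos)# show interface port-channel 1

Compare the vEth in/out counters against the uplink counters over an interval: anything a server sends to a peer whose active path is on the other FI has to show up on the uplinks, so the uplink share of the total gives you a feel for how much east-west traffic is hairpinning through the upstream switch (bearing in mind the uplinks also carry your normal north-south traffic).

And the "easier" portgroup option above is a one-liner per host (the portgroup name is invented for the example):

  esxcli network vswitch standard portgroup add -p PG-AppDB -v vSwitch-Storage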

 

 

I hope this helps

 



Don't forget to mark answers as correct if it solves your problem. This helps others find the correct answer if they search for the same problem


RedNectar aka Chris Welsh.
Forum Tips: 1. Paste images inline - don't attach. 2. Always mark helpful and correct answers, it helps others find what they need.
