01-09-2018 08:22 AM - edited 03-01-2019 01:24 PM
Hi,
We have a UCS chassis with B200 M4 blades, connected to 6248 FIs.
The blades have VIC 1340s with port expanders.
We have ESXi 6.5 installed on the blades (each assigned four vNICs, two on each fabric).
I understand that we have a maximum bandwidth of 40Gb per fabric (40Gb to fabric A and 40Gb to fabric B). Is that correct?
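To check my own math, here is a minimal Python sketch of where I think the 40Gb figure comes from (the lane count per fabric is my assumption from the VIC 1340 plus port expander spec, not something I've pulled from UCSM):

```python
# Back-of-the-envelope check of per-fabric bandwidth for a VIC 1340
# with a port expander in a B200 M4 (assumed values, not queried from UCSM).
KR_LANE_GBPS = 10          # each backplane KR lane runs at 10 Gb/s
LANES_PER_FABRIC = 4       # assumption: VIC 1340 + port expander lights 4 lanes per fabric

per_fabric_gbps = KR_LANE_GBPS * LANES_PER_FABRIC
print(f"Aggregate per fabric:  {per_fabric_gbps} Gb/s")  # 40 Gb/s
print(f"Max for one TCP flow:  {KR_LANE_GBPS} Gb/s")     # bounded by a single lane
```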
For the vMotion network we plan to use the vMotion TCP/IP stack, which is new in vSphere (at least new to us).
The plan is to use the same standard vSwitch and configure it as follows (a scripted sketch of this follows the list);
VM management vmkernel adapter; vSwitch0; vnic0 Active, vnic1 Standby
vMotion vmkernel adapter; vSwitch0; vnic0 Standby, vnic1 Active
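For anyone who wants to script those per-portgroup teaming overrides, here is a rough pyVmomi sketch. It is hypothetical: the vCenter/host names, credentials, portgroup names, and VLAN IDs are placeholders for our environment, and note that the UCS vNICs enumerate as vmnic0/vmnic1 inside ESXi:

```python
# Hypothetical pyVmomi sketch of the Active/Standby overrides described above.
# Hostnames, credentials, portgroup names, and VLAN IDs are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def set_teaming(net_sys, pg_name, vlan_id, active, standby):
    """Override active/standby uplinks on a standard-vSwitch portgroup."""
    policy = vim.host.NetworkPolicy(
        nicTeaming=vim.host.NetworkPolicy.NicTeamingPolicy(
            nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
                activeNic=active, standbyNic=standby)))
    spec = vim.host.PortGroup.Specification(
        name=pg_name,
        vlanId=vlan_id,            # must match the portgroup's existing VLAN
        vswitchName="vSwitch0",
        policy=policy)
    net_sys.UpdatePortGroup(pgName=pg_name, portgrp=spec)

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="***",
                  sslContext=ssl._create_unverified_context())
esx = si.content.searchIndex.FindByDnsName(dnsName="esxi01.example.com",
                                           vmSearch=False)
net_sys = esx.configManager.networkSystem

# UCS vNIC0/vNIC1 show up as vmnic0/vmnic1 in ESXi.
set_teaming(net_sys, "Management Network", 0, ["vmnic0"], ["vmnic1"])
set_teaming(net_sys, "vMotion", 0, ["vmnic1"], ["vmnic0"])
Disconnect(si)
```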
Does this seem reasonable?
Now then, for a single vMotion between two hosts I am guessing the maximum bandwidth is 10Gb: the backplane lanes are 10Gb 'KR' lanes, and since a vMotion is a single TCP conversation it can only use a single 10Gb trace (regardless of the fact that the card reports 40Gb and a port channel is in place). Is that a correct assumption?
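If that assumption holds, single-stream vMotion times would look roughly like this (a back-of-the-envelope Python sketch; the 90% efficiency factor is just an assumed fudge for protocol overhead, and it ignores precopy iterations and dirty-page rework):

```python
# Rough single-flow vMotion transfer time at the assumed 10 Gb/s lane ceiling.
def vmotion_seconds(vm_ram_gb, link_gbps=10, efficiency=0.9):
    bits = vm_ram_gb * 8 * 1024**3            # RAM to move, GiB -> bits
    return bits / (link_gbps * 1e9 * efficiency)

for ram in (32, 128, 512):
    print(f"{ram:>4} GB VM: ~{vmotion_seconds(ram):.0f} s on one 10 Gb/s lane")
```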
If I do two simultaneous vMotions, what is the maximum bandwidth then? Is it 20Gb, i.e. does the second stream use another 10Gb KR lane? (I guess UCS does that load balancing, but I'm not sure whether it can split simultaneous vMotion operations between the same two hosts.)
Many thanks.
01-11-2018 09:13 AM
There are a few moving parts here to consider. The short answer is simple: yes, 10Gb would be the max for any one connection between two hosts during the vMotion exchange you're describing. Further, the links to the FIs are 10Gb, so they would be a limitation as well.
Determining the outcome of multiple simultaneous vMotions would also require knowing exactly where the vNICs that carry that traffic are pinned.
There is no data-plane connection between the two fabric interconnects. If, for example, Host 1's vNIC is pinned to FI-A and Host 2's vNIC is pinned to FI-B, the traffic must traverse the upstream network; if that upstream path runs through, say, a 1Gb switch, that will of course limit throughput.
UCS does not perform live (per-packet) load balancing, and LACP port channels distribute traffic on a per-flow basis according to the configured hashing method. So there is potential for two different flows to land on the same link.
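To illustrate the per-flow behavior with a toy Python example (the real hash algorithm and field selection are ASIC- and configuration-specific; this CRC32 stand-in just shows that the mapping is deterministic per flow, so concurrent streams may or may not spread across members):

```python
# Toy illustration of per-flow hashing on a port channel. A flow's key always
# maps to the same member link; the CRC32 here is a stand-in, not the actual
# algorithm the FI/switch ASIC uses.
import zlib

MEMBER_LINKS = ["Eth1/1", "Eth1/2", "Eth1/3", "Eth1/4"]

def pick_link(src_ip, dst_ip, src_port, dst_port):
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return MEMBER_LINKS[zlib.crc32(key) % len(MEMBER_LINKS)]

# Two simultaneous vMotions between the same pair of hosts (vMotion listens
# on TCP 8000) differ only in the ephemeral source port, so whether they
# share a member link is entirely up to the hash.
print(pick_link("10.0.0.11", "10.0.0.12", 52001, 8000))
print(pick_link("10.0.0.11", "10.0.0.12", 52002, 8000))
```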
I hope that this moves you closer to the answer to your question. Let me know what clarifying questions you have and I can jump back in to assist with that as soon as possible.
Thanks!