03-18-2021 07:08 PM
For this question, let's say I have a B200 M5 with a VIC 1440 connected to a 2304 FEX. I know the backplane between the blade and the FEX is 40Gbps (I think 4x10Gbps in a port-channel).
Let's say I create a service profile with 3 vNICs (mgmt, data1, data2). I know the OS will show them all at a speed of 20Gbps. Am I correct in guessing that bandwidth is not reserved per vNIC? In other words, if data1 is using all 20Gbps, data2 can still use the remaining 20 and mgmt is out of bandwidth? Or will the bandwidth be split 40/3, leaving each vNIC with a max of 13.3Gbps regardless of consumption on the other vNICs?
Solved! Go to Solution.
03-18-2021 09:07 PM
Start by checking the B200 M5 spec sheet, which lists:
The combination of VIC 1440 mLOM + 2x 2304V2 IOM => 40Gbps (1)
Note 1: These configurations implement two 2x10 Gbps port-channels.
So you have a 2x10Gbps port-channel out each IOM up to each Fabric Interconnect.
As a best practice, I'd put data1 on FI-A and data2 on FI-B to maximize bandwidth across both uplinks.
To answer your specific question: the vNICs *share* the aggregate bandwidth.
Just imagine the NICs are physical 20Gbps links connected to a switch, and that switch has a 2x10Gbps port-channel uplink...
How much bandwidth would each physical 20Gbps link get?
"It depends."
A single flow / single TCP connection won't exceed 10Gbps (it hashes onto one member of the 2x10Gbps port-channel).
Many connections could get close to 20Gbps.
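A toy sketch of why that is (Python; the hash function and field selection here are illustrative, not the actual VIC/IOM load-balancing algorithm): port-channel load balancing hashes a flow's header fields to pick a member link, so one TCP connection always lands on the same 10Gbps member, while many distinct connections spread across both.

```python
import hashlib

MEMBERS = 2  # 2x10Gbps port-channel


def member_link(src_ip, dst_ip, src_port, dst_port):
    """Pick a member link deterministically from the flow's header fields.

    Illustrative only: real hardware uses its own hash, but the key
    property is the same -- identical inputs always pick the same link.
    """
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % MEMBERS


# One flow: same member link every time -> capped at that link's 10Gbps
flow = ("10.0.0.1", "10.0.0.2", 49152, 443)
assert all(member_link(*flow) == member_link(*flow) for _ in range(5))

# Many flows (varying source port): both members get used -> can approach 20Gbps
links = {member_link("10.0.0.1", "10.0.0.2", p, 443) for p in range(49152, 49252)}
print(sorted(links))
```

The takeaway: aggregate bandwidth scales with flow count, not per-flow speed, which is why a single iperf stream tops out at ~10Gbps on this setup.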
The NICs compete for the shared resource, so mgmt traffic would impact data traffic.
But how much? Again "it depends".
Is the mgmt NIC doing a lot? Probably not as much as data1.
If mgmt is doing very little, then data1 will get nearly all 20Gbps.
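To put rough numbers on "it depends": with no per-vNIC rate limiting, the sharing behaves approximately like a max-min fair split of the shared path. A hypothetical sketch (my own illustration, not anything UCS actually computes) of how demands divide a shared 20Gbps fabric path:

```python
def fair_share(demands, capacity):
    """Max-min fair split of a shared link (all values in Gbps)."""
    alloc = {}
    active = dict(demands)
    while active:
        share = capacity / len(active)
        # vNICs demanding no more than an equal share keep their full demand
        bottlenecked = {n: d for n, d in active.items() if d <= share}
        if not bottlenecked:
            # everyone wants more than an equal share: split evenly
            for n in active:
                alloc[n] = share
            return alloc
        for n, d in bottlenecked.items():
            alloc[n] = d
            capacity -= d
            del active[n]
    return alloc


# mgmt barely loaded, data1 greedy, sharing one 20Gbps fabric path
print(fair_share({"mgmt": 0.5, "data1": 30}, 20))
# -> {'mgmt': 0.5, 'data1': 19.5}
```

So a lightly loaded mgmt vNIC barely dents data1, but two greedy vNICs on the same path would each settle around 10Gbps.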
Hope that helps.
03-19-2021 05:11 AM
Thanks, that confirms what I was guessing. Now I can create vNIC templates for mgmt, data, and iSCSI knowing that they all share the pool of bandwidth.