12-15-2012 09:12 PM - edited 03-01-2019 10:46 AM
I've got some B200 M2 half-width servers with the VIC 1280 card, connected through 2208XP IOMs, but I am only using 2 of the 8 ports on each module.
So my total uplink is 40 Gb per chassis. How much do I get going to each server?
12-15-2012 09:59 PM
Hi Tony,
With the combination of the VIC 1280 and the 2208XP, the blade has 40 Gb of bandwidth per side (i.e., between the blade and each IOM), which totals up to 80 Gb of bandwidth.
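A quick back-of-the-envelope sketch of that math, assuming the usual half-width-slot layout of 4 x 10G lanes from the VIC 1280 to each 2208XP:

```python
# Rough sketch of the blade-to-IOM bandwidth described above.
LANES_PER_SIDE = 4      # 10G backplane traces from the VIC 1280 to one 2208XP
LANE_SPEED_GB = 10      # each trace is 10 Gb
FABRIC_SIDES = 2        # IOM A and IOM B

per_side_gb = LANES_PER_SIDE * LANE_SPEED_GB    # 40 Gb to each IOM
per_blade_gb = per_side_gb * FABRIC_SIDES       # 80 Gb total blade <-> IOM

print(f"blade to each IOM: {per_side_gb} Gb, total per blade: {per_blade_gb} Gb")
```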
./Abhinav
12-16-2012 12:42 AM
Hello,
So 80 Gb to the IOMs per blade, but from each IOM to the FI it will be 20 Gb?
12-16-2012 01:11 AM
Correct!
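With only 2 of the 8 fabric ports cabled per 2208XP, the uplink side works out as in the rough sketch below:

```python
# Sketch of the IOM-to-FI uplink math with only 2 of 8 fabric ports in use per IOM.
UPLINKS_PER_IOM = 2     # ports cabled from each 2208XP to its Fabric Interconnect
UPLINK_SPEED_GB = 10    # each fabric port is 10 Gb
IOMS_PER_CHASSIS = 2    # one IOM per fabric side

per_iom_gb = UPLINKS_PER_IOM * UPLINK_SPEED_GB      # 20 Gb per IOM to the FI
per_chassis_gb = per_iom_gb * IOMS_PER_CHASSIS      # 40 Gb total chassis uplink

print(f"per IOM: {per_iom_gb} Gb, per chassis: {per_chassis_gb} Gb")
```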
12-16-2012 07:34 AM
Hi,
So it would make sense to create 8 vNICs to get the maximum potential from the blade, right? For an ESXi deployment, for example, an 8-vNIC design?
But I guess I'm still limited by the uplinks to the FI, which use only 2 links per IOM, for a total of 40 Gb.
Thoughts?
12-16-2012 11:18 AM
If you are connecting to IP storage (NFS/iSCSI), 8 vNICs would make sense. I have a few customers with 10 vNICs; they are using VMware FT, so there are 2 vNICs for that. The FT vSwitch setup is similar to the vMotion setup, where NIC teaming is set to active/standby to keep the traffic local.
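If you ever script that port group, this is roughly what the active/standby teaming looks like in pyVmomi; it's only an illustrative sketch, and the vSwitch, port group, and vmnic names are placeholders, not values from this thread:

```python
# Purely illustrative pyVmomi sketch of an active/standby FT port group.
# Assumes an existing vCenter/ESXi session; "vSwitch1", "FT", and the vmnic
# names below are placeholders.
from pyVmomi import vim

def make_ft_portgroup_spec(vswitch_name, pg_name, active_nic, standby_nic):
    """Build a port group spec with an explicit active/standby uplink order,
    the same style of teaming used for a vMotion port group."""
    nic_order = vim.host.NetworkPolicy.NicOrderPolicy(
        activeNic=[active_nic],    # carries the FT traffic, keeping it on one fabric
        standbyNic=[standby_nic],  # used only if the active uplink fails
    )
    teaming = vim.host.NetworkPolicy.NicTeamingPolicy(
        policy="failover_explicit",  # honour the explicit active/standby order
        nicOrder=nic_order,
    )
    return vim.host.PortGroup.Specification(
        name=pg_name,
        vlanId=0,
        vswitchName=vswitch_name,
        policy=vim.host.NetworkPolicy(nicTeaming=teaming),
    )

# host_net_sys would be the host's configManager.networkSystem from your session:
# host_net_sys.AddPortGroup(portgrp=make_ft_portgroup_spec("vSwitch1", "FT", "vmnic2", "vmnic3"))
```

Pinning FT (or vMotion) to a single active uplink per port group is what keeps that traffic on one fabric side instead of hashing it across both.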
12-16-2012 02:01 PM
Is there a limit on how many vNICs I can create? I thought the max is 8 vNICs on the VIC 1280?
12-16-2012 02:30 PM
Yes, there is a limit - 256 interfaces (128 per side)
12-16-2012 05:24 PM
So just to be sure again:
Whether I have 2 vNICs or 8 vNICs, the uplink to the FI is always limited by how many uplinks I have, which is 20 + 20 (40 Gb).
If there is minimal VM-to-VM communication between the servers on the chassis, why would it be necessary to have all these vNICs or 80 Gb of throughput to a blade?
For a loaded chassis of 10 servers, 40/10 gives only 4 Gbps to each blade, compared to a regular Dell R710 server, which has 8 pNICs of 1 Gbps each (8 Gbps total).
Is that correct?
Is the 80 Gb all internal communication to the blade?
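Here's the rough math behind my comparison, in case I'm getting it wrong (the blade count is just my example number):

```python
# The comparison I'm making: chassis uplink shared across blades vs. a rack server's pNICs.
CHASSIS_UPLINK_GB = 40   # 2 x 10 Gb per IOM, 2 IOMs (from the earlier posts)
BLADES = 10              # example blade count from my question
R710_PNICS = 8           # a typical R710 with 8 x 1 Gb pNICs
R710_PNIC_GB = 1

ucs_even_share = CHASSIS_UPLINK_GB / BLADES      # 4 Gb per blade if split evenly
r710_total = R710_PNICS * R710_PNIC_GB           # 8 Gb

print(f"even split per blade: {ucs_even_share} Gb vs R710 total: {r710_total} Gb")
```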
12-17-2012 04:40 AM
80 Gb is the max potential if you have 4 IOM fabric ports back to the Fabric Interconnects. Every server has access to all of the potential bandwidth. UCS uses fair-share QoS, so every server has access to the same potential bandwidth. There is no rate limiting unless it is imposed by the admin in a QoS policy (not recommended; this is how the competition does it).
I would highly recommend enabling and configuring QoS for the various traffic classes. By default, FC traffic is guaranteed 50% and everything else is best effort. These are not limits, though; they are only guarantees if there is contention.
I normally put VM traffic in Gold or Platinum, vMotion in Bronze, management in Silver, and IP storage in Platinum. I always play with the percentages a bit, but again, these are only imposed if there is contention.
These QoS policies are then mapped to the vNIC for that traffic type.
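To make the "guarantees, not limits" point concrete, here is a toy calculation; the class weights are just example numbers I picked, not the UCS defaults:

```python
# Toy illustration of "guarantees, not limits": a class's weight only sets the
# minimum share it gets when the link is congested; idle bandwidth is still
# available to any class. Weights below are made-up examples.
def guaranteed_share(weights, link_gb):
    """Return each class's minimum guaranteed bandwidth on a congested link."""
    total = sum(weights.values())
    return {cls: link_gb * w / total for cls, w in weights.items()}

example_weights = {"platinum (VM + IP storage)": 4,
                   "gold": 2,
                   "silver (mgmt)": 1,
                   "bronze (vMotion)": 1,
                   "fc": 4,
                   "best-effort": 2}

for cls, gb in guaranteed_share(example_weights, link_gb=40).items():
    print(f"{cls:28s} guaranteed >= {gb:.1f} Gb under contention")
```

Under contention each class is held to at least its share; when the link is idle, any class is free to burst past its number.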