Beginner

2208XP, UCS-VIC-M82-8P (VIC 1280)

I've got some B200 M2 half-width servers with the VIC 1280 card, connected through 2208XP IOMs, but I'm only using 2 of the 8 ports on each module.

So my total uplink is 40 Gb per chassis. How much does each server get?

9 REPLIES
Enthusiast

Hi Tony,

With the combination of the VIC 1280 and the 2208XP, the blade has 40 Gb of bandwidth per side, i.e. between the blade and each IOM, which totals 80 Gb.
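A quick sanity check of the arithmetic (illustrative Python; the constants reflect the VIC 1280's 4 x 10 Gb lanes to each fabric):

```python
# VIC 1280 behind two 2208XP IOMs: 4 x 10 Gb lanes to each fabric.
LANES_PER_FABRIC = 4
LANE_SPEED_GB = 10
FABRICS = 2  # fabric A and fabric B (one 2208XP IOM each)

per_side_gb = LANES_PER_FABRIC * LANE_SPEED_GB  # blade <-> one IOM
total_gb = per_side_gb * FABRICS                # blade <-> both IOMs

print(per_side_gb)  # 40
print(total_gb)     # 80
```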

./Abhinav

Beginner

Hello,

So it's 80 Gb from the blade to the IOMs, but from each IOM to the FI it will only be 20 Gb?

Enthusiast

Correct!

Beginner

Hi,
So it would make sense to create 8 vNICs to get the maximum potential from the blade, right? For an ESXi deployment, for example, an 8-vNIC design?

But I guess I'm still limited by the uplinks to the FI, which use only 2 links per IOM, 40 Gb in total.

Thoughts?

Enthusiast

If you are connecting to IP storage (NFS/iSCSI), 8 vNICs would make sense. I have a few customers with 10 vNICs; they are using VMware FT, so there are 2 vNICs for that. The FT vSwitch setup is similar to the vMotion setup, where NIC teaming is set to active/standby to keep the traffic local to one fabric.
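A sketch of what such a 10-vNIC layout might look like on the ESXi side (all names here are hypothetical, not from the thread; assumes one vNIC per fabric for each traffic type):

```python
# Hypothetical 10-vNIC ESXi layout: one vNIC on fabric A and one on
# fabric B per traffic type. For vMotion and FT, teaming is set to
# active/standby so the traffic stays on one fabric unless it fails.
vnic_layout = {
    "mgmt":       {"teaming": "active/active",  "vnics": ["mgmt-a", "mgmt-b"]},
    "vm":         {"teaming": "active/active",  "vnics": ["vm-a", "vm-b"]},
    "ip-storage": {"teaming": "active/active",  "vnics": ["stor-a", "stor-b"]},
    "vmotion":    {"teaming": "active/standby", "vnics": ["vmo-a", "vmo-b"]},
    "ft":         {"teaming": "active/standby", "vnics": ["ft-a", "ft-b"]},
}

total_vnics = sum(len(v["vnics"]) for v in vnic_layout.values())
print(total_vnics)  # 10
```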

Beginner

Is there a limit on how many vNICs I can create? I thought the max is 8 vNICs on the VIC 1280?

Beginner

Yes, there is a limit - 256 interfaces (128 per side)

Beginner

So just to be sure again:

Whether I have 2 vNICs or 8 vNICs, the uplink to the FI is always limited by how many uplinks I have, which is 20 Gb per IOM (40 Gb total).

If there is minimal VM-to-VM communication between the servers in the chassis, why would it be necessary to have all these vNICs, or 80 Gb of throughput to a blade?

For a fully loaded chassis of 8 half-width blades, 40/8 gives only 5 Gbps per blade, compared to a regular Dell R710 server, which has 8 pNICs of 1 Gbps each (8 Gbps total).

Is this correct? Is the 80 Gb all internal communication between the blade and the IOMs?
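The worst-case per-blade share works out like this (illustrative Python; note a 5108 chassis holds at most 8 half-width blades, so the worst case is 40/8 rather than 40/10):

```python
# Chassis uplink with 2 of 8 ports cabled on each 2208XP IOM.
uplinks_per_iom = 2
port_speed_gb = 10
ioms = 2

chassis_uplink_gb = uplinks_per_iom * port_speed_gb * ioms  # 40 Gb total

blades = 8  # a 5108 chassis holds 8 half-width B200 M2 blades
worst_case_per_blade = chassis_uplink_gb / blades  # if every blade saturates at once

print(chassis_uplink_gb)     # 40
print(worst_case_per_blade)  # 5.0
```

In practice a single blade can still burst to the full chassis uplink; the division only applies under simultaneous contention from all blades.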

Enthusiast

80 Gb is the maximum potential if you have 4 fabric ports per IOM cabled back to the Fabric Interconnects. Every server has access to all of the potential bandwidth. UCS uses fair-share QoS, so every server has access to the same potential bandwidth; there is no limiting unless it is imposed by the admin in a QoS policy (not recommended; that is how the competition does it).

I would highly recommend enabling and configuring QoS for the various traffic classes. By default, FC traffic is guaranteed 50% and everything else is best effort. These are not limits, though; they are only guarantees that apply when there is contention.

I normally put VM traffic in Gold or Platinum, vMotion in Bronze, management in Silver, and IP storage in Platinum. I always play with the percentages a bit, but again, these only come into play when there is contention.

These QoS policies are then mapped to the vNIC for that traffic type.
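That mapping can be sketched as data (the class names are the standard UCSM system classes; the pairing is this poster's preference, not a fixed rule, and the vNIC names are hypothetical):

```python
# Traffic type -> UCS QoS system class, per the preference above.
# A QoS policy referencing the class is then attached to the matching vNIC.
qos_by_traffic = {
    "vm":         "gold",      # or platinum
    "vmotion":    "bronze",
    "mgmt":       "silver",
    "ip-storage": "platinum",
}

# Hypothetical vNIC naming: one QoS policy per traffic type's vNIC.
policy_for_vnic = {f"vnic-{t}": cls for t, cls in qos_by_traffic.items()}
print(policy_for_vnic["vnic-vmotion"])  # bronze
```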
