
UCS Switching Questions

Aaron Dhiman
Level 2

I'm a little concerned as to why 2:1 oversubscription is the best possible ratio between the servers and the Fabric Extenders (e.g., 8 x 10G interfaces from the blades, with 4 x 10G uplink ports).  If there is no QoS there (because there is no switching), then what happens if those uplinks get congested?  It seems there is no way to prioritize the traffic until it reaches the Fabric Interconnect.
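To put numbers on that, here is a minimal Python sketch of the arithmetic; the 8 x 10G and 4 x 10G figures come straight from the question above, and everything else is just illustration:

```python
# Oversubscription from blade interfaces to FEX uplinks, using the figures above.
server_links = 8        # 10G interfaces from the blades into the chassis
uplinks = 4             # 10G uplink ports from the Fabric Extender
link_speed_gbps = 10

downstream = server_links * link_speed_gbps   # 80 Gb/s of possible demand
upstream = uplinks * link_speed_gbps          # 40 Gb/s of uplink capacity

ratio = downstream / upstream
print(f"Oversubscription: {ratio:.0f}:1")     # -> 2:1
```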

18 Replies

I would say that either way, I am still very impressed with UCS and am looking to deploy it soon.  It is quite revolutionary.  In reality, say you have four servers running on a single blade.  Are you ever going to average 5 Gbps consistently on all four servers and cause any sort of problem with 20 Gbps of vNIC bandwidth?  I don't think so, so this level of oversubscription does not concern me.  I do intend to activate all 8 connections to the Interconnects, though.
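As a back-of-the-envelope check on that claim (the 4-server, 5 Gbps, and 20 Gbps figures are the ones in the post; this is illustrative arithmetic, not a capacity-planning tool):

```python
# Would four servers each averaging 5 Gb/s congest a 2 x 10G vNIC pair?
servers = 4
avg_demand_gbps = 5.0
vnic_capacity_gbps = 20.0   # 2 x 10G vNICs, one per fabric

aggregate = servers * avg_demand_gbps
print(f"Aggregate demand: {aggregate} Gb/s of {vnic_capacity_gbps} Gb/s")
# Sustained 5 Gb/s on all four lands exactly at the line; anything less fits.
print("Congested" if aggregate > vnic_capacity_gbps else "Fits (at the average)")
```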

Victor

Yes, I read Brad's blog.  He's not posting inaccurate info, but his topology also uses a single vNIC per blade.  That being said, he's limiting the amount of bandwidth per blade by using only a single vNIC with failover, which is a valid topology.  2nd-generation Menlo cards will NOT support failover.  If you want any form of redundancy, you would need two vNICs, one for each fabric.

So if you have an adapter that supports failover and you only configure a single vNIC, you will only pump 10 Gb/s to the blade, whereas you'll have 20 Gb/s of bandwidth with a profile configured for dual adapters.

This is just a difference of requirements.  Do you want 2 x 10 Gb/s adapters with oversubscription, or a single adapter without?  90% of UCS customers are deploying dual adapters in their service profiles.
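To make the single-vNIC-with-failover versus dual-vNIC trade-off concrete, here is a minimal sketch; the `Vnic` class and `usable_bandwidth` helper are hypothetical names for illustration, not UCS APIs:

```python
# Toy model of the two service-profile designs discussed above.
from dataclasses import dataclass

@dataclass
class Vnic:
    fabric: str          # "A" or "B"
    speed_gbps: int = 10
    failover: bool = False

def usable_bandwidth(vnics, failed_fabric=None):
    """Sum bandwidth of vNICs whose fabric is up, letting a failover-capable
    vNIC re-pin to the surviving fabric instead of going down."""
    total = 0
    for v in vnics:
        if v.fabric != failed_fabric or v.failover:
            total += v.speed_gbps
    return total

single = [Vnic("A", failover=True)]    # one vNIC, fabric failover enabled
dual = [Vnic("A"), Vnic("B")]          # one vNIC per fabric, no failover

print(usable_bandwidth(single))        # 10 Gb/s in normal operation
print(usable_bandwidth(dual))          # 20 Gb/s in normal operation
print(usable_bandwidth(single, "A"))   # 10 Gb/s: re-pinned to fabric B
print(usable_bandwidth(dual, "A"))     # 10 Gb/s: only the fabric B vNIC is left
```

Either way you end up with 10 Gb/s after a fabric failure; the dual-vNIC design simply gives you twice the bandwidth while both fabrics are healthy.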

Robert

An interesting thread, this; I just wanted to chip in with my own question about oversubscription that I have been wondering about. A blade server has two 10 Gig CNA ports (depending on card type). The operating system then thinks it sees two 10 GbE ports and also two 4 Gbps FC ports, correct? Does this not mean there is a theoretical maximum of 28 Gbps utilising the available 20 Gbps? Maybe I don't understand correctly, but it seems there is already oversubscription right on the CNA, before traffic even hits the FEX. Is that right?
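Simon's arithmetic, spelled out as a quick sketch (all figures are the ones in his post):

```python
# Theoretical maximum presented to the OS vs. what the CNA wires can carry.
eth_gbps = 2 * 10       # two 10 GbE adapters seen by the OS
fc_gbps = 2 * 4         # two 4 Gb/s FC adapters seen by the OS
physical_gbps = 2 * 10  # two 10 Gb/s converged links actually leaving the CNA

theoretical = eth_gbps + fc_gbps              # 28 Gb/s
print(f"{theoretical} Gb/s offered over {physical_gbps} Gb/s physical "
      f"-> {theoretical / physical_gbps:.1f}:1 on the CNA itself")
```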

Simon,

That is correct.  The host OS will see two 10G Ethernet adapters and two Fibre Channel SCSI adapters.  As with any current CNA (not just mezzanine cards for blades), they have a maximum transmission rate of 10 Gb/s, which is "wide open" for Ethernet but limited to 4 Gb/s for FC traffic.  I would hazard a guess that 3rd-gen CNAs might incorporate the 8 Gb/s Fibre Channel chips into the cards, but we'll have to wait and see.  When you open storage up to those rates, QoS and CoS will need to play a major role in managing your drop vs. no-drop traffic queues.

As you can tell from CNA vendors other than Cisco (Emulex and QLogic), there is a solid understanding that a host under normal operation will not be hindered by a certain amount of oversubscription.  Understand that FCoE handles congestion much more gracefully than TCP (thanks to QCN and PFC), so you can rest assured the paths from your blades to your Interconnects will experience less oversubscription pain than you might find in the regular Ethernet world.
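A minimal sketch of the drop vs. no-drop distinction mentioned above (a toy queue model, not how the switch ASICs actually work): a lossless class protected by PFC pauses the sender when its queue fills, while best-effort Ethernet simply tail-drops.

```python
from collections import deque

class Port:
    """Toy egress port with a lossless (PFC-paused) queue and a lossy queue."""
    def __init__(self, depth=2):
        self.depth = depth
        self.fcoe = deque()      # no-drop class: sender is paused instead
        self.eth = deque()       # drop class: tail-drop when full
        self.paused = False      # PFC pause state signalled upstream

    def enqueue(self, frame, lossless):
        q = self.fcoe if lossless else self.eth
        if len(q) >= self.depth:
            if lossless:
                self.paused = True    # assert PFC PAUSE; frame waits upstream
                return "paused"
            return "dropped"          # best-effort traffic is simply lost
        q.append(frame)
        return "queued"

port = Port(depth=2)
for i in range(3):
    print("eth :", port.enqueue(f"eth-{i}", lossless=False))
for i in range(3):
    print("fcoe:", port.enqueue(f"fcoe-{i}", lossless=True))
# The third Ethernet frame is dropped; the third FCoE frame pauses instead.
```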

Robert
