
UCS Saves Core Ports?

visitor68
Level 5

I saw an interesting presentation last week where Cisco justifies their server prices within UCS because they save core port costs. That sounds really interesting, but I didn't understand how they save core port costs...

Does someone who is familiar with UCS understand the logic behind this argument? I really like UCS and we are thinking of bringing it in for a PoC....

Thanks

9 Replies

Robert Burns
Cisco Employee

Not sure I'd agree with the reduction of "core" layer ports, but since your interconnects are the aggregation point of all your compute devices, they will definitely reduce your aggregation/distribution port count.  In a legacy environment, having all your x86 blades and/or rack servers connect to separate ToR or EoR switches for FC and Ethernet would require more uplink ports from access/aggregation into your core.  UCS definitely lowers the overall FC and Ethernet port count; whether that reduction lands in your distribution or access layer depends on your topology.

Regards,

Robert

Hi, Robert...not too sure I get it. Do you mean that converging FC and Ethernet will save some switching hardware and ports? If so, I agree...let me show you this drawing. My requirements are the following:

4:1 oversubscription from server to SAN handoff (the SAN folks are demanding this)

8:1 oversubscription from server to LAN core (Nexus 7Ks)

We can use the older VIC cards (dual port) and the older 2104 FEX, since this is not a high-density setup.

We can use the 8 half-width blades.

So, per chassis, we will have two 10G converged links from each FEX to its associated Fabric Interconnect (that gives 4:1 for BOTH Ethernet and FC). Remember, these are converged links, and each link, by default with Cisco, is expected to carry 5G of Ethernet and 5G of FCoE. Of course, ETS can change that allotment dynamically, but we have to put the stakes somewhere to figure out oversubscription.

Given all that, this is what I have. Is this correct? By the way, where it says 8 x 10G from the chassis (in green), I am talking about the whole rack - 2 uplinks per FEX to the FI per chassis. Four chassis make 8 uplinks per fabric.
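For what it's worth, here is a quick back-of-the-envelope sketch of that per-chassis math in plain Python. The 5G/5G Ethernet/FCoE allotment is just the default discussed above, taken as an assumption rather than a measured split:

# Per-chassis oversubscription sketch for the design described above.
# Assumptions (from this thread): 8 half-width blades per chassis, one 10G
# converged port per blade into each 2104 FEX, 2 x 10G uplinks per FEX to its
# fabric interconnect, and the default 5G Ethernet / 5G FCoE allotment per link.
BLADES_PER_CHASSIS = 8
BLADE_PORT_GBPS = 10        # converged bandwidth per blade into each FEX
FEX_UPLINKS = 2
FEX_UPLINK_GBPS = 10        # converged 10G links from FEX to FI

server_bw = BLADES_PER_CHASSIS * BLADE_PORT_GBPS   # 80G into each FEX
uplink_bw = FEX_UPLINKS * FEX_UPLINK_GBPS          # 20G out of each FEX
print(f"Chassis-to-FI oversubscription: {server_bw / uplink_bw:.0f}:1")  # 4:1

# The default allotment splits each 10G link 5G/5G between Ethernet and FCoE,
# so the same 4:1 ratio applies to both traffic classes.
print(f"Ethernet / FCoE ratio: {(BLADES_PER_CHASSIS * 5) / (FEX_UPLINKS * 5):.0f}:1")

# Rack level, as described above: 4 chassis x 2 uplinks per FEX per fabric.
print(f"Uplinks per fabric for 4 chassis: {4 * FEX_UPLINKS} x 10G")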

If you calculate the bandwidths, you would achieve the required oversubscription. However, if you create port-channels with this number of links, the throughput will not be the same as the sum of the links: two out of the five links will be underutilized due to the hashing algorithm. The hashing algorithm works best when the number of links is a power of 2.

I would suggest creating a vPC with 2 x 4 x 10Gbit from each FI towards the N7K pair. This would give you a total bandwidth of 160 gigabit northbound from the FIs, which would result in an oversubscription of roughly 5:1.
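A quick sketch of how that figure falls out. Reading the ~5:1 ratio as counting each server at its default 5G Ethernet allotment is my interpretation of the numbers above, not something stated explicitly:

# Northbound bandwidth for the suggested vPC layout: two port-channels of
# 4 x 10G from each FI towards the N7K pair.
FIS = 2
PORT_CHANNELS_PER_FI = 2
LINKS_PER_PORT_CHANNEL = 4
LINK_GBPS = 10

northbound = FIS * PORT_CHANNELS_PER_FI * LINKS_PER_PORT_CHANNEL * LINK_GBPS
print(f"Total northbound bandwidth: {northbound}G")        # 160G

# Assumption: count each of the 160 servers at its default 5G Ethernet
# allotment rather than the full 10G converged port.
SERVERS = 160
ETH_ALLOTMENT_GBPS = 5
print(f"Oversubscription: {SERVERS * ETH_ALLOTMENT_GBPS / northbound:.0f}:1")  # 5:1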

If I understand the hashing method correctly, using a 5-port port-channel would also result in a maximum bandwidth of 40 gig per port-channel. The first three ports would be utilized at 100% and the last two at only 50%, resulting in 40 gig.
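To illustrate why a 5-member port-channel behaves that way, here is a small sketch. It assumes an 8-bucket (3-bit) hash result, which is how Cisco port-channel load balancing is commonly described; the exact hashing details vary by platform and configuration:

# 8 hash buckets spread over a 5-member port-channel: three links get 2 buckets
# and two links get 1, so when the 2-bucket links saturate at 10G the 1-bucket
# links only carry ~5G each - roughly the 40G effective figure mentioned above.
BUCKETS = 8
LINKS = 5
LINK_GBPS = 10

buckets_per_link = [len(range(i, BUCKETS, LINKS)) for i in range(LINKS)]
print(buckets_per_link)                                   # [2, 2, 2, 1, 1]

gbps_per_bucket = LINK_GBPS / max(buckets_per_link)       # 5G per bucket at saturation
effective = sum(min(LINK_GBPS, n * gbps_per_bucket) for n in buckets_per_link)
print(f"Effective aggregate throughput: ~{effective:.0f}G of {LINKS * LINK_GBPS}G")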

It would be possible to keep all the Ethernet uplinks as separate links and let the FI do static pinning of each server to a particular uplink. I am not familiar with the details of the pinning method, and am not sure whether this would result in full utilization of all links.

There is a bigger chance of imbalanced traffic, as there are only 160 servers which need to be distributed among 10 uplinks, whereas the balancing of traffic over the members of a port-channel can be done based on the sessions of those servers (and the virtual machines running on them).

Hi e:

You make an interesting point. I was concerned about whether my numbers were right and whether there was anything about UCS functionality that impacted my design - and you mentioned one such thing. Thank you.

But I do find something interesting....whenever you read Cisco's data sheets for UCS, they say things like, "a UCS 6140 FI can support up to 40 chassis with 8 servers each (320 servers)." Yes, that's true, if each FEX has only one uplink. And that's OK, that may be fine for some. HOWEVER, what about the FC links??? They are never mentioned at all.

Or am I missing something? Whenever oversubscription and quantities of ports are discussed, the ports required to support native FC to the SAN are never mentioned or shown in any diagrams. And I think that is a big deal, because it drastically impacts the number of chassis and oversubscription ratios that UCS can indeed support. I had to use a 6296 FI - the largest one available - just to get 20 chassis (with 20 ports left over, so they go to waste, because UCSM only supports 20 chassis at a time, although I hear that is changing soon). If I used the 6248, I would only be able to support 11 or 12 chassis and 96 servers.
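For context, here is one way the port count on a 6296 could add up for that design. This is only a sketch; the Ethernet and FC uplink counts are inferred from figures quoted elsewhere in this thread, not taken from a datasheet:

# Rough port budget for a 6296 (96 ports) in the design discussed here:
# 20 chassis x 2 IOM uplinks per fabric, plus LAN uplinks and native FC uplinks.
TOTAL_PORTS = 96

chassis_ports = 20 * 2     # 2 x 10G from each chassis' IOM to this FI
eth_uplinks = 10           # 10 x 10G LAN uplinks per FI (figure used above)
fc_uplinks = 25            # ~200G of 8G FC per fabric => 25 ports (assumption)

used = chassis_ports + eth_uplinks + fc_uplinks
print(f"Used: {used} ports, left over: {TOTAL_PORTS - used}")   # Used: 75, left over: 21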

Is all this accurate? We really like UCS for its UCSM and its general approach to things, but we need to understand what kind of architectures we can build to make sure its for us....was hoping someone from Cisco would continue answering my questions....

In regards to chassis scale - the 6140 has 40 fixed ports plus 2 expansion modules, which can be 6/8 x FC, 4 FC + 4 Eth, or 6 Eth configuration options. Also, we lowered the scale of a single UCS domain from 40 to 20 chassis.  We did this as the best scaling point for uplink connectivity as well as size of failure domain.  We no longer tout 40 supported chassis in a single domain.  To address any scaling concerns, we have UCS Central coming out towards the end of the year, which will manage multiple UCS domains - allowing you to share/manage policies between domains.

The 6296 is geared more towards deployments that utilize the 2208 IOMs (eight-port IOMs).  Any additional ports you have free can be used for SPAN sessions, appliance ports, direct-connect storage, etc.  I assume your SAN team knows what they're doing in regards to oversubscription demands.  You must have some seriously high-IO applications to require 200Gb of redundant FC uplink bandwidth.  Be sure to check that there are no other bottlenecks in your SAN fabric that would make this overkill.

I do agree with e. that Ethernet vPC and FC port channels are the way to go.  You'll gain much higher utilization and efficiency with your uplinks.  With dynamic pinning you can potentially have a heavy vEth monopolize a single uplink.  With a port channel you can rely on LACP & IP hashing to help avoid single-link congestion.

There are a bunch of decent white papers that look at various deployment considerations, which you may wish to have a look over.

http://www.cisco.com/en/US/netsol/ns944/networking_solutions_white_papers_list.html

http://www.cisco.com/en/US/products/ps10276/prod_white_papers_list.html

Let us know if you have further questions.

Robert

Vendors (any vendor, for that matter) always tend to specify the absolute maximum of each specific feature in their specs. If you want to combine a number of features, the maximum of each of those features is seldom achieved. Always look critically at specs and you will be fine.

Further, I agree with Robert. Are the SAN targets capable of delivering that 200Gb of IO? Having 160 servers with, for example, 2 x 32Gb (4 x 8G) = 64 Gbit is already a lot of bandwidth.

Folks, that is perhaps where we disagree. I don't think 200G of FC (the fact that it is redundant is a requirement of SAN air gap and has nothing to do with my design) for 160 servers is much at all. That's 1.2G per server. A server typically uses its own 8G HBA! So how is this too much?? The 200G gives a 4:1 OS ratio from server to SAN handoff. These are pretty common requirements. Think of a legacy environment with separate FC and Eth switches. An HP or Dell blade chassis with 16 servers (and 16 x 8G HBAs) would have an FC switch that would typically have 4 uplinks...this is common. That's a 4:1 OS ratio.
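For reference, the arithmetic behind those two ratios. The UCS-side figure assumes each server is counted at its default 5G FCoE allotment per fabric, which is just my reading of the 4:1 claim; the legacy example uses the numbers stated above:

# Legacy example: 16-blade chassis, each blade with an 8G HBA, and an FC switch
# with 4 x 8G uplinks.
legacy_down = 16 * 8     # 128G of server-facing FC
legacy_up = 4 * 8        # 32G of uplinks
print(f"Legacy chassis OS ratio: {legacy_down / legacy_up:.0f}:1")    # 4:1

# UCS case in this thread: 160 servers against 200G of FC handoff per fabric.
SERVERS = 160
FC_HANDOFF_G = 200
print(f"Per-server share: {FC_HANDOFF_G / SERVERS:.2f}G")             # 1.25G (quoted as ~1.2G above)

# Counting each server at its default 5G FCoE allotment (assumption, see above):
print(f"UCS OS ratio: {SERVERS * 5 / FC_HANDOFF_G:.0f}:1")            # 4:1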

So why did I use the 6296? Because the max management domain is 20 chassis, and if I am going to buy a UCS, I want to maximize its usage. So I use a 6296, get the 20 chassis I need, and just accept that I have 20 ports extra....maybe they can be used for something else in the future....not sure what.

You can't just look at oversubscription as equaling 1.2G of FC pipe per server.  This is under the assumption that EVERY one of your 160 servers is maxed out at the exact same time.  Not likely or realistic in 99% of environments. Most traffic at that rate is bursting to capacity at best; it's hardly sustained.  I'd be very curious whether your SAN fabric and/or target could even support this rate - unless you have 8 or more target interfaces and your storage switch interfaces are operating in dedicated rate mode...  Can you detail the rest of your fabric topology (SAN switch & array models)? This would give more insight into the big picture.

Have you taken any performance & traffic data from your existing system to see what it's currently capable of?  Before you size your new UCS deployment you should have facts on how the existing system is performing, including capacity limitations.  Just because you "can" doesn't mean you need to.  Oversubscription needs to be viewed as a balance between actual and desired performance, and needs to be designed end-to-end rather than at a single hop.

Robert

Robert, 1.2G is not a maximum...these are 8G HBAs...so it is quite conceivable that a server is using only 1/8th of the capacity of a single HBA.

Anyway, our network today is operating with a 4:1 OS ratio from server to SAN. This has been the case for the last 7 years. We will be virtualizing our servers with greater VM density, so the storage I/O is expected to increase if anything.

Upon discussing this with an FC guru I know, I will revise my contention that 4:1 is typical for a first-hop OS ratio. 8:1 is more the average...
