About UCS architecture

Yea9632
Level 1

Hi,

I'm posting here because I'm a little confused about data center architectures.

What's the difference between using Nexus switching solutions and UCS solutions (a unified Fabric Interconnect as a switch instead of a Nexus series switch)?

As far as I know, Nexus switches are more powerful, with many features, while at the same time UCS is the new generation of data center systems.

I'm just wondering what the benefit is of using the first rather than the second, and vice versa (features maybe? If so, which features exactly?).

One last question, which I could answer myself if I get answers to the questions above: which solution is optimal for cloud service providers? (I know that many providers use UCS blades as servers, but for switching they use Fabric Interconnects, the 6120XP for example. Why use that one instead of Nexus switches?)

All these questions are just so I can justify the chosen solution and explain why.

Thank you in advance.

11 Replies

Walter Dey
VIP Alumni

UCS is a solution which consists of several hardware components, most importantly blade and/or rack-mount servers, which are interconnected by means of Fabric Interconnects. Fabric Interconnects are Ethernet (L2) and Fibre Channel switches based on the N5k hardware architecture; however, the N5k and the UCS FI are not exactly identical: the FI software is a superset of NX-OS, and the management application, called UCS Manager, also runs on the FI. And, strangely enough for networkers: conf t doesn't work; only show commands are available.
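To make that concrete, here is what a session on the FI's underlying NX-OS shell might look like (an illustrative sketch; the hostname in the prompt is made up):

```
UCS-A# connect nxos                  <- drop from the UCS Manager CLI into the NX-OS shell
UCS-A(nxos)# show interface brief    <- show commands work as on a regular Nexus
UCS-A(nxos)# configure terminal      <- config mode is not available; the FI is
                                        configured through UCS Manager instead
```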

Cloud providers could use UCS for their compute and network infrastructure; the Northbound connectivity from UCS would attach to any Nexus 5k, 7k, ACI (N9k) and / or MDS (FC).

see attachment

Hope that helps!

We have a similar architecture (in the company where I'm doing my project now),

but it seems that we have an access layer only (or aggregation/core collapsed into one layer?), where both FIs are connected to the LAN (I didn't show the connection to the LAN in this diagram). Please look at the attachment.

In your example, where can I add the LAN connections? On the Nexus 5k?

I'm really confused about how to choose switches (and justify those choices) and where to put them:

Nexus 7k: core, or core + aggregation consolidated.

Nexus 5k or Fabric Interconnect: aggregation and access (can I use an N5k directly connected to my chassis in the access layer?)

 

As Walter explained, there is software called UCS Manager (UCSM) which runs on top of the Fabric Interconnects. This software gives you the option to manage your UCS blade servers (and any rack server integrated with it). In the UCS context, when we say manage, we mean that you need UCSM to administer your blades, which need a Service Profile to work.

A "Service Profile" is a group of IDs (MAC addresses, WWxNs, UUID, etc.), policies (boot order, local disk configuration, firmware to be run by the servers) and pools (management IP addresses, server pools, WWxN addresses) that together bring your server to life. Without a UCS Fabric Interconnect, your servers are just pieces of hardware.
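As a rough illustration, creating and associating a Service Profile from the UCS Manager CLI looks something like the sketch below (the profile name and chassis/slot numbers are made up for the example):

```
UCS-A# scope org /
UCS-A /org # create service-profile web-server-01 instance   <- container for the IDs and policies
UCS-A /org/service-profile # associate server 1/1            <- bind the profile to chassis 1, slot 1
UCS-A /org/service-profile* # commit-buffer                  <- UCSM applies changes transactionally
```

In practice you would normally build the profile from a template so the MAC/WWxN/UUID pools and policies are filled in automatically.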

UCS Fabric Interconnects are based on the Nexus 5k architecture because we needed the servers to be able to switch their traffic (at Layer 2), but this does not mean that you can attach any server to the Fabric Interconnects, and your Fabric Interconnect cannot be used as a regular access-layer switch either; they are only intended to have a blade server chassis or a rack server below them, nothing else.

Nexus switches (in general) are data center switches that can switch traffic at very high speeds and provide the additional features that I am sure you are already aware of (FabricPath, OTV, etc.).

I hope this gives you a better understanding about UCS but let us know if you have more questions.

 

-Kenny

Thank you Kenny & Walter for your replies.

So now I'm aware that I need a Fabric Interconnect when using UCS blades, especially to manage them, because there's no way to use UCS blades without Fabric Interconnect switches.

But when Walter says that in my diagram there are no connections to the data center, I'm just confused! I asked the manager to give me access to the data center, so what I have shown in the diagram is exactly what I found there. So, for me, this is the whole data center (since there are many centralized virtual servers)?

In that case, maybe there are one or more Nexus switches elsewhere that I didn't find?

 

Or maybe they are not clear about what a data center diagram really means or what you really want to see. I usually ask for details, and I don't always get everything I need to work on a case.

 

-Kenny

Actually this is my own diagram; I made it after visiting what they call the "Datacenter Room", but all I found was just what I have shown in this "diagram".

So OK, let's say that in this diagram we have the UCS components only. Then I have to connect the Fabric Interconnects to a Nexus 5k and connect that one to the LAN (LAN users should have access to the virtualized servers), and in that case it's a complete data center(?)

You connect the FIs to an upstream switch to have connectivity to the outside world. That upstream switch is often a pair of Nexus switches in vPC so that you have redundancy.
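To sketch what that upstream pair looks like, here is a minimal NX-OS vPC configuration on one of the two Nexus switches (the domain ID, interface numbers, and peer-keepalive addresses are illustrative, not taken from this thread; the second switch gets a mirrored configuration):

```
feature vpc
feature lacp

vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1   ! keepalive to the vPC peer switch

interface port-channel100
  description vPC peer-link to the second Nexus
  switchport mode trunk
  vpc peer-link

interface port-channel11
  description Uplink to UCS Fabric Interconnect A
  switchport mode trunk
  vpc 11                                                ! same vPC number on both peers

interface Ethernet1/1
  switchport mode trunk
  channel-group 11 mode active                          ! LACP towards the FI uplink ports
```

With this in place, each FI sees one logical port-channel even though its links land on two physical switches, which is what gives you the redundancy Kenny mentions.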

 

HTH ,

(remember to rate any useful posts that you find here)

-Kenny

my last question 

What about LAN users? Do I have to connect them to the same upstream switch?

 

Have a look at

http://www.cisco.com/c/en/us/support/docs/servers-unified-computing/ucs-6120xp-20-port-fabric-interconnect/110267-ucs-uplink-ethernet-connection.html

http://www.cisco.com/c/en/us/products/collateral/switches/nexus-7000-series-switches/white_paper_c11-623265.html

https://www.vmware.com/files/pdf/products/nsx/vmware-nsx-on-cisco-n7kucs-design-guide.pdf

 

Your diagram only shows the UCS components; there is no connection to the data center!

The UCS Fabric Interconnects (61xx or 62xx) are mandatory for connecting the blade chassis (southbound links); the northbound links connect to the data center. It is best practice to run a vPC from each FI to two northbound N5ks.

Rack-mount servers (Cisco as well as non-Cisco) can of course also be connected directly to any northbound Nexus switch.

I wouldn't worry too much about which box goes in which layer. Core - distribution - access in the data center is now changing to spine - leaf (e.g. look at the Cisco ACI fabric), which optimizes East-West traffic as well as mobility.

Please read my answer to Kenny.

Thank you 
