
UCS Fibre Channel SAN Considerations



This document captures the key aspects of connecting UCS to a Fibre Channel SAN, to increase the chances of your UCS working with a SAN the first time.

The outline of this document should give you a robust framework for developing your own methodology for UCS and SAN design, with enough additional detail and examples in case you are new to these concepts.

Here's what you'll find in this document: a checklist of considerations for using Cisco UCS with a Fibre Channel SAN.

FC Array: Array Certification, Array Performance, Array Configuration, Array Sizing, Array Mapping, Array Connectivity

FC Fabric: Switch Zoning, Switch VSANs, Switch NPV, Switch Connectivity

UCS: UCS End Host Mode, UCS Expansion Slot, UCS Identity, UCS vHBAs, UCS Boot Policy, UCS Qualification, UCS FC Failover, UCS Connectivity
Important Notes

  • Be sure to bookmark / RSS this document because it's alive!  It will be updated and spawn individual documents for particular SAN array and switch types. I envisage a document for each configuration - one for UCS + MDS + EMC, another for UCS + MDS + NetApp, etc.
  • Get involved! This is a collaborative effort and you are warmly invited to contribute either by feedback (positive or otherwise) or contribution.
  • Got feedback? Make a comment at the bottom - it's the quickest way to get in touch.  Or use the forum's People section to contact the author.

  • Considerations - key points to discuss and agree on.  Note that this document seeks to provide a framework/checklist: not every point is covered in detail, but the key points are explained (want a complete guide?  Contact the author).

Not Covered

  • UCS basics - need some?  Go here
  • SAN basics - need some?  Go here
  • VMware vSphere basics - need some?  Go here
  • SAN Performance Design - this is a basic connectivity guide
  • iSCSI SAN - not supported at the time of writing.


Assumptions

  • Your architecture is a basic UCS system with two 6120XP fabric interconnects and one 5108 chassis containing two 2104XP FEX and eight half-slot B200 M1 blades.
  • Your Operating System (on each blade) is VMware vSphere 4.

Intended Audience

I've written this to appeal to a wide audience rather than the propeller-head geeks who probably know all this anyway :-)  It's informal and tries to communicate the key concepts rather than disappearing down into the weeds.  I hope I've achieved my goal, but in case I haven't I'll keep improving this document based on the feedback I get.

There are three audiences:

  • System architect who brings together UCS and SAN and produces a design and bill of materials.
  • Compute Subject Matter Expert (SME) who will be building/running the UCS.
  • Storage SME who will be building/running the SAN + Array.  We assume this person also manages the FC switch fabric.

The audience is assumed to have working knowledge of UCS and SAN switching and arrays.  If you don't, then you'll have to look elsewhere for that (perhaps we can provide guidance to get that knowledge?  If you need help, contact the author via the Support Forums).


Steve Chambers, Solution Architect, Unified Computing Practice, Cisco.

I work for Cisco's Unified Computing Practice - if you would like me or one of my colleagues to help you with this topic, contact me directly or talk with your friendly local Cisco account manager.  We can help you with the following services, and more, to get you there faster the first time.


  • You are reading UCS Fibre Channel SAN Design Considerations - available on the web at
  • Watch out for following on documents that provide blueprints for actual configurations based on these considerations.


This is not an official Cisco document: use it at your own risk, your mileage may vary, prices go up and down, red sky at night shepherd's delight.


Why connect UCS to a SAN?  The B-series blades can have disks installed in them, so why not use those?  The reason not to install the OS on the local disks is to achieve stateless computing: any blade can be deployed with an identity and purpose, then later redeployed with a new identity and purpose.  It also allows very quick recovery from blade hardware failure.  Stateless computing can be achieved with UCS by using techniques such as UCS Service Profiles and SAN Boot.

Service Profiles are logical representations of a compute node (or blade).  They define three groups of characteristics:

  1. Identity - UUID, MAC addresses, pWWN/nWWN etc
  2. Hardware - vNICs, vHBAs etc
  3. Behaviour - boot policy (local, SAN, network, etc).

In relation to SAN, each group of characteristics defined in a Service Profile needs to be thought about:

  1. Identity - how will I identify my compute node (blade) on the SAN?  - these are the pWWN and nWWN names.
  2. Hardware - how will my blade connect to the SAN? - these are the number, type and connectivity of the vHBAs
  3. Behaviour - how will my blade use the SAN? - boot from SAN is a good example.
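
As a mental model (this is illustrative Python, not the UCSM object model - every name and value here is invented), the three groups of characteristics can be sketched like this:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VHBA:
    name: str    # e.g. "fc0"
    fabric: str  # "A" or "B"
    pwwn: str    # the port World Wide Name the SAN will see

@dataclass
class ServiceProfile:
    # 1. Identity - how the blade is known (UUID, nWWN, per-vHBA pWWNs)
    uuid: str
    nwwn: str
    # 2. Hardware - how the blade connects (number and fabric of vHBAs)
    vhbas: List[VHBA] = field(default_factory=list)
    # 3. Behaviour - how the blade uses the SAN (e.g. boot policy)
    boot_policy: str = "san"

sp = ServiceProfile(
    uuid="0000-0001",
    nwwn="20:00:00:25:b5:00:00:01",
    vhbas=[VHBA("fc0", "A", "20:00:00:25:b5:aa:00:01"),
           VHBA("fc1", "B", "20:00:00:25:b5:bb:00:01")],
)
```

Two vHBAs, one per fabric, is the pattern assumed throughout this document.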

To explore answers to these three characteristics, it is useful to first establish the purposes of the storage:

  1. I want stateless computing through boot from SAN.
  2. I want all of my virtual machines, applications and data to be stored on the SAN.

With the purposes agreed, let's explore the three SAN components (UCS, Switches, Array) starting at the Array end.  NOTE: it is easier to start from the array end, trust me! :-)

FC SAN Array Considerations

Consider and explore the following checklist, more detail below.

Array Certification Is the array certified with VMware vSphere?
Array Performance Is the array the correct tier for the workload you need?
Array Configuration Are you aware of the array vendor's configuration instructions for use with VMware vSphere?
Array Sizing How will the LUNs be configured - size, RAID, VMs/LUN?
Array Mapping How do you map LUNs to initiators - masking, initiator groups?
Array Connectivity What does the logical and physical connectivity look like - LUNs to Storage Processors, and Storage Processors to Fabric Switches.

Array Mapping LUNs -> Storage Processors -> Blades

It is critical, before you do anything else, to understand how your LUNs map to Storage Processors, how those Storage Processors connect into the SAN, and how your hosts (the UCS blades) connect through the SAN to those LUNs.  This can be a chicken-and-egg situation when it comes to mapping LUNs to Hosts: you might not have configured the Hosts yet, but you can't configure the Hosts to see the LUNs if you haven't configured the LUNs.  So how do you do it?

First of all, you need to understand how LUNs map to Storage Processors in your array.  Do you have an Active/Active or Active/Passive array?  If you have an EMC array, then your DMX is Active/Active and your Clariion is Active/Passive.  Why does this matter?  In an Active/Passive array your vendor might direct you to map LUNs to specific primary storage processors.  The best way to access such a LUN is then via its primary storage processor: you can probably still access it via the secondary storage processor, but there is a performance cost because the traffic has to traverse an internal bus.  Worse, if something persists in accessing a LUN via the secondary storage processor, you will cause a LUN Trespass, and your LUN may be re-assigned to the other storage processor.  If other devices still want to access your LUN via the original primary, you get more LUN Trespass, deteriorating into LUN Thrashing, where your LUN ping-pongs between storage processors and severely degrades the performance of your array.

vSphere 4 is a lot smarter than previous versions: it is pre-programmed to recognise the array you connect (without you telling it!) and knows how to handle Active/Passive arrays.  The takeaway here is to let the technology work it out for itself, but be aware of how things work.  Trouble usually only happens when us dumb humans try to make a system behave in a non-default manner when we don't know any better.  You have been warned!

Array Connectivity

Remember that you have two separate fabrics.  Each blade has two HBAs connecting to separate fabrics.  There are at least two separate FC switches, one for each fabric.  When you get to the array, what happens next depends on the array model.

If the array is low tier and has just a couple of FC ports, then you are likely to connect Fabric A to Storage Processor A (or 0) and Fabric B to Storage Processor B (or 1).

If you have more ports, say four, then you are likely to connect each switch to a port on each SP - i.e. for high availability, each fabric can see both SPs.

For high-end devices like the HDS 99xx and EMC DMX-x, you have ports coming out of your ears and it's likely you don't need to worry about it :-)
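
As a sketch of the four-port case (port and SP names are invented), each fabric connects to one port on each storage processor, so losing a fabric or an SP still leaves a path to every LUN:

```python
# Cross-connect for a four-port array: each fabric sees both SPs.
connections = {
    "fabric_A": ["SP-A_port0", "SP-B_port0"],
    "fabric_B": ["SP-A_port1", "SP-B_port1"],
}

def sps_visible(fabric):
    """Return the set of storage processors reachable from a fabric."""
    return {port.split("_")[0] for port in connections[fabric]}
```

In the two-port case, each fabric sees exactly one SP instead, and failover between SPs relies on the host's multipathing.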

Key takeaway: understand the behaviour of your array, but let vSphere handle it automatically.

With that out of the way, you need to work out how to limit access to your LUNs.  There are two considerations here: boot LUNs and data LUNs.

  • Boot LUNs - you only ever want one host to access a given Boot LUN at any one time.  Unless you are particularly charitable in nature and expect everyone to play by the rules, you should mask/map each Boot LUN to one specific host.  You can do this on the array once you know the port World Wide Names (pWWNs) of your host - but you don't know those until you've configured your UCS system, so leave this for now.
  • Data LUNs - when running vSphere, these are the LUNs that will hold the VMFS file system.  VMFS is a concurrent-access filesystem so, unlike a boot LUN, you want more than one host to have access.  You are likely to configure a bunch of LUNs for a cluster, so you will want to define a cluster of hosts and a cluster of Data LUNs (VMFS) and make sure each LUN is masked/mapped to the cluster of hosts.
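
The two rules above can be sketched as a masking table (the pWWNs and LUN names here are invented):

```python
# LUN masking sketch: boot LUNs are exposed to exactly one initiator,
# data (VMFS) LUNs are exposed to every host in the cluster.
cluster = {
    "esx1": "20:00:00:25:b5:00:00:01",
    "esx2": "20:00:00:25:b5:00:00:02",
}

masking = {
    "boot_esx1": {cluster["esx1"]},            # one host only
    "boot_esx2": {cluster["esx2"]},            # one host only
    "vmfs_datastore1": set(cluster.values()),  # whole cluster
}

def may_access(lun, pwwn):
    """True if the initiator pWWN is masked in for this LUN."""
    return pwwn in masking[lun]
```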

FC Switch Considerations

Consider and explore the following checklist, more detail below.

Switch Zoning What zoning will you use?  Single-initiator zoning is best for VMware vSphere.
Switch VSANs Will you be using VSANs?  Only available on Cisco MDS and great at reducing under-utilized islands of FC fabric.
Switch NPV You need NPV on your switches.  Find out more about NPV.
Switch Connectivity to UCS and Arrays, both logical (VSAN) and physical (port).

More detail on Switch Zoning

Zoning is crucial for correct functioning and adequate performance of your system.  For VMware deployments you should use single initiator zoning: this is a really simple concept of configuring the SAN switch to only let your HBA see the storage processors it needs - and not other devices on the network, such as other hosts' HBAs.

Remember that you have two HBAs connected to two separate, "air gapped" fabrics: each HBA connects to a different FC switch, so in each fabric your host is represented by only one HBA - hence, single-initiator zoning.

The key to successful zoning is not being lazy or crazy :-)

  • DO NOT configure a "cluster zone" with all the hosts' HBAs and the storage processors in one zone.  That's lazy.  All it takes is one host issuing BUS RESETs on the SAN and all your hosts will crawl to a halt.
  • DO NOT avoid zoning completely.  That's crazy.  Your host will be able to see every device on the SAN and be affected by what they do.
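
To make single-initiator zoning concrete, here is a sketch (Python rather than switch CLI, with invented pWWNs) of how the zones pair up on one fabric - one zone per HBA, containing that HBA plus the storage processor ports it needs, and never another host's HBA:

```python
def single_initiator_zones(hba_pwwns, target_pwwns):
    """Build one zone per initiator: the HBA plus its targets, nothing else."""
    return {f"zone_{hba}": {hba} | set(target_pwwns) for hba in hba_pwwns}

hbas = ["20:00:00:25:b5:aa:00:01", "20:00:00:25:b5:aa:00:02"]
sp_ports = ["50:06:01:60:00:00:00:01"]  # SP port on this fabric (invented)

zones = single_initiator_zones(hbas, sp_ports)
# No zone contains more than one host HBA, so one host's BUS RESETs
# cannot disturb another host's traffic.
```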

More detail on Switch VSANs

You only get VSANs on Cisco's MDS, not McData nor Brocade.  VSANs are simple and similar to VLANs.  Here are some considerations:

  • Currently each FC port northbound of UCS (ie. the port from the UCS fabric interconnect GEM to the FC switch) can only be in one VSAN.
  • The VSAN that your UCS connects into can be different from the VSAN that contains your FC Array if you use Inter-VSAN Routing (IVR).  Again, Cisco MDS only.
  • You tell UCS which VSAN your northbound FC ports are in, and you also connect HBAs to VSANs - all via the UCSM GUI, and more on that later.

More detail on Switch NPV

Because UCS is a really big host with more blades than FC ports, it has to "trunk" multiple blades' FC traffic down those over-subscribed ports while still uniquely identifying each blade (HBA) on the FC SAN.  This is what NPV does, and all you really have to worry about is that your upstream FC switches support NPIV and that it is turned on.

More detail on Switch Connectivity

Remember you have one physical switch per fabric UNLESS you are using Cisco MDS, where VSANs let one physical switch carry multiple virtual fabrics.

If you aren't using VSANs, then the connectivity is quite simple - UCS northbound to a switch port, then switch ports to one or more storage processors on the array.  Do your zoning based on pWWNs (HBAs and SPs).

If you are using VSANs, then the connectivity is still simple!  A UCS port cannot (currently) "trunk" VSANs over one FC port, so you still only get one VSAN per port out of UCS.

UCS Considerations

Consider and explore the following checklist, more detail below.

UCS End Host Mode what UCS looks like on the SAN (a big host!)
UCS Expansion Slot you need a specific Gigabit Expansion Module (GEM) to connect to a FC SAN.
UCS Identity you need to think about pWWN and nWWN pools.
UCS vHBAs how will blades connect to the SAN?
UCS Boot Policy boot from SAN configuration.
UCS Qualification the UCS HBA drivers are included in ESX, but is ESX qualified against your array?  Check here.
UCS VIFs and VFCs how the UCS switchboard works.
UCS FC Failover HBA, FEX, FI etc.
UCS Connectivity from Blade through the FEX and the 6120s to the switch to the array - putting it all together.
UCS QoS Quality of Service for FC traffic through UCS.

More detail on UCS Identity

Each blade has a mezzanine card in it containing a Converged Network Adapter (CNA), and that CNA has Qlogic/Emulex/Cisco HBAs that are configurable in the BIOS and presented to the Operating System.

These HBAs need unique port World Wide Names (pWWNs) to identify them on the FC Fabric.  In UCS, you could use the hard burned-in pWWN, but for stateless computing you instead use a logical Service Profile to tell the HBA what its name is.  You can do this manually per Service Profile, or you can use a pool.

Use a pool to allow the Service Profile to pick a unique identity for itself, and remember that identity - it's what you're going to need to do your Array Mapping/Masking to expose LUNs to the HBA.
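
A sketch of how a pWWN pool hands out sequential identities (the 20:00:00:25:b5 prefix is commonly used for UCS pools, but treat these exact values as examples):

```python
def pwwn_pool(prefix="20:00:00:25:b5:00:00", start=1, size=8):
    """Generate sequential pWWNs from a base prefix, one per vHBA."""
    return [f"{prefix}:{i:02x}" for i in range(start, start + size)]

pool = pwwn_pool()
# A Service Profile picks the next free identity from the pool;
# these are the names you mask/map LUNs to on the array.
```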

More detail on UCS vHBAs

We've already spoken (above) about identifying your UCS blade (HBA) on the SAN, but there's more to an HBA than just its identity.  Think about the following:

  • Do you need to tune Queues?
  • Do you need to tune Interrupt Handling?
  • Do you need to turn off Performance Enhancements?
  • Do you need to turn on Receive Side Scaling?
  • Do you need to tune Failover Timeout?

The answer to all of the above is No.  But the last question is interesting - how does failover work?  (See below!)

More detail on UCS Boot Policy

The blade BIOS dictates the boot order, and UCS Service Profiles configure the boot order via a Boot Policy, which you set in UCSM.  Once you attach a Service Profile to a blade, then UCS configures the BIOS as per the Boot Policy.

It's normal (not sure if it's best?) practice to put the boot order like this:

  1. CD/DVD
  2. Floppy
  3. HBA (SAN)

In our case, you need to configure to boot from the HBA - but how does the HBA know where to boot from?
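
A sketch of what a SAN boot policy has to capture (the target pWWN and LUN here are invented): beyond the device order, the HBA is told which storage processor port and LUN to boot from, and UCS programs that into the HBA BIOS when the Service Profile is applied.

```python
# Boot policy sketch: order of devices, plus the SAN boot target details.
boot_policy = [
    {"device": "cd-dvd"},
    {"device": "floppy"},
    {"device": "san",
     "vhba": "fc0",
     "target_pwwn": "50:06:01:60:00:00:00:01",  # SP port (invented)
     "lun": 0},
]

# The HBA "knows where to boot from" because the Service Profile's boot
# policy carries the target pWWN and boot LUN down into the HBA BIOS.
san = next(e for e in boot_policy if e["device"] == "san")
```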

More on VIFs and VFCs

More on FC Failover

More on UCS Connectivity (putting it all together)

More on UCS QoS