
Ask The Expert: Architecture and Design of Storage Area Network (SAN)

ciscomoderator
Community Manager

With Seth Mason

Welcome to the Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about the design and architecture of Storage Area Networks (SANs) with expert Seth Mason. This interactive session will focus on multi-protocol implementations such as those involving Fibre Channel over IP (FCIP), Fibre Channel over Ethernet (FCoE), Fibre Connection (FICON), or Internet Small Computer Systems Interface (iSCSI), as well as topics ranging from deploying Virtual SANs (VSANs) and bandwidth management to more advanced subjects such as Storage Media Encryption and Data Replication and Migration.

Seth Mason is a technical leader with Cisco's Enterprise Data Center Practice. His areas of expertise are storage area network architecture and migration, disaster recovery, interoperability, and Fibre Channel over Ethernet (FCoE). Mason has written several books, including "MDS-9000 Family Cookbook for SAN-OS 3.x" and "The Cisco Storage Networking Cookbook," and has several pending patents. He was a member of the team that authored the CCIE exam in Storage Networking, and he has been presenting at Cisco Live since 2006. He holds a bachelor's degree in computer engineering from Auburn University as well as CCIE certification #20847 in Storage Networking.

Remember to use the rating system to let Seth know if you have received an adequate response. 

Seth might not be able to answer each question due to the volume expected during this event. Remember that you can continue the conversation on the Data Center sub-community discussion forum shortly after the event.   This event lasts through February 10, 2012. Visit this forum often to view responses to your questions and the questions of other community members.

19 Replies

e.nieuwstad
Level 1

Seth,

For our new office we have purchased four UCS chassis with two 6248 Fabric Interconnects (FIs) and two MDS 9124s to connect to the NetApp FC storage. The primary FI will connect to Fabric A (consisting of one 9124) and the secondary FI will connect to Fabric B (the other 9124). The NetApp will have a connection to each of the fabrics.

What is the best practice for zoning? I have read about giving each HBA a dedicated zone with all of the storage targets. Can you please give some advice?

Regards

Eelco

Eelco,

I would recommend that on each 9124 you use three practices (there's a short CLI sketch after the list):

  1. Single Initiator Zoning:  This is the practice by which you zone one initiator (HBA) with all of the storage ports that it will be accessing. Using a naming convention that incorporates the hostname and the HBA instance will help you identify the contents of each zone.
  2. Enhanced Zoning:  This standards-based feature gives you the ability to verify your zone changes before committing them, abort them in case you make a mistake, and prevent multiple people from making changes simultaneously, all within the scope of a single VSAN.
  3. Device Aliases:  Instead of using fcAliases, device aliases provide a plain-text name to pWWN mapping that is independent of zoning.  You can still zone by device alias, but it enables you to move devices between VSANs and still maintain the same 1:1 name mapping.  It is also used by other applications such as DCNM and IVR, and many CLI outputs embed the device alias into the command output.  Think of it like DNS for your fabric.
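
To make that concrete, here is a rough sketch of what all three look like on an MDS. The VSAN number, pWWNs, and names below are placeholders, so substitute your own:

  configure terminal
  ! Enhanced zoning is enabled per VSAN
  zone mode enhanced vsan 10
  ! Device aliases: plain-text name to pWWN mappings, committed fabric-wide
  device-alias database
    device-alias name server1-hba0 pwwn 21:00:00:e0:8b:00:00:01
    device-alias name netapp-0a pwwn 50:0a:09:80:00:00:00:01
  device-alias commit
  ! Single-initiator zone: one HBA plus the storage port(s) it accesses
  zone name Z-server1-hba0 vsan 10
    member device-alias server1-hba0
    member device-alias netapp-0a
  zoneset name ZS-FabricA vsan 10
    member Z-server1-hba0
  zoneset activate name ZS-FabricA vsan 10
  ! With enhanced zoning the changes are staged until committed;
  ! "no zone commit vsan 10" would discard an uncommitted session
  zone commit vsan 10

You would repeat the equivalent configuration on the second 9124 for Fabric B with its own zones.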

All of these topics, including the theory of their operation and examples of their use, are covered in the Cisco Storage Networking Cookbook.

I hope this answers your question and if you need more clarification, please let me know.

-Seth

Seth,

OK, it's nice to get things confirmed by an expert. I will enable enhanced zoning, although I doubt there is a very big advantage for a single-switch fabric.

Eelco

Hello Seth,

I'm inheriting about 2000 MDS ports worth of end devices, and it seems that the prior administrator put all of them into a single VSAN. Any thoughts on whether I should move them into different VSANs, how many VSANs I should use, and any gotchas with moving to multiple VSANs that I should be aware of?

- Alexander

Alexander,

One of the benefits of going with multiple VSANs is that it enables you to reduce the size of an individual "fault domain."  This means that if something happened to one of the VSANs, let's say you made a mistake with a zoneset activation, it would only impact that one VSAN and its associated members and not the entire 2000-port fabric.  It also makes it easy to segregate the end devices into different categories (Gold, Silver, Bronze) or purposes such as Tape_VSAN and Disk_VSAN.  If you break the devices up into multiple VSANs and a host needs to access disk in another VSAN, you can leverage Inter-VSAN Routing (IVR) and create an IVR zone.  I have seen quite a few customers that, instead of using the above separation methods, use an arbitrary rule like "every 1000 HBAs, create a new VSAN" or "Storage Array #1 and all of its associated hosts go in one VSAN; Storage Array #2 will go in a different VSAN..."
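
For illustration, carving ports into VSANs and letting a host in one VSAN reach a device in another via IVR looks roughly like this. The VSAN numbers and pWWNs are placeholders, and note that IVR has to be enabled and distributed first and its own zoneset activated:

  configure terminal
  ! Create the VSANs and assign a member interface
  vsan database
    vsan 10 name Disk_VSAN
    vsan 20 name Tape_VSAN
    vsan 10 interface fc1/1
  ! IVR must be enabled ("feature ivr" on NX-OS) before these commands apply
  ivr zone name IVRZ-server1-tape1
    member pwwn 21:00:00:e0:8b:00:00:01 vsan 10
    member pwwn 50:06:0b:00:00:00:00:02 vsan 20
  ivr zoneset name IVR-ZS
    member IVRZ-server1-tape1
  ivr zoneset activate name IVR-ZS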

Moving from one VSAN to another isn't a hard process, even though it is disruptive.  You'll need to move the storage port and all of its associated or dependent HBAs to the same VSAN.  You can pre-configure all the zones and fcAliases (if you're using device aliases, you don't need to touch them) in the new VSAN beforehand.
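
A rough outline of the pre-stage-then-move flow, with placeholder names and numbers:

  configure terminal
  ! Pre-stage zoning in the destination VSAN before touching any ports
  zone name Z-server1-hba0 vsan 30
    member device-alias server1-hba0
    member device-alias array1-0a
  zoneset name ZS-VSAN30 vsan 30
    member Z-server1-hba0
  zoneset activate name ZS-VSAN30 vsan 30
  ! The actual move: reassigning an interface's VSAN flaps the link
  vsan database
    vsan 30 interface fc1/5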

When you move a port to a new VSAN, the link will flap and the device will, in most cases, get a new FCID. This will happen unless you pre-configure the domain ID and the persistent FCID to be the same in the new VSAN.  If you have AIX or HP-UX hosts, you should most definitely configure the storage ports to get the same FCID as in the old VSAN; otherwise you'll be looking at having to rescan the disks, as both of those platforms embed the FCID in the OS's device path. Either way, you should rehearse this process with a couple of test hosts in a test VSAN prior to trying it on production hosts/storage so you can see the behavior.
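
The two knobs involved are the static domain ID and a persistent FCID entry. Roughly, with placeholder values (domain 34 is 0x22, so FCIDs in that VSAN start with 0x22):

  configure terminal
  ! Pin the new VSAN to the same domain ID the old VSAN used
  fcdomain domain 34 static vsan 30
  ! Persistent FCIDs are on by default; map the storage port's pWWN
  ! to the exact FCID it had in the old VSAN
  fcdomain fcid persistent vsan 30
  fcdomain fcid database
    vsan 30 wwn 50:0a:09:80:00:00:00:01 fcid 0x220000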

Let me know if you have any other questions,

--Seth