
Ask The Expert: Architecture and Design of Storage Area Network (SAN)

ciscomoderator
Community Manager

With Seth Mason

Welcome to the Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions on the design and architecture of Storage Area Networks (SANs) with expert Seth Mason. This interactive session will focus on multi-protocol implementations such as Fibre Channel over IP (FCIP), Fibre Channel over Ethernet (FCoE), Fibre Connection (FICON), and Internet Small Computer Systems Interface (iSCSI); on deploying Virtual SANs (VSANs) and bandwidth management; and on more advanced topics such as Storage Media Encryption and data replication and migration.

Seth Mason is a technical leader with Cisco's Enterprise Data Center Practice. His areas of expertise are storage area network architecture and migration, disaster recovery, interoperability, and Fibre Channel over Ethernet (FCoE). Mason has written several books, including the "MDS-9000 Family Cookbook for SAN-OS 3.x" and "The Cisco Storage Networking Cookbook," and has several pending patents. He was a member of the team that authored the CCIE exam in Storage Networking and has been presenting at Cisco Live since 2006. He holds a bachelor's degree in computer engineering from Auburn University as well as CCIE certification #20847 in Storage Networking.

Remember to use the rating system to let Seth know if you have received an adequate response. 

Seth might not be able to answer each question due to the volume expected during this event. Remember that you can continue the conversation on the Data Center sub-community discussion forum shortly after the event.   This event lasts through February 10, 2012. Visit this forum often to view responses to your questions and the questions of other community members.

19 Replies

cybeonic24
Level 1

Seth, I am looking forward to attempting the CCIE for Storage Networking; however, I do not have much lab access. Could you point me to anyone I can rent a lab from, and to any study material or resources? I appreciate your time. Thanks.

Prasanna,

Welcome to the forum.  If you haven't had much hands-on experience with the MDS, I would start with something like Firefly Communications' CCIE Storage Lab Bootcamp. While they don't rent out racks, the week-long hands-on program could help you out.  For renting racks, you could look into someone like iementor, who have racks for rent.  You could also look at eBay for older equipment such as 9222i switches, 2 Gb HBAs, a basic FC JBOD, and a Brocade/McData switch for interoperability work.  I would also work with the CCIE Storage study group, as there are people over there in the same situation as you.

For written-exam study material, companies like Solutions Technology have courses specific to the FC protocol itself, and the accompanying books are great references that will help you understand the protocol in preparation for the written.

For the lab you should be familiar with the MDS configuration guides; you'll have access to them during the lab, but if you need to spend a lot of time reading them during the lab, you'll most likely run out of time.  Additionally, The Cisco Storage Networking Cookbook is a great resource, as it has procedures for configuring most of the features of the MDS. The SAN-OS 3.x version was a must-read for SAN-OS CCIE labs, and the current version covers NX-OS 5.2.

Thank you, and good luck with your exam.

dzorgnotti
Level 1

Seth, I just saw this topic after posting my question on the general forums:

https://supportforums.cisco.com/message/3547745#3547745

Could you please look into the matter?
Basically, the FCoE part and the connection to the existing Brocade SAN (dual fabric, two datacenters) is giving me a headache.

Regards,

Dominik

Dominik,

The first FCoE topology, in which you insert a Nexus 2k attached to a Nexus 5k in NPV mode connected to the existing Brocade fabric, is something we see deployed quite frequently, but it is implemented a bit differently from what you have below:

Since you have both "SAN A" and "SAN B" in both datacenters, what I would recommend is that in each datacenter you insert a pair of Nexus 2ks and connect each one to a different Nexus 5k, then connect each Nexus 5k to a different Brocade 5100, thereby maintaining your "SAN A" and "SAN B" model.  With the N5ks in NPV mode uplinked to the Brocade 5100s, they should be very easy to manage. Your savings will come from the consolidation of the access layer and from using the appropriate transport to go between datacenters: the Nexus 5k for traditional L2/L3 traffic and the Brocade infrastructure for your storage/Fibre Channel traffic.
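If it helps to picture the NPV piece, here is a minimal configuration sketch for one of the Nexus 5ks in that design. The interface numbers and VSAN ID are placeholders, and it assumes NPIV is enabled on the Brocade 5100 ports terminating the uplinks:

    feature npv                     ! logins are forwarded upstream; the N5k consumes no domain ID
                                    ! (note: enabling NPV erases the config and reloads the switch)
    vsan database
      vsan 10                       ! placeholder VSAN for SAN A
      vsan 10 interface fc2/1
      vsan 10 interface fc2/2
    interface fc2/1-2
      switchport mode NP            ! NP uplinks toward the Brocade 5100
      no shutdown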

Does this answer your question?

-Seth

Hi Seth,

thanks for your quick response!

The solution you describe would be my second drawing, wouldn't it? (see picture below)

I thought about using the Nexus 5548UP and Nexus 2232PP; any advice on this?

The 5548UP wouldn't need an extra line card for the FC uplinks and provides 1 Gbit capability to connect to the Cisco 6509.

And what about:

The Nexus 2000 data sheet (http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps10110/data_sheet_c78-507093.html) says:

Distance between Cisco Nexus 2000 Series Fabric Extender and Cisco Nexus 5000 Series Switch: Up to 3 km (300m for FCoE traffic)

Thanks,

Dominik

     The Nexus 5548UP + 2232PP is a combination made for this type of environment and is frequently used to expand the N5k with additional FCoE/10GbE connections.

     On the question concerning 2k-to-5k distances, I'm not aware of any method of extending the FEX farther away from the Nexus 5k. Those connections are designed to be within the same datacenter and are most often implemented within the same row, with the N2k being "top of rack" and the N5k being "end of row".  Recall that the N2k should be treated like a 'virtual linecard' of the N5k, with all traffic going to/from the parent N5k. If you need to go farther, I would leverage an N5k-to-N5k connection with SFP+ LR/ER optics for 10 km and 40 km respectively.
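     To make the 'virtual linecard' idea concrete, this is roughly what associating a 2232PP to its parent N5k looks like (the FEX number and interface ranges are placeholders):

         feature fex
         fex 100
           description rack12-ToR-2232PP
         interface ethernet 1/1-2
           channel-group 100              ! fabric uplinks from the N5k down to the FEX
         interface port-channel 100
           switchport mode fex-fabric
           fex associate 100              ! host ports then appear on the N5k as eth100/1/x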

Okay, if I can't interconnect an N2k across datacenter boundaries, I'd need two N5ks per datacenter to have some kind of redundancy within a datacenter.

This eliminates the need for any Nexus 2000, since I've got enough ports on the N5Ks.

I've changed the drawing. Is it OK now, or should all the N5Ks be interconnected with each other (like a mesh)?

Looks solid and clean.  If you decide to connect the N5ks together (north-south in your diagram), make sure that you do not allow the FCoE VLAN/VSAN across the ISL. This keeps your SAN A and SAN B isolated.
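For example (placeholder VLAN and port-channel numbers), if VLAN 100 is the FCoE VLAN mapped to your SAN A VSAN, simply leave it off the allowed list on the inter-N5k trunk:

    interface port-channel 10                  ! ISL between the two N5ks
      switchport mode trunk
      switchport trunk allowed vlan 10,20,30   ! FCoE VLAN 100 is intentionally not listed,
                                               ! so SAN A and SAN B stay isolated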

Hi Seth,

I was hoping you could do a quick overview and explain in simple words all the mess around connecting SAN storage directly to the UCS B-Series (without using a Nexus in between). Please take into account that I'm not an expert in storage systems; for example, I barely know that a basic zone configuration is needed on the Nexus to establish communication between the SAN and UCS, even if I don't know why I'm doing it. So, is it really mandatory to have an FC switch between the server and the storage? Does it help if we use UCSM 2.0? Is there any difference between 1.4 and 2.0 regarding that particular issue?

Thanks,

Tenaro

P.S. If you can include FI modes (I'm talking about End Host Mode and Switching Mode) in the answer, that would be great...

Tenaro,

Welcome to the forum.  The first area that I would look at is the Fabric Interconnect (FI) itself.  While it may look like a switch, the mode it is operating in determines how it presents its upstream connections.  The FI can operate in two modes:

  1. End device mode (End Host Mode): the FI acts like the "back of a traditional server," with Ethernet NICs and Fibre Channel Host Bus Adapters (HBAs). In this mode the FI doesn't run any traditional Layer 2 protocols like Per-VLAN Spanning Tree, and its HBA-like ports log into the upstream FC switch just like any other FC HBA, with a port type called "N_Port" (Node Port).
  2. Switching Mode: the FI acts like a switch. On the Ethernet side it can run Per-VLAN Spanning Tree, while the FC side acts like an FC switch and runs the FC fabric services, such as the zone server, name server, and FSPF (think of it as an FC version of OSPF). It connects to upstream FC switches with ISLs (E_Ports, for Expansion Port), not with links that act like HBA ports.

So, to answer your question about why you require an upstream switch in order to directly connect storage to the FI: first, you would have to run the FI in "switching mode," as that is the mode that enables the FI to act like a switch; and second, UCSM currently does not have the ability to configure the FC protocols and features necessary to enable one device to communicate with another.  For example, UCSM does not have the ability to configure zoning.  Zoning is basically an ACL that enables two or more devices to communicate within an FC fabric/VSAN.

So, since UCSM can't configure zoning, you have to rely on the fact that zoning is a distributed configuration/feature within an FC fabric, meaning that among switches in the same VSAN, the zoning configuration (called the active zoneset) is distributed to all the switches in that VSAN.  Therefore you could take a Nexus 5k (in switch mode) or a Cisco MDS switch, configure the zoning on the Nexus 5k/MDS, and the zoning configuration would then be passed over the ISL to the FI (which, if you recall, is in switching mode).
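As a rough illustration only (the VSAN, names, and pWWNs below are placeholders), the configuration you would enter on the Nexus 5k/MDS looks like this; activating the zoneset is what distributes it across the ISL to the FI:

    zone name z_blade1_array1 vsan 10
      member pwwn 20:00:00:25:b5:01:0a:01    ! vHBA of the UCS blade
      member pwwn 50:06:01:60:3b:20:11:22    ! storage array front-end port
    zoneset name zs_fabricA vsan 10
      member z_blade1_array1
    zoneset activate name zs_fabricA vsan 10
    show zoneset active vsan 10              ! verify the FI received the active zoneset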

Just as a reference, in almost all environments where UCS B-Series is deployed, there is an upstream FC-capable switch (a Nexus 5k or MDS) running in switching mode, which acts as your access layer.  Keeping the FI in end device mode means the FI is running in a less complex mode and is therefore easier to manage.

Let me know if there are parts that don't quite make sense.

-Seth

Seth, for many years Cisco has advocated ‘end of row’ MDS directors.  However, now with the Nexus 5k, I potentially have to manage hundreds of domains as our edge becomes “top of rack” and our core remains a pair of 9513s for storage connectivity.  Can you help me out here? The management headache could be bad.

    Christopher,

    You are correct in that we've advocated end-of-row directors for years, as they reduced the number of domains within your Fibre Channel SAN.  However, with the advancements in both FC and the Nexus 5k, this number isn't quite as bad as you might think.  There are a couple of ways around this:

    First, if you went with a Nexus 5k, you could run it in NPV mode.  This means that it does not consume a domain or run any of the fabric-based FC services such as zoning, name server, or FSPF; it logs into the upstream MDS 9500 as if it were a host.  Since the N5k is in NPV mode, from an FC perspective it is a low-maintenance device.  The only things that you can really do on the N5k in this mode are create VSANs, assign interfaces to VSANs, and do the initial port-channel configuration to the upstream MDS.  All the zoning, IVR, device-aliases, etc. are done on the NPIV-enabled 9500.  While this does seem like it solves your problem, it does have one drawback: the largest Nexus 5k right now is the 5596UP, so at most you could have 96 FC ports, which could be enough for a few racks depending on your server requirements.
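    That division of labor is simple to sketch (interfaces and VSAN are placeholders): NPV and the uplink port-channel live on the Nexus 5k, while NPIV, the matching port-channel, and all the zoning, IVR, and device-aliases live on the 9500:

        ! Nexus 5k edge (NPV)
        feature npv
        feature fport-channel-trunk       ! allows the NP uplinks to form a trunking port-channel
        interface san-port-channel 1
          channel mode active
        interface fc2/1-2
          switchport mode NP
          channel-group 1
          no shutdown

        ! MDS 9500 core (NPIV), plus the matching F-port-channel and all zoning
        feature npiv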

    However, there is an even better solution...

    I would look toward pairing the Nexus 5k with Nexus 2232PPs. The N2k would essentially scale up the capacity of the Nexus 5k by acting as a virtual linecard.  One additional difference is that you'd need to switch over from FC at the host to FCoE.  However, given that you already need Ethernet at the server, this consolidation would be an easy move.  The Nexus 5k would still be in NPV mode, the N2ks would be treated like N5k linecards so the hosts have no idea that there is an N2k in the configuration, and the N5k could still log into the upstream MDS 9500 with FC connections.  In this scenario you get your biggest savings at the access layer while still not encountering a massive number of domains.  I have even seen some environments where an N5k is placed "end of row" and the Nexus 2k is placed "top of rack".
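    Here is a minimal sketch of the host-facing piece of that design, with placeholder VLAN/VSAN/interface numbers; the server's CNA sits behind FEX 100, port 1:

        feature fcoe
        vlan 100
          fcoe vsan 10                    ! map the FCoE VLAN to the placeholder VSAN
        interface ethernet 100/1/1        ! CNA-facing port on the 2232PP (FEX 100)
          switchport mode trunk
          switchport trunk allowed vlan 1,100
          spanning-tree port type edge trunk
        interface vfc 1001
          bind interface ethernet 100/1/1 ! switch-side virtual FC interface for the CNA's HBA function
          no shutdown
        vsan database
          vsan 10 interface vfc 1001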

    I see the second option (server (FCoE) -> N2k -> N5k -> MDS (FC)) as an easy transition into unified fabric, since you can align it with the refresh cycle of your servers (most customers don't want to swap out HBAs for CNAs). From a management standpoint the server team will still see a pair of (virtual) HBAs in their OS, and from a network standpoint you'll have fewer domains to manage.

    Hope this clears up some of your concerns,

    --Seth

    Hello Seth,

    Nice job on the new version of the cookbook.  I’ve got two datacenters, one in Chicago and one in Denver, and I’m running SRDF/A over FCIP between them.  The FCIP performance seems a bit slow over the OC-48, and the SRDF replication is getting a bit far behind.  Do you have any tips on what I can do to increase the speed?

    - Lisa

    Lisa,

    There are three areas that I would measure: the utilization of the WAN link ("How much of the OC-48's capacity are you using?"), the utilization of the FCIP tunnels and the underlying components ("How much of the FCIP tunnels and GigE ports are being used?"), and the utilization of the RA ports on the EMC arrays.  This will help you understand where your bottleneck is.  Additionally, on the FCIP tunnels you can enable compression and write acceleration to help alleviate some of the penalty due to the WAN latency.  For write acceleration to work correctly, you will need to bind your FCIP tunnels into a single port-channel; this will work even if the FCIP tunnels go over different WAN circuits.
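    If it helps, here is a rough sketch of the FCIP-side pieces I'm describing (the profile numbers, addresses, and bandwidth/RTT values are placeholders you would set from your own measurements), plus the counters I would watch:

        feature fcip
        fcip profile 10
          ip address 10.1.1.1
          tcp max-bandwidth-mbps 1000 min-available-bandwidth-mbps 800 round-trip-time-ms 25
        interface fcip 10
          use-profile 10
          peer-info ipaddr 10.2.2.2
          write-accelerator               ! hides one WAN round trip per SCSI write
          ip-compression auto
          channel-group 1                 ! both FCIP tunnels join port-channel 1
          no shutdown

        show interface fcip 10 counters          ! FCIP tunnel utilization and retransmits
        show interface gigabitethernet 1/1       ! underlying GigE port utilization
        show interface port-channel 1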

    Once you measure some of these items, it will start to become clearer what areas need to be addressed or tuned.  Let me know what you find and I can provide additional insight.

    Thank you,

    -Seth
