In data centers, there are four main options available for storing and running virtual machines:
Local Storage (DAS)
Local storage, or DAS (Direct Attached Storage), is an option where virtual machines are stored on the local hard drives attached to rack-mount servers (such as the Cisco UCS C210 M1 rack-mount server).
Then we have the iSCSI (Internet Small Computer System Interface) option. As the name suggests, it is a mechanism to transport SCSI data over IP-based networks. With this option the storage array (or hard disk) is not directly attached to the server machine; rather, it is connected to the server (sometimes also referred to as the "compute," meaning CPU, memory, and I/O) via the iSCSI protocol.
NFS (Network File System) was in the past used mainly for file-sharing purposes, but the simplicity of the NFS protocol and the availability of 10 Gigabit Ethernet in data centers are making it a strong choice for running virtual machines as well. It is basically a shared folder on the IP network: it allows a user on a client computer to access files over a network in a manner similar to how local storage is accessed.
Fibre Channel is a widely accepted protocol used for SAN (Storage Area Networking). It has a substantially lower CPU cost than iSCSI and NFS, and it is the option discussed in this document.
SAN Protocol Choices
There are multiple choices available for connecting to SAN storage. The decision is mainly based on the storage area network you have in place in your data center and the type of SAN connectivity you are using or going to use. The most popular choices available today for transporting storage data over the network are:
Fibre Channel Protocol, and
iSCSI / NFS
Cisco's current recommendation is to use Fibre Channel to run virtualized Unified Communications applications on the UCS platform. But as data centers evolve and the 10GE fabric becomes more popular, iSCSI and NFS offer compelling alternatives as well.
For the purposes of this document, we are going to discuss Fibre Channel as our main SAN connectivity/networking protocol.
Physical Wiring Options
From a physical wiring perspective, the following option is available in the UCS architecture:
Connect the Fabric Interconnect switch to another SAN switch (an MDS or Nexus switch, for instance) and eventually to the rest of the SAN network, including the SAN storage arrays.
NOTE: For virtualized UC deployments, Cisco's recommendation is to use the multi-tier model to avoid a single point of failure. Also, directly connecting the UCS Fabric Interconnect switch to the SAN storage array is not a supported configuration.
The following network diagram illustrates a typical UC on UCS B-Series architecture in the data center.
The terms "storage server" and "storage array" will be used interchangeably in this document.
N-Port Virtualization (NPV) Mode
The fabric interconnect operates in N-Port Virtualization (NPV) mode and not as an FC switch in the fabric. This means it does not require an FC domain ID, which keeps the number of domain IDs in the SAN fabric the same. The fabric interconnect joins the fabric through a normal FLOGI, and the FLOGIs that come from the server blade adapters are translated by the NPV process into FDISCs into the fabric.
Make sure that the upstream MDS switches are NPIV enabled, and assign the selected interfaces to the Cisco UCS with the appropriate VSAN number.
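A minimal sketch of what that looks like on the upstream MDS CLI is shown below. The switch name is reused from the example output later in this document, while the interface (fc1/1) and VSAN (500) are only illustrative; on older SAN-OS 3.3.x releases NPIV is enabled with "npiv enable" rather than "feature npiv".

DC1-MDS-9124# configure terminal
! Allow multiple FC logins (FDISCs) on the port facing the Fabric Interconnect
DC1-MDS-9124(config)# feature npiv
! Create the VSAN and assign the UCS-facing interface to it
DC1-MDS-9124(config)# vsan database
DC1-MDS-9124(config-vsan-db)# vsan 500
DC1-MDS-9124(config-vsan-db)# vsan 500 interface fc1/1
DC1-MDS-9124(config-vsan-db)# exit
DC1-MDS-9124(config)# interface fc1/1
DC1-MDS-9124(config-if)# switchport mode F
DC1-MDS-9124(config-if)# no shutdown
DC1-MDS-9124(config-if)# end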
Configuring N Port Virtualization
N Port virtualization (NPV) reduces the number of Fibre Channel domain IDs in SANs. Switches operating in the NPV mode do not join a fabric; rather, they pass traffic between NPV core switch links and end devices, which eliminates the domain IDs for these edge switches.
NPV is supported by the following Cisco MDS 9000 switches only:
Cisco MDS 9124 Multilayer Fabric Switch
Cisco MDS 9134 Fabric Switch
Cisco Fabric Switch for HP C-Class BladeSystem
Cisco Fabric Switch for IBM BladeCenter
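For reference, this is roughly how NPV itself is enabled on one of the edge switches listed above (the UCS Fabric Interconnect runs its NPV/end-host mode by default and does not need this CLI). This is a sketch only: enabling NPV is disruptive, since the switch erases its configuration and reloads, and on older SAN-OS 3.3.x releases the command is "npv enable" instead of "feature npv".

switch# configure terminal
! Disruptive: the switch erases its configuration and reloads when NPV mode is enabled
switch(config)# feature npv
! After the switch comes back up, verify the NPV state and uplinks toward the core
switch# show npv status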
Configuration Screen Shots
Enable NPIV via the CLI (if you are unable to enable it via the GUI).
The MDS switch should show that NPIV is enabled, as shown in the following screen.
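If the GUI screenshot is not available, the same check can be made from the CLI. This is a minimal sketch; the exact line returned depends on whether the switch runs SAN-OS ("npiv enable") or NX-OS ("feature npiv").

DC1-MDS-9124# show running-config | include npiv
feature npiv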
The following screen shows the virtual HBA (vHBA) configuration in the UCS B-Series service profile.
Make sure the VSAN configuration is done and the VLAN ID is assigned within the VSAN (the VSAN ID is 500 in the following example).
The WWPN should be set to the correct value; otherwise the MDS will not perform switching.
In the UCS service profile:
The WWPN is assigned to the vHBA
The WWNN is assigned to the server
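Once the service profile WWPN/WWNN values are assigned, they are typically zoned with the storage array ports on the MDS. The following is a sketch only: the zone and zoneset names are made up, and both pWWNs are placeholders that should be replaced with the vHBA WWPN from the service profile and the storage array port WWPN.

DC1-MDS-9124(config)# zone name UC-BLADE1-vHBA0 vsan 500
! Placeholder vHBA WWPN from the UCS service profile
DC1-MDS-9124(config-zone)# member pwwn 20:00:00:25:b5:00:00:0a
! Placeholder storage array target port WWPN
DC1-MDS-9124(config-zone)# member pwwn 50:00:40:20:01:fc:33:4b
DC1-MDS-9124(config-zone)# exit
DC1-MDS-9124(config)# zoneset name UC-FABRIC-A vsan 500
DC1-MDS-9124(config-zoneset)# member UC-BLADE1-vHBA0
DC1-MDS-9124(config-zoneset)# exit
DC1-MDS-9124(config)# zoneset activate name UC-FABRIC-A vsan 500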
To make this work with UCS B-Series, make sure that the MDS software version is 3.3(1c) or later.
If "show flogi database" shows the UCS WWPNs' then it means the MDS SAN switch is successfully connected to UCS.
DC1-MDS-9124# show flogi database
---------------------------------------------------------------------------
INTERFACE  VSAN    FCID      PORT NAME               NODE NAME
---------------------------------------------------------------------------
fc1/1      500     0x6f0000  50:00:40:20:01:fc:33:4b 20:01:00:04:02:fc:33:4b
fc1/2      500     0x6f0200  50:00:40:21:01:fc:33:4b 20:01:00:04:02:fc:33:4b
fc1/3      500     0x6f0300  20:41:00:0d:ec:d7:20:c0 21:f4:00:0d:ec:d7:20:c1
fc1/4      500     0x6f0400  20:42:00:0d:ec:d7:20:c0 21:f4:00:0d:ec:d7:20:c1
fc1/7      500     0x6f0100  21:00:00:1b:32:8a:eb:e7 20:00:00:1b:32:8a:eb:e7

Total number of flogi = 5.
Run the following command on the UCS Fabric Interconnect switch:
vse-ucs-6120-1-A(nxos)# show npv flogi-table
No flogi sessions found.
If the UCS Fabric Interconnect switch returns no FLOGI sessions, it is not connected to the MDS, and further configuration review and troubleshooting is required. When the connection is working, this command should show some entries.
The following screen shows some important (but not all) configuration verification steps.
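For reference, a few CLI equivalents of those verification steps are listed below. This is a sketch; the exact output varies by release, and the switch and VSAN names reuse the examples above.

! On the UCS Fabric Interconnect (reach NX-OS with "connect nxos" from the UCS Manager CLI)
vse-ucs-6120-1-A(nxos)# show npv status
vse-ucs-6120-1-A(nxos)# show npv flogi-table
vse-ucs-6120-1-A(nxos)# show interface brief

! On the upstream MDS switch
DC1-MDS-9124# show flogi database
DC1-MDS-9124# show fcns database vsan 500
DC1-MDS-9124# show zoneset active vsan 500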