02-12-2013 07:29 AM - edited 03-01-2019 10:52 AM
My question pertains to the correct cabling and VSAN setup from the UCS fabric interconnects (FIs) to the SAN switches (Cisco MDS).
Currently my cabling is as follows:
FI1 port 25 - SW1 port 3
FI1 port 26 - SW2 port 3
FI2 port 25 - SW1 port 4
FI2 port 26 - SW2 port 4
On the SAN switch side, our storage admin configured SW1 as VSAN 10 and SW2 as VSAN 20. Because of this I was not able to have one VSAN per FI, as I've read is best practice, since each switch's VSAN is reachable from both FIs. Now, when I create vHBAs, I end up needing four vHBAs per server (two per FI, one per VSAN) if I want redundancy against a switch or FI loss.
Question: Is this the correct setup on the SAN switch side? Should I actually be creating four vHBAs per server? Or is my cabling wrong, and should I instead run both FI1 ports to SW1 and both FI2 ports to SW2?
Help or guidance would be appreciated!
~Tara
02-12-2013 08:01 AM
Hi Tara,
There are two ways of cabling your SAN infrastructure. It comes down to a design decision. Traditionally SAN admins prefer to keep their fabrics separate/isolated. In this design MDS-1 would only connect to FI-A and MDS-2 would only connect to FI-B.
Another option is to connect each MDS to each FI. This gives each FI access to both SAN fabrics and lets you carry both VSANs on each FI rather than keeping them isolated.
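For reference, the MDS side of either design is just VSAN creation plus port membership. A minimal NX-OS sketch, assuming the fc1/3 and fc1/4 ports from your cabling list and your storage admin's VSAN IDs (the VSAN name is made up for illustration):
MDS-1# configure terminal
MDS-1(config)# vsan database
MDS-1(config-vsan-db)# vsan 10 name FABRIC-A
MDS-1(config-vsan-db)# vsan 10 interface fc1/3
MDS-1(config-vsan-db)# vsan 10 interface fc1/4
MDS-1(config-vsan-db)# end
MDS-1# show vsan membership    <-- verify the port-to-VSAN assignments
SW2 would be identical, with "vsan 20" on its fc1/3 and fc1/4.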
You don't need four vHBAs per UCS server; some adapters won't support more than two anyway. Instead, create two vHBAs per Service Profile, so that each host has access to both VSANs 10 and 20. What you might want to do is stagger the vHBA fabric assignment so that half of your Service Profiles access VSAN 10 through FI-A and the other half access VSAN 10 through FI-B. This will help distribute the load accordingly.
Ex.
First half of SPs:
vHBA0 - VSAN10 - Fabric-A
vHBA1 - VSAN20 - Fabric-B
Second half of SPs:
vHBA0 - VSAN20 - Fabric-A
vHBA1 - VSAN10 - Fabric-B
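If you ever drive this from the UCS Manager CLI rather than the GUI, the first pattern looks roughly like the sketch below. The Service Profile name ESX-01 is hypothetical, and I've left out the per-vHBA VSAN assignment step since the exact syntax varies by UCSM release - verify against the UCSM CLI configuration guide for your version:
UCS-A# scope org /
UCS-A /org # scope service-profile ESX-01
UCS-A /org/service-profile # create vhba vHBA0 fabric a
UCS-A /org/service-profile/vhba # exit
UCS-A /org/service-profile # create vhba vHBA1 fabric b
UCS-A /org/service-profile/vhba # exit
UCS-A /org/service-profile # commit-buffer
Each vHBA then gets its VSAN (10 on vHBA0 and 20 on vHBA1 for this half of the SPs) set in its properties.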
As you can see, this can be a bit of a manual balancing act. The best-case scenario would be to isolate (re-cable) your SAN fabrics per the traditional SAN design above.
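For illustration, keeping your existing port numbers, the isolated (re-cabled) layout would be:
FI1 port 25 - SW1 port 3 (VSAN 10)
FI1 port 26 - SW1 port 4 (VSAN 10)
FI2 port 25 - SW2 port 3 (VSAN 20)
FI2 port 26 - SW2 port 4 (VSAN 20)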
Regards,
Robert
02-12-2013 08:48 AM
Thank you, Robert. This makes sense now that I see it written out. I have no desire to manually balance anything, nor is that sustainable going forward. As it stands right now, I have one path on each FI that is not being used at all, so fixing the cabling will be simple; then it should just be a short storage outage to fix the VSANs. We haven't gone to production with anything yet, so we wanted to get this nailed down and cabled correctly before then.
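Once re-cabled, I assume the standard MDS show commands will be enough to verify everything (interface numbers per our cabling above):
MDS-1# show vsan membership           <-- confirm fc1/3 and fc1/4 are in VSAN 10
MDS-1# show interface fc1/3 brief     <-- confirm the link is up
MDS-1# show flogi database vsan 10    <-- confirm the FI and host WWPNs have logged in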
Thanks for your help!
Tara
07-31-2018 07:37 PM
08-01-2018 07:20 AM
As Robert says, keeping fabric A (VSAN A) and fabric B (VSAN B) separate is standard practice in production environments. You are right: losing both FI-A and SAN fabric B would completely cut off access to storage; however, a simultaneous double failure like that is very, very rare. Host multipathing is definitely needed, as you point out, for load balancing and high availability.
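If the hosts happen to be ESXi (an assumption - the hypervisor isn't stated in this thread), the per-LUN paths and path-selection policy can be checked with the standard esxcli commands:
esxcli storage nmp device list    <-- shows each device's path selection policy and working paths
esxcli storage core path list     <-- lists every individual path, one per vHBA/target pair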