Looking for some advice on a couple of queries, please.
We are connecting two IBM V7000 SANs directly into the Fabric Interconnects. We have a Cisco VersaStack document which sort of covers connecting one SAN (though even then there are some differences between that doc and other Cisco docs). So far we have set the FIs to FC switching mode, configured the ports and created the Storage Connection Policies.
We have two VSANs (VSAN 10 on Fabric A and VSAN 11 on Fabric B).
V7000 canister 1 port 1 – connects to Fabric A
V7000 canister 1 port 2 – connects to Fabric B
V7000 canister 2 port 1 – connects to Fabric B
V7000 canister 2 port 2 – connects to Fabric A
At this stage we have one SAN connected and some servers in the UCS chassis booting from SAN; however, there are issues:
- On the V7000 the hosts show as 'degraded', yet from the ESXi command line I can see four paths available to each host
- In the 'Actual Boot Order' only two paths are shown (the configured boot order shows four paths)
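For reference, this is roughly how the four paths were confirmed from the ESXi shell (the naa device ID below is a placeholder, not our real boot LUN):

```
# List all FC paths to the boot LUN (device ID is a placeholder)
esxcli storage core path list -d naa.60050768xxxxxxxxxxxxxxxx

# Or summarise the working path count per device
esxcli storage nmp device list | grep -i "working paths"
```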
Any pointers? We have checked the zones and they look as though they should work, rebooted the hosts, etc., with the same result.
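In case it helps, with the FIs in FC switching mode the active zoning and fabric logins can be checked from NX-OS on each FI. A sketch for Fabric A / VSAN 10 (the equivalent on Fabric B would use `connect nxos b` and VSAN 11):

```
connect nxos a
show flogi database vsan 10
show zoneset active vsan 10
show fcns database vsan 10
```

The flogi database should show every V7000 target port and vHBA that has logged into the fabric, and the active zoneset should contain the zones UCSM built from the Storage Connection Policy.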
Also, we seem to have two options for the second SAN:
- Virtualise the second V7000 behind the first (we have someone who can do that part of the config; the second SAN would be connected to the FIs in the same manner as the first). But how do we zone the two SANs so they can see each other through the FIs? Do we have to create a zone manually, and can that be done in the UCS Manager GUI or only from the command line? UCSM zoning appears to be initiator-to-target, not target-to-target, and I don't want the hosts seeing the second SAN. The advantage here is a single pane of glass for deploying and managing storage.
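If a manual target-to-target zone is needed, it would presumably have to be built from the NX-OS CLI on the FI rather than in the UCSM GUI. A rough sketch for Fabric A / VSAN 10 (the WWPNs and the zoneset name are placeholders; check `show zoneset active vsan 10` for the real zoneset name, and be aware that UCSM owns the zoning config in switching mode, so verify any manually added zone survives a policy re-push):

```
connect nxos a
configure terminal
zone name v7000-intercluster vsan 10
  member pwwn 50:05:07:68:xx:xx:xx:01
  member pwwn 50:05:07:68:xx:xx:xx:02
zoneset name <active-zoneset-name> vsan 10
  member v7000-intercluster
zoneset activate name <active-zoneset-name> vsan 10
```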
- Alternatively, use the second SAN separately from the first with its own identity. In that case, do we add its endpoints to the existing Storage Connection Policy, or create a second Connection Policy?
Thanks. We use MDS switches in our other setups, so this direct-attach approach is new to us.