Trying to get UCS to boot from SAN and just running into issue after issue. Running B200-M5 blades.
The service profile's boot policy is set to boot from CD-ROM first, then the primary SAN path, then the secondary SAN path. When I launch the ESXi installer, it sees the boot volume fine. After installation, however, the server cannot locate the volume and always drops to a shell prompt. This happens in both UEFI (with the boot image path mapped by editing the UEFI boot parameters in UCS Manager) and Legacy boot modes. Why can the ESXi installer see the volume when the server's boot process can't? I know ESXi can see volumes regardless of LUN ID, but the LUN ID in the boot policy is 1 and the LUN ID on the array for this boot volume is also 1, so those should be matching up.
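For what it's worth, this is roughly how I confirmed the LUN number from the installer side. From the ESXi shell (the grep pattern is just my shorthand for picking out the relevant fields):

```shell
# List storage paths and check the LUN number ESXi sees on each target.
esxcli storage core path list | grep -i -E "LUN|Target Identifier"
```

The paths against the array's target WWPNs show LUN 1, which matches the boot policy.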
Looking at things, I find the following odd:
adapter 1/1/1 (fls):15# lunlist
vnic : 21 lifid: 19
- FLOGI State : flogi est (fc_id 0xd30a08)
- PLOGI Sessions
- WWNN 52:4a:93:70:f1:2f:83:11 WWPN 52:4a:93:70:f1:2f:83:11 fc_id 0x000000
- LUN's configured (SCSI Type, Version, Vendor, Serial No.)
LUN ID : 0x0001000000000000 access failure
- REPORT LUNs Query Response
- Nameserver Query Response
- WWPN : 52:4a:93:72:d0:33:52:01
- WWPN : 52:4a:93:72:d0:33:52:11
adapter 1/1/1 (fls):4# lunmap
lunmapid: 0 port_cnt: 1
lif_id: 19
PORTNAME NODENAME LUN PLOGI
52:4a:93:70:f1:2f:83:11 00:00:00:00:00:00:00:00 0001000000000000 N
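One thing I did work out: the "LUN ID : 0x0001000000000000" in the lunlist output is the raw 8-byte SCSI LUN field, and it does decode to LUN 1, so the adapter is asking for the right LUN. A quick sketch I used to convince myself (my own helper, handling only peripheral device addressing, not a UCS tool):

```python
def decode_scsi_lun(lun_hex: str) -> int:
    """Decode an 8-byte SCSI LUN field (hex string) to a LUN number.

    Only handles peripheral device addressing (top two bits of byte 0
    are 00), which is what flat LUN numbers like 0 and 1 use.
    """
    b = bytes.fromhex(lun_hex)
    if b[0] >> 6 != 0:
        raise ValueError("not peripheral device addressing")
    # bus number in the low 6 bits of byte 0, LUN in byte 1
    return ((b[0] & 0x3F) << 8) | b[1]

print(decode_scsi_lun("0001000000000000"))  # the value from lunlist -> 1
```

So the LUN ID itself isn't the mismatch.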
I don't know if this is SAN side or UCS side. The PLOGI isn't completing (note the fc_id of 0x000000 against the target WWPN), and lunlist reports "access failure" on the LUN as the server boots. It's hard for me to believe this is SAN side, since ESXi can see the volume and the LUN IDs match between the boot policy and the array. But at this point, I just don't know.
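On the fabric side, the checks I still need to run through (this is an MDS/NX-OS fabric; the VSAN number below is just a placeholder for mine) would be roughly:

```shell
show flogi database            # is the vHBA's WWPN actually logged into the fabric?
show fcns database             # is the array target's WWPN registered in the name server?
show zoneset active vsan 100   # are initiator and target WWPNs in the same active zone?
```

If the vHBA and target are zoned and logged in but PLOGI still shows 0x000000, I'd start suspecting the array's LUN masking / host registration for that initiator WWPN rather than UCS. Any other ideas appreciated.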