11-06-2012 12:18 PM - edited 03-01-2019 10:42 AM
Dear Community,
We are trying to set up a Cisco UCS blade system for SAN boot. The SAN consists of two EMC VNX 5300 storage arrays, and the OS is OEL 5 Update 8 x86-64.
Steps performed:
-> 300 GB boot LUN configured on both EMC arrays, with access granted to the nodes
-> Cisco UCS SAN Boot configuration completed
-> OEL5U8 installation successfully performed with the special Linux boot flags "linux mpath pci=nosmi", as specified in the Cisco documentation
-> partitioning setup during installation:
* created a software RAID partition for /boot and one for LVM on each of the two boot LUNs
* created one RAID device (md0) for /boot and one for LVM
* created the LVM configuration with swap and / on the LVM RAID device
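The layout above can be sketched with mdadm and the LVM tools. This is only an illustrative sketch; the multipath device names (mpath0/mpath1) and LV sizes are assumptions and will differ on an actual install:

```shell
# RAID1 across the two SAN LUNs: partition 1 on each LUN for /boot,
# partition 2 on each LUN for the LVM physical volume
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    /dev/mapper/mpath0p1 /dev/mapper/mpath1p1
mdadm --create /dev/md1 --level=1 --raid-devices=2 \
    /dev/mapper/mpath0p2 /dev/mapper/mpath1p2

# LVM on top of the second RAID device (names match the thread:
# VolGroup00 with lvroot; lvswap size is a placeholder)
pvcreate /dev/md1
vgcreate VolGroup00 /dev/md1
lvcreate -L 8G -n lvswap VolGroup00
lvcreate -l 100%FREE -n lvroot VolGroup00

# Persist the array definitions so the initrd can assemble them at boot
mdadm --detail --scan >> /etc/mdadm.conf
```

The last step matters for the boot problem discussed below: if the arrays are not described in /etc/mdadm.conf when the initrd is built, the root volume cannot be assembled early enough.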
-> After OS installation, the system successfully shows the kernel options (RH-compatible kernel and UEK kernel). But after a few seconds, there is a kernel panic and it cannot find the root volume /dev/VolGroup00/lvroot.
Q: What is the recommended approach for SAN boot when the system should use two independent storage arrays for redundancy? I am trying to implement software RAID (mdadm) but am not sure if this is the way to go.
Regards,
Martin
11-06-2012 07:07 PM
Hello Martin,
## But after a few seconds, there is a kernel panic and it can not find root volume /dev/VolGroup00/lvroot
Only Red Hat kernels are supported. Are you booting the UEK kernel? Did you install the multipath software during the installation? If not, try disabling one path and booting the OS with the Red Hat kernel.
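To check whether multipathing is actually active on the installed system, something like the following can be used (output will vary with the array and HBA configuration):

```shell
# List the multipath topology; each boot LUN should show paths
# through both fabrics, with one or more active path groups
multipath -ll

# Confirm the multipath daemon is running (EL5-style init scripts)
service multipathd status

# Because the installer was booted with "mpath", the OS should be
# installed on /dev/mapper/mpath* devices rather than raw /dev/sd*
ls /dev/mapper/
```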
## Q: what is the recommended approach for SAN boot when system should use two independent storages for redundancy? I am trying to implement software raid (mdtools) but am not sure if this is the way to go.
You can implement any of the redundancy options supported by your SAN array for the boot LUN.
Regarding RAID, I notice you already have LVM. Are you planning to have LVM on top of software RAID (mdadm)?
Check what your SAN array is capable of for the boot LUN. It might be redundant to implement software RAID if the array already provides it at the back end.
Padma
11-06-2012 11:17 PM
Padma,
thank you for your response.
A: We are using RH compatible kernel, not UEK.
A: Yes, we have multipath enabled.
A: Yes, we are planning to use LVM on top of Software RAID (mdadm).
A: We have two independent storage arrays in two data center rooms. We have to ensure redundancy in case one array fails. Therefore, I do not see any alternative to host-based software RAID.
I made several tests last night:
-> setup without software RAID => working
-> setup with software RAID for /boot only => working
-> only when software RAID is active for both / and /boot does the kernel panic at boot time
Regards,
Martin
11-06-2012 11:45 PM
We have decided to stay with local boot disks and avoid SAN boot for now. We have already lost too much time here, and I am afraid the system will not be as robust as needed.
The local hard disks are mirrored (RAID1) with hardware RAID. In case of a blade failure, can we move the two local disks to a new blade, and will they be functional again?
Regards,
Martin
11-06-2012 11:49 PM
Martin,
-> only if software raid is active for / and /boot, the kernel panics at boot time.
Can you check whether the initrd was built with the RAID modules?
zcat /boot/initrd-$(uname -r).img | cpio -t | grep -i raid   (on EL5 the initrd is a gzipped cpio archive; lsinitrd is only available on later dracut-based releases)
http://tldp.org/HOWTO/Software-RAID-HOWTO-7.html#ss7.5
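If the raid1 driver turns out to be missing, the initrd can be rebuilt to include it. A hedged sketch for an EL5-style system using mkinitrd (kernel version is taken from the running kernel; adjust if rebuilding for a different one):

```shell
# Back up the current initrd before rebuilding (EL5 uses mkinitrd, not dracut)
cp /boot/initrd-$(uname -r).img /boot/initrd-$(uname -r).img.bak

# Rebuild the initrd, forcing inclusion of the raid1 module;
# --preload loads it early, before the root device is scanned
mkinitrd -f --preload=raid1 --with=raid1 \
    /boot/initrd-$(uname -r).img $(uname -r)

# Verify the module made it into the new image
zcat /boot/initrd-$(uname -r).img | cpio -t | grep -i raid
```

Note that mkinitrd reads /etc/mdadm.conf when deciding which arrays to assemble at boot, so the md devices for / must be listed there before rebuilding.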
Yes, you can move the disks to another blade. Just remember to keep the same slot IDs and import the RAID configuration on the first boot of the new server.
http://www.cisco.com/en/US/docs/unified_computing/ucs/c/sw/raid/configuration/guide/Cisco_UCSM.html
Padma