For those of you who have migrated an existing MSP from 6.3.x to 7.x, how and where did you configure it for RAID 6? I migrated an MSP today. The first step was to recover it to 6.3.1, and that procedure makes it RAID 5 by default. The upgrade rpms to 7.0.1 did not adjust it to RAID 6.
It's an older MSP-4RU, so you can probably see why I am eager to get it onto a RAID 6 configuration! I lost two drives last week, in fact. Any help would be much appreciated.
I have both good and bad news for you. While the LSI MegaRAID 8888ELP controller used in these servers does support RAID 6:
RAID Level Supported : RAID0, RAID1, RAID5, RAID6, RAID10, RAID50, RAID60
Supported Drives : SAS, SATA
any recovery media supporting the 4RU models will only natively support RAID 5. What that means for a customer wanting to use RAID 6 on these servers is that you would have to manually create the array within the controller BIOS, reboot so that the array is recognized, then use a VSM 6.3.0 or earlier recovery DVD to install SuSE and VSM, followed by using the 7.x rpms to upgrade to whichever build of VSM 7 you like. (I mention using 6.3.0 or earlier because, from 6.3.1 onward, the recovery image destructively creates a RAID 5 array as the first step in rebuilding a 6.x server. The recovery ISOs available for 6.3.3 and 7.x create RAID 6 for the CPS-MSP-2RU, but were never designed to support the CIVS servers, which came preconfigured as RAID 5.)
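If you prefer the command line over the controller BIOS, LSI's MegaCli utility can create the array with a one-liner. This is only a sketch: the enclosure ID (252) and slot numbers are assumptions for illustration (check yours with `MegaCli -PDList -aAll`), and the script deliberately just prints the command rather than running it.

```shell
#!/bin/sh
# Sketch only: print the MegaCli command that would create a RAID 6
# virtual drive before booting the 6.3.0 recovery DVD.
# Enclosure ID (252) and slot range (0-11) are assumptions -- verify
# against your own hardware with: MegaCli -PDList -aAll
ENCL=252
DISKS=$(seq -s, -f "${ENCL}:%g" 0 11)   # builds "252:0,252:1,...,252:11"

# RAID 6 across the listed disks, write-back cache, read-ahead,
# adapter 0. Drop the leading "echo" to actually run it.
echo MegaCli -CfgLdAdd -r6 "[${DISKS}]" WB RA -a0
```

After the reboot, the recovery DVD should see the new virtual drive and install onto it without touching the RAID level.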
I hope that helps,
Great tip! I can easily rebuild the array manually and run the recovery (I have done it far too often in the past year; I can do it with my eyes closed).
I'm logged into Cisco.com but not finding a 6.3.0 recovery ISO, though. Is it still available?
I found this in the archives from this forum, but the link is no longer valid. And I don't think I have access to the ET site anyway, just Smartnet.
RAID 5 is very bad. We have a CIVS-MSP-4RU with 24 x 750 GB disks, and believe me, these disks (Seagate) fail very often, so many times we have had to reinstall the whole system and lost a lot of critical information. Cisco, believe me, this is the worst design you have ever made. A workaround we found is to install the system with just a few disks (e.g. 3) using the 6.3.0 disk; this creates the OS and media1 partitions. After the system was installed, we added the other disks in a RAID 6 configuration: we created 2 x 10-disk arrays and added them as media2 and media3. The remaining disk I left as a hot spare.
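For reference, the usable capacity under that layout works out as follows. This is a rough sketch of the arithmetic only (disk size taken from the post above), not a sizing tool: RAID 6 loses two disks' worth of capacity per array.

```shell
#!/bin/sh
# Rough usable-capacity arithmetic for the layout described above:
# 24 x 750 GB disks, two 10-disk RAID 6 media arrays, a small install
# set, and one hot spare. RAID 6 usable space = (n - 2) * disk_size.
DISK_GB=750
R6_DISKS=10
PER_ARRAY=$(( (R6_DISKS - 2) * DISK_GB ))   # 8 * 750 = 6000 GB per array
MEDIA_TOTAL=$(( 2 * PER_ARRAY ))            # media2 + media3 combined
echo "Each 10-disk RAID 6 array: ${PER_ARRAY} GB usable"
echo "media2 + media3 total:     ${MEDIA_TOTAL} GB usable"
```

So roughly 12 TB of media storage survives two simultaneous drive failures per array, versus only one failure per array under the default RAID 5 layout.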
Actually, I wrote a document to do just this... but it appears someone has removed it from the support portal (probably because it's not supported).
In short we had a client that refused to put a server in with that many members in a RAID 5 array, and seeing as how I agreed with him, we couldn't, in good conscience, implement it as such.
Here is the original write up (pdf) ...