12-22-2023 03:58 PM
The failed drive was replaced while the server was powered off. The replacement drive already had data on it. The RAID5 array is on an M4 server (slots 3-6), and slot 3 is the disk that was replaced. After powering the server back up, we noticed we were unable to get a login prompt; we were at the RHEL maintenance login. I am hoping a step was missed, or that the LVM is still there somewhere and can be recovered. I'm a bit confused about whether the data was wiped... did the new disk that replaced the old one (same slot) copy itself over the other 3 disks? Any info is helpful. Ty!
12-22-2023 10:50 PM
I’ll move your post to the UCS part of the community, as this is not really related to Collaboration, where you originally posted it.
12-26-2023 03:56 AM
TY!
12-25-2023 11:20 PM
I should add the steps I took:
Noticed the drive was degraded.
Powered off the server.
Removed the bad drive from slot 3.
Put the new (preconfigured) drive in slot 3.
Powered up the server.
Noticed I was no longer able to get a login prompt; instead I had the root maintenance login.
The rebuild stopped about 4 hours later. When I logged in, I was unable to see /dev/sdb, which holds the data for the VMs and is managed with LVM (logical volumes).
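A minimal set of checks from the RHEL maintenance shell might look like the sketch below (the device name /dev/sdb comes from the post above; the actual volume group and logical volume names on this server are unknown):

# Check whether the controller is still presenting the data virtual drive to the OS.
lsblk                      # /dev/sdb should be listed here if the virtual drive is exported
dmesg | grep -i megaraid   # controller/driver messages about the drives

# If the device is present, see whether the LVM metadata survived and activate it.
pvscan                     # scan for physical volumes
vgscan                     # scan for volume groups
vgchange -ay               # activate any volume groups that were found
lvs                        # list logical volumes; the VM data LV should appear here

If /dev/sdb does not show up at all, the problem is at the RAID controller level rather than in LVM.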
01-02-2024 11:17 AM
Curious about the 'preconfigured' state of the replacement drive?
Normally a blank new drive, with no RAID controller metadata written to it, will be rebuilt automatically by the RAID controller.
If the drive had previously been plugged into another server, it may carry old metadata and be flagged as a 'foreign config'.
You would need to use the CIMC or the LSI option-ROM utility to clear the foreign-config status.
After that, the drive might be rebuilt automatically, or it may occasionally need to be set as a global hot spare before it is used to rebuild in place of the original failed drive.
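If the storcli command-line utility happens to be installed in the OS, roughly the same steps can also be done from a shell; the commands below are only a sketch, and the controller, enclosure, and slot numbers are placeholders rather than values from this server:

storcli64 /c0/fall show    # show any foreign configuration flagged on the drives
storcli64 /c0/fall del     # clear the foreign config so the drive returns to unconfigured-good
# (or 'storcli64 /c0/fall import' to import the foreign config instead of clearing it)

# If the rebuild still does not start on its own, setting the drive as a global
# hot spare usually kicks it off (enclosure 252 / slot 3 are examples only):
storcli64 /c0/e252/s3 add hotsparedrive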
Kirk...