
RAID5 - filesystem is not coming up after replacing a failed drive

arlenebruno378
Level 1

The failed drive was replaced while the server was powered off, and the replacement drive already had data on it. The RAID5 array is on an M4 server (slots 3-6); slot 3 is the disk that was replaced. After powering the server back up, we were unable to get a login prompt and were left at the maintenance login (RHEL). I am hoping a step was missed, or that the LVM is lost somewhere and can be recovered. I'm a bit confused about whether the data was wiped... did the new disk that replaced the old one (same slot) copy itself over to the other 3 disks? Any info is helpful. Ty!

4 Replies

I’ll move your post to the UCS part of the community, as it is not really related to Collaboration, where you originally posted it.





TY!

arlenebruno378
Level 1

I should add the steps I took:

Noticed drive was degraded.

Powered off server

Removed bad drive in slot 3

Put the new drive (that was preconfigured) into slot 3

Powered up server. 

Noticed I was no longer able to get a login prompt; instead I was at the root maintenance login.

The rebuild stopped about 4 hours later. When I logged in, I was unable to see /dev/sdb, which holds the data for the VMs and uses a logical volume (LVM) - see the checks sketched below.
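A minimal set of checks from the maintenance shell, to see whether the controller is presenting the RAID5 volume and whether LVM can still find its metadata (the device name /dev/sdb is just the one mentioned in this thread; substitute your own):

lsblk                             # is /dev/sdb visible as a block device at all?
dmesg | grep -i -e sdb -e raid    # any controller or disk errors logged at boot?
pvs; vgs; lvs                     # does LVM still report its physical volume / volume group / logical volumes?
vgscan --mknodes                  # rescan for volume groups and recreate any missing device nodes
vgchange -ay                      # try to activate whatever volume groups were found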

Curious about the 'preconfigured' state of the replacement drive?

Normally a blank new drive, with no RAID controller metadata written to it, will be rebuilt automatically by the RAID controller.
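If the StorCLI utility happens to be installed on the RHEL side (the binary name and install path vary; /opt/MegaRAID/storcli/storcli64 is common), it can show what state the controller currently reports for the slot-3 drive. A sketch, assuming the first controller is /c0:

storcli64 /c0 show              # controller summary: virtual drive and physical drive states
storcli64 /c0/eall/sall show    # per-drive state, e.g. Onln, Rbld, UGood, UBad or Frgn (foreign)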

If the drive had previously been plugged into another server, it may carry old metadata and be marked as a 'foreign config'.

You would need to use the CIMC or the LSI option ROM utility to clear the foreign config status.
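If reaching the option ROM is inconvenient, the same StorCLI tool can show and clear a foreign configuration from the OS. A sketch, again assuming controller /c0, and only after confirming the foreign metadata belongs to the replacement drive and nothing needs to be imported:

storcli64 /c0/fall show all     # list the foreign configuration(s) in detail
storcli64 /c0/fall delete       # clear all foreign configurations on the controller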

After that, the drive might be rebuilt automatically, or it may occasionally need to be set as a global hot spare before it is used to rebuild the failed member.
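For the hot spare route, a sketch with StorCLI; the enclosure/slot pair below (e252/s3) is a placeholder, so take the real numbers from the '/c0 show' output:

storcli64 /c0/e252/s3 add hotsparedrive   # mark the new drive as a global hot spare
storcli64 /c0/e252/s3 show rebuild        # then watch the rebuild progress on that slot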

Kirk...
