03-29-2011 12:48 PM
Dear All,
I have a server that had been running normally for three months without any problem. When I visited the customer last week, I saw that one of the HDDs had a red LED, although the server was processing normally. I asked the customer to restart the server because some Cisco services were down, and I ran a rebuild on this server, which took more than five hours. After the rebuild finished, I unchecked the repositories (media1 and media2), checked them again, and restarted the Cisco services. This is a 4U server with 24 one-terabyte HDDs; the total size was 21.9 TB, but now it is 9.99 TB. I think the server did not activate VD2 from the array configuration, so can anyone help me check this hardware issue?
All Cisco services are running now, but the problem remains in the HDD size reported by VSOM.
Thanks,
Ahmed Ellboudy
03-30-2011 07:26 AM
Dear All ,
If what I sent before was not clear, the two attached images may clarify the case. The good image shows VSMS running without any problem, with media1 and media2 in Local Repositories, while the bad VSMS shows media1 only.
The VSOM server view also shows that there is no media2.
So my question is: how can I make media2 appear on the server that has this issue?
Thanks,
Ahmed
03-31-2011 01:41 PM
Ahmed,
My first guess would be that the RAID was not built correctly. Run these commands and post the output back to me.
Get details on the hard drives
shell> /opt/MegaRAID/MegaCLI/MegaCli -pdlist -aALL
Get details on the logical (virtual) disks
shell> /opt/MegaRAID/MegaCLI/MegaCli -ldinfo -lALL -aALL
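If it helps, the two MegaCli queries above can be wrapped in a small script that collects both outputs into one file for posting back to the thread. This is only a sketch: the MegaCli path is the one given above, and the output file name is my own choice.

```shell
# Sketch: gather physical- and virtual-disk details into one file.
# The MegaCli path is the one from the commands above; /tmp/raid-info.txt
# is an arbitrary output location.
MEGACLI=/opt/MegaRAID/MegaCLI/MegaCli
OUT=/tmp/raid-info.txt

{
  echo "=== Physical drives (pdlist) ==="
  "$MEGACLI" -pdlist -aALL || echo "(MegaCli not found on this machine)"
  echo "=== Virtual drives (ldinfo) ==="
  "$MEGACLI" -ldinfo -lALL -aALL || echo "(MegaCli not found on this machine)"
} > "$OUT" 2>&1

echo "RAID details written to $OUT"
```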
Or, if you would like, you can also try this tool to get your RAID info:
04-03-2011 03:26 AM
04-02-2011 09:02 PM
Hi Ahmed,
How was the server rebuilt (manually, or using a VSM 6.3.0 or 6.3.1 Recovery DVD)? If you are running the latest version of VSM, the support report will contain the RAID logs, which may shed some light on the subject. Michael Williams suggested an action plan that will gather information about your RAID array as well.
It may be that the array is not configured correctly, or you could be facing a different issue: a partitioning problem, or a correctly configured partition that is not set to mount.
What is the result from typing the following commands while logged in as root in an ssh session?
# df -lh
# cd /etc
# more fstab
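As a sketch of what those commands should reveal, the two checks can also be scripted: whether /media2 appears in /etc/fstab, and whether it is currently mounted. The paths follow the commands above; the yes/no reporting is my own addition.

```shell
# Sketch: check whether /media2 is defined in fstab and mounted.
FSTAB=/etc/fstab

if grep -q '/media2' "$FSTAB" 2>/dev/null; then
  in_fstab=yes
else
  in_fstab=no
fi

if mount 2>/dev/null | grep -q '/media2'; then
  mounted=yes
else
  mounted=no
fi

echo "/media2 in fstab:  $in_fstab"
echo "/media2 mounted:   $mounted"
```

On a healthy VSMS server both answers should be "yes"; "yes" in fstab but "no" for mounted points at a mount failure rather than a missing configuration.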
Thanks,
Jim
04-03-2011 03:38 AM
Hi Jim,
To answer your question: the RAID was done manually. Please also find attached the RAID files from the system report I took last Wednesday.
If you find any problem, please tell me how I can solve it.
I will also send you the command output soon.
Thanks,
Ahmed
04-03-2011 06:25 AM
04-03-2011 11:32 AM
It appears the array was built correctly:
Adapter 0 -- Virtual Drive Information:
Virtual Disk: 0 (target id: 0)
RAID Level: Primary-5, Secondary-0, RAID Level Qualifier-3
Size:12287MB
State: Optimal
Virtual Disk: 1 (target id: 1)
RAID Level: Primary-5, Secondary-0, RAID Level Qualifier-3
Size:9999998MB
State: Optimal
Virtual Disk: 2 (target id: 2)
RAID Level: Primary-5, Secondary-0, RAID Level Qualifier-3
Size:11900273MB
State: Optimal
The array was still being initialized at the time those logs were gathered, but you would have been able to access the file system on /media2 had that partition been mounted. Note that Virtual Disk 1 (9,999,998 MB, about 10 TB) matches the 9.99 TB you are seeing, while Virtual Disk 2 (11,900,273 MB, about 11.9 TB) accounts for the missing capacity. I see that /dev/sdc1 should be mounted as /media2, but I don't see it mounting. It is correctly created as an XFS file system.
From an ssh session as root, enter the command
# yast
Click "System" in the column at the left side of the YaST window.
Open the Partitioner configuration module by double-clicking "Partitioner" in the YaST window.
Acknowledge the "Only Use This Program if you are Familiar with Partitioning Hard Disks" warning by clicking "Yes."
Take a screenshot of the partition information in that table. Do you see /dev/sdc1 as 11.9 TB, Linux native, but with nothing under Mount?
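If a GUI session is inconvenient, the same information can be gathered non-interactively. This sketch prints the /dev/sdc partition table with parted (the `print` command is read-only and changes nothing) and dumps the kernel's own view of block devices; the device name is the one from this thread.

```shell
# Sketch: non-interactive alternative to the YaST Partitioner.
DISK=/dev/sdc

if [ -b "$DISK" ]; then
  parted -s "$DISK" print   # 'print' only reads the partition table
else
  echo "$DISK is not present on this machine"
fi

# The kernel's view of every block device and partition (sizes in 1K blocks)
kernel_view=$(cat /proc/partitions 2>/dev/null || echo "unavailable")
echo "$kernel_view"
```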
04-04-2011 11:08 AM
04-04-2011 12:25 PM
Hi Ahmed,
It sounds as if the file system is corrupted on that partition, and that the kernel is not allowing sdc1 to be mounted.
You have two options available. If you have the Recovery DVD for 6.3.1, boot from it (set the boot order in the BIOS as necessary to ensure that the USB DVD drive you will need to use boots before the hard drive).
Once the Recovery DVD menu appears, enter the option "rescue" (it is not listed on the boot menu). This will boot a Linux environment. Log in with the user name 'root' (no password).
Enter the command
# xfs_repair <device>
In your case, that is:
# xfs_repair /dev/sdc1
If we are unable to repair the file system, your only other option would be to delete and recreate the sdc1 partition.
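Put together, the repair sequence might look like the sketch below, run as root from the rescue environment. The `-n` dry run is my suggested precaution (it reports damage without writing anything); the device and mount point are the ones from this thread, and the script is guarded so it does nothing on a machine where /dev/sdc1 does not exist.

```shell
# Sketch: XFS repair sequence for the affected server.
DEV=/dev/sdc1

if [ -b "$DEV" ]; then
  umount "$DEV" 2>/dev/null || true   # file system must be unmounted first
  xfs_repair -n "$DEV"                # dry run: report problems, change nothing
  xfs_repair "$DEV"                   # actual repair
  mount -t xfs "$DEV" /media2 && df -lh /media2
else
  echo "$DEV not found; run this from the rescue environment on the server"
fi
```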
I hope that helps,
Jim