This guide provides example steps for expanding a Virtual Drive (VD) by adding additional physical drives.
UCS C-Series M4 generation server
12Gb LSI RAID controller
The drives being added are the same technology as the existing drives (i.e. HDD vs. SSD, SAS vs. SATA)
ALWAYS make sure you have valid backups of your data before attempting RAID volume modifications such as this.
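If you have OS-level access with the StorCLI utility installed before the maintenance window, you can optionally verify that the controller already sees the new drives. A hedged sketch, assuming StorCLI is installed and the controller is at /c0 (adjust to your system):

```shell
# Optional pre-check: confirm the new drives show up as Unconfigured Good (UGood).
# Assumes StorCLI is installed and the controller is /c0 -- adjust for your system.
if command -v storcli >/dev/null 2>&1; then
    storcli /c0 show                # controller summary: VDs, PDs, health
    storcli /c0/eall/sall show      # every drive; new ones should be state UGood
    MSG="controller inventory listed"
else
    MSG="storcli not installed; commands shown for illustration only"
fi
echo "$MSG"
```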
Adding Drives to an existing Virtual Drive
Boot the server and hit F2 during POST to get into BIOS setup.
Select the 'Avago MegaRAID (Cisco 12G SAS Modular Raid controller) Configuration Utility', and hit Enter.
Select 'Main MENU', hit Enter
Arrow down, and select 'Virtual Drive Management'
In my example, I have an existing VD1 that is a RAID5 volume comprised of three 300GB SAS drives, with a net usable capacity of 556.929GB.
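As a sanity check, RAID5 usable capacity is (N - 1) x per-drive capacity, since one drive's worth of space holds parity. A quick sketch of that arithmetic, assuming 278.464 GiB as the usable size of a nominal 300GB SAS drive (your controller may report slightly different figures):

```shell
# RAID5 usable capacity = (number of drives - 1) * per-drive usable capacity.
PER_DRIVE=278.464   # GiB; typical usable size of a nominal 300GB SAS drive
USABLE=$(awk -v s="$PER_DRIVE" 'BEGIN { printf "%.3f", (3-1)*s }')
echo "3-drive RAID5 usable: ${USABLE} GiB"     # matches the ~556.9GB VD above
NEW=$(awk -v s="$PER_DRIVE" 'BEGIN { printf "%.3f", (7-1)*s }')
echo "7-drive RAID5 usable: ${NEW} GiB"        # capacity after adding 4 drives
```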
Arrow down, and select the VD that you plan to expand and hit Enter.
Arrow up to the 'Operation' field and hit Enter.
You will get a pop-up menu where you want to select 'Reconfigure Virtual Drives'; hit Enter.
Now that the type of operation has been set, arrow down to the 'Go' field, hit Enter.
Now select 'Choose the Operation'.
You will automatically be in the 'Add Drives' operation, which allows you to scroll/arrow down the list of available 'UNCONFIGURED DRIVES'.
Arrow to each of the new drives you have added and want to add to the virtual drive, and hit the Spacebar. This will toggle the 'Disabled/Enabled' value. Set this value to 'Enabled' for all the drives you intend to add.
Arrow up to the 'Apply Changes' field, hit Enter.
Arrow to the 'Confirm' field and hit the Spacebar to toggle the value from 'Disabled' to 'Enabled'. You can now arrow to the 'Yes' field and hit Enter.
You should get an operation successful message. Hit Enter.
On the next screen, arrow down to 'Start Operation' and hit Enter. The previous steps set up the configuration we want, but this step triggers the actual RAID reconstruction/expansion onto the additional drives.
You should now get a success message. Note that this may take some time to complete: in my lab test, adding (4) 300GB SAS drives to a RAID5 volume (3 existing disks) took 5 hours for the reconstruction stage to complete, then another 30 minutes for the Background Initialization to finish.
The Reconstruction % will update slowly.
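Since the reconstruction runs in the background, you can boot back into the OS and poll progress from the command line instead of watching the utility. A hedged sketch, assuming StorCLI is installed, controller /c0, and virtual drive /v1 (adjust to your system):

```shell
# Poll reconstruction (migration) and background-init progress from the OS.
# Assumes StorCLI is installed, controller /c0, virtual drive /v1 -- adjust as needed.
if command -v storcli >/dev/null 2>&1; then
    storcli /c0/v1 show migrate     # reconstruction/expansion progress %
    storcli /c0/v1 show bgi         # background initialization progress %
    MSG="progress queried"
else
    MSG="storcli not installed; commands shown for illustration only"
fi
echo "$MSG"
```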
Notice that the Virtual Drive 1 size still shows the original size during the Reconstruction and Background Initialization stages.
A few hours later, we now have an expanded 1.9TB RAID5 volume that still contains the original data.
Your Operating System will now see the VD1 volume as 1.9TB instead of 556GB, but you will still need to expand the OS partition and filesystem to utilize the additional space.
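On Linux, for example, the extra space can often be claimed online once the kernel sees the new VD size. A minimal sketch, assuming a hypothetical device /dev/sdX with an ext4 filesystem on partition 1 (use xfs_growfs for XFS; for LVM, run pvresize/lvextend first):

```shell
# Hypothetical device name -- replace /dev/sdX with the expanded VD's device.
DISK=/dev/sdX
if [ -b "$DISK" ]; then
    partprobe "$DISK"            # re-read the partition table so the kernel sees the new size
    growpart "$DISK" 1           # grow partition 1 into the new space (cloud-utils package)
    resize2fs "${DISK}1"         # grow the ext4 filesystem online
    MSG="filesystem expanded"
else
    MSG="$DISK not present; commands shown for illustration only"
fi
echo "$MSG"
```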