This guide provides example steps for expanding a Virtual Drive (VD) by adding additional physical drives.
UCS C-Series M4 generation server
12G LSI SAS RAID controller
Drives being added are the same technology (i.e. HDD or SSD, SAS or SATA)
ALWAYS make sure you have valid backups of your data before attempting RAID volume modifications such as this.
Adding Drives to an existing Virtual Drive
Boot the server and hit F2 during POST to get into BIOS setup.
Select the 'Avago MegaRAID (Cisco 12G SAS Modular Raid Controller) Configuration Utility' and hit Enter.
Select 'Main Menu' and hit Enter.
Arrow down, and select 'Virtual Drive Management'
In my example, I have an existing VD1: a RAID 5 volume comprised of three 300GB SAS drives, with a net capacity of 556.929GB.
Arrow down, and select the VD that you plan to expand and hit Enter.
Arrow up to the 'Operation' field and hit Enter.
You will get a pop-up menu where you want to select 'Reconfigure Virtual Drives'; hit Enter.
Now that the type of operation has been set, arrow down to the 'Go' field and hit Enter.
Now select 'Choose the Operation'.
You will automatically be in the 'Add Drives' operation, which allows you to scroll/arrow down the list of available 'Unconfigured Drives'.
Arrow to each of the new drives you have added and want to join to the virtual drive, and hit the Spacebar. This toggles the 'Disabled/Enabled' value. Set this value to 'Enabled' for all the drives you intend to add.
Arrow up to the 'Apply Changes' field, hit Enter.
Arrow to the 'Confirm' field and hit the Spacebar to toggle the value from 'Disabled' to 'Enabled'. You can now arrow to the 'Yes' field and hit Enter.
You should get an 'operation successful' message. Hit Enter.
On the next screen, arrow down to 'Start Operation' and hit Enter. The previous steps set up the configuration we want; this step triggers the actual RAID reconstruction/expansion onto the additional drives.
You should now get a success message; note that this may take some time to complete. In my lab test, adding (4) 300GB SAS drives to a RAID 5 (3 existing disks) took 5 hours for the reconstruction stage to complete, then another 30 minutes for the Background Initialization to finish.
The Reconstruction % will update slowly.
Notice that the Virtual Drive1 size still shows the original size during Reconstruction and Background Initialization stages.
A few hours later, we have an expanded 1.9TB RAID 5 volume that still contains the original data.
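For reference, the same online expansion can also be driven from the OS with Broadcom's StorCLI utility instead of the HII menus. This is only a sketch: the controller number (/c0), VD number (/v1), and enclosure:slot IDs below are placeholder assumptions for this example and must be adjusted to match your system.

```shell
# List controller, virtual drive, and physical drive status
# to identify the VD number and the unconfigured-good drives
storcli64 /c0 show

# Add the new drives (enclosure 252, slots 4-6 in this example)
# to VD1, keeping it RAID 5 -- this kicks off the same
# reconstruction that the HII 'Start Operation' step triggers
storcli64 /c0/v1 start migrate type=raid5 option=add drives=252:4-6

# Check reconstruction progress (percentage and estimated time)
storcli64 /c0/v1 show migrate
```

As with the HII method, the VD stays online during the migration, but expect the same multi-hour reconstruction window.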
Your Operating System will now see the VD1 volume as being 1.9TB instead of 556GB, but you will still need to expand the OS partition and filesystem to utilize the additional space.
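On Linux, for example, that last step might look like the following. This is a hedged sketch: the disk name (/dev/sdb), partition number (1), and an ext4 filesystem are assumptions for illustration; use the tools that match your layout (e.g. xfs_growfs for XFS, or Disk Management's 'Extend Volume' on Windows).

```shell
# Ask the kernel to rescan the disk so it sees the new 1.9TB size
echo 1 > /sys/class/block/sdb/device/rescan

# Grow partition 1 to fill the disk (growpart is from cloud-utils)
growpart /dev/sdb 1

# Grow the ext4 filesystem online to fill the enlarged partition
resize2fs /dev/sdb1
```

Both growpart and resize2fs can run against a mounted volume, so no downtime is needed for this stage either.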