05-21-2017 06:12 PM
Hello
What are the chances of getting to the Linux CLI on an ISE node? Or even running a command as the root user?
I am not in the fortunate position of being able to nail up 200 GB of disk space for each of my lab PSNs, so I made them all thin-provisioned, which works really well in a lab. I recognise that one doesn't do this in production, where the general practice is to thick-provision.
But the VMs are growing, and even if I purge all the logs the free space is not reclaimed, and the VMware disk compress option does not work via the GUI. For performance reasons my VMs are on SSD and the space is at a premium, so I want to shrink the .vmdk files before they eventually expand to 200 GB each, which is untenable.
VMware Linux VMs have a solution for this, but it requires root access.
I have had good success on CentOS 7 with the shrink command "vmware-toolbox-cmd disk shrinkonly" to reclaim free space.
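For anyone trying the same thing on a guest where root access is available, a minimal sketch of that workflow (assuming open-vm-tools / VMware Tools is installed in the guest and the disk is thin-provisioned):

```shell
# Run as root inside the Linux guest.
# Show which mount points the tools consider shrinkable:
vmware-toolbox-cmd disk list

# Reclaim free space without the interactive wipe pass:
vmware-toolbox-cmd disk shrinkonly
```

Both subcommands ship with open-vm-tools; neither works on ISE, of course, because the appliance CLI hides the underlying shell.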
Perhaps someone has an alternative suggestion?
thanks
Arne
Solved!
05-22-2017 03:30 PM
Hi Tim
thanks for letting me know about the patch.
I have devised my own method of shrinking the disks and have found substantial savings in doing this. In one case I recovered over 20 GB of SSD on one node, and that is from a VM that has been running for only 2 months.
The method involves shutting down the VM, making a backup of the .vmdk, attaching the .vmdk to another Linux host (disk hot-add), mounting the various ext4 file systems and then running wipe & shrink on them. It takes a bit of time, but this is something that can run on the weekend when the lab is not in use. For me there are real $$$ savings involved in doing this, especially for large labs.
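The weekend job can be sketched roughly as follows. This is only an outline under stated assumptions, not a script from the thread: the device name /dev/sdb1, the mount point and the .vmdk path are placeholders, and it assumes the helper host has vmware-vdiskmanager (Workstation/Fusion) available. On ESXi you would zero the guest file systems the same way and then punch the zeroes out of the thin disk with "vmkfstools -K" instead.

```shell
#!/bin/sh
set -e

# 1. With the ISE VM shut down and the .vmdk backed up, the disk is
#    hot-added to a helper Linux VM, where its ext4 partition appears
#    as /dev/sdb1 (placeholder name).
MNT=/mnt/ise-disk
mkdir -p "$MNT"
mount /dev/sdb1 "$MNT"

# 2. "Wipe": fill the free space with zeros so those blocks can be
#    compacted away, then delete the filler. dd exits non-zero when the
#    file system fills up, which is expected here.
dd if=/dev/zero of="$MNT/zerofill" bs=1M || true
rm -f "$MNT/zerofill"
sync
umount "$MNT"

# 3. "Shrink": compact the zeroed blocks out of the thin .vmdk.
vmware-vdiskmanager -k /path/to/ise-node.vmdk
```

Repeat the mount/zero/unmount step for each ext4 file system on the disk before running the shrink.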
05-22-2017 06:53 AM
Hi Arne,
The only way to get root access to ISE is through a root patch, and a root patch can only be issued by TAC.
Regards,
-Tim