10-21-2016 01:30 AM
Hi All,
We intend to upgrade from PI 2.2 to the newest version, PI 3.1.3, so that we can manage the newest WLC release. Our plan is to back up PI 2.2 (virtual appliance) and restore the backup into a fresh 3.0 environment. However, we ran into the error below:
ERROR : Filesystem is more than 75% full
Run 'ncs cleanup' to free up space, before attempting to run upgrade again
Aborting application upgrade ...
% Application upgrade failed. Please check logs for more details.
So we drilled down into the filesystem to check disk usage. The problem is clearly the /opt partition, and within it the WCS folder under oracle/base/oradata. Could you help us fix this, or give us further suggestions? Thanks in advance!
In addition, we tried running the command "ncs cleanup" to shrink the database, but it did not help much and only freed about 5 GB. We also tried to perform an inline upgrade, and it gave us the same error that filesystem usage is over 75%. Please give us some suggestions for troubleshooting this. Thanks in advance!
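For reference, the installer's pre-upgrade check can be reproduced with ordinary Linux tools. This is a generic sketch of that kind of threshold test, not the actual PI code; the mount point and the 75% cutoff are taken from the error message above:

```shell
#!/bin/sh
# Sketch: read the Use% column for a mount point and compare it to the
# 75% threshold the PI installer enforces. On the appliance the mount
# being checked is /opt; "/" is used here only so the sketch runs anywhere.
mount_point=/
usage=$(df -P "$mount_point" | awk 'NR==2 {sub("%","",$5); print $5}')
if [ "$usage" -gt 75 ]; then
    echo "Filesystem is more than 75% full"
else
    echo "Filesystem usage is ${usage}%, under the 75% threshold"
fi
```

In the df output below, /opt is at 90% (378G of 447G used), which is why the upgrade aborts even after `ncs cleanup`.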
ade # df -lh
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/smosvg-rootvol 3.8G 314M 3.3G 9% /
/dev/mapper/smosvg-varvol 3.8G 1.5G 2.2G 41% /var
/dev/mapper/smosvg-optvol 447G 378G 47G 90% /opt
/dev/mapper/smosvg-tmpvol 1.9G 37M 1.8G 3% /tmp
/dev/mapper/smosvg-usrvol 6.6G 1.1G 5.2G 17% /usr
/dev/mapper/smosvg-recvol 93M 5.6M 83M 7% /recovery
/dev/mapper/smosvg-home 93M 5.6M 83M 7% /home
/dev/mapper/smosvg-storeddatavol 9.5G 151M 8.9G 2% /storeddata
/dev/mapper/smosvg-altrootvol 93M 5.6M 83M 7% /altroot
/dev/mapper/smosvg-localdiskvol 87G 2.4G 80G 3% /localdisk
/dev/sda2 97M 5.6M 87M 7% /storedconfig
/dev/sda1 485M 18M 442M 4% /boot
tmpfs 7.8G 2.5G 5.3G 33% /dev/shm
ade # du -lh --max-depth=6 | grep [0-9]G
3.1G ./oracle/base/product/11.2.0/dbhome_1
3.1G ./oracle/base/product/11.2.0
3.1G ./oracle/base/product
1.4G ./oracle/base/diag/tnslsnr/SLB-Cisco-PI01/wcstns
1.4G ./oracle/base/diag/tnslsnr/SLB-Cisco-PI01
1.4G ./oracle/base/diag/tnslsnr
14G ./oracle/base/diag/rdbms/wcs/wcs
14G ./oracle/base/diag/rdbms/wcs
14G ./oracle/base/diag/rdbms
16G ./oracle/base/diag
5.6G ./oracle/base/fast_recovery_area/WCS/archivelog/2016_10_21
5.6G ./oracle/base/fast_recovery_area/WCS/archivelog
3.1G ./oracle/base/fast_recovery_area/WCS/onlinelog
8.6G ./oracle/base/fast_recovery_area/WCS
8.6G ./oracle/base/fast_recovery_area
337G ./oracle/base/oradata/WCS/datafile
3.1G ./oracle/base/oradata/WCS/onlinelog
340G ./oracle/base/oradata/WCS
340G ./oracle/base/oradata
366G ./oracle/base
366G ./oracle
1.1G ./CSCOlumos/domainmaps
1.1G ./CSCOlumos/tmp
1.1G ./CSCOlumos/staging/app
1.9G ./CSCOlumos/staging
3.6G ./CSCOlumos/logs
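The du listing above was produced inside the appliance's root shell. A slightly more targeted variation, useful for confirming which subtrees dominate a nearly full mount (generic Linux commands, read-only, not a PI-specific tool):

```shell
#!/bin/sh
# Sketch: rank the largest subdirectories under a mount point, staying
# on one filesystem (-x) so other mounts don't pollute the ranking.
# Replace /opt with whichever filesystem the installer flagged.
du -xh --max-depth=2 /opt 2>/dev/null | sort -rh | head -n 10
```

Against the listing above, this would surface oradata/WCS/datafile (337G) and the fast_recovery_area archivelogs (5.6G) as the main consumers.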
Regards,
Tira
10-21-2016 02:08 AM
We had this problem in the past. Add an additional virtual disk and reboot the VM; it will automatically use the extra space (it distributes it across multiple partitions, but the bulk goes to the /opt partition).
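After adding the disk and rebooting, a quick read-only way to confirm the space landed on the flagged partition (plain Linux commands; the exact redistribution is handled by the appliance itself):

```shell
#!/bin/sh
# Sketch: print total size and usage percentage for /opt after the reboot.
# "/" is used as a fallback so the sketch also runs on systems without /opt.
mount_point=/opt
df -P "$mount_point" 2>/dev/null | awk 'NR==2 {print $2 " total, " $5 " used"}' \
    || df -P / | awk 'NR==2 {print $2 " total, " $5 " used"}'
```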
10-21-2016 02:28 AM
If you log a call with TAC they should be able to sort it out.