07-31-2019 01:11 AM
After patching to 2.4 patch 9 we are getting a flood of "High Disk Utilization: Server=" log messages from the secondary cluster member.
Has anyone else run into this problem/bug?
Reloading the cluster members does not fix it.
07-31-2019 02:37 AM
We found the problem: a full filesystem. The warning in the output below flags the root filesystem (/) at 96%, not /opt as we first suspected.
Is there any way to clean this up ourselves, or should we open a TAC case?
ise2/admin# show disks
disk repository: 17% used (4571472 of 30106488)
Internal filesystems:
/ : 96% used ( 13616252 of 14987616)
/dev : 0% used ( 0 of 32838396)
/dev/shm : 0% used ( 0 of 32849728)
/run : 1% used ( 1272 of 32849728)
/sys/fs/cgroup : 0% used ( 0 of 32849728)
/boot : 24% used ( 106511 of 487634)
/tmp : 1% used ( 9444 of 1983056)
/storedconfig : 2% used ( 1588 of 95054)
/opt : 11% used ( 110075244 of 1125348968)
/boot/efi : 4% used ( 8800 of 276312)
/run/user/440 : 0% used ( 0 of 6569948)
/run/user/301 : 0% used ( 0 of 6569948)
/run/user/321 : 0% used ( 0 of 6569948)
/opt/docker/runtime/devicemapper/mnt/f8bc361d4d73dac8f9ac7d807d530d77605e24395a1bac25044dfa13e1e4f3f9 : 1% used ( 403348 of 104805376)
/opt/docker/runtime/containers/a5ff40a2e22a9f29238625c76984100ebfc0ffd4b133d652cb5246a68d5c1806/shm : 0% used ( 0 of 65536)
/run/user/0 : 0% used ( 0 of 6569948)
/run/user/304 : 0% used ( 0 of 6569948)
/run/user/303 : 0% used ( 0 of 6569948)
/run/user/322 : 0% used ( 0 of 6569948)
warning - / is 96% used (13616252 of 14987616)
07-31-2019 08:47 AM
The only filesystem you can clean up yourself is the local disk where upgrade/patch files are stored; it can also fill up with coredumps, which you have to clean out too. If any of the other filesystems are full, such as /opt or the root filesystem, you have to open a TAC case. TAC will install a root patch to access the underlying root filesystem and figure out what needs to be cleaned up.
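For reference, a minimal sketch of what that self-service cleanup can look like from the ISE admin CLI. The filename below is a placeholder, not a real file from this thread, and the exact syntax should be verified against the CLI reference for your ISE version:

ise2/admin# dir disk:/
(lists the files held on the local disk, e.g. old upgrade/patch bundles and coredumps)
ise2/admin# delete disk:/old-patch-bundle.tar.gz
(removes a file that is no longer needed; repeat per file)
ise2/admin# show disks
(re-check utilization afterwards)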
11-25-2019 07:04 AM
@Greeners84 wrote:
I have also just applied Cisco ISE patch 9 and am seeing the same symptoms.
Do we know which directory in the underlying system would require cleaning up?
Yes, issues like this need to be worked through TAC.
01-10-2020 07:40 AM
Hello,
With Cisco TAC assistance we gained access to the underlying OS and purged archived journal files from /var/log/journal/. Running 'journalctl --vacuum-time=1d' freed up roughly 3.5 GB in my case. Over time the archive files have slowly built up again and I am now back where I started. In my case only 2 of the 4 nodes in my deployment are affected. I have been informed that this is related to a bug that will be fixed in patch 12. I would advise against installing patch 11 until this is fixed: in my case I installed 2.4 patch 11, the issue got worse, and I have lost access to one of my nodes because disk space in the underlying OS is 100% utilized. This may be different for you; seek advice from Cisco.
Regards
Adam
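For reference, a sketch of the journal cleanup Adam describes, assuming a root shell on the underlying OS obtained through TAC. The one-day retention is just the value used above; journalctl also accepts --vacuum-size as an alternative limit:

du -sh /var/log/journal/
(shows how much space the archived journals occupy)
journalctl --disk-usage
(systemd's own accounting of journal disk usage)
journalctl --vacuum-time=1d
(deletes archived journal files older than one day)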