6540 Views, 45 Helpful, 9 Replies

ISE 2.4 Patch 9 -> High Disk Utilization: Server=

bdr_noc
Level 1

After patching to 2.4 patch 9 we get massive amounts of "High Disk Utilization: Server=" log messages from the secondary cluster member.

Has anyone else seen this problem/bug?

Reloading the cluster members doesn't fix it.


9 Replies

bdr_noc
Level 1

We found the problem -> /opt is full.

Is there any possibility to clean up /opt, or should we open a TAC case?
ise2/admin# show disks

disk repository: 17% used (4571472 of 30106488)

Internal filesystems:
/ : 96% used ( 13616252 of 14987616)
/dev : 0% used ( 0 of 32838396)
/dev/shm : 0% used ( 0 of 32849728)
/run : 1% used ( 1272 of 32849728)
/sys/fs/cgroup : 0% used ( 0 of 32849728)
/boot : 24% used ( 106511 of 487634)
/tmp : 1% used ( 9444 of 1983056)
/storedconfig : 2% used ( 1588 of 95054)
/opt : 11% used ( 110075244 of 1125348968)
/boot/efi : 4% used ( 8800 of 276312)
/run/user/440 : 0% used ( 0 of 6569948)
/run/user/301 : 0% used ( 0 of 6569948)
/run/user/321 : 0% used ( 0 of 6569948)
/opt/docker/runtime/devicemapper/mnt/f8bc361d4d73dac8f9ac7d807d530d77605e24395a1bac25044dfa13e1e4f3f9 : 1% used ( 403348 of 104805376)
/opt/docker/runtime/containers/a5ff40a2e22a9f29238625c76984100ebfc0ffd4b133d652cb5246a68d5c1806/shm : 0% used ( 0 of 65536)
/run/user/0 : 0% used ( 0 of 6569948)
/run/user/304 : 0% used ( 0 of 6569948)
/run/user/303 : 0% used ( 0 of 6569948)
/run/user/322 : 0% used ( 0 of 6569948)
warning - / is 96% used (13616252 of 14987616)

Root "/" is 96% full, not /opt where the radius and tacacs logs are stored. The 11% being reported for /opt is correct, there are a couple extra digits in the total number.

You will need to contact TAC to access the underlying root file system and clean this up.

I have also just applied the Cisco ISE patch 9 and seeing the same symptoms.
Do we know which directory in the underlying system would require cleaning up?

The only filesystem that you would be able to clean up yourself is the local filesystem where upgrade/patch files are stored. Sometimes the local filesystem fills up with core dumps too, and you have to clean those out. If any of the other filesystems are full, such as /opt or root, then you have to open a TAC case. They will install a root patch, access the underlying root filesystem, and figure out what needs to be cleaned up.
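For the local filesystem, something along these lines from the admin CLI is usually enough to list and clear out old patch bundles and core dumps (the file names below are placeholders, not real files):

ise2/admin# dir disk:/
  (list patch/upgrade bundles and anything else sitting on the local disk)
ise2/admin# delete disk:/old-patch-bundle.tar.gz
  (placeholder name - remove a patch or upgrade bundle you no longer need)
ise2/admin# dir disk:/corefiles
ise2/admin# delete disk:/corefiles/core-example.tar.gz
  (placeholder name - clear out old core dumps)
ise2/admin# show disks
  (re-check utilization afterwards)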


@Greeners84 wrote:
I have also just applied the Cisco ISE patch 9 and seeing the same symptoms.
Do we know which directory in the underlying system would require cleaning up?

Yes, issues like this need to be worked through TAC.

Greeners84
Level 1
I also have a very similar problem on 2.4 patch 9. On two of the nodes in my 4-node deployment I'm seeing 90+ percent utilisation after installing patch 9. I deleted everything from disk:/ on all nodes except the patch 9 file, but 2 of the nodes were still at 93+ percent.

Cisco TAC visited, installed the root patch, found a load of files in one of the subdirectories of root, and got the disk space back down to around 75%. The following day a diagrunn.gz file was generated on one of these nodes, which pushed the disk usage back up to 94 percent. I deleted the file from the disk:/corefiles area but it still remained at 94% under 'show disk:/'.

There's something very buggy going on and I'm not aware of a fix. Not sure if the same symptoms affect patch 10 or newer versions?

Deleting the core file would not free up any overall used disk space. /localdisk, where generated core files are saved, is actually mounted in /opt and in 2.4 is a thick-provisioned 30 GB mount.

So adding/removing files in this directory doesn't free up any space other than in /localdisk when viewing 'show disks'. /localdisk will always use 30 GB of /opt.
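To illustrate with the output earlier in the thread (same numbers, not re-measured): deleting a core file only moves the repository line; /opt stays where it is because the 30 GB /localdisk carve-out is counted against /opt whether it is empty or full.

ise2/admin# show disks
disk repository: 17% used (4571472 of 30106488)   <- the /localdisk view; this drops when you delete core files
/opt : 11% used ( 110075244 of 1125348968)        <- unchanged by the delete; /localdisk always reserves 30 GB here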


Thank you for the information. Looks like I need Cisco to enable root and delete files from the underlying OS, again.

Greeners84
Level 1

Hello,

With Cisco TAC assistance we gained access to the underlying OS and purged archive files from /var/log/journal/ using the command 'journalctl --vacuum-time=1d', which freed up approx. 3.5 GB of data in my case. Over time the archive files slowly built up again and I'm now back where I started. In my case it only affects 2 nodes in my 4-node deployment.

I have been informed that this is related to a bug which will be fixed in patch 12. I would advise against installing patch 11 until this is fixed, because in my case I installed 2.4 patch 11 and now the issue is worse: I've lost access to one of my nodes because disk space in the underlying OS is 100% utilized. This may be different for you; seek advice from Cisco.
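For reference, the cleanup from the root shell looked roughly like this (prompt and retention window are illustrative; only do this with TAC guidance):

[root@ise2 ~]# du -sh /var/log/journal      # size of the archived journal files
[root@ise2 ~]# journalctl --disk-usage      # same figure as reported by journald
[root@ise2 ~]# journalctl --vacuum-time=1d  # purge archived journal entries older than one day
[root@ise2 ~]# du -sh /var/log/journal      # confirm the space (about 3.5 GB in my case) came back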

 

Regards


Adam