10387 Views · 25 Helpful · 6 Replies

Cisco ISE 2.2 sufficient disk space

Maurice Ball
Level 3

The /opt directory is showing 73% used on my ISE node. I have tried running the following command and its cleanup options, but the disk usage is still showing the same.

 

application configure ise

 

[1] Reset M&T Session Database

[2] Rebuild M&T Unusable Indexes

[3] Purge M&T Operational Data

[4] Reset M&T Database

 

Is there a command I can run that will free up disk space?

 

Internal filesystems:

/ : 34% used ( 4705772 of 14987616)

/dev : 0% used ( 0 of 8124300)

/dev/shm : 0% used ( 0 of 8134308)

/run : 1% used ( 1092 of 8134308)

/sys/fs/cgroup : 0% used ( 0 of 8134308)

/opt : 73% used ( 124852548 of 180485852)

/storedconfig : 2% used ( 1583 of 95054)

/tmp : 1% used ( 6616 of 1983056)

/boot : 23% used ( 103904 of 487634)

/run/user/440 : 0% used ( 0 of 1626864)

/run/user/301 : 0% used ( 0 of 1626864)

/run/user/321 : 0% used ( 0 of 1626864)

/run/user/0 : 0% used ( 0 of 1626864)

/run/user/304 : 0% used ( 0 of 1626864)

/run/user/303 : 0% used ( 0 of 1626864)

/run/user/322 : 0% used ( 0 of 1626864)

  all internal filesystems have sufficient free space


6 Replies

Nidhi
Cisco Employee

Please raise a TAC case and they can help you free up the space.

Arne Bier
VIP

ISE 2.2 has "issues" with managing its own disk space and it's completely out of the control of the normal user. *groan*

I have a customer whose ISE 2.2 system periodically crashes, approximately twice a year, because the disk fills up due to bugs, logs and core dumps. The system is completely unusable when this happens. Each time, TAC tells us that the problem is bug x, fixed in release [insert_latest_at_the_time]. When it happens again in 6 months, the bug will be a different one, and the fix will be release [insert_latest_here] - it makes no sense to me. The partitioning is so arcane that a process core dump can push the Linux system over the edge and cause a complete outage.

 

Luckily I have not seen this in ISE 2.4 or later.

 

For as long as I can remember administering Unix systems, going back over 25 years, a daily cron job has been the standard way to clean out junk and make sure this never happens. Cisco denies us the right to perform basic system admin on ISE. It would be a sad indictment if users had to take matters into their own hands, but at the very least we could cut the amount of time wasted on TAC cases, freeing TAC for more serious things.

#gettingoffmysoapbox
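For what it's worth, the kind of daily cleanup described above might look like this on a stock Linux box. This is purely illustrative; ISE's restricted shell doesn't allow it, and the file patterns and retention periods are assumptions, not anything ISE-specific:

```shell
# Hypothetical /etc/cron.daily/cleanup-junk for a generic Linux host
# (NOT possible on ISE; patterns and ages are illustrative assumptions).
cleanup_junk() {
    dir=$1    # directory to sweep, e.g. /var/log
    days=$2   # delete rotated logs older than this many days
    find "$dir" -name '*.gz' -mtime "+$days" -delete
    find "$dir" -name 'core.*' -delete   # core dumps go regardless of age
}
```

A cron entry would then call something like `cleanup_junk /var/log 30` once a day.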

Thanks for the information.

This is definitely hitting version 2.4 patch 9: when I upgraded from patch 8, two of my 4 nodes went over 90% disk utilisation. Deleting old patches and files from disk:/ had no effect on the disk space whatsoever. We had to get Cisco to install a patch and remove files from root subdirectories. Painful. I think the only way to get round this is a clean install and a restore of the data.

Damien Miller
VIP Alumni
How many patches do you have installed on this node?

I recently had an issue doing an inline upgrade from 2.2 to 2.4 because the 200 GB nodes did not have enough free space in /opt. Upon investigation, a large part of the disk space in /opt was files for patches and their backups.

There are a number of reasons for /opt to be full, but even if nothing is wrong this can be a problem if you have installed every patch as it was released. For example, patch 14 used a combined 12 GB of /opt on the nodes I looked at.
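If old patch bundles are eating /opt, they can be listed and removed from the ISE admin CLI. A sketch only; the patch number here is an example, so check what `show version` reports on your own node before removing anything:

```
ise/admin# show version
ise/admin# patch remove ise 8
```

Removing a superseded patch bundle reclaims both the patch files and their backups, but confirm with TAC before doing this on a production deployment.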

axa_tech_uk
Level 1

We previously hit a similar bug and ultimately needed to re-image our nodes.
