<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: ISE root disk filesystem in Network Access Control</title>
    <link>https://community.cisco.com/t5/network-access-control/ise-root-disk-filesystem/m-p/3864561#M472585</link>
    <description>I would check out the info under &lt;A href="http://cs.co/ise-help" target="_blank"&gt;http://cs.co/ise-help&lt;/A&gt; on how to get help in the community. We are not the TAC; you should open a case with them to debug this.</description>
    <pubDate>Wed, 29 May 2019 15:12:52 GMT</pubDate>
    <dc:creator>Jason Kunst</dc:creator>
    <dc:date>2019-05-29T15:12:52Z</dc:date>
    <item>
      <title>ISE root disk filesystem</title>
      <link>https://community.cisco.com/t5/network-access-control/ise-root-disk-filesystem/m-p/3864319#M472584</link>
      <description>&lt;P&gt;We are using Cisco ISE and one of the appliance servers is not calculating used disk space correctly: it states 75%, but it should be more like 70%. We have the threshold SNMP trap set to 25 ( = 75% used ) and the server started to send alerts on reaching that level. After removing some patches and core file dumps, "show disk" still says 75% used. Now, if I poll the server with snmpwalk, the used disk space is 70%. But it is still trapping...&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;show disk:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Internal filesystems:&lt;BR /&gt;/ : 75% used ( 10532540 of 14987616)&lt;BR /&gt;/dev : 0% used ( 0 of 32842248)&lt;BR /&gt;/dev/shm : 0% used ( 0 of 32851936)&lt;BR /&gt;/run : 1% used ( 1092 of 32851936)&lt;BR /&gt;/sys/fs/cgroup : 0% used ( 0 of 32851936)&lt;BR /&gt;/storedconfig : 2% used ( 1588 of 95054)&lt;BR /&gt;/boot : 21% used ( 94882 of 487634)&lt;BR /&gt;/boot/efi : 4% used ( 8792 of 276312)&lt;BR /&gt;/tmp : 1% used ( 7000 of 1983056)&lt;BR /&gt;/opt : 26% used ( 276781400 of 1125348968)&lt;BR /&gt;/run/user/440 : 0% used ( 0 of 6570388)&lt;BR /&gt;/run/user/301 : 0% used ( 0 of 6570388)&lt;BR /&gt;/run/user/321 : 0% used ( 0 of 6570388)&lt;BR /&gt;/run/user/0 : 0% used ( 0 of 6570388)&lt;BR /&gt;/run/user/304 : 0% used ( 0 of 6570388)&lt;BR /&gt;/run/user/303 : 0% used ( 0 of 6570388)&lt;BR /&gt;/run/user/322 : 0% used ( 0 of 6570388)&lt;BR /&gt;/run/user/308 : 0% used ( 0 of 6570388)&lt;BR /&gt;all internal filesystems have sufficient free space&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;System info:&lt;/P&gt;&lt;P&gt;Cisco Application Deployment Engine OS Release: 3.0&lt;BR /&gt;ADE-OS Build Version: 3.0.3.030&lt;BR /&gt;ADE-OS System Architecture: x86_64&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The second question I have is: what is taking up all the space? There are no big files in the root filesystem, yet 70% is still used. I have tried reloading the system.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regards / Fred&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 29 May 2019 08:11:39 GMT</pubDate>
      <guid>https://community.cisco.com/t5/network-access-control/ise-root-disk-filesystem/m-p/3864319#M472584</guid>
      <dc:creator>registry</dc:creator>
      <dc:date>2019-05-29T08:11:39Z</dc:date>
    </item>
    <item>
      <title>Re: ISE root disk filesystem</title>
      <link>https://community.cisco.com/t5/network-access-control/ise-root-disk-filesystem/m-p/3864561#M472585</link>
      <description>I would check out the info under &lt;A href="http://cs.co/ise-help" target="_blank"&gt;http://cs.co/ise-help&lt;/A&gt; on how to get help in the community. We are not the TAC; you should open a case with them to debug this.</description>
      <pubDate>Wed, 29 May 2019 15:12:52 GMT</pubDate>
      <guid>https://community.cisco.com/t5/network-access-control/ise-root-disk-filesystem/m-p/3864561#M472585</guid>
      <dc:creator>Jason Kunst</dc:creator>
      <dc:date>2019-05-29T15:12:52Z</dc:date>
    </item>
    <item>
      <title>Re: ISE root disk filesystem</title>
      <link>https://community.cisco.com/t5/network-access-control/ise-root-disk-filesystem/m-p/3864672#M472586</link>
      <description>&lt;P&gt;As Jason said, the root file system can't be cleaned up without TAC assistance. Usually, all we are able to clean is /localdisk, which uses the file system mounted at /opt. "show disks" gives the output of the Linux "df" command and is more accurate than the data obtained from an SNMP query; it is also unclear which OID you polled to get that data.&lt;/P&gt;</description>
      <pubDate>Wed, 29 May 2019 18:56:08 GMT</pubDate>
      <guid>https://community.cisco.com/t5/network-access-control/ise-root-disk-filesystem/m-p/3864672#M472586</guid>
      <dc:creator>hslai</dc:creator>
      <dc:date>2019-05-29T18:56:08Z</dc:date>
    </item>
    <item>
      <title>Re: ISE root disk filesystem</title>
      <link>https://community.cisco.com/t5/network-access-control/ise-root-disk-filesystem/m-p/3864753#M472587</link>
      <description>&lt;P&gt;Maybe it's time to create a cron job for ISE systems that does a daily disk cleanup (around midnight or whenever) - these have been part of every Unix system for as long as I can remember.&amp;nbsp; I am sure the TAC has gathered enough best-practice experience with the problems that occur most frequently that this could be automated.&amp;nbsp; It should not require human intervention.&lt;/P&gt;</description>
      <pubDate>Wed, 29 May 2019 21:25:37 GMT</pubDate>
      <guid>https://community.cisco.com/t5/network-access-control/ise-root-disk-filesystem/m-p/3864753#M472587</guid>
      <dc:creator>Arne Bier</dc:creator>
      <dc:date>2019-05-29T21:25:37Z</dc:date>
    </item>
    <item>
      <title>Re: ISE root disk filesystem</title>
      <link>https://community.cisco.com/t5/network-access-control/ise-root-disk-filesystem/m-p/3866797#M472588</link>
      <description>&lt;P&gt;I've seen the "show disk" output take some time to update after deleting files to free up space.&amp;nbsp; This could at least temporarily explain the difference between the command output and the SNMP polling results.&lt;/P&gt;</description>
      <pubDate>Mon, 03 Jun 2019 13:22:57 GMT</pubDate>
      <guid>https://community.cisco.com/t5/network-access-control/ise-root-disk-filesystem/m-p/3866797#M472588</guid>
      <dc:creator>packetplumber9</dc:creator>
      <dc:date>2019-06-03T13:22:57Z</dc:date>
    </item>
    <item>
      <title>Re: ISE root disk filesystem</title>
      <link>https://community.cisco.com/t5/network-access-control/ise-root-disk-filesystem/m-p/3867245#M472589</link>
      <description>&lt;P&gt;ISE does have such cron jobs to clean up files. The original post did not provide enough data and would be best investigated by TAC.&lt;/P&gt;</description>
      <pubDate>Tue, 04 Jun 2019 08:47:02 GMT</pubDate>
      <guid>https://community.cisco.com/t5/network-access-control/ise-root-disk-filesystem/m-p/3867245#M472589</guid>
      <dc:creator>hslai</dc:creator>
      <dc:date>2019-06-04T08:47:02Z</dc:date>
    </item>
  </channel>
</rss>