06-18-2020 11:36 AM
I have tried everything but am still getting the same error, "Failed to create Application context", during ncs start.
Health Monitor is running, with an error.
failed to start PI on startup Health Monitor
Matlab Server Instance 1 is running
Ftp Server is running
Database server is running
Matlab Server is running
Tftp Server is running
NMS Server is stopped.
Matlab Server Instance 2 is running
CNS Gateway with port 11011 is down
CNS Gateway SSL with port 11012 is down
CNS Gateway with port 11013 is down
CNS Gateway SSL with port 11014 is down
Plug and Play Gateway Broker with port 61617 is down
Plug and Play Gateway config, image and resource are down on https
Plug and Play Gateway is stopped.
SAM Daemon is running ...
DA Daemon is running ...
Please help if anyone has an idea.
06-18-2020 11:46 AM
This is quite an old version. Log in to the console as root and check for any disk-space issue first.
If that's not it, I would follow these steps:
ncs stop - give it 2-4 minutes
ncs cleanup
ncs start - this takes some time, give it about 15 minutes
Then check the status.
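For reference, the same sequence typed at the Prime admin console looks like this (the waits are just the rough timings above, not exact values):

```
ncs stop      # wait 2-4 minutes for all services to shut down
ncs cleanup   # confirm when prompted; clears stale locks and temp data
ncs start     # allow up to ~15 minutes for all services to come up
ncs status    # verify each service reports running
```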
06-20-2020 07:35 AM
Didn't work; still the same error.
06-19-2020 01:50 AM
Probably a bug; as stated, your Prime version is very old. Consider upgrading to the latest version of Cisco Prime.
M.
06-20-2020 07:36 AM
I know it's very old and we will upgrade to the latest, but I still need to use it for 2 months.
Thanks
Zeeshan Khan
06-19-2020 08:41 AM
Check for space issues:
This process can be used to temporarily fix space issues on all versions of Prime. Contact TAC for a more permanent resolution to these bugs:
CSCvp17013 & CSCvo65492 & CSCvr53612 & CSCvu05246
If /var is not full, but /opt is, then a new disk needs to be added to the virtual machine. This process is explained in the documentation for Prime.
If none of the above, then gather a log bundle, open a case with TAC and upload the logs.
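To see where the space is actually going before engaging TAC, something like this from the root shell works (the path and the `head` count are just illustrative):

```shell
# List the 20 largest files/directories under /var.
# -x stays on the same filesystem so other mounts are not counted;
# sort -rh sorts the human-readable sizes largest-first.
du -xah /var 2>/dev/null | sort -rh | head -20
```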
06-20-2020 06:16 AM
None of them is full, but /var is getting close. Inside /var, though, I didn't find any files in the hundreds of MBs or GBs.
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/smosvg-rootvol
3.8G 310M 3.3G 9% /
/dev/sda2 97M 5.6M 87M 7% /storedconfig
/dev/mapper/smosvg-tmpvol
1.9G 36M 1.8G 2% /tmp
/dev/mapper/smosvg-localdiskvol
122G 9.3G 106G 9% /localdisk
/dev/mapper/smosvg-home
93M 5.6M 83M 7% /home
/dev/mapper/smosvg-varvol
3.8G 3.2G 493M 87% /var
/dev/mapper/smosvg-optvol
647G 145G 469G 24% /opt
/dev/mapper/smosvg-recvol
93M 5.6M 83M 7% /recovery
/dev/mapper/smosvg-altrootvol
93M 5.6M 83M 7% /altroot
/dev/mapper/smosvg-storeddatavol
9.5G 151M 8.9G 2% /storeddata
/dev/mapper/smosvg-usrvol
6.6G 873M 5.5G 14% /usr
/dev/sda1 485M 18M 442M 4% /boot
tmpfs 7.8G 0 7.8G 0% /dev/shm
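The 87% on /var is the one to watch. A quick way to flag any filesystem at or above a threshold from this kind of output (the two sample lines are copied from the listing above; on the server you would pipe `df -hP` into awk directly, since -P keeps each entry on one line):

```shell
# Flag any filesystem at or above 80% use from df -hP style output.
# awk's $5+0 converts "87%" to the number 87 for the comparison.
df_sample='/dev/mapper/smosvg-varvol 3.8G 3.2G 493M 87% /var
/dev/mapper/smosvg-optvol 647G 145G 469G 24% /opt'

echo "$df_sample" | awk '$5+0 >= 80 {print $6, "(" $5 ")"}'   # -> /var (87%)
```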
/var
ade # ls -lah /var/
total 192K
drwxr-xr-x 23 root root 4.0K Jun 18 20:49 .
drwxr-xr-x 29 root root 4.0K Jun 18 20:48 ..
drwxr-xr-x 9 root root 4.0K Nov 8 2016 KickStart
drwxr-xr-x 2 root root 4.0K Nov 7 2016 account
drwxr-xr-x 6 root root 4.0K Nov 7 2016 cache
drwxr-xr-x 2 root root 4.0K May 21 2013 crash
drwxr-xr-x 3 root root 4.0K Nov 7 2016 db
drwxr-xr-x 3 root root 4.0K Nov 7 2016 empty
drwxr-xr-x 2 root root 4.0K Oct 1 2009 games
drwxr-xr-x 20 root root 4.0K Nov 7 2016 lib
drwxr-xr-x 2 root root 4.0K Oct 1 2009 local
drwxrwxr-x 6 root lock 4.0K Jun 20 04:03 lock
drwxrwxr-x 12 prime gadmin 4.0K Jun 20 04:03 log
drwx------ 2 root root 16K Nov 7 2016 lost+found
lrwxrwxrwx 1 root root 10 Nov 7 2016 mail -> spool/mail
drwxr-x--- 5 root named 4.0K Nov 7 2016 named
drwxr-xr-x 2 root root 4.0K Oct 1 2009 nis
drwxr-xr-x 2 root root 4.0K Oct 1 2009 opt
drwxr-xr-x 2 root root 4.0K Oct 1 2009 preserve
drwxr-xr-x 2 root root 4.0K Aug 29 2012 racoon
drwxrwxrwx 16 root root 4.0K Jun 20 14:37 run
drwxr-xr-x 11 root root 4.0K Nov 7 2016 spool
drwxrwxrwt 3 root root 4.0K Jun 18 20:51 tmp
drwxr-xr-x 2 root root 4.0K Oct 1 2009 yp
/var/log
ade # ls -lah /var/log/
total 6.7M
drwxrwxr-x 12 prime gadmin 4.0K Jun 20 04:03 .
drwxr-xr-x 23 root root 4.0K Jun 18 20:49 ..
drwxrwxr-x 2 prime gadmin 4.0K Jun 20 04:03 ade
-rw-rw-r-- 1 prime gadmin 282K Nov 7 2016 anaconda.log
-rw-rw-r-- 1 prime gadmin 43K Nov 7 2016 anaconda.syslog
drwxrwxr-- 2 prime gadmin 4.0K Oct 18 2019 audit
-rw-rw-rw- 1 prime gadmin 3.4K Jun 5 03:57 backup-success.log
-rw-rw-rw- 1 prime gadmin 3.4K Jun 5 03:57 backup.log
-rw-rw-r-- 1 prime gadmin 0 Jun 20 04:03 boot.log
-rw-rw-r-- 1 prime gadmin 0 Jun 19 04:03 boot.log.1
-rw-rw-r-- 1 prime gadmin 0 Jun 18 04:03 boot.log.2
-rw-rw-r-- 1 prime gadmin 0 Jun 17 04:03 boot.log.3
-rw-rw-r-- 1 prime gadmin 0 Jun 16 04:03 boot.log.4
-rw-rw-r-- 1 prime gadmin 4.9K Jun 8 19:53 btmp
drwxrwxr-x 2 prime gadmin 4.0K Jun 28 2007 conman
drwxrwxr-x 2 prime gadmin 4.0K Jun 28 2007 conman.old
-rw-rw-r-- 1 prime gadmin 108K Jun 20 15:08 cron
-rw-rw-r-- 1 prime gadmin 234K Jun 20 04:03 cron.1
-rw-rw-r-- 1 prime gadmin 228K Jun 19 04:03 cron.2
-rw-rw-r-- 1 prime gadmin 233K Jun 18 04:03 cron.3
-rw-rw-r-- 1 prime gadmin 230K Jun 17 04:03 cron.4
-rw-rw-r-- 1 prime gadmin 36K Jun 18 20:48 dmesg
-rw-rw-r-- 1 prime gadmin 16K Jun 20 14:54 faillog
drwxrwxr-x 2 prime gadmin 4.0K Nov 8 2016 httpd
-rw-rw-r-- 1 prime gadmin 143K Jun 20 14:54 lastlog
drwxrwxr-x 2 prime gadmin 4.0K Nov 7 2016 mail
-rw-rw-r-- 1 prime gadmin 94K Jun 20 15:06 maillog
-rw-rw-r-- 1 prime gadmin 203K Jun 20 04:03 maillog.1
-rw-rw-r-- 1 prime gadmin 197K Jun 19 04:03 maillog.2
-rw-rw-r-- 1 prime gadmin 203K Jun 18 04:03 maillog.3
-rw-rw-r-- 1 prime gadmin 199K Jun 17 04:03 maillog.4
-rw-rw-r-- 1 prime gadmin 1.4K Jun 20 14:54 messages
-rw-rw-r-- 1 prime gadmin 159 Jun 19 04:03 messages.1
-rw-rw-r-- 1 prime gadmin 7.8K Jun 18 21:22 messages.2
-rw-rw-r-- 1 prime gadmin 159 Jun 17 04:03 messages.3
-rw-rw-r-- 1 prime gadmin 7.7K Jun 16 20:18 messages.4
drwxrwxr-x 2 prime gadmin 4.0K Nov 7 2016 pm
drwxrwxr-x 2 prime gadmin 4.0K Nov 8 2016 prelink
-rw-r--r-- 1 root root 15K Jun 20 04:03 rpmpkgs
-rw-rw-r-- 1 prime gadmin 15K Jun 13 04:04 rpmpkgs.1
-rw-rw-r-- 1 prime gadmin 15K Jun 6 04:03 rpmpkgs.2
-rw-rw-r-- 1 prime gadmin 15K May 30 04:03 rpmpkgs.3
-rw-rw-r-- 1 prime gadmin 15K May 23 04:03 rpmpkgs.4
drwxrwxr-x 2 prime gadmin 4.0K Jun 20 00:00 sa
-rw-rw-r-- 1 prime gadmin 1.1M Jun 20 15:08 secure
-rw-rw-r-- 1 prime gadmin 61K Jun 20 04:02 secure.1
-rw-rw-r-- 1 prime gadmin 1.5M Jun 19 04:02 secure.2
-rw-rw-r-- 1 prime gadmin 61K Jun 18 04:02 secure.3
-rw-rw-r-- 1 prime gadmin 1.4M Jun 17 04:02 secure.4
-rw-rw-r-- 1 prime gadmin 0 Jun 20 04:03 spooler
-rw-rw-r-- 1 prime gadmin 0 Jun 19 04:03 spooler.1
-rw-rw-r-- 1 prime gadmin 0 Jun 18 04:03 spooler.2
-rw-rw-r-- 1 prime gadmin 0 Jun 17 04:03 spooler.3
-rw-rw-r-- 1 prime gadmin 0 Jun 16 04:03 spooler.4
drwxrwxr-- 2 prime gadmin 4.0K Sep 1 2014 squid
-rw-rw-r-- 1 prime gadmin 0 Nov 7 2016 tallylog
-rw-rw-r-- 1 prime gadmin 60K Jun 20 14:54 wtmp
-rw-rw-r-- 1 prime gadmin 0 May 1 04:03 wtmp.1
06-20-2020 07:34 AM
Now /var is at 9%, but still the same issue.
06-20-2020 11:04 AM
This version is quite old and TAC may not accept the request; you can contact them about an upgrade if possible.
There are several bugs reported on this version, so I suggest upgrading to the latest stable release.
06-22-2020 05:28 AM
As Prime is not able to start and you need to use it for 2 months, I would suggest installing a fresh version 3.0 and restoring a backup to it. TAC is available to help in this regard; just open a case. The install takes about 3 hours, and the restore will also take about 3 hours. After the restore is successful, upgrade Prime to 3.6 or higher. Versions of Prime before 3.5 have an EOL notice out.
Here is some info on upgrading from 2.2 to 3.0:
EOL notice for 3.5 and earlier:
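The backup/restore path mentioned above has roughly this shape at the admin CLI ("myrepo" and "pi-backup" are placeholders; substitute your configured repository and a backup name of your choosing):

```
# On the old server:
backup pi-backup repository myrepo application NCS
# Install a fresh 3.0 server, configure the same repository, then:
restore <backup-file> repository myrepo application NCS
```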
06-22-2020 11:20 AM
Hi, I am planning to do the same as you suggested. Thank you for your response.