Beginner

Unable to start Prime Infra 2.2 services.

Hi,

Seeking help here. Has anybody encountered this issue and can advise me on how to fix it?

I have tried manually stopping and starting the services, an NCS cleanup, and an NCS DB re-initialization, but none of that has helped.
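For reference, the stop/start and cleanup steps were run from the admin CLI roughly as follows (a sketch of the standard PI admin-shell commands; the exact DB re-initialization syntax is omitted here):

SentinelPrime/admin# ncs stop
SentinelPrime/admin# ncs start
SentinelPrime/admin# ncs cleanup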

=========================================================================

SentinelPrime/admin# ncs status
Exception in thread "main" java.util.NoSuchElementException
at java.util.Scanner.throwFor(Unknown Source)
at java.util.Scanner.next(Unknown Source)
at com.cisco.common.ha.config.HighAvailabilityConfig.parseXmlFile(HighAvailabilityConfig.java:212)
at com.cisco.common.ha.config.HighAvailabilityConfig.parseXmlFile(HighAvailabilityConfig.java:194)
at com.cisco.common.ha.config.HighAvailabilityConfig.<init>(HighAvailabilityConfig.java:174)
at com.cisco.common.ha.config.HighAvailabilityConfig.getInstance(HighAvailabilityConfig.java:183)
at com.cisco.packaging.HMAdmin.initHealthMonitorApi(HMAdmin.java:404)
at com.cisco.packaging.HMAdmin.hmServerStatus(HMAdmin.java:299)
at com.cisco.packaging.HMAdmin.status(HMAdmin.java:170)
at com.cisco.packaging.WCSAdmin.status(WCSAdmin.java:914)
at com.cisco.packaging.WCSAdmin.status(WCSAdmin.java:888)
at com.cisco.packaging.WCSAdmin.runMain(WCSAdmin.java:294)
at com.cisco.packaging.WCSAdmin.main(WCSAdmin.java:1123)
CNS Gateway with port 11011 is down
CNS Gateway SSL with port 11012 is down
CNS Gateway with port 11013 is down
CNS Gateway SSL with port 11014 is down
Plug and Play Gateway Broker with port 61617 is down
Plug and Play Gateway config, image and resource are down on https
Plug and Play Gateway is stopped.
SAM Daemon is stopped.
DA Daemon is stopped.

==================================================================================

Please help.

Thanks.


12 Replies
Beginner


Were you able to resolve this issue? I have the same errors.

Beginner


I didn't manage to solve it; instead I reinstalled the whole PI. (This reply was marked as the accepted solution.)


Cisco Employee


Hi,
This could be because the haSettings.xml file is corrupted on the system.
Issue the command below from the root/shell CLI to verify the file size and check:
du -sh /opt/CSCOlumos/conf/rfm/classes/com/cisco/common/ha/config/haSettings.xml
-- Replace the file with a copy from a working lab server and remove the content between <HighAvailabilityServerInstanceConfiguration> and </HighAvailabilityServerInstanceConfiguration>, the HighAvailabilityServerInstanceConfiguration tags inclusive.
-- Restarting the services should then resolve the problem. However, I would recommend getting TAC assistance to perform this.
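For reference, a minimal sketch of those steps from the root shell, assuming a reachable healthy lab server (the lab-pi hostname below is a placeholder) and a service restart from the admin CLI afterwards:

# back up the suspect file first
cp /opt/CSCOlumos/conf/rfm/classes/com/cisco/common/ha/config/haSettings.xml /root/haSettings.xml.bak
# copy a known-good file from a working lab server
scp root@lab-pi:/opt/CSCOlumos/conf/rfm/classes/com/cisco/common/ha/config/haSettings.xml /opt/CSCOlumos/conf/rfm/classes/com/cisco/common/ha/config/haSettings.xml
# then restart the services from the admin CLI (ncs stop, then ncs start)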

Beginner


Now the same problem has come back, and I can't even access /opt to remove files.

 

temp. space 2% used (35896 of 1967952)
disk: 1% used (188376 of 90305368)

Internal filesystems:
  warning - /opt is 100% used (443594788 of 468476992)

SentinelPrime/admin# dir disk:opt

Directory of disk:opt
% Error: Could not access directory disk:opt
SentinelPrime/admin#

Any advice?

Cisco Employee


Hi,
Prime will not let you access the /opt volume details from the admin CLI. If you have shell/root access, execute the commands below and share the outputs here.
df -h
du -h --max-depth=6 /opt | grep [0-9]G | sort -k2
Thank you,
Regards,
Govardhan
Beginner


Results are as below:


ade # df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/smosvg-rootvol
                      3.8G  312M  3.3G   9% /
/dev/mapper/smosvg-varvol
                      3.8G  618M  3.0G  17% /var
/dev/mapper/smosvg-optvol
                      447G  424G  690M 100% /opt
/dev/mapper/smosvg-tmpvol
                      1.9G   35M  1.8G   2% /tmp
/dev/mapper/smosvg-usrvol
                      6.6G  1.2G  5.1G  19% /usr
/dev/mapper/smosvg-recvol
                       93M  5.6M   83M   7% /recovery
/dev/mapper/smosvg-home
                       93M  5.6M   83M   7% /home
/dev/mapper/smosvg-storeddatavol
                      9.5G  151M  8.9G   2% /storeddata
/dev/mapper/smosvg-altrootvol
                       93M  5.6M   83M   7% /altroot
/dev/mapper/smosvg-localdiskvol
                       87G  184M   82G   1% /localdisk
/dev/sda2              97M  5.6M   87M   7% /storedconfig
/dev/sda1             485M   18M  442M   4% /boot
tmpfs                 7.8G     0  7.8G   0% /dev/shm
ade # du -h --max-depth=6 /opt | grep [0-9]G | sort -k2


423G    /opt
12G     /opt/CSCOlumos
4.6G    /opt/CSCOlumos/NCS_10.32.69.211
1.1G    /opt/CSCOlumos/NCS_10.32.69.211/decap
2.8G    /opt/CSCOlumos/logs
1.9G    /opt/CSCOlumos/staging
1.1G    /opt/CSCOlumos/staging/app
412G    /opt/oracle
412G    /opt/oracle/base
1.1G    /opt/oracle/base/diag
193G    /opt/oracle/base/fast_recovery_area
3.1G    /opt/oracle/base/fast_recovery_area/WCS
3.1G    /opt/oracle/base/fast_recovery_area/WCS/onlinelog
190G    /opt/oracle/base/fast_recovery_area/WCSS1
179G    /opt/oracle/base/fast_recovery_area/WCSS1/archivelog
24G     /opt/oracle/base/fast_recovery_area/WCSS1/archivelog/2017_03_23
40G     /opt/oracle/base/fast_recovery_area/WCSS1/archivelog/2017_03_24
40G     /opt/oracle/base/fast_recovery_area/WCSS1/archivelog/2017_03_25
40G     /opt/oracle/base/fast_recovery_area/WCSS1/archivelog/2017_03_26
36G     /opt/oracle/base/fast_recovery_area/WCSS1/archivelog/2017_03_27
8.1G    /opt/oracle/base/fast_recovery_area/WCSS1/flashback
3.1G    /opt/oracle/base/fast_recovery_area/WCSS1/onlinelog
214G    /opt/oracle/base/oradata
6.7G    /opt/oracle/base/oradata/WCS
3.7G    /opt/oracle/base/oradata/WCS/datafile
3.1G    /opt/oracle/base/oradata/WCS/onlinelog
208G    /opt/oracle/base/oradata/WCSS1
202G    /opt/oracle/base/oradata/WCSS1/datafile
3.1G    /opt/oracle/base/oradata/WCSS1/onlinelog
3.1G    /opt/oracle/base/oradata/WCSS1/standbylog
3.1G    /opt/oracle/base/product
3.1G    /opt/oracle/base/product/11.2.0
3.1G    /opt/oracle/base/product/11.2.0/dbhome_1
ade #

Cisco Employee


Hi,

I see a lot of space consumed by archive logs from the secondary database (WCSS1). This needs further investigation before removing them.

179G    /opt/oracle/base/fast_recovery_area/WCSS1/archivelog
24G     /opt/oracle/base/fast_recovery_area/WCSS1/archivelog/2017_03_23
40G     /opt/oracle/base/fast_recovery_area/WCSS1/archivelog/2017_03_24
40G     /opt/oracle/base/fast_recovery_area/WCSS1/archivelog/2017_03_25
40G     /opt/oracle/base/fast_recovery_area/WCSS1/archivelog/2017_03_26
36G     /opt/oracle/base/fast_recovery_area/WCSS1/archivelog/2017_03_27

 Feel free to open up a TAC case.
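For reference only: archived redo logs are normally purged with Oracle RMAN from the shell rather than by deleting files directly. A minimal sketch, assuming shell access as the Oracle software owner and that the older logs have already been confirmed safe to remove (please verify with TAC first):

su - oracle
rman target /
RMAN> CROSSCHECK ARCHIVELOG ALL;
RMAN> DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-2';
RMAN> EXIT;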

Thank you,
Regards,
Govardhan

Beginner


Can you advise how to remove them?

VIP Engager


      - Check your Prime server disk space (check all partitions), for example with the checks below.
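A quick way to check, assuming a standard PI 2.x ADE-OS install: from the admin CLI run

show disks

or, with root/shell access,

df -h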

M.

Beginner


Thanks, it looks like the /opt directory was at 100%. We were able to move some files around to get it working.
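As a rough illustration only, and assuming the files in question (such as the dated archive-log directories in the du output above) are confirmed safe to relocate, older data can be moved off /opt to the /localdisk partition, which still has free space:

mv /opt/oracle/base/fast_recovery_area/WCSS1/archivelog/2017_03_23 /localdisk/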

Beginner


I checked my disk space and only 13% is being used.

Beginner


I checked and my whole disk space is only 13% used.
