
MARS 6.0.5 Incident Details Times Out

natehausrath
Level 1

We have a MARS-50 that I recently upgraded to version 6.0.5.  I did a series of upgrades, so this may or may not be related to 6.0.5.

Everything seems to work fine except the incidents.  If I click on an incident number to view the details, MARS will begin to bring up the page, but it will always time out before it displays any of the incident data.

I'm not sure what the issue is or even how to diagnose it.  pnstatus shows everything running fine, and 'show healthinfo' also looks fine.  The system memory is completely used, however.

Has anyone seen this before?

9 Replies

Farrukh Haroon
VIP Alumni

I would try restarting the services or rebooting the box.

Regards

Farrukh

Thanks for the suggestion, but it did not fix the problem.

Can you please post the output of sysstatus (the top frontend)?

https://supportforums.cisco.com/docs/DOC-2990.pdf


Regards

Farrukh


Are you looking for anything in particular?

sysstatus - 16:20:16 up 1 day,  1:06,  1 user,  load average: 2.54, 2.21, 2.11
Tasks: 156 total,   1 running, 155 sleeping,   0 stopped,   0 zombie
Cpu(s): 10.3% us, 11.9% sy,  0.0% ni,  0.0% id, 77.2% wa,  0.7% hi,  0.0% si
Mem:   2074880k total,  2073360k used,     1520k free,     3216k buffers
Swap:  4194272k total,    22676k used,  4171596k free,  1152352k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
4642 oracle    15   0  387m 105m 102m D 16.3  5.2  28:20.28 oracle
3128 pnadmin   18   0  214m  67m 7468 S  2.0  3.3  14:31.44 pnesloader
   48 root      15   0     0    0    0 D  0.7  0.0   1:31.68 kswapd0
3158 pnadmin   15   0  472m  66m  14m S  0.7  3.3   1:20.32 pnparser
3125 pnadmin   16   0  122m  12m 7812 S  0.3  0.6   2:05.45 pnarchiver
    1 root      15   0  1796  568  480 S  0.0  0.0   0:02.87 init
    2 root      34  19     0    0    0 S  0.0  0.0   0:00.11 ksoftirqd/0
    3 root       5 -10     0    0    0 S  0.0  0.0   0:00.04 events/0
    4 root       6 -10     0    0    0 S  0.0  0.0   0:00.00 khelper
    5 root      15 -10     0    0    0 S  0.0  0.0   0:00.00 kacpid
   28 root       5 -10     0    0    0 S  0.0  0.0   0:00.00 kblockd/0
   46 root      15   0     0    0    0 S  0.0  0.0   0:00.02 pdflush
   47 root      15   0     0    0    0 S  0.0  0.0   0:00.17 pdflush
   49 root      12 -10     0    0    0 S  0.0  0.0   0:00.00 aio/0
   29 root      15   0     0    0    0 S  0.0  0.0   0:00.00 khubd
  195 root      25   0     0    0    0 S  0.0  0.0   0:00.00 kseriod
  326 root       6 -10     0    0    0 S  0.0  0.0   0:00.00 ata/0
  328 root      21   0     0    0    0 S  0.0  0.0   0:00.00 scsi_eh_0
  329 root      22   0     0    0    0 S  0.0  0.0   0:00.00 scsi_eh_1
  344 root      15   0     0    0    0 S  0.0  0.0   0:00.05 md0_raid1
  345 root      15   0     0    0    0 S  0.0  0.0   0:04.09 kjournald
1463 root       6 -10  1992  452  376 S  0.0  0.0   0:00.00 udevd
1532 root      15   0     0    0    0 S  0.0  0.0   0:00.00 kedac
1535 root      25   0     0    0    0 S  0.0  0.0   0:00.00 scsi_eh_2
1536 root      25   0     0    0    0 S  0.0  0.0   0:00.00 scsi_eh_3
1537 root      25   0     0    0    0 S  0.0  0.0   0:00.00 scsi_eh_4
1538 root      25   0     0    0    0 S  0.0  0.0   0:00.00 scsi_eh_5
1790 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kjournald
1791 root      15   0     0    0    0 S  0.0  0.0   0:00.80 kjournald
1792 root      15   0     0    0    0 S  0.0  0.0   0:01.00 kjournald
1793 root      15   0     0    0    0 S  0.0  0.0   0:00.22 kjournald
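An aside for anyone reading output like this later: the telling figure above is not CPU load but `wa` (I/O wait) at 77.2%, together with only ~1.5 MB of free memory and the `oracle` and `kswapd0` processes sitting in `D` (uninterruptible disk sleep) state — the box is disk-bound, not CPU-bound. A minimal sketch of pulling the percentages out of a captured `Cpu(s):` line (a hypothetical helper, not part of MARS):

```python
import re

def parse_cpu_line(line):
    """Parse a top-style 'Cpu(s):' line into a dict like {'us': 10.3, 'wa': 77.2, ...}."""
    # Matches pairs such as "77.2% wa" anywhere in the line.
    return {name: float(val) for val, name in re.findall(r"([\d.]+)%\s*(\w+)", line)}

cpu = parse_cpu_line(
    "Cpu(s): 10.3% us, 11.9% sy,  0.0% ni,  0.0% id, 77.2% wa,  0.7% hi,  0.0% si"
)
if cpu.get("wa", 0.0) > 30.0:
    # Sustained high I/O wait means processes are stalled on disk, which fits
    # the symptom of incident pages timing out while the DB grinds.
    print("High I/O wait (%.1f%%): appliance is disk-bound, not CPU-bound" % cpu["wa"])
```

The 30% threshold here is an illustrative cutoff, not a MARS-documented limit.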

Thanks.

I was asking because if the Oracle process is one of the high-CPU PIDs, then most likely MARS is (or was) still rebuilding the database after the upgrade/re-image.


Regards

Farrukh

How long does this normally take?  I did the upgrades about a month ago.

Hello Nate

This should only last a few hours (or, at most, a few days). There is no chance it should still be going on after a month. I think you should check the backend logs for any unusual messages.


Regards

Farrukh

I was finally able to resolve this issue.  I opened a TAC case and we discovered that several index tables in the database were missing.  Once these tables were rebuilt and MARS churned through all the data, I was able to view incidents again.

Thanks for your help though.

I'm glad your issue was resolved, and thanks for sharing the resolution with all of us.