
High IO on Prime after upgrade from 3.0 to 3.1

Kanes Ramasamy
Level 1

Hi All,

We are seeing high IO on Cisco Prime after upgrading from 3.0 to 3.1.

Is this normal behaviour, or could it be caused by a bug?

Please share your thoughts/experience. 

Thanks in advance. 

Regards,

Kanes.R

17 Replies

Did you install a new instance of Prime?

What was the initial version in the new install, and were there any subsequent updates after it?

If the database was corrupted, were all of your data/settings lost?

Tommy Pitts
Officer, Network Engineer
The Pew Charitable Trusts

On Jul 26, 2018, m-niemi (Beginner) posted a new reply in Network Management:

Hi,

Still no clue what caused the high IO. It seemed like there were "too many" upgrades in the database and the DB somehow got corrupted.

Solved the problem by installing a fresh copy of PI.

Br,

Mika


BOG
Level 1

Hello,

We had the same issue on a virtual-appliance PI after upgrading to 3.2 (from 3.0 or 3.1): about 1,500 read IOPS, 500-600 MB/s sustained, usually lasting from one hour to a few hours, disappearing by itself and recurring a few hours later. There were no tasks, heavy deployments, or other activity during that time that might correlate, not even users logged in to the GUI. Investigating more closely from the VM shell, we found up to 15 threads in the Oracle DB persistently doing full table scans over a table of some 50-60 K records, each reading 30 to 60 MB/s. With the help of strong DBA expertise we created an index over one or two columns, and the behaviour disappeared. In other words, we mitigated the negative impact of the excessive IO by optimising the DB data tier to reduce physical reads/fetches from disk. Once we did that (on our own) and shared the results with TAC, we got the expected reply that this is not a supported configuration and we should not proceed further.
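For anyone wanting to check for the same pattern before changing anything, something along these lines can show which cached statements are doing full table scans and how much disk they read (a sketch only; it assumes direct SQL*Plus access to the PI Oracle instance, which TAC considers an unsupported configuration, and uses standard Oracle dynamic performance views):

-- Cached statements whose plan contains a full table scan,
-- ordered by physical disk reads (heaviest readers first).
SELECT s.sql_id,
       p.object_owner,
       p.object_name,
       s.executions,
       s.disk_reads,
       ROUND(s.disk_reads / NULLIF(s.executions, 0)) AS reads_per_exec,
       s.sql_text
FROM   v$sql s
       JOIN v$sql_plan p
         ON  p.sql_id       = s.sql_id
         AND p.child_number = s.child_number
WHERE  p.operation = 'TABLE ACCESS'
  AND  p.options   = 'FULL'
ORDER  BY s.disk_reads DESC;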

 

After a few WebEx sessions and all the mandatory checks of virtual hardware sizing and ESX resource priority, TAC came up with the same solution: create a combined index over the two columns.

So at the end of the day we still see an enormous number of SELECT executions, but they are now efficient from a DB read perspective.
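As a quick way to verify that, disk reads per execution for the offending statements can be checked in v$sqlarea (again just a sketch; the schema filter is an assumption, so replace it with whatever schema owns the ALARM/EVENT tables on your install):

-- Per-statement read efficiency: after indexing, disk_reads per execution
-- for the heavy SELECTs should drop close to zero.
SELECT sql_id,
       executions,
       disk_reads,
       buffer_gets,
       ROUND(disk_reads  / NULLIF(executions, 0), 1) AS disk_reads_per_exec,
       ROUND(buffer_gets / NULLIF(executions, 0), 1) AS buffer_gets_per_exec
FROM   v$sqlarea
WHERE  parsing_schema_name = 'WCSDBA'  -- assumed schema name; adjust to your install
ORDER  BY disk_reads DESC;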

 

Obviously this needs to be fixed at the code / business-logic level.

 

Hope this helps,

Irakli

We had a similar issue at a customer's site. Their Oracle gurus fixed it by creating the following indexes; perhaps it will help someone here:

 

CREATE INDEX IDRDEVICEID_TMP_ZZZ ON ALARM
(IDRDEVICEID)
LOGGING
TABLESPACE TS_IDX
PCTFREE    10
INITRANS   2
MAXTRANS   255
STORAGE    (
            INITIAL          64K
            NEXT             1M
            MINEXTENTS       1
            MAXEXTENTS       UNLIMITED
            PCTINCREASE      0
            BUFFER_POOL      DEFAULT
           );

CREATE INDEX ZZZ_TEMP ON EVENT
(MACADDRESSRRM, CLASSNAME, IFTYPERRM, POWERUPDATEEVENTTIMERRM)
LOGGING
TABLESPACE TS_IDX
PCTFREE    10
INITRANS   2
MAXTRANS   255
STORAGE    (
            INITIAL          64K
            NEXT             1M
            MINEXTENTS       1
            MAXEXTENTS       UNLIMITED
            PCTINCREASE      0
            BUFFER_POOL      DEFAULT
           );
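After creating indexes like these, it is worth confirming the optimizer actually uses them for the statements that were doing the heavy reads. A minimal check (the SELECT below is only a hypothetical stand-in; substitute the real statement captured from v$sql on your system):

-- Verify the plan now uses the new index instead of a full table scan.
EXPLAIN PLAN FOR
  SELECT * FROM ALARM WHERE IDRDEVICEID = :device_id;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- The plan should show INDEX RANGE SCAN on IDRDEVICEID_TMP_ZZZ
-- rather than TABLE ACCESS FULL on ALARM.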
