DCNM 6.1.1a and Nexus 5020

Fernando Bello
Level 1

Hi all,

I've got a problem that I am not able to figure out.

I've got two Nexus 5020 switches running:

Software

  system:    version 5.0(2)N2(1)

  system image file is:    bootflash:/n5000-uk9.5.0.2.N2.1.bin

Hardware

  cisco Nexus5020 Chassis ("40x10GE/Supervisor")

I use Nagios for management and, so far, everything has been working fine. A couple of days ago I decided to install DCNM 6.1.1a on a Red Hat Linux server to test the application and see what I can do with the Nexus switches. The problem is that every time I work with the DCNM application, the CPU on the switches goes up a lot.

The output of show process cpu history, summarized, is the following:

- CPU% per second (last 60 seconds): mostly 5-10%, with occasional spikes in the 20-60% range and at least one spike to 100%
- CPU% per minute (last 60 minutes): maximum of 100% in every minute, average around 10%
- CPU% per hour (last 72 hours): hourly maxima around 13-20%, average around 10%
- A second per-minute graph (last 60 minutes): per-minute maxima mostly in the 70-100% range, average around 10%

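To see which process is actually driving the CPU while DCNM is connected, I understand the usual NX-OS commands are the ones below (syntax from memory, so please correct me if it differs on 5.0(2)N2(1); plain show processes cpu should also work if the sort keyword is not available):

show processes cpu sort
show system resources

If snmpd or another management process sits at the top of the sorted list while the spikes happen, that would point at the management/polling side rather than the data plane.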
Could SNMP polling be the cause of this high CPU? How could I confirm it? Maybe it is a bug, but I could not find anything.
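One way I was thinking to confirm the SNMP side (again from memory, so the exact options may vary by release) is to watch the SNMP packet counters and capture the polling traffic on the mgmt interface:

show snmp
ethanalyzer local interface mgmt capture-filter "udp port 161" limit-captured-frames 100

show snmp includes SNMP packets input/output counters, so an input counter that jumps while DCNM is running a discovery would at least correlate the polling with the spikes, and the ethanalyzer capture shows how aggressively the MIBs are being walked.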

Any ideas or comments are welcome!

Thanks in advance,

Fernando

1 Reply

Fernando Bello
Level 1

Just to close this thread: we saw that the average CPU utilization is around 10%. The CPU spikes might be due to the polling interval or might just be cosmetic in the output. There are no output errors, no discards, and no traffic loss.
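For reference, we verified the error/discard side with something like the commands below (ethernet 1/1 is just an example interface, and counter names can vary slightly by release):

show interface counters errors
show interface ethernet 1/1 | include discard

Both came back clean on our switches, so the spikes look like a management-plane effect of the polling rather than anything affecting forwarding.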

Thanks,

Fernando