3750X Stack high CPU

Ben Cargill
Level 1

Seeing very high CPU on one of our 3750X stacks. Usually it's around 30%, but recently it's been hitting 70, 80, 90, or maxing out at 100% during the main part of the day.

IOS version: c3750e-universalk9-mz.122-55.SE1.bin.

It has one ten-gig uplink connected to a Nexus 7K and another ten-gig uplink connected to a second 7K. The 3750X is running Layer 3 with EIGRP.

Is this normal for the CPU?

#sh processes cpu history

    5555555554444455555555555555588888666669999888885555577777
    8888000001111111111222227777755555666662222222229999977777
100
 90                              *****     ****
 80                              *****     *********     *****
 70                              *******************     *****
 60 ****                    **********************************
 50 *********     ********************************************
 40 **********************************************************
 30 **********************************************************
 20 **********************************************************
 10 **********************************************************
   0....5....1....1....2....2....3....3....4....4....5....5....
             0    5    0    5    0    5    0    5    0    5
               CPU% per second (last 60 seconds)

                      1 1
    9767879979787656790608888987655996687678888766788688899779
    2656585452606260420108897312089806303930545519713514611107
100       *           * *          *                         *
 90 *   * ** *       ** ******     **       * *         ***  *
 80 ** ***#******    ** #**#*#*    #*  *   *****  *** *****  *
 70 #***####*****   **# #######*   #** *****###* *#****#*##**#
 60 ##*##########****##*#######****#******######*#############
 50 ##########################################################
 40 ##########################################################
 30 ##########################################################
 20 ##########################################################
 10 ##########################################################
   0....5....1....1....2....2....3....3....4....4....5....5....
             0    5    0    5    0    5    0    5    0    5
               CPU% per minute (last 60 minutes)
              * = maximum CPU%   # = average CPU%

    1                          1
    0999999984496655586578999990999995896766565655755655456655594555655696
    0856979738992746602908899920997884596100357434396030822425198632325243
100 ********   *          **** ******  *                       *
 90 ********   *         ************ **                       *        *
 80 *********  *     *   ************ **                       *        *
 70 *********  * *   *  ***##***##*** ****   *    *            *        *
 60 ##*####**  *** *******####*####** ****** ***  ****    ** * * *  * ****
 50 ########*************###########**************************************
 40 #########***********#############*************************************
 30 ######################################################################
 20 ######################################################################
 10 ######################################################################
   0....5....1....1....2....2....3....3....4....4....5....5....6....6....7.
             0    5    0    5    0    5    0    5    0    5    0    5    0
                   CPU% per hour (last 72 hours)

#sh processes cpu sorted
CPU utilization for five seconds: 50%/5%; one minute: 58%; five minutes: 63%
 PID Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process
 159  2834098413 336879982       8412 15.33% 15.24% 15.28%   0 Hulc LED Process
 201    87242591 171844402        507 11.82% 17.25% 20.74%   0 IP Input
  75   977278274 159196566       6138  5.59%  4.95%  4.90%   0 RedEarth Tx Mana
  74   321267248 260833627       1231  2.07%  2.44%  2.56%   0 RedEarth I2C dri
 119   288345412  34415982       8378  1.91%  1.63%  1.64%   0 hpm counter proc
 366       40658      2869      14171  0.95%  0.37%  0.50%   2 SSH Process
 171    69569880  34027167       2044  0.79%  0.53%  0.51%   0 HRPC qos request
 170    59830354   3411209      17539  0.31%  0.35%  0.32%   0 HQM Stack Proces
 115    12883059 211235534         60  0.31%  0.13%  0.11%   0 hpm main process
  99     1528727 489856679          3  0.31%  0.03%  0.00%   0 HLFM address ret
  34    11133578   3252382       3423  0.31%  0.09%  0.06%   0 Net Background
 189    55536512  82178186        675  0.15%  0.39%  0.38%   0 CDP Protocol
 247     3014880  10005666        301  0.15%  0.04%  0.01%   0 IPC LC Message H
  38     6805234  16967063        401  0.15%  0.05%  0.08%   0 Per-Second Jobs
  14           0         1          0  0.00%  0.00%  0.00%   0 Policy Manager
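
Side note for anyone hitting this later: an "IP Input" figure that high usually means traffic is being punted to the CPU instead of being hardware-switched. A rough sketch of commands that can help narrow down what is being punted (exact queue names and counters vary by platform and release):

#show controllers cpu-interface
! look for CPU receive queues whose counters increment rapidly
#show ip traffic
! run it twice and compare; fast-rising protocol counters point at the culprit
#show processes cpu sorted 5sec
! re-check which processes dominate over a short window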

5 Replies

chucktranhpb
Level 1

Did you change any of the switch's configuration recently? Or has the traffic going through the switch changed noticeably?

Ben Cargill
Level 1

It used to be single-connected at 1 gig to a 6509 via Layer 3. The day after we dual-connected ten gig to the Nexus 7Ks, we noticed the CPU spike. It's doing equal-cost EIGRP routing over both links.
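
A quick way to sanity-check that new equal-cost setup is to confirm both ten-gig next hops are installed and see which link a given flow hashes to. A sketch; the 10.1.1.10 / 10.2.2.20 addresses are placeholders, not from this thread:

#show ip route eigrp
! each prefix learned from the 7Ks should list both next hops
#show ip cef exact-route 10.1.1.10 10.2.2.20
! shows which of the two uplinks CEF picks for that source/destination pair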

Leo Laohoo
Hall of Fame
Accepted Solution

Upgrade your IOS to 12.2(55)SE6.  This version is one of the most stable.

Ben Cargill
Level 1

That worked for us. We upgraded to 12.2(55)SE6 and it's been running at a solid 30% since. Thank you.
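
For reference, one common way to push an image to a whole 3750X stack in one step is archive download-sw. A sketch, assuming a reachable TFTP server at 10.0.0.1 (a placeholder) and the .tar image for 12.2(55)SE6:

#archive download-sw /overwrite /reload tftp://10.0.0.1/c3750e-universalk9-tar.122-55.SE6.tar
! copies the image to every stack member, overwrites the old image, and reloads
#show version
! after the reload, confirm every member is running 12.2(55)SE6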

Leo Laohoo
Hall of Fame

Glad to see it's working.
