High CPU in 3750X stack switch due to Auth Manager process

Sri v
Level 1

I have a 9-switch stack and am facing high CPU utilization, hitting 99%.

CPU utilization for five seconds: 98%/4%; one minute: 98%; five minutes: 94%
 PID Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process
   4    13034934    560643      23250  2.07%  1.05%  0.52%   0 Check heaps     
   8     3172312     25304     125368  0.00%  0.22%  0.18%   0 Licensing Auto U
  10     1508046   1106880       1362  0.00%  0.10%  0.13%   0 ARP Input       
  33     5273038    356570      14788  0.31%  0.70%  0.47%   0 Net Background  
  35      320685    415836        771  0.00%  0.01%  0.00%   0 Logger          
  36      544549   1150122        473  0.00%  0.02%  0.00%   0 TTY Background  
  37     1415105   1164757       1214  0.00%  0.15%  0.12%   0 Per-Second Jobs 
  38      720727     29976      24043  0.00%  0.04%  0.00%   0 Per-minute Jobs 
  43     2373739    291553       8141  0.15%  0.15%  0.15%   0 Compute load avg
  46      243337    397873        611  0.00%  0.03%  0.00%   0 HC Counter Timer
  66     7400947   7108581       1041  1.27%  0.84%  0.53%   0 RedEarth I2C dri
  67    25983359   4948436       5250  2.07%  1.26%  1.03%   0 RedEarth Tx Mana
  68      534149   3554270        150  0.00%  0.01%  0.00%   0 RedEarth Rx Mana
  74     1289561   1293856        996  0.31%  0.19%  0.13%   0 hrpc -> response
  75      133914   2876654         46  0.00%  0.03%  0.00%   0 hrpc -> request 
  76     5703702  11402233        500  0.79%  0.78%  0.51%   0 hrpc <- response
  80      257641    398768        646  0.00%  0.02%  0.00%   0 HRPC hdm non blo
  86    27771475  13262485       2093 12.29%  3.05%  2.86%   0 HRPC hlfm reques
  87    15303639  23605706        648  1.11%  2.23%  1.84%   0 HLFM address lea
  89      375576  25436647         14  0.00%  0.02%  0.00%   0 HLFM address ret
 105      418373    879371        475  0.00%  0.01%  0.00%   0 HRPC pm request 
 107    17675645   1929640       9160  1.91%  0.92%  0.75%   0 hpm counter proc
 108     3226351  12143813        265  0.79%  0.15%  0.08%   0 HRPC pm-counters
 123       66737    253452        263  0.00%  0.02%  0.00%   0 HULC CISP Proces
 124      117066    450967        259  0.00%  0.01%  0.00%   0 HRPC dot1x reque
 126      458181    661340        692  0.00%  0.04%  0.00%   0 HULC DOT1X Proce
 132     2800126   4572121        612  0.95%  0.32%  0.19%   0 Hulc Storm Contr
 140   163675954  18392002       8899 17.09%  8.96%  6.84%   0 Hulc LED Process
 141      169174    880836        192  0.00%  0.01%  0.00%   0 HL3U bkgrd proce
 142      390237   3252266        119  0.00%  0.01%  0.00%   0 HRPC hl3u reques
 146        1603       267       6003  1.27%  1.13%  0.40%   1 SSH Process     
 149     1271058   2780282        457  0.63%  0.43%  0.17%   0 HRPC snmp reques
 150    45871061   2923783      15688 38.65% 14.69%  6.37%   0 HULC SNMP Proces
 151     5680659    277197      20493  0.31%  0.36%  0.34%   0 HQM Stack Proces
 152    15044025   4568772       3292  1.75%  1.26%  0.92%   0 HRPC qos request
 153           0         1          0  0.00%  0.00%  0.00%   0 HRPC span reques
 167   543589725   1344116     404456  0.00% 43.03% 53.29%   0 Auth Manager    
 177      501409   1058469        473  0.00%  0.02%  0.00%   0 EAP Framework   
 180     1839579   2507708        733  0.00%  0.06%  0.04%   0 IP Input        
 194    16136514  14763127       1093  0.79%  0.95%  1.03%   0 Spanning Tree   
 201    56332595  71255987        790  3.19%  3.75%  3.86%   0 HULC DAI Process
 206     1543138   1150016       1341  0.15%  0.13%  0.09%   0 PI MATM Aging Pr
 214     8420523    340746      24712  0.00%  1.25%  1.13%   0 VMATM Callback  

 

The currently running IOS is c3750e-universalk9-mz.122-55.SE5.bin.

 

Can anyone suggest how to troubleshoot this?
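For anyone hitting the same symptom, these are the standard IOS commands typically used on 3750-series switches to narrow down which process is busy and whether traffic is being punted to the CPU (exact output formatting varies by release):

```
show processes cpu sorted 5min   ! rank processes by 5-minute average
show processes cpu history       ! see whether the spike is sustained or periodic
show authentication sessions     ! list active dot1x/MAB sessions feeding Auth Manager
show controllers cpu-interface   ! check CPU receive-queue counters for punted traffic
```

A high interrupt percentage (the number after the slash in "98%/4%") points at punted traffic; here it is low, so the load is process-driven, which matches the Auth Manager figure in the output above.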

4 Replies

TIM JUDGE
Level 1

You're running a very old IOS release. I suggest upgrading before you spend too much time troubleshooting. We're running these switches stacked on IOS 15.2.3E with dot1x, MAB, and 10 Gbit uplinks with no issues. Looking through the release notes, there are many caveats fixed after your release that address CPU hog issues.

With 9 switches in the stack, those CPU hog bugs may well be more prevalent.

Are you running DOT1x etc?

Hi TIM,

 

Yes, I do run dot1x.

OK, check the release notes for the later releases; the fixed caveats state in a number of places that you'll hit CPU hog conditions, so I strongly suggest an upgrade.
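As a rough pre-upgrade check, these standard IOS commands confirm the running release and show how much authentication activity Auth Manager is handling (command names are standard; output details vary by release):

```
show version | include IOS Software   ! confirm the running release and image
show dot1x all summary                ! per-port dot1x state at a glance
show authentication registrations     ! which authentication methods are registered
```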

Leo Laohoo
Hall of Fame
I have a 9 switch stacked and facing a high cpu with hitting 99%

9 switches in a stack????

 

WOW!  Very dangerous stack.  :P
