628 Views · 3 Helpful · 7 Replies

Cisco Cat 3750X High CPU Utilization

Manish221
Level 1

Hello Friends,

 

I am seeing high CPU utilization due to the Hulc LED process:

CPU utilization for five seconds: 97%/1%; one minute: 90%; five minutes: 84%
 PID       Runtime(ms)     Invoked      uSecs   5Sec   1Min   5Min TTY Process 
   179  1035501709   578129793       1791 44.47% 49.94% 52.01%   0 Hulc LED Process 

Can anyone help me understand what this may be related to?

I am using IOS 15.0(2)SE4.

7 Replies

InayathUlla Sharieff
Cisco Employee

Manish,

 "Hulc LED" process does following tasks:
- Check Link status on every port
- If the switch supports POE, it checks to see if there is a Power Device (PD) detected
- Check the status of the transceiver
- Update Fan status
- Set Main LED and ports LEDs
- Update both Power Supplies and RPS
- Check on system temperature status
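
Because the process polls all of these subsystems, checking them directly can show which one is keeping it busy. A minimal sketch using standard 3750 show commands (generic prompt; run these on your switch):

Switch# show interfaces status
! per-port link state the process polls
Switch# show power inline
! PoE powered-device detection status
Switch# show env all
! fans, power supplies, RPS, and temperature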

 

Please see the link below on troubleshooting high CPU on the 3750:

https://supportforums.cisco.com/document/12298401/troubleshooting-high-cpu-3750

 

HTH

Hello Inayath, thanks for your response. I checked the same and found the following:


cpu-queue-frames  retrieved   dropped     invalid     hol-block   stray
----------------- ----------- ----------- ----------- ----------- -----------
rpc               1321125382  0           0           0           0
stp               446083709   0           0           0           0
ipc               284511674   0           0           0           0
routing protocol  146570711   0           0           0           0
L2 protocol       29486653    0           0           0           0
remote console    0           0           0           0           0
sw forwarding     0           0           0           0           0
host              41361532    0           0           0           0
broadcast         502507420   0           0           0           0
cbt-to-spt        0           0           0           0           0
igmp snooping     14721120    0           0           0           0
icmp              0           0           0           0           0
logging           16          0           0           0           0
rpf-fail          0           0           0           0           0
dstats            0           0           0           0           0
cpu heartbeat     460723427   0           0           0           0

Run this command two or three times, about one minute apart, and then run the debug for the corresponding CPU queue.

 

After that, please provide me the debug output.
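
For reference, the counter table above looks like the cpu-queue-frames section of show controllers cpu-interface on the 3750 (an assumption based on the column headings). Capturing it twice makes the per-queue deltas visible:

Switch# show controllers cpu-interface | begin cpu-queue-frames
! wait about 60 seconds, then take a second snapshot
Switch# show controllers cpu-interface | begin cpu-queue-frames
! the queue whose retrieved/dropped counters climb fastest is the one to debug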

CPU utilization is already at 90%. If I run a debug for that particular CPU queue, it may spike the CPU further and crash the switch.

 

What command should I use to obtain the debug for a specific CPU queue? I am not able to see any such command on the switch console.

 

debug platform cpu-queues {broadcast-q | cbt-to-spt-q | cpuhub-q | host-q |
icmp-q | igmp-snooping-q | layer2-protocol-q | logging-q | remote-console-q |
routing-protocol-q | rpffail-q | software-fwd-q | stp-q}

This debug is intrusive in nature; run it only when you see drops in the queue.
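
To limit the risk of a CPU spike while debugging, one conservative approach is to send the debug output to the logging buffer instead of the console and keep the debug window short. A minimal sketch (host-q is only an example; pick the queue whose counters were climbing):

Switch# configure terminal
Switch(config)# no logging console
! stop debug output from flooding the console line
Switch(config)# logging buffered 128000 debugging
Switch(config)# end
Switch# debug platform cpu-queues host-q
! let it run for 10-30 seconds only
Switch# undebug all
Switch# show logging
! review the captured debug output from the buffer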

I have upgraded to IOS 15.0(2)SE7.

 

BNGSW001#sh proces cpu hist                                                                  
      444444444444444444444555554444466666666664444444444444445555
      944444333333333344444000009999933333777774444444444333333333
  100                                                           
   90                                                           
   80                                                           
   70                                     *****                 
   60                                **********                 
   50 *                    ********************               **
   40 **********************************************************
   30 **********************************************************
   20 **********************************************************
   10 **********************************************************
     0....5....1....1....2....2....3....3....4....4....5....5....6
               0    5    0    5    0    5    0    5    0    5    0
               CPU% per second (last 60 seconds)


      656657659995555665565565555665666665655555555566666565655655
      731691698942155013817509946445002049154963177020040439238465
  100         *#                                                
   90         *#*                                               
   80         *#*                                               
   70 *  * ** *##                                               
   60 * *******##  **** ******* ************ **  ** ***** *** **
   50 ##########################################################
   40 ##########################################################
   30 ##########################################################
   20 ##########################################################
   10 ##########################################################
     0....5....1....1....2....2....3....3....4....4....5....5....6
               0    5    0    5    0    5    0    5    0    5    0
               CPU% per minute (last 60 minutes)
              * = maximum CPU%   # = average CPU%

      99999999999999799999999999999999999999999999699999999999999999999999999
      899999988988998099989989889989888898988998998998999999889988998999888998
  100 *************** ***************************** ************************
   90 *************** ***************************** ************************
   80 *************** ***************************** ************************
   70 **********************************************************************
   60 **********************************************************************
   50 ###############*###############*#####**###############################
   40 ######################################################################
   30 ######################################################################
   20 ######################################################################
   10 ######################################################################
     0....5....1....1....2....2....3....3....4....4....5....5....6....6....7..
               0    5    0    5    0    5    0    5    0    5    0    5    0  
                   CPU% per hour (last 72 hours)
                  * = maximum CPU%   # = average CPU%

BNGSW001#sh proces cpu sort | ex 0.00
CPU utilization for five seconds: 50%/1%; one minute: 47%; five minutes: 50%
 PID Runtime(ms)     Invoked      uSecs   5Sec   1Min   5Min TTY Process 
 180   102593028    12024036       8532 20.32% 19.82% 19.60%   0 Hulc LED Process 
  90    37856526     6636961       5703 10.72%  7.98%  7.39%   0 RedEarth Tx Mana 
  89    14555710    11176632       1302  3.52%  2.80%  3.00%   0 RedEarth I2C dri 
 230     4740949     9115604        520  1.59%  1.47%  1.52%   0 IP Input         
 135     5917536      484014      12225  1.59%  1.59%  1.59%   0 hpm counter proc 
  58      291333        8227      35411  0.79%  0.09%  0.06%   0 Per-minute Jobs  
 248     2104545     1935558       1087  0.63%  0.72%  0.64%   0 Spanning Tree    
 130        1576         420       3752  0.47%  1.00%  0.41%   1 SSH Process      
 216     1822623     1556492       1170  0.31%  0.34%  0.34%   0 CDP Protocol     
  96     1189418      141482       8406  0.31%  0.10%  0.09%   0 Adjust Regions   
  91      909479     5245053        173  0.31%  0.17%  0.18%   0 RedEarth Rx Mana 
 136     1412408     1764361        800  0.31%  0.29%  0.31%   0 HRPC pm-counters 
  13     1473913     1940752        759  0.31%  0.46%  0.43%   0 ARP Input        
 169      567772     2374949        239  0.31%  0.12%  0.11%   0 Hulc Storm Contr 
 196     2047740       96702      21175  0.31%  0.39%  0.38%   0 HQM Stack Proces 
 218      219432      741597        295  0.15%  0.05%  0.04%   0 CEF: IPv4 proces 
 197     2349386      765734       3068  0.15%  0.42%  0.45%   0 HRPC qos request 
  59      396614      484370        818  0.15%  0.10%  0.08%   0 Per-Second Jobs  
 266      263871      478903        550  0.15%  0.08%  0.08%   0 PI MATM Aging Pr 
  46      563171      107950       5216  0.15%  0.12%  0.11%   0 Net Background   
 131     1301748     9062267        143  0.15%  0.24%  0.27%   0 hpm main process 
 423      811582    41563884         19  0.15%  0.16%  0.14%   0 IP SLAs XOS Even 
 318      365684     1058685        345  0.15%  0.12%  0.10%   0 Marvell wk-a Pow 

Also, in the logs I am getting: 057734: Dec 11 08:16:28.298: %ADJ-3-RESOLVE_REQ: Adj resolve request: Failed to resolve 10.94.5.138 Vlan5
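
That message means CEF requested an adjacency that ARP could not complete. A minimal sketch of standard commands to check whether 10.94.5.138 actually resolves on Vlan5 (interpretation depends on your topology):

BNGSW001# show ip arp 10.94.5.138
! is there a complete ARP entry at all?
BNGSW001# show ip cef 10.94.5.138 detail
! how CEF currently resolves this address
BNGSW001# ping 10.94.5.138
! triggers resolution and tests reachability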

The only switch model I have run 15.0(2)SE4 on without any issues for more than two years is the 2960 series.

 

3560- & 3750 series line of switches and 15.0(2)SE4 ... spells trouble.  For 3560 and 3750, the IOS 12.2(55)SE9 will work just fine.
