Reduce CPU Utilization in cisco switches 2960, 3560 series

thanksarjun1
Level 1

Greeting All,

In our topology, we are seeing very high CPU utilization on our L2 and L3 switches and have been unable to reduce it. Because of this issue, we are facing too much latency. Kindly help me reduce the CPU utilization on the switches. Below I include some outputs.

Switch# sh processes cpu sorted
CPU utilization for five seconds: 57%/33%; one minute: 88%; five minutes: 92%
 PID Runtime(ms)    Invoked   uSecs    5Sec    1Min    5Min TTY Process
 136  1565885522  183251236    8545  12.35%  13.72%  14.04%   0 Hulc LED Process
 190   274423562  282538455     971   2.98%   2.55%   2.44%   0 Spanning Tree
 204    91477802    9166350    9979   0.89%   0.85%   0.86%   0 PI MATM Aging Pr
 140         491        194    2530   0.89%   0.41%   0.10%   1 SSH Process
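
(Side note: in the first line above, 57% is the total CPU load and 33% is the portion spent at interrupt level, i.e. handling packets punted to the CPU; the remainder is the processes listed. To check whether punted traffic is significant, one option on these 2960/3560/3750-class platforms is to run the following a couple of times and watch which CPU receive-queue counters increment fastest:)

Switch# show controllers cpu-interface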

Switch# sh processes cpu history

    5555555555555555555555555556666655555555555555555555555555
    6699999666669999966666888881111188888888866666666669999977
100
 90
 80
 70
 60 **********************************************************
 50 **********************************************************
 40 **********************************************************
 30 **********************************************************
 20 **********************************************************
 10 **********************************************************
   0....5....1....1....2....2....3....3....4....4....5....5....
             0    5    0    5    0    5    0    5    0    5
               CPU% per second (last 60 seconds)
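
(To catch what is actually running when the CPU spikes, and to correlate spikes with the latency, one option is an EEM applet along these lines. This is only a sketch, assuming your release supports EEM; the 80% threshold and file name are arbitrary, and the OID is cpmCPUTotal5sec from CISCO-PROCESS-MIB:)

Switch(config)# event manager applet HIGH-CPU
Switch(config-applet)# event snmp oid 1.3.6.1.4.1.9.9.109.1.1.1.1.3.1 get-type exact entry-op ge entry-val 80 poll-interval 5
Switch(config-applet)# action 1.0 syslog msg "CPU above 80 percent - capturing process list"
Switch(config-applet)# action 2.0 cli command "enable"
Switch(config-applet)# action 2.1 cli command "show processes cpu sorted | append flash:highcpu.txt"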

Thanks in Advance.

Rgds,

Nagarjun.B

10 Replies

Mark Malone
VIP Alumni

The Hulc LED Process is a known bug in your version of IOS; you need to move to an IOS release it's not present in.

136 1565885522 183251236 8545 12.35% 13.72% 14.04% 0 Hulc LED Process

Cisco Bug: CSCsi78581 - Hulc LED Process uses 10-15 ...

https://quickview.cloudapps.cisco.com/quickview/bug/CSCsi78581
Mar 23, 2016 - Symptom: CPU utilization for Hulc LED Process of around 10-15% is seen on Cisco Catalyst 3750 and 3560 switches, even with no ports ...

Cisco Bug: CSCtn42790 - 3560X/3750X: Elevated CPU ...

https://quickview.cloudapps.cisco.com/quickview/bug/CSCtn42790
Mar 29, 2016 - Symptom: Hulc LED Process uses 15-30% CPU on Catalyst 3560X/3750X platforms. Conditions: This is seen in 12.2(53)SE releases or later.

Cisco Bug: CSCti61764 - 3750X: Constant High CPU 99 ...

https://quickview.cloudapps.cisco.com/quickview/bug/CSCti61764
Mar 18, 2016 - Symptom: Catalyst 3750X switch may report CPU usage more than 90%, due to "hulc LED" and "SFF8472" processes. Conditions: Seen on ...

Cisco Bug: CSCtg86211 - Hulc LED Process uses 6-23 ...

https://quickview.cloudapps.cisco.com/quickview/bug/CSCtg86211

Greetings Mark,

Thanks for your reply. So the only solution is to update the IOS on all the switches, right? Kindly reply if any other solutions are available.

Thanks in Advance.

Rgds,

Nagarjun.B

Yes, if you want to get rid of it. I have seen some bug outputs for Hulc say that shutting unused ports can bring the CPU down as well, if you don't want to upgrade all your switches; but, as below, some of the bug IDs have no fix or workaround other than going to a different IOS.
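
(If you try the shut-unused-ports workaround, it's just standard interface configuration; the port range below is only an example, so adjust it to whatever is actually unused on your switch:)

Switch# configure terminal
Switch(config)# interface range gigabitethernet1/0/13 - 24
Switch(config-if-range)# shutdown
Switch(config-if-range)# end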

In some of the forum posts, users have stated that the 12.2(55)SE train and later images are stable and don't have it. You could also try the current safe harbor version for your switch (the one with the star beside it in the Cisco download section for your platform), or just live with it, as your CPU is still normal enough; only if it's sitting constantly around 80% or more should you be really concerned.
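
(To confirm what release you're on now, and the usual way to move a 2960/3560/3750 to a newer .tar image; the TFTP server address and file name here are placeholders only:)

Switch# show version | include Software
Switch# archive download-sw /overwrite /reload tftp://192.0.2.10/c2960-lanbasek9-tar.122-55.SE12.tar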

Symptom:
Hulc LED Process uses 6-23% CPU on Catalyst 2960/2960S/2960X 24 or 48-port switch.

Conditions:
The CPU utilization for Hulc LED Process will be in the 6-23% range for the
Catalyst 2960 series like 2960, 2960S, 2960X switch models.

For the WS-C2960X-48LPS-L, the 48-port PoE 2960X switch, the Hulc LED Process
could reach about 22%-23%.

This is seen in 12.2(50)SE03 or later releases.

Workaround:
This is an expected behavior and there is no workaround.

Noted.

Thanks for your valuable reply.

Rgds,

Nagarjun.B

Joseph W. Doherty
Hall of Fame

Disclaimer

The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.

Liability Disclaimer

In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.

Posting

As an aside, high CPU on a switch is not always adverse to the switch's performance.

Why?

Transit traffic normally is, and should be, processed by the switch's ASICs, not the CPU.

Processes on the CPU should be prioritized, such that less important processes, like the Hulc LED Process, shouldn't adversely affect (i.e. slow) more important control plane processes.

I.e. your bouts of latency might have another cause.
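
(One rough way to test that, with placeholder addresses: compare latency along a hardware-switched path against latency to the switch itself. Pings through the switch to a host behind it ride the ASICs, while pings to the switch's own SVI are answered by the CPU; if only the latter are slow, the high CPU is hurting the management/control plane rather than transit traffic:)

ping 192.0.2.50    <- host behind the switch, forwarded in hardware
ping 192.0.2.1     <- the switch's own SVI, answered by the CPU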

eirirs
Level 1

Hey

 

We have three C3750Xs running the current Cisco-suggested IOS version, 15.2.4E10.

All are affected by this bug. CPU stays around 30-50%, with 100% spikes.

This got my attention while I was investigating an unexpected reboot with no apparent cause.

 

Eirik

Okay, but does the high CPU appear detrimental to your transit traffic and/or to other control plane processes?

If not, often the only real "problem" is system management that "alarms" over high CPU.

Not tested, but my main concern is this one random reboot. We have redundant power supplies, one side connected to a UPS, and 'sh env all' shows all OK. Then I noticed the unusually high CPU usage and wondered whether it might have something to do with that.

". . . but main concern is this one random reboot."

Have you considered sunspots?  (Laugh)

Seriously, though, hardware often has an expected "soft" error rate, albeit with a very, very low probability of any given error actually happening (unless hardware is starting to fail, or some external "thing" [e.g. yes, possibly even sunspots] increases the error probability).

I've seen Cisco devices in service for over a decade without an issue; I've also seen very random reboots and/or some, but not all, features/functions of a device fail. With such problems, after a reboot, the device may continue to run fine for years too.

There are also issues, including reboots, caused by some really unusual software execution path or something like a very, very, very slow memory leak.

So, one unexpected reboot might not be a real concern, but, generally, recurrence is.

Could the CPU issue be a root cause of the reboot? Yes, although, alone, I think it unlikely.
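
(For the reboot itself, a few quick, hedged checks; the crashinfo location varies by platform and release, and "System returned to ROM by ..." in show version records the last reload cause:)

Switch# show version | include uptime|returned
Switch# dir crashinfo:
Switch# show logging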

Now that I've had more time to analyze this, I'm getting convinced that there was a sunspot on 13 December. (laughs) I just found this in the uptime sensor in PRTG, but 'sh env all' shows all is OK.

Sensor Status History (UPTIME)

Status   From                     To                       Duration
Unknown  12/13/2021 12:00:00 AM   12/13/2021 12:02:31 AM   02m 31s
Up       12/13/2021 12:02:31 AM   12/17/2021 1:41:33 AM    04d 01h 39m 02s
Unknown  12/17/2021 1:41:33 AM    12/17/2021 1:51:07 AM    09m 34s
Up       12/17/2021 1:51:07 AM    12/18/2021 2:33:06 AM    01d 00h 41m 58s
Unknown  12/18/2021 2:33:06 AM    12/18/2021 2:42:12 AM    09m 06s
Up       12/18/2021 2:42:12 AM    12/18/2021 2:42:43 AM    30s
Unknown  12/18/2021 2:42:43 AM    12/18/2021 2:49:50 AM    07m 06s
Up       12/18/2021 2:49:50 AM    12/18/2021 3:10:49 AM    20m 58s
Unknown  12/18/2021 3:10:49 AM    12/18/2021 3:19:57 AM    09m 08s
Up       12/18/2021 3:19:57 AM    12/19/2021 9:20:27 AM    01d 06h 00m 29s
Unknown  12/19/2021 9:20:27 AM    12/19/2021 9:32:12 AM    11m 44s
Up       12/19/2021 9:32:12 AM    12/19/2021 9:35:11 AM    02m 58s
Unknown  12/19/2021 9:35:11 AM    12/19/2021 9:53:27 AM    18m 16s
Up       12/19/2021 9:53:27 AM    12/19/2021 10:11:26 AM   17m 58s
Unknown  12/19/2021 10:11:26 AM   12/19/2021 10:51:59 AM   40m 33s
Up       12/19/2021 10:51:59 AM   12/19/2021 10:55:13 AM   03m 14s
Unknown  12/19/2021 10:55:13 AM   12/19/2021 11:02:36 AM   07m 23s
Up       12/19/2021 11:02:36 AM   12/20/2021 8:17:19 AM    21h 14m 42s

 
