04-05-2016 02:56 AM - edited 03-08-2019 05:14 AM
Greetings All,
In our topology, we are seeing very high CPU utilization on our L2 and L3 switches and have been unable to reduce it. Because of this, we are experiencing significant latency. Kindly help me reduce the CPU utilization on these switches. Below I have included some outputs.
Switch # sh processes cpu sorted
CPU utilization for five seconds: 57%/33%; one minute: 88%; five minutes: 92%
PID Runtime(ms) Invoked uSecs 5Sec 1Min 5Min TTY Process
136 1565885522 183251236 8545 12.35% 13.72% 14.04% 0 Hulc LED Process
190 274423562 282538455 971 2.98% 2.55% 2.44% 0 Spanning Tree
204 91477802 9166350 9979 0.89% 0.85% 0.86% 0 PI MATM Aging Pr
140 491 194 2530 0.89% 0.41% 0.10% 1 SSH Process
Switch# sh processes cpu history
5555555555555555555555555556666655555555555555555555555555
6699999666669999966666888881111188888888866666666669999977
100
90
80
70
60 **********************************************************
50 **********************************************************
40 **********************************************************
30 **********************************************************
20 **********************************************************
10 **********************************************************
0....5....1....1....2....2....3....3....4....4....5....5....
0 5 0 5 0 5 0 5 0 5
CPU% per second (last 60 seconds)
Thanks in Advance.
Rgds,
Nagarjun.B
04-05-2016 03:16 AM
The HULC issue is a known bug in your version of IOS; you need to move to an IOS release it's not in:
136 1565885522 183251236 8545 12.35% 13.72% 14.04% 0 Hulc LED Process
04-05-2016 09:49 PM
Greetings Mark,
Thanks for your reply. So the only solution is to update the IOS on all switches, right? Kindly reply if any other solutions are available.
Thanks in Advance.
Rgds,
Nagarjun.B
04-06-2016 12:23 AM
Yes, if you want to get rid of it. That said, I have seen some bug outputs say that, for HULC, shutting unused ports can also bring the CPU down, if you don't want to upgrade all your switches. However, as quoted below, some of the bug IDs have no fix or workaround other than moving to a different IOS.
In some forum posts, users have stated that the 12.2(55)SE train and later images are stable and don't have it. You could also try the current safe harbor version for your switch (the one with the star beside it in the Cisco download section for your platform), or just live with it, since your CPU is still normal enough. Only if it's sitting constantly around 80% or more should you be really concerned.
Symptom:
Hulc LED Process uses 6-23% CPU on Catalyst 2960/2960S/2960X 24 or 48-port switch.
Conditions:
The CPU utilization for Hulc LED Process will be in the 6-23% range for the
Catalyst 2960 series like 2960, 2960S, 2960X switch models.
For the WS-C2960X-48LPS-L, which is the 48-port PoE 2960X switch, the Hulc LED Process
could reach about 22%-23%.
This is seen in 12.2(50)SE3 or later releases.
Workaround:
This is an expected behavior and there is no workaround.
Further Problem Description:
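If you decide to try the shut-unused-ports workaround mentioned above rather than upgrading, a minimal sketch in IOS might look like the following. The interface range here is purely an example; substitute whichever ports are actually unused on your switch:

```
! Example only: adjust the range to your actual unused ports
Switch# configure terminal
Switch(config)# interface range GigabitEthernet1/0/25 - 48
Switch(config-if-range)# shutdown
Switch(config-if-range)# end
Switch# write memory
```

Re-run `show processes cpu sorted` afterwards to see whether the Hulc LED Process percentage drops.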
04-06-2016 07:06 AM
Noted..,
Thanks for your valuable reply.
Rgds,
Nagarjun.B
04-06-2016 08:25 AM
Disclaimer
The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.
Liability Disclaimer
In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.
Posting
As an aside, high CPU, on a switch, is not always adverse to the switch's performance.
Why?
Transit traffic normally is, and should be, processed by the switch's ASICs, not the CPU.
Processes on the CPU should be prioritized such that less important processes, like the Hulc LED Process, don't adversely affect (i.e., slow) more important control plane processes.
I.e., your bouts of latency might have another cause.
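One way to check whether latency really is CPU-related is to look at how much of the CPU is spent in interrupt context and how much traffic is being punted to the CPU. In the original output, "57%/33%" means 57% total CPU with 33% of that in interrupt context, which generally reflects packet handling. A couple of commands can help here (available on Catalyst 2960/3750-class switches; exact output varies by platform and release):

```
! Show only processes that are actually consuming CPU
Switch# show processes cpu sorted | exclude 0.00%

! Show per-queue counts of packets punted to the CPU
Switch# show controllers cpu-interface
```

If the punt counters are climbing rapidly, the latency is more likely caused by traffic hitting the CPU than by the Hulc LED Process itself.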
12-19-2021 05:20 AM - edited 12-19-2021 05:24 AM
Hey
We got three C3750X with current Cisco suggested IOS version 15.2.4E10.
All are affected by this bug. CPU stays around 30-50%, with 100% spikes.
This got my attention while I was investigating an unexpected reboot that had no apparent cause.
Eirik
12-19-2021 07:29 AM
Okay, but does the high CPU appear detrimental to your transit traffic and/or to other control plane processes?
If not, the only real "problem" is often just system management that "alarms" over high CPU.
12-19-2021 08:31 AM
Not tested, but my main concern is this one random reboot. There is a redundant power supply, with one side connected to a UPS, and sh env all shows everything OK. Then I noticed the unusually high CPU usage and wondered whether it might have had something to do with that.
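For the random reboot itself, a few standard IOS exec commands may reveal the reload cause. This is just a sketch; the exact fields and the presence of crashinfo files vary by platform and release:

```
! Last reload reason, if the switch recorded one
Switch# show version | include uptime|reason

! Buffered log messages around the time of the reboot
Switch# show logging

! Crash files, if the reboot was a software crash
Switch# dir crashinfo:
```

A power-related reset often leaves no reload reason or crashinfo, while a software crash usually does.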
12-19-2021 09:09 AM
". . . but main concern is this one random reboot."
Have you considered sunspots? (Laugh)
Seriously, though, hardware often has an expected "soft" error rate, although a very, very large probability value against it happening (unless hardware is starting to fail, or some external "thing" [e.g. yes, possibly even sunspots] increases the error probability).
I've seen Cisco devices in service for over a decade without an issue; I've also seen very random reboots and/or some, but not all, features/functions of a device fail. With such problems, after a reboot, the device may continue to run fine for years too.
There are also issues, including reboots, caused by some really unusual software execution path or something like a very, very, very slow memory leak.
So, one unexpected reboot might not be a real concern, but, generally, reoccurrence is.
Could the CPU issue be a root cause of reboot? Yes, although, alone, I think it unlikely.
12-19-2021 11:28 PM
Now that I've had more time to analyze this, I'm getting convinced that there was a sunspot on 13 December. (laughs) I just found this in the uptime sensor in PRTG, but 'sh env all' shows everything is OK.
12/13/2021 12:00:00 AM − 12/13/2021 12:02:31 AM (=02m 31s)
12/13/2021 12:02:31 AM − 12/17/2021 1:41:33 AM (=04d 01h 39m 02s)
12/17/2021 1:41:33 AM − 12/17/2021 1:51:07 AM (=09m 34s)
12/17/2021 1:51:07 AM − 12/18/2021 2:33:06 AM (=01d 00h 41m 58s)
12/18/2021 2:33:06 AM − 12/18/2021 2:42:12 AM (=09m 06s)
12/18/2021 2:42:12 AM − 12/18/2021 2:42:43 AM (=30s)
12/18/2021 2:42:43 AM − 12/18/2021 2:49:50 AM (=07m 06s)
12/18/2021 2:49:50 AM − 12/18/2021 3:10:49 AM (=20m 58s)
12/18/2021 3:10:49 AM − 12/18/2021 3:19:57 AM (=09m 08s)
12/18/2021 3:19:57 AM − 12/19/2021 9:20:27 AM (=01d 06h 00m 29s)
12/19/2021 9:20:27 AM − 12/19/2021 9:32:12 AM (=11m 44s)
12/19/2021 9:32:12 AM − 12/19/2021 9:35:11 AM (=02m 58s)
12/19/2021 9:35:11 AM − 12/19/2021 9:53:27 AM (=18m 16s)
12/19/2021 9:53:27 AM − 12/19/2021 10:11:26 AM (=17m 58s)
12/19/2021 10:11:26 AM − 12/19/2021 10:51:59 AM (=40m 33s)
12/19/2021 10:51:59 AM − 12/19/2021 10:55:13 AM (=03m 14s)
12/19/2021 10:55:13 AM − 12/19/2021 11:02:36 AM (=07m 23s)
12/19/2021 11:02:36 AM − 12/20/2021 8:17:19 AM (=21h 14m 42s)