4507 Switch High CPU Utilization

cacravero
Level 1

Hi everyone, I would like to know if anybody has run into this issue before.

We have a 4507 switch with high CPU utilization due to this process:

CPU utilization for five seconds: 86%/11%; one minute: 87%; five minutes: 87%
PID Runtime(ms) Invoked uSecs 5Sec 1Min 5Min TTY Process
51 7911708403272260220 0 69.03% 70.07% 70.82% 0 Cat4k Mgmt LoPri
50 26119635321290208045 2024 3.19% 3.12% 3.10% 0 Cat4k Mgmt HiPri
109 351826268 220266259 1597 1.75% 1.62% 1.36% 0 IP Input
115 14942900 510226981 29 0.47% 0.51% 0.51% 0 Spanning Tree
15 74028172 768712638 96 0.15% 0.15% 0.15% 0 ARP Input
38 3526208 201885362 17 0.07% 0.03% 0.02% 0 IDB Work
148 9687420 307799496 31 0.07% 0.11% 0.10% 0 CEF: IPv4 proces

ARCOR1SWI001#sho platform health | exc 0.00
%CPU %CPU RunTimeMax Priority Average %CPU Total
Target Actual Target Actual Fg Bg 5Sec Min Hour CPU
GalChassisVp-review 3.00 0.10 10 10 100 500 0 0 0 2773:44
Lj-poll 1.00 0.01 2 0 100 500 0 0 0 717:25
StatValueMan Update 1.00 0.10 1 0 100 500 0 0 0 1309:02
GalK5TatooineStatsMa 0.70 0.02 4 0 100 500 0 0 0 734:39
K5L3FlcMan FwdEntry 2.00 0.13 15 7 100 500 0 0 0 3254:01
K5L3Unciast IFE Revi 2.00 0.13 15 7 100 500 0 0 0 3766:18
K5L3UnicastRpf IFE R 2.00 0.13 15 2 100 500 0 0 0 3735:39
K5L3Unicast Adj Tabl 2.00 2.91 15 9 100 500 1 1 1 21258:49
K5L3AdjStatsMan Revi 2.00 0.11 10 8 100 500 0 0 0 11403:40
K5FlcHitMan review 2.00 0.47 5 6 100 500 0 0 0 4882:28
K5PortMan Regular Re 2.00 0.36 15 9 100 500 0 0 0 3151:36
K5PortMan Ondemand L 3.00 0.53 30 4 100 500 0 0 0 13720:50
K5 L2 Aging Table Re 2.00 0.11 20 4 100 500 0 0 0 4265:10
K5ForerunnerPacketMa 1.50 62.34 4 4 100 500 113 133 99 34641:17
K5ForerunnerPacketMa 0.70 0.22 4 4 100 500 0 0 0 7568:53
K5QosDhmMan Rate DBL 2.00 4.04 7 8 100 500 3 4 3 125212:24
K5QosPolicerStatsMan 1.00 0.01 10 0 100 500 0 0 0 145:59
K5VlanStatsReview 2.00 0.68 10 3 100 500 0 0 0 9313:30
K5AclCamStatsMan hw 3.00 0.90 10 8 100 500 0 0 0 12567:11
VSI DMA Local Jawa R 1.00 0.14 6 0 100 500 0 0 0 1245:52
VSI DMA Remote Jawa 1.00 0.05 6 4 100 500 0 0 0 1256:50
RkiosPortMan Port Re 2.00 0.13 12 17 100 500 0 0 0 2547:04
Rkios Module State R 4.00 0.01 40 0 100 500 0 0 0 770:56
Rkios Online Diag Re 4.00 0.01 40 0 100 500 0 0 0 735:10
RkiosIpPbr IrmPort R 2.00 0.01 10 1 100 500 0 0 0 1046:38
RkiosAclMan Review 3.00 0.04 30 0 100 500 0 0 0 1459:35
Xgstub Stats Review 0.50 0.10 5 0 100 500 0 0 0 954:41
Xgstub Stats Review 0.50 0.09 5 0 100 500 0 0 0 968:42
GlmBridgeMan(3) revi 0.50 0.01 2 0 100 500 0 0 0 47:44
PluggableLinecard(2) 0.50 0.01 1 0 100 500 0 0 0 7:55

We reviewed the STP topology and nothing appears to be wrong. In addition, we haven't made any changes or connected any new devices. We are pretty lost with this.

If someone can help me, I would appreciate it; I have more than 3000 users being impacted, and the only solution I have thought of is a reboot.

5 Replies

Peter Paluch
Cisco Employee

Hi,

This document appears to explain some of the recommended steps:

http://www.cisco.com/c/en/us/support/docs/switches/catalyst-4000-series-switches/65591-cat4500-high-cpu.html#understand

In particular, "Cat4k Mgmt LoPri" is an aggregate process that collects all tasks whose utilization is higher than expected and that have been moved into a low-priority process as a result. To find out precisely which tasks those are, you need to issue the show platform health command. In that output, watch for all processes whose actual %CPU and/or RunTimeMax exceeds the target value. These are the most likely culprits.
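The check Peter describes (compare the Actual %CPU column against the Target column) can be sketched programmatically. This is a minimal, hypothetical parser assuming the column layout shown in the output above; the sample rows are taken from the poster's listing:

```python
# Flag tasks in "show platform health" whose Actual %CPU exceeds the Target.
# Assumption: each row ends with 10 whitespace-separated numeric columns,
# so everything before them is the (possibly space-containing) process name.
SAMPLE = """\
K5L3Unicast Adj Tabl 2.00 2.91 15 9 100 500 1 1 1 21258:49
K5ForerunnerPacketMa 1.50 62.34 4 4 100 500 113 133 99 34641:17
K5QosDhmMan Rate DBL 2.00 4.04 7 8 100 500 3 4 3 125212:24
K5PortMan Regular Re 2.00 0.36 15 9 100 500 0 0 0 3151:36
"""

def over_target(listing):
    """Yield (process, target, actual) for rows where Actual %CPU > Target."""
    for line in listing.splitlines():
        fields = line.rsplit(None, 10)   # split from the right: name stays whole
        name, target, actual = fields[0], float(fields[1]), float(fields[2])
        if actual > target:
            yield name, target, actual

for name, target, actual in over_target(SAMPLE):
    print(f"{name}: actual {actual}% > target {target}%")
```

Run against the full (unfiltered) listing, this would immediately single out K5ForerunnerPacketMa, whose actual 62.34% dwarfs its 1.50% target.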

We will need to follow up from here - without seeing the show platform health, it's generally difficult to pinpoint the cause.

Best regards,
Peter

Hi Peter, thanks for the reply. However, I had read this document before posting here and couldn't find a solution.
We are still having this issue and are scheduling a reboot for this Sunday morning.

Can you post the output of "show platform cpu packet statistics"?

-Raj

Hello,

Let me check on the outputs more closely and get back to you.

Best regards,
Peter

Hi,

The K5ForerunnerPacketMa process seems to be eating far more CPU than expected. Information on this process is very scarce - this is the only bit of information I could find:

https://supportforums.cisco.com/discussion/12528761/high-cpu-4507

Unfortunately, the show platform health listing you have posted is not complete because of the "| exc 0.00" filter - it omits every line that contains "0.00" anywhere. However, those lines could have had non-zero values in some crucial columns, so we risk missing important information. Would you mind posting an unfiltered show platform health output along with the show platform cpu packet statistics requested by Raj?
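To illustrate Peter's point: the IOS "exclude" filter drops any line containing the given string, so a row with a single 0.00 column vanishes even if its other columns are non-zero. A small sketch with hypothetical rows:

```python
# Why "| exc 0.00" can hide useful data: exclusion matches the whole line,
# so one "0.00" column suppresses a row whose other columns are non-zero.
# Both rows below are invented for illustration.
rows = [
    "ProcA 1.00 0.00 10 9 100 500 5 4 3 100:17",  # Actual is 0.00, but 5Sec/Min/Hour are busy
    "ProcB 1.00 0.50 10 0 100 500 0 0 0 10:42",   # no "0.00" substring anywhere
]

# Roughly what "| exc 0.00" keeps:
filtered = [r for r in rows if "0.00" not in r]
print(filtered)  # ProcA is gone despite its non-zero 5Sec/Min/Hour columns
```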

Best regards,
Peter
