01-26-2018 06:24 AM - edited 03-08-2019 01:34 PM
Hello,
We are seeing very high CPU usage on one of our main switches and cannot find a reason to explain it. It is likely the source of our network issues. We have many of these switches in use, but only this one in particular is giving us problems. I was directed not to reboot the switch as a troubleshooting measure, but otherwise I am not seeing any improvement at the two schools it is affecting.
The switch is a Cisco Catalyst 2960-X 48FPD-L running software version 15.0(2a)EX5; the CPU process output follows:
JB_MDF#show proc cpu
CPU utilization for five seconds: 98%/21%; one minute: 99%; five minutes: 98%
PID Runtime(ms) Invoked uSecs 5Sec 1Min 5Min TTY Process
1 691 10713 64 0.00% 0.00% 0.00% 0 Chunk Manager
2 1018151 2729226 373 0.00% 0.00% 0.00% 0 Load Meter
3 774 4940 156 0.00% 0.00% 0.00% 0 hulc_entropy_thr
4 5416 117499 46 0.00% 0.00% 0.00% 0 DHCPD Timer
5 0 1 0 0.00% 0.00% 0.00% 0 Retransmission o
6 0 1 0 0.00% 0.00% 0.00% 0 IPC ISSU Dispatc
7 31895940 2849631 11193 0.00% 0.24% 0.20% 0 Check heaps
8 3610 4027 896 0.00% 0.00% 0.00% 0 Pool Manager
9 0 1 0 0.00% 0.00% 0.00% 0 DiscardQ Backgro
10 0 2 0 0.00% 0.00% 0.00% 0 Timers
11 340 15505 21 0.00% 0.00% 0.00% 0 WATCH_AFS
12 1061376 54352732 19 0.00% 0.00% 0.00% 0 HUSB Console
13 0 1 0 0.00% 0.00% 0.00% 0 License Client N
14 3483 227159 15 0.00% 0.00% 0.00% 0 IPC Dynamic Cach
15 0 1 0 0.00% 0.00% 0.00% 0 Image License br
16 7112500 227033 31328 0.00% 0.03% 0.00% 0 Licensing Auto U
17 16610252 27434048 605 0.12% 0.23% 0.27% 0 ARP Input
18 1500530 14126131 106 0.00% 0.00% 0.00% 0 ARP Background
19 0 1 0 0.00% 0.00% 0.00% 0 AAA_SERVER_DEADT
20 0 1 0 0.00% 0.00% 0.00% 0 Policy Manager
21 31 15 2066 0.00% 0.00% 0.00% 0 Entity MIB API
22 0 1 0 0.00% 0.00% 0.00% 0 IFS Agent Manage
23 42214 2714973 15 0.00% 0.00% 0.00% 0 IPC Event Notifi
24 277563 13246366 20 0.00% 0.00% 0.00% 0 IPC Mcast Pendin
25 0 1 0 0.00% 0.00% 0.00% 0 IPC Session Serv
26 1059 691 1532 0.36% 0.13% 0.09% 1 SSH Process
27 311623 13246369 23 0.00% 0.00% 0.00% 0 IPC Periodic Tim
28 251585 13246384 18 0.00% 0.00% 0.00% 0 IPC Deferred Por
29 0 1 0 0.00% 0.00% 0.00% 0 IPC Process leve
30 0 5 0 0.00% 0.00% 0.00% 0 IPC Seat Manager
31 9187 779372 11 0.00% 0.00% 0.00% 0 IPC Check Queue
32 14 29 482 0.00% 0.00% 0.00% 0 IPC Seat RX Cont
33 0 1 0 0.00% 0.00% 0.00% 0 IPC Seat TX Cont
34 30043 1364623 22 0.00% 0.00% 0.00% 0 IPC Keep Alive M
35 178889 2722675 65 0.00% 0.00% 0.00% 0 IPC Loadometer
36 49 4 12250 0.00% 0.00% 0.00% 0 PrstVbl
37 0 1 0 0.00% 0.00% 0.00% 0 Crash writer
38 0 1 0 0.00% 0.00% 0.00% 0 Exception contro
39 2717 5786 469 0.00% 0.00% 0.00% 0 crypto sw pk pro
40 0 3 0 0.00% 0.00% 0.00% 0 License IPC stat
41 0 1 0 0.00% 0.00% 0.00% 0 License IPC serv
42 256532 13636411 18 0.00% 0.00% 0.00% 0 GraphIt
43 0 1 0 0.00% 0.00% 0.00% 0 client_entity_se
44 0 2 0 0.00% 0.00% 0.00% 0 SMART
45 0 2 0 0.00% 0.00% 0.00% 0 XML Proxy Client
46 0 1 0 0.00% 0.00% 0.00% 0 ARP Snoop
47 147794 13550163 10 0.00% 0.00% 0.00% 0 Dynamic ARP Insp
PID Runtime(ms) Invoked uSecs 5Sec 1Min 5Min TTY Process
48 0 1 0 0.00% 0.00% 0.00% 0 Critical Bkgnd
49 3011401 14548056 206 0.05% 0.03% 0.05% 0 Net Background
50 0 1 0 0.00% 0.00% 0.00% 0 IDB Work
51 6738 31001 217 0.00% 0.00% 0.00% 0 Logger
52 172213 13550090 12 0.00% 0.00% 0.00% 0 TTY Background
53 15 142 105 0.00% 0.00% 0.00% 0 SXP CORE
54 0 1 0 0.00% 0.00% 0.00% 0 Cat6k NTI ICC pr
55 4 9 444 0.00% 0.00% 0.00% 0 IF-MGR control p
56 10 1101 9 0.00% 0.00% 0.00% 0 IF-MGR event pro
57 0 1 0 0.00% 0.00% 0.00% 0 ICC Nego
58 0 1 0 0.00% 0.00% 0.00% 0 Inode Table Dest
59 0 3 0 0.00% 0.00% 0.00% 0 IP Admission HA
60 45892 826387 55 0.00% 0.00% 0.00% 0 Net Input
61 1702020 2729384 623 0.05% 0.02% 0.00% 0 Compute load avg
62 2543386 230238 11046 0.12% 0.01% 0.00% 0 Per-minute Jobs
63 8391490 13637294 615 0.12% 0.12% 0.11% 0 Per-Second Jobs
64 0 1 0 0.00% 0.00% 0.00% 0 AggMgr Process
65 1711 38903 43 0.00% 0.00% 0.00% 0 Transport Port A
66 1219459 3997648 305 0.05% 0.00% 0.00% 0 HC Counter Timer
67 43052202 117608 366090 0.00% 0.07% 0.28% 0 SFF8472
68 0 132 0 0.00% 0.00% 0.00% 0 EEM ED Identity
69 0 76 0 0.00% 0.00% 0.00% 0 EEM ED MAT
70 291544 7003447 41 0.00% 0.00% 0.00% 0 EEM ED ND
71 1145 13 88076 0.00% 0.00% 0.00% 0 USB Startup
72 0 2 0 0.00% 0.00% 0.00% 0 APM 86392 RTC
73 4139465 251343727 16 0.00% 0.02% 0.00% 0 DownWhenLooped
74 0 1 0 0.00% 0.00% 0.00% 0 HRPC power_mgmt
75 0 2 0 0.00% 0.00% 0.00% 0 Porter Power Man
76 0 1 0 0.00% 0.00% 0.00% 0 HULC ACL Tcam Me
77 0 1 0 0.00% 0.00% 0.00% 0 Hulc EEM Process
78 0 3 0 0.00% 0.00% 0.00% 0 HRPC lpip reques
79 0 2 0 0.00% 0.00% 0.00% 0 HLPIP Sync Proce
80 0 1 0 0.00% 0.00% 0.00% 0 HRPC hnetwpol re
81 0 1 0 0.00% 0.00% 0.00% 0 HRPC EnergyWise
82 0 1 0 0.00% 0.00% 0.00% 0 HRPC actual powe
83 0 1 0 0.00% 0.00% 0.00% 0 HRPC ipadm reque
84 8577013 453889098 18 0.00% 0.06% 0.05% 0 Draught link sta
85 14282 454177 31 0.00% 0.00% 0.00% 0 PSP Timer
86 0 1 0 0.00% 0.00% 0.00% 0 HULC QM Tcam Mem
87 4 8 500 0.00% 0.00% 0.00% 0 Stack DI Update
88 2345 45513 51 0.00% 0.00% 0.00% 0 OBFL TEMP obfl0
89 0 1 0 0.00% 0.00% 0.00% 0 HRPC asic-stats
90 527777 5427025 97 0.00% 0.00% 0.00% 0 Adjust Regions
91 522923 27197797 19 0.00% 0.00% 0.00% 0 FlexStack Hotswa
92 33931916 592996257 57 0.60% 0.24% 0.23% 0 RedEarth Tx Mana
93 38844973 595252448 65 0.24% 0.23% 0.23% 0 RedEarth Rx Mana
94 279569 2729226 102 0.00% 0.00% 0.00% 0 HULC Thermal Pro
95 0 1 0 0.00% 0.00% 0.00% 0 HRPC hsm request
96 0 23 0 0.00% 0.00% 0.00% 0 Stack Mgr
PID Runtime(ms) Invoked uSecs 5Sec 1Min 5Min TTY Process
97 671 63 10650 0.00% 0.00% 0.00% 0 Stack Mgr Notifi
98 674170 6803846 99 0.00% 0.00% 0.00% 0 hrpc -> response
99 262555 19324108 13 0.00% 0.00% 0.00% 0 hrpc -> request
100 2460176 19228959 127 0.00% 0.00% 0.00% 0 hrpc <- response
101 0 3 0 0.00% 0.00% 0.00% 0 HRPC hcomp reque
102 1038091 67929508 15 0.00% 0.00% 0.00% 0 apm86xxx_enet_pr
103 0 19 0 0.00% 0.00% 0.00% 0 HULC Device Mana
104 26250 96727 271 0.00% 0.00% 0.00% 0 HRPC hdm non blo
105 0 7 0 0.00% 0.00% 0.00% 0 HRPC hdm blockin
106 754 22 34272 0.00% 0.00% 0.00% 0 HRPC cfg_backup
107 80843 2714963 29 0.00% 0.00% 0.00% 0 HIPC bkgrd proce
108 0 1 0 0.00% 0.00% 0.00% 0 IPC RTTYC Messag
109 0 1 0 0.00% 0.00% 0.00% 0 RTTYC Flush
110 254 1982 128 0.00% 0.00% 0.00% 0 Hulc Port-Securi
111 18 241 74 0.00% 0.00% 0.00% 0 HRPC hpsecure re
112 0 1 0 0.00% 0.00% 0.00% 0 HRPC hrcmd reque
113 3 24 125 0.00% 0.00% 0.00% 0 HRPC emac reques
114 538 4013 134 0.00% 0.00% 0.00% 0 HRPC hulc misc r
115 69407 4538008 15 0.00% 0.00% 0.00% 0 HVLAN main bkgrd
116 81 3171 25 0.00% 0.00% 0.00% 0 SSH Event handle
117 0 2 0 0.00% 0.00% 0.00% 0 Vlan shutdown Pr
118 13 89 146 0.00% 0.00% 0.00% 0 HRPC vlan reques
119 0 1 0 0.00% 0.00% 0.00% 0 HULC VLAN REF Ba
120 712251 9257285 76 0.00% 0.00% 0.00% 0 HRPC ilp request
121 313535 13550098 23 0.00% 0.00% 0.00% 0 Hulc ILP Alchemy
122 1237134 908063 1362 0.00% 0.01% 0.00% 0 Strider Tcam Mem
123 424638 2293830 185 0.00% 0.00% 0.00% 0 HRPC hlfm reques
124 65580059 350891197 186 0.90% 0.65% 0.70% 0 HLFM address lea
125 443779 13550066 32 0.00% 0.00% 0.00% 0 HLFM aging proce
126 33949 3731425 9 0.00% 0.00% 0.00% 0 HLFM address ret
127 0 1 0 0.00% 0.00% 0.00% 0 HULC PM Vector P
128 0 1 0 0.00% 0.00% 0.00% 0 HPM Msg Retry Pr
129 0 3 0 0.00% 0.00% 0.00% 0 OBFL Cfg Dispatc
130 48014236 292091497 164 0.30% 0.30% 0.30% 0 hpm main process
131 1115 3386 329 0.00% 0.00% 0.00% 0 HPM Stack Sync P
132 451 1061 425 0.00% 0.00% 0.00% 0 HRPC pm request
133 0 3 0 0.00% 0.00% 0.00% 0 HPM if_num mappi
134 106143083 13636394 7783 0.66% 0.74% 0.72% 0 hpm counter proc
135 2829101 17797213 158 0.00% 0.02% 0.00% 0 HRPC pm-counters
136 11 21 523 0.00% 0.00% 0.00% 0 hpm vp events ca
137 2161 46952 46 0.00% 0.00% 0.00% 0 HRPC hcmp reques
138 3293 5441 605 0.00% 0.00% 0.00% 0 HCEF ADJ Refresh
139 0 1 0 0.00% 0.00% 0.00% 0 HACL Queue Proce
140 0 5 0 0.00% 0.00% 0.00% 0 HRPC acl request
141 7 100 70 0.00% 0.00% 0.00% 0 HACL Acl Manager
142 0 1 0 0.00% 0.00% 0.00% 0 HRPC aim request
143 0 1 0 0.00% 0.00% 0.00% 0 HRPC backup inte
144 3 3 1000 0.00% 0.00% 0.00% 0 HRPC cdp request
145 10 31 322 0.00% 0.00% 0.00% 0 HULC CISP Proces
PID Runtime(ms) Invoked uSecs 5Sec 1Min 5Min TTY Process
146 24596 297251 82 0.00% 0.00% 0.00% 0 HRPC dot1x reque
147 3 37 81 0.00% 0.00% 0.00% 0 Dot1X Msg Retry
148 44952 213131 210 0.00% 0.00% 0.00% 0 HULC DOT1X Proce
149 0 3 0 0.00% 0.00% 0.00% 0 HRPC lldp reques
150 0 3 0 0.00% 0.00% 0.00% 0 HRPC system mtu
151 0 4 0 0.00% 0.00% 0.00% 0 HRPC rep request
152 4 9 444 0.00% 0.00% 0.00% 0 REP Helper Proc
153 0 1 0 0.00% 0.00% 0.00% 0 HRPC sdm request
154 0 1 0 0.00% 0.00% 0.00% 0 SMI MSG Retry Pr
155 0 5 0 0.00% 0.00% 0.00% 0 HRPC Smart Insta
156 4108112 66807374 61 0.05% 0.08% 0.06% 0 Hulc Storm Contr
157 0 2 0 0.00% 0.00% 0.00% 0 HSTP Sync Proces
158 378058 1930043 195 0.00% 0.00% 0.00% 0 HRPC stp_cli req
159 30518 246315 123 0.00% 0.00% 0.00% 0 HRPC stp_state_s
160 0 2 0 0.00% 0.00% 0.00% 0 S/W Bridge Proce
161 0 30 0 0.00% 0.00% 0.00% 0 HRPC hudld reque
162 0 1 0 0.00% 0.00% 0.00% 0 HRPC vqpc reques
163 51 259 196 0.00% 0.00% 0.00% 0 HRPC hled reques
164 2845488462 335832663 8472 18.73% 18.08% 19.58% 0 Hulc LED Process
165 26783215 10001726 2677 0.24% 0.32% 0.34% 0 HL3U bkgrd proce
166 598089 5480412 109 0.00% 0.00% 0.00% 0 HRPC hl3u reques
167 492282 3113518 158 0.00% 0.00% 0.00% 0 HIPV6 bkgrd proc
168 594411 5452997 109 0.00% 0.00% 0.00% 0 HRPC IPv6 Unicas
169 0 1 0 0.00% 0.00% 0.00% 0 HRPC obfl reques
170 0 1 0 0.00% 0.00% 0.00% 0 IPC Zone Manager
171 0 1 0 0.00% 0.00% 0.00% 0 HRPC dtp request
172 0 1 0 0.00% 0.00% 0.00% 0 HRPC show_forwar
173 0 5 0 0.00% 0.00% 0.00% 0 HRPC snmp reques
174 0 1 0 0.00% 0.00% 0.00% 0 HULC SNMP Proces
175 50953392 2714324 18772 0.42% 0.36% 0.36% 0 HQM Stack Proces
176 15326271 10835141 1414 0.12% 0.10% 0.12% 0 HRPC qos request
177 3 3 1000 0.00% 0.00% 0.00% 0 HRPC span reques
178 53 270 196 0.00% 0.00% 0.00% 0 HRPC system post
179 0 9 0 0.00% 0.00% 0.00% 0 Hulc Reload Mana
180 0 1 0 0.00% 0.00% 0.00% 0 Hulc Blue Beacon
181 5681 16701 340 0.00% 0.00% 0.00% 0 HRPC hrcli-event
182 226 1232 183 0.00% 0.00% 0.00% 0 HRPC rfs request
183 0 1 0 0.00% 0.00% 0.00% 0 HRFS OIR Proc
184 166778 4548710 36 0.00% 0.00% 0.00% 0 Power RPS Proces
185 53260 138651 384 0.00% 0.00% 0.00% 0 HL2MCM
186 28 19 1473 0.00% 0.00% 0.00% 0 HL2MCM
187 465 12382 37 0.00% 0.00% 0.00% 0 AAA Server
188 0 1 0 0.00% 0.00% 0.00% 0 AAA ACCT Proc
189 0 1 0 0.00% 0.00% 0.00% 0 ACCT Periodic Pr
190 0 1 0 0.00% 0.00% 0.00% 0 AAA System Acct
191 0 1 0 0.00% 0.00% 0.00% 0 AUTH POLICY Fram
192 617550 13628427 45 0.00% 0.00% 0.00% 0 Auth Manager
193 0 1 0 0.00% 0.00% 0.00% 0 Auth-proxy AAA B
194 1256 45487 27 0.00% 0.00% 0.00% 0 IP Admin SM Proc
PID Runtime(ms) Invoked uSecs 5Sec 1Min 5Min TTY Process
195 16952084 23434516 723 0.12% 0.13% 0.12% 0 CDP Protocol
197 3913191 31518681 124 0.00% 0.07% 0.10% 0 DHCPD Receive
198 0 2 0 0.00% 0.00% 0.00% 0 CMD HANDLER
199 0 2 0 0.00% 0.00% 0.00% 0 AAA Dictionary R
200 4679 113718 41 0.00% 0.00% 0.00% 0 DHCP Snooping
201 0 1 0 0.00% 0.00% 0.00% 0 DHCP Snooping db
202 0 2 0 0.00% 0.00% 0.00% 0 Dot1x Mgr Proces
203 0 1 0 0.00% 0.00% 0.00% 0 EAP Framework
204 0 1 0 0.00% 0.00% 0.00% 0 EAP Test
205 0 2 0 0.00% 0.00% 0.00% 0 CEF switching ba
206 116459 97374 1195 0.05% 0.00% 0.00% 0 IP ARP Adjacency
207 15115037 350889367 43 0.00% 0.01% 0.00% 0 IP ARP Retry Age
208 692212549 441330784 1568 38.42% 38.80% 37.78% 0 IP Input
209 0 1 0 0.00% 0.00% 0.00% 0 ICMP event handl
210 1524620 26116726 58 0.00% 0.00% 0.00% 0 IP ARP Track
211 0 2 0 0.00% 0.00% 0.00% 0 ADJ NSF process
212 0 1 0 0.00% 0.00% 0.00% 0 IPv6 ping proces
214 0 2 0 0.00% 0.00% 0.00% 0 REP Topology cha
215 0 17 0 0.00% 0.00% 0.00% 0 SMI Director DB
216 17460 781296 22 0.00% 0.00% 0.00% 0 SMI CDP Update H
217 0 1 0 0.00% 0.00% 0.00% 0 SMI Backup Proce
218 0 2 0 0.00% 0.00% 0.00% 0 SMI IBC server p
219 0 1 0 0.00% 0.00% 0.00% 0 SMI IBC client p
220 0 2 0 0.00% 0.00% 0.00% 0 SMI IBC Download
221 66849898 155685605 429 0.60% 0.71% 0.74% 0 Spanning Tree
222 62158 227429 273 0.00% 0.00% 0.00% 0 Spanning Tree St
223 5751 1123 5121 0.00% 0.00% 0.00% 0 802.1x switch
224 0 1 0 0.00% 0.00% 0.00% 0 802.1x Webauth F
225 1812769 1568187 1155 0.00% 0.00% 0.00% 0 DTP Protocol
226 0 5 0 0.00% 0.00% 0.00% 0 HRPC IPv6 Host r
227 0 3 0 0.00% 0.00% 0.00% 0 IPv6 Platform Ho
228 0 1 0 0.00% 0.00% 0.00% 0 HRPC dai request
229 0 1 0 0.00% 0.00% 0.00% 0 HULC DAI Process
230 0 1 0 0.00% 0.00% 0.00% 0 HRPC power down
231 0 1 0 0.00% 0.00% 0.00% 0 HRPC ip device t
232 0 1 0 0.00% 0.00% 0.00% 0 HRPC ip source g
233 0 1 0 0.00% 0.00% 0.00% 0 HULC IP Source g
234 3 4 750 0.00% 0.00% 0.00% 0 HRPC sisf reques
235 0 3 0 0.00% 0.00% 0.00% 0 HULC SISF Proces
236 0 1 0 0.00% 0.00% 0.00% 0 HULC SISF Source
237 5485400 13549643 404 0.12% 0.10% 0.11% 0 PI MATM Aging Pr
238 24114943 135914355 177 0.05% 0.16% 0.12% 0 UDLD
239 13763 454874 30 0.00% 0.00% 0.00% 0 Port-Security
240 0 3 0 0.00% 0.00% 0.00% 0 Switch Backup In
241 0 2 0 0.00% 0.00% 0.00% 0 IP Host Track Pr
242 0 1 0 0.00% 0.00% 0.00% 0 Link State Group
243 2892 227156 12 0.00% 0.00% 0.00% 0 MMN bkgrd proces
244 15739 1362089 11 0.00% 0.00% 0.00% 0 Ethchnl
245 106359 20241 5254 0.00% 0.00% 0.00% 0 VMATM Callback
PID Runtime(ms) Invoked uSecs 5Sec 1Min 5Min TTY Process
246 18085 252764 71 0.00% 0.00% 0.00% 0 CEF background p
247 0 1 0 0.00% 0.00% 0.00% 0 fib_fib_bfd_sb e
248 0 1 0 0.00% 0.00% 0.00% 0 IP IRDP
249 21 4 5250 0.00% 0.00% 0.00% 0 CEF RF HULC Conv
250 0 2 0 0.00% 0.00% 0.00% 0 XDR background p
251 130318 473551 275 0.00% 0.00% 0.00% 0 XDR mcast
257 27021 1362103 19 0.00% 0.00% 0.00% 0 IP ACL XDR LC Ba
258 0 1 0 0.00% 0.00% 0.00% 0 Critical Auth
259 3292 40241 81 0.00% 0.00% 0.00% 0 TCP Timer
260 4625 27335 169 0.00% 0.00% 0.00% 0 TCP Protocols
261 125007 15359995 8 0.00% 0.00% 0.00% 0 Socket Timers
262 308290 452668 681 0.00% 0.00% 0.00% 0 HTTP CORE
263 3449 170562 20 0.00% 0.00% 0.00% 0 Cluster L2
264 17215 1362087 12 0.00% 0.00% 0.00% 0 Cluster RARP
265 103468 2310799 44 0.00% 0.00% 0.00% 0 Cluster Base
266 0 2 0 0.00% 0.00% 0.00% 0 Dot1x Supplicant
267 0 2 0 0.00% 0.00% 0.00% 0 Dot1x Supplicant
268 0 2 0 0.00% 0.00% 0.00% 0 Dot1x Supplicant
269 0 2 0 0.00% 0.00% 0.00% 0 Routing Topology
270 3 3 1000 0.00% 0.00% 0.00% 0 Flow Exporter Ti
272 0 1 0 0.00% 0.00% 0.00% 0 RARP Input
273 16 15 1066 0.00% 0.00% 0.00% 0 IP RIB Update
274 5439 26167 207 0.00% 0.00% 0.00% 0 HRPC hl2mcm igmp
275 10 127 78 0.00% 0.00% 0.00% 0 HRPC hl2mcm mlds
276 3 18 166 0.00% 0.00% 0.00% 0 static
277 0 1 0 0.00% 0.00% 0.00% 0 IPv6 RIB Event H
278 0 1 0 0.00% 0.00% 0.00% 0 MAB Framework
279 364548 1362112 267 0.00% 0.00% 0.00% 0 QoS stats proces
280 2999 227157 13 0.00% 0.00% 0.00% 0 DHCPD Database
281 0 2 0 0.00% 0.00% 0.00% 0 REP LSL Proc
282 0 2 0 0.00% 0.00% 0.00% 0 REP BPA/EPA Proc
283 0 3 0 0.00% 0.00% 0.00% 0 SNMP Timers
284 0 4 0 0.00% 0.00% 0.00% 0 HRPC dhcp snoopi
285 4 5 800 0.00% 0.00% 0.00% 0 HULC DHCP Snoopi
286 814 2355 345 0.00% 0.00% 0.00% 0 IGMPSN L2MCM
287 23342 455025 51 0.00% 0.00% 0.00% 0 IGMPSN MRD
288 215730 1248522 172 0.00% 0.00% 0.00% 0 IGMPSN
289 0 1 0 0.00% 0.00% 0.00% 0 IGMPQR
290 0 3 0 0.00% 0.00% 0.00% 0 L2TRACE SERVER
291 2042569 21609142 94 0.00% 0.01% 0.00% 0 Inline Power
292 11837386 29903640 395 0.00% 0.05% 0.05% 0 Marvell wk-a Pow
293 639 2355 271 0.00% 0.00% 0.00% 0 MLDSN L2MCM
294 0 1 0 0.00% 0.00% 0.00% 0 MRD
295 0 1 0 0.00% 0.00% 0.00% 0 MLD_SNOOP
296 0 2 0 0.00% 0.00% 0.00% 0 AAA Cached Serve
297 2017 7584 265 0.00% 0.00% 0.00% 0 NIST rng proc
298 0 2 0 0.00% 0.00% 0.00% 0 ENABLE AAA
299 0 3 0 0.00% 0.00% 0.00% 0 LDAP process
300 0 2 0 0.00% 0.00% 0.00% 0 LINE AAA
PID Runtime(ms) Invoked uSecs 5Sec 1Min 5Min TTY Process
301 23857 402352 59 0.00% 0.00% 0.00% 0 LOCAL AAA
302 0 2 0 0.00% 0.00% 0.00% 0 EPM MAIN PROCESS
303 7313454 20253558 361 0.12% 0.13% 0.12% 0 CEF: IPv4 proces
304 0 2 0 0.00% 0.00% 0.00% 0 TPLUS
306 0 2 0 0.00% 0.00% 0.00% 0 crypto engine pr
307 0 1 0 0.00% 0.00% 0.00% 0 encrypt proc
308 454 11772 38 0.00% 0.00% 0.00% 0 Crypto CA
309 0 1 0 0.00% 0.00% 0.00% 0 Crypto PKI-CRL
310 98 10 9800 0.00% 0.00% 0.00% 0 HRPC x_setup req
311 0 2 0 0.00% 0.00% 0.00% 0 REP Switch Helpe
312 0 1 0 0.00% 0.00% 0.00% 0 Licensing MIB pr
313 0 58 0 0.00% 0.00% 0.00% 0 VTP Trap Process
314 1549 48053 32 0.00% 0.00% 0.00% 0 ASP Process Crea
315 3080 57165 53 0.00% 0.00% 0.00% 0 AAA SEND STOP EV
316 0 1 0 0.00% 0.00% 0.00% 0 Test AAA Client
317 0 2 0 0.00% 0.00% 0.00% 0 DHCP Security He
318 1025 19606 52 0.00% 0.00% 0.00% 0 EEM ED Syslog
319 0 1 0 0.00% 0.00% 0.00% 0 Syslog Traps
320 8079 2225631 3 0.00% 0.00% 0.00% 0 FEX Logger Proce
321 0 1 0 0.00% 0.00% 0.00% 0 HCD Process
322 3564164 13915 256138 0.00% 0.00% 0.00% 0 HRPC cable diagn
323 2081 42 49547 0.00% 0.00% 0.00% 0 ADJ background
324 0 2 0 0.00% 0.00% 0.00% 0 DiagCard2/-1
325 0 1 0 0.00% 0.00% 0.00% 0 Online Diag EEM
326 6627802 100878280 65 0.00% 0.00% 0.00% 0 PM Callback
327 0 1 0 0.00% 0.00% 0.00% 0 HULC FNF
328 1495 130 11500 0.00% 0.00% 0.00% 0 Collection proce
329 25420 1089715 23 0.00% 0.00% 0.00% 0 dhcp snooping sw
330 0 3 0 0.00% 0.00% 0.00% 0 SNMP Traps
331 0 25 0 0.00% 0.00% 0.00% 0 EEM Server
332 4 2 2000 0.00% 0.00% 0.00% 0 Call Home proces
333 0 2 0 0.00% 0.00% 0.00% 0 EEM Policy Direc
334 67 72 930 0.00% 0.00% 0.00% 0 OBFL MSG obfl0
335 0 2 0 0.00% 0.00% 0.00% 0 EEM ED Config
336 0 3 0 0.00% 0.00% 0.00% 0 EEM ED Env
337 0 3 0 0.00% 0.00% 0.00% 0 EM ED GOLD
338 0 3 0 0.00% 0.00% 0.00% 0 EEM ED OIR
339 0 3 0 0.00% 0.00% 0.00% 0 EEM ED Test
340 17288 395999 43 0.00% 0.00% 0.00% 0 EEM ED Timer
341 1277 15533 82 0.00% 0.00% 0.00% 0 Syslog
342 206 89 2314 0.00% 0.00% 0.00% 0 RBM CORE
343 0 1 0 0.00% 0.00% 0.00% 0 tHUB
344 9694 227162 42 0.00% 0.00% 0.00% 0 Call Home Timer
345 36 203 177 0.00% 0.00% 0.00% 0 HRPC eee request
346 13250 227156 58 0.00% 0.00% 0.00% 0 hulc_eee_monitor
347 0 2 0 0.00% 0.00% 0.00% 0 STP FAST TRANSIT
348 0 14 0 0.00% 0.00% 0.00% 0 CSRT RAPID TRANS
349 91506 350306 261 0.00% 0.00% 0.00% 0 VLAN Manager
350 1181 663 1781 0.00% 0.00% 0.00% 0 SpanTree Helper
PID Runtime(ms) Invoked uSecs 5Sec 1Min 5Min TTY Process
351 0 1 0 0.00% 0.00% 0.00% 0 DiagCard1/-1
352 4968 19449 255 0.00% 0.00% 0.00% 0 SpanTree Flush
353 32 2 16000 0.00% 0.00% 0.00% 0 OBFL ENV obfl0
354 0 1 0 0.00% 0.00% 0.00% 0 Connection Mgr
355 3 1 3000 0.00% 0.00% 0.00% 0 LICENSE AGENT
356 19627 227174 86 0.00% 0.00% 0.00% 0 OBFL VOLT obfl0
358 44051 907629 48 0.00% 0.00% 0.00% 0 OBFL I/O Buffer
359 0 1 0 0.00% 0.00% 0.00% 0 image mgr
360 6093 190743 31 0.00% 0.00% 0.00% 0 IP Background
361 112 1281 87 0.00% 0.00% 0.00% 0 IP Connected Rou
362 640632 2612002 245 0.00% 0.00% 0.00% 0 IP SNMP
363 97575 1322200 73 0.00% 0.00% 0.00% 0 PDU DISPATCHER
364 514658 1325534 388 0.00% 0.00% 0.00% 0 SNMP ENGINE
365 0 2 0 0.00% 0.00% 0.00% 0 IP SNMPV6
366 0 1 0 0.00% 0.00% 0.00% 0 SNMP ConfCopyPro
367 1136797 11427640 99 0.05% 0.00% 0.00% 0 NTP
368 10 5 2000 0.00% 0.00% 0.00% 0 hulc cfg mgr mas
369 1222 51 23960 0.00% 0.00% 0.00% 0 hulc running con
370 0 1 0 0.00% 0.00% 0.00% 0 RTTYS Process
372 13863 63384 218 0.00% 0.00% 0.00% 0 HCMP sync proces
373 0 2 0 0.00% 0.00% 0.00% 0 hulc_tb_process
374 4 3 1333 0.00% 0.00% 0.00% 0 XDR RP Ping Back
375 926151 2118358 437 0.05% 0.00% 0.00% 0 XDR mcast rcv
376 7112 189929 37 0.00% 0.00% 0.00% 0 XDR RP backgroun
377 3 4 750 0.00% 0.00% 0.00% 0 IPC LC Message H
378 0 1 0 0.00% 0.00% 0.00% 0 XDR RP Test Back
379 0 2 0 0.00% 0.00% 0.00% 0 ADJ resolve proc
381 0 3 0 0.00% 0.00% 0.00% 0 CEF RP IPC Backg
Any advice the community can provide will be greatly appreciated.
If there is any information I can also provide to help, please let me know.
01-26-2018 06:29 AM - edited 01-26-2018 06:30 AM
sh proc cpu sorted | ex 0.00
This will give you a better indicator of the higher-use processes.
This one is a known bug:
164 2845488462 335832663 8472 18.73% 18.08% 19.58% 0 Hulc LED Process
01-26-2018 06:37 AM
I have seen that the Hulc LED Process is a "feature" that is a known issue.
Any thoughts on what may make IP Input so high?
JB_MDF#sh proc cpu sorted | ex 0.0.0
CPU utilization for five seconds: 99%/20%; one minute: 98%; five minutes: 98%
PID Runtime(ms) Invoked uSecs 5Sec 1Min 5Min TTY Process
208 696041491 443942492 1567 37.61% 36.72% 35.89% 0 IP Input
164 2859993238 337518591 8473 20.60% 20.15% 20.63% 0 Hulc LED Process
221 67207083 156429836 429 0.95% 0.80% 0.78% 0 Spanning Tree
124 65971849 352637396 187 0.77% 0.69% 0.67% 0 HLFM address lea
134 106664462 13704689 7783 0.65% 0.70% 0.71% 0 hpm counter proc
130 48266661 293550724 164 0.53% 0.35% 0.31% 0 hpm main process
175 51209397 2727916 18772 0.41% 0.35% 0.35% 0 HQM Stack Proces
17 16726913 27633563 605 0.29% 0.21% 0.23% 0 ARP Input
93 39057226 598209192 65 0.23% 0.27% 0.24% 0 RedEarth Rx Mana
303 7338637 20356250 360 0.23% 0.15% 0.12% 0 CEF: IPv4 proces
165 26840617 10051762 2670 0.17% 0.19% 0.18% 0 HL3U bkgrd proce
01-26-2018 07:03 AM
From one of the Cisco docs:
The Cisco IOS® software process called IP input takes care of process-switching IP packets. If the IP input process uses unusually high CPU resources, the router is process-switching a lot of IP traffic. Check these issues:
Interrupt switching is disabled on an interface (or interfaces) that has (have) a lot of traffic
Interrupt switching refers to the use of switching algorithms other than process switching. Examples include fast switching, optimum switching, Cisco Express Forwarding switching, and so on (refer to Performance Tuning Basics for details). Examine the output of the show interfaces switching command to see which interface is burdened with traffic. You can check the show ip interface command to see which switching method(s) are used on each interface. Re-enable interrupt switching on that interface. Remember that regular fast switching is configured on output interfaces: if fast switching is configured on an interface, packets that go out of that interface are fast-switched. Cisco Express Forwarding switching is configured on input interfaces. To create Forwarding Information Base (FIB) and adjacency table entries on a particular interface, configure Cisco Express Forwarding switching on all interfaces that route to that interface.
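As a rough sketch of those checks on an IOS device (the interface name here is only an example), it might look like this:
show interfaces switching
show ip interface GigabitEthernet1/0/1 | include switching
configure terminal
 interface GigabitEthernet1/0/1
  ip route-cache          ! re-enable fast switching if it was turned off
  ip route-cache cef      ! re-enable CEF switching on the input interface
 end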
Fast switching on the same interface is disabled
If an interface has a lot of secondary addresses or subinterfaces and there is a lot of traffic sourced from the interface and destined for an address on that same interface, then all of those packets are process-switched. In this situation, you should enable ip route-cache same-interface on the interface. When Cisco Express Forwarding switching is used, you do not need to enable Cisco Express Forwarding switching on the same interface separately.
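A minimal sketch of that change, assuming the heavily used interface is GigabitEthernet1/0/1 (placeholder name):
configure terminal
 interface GigabitEthernet1/0/1
  ip route-cache same-interface
 end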
Fast switching on an interface providing policy routing is disabled
If a route-map has been configured on an interface, and a lot of traffic is handled by the route-map, then the router process-switches this traffic. In this situation, you should enable ip route-cache policy on the interface. Check the restrictions mentioned in the "Enabling Fast-Switched Policy-Based Routing" section of Configuring Policy-Based Routing.
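If that applies, the change is a one-liner per interface; a sketch with a placeholder interface name:
configure terminal
 interface GigabitEthernet1/0/1
  ip route-cache policy
 end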
Traffic that cannot be interrupt-switched arrives
This can be any of the listed types of traffic. Click on linked items for more information.
Packets for which there is no entry yet in the switching cache
Even if fast, optimum, or Cisco Express Forwarding switching (CEF) is configured, a packet for which there is no match in the fast-switching cache or FIB and adjacency tables is processed. An entry is then created in the appropriate cache or table, and all subsequent packets that match the same criteria are fast, optimum, or CEF-switched. In normal circumstances, these processed packets do not cause high CPU utilization. However, if there is a device in the network which 1) generates packets at an extremely high rate for devices reachable through the router, and 2) uses different source or destination IP addresses, there is not a match for these packets in the switching cache or table, so they are processed by the IP Input process (if NetFlow switching is configured, source and destination TCP ports are checked against entries in the NetFlow cache as well). This source device can be a non-functional device or, more likely, a device attempting an attack.
(With Cisco Express Forwarding, this applies only to packets that need glean adjacencies. Refer to Cisco Express Forwarding for more information about Cisco Express Forwarding adjacencies.)
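If NetFlow happens to be configured on the device (an assumption), the flow cache can help spot such a host, for example:
show ip cache flow        ! look for one source generating flows to many destinations or ports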
Packets destined for the router
These are examples of packets destined for the router:
Routing updates that arrive at an extremely high rate. If the router receives an enormous amount of routing updates that have to be processed, this task might overload the CPU. Normally, this cannot happen in a stable network. The way you can gather more information depends on the routing protocol you have configured. However, you can start to check the output of the show ip route summary command periodically. Values that change rapidly are a sign of an unstable network. Frequent routing table changes mean increased routing protocol processing, which results in increased CPU utilization. For further information on how to troubleshoot this issue, refer to the Troubleshooting TCP/IP section of the Internetwork Troubleshooting Guide.
Any other kind of traffic destined for the router. Check who is logged on to the router and user actions. If someone is logged on and issues commands that produce long output, the high CPU utilization by the "IP input" process is followed by a much higher CPU utilization by the Virtual Exec process.
Spoof attack. To identify the problem, issue the show ip traffic command to check the amount of IP traffic. If there is a problem, the number of received packets with a local destination is significant. Next, examine the output of the show interfaces and show interfaces switching commands to check which interface the packets are coming in. Once you have identified the receiving interface, turn on ip accounting on the outgoing interface and see if there is a pattern. If there is an attack, the source address is almost always different, but the destination address is the same. An access list can be configured to solve the issue temporarily (preferably on the device closest to the source of the packets), but the real solution is to track down the source device and stop the attack.
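A hedged sketch of that sequence on a router (the interface name, ACL number, and address are placeholders):
show ip traffic                      ! check packets received "with local destination"
show interfaces switching            ! find the interface the packets arrive on
configure terminal
 interface GigabitEthernet1/0/24     ! suspected outgoing interface (example)
  ip accounting
 end
show ip accounting                   ! many different sources to one destination is suspicious
configure terminal
 access-list 150 deny   ip any host 192.0.2.10   ! temporary relief while chasing the source
 access-list 150 permit ip any any
 interface GigabitEthernet1/0/24
  ip access-group 150 out
 end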
Broadcast traffic
Check the number of broadcast packets in the show interfaces command output. If you compare the amount of broadcasts to the total amount of packets that were received on the interface, you can gain an idea of whether there is an overhead of broadcasts. If there is a LAN with several switches connected to the router, then this can indicate a problem with Spanning Tree.
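One quick way to eyeball that ratio on a single interface (example name):
show interfaces GigabitEthernet1/0/1 | include packets input|broadcasts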
IP packets with options
Packets that require protocol translation
Multilink Point-to-Point Protocol (supported in Cisco Express Forwarding switching)
Compressed traffic
If there is no Compression Service Adapter (CSA) in the router, compressed packets must be process-switched.
Encrypted traffic
If there is no Encryption Service Adapter (ESA) in the router, encrypted packets must be process-switched.
Packets that go through serial interfaces with X.25 encapsulation
In the X.25 protocol suite, flow control is implemented on the second Open System Interconnection (OSI) layer.
A lot of packets that arrive at an extremely high rate for a destination in a directly attached subnet for which there is no entry in the Address Resolution Protocol (ARP) table. This should not happen with TCP traffic because of the windowing mechanism, but it can happen with User Datagram Protocol (UDP) traffic. To identify the problem, repeat the actions suggested in order to track down a spoof attack.
A lot of multicast traffic goes through the router. Unfortunately, there is no easy way to examine the amount of multicast traffic. The show ip traffic command only shows summary information. However, if you have configured multicast routing on the router, you can enable fast-switching of multicast packets with the ip mroute-cache interface configuration command (fast-switching of multicast packets is off by default).
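A sketch of enabling it per interface (placeholder name), assuming multicast routing is already configured:
configure terminal
 interface GigabitEthernet1/0/1
  ip mroute-cache
 end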
Router is oversubscribed. If the router is over-used and cannot handle this amount of traffic, try to distribute the load among other routers or purchase a high-end router.
IP Network Address Translation (NAT) is configured on the router, and lots of Domain Name System (DNS) packets go through the router. UDP or TCP packets with source or destination port 53 (DNS) are always punted to process level by NAT.
There are other packet types that are punted to processing.
There is fragmentation of IP datagrams. There is a small increase in CPU and memory overhead due to fragmentation of an IP datagram. Refer to Resolve IP Fragmentation, MTU, MSS, and PMTUD Issues with GRE and IPSEC for more information on how to troubleshoot this issue.
Whatever the reason for high CPU utilization in the IP Input process, the source of the problem can be tracked down if you debug the IP packets. Since the CPU utilization is already high, the debug process has to be performed with extreme caution. The debug process produces lots of messages, so only logging buffered should be configured.
Logging to a console raises unnecessary interrupts to the CPU and increases the CPU utilization. Logging to a host (or monitor logging) generates additional traffic on interfaces.
The debug process can be started with the debug ip packet detail exec command. This session should not last longer than three to five seconds. Debugging messages are written in the logging buffer. A capture of a sample IP debugging session is provided in the Sample IP Packet Debugging Session section of this document. Once the source device of unwanted IP packets is found, this device can be disconnected from the network, or an access list can be created on the router to drop packets from that destination.
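A cautious sketch of such a session (the ACL number is arbitrary; keep the debug to a few seconds):
configure terminal
 no logging console                       ! avoid console interrupts
 logging buffered 128000 debugging
 access-list 105 permit ip any any        ! narrow this down once you suspect a host or subnet
 end
debug ip packet 105 detail
! wait roughly 3-5 seconds
undebug all
show logging                              ! review the captured packets in the buffer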
01-26-2018 08:48 AM
Thank you.
Admittedly, I am not savvy on switches, so the configurations and reports are new to me, but this gives me a good place to start.
I hope to find a resolution soon. This only started last week with no known configuration changes having taken place in that time span.
01-26-2018 10:10 AM
There's a free tool on the Cisco website called CLI Analyzer that can point out issues with devices. It's a powerful tool that checks hardware, software, and even configuration issues. I would also take a full show tech from the switch (you will need to save it to a text file through PuTTY), then run it through the tool and see if it points anything out.
02-01-2018 07:25 AM
We found a solution, although it is more of a bandage than a long term fix.
In configuration mode: sdm prefer lanbase-default
By changing the template from lanbase-routing to lanbase-default, the CPU has stopped maxing out. It now sits at around 40% load, versus 90-100% before, with clockwork spikes up to about 70% every 4 hours, each lasting about an hour.
This is a bandage solution for us, however, because we also found that this switch is acting as a sort of pseudo-distribution switch when it was only ever intended to be an access switch. We're looking into options for a better setup, but for now the problem is resolved.
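For reference, the change itself is just the one global command; as far as I know the new SDM template only takes effect after the switch reloads, so that has to be scheduled:
configure terminal
 sdm prefer lanbase-default
 end
show sdm prefer        ! shows the active template and any template pending for next reload
reload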