3750 stack high CPU

the-lebowski
Level 4

I've seen a lot of reports of high CPU on these stacks around the internet, but I'm not sure why it's happening on mine.  It's a 9-switch stack running 15.0(2)SE2.  I disabled all SNMP traps, but that doesn't seem to have helped at all.  No PBR, no ACLs, just basic access switches with the CPU constantly hovering above 80%.
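
Roughly what I did on the SNMP side, if it helps (approximate commands from memory, not a verbatim paste of the config):

no snmp-server enable traps
show processes cpu history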

 

3750-9stack#show processes cpu sorted | exc 0.00
CPU utilization for five seconds: 79%/4%; one minute: 82%; five minutes: 86%
 PID Runtime(ms)     Invoked      uSecs   5Sec   1Min   5Min TTY Process
 180    96655298  1851631389         52 21.44% 21.41% 21.26%   0 Hulc LED Process
 428  1067125296  3162777873          0 10.40%  9.50% 10.76%   0 SNMP ENGINE
 308   993701911   991353893       1002  9.76%  9.80% 10.92%   0 IP SNMP
  90  1232001452   929624847       1325  6.72%  6.20%  6.26%   0 RedEarth Tx Mana
  89  3657727476  1553238153       2354  5.92%  4.45%  4.45%   0 RedEarth I2C dri
 229  2507156978  2699623439          0  4.16%  4.25%  4.99%   0 IP Input
  59  2018043721    76315044      26443  2.08%  2.45%  2.50%   0 Per-Second Jobs
 426  1307358511  2637566506          0  1.91%  2.16%  2.48%   0 PDU DISPATCHER
 197   794795486   263506197       3016  1.12%  1.04%  0.99%   0 HRPC qos request
 136   802190269    74858635      10716  0.95%  0.89%  0.86%   0 hpm counter proc
3750-9stack#show ip traffic
IP statistics:
  Rcvd:  2849690005 total, 2825754286 local destination
         0 format errors, 0 checksum errors, 12198769 bad hop count
         0 unknown protocol, 11736909 not a gateway
         0 security failures, 0 bad options, 5815628 with options
  Opts:  0 end, 0 nop, 0 basic security, 0 loose source route
         0 timestamp, 0 extended security, 0 record route
         0 stream ID, 0 strict source route, 5815628 alert, 0 cipso, 0 ump
         0 other
  Frags: 0 reassembled, 0 timeouts, 0 couldn't reassemble
         0 fragmented, 0 couldn't fragment
  Bcast: 91898 received, 0 sent
  Mcast: 0 received, 0 sent
  Sent:  2852174001 generated, 0 forwarded
  Drop:  4080 encapsulation failed, 0 unresolved, 0 no adjacency
         0 no route, 0 unicast RPF, 0 forced drop
         0 options denied, 0 source IP address zero

ICMP statistics:
  Rcvd: 0 format errors, 0 checksum errors, 0 redirects, 668704 unreachable
        4134161 echo, 105 echo reply, 0 mask requests, 0 mask replies, 0 quench
        0 parameter, 7 timestamp, 0 info request, 0 other
        0 irdp solicitations, 0 irdp advertisements
  Sent: 0 redirects, 95 unreachable, 110 echo, 4134161 echo reply
        0 mask requests, 0 mask replies, 0 quench, 7 timestamp
        0 info reply, 0 time exceeded, 0 parameter problem
        0 irdp solicitations, 0 irdp advertisements

  

cpu-queue-frames  retrieved  dropped    invalid    hol-block  stray
----------------- ---------- ---------- ---------- ---------- ----------
rpc               2355622498 0          0          0          0
stp               2556824953 0          0          0          0
ipc               2596718619 0          0          0          0
routing protocol  3103025534 0          0          0          0
L2 protocol       241612206  0          0          0          0
remote console    660        0          0          0          0
sw forwarding     359631066  0          0          0          0
host              2831343938 0          0          0          0
broadcast         1468175178 0          0          0          0
cbt-to-spt        0          0          0          0          0
igmp snooping     1683267739 0          0          0          0
icmp              0          0          0          0          0
logging           65         0          0          0          0
rpf-fail          0          0          0          0          0
dstats            0          0          0          0          0
cpu heartbeat     1475751075 0          0          0          0

 

CAM Utilization for ASIC# 0                      Max            Used
                                             Masks/Values    Masks/values

 Unicast mac addresses:                       6364/6364       2348/2348
 IPv4 IGMP groups + multicast routes:         1072/1072          1/1
 IPv4 unicast directly-connected routes:      6144/6144          1/1
 IPv4 unicast indirectly-connected routes:    2048/2048         38/38
 IPv6 Multicast groups:                        112/112          11/11
 IPv6 unicast directly-connected routes:        74/74            0/0
 IPv6 unicast indirectly-connected routes:      32/32            3/3
 IPv4 policy based routing aces:               412/412          12/12
 IPv4 qos aces:                                512/512          21/21
 IPv4 security aces:                           924/924          44/44
 IPv6 policy based routing aces:                18/18            8/8
 IPv6 qos aces:                                 22/22           18/18
 IPv6 security aces:                            60/60           17/17
6 Replies

Alex Pfeil
Level 7

We had a bug on a 3650 stack that caused it to run high CPU. The resolution was to upgrade the software and reboot.  I would open a TAC case and have them take a look.
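
When you open the case, TAC will usually want output along these lines up front (a rough list; the engineer may ask for more):

show version
show processes cpu sorted
show processes cpu history
show tech-support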

 

Please mark helpful posts.

Do you remember what version of IOS you were running and which version you upgraded to?

Hello,

 

On a side note, what is the uptime of the stack (sh ver)?
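
Something like this will pull it quickly:

show version | include uptime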

....uptime is 2 years, 19 weeks, 3 days, 14 hours, 2 minutes


@the-lebowski wrote:
....uptime is 2 years, 19 weeks, 3 days, 14 hours, 2 minutes

1.  Upgrade to something more recent in the 15.0(2)SE train.  I've used 15.0(2)SE2 and was not particularly impressed by it.  (No joke, a 48-port 2960 PoE ran high CPU ... without any config!)

2.  An uptime of two years?  Yup, that's not good.  Never keep switches up for more than a year at a time.  If possible, reboot the entire stack yearly (twice a year if it can be done) just to clear out the "gunk" in the system.

3.  Check your ports.  The Hulc LED Process is normally associated with ports going down/up; a quick way to check is sketched below.
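
A rough idea of where to look for flapping ports (illustrative commands only, nothing specific to your stack):

show logging | include UPDOWN
show interfaces status
show interfaces counters errors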

It's without a doubt related to SNMP.  I removed all mention of SNMP from the config and the CPU dropped down to 45-55%, give or take.  Maybe it's a combination of uptime + SNMP?  I have a case open with TAC and they don't seem to think it's a software bug.  I don't know what else it can be if the CPU only pegs once SNMP is enabled.
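
In case it helps anyone else, a sketch of a middle ground between full SNMP and none at all (the community string, view name, and NMS address below are placeholders, not from my config):

no snmp-server enable traps
snmp-server view LIMITED system included
access-list 10 permit 192.0.2.10
snmp-server community EXAMPLE-RO view LIMITED RO 10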

 

CPU utilization for five seconds: 49%/3%; one minute: 54%; five minutes: 66%
 PID Runtime(ms)     Invoked      uSecs   5Sec   1Min   5Min TTY Process
 180   156371417  1857975342         84 21.43% 21.05% 21.35%   0 Hulc LED Process
  90  1249574519   932587982       1339  6.07%  5.98%  6.09%   0 RedEarth Tx Mana
  59  2025032365    76576400      26444  2.23%  2.48%  2.49%   0 Per-Second Jobs
 247   924666536   542248728       1705  1.27%  1.10%  1.08%   0 Spanning Tree
 136   804686249    75115695      10712  1.11%  0.88%  0.89%   0 hpm counter proc
 292   206265235     7647033      26973  0.63%  0.35%  0.35%   0 QoS stats proces
 197   797522452   264411069       3016  0.63%  0.95%  0.96%   0 HRPC qos request
 438   160374947    75680506       2119  0.47%  0.26%  0.23%   0 XDR mcast rcv
  46   205366881    72633610       2827  0.47%  0.22%  0.23%   0 Net Background
 265   272611714    73958033       3686  0.31%  0.27%  0.32%   0 PI MATM Aging Pr