09-05-2010 11:28 AM - edited 03-06-2019 12:50 PM
Hello, we have a Cisco WS-C3750G-24T running IOS 12.2(25)SEE (image C3750-IPSERVICESK9-M). We have this device running layer-3 services for a few hosts, and total throughput is probably around 300Mbps peak for the whole network. The setup is very simple, and this device is one of two devices on the network that is running OSPF. The other is our border router, which also runs BGP.
We are seeing very unusual CPU usage on this 3750G. I have attached a graph from our monitoring platform, and I have verified that the values are correct here.
How can I track down what is causing this unusually high load?
-Chris
09-05-2010 11:37 AM
Here is a show proc cpu. Adding up the columns shows about 2-3% total utilization for 5 sec, and less than 2% for 1 min and 5 min. Why is it reporting 60%+? The interface is also slow to respond, so it feels like something actually is using 60%+ of the CPU.
CPU utilization for five seconds: 63%/0%; one minute: 64%; five minutes: 64%
PID Runtime(ms) Invoked uSecs 5Sec 1Min 5Min TTY Process
1 167 4262 39 0.00% 0.00% 0.00% 0 Chunk Manager
2 16807 1596737 10 0.00% 0.00% 0.00% 0 Load Meter
3 0 1 0 0.00% 0.00% 0.00% 0 CEF IPC Backgrou
4 6817713 860164 7926 0.00% 0.07% 0.05% 0 Check heaps
5 0 12 0 0.00% 0.00% 0.00% 0 Pool Manager
6 0 2 0 0.00% 0.00% 0.00% 0 Timers
7 3322358 17548192 189 0.00% 0.04% 0.03% 0 ARP Input
8 0 1 0 0.00% 0.00% 0.00% 0 AAA_SERVER_DEADT
9 0 2 0 0.00% 0.00% 0.00% 0 AAA high-capacit
10 0 1 0 0.00% 0.00% 0.00% 0 Policy Manager
11 8 5 1600 0.00% 0.00% 0.00% 0 Entity MIB API
12 0 1 0 0.00% 0.00% 0.00% 0 IFS Agent Manage
13 766 133714 5 0.00% 0.00% 0.00% 0 IPC Dynamic Cach
14 0 1 0 0.00% 0.00% 0.00% 0 IPC Zone Manager
15 60982 7779495 7 0.00% 0.00% 0.00% 0 IPC Periodic Tim
16 52566 7779495 6 0.00% 0.00% 0.00% 0 IPC Deferred Por
17 5914 533962 11 0.00% 0.00% 0.00% 0 IPC Seat Manager
18 27918 1990095 14 0.00% 0.00% 0.00% 0 HC Counter Timer
19 0 1 0 0.00% 0.00% 0.00% 0 HRPC asic-stats
20 71290 7779491 9 0.00% 0.00% 0.00% 0 Dynamic ARP Insp
21 0 1 0 0.00% 0.00% 0.00% 0 ARP Snoop
22 0 2 0 0.00% 0.00% 0.00% 0 XML Proxy Client
23 0 1 0 0.00% 0.00% 0.00% 0 Critical Bkgnd
24 227210 1275588 178 0.00% 0.01% 0.00% 0 Net Background
25 60 4171 14 0.00% 0.00% 0.00% 0 Logger
26 77339 7779452 9 0.00% 0.00% 0.00% 0 TTY Background
27 559430 7779496 71 0.15% 0.02% 0.00% 0 Per-Second Jobs
28 1568130 133719 11727 0.00% 0.00% 0.00% 0 Per-minute Jobs
29 0 2 0 0.00% 0.00% 0.00% 0 AggMgr Process
30 32 1035 30 0.00% 0.00% 0.00% 0 Net Input
31 123799 1594472 77 0.00% 0.00% 0.00% 0 Compute load avg
32 548 1842 297 0.00% 0.00% 0.00% 0 Collection proce
33 58141 132661708 0 0.00% 0.00% 0.00% 0 DownWhenLooped
34 0 1 0 0.00% 0.00% 0.00% 0 HRPC hdwl reques
35 0 1 0 0.00% 0.00% 0.00% 0 HRPC lpip reques
36 0 2 0 0.00% 0.00% 0.00% 0 HLPIP Sync Proce
37 0 1 0 0.00% 0.00% 0.00% 0 HRPC Multi-FS Sy
38 0 17 0 0.00% 0.00% 0.00% 0 HULC multifs pro
39 0 2 0 0.00% 0.00% 0.00% 0 MIRAGE RBCP Moni
40 0 1 0 0.00% 0.00% 0.00% 0 HRPC hsm request
41 0 7 0 0.00% 0.00% 0.00% 0 Stack Mgr
42 126 4 31500 0.00% 0.00% 0.00% 0 Stack Mgr Notifi
43 2605845 340914300 7 0.00% 0.12% 0.15% 0 Fifo Error Detec
44 5247 331 15851 0.00% 0.00% 0.00% 0 Adjust Regions
45 172595 7779461 22 0.00% 0.00% 0.00% 0 hrpc -> response
46 8 282 28 0.00% 0.00% 0.00% 0 hrpc -> request
47 26368 1594751 16 0.00% 0.00% 0.00% 0 hrpc <- response
48 0 3 0 0.00% 0.00% 0.00% 0 HULC Device Mana
49 0 3 0 0.00% 0.00% 0.00% 0 HRPC hdm non blo
50 0 3 0 0.00% 0.00% 0.00% 0 HRPC hdm blockin
51 7735 1594474 4 0.00% 0.00% 0.00% 0 HIPC bkgrd proce
52 8 180 44 0.00% 0.00% 0.00% 0 Hulc Port-Securi
53 0 1 0 0.00% 0.00% 0.00% 0 HRPC hpsecure re
54 0 1 0 0.00% 0.00% 0.00% 0 HRPC hlfm reques
55 325438 193767412 1 0.15% 0.01% 0.00% 0 HLFM address lea
56 146094 7779458 18 0.00% 0.00% 0.00% 0 HLFM aging proce
57 118087 193868147 0 0.00% 0.00% 0.00% 0 HLFM address ret
58 0 1 0 0.00% 0.00% 0.00% 0 HRPC hrcmd reque
59 44 529 83 0.00% 0.00% 0.00% 0 HRPC x_setup req
60 0 1 0 0.00% 0.00% 0.00% 0 HRPC system mtu
61 17441 2647583 6 0.00% 0.00% 0.00% 0 HVLAN main bkgrd
62 0 2 0 0.00% 0.00% 0.00% 0 HVLAN Mapped Vla
63 0 2 0 0.00% 0.00% 0.00% 0 Vlan shutdown Pr
64 0 1 0 0.00% 0.00% 0.00% 0 HRPC vlan reques
65 0 1 0 0.00% 0.00% 0.00% 0 HULC VLAN REF Ba
66 0 1 0 0.00% 0.00% 0.00% 0 HRPC hfbm reques
67 109 44601 2 0.00% 0.00% 0.00% 0 HCMP sync proces
68 0 1 0 0.00% 0.00% 0.00% 0 HRPC hulc misc r
69 0 1 0 0.00% 0.00% 0.00% 0 HPM Msg Retry Pr
70 323 66882 4 0.00% 0.00% 0.00% 0 DHCPD Timer
71 175 80 2187 0.00% 0.00% 0.00% 0 hpm main process
72 0 164 0 0.00% 0.00% 0.00% 0 HPM Stack Sync P
73 0 1 0 0.00% 0.00% 0.00% 0 HRPC pm request
74 0 3 0 0.00% 0.00% 0.00% 0 HPM if_num mappi
75 2854204 7779457 366 0.15% 0.13% 0.15% 0 hpm counter proc
76 0 1 0 0.00% 0.00% 0.00% 0 HRPC pm-counters
77 0 1 0 0.00% 0.00% 0.00% 0 hpm vp events ca
78 0 1 0 0.00% 0.00% 0.00% 0 HRPC hcmp reques
79 50 3816 13 0.00% 0.00% 0.00% 0 HCEF ADJ Refresh
80 0 5 0 0.00% 0.00% 0.00% 0 HL2MM
81 0 1 0 0.00% 0.00% 0.00% 0 HRPC hl2mm reque
82 0 1 0 0.00% 0.00% 0.00% 0 HRPC hl3mm reque
83 0 1 0 0.00% 0.00% 0.00% 0 hl3md_rpfq_thrl_
84 26063 5500640 4 0.00% 0.00% 0.00% 0 hl3mm
85 0 1 0 0.00% 0.00% 0.00% 0 HACL Queue Proce
86 0 1 0 0.00% 0.00% 0.00% 0 HRPC acl request
87 9 57 157 0.00% 0.00% 0.00% 0 HACL Acl Manager
88 0 1 0 0.00% 0.00% 0.00% 0 HRPC backup inte
89 49854 14774307 3 0.00% 0.00% 0.00% 0 IP NAT Ager
90 0 1 0 0.00% 0.00% 0.00% 0 HRPC cdp request
91 0 1 0 0.00% 0.00% 0.00% 0 HRPC dot1x reque
92 0 4 0 0.00% 0.00% 0.00% 0 HULC DOT1X Proce
93 0 1 0 0.00% 0.00% 0.00% 0 HRPC sdm request
94 379100 38008349 9 0.00% 0.00% 0.00% 0 Hulc Storm Contr
95 0 2 0 0.00% 0.00% 0.00% 0 HSTP Sync Proces
96 0 1 0 0.00% 0.00% 0.00% 0 HRPC stp_cli req
97 0 1 0 0.00% 0.00% 0.00% 0 HRPC stp_state_s
98 0 2 0 0.00% 0.00% 0.00% 0 S/W Bridge Proce
99 0 1 0 0.00% 0.00% 0.00% 0 HRPC hudld reque
100 0 1 0 0.00% 0.00% 0.00% 0 HRPC vqpc reques
101 0 1 0 0.00% 0.00% 0.00% 0 HRPC iec_load_ba
102 0 1 0 0.00% 0.00% 0.00% 0 HRPC l2pt qnq rp
103 8837 3953766 2 0.00% 0.00% 0.00% 0 hl3mm_rp
104 0 1 0 0.00% 0.00% 0.00% 0 HRPC hled reques
105 6439762 157546603 40 0.15% 0.20% 0.19% 0 Hulc LED Process
106 881772 5762788 153 0.00% 0.01% 0.00% 0 HL3U bkgrd proce
107 0 1 0 0.00% 0.00% 0.00% 0 HRPC hl3u reques
108 8139 1822361 4 0.00% 0.00% 0.00% 0 HL3U PBR bkgrd p
109 578 66801 8 0.00% 0.00% 0.00% 0 HL3U PBR n-h res
110 0 1 0 0.00% 0.00% 0.00% 0 HRPC dtp request
111 0 1 0 0.00% 0.00% 0.00% 0 HRPC show_forwar
112 0 1 0 0.00% 0.00% 0.00% 0 HRPC snmp reques
113 2474011 1594489 1551 0.00% 0.07% 0.09% 0 HQM Stack Proces
114 2693258 3188974 844 0.15% 0.03% 0.00% 0 HRPC qos request
115 0 1 0 0.00% 0.00% 0.00% 0 HRPC span reques
116 0 1 0 0.00% 0.00% 0.00% 0 HRPC system post
117 0 1 0 0.00% 0.00% 0.00% 0 Hulc Reload Mana
118 0 1 0 0.00% 0.00% 0.00% 0 HRPC hrcli-event
119 4167 2267411 1 0.00% 0.00% 0.00% 0 DHCPD Database
120 8 2 4000 0.00% 0.00% 0.00% 0 image mgr
121 0 5 0 0.00% 0.00% 0.00% 0 HL2MCM
122 0 1 0 0.00% 0.00% 0.00% 0 HRPC hl2mcm mlds
123 0 2 0 0.00% 0.00% 0.00% 0 EAPoUDP Process
124 0 3 0 0.00% 0.00% 0.00% 0 CEF switching ba
125 444623 7779464 57 0.00% 0.01% 0.00% 0 PI MATM Aging Pr
126 0 1 0 0.00% 0.00% 0.00% 0 Switch Backup In
127 224 133712 1 0.00% 0.00% 0.00% 0 MMN bkgrd proces
128 0 2 0 0.00% 0.00% 0.00% 0 Dot1x Mgr Proces
129 0 1 0 0.00% 0.00% 0.00% 0 MAB Framework
130 0 37 0 0.00% 0.00% 0.00% 0 802.1x switch
131 0 1 0 0.00% 0.00% 0.00% 0 802.1x Critical
132 67744 598205 113 0.00% 0.00% 0.00% 0 DTP Protocol
133 0 1 0 0.00% 0.00% 0.00% 0 EAP Framework
134 0 1 0 0.00% 0.00% 0.00% 0 HRPC dai request
135 0 1 0 0.00% 0.00% 0.00% 0 HULC DAI Process
136 0 1 0 0.00% 0.00% 0.00% 0 HRPC dhcp snoopi
137 0 4 0 0.00% 0.00% 0.00% 0 HULC DHCP Snoopi
138 0 1 0 0.00% 0.00% 0.00% 0 HRPC ip source g
139 0 1 0 0.00% 0.00% 0.00% 0 HULC IP Source g
140 27992 8028032 3 0.00% 0.00% 0.00% 0 UDLD
141 670 267602 2 0.00% 0.00% 0.00% 0 Port-Security
142 8198 533961 15 0.00% 0.00% 0.00% 0 MDFS LC Download
143 0 2 0 0.00% 0.00% 0.00% 0 Switch IP Host T
144 0 1 0 0.00% 0.00% 0.00% 0 Link State Group
145 2447 800091 3 0.00% 0.00% 0.00% 0 Ethchnl
146 90 1032 87 0.00% 0.00% 0.00% 0 VMATM Callback
147 0 2 0 0.00% 0.00% 0.00% 0 AAA Server
148 0 1 0 0.00% 0.00% 0.00% 0 AAA ACCT Proc
149 0 1 0 0.00% 0.00% 0.00% 0 ACCT Periodic Pr
150 458670 1267410 361 0.00% 0.00% 0.00% 0 CDP Protocol
151 0 1 0 0.00% 0.00% 0.00% 0 TCP Command
152 0 1 0 0.00% 0.00% 0.00% 0 HRPC ilp request
153 0 2 0 0.00% 0.00% 0.00% 0 AAA Dictionary R
154 596 66903 8 0.00% 0.00% 0.00% 0 DHCP Snooping
155 23813059 71069427 335 0.95% 0.81% 0.75% 0 IP Input
156 0 1 0 0.00% 0.00% 0.00% 0 ICMP event handl
157 111128 70234353 1 0.00% 0.00% 0.00% 0 MDFS MFIB Proces
158 1152198 8022373 143 0.00% 0.01% 0.00% 0 Spanning Tree
159 318 133765 2 0.00% 0.00% 0.00% 0 Spanning Tree St
160 0 1 0 0.00% 0.00% 0.00% 0 CEF RF HULC Conv
161 0 3 0 0.00% 0.00% 0.00% 0 XDR mcast
162 3068 134122 22 0.00% 0.00% 0.00% 0 CEF background p
163 0 1 0 0.00% 0.00% 0.00% 0 IP IRDP
164 0 1 0 0.00% 0.00% 0.00% 0 IPC LC Message H
165 0 1 0 0.00% 0.00% 0.00% 0 XDR RP Ping Back
166 308 66882 4 0.00% 0.00% 0.00% 0 XDR RP backgroun
167 0 1 0 0.00% 0.00% 0.00% 0 XDR RP Test Back
168 0 3 0 0.00% 0.00% 0.00% 0 MDFS LC Process
169 0 1 0 0.00% 0.00% 0.00% 0 Routemap RP IPC
170 254 100317 2 0.00% 0.00% 0.00% 0 Cluster L2
171 7400 800089 9 0.00% 0.00% 0.00% 0 Cluster RARP
172 86247 1398019 61 0.00% 0.00% 0.00% 0 Cluster Base
173 575290 7401012 77 0.00% 0.02% 0.00% 0 TCP Timer
174 795848 2822113 282 0.00% 0.01% 0.00% 0 TCP Protocols
175 0 1 0 0.00% 0.00% 0.00% 0 Socket Timers
176 49 26765 1 0.00% 0.00% 0.00% 0 HTTP CORE
177 0 1 0 0.00% 0.00% 0.00% 0 RARP Input
178 1105 560 1973 0.00% 0.00% 0.00% 0 L2MM
179 3869 63652 60 0.00% 0.00% 0.00% 0 MRD
180 171135 757070 226 0.00% 0.00% 0.00% 0 IGMPSN
181 0 1 0 0.00% 0.00% 0.00% 0 IGMPQR
182 0 2 0 0.00% 0.00% 0.00% 0 L2TRACE SERVER
183 25 552 45 0.00% 0.00% 0.00% 0 MLDSN L2MCM
184 0 1 0 0.00% 0.00% 0.00% 0 MRD
185 0 1 0 0.00% 0.00% 0.00% 0 MLD_SNOOP
186 33 4 8250 0.00% 0.00% 0.00% 0 Crypto CA
187 17060 133766 127 0.00% 0.00% 0.00% 0 IP RIB Update
188 0 1 0 0.00% 0.00% 0.00% 0 Auth-proxy AAA B
189 77 26758 2 0.00% 0.00% 0.00% 0 IP Admin SM Proc
190 6643 30360 218 0.00% 0.00% 0.00% 0 DHCPD Receive
191 0 1 0 0.00% 0.00% 0.00% 0 Crypto PKI-CRL
192 192622 69704531 2 0.00% 0.00% 0.00% 0 MDFS RP process
193 3139516 16038 195754 0.00% 0.00% 0.00% 0 crypto sw pk pro
194 0 2 0 0.00% 0.00% 0.00% 0 AAA Cached Serve
195 0 2 0 0.00% 0.00% 0.00% 0 LOCAL AAA
196 0 2 0 0.00% 0.00% 0.00% 0 TPLUS
197 0 1 0 0.00% 0.00% 0.00% 0 Crypto SSL
198 0 9 0 0.00% 0.00% 0.00% 0 VTP Trap Process
199 0 2 0 0.00% 0.00% 0.00% 0 VTPMIB EDIT BUFF
200 0 2 0 0.00% 0.00% 0.00% 0 DHCP Security He
201 0 1 0 0.00% 0.00% 0.00% 0 HCD Process
202 0 1 0 0.00% 0.00% 0.00% 0 HRPC cable diagn
203 0 2 0 0.00% 0.00% 0.00% 0 DiagCard2/-1
204 99081 22078422 4 0.00% 0.00% 0.00% 0 PM Callback
205 8 4 2000 0.00% 0.00% 0.00% 0 VLAN Manager
206 721 4458 161 0.00% 0.00% 0.00% 0 SSH Event handle
207 2880 640421 4 0.00% 0.00% 0.00% 0 dhcp snooping sw
208 0 3 0 0.00% 0.00% 0.00% 0 RADIUS TEST CMD
209 0 2 0 0.00% 0.00% 0.00% 0 AAA SEND STOP EV
210 0 1 0 0.00% 0.00% 0.00% 0 Syslog Traps
211 0 2 0 0.00% 0.00% 0.00% 0 STP FAST TRANSIT
212 0 2 0 0.00% 0.00% 0.00% 0 CSRT RAPID TRANS
213 146968 1636298 89 0.00% 0.00% 0.00% 0 OSPF Hello
214 1181128 10068662 117 0.15% 0.01% 0.00% 0 CEF: IPv4 proces
215 9 62 145 0.00% 0.00% 0.00% 0 ADJ background
216 99400 133843 742 0.00% 0.00% 0.00% 0 IP Background
217 6667 999844 6 0.00% 0.00% 0.00% 0 Cluster Cmdr
218 71807 7896048 9 0.00% 0.00% 0.00% 0 OSPF Router 1
219 50 49 1020 0.00% 0.00% 0.00% 0 SpanTree Helper
220 0 39 0 0.00% 0.00% 0.00% 0 hulc cfg mgr mas
221 0 2 0 0.00% 0.00% 0.00% 0 CMDR VQP Proxy
222 0 4 0 0.00% 0.00% 0.00% 0 SNMP Timers
223 0 1 0 0.00% 0.00% 0.00% 0 reqRespRedirecti
224 0 5 0 0.00% 0.00% 0.00% 0 CommStringConfig
225 819319 2833330 289 0.00% 0.00% 0.00% 0 IP SNMP
226 168839 1424517 118 0.00% 0.00% 0.00% 0 PDU DISPATCHER
227 770231 1424776 540 0.00% 0.00% 0.00% 0 SNMP ENGINE
228 0 1 0 0.00% 0.00% 0.00% 0 SNMP ConfCopyPro
229 0 2 0 0.00% 0.00% 0.00% 0 SNMP Traps
230 0 1 0 0.00% 0.00% 0.00% 0 VQP client recei
231 0 1 0 0.00% 0.00% 0.00% 0 VQP general
232 30922 172 179779 0.00% 0.05% 0.15% 0 hulc running con
233 1064 9 118222 0.00% 0.00% 0.00% 2 SSH Process
235 1065 15 71000 0.00% 0.00% 0.00% 4 SSH Process
236 1954 667 2929 0.31% 0.14% 0.17% 5 SSH Process
09-05-2010 12:52 PM
Hello,
This is a document that may be of interest to you:
http://www.cisco.com/en/US/products/hw/switches/ps5023/products_tech_note09186a00807213f5.shtml
Can you also have a look at the show processes cpu sorted output? Your output is currently sorted by PID, not by load. If the CPU load is being caused by a process consuming a lot of resources, that process should be easy to identify in the sorted output. It may help you narrow down the search (for example, excessive BPDUs received, lots of IGMP traffic, or perhaps a routing protocol issue).
Alternatively, try the show processes cpu sorted 1min and show processes cpu sorted 5min commands to see the output sorted according to longer-period loads.
Best regards,
Peter
09-05-2010 01:14 PM
This is what doesn't make sense... if you add up the total CPU usage reported, it's less than 3%... but it's still showing 60%+, and it does "feel" like something is slowing this device way down, judging by the delay in direct ping responses and the lag when typing over SSH. See below:
#show proc cpu sorted 5sec
CPU utilization for five seconds: 63%/0%; one minute: 66%; five minutes: 65%
PID Runtime(ms) Invoked uSecs 5Sec 1Min 5Min TTY Process
155 23862737 71198161 335 0.78% 0.75% 0.79% 0 IP Input
236 1140 256 4453 0.31% 1.36% 0.33% 5 SSH Process
105 6453039 157621546 40 0.15% 0.24% 0.21% 0 Hulc LED Process
43 2615320 341049414 7 0.15% 0.16% 0.15% 0 Fifo Error Detec
57 118440 193952074 0 0.15% 0.01% 0.00% 0 HLFM address ret
75 2864311 7785216 367 0.15% 0.15% 0.15% 0 hpm counter proc
6 0 2 0 0.00% 0.00% 0.00% 0 Timers
5 0 12 0 0.00% 0.00% 0.00% 0 Pool Manager
7 3325428 17560695 189 0.00% 0.06% 0.04% 0 ARP Input
4 6823244 860848 7926 0.00% 0.07% 0.05% 0 Check heaps
11 8 5 1600 0.00% 0.00% 0.00% 0 Entity MIB API
8 0 1 0 0.00% 0.00% 0.00% 0 AAA_SERVER_DEADT
9 0 2 0 0.00% 0.00% 0.00% 0 AAA high-capacit
14 0 1 0 0.00% 0.00% 0.00% 0 IPC Zone Manager
10 0 1 0 0.00% 0.00% 0.00% 0 Policy Manager
16 52647 7785263 6 0.00% 0.00% 0.00% 0 IPC Deferred Por
******** ITEMS BELOW HERE ARE ALL 0.00% ********
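The mismatch between the header total and the per-process figures can be checked mechanically. In the header's "63%/0%", the first number is total CPU utilization and the number after the slash is the portion spent at interrupt level; load that is not attributed to any scheduled process widens the gap between that total and the sum of the process rows. Below is a minimal sketch of such a check in Python; `process_cpu_sum` is a hypothetical helper name, and the sample rows are abridged from the sorted output above:

```python
import re

def process_cpu_sum(output):
    """Sum the 5Sec column over all per-process rows of `show processes cpu`.

    A row looks like: PID Runtime(ms) Invoked uSecs 5Sec 1Min 5Min TTY Process.
    Header and banner lines don't match the pattern and are skipped.
    """
    total = 0.0
    for line in output.splitlines():
        # Four integer columns, then three percentage columns; capture 5Sec.
        m = re.match(r"\s*\d+\s+\d+\s+\d+\s+\d+\s+([\d.]+)%\s+[\d.]+%\s+[\d.]+%", line)
        if m:
            total += float(m.group(1))
    return round(total, 2)

sample = """\
155 23862737 71198161 335 0.78% 0.75% 0.79% 0 IP Input
236 1140 256 4453 0.31% 1.36% 0.33% 5 SSH Process
105 6453039 157621546 40 0.15% 0.24% 0.21% 0 Hulc LED Process
"""
print(process_cpu_sum(sample))  # 1.24
```

Run against the full paste above, the sum stays under 3% while the header claims 63%, which is exactly the discrepancy being discussed.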
09-05-2010 01:32 PM
Hello,
Do the 1min and 5min sorted results provide similar output?
Also have a look at the show platform tcam utilization output to see whether any of the TCAM regions is maxing out its allotted TCAM space (compare the Max Masks/Values column against the Used Masks/values column and see whether any usage is close to the maximum).
Have you had a look at the URL I referenced in my first reply? It describes a number of possible causes of high CPU load. Do any of those scenarios seem probable in your situation?
Best regards,
Peter
09-05-2010 01:51 PM
Sorting by 1min and 5min on the CPU utilization shows similar results -- all totals are less than 3%, yet it's showing 60%+ utilization.
Also, on TCAM all looks good:
#show platform tcam utilization
CAM Utilization for ASIC# 0
                                              Max           Used
                                          Masks/Values  Masks/values
 Unicast mac addresses:                     784/6272       54/373
 IPv4 IGMP groups + multicast routes:       144/1152        7/29
 IPv4 unicast directly-connected routes:    784/6272       54/373
 IPv4 unicast indirectly-connected routes:  272/2176       24/133
 IPv4 policy based routing aces:              0/0           0/0
 IPv4 qos aces:                             512/512         6/6
 IPv4 security aces:                       1024/1024       23/23
Note: Allocation of TCAM entries per feature uses a complex algorithm. The above information is meant to provide an abstract view of the current TCAM utilization.
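The "looks good" judgment here is just comparing Used values against Max values for each region. A small sketch of that check, with `tcam_headroom` as a hypothetical helper and the figures transcribed from the output above:

```python
def tcam_headroom(rows, threshold=0.9):
    """Flag TCAM regions whose value usage is at or above `threshold` of max.

    `rows` maps region name -> ((max_masks, max_values), (used_masks, used_values)),
    as transcribed from `show platform tcam utilization`.
    """
    flagged = []
    for name, ((_, max_values), (_, used_values)) in rows.items():
        # Skip unallocated regions (max of 0) to avoid dividing by zero.
        if max_values and used_values / max_values >= threshold:
            flagged.append(name)
    return flagged

rows = {
    "Unicast mac addresses": ((784, 6272), (54, 373)),
    "IPv4 unicast indirectly-connected routes": ((272, 2176), (24, 133)),
    "IPv4 qos aces": ((512, 512), (6, 6)),
    "IPv4 security aces": ((1024, 1024), (23, 23)),
}
print(tcam_headroom(rows))  # [] -- nothing close to full, matching the output above
```

A region near its maximum would force the switch to punt the overflow to the CPU, which is why TCAM exhaustion is on the list of high-CPU causes; here every region has ample headroom.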
I'm reviewing the document you linked to now. Will post followup on that shortly.
09-05-2010 02:11 PM
I reviewed the document you posted, and the only thing that seems like it might apply is the excessive ARP problem mentioned. Looking at the ARP table, there are 401 entries, and 80 of them are Incomplete.
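Counting those Incomplete entries is easy to script if you capture the ARP table periodically. A minimal sketch (the `arp_summary` helper name and the sample addresses below are made up for illustration; the layout follows the usual `show ip arp` columns):

```python
def arp_summary(show_ip_arp):
    """Count total and Incomplete entries in `show ip arp` output.

    Incomplete entries mean the switch is ARPing for hosts that never answer;
    a large share of them can point at scanning traffic or traffic destined
    for dead hosts.
    """
    total = incomplete = 0
    for line in show_ip_arp.splitlines():
        if not line.startswith("Internet"):
            continue  # skip the header and any non-entry lines
        total += 1
        if "Incomplete" in line:
            incomplete += 1
    return total, incomplete

sample = """\
Protocol  Address          Age (min)  Hardware Addr   Type   Interface
Internet  10.0.0.1                0   001a.2b3c.4d5e  ARPA   Vlan10
Internet  10.0.0.23               -   Incomplete      ARPA
Internet  10.0.0.42               5   001a.2b3c.4d60  ARPA   Vlan10
"""
print(arp_summary(sample))  # (3, 1)
```

Tracking the ratio over time (here 80 of 401, about 20%) shows whether the Incomplete count is stable or growing along with the CPU load.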
I put a packet sniffer on the largest VLAN segment, and I'm seeing ARP requests going by at about 4-6 per second. That seems like a lot, but there are probably 100 hosts on that particular network segment, so maybe not. Most of them are not broadcast.
-Chris
09-05-2010 02:27 PM
Hello Chris,
Seeing 4-6 ARP packets per second on a network of that size should not be causing such a high CPU load.
I would like to ask: is it possible for you to reload your 3750 switch? Your IOS is quite dated, and it is possible you are hitting a software issue. If the CPU load decreases significantly after the reload, then the problem is probably in your IOS version (which you should probably upgrade in any case).
Also, are you certain there is no Layer 2 loop in your network? Looping BPDUs, IP broadcasts, and similar traffic frequently result in increased CPU load. The show controllers utilization command can help you pinpoint an interface with unusually high load, which would be indicative of, among other things, a possible Layer 2 loop. The show ip traffic command is also very helpful in determining whether the CPU is burdened with an excessive number of IP packets that require special handling.
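If you collect that output from several switches, ranking ports by utilization makes a loop stand out quickly. A rough sketch, assuming a simple three-column layout of port name and integer receive/transmit percentages (the `busiest_ports` helper and the sample figures are illustrative, not taken from this switch):

```python
def busiest_ports(output, top=3):
    """Rank ports by receive utilization from `show controllers utilization`-style text.

    A port suddenly sitting near 100% receive utilization on an access port
    is a classic symptom of a Layer 2 loop reflecting traffic back.
    """
    ports = []
    for line in output.splitlines():
        parts = line.split()
        # Keep only data rows: exactly three fields, starting with a port name.
        if len(parts) == 3 and parts[0][:2] in ("Gi", "Fa", "Po"):
            ports.append((parts[0], int(parts[1]), int(parts[2])))
    ports.sort(key=lambda p: p[1], reverse=True)
    return ports[:top]

sample = """\
Port       Receive Utilization  Transmit Utilization
Gi0/1      2                    1
Gi0/2      97                   96
Gi0/3      0                    0
"""
print(busiest_ports(sample, top=1))  # [('Gi0/2', 97, 96)]
```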
Best regards,
Peter
09-05-2010 08:20 PM
Well, the CPU load just kept creeping up. It hit 90% sustained, so I started shutting ports down. The problem didn't go away until I shut down a VLAN that is the largest segment of the network (it spans 5 ports with about 75 endpoint devices). Not big by any means, but as soon as I shut that VLAN down the CPU load dropped to 5%. I then brought everything (including that VLAN) back up, and it has stayed at 5% since.
I think there is a STP problem going on here. I have checked the network for physical network loops, and there are none.
There are a total of 5 switches on this network segment that would participate in STP. I'm going to pull out my trusty CCNA/CCNP books and brush up on STP. I haven't configured -- or even thought about -- STP in at least a couple of years. It usually just works.
I'll post what I finally do to resolve this.
-Chris