
Help! PIX high CPU utilization

josephqiu
Level 1

My PIX is now running at high CPU utilization (90%); normally it's just 5-7%. The problem started yesterday, but no changes have been made for a couple of months. The PIX stopped sending syslog after the CPU utilization increased, telnet to the PIX also fails, and the VPNs with other sites are down. However, the PIX is still forwarding and NATing traffic to and from the Internet and DMZ. The "who" command shows no users logged in.

Can anyone PLEASE suggest what the problem might be? The "show process" command didn't tell me a lot.

Thanks!

9 Replies

getmedrew
Level 1

Can you post the "show process" output?

Show Process (part 1)

PC SP STATE Runtime SBASE Stack Process
Hsi 001e83d9 00a65a04 0054e008 2370 00a64a7c 3420/4096 arp_timer
Lrd 001ed55d 00b28c2c 0054e4d0 50 00b27cb4 3760/4096 FragDBGC
Lwe 00119bbf 00b972cc 00551768 0 00b96464 3688/4096 dbgtrace
Lwe 003dab25 00b9945c 0054e4d0 2541810 00b97514 6528/8192 Logger
Hsi 003deb7d 00b9c554 0054e008 110 00b9a5dc 7708/8192 tcp_fast
Hsi 003dea1d 00b9e604 0054e008 40 00b9c68c 7500/8192 tcp_slow
Lrd 002f8891 02a7c474 0054e4d0 50 02a7b4ec 3740/4096 xlate clean
Lrd 002f879f 02a7d514 0054e4d0 0 02a7c59c 3744/4096 uxlate clean
Mwe 002efa7f 02ced8e4 0054e008 0 02ceb94c 7740/8192 tcp_intercept_timer_process
Lrd 0043016d 02d9813c 0054e4d0 10 02d971b4 3768/4096 route_process
Hsi 002e0c1c 02d991cc 0054e008 15480 02d98264 2624/4096 PIX Garbage Collector
Hwe 002141c9 02da2efc 0054e008 520 02d9ef94 14512/16384 isakmp_time_keeper
Lrd 002de99c 02dbc9d4 0054e4d0 0 02dbba4c 3816/4096 perfmon
Mwe 0020ba01 02de6e04 0054e008 0 02de4e8c 5264/8192 IPsec timer handler
Hwe 0039164b 02dfb864 00569030 110 02df991c 6860/8192 qos_metric_daemon
Mwe 0025d61d 02e123fc 0054e008 10 02e11c94 1296/2048 IP Background
Lwe 002f0582 02ec528c 00564348 40 02ec4414 1596/4096 pix/trace
Lwe 002f079e 02ec633c 00564a78 0 02ec54c4 3704/4096 pix/tconsole
H* 0011f5b7 0009ff2c 0054dff0 14060 02ed4ab4 12720/16384 ci/console
Hwe 00429b02 02eda2cc 005c3308 540 02ed9394 3376/4096 lu_ctl
Csi 002e94bb 02edb39c 0054e008 50 02eda444 3368/4096 update_cpu_usage
Hwe 002d64a1 02f7f234 0052d3b8 0 02f7b3ac 15884/16384 uauth_in
Hwe 003dd66d 02f81334 00b4b798 0 02f7f45c 7896/8192 uauth_thread
Hwe 003f326a 02f82484 00546f38 0 02f8150c 3928/4096 udp_timer
Hsi 001e0092 02f84134 0054e008 0 02f831bc 3760/4096 557mcfix
Crd 001e0047 02f851f4 0054e480 770624468 02f8426c 3628/4096 557poll
Lrd 001e00fd 02f86294 0054e4d0 30 02f8531c 2344/4096 557timer
Cwe 001e1c71 03288314 00741368 33014340 0328641c 4968/8192 pix/intf0
Mwe 003f2fda 03289404 00b93fc0 0 032884cc 3896/4096 riprx/0
Mrd 0039a8c9 0328a514 0054e040 0 0328959c 3760/4096 riptx/0
Cwe 001e1c71 0338c64c 007b68d8 77550360 0338a754 4968/8192 pix/intf1
Mwe 003f2fda 0338d75c 00b93f78 0 0338c824 3896/4096 riprx/1
Mrd 0039a8c9 0338e86c 0054e040 0 0338d8f4 3760/4096 riptx/1
Cwe 001e1c71 034909a4 0082be48 4386310 0348eaac 5944/8192 pix/intf2
Mwe 003f2fda 03491ab4 00b93f30 0 03490b7c 3896/4096 riprx/2
Mrd 0039a8c9 03492bc4 0054e040 0 03491c4c 3760/4096 riptx/2
Cwe 001e1c71 03594cfc 008a13b8 1344170 03592e04 5800/8192 pix/intf3
Mwe 003f2fda 03595e0c 00b93ee8 0 03594ed4 3896/4096 riprx/3
Mrd 0039a8c9 03596f1c 0054e040 0 03595fa4 3788/4096 riptx/3
Cwe 001e1c71 03699054 00916928 7705270 0369715c 5524/8192 pix/intf4
Mwe 003f2fda 0369a164 00b93ea0 0 0369922c 3896/4096 riprx/4
Mrd 0039a8c9 0369b274 0054e040 0 0369a2fc 3800/4096 riptx/4
Cwe 001e1c71 0379d3ac 0098be98 9520 0379b4b4 7468/8192 pix/intf5
Mwe 003f2fda 0379e4bc 00b93e58 0 0379d584 3896/4096 riprx/5
Mrd 0039a8c9 0379f5cc 0054e040 0 0379e654 3760/4096 riptx/5
Hsi 0042a871 037a17e4 0054e008 10 037a086c 3488/4096 lu_xmit_timer
Hwe 0042962d 037a2884 0054a6e8 10320 037a191c 2504/4096 lu_rx

Show process (part 2)

PC SP STATE Runtime SBASE Stack Process
Hwe 001ae919 03817994 00555c58 520 03816a2c 2088/4096 fover_thread
Hwe 0011f5b7 038186ec 004f8c20 109030 03817a44 3040/4096 fover_rx
Hwe 001b1691 038199d4 005562d4 12440 03818a5c 3648/4096 fover_tx
Hwe 001aeb44 0381a9ec 005562e0 20 03819a74 1724/4096 fover_rep
Lwe 001aecfd 0381ba14 005562e8 860 0381aa8c 3356/4096 fover_lu_rep
Hwe 001b1c72 0381fa1c 005562f0 65410 0381baa4 12472/16384 fover_parse
Mwe 003f2fda 038f8ce4 00b93ca8 0 038f6dbc 7372/8192 radius_rcvauth
Mwe 003f2fda 038f9d94 00b93c60 0 038f8e6c 3548/4096 radius_rcvacct
Mwe 00392dd2 038fae94 0054e040 161960 038f9f1c 3372/4096 radius_snd
Hwe 003dd66d 038fbe94 00b4b7b8 578560 038fafcc 3372/4096 websns_rcv_tcp
Hwe 003f2fda 038fcfa4 00b93e10 0 038fc07c 3548/4096 websns_rcv_udp
Mwe 004257cc 038fe094 005beedc 1439030 038fd12c 2704/4096 websns_snd
Lrd 00427909 038ff164 0054e4d0 0 038fe1dc 3776/4096 websns_clean_cache
Mrd 00426d4c 03900204 0054e040 10 038ff28c 3112/4096 websns_keepalive
Hwe 003f2fda 0390e4f4 00b93d80 79260 0390db4c 912/4096 snmp
Hwe 003f2fda 0390f11c 00b93dc8 0 0390edd4 840/1024 snmp_ex
Hwe 003c49b5 03911db4 03912264 163930 0390ff8c 4764/8192 isakmp_receiver
Hwe 003dd901 03912664 00b36c98 0 0391241c 188/1024 listen/pfm
Hrd 002e7c51 03912fec 0054dff0 35100300 039128f4 1212/2048 listen/telnet_1
Hwe 003dd901 03913874 00b36e88 0 0391322c 1212/2048 listen/ssh_0
Hrd 002e7c51 0391422c 0054dff0 34848270 03913b34 1212/2048 listen/ssh_1
Mwe 00367a62 03ad2f0c 0054e008 100 03ad0f94 5496/8192 Crypto CA
Mrd 0025407d 0391f634 0054e040 20 0391e6bc 3164/4096 ntp
Mwe 003f2fda 03920834 00b93c18 20 0391f90c 2940/4096 ntp1
Hwe 003dd901 03920e04 00b36ba0 0 03920b5c 300/1024 listen/http1
Hsi 003e7712 03b8506c 0054e008 0 03b838e4 4556/8192 telnet/ci
Mrd 001036c8 03bf8ee4 0054e040 0 03bf6ef4 8176/8192 ssh_init
Hrd 002e7c51 03ba5b4c 0054dff0 762800 03ba5854 256/1024 listen/pfm
Hrd 002e7c51 03ba6424 0054dff0 761090 03ba5d2c 1280/2048 listen/telnet_2
Hrd 002e7c51 03ba4e9c 0054dff0 765350 03ba4ba4 256/1024 listen/pfm
Hrd 002e7c51 03b9c51c 0054dff0 753190 03b9be24 1280/2048 listen/telnet_3
Hwe 003dd901 03b9c91c 00b45230 218080 03b9c6d4 188/1024 listen/pfm
Hrd 002e7c51 03ba6d4c 0054dff0 755910 03ba6654 1280/2048 listen/telnet_4
Hrd 002e7c51 03ba71fc 0054dff0 751900 03ba6f04 256/1024 listen/pfm
Hrd 002e7c51 03ba7ad4 0054dff0 752590 03ba73dc 1280/2048 listen/telnet_5

A more user-friendly format is in the attachment.

Thanks!

A few questions for you:

Have you enabled logging? If so, is it via TCP by any chance?

Once you have enabled it, please do a "sh logging" and see what it shows; you may see the source of the high CPU usage there.

I'll keep an eye out for your response so that I can help.
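
For reference, the kind of check I have in mind would look roughly like this (a sketch assuming PIX 6.x syntax; the interface name and syslog server address are only placeholders):

show logging
! check the configured trap severity and whether messages are being queued or dropped
! in the configuration, a TCP syslog destination looks like:
!   logging host inside 192.168.1.10 tcp/1468
! whereas the default UDP transport looks like:
!   logging host inside 192.168.1.10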

Yes, logging was enabled, and I believe it's via UDP. Actually, as I stated earlier, when the CPU was running at 90%, the syslog process had stopped, and "show logging" didn't show me anything.

By the way, I noticed in the "show process" output that the telnet and ssh daemons are taking a pretty high CPU load, and it looks like all 5 telnet sessions have been taken up:

Hrd 002e7c51 03912fec 0054dff0 35100300 039128f4 1212/2048 listen/telnet_1
Hwe 003dd901 03913874 00b36e88 0 0391322c 1212/2048 listen/ssh_0
Hrd 002e7c51 0391422c 0054dff0 34848270 03913b34 1212/2048 listen/ssh_1
Hrd 002e7c51 03ba6424 0054dff0 761090 03ba5d2c 1280/2048 listen/telnet_2
Hrd 002e7c51 03ba4e9c 0054dff0 765350 03ba4ba4 256/1024 listen/pfm
Hrd 002e7c51 03b9c51c 0054dff0 753190 03b9be24 1280/2048 listen/telnet_3
Hwe 003dd901 03b9c91c 00b45230 218080 03b9c6d4 188/1024 listen/pfm
Hrd 002e7c51 03ba6d4c 0054dff0 755910 03ba6654 1280/2048 listen/telnet_4
Hrd 002e7c51 03ba71fc 0054dff0 751900 03ba6f04 256/1024 listen/pfm
Hrd 002e7c51 03ba7ad4 0054dff0 752590 03ba73dc 1280/2048 listen/telnet_5

But "who" command didn't show any users logged in. Is it possible that the telnet/ssh daemon caused high CPU load? Could this be a DoS attack?

A couple of additional questions:

Are you running logging? What level? Number of logs per second?

What size PIX?

How much bandwidth is going through it?

What does show perfmon say?

What does show interface say?

What does show blocks say?

This will help narrow it down. I have seen logging take down a 525 with only 5 Mb of traffic. Smaller PIXes are much easier to kill with logs due to their smaller memory block pools.
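
If logging does turn out to be the culprit, the usual ways to dial it back would be along these lines (illustrative PIX 6.x config only; the severity level and queue size are example values):

logging trap errors
! send only severity 3 (errors) and above, so fewer messages are generated
no logging console debugging
! console logging at a verbose level is especially expensive
logging queue 1024
! let bursts of messages be buffered rather than dropped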

Yes, logging is running: buffered and traps. Level: warnings. Peak logging rate: 2-3 per second. Is this logging rate too high?

PIX 520 with 6 interfaces. Bandwidth: 10M

Show blocks:

SIZE MAX LOW CNT
4 1600 1589 1600
80 400 276 398
256 1524 1394 1524
1550 2212 1212 1435
2560 200 192 198
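
For what it's worth, my reading of these columns (from memory, so please double-check): SIZE is the block size in bytes, MAX is the pool size, LOW is the fewest free blocks seen since boot, and CNT is what is currently free. The 1550-byte row is the Ethernet packet-buffer pool:

1550 2212 1212 1435
! LOW never dropped to 0, so the PIX never actually ran out of packet buffers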

I don't have outputs for "show perf" and "show int", but I remember they all looked good.

josephqiu
Level 1

Update of the high CPU issue:

Thanks, everyone, for providing helpful information. I finally got this issue resolved earlier today, although I still have no idea what the root cause of the problem was. I will keep investigating, so please provide any further input; that will definitely help my investigation.

What I did was reboot the PIX in the downtime window. However, I don't think the rebooting itself really helped, because after each reboot (I rebooted it 3 times!) the PIX was still running at 30% CPU. Telnet and syslog were back up, but the VPNs were still down. What seemed interesting is that, as a last trial, I initiated 5 telnet sessions to the PIX, because I always felt it was the telnet and/or ssh daemon that killed the PIX. As soon as the 5 telnet sessions were created, the CPU load started to drop (I thought it would increase!!!) and stabilized at around 3%. Then the VPNs came back up, and everything is back to normal.

There are not many helpful tips on cisco.com, except for bug CSCdt35429, which is about the SSH daemon causing high CPU load. But searching on cisco.com didn't provide details about this bug.
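
If this does turn out to be the SSH daemon bug, the obvious workarounds would seem to be restricting management access and clearing hung sessions, for example (a sketch only; the subnet and session ID below are placeholders, not my real config):

ssh 10.1.1.0 255.255.255.0 inside
telnet 10.1.1.0 255.255.255.0 inside
! limit SSH/telnet to a trusted management subnet
show ssh sessions
ssh disconnect 0
! drop a hung SSH session by the ID reported in "show ssh sessions"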

Thanks!
