02-25-2010 05:50 AM - edited 03-11-2019 10:14 AM
Has anyone seen a problem with the ASA 5520 where the CPU runs very high? We are sending 60Mbps of VoIP traffic through the devices and the CPU is close to 60%. We plan on going as high as 150Mbps, which seems like a disaster waiting to happen.
Thank you in advance for any help you can provide.
Matthew
Here is the output from show processes cpu-usage
vasa# show processes cpu-usage
PC Thread 5Sec 1Min 5Min Process
08054f7c c91afc90 0.0% 0.0% 0.0% block_diag
081ab744 c91af8a0 60.0% 57.6% 57.4% Dispatch Unit
083af4d5 c91af4b0 0.0% 0.0% 0.0% CF OIR
08a43050 c91af2b8 0.0% 0.0% 0.0% lina_int
08068a26 c91aecd0 0.0% 0.0% 0.0% Reload Control Thread
08070c86 c91aead8 0.0% 0.0% 0.0% aaa
08c53b1d c91ae8e0 0.0% 0.0% 0.0% UserFromCert Thread
080a1b36 c91ae4f0 0.0% 0.0% 0.0% CMGR Server Process
080a2045 c91ae2f8 0.0% 0.0% 0.0% CMGR Timer Process
08c310c8 c91ae100 0.0% 0.0% 0.0% snmp
081aab6c c91ad920 0.0% 0.0% 0.0% dbgtrace
05-23-2013 11:47 AM
1) Is it safe to run capture on such load?
If you want to determine and fix the issue: yes, but it must be done carefully.
2) If it is Ok, how to run it and diag?
http://www.cisco.com/en/US/products/ps6120/products_tech_note09186a0080a9edd6.shtml
That's the link for creating a capture. You will first create one on the inside interface matching all traffic, let it run for a few seconds, and then check it.
Note: use the headers-only option for the capture, which keeps the capture overhead down.
Regards
05-23-2013 09:41 PM
So, I need to run
access-list asdm_cap_selector_inside extended permit ip any any
capture capin headers-only interface inside access-list asdm_cap_selector_inside
and after a few seconds
no capture capin headers-only interface inside access-list asdm_cap_selector_inside
Correct?
And after that:
show capture capin
How many seconds should I let it run?
I'm afraid of losing connectivity.
05-23-2013 09:56 PM
Hello,
Before removing the capture, you will need to look at what you got and make sure you see a relevant pattern.
If you remove it first, you will lose the captured data.
How many seconds? Well, if your network is still at 60% it means we should be able to see the traffic within a few seconds (5-15).
Remember: you should have a baseline of the CPU usage on all of your network devices for a better approach in these cases.
You could even download the capture to your desktop so you can analyze it more deeply.
Regards,
05-23-2013 10:29 PM
In my example, is it right for ALL traffic?
I'll run it over the weekend. I don't have experience with this, and I want to avoid breaking something.
So, what in the captured data needs my attention? And what other captures should I run?
If the ASAs run as a failover pair, will a heavy capture break both devices? Or will it break one device and fail over to the other?
05-24-2013 02:32 PM
Hello Sergey,
Yes, it is.
Make sure that over the weekend you have the same CPU usage; otherwise you will not capture the traffic needed.
The capture must be taken while the ASA is having the CPU problems.
What to look for in the capture: basically any notable number of connections per host.
It's going to run for just 1-15 seconds while the CPU is at 60%; it will not break anything.
Regards,
Julio
Remember to rate all of the helpful posts Sergey
05-24-2013 06:24 PM
OK, CPU usage is lower right now, so I ran:
iland-ASA-5520# sh pro cpu-u n
PC Thread 5Sec 1Min 5Min Process
081a86c4 6c1afa08 26.9% 26.2% 26.7% Dispatch Unit
083ccbfc 6c1a17a0 0.1% 0.1% 0.1% fover_health_monitoring_thread
08bde96c 6c18c8f0 0.1% 0.1% 0.6% ssh
iland-ASA-5520# capture c headers-only interface inside access-list captraffic
iland-ASA-5520# sh pro cpu-u n
PC Thread 5Sec 1Min 5Min Process
081a86c4 6c1afa08 35.3% 27.1% 26.8% Dispatch Unit
083ccbfc 6c1a17a0 0.1% 0.1% 0.1% fover_health_monitoring_thread
08bde96c 6c18c8f0 0.1% 0.1% 0.5% ssh
iland-ASA-5520# sh clo
iland-ASA-5520# sh clock
21:51:25.086 EDT Fri May 24 2013
iland-ASA-5520# sh pro cpu-u n
PC Thread 5Sec 1Min 5Min Process
081a86c4 6c1afa08 35.9% 29.4% 27.4% Dispatch Unit
083ccbfc 6c1a17a0 0.1% 0.1% 0.1% fover_health_monitoring_thread
08bde96c 6c18c8f0 0.0% 0.1% 0.5% ssh
iland-ASA-5520#
iland-ASA-5520# sh clock
21:51:47.946 EDT Fri May 24 2013
iland-ASA-5520# no capture c headers-only interface inside access-list captraf$
iland-ASA-5520# sh pro cpu-u n
PC Thread 5Sec 1Min 5Min Process
081a86c4 6c1afa08 30.9% 30.0% 27.6% Dispatch Unit
083ccbfc 6c1a17a0 0.1% 0.1% 0.1% fover_health_monitoring_thread
08bde96c 6c18c8f0 0.6% 0.2% 0.5% ssh
iland-ASA-5520#
CPU usage increased by about 8-10%; will it increase by a similar amount at the 60-70% usage we see on workdays?
Next, I captured for less than 2 seconds. If it stopped because of a small buffer, do I need to increase it? If yes, how?
iland-ASA-5520# sh capture c
7290 packets captured
1: 20:49:42.370921 IP.55942 > 10.5.0.86.443: . ack 3426703542 win 555
...
7290: 20:49:43.519809 10.5.1.6.5645 > IP.4728: R 0:0(0) ack 3319581259 win 0
As for the results, I see most connections are TCP between a public IP and a private one on port 443: 4215 of the 7290. There are also 2204 connections to an application port and 422 to public port 80.
I have 134 UDP connections from port 4500 to port 4500; I don't know what those are. The rest are small counts on application ports.
All connections go through NAT. Could that be my problem? What can I do next?
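Incidentally, UDP 4500 on both sides is the well-known port for IPsec NAT Traversal (NAT-T), so those 134 flows are most likely VPN traffic rather than a problem in themselves. As for what to check: tallying the capture by destination is usually the fastest first pass. A minimal sketch in Python (not an official tool; the line format is assumed from the `show capture` output pasted in this thread):

```python
import re
from collections import Counter

# Assumed line format, taken from the output pasted in this thread:
#   1: 20:49:42.370921 10.5.0.14.55942 > 10.5.0.86.443: . ack 3426703542 win 555
LINE = re.compile(r"^\s*\d+:\s+\S+\s+(\S+)\.(\d+)\s+>\s+(\S+)\.(\d+):")

def top_destinations(lines, n=5):
    """Return the n most common (dst_host, dst_port) pairs in a capture dump."""
    counts = Counter()
    for line in lines:
        m = LINE.match(line)
        if m:
            counts[(m.group(3), int(m.group(4)))] += 1
    return counts.most_common(n)

# Hypothetical sample lines in the same format as this thread's output
sample = [
    "1: 20:49:42.370921 10.5.0.14.55942 > 10.5.0.86.443: . ack 3426703542 win 555",
    "2: 20:49:42.371000 10.5.0.15.55943 > 10.5.0.86.443: . ack 1 win 555",
    "3: 20:49:42.371100 10.5.1.6.5645 > 10.5.0.20.80: . ack 1 win 0",
]
print(top_destinations(sample))
```

Run it against the saved text of `show capture c`; a single destination dominating the counts is the host to look at first.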
05-24-2013 06:30 PM
Hi Sergey,
It will increase the CPU by around that percentage.
To change the buffer size you do it in the capture command,
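For example, the buffer size (in bytes) can be given on the same capture command; a sketch using an example value of 10 MB (the default capture buffer is 512 KB):

```
capture c buffer 10000000 headers-only interface inside access-list captraffic
show capture
```

`show capture` with no name listed shows each capture with its buffer size and how many bytes are currently in use.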
Again, the CPU is at 26% right now, so you have now seen how the network behaves at that percentage.
You will then see on Monday whether there is any difference.
Again, it is hard to troubleshoot these problems via messages; there is nothing like being on the console taking the right outputs and logs.
Hope you have luck next week
Regards
05-26-2013 02:30 AM
Hi Julio,
I'll increase the buffer and check it on business days.
Next, what findings can we get from this? I think the traffic is legitimate (no virus traffic) and application related.
I think I'll get the same picture on the following days.
I talked to my boss; he didn't authorize me. So this is the only approach I have.
And, Julio, can you take a look here: https://supportforums.cisco.com/message/3946884?
05-26-2013 12:53 PM
Hello Sergey,
Yes, I know running this kind of capture might affect performance, so management would not authorize it.
If you get the same results, then this is expected (I mean, 65% would be the baseline).
Final recommendations:
-Keep an eye on the performance of the ASA so you can determine whether this is expected or not; right now we do not know if this is even a CPU case
-Keep checking the interface error counters to determine whether everything is going fine
Regards
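On the second recommendation, a quick way to pull just the error counters (the interface name below is an example; substitute your own):

```
show interface GigabitEthernet0/1 | include errors|overrun|no buffer
```

Input errors, overruns, and no-buffer drops climbing alongside the CPU would suggest the box is being overdriven rather than any single flow being at fault.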
05-28-2013 09:54 AM
Hi, I've run the capture on a work day:
i-ASA-5520# sh clock
12:54:57.869 EDT Tue May 28 2013
i-ASA-5520# sh pro cpu-u n
PC Thread 5Sec 1Min 5Min Process
081a8999 6c1afa08 79.4% 68.4% 67.6% Dispatch Unit
083ccbfc 6c1a17a0 0.1% 0.1% 0.1% fover_health_monitoring_thread
08bde96c 6c18daa8 1.0% 0.2% 2.4% ssh
i-ASA-5520# sh pro cpu-u n
PC Thread 5Sec 1Min 5Min Process
081a86c4 6c1afa08 96.4% 70.7% 68.1% Dispatch Unit
08bde96c 6c18daa8 0.3% 0.2% 2.3% ssh
i-ASA-5520# sh pro cpu-u n
PC Thread 5Sec 1Min 5Min Process
081a88af 6c1afa08 96.4% 70.7% 68.1% Dispatch Unit
08bde96c 6c18daa8 0.3% 0.2% 2.3% ssh
i-ASA-5520# sh pro cpu-u n
PC Thread 5Sec 1Min 5Min Process
081a88af 6c1afa08 96.4% 70.7% 68.1% Dispatch Unit
08bde96c 6c18daa8 0.3% 0.2% 2.3% ssh
i-ASA-5520# sh cloc
12:55:06.839 EDT Tue May 28 2013
i-ASA-5520# sh cloc
12:55:08.179 EDT Tue May 28 2013
i-ASA-5520# sh cloc
12:55:09.049 EDT Tue May 28 2013
i-ASA-5520# sh cloc
12:55:09.779 EDT Tue May 28 2013
i-ASA-5520# no capture c headers-only interface inside access-list captraf$
i-ASA-5520# sh pro cpu-u n
PC Thread 5Sec 1Min 5Min Process
081a86c4 6c1afa08 66.6% 73.8% 69.0% Dispatch Unit
083ccbfc 6c1a17a0 0.1% 0.1% 0.1% fover_health_monitoring_thread
08bde96c 6c18daa8 1.2% 0.4% 2.2% ssh
i-ASA-5520#
I see CPU usage jump to 96%. I got a big output, and all lines are between some private and public IPs. So:
1) What should I check in the capture?
2) How do I copy/redirect the capture result via ssh or tftp?
I captured:
475383 packets captured
1: 12:54:55.319670 10.1.0.2.22 > 10.5.0.14.36003: P 2217744027:2217744091(64) ack 949026555 win 32768
....
to
...
475381: 12:55:03.514484 IP.48838 > 10.5.1.66.443: . ack 1650626495 win 255
475382: 12:55:03.514499 IP.50352 > 10.5.1.66.443: . ack 3701255473 win 253
475383: 12:55:03.514499 IP.21556 > 10.5.1.26.443: . ack 2558795455 win 258
475383 packets shown
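On question 2: the capture can be copied off the box in pcap format; a sketch, assuming a reachable TFTP server (the 10.5.0.14 address is just an example):

```
copy /pcap capture:c tftp://10.5.0.14/c.pcap
```

If HTTPS access to the ASA is enabled, the same pcap can usually also be downloaded in a browser at https://<asa-ip>/admin/capture/c/pcap and opened in Wireshark.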
05-31-2013 03:11 PM
Hello everyone, I had a question about this firewall staying at too high a CPU for its bandwidth - it is actually the same device that Sergey is working on. Hi Sergey.
Normally during working hours the total traffic on the ASA is between 60Mb/s and 80Mb/s, with spikes to around 120Mb/s.
To add, the ASA stays at high CPU even when the traffic drops during off hours, currently at 35-40% or 45-55% CPU for prolonged spikes, with these kinds of numbers in the sh interface stats:
outside:
5 minute input rate 3206 pkts/sec, 1016432 bytes/sec
and inside:
5 minute input rate 2938 pkts/sec, 380279 bytes/sec
So in general I don't think the ~1.5M bytes/sec should be causing the current usage level.
ASA-5520# sh conn count
6369 in use, 24426 most used
ASA-5520# sh xlate count
1517 in use, 9204 most used
The one thing I noticed today that drove me to the forums to ask: I found this in the "show processes memory" output - it attempted to print binary and broke my PuTTY scrollback:
ASA-5520# sh processes memory
--------------------------------------------------------------
Allocs Allocated Frees Freed Process
(bytes) (bytes)
--------------------------------------------------------------
0 0 0 0 *System Main*
2635 18446088 150 22582 Init Thread
0 0 0 0 PTHREAD-559
0 0 0 0 PTHREAD-560
0 0 0 0 PTHREAD-561
50 1834 4 1098 DATAPATH-0-562
1 24 1 24 @äl
1 24 1 24 @äl
1 256 1 256 @äl
1 128 1 128 @äl
1 128 1 128 @äl
1 128 1 128 @äl
1 24 1 24 XêqðêqWw%
1 24 1 24 XêqðêqWw%
1 24 1 24 XêqðêqWw%
1 24 1 24 XêqðêqWw%
1 256 1 256 XêqðêqWw%
1 128 1 128 XêqðêqWw%
1 128 1 128 XêqðêqWw%
1 128 1 128 XêqðêqWw%
1 24 1 24 (çíqïÍ¡ÐÔ
1 24 1 24 (çíqïÍ¡ÐÔ
1 24 1 24 (çíqïÍ¡ÐÔ
1 24 1 24 (çíqïÍ¡ÐÔ
1 256 1 256 (çíqïÍ¡ÐÔ
1 128 1 128 (çíqïÍ¡ÐÔ
1 128 1 128 (çíqïÍ¡ÐÔ
1 128 1 128 (çíqïÍ¡ÐÔ
1 24 1 24 (çíqïÍ¡ÐÔ
1 24 1 24 (çíqïÍ¡ÐÔ
1 24 1 24 (çíqïÍ¡ÐÔ
1 24 1 24 (çíqïÍ¡ÐÔ
1 256 1 256 (çíqïÍ¡ÐÔ
1 128 1 128 Hȟq
1 128 1 128 Hȟq
1 128 1 128 Hȟq
1 24 1 24 (zqX+qWw%
1 24 1 24 (zqX+qWw%
1 24 1 24 (zqX+qWw%
1 24 1 24 (zqX+qWw%
1 256 1 256 (zqX+qWw%
1 128 1 128 (zqX+qWw%
1 128 1 128 (zqX+qWw%
1 128 1 128 (zqX+qWw%
1 24 1 24 E@§]
1 24 1 24
1 24 1 24
1 24 1 24
1 256 1 256
no port-object eq 238ïÍ¡ïÍîóÐÔ 128 object eq 23854
no port-object eq 238ïÍ¡ïÍîóÐÔ 128 object eq 23854
no port-object eq 238ïÍ¡ïÍîóÐÔ 128 object eq 23854
1 154 1 154 ÿ¡q
1 128 1 128 `.öq
1 128 1 128 `.öq
1 128 1 128 `.öq
1 60 1 60 `.öq
1 24 1 24 `.öq
1 24 1 24 `.öq
1 24 1 24 `.öq
1 24 1 24 `.öq
1 128 1 128 `.öq
1 128 1 128 `.öq
1 128 1 128 `.öq
1 24 1 24 `.öq
1 128 1 128 HªqÈLrWw%
1 128 1 128 HªqÈLrWw%
1 128 1 128 HªqÈLrWw%
1 60 1 60 HªqÈLrWw%
1 24 1 24 HªqÈLrWw%
1 24 1 24 HªqÈLrWw%
1 24 1 24 HªqÈLrWw%
1 24 1 24 HªqÈLrWw%
1 128 1 128 HªqÈLrWw%
1 128 1 128 HªqÈLrWw%
1 128 1 128 HªqÈLrWw%
1 24 1 24 HªqÈLrWw%
1 128 1 128
1 128 1 128
1 128 1 128
1 60 1 60
1 24 1 24
1 24 1 24
1 24 1 24
1 24 1 24
1 128 1 128
1 128 1 128
1 128 1 128
1 24 1 24
1 24 1 24
1 128 1 128 +
1 128 1 128 +
1 128 1 128 +
1 60 1 60 +
1 24 1 24 +
1 24 1 24 +
1 24 1 24 +
1 24 1 24 +
1 128 1 128 +
1 128 1 128 +
1 128 1 128 +
1 24 1 24 +
1 24 1 24 È2Ôq@Ãqqñ
1 24 1 24 È2Ôq@Ãqqñ
1 24 1 24 È2Ôq@Ãqqñ
1 24 1 24 È2Ôq@Ãqqñ
1 128 1 128 È2Ôq@Ãqqñ
1 128 1 128
1 128 1 128 È2Ôq@Ãqqñ
1 24 1 24 È2Ôq@Ãqqñ
1 256 1 256
1 154 1 154 Ø
28842751203459245106035 2886301327 3460889994696 Dispatch Unit
4 4208 0 0 RADIUS Proxy Listener
2 32 0 0 CP Server Process
0 0 0 0 pci_nt_bridge
0 0 0 0 RADIUS Proxy Time Keeper
172 201280 68 9000 rpc_server
2 152 0 0 CF OIR
7 14163 0 0 Integrity FW Task
2293 29527808 958 26363524 rtcli async executor process
0 0 0 0 lina_int
0 0 0 0 Chunk Manager
0 0 0 0 TLS Proxy Inspector
1208 1988444 1183 1972000 ci/console
1 5242924 6 16712 PIX Garbage Collector
0 0 0 0 emweb/cifs_timer
113 331602 54 12192 fover_thread
4771 76336 4771 114504 npshim_thread
381 5461 854 34413 IP Address Assign
3 8356 0 0 netfs_mount_handler
62430963 3137838648 56485141 3053178710 lu_ctl
1 24 0 0 EAPoUDP-sock
1 640 0 0 Reload Control Thread
0 0 0 0 QoS Support Module
170 28256 207291 35012280 arp_timer
36 3321396 36 3321396 update_cpu_usage
0 0 0 0 EAPoUDP
28608 7929838 33720 8129610 aaa
0 0 0 0 Client Update Task
0 0 0 0 arp_forward_thread
0 0 0 0 health_check
36 1714117 11 33190 UserFromCert Thread
0 0 32 165466 Checkheaps
0 0 0 0 Lic TMR
6 1667 2 66 Boot Message Proxy Process
0 0 126 1008976 tcp_fast
189 17112 200 1033535 NIC status poll
1258 867452 1180 238937 CMGR Server Process
0 0 29 124388 tcp_slow
0 0 0 0 CMGR Timer Process
132683 3435992 132568 3483204 Quack process
0 0 0 0 udp_timer
373707 3214453 745669 6417041 emweb/https
1055 590800 0 0 Session Manager
0 0 0 0 CTCP Timer process
64092 3219860 67857 4792396 fover_rx
371616 2972928 0 0 Timekeeper
0 0 0 0 L2TP data daemon
21107554010604038498 190973066 10322596778 fover_tx
817649 1135312216 791799 297867595 Unicorn Proxy Thread
3 2591535 0 0 uauth
0 0 0 0 L2TP mgmt daemon
402025 20197088 363736 19660888 fover_tx_2
0 0 0 0 Uauth_Proxy
0 0 19 150304 ppp_timer_thread
173465872387146067593 1569452800 84833061869 fover_ip
1384 140752 288 24696 RIP Router
325297 13424196 289910 12869468 dbgtrace
0 0 6 16712 vpnlb_timer_thread
1177 1970463 1178 1970475 fover_rep
1161563238101303716 116131778 8078250159 snmp
0 0 508 12192 RIP Send
517113 60844046 521241 38778697 IPsec message handler
21168707110760671479 191551335 10468568351 fover_parse
3873253 276114276 343 73328 IKE Receiver
0 0 202 47472 SSL
2211383 609816079 294222 42358370 CTM message handler
1 44 1 44 fover_ifc_test
39 75062 7 112 listen/ssh
0 0 0 0 SMTP
0 0 0 0 NAT security-level reconfiguration
2247728372112921627586 2033658751 109924713734 fover_health_monitoring_thread
0 0 19 150304 557mcfix
1 36 7 16748 Logger
0 0 0 0 ICMP event handler
11424 573920 10352 700636 ha_trans_ctl_tx
0 0 0 0 557statspoll
0 0 0 0 Syslog Retry Thread
0 0 0 0 Dynamic Filter VC Housekeeper
1454613957307706124 131607914 7113795052 ha_trans_data_tx
96 14988 0 0 CP Client Process
0 0 0 0 Thread Logger
0 0 0 0 IP Background
8225 3267873 4927 2522617 fover_FSM_thread
33782225 1151761030 63945473 2550570792 vpnfol_thread_msg
141232 8293068 277025 40510553 tmatch compile thread
122866981961724408638 1084307036 58617450838 lu_rx
1851875028888607962 177832614 8785878805 vpnfol_thread_timer
0 0 0 0 Crypto PKI RECV
168 8440 152 8216 lu_dynamic_sync
44 117004 2 20 vpnfol_thread_sync
0 0 0 0 netfs_vnode_reclaim
0 0 0 0 Crypto CA
0 0 0 0 IP Thread
0 0 27 99780 vpnfol_thread_unsent
0 0 0 0 CERT API
40653779 2042378490 36782116 1988193638 ARP Thread
0 0 0 0 Integrity Fw Timer Thread
0 0 0 0 uauth_urlb clean
32 16412 353 59304 icmp_thread
0 0 8 4032 vPif_stats_cleaner
0 0 16 141971 pm_timer_thread
0 0 2 336 udp_thread
0 0 250 6375 ssh/timer
3 8356 0 0 vpnlb_thread
0 0 3 8356 IKE Timekeeper
0 0 52 832 tcp_thread
2 5664 0 0 block_diag
30 908491 0 0 netfs_thread_init
23577503 5672608039 21559530 5336212605 IKE Daemon
125 14948 14185 191936 SNMP Notify Thread
217068 9108714 194133 8739370 ssh
The on-disk firmware md5sum matches the expected value from Cisco, and the ASA is not having a problem with free memory. It just occurs to me that something acting screwy in the running firmware could cause this binary print in "show processes memory" and could potentially be related to other strange problems.
Any opinions or experience on whether this binary print in "sh processes memory" could be relevant?
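For what it's worth, the corruption can at least be counted mechanically. A small sketch (a hypothetical helper, assuming the four-numeric-column layout shown above) that flags entries whose process name contains non-ASCII or non-printable bytes:

```python
# Sketch: flag suspicious process names in a saved `show processes memory`
# dump. The column layout (four numbers, then the process name) is assumed
# from the output pasted above.
def garbled_names(lines):
    bad = []
    for line in lines:
        parts = line.split(None, 4)
        if len(parts) == 5 and parts[0].isdigit():
            name = parts[4]
            if not name.isascii() or not name.isprintable():
                bad.append(name)
    return bad

# Hypothetical sample: one clean header-ish entry, one garbled entry of the
# kind shown above, one normal process entry.
dump = [
    "0 0 0 0 *System Main*",
    "1 24 1 24 \xe8\xaal",
    "2293 29527808 958 26363524 rtcli async executor process",
]
print(garbled_names(dump))
```

Comparing the count of garbled names across reboots (or against the standby unit) would at least show whether the corruption is stable or growing.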
Thank you very much for your participation so far.
06-07-2013 06:41 AM
Hi,
I had the same problem with a pair of ASA 5510s.
After many days of investigation I found that on one of my VLAN interfaces there was a VM generating 40Mbps of multicast traffic that was being dropped by the firewall.
I stopped the multicast traffic from the VM and the CPU dropped by 20%.
Thanks