02-16-2012 03:26 AM - edited 03-11-2019 03:30 PM
Hi Experts,
We have two PIXes in our environment running in active/standby failover mode. Lately the active PIX is showing 98% memory utilization and the standby 96%. At times we also experience packet loss to the IP of the active PIX, which affects access to the inside servers as well. During those periods the active PIX is not reachable via SSH or ASDM. We have tried reloading the PIX and switching the failover state, but that only gives a temporary fix. The installed memory is 128 MB (the maximum supported), so a memory upgrade is not possible. Please see the show command outputs from the PIX below. The current software version is 7.2(4).
sh memory output (PIX 1 - active)
Free memory: 4850944 bytes ( 4%)
Used memory: 129366784 bytes (96%)
------------- ----------------
Total memory: 134217728 bytes (100%)
sh memory output (PIX 2 - standby)
Free memory: 2595872 bytes ( 2%)
Used memory: 131621856 bytes (98%)
------------- ----------------
Total memory: 134217728 bytes (100%)
sh processes memory (PIX 1)
--------------------------------------------------------------
Allocs Allocated Frees Freed Process
(bytes) (bytes)
--------------------------------------------------------------
738 5813309 6 180 *System Main*
6 1020758 0 0 lu_ctl
343 941576 0 0 listen/ssh
601 216518 556 19501 fover_thread
3 52 0 0 listen/telnet
29 20623 4 4179 ci/console
88 11192 0 0 IKE Receiver
0 0 0 0 557statspoll
6 14112 0 0 Integrity FW Task
0 0 0 0 557mcfix
0 0 0 0 RADIUS Proxy Time Keeper
4 4160 0 0 RADIUS Proxy Listener
0 0 0 0 EAPoUDP
0 0 0 0 Thread Logger
0 0 0 0 RADIUS Proxy Event Daemon
1 24 0 0 EAPoUDP-sock
0 0 0 0 dbgtrace
205935 20044340 0 0 Logger
7652 738430 7790 745240 IKE Daemon
0 0 205997 20045008 SNMP Notify Thread
0 0 0 0 SMTP
0 0 0 0 IKE Timekeeper
2 228 158 1264 tcp_thread
0 0 18 1208 SSL
0 0 0 0 pm_timer_thread
0 0 0 0 udp_thread
0 0 0 0 uauth_urlb clean
4 2624 0 0 icmp_thread
91 3222 170 6124 aaa
0 0 0 0 Crypto CA
108 7384 1 136 ARP Thread
0 0 0 0 Reload Control Thread
0 0 0 0 Uauth_Proxy
0 0 0 0 Crypto PKI RECV
0 0 0 0 IP Thread
4 311638 16 74784 uauth
2889854 521365740 2684352 509833448 tmatch compile thread
0 0 0 0 lu_dynamic_sync
6 21172 0 0 Session Manager
0 0 0 0 IP Background
0 0 0 0 lu_rx
441726 474600165 441182 474795781 Dispatch Unit
0 0 0 0 ICMP event handler
226 45359 101 16869 fover_FSM_thread
4680 152280 4710 153150 ssh/timer
0 0 8 33296 NAT security-level reconfiguration
0 0 0 0 ha_trans_data_tx
0 0 0 0 block_diag
0 0 0 0 Checkheaps
0 0 2 8324 vpnlb_timer_thread
0 0 0 0 ha_trans_ctl_tx
13 802004 0 0 emweb/cifs
0 0 0 0 Client Update Task
0 0 0 0 ppp_timer_thread
0 0 0 0 fover_serial_tx
0 0 0 0 QoS Support Module
0 0 0 0 L2TP mgmt daemon
17 4496 0 0 fover_serial_rx
8 112 20 704 IP Address Assign
0 0 0 0 L2TP data daemon
20 2684 1 32 fover_health_monitoring_thread
7 624 0 0 NTP
6 44178 0 0 route_process
0 0 0 0 CTM message handler
0 0 0 0 fover_ifc_test
302 2272076 14 224 listen/https
1 65572 0 0 PIX Garbage Collector
2 321068 0 0 IPsec message handler
397023 46730984 383345 44766650 fover_parse
0 0 2 32900 qos_metric_daemon
0 0 0 0 Chunk Manager
0 0 0 0 CTCP Timer process
2123 316833 2124 316845 fover_rep
165708 5407652 165684 5392016 ssh
0 0 0 0 udp_timer
0 0 0 0 fover_ip
0 0 26 208 tcp_slow
0 0 0 0 fover_tx
0 0 0 0 Integrity Fw Timer Thread
0 0 0 0 tcp_fast
2 2848 0 0 fover_rx
0 0 0 0 vpnfol_thread_unsent
0 0 0 0 arp_forward_thread
0 0 0 0 vpnfol_thread_sync
2 24 0 0 arp_timer
0 0 0 0 vpnfol_thread_timer
0 0 0 0 emweb/cifs_timer
0 0 2 24 NIC status poll
59 13485 59 10712 vpnfol_thread_msg
2 8324 12 58136 vpnlb_thread
1 40 0 0 update_cpu_usage
-------------------------------------------------------------------------------------------------------------------------------------------------------------
1) How can we pinpoint the root cause of this high memory utilization?
2) What might be the reason for the high memory utilization on the standby PIX (96%), even though that unit is effectively idle?
3) Is this a hardware issue or a memory leak, and how can we find out?
4) Would a software upgrade to a newer version resolve the memory issue?
Please provide your valuable suggestions regarding this issue.
Thanks and Regards,
Sihanu N
02-16-2012 05:43 AM
Hi Sihanu,
For high memory issues I would always suggest opening a TAC case, since they need to be investigated closely. That said, there are a few things you can try. I see logging using some memory, so try disabling logging and check whether it makes any difference; judging by the Dispatch Unit figures this is a busy firewall, so a large volume of logs is being generated, which certainly consumes memory.
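Just as a sketch, assuming buffered and trap logging are what is enabled on your unit (standard 7.x syntax; re-enable once you have checked the memory), something like this from config mode:
no logging buffered       (stop logging to the internal buffer)
no logging trap           (stop sending messages to syslog servers)
no logging enable         (or switch logging off entirely while you test)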
The ACLs also seem to use a lot of memory, so try removing any redundant ACLs from your configuration as well.
Only by opening a TAC case will you get a definitive root cause; we have our own database and tools to check the memory utilization of your firewall, and only then can it be determined whether it is a software or a hardware issue.
One more thing which might help you:
show process cpu-usage
Hope that helps.
Thanks,
Varun
Please do rate helpful posts.
02-16-2012 01:02 PM
Hi Varun,
Many thanks for your valuable response. We disabled logging, but it didn't make a noticeable difference, and the CPU utilization looks normal (20 to 30%). One thing to note is that the running configuration is more than 1500 lines long. Also, we could not connect to the PIX (via either SSH or ASDM) during the periods of high packet loss. During that time we forcefully made the standby PIX active, reloaded the primary PIX from the secondary, and then made the primary active again, which only works as a temporary fix for the packet loss to the PIX and the internal servers.
Kindly let us know the following things
1) Is Cisco still providing support for the PIX?
2) Can a large running configuration result in high memory utilization (since the standby PIX is also showing high memory utilization)?
3) What is the difference between the ASDM buffer and the normal logging buffer, and how do we disable ASDM logging?
4) Is there any possibility that a DoS attack against the PIX is causing the memory issue?
Thanks and Regards,
Sihanu N
02-17-2012 05:12 AM
Hi Sihanu,
1. We still provide support for the PIX.
2. A large running configuration doesn't necessarily lead to high memory usage; it is mainly the amount of traffic handled by the PIX that is the major factor (a couple of commands for gauging that load are sketched after this list).
3. To disable ASDM logging:
no logging asdm
4. A DoS attack would just be a wild guess without real data; I would focus on other things first.
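To gauge that load, a few standard show commands can help (sketch only; exact counters vary slightly by version):
show traffic          (per-interface packet and byte rates)
show conn count       (current and peak connection counts)
show perfmon          (per-second rates for connections, xlates, etc.)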
Did you get the opportunity to check the "show process cpu-usage" output?
Thanks,
Varun
02-20-2012 02:39 AM
Hi Varun,
Many thanks for the support provided, and sorry for the delayed reply. We are still experiencing the packet loss issues with the two PIXes.
We are still following this temporary workaround:
1) Make the standby PIX active (the packet loss is still present at that point)
2) Reload the current standby PIX from the current active unit
3) Fail over again, which resolves the packet loss problem (the commands involved are sketched below)
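For reference, the commands behind that workaround would roughly be the following (assuming the primary is currently active; adjust to your failover setup):
failover active            (on the standby unit: take over the active role)
failover reload-standby    (on the new active unit: reload the peer)
no failover active         (on that unit again: hand the active role back once the peer is up)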
My guess and queries
-------------------------------
1) Does the above symptom indicate a hardware problem in the standby PIX?
2) Would shutting down both PIXes and doing a cold start resolve the issue, given that the failover pair has been up for more than a year?
3) Could syslog help us find the problem in the PIXes?
Thanks and Regards
Sihanu N
02-22-2012 01:42 PM
Hi all,
At last we found the root cause of the PIX's high memory utilization. It was caused by the large number of network-objects associated with the object-groups. We summarized the consecutive IP ranges, which reduced the memory utilization from 98% to 80%.
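As an illustration only (the addresses below are made up), consecutive host entries such as:
object-group network INSIDE_SERVERS
network-object host 192.168.10.4
network-object host 192.168.10.5
network-object host 192.168.10.6
network-object host 192.168.10.7
can be collapsed into a single summarized entry:
object-group network INSIDE_SERVERS
network-object 192.168.10.4 255.255.255.252
Since each network-object referenced in an ACL is expanded into its own ACE in memory on 7.2, fewer objects means far fewer expanded entries.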
Thanks for all of your support and suggestions.
Regards,
Sihanu N
02-22-2012 01:45 PM
Where was the object-group applied? Interface access-list? NAT configuration?
02-27-2012 01:11 PM
Hi Patric,
The object-groups were referenced in ACLs.
Thanks and Regards,
Sihanu N
02-16-2012 11:30 AM
Run the 'show blocks' command and see if the LOW count for any memory block size shows a value at or near 0. This can help point you in the right direction in some cases.
Do you have a large NAT 0 configuration?
Do you have a large number of VPN tunnels configured that reference large crypto access-lists?
What does the connection count look like?
The memory utilization on the standby firewall is more than likely attributable to state information that is replicated between the units (assuming you have stateful failover configured). Remember, the conn and NAT tables, the security policy and security association databases, and the ARP table are all replicated to the standby unit when stateful failover is enabled.
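A couple of commands that can help confirm this (standard 7.x commands, sketch only):
show failover        (shows whether stateful failover is enabled, plus the logical update/replication statistics)
show conn count      (connections currently in the conn table and replicated to the standby)
show xlate count     (translations currently in the xlate table)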
02-17-2012 05:05 AM
Hi Patric,
Many Thanks for your valuable response.
Sorry for the delayed response (I was out of the office and could not gather the show command outputs).
1) Just a single NAT 0 configuration with an access list containing only 4 rules
2) No site-to-site VPNs are configured (there are some remote access VPN users, but they rarely connect)
3) show conn count
3231 in use, 4100 most used
Please see the requested output below for the active PIX.
sh blocks
SIZE MAX LOW CNT
0 300 296 300
4 100 99 100
80 100 64 100
256 1100 1063 1100
1550 1815 927 1550
2048 100 99 100
2560 40 40 40
4096 30 29 30
8192 60 60 60
16384 104 104 104
65536 10 10 10
------------------------------------
sh logging setting
Syslog logging: enabled
Facility: 20
Timestamp logging: enabled
Standby logging: disabled
Deny Conn when Queue Full: disabled
Console logging: disabled
Monitor logging: disabled
Buffer logging: level warnings, 61734 messages logged
Trap logging: level warnings, facility 20, 61734 messages logged
History logging: level warnings, 61734 messages logged
Device ID: disabled
Mail logging: disabled
ASDM logging: level informational, 23528167 messages logged
--------------------------------------------------------------------------------------------------------
1) Could the ASDM logging mentioned above cause high memory utilization?
2) Is there any debugging option on the PIX to troubleshoot the memory utilization issue?
Kindly provide your valuable suggestions.
Thanks and Regards,
Sihanu N
02-23-2012 02:22 PM
Hi Patric,
At last we found the root cause of the PIX's high memory utilization. It was caused by the large number of network-objects associated with the object-groups. We summarized the consecutive IP ranges, which reduced the memory utilization from 98% to 80%.
Thanks for all of your support and suggestions.
Regards,
Sihanu N