Increase in memory usage by EEM

Ryo Ikoma
Level 1

Hardware : Nexus 5672UP
System version : 7.3(1)N1(1)
=====

I am trying to provide "IP SLA Object Tracking"-like functionality on a Nexus 5672.
I implemented it by following "IP SLA with Nexus 5500 series: RouteTrack.py",
but with a large number of targets it hung up after several hours.
Each time the "action cli command" in the EEM applet finished, another zombie process was left behind, and their number kept rising.

=====ex=====
Switch# show system internal process cpu
top - 23:41:37 up 1 day, 3:21, 4 users, load average: 0.38, 0.70, 0.68
Tasks: 982 total, 1 running, 421 sleeping, 0 stopped, 560 zombie
Cpu(s): 1.3%us, 1.0%sy, 0.0%ni, 97.6%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st
Mem: 8243284k total, 4342344k used, 3900940k free, 140k buffers
Swap: 0k total, 0k used, 0k free, 1557276k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
19276 admin 20 0 4004 1992 1148 R 20.7 0.0 0:00.23 top
: <snip>
=====ex=====
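
For context, the script launched by the applet does roughly what the RouteTrack.py example does: ping each tracked next hop and adjust static routes according to the result. The sketch below is purely illustrative and is NOT the actual route-track-exec.py; the cli import, the target addresses, and the command strings are assumptions about the on-box NX-OS Python API and should be adapted to whatever this platform really exposes.

=====ex=====
# Hypothetical sketch only -- not the actual route-track-exec.py.
# Assumes the on-box NX-OS Python API provides a cli() helper; the module
# name and return format differ between platforms and releases.
try:
    from cli import cli            # assumption: newer NX-OS on-box Python API
except ImportError:
    try:
        from cisco import cli      # assumption: older Nexus 5x00/6000 Python API
    except ImportError:
        def cli(cmd):              # off-box fallback so the sketch still imports
            raise NotImplementedError("run on the switch: " + cmd)

# Tracked next hop -> (prefix, backup next hop); purely illustrative values.
TARGETS = {
    "192.0.2.1": ("198.51.100.0/24", "192.0.2.254"),
}

def reachable(ip):
    """Ping the tracked address; treat total packet loss as down (parsing simplified)."""
    output = cli("ping {} count 3 timeout 1".format(ip))
    return "100.00% packet loss" not in str(output)

def main():
    # The real script would also withdraw the stale route; kept minimal here.
    for nexthop, (prefix, backup) in TARGETS.items():
        if reachable(nexthop):
            cli("configure terminal ; ip route {} {}".format(prefix, nexthop))
        else:
            cli("configure terminal ; ip route {} {}".format(prefix, backup))

if __name__ == "__main__":
    main()
=====ex=====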

So I changed the Python script and the EEM applet.
The applet is now triggered only once, after the hardware boots,
and it keeps running inside a "while" loop so that it never stops.

=====ex=====
event manager applet ROUTETRACK
event syslog pattern "%PFMA-2-MOD_PWRUP: Module 2 powered up"
maxrun 2147483647
action 0.00 set _i "1"
action 0.01 wait 10
action 1.00 while 1 eq 1
action 2.11 cli command "source background route-track-exec.py"
action 2.12 wait 1
action 8.00 syslog priority warnings msg "routecheck policy succeed count $_i"
action 8.01 increment _i
action 8.02 wait 1
action 9.99 end
=====ex=====
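
For reference, the same start-once, loop-forever structure could also live entirely inside the Python script, so that EEM only has to launch a single long-lived process instead of looping in the applet. The following is only a standard-library sketch with an illustrative track_targets() placeholder, not a drop-in replacement for route-track-exec.py.

=====ex=====
# Illustrative sketch: keep the polling loop inside the script itself.
# track_targets() stands in for the real reachability checks and route
# updates performed by route-track-exec.py.
import time
import syslog

POLL_INTERVAL = 10   # seconds between polling cycles (illustrative value)

def track_targets():
    pass             # placeholder for the actual per-target checks

def main():
    count = 0
    syslog.openlog("route-track")
    while True:
        track_targets()
        count += 1
        syslog.syslog(syslog.LOG_WARNING,
                      "routecheck policy succeed count {}".format(count))
        time.sleep(POLL_INTERVAL)

if __name__ == "__main__":
    main()
=====ex=====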

Unfortunately, after the new applet had been running for about four days, the Python script stopped working.

=====ex=====
Switch# show event manager policy active
Service not responding
=====ex=====

When I checked memory usage, I found that it was gradually rising.
I also found that the CPU utilization of the eem_policy_dir process was high.

=====ex=====
Switch# show system internal process cpu
top - 19:22:11 up 4 days, 6:07, 5 users, load average: 1.01, 1.12, 1.22
Tasks: 270 total, 1 running, 267 sleeping, 0 stopped, 2 zombie
Cpu(s): 7.5%us, 3.8%sy, 0.0%ni, 85.4%id, 0.0%wa, 0.0%hi, 3.3%si, 0.0%st
Mem: 8243284k total, 4193596k used, 4049688k free, 184k buffers
Swap: 0k total, 0k used, 0k free, 1589048k cached

  PID USER  PR NI VIRT RES   SHR S %CPU %MEM    TIME+ COMMAND
 4250 root  20  0 613m 389m 6540 S 100.2 4.8 68:37.97 eem_policy_dir
21526 admin 20  0 3604 1560 1148 R   5.7 0.0 0:00.07 top
: <snip>
=====ex=====

Has anyone ever had this problem?

Any help would be appreciated.

2 Replies

Joe Clarke
Cisco Employee

This looks to be a bug, but I have not heard of anyone reporting these exact symptoms.  I have seen a recent report of EEM increasing zombie processes, but again, I don't know of an existing bug on that.  I recommend you contact TAC so this (and the possible leak) can be analyzed.

Thank you for the quick response!
I'll do that.