03-14-2018 10:30 AM - edited 03-18-2019 12:23 PM
On our UCM publisher we've been seeing 100% virtual memory usage for as long as we can remember (historical data shows at least a year, always at 100%). The physical RAM still shows free space, but the 4GB of virtual memory shows 0 free kilobytes. This is a virtual machine running on an ESXi host. It seems to be cosmetic, but I would still like to get it fixed; does anyone know what the problem might be?
VMware Installation: 2 vCPU: Intel(R) Xeon(R) CPU E5-2609 0 @ 2.40GHz, Disk 1: 80GB, Partitions aligned, 4096 Mbytes RAM

admin:show version active
Active Master Version: 10.5.2.11900-3

admin:show perf query class Memory
==>query class :
- Perf class (Memory) has instances and values:
-> % Mem Used = 89
-> % Page Usage = 72
-> % VM Used = 93
-> Buffers KBytes = 10688
-> Cached KBytes = 385860
-> Free KBytes = 114136
-> Free Swap KBytes = 0
-> Page Faults Per Sec = 0
-> Page Major Faults Per Sec = 53
-> Pages = 1693339003
-> Pages Input = 1693339004
-> Pages Input Per Sec = 7135
-> Pages Output = -1
-> Pages Output Per Sec = 0
-> Shared KBytes = 90200
-> SlabCache = 165812
-> SwapCached = 0
-> Total KBytes = 3925704
-> Total Swap KBytes = 2064280
-> Total VM KBytes = 5989984
-> Used KBytes = 3505220
-> Used Swap KBytes = 2064280
-> Used VM KBytes = 5569500
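For what it's worth, the "% VM Used" counter appears to be just Used VM KBytes over Total VM KBytes. A quick sanity check with the values above, on any shell with awk:

```shell
# % VM Used = Used VM KBytes / Total VM KBytes * 100,
# rounded to the nearest integer (matches the reported 93).
awk 'BEGIN { printf "%.0f\n", 5569500 / 5989984 * 100 }'
```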
03-18-2018 02:49 AM
Spin up RTMT and check %CPU and memory utilisation, then restart the service that uses the most.
Possibly also just restart the server in a maintenance window. I have seen weirder things happen.
03-15-2018 12:58 AM
03-16-2018 06:05 AM
Below is the show tech runtime memory output.
admin:show tech runtime memory
-------------------- show platform runtime --------------------
Total memory (RAM+swap) usage (in KB):
total used free shared buffers cached
Mem: 3925704 3814600 111104 0 9924 404152
-/+ buffers/cache: 3400524 525180
Swap: 2064280 2064280 0
Total: 5989984 5878880 111104
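Note that the Total row here is simply the Mem and Swap totals added together, and the Swap row confirms 0 KB free, matching the "Free Swap KBytes = 0" perf counter. A trivial check:

```shell
# Total (5989984) should equal Mem total (3925704) + Swap total (2064280)
echo $(( 3925704 + 2064280 ))
```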
The CPU stays at pretty low utilization. The uptime is 354 days; looking at historical data I think it's been at 100% since then.
03-17-2018 09:05 AM
Hi there,
The best way to look at memory and CPU usage on CUCM is through RTMT; it may display different results than the CLI.
12-21-2018 10:46 AM
You need to open a TAC case so that they can log in as root and run 'swapoff' and 'swapon':
[root@########## syslog]# /sbin/swapoff -a
[root@########## syslog]# /sbin/swapon -a
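One caution (my own note, not from the TAC engineer): `swapoff -a` has to move every in-use swap page back into physical RAM, so it can fail or stall the box if free RAM is smaller than the swap currently in use. A hedged pre-check sketch, run as root on a generic Linux shell:

```shell
# Hypothetical pre-check before cycling swap: only proceed if free RAM
# exceeds the amount of swap currently in use.
used_swap_kb=$(awk '/SwapTotal/ {t=$2} /SwapFree/ {f=$2} END {print t-f}' /proc/meminfo)
free_mem_kb=$(awk '/MemFree/ {print $2}' /proc/meminfo)
if [ "$free_mem_kb" -gt "$used_swap_kb" ]; then
    echo "safe to run: swapoff -a && swapon -a"
else
    echo "not enough free RAM to absorb swap; free memory first"
fi
```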
########
From CLI we confirmed the SWAP partition was full:
show tech runtime memory
-------------------- show platform runtime --------------------
Total memory (RAM+swap) usage (in KB):
total used free shared buffers cached
Mem: 8062020 7887964 174056 0 151396 1639944
-/+ buffers/cache: 6096624 1965396
Swap: 2064280 2064148 132
Total: 10126300 9952112 174188
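That Swap row (2064148 used of 2064280 KB, 132 KB free) works out to the partition being effectively 100% consumed:

```shell
# Swap usage from the figures above: used / total * 100
awk 'BEGIN { printf "%.2f%%\n", 2064148 / 2064280 * 100 }'
```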
==>query class :
show perf query class Memory
==>query class :
- Perf class (Memory) has instances and values:
-> % Mem Used = 76
-> % Page Usage = 77
-> % VM Used = 81
-> Buffers KBytes = 151884
-> Cached KBytes = 1656144
-> Free KBytes = 154456
-> Free Swap KBytes = 164
-> Page Faults Per Sec = 0
-> Page Major Faults Per Sec = 6
-> Pages = -2
-> Pages Input = -1
-> Pages Input Per Sec = 0
-> Pages Output = -1
-> Pages Output Per Sec = 0
-> Shared KBytes = 35988
-> SlabCache = 283392
-> SwapCached = 0
-> Total KBytes = 8062020
-> Total Swap KBytes = 2064280
-> Total VM KBytes = 10126300
-> Used KBytes = 6135524
-> Used Swap KBytes = 2064116
-> Used VM KBytes = 8199640
Also the CiscoSyslogs show the low swap alert:
: %UC_RTMT-2-RTMT_ALERT: %[AlertName=LowSwapPartitionAvailableDiskSpace][AlertDetail= On Thu Dec 20 01:20:46 EST 2018, alert LowSwapPartitionAvailableDiskSpace has occured. Counter Percent Available of Partition(Swap) on node has
We cleared the SWAP memory from root and now it shows available space:
admin:show tech runtime memory
-------------------- show platform runtime --------------------
Total memory (RAM+swap) usage (in KB):
total used free shared buffers cached
Mem: 8062020 7623452 438568 0 4092 1097404
-/+ buffers/cache: 6521956 1540064
Swap: 2064280 1267980 796300
Total: 10126300 8891432 1234868
admin:
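After the swapoff/swapon cycle, roughly 39% of the swap partition is free again (796300 of 2064280 KB):

```shell
# Percentage of swap now free, from the output above
awk 'BEGIN { printf "%.0f\n", 796300 / 2064280 * 100 }'
```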
According to bug CSCva75840, the fix is already included in later releases, so you can plan an upgrade.
03-19-2018 06:00 AM
Thanks for the responses all.
Regarding bug CSCvb86111: we are still on 10.5, so I don't think it affects us. According to RTMT and the CLI, the Cisco Tomcat service uses the most memory (1.9GB of 2.0GB). We've restarted the Tomcat service, but it immediately jumps back up to 1.9GB. Also up there at around 1GB are java, DHCPMonitor, and TAPS. One thing that jumps out at me is that VmRSS is definitely highest on Tomcat, but VmSize isn't that different between them (VmRSS is physical, VmSize is physical+virtual, correct?). Picture below of what I'm looking at.
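For reference on those two counters: in Linux's /proc/[pid]/status, VmRSS is the portion resident in physical RAM, while VmSize is the process's entire virtual address space (including swapped and mapped-but-unbacked pages), which is why VmSize can look similar across processes whose resident usage differs a lot. On a generic Linux host (not possible from the locked-down CUCM admin CLI, shown only for illustration) you can see both with:

```shell
# VmSize = total virtual address space; VmRSS = resident in physical RAM.
# /proc/self/status reports these fields for the reading process itself.
grep -E '^Vm(Size|RSS)' /proc/self/status
```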
Our plan at the moment is just to reboot the server. I'll post on here how it goes.
04-03-2018 05:44 AM
We rebooted the publisher (i.e. the server with the issue). Virtual memory is back to 50% now.
03-18-2018 09:15 AM
You need to see which services take the most memory:
"show process load memory"
Also, how is it showing in the GUI of the VM?
This might help: https://bst.cloudapps.cisco.com/bugsearch/bug/CSCvb86111/?rfs=iqvred
Also take a look at this: https://supportforums.cisco.com/t5/unified-communications/unity-connection-11-5-memory-usage-constantly-above-90-usage/td-p/3219035
Thanks
//Moe