
CUCM 100% Virtual Memory Usage

esa_fresa
Level 1

On our UCM publisher we've been seeing 100% virtual memory usage for as long as we can remember (historical data shows at least a year, always at 100%). Physical RAM still shows free space, but the 4 GB of virtual memory shows 0 free kilobytes. This is a virtual machine running on an ESXi host. It seems to be cosmetic, but we'd still like to get it fixed; does anyone know what the problem might be?

 

VMware Installation:
        2 vCPU: Intel(R) Xeon(R) CPU E5-2609 0 @ 2.40GHz
        Disk 1: 80GB, Partitions aligned
        4096 Mbytes RAM

admin:show version active
Active Master Version: 10.5.2.11900-3

admin:show perf query class Memory
==>query class :

 - Perf class (Memory) has instances and values:
                    -> % Mem Used                     = 89
                    -> % Page Usage                   = 72
                    -> % VM Used                      = 93
                    -> Buffers KBytes                 = 10688
                    -> Cached KBytes                  = 385860
                    -> Free KBytes                    = 114136
                    -> Free Swap KBytes               = 0
                    -> Page Faults Per Sec            = 0
                    -> Page Major Faults Per Sec      = 53
                    -> Pages                          = 1693339003
                    -> Pages Input                    = 1693339004
                    -> Pages Input Per Sec            = 7135
                    -> Pages Output                   = -1
                    -> Pages Output Per Sec           = 0
                    -> Shared KBytes                  = 90200
                    -> SlabCache                      = 165812
                    -> SwapCached                     = 0
                    -> Total KBytes                   = 3925704
                    -> Total Swap KBytes              = 2064280
                    -> Total VM KBytes                = 5989984
                    -> Used KBytes                    = 3505220
                    -> Used Swap KBytes               = 2064280
                    -> Used VM KBytes                 = 5569500
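
For what it's worth, those counters are internally consistent: total virtual memory is RAM plus swap, and the 100% reading comes from the swap side being completely full. A quick sanity check of the arithmetic (plain shell, my own math using the numbers above):

# Total VM KBytes = Total KBytes (RAM) + Total Swap KBytes
echo $((3925704 + 2064280))                # 5989984, matches Total VM KBytes
# % VM Used = Used VM KBytes / Total VM KBytes
echo "scale=2; 100*5569500/5989984" | bc   # 92.98, matches "% VM Used = 93"
# Free Swap KBytes = 0 is the "0 free kilobytes" the platform reports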




9 Replies

Aeby Vinod
Level 3
Don't write it off as cosmetic yet; please get the output of the command "show tech runtime memory". Monitor for some time to see if there are CPU spikes; high swap partition usage can cause them. Since your VM still has free RAM, I suspect your swap partition is holding stale memory, and restarting the CallManager service will clear this up. I suggest restarting the service (a sketch of the CLI follows below) and monitoring for the next few months to see if the memory builds up again.
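
The restart can be done from the platform CLI; a minimal sketch, assuming the standard service name (confirm it with "utils service list" first, and do this in a maintenance window since it interrupts call processing):

admin:utils service list
admin:utils service restart Cisco CallManager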

Please rate if you find this helpful.

Regards,
Aeby




Below is the show tech runtime memory output. 

admin:show tech runtime memory
 -------------------- show platform runtime --------------------

Total memory (RAM+swap) usage (in KB):
             total       used       free     shared    buffers     cached
Mem:       3925704    3814600     111104          0       9924     404152
-/+ buffers/cache:    3400524     525180
Swap:      2064280    2064280          0
Total:     5989984    5878880     111104

The CPU stays at pretty low utilization. The uptime is 354 days; looking at historical data I think it's been at 100% since then.

Hi there,

 

The optimal way to look at memory and CPU usage on CUCM is through RTMT; it might display different results than the CLI.

You need to open a TAC case so that they can log in as root and run swapoff/swapon:

 

[root@########## syslog]# /sbin/swapoff -a
[root@########## syslog]# /sbin/swapon -a
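
As a sketch of the before/after checks root would run around those two commands (standard Linux tools, not CUCM-specific; note that swapoff needs enough free RAM to absorb the swapped-out pages):

# inspect swap state before flushing it
cat /proc/swaps
free -k
# move pages out of swap, then re-enable it
/sbin/swapoff -a
/sbin/swapon -a
# confirm swap usage has dropped back to (near) zero
free -k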

 

########

From CLI we confirmed the SWAP partition was full:

 

show tech runtime memory
-------------------- show platform runtime --------------------

Total memory (RAM+swap) usage (in KB):
             total       used       free     shared    buffers     cached
Mem:       8062020    7887964     174056          0     151396    1639944
-/+ buffers/cache:    6096624    1965396
Swap:      2064280    2064148        132
Total:    10126300    9952112     174188

 

 

show perf query class Memory
==>query class :

 - Perf class (Memory) has instances and values:
                    -> % Mem Used                     = 76
                    -> % Page Usage                   = 77
                    -> % VM Used                      = 81
                    -> Buffers KBytes                 = 151884
                    -> Cached KBytes                  = 1656144
                    -> Free KBytes                    = 154456
                    -> Free Swap KBytes               = 164
                    -> Page Faults Per Sec            = 0
                    -> Page Major Faults Per Sec      = 6
                    -> Pages                          = -2
                    -> Pages Input                    = -1
                    -> Pages Input Per Sec            = 0
                    -> Pages Output                   = -1
                    -> Pages Output Per Sec           = 0
                    -> Shared KBytes                  = 35988
                    -> SlabCache                      = 283392
                    -> SwapCached                     = 0
                    -> Total KBytes                   = 8062020
                    -> Total Swap KBytes              = 2064280
                    -> Total VM KBytes                = 10126300
                    -> Used KBytes                    = 6135524
                    -> Used Swap KBytes               = 2064116
                    -> Used VM KBytes                 = 8199640

 

Also, the Cisco syslogs show the low-swap alert:

 

:  %UC_RTMT-2-RTMT_ALERT: %[AlertName=LowSwapPartitionAvailableDiskSpace][AlertDetail= On Thu Dec 20 01:20:46 EST 2018, alert LowSwapPartitionAvailableDiskSpace has occured. Counter Percent Available of Partition(Swap) on node  has 
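
The numbers line up with the alert: with 132 KB free of a 2,064,280 KB swap partition, the available percentage is effectively zero (my own arithmetic, not from the logs):

echo "scale=3; 100*132/2064280" | bc   # 0.006 percent of swap still available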

 

 

We cleared the SWAP memory from root and now it shows available space:

 

admin:show tech runtime memory
-------------------- show platform runtime --------------------

Total memory (RAM+swap) usage (in KB):
             total       used       free     shared    buffers     cached
Mem:       8062020    7623452     438568          0       4092    1097404
-/+ buffers/cache:    6521956    1540064
Swap:      2064280    1267980     796300
Total:    10126300    8891432    1234868
admin:

 

According to bug CSCva75840, the fix is included in later releases, so you can plan an upgrade.

 


Dennis Mink
VIP Alumni
(Accepted Solution)

Spin up RTMT, check %CPU and memory utilisation, and restart the service that uses the most.

 

Possibly also just restart the server in a maintenance window; I have seen weirder things happen.

Please remember to rate useful posts, by clicking on the stars below.

Thanks for the responses, all.

Regarding bug CSCvb86111: we are still on 10.5, so I don't think it affects us. According to RTMT and the CLI, the Cisco Tomcat service uses the most (1.9 GB of 2.0 GB). We've restarted the Tomcat service, but it immediately jumps back up to 1.9 GB. Also up there, at around 1 GB each, are java, DHCPMonitor, and TAPS. One thing that jumps out at me: VmRSS is definitely highest on Tomcat, but VmSize isn't that different between them (VmRSS is physical, VmSize is physical+virtual, correct?). Picture below of what I'm looking at.
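
For reference on VmRSS vs. VmSize: on Linux these come straight from /proc/<pid>/status. VmRSS is the memory currently resident in physical RAM, while VmSize is the process's total mapped virtual address space (including pages swapped out or never touched), so "physical vs. physical+virtual" is roughly right. A quick way to view them (the PID below is just an example):

grep -E 'VmSize|VmRSS' /proc/12345/status   # example PID, substitute a real one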

Our plan at the moment is just to reboot the server. I'll post on here how it goes.

 

[Screenshot: cucm mem usage.PNG]

 

We rebooted the publisher (i.e. the server with the issue). Virtual memory is back to 50% now.

moe2
Level 1

You need to see which services are taking the most memory:

"show process load memory" 

 

Also, please share how it is showing in the GUI for the VM.

 

This might help: https://bst.cloudapps.cisco.com/bugsearch/bug/CSCvb86111/?rfs=iqvred

 

Also, take a look at this: https://supportforums.cisco.com/t5/unified-communications/unity-connection-11-5-memory-usage-constantly-above-90-usage/td-p/3219035

 

Thanks

 

//Moe