Hi SaintEvn -
There could be a couple of things going on:
1. An undersized VM (virtual machine) instance.
2. A normal, temporary memory increase during statistics collection, after which usage goes back down.
For 1:
Check whether your vManage is undersized, i.e., verify the VM instance settings (memory, disk size, vCPU, etc.) against the number of devices this vManage is going to support.
Check the recommended computing resources section for the different releases, based on the number of edge devices:
https://www.cisco.com/c/en/us/td/docs/routers/sdwan/release/notes/compatibility-and-server-recommendations.html
It covers both Cisco cloud-hosted and on-prem deployments.
If it is cloud-hosted by Cisco and the instance size does not match the documentation, you can open a TAC case to get this fixed/addressed.
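As a quick sanity check, you can read the allocated resources directly from vShell with standard Linux tools (compare the numbers against the compatibility matrix above for your release and device count):

```shell
# Number of vCPUs allocated to the VM
nproc

# Total/used memory in MiB
free -m

# Disk allocation per partition
df -h
```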
For 2:
Check the top processes from vShell. You can sort by CPU or memory to see which processes are consuming the most resources.
In general, the default vManage processes do consume CPU/memory.
Below is an example of 'top' sorted by CPU:
top - 17:51:07 up 16 days, 3:22, 1 user, load average: 0.22, 0.13, 0.21
Tasks: 467 total, 1 running, 466 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.1 us, 0.2 sy, 0.0 ni, 99.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 59332.5 total, 25013.4 free, 24222.3 used, 10096.8 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 34277.5 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
24098 vmanage 20 0 13.9g 6.9g 28272 S 5.0 11.9 2118:39 java
1120 vmanage 20 0 232320 18732 8100 S 4.0 0.0 982:17.84 python3
19941 vmanage 20 0 2757772 61628 29492 S 1.7 0.1 351:24.62 envoy
21792 vmanage 20 0 51.9g 6.0g 609860 S 1.7 10.3 370:54.01 java
21392 vmanage 20 0 21.4g 4.6g 24044 S 1.3 8.0 715:16.87 java
1110 root 20 0 301304 12128 7372 S 1.0 0.0 94:58.68 sysmgrd
21597 vmanage 20 0 878792 50044 21652 S 1.0 0.1 233:23.37 messaging-serve
25523 admin 20 0 16056 2884 1968 R 1.0 0.0 0:00.31 top
6031 root 20 0 258916 23304 10032 S 0.7 0.0 150:32.59 vdaemon
6047 root 20 0 258740 22820 9900 S 0.7 0.0 139:50.22 vdaemon
12258 root 20 0 3405396 41804 20588 S 0.7 0.1 85:28.70 containerd
16616 vmanage 20 0 9964.2m 263016 27192 S 0.7 0.4 140:40.98 java
522 root 20 0 12036 4196 1064 S 0.3 0.0 55:33.69 haveged
5453 root 20 0 11632 2804 2504 S 0.3 0.0 18:54.37 sysmgr_stats_st
And sorted by memory:
top - 17:52:55 up 16 days, 3:24, 1 user, load average: 0.10, 0.12, 0.19
Tasks: 467 total, 1 running, 466 sleeping, 0 stopped, 0 zombie
%Cpu(s): 1.5 us, 0.4 sy, 0.0 ni, 98.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 59332.5 total, 25010.7 free, 24232.5 used, 10089.3 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 34267.3 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
24098 vmanage 20 0 13.9g 6.9g 28272 S 15.9 11.9 2118:47 java <<<<<
21792 vmanage 20 0 51.9g 6.0g 609888 S 1.0 10.3 370:55.35 java <<<<<
21392 vmanage 20 0 21.4g 4.7g 24044 S 32.5 8.0 715:20.99 java <<<<<
6395 vmanage 20 0 23.6g 2.5g 19720 S 0.3 4.3 88:56.94 java
22520 vmanage 20 0 13.8g 858296 16964 S 0.3 1.4 39:12.14 java
19097 vmanage 20 0 25.9g 674392 19128 S 0.3 1.1 82:47.78 java
1159 vmanage 20 0 7591620 671704 6176 S 0.0 1.1 25:27.54 ncs.smp
16616 vmanage 20 0 9964.2m 263016 27192 S 0.3 0.4 140:41.57 java
20335 vmanage 20 0 9261216 184804 22656 S 0.0 0.3 6:12.36 java
21839 vmanage 20 0 1603288 149984 22628 S 0.0 0.2 33:03.49 python
27500 vmanage 20 0 7247856 122460 22740 S 0.3 0.2 6:27.33 java
19458 vmanage 20 0 433408 118836 11320 S 0.0 0.2 11:20.83 uwsgi
19456 vmanage 20 0 428380 112580 11388 S 0.0 0.2 0:25.72 uwsgi
Each of the above Java processes maps to one of the vManage services.
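To see which service a given Java PID corresponds to, you can inspect its full command line with standard Linux tools (PID 24098 here is taken from the example output above; substitute your own):

```shell
# Show the full command line for the process, which includes the
# service's JAR/classpath and makes the vManage service identifiable
ps -o pid,args -p 24098

# Alternative: read the command line straight from /proc
tr '\0' ' ' < /proc/24098/cmdline; echo
```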
Please check out:
https://www.cisco.com/c/en/us/support/docs/routers/sd-wan/216015-quick-start-guide-data-collection-for.html#anc5 for some additional info.
HTH.