ASR9k Memory usage Issue

behzadho (Level 1)

Hello everybody

We are using an ASR 9904 running IOS XR version 6.2.3. Over the past months I have seen memory consumption rise by about 1 megabyte every day, and this trend is continuing. I suspect a memory leak, and I am worried that the increase will carry on over the coming months. The trend also does not stop when I disable a service or remove some configuration.

Can anyone please help me fix this abnormal behavior and identify the cause?

I have pasted the output of the relevant show commands below:

 

RP/0/RSP0/CPU0:ios#show watchdog memory-state
Thu Jan 9 09:41:13.784 EDT
Memory information:
Physical Memory: 6144 MB
Free Memory: 1972.0 MB
Memory State: Normal

 

RP/0/RSP0/CPU0:ios#show processes memory detail
Thu Jan 9 10:22:34.841 EDT
JID Text Data Stack Dynamic Dyn-Limit Shm-Tot Phy-Tot Process
------ ---------- ---------- ---------- ---------- ---------- ---------- ---------- -------
1051 1M 9M 544K 594M 1978M 96M 605M bgp
1152 444K 568K 256K 301M 2176M 37M 301M ipv4_rib
62 120K 60K 160K 74M 300M 32M 74M eth_server
1153 444K 728K 192K 52M 2176M 36M 53M ipv6_rib
303 1M 636K 224K 42M 2048M 49M 43M l2fib_mgr
358 164K 144K 112K 23M 2048M 3932M 24M parser_server
1205 3M 1M 296K 15M 1024M 591M 17M pim
1206 3M 1M 208K 13M 1024M 564M 15M pim6
1180 3M 1M 236K 13M 1978M 48M 15M l2vpn_mgr
1182 40K 24K 20K 13M 300M 7M 13M schema_server
60 188K 80K 56K 12M INFINITY 3596M 12M dllmgr
351 128K 76K 112K 12M 2048M 12M 12M nvgen_server
1203 560K 1M 280K 11M 512M 566M 13M igmp
1204 492K 1M 144K 10M 512M 552M 12M mld
1150 4K 8K 40K 10M 2048M 25M 10M statsd_manager_g
420 4K 8K 32K 9M 2048M 15M 9M statsd_manager_l
1193 4K 12K 64K 9M 300M 18M 9M telemetry_encoder
90 124K 44K 112K 9M 300M 8M 9M redfs_svr
54 148K 72K 152K 8M 300M 16M 9M ce_switch
1201 916K 936K 132K 8M 300M 60M 9M mrib

 

RP/0/RSP0/CPU0:ios#show memory compare report
Thu Jan 9 09:22:14.367 EDT
JID name mem before mem after difference mallocs restart/exit/new
--- ---- ---------- --------- ---------- ------- ----------------
248 gsp 5977280 5981112 3832 1
461 wdsysmon 3864444 3865660 1216 8
66055 exec 206156 206244 88 3
428 sysdb_shared_nc 2454580 2454664 84 1
1051 bgp 608785016 608785064 48 2
442 tcp 5548200 5548248 48 1
229 fib_mgr 3192992 3193040 48 1
225 fabmgr 382048 382056 8 0
431 sysdb_svr_admin 1765360 1765352 -8 0
191 dumper_config 123644 122044 -1600 -1
426 sysdb_mc 3387848 3381724 -6124 -80
429 sysdb_shared_sc 3563864 3556536 -7328 -138
332 mibd_interface 29174392 29079004 -95388 -12

 

Thank you,

6 Replies

smilstea (Cisco Employee)

show memory compare needs to be run over many days to identify which process is holding on to more memory. After that, we can use show memory heap to find the exact function that is holding the memory.
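For reference, the capture sequence looks something like the following (the prompt and the waiting period are just illustrative; the snapshots end up as memcmp_start.out and memcmp_end.out under harddisk:/malloc_dump):

RP/0/RSP0/CPU0:ios#show memory compare start
(let the router run normally for several days)
RP/0/RSP0/CPU0:ios#show memory compare end
RP/0/RSP0/CPU0:ios#show memory compare report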

 

More than likely the increased memory is due to route churn or to ltraces that accumulate over time. We allocate a certain amount of memory for ltrace space, but that memory is not utilized until trace messages actually occur, so the space is 'free' until events happen on the router. Ltraces are fairly small, so seeing 1 MB a day would be consistent with this behavior.

 

Sam

Thank you for your response.
Should I be worried about this trend, or will it stop by itself? How can I stop the increase, or confirm that it is related to route churn and ltraces?

You can check shared memory (and ltrace consumption) with the show shmem summary command. Note that this command does not account for all memory types, but it does include more than just ltrace, so it may prove useful.
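For example, run it against your active RSP (the location below matches the prompt in your outputs):

RP/0/RSP0/CPU0:ios#show shmem summary location 0/RSP0/CPU0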
Sam

Thank you very much.
My last question: will this trend lead to a memory issue in the future, or is it normal and will it level off on its own?

Thanks

It's not really an issue; once all of the ltrace memory is used, we delete old entries to free up space for new ltraces. Over the course of a few weeks the memory consumption will level out, if that is indeed the cause of your increased memory usage.
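If you want to keep an eye on it, one simple approach (just a suggestion, using the same commands already shown in this thread) is to capture these views periodically, say once a week, and watch whether Free Memory and the LTrace figure flatten out:

RP/0/RSP0/CPU0:ios#show watchdog memory-state
RP/0/RSP0/CPU0:ios#show shmem summary location 0/RSP0/CPU0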

Thanks for your response

Below are the recent results of my troubleshooting. Regarding the memory compare, is there any abnormal process causing the memory consumption, or do I need other show commands?

 

The memory compare result covers the period from the 9th of January to the 17th of January:

 

RP/0/RSP0/CPU0:ios#dir harddisk:/malloc_dump
Mon Jan 20 10:40:07.112 EDT

Directory of harddisk:/malloc_dump

57410 -rw- 21512 Thu Jan 9 09:20:41 2020 memcmp_start.out
57411 ---- 21512 Fri Jan 17 11:58:56 2020 memcmp_end.out

 

 

RP/0/RSP0/CPU0:ios#show memory compare report
Mon Jan 20 10:40:24.508 EDT

JID name mem before mem after difference mallocs restart/exit/new
--- ---- ---------- --------- ---------- ------- ----------------
1051 bgp 608785016 610843108 2058092 29677
442 tcp 5548200 6037544 489344 1559
341 netio 5243528 5732008 488480 1553
291 ipv4_io 4105432 4593564 488132 1552
387 raw_ip 4679888 5168020 488132 1552
451 udp 4660444 5148576 488132 1552
1001 snmpd 6817752 7305884 488132 1552
360 pfilter_ma 4581160 5062128 480968 1456
1205 pim 15414624 15853324 438700 1327
1152 ipv4_rib 314692672 314986720 294048 1
1153 ipv6_rib 54469052 54730652 261600 4
345 nrssvr 2123432 2301224 177792 4020
1114 obj_mgr 799988 825616 25628 194
452 vi_config_replicator 881360 904196 22836 13
320 lpts_pa 981460 991572 10112 158
401 sconbkup 262912 266304 3392 212
351 nvgen_server 10938316 10941548 3232 202
88 qnet 737052 739612 2560 5
461 wdsysmon 3864444 3865412 968 9
1179 xtc_agent 2798528 2798936 408 6
295 ipv6_arm 1040300 1040556 256 -6
386 qsm 1222716 1222912 196 2
433 syslogd 222880 223032 152 0
371 plat_swc_agent 346592 346720 128 8
469 ipv4_mfwd_partner 2253724 2253844 120 1

 

RP/0/RSP0/CPU0:ios#show shmem summary location 0/RSP0/CPU0
Mon Jan 20 10:42:20.290 EDT
Total Shared memory: 1612M
ShmWin: 193M
Image: 73M
LTrace: 601M
AIPC: 63M
SLD: 3M
SubDB: 528K
CERRNO: 148K
GSP-CBP: 157M
EEM: 0
XOS: 16M
CHKPT: 20M
CDM: 5M
XIPC: 1M
DLL: 64K
SysLog: 2M
Miscellaneous: 472M

 

Regards,