OS-SHMWIN-3-ALLOC_ARENA_FAILED : SHMWIN: Failed to allocate new arena from the server : 'SHMWIN_SVR' detected the 'fatal' condition 'VM is exhausted or totally fragmented'

ty.chan007
Level 1

I am getting the following log:

OS-SHMWIN-3-ALLOC_ARENA_FAILED : SHMWIN: Failed to allocate new arena from the server : 'SHMWIN_SVR' detected the 'fatal' condition 'VM is exhausted or totally fragmented'

Is this a memory leak bug, or something else?

1 Accepted Solution


Try reloading the standby and, when it comes up, initiate the RP switchover.


9 Replies

Aleksandar Vidakovic
Cisco Employee

This error indicates that the system is running out of space in the shared memory region. It would be best to capture the output of "show shmwin summary [location <location>]" and open a TAC SR.

RP/0/RSP0/CPU0:R01#sh shmwin summary location 0/0/CPU0
Mon Sep 12 14:52:07.361 PNH
----------------------------------------
Shared memory window summary information
----------------------------------------
Virtual Memory size  : 1536 MBytes
Virtual Memory Range :  0x7c000000 - 0xdc000000
Virtual Memory Group 2 size  : 352 MBytes
Virtual Memory Group 2 Range : 0x66000000 - 0x7c000000

Window Name      ID  GRP #Usrs #Wrtrs Ownr Usage(KB) Peak(KB) Peak Timestamp
---------------- --- --- ----- ------ ---- --------- -------- -------------------
vkg_pbr_ea       83  1   1     1      0    131       131      09/09/2016 06:23:01
vkg_l2fib_vqi    97  1   2     2      0    3         0        --/--/---- --:--:--
statsd_db        60  1   1     1      0    3         0        --/--/---- --:--:--
statsd_db_l      129 P   1     1      0    1131      1131     09/09/2016 06:22:45
arp              20  1   1     1      0    67        67       09/09/2016 06:24:15
bm_lacp_tx       54  1   1     1      130  34        34       09/09/2016 06:23:16
ether_ea_shm     26  1   3     3      382  79        79       09/09/2016 06:22:37
ether_ea_tcam    58  1   4     4      382  331       331      09/09/2016 06:22:37
prm_srh_main     66  1   21    21     0    54147     54147    09/09/2016 06:23:16
prm_stats_svr    24  1   16    16     0    81587     81587    09/09/2016 06:22:19
prm_tcam_mm_svr  23  1   1     1      0    21907     21907    09/09/2016 06:23:15
prm_ss_lm_svr    65  1   1     1      0    2379      2379     09/09/2016 06:22:20
prm_ss_mm_svr    22  1   3     3      0    3755      3755     09/09/2016 06:22:17
l2fib            14  1   5     5      249  7490      7490     09/09/2016 06:23:15
pd_fib_cdll      28  1   1     1      0    35        35       09/09/2016 06:22:13
ifc-mpls         13  1   11    11     182  134301    134301   09/12/2016 14:48:08
ifc-ipv6         17  1   11    11     182  27373     27373    09/09/2016 06:23:55
ifc-ipv4         16  1   11    11     182  120909    121109   09/11/2016 03:11:24
ifc-protomax     18  1   11    11     182  4081      4281     09/09/2016 06:27:01
bfd_offload_shm  94  1   1     1      0    2         0        --/--/---- --:--:--
netio_fwd        34  1   1     1      0    0         0        --/--/---- --:--:--
inline_svc       88  1   1     1      0    635       635      09/09/2016 06:22:08
vkg_bmp_adj      30  1   3     3      127  43        43       09/09/2016 06:22:20
infra_ital       19  1   5     5      319  323       323      09/09/2016 06:22:09
im_rd            33  1   56    56     0    1131      1131     09/09/2016 06:22:05
im_db_private    128 P   1     1      0    1131      1131     09/09/2016 06:22:10
infra_statsd     8   1   3     3      342  3         0        --/--/---- --:--:--
aib              2   1   7     7      111  2255      2255     09/09/2016 06:23:12
vkg_pm           5   1   22    1      295  83930     83930    09/12/2016 14:51:32
mgid_refcount    64  1   1     1      267  40259     40259    09/09/2016 06:22:07
rspp_ma          3   1   12    12     0    3         0        --/--/---- --:--:--
subdb_fai_tbl    75  2   7     1      0    51        51       09/09/2016 06:21:52
subdb_ifh_tbl    74  2   1     1      0    35        35       09/09/2016 06:21:52
subdb_ao_tbl     72  2   1     1      0    43        43       09/09/2016 06:21:52
subdb_do_tbl     73  2   7     1      0    35        35       09/09/2016 06:21:52
subdb_co_tbl     71  2   7     1      0    39        39       09/09/2016 06:21:52
cluster_dlm      61  1   20    20     0    3         0        --/--/---- --:--:--
im_rules         31  1   65    65     0    325       325      09/09/2016 06:22:05
im_db            32  1   65    1      0    1129      1129     09/09/2016 06:22:07
pfm_node         29  1   1     1      0    163       163      09/09/2016 06:22:11
spp              27  1   39    39     88   1003      1003     09/09/2016 06:23:05
qad              6   1   1     1      0    134       134      01/01/1970 07:00:08
pcie-server      39  1   1     1      0    39        39       01/01/1970 07:00:07
RP/0/RSP0/CPU0:R01#

Will a reload help?

I just want a temporary fix while waiting for SMARTNET to be purchased. :)

Yes, a reload will clear the condition. I would also highly recommend downloading and installing XR release 5.3.4, which should be available for download around September 22nd.
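
For reference, the classic IOS XR install workflow looks roughly like this. The TFTP server address and the package/version names below are hypothetical placeholders, not taken from this thread, so substitute your own:

RP/0/RSP0/CPU0:R01# admin
RP/0/RSP0/CPU0:R01(admin)# install add source tftp://192.0.2.1 asr9k-mini-px.pie-5.3.4 synchronous
RP/0/RSP0/CPU0:R01(admin)# install activate disk0:asr9k-mini-px-5.3.4 synchronous
RP/0/RSP0/CPU0:R01(admin)# install commit

Running "install commit" only after verifying the upgrade makes the new software persistent across reloads.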

My platform is an ASR 9006, so I have two RSPs.

Will reloading the RSPs one by one do, or do I have to reload the whole system?

If the error was reported on the RP, you can simply initiate a failover from active to standby. If the error was reported on an LC, you have to reload the line card.

A system reload should not be required.
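
Both options would look roughly like this on classic IOS XR (the LC location 0/0/CPU0 is just an example taken from the output above):

! RP case: fail over from the active RSP to the standby
RP/0/RSP0/CPU0:R01# redundancy switchover

! LC case: reload only the affected line card, from admin mode
RP/0/RSP0/CPU0:R01# admin
RP/0/RSP0/CPU0:R01(admin)# hw-module location 0/0/CPU0 reload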

RP/0/RSP1/CPU0:Sep 12 12:32:38.897 : fib_mgr[223]: %OS-SHMWIN-3-ALLOC_ARENA_FAILED : SHMWIN: Failed to allocate new arena from the server : 'SHMWIN_SVR' detected the 'fatal' condition 'VM is exhausted or totally fragmented'
RP/0/RSP0/CPU0:Sep 12 12:32:50.579 : fib_mgr[223]: %OS-SHMWIN-3-ALLOC_ARENA_FAILED : SHMWIN: Failed to allocate new arena from the server : 'SHMWIN_SVR' detected the 'fatal' condition 'VM is exhausted or totally fragmented'
RP/0/RSP1/CPU0:Sep 12 12:32:50.579 : fib_mgr[223]: %OS-SHMWIN-3-ALLOC_ARENA_FAILED : SHMWIN: Failed to allocate new arena from the server : 'SHMWIN_SVR' detected the 'fatal' condition 'VM is exhausted or totally fragmented'
RP/0/RSP0/CPU0:Sep 12 12:33:02.223 : fib_mgr[223]: %OS-SHMWIN-3-ALLOC_ARENA_FAILED : SHMWIN: Failed to allocate new arena from the server : 'SHMWIN_SVR' detected the 'fatal' condition 'VM is exhausted or totally fragmented'
RP/0/RSP1/CPU0:Sep 12 12:33:02.223 : fib_mgr[223]: %OS-SHMWIN-3-ALLOC_ARENA_FAILED : SHMWIN: Failed to allocate new arena from the server : 'SHMWIN_SVR' detected the 'fatal' condition 'VM is exhausted or totally fragmented'
RP/0/RSP0/CPU0:Sep 12 12:33:13.246 : fib_mgr[223]: %OS-SHMWIN-3-ALLOC_ARENA_FAILED : SHMWIN: Failed to allocate new arena from the server : 'SHMWIN_SVR' detected the 'fatal' condition 'VM is exhausted or totally fragmented'
RP/0/RSP1/CPU0:Sep 12 12:33:13.247 : fib_mgr[223]: %OS-SHMWIN-3-ALLOC_ARENA_FAILED : SHMWIN: Failed to allocate new arena from the server : 'SHMWIN_SVR' detected the 'fatal' condition 'VM is exhausted or totally fragmented'

==============

Based on the above log, it is happening on both RSP0 and RSP1.

So, should I reload the RPs one by one, or all at once? :)

Try reloading the standby and, when it comes up, initiate the RP switchover.
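
In CLI terms that sequence would look roughly like this, assuming RSP1 is currently the standby (verify with "show redundancy" first):

RP/0/RSP0/CPU0:R01# show redundancy
RP/0/RSP0/CPU0:R01# admin
RP/0/RSP0/CPU0:R01(admin)# hw-module location 0/RSP1/CPU0 reload
RP/0/RSP0/CPU0:R01(admin)# exit
! wait until the standby is back up and ready, then:
RP/0/RSP0/CPU0:R01# redundancy switchover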

I just reloaded them one by one and I don't see the alert anymore. Thanks!! :)

I'm not sure about the longer run, though.

BTW, I forgot to enable some features, like:

1. isolation enable

2. nsr process-failures switchover

I am not sure whether applying those commands (shown below) will impact or interrupt traffic.
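
For context, both are entered in global configuration mode; a minimal sketch of how they would be applied, exactly as named above (this says nothing about their traffic impact):

RP/0/RSP0/CPU0:R01# configure
RP/0/RSP0/CPU0:R01(config)# isolation enable
RP/0/RSP0/CPU0:R01(config)# nsr process-failures switchover
RP/0/RSP0/CPU0:R01(config)# commit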