
%PLATFORM_MCAST-3-IPV4_ERR: error message NOT found in the documentation!

What are these messages? Is this a bug?

Platform: Catalyst 3750-X

IOS 15.2(2)E3

002864: 000430: Jul 15 03:54:05.035: %PLATFORM_MCAST-3-IPV4_ERR:  Cannot load multicast routing entry in hardware as corresponding TCAM region is full. (Condition hit 1 times since last warning)
002866: 000431: Jul 15 03:55:54.753: %PLATFORM_MCAST-5-IPV4_CLEAR:  All outstanding multicast routing entries loaded in hardware as TCAM space freed up.
002867: Jul 15 03:55:56.564: %PLATFORM_MCAST-5-IPV4_CLEAR:  All outstanding multicast routing entries loaded in hardware as TCAM space freed up.
002868: 000432: Jul 15 08:24:24.037: %PLATFORM_MCAST-3-IPV4_ERR:  Cannot load multicast routing entry in hardware as corresponding TCAM region is full. (Condition hit 121 times since last warning)
002869: Jul 15 08:24:41.323: %PLATFORM_MCAST-3-IPV4_ERR:  Cannot load multicast routing entry in hardware as corresponding TCAM region is full. (Condition hit 59 times since last warning)
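If you want to see how often this condition is firing over time, the "Condition hit N times since last warning" counter can be summed straight from the raw syslog. A minimal Python sketch (the regex and sample data are my own, not a Cisco tool):

```python
import re

# Sample error lines, copied from the syslog output above
LOGS = """\
002864: 000430: Jul 15 03:54:05.035: %PLATFORM_MCAST-3-IPV4_ERR:  Cannot load multicast routing entry in hardware as corresponding TCAM region is full. (Condition hit 1 times since last warning)
002868: 000432: Jul 15 08:24:24.037: %PLATFORM_MCAST-3-IPV4_ERR:  Cannot load multicast routing entry in hardware as corresponding TCAM region is full. (Condition hit 121 times since last warning)
002869: Jul 15 08:24:41.323: %PLATFORM_MCAST-3-IPV4_ERR:  Cannot load multicast routing entry in hardware as corresponding TCAM region is full. (Condition hit 59 times since last warning)
"""

# Match only the IPV4_ERR lines and capture the hit counter
HIT_RE = re.compile(r"%PLATFORM_MCAST-3-IPV4_ERR:.*Condition hit (\d+) times")

def total_hits(log_text: str) -> int:
    """Sum the 'Condition hit N times' counters across all error lines."""
    return sum(int(m.group(1)) for m in HIT_RE.finditer(log_text))

print(total_hits(LOGS))  # 1 + 121 + 59 = 181
```

A steadily growing total suggests a sustained shortfall rather than a transient burst.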

Mark Malone
VIP Mentor

It may not be a bug; it could be over-utilization of the switch's TCAM.

Can you post the output of:

show platform tcam utilization

show platform ip multicast counters

CAM Utilization for ASIC# 0                      Max            Used
                                             Masks/Values    Masks/values

 Unicast mac addresses:                       1756/1756        669/669   
 IPv4 IGMP groups + multicast routes:         1072/1072        364/364   
 IPv4 unicast directly-connected routes:      1536/1536        498/498   
 IPv4 unicast indirectly-connected routes:    1280/1280        273/273   
 IPv6 Multicast groups:                       1072/1072        168/168   
 IPv6 unicast directly-connected routes:      1536/1536        116/116   
 IPv6 unicast indirectly-connected routes:    1280/1280        153/153   
 IPv4 policy based routing aces:               256/256          14/14    
 IPv4 qos aces:                                512/512          33/33    
 IPv4 security aces:                           512/512          75/75    
 IPv6 policy based routing aces:               256/256           8/8     
 IPv6 qos aces:                                512/512          29/29    
 IPv6 security aces:                           512/512          18/18
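Each of those rows can be turned into a utilization percentage to see which region is actually under pressure. A quick hypothetical Python parse of that table format (abridged to three rows copied from the output above):

```python
import re

# Rows copied from the "show platform tcam utilization" output above
TABLE = """\
 Unicast mac addresses:                       1756/1756        669/669
 IPv4 IGMP groups + multicast routes:         1072/1072        364/364
 IPv4 unicast directly-connected routes:      1536/1536        498/498
"""

# Columns are "Max Masks/Values" followed by "Used Masks/values"
ROW_RE = re.compile(r"^\s*(.+?):\s+(\d+)/(\d+)\s+(\d+)/(\d+)\s*$", re.M)

def utilization(table: str) -> dict:
    """Map region name -> percent of 'Values' capacity in use."""
    out = {}
    for m in ROW_RE.finditer(table):
        name = m.group(1)
        max_values = int(m.group(3))   # Max "Values" column
        used_values = int(m.group(5))  # Used "values" column
        out[name] = 100.0 * used_values / max_values
    return out

for region, pct in utilization(TABLE).items():
    print(f"{region}: {pct:.1f}% used")
```

On these numbers the multicast region comes out around 34% used, which matters for the discussion below.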

switch#sh platform ip multicast count

Multicast Queue Rx Stats
RPF-Q: 12540781 packets, 93663675 bytes
SPT-Q: 44475641 packets, 1782931475 bytes

Multicast Route Stats
mroutes in mds  316 [101 (s,g) + 215 (*,G)]
hardware        315 (315)
mdb-retry       0 (0)
mroute-retry    0 (0)
free atm space  0

Multicast Route Stats
                alloc    free     reserve
mroute-no-mds            0       
mds-no-mroute            1219154
mroute-update   1219470  1219154
mroute-retry    1202153  0        1202153
dest-index      1219469  1219154  1219154
MED             1219469  1219154  0       
stn-index       1219469  1219154

Retry Flags
global   FALSE
mdb      FALSE
mroute   FALSE
direct   FALSE

                hl3mm_mroute_limit: 4294967295 hl3mm_mroute_count: 316     
snoop           enable: 0          disable: 0       
hmdb            allocked: 1219469  freed: 1219154    group-free: 143      nogdb: 0       
clearall        total: 0           hware: 0          retry: 0       
LC mismatch     lc-di: 0           lc-ri: 0          rp-di: 0             rp-ri: 0       
RP              mdb: 3             hmdb: 3           di: 3                ri: 3       
others          no_hmdb: 0         no_diri: 0        del-skip: 0          midb-no-mdb: 0       
free error      diri: 0            atm: 0       
throttles       rpffail: 0         cbtspt: 4714     vld-nfy-failures: 0       
direct intf     enabled            ipe cfg: disabled mvrf cfg: 0x0      
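One quick sanity check on those counters is whether the mroute count in MDS matches what is actually programmed in hardware; any shortfall is traffic being switched in software. A small hypothetical helper:

```python
def unprogrammed_mroutes(mds_total: int, hw_total: int) -> int:
    """Number of multicast routes known to software (MDS) but not
    installed in hardware TCAM; > 0 means some flows are software-switched."""
    return max(0, mds_total - hw_total)

# Values from the "show platform ip multicast counters" output above:
# "mroutes in mds  316" and "hardware  315 (315)"
print(unprogrammed_mroutes(316, 315))  # 1
```

Here only one entry was apparently missing from hardware at the time of the capture.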

Yes, the hardware is running at the maximum of what it can handle for multicast traffic. Unfortunately, this is common on switches; they can't handle what a router can for certain features.

There are really only two options: reduce the amount of multicast traffic, or get hardware that can handle more.

I think the SDM templates on the 3750 all allocate the same amount for multicast (about 1K), but you could also make sure you're on the best one with: show sdm prefer

IPv4 IGMP groups + multicast routes:         1072/1072        364/364   

It probably won't affect your traffic a huge amount (a bit slower due to the lookup), but the multicast traffic that can't be handled in TCAM will now be processed in software, which will ramp up the CPU a bit depending on how much more is coming through. The issue with that is the CPU is supposed to be reserved for processing traffic like syslog, etc., so now it's doing an extra job as well, which puts more pressure on it.

The switch is NOT running at max: max 1072, used 364!

Sorry, I was reading that wrong; I just glanced at the 1072. If the region is not maxed out and you're still getting alerts saying entries can't be loaded in hardware because the TCAM is full, you could have a software issue. Release 15.2(2)E3 is an early-deployment train; you could try the safe-harbor MD release, which is more thoroughly tested. It is an earlier release, but only by a month.