12-28-2022 02:00 AM
Hi
Has anybody else noticed this warning, which started this weekend, on their devices?
XXX : Security Intelligence URL: memcap exceeded (loaded 2167178 of 2939377)
This started showing up this Saturday, with no change to any policy/configuration, and only on low-memory/older devices (i.e. ASA 5516 running FTD and Firepower 1010). It is not affecting Firepower 1120 or above models.
One interesting observation is that the feed seems to be growing every day; excerpts are below, along with a quick way to keep tracking it yourself after them:
Time: Sat Dec 24 04:54:44 2022 UTC - Security Intelligence URL: memcap exceeded (loaded XXX of 2317133)
Time: Sat Dec 24 20:39:59 2022 UTC - Security Intelligence URL: memcap exceeded (loaded XXX of 2354548)
Time: Sun Dec 25 04:33:19 2022 UTC - Security Intelligence URL: memcap exceeded (loaded XXX of 2365381)
Time: Sun Dec 25 20:19:23 2022 UTC - Security Intelligence URL: memcap exceeded (loaded XXX of 2413343)
Time: Mon Dec 26 04:14:15 2022 UTC - Security Intelligence URL: memcap exceeded (loaded XXX of 2444498)
Time: Mon Dec 26 19:59:33 2022 UTC - Security Intelligence URL: memcap exceeded (loaded XXX of 2612033)
Time: Tue Dec 27 03:49:22 2022 UTC - Security Intelligence URL: memcap exceeded (loaded XXX of 2667956)
Time: Tue Dec 27 19:37:55 2022 UTC - Security Intelligence URL: memcap exceeded (loaded XXX of 2891657)
Time: Wed Dec 28 03:32:46 2022 UTC - Security Intelligence URL: memcap exceeded (loaded XXX of 2939377)
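If you want to keep tracking this growth yourself, something along these lines should work from the expert-mode shell (assuming the feed files live under /var/sf/siurl_download; the /var/tmp/siurl_growth.log path is just an example):
# append a timestamped entry count for the URL SI feed, then review the trend
echo "$(date -u '+%F %T') $(cat /var/sf/siurl_download/*.lf | wc -l)" >> /var/tmp/siurl_growth.log
cat /var/tmp/siurl_growth.log
Run once a day and the counts should show the same steady increase as the warnings above.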
01-03-2023 06:24 AM
I too came in today to find this message. I can confirm 100% that this error was not showing up on Sunday; it is now sitting at "loaded 1914588 of 33533414" on my 5516-X running 7.0.4 FMC & FTD. I would rather not shut off SI feeds to fix this.
01-03-2023 08:48 AM
I'm seeing the same issue on all of our 1010s today.
Security Intelligence URL: memcap exceeded (loaded 1913477 of 3535927)
Is there a workaround for this? We are running 7.0.4 on FMC and the FTDs.
01-03-2023 08:54 AM
No real workaround; the only option is to reduce which feeds you are using in your access policy. And open a TAC case to get them to prioritize this ASAP. Users with 1010 and newer models should be able to get past TAC blaming it on old hardware not being able to handle things.
01-04-2023 03:06 AM
I am really leaning toward saying that something went wrong with the feeds and is causing this problem.
You can check the feeds on your own:
find /var/sf/siurl_download/. -name '*' | xargs wc -l - this command shows which feeds consume the most entries.
That lets you check the content of specific feeds. Doing so, you can see that some domains are duplicated, while for others there is a main-domain entry as well as separate subdomain/path entries (see the quick check after the examples below).
Ideally, if more people raise a TAC case for this, maybe it will get the necessary attention from the proper team on Cisco's side.
Examples are below:
cat /var/sf/siurl_download/./23f2a124-8278-4c03-8c9d-d28fe08b8fc9.lf | grep stilogo.it
stilogo.it/mobile/authentication/
stilogo.it/mobile/authentication
cat /var/sf/siurl_download/./23f2a124-8278-4c03-8c9d-d28fe08b8fc9.lf | grep csgo-collect.com
csgo-collect.com
csgo-collect.com/twltch
csgo-collect.com/twitch
csgo-collect.com/get
csgo-collect.com/g
csgo-collect.com/cdn-cgi/phish-bypass?atok=_imlix8xsgqllgsckwlnfys6mwbw0ugftlguymful8s-1671136044-0-/g
csgo-collect.com/go
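If you want to gauge how much of a feed is made up of exact duplicates or same-host variants like the ones above, a rough check on that same file could be (exact numbers will vary):
# exact duplicate lines in the feed
sort /var/sf/siurl_download/./23f2a124-8278-4c03-8c9d-d28fe08b8fc9.lf | uniq -d | wc -l
# total entries vs. unique hostnames (everything before the first '/')
wc -l < /var/sf/siurl_download/./23f2a124-8278-4c03-8c9d-d28fe08b8fc9.lf
cut -d/ -f1 /var/sf/siurl_download/./23f2a124-8278-4c03-8c9d-d28fe08b8fc9.lf | sort -u | wc -l
A large gap between the last two counts means many entries are just different paths on the same host.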
01-04-2023 07:32 AM
I've been seeing this issue with my FMCv-managed FP1010 running 7.0.5 for almost 2 weeks. I'm waiting to see if a resolution gets posted before I contact TAC.
01-05-2023 07:21 AM
I am seeing this as well across all of my customers, on versions from 6.5.5 to 7.1.2.
01-05-2023 12:23 PM
I've opened a TAC ticket for this issue and am waiting to see what the resolution will be.
01-09-2023 07:32 AM
Commenting on this to stay in the loop. Also having this issue with SFR modules version 6.5.0 on ASA 5525s.
01-10-2023 05:32 AM
Pair of 5525-X with Firepower. FMC is 6.4.0.14. Same issue, memcap exceeded.
01-10-2023 05:58 AM
Same problem here on four 5545-X running Firepower 6.6.4; FMC is 6.6.7.
01-12-2023 09:49 AM
There's another bug ID here that references 1010s as a concern because they aren't old devices: https://bst.cloudapps.cisco.com/bugsearch/bug/CSCwe00961
Based on a TAC case I opened today, they're saying it's a problem that TALOS is aware of and they will be fixing the SI feeds. They didn't provide more details than that, but I think it's likely that the abnormal growth of the feed (and duplicate entries) that was already mentioned in this thread is the root cause.
01-13-2023 03:15 AM
Hi Colin,
Yes, after opening a TAC case, I received the same information about TALOS looking to reduce the SI feed because of the memcap issue.
Since the issue is widespread, I would expect TALOS to address the SI feed issue very soon.
I was advised the system is hitting the following Cisco Bug IDs.
01-13-2023 05:14 AM
I have received basically the same reply via my TAC case.
From the Cisco TAC engineer:
"I understand, the work around provided would not be a feasible option. Internally within Cisco, we have engaged the Engineering Team and TALOS to look over this issue and they are actively working on it. I will let you know as soon as I have an update for you."
01-14-2023 12:38 PM - edited 01-14-2023 01:16 PM
It looks like the URL SI feed contains all the data from the DNS SI feed, so they are about the same size (as of Jan 14 2023):
root@fpr1010:sf# cat sidns_download/*.lf | sort | uniq | wc -l
2830418
root@fpr1010:sf# cat siurl_download/*.lf | sort | uniq | wc -l
3126993
root@fpr1010:sf# cat sidns_download/*.lf siurl_download/*.lf | sort | uniq | wc -l
3127242
I was expecting the combined unique entries of the two SI feeds to be at least 50% higher than the URL SI feed alone; however, the combined total is barely larger than the URL SI feed by itself.
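As a cross-check, comm can show how many DNS SI entries are actually missing from the URL SI feed (run from the same /var/sf directory; comm needs sorted input, hence the sort -u):
comm -23 <(sort -u sidns_download/*.lf) <(sort -u siurl_download/*.lf) | wc -l
Given the counts above, that should come out to only about 249 entries (3127242 combined minus 3126993 in the URL feed), i.e. almost everything in the DNS feed is also in the URL feed.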
I can see why Cisco/Talos decided to do so: some users may use only one of these feeds, so the more inclusive each feed is, the more likely it is to do its job. However, for those of us using both feeds, it exceeds the processing capacity of certain platforms, and I have no idea whether, given this error, we are getting any protection at all. Hence the workaround suggested by some TAC engineers of using only one of the two SI feeds.
I would think DNS should take precedence over URL queries - after all, you have to issue a DNS query before you can open a URL. At the same time, DNS queries can now be made over TLS/HTTPS, so they fall into a different processor/queue. If you do not allow or expect DoT/DoH, or if you do not do SSL inspection, keeping only the DNS SI feed should be enough. If you do inspection and/or expect DoT/DoH, then the URL SI feed may be your better choice: since it is slightly bigger, it may be more inclusive, so you could catch a few things missing from the DNS SI feed without blocking access to the entire host.
Bottom line: the Cisco engineering team has to come up with a way for both lists to be used and perhaps shared, instead of duplicated between internal processes.
The catch, however, is that using only one of the two feeds still produces errors, so it is not just the size of the feeds but also something in their contents and/or the file processing. Log excerpt below, and a quick grep to check your own sensor after it:
Jan 14 20:49:33 fpr1010 ActionQueueScrape.pl[29314]: - Start offline processing of manifest
Jan 14 20:49:34 fpr1010 ActionQueueScrape.pl[29314]: SF::DataService::Util::downloadFile failure: Failed to copy file from manager: Failed at /ngfw/usr/local/sf/lib/perl/5.32.1/SF/DataService/Util.pm line 898.
Jan 14 20:49:35 fpr1010 ActionQueueScrape.pl[29314]: SF::DataService::Util::downloadFile failure: Failed to copy file from manager: Failed at /ngfw/usr/local/sf/lib/perl/5.32.1/SF/DataService/Util.pm line 898.
Jan 14 20:49:36 fpr1010 ActionQueueScrape.pl[29314]: SF::DataService::Util::downloadFile failure: Failed to copy file from manager: Failed at /ngfw/usr/local/sf/lib/perl/5.32.1/SF/DataService/Util.pm line 898.
Jan 14 20:49:37 fpr1010 ActionQueueScrape.pl[29314]: SF::DataService::Util::downloadFile failure: Failed to copy file from manager: Failed at /ngfw/usr/local/sf/lib/perl/5.32.1/SF/DataService/Util.pm line 898.
Jan 14 20:49:38 fpr1010 ActionQueueScrape.pl[29314]: SF::DataService::Util::downloadFile failure: Failed to copy file from manager: Failed at /ngfw/usr/local/sf/lib/perl/5.32.1/SF/DataService/Util.pm line 898.
Jan 14 20:49:39 fpr1010 ActionQueueScrape.pl[29314]: SF::DataService::Util::downloadFile failure: Failed to copy file from manager: Failed at /ngfw/usr/local/sf/lib/perl/5.32.1/SF/DataService/Util.pm line 898.
Jan 14 20:49:39 fpr1010 ActionQueueScrape.pl[29314]: - Offline processing of manifest failed
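If you want to check whether your own sensor is logging the same failures, a grep on the system log should surface them (assuming the usual /ngfw/var/log/messages location; adjust the path if your syslog lands elsewhere):
grep 'ActionQueueScrape' /ngfw/var/log/messages | grep -i 'fail' | tail -20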
Please note I have no ties with Cisco other than being a curious customer, so use your best judgement and take the above with a grain of salt. I still hope this helps with decision making until an official fix is out.
01-16-2023 03:28 AM
Just keen to know whether anyone is still observing the SI memcap error message.