I thought I'd add the commands I've used so far to isolate the issue. A high Dispatch Unit process isn't very conclusive on its own; it simply indicates the ASA is processing a large amount of traffic. The commands below will help you determine the source, destination, and service type of the traffic the ASA is processing so that you can take action.
show xlate count
show conn count
show memory detail
show processes cpu-usage
show processes cpu-hog
show processes memory
In my case so far, I haven't had any one thing causing the high CPU. The biggest drop I have seen came from removing netbios inspection from my service policy during a CPU spike to 95% and an hour of sustained CPU above 90%. I had cleared the majority of my counters and noticed that the number of netbios packets matching the service policy was increasing at a rate far higher than the rest. Once I removed netbios, the CPU immediately dropped more than 15% and continued to fall over a period of time.
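For anyone following along, here is a rough sketch of how I checked the per-inspection counters and removed netbios. This assumes the default global_policy / inspection_default names; adjust to match your own configuration:

```
! Show per-inspection packet counters to see which engine is matching the most traffic
show service-policy

! Remove netbios inspection from the default global policy
policy-map global_policy
 class inspection_default
  no inspect netbios
```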
So really the only advice I can give is to be familiar with the traffic traversing your ASA. The commands listed above will help isolate possible sources and causes. It could be legitimate production traffic or malicious.
Hope this helps.
That is indeed very true, but that only alleviates the CPU processing; it does not get rid of the underlying issue. Based on the ICMP messages and netbios, there could be loops with packets bouncing back and forth, hence the high number of inspection hits. On top of the packet processing on the interface itself, the inspection adds another CPU-intensive load.
As you rightly pointed out, the main thing in these cases is to first identify the traffic and then check whether it is normal or not.
I like where you are headed with your point. I didn't consider the possibility of a loop. I would post the output of show conn but that's over 100,000 lines. For me to remove sensitive information and post here would be a bit much. Any other ideas on how to go about it?
Open a capture on the interface and verify the packets reaching it. The capture buffer takes a fixed amount of resources, so it shouldn't cause major issues.
Check the capture, find the most frequently used IPs, and look at the number of bytes on each connection and how long they have been up — not so much the number of connections.
Let me know when you have collected this info.
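A minimal capture sketch along those lines (the interface name and buffer size here are assumptions; adjust to your setup):

```
! Capture on the inside interface with a fixed-size buffer
capture capin interface inside buffer 1000000

! Review what is hitting the interface, then remove the capture
show capture capin
no capture capin
```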
Does this look odd? (a little sarcasm there)
UDP X.X.X.255:137 X.X.X.73:60660, idle 0:00:00, bytes 443113450, flags -
UDP X.X.X.255:137 X.X.X.73:51353, idle 0:00:00, bytes 1168911250, flags -
The really odd part of this is that the source and destination are on subnets that aren't directly connected to the ASA, but that the ASA has a route for.
Mike your loop thought just might be right.
Been doing this for a long time now... that is not normal at all. Try clearing the connections just for these hosts and check the CPU again.
clear conn address x.x.x.x
If you want to add the port at the end you can do it as well.
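For example, a sketch of clearing the connections for one of the hosts above, optionally narrowed by port (exact syntax can vary slightly by ASA version):

```
! Clear all connections for a single host
clear conn address X.X.X.73

! Or narrow it to the suspect netbios port
clear conn address X.X.X.73 port 137
```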
So of course when I attempted to clear the connection, it no longer existed. My CPU is at 48%, which is great! I have a feeling that this connection will return, and so will the high CPU.
I also wanted to expand on how I found the address (I didn't sort through 100K+ connections). I actually used the ASDM, which I hate to admit. I enabled the top usage stats, and it just so happened that 99% of the traffic in the top destinations was destined for a broadcast address. I then checked the connections for that host. The tricky thing is that I had seen this before, but I was so focused on connection count that I didn't bother looking at the size. It wasn't until Mike mentioned the size that I took a look.
I'll wait for its return, but as of now I think I've found the culprit.
Finally had a chance to circle back to the thread. It turns out there was a routing loop. I wasn't aware of a separate pair of ASAs used strictly for RA VPN termination. The two pairs of ASAs had static routes that caused a loop when a specific situation occurred.
I jumped too quickly to thinking it was a firewall issue when it was simple routing.
I have the same issue with my ASA 5520. Can you post the static routes that caused the loop in your situation? It would give me an idea of your scenario to compare with mine.
I'll try to explain my scenario to see if it helps. We had two ASAs connected together by one subnet (a transit network). Each had separate subnets behind it, but each also had one interface on the shared subnet. At some point, static routes were placed on both sides to reach subnets present on the other ASA, including one for the subnet that both ASAs resided on. A host on the shared segment sent out a broadcast that wasn't viewed as a broadcast, because the subnet was supernetted on the ASAs but subnetted on the host side (some really odd stuff going on). So each ASA would try to route the packet back to the other ASA. This was flagged as a design flaw.
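To illustrate the mirrored statics that created the loop (interface name and all addresses here are hypothetical, not my actual config):

```
! ASA-1, transit interface 192.0.2.1
route transit 10.0.0.0 255.0.0.0 192.0.2.2

! ASA-2, transit interface 192.0.2.2
route transit 10.0.0.0 255.0.0.0 192.0.2.1

! A packet destined for an address inside 10.0.0.0/8 that neither ASA
! actually owns gets forwarded back and forth between the two next hops.
```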
Hope that explanation somehow helps.
The issue was that my device was hitting the top of the throughput specified in the datasheet, because the amount of traffic passing across the device during production hours is too much. We will migrate some applications to another device to get better performance on both devices.
I'm having the same problem here. I'm not really into security; I'm more into routers and switches, but our Cisco ASA 5520's CPU usage drastically increased up to 93%. I already saw your posts, but I'm not familiar with the commands.
I already started a topic here and sent these results:
According to the person who replied, the Dispatch Unit is running high. May I ask for your assistance?
What information is needed, so I can send it?
I ran into this high-CPU issue as well on an ASA 5540 with only about 150 users. If you are using large-modulus operations, including large-key-size certificates and a higher Diffie-Hellman group, they can cause heavy processing.
Since the default method of processing these operations is software-based, it causes higher CPU usage and also slower SSL/IPsec connection establishment.
If this is the scenario for you, use hardware-based processing by using the following configuration:
"crypto engine large-mod-accel"
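As a sanity check (a sketch; verify the exact syntax on your platform and software version), you can apply the command and then confirm it took effect and look at the crypto hardware counters:

```
! Offload large-modulus crypto operations to hardware
crypto engine large-mod-accel

! Verify the setting and inspect hardware crypto activity
show running-config crypto
show crypto accelerator statistics
```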
I had a similar issue on my end with my firewall's CPU pinned at 95%, and the service policy showed no drops at all. What helped me was show asp drop: I kept seeing inspect-fail counts incrementing. I ran three packet capture samples for the asp-drop type only (command: capture cap type asp-drop all) and exported them for further analysis. I was able to deduce that we kept getting bogus DNS queries inbound from various subnets on the internet. This was due to an ACL line that allowed any internet traffic inbound to our public DNS servers. I created a network object-group to block only those sources, and my firewall's CPU dropped to 54%. Hope this helps someone.
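A rough sketch of the approach, in case it helps someone else. The ACL name, object-group name, and addresses below are made up for illustration; substitute the offending subnets you find in your own asp-drop capture:

```
! Sample what the ASA is dropping, for offline analysis
capture aspdrop type asp-drop all
show capture aspdrop

! Block the abusive source subnets ahead of the broad permit to the DNS servers
object-group network DNS_ABUSE_SOURCES
 network-object 198.51.100.0 255.255.255.0
 network-object 203.0.113.0 255.255.255.0
access-list outside_access_in line 1 deny udp object-group DNS_ABUSE_SOURCES any eq domain
```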