
Dispatch Unit - High CPU

edward ventura
Level 1

Hi All,

I'm trying to do some research on the Dispatch Unit process.  It seems high CPU and this process go hand in hand.  I haven't figured out an effective way of determining what underlying issue is the actual source.  Can someone point me in the right direction to try and understand what the Dispatch Unit process is doing?  I have an ASA 5550.  I have seen the CPU hover around 85% +/- 5% for sustained periods, 30-60 minutes or more.  I have always been under the impression that at around 80% CPU you're probably dropping packets (that could be an outdated belief).

Any help to understand this is much appreciated.


62 Replies

edward ventura
Level 1

I thought I'd add the commands I've used thus far to isolate the issue.  A high Dispatch Unit reading isn't very conclusive on its own; it mostly means the ASA is processing a large amount of traffic.  The commands below will help determine the source/destination/service type of the traffic the ASA is processing so that you can take action.

show xlate count

show conn

show conn count

show block

show memory detail

show processes cpu-usage

show processes cpu-hog

show processes memory

show interface

show traffic

show mroute

show local-host
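
One tip for reading these outputs: baseline first by clearing the counters, waiting a few minutes, and comparing rates rather than absolute numbers.  Roughly (and assuming you're OK with resetting the traffic counters):

clear traffic

show traffic

show service-policy

The show service-policy output includes per-inspection packet counts, which is how you can spot one protocol growing at a much faster rate than the rest.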

In my case so far, I haven't had any one thing causing the high CPU.  The biggest drop I've seen was from removing netbios from my service policy during a CPU spike to 95% and an hour of sustained CPU at 90%+.  I had cleared the majority of my counters and noticed that the number of netbios packets matching the service policy was increasing at a rate far higher than the rest.  Once I removed netbios, the CPU immediately dropped over 15% and continued to drop over time.
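
For anyone wanting to do the same, removing the netbios inspection from the default global policy looks roughly like this (this assumes the default global_policy / inspection_default names; yours may differ):

policy-map global_policy

 class inspection_default

  no inspect netbios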

So really the only advice I can give is be familiar with the traffic traversing your ASA.  The listed commands above will help to isolate possible sources/causes.  It could be legitimate production traffic or malicious.

Hope this helps.

Hello Edward,

That is indeed very true, but that only alleviates the CPU load; it does not get rid of the underlying issue.  Based on the ICMP messages and netbios, there could be loops with packets bouncing around, hence the high number of inspection hits.  On top of the packet processing on the interface itself, the inspection adds another CPU-intensive task.

As you rightly pointed out, the main thing in these cases is to first identify the traffic and then check whether it is normal or not.




I like where you are headed with your point.  I didn't consider the possibility of a loop.  I would post the output of show conn but that's over 100,000 lines.  For me to remove sensitive information and post here would be a bit much.  Any other ideas on how to go about it?


Open a capture on the interface and verify the packets reaching it.  The capture buffer takes a fixed amount of resources, so it shouldn't cause major issues.

Check the capture, find the most frequently seen IPs, and look at the number of bytes on each connection and how long each connection has been up, rather than the sheer number of connections.

Let me know when you have collected this info.
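
For reference, a minimal capture setup might look like this (the interface name "inside", the capture name, and the buffer size are just examples):

capture capin interface inside buffer 512000

show capture capin

no capture capin

Remove the capture with "no capture" when you're done so the buffer is freed.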



Does this look odd? (a little sarcasm there)

UDP X.X.X.255:137 X.X.X.73:60660, idle 0:00:00, bytes 443113450, flags -

UDP X.X.X.255:137 X.X.X.73:51353, idle 0:00:00, bytes 1168911250, flags -

The really odd part of this is that the source and destination are on subnets that aren't directly connected to the ASA, but that the ASA has a route for.

Mike your loop thought just might be right.

Been doing this for a long time now... Not normal at all.  Try clearing the connections for just these hosts and check the CPU again.

clear conn address x.x.x.x

If you want to add the port at the end you can do it as well.
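
For example, to clear only the connections from one host on UDP port 137 (the address here is a placeholder):

clear conn address 10.1.1.73 port 137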



So of course, when I attempt to clear the connection, it no longer exists.  My CPU is at 48%, which is great!  I have a feeling that this connection will return, and so will the high CPU.

I also wanted to further expand on how I found the address (I didn't sort through 100K+ connections).  I actually used the ASDM, which I hate to admit.  I enabled the top usage stats.  It just so happened that in the top destinations, 99% of traffic was destined for a broadcast address.  I then checked the connections for that host.  The tricky thing is that I had seen this before, but I was so focused on connection count that I didn't bother looking at the size.  It wasn't until Mike mentioned the size that I took a look.

I'll wait for its return, but as of now I think I've found the culprit.

Alright Edward,

Sounds great. Glad we were able to help a bit.



Finally had a chance to circle back to the thread.  Come to find out, there was a routing loop.  I wasn't aware of a separate pair of ASAs used strictly for RA VPN termination.  The two pairs of ASAs had static routes that caused a loop when a specific situation occurred.

I jumped too quickly to thinking it was a firewall issue when it was simple routing.

Thanks Mike!

Hi Edward,

I have the same issue with my ASA 5520.  Can you post the static routes that caused the loop in your situation, to give me an idea of your setup so I can compare it with my scenario?

Thanks. Regards


I'll try to explain my scenario to see if it helps.  We had two ASAs that were connected by one subnet (a transit network).  Each had separate subnets behind it, but each had one interface on that shared subnet.  At some point, static routes were placed on both sides to reach subnets behind the other ASA, and one of those routes covered the subnet that both ASAs resided on.  A host sent out a broadcast on the shared segment that wasn't viewed as a broadcast, because the subnet was supernetted on the ASAs but subnetted on the host side (some really odd stuff going on).  So each of the ASAs would try to route the packet back to the other ASA.  This was flagged as a design flaw.

Hope that explanation somehow helps.
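
To make the loop concrete with made-up addresses (illustrative only, not my actual config): suppose the ASAs' transit interfaces are 10.1.0.1 and 10.1.0.2 with a /16 mask, while the hosts on the segment are configured as 10.1.1.0/24, so 10.1.1.255 is a broadcast to them but an ordinary host address to the ASAs.  If each ASA also has a more-specific static route for that /24 pointing at the other:

On ASA-A: route transit 10.1.1.0 255.255.255.0 10.1.0.2

On ASA-B: route transit 10.1.1.0 255.255.255.0 10.1.0.1

...then a packet to 10.1.1.255 matches the /24 static route (more specific than the connected /16) on both sides, and the two ASAs forward it back and forth until the TTL expires.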



Thanks Edward,

The issue was that my device had reached the maximum throughput specified in the datasheet, because the amount of traffic passing across the device during production hours is too much.  We will migrate some applications to another device for better performance on both devices.