Dispatch Unit - High CPU

edward ventura
Level 1

Hi All,

I'm trying to do some research on the Dispatch Unit process.  It seems high CPU and this process go hand in hand.  I haven't figured out an effective way of determining what underlying issue is the actual source.  Can someone point me in the right direction to try and understand what the Dispatch Unit process is doing?  I have an ASA 5550.  I have seen the CPU hover around 85% +/- 5% for sustained periods of 30-60 minutes or more.  I have always been under the impression that at around 80% CPU you're probably dropping packets (that could be an outdated belief).

Any help to understand this is much appreciated.

-E

62 Replies

edward ventura
Level 1

I thought I'd add the commands I've used thus far to isolate the issue.  A high Dispatch Unit process isn't very conclusive on its own, other than showing that the ASA is processing a large amount of traffic.  The commands below will help determine the source, destination, and service type of the traffic the ASA is processing so that you can take action (a quick filtering example follows the list).

show xlate count

show conn

show conn count

show block

show memory detail

show processes cpu-usage

show processes cpu-hog

show processes memory

show interface

show traffic

show mroute

show local-host
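
As a small example of putting a couple of these together, this is roughly how I filtered things down once I had a suspect host; the piped address below is just a placeholder, with the same masking used elsewhere in this thread:

show processes cpu-usage non-zero

show traffic

show conn | include x.x.x.73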

In my case so far, no single thing has been causing the high CPU.  The biggest percentage drop I have seen so far came from removing netbios from my service policy during a CPU spike to 95% and an hour of sustained CPU above 90%.  I had cleared the majority of my counters and noticed that the number of netbios packets matching the service policy was increasing at a rate far higher than the rest.  Once I removed netbios, the CPU dropped over 15% immediately and continued to drop over a period of time.
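
For anyone who wants to do the same, this is roughly what pulling netbios inspection out of the policy looks like, assuming the default global_policy and inspection_default names (adjust for your own policy names):

policy-map global_policy
 class inspection_default
  no inspect netbios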

So really the only advice I can give is to be familiar with the traffic traversing your ASA.  The commands listed above will help isolate possible sources and causes.  It could be legitimate production traffic or something malicious.

Hope this helps.

Hello Edward,

That is indeed very true, but that only alleviates the CPU load; it does not get rid of the underlying issue. Based on the ICMP messages and netbios traffic, there could be loops and packets bouncing around, hence the high number of inspection hits. On top of the packet processing on the interface itself, the inspection adds another CPU-intensive load.

As you rightly pointed out, the main thing in these cases is to first identify the traffic and then check whether it is normal or not.

Mike


Mike,

I like where you are headed with your point.  I didn't consider the possibility of a loop.  I would post the output of show conn but that's over 100,000 lines.  For me to remove sensitive information and post here would be a bit much.  Any other ideas on how to go about it?

Eddie

Open a capture on the interface and verify the packets reaching it. The capture buffer takes a fixed amount of resources, so it shouldn't cause major issues.

Check the capture, see the most frequently used IPs, and look at the number of bytes on each connection and how long they have been up, not so much how many connections there are.
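
Something like this as a rough sketch; the capture name and interface name are just examples, so adjust for your setup. The last command removes the capture once you are done so it stops using the buffer:

capture capin interface inside match ip any any

show capture capin

no capture capin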

Let me know when you collect this info.

Mike

Mike

Does this look odd? (a little sarcasm there)

UDP X.X.X.255:137 X.X.X.73:60660, idle 0:00:00, bytes 443113450, flags -

UDP X.X.X.255:137 X.X.X.73:51353, idle 0:00:00, bytes 1168911250, flags -

The really odd part is that the source and destination are on subnets that aren't directly connected to the ASA, but there is a route for them.

Mike your loop thought just might be right.

Been doing this for a long time now... Not normal at all. Try clearing the connections just for these hosts and check the CPU again.

clear conn address x.x.x.x

If you want to narrow it further, you can add the port at the end as well.
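
For example, with placeholder values for the address and port:

clear conn address x.x.x.x port 1234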

Mike.

Mike

So of course when I attempted to clear the connection, it no longer existed.  My CPU is at 48%, which is great!  I have a feeling that this connection will return, and so will the high CPU.

I also wanted to expand on how I found the address (I didn't sort through 100K+ connections).  I actually used the ASDM, which I hate to admit, and enabled the top usage stats.  It just so happened that in the top destinations, 99% of the traffic was destined for a broadcast address.  I then checked the connections for that host.  The tricky thing is that I had seen this before, but I was so focused on connection count that I didn't bother looking at the size.  It wasn't until Mike mentioned the size that I took a look.
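
If you'd rather stay in the CLI than use ASDM, these two should give a similar view for a single suspect host (same address masking as above):

show conn address x.x.x.73

show local-host x.x.x.73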

I'll wait for its return, but as of now I think I've found the culprit.

Alright Edward,

Sounds great. Glad we were able to help a bit.

Mike

Mike

Finally had a chance to circle back to the thread.  Come to find out, there was a routing loop.  I wasn't aware of a separate pair of ASAs used strictly for RA VPN termination.  The two pairs of ASAs had static routes that caused a loop when a specific situation occurred.

I jumped too quickly to thinking it was a firewall issue when it was simply a routing problem.

Thanks Mike!

Hi Edward,

I have the same issue with my ASA 5520. Can you post the static routes that caused the loop in your situation? That would give me an idea of your setup so I can compare it with my scenario.

Thanks. Regards

Hector,

I'll try to explain my scenario to see if it helps.  We had two ASAs connected together by one subnet (a transit network).  Each had its own subnets behind it, but each also had one interface on that shared subnet.  At some point static routes were placed on both sides to reach the subnets behind the other ASA, and one of those routes covered the subnet that both ASAs resided on.  A host sent out a broadcast on the shared segment between the two ASAs that wasn't treated as a broadcast, because the subnet was supernetted on the ASAs but subnetted on the host side (some really odd stuff going on).  So each ASA would try to route the packet back to the other ASA.  This was flagged as a design flaw.
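
Just as a rough illustration (the interface name and addressing here are made up, not our real config), the loop essentially came down to two static routes for the same prefix pointing at each other across the transit segment:

On ASA-A:  route transit 10.20.0.0 255.255.0.0 10.0.0.2
On ASA-B:  route transit 10.20.0.0 255.255.0.0 10.0.0.1

Any packet destined for an address in 10.20.0.0/16 that neither ASA actually owned would just bounce back and forth between 10.0.0.1 and 10.0.0.2, which shows up as Dispatch Unit load on both boxes.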

Hope that explanation somehow helps.

Regards,

Eddie

Thanks Edward,

The issue was that my device had reached the maximum throughput specified in the datasheet, because the amount of traffic passing through the device during production hours is too high. We will migrate some applications to another device to get better performance out of both devices.

Hi,

I'm having the same problem here. I'm not really into security; I'm more on routers and switches, but our Cisco ASA 5520's CPU usage has drastically increased, up to 93%. I already saw your posts, but I'm not familiar with the commands.

I already started a topic here and sent these results:

INTFW# show process cpu-usage non-zero

PC         Thread       5Sec     1Min     5Min   Process

081aa324   6bdaf870    78.6%    79.0%    78.9%   Dispatch Unit

08bd08d6   6bda9210     5.6%     5.6%     5.6%   Logger

INTFW# show process cpu-h

Process:      snp flow bulk sync, PROC_PC_TOTAL: 12, MAXHOG: 16, LASTHOG: 16

LASTHOG At:   11:27:08 PHST Aug 8 2011

PC:           86badfe (suspend)

Process:      vpnfol_sync/Bulk Sync - Import , NUMHOG: 23, MAXHOG: 6, LASTHOG: 6

LASTHOG At:   11:27:17 PHST Aug 8 2011

PC:           80635a5 (suspend)

Traceback:    80635a5  8d9ff96  8062413

Process:      vpnfol_sync/Bulk Sync - Import , PROC_PC_TOTAL: 23, MAXHOG: 5, LASTHOG: 5

LASTHOG At:   11:27:17 PHST Aug 8 2011

PC:           8da1592 (suspend)

Process:      vpnfol_sync/Bulk Sync - Import , NUMHOG: 23, MAXHOG: 5, LASTHOG: 5

LASTHOG At:   11:27:17 PHST Aug 8 2011

PC:           8da1592 (suspend)

Traceback:    8da1c7e  8d9ff8f  8062413

Process:      ssh_init, PROC_PC_TOTAL: 4, MAXHOG: 4, LASTHOG: 3

LASTHOG At:   07:41:20 PHST Aug 18 2011

PC:           806dcd5 (suspend)

Process:      ssh_init, NUMHOG: 4, MAXHOG: 4, LASTHOG: 3

LASTHOG At:   07:41:20 PHST Aug 18 2011


PC:           806dcd5 (suspend)

Traceback:    8b9d3e6  8bab837  8ba024a  8062413

Process:      ssh_init, PROC_PC_TOTAL: 90801, MAXHOG: 5, LASTHOG: 2

LASTHOG At:   04:47:28 PHST Apr 5 2012

PC:           8b9ac8c (suspend)

Process:      ssh_init, NUMHOG: 90801, MAXHOG: 5, LASTHOG: 2

LASTHOG At:   04:47:28 PHST Apr 5 2012

PC:           8b9ac8c (suspend)

Traceback:    8b9ac8c  8ba77ed  8ba573e  8ba58e8  8ba6971  8ba02b4  8062413

Process:      telnet/ci, PROC_PC_TOTAL: 1, MAXHOG: 3, LASTHOG: 3

LASTHOG At:   08:43:18 PHST Apr 16 2012

PC:           8870ba5 (suspend)

Process:      telnet/ci, NUMHOG: 1, MAXHOG: 3, LASTHOG: 3

LASTHOG At:   08:43:18 PHST Apr 16 2012

PC:           8870ba5 (suspend)

Traceback:    8870ba5  9298bf1  92789fe  9279191  80ca7e7  80cacbb  80c14b5

              80c1c5f  80c2da6  80c3850  8062413

Process:      Unicorn Proxy Thread, PROC_PC_TOTAL: 5, MAXHOG: 3, LASTHOG: 2

LASTHOG At:   20:23:09 PHST Apr 27 2012

PC:           8c0e8e5 (suspend)

Process:      Unicorn Proxy Thread, NUMHOG: 5, MAXHOG: 3, LASTHOG: 2

LASTHOG At:   20:23:09 PHST Apr 27 2012

PC:           8c0e8e5 (suspend)

Traceback:    8c0e8e5  8c23428  8c24561  8cff99d  8cfdb0c  8cf9f81  8cf9ef5

              8cfa9b0  8cec6c9  8cebf7b  8cec22c  8ce5e2f  8d00cfb  8d01d67

Process:      Unicorn Proxy Thread, PROC_PC_TOTAL: 12, MAXHOG: 5, LASTHOG: 4

LASTHOG At:   20:23:09 PHST Apr 27 2012

PC:           8c2bb4d (suspend)

Process:      Unicorn Proxy Thread, NUMHOG: 12, MAXHOG: 5, LASTHOG: 4

LASTHOG At:   20:23:09 PHST Apr 27 2012

PC:           8c2bb4d (suspend)

Traceback:    8c2bb4d  8c0ef7a  8c11576  8c11625  8c12748  8c140f8  8c0f074

              8c23bae  8f2f1f1  8062413

Process:      vpnfol_sync/Bulk Sync - Import , PROC_PC_TOTAL: 488, MAXHOG: 100, LASTHOG: 2

LASTHOG At:   02:44:29 PHST May 6 2012

PC:           80635a5 (suspend)

Process:      ssh_init, NUMHOG: 461, MAXHOG: 3, LASTHOG: 2

LASTHOG At:   02:44:29 PHST May 6 2012

PC:           80635a5 (suspend)

Traceback:    80635a5  8133d0b  9224474  923d3c8  9239045  9238e95  9226f50

              92263d8  92158bf  920530c  922564a  92254c1  9214606  92050bc

Process:      snmp, PROC_PC_TOTAL: 52, MAXHOG: 3, LASTHOG: 3

LASTHOG At:   12:39:15 PHST May 7 2012

PC:           8b37300 (suspend)

Process:      snmp, NUMHOG: 52, MAXHOG: 3, LASTHOG: 3

LASTHOG At:   12:39:15 PHST May 7 2012

PC:           8b37300 (suspend)

Traceback:    8b37300  8b35d27  8b32e39  8b358c8  8b10b5e  8b0f7bc  8062413

Process:      ssh_init, PROC_PC_TOTAL: 43117, MAXHOG: 4, LASTHOG: 2

LASTHOG At:   10:35:55 PHST May 8 2012

PC:           83cf301 (suspend)

Process:      ssh_init, NUMHOG: 43117, MAXHOG: 4, LASTHOG: 2

LASTHOG At:   10:35:55 PHST May 8 2012

PC:           83cf301 (suspend)

Traceback:    83cfb25  83c9883  812ea45  89e51b2  89b8dda  8ba0e44  8ba0278

              8062413

Process:      Dispatch Unit, PROC_PC_TOTAL: 4911194, MAXHOG: 1010, LASTHOG: 3

LASTHOG At:   14:22:15 PHST May 8 2012

PC:           81aa50f (suspend)

Process:      Dispatch Unit, NUMHOG: 4501175, MAXHOG: 1010, LASTHOG: 3

LASTHOG At:   14:22:15 PHST May 8 2012

PC:           81aa50f (suspend)

Traceback:    81aa50f  8062413

Process:      snmp, PROC_PC_TOTAL: 82902, MAXHOG: 4, LASTHOG: 3

LASTHOG At:   14:25:09 PHST May 8 2012

PC:           8c09598 (suspend)

Process:      snmp, NUMHOG: 82902, MAXHOG: 4, LASTHOG: 3

LASTHOG At:   14:25:09 PHST May 8 2012

PC:           8c09598 (suspend)

Traceback:    8b300cd  8b1086d  8b0f7bc  8062413

Process:      snmp, PROC_PC_TOTAL: 41500, MAXHOG: 4, LASTHOG: 3

LASTHOG At:   14:25:09 PHST May 8 2012

PC:           8b3709e (suspend)

Process:      snmp, NUMHOG: 41500, MAXHOG: 4, LASTHOG: 3

LASTHOG At:   14:25:09 PHST May 8 2012

PC:           8b3709e (suspend)

Traceback:    8b3709e  8b35dcb  8b32e39  8b358c8  8b10b5e  8b0f7bc  8062413

Process:      Dispatch Unit, PROC_PC_TOTAL: 50136, MAXHOG: 46, LASTHOG: 2

LASTHOG At:   14:25:12 PHST May 8 2012

PC:           81aa324 (suspend)

Process:      Dispatch Unit, NUMHOG: 50136, MAXHOG: 46, LASTHOG: 2

LASTHOG At:   14:25:12 PHST May 8 2012

PC:           81aa324 (suspend)

Traceback:    81aa324  8062413

Process:      Dispatch Unit, NUMHOG: 13985647, MAXHOG: 1012, LASTHOG: 3

LASTHOG At:   14:25:43 PHST May 8 2012

PC:           81aa5f9 (suspend)

Traceback:    81aa5f9  8062413

Process:      Dispatch Unit, PROC_PC_TOTAL: 18866757, MAXHOG: 1012, LASTHOG: 4

LASTHOG At:   14:25:44 PHST May 8 2012

PC:           81aa5f9 (suspend)

CPU hog threshold (msec):  2.844

Last cleared: None

According to the person who replied, the Dispatch Unit is running high. May I ask for your assistance?

What information is needed, so I can send it?

Thanks,

mark

Hello

I ran into this high CPU issue as well on an ASA 5540 with only about 150 users. If you are using large modulus operations, including large key size certificates and a higher Diffie-Hellman group, they will cause heavy processing.

Since the default method of processing these operations is software-based, it will cause higher CPU usage and also slower SSL/IPsec connection establishment.

If this is the scenario for you, use hardware-based processing by using the following configuration:


"crypto engine large-mod-accel"

Hi All,

I had a similar issue on my end, with my firewall's CPU pinned at 95% while the service-policy showed no drops at all. What helped me was show asp drop; I kept seeing the inspect-fail counter incrementing. I then ran three packet capture samples for the asp-drop type only (command: cap cap type asp-drop all) and exported them for further analysis. I was able to deduce that we kept getting bogus DNS queries inbound from various subnets on the internet. This was due to an ACL line that allowed any internet traffic inbound to our public DNS servers. I created a network object-group, blocked only those sources, and my firewall's CPU dropped to 54%. Hope this helps someone.
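
For anyone wanting to follow the same steps, here is a rough sketch; the capture name, TFTP server, object-group name, ACL name, and addresses are all placeholders rather than my real config, and the deny line goes above the existing permit:

show asp drop

capture aspdrop type asp-drop all
show capture aspdrop
copy /pcap capture:aspdrop tftp://192.0.2.50/aspdrop.pcap
no capture aspdrop

object-group network BOGUS-DNS-SOURCES
 network-object 203.0.113.0 255.255.255.0
access-list outside_access_in line 1 extended deny udp object-group BOGUS-DNS-SOURCES host 198.51.100.10 eq domain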
