02-23-2012 09:23 PM - edited 03-11-2019 03:34 PM
I have been getting overrun errors on 3 different ASA 5550 HA pairs with traffic rates less than 100 Mbps total. I was told by one TAC engineer to split the traffic between the two slots so that traffic comes in one and exits the other to maximize throughput, because the 5550 was designed to work that way. Another TAC engineer told me to enable Ethernet flow control to alleviate the overrun errors because the traffic was bursty, but this doesn't seem to address the root cause of the problem either. TCP traffic is bursty by nature and has its own flow control mechanism. I can't seem to find any detailed info on why traffic needs to be split at 100 Mbps when the marketing throughput number is 1.2 Gbps. Is this a design flaw or limitation? Is there a way to alleviate overrun errors?
03-27-2012 06:36 AM
Cisco TAC is now recommending that we find the hosts generating the bursty traffic; there is nothing to be done on the ASA itself to deal with this issue. I'd also like to share the Cisco engineer's answer on how the ASA handles traffic and buffering:
"I would like to confirm that the input queue on an interface has a capacity that varies. Depending upon the configuration and speed and duplex settings on both sides it can be 13 packets or a maximum of 75 packets. At the same time the output queue could be as little as 2 packets and as much as 40 packets"
If anyone has any other recommendations I would appreciate it.
03-27-2012 09:17 AM
Thanks for sharing the info. Did they say what to do with those bursty hosts once you find them? I don't have the full context of the reasoning behind that suggestion, but it doesn't seem to make any sense. I have told TAC repeatedly that traffic on the network will always be bursty. The only time you may not have bursty traffic is when you are streaming video at a constant bit rate. TCP traffic is bursty by nature. Even if you only have UDP traffic on the network, the cumulative effect of multiple hosts transmitting at the same time still makes the traffic bursty.
The queue size is not as important as how fast the ASA can pull packets out of the queue and forward them. It seems like it cannot do that very fast, so the queue fills up quickly and overflows.
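As a rough back-of-the-envelope sketch of that point, here is a quick Python calculation assuming the 13- and 75-packet queue depths quoted by TAC above, a 1 Gbps line rate, and minimum-size Ethernet frames; none of these figures are confirmed for this particular box.
# Back-of-the-envelope: how quickly a small packet queue fills during a
# line-rate burst, even when the long-term average stays well under 100 Mbps.
# Assumed figures: 1 Gbps line rate, the 13/75-packet queue depths quoted by
# TAC, and minimum-size frames (64 B + 20 B preamble/inter-frame gap).
LINE_RATE_BPS = 1_000_000_000                     # 1 Gbps
WIRE_BYTES_PER_MIN_FRAME = 64 + 20                # frame + preamble + IFG
PPS_AT_LINE_RATE = LINE_RATE_BPS / (WIRE_BYTES_PER_MIN_FRAME * 8)   # ~1.49 Mpps

for queue_pkts in (13, 75):
    fill_time_us = queue_pkts / PPS_AT_LINE_RATE * 1e6
    print(f"{queue_pkts:2d}-packet queue fills in ~{fill_time_us:.0f} us "
          "if nothing is being drained")
With those assumptions the 13-packet queue fills in roughly 9 microseconds and the 75-packet queue in roughly 50 microseconds, so a microburst far shorter than any 1- or 5-minute rate sample can overflow it before the averages ever look busy.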
Have you had a chance to try turning on flow control? Did it help?
My experience dealing with TAC is that different TAC engineers may give you completely different answers that sometimes contradict each other. If the answer doesn't make sense, you can ask for an escalation.
03-27-2012 01:13 PM
Hi, I escalated this ticket and am waiting for support. I can't turn on flow control; it is not supported on the 8.3 version I'm running. I will let you know if I find a resolution from Cisco TAC. Thanks.
07-07-2012 08:10 PM
Hi guys,
Did you figure this out with the TAC?
I seem to have the same situation with my 5520s and I am wondering if enabling flow control is the only solution to the problem.
Thanks!
07-27-2012 01:00 PM
Bringing this back from the dead...
Interface GigabitEthernet1/0 "inside", is up, line protocol is up
Hardware is VCS7380 rev01, BW 1000 Mbps, DLY 10 usec
Full-Duplex(Full-duplex), 1000 Mbps(1000 Mbps)
Input flow control is unsupported, output flow control is unsupported
Media-type configured as RJ45 connector
MAC address 1cdf.0f66.35da, MTU 1500
IP address x.x.x.x, subnet mask 255.255.255.0
2136842533 packets input, 2505600752697 bytes, 0 no buffer
Received 6118 broadcasts, 0 runts, 0 giants
3519301 input errors, 0 CRC, 0 frame, 3519301 overrun, 0 ignored, 0 abort
0 L2 decode drops
1231746618 packets output, 186031413394 bytes, 0 underruns
0 pause output, 0 resume output
0 output errors, 0 collisions, 0 interface resets
0 late collisions, 0 deferred
0 input reset drops, 0 output reset drops
0 rate limit drops
input queue (blocks free curr/low): hardware (0/0)
output queue (blocks free curr/low): hardware (0/0)
Traffic Statistics for "inside":
2132528034 packets input, 2462177894354 bytes
1231761102 packets output, 163876265191 bytes
69112 packets dropped
1 minute input rate 15279 pkts/sec, 17668962 bytes/sec
1 minute output rate 8978 pkts/sec, 1071206 bytes/sec
1 minute drop rate, 0 pkts/sec
5 minute input rate 17170 pkts/sec, 20191846 bytes/sec
5 minute output rate 9932 pkts/sec, 1170989 bytes/sec
5 minute drop rate, 0 pkts/sec
This platform has an ASA 5510 Security Plus license.
sh blocks
SIZE MAX LOW CNT
0 400 383 400
4 100 99 99
80 400 284 399
256 3148 2928 3148
1550 2285 1807 2025
2048 2100 1422 2100
2560 164 164 164
4096 100 98 100
8192 100 99 100
16384 100 99 100
65536 16 16 16
The interface is being overrun, there are sufficient 1550-byte memory blocks, and the 'no buffer' count is 0. Still, after several years, I am trying to find an explanation for this type of overrun. If it were 'bursty' traffic, wouldn't the interface buffer fill before being overrun, which would show an incrementing 'no buffer' counter?
01-08-2013 02:28 PM
There is a FIFO buffer present on the NIC hardware where packets are first received, before getting DMA'd to a receive buffer in the ASA RAM that is assigned to this interface. An overrun is counted when a packet cannot be received by the NIC because the FIFO is full.
There are several possible reasons why the FIFO may fill up, but the two most obvious ones are:
1. A traffic burst fills up the FIFO very quickly. This is entirely possible even with a relatively low aggregate throughput on the interface measured over minutes.
2. The NIC cannot find a free receive buffer assigned to this interface to DMA the packet to. There are many possible causes of this but most relate to the ASA simply not being able to keep up with the amount of traffic it is receiving.
Note that neither of these cases will increment the "no buffer" counter in recent ASA code. AFAIK that counter is incremented when the driver attempts and fails to assign a new receive buffer to the interface.
Since your interface does not seem to be oversubscribed I believe that you are most likely hitting case 1.
Implementing flow control on the interface should help with this.
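To make case 1 concrete, here is a deliberately simplified Python simulation of a NIC FIFO being filled by a short line-rate burst while it is drained at a fixed rate. The FIFO size, drain rate, and packet size are made-up round numbers for illustration only, not measured ASA values.
# Illustration of case 1: a FIFO drained at a fixed rate is overrun by a short
# line-rate burst even though the average offered load over a second is tiny.
# All figures below are assumed round numbers, not ASA specifications.
FIFO_BYTES = 48 * 1024          # assumed FIFO capacity
DRAIN_BPS  = 400_000_000        # assumed sustained drain (DMA/inspection) rate
BURST_BPS  = 1_000_000_000      # burst arrives at gigabit line rate
PKT_BYTES  = 512                # assumed packet size during the burst

def simulate(burst_ms, window_s=1.0):
    """Return (overrun count, average offered Mbps over the window)."""
    fifo = 0.0
    overruns = 0
    step_s = PKT_BYTES * 8 / BURST_BPS                   # time between arrivals
    steps = int(burst_ms / 1000 / step_s)
    for _ in range(steps):
        fifo = max(fifo - DRAIN_BPS / 8 * step_s, 0.0)   # bytes drained this step
        if fifo + PKT_BYTES > FIFO_BYTES:
            overruns += 1                                # dropped at the NIC
        else:
            fifo += PKT_BYTES
    avg_mbps = steps * PKT_BYTES * 8 / window_s / 1e6
    return overruns, avg_mbps

for burst_ms in (0.5, 1, 5):
    drops, avg = simulate(burst_ms)
    print(f"{burst_ms:>4} ms burst: {drops:4d} overruns, "
          f"average load over 1 s only ~{avg:.1f} Mbps")
With these made-up numbers the half-millisecond burst is absorbed, while the longer bursts drop dozens to hundreds of packets despite adding only a few Mbps to a one-second average, which is exactly the pattern of overruns on a lightly loaded interface.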
11-21-2012 07:50 AM
Hi,
Before I set up new thread, I would like to ask here.
I have installed 5550, and my input/output queues looks like this:
GigabitEthernet0/0
input queue (blocks free curr/low): hardware (255/255)
output queue (blocks free curr/low): hardware (255/254)
GigabitEthernet0/1
input queue (blocks free curr/low): hardware (255/230)
output queue (blocks free curr/low): hardware (255/0)
GigabitEthernet0/2
input queue (blocks free curr/low): hardware (255/255)
output queue (blocks free curr/low): hardware (255/255)
GigabitEthernet0/3
input queue (blocks free curr/low): hardware (255/230)
output queue (blocks free curr/low): hardware (254/97)
GigabitEthernet1/0
input queue (blocks free curr/low): hardware (0/0)
output queue (blocks free curr/low): hardware (0/0)
GigabitEthernet1/1
input queue (blocks free curr/low): hardware (0/0)
output queue (blocks free curr/low): hardware (0/0)
GigabitEthernet1/2
input queue (blocks free curr/low): hardware (0/0)
output queue (blocks free curr/low): hardware (0/0)
GigabitEthernet1/3
input queue (blocks free curr/low): hardware (0/0)
output queue (blocks free curr/low): hardware (0/0)
Do you have any idea why the 4GE module in the 5550 shows no buffers? Software is 8.4.3.
I also experience the above-mentioned problems with dropped traffic, and my searching brought me to this thread.
Thank you
03-04-2013 05:24 PM
I too am bringing this back from the dead.
I have this same problem, and I acted upon TAC's request to turn on flow control. I do find myself with another related issue now though.
While I no longer see any input errors on the ASA interfaces, I am seeing about 10,000 pause frames a minute being sent from the ASA to my upstream 6500. While this seems like a TON of pauses to me, TAC seems not too concerned about it.
TAC is suggesting that the problem lies in too many small / bursty packets.
I'm running about 300 Mbps / 60,000 pps / 700 connections on a 5550 in transparent mode.
I will update as I have more info.
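As a hedged sanity check on those figures (the roughly 300 Mbps, 60,000 pps, and 10,000 pauses per minute reported above are averages, so they say nothing definitive about microbursts), a quick Python calculation:
# Quick sanity check on the reported averages; averages cannot show microbursts.
# The point is only that the long-term figures look unremarkable for a gigabit
# interface.
avg_bps = 300_000_000        # ~300 Mbps reported
avg_pps = 60_000             # ~60,000 pps reported
pauses_per_min = 10_000      # pause frames sent to the 6500 per minute

print(f"average packet size : {avg_bps / 8 / avg_pps:.0f} bytes")      # ~625 B
print(f"average utilization : {avg_bps / 1e9:.0%} of 1 Gbps")          # ~30%
print(f"pause frames        : ~{pauses_per_min / 60:.0f} per second")  # ~167/s
An average packet size of roughly 625 bytes at about 30% utilization is not extreme on its own, which fits the explanation earlier in the thread that short bursts, rather than sustained load, are what fill the NIC FIFO and trigger the pause frames.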
03-05-2013 10:17 AM
I've found this from Cisco Live last year; it pretty much sums up my problem and the performance issues it will cause.
Especially slides 29 - 43. Wow.
03-05-2013 10:25 AM
Thanks so much for the URL to the very useful info, Steve. Did you experience any performance issues after enabling flow control?
03-05-2013 11:11 AM
Well I only implemented the flow control on Sunday night so it’s only been about 36 hours now.
I think I'm well over 4 million pause frames sent back to my 6500, and actually I've seen no performance issues.
It just seems like a TON of pauses, but TAC says that's nothing for a 6500 to handle, and the overruns have not incremented at all on the ASA, so I guess there has been no packet loss.
Between that link and my TAC engineer I've got a really good idea as to where the bottleneck lies. Basically, something on my network is sending traffic that fills the FIFO queue on the NIC before the next packet can be put on the receive ring. Most likely this is caused by lots of very small packets (maybe) coming in very bursty. I say maybe because if the receive ring is full to the point that the ASA has to ask for a pause 300 times a second, then it's probably a pretty constant stream of little packets, or basically a continual burst.
The question now will be to track down where these little packets are coming from.
In searching the internet about this issue, it seems really common, and the bottleneck seems to be the FIFO queue on the NICs in these units. The best info I've found says the queue is 32 - 48 KB depending on your model. I really wonder if the new 55x5-X models use a larger buffer for the queue on the NICs. Unfortunately we only recently purchased these 5550s, so upgrading won't make the bosses happy; better to address the root cause.
Hope that helps.
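For what it's worth, here is a quick Python calculation of how little cushion a FIFO in that quoted 32-48 KB range gives at gigabit line rate (the sizes are the unconfirmed figures from the post above, not official Cisco specifications):
# How long a line-rate burst can a 32-48 KB NIC FIFO absorb with no draining?
# The 32-48 KB sizes are the unconfirmed figures quoted in the thread.
LINE_RATE_BPS = 1_000_000_000

for fifo_kb in (32, 48):
    fifo_bytes = fifo_kb * 1024
    fill_us = fifo_bytes * 8 / LINE_RATE_BPS * 1e6
    min_size_pkts = fifo_bytes // 64
    print(f"{fifo_kb} KB FIFO: fills in ~{fill_us:.0f} us at 1 Gbps "
          f"(~{min_size_pkts} minimum-size packets)")
A line-rate burst lasting only a few hundred microseconds is enough to overflow a FIFO that small if the ASA cannot drain it in time, which matches overrun counters climbing on an interface whose 1- and 5-minute rates still look light.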
06-28-2013 08:58 PM
Apparently, this is a product architecture issue due to the single-core CPU used across the ASA 5500 series. The overruns are due to CPU hogs. I encountered the same issue and confirmed that CPU hogs were detected. TAC likewise suggested making use of both PCI buses; I agree that will help mitigate the overruns, but if the CPU keeps hogging because of the number of software features enabled on the ASA, my feeling is that the overruns will continue. Flow control will help, but it just shifts the bottleneck upstream and slows down the traffic. This box is a long way from its 1.2 Gbps spec.
http://www.cisco.com/en/US/products/ps6120/products_tech_note09186a0080bfa10a.shtml
11-05-2013 12:06 PM
I have the same issue on the 5550s, 5510s, and 5520s that we have in production. Low CPU, moderate traffic, etc. I think it comes down to the NICs, as they are old Intel NICs with 16 KB of buffer. I found that I receive no overruns on the 5550's add-on card, which makes me think it is these junky Intel NICs that Cisco uses, or the PCI bus. We do have a 5515-X in production with no overruns, so maybe it is fixed on the newer boxes since they use PCIe rather than PCI for the NICs.
11-05-2013 12:22 PM
Yes this is the NIC buffer on the 55x0 models.
I’ve gone round and round with TAC, and gotten two of their CCIEs on the case (both of whom were super knowledgeable btw), and they have proven to me that this is simply a hardware shortcoming with these models.
The 55x5-x models use a different NIC with a larger buffer, and should not have this issue.
Bummer for those of us with a bursty traffic profile.
We are moving to these new units Q1 2014.
11-05-2013 02:47 PM
It is pretty sad though; the traffic isn't really that bursty for some of our sites and they still fail to hold up. I would recommend replacing any older ASA you have if you plan to do an internet upgrade above 50 Mbit. These ASAs were made for a different traffic profile, based on the internet that existed in 2005.