3745 - "Total Output Drops:" ?



I'm kind of curious what exactly this "Total Output Drops:" field means. I've looked through a bunch of Cisco docs and they never really explain it. The input queue on fa0/0 has filled up in the past, but the "drops" field there never increases: (Input queue: 0/75/0/0 (size/max/drops/flushes)). Anyone? I realize we are pushing this particular router/interface pretty hard (it's peaking in the low 70 Mb/s range at 75-80% CPU) - we have other plans to relieve the situation.

FastEthernet0/0 is up, line protocol is up
Hardware is Gt96k FE, address is x
Description: "x"
Internet address is x/24
MTU 1500 bytes, BW 100000 Kbit, DLY 100 usec,
reliability 255/255, txload 141/255, rxload 130/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 100Mb/s, 100BaseTX/FX
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:00, output 00:00:00, output hang never
Last clearing of "show interface" counters 3d23h
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 56266
Queueing strategy: fifo
Output queue: 0/40 (size/max)
30 second input rate 51345000 bits/sec, 33192 packets/sec
30 second output rate 55453000 bits/sec, 34940 packets/sec
855911607 packets input, 2614997118 bytes
Received 2604040 broadcasts, 0 runts, 0 giants, 0 throttles
546 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog
0 input packets with dribble condition detected
1410553067 packets output, 4269722497 bytes, 0 underruns

You would think a document like this would explain it...


7 Replies


FYI, MRTG graphs for this interface...

I wouldn't be very worried anyway. From the stats, an average of 1 packet out of 25000 has been dropped. That's negligible.
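For reference, that ratio follows directly from the counters in the `show interface` output above; a quick Python sketch of the arithmetic:

```python
# Sanity-check the "1 packet out of 25000" figure, using the
# drop and packet counters from the pasted interface output
output_drops = 56266
packets_output = 1410553067

drop_rate = output_drops / packets_output
print(f"about 1 drop per {packets_output // output_drops} packets "
      f"({drop_rate:.6%})")
```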


I believe your total output drops is the count of packets dropped because they arrived when the output FIFO queue was already full.

The default value of 40 isn't very large for 100 Mb, try increasing it to something like 128, 256 or even 512 and see if your drop count decreases. (e.g. hold-queue 256 out)
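If you try that, the command goes under the interface itself; a sketch against the interface from the original post:

```
interface FastEthernet0/0
 hold-queue 256 out
```

`show interface fa0/0` should then report `Output queue: 0/256 (size/max)`.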

Thanks for the info. I don't know why they wouldn't format the output queue the same way as the input queue, or at least put it on the same line? Heh heh... anyways, it's increased to 128, counters cleared, and I'll check again in an hour or two and work from there. As far as not worrying about the drops - a lot of this traffic is SIP or RTP (UDP), so it's directly affecting quality of service (to some small degree) even if it's only 1/25000.

Ah, I had thought to mention that your overall drop rate was low, but since you mention concern for non-TCP traffic, you might also want to consider using CBWFQ for outbound traffic. A simple policy using just class-default with FQ is a nice first step. (It also usually expands the overall queue size, i.e., shouldn't need hold-queue command.)

Once you have CBWFQ active, you can use LLQ for any real-time traffic as a second step.
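A minimal sketch of that two-step policy, assuming the RTP traffic is already marked DSCP EF; the class and policy names (and the 30% priority figure) are illustrative, not from the thread:

```
class-map match-all RTP-EF
 match ip dscp ef
!
! Step 1 (CBWFQ): fair-queue in class-default replaces interface FIFO
! Step 2 (LLQ): strict-priority queue, policed to 30% of the link, for EF
policy-map EDGE-OUT
 class RTP-EF
  priority percent 30
 class class-default
  fair-queue
!
interface FastEthernet0/0
 service-policy output EDGE-OUT
```

Verify with `show policy-map interface fa0/0`, which reports per-class offered rate and drops.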

Well, input queue and output queue are two very different things. First of all, the input queue doesn't really exist. Or better said, it exists only in the form of a hardware or driver-level queue, most often a round-robin circular buffer. The idea is that whenever possible, packets should be copied directly from the input hardware to the output queue, to minimize memory-to-memory copies.

Packets are never supposed to be waiting in the input queue, because that would mean the router is not servicing the interface. That is very bad and that is why any input queue drop must be dealt with immediately. You will also notice that there are no options to configure different queueing schemes on input.

On output, everything is different. The router controls how and when to send the packet if congestion occurs. And when there is congestion, the output queue fills up and drops packets.

Now, you have one of the few cases in which a quite fast interface fills up. You said that "most" of the traffic is RTP, so you could configure QoS / LLQ for output. That would attempt to drop everything else before RTP (say, packets marked DSCP EF).

I'm always wary of increasing queue sizes. True, at FastEthernet speeds even 256 or 512 packets in queue isn't much of a delay, but I still think queues should be kept small to aggressively drop TCP, while priority traffic should be treated as priority and never dropped if possible.
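For concreteness, the worst-case drain delay of a full FIFO queue at line rate is easy to bound; a sketch assuming MTU-sized 1500-byte frames (an assumption — average packet sizes on a voice-heavy link are smaller, so actual delay is lower):

```python
# Worst-case time for a full output queue to drain at line rate,
# assuming back-to-back 1500-byte frames (an upper bound, not a
# measurement from this router)
LINK_BPS = 100_000_000   # FastEthernet line rate
FRAME_BYTES = 1500       # MTU-sized frame

def drain_ms(depth: int) -> float:
    return depth * FRAME_BYTES * 8 / LINK_BPS * 1000

for depth in (40, 128, 256, 512):
    print(f"{depth:4d}-packet queue -> {drain_ms(depth):.1f} ms worst case")
```

Even the 512-packet worst case is only tens of milliseconds, but that is exactly the order of delay that matters to RTP, which is one argument for keeping queues small when real-time traffic shares the link.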

That is a very abridged version of the general IP queueing discussion, but I hope you got the idea.

Joseph, Paolo:

Great info! Thanks for your help. I think I'm sticking with FIFO and a 256-packet output queue. I was seeing very minimal drops (9 since I changed it) at 128, and it's been 0 over the last hour or so at 256. Latency with the larger queue seems unchanged, and now we're not dropping anything.
