
HighQueueDropRate

m.sobolev
Level 1

Strange behavior of DFM: it sees a periodic HighQueueDropRate on one of our 871s, and every time we get a message like this:

STATUS = Active

SEVERITY = Critical

HighQueueDropRate::Component=IF-Bijsk-R1/5 [Fa4] [10.xx.xx.xx] [WAN];ComponentClass=Interface;ComponentEventCode=1053;Type=ETHERNETCSMACD;OutputPacketQueueDropPct=0.0 %;MaxSpeed=100000000;OutputPacketQueueDropRate=0.0 PPS;OutputPacketNoErrorRate=1.

0.0% and 0.0 PPS - so I don't know what I should fix?

10 Replies

Joe Clarke
Cisco Employee

There should be more to this notification. But on the surface, it looks like a bug in DFM. I have never seen this event generated when there are no errors, and I cannot find any existing bugs.

Complete email text:

EVENT ID = 00008GN

ALERT ID = 000014I

TIME = Sun 03-Aug-2008 22:28:54 MSD

STATUS = Active

SEVERITY = Critical

MANAGED OBJECT = Bijsk-R1

MANAGED OBJECT TYPE = Routers

EVENT DESCRIPTION = HighQueueDropRate::Component=IF-Bijsk-R1/5 [Fa4] [10.xx.xx.xx] [WAN];ComponentClass=Interface;ComponentEventCode=1053;Type=ETHERNETCSMACD;OutputPacketQueueDropPct=0.0 %;MaxSpeed=100000000;OutputPacketQueueDropRate=0.0 PPS;OutputPacketNoErrorRate=1.

The only difference between alert active and cleared states is OutputPacketNoErrorRate=1 in active and OutputPacketNoErrorRate=0 in cleared.
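For anyone pulling these apart programmatically: the EVENT DESCRIPTION is just the event name followed by a semicolon-delimited list of key=value pairs, so a few lines of Python can split it up. This is only a sketch; the parse_event helper name is made up, and the sample string is the one quoted above.

def parse_event(description):
    # The event name comes before "::"; the fields follow as key=value pairs
    name, _, rest = description.partition("::")
    fields = {}
    for pair in rest.split(";"):
        key, _, value = pair.partition("=")
        fields[key] = value
    return name, fields

desc = ("HighQueueDropRate::Component=IF-Bijsk-R1/5 [Fa4] [10.xx.xx.xx] [WAN];"
        "ComponentClass=Interface;ComponentEventCode=1053;Type=ETHERNETCSMACD;"
        "OutputPacketQueueDropPct=0.0 %;MaxSpeed=100000000;"
        "OutputPacketQueueDropRate=0.0 PPS;OutputPacketNoErrorRate=1")

name, fields = parse_event(desc)
print(name, fields["OutputPacketNoErrorRate"])  # "1" while active, "0" once cleared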

We have about 70 of these 871 routers, but only this one generates this error. I've tried several IOS versions, but no luck...

Please post a screenshot of the event details (from AAD) which generates this notification.

here it is

That's what I thought. Look at your input queue drop %. It's higher than your threshold of 2. Your notifications are being truncated. If you don't want to see this event anymore, and you're comfortable with the higher drop rate, then simply increase your threshold to 3 or 4%.

To get more characters in your notifications, edit NMSROOT/objects/nos/config/nos.properties, and increase MAX_EMAIL_DES. The largest it can be is 1024.
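For reference, the change amounts to a single property in that file; roughly (the exact surrounding contents will differ per install):

# NMSROOT/objects/nos/config/nos.properties
# Length cap for the notification description text; 1024 is the maximum allowed.
MAX_EMAIL_DES=1024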

I've googled my error and found a suggestion to check 'sh buffers'. Output Interpreter gives me the following answer:

ERROR: Since it's last reload, this router has created or maintained a relatively

large number of 'Syslog ED Pool buffers' yet still has very few free buffers.

The above symptoms suggest that a buffer leak has occurred.

This is for the Embedded Event Manager syslog event detector. I doubt it is causing your interface to drop incoming packets, though. What does show buffers show?

Biysk-R1#sh buffers

Buffer elements:

1118 in free list (1119 max allowed)

647605 hits, 0 misses, 619 created

Public buffer pools:

Small buffers, 104 bytes (total 53, permanent 50, peak 75 @ 13:05:07):

45 in free list (20 min, 150 max allowed)

244478 hits, 400 misses, 272 trims, 275 created

14 failures (0 no memory)

Middle buffers, 600 bytes (total 22, permanent 25, peak 34 @ 1d03h):

21 in free list (10 min, 150 max allowed)

617211 hits, 1034 misses, 121 trims, 118 created

43 failures (0 no memory)

Big buffers, 1536 bytes (total 18, permanent 50):

17 in free list (5 min, 150 max allowed)

150901 hits, 175 misses, 116 trims, 84 created

104 failures (0 no memory)

VeryBig buffers, 4520 bytes (total 6, permanent 10):

6 in free list (0 min, 100 max allowed)

52 hits, 52 misses, 8 trims, 4 created

52 failures (0 no memory)

Large buffers, 5024 bytes (total 2, permanent 0, peak 2 @ 08:25:15):

2 in free list (0 min, 10 max allowed)

4 hits, 48 misses, 7 trims, 9 created

48 failures (0 no memory)

Huge buffers, 18024 bytes (total 4, permanent 0, peak 21 @ 16:07:31):

4 in free list (0 min, 4 max allowed)

87687 hits, 6651 misses, 13248 trims, 13252 created

29 failures (0 no memory)

Interface buffer pools:

Syslog ED Pool buffers, 600 bytes (total 150, permanent 150):

118 in free list (150 min, 150 max allowed)

38 hits, 0 misses

SEC Eng Packet buffers, 1700 bytes (total 256, permanent 256):

0 in free list (0 min, 256 max allowed)

256 hits, 0 fallbacks

256 max cache size, 256 in cache

0 hits in cache, 0 misses in cache

Header pools:

Header buffers, 0 bytes (total 384, permanent 384):

0 in free list (0 min, 512 max allowed)

384 hits, 0 misses, 0 trims, 0 created

0 failures (0 no memory)

384 max cache size, 384 in cache

0 hits in cache, 0 misses in cache

Particle Clones:

1024 clones, 577 hits, 0 misses

Public particle pools:

F/S buffers, 256 bytes (total 385, permanent 384):

129 in free list (128 min, 1024 max allowed)

280 hits, 1 misses, 6 trims, 7 created

0 failures (0 no memory)

256 max cache size, 256 in cache

632287 hits in cache, 0 misses in cache

Normal buffers, 1536 bytes (total 512, permanent 512):

384 in free list (128 min, 1024 max allowed)

256 hits, 0 misses, 0 trims, 0 created

0 failures (0 no memory)

128 max cache size, 128 in cache

0 hits in cache, 0 misses in cache

Private particle pools:

HQF buffers, 0 bytes (total 2000, permanent 2000):

2000 in free list (0 min, 2000 max allowed)

0 hits, 0 misses, 0 trims, 0 created

0 failures (0 no memory)

SEC Eng Particle Header buffers, 256 bytes (total 256, permanent 256):

0 in free list (0 min, 256 max allowed)

256 hits, 0 fallbacks

256 max cache size, 256 in cache

0 hits in cache, 0 misses in cache

FastEthernet0 buffers, 1536 bytes (total 192, permanent 192):

0 in free list (0 min, 192 max allowed)

192 hits, 0 fallbacks

192 max cache size, 128 in cache

797688 hits in cache, 0 misses in cache

FastEthernet4 buffers, 1536 bytes (total 192, permanent 192):

0 in free list (0 min, 192 max allowed)

192 hits, 0 fallbacks

192 max cache size, 128 in cache

779632 hits in cache, 0 misses in cache

SEC Eng Particle buffers, 1700 bytes (total 256, permanent 256):

0 in free list (0 min, 256 max allowed)

256 hits, 0 misses

256 max cache size, 256 in cache

0 hits in cache, 0 misses in cache

This is exactly what all my devices report (118 on the free list with 150 max). I think Output Interpreter has given you a red herring. I do not believe this is problematic.
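If you want a quick way to triage show buffers output yourself, non-zero per-pool failure counts are usually a more telling sign of buffer starvation than the free-list numbers alone. A rough sketch in Python (the helper below is made up for illustration, not Output Interpreter's actual logic):

import re

def pools_with_failures(show_buffers_text):
    # Walk the "show buffers" output, remembering the most recent pool
    # header and flagging any pool that reports a non-zero failure count.
    flagged, current = [], None
    for line in show_buffers_text.splitlines():
        header = re.match(r"\s*(.+ buffers), \d+ bytes", line)
        if header:
            current = header.group(1)
        fail = re.search(r"(\d+) failures", line)
        if fail and current and int(fail.group(1)) > 0:
            flagged.append((current, int(fail.group(1))))
    return flagged

# Against the output above this flags the public pools (Small through Huge),
# but not the Syslog ED Pool, which never reports a failure.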

Thank you. I think this problem goes beyond network management, but can you suggest where I should ask about the queue drop rate - LAN or WAN switching and routing?
