
Output drops on GRE tunnels

n.moss
Level 1

Hi,

I have two 7507s with all GigE interfaces, running 12.3(1a) IP. Each has approximately 15 GRE tunnels and is routing approximately 35 Mbps at peak across the cards. The GRE tunnels are reporting output drops, which are incrementing very regularly. I have tweaked the input and output hold-queues to no advantage. The RSP CPU is 2% max, and the main VIP card carrying most of the traffic has a CPU utilisation of ~60%. Does anybody have any thoughts on how to stop the output drops?

#sh int Tunnel X

<SNIP>

Input queue: 0/200/0/0 (size/max/drops/flushes); Total output drops: 36649

Output queue: 0/200 (size/max)
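For anyone following along, the hold-queue tweaks mentioned above would look something like the sketch below (the tunnel number and queue depths are illustrative, not the poster's actual values):

```
! Hypothetical example - deepen the input/output hold queues on a tunnel
interface Tunnel10
 hold-queue 500 in
 hold-queue 500 out
```

Bear in mind that deeper hold queues only absorb bursts; they don't address why packets are queuing up in the first place.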

Thanks in advance,

Neil.

5 Replies

ehirsel
Level 6

Check your buffers by running the show buffers command. I'd be interested to know if you have buffer misses due to lack of memory, or for other reasons.

Please do a show memory command and post the first 2-3 lines of output.

Enclosed below; I can't see anything immediately wrong, but thanks for any assistance:

** #sh mem

Head Total(b) Used(b) Free(b) Lowest(b) Largest(b)

Processor 42B78960 222852768 30181064 192671704 163516704 157562540

Fast 42B58960 131080 127096 3984 3984 3932

** #sh buffers

Buffer elements:

499 in free list (500 max allowed)

2129038681 hits, 0 misses, 0 created

Public buffer pools:

Small buffers, 104 bytes (total 960, permanent 960):

917 in free list (20 min, 2000 max allowed)

1031811660 hits, 0 misses, 0 trims, 0 created

0 failures (0 no memory)

Middle buffers, 600 bytes (total 720, permanent 720):

716 in free list (20 min, 1600 max allowed)

1572485351 hits, 0 misses, 0 trims, 0 created

0 failures (0 no memory)

Big buffers, 1536 bytes (total 720, permanent 720):

719 in free list (10 min, 2400 max allowed)

1145555600 hits, 0 misses, 0 trims, 0 created

0 failures (0 no memory)

VeryBig buffers, 4520 bytes (total 80, permanent 80, peak 827 @ 7w0d):

80 in free list (5 min, 2400 max allowed)

257392 hits, 2484 misses, 7452 trims, 7452 created

0 failures (0 no memory)

Large buffers, 5024 bytes (total 80, permanent 80):

80 in free list (3 min, 240 max allowed)

0 hits, 0 misses, 0 trims, 0 created

0 failures (0 no memory)

Huge buffers, 18024 bytes (total 10, permanent 0, peak 331 @ 4w0d):

9 in free list (3 min, 104 max allowed)

18622306 hits, 4025 misses, 12162 trims, 12172 created

0 failures (0 no memory)

Interface buffer pools:

IPC buffers, 4096 bytes (total 624, permanent 624):

624 in free list (208 min, 2080 max allowed)

319555747 hits, 0 fallbacks, 0 trims, 0 created

0 failures (0 no memory)

Header pools:
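One observation on the output above: the VeryBig and Huge pools both show misses (2484 and 4025) with matching trims/creates, so the router has had to grow those pools on demand. If that churn lines up with the drops, the permanent pool sizes could be raised with the global `buffers` command, along these lines (the sizes here are illustrative assumptions, not recommendations):

```
! Hypothetical sketch - pre-allocate more VeryBig/Huge buffers
! so the pools don't have to be grown (and trimmed) on demand
buffers verybig permanent 160
buffers verybig min-free 20
buffers huge permanent 20
```

Worth weighing against available processor memory before applying; `show memory` above suggests there is plenty of headroom.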

What are the input and output rates on the interfaces? They might be too high, which could be due to a DoS attack. The CPU utilisation of the VIP also seems to be very high.

I/O rate is around 30-35 Mbps at peak, which for GigE cards on VIP4-80s in a 7507 is pocket change.

Agree about the CPU rate on the VIP; this is due to Turbo ACLs (compiled) used for preventing worms etc. However, this only accounts for about 20% of the VIP CPU load...

DoS is unlikely and I don't suspect this at all at this stage.

Suspect it's TAC case raise time...
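Since Turbo ACLs are mentioned as a CPU contributor, one sanity check before raising the TAC case is to confirm the ACLs really are being compiled (syntax per the 12.x Turbo ACL feature; verify against your image):

```
! Turbo ACL compilation is enabled globally with:
access-list compiled
! Inspect the compiled state and per-ACL statistics:
show access-lists compiled
```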

I might be more interested in the packet rate, i.e. the pps on the interfaces. That often determines the performance of the link. Please check these values as well.

Also, what percentage of the VIP's CPU utilisation is the IP Input process taking?
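To pull just the rates asked about here, a filtered show command works (the interface name is a placeholder):

```
! 5-minute bit and packet rates for one interface
show interfaces GigabitEthernet0/0/0 | include rate
! Output lines look roughly like:
!   5 minute input rate 30123000 bits/sec, 8200 packets/sec
```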
