Cisco 4509 Middle Buffer issue

venturas05
Level 1

I have a segment of my network that is extremely slow. I am still fairly new to troubleshooting and have not encountered this issue before. Here is the output of show buffers:

show buffers

Buffer elements:

     499 in free list (500 max allowed)

     1903857493 hits, 0 misses, 0 created

Public buffer pools:

Small buffers, 104 bytes (total 50, permanent 50, peak 140 @ 7w0d):

     50 in free list (20 min, 150 max allowed)

     2879060931 hits, 30 misses, 90 trims, 90 created

     0 failures (0 no memory)

Middle buffers, 600 bytes (total 45, permanent 25, peak 16039 @ 4w0d):

     43 in free list (10 min, 150 max allowed)

     646008988 hits, 42004503 misses, 126014068 trims, 126014088 created

     0 failures (0 no memory)

Big buffers, 1536 bytes (total 50, permanent 50, peak 77 @ 7w0d):

     50 in free list (5 min, 150 max allowed)

     75853326 hits, 28 misses, 84 trims, 84 created

     0 failures (0 no memory)

VeryBig buffers, 4520 bytes (total 10, permanent 10):

     10 in free list (0 min, 100 max allowed)

     5 hits, 0 misses, 0 trims, 0 created

     0 failures (0 no memory)

Large buffers, 5024 bytes (total 0, permanent 0):

     0 in free list (0 min, 10 max allowed)

     0 hits, 0 misses, 0 trims, 0 created

     0 failures (0 no memory)

Huge buffers, 18024 bytes (total 1, permanent 0, peak 4 @ 7w0d):

     1 in free list (0 min, 4 max allowed)

     2 hits, 2 misses, 2956 trims, 2957 created

     0 failures (0 no memory)

Interface buffer pools:

IPC buffers, 4096 bytes (total 23, permanent 2, peak 56 @ 7w0d):

     7 in free list (1 min, 8 max allowed)

     12410619 hits, 20 fallbacks, 39 trims, 60 created

     0 failures (0 no memory)

Header pools:

Catalyst 4000 buffers, 0 bytes (total 11496, permanent 11496):

     11494 in free list (0 min, 11497 max allowed)

     326193018 hits, 0 misses, 0 trims, 0 created

     0 failures (0 no memory)

What would be causing this?

1 Accepted Solution


nkarpysh
Cisco Employee

Hello,

It looks like your traffic is mostly of a size that lands in the middle buffer pool. When the permanent middle buffers are exhausted, the switch creates more on demand - you can see that four weeks ago the middle pool peaked at 16039 buffers.

When no additional middle buffers can be created for traffic that needs them, the packet falls back to the next larger pool.

Since you show no buffer failures, no packets were dropped due to a shortage of buffer space - the switch was always able either to create the buffers it needed or to move the packet into a bigger buffer.
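To put numbers on that, a quick back-of-envelope check (plain Python, with the hit/miss counters copied from the show buffers paste above) shows the middle pool is the only one with a meaningful miss ratio:

```python
# Counters copied from the "show buffers" output: pool -> (hits, misses).
pools = {
    "Small":  (2_879_060_931, 30),
    "Middle": (646_008_988, 42_004_503),
    "Big":    (75_853_326, 28),
}

for name, (hits, misses) in pools.items():
    # Fraction of buffer requests that could not be served from the free list.
    ratio = misses / (hits + misses)
    print(f"{name:<6} miss ratio: {ratio:.4%}")
```

Roughly 6% of middle-buffer requests missed the free list, versus effectively zero for the small and big pools - which is why the middle pool kept growing.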

So the buffers themselves did not cause any drops. What you should investigate is why so much traffic is being buffered and queued in the first place: are you running QoS that queues traffic into these buffers and possibly drops it in particular classes? Or do you have multiple interfaces sending toward a single egress port, causing congestion there?

HTH,
Nik
