
Input queues filling

brian.oflynn
Level 1

Hi, I have two sites with 3750Gs running OSPF connecting into a metro cloud. There are a number of other 3750Gs in the same cloud as well, but only these two are having problems. The input queue on the metro-facing interface is currently showing 1024/75/218/0 (size/max/drops/flushes), and as a result the OSPF neighbor relationships are timing out. When I look at show processes cpu, it is the PIM process that is running at 99%. We are doing multicast on the network, which I assume is what that process is handling. All the switches, working and non-working, are running 12.2(46)SE. Anybody have any ideas where to go from here?
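(For reference, the queue and CPU figures above come from the standard IOS show commands below; the interface name is just this site's metro uplink.)

show interfaces GigabitEthernet1/0/18 | include Input queue
show processes cpu sorted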

6 Replies

Giuseppe Larosa
Hall of Fame

Hello Brian,

Verify whether the troubled devices have any ip igmp join-group commands configured and remove them, because they cause that multicast traffic to be process switched.
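A quick way to check, and to remove one if found (the interface and group below are just placeholders):

show running-config | include igmp join-group
!
! example removal, placeholder interface and group:
interface GigabitEthernet1/0/18
 no ip igmp join-group 239.1.1.1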

Hope to help

Giuseppe

Hi Giuseppe, nope, no ip igmp join-group commands on any interfaces.

Cheers

Brian

Mark Yeates
Level 7

Brian,

You could try increasing the size of the input queue on the interface to alleviate some of the drops. Yours is currently at 1024; I would try 2000, then clear the counters and monitor the interface to see whether the number of drops is reduced. I would also be concerned that your PIM process is constantly running at 99%. Can you please post the interface statistics for the port?
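Something along these lines should do it (the interface name is assumed to be your metro-facing port):

conf t
 interface GigabitEthernet1/0/18
  hold-queue 2000 in
 end
clear counters GigabitEthernet1/0/18
show interfaces GigabitEthernet1/0/18 | include Input queue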

Mark

Hi Mark, I think the queue max is actually set to 75, but it is being overloaded. Here are the interface stats:

GigabitEthernet1/0/18 is up, line protocol is up (connected)
  Hardware is Gigabit Ethernet, address is
  Description: *** Connection to Metro Network ***
  Internet address is xxx.xxx.xxx.xxx
  MTU 1500 bytes, BW 100000 Kbit, DLY 100 usec,
     reliability 255/255, txload 1/255, rxload 7/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Carrier delay is 0 msec
  Full-duplex, 100Mb/s, media type is 10/100/1000BaseTX
  input flow-control is off, output flow-control is unsupported
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 00:00:00, output 00:00:00, output hang never
  Last clearing of "show interface" counters 04:11:28
  Input queue: 739/75/252/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  30 second input rate 2748000 bits/sec, 1259 packets/sec
  30 second output rate 157000 bits/sec, 282 packets/sec
     19548336 packets input, 3759898901 bytes, 0 no buffer
     Received 14634162 broadcasts (240 IP multicasts)
     0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog, 14634162 multicast, 0 pause input
     0 input packets with dribble condition detected
     5569660 packets output, 389366989 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier, 0 PAUSE output
     0 output buffer failures, 0 output buffers swapped out

The PIM process is currently running at 80%, and as a result the overall CPU usage for the 5 second, 1 minute and 5 minute intervals is 99%.
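A couple of other standard IOS checks that would show how much multicast state and PIM activity the box is carrying (nothing here is specific to this setup):

show ip mroute count
show ip pim neighbor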

scottmac
Level 10

Some questions:

What is the bandwidth on the Metro side? What is the connection to the source (100 or 1000)?

What is the nature of the multicast? i.e., how much bandwidth per stream, is there more than one stream, and is each stream constant or variable bandwidth?

Are there multiple sites acting as sources, or a single site streaming out to the other sites?

What role specifically are the two problem switches performing? Source only, source/destination, destination only?

What other traffic, if any, are these switches handling? Are you monitoring the traffic to catch users of BitTorrent or other p2p apps?

It might be helpful to put a monitor on the LAN side of the two problem switches and see what hosts are "Top Talkers" or to get a session count ... maybe there's some other activity that's being overlooked.
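A local SPAN session is enough for that; a minimal sketch, with placeholder port numbers (Gi1/0/1 being a LAN port to watch and Gi1/0/2 the port the analyzer is plugged into):

monitor session 1 source interface GigabitEthernet1/0/1 both
monitor session 1 destination interface GigabitEthernet1/0/2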

Good Luck

Scott

Hi Scott, it is 100 on both. The multicast is used for IPTV; there is only a single stream, multicast from a single source. Of the two problem switches, one is the source only and the other is the destination only. Interface usage is actually not high on any interface on these switches according to SolarWinds. I don't currently have any NetFlow level of detail but will try to get this on Monday. Thanks for all the replies. I am going to try to stop the multicast source on Monday once approved, as it will not be required then. Will post any results.
