SPD flushes on LAN interface
08-02-2018 07:12 AM - edited 03-05-2019 10:49 AM
Hello,
For a while now we've been having an issue with one of the LAN interfaces on a Cisco 3845 router.
The number of flushes on the interface keeps increasing and I'm not sure why.
While searching online I came across the SPD (Selective Packet Discard) feature. After reading about it I understand why it is used, but I was unsure how it behaves in some scenarios, and I think it might have something to do with my flushes problem.
You can see some information on the interface below.
My question is: even though I've increased the input hold-queue value to 2500, if the SPD min/max thresholds are still 73/74 (because the lowest queue value on another interface on the router is 75), are packets dropped when the queue hits 74 instead of waiting until the full 2500 queue length is reached? If not, how can I debug why all those flushes are happening? (I really have tried everything I could think of to figure it out, but no luck so far.)
One thing worth mentioning is that a lot of multicast traffic goes through this router, so we do get bursts of traffic from time to time (I really think the flushes are because of those bursts, but I couldn't figure out a way around the issue yet). Another is that the WAN interface has the default queue value of 75 and a time-based rate-limit configuration.
Thanks in advance for your time and help!!
Labels: Other Routing, vEdge Routers

08-02-2018 07:15 AM
Hello,
can you post the output of 'show interfaces x', where x is the LAN interface in question?
08-02-2018 07:22 AM
Thanks for the quick reply!!!
MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full Duplex, 1Gbps, media type is RJ45
output flow-control is XON, input flow-control is XON
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:00, output 00:00:00, output hang never
Last clearing of "show interface" counters 1w6d
Input queue: 2/2500/0/34677936 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 1940000 bits/sec, 1561 packets/sec
5 minute output rate 4845000 bits/sec, 2064 packets/sec
299588657 packets input, 3522337704 bytes, 9443 no buffer
Received 329894117 broadcasts (279279592 IP multicasts)
0 runts, 0 giants, 87 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 330962736 multicast, 0 pause input
779424322 packets output, 3138866255 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 pause output
0 output buffer failures, 0 output buffers swapped out
08-02-2018 07:47 AM
Hello,
thanks for the output.
The min/max values SPD uses are based on the LOWEST input queue of any interface, so even if you set the input queue on your 'problem' interface to something higher, SPD still uses the lowest value configured on any interface. Your best option is therefore to set the hold-queue on all your interfaces to, e.g., 512, and then change the SPD thresholds in global configuration mode (this is a hidden command, so you have to type it in full):
ip spd queue threshold minimum 256 maximum 512
Can you give that a try ?
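Put together, a sketch of the change might look like this (the interface name below is just an example; repeat the hold-queue command on each of your interfaces):

```
conf t
! example interface -- repeat for every interface on the router
interface GigabitEthernet0/1
 hold-queue 512 in
exit
! hidden global command; must be typed in full
ip spd queue threshold minimum 256 maximum 512
end
```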
08-02-2018 07:52 AM
Just to be sure, will I have to increase the values on ALL interfaces, including the Loopback and the two tunnel interfaces configured on this router?
Thanks for your help, Georg!
08-02-2018 07:59 AM
Tunnel interfaces...very suspicious, since they usually have very low values. You might get away with only changing the value on the tunnel interfaces.
Either way, do it after hours. Another alternative would be to turn SPD off altogether (no spd enable in global config mode)...
08-02-2018 08:03 AM
Hello,
if you have some spare time, have a look at the document below, which explains very well what SPD does...
https://www.cisco.com/c/en/us/about/security-center/selective-packet-discard.html
08-03-2018 05:59 AM
Thanks for recommending this document. After reading it, the only doubt I have is regarding resources.
I now understand that even though the hold-queue on interface gi0/1 was increased to 2500, packets on that interface are flushed once the min/max threshold is reached (random drop or full drop, depending on how far past the threshold the queue is), because the hold-queue value on my LOWEST interface is still 75 and the GLOBALLY CONFIGURED min/max values are 73/74, respectively.
The only thing holding me back from increasing the hold-queue value on all interfaces, and consequently the min/max values, is uncertainty about the capabilities of the router.
So this leaves me with the question: once I increase all interfaces' input queue values, let's say to 512, how can I know the router will be able to handle, resource-wise, the up to 512 packets that could sit in the gi0/1 input queue? Is there a way to calculate the maximum number of packets a router can handle in its input queue without slowing down or crashing?
Again, thanks for your help and patience!!
08-03-2018 07:01 AM
Hello,
check the output of:
Router#show int | include line protocol|Input queue
This will list all interfaces and the input queue sizes. Are they all 0/75/0/0 ?
08-03-2018 07:06 AM
GigabitEthernet0/0 is up, line protocol is up
Input queue: 0/75/0/191 (size/max/drops/flushes); Total output drops: 0
GigabitEthernet0/1 is up, line protocol is up
Input queue: 2/2500/0/37667151 (size/max/drops/flushes); Total output drops: 0
FastEthernet0/0/0 is up, line protocol is up
Input queue: 0/75/0/152 (size/max/drops/flushes); Total output drops: 0
FastEthernet0/0/1 is administratively down, line protocol is down
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
Loopback0 is up, line protocol is up
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
Tunnel0 is up, line protocol is up
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
Tunnel1 is up, line protocol is up
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
Interface gi0/0 counters were cleared 2 weeks ago and Fa0/0/0 counters were never cleared!!!
08-03-2018 07:19 AM
Hello,
the problem is not so much that the router doesn't have enough resources, but the below:
"Caution: An increase in the hold queue can have detrimental effects on network routing and response times. For protocols that use SEQ/ACK packets to determine round-trip times, do not increase the output queue. Dropping packets instead informs hosts to slow down transmissions to match available bandwidth. This is generally better than duplicate copies of the same packet within the network, which can happen with large hold queues."
Setting the value to 256 or 512 shouldn't be a problem. I would start with 256 (and also configure 'ip spd queue threshold minimum 128 maximum 256'), and check if the flushes decrease.
If not, turn off spd altogether...
Of course, to be on the safe side, after hours...
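As a sketch, assuming example interface names (adjust to match your own interfaces, tunnels included):

```
conf t
! raise the input hold-queue on every interface
interface range GigabitEthernet0/0 - 1
 hold-queue 256 in
interface Tunnel0
 hold-queue 256 in
! then align the global SPD thresholds
ip spd queue threshold minimum 128 maximum 256
end
```

You should then be able to check the active thresholds with 'show ip spd'.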
08-22-2018 08:46 AM
Unfortunately, increasing the spd threshold had no visible effect on the interface flushes. I increased the spd threshold to 512 and, after seeing no improvement, increased it again to 1024, which made the "no buffer" counter climb faster than before, so I decreased it back to 512.
Any other suggestions?
Thanks for your help!
08-22-2018 12:55 PM
Hello,
not sure what we have already tried...did you turn off spd altogether (no spd enable in global config mode) ?
08-24-2018 06:34 AM
That said, if the 3845 is getting either gig bursts or CPU spikes due to ingress traffic, you might be able to tune the interface ingress queue (as noted by Georg) and the buffers (have you enabled buffer auto-tuning?) to absorb a gig burst without packet drops. If you succeed, though, you risk introducing latency, and with the input queue you cannot prioritize or manage drops very well. For CPU spikes caused by ingress traffic, depending on the traffic mix, enabling PMTUD on the router sometimes means larger packets are received, requiring less processing per packet.
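If your IOS release supports it, buffer auto-tuning is a one-line global command (a sketch; verify availability on your 3845's IOS version first):

```
conf t
! let IOS grow and shrink the public buffer pools automatically
buffers tune automatic
end
```

Afterwards, keep an eye on 'show buffers' for misses and failures to see whether the "no buffer" counter stops climbing.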
