12-08-2025 10:52 PM - edited 12-08-2025 10:54 PM
Hello.
I have two questions about ICMP and fragmentation behavior.
The system jumbo MTU and all interfaces/SVIs are set to MTU 9216.
1.
When pinging without specifying a source, the packets are generated and processed by the ASIC (data plane).
When pinging with a source interface VLAN, the packets are generated by the control plane (CPU) and then passed to the data plane (ASIC) for processing.
Is my understanding correct?
2.
I noticed that the ICMP ping options include packet size, count, and timeout.
For packet size, the maximum value turns out to be 65,535 bytes (2^16 - 1), the maximum IP datagram size.
Since the MTU cannot be set to anything higher than 9216, is there a reason the ICMP packet size is allowed to exceed 9216?
When I increased the size and ran the ICMP test, fragmentation occurred and it was slow.
This might be too basic a question, but I'm writing because I need your help.
I would appreciate your answers.
12-08-2025 11:17 PM
Hi,
The default MTU value can be changed or kept, while when sending a packet you can choose a packet-size up to the maximum IP datagram size (why would you send a packet larger than the MTU? To see whether fragmentation occurs and whether the fragments are successfully delivered to the destination). Ethernet MTU and IP datagram size are two different values with no correlation or dependency between them.
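As a concrete illustration, here is a minimal sketch (in Python) of how a maximum-size ping would be fragmented over your 9216 MTU. It assumes a plain 20-byte IPv4 header with no options; actual fragment sizes can vary by platform.

# Sketch: fragments produced when a 65,535-byte IP datagram (the maximum the
# 16-bit Total Length field allows) is sent over a link with IP MTU 9216.
# Assumes a minimal 20-byte IPv4 header, no options. Offsets are shown in
# bytes (the IP header itself encodes them in 8-byte units).
IP_MTU = 9216
IP_HDR = 20
MAX_IP = 65535

payload  = MAX_IP - IP_HDR                # 65,515 bytes of ICMP data to carry
per_frag = (IP_MTU - IP_HDR) // 8 * 8     # 9,192 bytes: fragment data must be
                                          # a multiple of 8 (except the last)
offset, frags = 0, []
while offset < payload:
    size = min(per_frag, payload - offset)
    frags.append((offset, size))
    offset += size

for i, (off, size) in enumerate(frags, 1):
    print(f"fragment {i}: data offset {off:>5}, {size:>4} data bytes + {IP_HDR}-byte header")
print(f"total: {len(frags)} fragments for one maximum-size ping")

Each fragment travels as its own Ethernet frame, which is part of why a very large ping takes noticeably longer than a normal one.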
Thanks,
Cristian.
12-08-2025 11:35 PM
There was a BGP loop issue in the current network, so we ran an ICMP test with an increased packet-size.
I set the packet-size to the maximum value and ran a ping test, and all the BFD sessions on the equipment went down, causing a BGP flap.
That's why I was asking about the reason for the maximum packet-size.
12-09-2025 03:38 AM
Hi,
So let me make sure I understood what happened here: your network was stable (BFD and BGP were up), you sent out a large ICMP packet from a network device, and at that exact moment BFD went down, which signaled BGP to go down as well? What are your configured Rx/Tx intervals and multiplier values on your BFD sessions for BGP?
Thanks,
Cristian.
12-10-2025 05:16 PM - edited 12-10-2025 05:27 PM
bfd interval 300 min_rx 300 multiplier 3
bfd multihop interval 300 min_rx 300 multiplier 3
BFD settings are as above.
When I was running the test,
I tried to ping x.x.x.x.x.x with packet size 40000, count 10000, and timeout 1, but I set the timeout incorrectly (timeout 0).
However, the neighbors configured with BFD under the BGP template were disconnected.
So, I wonder whether BFD went down due to the continuous transmission, the CPU rise, or a lack of time for fragmentation before the bulk/echo packets were received.
We ran the same test on the BFD/BGP sessions, and the BFD/BGP flap happened the same way.
12-11-2025 03:39 AM - edited 12-11-2025 05:00 PM
So, I wonder whether BFD went down due to the continuous transmission, the CPU rise, or a lack of time for fragmentation before the bulk/echo packets were received.
Possibly, or perhaps you saturated the link long enough that BFD timed out. The latter can be computed.
BTW, I've used 64k pings for QoS link saturation testing, as they load a link in a way that ordinary pings will not.
From my observations, ordinary pings appear to pause between each ping request, so the link is not saturated. EDIT NB: from PT testing (for whatever that's worth), the "pause" appears to be that the next ping request is fired off either when the prior reply packet is received or when the timeout is exceeded.
With a 64k ping, all the ping fragments, I believe, are physically sent back-to-back. Possibly, the timing for ping requests remains the same, so the next ping packet would have less delay than it ordinarily would.
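To put rough numbers on "the latter can be computed", here is a minimal sketch. The BFD timers and the 40000-byte ping size are the ones from this thread; the 1 Gbps link speed and the L2 overhead figures are assumptions for illustration only.

# Rough estimate: how long a back-to-back burst of large pings keeps the wire
# busy, compared with BFD's detection time. Link speed and L2 overhead are
# assumed values; BFD timers and ping size come from this thread.
import math

BFD_INTERVAL_MS = 300                 # bfd interval 300 min_rx 300 multiplier 3
BFD_MULTIPLIER  = 3
detect_ms = BFD_INTERVAL_MS * BFD_MULTIPLIER        # 900 ms detection time

LINK_BPS    = 1e9                     # assumed 1 Gbps link (adjust to yours)
IP_MTU      = 9216                    # jumbo MTU from the thread
PING_SIZE   = 40000                   # IP datagram size used in the test
IP_HDR      = 20                      # minimal IPv4 header
L2_OVERHEAD = 14 + 4 + 8 + 12         # Ethernet hdr + FCS + preamble + IFG

payload   = PING_SIZE - IP_HDR
per_frag  = (IP_MTU - IP_HDR) // 8 * 8              # fragment data aligns to 8 bytes
fragments = math.ceil(payload / per_frag)           # fragments per ping

wire_bytes = payload + fragments * (IP_HDR + L2_OVERHEAD)
ping_ms    = wire_bytes * 8 / LINK_BPS * 1000       # wire time per ping

print(f"{fragments} fragments per ping, ~{ping_ms:.3f} ms of wire time each")
print(f"BFD detection time: {detect_ms} ms")
print(f"Pings sent truly back-to-back would need ~{math.ceil(detect_ms / ping_ms)} "
      f"in a row to keep the wire busy for the whole detection window")

Whether the burst is actually back-to-back on the wire also depends on how quickly the control plane can generate the fragments.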
12-11-2025 05:11 PM
It was just a simple question.
I don't think there will be any future tests with timeout 0, so I think I can end the question here.
Thank you so much for your help.
12-11-2025 03:36 PM
Hi,
From my experience, the outcome of you using timeout 0 clearly had a CPU impact; however, your BFD session flapped (and afterwards your BGP session flapped) not because the CPU was impacted, but because you consumed all the Ethernet TX ring slots for at least 1 second, which was enough for BFD to go down given the timers you're using.
Since this test with timeout 0 is not a real-life / valid test, and something you'll obviously not repeat, there's nothing to be addressed here, other than: don't do it again.
Thanks,
Cristian.
12-11-2025 05:09 PM
The CPU didn't go up much; as you said, the TX ring slot issue seems right.
We will set a timeout of 1 or more when we proceed with such tests.
Thanks for your help.
12-11-2025 05:38 PM
@Cristian Matei wrote:
Hi,
From my experience, the outcome of you using timeout 0 clearly had a CPU impact; however, your BFD session flapped (and afterwards your BGP session flapped) not because the CPU was impacted, but because you consumed all the Ethernet TX ring slots for at least 1 second, which was enough for BFD to go down given the timers you're using.
Since this test with timeout 0 is not a real-life / valid test, and something you'll obviously not repeat, there's nothing to be addressed here, other than: don't do it again.
Thanks,
Cristian.
I just updated my prior reply with some PT observational data, which suggests that the next ping transmission waits on either receipt of the prior ping request's reply or on the timeout value. So, a timeout of zero is very likely to create a transmission burst, although whether there's truly no transmission pause (with the zero timeout setting) might not be the case. (Why? Well, possibly the next ping transmission pauses until whatever clock is being used to measure response times advances beyond its current value.)
I fully agree that BFD would fail if it misses all its keepalives during the time interval it's measuring.
As to Ethernet filling all its TX ring slots: if that happens, they are usually backed by main RAM (software) buffers. However, it's certainly possible that all egress buffers, hardware and software, were exhausted and the BFD packets were lost (interface egress drops). It's also possible the BFD packets were not lost but were instead delayed by packets ahead of them in the egress queues. Either will cause BFD to see its peer connectivity as lost.
I recall Cisco, by default, tags BGP packets with IP Prec 6. I don't know if it does anything similar for BFD. QoS that "protects" such traffic would help ensure that it's neither dropped nor excessively delayed.
Cisco also has something called PAK_priority, though I believe Cisco doesn't use this for BGP; I'm unsure about BFD. Regardless, even if PAK_priority is active, I've read it's mainly to protect against packet loss, not packet delay. Further, QoS, depending on platform, might be adjusted to better ensure an SLA for BFD and/or BGP, to prevent just such an outage (if, indeed, it was caused by a burst of ping traffic).
Lastly, I would consider using timeout zero, possibly, as a handy way to attempt to saturate an interface. Ordinarily, I use an external traffic generator for that purpose, because ordinary pings don't generate enough traffic. However, ping with a max-IP-sized packet and a zero timeout might be a (new to me) substitute. Oh, I might mention that when doing such testing, I ensure the traffic is ToS-tagged to direct it to a specific egress policy for such traffic. (Also, BTW, Cisco's extended ping allows setting the ToS value.)
(One of my most common QoS tests, to demonstrate the "power" of QoS, is to send traffic to an egress interface at 10% or more above that interface's bandwidth capacity. But that traffic is commonly tagged to be prioritized below everything else. I.e., even though the interface is running at 100% and egress drops are counting up rapidly, all other traffic should see almost no impact to its SLAs.)
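To put a quick number on that last test: if the offered load is 110% of line rate, at least the excess has to be dropped somewhere. A minimal sketch of that arithmetic (the 110% figure is the example from the paragraph above; the 1 Gbps interface speed is only an assumption):

# Minimum drop-rate arithmetic for the oversubscription test described above.
# 110% offered load is the example figure; 1 Gbps is an assumed interface speed.
LINE_RATE_BPS = 1e9          # assumed interface speed
OFFERED_RATIO = 1.10         # offered load = 110% of line rate

offered_bps  = LINE_RATE_BPS * OFFERED_RATIO
excess_bps   = offered_bps - LINE_RATE_BPS
min_drop_pct = excess_bps / offered_bps * 100

print(f"Offered: {offered_bps / 1e6:.0f} Mb/s on a {LINE_RATE_BPS / 1e6:.0f} Mb/s link")
print(f"At least {min_drop_pct:.1f}% of the offered traffic must be dropped; with the")
print("test stream prioritized below everything else, the drops land almost entirely on it.")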
12-09-2025 02:17 AM
Max IP packet size is independent of the physical MTU, and various media have different MTUs. You might have different media with different MTUs along a physical path.
Traditionally, packet processing becomes more efficient as packet size increases, so allowing you to set the packet size to whatever you want, even larger than the source host's MTU, might be of some benefit. So it's allowed. That's why.
Does it make sense to allow this? Possibly not, but as it makes sense to allow using a source MTU larger than a later transit MTU, my guess is they (the IP designers) didn't want to make source-host IP processing any more complex than what was already required for fragmentation processing. (Often forgotten: the CPU limitations of the hardware when the IP specs were written. After all, it works as is, and there are ways to avoid the source exceeding its MTU, which are usually used. I.e., although you can do what you've described, normally an app wouldn't.)
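To illustrate the efficiency point with rough numbers, here's a minimal sketch; the 1 MB transfer size and the datagram sizes are illustrative only, and only the standard 20-byte IPv4 header is counted.

# Sketch of the per-packet overhead argument: larger datagrams mean fewer
# datagrams (and fewer per-datagram processing events) to move the same data.
# The 1 MB transfer and the datagram sizes below are illustrative only.
import math

IP_HDR = 20                       # minimal IPv4 header, no options
DATA   = 1_000_000                # example: 1 MB of application data

def packets_needed(datagram_size):
    """Datagrams required to carry DATA bytes (upper-layer headers ignored)."""
    return math.ceil(DATA / (datagram_size - IP_HDR))

for size in (576, 1500, 9216, 65535):
    print(f"{size:>6}-byte datagrams: {packets_needed(size):>5} needed for 1 MB")

(The fragments still appear on the wire when a datagram exceeds the MTU, of course; the potential saving is in per-datagram handling at the endpoints.)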