12-08-2025 10:52 PM - edited 12-08-2025 10:54 PM
Hello.
I have two questions about ICMP behavior and fragmentation behavior.
The system jumbo MTU and all interfaces / SVIs are set to MTU 9216.
1.
When pinging without a source, the packets are generated and processed by the ASIC (data plane).
When pinging with the source set to an interface VLAN, the packets are generated by the control plane (CPU) and handed to the data plane (ASIC) for processing.
Is my understanding correct?
2.
I noticed that the main ICMP ping options are packet size / count / timeout.
For packet size, I found the maximum value to be 65535 bytes (2^16 - 1), the maximum size of an IP packet.
Given that the MTU is only 9216, is there a reason why the ICMP packet size is allowed to exceed 9216?
When I increased the size and pinged, fragmentation made it slow.
This might be too basic a question, but I'm writing because I need your help.
I would appreciate your answer.
12-08-2025 11:17 PM
Hi,
The default MTU value can be changed or kept. When sending a packet, you can choose a packet size up to the maximum IP payload. (Why would you send a packet larger than the MTU? To see whether fragmentation occurs and whether the fragments are successfully delivered to the destination.) Ethernet MTU and IP packet size are two different values with no correlation or dependency between them.
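To make the MTU-vs-packet-size distinction concrete, here is a rough sketch of how many fragments a maximum-size IP packet would need on a given link. It assumes a 20-byte IPv4 header with no options (an assumption, not stated in the thread) and that fragment payloads are multiples of 8 bytes:

```python
# Rough sketch: how a large IP packet is fragmented for a given link MTU.
# Assumes a 20-byte IPv4 header with no IP options.

IP_HEADER = 20

def fragment_count(total_ip_length: int, mtu: int) -> int:
    """Number of fragments needed to carry an IP packet of total_ip_length
    bytes (header included) over a link with the given MTU."""
    payload = total_ip_length - IP_HEADER
    # Each fragment carries at most (mtu - IP_HEADER) payload bytes,
    # rounded down to a multiple of 8 (fragment offsets count 8-byte units).
    per_frag = (mtu - IP_HEADER) // 8 * 8
    return -(-payload // per_frag)  # ceiling division

# A maximum-size 65535-byte IP packet over jumbo (9216) vs. standard (1500) MTU:
print(fragment_count(65535, 9216))  # 8 fragments
print(fragment_count(65535, 1500))  # 45 fragments
```

So even with jumbo frames, a maximum-size ping is fragmented; the allowed packet size is a property of IP, not of the link.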
Thanks,
Cristian.
12-08-2025 11:35 PM
There was a BGP loop in the current network, so we ran an ICMP test while increasing the packet size.
When I set the packet size to the maximum value and ran a ping test, all the BFD sessions on the equipment went down, causing a BGP flap.
That is why I asked about the reason for the maximum packet size.
12-09-2025 03:38 AM
Hi,
So let me make sure I understood what happened here: your network was stable (BFD and BGP were up), you sent out a large ICMP packet from a network device, and at that exact moment BFD went down, which signaled BGP to go down as well? What are the configured Rx/Tx and multiplier values on your BFD sessions for BGP?
Thanks,
Cristian.
12-10-2025 05:16 PM - edited 12-10-2025 05:27 PM
bfd interval 300 min_rx 300 multiplier 3
bfd multihop interval 300 min_rx 300 multiplier 3
The BFD settings are as above.
During the test,
I meant to run ping x.x.x.x.x.x packet-size 40000 count 10000 timeout 1, but I set the timeout incorrectly. (timeout 0)
However, the neighbors with BFD configured were disconnected from the BGP template.
So I wonder whether BFD was cut off due to continuous transmission, a CPU spike, or a shortage of time to process the fragments of the bulk packets before the echo packets were received.
We ran the same test on another BFD/BGP session, and the BFD/BGP flap happens the same way.
12-11-2025 03:39 AM
"So I wonder whether BFD was cut off due to continuous transmission, a CPU spike, or a shortage of time to process the fragments of the bulk packets before the echo packets were received."
Possibly, or perhaps you saturated the link long enough that BFD timed out. The latter can be computed.
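As a back-of-the-envelope check (a sketch only; the 300 ms / multiplier 3 values come from the config above, while the 1 Gb/s link speed is my assumption):

```python
# Back-of-the-envelope: does the ping flood occupy the link longer than
# the BFD detection time? Link speed of 1 Gb/s is an assumed value.

def bfd_detect_time_ms(min_rx_ms: int, multiplier: int) -> int:
    """BFD declares the session down after `multiplier` consecutive
    missed receive intervals."""
    return min_rx_ms * multiplier

def transmit_time_ms(total_bytes: int, link_bps: float) -> float:
    """Time to serialize total_bytes onto a link of link_bps bits/sec."""
    return total_bytes * 8 / link_bps * 1000

detect = bfd_detect_time_ms(300, 3)              # 900 ms with the settings above
flood = transmit_time_ms(10_000 * 40_000, 1e9)   # count 10000, size 40000, 1 Gb/s
print(detect, round(flood))                      # 900 vs. 3200 ms
```

If the flood keeps the link busy well past the 900 ms detection time, saturation alone can account for the BFD drop, before considering CPU load or reassembly delay.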
BTW, I've used 64k pings for QoS link-saturation testing in a way that ordinary pings cannot.
From my observations, ordinary pings appear to pause between each ping request, so the link is not saturated.
With a 64k ping, I believe all the fragments are physically sent back-to-back. Possibly the timing for ping requests remains the same, so the next ping packet has less delay than it ordinarily would.
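To illustrate the back-to-back effect, a quick sketch of how long one 64k ping keeps a link busy (the 1500-byte fragment size and 100 Mb/s link speed are assumed values for illustration):

```python
# Sketch: wire time of one 64k ping's fragments sent back-to-back.
# Fragment size (1500 B) and link speed (100 Mb/s) are assumed values.

FRAGS = 45          # fragments of a 65535-byte packet at MTU 1500
FRAG_BYTES = 1500
LINK_BPS = 100e6

# Total serialization time for all fragments, in milliseconds.
burst_ms = FRAGS * FRAG_BYTES * 8 / LINK_BPS * 1000
print(round(burst_ms, 2))  # ms the link is fully occupied per ping request
```

A burst of several milliseconds per request, repeated without the usual inter-ping pause, is what makes the 64k ping useful for saturation testing.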
12-09-2025 02:17 AM
Maximum IP packet size is independent of the physical MTU, and different media have different MTUs. You might have different media with different MTUs along a physical path.
Traditionally, packet processing becomes more efficient as packet size increases, so allowing you to set the packet size to whatever you want, even larger than the source host's MTU, might be of some benefit. So it's allowed. That's why.
Does it make sense to allow this? Possibly not, but just as it makes sense to allow a source MTU larger than a later transit MTU, my guess is the IP designers didn't want to make source-host IP processing any more complex than what was already required for fragmentation. (Often forgotten: the CPU limitations of the hardware when the IP specs were written. After all, it works as is, and there are ways to avoid a source exceeding its MTU, which are usually used; i.e., although you can do what you've described, normally an app wouldn't.)