
Path MTU discovery tool

I am trying to understand how to use the path MTU discovery command to discover the MTU sizes of the different links on a path. Can anyone help?

VIP Mentor

A simple document I have liked so far is:


*** Rate All Helpful Responses ***

Thank you for this article. 

VIP Advocate


A sender sends a packet with the DF (Don't Fragment) bit set toward the destination. If any device in the middle, or the destination itself, has a smaller MTU than the sender, that device drops the packet and sends back an ICMP "Fragmentation needed, MTU XXXX B" message, so the sender learns the smallest MTU on the path. This process is called PMTUD.


Try to understand the same with an example:


R1 is the sender and R3 is receiver then

R1 (MTU 1500)-------->R2 (MTU1400)--------->R3 (MTU 1500)

The 1st packet of 1480+20 bytes (1500) is sent from R1 toward R3 with the DF bit set ON.

R2 receives the packet and drops it: the DF bit is set, so R2 cannot fragment it.

R2 sends an ICMP reply to R1 with its own MTU value.

R1 will send the next packet as 1380+20 (1400 B).
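The exchange above can be sketched as a small simulation (the helper name and path model are mine for illustration; real PMTUD happens inside the OS IP stack):

```python
def pmtud_probe(path_mtus, size):
    """Send one DF-set packet of `size` bytes along a path of link MTUs.
    If a hop's MTU is smaller than the packet, that hop drops it and
    'returns' an ICMP Fragmentation Needed message carrying its own MTU.
    Returns (delivered, reported_mtu)."""
    for hop_mtu in path_mtus:
        if size > hop_mtu:
            return False, hop_mtu   # ICMP "Fragmentation needed, MTU = hop_mtu"
    return True, None               # packet reached the destination

# R1 (MTU 1500) --> R2 (MTU 1400) --> R3 (MTU 1500)
path = [1500, 1400, 1500]

# First probe: 1480 bytes of payload + 20-byte IP header = 1500
print(pmtud_probe(path, 1500))   # (False, 1400) — R2 drops it, reports its MTU

# Sender retries at the reported MTU: 1380 + 20 = 1400
print(pmtud_probe(path, 1400))   # (True, None) — packet reaches R3
```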


Here ICMP plays an important role, so it is recommended not to block the ICMP protocol entirely. If you want to block ping, block only those specific message types.



Deepak Kumar

Don't forget to vote and accept the solution if this comment helped you!

Just some additional information.

Deepak mentions that the device that needs to fragment informs the sender of the maximum MTU it can handle. I recall that's true for later implementations, but early implementations only informed the sender that fragmentation was needed, not what their max MTU was. In that case, the sender had to probe (send more packets until one gets through) to discover the path MTU.
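That fallback probing can be sketched like this (a hypothetical decrement strategy; real implementations chose their own step sizes or plateau tables):

```python
def probe_without_report(path_mtus, start_size, step=20):
    """Early-PMTUD-style fallback: the ICMP message only says
    'fragmentation needed' without an MTU, so the sender just shrinks
    the packet and retries until one fits every hop on the path."""
    size = start_size
    while size > 0:
        if all(size <= mtu for mtu in path_mtus):
            return size          # this probe got through
        size -= step             # no MTU reported: try a smaller packet
    return 0

# R1 (1500) --> R2 (1400) --> R3 (1500): probing down from 1500 in steps of 20
print(probe_without_report([1500, 1400, 1500], 1500))   # 1400
```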

Also keep in mind, you may not be able to discover the max MTU of each hop, because earlier hops will "hide" downstream.


R1 (MTU 1500)-------->R2 (MTU 1000)--------->R3 (MTU 1400)--------->R4 (MTU 1500)

R2's MTU of 1000 will always "hide" R3's MTU of 1400.

Also, it takes a too-large packet to trigger the DF condition, which means you might find a further reduction after an earlier one.


R1 (MTU 1500)-------->R2 (MTU 1400)--------->R3 (MTU 1000)--------->R4 (MTU 1500)

If you send a 1500-byte packet, it will trigger R2, but when you send a 1400-byte packet to pass R2, then R3 will trigger. And if your first packet was 1200 bytes, R3 would trigger first, effectively "hiding" R2's lower MTU too.
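Both orderings are easy to see in a quick sketch (again a toy model, not the real stack):

```python
def first_drop(path_mtus, size):
    """Return the MTU of the first hop that drops a DF-set packet of
    `size` bytes, or None if the packet gets through."""
    for hop_mtu in path_mtus:
        if size > hop_mtu:
            return hop_mtu
    return None

# Path from the example: R1 (1500) --> R2 (1400) --> R3 (1000) --> R4 (1500)
path = [1500, 1400, 1000, 1500]

print(first_drop(path, 1500))   # 1400 — R2 triggers first
print(first_drop(path, 1400))   # 1000 — only now does R3 trigger
print(first_drop(path, 1200))   # 1000 — R3 triggers first; R2's 1400 is never seen
```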

Also keep in mind that networks can be dynamic; as links come and go, the max MTU can change while you are sending multiple packets. (NB: PMTUD, by default, will retry the sender's max MTU after some time interval to detect a change to a larger MTU toward the destination.)

What commands would I use to set this up?


It depends on the application. "Under the covers," such an application only needs to set the DF bit (and note failures and/or ICMP message results).

As to PMTUD "tools" on a Cisco network device, I generally just use ping.
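For example, probing with a full-size, non-fragmentable packet (192.0.2.1 is a placeholder address; check the exact options on your platform and IOS version):

```
! Cisco IOS: send a 1500-byte packet with the DF bit set
ping 192.0.2.1 size 1500 df-bit

# Linux: -M do sets DF; -s is payload size, so 1472 + 28 bytes of headers = 1500
ping -M do -s 1472 192.0.2.1

# Windows: -f sets DF; -l is payload size
ping -f -l 1472 192.0.2.1
```

If the reply is "Packet needs to be fragmented but DF set" (or the ping simply fails), lower the size until it succeeds; the largest size that gets through is the path MTU.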

How would I set this up to monitor this information? 


MTU settings rarely need to change, so this is usually only for testing when you have fragmentation issues in the network.


You can use the tool below:
