We have an A9K GRE MTU and MSS issue.
Interface: Gi0/0/0/0

interface Loopback0
 ipv4 address 10.10.10.10 255.255.255.255
!
interface tunnel-ip100
 ipv4 address 10.2.2.1 255.255.255.252
 ipv4 tcp-mss-adjust enable
 tunnel source 10.10.10.10
 keepalive 5 4
 tunnel destination 10.10.10.20
The default MTU is 1500 bytes, so the TCP MSS on Gi0/0/0/0 is 1460 (1500 minus the 20-byte IP header and the 20-byte TCP header).
After we configure the GRE tunnel as above, the TCP MSS drops to 1436 because of the 24 bytes of GRE overhead (a 4-byte GRE header plus the new 20-byte outer IP header). So will packets with the DF bit set be dropped when the negotiated MSS is larger than 1436 bytes?
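The arithmetic above can be checked quickly; a minimal sketch of the byte accounting, using the header sizes from this thread (basic GRE header, no key/checksum/sequence options):

```python
# MSS math for a GRE tunnel over a 1500-byte MTU link.
LINK_MTU = 1500
OUTER_IP = 20   # new outer IPv4 header added by GRE encapsulation
GRE_HDR = 4     # basic GRE header (no optional fields)
INNER_IP = 20   # passenger IPv4 header
TCP_HDR = 20    # TCP header without options

# Largest passenger IP packet that fits after encapsulation.
tunnel_ipv4_mtu = LINK_MTU - OUTER_IP - GRE_HDR
# Largest TCP payload inside that passenger packet.
tunnel_tcp_mss = tunnel_ipv4_mtu - INNER_IP - TCP_HDR

print(tunnel_ipv4_mtu)  # 1476
print(tunnel_tcp_mss)   # 1436
```

So a segment built for an MSS of 1460 becomes a 1524-byte packet after encapsulation, which exceeds the 1500-byte link MTU.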
In this situation, can we use jumbo frames for GRE? And if that is a good choice, how do we configure it?
Thanks for your response; sorry, I didn't state it clearly.
We have a network with a GRE tunnel established between A9K1 and the other A9K across an intermediate network. The physical interfaces we interconnect are Gi0/0/0/0, and the GRE source and destination addresses are the Loopback0 interfaces of the local and remote ends respectively (the configuration I posted above). The GRE tunnel comes up normally, but we found that it discards large packets: for example, TCP traffic negotiated with an MSS of 1460 is dropped.
My guess is that because the MTU is 1500 and GRE adds encapsulation, the maximum TCP MSS through the tunnel can only be 1436 bytes, so packets sized for an MSS of 1460 are dropped. If so, what should I do? How should I configure the routers to stop the drops?
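One common fix is to clamp the MSS on the tunnel so TCP endpoints never negotiate more than 1436. A sketch, assuming an ASR 9000 running IOS XR; the NP identifier (np0 here) and exact syntax depend on your line card and release, so verify against your platform's documentation:

! Enable MSS adjust on the tunnel (already present in the config above)
interface tunnel-ip100
 ipv4 tcp-mss-adjust enable
!
! On the A9K the clamp value is set per network processor;
! np0 is an assumption -- use the NP that hosts Gi0/0/0/0
tcp-mss-adjust np np0 value 1436

With this in place, SYN packets traversing the tunnel have their MSS option rewritten down to 1436, so the resulting full-size segments fit within the 1500-byte path MTU after GRE encapsulation.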
In addition, I would like to ask whether jumbo frames can solve this problem, and if so, how should I configure them?
MTU has a lot of implications. If this is a core network, meaning you are an ISP, it is a good idea to run 9000+ bytes of MTU on core links; that way anything your customers send will easily fit. Most providers limit CE links to 1500 bytes plus overhead. Be aware that changing the MTU also affects your IGP adjacencies and how quickly the IGP syncs its database between peers, and BGP in your core can then send more prefixes per packet (packet stuffing).
If I were you, I would raise the MTU in your core (remembering that IOS XR accounts for the MTU at Layer 2 while IOS uses Layer 3, and other vendors do it one way or the other), and then your GRE traffic will pass easily.
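If every hop between the tunnel endpoints is under your control, raising the physical MTU end to end is what this looks like; a sketch assuming IOS XR, where the interface MTU includes the 14-byte Ethernet header (so 9114 leaves 9100 bytes for the IPv4 payload), and 9114 itself is just an illustrative jumbo value:

interface GigabitEthernet0/0/0/0
 mtu 9114
!
! Every device on the path between the tunnel endpoints must be raised
! the same way, or large GRE packets will still be dropped mid-path.

Note that a GRE packet carrying a 1460-byte MSS segment is only 1524 bytes on the wire, so even a modest increase above 1524 at Layer 3 on every hop would stop these particular drops; full jumbo sizing simply gives you headroom.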
Friend, you asked about jumbo frames. I don't see the relation between the GRE MTU and jumbo frames: if the GRE tunnel cannot handle 1460 bytes, how would it handle 9000?
Check this post where I share how to use a sweep-ping test to find the MTU size. Try a sweep ping and find the maximum packet size you can pass through the GRE tunnel.
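A quick version of that test can be run from the A9K itself; a sketch assuming the IOS XR `ping` options `size` and `donotfrag` (keyword names vary by release, and 10.2.2.2 stands in for the far-end tunnel address):

ping 10.2.2.2 size 1436 donotfrag
ping 10.2.2.2 size 1476 donotfrag
ping 10.2.2.2 size 1477 donotfrag

With DF set, the largest size that succeeds is your effective tunnel path MTU; if the tunnel's IPv4 MTU is 1476, the third ping should fail.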
I understand what you mean; I would like to know how to solve it. Should we adjust the MTU on all the devices along the path, adjust the MTU directly on the A9K tunnel endpoints, or configure jumbo frames? Which one is better, or what is your best advice? Thank you very much.