06-01-2022 01:56 PM - last edited on 06-11-2022 02:35 AM by Translator
I need to adjust the TCP MSS value to support GRE tunnels on an ASR-920-12CZ-A running IOS-XE 16.12.05. I've configured ip tcp adjust-mss 1436 on the egress interface, but I can see in a packet capture that the value is still 1460. I've verified that internet-bound traffic does traverse this interface, and I also ran the same setup through a CSR1000v in CML, where it worked as it should. Are there any device limitations that I'm running into? I'm going crazy trying to get this working. I've looked through many articles and forum posts and haven't seen anything relevant aside from how the command/function is supposed to work.
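For reference, the relevant part of the configuration in question would look roughly like this (a sketch; the interface name and addressing are placeholders, not my actual config):

```
! Hypothetical egress interface facing the internet
interface GigabitEthernet0/0/1
 description Internet uplink (placeholder)
 ip address 198.51.100.1 255.255.255.252
 ip tcp adjust-mss 1436
```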
Any help would be greatly appreciated.
Solved!
06-10-2022 11:38 AM
Thanks for all of the input. I finally got confirmation from Cisco TAC and their Business Unit: this feature/command is not supported on the ASR-920 and is not on the roadmap to be implemented. So we will eventually be replacing these with another router; meanwhile we've applied the MSS change to the endpoints, which is less than ideal but works for now.
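For anyone else who lands here, one way to apply that endpoint workaround on a Linux host is to lower the advertised MSS via the route's advmss attribute (a sketch; the gateway address is a placeholder):

```
# Advertise a 1436-byte MSS on the default route
# (198.51.100.1 is a placeholder gateway address)
ip route change default via 198.51.100.1 advmss 1436
```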
06-01-2022 02:40 PM - last edited on 06-11-2022 02:41 AM by Translator
ip tcp adjust-mss is different from ip mtu. For your case you need to configure the ip mtu to be 1436, and set the TCP MSS at least 40 bytes lower than that value.
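In config terms, that suggestion would look something like this (a sketch on a hypothetical tunnel interface; 1396 = 1436 minus 40 bytes of IP + TCP headers):

```
interface Tunnel0
 ip mtu 1436
 ! MSS at least 40 bytes below the IP MTU
 ip tcp adjust-mss 1396
```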
06-01-2022 03:41 PM
In our case we will have asymmetric routing, where traffic leaving the router doesn't traverse the GRE tunnels, but the return traffic will. This is for a DDoS protection service. Their guides and configuration samples tell me that we need to have MSS clamping in place.
06-01-2022 03:49 PM - last edited on 06-11-2022 02:51 AM by Translator
I don't quite get what you want, but ip tcp adjust-mss only adjusts the MSS of the TCP segment; it does not adjust the MTU of the IP packet, which is why you see 1460 in the capture. I.e. TCP MSS 1436 + 40 bytes of inner IP/TCP headers gives a 1476-byte packet, and the GRE and outer IP headers (24 bytes in your case) bring it back up to 1500 on the wire. If you want an effective ip mtu of 1436, then you need ip tcp adjust-mss 1396.
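Spelling out the usual overhead arithmetic (assuming 20-byte inner IP + 20-byte TCP headers, and 24 bytes for the GRE + outer IP encapsulation):

```
MSS 1436 + 40 (inner IP + TCP)  = 1476-byte IP packet entering the tunnel
1476     + 24 (GRE + outer IP)  = 1500 bytes on the wire
To target an ip mtu of 1436:  MSS = 1436 - 40 = 1396
```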
06-01-2022 04:08 PM - last edited on 06-11-2022 02:53 AM by Translator
My main issue is why ip tcp adjust-mss is not applying to the traffic flowing through the router; I am looking at the MSS value in the TCP handshake. When I put the same configuration in the virtual lab, I get the results I am looking for. Both are running IOS-XE but different versions, so it's not an exact simulation. It looks like I'm either hitting a bug or this is something not supported by the router, though I'm not finding any documents confirming either.
06-01-2022 04:33 PM - last edited on 06-11-2022 02:56 AM by Translator
(edited) OK, in your case did you configure ip tcp adjust-mss on both sides of the tunnel? The router adjusts the MSS in the SYN, i.e. the client initiating the TCP traffic must be behind the router configured with ip tcp adjust-mss; if you configure it only on the other peer, I don't think it will take effect. So you need to configure it on both sides.
06-02-2022 03:29 PM - last edited on 06-11-2022 02:58 AM by Translator
"so you need to config in both side."
That's often done, but I recall (?) TCP adjust-mss will work on either ingress or egress traffic on the interface it's applied to.
06-02-2022 03:37 PM - last edited on 06-11-2022 02:58 AM by Translator
You can adjust a GRE tunnel's IP MTU to 1476 to try to avoid fragmentation for non-TCP traffic. (For this to work, the transmitting client needs to set the DF bit.) TCP adjust-mss, for GRE, would usually be set to 1436.
BTW, there's a great Cisco Tech Note on dealing with IPv4 fragmentation in different situations. I recall (?) this Tech Note might also address using PMTUD on the tunnel's egress traffic (in case, somewhere along the path, a 1500 MTU isn't available).
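Put together, the values being described would look roughly like this on a tunnel interface (a sketch; the tunnel number is a placeholder):

```
interface Tunnel1
 ip mtu 1476
 ip tcp adjust-mss 1436
```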
06-02-2022 03:39 PM
Please see my comment below.
Thanks
06-02-2022 12:45 AM - edited 06-02-2022 12:45 AM
Hello
My understanding is that TCP MSS clamping is relative to the interface MTU and takes effect on the directly connected link. So if the endpoints are NOT directly connected and there are additional in-path links with a smaller interface MTU, then TCP MSS clamping alone won't prevent fragmentation; PMTUD would be required to accommodate the in-path links between those endpoints.
06-02-2022 10:14 AM - last edited on 06-11-2022 03:00 AM by Translator
The TCP adjust-mss only works during TCP session setup, i.e. enabling it won't impact existing TCP sessions.
You mention enabling on "egress interface". With GRE, you'll want to apply it on the GRE tunnel interface.
06-02-2022 03:36 PM - last edited on 06-11-2022 03:04 AM by Translator
@Joseph W. Doherty @dcrozier7
Mr. Joseph's point is about whether you apply the MSS clamp on the egress interface or under the GRE tunnel interface, so I labbed both configurations:
1 - Configured ip tcp adjust-mss 1000 under the egress interface (the tunnel source).
Result: the TCP handshake ignores the MSS value on the egress interface and uses the default 1460.
2 - Configured ip tcp adjust-mss 1000 under the GRE tunnel interface.
Result: the TCP handshake sets the MSS to 1000 <<<<
Now, both sides vs. a single side:
Single-sided: depends on which side initiates the traffic.
If tcp adjust-mss is configured on the side initiating the traffic:
SYN 1000
SYN ACK 1000
If tcp adjust-mss is configured on the other side:
SYN 1460
SYN ACK 1000
Double-sided: both sides use the same MSS in SYN and SYN+ACK:
SYN 1000
SYN ACK 1000
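To make the two lab setups concrete, the configurations tested above would look roughly like this (a sketch; interface names and the tunnel destination are placeholders):

```
! Setup 1: clamp on the physical egress (tunnel source) interface
interface GigabitEthernet0/0/0
 ip tcp adjust-mss 1000    ! result: handshake still showed MSS 1460
!
! Setup 2: clamp on the GRE tunnel interface
interface Tunnel0
 tunnel source GigabitEthernet0/0/0
 tunnel destination 203.0.113.2
 ip tcp adjust-mss 1000    ! result: handshake showed MSS 1000
```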
06-03-2022 07:26 AM - last edited on 06-11-2022 03:05 AM by Translator
"Mr.Joseph point that you apply the MSS in egress interface not under the GRE tunnel interface SO"
Sorry, I must have been unclear. What I'm trying to say is the converse, i.e. mss-adjust goes on the tunnel interface, not the physical egress interface (i.e. the interface transmitting [or receiving] the GRE-encapsulated packets).
As to single side vs. double side, I recall (?) the actual MSS used by both sides is the lowest common denominator. I.e. both sides can advertise different MSSs, but they will both use the lower value.
"See" what happens after session setup when only using "single sided" MSS adjust.
06-03-2022 09:19 AM - last edited on 06-11-2022 03:07 AM by Translator
Thank you all for the feedback and input, but this still doesn't help our situation. Maybe I was unclear in the original post. Here is what I've gotten from the vendor support.
Reasoning about MSS can feel backwards, especially in scenarios with asymmetric routing. This is very common with Magic Transit, where Internet → you traffic goes through a GRE tunnel and thus has MTU 1476, while you → Internet traffic does not go through GRE and has path MTU 1500. In this scenario, we want to reduce the MSS value in packets from your end → Internet in order to reduce the size of packets sent from Internet → you. This means that the configuration must be done either on your end-hosts, or an MSS clamp must be applied on some intermediary device on the egress path of traffic leaving your network.
On my tunnel interfaces, I have both
ip mtu 1476
ip tcp adjust-mss 1436
On the physical interface I have
ip tcp adjust-mss 1436
Ingress traffic from the internet goes through the tunnel, while egress traffic out to the internet does not. So I do not need to set the MTU on egress to 1476, as far as I understand and per what they are telling me.
See the attached screenshot for a better picture of the setup.
Meanwhile I am in the process of purchasing a support contract to have Cisco TAC take a look. I think it's not a problem with the configuration, but maybe a hardware limitation or a bug. It's pretty similar to this one, though that bug is for the ASR9000 and I have an ASR920: https://quickview.cloudapps.cisco.com/quickview/bug/CSCvg25871
06-03-2022 09:51 AM
Scenario 4 shows an asymmetric routing example where one of the paths has a smaller minimum MTU than the other. Asymmetric routing occurs when different paths are taken to send and receive data between two endpoints. In this scenario, PMTUD will trigger the lowering of the send MSS only in one direction of a TCP flow. The traffic from the TCP client to the server flows through Router A and Router B, whereas the return traffic that comes from the server to the client flows through Router D and Router C. When the TCP server sends packets to the client, PMTUD will trigger the server to lower the send MSS because Router D must fragment the 4092 byte packets before it can send them to Router C.
The client, on the other hand, will never receive an ICMP "Destination Unreachable" message with the code that indicates "fragmentation needed and DF set" because Router A does not have to fragment packets when it sends them to the server through Router B.