04-05-2023 08:17 PM - edited 05-04-2023 05:52 AM
on 04-05-2023 08:51 PM
Hello! To configure TCP window scaling on a Cisco router, you can use the following command:
ip tcp window-size <window-size>
Replace <window-size> with the desired window size value in bytes. This command sets the maximum size of the receive window advertised to a remote TCP peer.
If you want to enable TCP selective acknowledgement (SACK), you can use the following command:
ip tcp selective-ack
Regarding the "tcp window-scale" command, it is not available on all Cisco IOS versions. The "window-size" command is the correct command to use on IOS XR, and it should affect all TCP traffic passing through the router.
You can use the "receive-queue" command to adjust the size of the receive queue for a particular interface, but this may not be necessary for most use cases.
on 04-05-2023 09:04 PM
Thank you very much for your response! One last question: what is the default value if tcp window-size is not configured? And what value would be recommended for a host with 40 ms latency if I am currently measuring 600 Mbps over TCP and 10G over UDP? I know there are no exact values, but what should I use as a starting point? And does this require a router reload or restart?
04-05-2023 09:06 PM - edited 04-05-2023 09:09 PM
The default value is 16K bytes on the XR12000. What is the router model?
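For a rough starting point, the window needed to fill a path is its bandwidth-delay product (BDP). A quick sketch of the arithmetic, using the 600 Mbps / 40 ms figures from the question above:

```python
# Bandwidth-delay product: the minimum TCP window (in bytes) needed to
# keep a path "full" at a given rate and round-trip time.
def bdp_bytes(rate_mbps: float, rtt_ms: float) -> int:
    """Window (bytes) = rate (bits/s) * RTT (s) / 8."""
    return int(rate_mbps * 1_000_000 * (rtt_ms / 1000) / 8)

# 600 Mbps at 40 ms RTT needs roughly a 3 MB window:
print(bdp_bytes(600, 40))     # 3000000

# filling 10 Gbps at 40 ms would need about 50 MB:
print(bdp_bytes(10_000, 40))  # 50000000
```

Note that 3 MB is far beyond the 65535-byte ceiling of that CLI range; windows that large rely on TCP window scaling (RFC 1323), which the end hosts negotiate between themselves.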
on 04-05-2023 09:14 PM
ASR9010 with IOS XR 7.4.1. The options are <2048-65535>, so if it says 16K, that means 16384, right? What window size should I try as a starting point to see any positive change? Also, since tcp window-size is a global configuration, I'm not sure whether I need to reload or restart the router; is it necessary?
on 04-05-2023 10:25 PM
No need to restart. If you want to revert, you can use "no tcp window-size".
on 04-05-2023 09:48 PM
Anyway, I set minimum and maximum values for "tcp window-size" and "tcp receive-queue" under "configure terminal", enabled selective-ack, and applied the change, but TCP performance is still low. In fact, I didn't notice any change at all, and yes, I did commit the changes. This menu doesn't exist under "ip tcp"; it's "tcp" directly under "configure terminal". I'm not sure whether I'm configuring control-plane settings instead of data-plane settings.
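For anyone following along, on IOS XR these knobs sit directly under global configuration rather than under "ip tcp" as on classic IOS. A sketch of the sequence described above; the numeric values are placeholders, not recommendations:

```
RP/0/RSP0/CPU0:ASR9010# configure terminal
RP/0/RSP0/CPU0:ASR9010(config)# tcp window-size 65535
RP/0/RSP0/CPU0:ASR9010(config)# tcp selective-ack
RP/0/RSP0/CPU0:ASR9010(config)# tcp receive-queue 8
RP/0/RSP0/CPU0:ASR9010(config)# commit
```

As the next reply explains, though, these settings only apply to TCP sessions that terminate on the router itself, which is why they had no effect on transit throughput.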
04-09-2023 02:25 AM - edited 04-09-2023 02:27 AM
@galaxyord I think you are referring to traffic which *transits* the router, right (data plane)?
Assuming that is the case then the info which @davislee2122 gave you is incorrect as that is for traffic terminating or originating on the IOS box - so yes effectively control plane. He/she has been posting chat-bot answers across the communities so probably has little/no actual knowledge of the subjects!
On IOS and IOS-XE the correct command for transit traffic is "ip tcp adjust-mss <value>" to change the advertised MSS value on the TCP SYN packet. It's applied on the interface you expect the traffic to come in via. The idea is to select a low enough value to ensure there is no packet fragmentation required on the path taking account of protocol overheads like tunnels, IPSEC etc. For example Cisco now recommends (and uses as default) 1250 for CAPWAP traffic from APs.
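A sketch of that on IOS/IOS-XE; the interface name and value here are examples only:

```
interface GigabitEthernet0/0/1
 ip tcp adjust-mss 1250
```

The router then rewrites the MSS option in TCP SYN packets crossing that interface, so the endpoints never agree on segments too large for the path.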
Now you're trying to do that on ASR9010 running IOS-XR which is quite different - for that see:
https://community.cisco.com/t5/service-providers-knowledge-base/tcp-mss-adjust-on-asr9000/ta-p/3138507
TCP window size is not something you can set in the network for data plane traffic. The clients on each end manage that dynamically. Setting an appropriate MSS may help to optimise that and hence improve throughput. Most OS now optimise window size automatically but any options to tune that will be OS specific on the device.
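As one OS-specific illustration, on Linux the kernel auto-tunes each socket's window between per-socket min/default/max limits set by sysctls, so raising the maxima lets it grow toward the path's bandwidth-delay product. The values below are illustrative, not recommendations:

```
# /etc/sysctl.conf -- min / default / max buffer sizes in bytes
net.ipv4.tcp_rmem = 4096 131072 16777216
net.ipv4.tcp_wmem = 4096 16384 16777216
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
```

Apply with "sysctl -p" and re-test; Windows and macOS have their own equivalents.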
on 04-09-2023 04:40 AM
That's right, I mean the data plane. The problem I am facing is that TCP performance across the network degrades as latency increases, that is:
If I run TCP tests between two hosts at 2 ms, the throughput of a single connection always exceeds 1 Gbps. However, when I test on the same network at 8 ms, the throughput becomes asymmetric, with values around 900/500 Mbps. I think this is due to the TCP window size. I am aware that the endpoints negotiate this dynamically, which is why I don't understand why they can't choose the correct value, so perhaps something in the data plane is misconfigured. The transport network MTU is jumbo frames, and the end devices have an MTU of 1500. So, can adjusting TCP MSS help? And given the network characteristics above, what would be a standard value?
on 04-09-2023 05:01 AM
I don't know the complete topology of your network so can't say whether you need to change TCP MSS. If your entire network is ethernet and larger (MTU >= 1500) then you should not need to touch MSS. MSS tuning is only needed if there's an MTU *less* than 1500 in the path.
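The arithmetic behind that rule of thumb: MSS is the payload left over after the IP and TCP headers, so only an encapsulation that shrinks the effective MTU calls for tuning. A quick sketch (the overhead figures are typical values, not exact for every encapsulation):

```python
IP_HEADER = 20   # IPv4, no options
TCP_HEADER = 20  # no TCP options

def mss_for(mtu: int, tunnel_overhead: int = 0) -> int:
    """Largest TCP payload that fits in one packet on this path."""
    return mtu - tunnel_overhead - IP_HEADER - TCP_HEADER

print(mss_for(1500))      # 1460: plain Ethernet path, no tuning needed
print(mss_for(1500, 24))  # 1436: e.g. a GRE tunnel somewhere in the path
```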
What are those end devices? What OS/version?
What are you using to test that throughput? (iperf is generally considered to be best)
Other apps may be affecting it at app level depending on what idiotic things a developer might have coded, so validate it with something like iperf first.
Also get packet captures of a "good" connection and a "bad" connection at both ends and compare them to see what's different. If window size is too small then increased round trip time will lead to reduced throughput. Another thing to consider is packet loss - is anything in the path (including the end devices) dropping packets? (this will be obvious on a packet capture because you'll see missing packets and retransmissions). A multi-path network might also affect throughput if packets arrive out of order. Most OS and stacks should handle that fine today but it could also be a factor.
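For reference, a minimal iperf3 workflow for this kind of check (hostnames are placeholders). The -w option pins the socket buffer, which is a quick way to see whether window size is the limiting factor:

```
# on server A
iperf3 -s

# from server B: single TCP stream for 30 s, then the reverse direction
iperf3 -c serverA -t 30
iperf3 -c serverA -t 30 -R

# repeat with an explicit window near the path's bandwidth-delay product
iperf3 -c serverA -t 30 -w 4M
```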
on 04-09-2023 05:45 AM
Can I share a screenshot of traffic that should give 10G/10G but instead gives 200/500 Mbps? The topology is symmetric, there are no multiple paths, and it happens even on edge routers: iperf, Ookla, Drive, etc.
on 04-09-2023 05:49 AM
You can share a screenshot (of what?) but not clear how that will help.
You really need to get into the technical details to understand what is causing your problem and that's why you should get packet captures.
on 04-09-2023 06:00 AM
I have packet captures. In a topology, I get these results:
Server-->ASR9k-->internet
Asymmetric traffic and lower throughput, varying with the host's latency. For example, a host at 40 ms gets 700/500 Mbps (down/up), and one at 50 ms gets 400/300 Mbps (these are just examples; actual numbers vary). The MTU to the internet is 1500 bytes, and the servers also use 1500 bytes. There is no fragmentation.
Server--->Switchl2--->NCS5500-->Server
700mbps down/1000mbps up, 1ms latency.
With packet captures of both scenarios, I find many packets with "TCP Previous segment not captured," "dup ack," "tcp ACKed unseen segment."
What I don't understand is why this is happening in the second scenario, which is a server with 1ms latency.
The only strange thing in the second test is that the NCS shows "Hardware usage name: encap, Bank_0 Total in-use 100%" while the rest of the banks are empty. Could this be affecting it?
04-09-2023 07:21 AM - edited 04-09-2023 07:21 AM
1. Testing to internet - results will vary. Many services will actively rate limit client connections and it can depend on how busy the server is. You're not comparing apples with apples!
2. Is the server-to-server path internal to your infrastructure (no internet)? Did you do the packet capture on both server1 and server2? Did you monitor the kernel and TCP stats on both servers while running the test? Did you test both ways: running the iperf server on one and testing from the other, then reversing?

Embedded packet captures on the network devices can be very useful but might miss packets depending on the switching path and feature-related punts. I'm not familiar with NCS, but where are you seeing that message?

Packet captures on the servers might also miss packets depending on drivers, how busy the CPU is, buffers, how quickly the app offloads packets from the kernel and a whole bunch of other things. But at least with a simultaneous capture on both ends you can piece together the full picture. At least then you can start focussing your investigation.
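For the simultaneous captures, something along these lines on both servers (interface and peer address are placeholders; 5201 is iperf3's default port). A short snaplen keeps the files small while still preserving the TCP headers:

```
tcpdump -i eth0 -s 128 -w side_a.pcap host <peer-ip> and port 5201
```

Open the two files side by side in Wireshark and follow the same stream; a segment that appears leaving one capture but never arrives in the other tells you which direction, and roughly where, the loss is happening.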
Presume you've done the obvious things like making sure the servers are fully up to date - OS and especially network drivers. And make sure the software on network devices is up to date too.
If you can't work it out open a TAC case so that Cisco can help you look through all the network devices for any issues.
04-09-2023 08:08 AM - edited 04-09-2023 08:31 AM
"ip tcp window-size" only affects TCP traffic generated by the router itself; it does not affect traffic passing through the router's interfaces.
Server--->Switchl2--->NCS5500-->Server <<- I understand there is some latency here?
On the NCS5500,
try "show interface x/x summary" <<- check whether any packets are sitting in the queue.
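In addition to the interface counters, a couple of XR show commands worth collecting on the NCS5500 while the test runs (interface name is an example; the exact command set varies by release):

```
show interfaces TenGigE0/0/0/0
show controllers npu stats counters-all instance all location all
```

Non-zero drop or discard counters here would point at the platform rather than the end hosts.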