1308 Views · 0 Helpful · 7 Replies

ACE 30 waits for TCP ACK

p.hruby
Level 1

Hi,

I'd like to solve a problem that occurs when our client communicates with an HTTP server through the ACE SM. See the attached picture.

 

The problem is that the HTTP response from the server (200 OK) is split into two packets. Both packets are sent by the backend HTTP server in rapid succession.

The ACE forwards the first packet, but then waits for the ACK from the client and only then sends the second one. It takes about 200 ms until the client sends the ACK.

 

One transaction consists of hundreds of such HTTP requests, which means the whole transaction takes approx. 25 seconds when it is balanced by the ACE. When I connect directly to the backend server, the transaction takes approx. 5 seconds.
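(Each delayed response adds roughly 200 ms, so on the order of a hundred such requests accounts for the extra ~20 seconds.)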

I'm quite sure the problem is not related to the TCP window.

 

Is there any parameter on the ACE that affects this behaviour (waiting for the ACK from the client before the second packet is sent)?

 

Petr

 

Accepted Solution

Hi Petr,

Can you use "set tcp wan-optimization rtt 0" and see if the issue goes away?

Regards,

Kanwal

Note: Please mark answers if they are helpful.


7 Replies

Kanwaljeet Singh
Cisco Employee

Hi Petr,

Can you try this:

 Setting the ACK Delay Timer

You can configure the ACE to delay sending the ACK from a client to a server. Some applications require delaying the ACK for best performance. Delaying the ACK can also help reduce congestion by sending one ACK for multiple segments rather than ACKing each segment individually. To configure an ACK delay, use the set tcp ack-delay command in parameter map connection configuration mode. The syntax of this command is as follows:

set ack-delay number

The number argument is an integer from 0 to 400 ms. The default is 200 ms.

For example, to delay sending an ACK from a client to a server for 400 ms, enter:

host1/C1(config-parammap-conn)# set ack-delay 400

You can configure it under parameter type connection.
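
For example, a rough sketch of how it could be applied (the parameter-map, policy, and class names below are placeholders, not taken from your configuration; only the set ack-delay line comes from the passage above):

! ACK-DELAY-CONN, CLIENT-VIP-POLICY and VIP-CLASS are placeholder names
parameter-map type connection ACK-DELAY-CONN
  set ack-delay 400

policy-map multi-match CLIENT-VIP-POLICY
  class VIP-CLASS
    connection advanced-options ACK-DELAY-CONN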

Regards,

Kanwal

Note: Please mark answers if they are helpful.

Hi Kanwal,

this didn't solve the issue.

I think this command delays the ACK packets sent from the client to the backend server.

That is not my case. I need to force the ACE not to wait for the ACK from the client, but to send the packet from the backend server (which is already in the ACE's buffer) to the client immediately. The TCP window is big enough for that.

 

 

Regards

Petr

 

 

Hi Petr,

Do you have a Layer 7 configuration involved here?

When the ACE is configured to do Layer 7 load balancing, it needs to proxy the connection in order to parse and manipulate the data at the HTTP layer.

While this is being done, the ACE waits for the client's ACK to the first HTTP response packet before it un-proxies the connection and forwards the subsequent data packets to the client.

Once the ACE un-proxies the connection, it no longer keeps the TCP segments in its buffers. Because of this, the ACE cannot un-proxy the connection until it gets the ACK from the client; otherwise it could lose packets.

In this case we can force the connection to stay proxied using set tcp wan-optimization rtt 0. Keep in mind, though, that the ACE can handle at most 512k proxied connections.
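
As a rough sketch, that would look something like this (the parameter-map, policy, and class names are placeholders; the parameter map is applied under the VIP class with connection advanced-options):

! WAN-OPT-CONN, CLIENT-VIP-POLICY and VIP-CLASS are placeholder names
parameter-map type connection WAN-OPT-CONN
  set tcp wan-optimization rtt 0

policy-map multi-match CLIENT-VIP-POLICY
  class VIP-CLASS
    connection advanced-options WAN-OPT-CONN

With rtt 0 the connection stays fully proxied, which is why the 512k limit above applies.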

Regards,

Kanwal

Note: Please mark answers if they are helpful.

Hi Kanwal,

all traffic is sent to one serverfarm.

 

policy-map type loadbalance first-match TIA-TEST2-POLICY
  class class-default
    sticky-serverfarm TIA-TEST2-FARM-STICKY
    insert-http X-ace-ip header-value "%is"

 

 

Petr

 

Hi Petr,

Can you use "set tcp wan-optimization rtt 0" and see if the issue goes away?

Regards,

Kanwal

Note: Please mark answers if they are helpful.

Hi Kanwal,

that helped! Thanks.

 

 

Regards

Petr

 

pukumar2
Level 1

Hi Petr,

Since your issue is solved now, you might want to check out a new product called ITD.

It is a simple and faster solution.

ITD provides:

  1. ASIC based multi-terabit/s L3/L4 load-balancing at line-rate
  2. No service module or external L3/L4 load-balancer needed. Every N7k port can be used as load-balancer.
  3. Redirect line-rate traffic to any devices, for example web cache engines, Web Accelerator Engines (WAE), video-caches, etc.
  4. Capability to create clusters of devices, for example, Firewalls, Intrusion Prevention System (IPS), or Web Application Firewall (WAF), Hadoop cluster
  5. IP-stickiness
  6. Resilient (like resilient ECMP)
  7. VIP based L4 load-balancing
  8. NAT (available for EFT/PoC). Allows non-DSR deployments.
  9. Weighted load-balancing
  10. Load-balances to large number of devices/servers
  11. ACL along with redirection and load balancing simultaneously.
  12. Bi-directional flow-coherency: traffic from A-->B and B-->A goes to the same node.
  13. Order-of-magnitude OPEX savings: reduction in configuration and ease of deployment
  14. Order-of-magnitude CAPEX savings: wiring, power, rack space and cost savings
  15. The servers/appliances don’t have to be directly connected to N7k
  16. Monitoring the health of servers/appliances.
  17. N + M redundancy.
  18. Automatic failure handling of servers/appliances.
  19. VRF support, vPC support, VDC support
  20. Supported on both Nexus 7000 and Nexus 7700 series.
  21. Supports both IPv4 and IPv6
  22. N5k / N6k support : coming soon


Blog

At a glance

ITD config guide

Email query or feedback: ask-itd@external.cisco.com
