ACE - Proxy Load Balancing with stickiness

csco10387876
Level 1

Good morning all,

We are trying to load balance our proxy service but we can't manage to get a good load share across both devices; we end up with one proxy taking 95% of the load and the other one nearly none.

We have more or less 30-40 source client proxies.

We are running version 2.4.

Here is the config:

serverfarm host SF_PROXYUSR_8080
  failaction purge
  predictor leastconns
  probe P_PROXYUSR_8080
  rserver R_PROXYUSRIN 8080
    weight 50
    conn-limit max 10000 min 8000
    probe P_TCP_8080
    inservice
  rserver R_PROXYUSRPI 8080
    weight 50
    conn-limit max 10000 min 8000
    probe P_TCP_8080
    inservice

sticky ip-netmask 255.255.255.255 address both SG_SF_PROXYUSR_8080
  timeout 10
  replicate sticky
  serverfarm SF_PROXYUSR_8080

policy-map type loadbalance first-match PM_LB_PROXYUSR_8080
  class class-default
    sticky-serverfarm SG_SF_PROXYUSR_8080

policy-map multi-match PM_623_VIP
  class PM_VIP_PROXYUSRA_8080
    loadbalance vip inservice
    loadbalance policy PM_LB_PROXYUSR_8080
    loadbalance vip icmp-reply

We tried many of the predictors with no success.

The only time we had good load sharing was with HTTP header stickiness, using this:

sticky http-header Host SG_SF_PROXYUSR2_8080
  timeout 30
  replicate sticky
  serverfarm SF_PROXYUSR_8080

But some web services were not working very well with that configuration.

Any help would be greatly appreciated.

Regards and see you all in London...

5 Replies

Hello,

I might try using a weight of 1 on each real server and the roundrobin predictor on the serverfarm.
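Roughly something like this (just a sketch reusing the serverfarm, rserver, and probe names from your post; not tested on my side):

serverfarm host SF_PROXYUSR_8080
  failaction purge
  predictor roundrobin
  probe P_PROXYUSR_8080
  rserver R_PROXYUSRIN 8080
    weight 1
    conn-limit max 10000 min 8000
    probe P_TCP_8080
    inservice
  rserver R_PROXYUSRPI 8080
    weight 1
    conn-limit max 10000 min 8000
    probe P_TCP_8080
    inservice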

Dan

Hi,

We tried that, since it is the default config, but it doesn't work.

Thanks.

As far as I know, a weight value of 1 is not the default one.

Dan

Indeed, the default is 8. Even with roundrobin, we end up with thousands of connections on one server and very few on the other... strange.

Hi CSCO,

Have you tried "predictor hash url"? Also, the fact that only one server is getting all the hits may be because of the sticky configuration.

As per Cisco recommendations, cache servers perform better using URL hashing as the predictor. However, note that the hash methods do not take the weights of the real servers into account; the weight assigned to a real server is only used by the round-robin and least-connections predictor methods.


Cache servers perform better with the URL hash method because the cache contents can be divided evenly if the traffic is random enough, but it does not guarantee even load balancing. Another benefit of this predictor is that, in a redundant configuration, the cache servers continue to work even if the active ACE fails over to the standby ACE.

The goal is to guarantee that the same object is not cached on many servers, and therefore to make the best use of disk space.

So, if you don't really need to maximize disk usage, you could use any other balancing method.

If the primary concern is bandwidth and not disk space, you should go with roundrobin or leastconn.

If your goal is to preserve disk space, your current solution is the best one, and the drawback is unequal load balancing.

You can't get both at the same time.

URL hashing is useful in caching environments in which the content is not duplicated across the real servers.

The benefit of hashing in general is that, because the content switch hashes every packet of a flow, it does not need to store the association of the client's connection to the selected real server in RAM. As a result, hashing is a stateless distribution method.


You can remove the sticky configuration and use URL hash instead; that way both of your caches might get an equal amount of requests if your traffic pattern is random, but as I said above, hashing cannot guarantee equal load balancing across the servers.

Usually, when using ACE with caches, Cisco recommends using either hash url (if you want to make sure one object exists on only one cache, to save disk space) or roundrobin/leastconn to get an equal load on the caches.
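For example, something along these lines (only a rough sketch that reuses the names from your post and drops the sticky group; please verify it against your setup before applying):

serverfarm host SF_PROXYUSR_8080
  failaction purge
  predictor hash url
  probe P_PROXYUSR_8080
  rserver R_PROXYUSRIN 8080
    conn-limit max 10000 min 8000
    probe P_TCP_8080
    inservice
  rserver R_PROXYUSRPI 8080
    conn-limit max 10000 min 8000
    probe P_TCP_8080
    inservice

policy-map type loadbalance first-match PM_LB_PROXYUSR_8080
  class class-default
    serverfarm SF_PROXYUSR_8080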

Hash URL, or URL hash, performs a hash operation on the URL string to come up with a value that matches a server or cache in the farm. Hash URL increases the likelihood of cache hits by sending requests for the same content to the same cache. Depending on the load balancer type, the URL is either partially or fully matched, or pattern matched through a regular expression. For example, the pattern to match could be any of the following expressions:


www.example.com/default.htm
www.example.com/*.jpg
www.example.com/files/movie*


where www.example.com is the domain name portion of the URL.
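On the ACE you can also restrict which part of the URL is hashed using begin/end patterns, roughly like this (the pattern values below are only illustrative placeholders):

serverfarm host SF_PROXYUSR_8080
  predictor hash url begin-pattern /files/ end-pattern .jpg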

The hash address source and hash address destination predictors are best suited for firewall load balancing.
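For reference, the address hash variant is configured in a similar way, for example (SF_FW_INSIDE is just a placeholder serverfarm name and the netmask is an example value):

serverfarm host SF_FW_INSIDE
  predictor hash address source 255.255.255.255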

https://supportforums.cisco.com/message/3252465#3252465

https://supportforums.cisco.com/message/3248115#3248115

Kindly check my previous post in this regard for PREDICTOR HASH URL:

https://supportforums.cisco.com/message/3251701#3251701

https://supportforums.cisco.com/message/3251601#3251601

Please get back to me for any further help in this regard, and share the results of using predictor hash url.

HTH

Sachin Garg