05-04-2011 06:57 AM
Hello,
we are using the ACE to provide redundancy for our VPN devices.
In this setup there is one active and one standby box.
If the primary box goes down, all tunnels are moved to the standby box; this works as expected.
The problem is that when the primary box comes back online, the tunnels are not correctly balanced back to the primary box.
On the backup box the tunnel is still in QM_IDLE, while on the now back-in-service primary box the tunnel is stuck in the state AG_INIT_EXCH.
To get the tunnel back to the primary box, the connection table on the ACE needs to be cleared (clear conn all).
Since this is an active/standby setup, stickiness is not required (and it did not work either when I tried it).
Here are the relevant snippets of the config:
.....
serverfarm host backup_1
transparent
failaction purge
probe ICMP
rserver ONE
backup-rserver TWO
inservice
rserver TWO
inservice standby
class-map match-any IPSEC
match virtual-address 1.1.1.1 50
match virtual-address 1.1.1.1 udp eq 500
match virtual-address 1.1.1.1 udp eq 4500
policy-map type loadbalance first-match IPSEC
class class-default
serverfarm backup_1
....
---------------------------------------------------------------------------------
Same setup but with two serverfarms
serverfarm host backup_1
transparent
failaction purge
probe ICMP
rserver ONE
inservice
serverfarm host backup_2
transparent
failaction purge
probe ICMP
rserver TWO
inservice
class-map match-any IPSEC
match virtual-address 1.1.1.1 50
match virtual-address 1.1.1.1 udp eq 500
match virtual-address 1.1.1.1 udp eq 4500
policy-map type loadbalance first-match IPSEC
class class-default
serverfarm backup_1 backup backup_2
Thanks in advance for any help.
05-04-2011 05:56 PM
Hi
This looks like normal behaviour for the commands you are using.
When the primary rserver comes back into service, it only receives new connections. The existing connections to the backup rserver are not redirected to the primary until the inactivity timeout clears them.
For UDP, the ACE uses a default inactivity timeout of 10 seconds.
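If waiting out the default timeout is too slow for you, the UDP inactivity timeout can be tuned with a connection parameter map. A minimal sketch (the parameter-map name UDP_SHORT_TIMEOUT and the multi-match policy/class names are placeholders, not taken from your config):

parameter-map type connection UDP_SHORT_TIMEOUT
  set timeout inactivity 5
policy-map multi-match CLIENT-VIPS
  class IPSEC
    loadbalance vip inservice
    loadbalance policy IPSEC
    connection advanced-options UDP_SHORT_TIMEOUT

Note that this applies to every flow matched by that class, so a very short timeout may also churn long-lived but quiet IKE/NAT-T flows.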
regards
Andrew
05-05-2011 12:20 AM
Hi again,
@Joo you are right for a scenario with a primary and a backup rserver, but with a primary and a backup serverfarm it should work in theory; in practice it does not.
You can load balance a client request for content to a server farm by using the serverfarm command in policy-map class configuration mode. Server farms are groups of networked real servers that contain the same content and that typically reside in the same physical location. The syntax of this command is as follows:
serverfarm name1 [backup name2 [sticky] [aggregate-state]]
The keywords, arguments, and options are as follows:
•name1—Unique identifier of the server farm. Enter an unquoted text string with no spaces and a maximum of 64 alphanumeric characters.
•backup name2—(Optional) Designates an existing server farm as a backup server farm in case all the servers in the original server farm become unavailable. When at least one server in the primary server farm becomes available again, the ACE sends all connections to the primary server farm. Enter the name of an existing server farm that you want to specify as a backup server farm as an unquoted text string with no spaces and a maximum of 64 alphanumeric characters.
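Applied to the second config above, the optional sticky keyword would look like this (shown only to illustrate the syntax; with sticky, connections that already have sticky entries for the backup farm stay stuck there even after the primary returns):

policy-map type loadbalance first-match IPSEC
  class class-default
    serverfarm backup_1 backup backup_2 sticky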
regards
ed
05-05-2011 01:28 AM
> becomes available again, the ACE sends all connections to the primary server farm.
should probably read "...all NEW connections ..."
Unless the serverfarms can maintain state between themselves then there is no way for the ACE to move traffic seamlessly from one farm to another mid-flow.
HTH
Cathy
05-05-2011 04:06 AM
Then please explain the following behaviour:
I changed the config to use sticky serverfarms to ensure that all connections from a client hit the same rserver.
Now if the primary sticky farm fails, its sticky entry is removed and a new sticky entry pointing to the backup farm is written into the sticky database.
When the primary sticky farm comes back online, the sticky entry for the backup server farm is removed, but a new sticky entry is not established; the sticky database is empty.
Thus the service is dead.
In version A2.2.3 the sticky entry is not removed, as expected.
In version A2.3.3 the sticky entry is removed and the database is empty.
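For reference, the sticky config I am testing is roughly the following (the group name VPN_STICKY and the 60-minute timeout are placeholders for illustration):

sticky ip-netmask 255.255.255.255 address source VPN_STICKY
  serverfarm backup_1 backup backup_2
  timeout 60
  replicate sticky
policy-map type loadbalance first-match IPSEC
  class class-default
    sticky-serverfarm VPN_STICKY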
05-05-2011 07:57 PM
Hi Everhard
I think the problem you are concerned about is what happens when the primary sticky farm comes back online: with A2.3.3 you see the sticky entry for the backup server farm removed, but no new sticky entry established, leaving the sticky database empty.
As per the A2.3.0 config guide, stickiness works in the following way:
If all the servers in the primary server farm go down, the ACE sends all new requests to the backup server farm. When the primary server farm comes back up (at least one server becomes active):
•If the sticky option is enabled, then:
–All new sticky connections that match existing sticky table entries for the real servers in the backup server farm are stuck to the same real servers in the backup server farm.
–All new non-sticky connections and those sticky connections that do not have an entry in the sticky table are load balanced to the real servers in the primary server farm.
I think what you see with A2.3.3 is a little weird. It looks like a bug.
Can you open a TAC case for this?
-regards
Andrew