ACE IPsec issue

eddschulz
Level 1

Hello,

We are using the ACE to provide redundancy for our VPN devices.

In this setup there is one active and one standby box.

If the primary box goes down, all tunnels are moved to the standby box; this works as expected.

The problem is that when the primary box comes back online, the tunnels are not balanced back to it correctly.

On the backup box the tunnel is still in QM_IDLE, and on the now back-in-service primary box the tunnel is stuck in the state AG_INIT_EXCH.

To get the tunnel back to the primary box, the connection table on the ACE needs to be cleared (clear conn all).

Since we have an active/standby setup, stickiness is not required (and it does not work either; I tried it).

Here are the relevant snippets of the config:

.....

serverfarm host backup_1

transparent

failaction purge

probe ICMP

rserver ONE

     backup-rserver TWO

     inservice

rserver TWO

     inservice standby

class-map match-any IPSEC

     match virtual-address 1.1.1.1 50    

     match virtual-address 1.1.1.1 udp eq 500

     match virtual-address 1.1.1.1 udp eq 4500

policy-map type loadbalance first-match IPSEC

         class class-default

          serverfarm backup_1

....

---------------------------------------------------------------------------------

Same setup but with two serverfarms

serverfarm host backup_1

transparent

failaction purge

probe ICMP

rserver ONE

       inservice

serverfarm host backup_2

transparent

failaction purge

probe ICMP

rserver TWO

       inservice

class-map match-any IPSEC

     match virtual-address 1.1.1.1 50    

     match virtual-address 1.1.1.1 udp eq 500

     match virtual-address 1.1.1.1 udp eq 4500

policy-map type loadbalance first-match IPSEC

         class class-default

           serverfarm backup_1 backup backup_2

Thanks in advance for any help.

5 Replies

Andrew Nam
Level 1

Hi

The problem looks to be normal behaviour with the command you use.

When the primary rserver comes back into service, it will only receive new connections. The existing connections on the backup rserver won't be redirected to the primary until the inactivity timeout clears them.

For UDP, the ACE uses 10 seconds as the default inactivity timeout.
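If the 10-second default is too short for your IKE/NAT-T flows, it can be raised with a connection parameter map, roughly like this (a sketch; the parameter-map name UDP_IDLE, the timeout value, and the multi-match policy/class names are placeholders, not from your config):

parameter-map type connection UDP_IDLE

     set timeout inactivity 300

policy-map multi-match CLIENT_VIPS

     class IPSEC

          connection advanced-options UDP_IDLE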

regards

Andrew

Hi again,

@Joo you are right for a scenario with a primary and a backup rserver, but if you use a primary and a backup serverfarm it should work in theory; in practice, however, it does not.

Enabling Load Balancing to a Server Farm (Configuring a Backup Server Farm)

You can load balance a client request for content to a server farm by using the serverfarm command in policy-map class configuration mode. Server farms are groups of networked real servers that contain the same content and that typically reside in the same physical location. The syntax of this command is as follows:

serverfarm name1 [backup name2 [sticky] [aggregate-state]]

The keywords, arguments, and options are as follows:

name1—Unique identifier of the server farm. Enter an unquoted text string with no spaces and a maximum of 64 alphanumeric characters.

backup name2—(Optional) Designates an existing server farm as a backup server farm in case all the servers in the original server farm become unavailable. When at least one server in the primary server farm becomes available again, the ACE sends all connections to the primary server farm. Enter the name of an existing server farm that you want to specify as a backup server farm as an unquoted text string with no spaces and a maximum of 64 alphanumeric characters.

regards

ed

> becomes available again, the ACE sends all connections to the primary server farm.

should probably read "...all NEW connections ..."

Unless the serverfarms can maintain state between themselves, there is no way for the ACE to move traffic seamlessly from one farm to another mid-flow.

HTH

Cathy

Please explain the following behavior:

I changed the config to use sticky serverfarms to ensure that all connections from a client hit the same rserver.

Now, if the primary sticky farm fails, the sticky entry is removed and a new sticky entry pointing to the backup farm is written into the sticky database.

Now, when the primary sticky farm comes back online, the sticky entry to the backup server farm is removed, but a new sticky entry is not established; the sticky database is empty.

Thus the service is dead.

In version A2.2.3 the sticky entry is not removed, as expected.

In version A2.3.3 the sticky entry is removed and the database is empty.
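For reference, the sticky variant I tested looked roughly like this (a sketch; the sticky group name VPN_STICKY and the /32 source mask are placeholders, not my exact config):

sticky ip-netmask 255.255.255.255 address source VPN_STICKY

     serverfarm backup_1 backup backup_2 sticky

policy-map type loadbalance first-match IPSEC

     class class-default

          sticky-serverfarm VPN_STICKY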

Hi Everhard

I think the problem you are concerned about is when the primary sticky farm comes back online: with A2.3.3 you see the sticky entry to the backup server farm removed, but a new sticky entry is not established, and the sticky database is empty.

As per the A2.30 config guide, stickiness works in the following way:

If all the servers in the primary server farm go down, the ACE sends all new requests to the backup server farm. When the primary server farm comes back up (at least one server becomes active):

If the sticky option is enabled, then:

All new sticky connections that match existing sticky table entries for the real servers in the backup server farm are stuck to the same real servers in the backup server farm.

All new non-sticky connections and those sticky connections that do not have an entry in the sticky table are load balanced to the real servers in the primary server farm.

I think what you see with A2.3.3 is a little bit weird. It looks like a bug.

Can you open a TAC case for this?

-regards

Andrew
