First of all, I am new to the job and have very little ACE experience. I work on a large campus. We have two 6513s with an ACE blade in each, and a few contexts configured for different applications. Basically the server guys have come to me and asked me to enable stickiness on one of their contexts.
Now I am sure this is basic stuff to ye guys, but I am just wondering what I need to do? Can I implement this on the fly without causing an outage? I have cut and pasted the relevant context below and added the changes I think need to be made. Do you guys think this will work, and will it cause any outage?
I appreciate any help at all guys:
Here is current config:
probe tcp APPS-PROBE
  passdetect interval 5

parameter-map type ssl SSL-APPS-ADVANCED

rserver host SERVER1
  ip address 10.10.10.1

rserver host SERVER2
  ip address 10.10.10.2

ssl-proxy service SSL-APPS-PROXY
  ssl advanced-options SSL-APPS-ADVANCED

serverfarm host APPS-FARM
  rserver SERVER1 8080
  rserver SERVER2 8080

class-map match-any APPS-VIP
  2 match virtual-address 10.10.10.4 tcp eq https

policy-map type management first-match MGT-POLICY

policy-map type loadbalance first-match APPS-POLICY

policy-map multi-match APPSPOLICY
  loadbalance vip inservice
  loadbalance policy APPS-POLICY
  loadbalance vip icmp-reply active
  ssl-proxy server SSL-APPS-PROXY

service-policy input APPSPOLICY
Will adding the following to the context make stickiness work?
sticky ip-netmask 255.255.255.255 address source STICKY-APPS-FARM
policy-map type loadbalance first-match APPS-POLICY
I am really lost on this and am only going by what I've seen of stickiness in other configs. Can you guys advise whether this will work?
The above looks alright.
Note: Changing from a serverfarm to a sticky group is potentially service-impacting. While current connections will be allowed to finish, new connections will not be accepted while the serverfarm is removed from the load-balance policy and the sticky group is applied.
So it is advisable to take a maintenance window of at least 10 minutes.
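For completeness, the sticky group also needs the serverfarm configured under it, and the load-balance policy then references the sticky group instead of the bare serverfarm. A minimal sketch using the names from your paste (assuming APPS-FARM is the farm currently referenced under APPS-POLICY; verify the exact syntax against your ACE software version before applying):

```
sticky ip-netmask 255.255.255.255 address source STICKY-APPS-FARM
  serverfarm APPS-FARM

policy-map type loadbalance first-match APPS-POLICY
  class class-default
    sticky-serverfarm STICKY-APPS-FARM
```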
You can refer the following link for step by step guidelines:
Thank you Ajay,
This is all new to me and I am just trying to learn by reading through other configs. From the context above is the following line correct:
service-policy input APPSPOLICY
Are they missing a hyphen? Should it be:
service-policy input APPS-POLICY
Thank you Ajay I will do this and let you know how I get on.
I have encountered another problem, and it is very strange. I have added a new server to this context, but after 5 minutes the server guys can't access it. It seems to disappear from the ARP table. The context is still functioning OK and the servers that are load balanced work fine. When I move this server in front of the ACE it's fine.
I thought this was just a problem with this context, but I tried moving a server into another context and I get the same thing. It's OK for 5 minutes, then I cannot access the server.
Have you any ideas on this? It's very strange.
A few basic checks:
When the server is not accessible:
Check if there is some issue with the port where you connect your server.
Check if the server has dual NICs. If it does, shut one of them down and try again.
I don't see any reason for the ACE to cause such issues.
No, I have tried a few different servers now and get the same problem. They work fine in front of the ACE; behind the ACE they work for 5 minutes, but if left alone they become inaccessible. Very strange.
Are you having this issue with only one specific server, or with many of them?
Does this server have an alternate path besides the ACE? (Possible asymmetric routing issue.)
Are you able to ping the server from the ACE? Are you able to telnet to those servers on the ports which are supposedly open? Can you ping from the server to the ACE?
You may need to take some captures to see what really happens here.
No, it's more than one server. I don't think there are alternate paths. Once behind the ACE, the servers cannot be telnetted to or pinged once they time out. For about 5 minutes everything is fine.
We have built a test server and are trying it in different contexts. It seems to work in one context. The funny thing about this context is that its FWSM context is live on the other 6500. So now we think it may actually be an FWSM problem and not the ACE.
I guess we need to fail this working FWSM context over to the same 6500 as the contexts that aren't working and see if it still works. If it doesn't, it's the FWSM and not the ACE that's the problem.
Can you guys tell me the quickest way to fail over an FWSM context? Do I need to change the failover group on the admin context of both FWSMs?
Hmm, yes it might be possible as well since you cannot even ping the server.
Here you have a link about it:
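If the FWSMs are running active/active failover, each context is assigned to a failover group, and the quickest way is usually to force that group over from the system execution space rather than reconfiguring the admin context. A hedged sketch, assuming the context in question sits in failover group 2 (check with show failover first):

```
FWSM# show failover
FWSM# failover active group 2
```

Issue the failover active command on the unit you want to become active for that group.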
Also look at the following:
By default, the ACE does not allow traffic from one context to another context over a transparent firewall. The ACE assumes that VLANs in different contexts are in different Layer 2 domains, unless it is a shared VLAN. The ACE allocates the same MAC address to the VLANs.
When you are using a firewall service module (FWSM) to bridge traffic between two contexts on the ACE, you must assign two Layer 3 VLANs to the same bridge domain. To support this configuration, these VLAN interfaces require different MAC addresses.
To enable the autogeneration of a MAC address on a VLAN interface, use the mac address autogenerate command in interface configuration mode. The syntax of this command is as follows:
mac address autogenerate
For example, enter:
host1/Admin(config-if)# mac address autogenerate
To disable MAC address autogeneration on the VLAN, use the no mac address autogenerate command. For example, enter:
host1/Admin(config-if)# no mac address autogenerate
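In the bridged setup described above, that would mean enabling autogeneration on both of the Layer 3 VLAN interfaces that the FWSM bridges together. A sketch with hypothetical VLAN numbers (substitute your own):

```
host1/Admin(config)# interface vlan 100
host1/Admin(config-if)# mac address autogenerate
host1/Admin(config-if)# exit
host1/Admin(config)# interface vlan 200
host1/Admin(config-if)# mac address autogenerate
```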
Unfortunately, failing the working context over proved nothing, as the context still worked on the 6500 that some contexts are having problems on.
I guess we will go back to doing more testing on the faulty context. When in front of the FWSM and ACE the server works fine; I can get to it, ping it, etc. But when behind the FWSM or ACE, no joy. There is a permit ip any any on the FWSM context, so that shouldn't be causing any issues.
I will try a few more things and let ye guys know. Thanks for all the help.
Please send me the show tech or running-config of the context where you have this problem. In addition, please let me know the IP addresses of the servers.
What is the default gateway (DG) of the servers?
Unfortunately the IPs we are using are public, so I can't send a show running-config unless I can send it to you privately. Where we are at the moment: when the server is dropped behind the FWSM it loses connectivity, but once the admin guy pings the gateway from the server it regains connectivity and I can SSH to it. I left it for a few hours, tried to SSH to it again, and it was successful. I am trying these tests now behind the ACE.
Straight away I lost connectivity; again, when the server admin pings the gateway the server comes back. He does see 4/5 packet loss on the ping. I can then SSH to it. I have left it 5 minutes and now it has dropped off again and I can't see it.
Any suggestions on what I should look at now? Talking to the server guys, they can get to everything that is currently being load balanced in this context with no problem.