Failover testing on multicontext ASA
08-31-2010 07:51 PM - edited 03-11-2019 11:33 AM
Hello
I am trying to remove a few contexts from the ASA, and afterwards I will be doing failover testing. Does anyone have a proper procedure for performing failover testing in a multicontext environment? This is very urgent, so your help is deeply appreciated.
Thanks
Labels:
- NGFW Firewalls

08-31-2010 08:41 PM
You can trigger a manual failover with the following command on the active unit, for a given failover group:
no failover active group <group-number>
or, alternatively, with the following command on the unit that is standby for that group:
failover active group <group-number>
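For an active/standby pair without failover groups, the same switchover is done without the group keyword. A minimal sketch, run from the system execution space (the group number shown is only an example):

```
! Active/standby without failover groups:
no failover active           ! on the active unit: step down to standby
failover active              ! or, on the standby unit: take over as active

! Active/active with failover groups (group 1 used as an example):
no failover active group 1   ! on the unit currently active for group 1
failover active group 1      ! or, on the unit currently standby for group 1
```

After the switchover, "show failover" on either unit should confirm that the roles have swapped.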
08-31-2010 08:46 PM
Thanks for the reply. This is an active/standby scenario, so there are no failover groups here. I would have to issue these commands in the system context and not in the individual contexts, correct?
Also, what kind of issues might one face? This is the first time I am performing this, and it is a production environment. My actual problem is this: there are a few interfaces in the Waiting and Failed states. I need to remove a few unused contexts from the firewall so that I can stabilize the failover. If I still see these Waiting/Failed states afterwards, what kind of troubleshooting can I perform? Do you have any link that provides detailed troubleshooting for this kind of issue?
Thanks

08-31-2010 08:51 PM
With multiple contexts, you would normally have failover groups configured. Can you please share the output of "show failover" from the system context, and also advise which context is having the issue?
08-31-2010 09:02 PM
Hello Halijenn,
Please give me some time to post the config, as there has been some change in the user accounts. I should be able to post the config in the next 3-4 hours.
Thanks

08-31-2010 09:05 PM
sidcraker,
I see you have an active/standby configuration in multiple context mode.
Your "show failover" output on one unit should show:
This unit: Active
Other unit: Standby
and on the other unit it should say the exact opposite: this unit Standby, the other unit Active.
Are you able to ping the standby IP from the active unit, and the active IP from the standby unit, for all interfaces in each context? Did you configure each interface with a standby IP?
You can refer to this link for a few commands, but I don't recall any link covering troubleshooting for failover in particular.
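The checks above can be sketched as follows; the interface name, VLAN, and addresses are illustrative assumptions, not taken from your config:

```
! In each context, every monitored interface needs a standby address:
interface GigabitEthernet0/0.42    ! hypothetical subinterface
 vlan 42
 nameif outside
 ip address 192.168.42.1 255.255.255.0 standby 192.168.42.2

! From the active unit, inside the context, ping the standby unit:
changeto context customerA
ping 192.168.42.2

! From the standby unit, ping the active unit's address:
ping 192.168.42.1
```

If the cross-unit pings fail on an interface, failover keeps that interface in the Waiting state because the hello checks never complete.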
-KS
08-31-2010 09:58 PM
Hi Halijenn,
Thanks for that link; it is really useful. I remember offhand that some interfaces are in the Waiting state. I checked the link you gave me, and it mentions that if interfaces are in the Waiting state, we have to make sure that the switch interface each one connects to has portfast enabled and is not a trunk port.
This multicontext ASA firewall is configured with VLANs and subinterfaces, so there are only 3 physical interfaces on the ASA, with three subinterfaces per context, connected to a switch. Trunking is therefore enabled on the switch, and portfast is turned off.
That should not be a reason for the interfaces to be in the Waiting state, correct?
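For what it's worth, portfast can still be enabled on a trunk port by adding the trunk keyword. A sketch of the switch-side configuration, assuming a Cisco IOS switch (the interface name and VLAN range are assumptions):

```
interface GigabitEthernet1/0/1        ! hypothetical port facing the ASA
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 37-50
 spanning-tree portfast trunk         ! portfast on a trunk needs the "trunk" keyword
```

This avoids the spanning-tree listening/learning delay on the ASA-facing port while keeping it a trunk.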
Thanks

09-01-2010 06:43 PM
Thanks, kusankar, for the links provided.
I just have to make sure that a standby IP address is assigned to every interface in every context, and that there is no overlapping of IP addresses, which might cause the failover interfaces to stay in the Waiting state.
09-02-2010 07:41 PM
Hello,
Currently the customerA, customerB, customerC, and CUST2 contexts are all for testing, so no real traffic is flowing through them. The admin and CustomerD contexts do have traffic flowing through them. I will be removing the test contexts to test the failover.
As mentioned earlier, these are subinterfaces defined in the system context with VLANs, so trunking is enabled on all ports towards the core switch. All IP addresses have been changed.
Can you tell me whether, if I remove the contexts mentioned above, the failover will come back to the normal state?
This is the output of "show failover":
This host: Primary - Active
Active time: 12099617 (sec)
slot 0: ASA5540 hw/sw rev (2.0/8.0(4)) status (Up Sys)
admin Interface shared (192.168.40.1): Normal (Waiting)
admin Interface dmz (192.168.39.1): Unknown (Waiting)
admin Interface inside (192.168.38.1): Unknown (Waiting)
admin Interface management (192.168.37.1): Normal (Waiting)
customerA Interface outside (192.168.42.1): Normal
customerA Interface inside (192.168.41.1): Normal (Waiting)
customerB Interface outside (0.0.0.0): Normal (Waiting)
customerB Interface shared (0.0.0.0): Normal (Waiting)
customerB Interface dmz (192.168.43.1): Unknown (Waiting)
customerB Interface inside (0.0.0.0): Normal (Waiting)
customerC Interface outside (192.168.47.1): Normal
customerC Interface shared (192.168.46.1): Normal (Waiting)
customerC Interface dmz (192.168.45.1): Unknown (Waiting)
customerC Interface inside (192.168.44.1): Unknown (Waiting)
CUST2 Interface dmz (192.168.50.1): Normal (Not-Monitored)
CUST2 Interface inside (192.168.49.1): Normal
CUST2 Interface outside (192.168.48.1): Normal (Not-Monitored)
CustomerD Interface outside (172.16.125.9): Normal
CustomerD Interface inside (172.16.122.19): Normal
CustomerD Interface management (192.168.128.20): Normal (Waiting)
slot 1: empty
Other host: Secondary - Standby Ready
Active time: 140 (sec)
slot 0: ASA5540 hw/sw rev (2.0/8.0(4)) status (Up Sys)
admin Interface shared (192.168.40.2): Normal (Waiting)
admin Interface dmz (192.168.39.2): Failed (Waiting)
admin Interface inside (192.168.37.2): Unknown (Waiting)
admin Interface management (0.0.0.0): Normal (Waiting)
customerA Interface outside (192.168.42.2): Normal
customerA Interface inside (192.168.41.2): Normal (Waiting)
customerB Interface outside (0.0.0.0): Normal (Waiting)
customerB Interface shared (0.0.0.0): Normal (Waiting)
customerB Interface dmz (192.168.43.2): Failed (Waiting)
customerB Interface inside (0.0.0.0): Normal (Waiting)
customerC Interface outside (192.168.47.2): Normal
customerC Interface shared (192.168.46.2): Normal (Waiting)
customerC Interface dmz (192.168.45.2): Failed (Waiting)
customerC Interface inside (192.168.44.2): Unknown (Waiting)
CUST2 Interface dmz (192.168.50.2): Normal (Not-Monitored)
CUST2 Interface inside (192.168.49.2): Normal
CUST2 Interface outside (192.168.48.2): Normal (Not-Monitored)
CustomerD Interface outside (172.16.125.10): Normal
CustomerD Interface inside (172.16.122.20): Normal
CustomerD Interface management (0.0.0.0): Normal (Waiting)
slot 1: empty

09-02-2010 07:58 PM
So it appears that the interfaces below, which show either Normal or Normal (Not-Monitored), are the ones that are fine. These interfaces can ping the standby IP address without any problem. The rest of the interfaces have problems.
This host: Primary - Active
Active time: 12099617 (sec)
slot 0: ASA5540 hw/sw rev (2.0/8.0(4)) status (Up Sys)
customerA Interface outside (192.168.42.1): Normal
customerC Interface outside (192.168.47.1): Normal
CUST2 Interface dmz (192.168.50.1): Normal (Not-Monitored)
CUST2 Interface inside (192.168.49.1): Normal
CUST2 Interface outside (192.168.48.1): Normal (Not-Monitored)
CustomerD Interface outside (172.16.125.9): Normal
CustomerD Interface inside (172.16.122.19): Normal
Other host: Secondary - Standby Ready
Active time: 140 (sec)
slot 0: ASA5540 hw/sw rev (2.0/8.0(4)) status (Up Sys)
customerA Interface outside (192.168.42.2): Normal
customerC Interface outside (192.168.47.2): Normal
CUST2 Interface dmz (192.168.50.2): Normal (Not-Monitored)
CUST2 Interface inside (192.168.49.2): Normal
CUST2 Interface outside (192.168.48.2): Normal (Not-Monitored)
CustomerD Interface outside (172.16.125.10): Normal
CustomerD Interface inside (172.16.122.20): Normal
Either remove those contexts, or resolve the connectivity problems so that all the interfaces show "Normal" status. Since the problematic interfaces appear to be the same on both the primary/active and secondary/standby units, both units are equally healthy and failover will function as it should.
As long as you see this unit as Primary - Active and the other unit as Secondary - Standby Ready, failover is functioning properly.
You can leave it the way it is, or clean it up so that you only see "Normal" interfaces and no "Waiting" interfaces.
-KS
09-02-2010 08:15 PM
Hi Kusankar,
Thanks for the quick reply. The admin context, which is a crucial one, is showing Unknown for a couple of interfaces. Is that OK? I will check the secondary firewall for any issues, and also look at the switch side.
Do you foresee any issues from your experience?
Thanks

09-02-2010 08:31 PM
As long as the "show failover" output matches on both units, you are OK.
There could be many reasons for these interfaces to show Waiting:
- interface shut down
- switch port shut down
- trunk port with a mismatched native VLAN between the two sides
- cable problem
Start from layer 1, then go to layer 2, and then layer 3. First try to ping that unit's own address, then the peer unit's address (the standby IP).
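The layered approach above maps to roughly these commands (the context name and addresses are taken from this thread; adjust to your environment):

```
! System execution space:
show failover             ! overall unit and per-interface failover state
show failover state       ! concise role and failure-reason summary

! Inside the affected context:
changeto context admin
show interface            ! layer 1/2: line status, errors
show interface ip brief   ! layer 3: addresses and interface status
ping 192.168.38.1         ! this unit's own inside address first
ping 192.168.38.2         ! then the peer's standby address
```

Working up the layers this way usually isolates whether the Waiting state is a cabling, VLAN/trunk, or addressing problem.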
Good luck.
-KS
09-27-2019 01:37 AM
I want to initiate a failover and divert all traffic from the primary to the secondary ASA.
Can this be done by issuing the "failover active" command in the system context of the secondary member?
PS: all interfaces are monitored and their status is Normal.
