
"enable" command from the "cluster group" not saved to startup-config

tvotna
Spotlight

Hi community,

Does anybody know why the cluster "enable" CLI command is not saved to the startup-config on ASA/Firepower? Presumably, FTD is affected too.

To upgrade FXOS+ASA on a Firepower 4100, we typically disable clustering before the upgrade to prevent ASA slave units from rejoining our clusters right after the FXOS upgrade. This speeds things up, as the slaves do not need to sync config, etc. However, we noticed that they still join the cluster despite "no enable" being entered in "cluster group" mode and the configuration being saved...
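The pre-upgrade disable step is just a short CLI sequence per unit; as a minimal sketch, here is a hypothetical Python helper that builds that sequence (the function name and the idea of generating commands for an automation tool are mine, not from the thread; the commands themselves are standard ASA cluster CLI):

```python
def disable_clustering_commands(cluster_name: str) -> list[str]:
    """Build the ASA CLI sequence that takes a unit out of its cluster
    and persists the change before an FXOS/ASA upgrade.

    The returned lines could be pushed over SSH or the console by any
    automation library; this helper only assembles them.
    """
    return [
        "configure terminal",
        f"cluster group {cluster_name}",
        "no enable",      # remove the unit from the cluster
        "exit",           # leave cfg-cluster mode
        "exit",           # leave config mode
        "write memory",   # drop "enable" from the startup-config
    ]
```

Note that, as discussed later in the thread, saving the config only removes "enable" from the startup-config; the unit may still rejoin and regain "enable" in its running-config after the upgrade.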

The "enable" command is not in the startup-config (ASA 9.12.x.y):

ASA# sh run cluster
cluster group CLUSTER
key *****
local-unit ASA-node1
cluster-interface Port-channel48 ip 127.2.1.1 255.255.0.0
priority 9
health-check holdtime 2
health-check data-interface auto-rejoin 3 5 2
health-check cluster-interface auto-rejoin unlimited 5 1
health-check system auto-rejoin 3 5 2
health-check monitor-interface debounce-time 1500
site-id 1
no health-check monitor-interface Port-channel7
enable

ASA# sh start | beg cluster group
cluster group CLUSTER
key *****
local-unit ASA-node1
cluster-interface Port-channel48 ip 127.2.1.1 255.255.0.0
priority 9
health-check holdtime 2
health-check data-interface auto-rejoin 3 5 2
health-check cluster-interface auto-rejoin unlimited 5 1
health-check system auto-rejoin 3 5 2
health-check monitor-interface debounce-time 1500
site-id 1
no health-check monitor-interface Port-channel7

Also, it appears this behavior contradicts the documentation:

Deactivate a Unit

When an ASA becomes inactive, all data interfaces are shut down; only the management-only interface can send and receive traffic. To resume traffic flow, re-enable clustering. The management interface remains up using the IP address the unit received from the cluster IP pool. However if you reload, and the unit is still inactive in the cluster (for example, if you saved the configuration with clustering disabled), the management interface is disabled. You must use the console port for any further configuration.
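The quoted behavior can be summarized as a small truth table: management stays reachable while the unit is merely inactive, but goes down if the unit reloads with clustering disabled in the saved config. A toy encoding (function and parameter names are mine, purely illustrative):

```python
def management_reachable(clustering_enabled: bool, reloaded: bool) -> bool:
    """Model the documented management-interface behavior.

    - Clustering enabled: management is up (address from the cluster IP pool).
    - Clustering disabled but not yet reloaded: management remains up.
    - Reloaded while still disabled in the saved config: management is down,
      and only console access remains.
    """
    if clustering_enabled:
        return True
    return not reloaded
```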

6 Replies

Are you accessing the cluster units via SSH/Telnet, or directly via the console?

I'm connecting to the ASA via "connect module 1 console".

Let's assume that the peer exchanges config with this FW via the control link; that would explain why "no enable" disappears after a reload.

Do the following and check:

1- no enable

2- shut the control interface interconnecting the FWs

3- reload

I made a wrong assumption and my initial problem description was incorrect. Sorry about that. Actually, the "enable" command is saved to the startup-config. "no enable" is the default and therefore doesn't appear there.

So, we enter "no enable" and save the configuration, which removes the "enable" command from the startup-config; then we upgrade, the upgraded unit magically joins the cluster, receives the config, and "enable" appears in its running-config. The command is still absent from the startup-config, though, until "write memory" is run on the unit.
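This running-config vs. startup-config divergence can be spotted by parsing the "cluster group" stanza from each config and checking for a bare "enable" line. A minimal sketch (the function name is mine; the parsing assumes single-stanza output such as that of "show run cluster", with the stanza ending at a blank line or a prompt):

```python
def cluster_enable_present(config: str) -> bool:
    """Return True if the "cluster group" stanza of an ASA config
    contains a standalone "enable" line, i.e. clustering is enabled
    (running-config) or persisted as enabled (startup-config)."""
    in_stanza = False
    for raw in config.splitlines():
        line = raw.strip()
        if line.startswith("cluster group "):
            in_stanza = True
            continue
        if in_stanza:
            if not line or line.endswith("#"):  # blank line or prompt ends the stanza
                break
            if line == "enable":                # "no enable" does not match
                return True
    return False
```

Running this over the two captures earlier in the thread would return True for the "sh run cluster" output and False for the "sh start | beg cluster group" output until "write memory" is issued.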

It's still not clear whether this behavior is expected, or how a unit can join the cluster when clustering has been disabled. It's probably a side effect of "health-check system auto-rejoin", introduced in 9.9.2, but it would be great if somebody could confirm.

We can't play around with the CCL because this is a production cluster.

I believe this is caused by the health-check auto-rejoin feature, which might be triggered in this case because the application sync timed out on the upgraded unit.

I've tried disabling this feature, but it didn't help:

unit-4-1/slave# sh run cluster
cluster group CLUSTER
key *****
local-unit unit-4-1
cluster-interface Port-channel48 ip 127.2.4.1 255.255.0.0
priority 33
no health-check
health-check system auto-rejoin 0
health-check monitor-interface debounce-time 500
site-id 1
enable
unit-4-1/slave# conf t
unit-4-1/slave(config)# cluster group CLUSTER
unit-4-1/slave(cfg-cluster)# no enable
Cluster disable is performing cleanup..done.
All data interfaces have been shutdown due to clustering being disabled. To recover either enable clustering or remove cluster group configuration.

Cluster unit unit-4-1 transitioned from SLAVE to DISABLED
unit-4-1/ClusterDisabled(cfg-cluster)# exit
unit-4-1/ClusterDisabled(config)# exit
unit-4-1/ClusterDisabled# wr
Building configuration...
Cryptochecksum: fb73ed55 27dc8418 be10adbd ba21eb35

1898 bytes copied in 0.70 secs
[OK]
unit-4-1/ClusterDisabled# reload
admin config has been modified. Save? [Y]es/[N]o/[S]ave all/[C]ancel: n
data config has been modified. Save? [Y]es/[N]o/[S]ave all/[C]ancel: n
Proceed with reload? [confirm]
unit-4-1/ClusterDisabled#

...

unit-4-1/ClusterDisabled> Clustering configuration on the chassis is missing or incomplete; clustering is disabled

Detected Cluster Master.
Beginning configuration replication from Master.
Jun 16 2023 14:41:58 unit-4-1 : %ASA-2-112001: Clear finished
WARNING: Removing all contexts in the system
Removing context 'data' (2)... WARNING: existing snmp-server user CLI will not be cleared.
WARNING: existing snmp-server group CLI will not be cleared.
Done
Removing context 'admin' (1)... WARNING: existing snmp-server user CLI will not be cleared.
WARNING: existing snmp-server group CLI will not be cleared.
Done

WARNING: Cluster health monitoring is disabled. Unnecessary traffic loss may occur due to the delayed cluster member failure detection
*** Output from config line 44, " no health-check"
INFO: Admin context is required to get the interfaces
*** Output from config line 53, "arp timeout 14400"
INFO: Admin context is required to get the interfaces
*** Output from config line 54, "no arp permit-nonconnect..."
INFO: Admin context is required to get the interfaces
*** Output from config line 55, "arp rate-limit 32768"
Creating context 'admin'... Done. (3)
*** Output from config line 62, "admin-context admin"

WARNING: Skip fetching the URL disk0:/admin.cfg
*** Output from config line 65, " config-url disk0:/admi..."
Creating context 'data'... Done. (4)
*** Output from config line 68, "context data"

WARNING: Skip fetching the URL disk0:/data.cfg
*** Output from config line 71, " config-url disk0:/data..."

Cryptochecksum (changed): 969b84a3 1702c24f c7e091d4 078e6c0f
..
Cryptochecksum (changed): 06c88031 58f4f658 0141e64f 212a9c4d
.
Cryptochecksum (changed): 5d167f9c b4ca4d50 22b71409 1c66b742

Cryptochecksum (changed): d41d8cd9 8f00b204 e9800998 ecf8427e
End configuration replication from Master.

unit-4-1/slave> en
Cluster unit unit-4-1 transitioned from DISABLED to SLAVE
a
Password: *****
unit-4-1/slave#
unit-4-1/slave# sh run cluster
cluster group CLUSTER
key *****
local-unit unit-4-1
cluster-interface Port-channel48 ip 127.2.4.1 255.255.0.0
priority 33
no health-check
health-check system auto-rejoin 0
health-check monitor-interface debounce-time 500
site-id 1
enable
unit-4-1/slave#
