

ISE Fail OPEN configuration/testing


We will be performing a live test of ISE Fail Open on our production system tomorrow night. When the policy nodes are all unavailable, we want the switches to allow open access to all devices on all interfaces.

I have done some testing of this on an individual test switch by routing packets destined for the ISE policy nodes to null 0 to emulate a failure. It appears to be working well, but I was hoping for more input from the community before my live test tomorrow night.
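For reference, a null route of this form should be enough to black-hole the RADIUS traffic on the test switch (the address below is a placeholder; substitute your own policy node IPs, one host route per node):

! Placeholder PSN address - use a /32 so only ISE traffic is affected
ip route 192.0.2.21 255.255.255.255 Null0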

First, I believe these to be the only commands needed to make this work correctly. Does anyone have any comments on this configuration? Am I missing anything? Do these timers seem OK? I'm wondering if the deadtime should be greater in case the nodes or the network connection are flapping.

Global Config:

radius-server dead-criteria time 5 tries 3

radius-server deadtime 5

dot1x critical eapol

Interface Config:

authentication event server dead action reinitialize vlan <normal data vlan>

authentication event server dead action authorize voice

authentication event server alive action reinitialize
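For anyone reading along, here is my understanding of what these commands do and their units (worth double-checking against your IOS release):

! Mark a RADIUS server dead after it fails to answer 3 tries within 5 seconds
radius-server dead-criteria time 5 tries 3
! Skip a server marked dead for 5 minutes before retrying it (deadtime is in minutes, not seconds)
radius-server deadtime 5
! Send an EAPOL-Success to 802.1X clients when a port is critically authorized
dot1x critical eapol

The deadtime being in minutes appears to line up with the roughly 5-minute recovery polling described in item 5 below.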

Next, this is the behavior I am seeing after the policy nodes go down. Is this as it should be?

1. Absolutely nothing happens until an interface undergoes (re)authentication. All ports remain in current authentication/authorization state.

2. If an interface undergoes (re)authentication, the switch tries to reach one of the configured policy nodes. After 5 seconds there is a message that the first node is dead. After another 5 seconds there is a message that the second node is dead.

3. After another ~20 seconds, the interface that was attempting (re)authentication goes into Critical Authorization:

TEST#sh auth sess int f1
            Interface:  FastEthernet1
          MAC Address:  1234.5678.90ab
           IP Address:  Unknown
            User-Name:  UserName

               Status:  Authz Success
               Domain:  DATA
       Oper host mode:  multi-host
     Oper control dir:  in
        Authorized By:  Critical Auth
          Vlan Policy:  2
      Session timeout:  N/A
         Idle timeout:  N/A
    Common Session ID:  0A0A010B0000013D093F17CC
      Acct Session ID:  0x0000072B
               Handle:  0x5A00013E

Runnable methods list:
       Method   State
       dot1x    Authc Failed
       mab      Not run

Critical Authorization is in effect for domain(s) DATA


All other interfaces remain in current mode, nothing on them changes so long as they don't attempt to (re)authenticate.

4. If another interface attempts to (re)authenticate, it goes into critical state immediately without trying to contact the dead policy nodes.

5. The switch will retry the policy nodes periodically (every 5 minutes?). If one of them is up, all interfaces that were in critical state immediately transition to normal authc/authz modes. Normal timers apply: dot1x endpoints come up almost immediately, while MAB clients lose connectivity until dot1x times out.
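During the live test, the RADIUS server state and port states can be watched with standard show commands, e.g.:

show aaa servers
show authentication sessions

show aaa servers should report each server's current state (UP vs. DEAD) along with the dead-criteria counters, which makes it easy to confirm when the switch has declared the policy nodes dead and when it sees them recover.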

To emulate a global fail for the organization, I plan to stop the ISE services on both of my policy nodes.

Thanks for any comments/insights/input.


ISE Fail OPEN configuration/testing

FYI, we had a Cisco TAC engineer confirm this configuration is correct and complete. Our fail-open test went great; everything performed as expected.


ISE Fail OPEN configuration/testing

We appreciate the detailed scenario description; the question itself was very informative.

I used

authentication event server dead action authorize vlan <access VLAN>

(with the critical VLAN set to the access VLAN) instead of

authentication event server dead action reinitialize vlan <vlan>
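My understanding of the difference between the two dead-server actions (worth verifying on your platform and IOS version):

! Authorize new hosts that attempt authentication into the specified (critical) VLAN
authentication event server dead action authorize vlan <vlan>
! Additionally reinitialize already-authorized hosts into the specified VLAN
authentication event server dead action reinitialize vlan <vlan>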