01-23-2025 03:01 PM - edited 01-25-2025 04:52 PM
what are the possible causes for having TOO many IKE SAs? they all are in negotiations and one is connected. The VPN tunnel seems to be working fine.
01-24-2025 12:25 AM - edited 01-24-2025 12:26 AM
It could be a race condition, where both sides are initiating... Is this IKEv2 or IKEv1? It could also be that when an initiator fails, the old IKE SA may stay around for a few seconds/minutes and then get purged/cleaned up, while a new IKE SA is created to start negotiations again. Can you show the output of "show crypto isakmp sa detail" or "show crypto ikev2 sa detail"?
**Please rate as helpful if this was useful**
01-24-2025 12:44 AM
Thanks @ccieexpert ! Here is the show output
VPN-Router#sh crypto ikev2 sa remote Remote_Peer_IP detailed
Tunnel-id Local Remote fvrf/ivrf Status
15 Local_Peer_IP/500 Remote_Peer_IP/500 frontdoor/none IN-NEG
Encr: Unknown - 0, PRF: Unknown - 0, Hash: None, DH Grp:0, Auth sign: Unknown - 0, Auth verify: Unknown - 0
Life/Active Time: 900/0 sec
CE id: 0, Session-id: 0
Status Description: Initial State
Local spi: 191915DBC7E7698B Remote spi: 0000000000000000
Local id: Local_Peer_IP
Remote id:
Local req msg id: 0 Remote req msg id: 0
Local next msg id: 0 Remote next msg id: 0
Local req queued: 0 Remote req queued: 0
Local window: 5 Remote window: 1
DPD configured for 0 seconds, retry 0
Fragmentation not configured.
Dynamic Route Update: disabled
Extended Authentication not configured.
NAT-T is not detected
Cisco Trust Security SGT is disabled
Initiator of SA : Yes
01-24-2025 01:09 AM
I only see one; where are the others? Also, this one has not completed the IKEv2 SA negotiation. It is still in progress, and this router is the initiator.
01-24-2025 01:37 AM
There are too many in IN-NEG state and one in READY state:
Tunnel-id Local Remote fvrf/ivrf Status
45 Local_Peer_IP/500 Remote_Peer_IP/500 frontdoor/VRF_X READY
Encr: AES-CBC, keysize: 256, PRF: SHA512, Hash: SHA512, DH Grp:20, Auth sign: PSK, Auth verify: PSK
Life/Active Time: 900/442 sec
CE id: 0, Session-id: 34973
Status Description: Negotiation done
Local spi: 5FA08FFB29B25018 Remote spi: 305441AEAA55AD53
Local id: Local_Peer_IP
Remote id: Remote_Peer_IP
Local req msg id: 4 Remote req msg id: 0
Local next msg id: 4 Remote next msg id: 0
Local req queued: 4 Remote req queued: 0
Local window: 5 Remote window: 1
DPD configured for 0 seconds, retry 0
Fragmentation not configured.
Dynamic Route Update: disabled
Extended Authentication not configured.
NAT-T is not detected
Cisco Trust Security SGT is disabled
Initiator of SA : Yes
01-24-2025 12:39 AM
If we merge both issues, then check for a lifetime mismatch.
If one peer's lifetime expires and the other's has not, then you will see many new negotiations where one side tries to establish a new connection but the remote side already has one, and hence the negotiation fails.
Another reason is that the remote peer refuses to act as responder.
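To compare lifetimes on the Cisco side, something like the following should show the configured IKEv2 and IPsec SA lifetimes (a sketch; the exact fields shown vary by IOS-XE release):

show crypto ikev2 profile
show crypto ikev2 sa detailed
show crypto ipsec sa

The "Life/Active Time" field in the IKEv2 SA output and the lifetime counters in the IPsec SA output can then be compared against what the remote engineer configured.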
MHM
01-24-2025 12:59 AM - edited 01-24-2025 01:00 AM
Thanks @MHM Cisco World ! This is a different VPN tunnel than the other one in the post with no SAs
The engineer who manages the VPN peer has shared the config on his side. The lifetime for both phase-1 and 2 match ours.
01-24-2025 01:21 AM
Do both sides use static IPs?
Which platform do you use on each side?
MHM
01-24-2025 01:39 AM
Both VPN peers have static IP addresses, for both the peers and the ACL (traffic selectors).
My side is a Cisco ASR router and the remote is a Fortinet firewall.
01-24-2025 02:25 AM
If it is an ASR, then did you disable
config-exchange request <<- under the ikev2 profile
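On IOS-XE that change would look something like this (the profile name IKEV2_PROFILE is a placeholder for your actual profile):

crypto ikev2 profile IKEV2_PROFILE
 no config-exchange request

This stops the router from sending configuration payload requests during the IKE_AUTH exchange, which some third-party peers handle poorly.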
MHM
01-24-2025 02:38 AM - edited 01-24-2025 11:59 AM
@MHM Cisco World I've tested it but unfortunately it didn't make any difference. I've just uploaded the config, show outputs, and debugs. Please have a look and let me know if you can find what causes the problem.
01-24-2025 12:48 AM
In all cases they should get cleaned up; they are temporary. If you are seeing many, then the system is not purging them correctly; there should be a cleanup routine that removes them. If the other side establishes a new connection, that should not fail because one already exists; it will simply build another one. Debugs are your best friend to see why they are happening.
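If the stale IN-NEG entries need to be purged manually in the meantime, something like the following should work on IOS-XE (Remote_Peer_IP is a placeholder, as in the show outputs above):

clear crypto ikev2 sa remote Remote_Peer_IP

Note this tears down the matching IKEv2 SAs, including the READY one, so the tunnel will renegotiate; use it with that in mind.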
01-25-2025 12:08 AM
I had a quick look. First thing: why did you lower the IPsec SA lifetime and the IKE lifetime to 900 seconds? It is not ideal to keep the IPsec SA lifetime and the IKE lifetime at the same value; IKE should be much longer.
From the show commands, all these IKE sessions were initiated by this router and they stay in negotiation, meaning they never completed. From the debugs, it is hard to say.
You should run debugs for a much longer time, e.g. covering 3-4 rekeys over 1 hour, and run the show commands before and after starting the debug to see if the SAs linger around. You can send debug logs to syslog or increase the logging buffer to around 10 MB. If you have multiple tunnels, use conditional debugging. That should highlight why. But one thing: it is not recommended to set the IKE SA lifetime to a low value, especially less than (or equal to) the IPsec lifetime. The default is 86400 seconds for IKEv2 and 3600 for IPsec; IPsec is what needs to be protected and rekeyed frequently, as it carries the data.
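The debugging setup described above might look like this on the ASR (a sketch; Remote_Peer_IP and the profile name IKEV2_PROFILE are placeholders, and the buffer size is an example):

! Limit debug output to the one problem peer
debug crypto condition peer ipv4 Remote_Peer_IP
! Enlarge the local logging buffer (~10 MB)
logging buffered 10000000
debug crypto ikev2

And the lifetime recommendation, back at the defaults, would be:

crypto ikev2 profile IKEV2_PROFILE
 lifetime 86400
crypto ipsec security-association lifetime seconds 3600

Capture "show crypto ikev2 sa detailed" before enabling the debug and again after an hour, then compare which tunnel-ids linger.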
01-25-2025 03:14 AM
Thanks @ccieexpert !
This setup has 2 routers for redundancy purposes; that is why we have the IPsec and IKE lifetimes set to 900 seconds. If the primary router loses connection or dies, the secondary router takes over, but the tunnels then have to re-establish (within 15 minutes). We have about 50 IPsec tunnels to different customers on this router; none of the other tunnels have multiple IKE SAs simultaneously, and I have personally never experienced this before.
I'll try to do a new debug with longer time and upload the output.