10-19-2022 07:18 AM
Hello,
We are in the process of migrating from on-prem ISE 3.0 to ISE 3.2 in Azure with Azure AD.
The configuration was migrated manually, and I am now testing 802.1X authentication. I know the clients are good because they are able to authenticate against ISE 3.0, but they fail to authenticate on 3.2. I verified the certificates on both the client and the ISE environment and they are good.
I tested with AnyConnect 4.10.06079 and Secure Client 5.0.00556 with the same result:
Event 5440: Endpoint abandoned EAP session and started new
Authentication Protocol: EAP-FAST (EAP-TLS)
Client is hitting all the right policies.
Has anyone seen this in a 3.2 environment?
Thank you
10-19-2022 09:27 AM
This has nothing to do with Azure AD and everything to do with the EAP negotiation between the endpoint supplicant and ISE. Either the certificates are untrusted, the timeouts are too short, or latency is triggering timeouts. When the client abandons an EAP session, it's not getting what it needs (in a timely manner).
10-19-2022 11:15 AM - edited 10-19-2022 11:39 AM
Hi Thomas,
I confirmed that the certificates are good and trusted. Timeouts make sense as a cause, since latency went from 1 ms on-prem to 25 ms to Azure. I am seeing this error message in DART for the wired network connection: "The connection duration timer expired."
Where can I update the timeouts? Is it in the switch interface configuration or in ISE? In ISE, under EAP-TLS settings, Enable EAP TLS Session Resume is currently set to 7,200 seconds.
interface config timeouts:
authentication timer reauthenticate server
authentication timer unauthorized 600
dot1x timeout tx-period 10
dot1x max-reauth-req 3
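In case it matters, these are the other switch-side RADIUS timers I am aware of. This is only a generic sketch with a placeholder server name, address, and values, not our actual config:
radius server ISE-AZURE
 ! placeholder name/address; the real config points at the ISE PSN in Azure
 address ipv4 10.0.0.10 auth-port 1812 acct-port 1813
 ! seconds to wait for a reply before retransmitting, and number of retries
 timeout 5
 retransmit 3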
Thank you
11-03-2022 06:59 AM
I am on the 3rd TAC engineer since I opened this case.
No progress so far; they are just collecting DART and ISE debugs and going over them. I did more testing last night, switched the connection over to our broadband (Cox) circuit, and it worked. Looks like the issue is isolated to the Verizon MPLS. Will be doing more troubleshooting today.
11-03-2022 10:18 PM
It could be an issue related to MTU; try lowering the MTU on the VLAN which carries the RADIUS frames from the switches.
You could also try switching to PEAP on a test host; PEAP is usually less susceptible to MTU issues.
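A minimal sketch of that MTU change, assuming the RADIUS traffic is sourced from an SVI (the interface number and MTU value are placeholders, not taken from this thread):
interface Vlan100
 ! placeholder SVI; use whichever L3 interface carries RADIUS toward ISE
 ip mtu 1400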
11-09-2022 07:23 AM
Hi Massimo,
We tried that but it did not help.
Thank you
11-09-2022 07:32 AM - edited 11-09-2022 07:36 AM
Our connection to Azure is over Verizon MPLS. The issue was caused by UDP packets being fragmented and arriving in Azure out of order. The larger RADIUS packet, 1511 bytes, is fragmented into two packets: one of 1410 bytes and one of 101 bytes. Going over the Verizon MPLS, those packets arrive out of order most of the time (the 101-byte fragment first and the 1410-byte fragment second), most likely because the Verizon ExpressRoute connection into Azure builds redundant peers and does equal load balancing (EQLB) between them.
Also, within Azure the Virtual Network stack is set up to drop "out-of-order fragments," that is, fragmented packets that don't arrive in their original fragmented order. These packets are dropped mainly because of a network security vulnerability announced in November 2018 called FragmentSmack.
Our fix was to enable ip virtual-reassembly on the router's MPLS interface.
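For anyone who hits the same thing, the change itself is a single interface command. The interface name below is only an example, and on some IOS-XE releases the keyword form may differ (e.g. "ip virtual-reassembly in"):
interface GigabitEthernet0/0/1
 ! MPLS-facing WAN interface (example name)
 ip virtual-reassembly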
We experienced this issue only over the Verizon MPLS connection into Azure. Authentication over the broadband/DMVPN connection worked fine because it does not use the ExpressRoute connection.
Thank you all for the recommendations.