10-31-2022 07:05 PM - edited 10-31-2022 07:06 PM
We are noticing that during high-authentication-volume hours the ISE application crashes and the application status goes into the initializing state; after some time, maybe an hour, it recovers on its own and returns to the running state. Wondering if anybody else is experiencing a similar issue with ISE 3.1 Patch 3. During this time, obviously, all authentications fail. We upgraded to Patch 4 and are waiting to see any improvement.
It’s a 12-node deployment with dedicated PAN and MnT nodes and 8 PSNs.
11-01-2022 12:31 AM
- Do you have virtual ISEs or appliances? For VMs, follow up on performance with the hypervisor monitoring tools; if needed, increase resources such as CPU and memory.
M.
11-01-2022 06:36 AM
They are VMs; the scale guide supports 40k endpoints per PSN with the current resource reservation. I think it’s a bug. Cisco TAC couldn’t figure it out, and we are trying to escalate to the BU at this point.
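For context, the per-PSN figure quoted above implies a deployment-wide ceiling that can be sanity-checked against observed peak load. A rough back-of-the-envelope sketch, assuming "40k per PSN" means 40,000 concurrent sessions and using the 8 PSNs mentioned in the original post:

```python
# Rough capacity check for the deployment described in this thread.
# Assumption: "40k per PSN" means 40,000 concurrent sessions per PSN.
SESSIONS_PER_PSN = 40_000
PSN_COUNT = 8  # from the original post

total_capacity = SESSIONS_PER_PSN * PSN_COUNT
print(f"Theoretical max concurrent sessions: {total_capacity:,}")  # 320,000

# With N+1 redundancy (the surviving PSNs must absorb a failed node's
# load), the practical ceiling is one PSN lower:
redundant_capacity = SESSIONS_PER_PSN * (PSN_COUNT - 1)
print(f"Capacity with one PSN failed: {redundant_capacity:,}")  # 280,000
```

If the authenticating endpoint count is well under these numbers, raw session scale is unlikely to be the bottleneck, which points back at per-node resources.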
11-01-2022 07:32 AM
- You may try `show logging system ade/ADE.log`; run this command while the high authentication volume is occurring, since the problem recurs at regular intervals (if time permits, check the output for related information).
M.
11-01-2022 01:58 AM
Are you within the scale limits listed here? https://www.cisco.com/c/en/us/td/docs/security/ise/performance_and_scalability/b_ise_perf_and_scale.html
11-01-2022 06:38 AM
Yes, we are.
11-02-2022 02:15 PM - edited 11-02-2022 02:16 PM
You might see if the conditions for this bug are relevant. This is the only condition under which I've seen instability so far in ISE 3.1.
https://bst.cloudapps.cisco.com/bugsearch/bug/CSCwd41773
11-23-2022 12:38 PM - edited 11-23-2022 01:19 PM
Just to let everyone know: the issue was with the CPU resource reservation. At the end of the day this was nothing more than a miscommunication between the network and server teams. The reservation was set to 14000, with the limit set to the same value. The reservation was changed to 16000 and the CPU limit to unlimited, and the issue was resolved. In addition, we also made some changes on ISE and Meraki:
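Since the root cause was a VM CPU reservation sitting below requirements with a hard limit at the same value, here is a hypothetical sketch of the check the two teams could have compared notes against. The function name and the 16000 MHz threshold are illustrative (16000 is the value the poster settled on); the rule it encodes is that the reservation should meet the requirement and the CPU limit should be unlimited, which vSphere represents as -1:

```python
# Hypothetical sanity check for an ISE PSN's vSphere CPU allocation.
# In vSphere, CPU reservation and limit are expressed in MHz, and a
# limit of -1 means "unlimited". This function just takes the two
# numbers, however they were retrieved from the hypervisor.
def check_cpu_allocation(reservation_mhz, limit_mhz, required_mhz=16000):
    """Return a list of problems with a PSN's CPU allocation."""
    problems = []
    if reservation_mhz < required_mhz:
        problems.append(
            f"reservation {reservation_mhz} MHz is below the "
            f"required {required_mhz} MHz"
        )
    if limit_mhz != -1:
        problems.append(
            f"CPU limit is capped at {limit_mhz} MHz; it should be unlimited"
        )
    return problems

# The configuration described in this thread before the fix:
print(check_cpu_allocation(14000, 14000))  # two problems reported
# After the fix (16000 reserved, limit unlimited):
print(check_cpu_allocation(16000, -1))     # []
```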
We disabled Endpoint Owner Directory and the Profiler Forwarder Persistence Queue, and changed the Meraki interim update interval to every 3 hours, compared to every 10 minutes (the default).
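To see why the Meraki interim-update change matters, here is a quick back-of-the-envelope calculation of the accounting load reduction, assuming each active session sends one RADIUS interim-accounting update per interval (the session count is a hypothetical round number, not from the thread):

```python
# Interim accounting load, before vs. after the change in this thread.
# Assumption: each active session sends one RADIUS interim-accounting
# update per interval, so the rate scales inversely with the interval.
old_interval_s = 10 * 60    # every 10 minutes (Meraki default)
new_interval_s = 3 * 3600   # every 3 hours

sessions = 100_000  # hypothetical active-session count

old_rate = sessions / old_interval_s  # updates per second
new_rate = sessions / new_interval_s

print(f"Before: {old_rate:.1f} interim updates/s")  # 166.7
print(f"After:  {new_rate:.1f} interim updates/s")  # 9.3
print(f"Reduction factor: {new_interval_s // old_interval_s}x")  # 18x
```

An 18x drop in interim accounting traffic is a meaningful reduction in MnT and PSN processing load during busy hours.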
Thanks for your inputs.