This community is for technical, feature, configuration and deployment questions.
Let's assume the DNAC and ISE integration is working well. What is the procedure to switch that existing integration to a totally different/new ISE deployment?
It would appear that the ISE config in DNAC is so tightly integrated that it's not a simple case of updating the ISE server's IP address in DNA Center. In my case the old ISE server is still operational, but let's assume a total-failure scenario in which the ISE deployment was destroyed - is there a way to point DNAC to a new ISE deployment (assuming you had no ISE config backups)?
I have not encountered this first-hand, but it's an interesting one.
Just thinking through it: if I am at a stage where I am only using DNAC for design, device provisioning and assurance, then there should be no impact from the DNAC perspective, though obviously authentication failures and other ISE-related impact will occur until ISE is recovered. You could potentially update the new ISE IP in the global network settings, push it to all sites, and integrate the new ISE with DNAC. It would be preferable to sign the new ISE certificate with the same CA that signed the DNAC certificate, considering you also don't have a certificate backup; if you can keep the IP the same, even better.
If you are doing fabric and using micro-segmentation, then it would be a major disruption, I would think. I am not sure that once you integrate the new ISE, DNAC will sync all existing SGTs and policy, so you might have to rebuild it manually on ISE, or reconfigure it in DNAC and push it to ISE.
I will be following this to see what others have to say.
I am in the same boat as you - not using DNAC for SDA yet. Once I get the go-ahead to try to de-register our DNAC appliance from the ISE 2.4 node and re-register it to ISE 2.7, I will provide feedback. Right now it looks as if I need to remove a lot of config from DNAC to allow me to integrate it with a new ISE.
It's a lot more complicated than "integrating" ISE with Cisco Prime Infrastructure :-)
@Arne Bier very interesting and important questions. Here are my opinions on your concerns:
Completely migrating to a new ISE instance and performing a brand-new integration would be a large task, IMO. I can tell you from experience that I have seen the pxGrid connection between our ISE cluster and DNAC cluster go down and stay down for some time. This did not cause any fabric-related problems, because the clusters had been integrated beforehand, so the SGT-to-VN assignments in DNAC and everything else were already configured and provisioned. The issue was actually discovered when attempting to deploy new SGTs, since we rely on ISE, not DNAC, to perform all micro-segmentation tasks.

Simply updating the ISE IP in DNAC will not affect the NADs and clients in your fabric until you re-provision your hosts. Updating the ISE IP in DNAC and then provisioning ENs will update things on the ENs such as RADIUS configs, dynamic authorization (CoA) configs, etc. I have actually tested this, and the fabric remains operational when NOT re-provisioning the ENs; note, though, that things were already provisioned and deployed. Since DNAC version 1.3.x you can choose to make DNAC the main policy enforcer, where you configure the CTS-related settings (found under Policy -> Group-Based Access Control Settings). If you choose DNAC as the main driver, ISE becomes a read-only mechanism for that policy.
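For anyone following along, the AAA-related config that DNAC pushes to an EN during provisioning (and that re-provisioning would rewrite after an ISE IP change) looks roughly like the following. Server names, IPs and keys here are illustrative placeholders, not the exact names your DNAC will generate:

```
! RADIUS server definition pointing at an ISE PSN (IP is an example)
radius server dnac-radius_10.10.10.10
 address ipv4 10.10.10.10 auth-port 1812 acct-port 1813
 pac key <shared-secret>
!
! Server group referenced by the 802.1X/MAB authentication methods
aaa group server radius dnac-client-radius-group
 server name dnac-radius_10.10.10.10
!
! Dynamic authorization (CoA) client entry for the PSN
aaa server radius dynamic-author
 client 10.10.10.10 server-key <shared-secret>
```

This is why swapping the ISE deployment isn't complete until the ENs are re-provisioned: the per-server definitions, group membership and CoA client entries all still reference the old PSN IPs.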
What is the procedure to switch that existing integration to a totally different/new ISE deployment?
is there a way to point DNAC to a new ISE deployment (assuming you had no ISE config backups)?
AFAIK there is no documentation/CVD on switching over to a brand-new ISE cluster. Some options that I think would be worth testing:
*I would recommend ensuring that your critical-auth VLAN provides whatever access you deem necessary. You can tweak DNAC's out-of-the-box config via the template editor. This may bail you out and allow clients some sort of network access if things go wrong.
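As a rough illustration of that critical-auth tweak (IBNS 2.0 style; the template name, VLAN and ACL here are made up for this example - DNAC's out-of-the-box names will differ):

```
! Service template applied by the control policy when all RADIUS servers are dead
service-template CRITICAL_DATA_ACCESS
 vlan 999
 access-group CRITICAL_ACL    ! pre-built ACL permitting e.g. DHCP/DNS only
!
! Keep processing EAPOL while the servers are down, and recover when they return
dot1x critical eapol
authentication critical recovery delay 1000
```

The point being: decide ahead of time what "some sort of network access" means for your clients and bake it into that ACL/VLAN before you ever need it.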
Scenario 1 (migrate from an existing ISE cluster): Change the authorization profiles in ISE to set the re-auth timer to something like 12 hours or more. Once clients re-auth, you get that 12-hour window to work in. Flip the GBAC settings so that DNAC manages micro-segmentation; this should sync the current ISE CTS config to DNAC. At this point no hosts need to re-auth to the existing PSNs, and you have some footprint of CTS deployed and functioning. Build out the new ISE cluster, including PSNs and RADIUS policies. In a scheduled ASI, change the ISE IP in DNAC, then re-provision one EN so that it attempts to use the new PSN come re-auth time, and test connectivity. At this point the new PSNs are authenticating and authorizing. You shouldn't need to make any CTS changes, since the fabric already contains those configs. Once you've confirmed the one EN works, start re-provisioning the remaining ENs.
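After re-provisioning that first EN, a few standard IOS-XE show commands (nothing DNAC-specific; the group name and interface are examples) would confirm the switchover took effect before you touch the remaining ENs:

```
show aaa servers                                  ! new PSN listed and marked UP
show access-session interface Gi1/0/10 details    ! client authorized via new PSN, SGT applied
show cts environment-data                         ! environment data refreshed from the new ISE
show cts pacs                                     ! PAC provisioned against the new server
!
! Optional: fire a synthetic auth at the new server group
test aaa group dnac-client-radius-group testuser testpass new-code
```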
Scenario 2 (migrate to a new ISE cluster after losing the old one): I would assume at this point hosts would start dropping into the critical-auth VLAN once re-auth occurred and your NADs figured out ISE is dead. You would probably want to build out the new ISE cluster and then integrate it with DNAC to sync configs. I would imagine that your SGTs would need to keep the same names, etc., so that you don't have to change the VN-to-IP-pool/SGT assignments in DNAC and really shake up the fabric. That info can be taken from your DNAC cluster for reference during the new build. Again, in this scenario you could flip to using DNAC as the main driver for GBAC. However, you will run into the NAD + ISE auth problem, since your NADs will still contain the old, dead ISE RADIUS configs. Things won't start to get better until integration and re-provisioning of the ENs is complete.
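On the "NADs figured out ISE is dead" point: how quickly that happens depends on the dead-criteria/deadtime settings on the NADs. DNAC-provisioned configs normally include something along these lines (values and server name are illustrative):

```
! Mark a RADIUS server dead after 5 s without a valid response and 3 retries
radius-server dead-criteria time 5 tries 3
! Keep a dead server out of rotation for 3 minutes before retrying it
radius-server deadtime 3
!
! Optional per-server keepalive probe so the NAD notices when ISE recovers
radius server dnac-radius_10.10.10.10
 automate-tester username dummy probe-on
```

Until those timers expire, clients will keep timing out against the dead PSNs rather than falling into critical auth.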
I may be forgetting a couple of items to watch out for/be aware of, but I hope this helps shed some light on your very good questions.
Hi @Mike.Cifelli - thanks for thinking this through - you have more experience with this stuff than I have; I have no real SDA battle experience so far. In my situation DNAC is used only for simple 9800 provisioning and Assurance. It's effectively in our lab, and we have used it to learn how to provision 9800s with DNAC, because customers are buying these combos. DNAC is major overkill if all you have is a single 9800 controller. In my case I have a "lab" DNAC with a few 9800s. I will try to swing the DNAC over to ISE 2.7 - once that succeeds, are you saying I need to re-provision my NADs, and DNAC will then push the RADIUS/TACACS config to them? I thought I could re-discover the network once I have DNAC and ISE integrated.
Now is a good time to test these things ... :)
@Arne Bier No problem - figured I would share some of my experiences with you as a heads-up. As far as the re-provision part, the first question I would ask is: are you planning on changing the ISE cluster IP information for/on any nodes? IMO you wouldn't need to re-discover the devices unless you were planning on completely removing them from DNAC; once discovered, they remain in inventory. If you change IPs or modify any ISE settings under Design -> Network Settings -> Network in the DNAC GUI, then you would need to re-provision devices so that things such as AAA configs get updated and the NADs don't have issues. To take it a step further: if you were fully integrated and running an SDA fabric with CTS etc., I would be willing to bet there would be some CTS PAC issues as well, which is why I think you would need to re-provision devices. With that said, it sounds like your scenario is quite different. I am definitely curious to hear what occurs during your testing adventures, so please share :)
I don't have any SDA fabric in my lab. The Cisco 9800 and Cat 9300 (as well as some older Cisco switches) were added into DNAC and provisioned in the early days of us playing around with this. I have tried to provision the Cat 9300s, and DNAC pushes most of the config down to the switch, except that it breaks on the AAA part. The switch already had a working RADIUS and TACACS config; when you provision from DNAC, it gets confused and bungles the config. Perhaps I am doing it all wrong. If you push TACACS config to a device, then you had better make sure it works, or else you can cut off your own access. That seems to be the case here - I had to log back into the switch using the local admin account and manually fix up the TACACS config after DNAC aborted the push. Perhaps it's obvious and I missed it, but when you provision a device for the very first time, you should be using a device credential that is NOT a TACACS credential; going forward, though, DNAC should manage that device using a TACACS credential.
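On the lockout risk: the usual safety net is to order the AAA method lists so that the local database is the fallback. A sketch of what that looks like (server/group names, IP and keys are made up for this example):

```
! Local safety-net account, usable only when TACACS+ is unreachable
username admin privilege 15 secret <strong-local-password>
!
tacacs server ISE_TACACS
 address ipv4 10.10.20.10
 key <shared-secret>
aaa group server tacacs+ dnac-network-tacacs-group
 server name ISE_TACACS
!
! Fall back to the local database if the TACACS+ group does not respond
aaa authentication login default group dnac-network-tacacs-group local
aaa authorization exec default group dnac-network-tacacs-group local
```

One caveat worth remembering: the `local` fallback only kicks in when the TACACS+ servers are unreachable, not when they are reachable and actively reject you - so a half-pushed config with a wrong shared secret can still lock you out of VTY access, and console + local admin remains the recovery path.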