04-04-2016 11:37 AM - edited 03-01-2019 12:40 PM
We currently have multiple management IP pools in our template, and recently a new blade pulled an IP from the last pool that was added. We can't use it, since that pool is in a different subnet than the FI, and KVM won't work in that case.
So the management interface pulled an IP from that pool, and the KVM won't connect because it is on a different subnet from the FI. We want to remove that pool so this doesn't happen again, and reset this blade so it lands on the other subnet (10.1.0.xxx), which will allow the KVM to start working. We also do not want to cause any interruption to the machines running on this blade, since it is production.
My question really is: if we remove that pool, will the blade automatically switch to one of the other existing pools?
What happens to the KVMs on the blades that currently have a KVM session using an IP from the pool we want to remove?
As seen in the screenshot, blades 1 and 2 have a KVM IP pulled from that pool. Blade 3 (UCS8) is the new blade and needs to be reset to another pool. What happens if this pool is removed in this scenario?
See screenshot.
04-04-2016 05:49 PM
I need to confirm in the lab, but I think the CIMCs will still hang onto their IPs until you reboot the CIMCs or manually assign a static IP, even after deleting the pools.
Basically, removing the pool only stops the unassigned IPs from being handed out.
The process of rebooting the CIMC so it picks up an IP from a new pool does not bother the running OS, as the CIMC is out of band.
The only caveat I can think of is blades with the 12Gb SAS controllers: there is a bug (CSCut61527) where the RAID controller info isn't completely populated in the UCSM inventory, and when the CIMC recognizes the missing info, it tries to force a deep discovery (a reboot of the blade) to retrieve it. This only applies to M4 servers. If you have M4 servers, please check the known fixed releases; if your blades and UCSM are running at those levels or higher, then you can reboot/reset the CIMC without issue.
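The behavior described above can be sketched as a small state model. This is a hypothetical simulation, not real UCSM code; the pool names and addresses are made up for illustration. It shows the two claims: deleting a pool only removes the unassigned IPs from circulation, and an already-assigned CIMC IP persists until the CIMC is reset, at which point it pulls from a remaining pool.

```python
# Hypothetical simulation of the pool-deletion behavior described above.
# Not real UCSM code; names and addresses are invented for illustration.

class MgmtIpPool:
    def __init__(self, name, addresses):
        self.name = name
        self.free = list(addresses)  # unassigned IPs available for hand-out

class Cimc:
    """Out-of-band controller: keeps its mgmt IP independently of the pool."""
    def __init__(self, blade):
        self.blade = blade
        self.ip = None

    def assign_from(self, pools):
        for pool in pools:
            if pool.free:
                self.ip = pool.free.pop(0)
                return
        self.ip = None  # no pool has a free address

    def reset(self, pools):
        # Resetting the CIMC releases its IP and re-pulls from remaining
        # pools. The blade's OS is untouched: the CIMC is out of band.
        self.ip = None
        self.assign_from(pools)

bad_pool = MgmtIpPool("10.2.0.x", ["10.2.0.10", "10.2.0.11"])
good_pool = MgmtIpPool("10.1.0.x", ["10.1.0.10", "10.1.0.11"])
pools = [bad_pool, good_pool]

cimc = Cimc("blade-3")
cimc.assign_from(pools)   # pulls 10.2.0.10 from the first pool

pools.remove(bad_pool)    # delete the pool: only unassigned IPs vanish
print(cimc.ip)            # still 10.2.0.10: the CIMC hangs onto its IP

cimc.reset(pools)         # CIMC reset (running OS unaffected)
print(cimc.ip)            # now 10.1.0.10, from the remaining pool
```

The key design point the sketch captures is that the CIMC's address is state held by the controller itself, not a live reference into the pool, which is why deleting the pool alone does not free it.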
Thanks,
Kirk
04-06-2016 12:56 AM
If you delete the ext-mgt pool (for external CIMC access), the IP addresses are removed immediately from the CIMCs! No reboot of the server, nor of the CIMC, is needed. Then you can define a new ext-mgt pool, and an IP address is automatically assigned to each blade.
04-06-2016 05:58 AM
I just tested this in the lab on 3.1(1e), and after removing the IP block, the discovered blades' CIMCs still retained their IPs until a CIMC reset (or a decommission of the blade).
After deleting the blocks and creating new ones, then resetting/rebooting the CIMCs, they will pull from the newly available blocks.
Thanks,
Kirk...
04-06-2016 08:19 AM
Kirk
I'm confused! Are you talking about the out-of-band management IP pool? I tested this and came to a different result; see my entry above!
Walter.
04-06-2016 09:47 AM
Greetings.
Yes, I'm referring to the out-of-band IPs defined in IP blocks in the "IP Pool ext-mgmt" container. Are you referring to IPs pulled via a 'Management IP Address Policy', which stay with the service profile (vs. the ones initially assigned to the CIMC during discovery)?
Thanks,
Kirk...
04-06-2016 10:54 AM
I hadn't checked this with a UCS Mini system.
I was pretty sure I had seen this behavior (the CIMC retaining its IP after pool deletion) in early versions (i.e. 2.2(5)/2.2(6)) on standard UCSM with 62xx FIs/blades.
I'll see what this does on one of our mini systems in the lab.
Thanks,
Kirk...
04-06-2016 11:08 AM
Kirk
That would be very odd if it were specific to UCS Mini; I have used this feature many times on classic UCS, for example when changing the IP addresses of a UCS domain.
Walter.
04-06-2016 12:09 PM
Okay, thanks everyone. We removed the pool range, and once we did, the out-of-band management IP went to 0.0.0.0; within about 10 seconds the blade grabbed a new IP from one of the other pools and came back up like it was supposed to. KVM and everything else was working again.
No interruption to service, either!
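The observed sequence (the management IP drops to 0.0.0.0, then a new address is grabbed from a remaining pool within seconds) can be sketched like this. Again a hypothetical model with made-up pool names and addresses, not real UCSM code:

```python
# Hypothetical model of automatic reassignment after a pool block is deleted.
# Pool names and addresses are invented for illustration.

pools = {
    "pool-a": ["10.2.0.10"],   # block to be deleted (wrong subnet)
    "pool-b": ["10.1.0.10"],   # remaining block (correct subnet)
}

blade_ip = pools["pool-a"].pop(0)   # blade's mgmt IP came from pool-a

# Deleting the block: the assigned IP is released and drops to 0.0.0.0 ...
del pools["pool-a"]
blade_ip = "0.0.0.0"

# ... then an address is automatically handed out from a remaining pool.
for name, free in pools.items():
    if free:
        blade_ip = free.pop(0)
        break

print(blade_ip)   # 10.1.0.10: KVM works again, no service interruption
```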
04-06-2016 01:45 PM
Good to hear it solved your problem! That was my claim in my first note, from experience!
04-24-2017 12:24 AM
Hey Walter,
Hope you are doing well!
Thanks for this answer; we just did it for a customer and everything worked as you described :-)
Regards
Richard
07-30-2018 03:32 AM - edited 07-30-2018 03:34 AM
Hey @Walter Dey, thank you for the information. A quick question: will the resolution hold true when the blade and the SP are both receiving their IPs from the same ext-mgmt pool?
I need to split a big pool into two; to do that, I have to delete the existing pool from which the blades and SPs are receiving their KVM IPs and recreate new ones.
08-26-2018 12:16 PM
Hi
It's very easy, and in fact non-disruptive, to delete an existing ext-mgt pool and then create new ones!
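As a rough sketch, splitting the pool might look something like this in the UCSM CLI. The addresses, gateway, and block ranges here are hypothetical, and exact command syntax can vary by release, so double-check against the CLI configuration guide for your UCSM version before running it:

```
UCS-A# scope org /
UCS-A /org # scope ip-pool ext-mgmt
UCS-A /org/ip-pool # delete block 10.2.0.10 10.2.0.50
UCS-A /org/ip-pool # create block 10.1.0.10 10.1.0.50 10.1.0.1 255.255.255.0
UCS-A /org/ip-pool/block* # commit-buffer
```

As discussed earlier in the thread, depending on the UCSM release the CIMCs may either pick up a new address immediately or require a CIMC reset; either way the running OS on the blades is untouched.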