UCS KVM issues, need to remove a pool and leave existing pool questions

rzotz0814
Level 1

We currently have multiple management IP pools in our template. Recently, a new blade pulled an IP from the last pool that was added, and we can't use it because that pool is on a different subnet than the FI. KVM won't work in that case.

So the blade's management interface pulled an IP from that pool, and the KVM won't connect because it is on a different subnet than the FI. We want to remove that pool so this doesn't happen again, and reset this blade so it lands on the other subnet (10.1.0.xxx), which will allow the KVM to start working. We also do not want to cause any interruption to the machines running on this blade, since it is production.

My question really is: if we remove that pool, will the blade itself automatically switch to one of the other existing pools?

And what happens to the KVM sessions on the other blades that are currently using an IP from the pool we want to remove?

As seen in the screenshot, blades 1 and 2 have a KVM IP pulled from this pool. Blade 3 (UCS8) is the new blade and needs to be reset to another pool. What happens in this scenario if the pool is removed?

See screenshot.

Accepted Solution

If you delete the ext-mgmt pool (used for external CIMC access), the IP addresses are removed from the CIMCs immediately! No reboot of the server or the CIMC is needed. You can then define a new ext-mgmt pool, and an IP address is automatically assigned to each blade.


13 Replies

Kirk J
Cisco Employee

I need to confirm in the lab, but I think the CIMCs will still hang onto their IPs until you reboot the CIMCs or manually assign a static IP, even after deleting the pools.

Basically, removing the pool will only stop the unassigned IPs from being handed out.

The process of rebooting the CIMC to get it to pick up from a new pool does not bother the running OS, as the CIMC is out of band.

The only caveat I can think of is blades with the 12Gb SAS controllers: there is a bug (CSCut61527) where the RAID controller info isn't completely populated in the UCSM inventory, and when the CIMC recognizes the missing info, it tries to force a deep discovery (a reboot of the blade) to retrieve it. This only applies to M4 servers. If you have M4 servers, please check the known fixed releases; if your blades and UCSM are running at those levels or higher, you can reboot/reset the CIMC without issue.

Thanks,

Kirk

If you delete the ext-mgmt pool (used for external CIMC access), the IP addresses are removed from the CIMCs immediately! No reboot of the server or the CIMC is needed. You can then define a new ext-mgmt pool, and an IP address is automatically assigned to each blade.
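For illustration, the delete-and-recreate procedure described above could look roughly like this in the UCS Manager CLI. This is a sketch only: the `UCS-A` prompt, the address ranges, the gateway, and the subnet mask are hypothetical placeholders, and the exact block boundaries must match your own pool definition.

```
UCS-A# scope org /
UCS-A /org # scope ip-pool ext-mgmt
! Remove the block on the wrong subnet (first and last IP of the block)
UCS-A /org/ip-pool # delete block 10.2.0.10 10.2.0.50
! Create a replacement block: first IP, last IP, default gateway, subnet mask
UCS-A /org/ip-pool* # create block 10.1.0.10 10.1.0.50 10.1.0.1 255.255.255.0
UCS-A /org/ip-pool/block* # commit-buffer
```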

I just tested this in the lab on 3.11e, and after removing the IP block, the discovered blades' CIMCs still retained their IPs until a CIMC reset (or a decommission) of the blade.

After deleting the blocks and creating new ones, then resetting/rebooting CIMCs, they will pull from the new available blocks.
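The per-server CIMC reset mentioned above can be issued from the UCS Manager CLI along these lines. Again a sketch: the prompt and the chassis/slot numbers (`1/3`) are example values; since the CIMC is out of band, this should not touch the running host OS.

```
UCS-A# scope server 1/3
UCS-A /chassis/server # scope cimc
! Reset the CIMC so it releases its IP and pulls from the available blocks
UCS-A /chassis/server/cimc # reset
UCS-A /chassis/server/cimc* # commit-buffer
```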

Thanks, 

Kirk...

Kirk

Confused! Are you speaking about the out-of-band management IP pool? I tested this and came to a different result; see my entry above!

Walter.

Greetings.

Yes, I'm referring to the out-of-band IPs defined in IP blocks in the "IP Pool ext-mgmt" container. Are you referring to IPs pulled via a 'Management IP Address Policy', which stay with the service profile (as opposed to the ones initially assigned to the CIMC during discovery)?

Thanks,

Kirk...

I checked with 3.02.d

Yes, "IP Pool ext-mgmt", assigned to the blade CIMC; nothing to do with the SP!

See attachment.

Maybe this has recently changed!

I hadn't checked this with a UCS Mini system.

I was pretty sure I had seen this behavior (the CIMC retaining its IP after pool deletion) in early versions (i.e. 2.25/2.26) of standard UCSM with 62xx FIs/blades.

I'll see what this does on one of our mini systems in the lab.

Thanks,

Kirk...

Kirk

This would be very odd if it were specific to UCS Mini; I have used this feature many times on classic UCS, for the case of changing the IP addresses of a UCS domain.

Walter.

Okay, thanks everyone. We removed the pool range, and once we did, the outbound management IP went to 0.0.0.0; within about 10 seconds it grabbed a new one from one of the other pools and came back up like it's supposed to. KVM and all was working again.

No interruption to service as well!

Good to hear it solved your problem! That was my claim in my first note, from experience!

Hey Walter,

Hope you are doing well!

Thanks for this answer; we just did it for a customer and everything worked as you described :-)

Regards

Richard

Hey @Walter Dey, thank you for the information. A quick question: will the resolution hold true when the blade and the SP are both receiving their IPs from the same ext-mgmt pool?

 

I need to split a big pool into two; to do that, I have to delete the existing pool from which the blades and SPs are receiving their KVM IPs and recreate new ones.

 

Hi

It's very easy and in fact non-disruptive to delete an existing ext-mgmt pool and then create new ones!
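Splitting one block into two, as asked above, could then look roughly like this in the UCS Manager CLI. This is a hedged sketch: all addresses, the gateway, the mask, and the `UCS-A` prompt are hypothetical, and the new blocks must together cover only addresses that are free or already assigned as intended.

```
UCS-A# scope org /
UCS-A /org # scope ip-pool ext-mgmt
! Delete the one big block (first and last IP of the existing block)
UCS-A /org/ip-pool # delete block 10.1.0.10 10.1.0.100
! Recreate it as two smaller blocks: first IP, last IP, gateway, mask
UCS-A /org/ip-pool* # create block 10.1.0.10 10.1.0.50 10.1.0.1 255.255.255.0
UCS-A /org/ip-pool/block* # exit
UCS-A /org/ip-pool* # create block 10.1.0.60 10.1.0.100 10.1.0.1 255.255.255.0
UCS-A /org/ip-pool/block* # commit-buffer
```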
