I'm hoping someone can give me some quick info on this. We currently have 6 x B200 M3 hosts configured using a Service Profile Template, which has a Boot Policy with only CD and iSCSI boot allowed (in that order). All hosts boot from LUNs on the iSCSI SAN and this has been the case for a few years now.
Now, I am looking to split off 2 of the hosts for a VDI pilot project, but I want to retain all Service Profile configuration settings, with the sole exception of the Boot Policy. I need to retain identical functionality and networking/storage capabilities across all 6 hosts (to allow for emergency expansion until we go live business-wide with additional resource), but have the 2 split-off hosts boot from SD cards using FlexFlash instead.
We are running UCS v3.1(2e) firmware across the entire estate and ESXi v6.0 Update 2a across all 6 hosts.
I have read conflicting opinions about adding SD card boot options to an existing boot-from-SAN Boot Policy. Is it possible to simply insert SD boot ahead of the existing iSCSI boot option, so that the servers use SD if available and fall back to boot-from-SAN otherwise? If so, I assume I will need to do the following to the Service Profile Template for all 6 hosts:
Assign a Local Disk Policy that enables the FlexFlash options
Assign a Scrub Policy to get the 2 x SD cards (in each of the 2 split-off hosts) into a consistent RAID-mirrored configuration before reinstalling ESXi from scratch
Disable the Scrub Policy once the rebuilt servers have been successfully provisioned
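For reference, the three policy steps above might look something like this in the UCS Manager CLI. This is a sketch only: the policy names (VDI-LocalDisk, VDI-Scrub) are placeholders, and the exact keyword spellings should be verified against the UCS 3.1 CLI configuration guide before use.

```
# Sketch only -- policy names are placeholders; verify keywords against
# the UCS Manager 3.1 CLI configuration guide for your release.
scope org /

# Step 1: Local Disk Policy with FlexFlash enabled for the SD cards
create local-disk-config-policy VDI-LocalDisk
  set flexflash-state enable
  exit

# Step 2: Scrub Policy that scrubs only the FlexFlash SD cards,
# leaving local disks and BIOS settings untouched
create scrub-policy VDI-Scrub
  set flexflash-scrub yes
  set disk-scrub no
  set bios-settings-scrub no
  exit

commit-buffer
```

Step 3 would then be flipping flexflash-scrub back to no (or detaching the Scrub Policy) once the two hosts have been rebuilt, so a later disassociation cannot wipe the SD cards.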
If so, presumably the Scrub Policy will not do anything nasty to the existing iSCSI LUNs that the 4 original hosts will continue to use? I will be removing the LUNs for the 2 split-off servers, so I am not concerned about those.
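For peace of mind on the four remaining boot-from-SAN hosts, one way to sanity-check after the template change has been acknowledged is to confirm from the ESXi shell that the iSCSI adapter and boot LUN are still presented. A rough sketch (run per host; adapter names and device identifiers will differ in your environment):

```shell
# Run on each of the 4 remaining boot-from-SAN hosts after reboot
# (sketch only -- output and device names vary per environment).
esxcli iscsi adapter list                  # iSCSI vmhba still bound
esxcli storage core device list            # boot LUN still visible
```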
Any advice on gotchas, and confirmation that the above will not trash the servers, would be grand! By the way, the Maintenance Policy applied at Service Profile Template level is user-ack, and this worked fine during 'B' package firmware upgrades.
OK, so our support providers took a look and are recommending either power-cycling each FI or upgrading the firmware; they say the error is nothing to be concerned about.
Given I am doing an upgrade at some point anyway, I'll take the latter option and post feedback when I've completed it.
We have the same issue with our UCS estate, running v2.2(1b): 1 x 5108 chassis and 2 x 6296UP FIs.
As part of a project we are about to upgrade to v3.1(1e), but I am reluctant to do this while this error is flapping several times per day.
It is only ever classified as a green warning and clears on its own, but it does not instill confidence in running an in-place upgrade on a production environment.
I am logging a call with our service provider, who will likely contact Cisco TAC, and I will post any useful information on this thread.