10-31-2025 02:37 PM - edited 10-31-2025 02:41 PM
I could use some guidance. I have not tried support yet from either Cisco or VMware but here is my struggle.
I have an environment with roughly 15-20 B200 M5 blades.
They are all set to legacy boot.
They all boot from local disks configured in RAID 1 via the LSI SAS "Invader" controller.
They are all running ESXi 7.0u3w, and I have never had an issue booting or anything.
I am in the process of upgrading them to 8.0u3g. I upgraded one host and got the warning about legacy boot and legacy soon being unsupported. (Exact message: <bios_Firmware_Type Warning: Legacy boot detected. ESXi servers running legacy BIOS are encouraged to move to UEFI. Please refer to kb 84233 for more details>)
I ran the upgrade and the server booted as expected after the upgrade.
What is my risk level here? I suspect I should plan to move all my servers to UEFI as stated, to avoid bricking servers, etc. This whole environment is legacy, meaning older, and will be decommissioned, though probably not for a couple of years yet.
With that said, should I chance it and upgrade more hosts? My thought is probably not.
Again, we have local boot storage for ESXi. We also have a few SANs with connectivity to all hosts.
Is my safest path forward:
11-03-2025 08:56 AM
Those UCS Service Profile steps look correct.
I have heard of several people doing the BIOS > UEFI conversion as you have outlined without issue.
The only thing I would change: instead of upgrading ESXi from 7.0 to 8.0 in place after changing from legacy BIOS to UEFI boot mode, I would
reinstall ESXi 8.0 clean after flipping from BIOS to UEFI.
In my mind, UEFI is different enough that I would take the time to reinstall instead of trying to migrate in place.
This reinstall idea is my personal opinion, not a recommendation I have seen from UCS Engineering.
What OS modifications would be needed after a complete reinstall? Probably not many.
Give the host an IP; it should "see" shared storage/datastores just fine.
If you are using a DVS, the port groups stay in place.
If you are using a standard vSwitch with a lot of port groups, scripting the port-group creation after the install wouldn't be too difficult.
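For the standard-vSwitch case, that scripting could be as simple as a small generator. A minimal sketch in Python, assuming hypothetical port-group names, VLAN IDs, and vSwitch name (the `esxcli network vswitch standard portgroup` lines it prints would then be run on the freshly installed host):

```python
# Sketch: generate esxcli commands to recreate standard-vSwitch port groups
# after a clean ESXi reinstall. Port-group names, VLAN IDs, and the vSwitch
# name below are hypothetical examples, not taken from this environment.
portgroups = {"Mgmt": 10, "vMotion": 20, "Prod-Web": 100}

def build_commands(vswitch, groups):
    """Return esxcli lines that add each port group and set its VLAN ID."""
    cmds = []
    for name, vlan in groups.items():
        cmds.append(f"esxcli network vswitch standard portgroup add "
                    f"--vswitch-name={vswitch} --portgroup-name={name}")
        cmds.append(f"esxcli network vswitch standard portgroup set "
                    f"--portgroup-name={name} --vlan-id={vlan}")
    return cmds

if __name__ == "__main__":
    print("\n".join(build_commands("vSwitch0", portgroups)))
```

The generated lines can be pasted into an SSH session or saved to a shell script on a shared datastore.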
If you read between the lines of all the VMware documentation:
if the server (model, platform, CPU generation) supports booting from legacy BIOS, it is still supported to boot from legacy BIOS, as with UCS M5/M6.
UEFI is recommended, but UEFI is NOT required.
What requires UEFI is a newer server model (platform, CPU generation), like UCS M7/M8+.
The shortest distance between two points is a straight line: leave it on legacy BIOS and upgrade the OS.
11-03-2025 09:23 AM
If it is supported to leave it legacy, I am fine with that. It would be quickest to upgrade and just leave things. This environment will be going away on a long enough timeline (a year or so), but I don't want to skip converting to UEFI if now is the time to do it as things are upgraded to 8, or if, for example, legacy boot plus ESXi 8 causes some sort of boot issue, purple-screen issue, or general instability.
As mentioned, I have already updated one host, and it has been running OK for several days now (and has been rebooted a few times to confirm it boots). It was upgraded last week.
Ideally, if it is supported and stable in ESXi 8 with legacy boot (and Broadcom doesn't drop support for it or make any other drastic changes in that time), it should be fine as is, provided Cisco still supports it.
I might open a case and see how far I get with the same questions.
11-10-2025 04:22 PM
The conversion to UEFI worked as outlined above. I then ran the ESXi upgrade, which also worked, but it appeared to reorder the vmnics on the host and caused things to go crazy. I assume this is from the ESXi upgrade and not from switching boot modes.
The odd part is that I didn't even notice the vmnics had changed. The host management IP still responded, and all my NICs showed as connected with no errors or warnings in VMware. I didn't put any VMs or load on it until after I upgraded, and I didn't check the vmnics prior to upgrading.
That said, I have a mess now. Oddly, management and vMotion still work, but the second I move a VM to an upgraded host it can't talk to anything (things in the same VLAN, the gateway, etc.).
I am wondering if it would be possible to roll back to 7. I haven't tried that yet.
Finally, I'm wondering if it wouldn't just be easier to reload ESXi 8 from scratch at this point, hoping that resets the logical naming of the vmnics. I'm not sure where ESXi stores this information, but I have to imagine a fresh install would set it back so that vnic0 is vmnic0, etc. After the upgrade it looks like vnic0 is vmnic2, for example, which is wrong.
Why would a straightforward upgrade reorder the NICs? Odd.
11-16-2025 10:00 AM
ESXi renumbering vmnics after enabling Secure Boot or UEFI has been observed a number of times.
As Steve previously mentioned, just reinstall ESXi 8.0 fresh, and your original vnic-to-vmnic mappings should be correct.
Kirk...
11-17-2025 09:28 AM - edited 11-17-2025 11:16 AM
Thanks. This bug article was shared with me; is it worth trying?
It outlines reordering the NICs from the CLI and rebooting. I can always reload if that is what's recommended, but this is listed as a bug article and describes restoring the NIC order in ESXi back to the original based on MAC address.
Is this worth doing?
11-18-2025 11:13 AM
We upgraded from ESXi 7.0 to 8.0. vNICs and vHBAs reordered on Cascade Lake M5s. This was for a host with 6 vNICs and 2 vHBAs:
| Alias | BIOS | UEFI |
| vmnic0 | 1 | 4 |
| vmnic1 | 2 | 5 |
| vmnic2 | 3 | 6 |
| vmnic3 | 4 | 1 |
| vmnic4 | 5 | 2 |
| vmnic5 | 6 | 3 |
| vmhba1 | 1 | 2 |
| vmhba2 | 2 | 1 |
(We ignored the vHBA reordering, since the WWPNs, not the device names, are the "source of truth" for which LUN is presented to which vHBA.)
I wrote a script, based on the commands mentioned in the bug report, that extracts the vmnics and generates a second script you can run to reorder them. (I needed a script that generates a script because the reordering commands change based on the number of vNICs; some Service Profiles have 4 vNICs.) I had to run this from a shared datastore via the console, since the host did not yet have networking (e.g., to copy the script to the host), and at our scale it was too time-consuming to assign IPs, copy the script across, and then run it via SSH.
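For illustration, here is a minimal sketch of that generate-a-script approach in Python. The sample NIC-list lines are made up, and the sort assumes (as is common with UCS Service Profiles) that vNIC MACs are assigned in ascending order; on a real host, pull the actual aliases and bus addresses from the ESXi NIC list and verify the exact `localcli ... deviceInternal alias store` syntax against the bug article before running anything:

```python
# Sketch of a "script that generates a script": parse NIC-list output,
# sort the NICs by MAC address (assumed to follow UCS vnic order), and emit
# alias-store commands that renumber the vmnics. SAMPLE and the bus-address
# format are illustrative; take real values from the host per the bug article.
import re

SAMPLE = """\
vmnic0  0000:62:00.0  nenic  Up  ...  00:25:b5:00:00:03
vmnic1  0000:62:00.1  nenic  Up  ...  00:25:b5:00:00:04
vmnic2  0000:63:00.0  nenic  Up  ...  00:25:b5:00:00:01
vmnic3  0000:63:00.1  nenic  Up  ...  00:25:b5:00:00:02
"""

def generate_remap(nic_list_output):
    """Return localcli commands that renumber vmnics in MAC-address order."""
    nics = []
    for line in nic_list_output.splitlines():
        m = re.match(r"(vmnic\d+)\s+(\S+)\s+.*\s([0-9a-f:]{17})\s*$", line)
        if m:
            nics.append({"alias": m.group(1), "pci": m.group(2), "mac": m.group(3)})
    cmds = []
    # Lowest MAC becomes vmnic0, next lowest becomes vmnic1, and so on.
    for i, nic in enumerate(sorted(nics, key=lambda n: n["mac"])):
        cmds.append(
            "localcli --plugin-dir /usr/lib/vmware/esxcli/int/ "
            "deviceInternal alias store --bus-type pci "
            f"--alias vmnic{i} --bus-address {nic['pci']}"
        )
    return cmds

if __name__ == "__main__":
    print("\n".join(generate_remap(SAMPLE)))
```

A reboot is needed after the alias-store commands for the new numbering to take effect, as the bug article describes.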
Some background on the certification: Cisco used to certify both legacy BIOS and UEFI with M5, but midway through the 4.1(3) series of patch releases they stopped certifying legacy BIOS.
In the end, we decided it was just not worth the effort to switch to UEFI wholesale, given the patterns we observed: everywhere that reordering did not take place, we switched; everywhere else we left legacy BIOS in place.
The only reason I can think of to do all the work of fixing the order after the switch to UEFI is if vLCM in VCF 9.0 explicitly blocks upgrading a host still running legacy BIOS (whereas vSphere 8.0 only warns, but still lets you upgrade from ESXi 7.0 to ESXi 8.0). NB: I have not been able to test this myself; I am speculating.
11-18-2025 11:58 AM
Thanks
Ours are Skylake B200 M5 servers. Firmware is 4.1.3d.
I ran the reorder described in my previous post, and so far so good. It did put the vmnics back in the correct order after a reboot. You can run it from SSH provided you can still reach the management IP; if not, you have to do it over the KVM using the ESXi shell. Good luck pasting a multi-line NIC reorder into that. It will probably work, but that console is clunky at best.
Anyway, so far so good.
You are correct: converting to UEFI before going to 8 made no difference; the result was the same as staying on legacy and converting later. Broadcom/VMware clearly state they won't support you if they find you are still on legacy boot, so the grey area around which hardware is supported on legacy in 8 is really a moot point if you can't get support from the vendor in the end. Also, when you go to 9 you will need UEFI, so you might as well convert.
I will have to see whether upgrading the rest of the hosts goes as smoothly now that I know what to expect. I only have 10 or so hosts, so it's not as bad as for someone with a few rows full; I can't imagine that mess.
It is interesting that for you a fresh ESXi load didn't reorder. I would expect a fresh load to come up in the right order, since in my experience the UCS vnic number matches the VMware vmnic number on a fresh install; UEFI messed that up from what I have seen. Reordering via SSH per the bug article above seems to have put them back to:
UCS vnic0 = vmnic0
UCS vnic1 = vmnic1
etc.
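One quick way to confirm that mapping after a reorder is to compare MACs from both sides. A trivial sketch with made-up MACs (on a real host you would pull them from the Service Profile in UCSM and from the ESXi NIC list, e.g. `esxcfg-nics -l`):

```python
# Sanity check (hypothetical data): confirm UCS vnicN maps to vmnicN by
# comparing MAC addresses from the Service Profile against the ESXi NIC list.
ucs_vnics = {0: "00:25:b5:00:00:01", 1: "00:25:b5:00:00:02"}    # from UCSM
esxi_vmnics = {0: "00:25:b5:00:00:01", 1: "00:25:b5:00:00:02"}  # from ESXi

def mapping_matches(ucs, esxi):
    """True when every UCS vnic index shares its MAC with the same vmnic index."""
    return all(esxi.get(i) == mac for i, mac in ucs.items())

print(mapping_matches(ucs_vnics, esxi_vmnics))
```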
In the end I am out a TON of time, and my confidence level is extremely low. Talking to Broadcom wasn't helpful at all; they didn't seem to have any clue what I was talking about when I tried to show them what I was seeing, despite showing them their own article about this issue and the Cisco bug article.
11-19-2025 10:59 AM
Ah, one other thing I will note after looking closely following your post: all the vmhba connections renamed themselves in VMware too. Storage still works fine and appears not to be an issue. As you stated, the WWPNs are the source of truth, and the A/B fabric mappings should ultimately function the same. I'm documenting it all the same, so we have it in case we need to reference it in the future.
It is said that if you are on ESXi 8 and still running legacy boot, Broadcom/VMware can refuse to support you and demand you convert to UEFI first. That is why we have gone down this path.
That said, their support hasn't been great at all lately. I have a case open with them on this, showing them their own article about the issue and looking for guidance or answers, and they had no idea. They didn't even seem to understand what was being said, from what I could tell. My confidence in them is not very high at the moment.
I have one more environment to do this on. Same blade models, so it should go quicker, provided everything remaps the same way. I might just decide to say the hell with it and upgrade to 8 on legacy. In my experience, if you do that and then later convert to UEFI, the mapping issue is exactly the same; the host will still boot after upgrading to 8 on legacy and then switching to UEFI afterwards. I am not sure why they recommend converting to UEFI while still on 7 if it makes no difference. They couldn't explain that either.
It's been a fun few weeks trying to figure this out and explain it to vendors and everyone else around me.
What a nightmare.
11-19-2025 02:15 PM
Also, have you seen Broadcom article kb 313378, which says hosts running legacy boot may randomly fail to boot with an out-of-memory error at startup?
That seems not great. It's what got me to start converting to UEFI, along with the "best effort / not supported" language.
I don't know what to believe. I'd love to switch over to UEFI and have it just work like it should, but that doesn't seem to be an option no matter what I do here.
TAC is suggesting putting a BIOS policy in place with a CDN Control policy enabled, but I just don't think CDN has anything to do with the way UEFI scans and enumerates the PCI bus compared to legacy. It's hard to find any data on this, but simple searches suggest that a CDN Control policy doesn't affect the way UEFI scans or names things as far as the guest OS is concerned; in other words, same problem.