06-06-2013 05:20 AM
Hey everyone,
I'm performing an upgrade soon to 4.2(1)SV1(4a), and I noticed the documentation says to upgrade the VEMs before the VSM, which is exactly the opposite of what I've done in the past.
I wanted to perform this via VUM, and the documentation indicates that "vmware vem upgrade proceed" starts the VUM update process after I push the VIBs and accept the upgrade. My question is this:
If for some reason a host doesn't reattach properly and we start running into issues, is there a way to cancel this process?
Forgive me if my VMware-fu is a little weak...
06-06-2013 05:26 AM
Yes, the order did change between earlier versions of the 1000v. It used to be VSM first; then it changed to VEM first. The order as of 1.4 is the order it will remain. There were technical reasons for this, but they're behind us now. The best advice is to follow the upgrade guide exactly for the versions you're upgrading from and to.
If VUM runs into any problems during the process, it will stop at the problem VEM upgrade. It will not skip over it and continue.
Yes you can cancel the process at this stage from the VSM, fix the problem, and re-start it.
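From the VSM side you can keep an eye on it with something like this (a rough sketch - the exact output and the cancel step are spelled out in the upgrade guide for your target release):

n1000v# show vmware vem upgrade status    <- shows whether the VEM upgrade is pending, in progress, or complete
n1000v# vmware vem upgrade proceed        <- kicks the VUM-driven upgrade off again once you've fixed the problem host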
You don't have to use VUM. You can optionally do the VEM upgrade manually as well; the procedure is in the same guide. Many of us prefer the manual method so we can predictably upgrade certain VEMs in a particular order, whereas VUM selects the order itself and doesn't stop until it's complete. Doing it manually also lets me migrate VMs to specific hosts, while VUM chooses the target hosts on its own.
For fewer than 10 VEM hosts the manual method is painless: simply copy the VIB to a shared datastore and update each host from esxcli or the remote CLI. For more than 10 hosts, the manual method can get tedious.
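For the manual route on an ESXi 5.x host it's roughly one command per host (a sketch - the datastore path and VIB filename are placeholders for whatever you download for your target release; on classic ESX 4.1 you'd use esxupdate instead of esxcli):

~ # esxcli software vib update -v /vmfs/volumes/shared_datastore/cross_cisco-vem-<version>.vib
~ # vem status -v          <- confirm the VEM module loaded and the DPA agent is running
~ # vemcmd show version    <- confirm the host is on the new VEM build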
Regards,
Robert
06-06-2013 05:33 AM
Hey Robert,
Thanks, that helps. Having not used VUM before, I was concerned about a scenario where I'd slowly watch our VEMs break with no way out of it.
We have more than 10, although last time we didn't, and used the manual method successfully...
The VUM upgrade looks pretty straightforward, so we'll give it a try and see how it works for us.
Do you happen to know if the proxy issue still exists with VUM, where, if it's configured to download from the internet, it has trouble talking to the VSM locally?
Thanks again... appreciate it!
06-06-2013 05:40 AM
No, the proxy issue shouldn't be a problem. As long as VUM can pull updates from the net successfully and vCenter can communicate with the VSM, you should be good to go.
Many times the VUM upgrade fails for reasons not related to the 1000v, e.g. not enough DRS capacity to put one host at a time into maintenance mode, VMs living on local VMFS datastores, etc. As long as VUM can migrate the VMs off, put the host into maintenance mode, and pull the correct VIBs, there shouldn't be any problems.
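After VUM finishes with each host it's worth a quick sanity check that the VEM re-attached (hedged - the exact output varies a bit by release):

n1000v# show module        <- the upgraded host should re-insert as a VEM module on the VSM
~ # vemcmd show card       <- on the host; shows the card and domain info the VEM is using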
Robert
06-06-2013 05:42 AM
Awesome... we should be good to go then.
Thanks for the advice.
06-07-2013 06:37 AM
We got through the upgrade... though we ran into a little hiccup with the following VMware KB:
We're on 4.1, and our SC NICs are on the VEMs. We just had to restart the vCenter service and run "vmware vem upgrade proceed" again to get it going after each hiccup.
Robert,
I was reading the SV2(1.1a) upgrade doc, and I saw this:
Note: For the ESXi 5.1 release (799733), the minimum versions are as follows: [...]
For the ESXi 5.0.0 release, the minimum versions are as follows: [...]
If you plan to do a combined upgrade of ESX and VEM, the minimum vCenter Server/VUM version required is 623373/639867.
For the ESX/ESXi 4.1.0 release and later, the minimum versions are as follows: [...]
This procedure is different from the upgrade to Release 4.2(1)SV1(4). In this procedure, you upgrade the VSMs first by using the install all command and then you upgrade the VEMs.
Is that special just for this version, or did the VSM go back to being first again?
06-10-2013 11:39 AM
Right, it's flip-flopped. Upgrading the VSMs first is the way it will be going forward. This more closely resembles what you do on a Nexus switch: we upgrade one supervisor, switch over, then upgrade the other supervisor, so it's more like an ISSU upgrade.
The VEM modules as of 2.1 are also more tolerant of the upgrades. With 2.1 the VEM modules and VSMs can be at different code revs and you can still make changes to the running config of the VSM.
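For anyone reading along later, the VSM half of the 2.1 procedure is driven by install all against the new kickstart and system images, roughly like this (a sketch - the filenames are placeholders for whatever you copy to bootflash, and the guide's pre-checks still apply):

n1000v# copy scp://user@server/nexus-1000v-kickstart.4.2.1.SV2.1.1a.bin bootflash:
n1000v# copy scp://user@server/nexus-1000v.4.2.1.SV2.1.1a.bin bootflash:
n1000v# install all kickstart bootflash:nexus-1000v-kickstart.4.2.1.SV2.1.1a.bin system bootflash:nexus-1000v.4.2.1.SV2.1.1a.bin

That one command upgrades the standby VSM, switches over, and then upgrades the former active, which is why it feels like an ISSU.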
louis
06-10-2013 11:49 AM
Very cool... thanks for the tip.
We've upgraded all the way to 4.2(1)SV2(1.1a) without any real problems - we had a PSOD on one host an hour after its VEM was upgraded, but I'm not sure whether it was related or caused by something else.
I also noticed that the hosts required a reboot after the VEM upgrade, as they would not establish connectivity otherwise. I don't have the exact error handy, but it was some sort of initialization error (I can most likely get it). Is this something common/obvious you've seen in these later versions? The 4.0(4)SV1(3c) -> 4.2(1)SV1(4a) upgrade didn't have the problem at all.
Edit: The reboot was required with both the VUM-initiated and manual processes.
Message was edited by: Ryan Lambert
06-10-2013 12:05 PM
You should not have needed to reboot the ESXi hosts. If you find the error message, please forward it on and I'll do a search to see if it's a known issue.
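If you can, grab something like the following from the affected host before you reboot it - that's usually enough to search on (a sketch; the log path assumes a 5.x host):

~ # vem status -v                         <- module load state and whether the DPA agent is running
~ # vemcmd show version
~ # tail -n 100 /var/log/vmkernel.log     <- look for VEM/DPA initialization errors around the upgrade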
BTW which license version are you using?
louis
06-10-2013 12:10 PM
It's the LAN services licensing.
I was a little confused about needing the reboot; I've never seen that before...
Luckily, we still have one host left to upgrade in the cluster, so I'll grab the error message off of it. We're able to reproduce this 100% of the time, so it should be easy.
06-10-2013 01:24 PM
Hey Louis,
So we noticed it wouldn't reconnect after the VEM upgrade, and we tried a vem start. Here's what we saw:
~ # vem start
startDpa
Error initializing sf_dpa_api_init()
Error initializing sf_dpa_api_init()
Error initializing sf_dpa_api_init()
Error initializing sf_dpa_api_init()
Error initializing sf_dpa_api_init()
Error initializing sf_dpa_api_init()
Error initializing sf_dpa_api_init()
Error initializing sf_dpa_api_init()
Error initializing sf_dpa_api_init()
Error initializing sf_dpa_api_init()
Error initializing sf_dpa_api_init()
CPU resource file is already there - old reservation 50
Warning: DPA not running
MEM resource file is already there - old reservation 512
Reboot brought it right up.