Upgrade Nexus to v1.2 without downtime

thorstenn
Level 4

Hi,

Is it possible to update the VSM/VEM without downtime of the virtual machines?

Which steps are needed?

Thanks and best regards

6 Replies

Robert Burns
Cisco Employee

Good question.  The answer is yes, but for any type of upgrade, planning an appropriate maintenance window is best when possible - how many times has Murphy's Law shown itself?  For this you'll need to do it manually, though.  Using VUM results in a non-stop sequential upgrade of all hosts, which doesn't allow for migration of running VMs.  This is why the upgrade guide advises a "maintenance window" during which the VM traffic can be interrupted.

If you do it manually, you can avoid downtime by following this procedure.

1.  Upgrade your VSM(s) with new system and kickstart images.

2.  One by one, manually install the new VEM software on each host from the CLI using "esxupdate".  As you do each host, you can put it into maintenance mode, migrate your VMs to another cluster member, and proceed without downtime.

3.  After all VEM hosts have been upgraded, raise the "feature level" of the VSM.

I recommend this to people who are upgrading less than 10 hosts.  Any more than that and it could get tedious.  At least with the manual upgrade if you have an issue upgrading one host, you can stop and diagnose the problem immediately.
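To make the sequence above concrete, a rough sketch of the commands involved follows.  All image and bundle filenames here are illustrative placeholders, and the exact syntax can vary by release, so check it against the upgrade guide for your version:

```
! 1. On the VSM: copy the new images and set the boot variables (filenames illustrative)
switch# copy scp://user@server/nexus-1000v-kickstart-mz.4.0.4.SV1.2.bin bootflash:
switch# copy scp://user@server/nexus-1000v-mz.4.0.4.SV1.2.bin bootflash:
switch(config)# boot kickstart bootflash:nexus-1000v-kickstart-mz.4.0.4.SV1.2.bin
switch(config)# boot system bootflash:nexus-1000v-mz.4.0.4.SV1.2.bin
switch# copy running-config startup-config
switch# reload

# 2. On each ESX host, after migrating VMs off and entering maintenance mode
esxupdate --bundle cross_cisco-vem-<version>.zip update

! 3. Back on the VSM, once every VEM host is upgraded
switch# system update vem feature level
```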

Regards,

Robert

That sounds good, but I have some questions:

roberbur wrote:


1.  Upgrade your VSM(s) with new system and kickstart images.

If I upgrade the Nexus with the new images and then reload the VSM: once the VSM is back up, will the system work correctly at that point with the old config and the not-yet-updated VEMs where the production VMs are running?

Is the next step then to update the VEMs manually from the CLI, with the hosts in maintenance mode?

roberbur wrote:


3.  After all VMs have been upgraded, raise the "feature level" of the VSM.

You mean "After all VEMs have been updated", right?

Could you confirm that these steps are right for upgrading an ESX cluster with two ESX hosts:

1. Snapshot the VSM

2. Put one ESX host into maintenance mode, moving all VMs to the other host

3. Upgrade the Nexus with the new system and kickstart images

4. Manually install the new VEM software on the ESX host in maintenance mode

5. Exit maintenance mode on that ESX host

6. Move the VMs to the updated host

7. Enter maintenance mode on the other ESX host

8. Manually install the VEM software from the CLI

9. Exit maintenance mode and move some VMs onto it

10. Raise the "feature level" of the VSM

11. Everything fine.

But if something goes wrong and I need to revert to a snapshot of the VSM, what about the already-updated VEMs? Will they work correctly with the "old" version of the Nexus?

@jasonfmic

Where is a list of known bugs for v1.2?

@robert

if I build a test environment, I can't use the license from the production environment, correct?

Thanks and best regards

Thorsten

thorstenn,

Once you upgrade the VSM and it reloads, yes, all existing VEM hosts will connect fine and operate normally.  The feature level following your VSM reboot will remain at 4.0.4.SV1.1 until you upgrade all remaining VEM hosts and then set the new feature level explicitly.


Yes.  The next step is to manually upgrade each VEM host from the CLI.  You'll need to scp/ftp/tftp the new VEM software to your host ahead of time.  To verify which .VIB file you need, find your vSphere host's build # in vCenter (Summary tab) and then open "http://&lt;VSM IP&gt;", which will display the new VEM software for each build of vSphere.  You'd do this AFTER you've upgraded the VSM.  If you can't find your appropriate build/.vib version there, you'll need to grab it from VMware's website.  At the time of publishing a VSM release we package all "current" .vib files matching each VMware release & patch (e.g. Express Patch 1, Update 1, etc.).  Note: the .vib file will not install unless it's an exact match for that build of ESX.

Consult this page for more info:

http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_0/compatibility/information/n1000v_compatibility.pdf
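As an illustrative walk-through of that check (the build number and .vib filename below are placeholders, not real values):

```
# In vCenter: select the host > Summary tab > note the build number, e.g. 208167.
# Browse http://<VSM IP>/ and pick the .vib listed for that exact build.
# Then stage it on the host ahead of the upgrade, e.g.:
scp cross_cisco-vem-v100-<matching-build>.vib root@<esx-host>:/tmp/
```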

roberbur wrote:


3.  After all VMs have been upgraded, raise the "feature level" of the VSM.

You mean "After all VEMs have been updated" right?

**YES.  I've corrected that typo in the post.

As for your upgrade steps, I'd add a step between steps 3 & 4 and between steps 7 & 8.

Before manually installing the new software, I've had the best results when first uninstalling the existing VEM software.

"vem-remove -d"

If that is successful, issuing a "vem status" command afterwards should fail, stating that the command doesn't exist.  Then proceed with installing the new bits.

If you need to rollback, you'll need to uninstall the VEM software, and replace the previous version manually.  Even with VUM there is no auto-roll back.
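Put together, the per-host uninstall/reinstall sequence (and a manual rollback, if it comes to that) looks roughly like this on the ESX host; the .vib filenames are placeholders:

```
# Remove the existing VEM software
vem-remove -d

# Verify removal - this should now fail ("command not found" or similar)
vem status

# Install the new bits (filename illustrative)
esxupdate -b /tmp/cross_cisco-vem-<new-version>.vib update

# Manual rollback follows the same pattern with the previous version:
#   vem-remove -d
#   esxupdate -b /tmp/cross_cisco-vem-<old-version>.vib update
```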


@jasonfmic

Where is a list of known bugs for v1.2?

A list of known bugs for 4.0.4.SV1.2 can be found in the release notes.  http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_0_4_s_v_1_2/release/notes/n1000v_rn.html#wp42881

"@robert

if i build a test environment, i can't use the license from the production environment?"

Every VSM at version 4.0.4.SV1.2 and later is bundled with a 60-day demo license.  This should be enough time for your testing.  If you need longer, you will need to license it or deploy a new VSM and transfer your config over to it.

Regards,

Robert

jasonfmic
Level 1
Level 1

Is this for a production environment?  If so, in addition to Robert's advice, I would recommend taking a snapshot of the VSMs prior to upgrading them.  Downgrading is not supported, so you will need to restore the VSMs if any of the bugs in 4.0(4)SV1(2) are unacceptable in your environment.  I would also recommend taking a close look at the list of known bugs.  Many have acceptable work-arounds that you should get in your config before upgrading if possible.

Robert,

Here's a related question.  I am working on developing a procedure for updating our production environment.  Can you tell me, is it acceptable to upgrade the VSMs and run them for an extended period of time without upgrading the VEMs?  I am looking at a procedure where, after upgrading the VSMs, we would upgrade the VEM on one host, move some VMs back to that host, and do verification before continuing the upgrade process.  Would such a procedure be TAC supported?

Jason

There's nothing stopping you from upgrading your VSMs and some (not all) of your VEM hosts, but I wouldn't operate like this for long.  This configuration hasn't been intensively tested, as the intention is for people to upgrade all VEM hosts in and around the same time, not to operate in a hybrid state for any lengthy period.  I've seen VEMs function at both versions fine for a couple of hours, but as mentioned, there's no "guarantee" how long it will work before potential issues arise.  Most customers build out a separate lab for testing vs. production.  If you did want to try this, we would assist on a best-effort basis of course, but understand that Cisco TAC can't "officially" support it.

Remember that until all VEMs are running the updated software, you can't raise the VEM feature level.  Activating the new feature level just enables some new commands that aren't available on SV1.1.
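On the VSM, checking and then raising the feature level is along these lines (the exact syntax may differ slightly between releases, so verify it against your command reference):

```
switch# show system vem feature level
switch# system update vem feature level
```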

Hope this helps.

Regards,

Robert

We cannot completely replicate production in our test environments.  One weakness is that we do not typically see the volume of traffic that we do in production.  We test as thoroughly as possible, but occasionally we may not trigger a bug until it is deployed in production.  For this reason we require some supported method of gradually rolling out a new release.

I supposed I could have a dedicated QA environment.  I was trying to avoid this because we are typically migrating hosts entirely to 1000v because we are limited to 4 NICs per host and do not want to dedicate a NIC to a VSwitch.  In order to have a useful QA environment, I believe I will need at least one host in QA and one in production each with at least on NIC on a VSwitch to facilitate vmotion of VMs from one environment to the other.