Nexus 1000v upgrade - 4.2(1)SV1(4) to 4.2(1)SV2(1.1)

sean.demura
Level 1

Hello,

I am planning on upgrading our Nexus 1000v from 4.2(1)SV1(4) to 4.2(1)SV2(1.1).

As I understand it, I have to use the kickstart and system image files to do this upgrade. I have run the 'show install all impact' command, and all tests pass.
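
For reference, the syntax I used was along these lines (the filenames below are placeholders; substitute the actual kickstart and system images copied to bootflash):

show install all impact kickstart bootflash:nexus-1000v-kickstart-4.2.1.SV2.1.1.bin system bootflash:nexus-1000v-4.2.1.SV2.1.1.bin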

I have a question regarding the VEM upgrade though.

Our 1000v spans across two blade chassis. One is a Cisco UCS, the other is a Dell chassis. They sit on different Layer 3 networks, so DRS cannot migrate VMs between them.

When I initiate the VEM upgrade from VUM, what algorithm/process is used to determine which host gets upgraded first?

What happens if one of the hosts fails to migrate all of its VMs off before going into maintenance mode?

Does the 1000v (i.e., the VSMs) need to sit on a VMware vSwitch to do the VUM upgrade? Can this even be done?

Once a host is in maintenance mode, how long does it take each host to upgrade its VEM?

Can the newer version of the VSMs operate with an older VEM version, or do they all need to be upgraded at the same time?

We are using ESX 4.1 and VMware 4.1 Update 2 and 3. Are there any issues going from our current version to the latest version? As I understand it, I need to go to the SV2 release in order to get ESX 5.1 support, which we plan on migrating to after we have successfully completed the 1000v upgrade.

What other things should I be aware of to ensure we do not run into any issues? Are there firmware versions on the Cisco UCS or our Dell chassis that need to be in place before upgrading our 1000v?

Thank you.

1 Reply

mwronkow
Cisco Employee

VUM steps through the hosts, but there is no hard rule to the order. We are working on an enhancement for the next major release.

All VMs must be migrated off the host before it enters maintenance mode. ESX will not enter maintenance mode until this process completes, so you do not need to worry about stranding a VM. You do need sufficient resources in each cluster so that one host can be in maintenance mode. If your environment can tolerate one host in maintenance mode per cluster, that should work.

The VSMs can sit anywhere - N1k DVS, vSwitch, or VMware DVS.

The VEM upgrade takes 30-45 seconds per host. vMotion/maintenance mode is the time-consuming part of the process, which can take anywhere from a few minutes to 30 or more depending on your network, VM RAM, etc.

SV2 is the first release to support prolonged upgrade cycles where the VEMs can be a rev or two behind. However, the long-term plan should be to upgrade the VEMs to match the VSM.
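
One quick way to keep track of that during a staged upgrade is to check the software version per module from the VSM; the Sw column shows which VEM version each host is currently running:

show module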

ESX 4.1 is the oldest release supported for SV2, and SV2 is the first release to support ESX 5.1. There is a combined ESX/N1k upgrade guide you should reference when it's time to upgrade the host OS.
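
You can also confirm the module on a given host directly from the ESX console; assuming the vem utility installed with the VEM bundle is present, something like:

vem version
vem status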

We do not have a UCS/Dell firmware -> N1k compatibility matrix. All UCS versions are supported. For Dell, we highly suggest using the latest drivers and firmware, as we've observed many issues with 802.1Q/LACP on older third-party software.

If you are using LACP from the hosts, do not upgrade to SV2 - CSCud39246. A fix is coming soon.
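
If you are not sure whether your uplinks use LACP, check the uplink port-profiles for 'channel-group auto mode active' and look at the Protocol column in the port-channel summary, e.g.:

show running-config port-profile
show port-channel summary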

Matthew
