Upgrade software for Catalyst 4500-X in VSS from non-crypto to crypto image

supportgns
Level 1

Hi everyone,

 

We're going to perform an upgrade this Sunday at midnight on two Catalyst 4500-X switches in order to move to a more recent and stable version, but most importantly to install a crypto image, as the current image is not a "k9" image.

 

However, these 4500-X switches are in a VSS configuration, and they are also the core switches for our customer (at least 25 user SVIs), a 24x7 enterprise, hence the idea of a maintenance window at midnight.

 

We currently run version 3.6.4 (non-crypto) and plan to move to 3.11.0 (crypto). We have ruled out ISSU, since we're aware it is not supported when moving from a non-k9 to a k9 image. Instead, we're going to reload both switches at the same time. We've read lots of threads about upgrading a VSS and understand there will be an outage; the maintenance window was planned with that outage in mind. So I have these questions:

 

Can we perform that upgrade directly, or do we need an intermediate step?

What's the best procedure to minimize the impact and keep the downtime as short as possible?

 

Thanks, I'd appreciate your help.

 

1 Accepted Solution


Hi.

Finally, my customer and I decided to upgrade both switches at the same time. The process was done with a Cisco TAC engineer supervising and helping us along the way. These were the steps:

 

1. Copy the new IOS image to both bootflash: and slavebootflash: in the VSS (done 3 days before).

2. Save the backup, internally (on the switch) and externally (on a computer).

3. Check the boot variable and the configuration register (already 0x2102).

4. Delete the current boot variable (3.6.4) and replace it with one pointing to the new IOS (3.11.0 crypto); see the command sketch after this list.

5. Save the configuration (we made another backup).

6. Reboot the whole VSS with the command "redundancy reload shelf".
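
For reference, here is a rough command sketch of steps 3 to 6; the image filename below is only an example and not necessarily the exact file we loaded:

  show bootvar                            ! confirm the boot variable and config-register 0x2102
  configure terminal
   no boot system                         ! remove the old 3.6.4 boot entry
   boot system flash bootflash:cat4500e-universalk9.SPA.03.11.00.E.bin   ! example filename for the 3.11.0 crypto image
   end
  copy running-config startup-config      ! save the config (we also took another backup here)
  redundancy reload shelf                 ! reloads both VSS members at the same time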

 

This worked successfully; we had 14 minutes of downtime, as planned, until network connectivity (all core VLANs) came back. However, we had to wait another 1h 30m for the business's services and applications to come back online, but that was internal to our customer.

So far we have had no issues with the image, and we were able to configure the SSH access we needed so badly on our VSS.
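
And for anyone facing the same need, this is roughly the SSH configuration the k9 image finally let us apply; the hostname, domain name and key size below are only examples:

  configure terminal
   hostname CORE-VSS                      ! example hostname
   ip domain-name example.local           ! example domain, required before generating the RSA key
   crypto key generate rsa modulus 2048
   ip ssh version 2
   line vty 0 15
    transport input ssh
    login local
   end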

 

Anyway, thanks for helping!


6 Replies

balaji.bandi
Hall of Fame

I believe you can upgrade directly to the target version, but you need to upgrade the ROMMON first, to ROMMON version 15.0(1r)SG11.
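
A quick way to confirm which ROMMON the chassis is already running before deciding whether it needs upgrading (just the usual show commands):

  show version | include ROM
  show module                             ! the Fw column lists the ROMMON version per module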

 

Read the release notes carefully before upgrading.

 

https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst4500/release/note/ol-311xe-4500x.html

 

1. Take the configuration out of the box (a backup sketch follows this list).

2. Prepare a backout plan so you can recover quickly from any disaster situation.

3. Upgrade the ROMMON.

4. Upgrade to the target version during the maintenance window.
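
A minimal sketch of step 1 and the backout preparation; the TFTP server address and filenames here are placeholders only:

  copy running-config tftp://192.0.2.10/core-vss-pre-upgrade.cfg     ! off-box copy of the configuration
  copy running-config bootflash:pre-upgrade.cfg                      ! keep an on-box copy as well
  dir bootflash:                                                     ! confirm the current image stays available as a fallback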

 

BB

***** Rate All Helpful Responses *****

How to Ask The Cisco Community for Help

Hi Balaji,

Thanks for your reply. I've just checked the core switches, and with a "sh version" I confirmed we already have ROMMON 15.0(1r)SG11, so there should be no problem migrating to that version.

I understand that several steps have to be taken before the maintenance window, so, based on what we've read in other posts, we have designed this work plan:

1. Take the configuration out of the box and make a backup (as you said; this was done yesterday).

2. Copy the new IOS image (3.11.0 crypto) to both switches (active and standby, i.e. bootflash: and slavebootflash:); see the copy/verify sketch after this list.

3. Rename the current IOS image (3.6.4 non-crypto) so that it can't be picked up as a boot image (to avoid modifying the config-register).

4. Set the boot variable to the new IOS image.

5. Reload both VSS members during the maintenance window (about 15-20 min of downtime).
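
For steps 2 and 3, this is roughly the command sequence we have in mind; the filenames and TFTP server below are placeholders, not the exact names on our switches:

  copy tftp://192.0.2.10/cat4500e-universalk9.SPA.03.11.00.E.bin bootflash:
  copy bootflash:cat4500e-universalk9.SPA.03.11.00.E.bin slavebootflash:
  verify /md5 bootflash:cat4500e-universalk9.SPA.03.11.00.E.bin        ! compare against the hash published by Cisco
  verify /md5 slavebootflash:cat4500e-universalk9.SPA.03.11.00.E.bin
  rename bootflash:cat4500e-universal.SPA.03.06.04.E.bin bootflash:cat4500e-universal.SPA.03.06.04.E.bin.old   ! old filename is an example; copying to a new name and deleting the original also works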

Is there anything else I should consider, or are the steps above OK?

Also, capture all of this information pre- and post-upgrade:

 

  • show inventory
  • show version
  • show environment
  • dir bootflash:
  • dir slavebootflash:
  • show bootvar
  • show switch virtual
  • show redundancy
  • show switch virtual slot-map
  • show module
  • show log

 

1. Make sure the config-register is 0x2102.

2. After you copy the image, verify it with an MD5 checksum.

3. After your step 4, write the config, then:

  • redundancy reload peer

4. Once that switch comes back online, check that everything is working as expected.

5. Reload the master unit (a sketch of this staged sequence follows below):

  • redundancy force-switchover
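
A minimal sketch of that staged, one-chassis-at-a-time sequence, assuming the boot variable already points to the new image on both bootflash: and slavebootflash::

  redundancy reload peer          ! reloads only the standby chassis, which boots the new image
  show redundancy                 ! wait for the peer; with mismatched images it will typically come back in RPR rather than SSO
  redundancy force-switchover     ! the upgraded chassis takes over, and the old active then reloads onto the new image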

 

Note: Always connect a console cable and capture all the boot logs; if anything goes wrong, you have evidence to review and fix.

 

 

 

BB

***** Rate All Helpful Responses *****

How to Ask The Cisco Community for Help

Ok, according to what you mentioned, this is what I have:

 

1. Make sure the config-register is 0x2102.

I checked it with "show bootvar" and indeed, it's already set to 0x2102.

 

2. After you copy the image, verify it with an MD5 checksum.

Yesterday, together with the customer, we ran MD5 checksums after copying the new IOS and everything matched.

 

3. After your step 4, write the config.

We did that yesterday too, and prior to the maintenance window we're going to save the configs again.

 

  • redundancy reload peer

4. Once that switch comes back online, check that everything is working as expected.

5. Reload the master unit:

  • redundancy force-switchover

I have some doubts about these commands. What we planned to do is reload both switches at once (15-20 min of downtime), and I understand the command "redundancy reload shelf" performs that complete reboot. I also understand that with the commands "redundancy reload peer" and "redundancy force-switchover" one switch is reloaded at a time (slave first, then master); I'm wary of doing this, as I gather it implies a version mismatch once the standby switch reloads, so the VSS will be degraded until the primary switch is also upgraded.

 

Is my guess correct? Please explain a bit more about this reboot procedure (redundancy reload peer) and which procedure is less risky.

 

Thanks again. 

Yes, you can do either, depending on the requirement. I suggested the method that works best when devices are dual-connected to the VSS: they stay up and keep forwarding traffic while the secondary reboots, and then you reboot the primary.

 

It's not necessary to follow that in your case; if you have a full-downtime maintenance window, you can do whatever suits you best.

 

Make sure to connect a console cable to both devices and capture the logs.

I hope you have the right licenses as well.
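
If I remember correctly, the license level appears in the show version output on the 4500-X, so a quick pre-check could be:

  show version | include License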

 

BB

***** Rate All Helpful Responses *****

How to Ask The Cisco Community for Help
