welkin
Cisco Employee

An ACI upgrade involves both an APIC software update and switch updates. Here are a few pre-checks we usually recommend customers complete before an upgrade starts.

Upgrade Best Practices:

1. Confirm the supported upgrade path

APIC upgrades involve database conversion and synchronization across APIC nodes. Such operations may fail when APICs are upgraded along an unsupported path, which may cause loss of configuration. Communication between APICs and switches, or between switches, may also fail if a supported upgrade path was not followed.

Please always check the supported upgrade path in the ACI Upgrade/Downgrade Support Matrix.

2. Review behavior changes in the target version

Before you start the upgrade, please review the Release Notes for your target ACI version and understand any behavioral changes that are applicable to your fabric configuration to avoid any unexpected results after the upgrade.

One example is L3Out Route Control Enforcement (Import). The support for this feature on OSPF was added starting from APIC release 2.0(1). Prior to this release, the option was ignored for OSPF as an unsupported configuration. If the option is enabled for OSPF when you upgrade your fabric from 1.x to 2.0 or later, Import Route Control Enforcement starts taking effect after the upgrade and there may be an outage due to all OSPF routes being filtered out with the import route control.

3. Clear all faults

Faults in an ACI fabric indicate invalid or conflicting policies, disconnected interfaces, and so on. Please understand their triggers and clear them before starting an upgrade. Be aware that conflicting policies may result in an unexpected outage, because ACI switches fetch all policies from the APICs from scratch after an upgrade, and this behaves in a "first come, first served" manner. As a result, unexpected policies may take over the expected ones. Examples of faults for conflicting policies are F0467 "Encap is already in use", "Interface Configured as L2", and so on.

Some specific faults that are known to cause an outage or upgrade failure are also checked by the pre-upgrade validations listed in the upgrade guide. Note that it is still highly recommended to clear all faults, even those not listed in the guide.
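As a quick sketch of how to enumerate faults from the APIC CLI, you can use moquery on the faultInst class; the F0467 filter below is just an example (verify the property-filter syntax on your release):

apic1# moquery -c faultInst | egrep 'code|severity|descr'
apic1# moquery -c faultInst -f 'fault.Inst.code=="F0467"'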

4. Check NTP status

Confirm that the time is synchronized across all nodes, especially the APICs, to avoid known issues caused by APIC time mismatch. More details on this can be found in the troubleshooting section of this article.

For switches, this is not as critical as for the APICs. However, it is still a best practice to synchronize time across all nodes in the fabric.
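As a rough sketch of the check (assuming ntpstat is available in the APIC's underlying OS, and using leaf101 as an example node), you can verify synchronization like this:

apic1# bash
admin@apic1:~> ntpstat
leaf101# show ntp peer-status

ntpstat should report "synchronised", and the switch output should show at least one peer selected for synchronization.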

5. Confirm APIC cluster is fully-fit

APIC upgrades involve database conversion and synchronization across APIC nodes. The APIC cluster status on all APICs must be fully-fit to perform such operations. If the cluster status is not fully-fit, contact TAC and resolve it before the APIC upgrade.
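One way to check this from the CLI (infraWiNode is the standard class representing cluster members; double-check the attribute names on your release):

apic1# moquery -c infraWiNode | egrep 'dn|health'

Every cluster member should report a health of "fully-fit".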

Switches fetch configurations from APICs after an upgrade. When the database synchronization between APICs has an issue, this operation may be affected as well.

Note: Please do not initialize an APIC when the cluster status is not fully-fit. Such an operation may cause the database information to be lost forever.

6. Backup the configuration to an external server

Make sure to export a configuration backup to a remote server before you start the upgrade. This exported backup file can be used to restore the configuration on the APICs in case the APICs lose their configuration or the data is corrupted after the upgrade.

When exporting your configuration backup, make sure that global AES encryption is enabled so that passwords can be included in the backup. Without AES encryption, no passwords, including the admin password, are exported in the backup at all. Importing such a backup results in the need for password recovery of the admin user via USB and reconfiguration of any other passwords that may be required for integrations such as VMM. Make sure to write down the passphrase you used to enable AES encryption; the passphrase is required when you import the backup.
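For illustration, a one-time export can also be triggered through the API by toggling adminSt on an existing export policy. This is a sketch: "myExport" is a hypothetical policy name that must already exist, and icurl is the local API access available from the APIC bash shell.

apic1# bash
admin@apic1:~> icurl -X POST 'http://localhost:7777/api/mo/uni/fabric/configexp-myExport.json' -d '{"configExportP":{"attributes":{"adminSt":"triggered"}}}'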


7. Prepare for the state comparison before and after the upgrade

Cisco Network Assurance Engine (NAE) fetches the configuration and some operational status of the fabric at one point in time. Each data collection is called an epoch. By collecting epoch data before and after your upgrade, you can use Epoch Delta Analysis to check whether anything changed across the upgrade. NAE Epoch Delta Analysis compares Smart Events and configurations between two epochs.

Another option is an App Center app called StateChangeChecker. Although this is a volunteer-based, best-effort app that may no longer be maintained, it does a comprehensive state comparison between two points in time.


8. Check all pre-upgrade validations

Starting from APIC release 4.2(1), when you select the target version for the APICs (or submit the upgrade group for switches), the APIC performs pre-upgrade validations. Although the validations were simply fault checks in earlier releases such as 4.2(1) through 4.2(4), more detailed validations have been implemented since. Make sure to check and clear any failures reported by the pre-upgrade validations. For APICs running older versions (but 3.2 or newer), you can try the App Center version of the validator.


9. Stage the upgrade in a lab

Cisco recommends trying the upgrade in a lab or test fabric before upgrading the actual production fabric, to familiarize yourself with the upgrade procedure and the behavior of the new version. This also helps you evaluate any issues you could run into after the upgrade.

 

 

Upgrade preparations (known issues/behaviors):

This section covers some known issues related to upgrades. Some of them are specific to switch upgrades but require configuration changes that should be addressed even before the APIC upgrade. Hence, they are covered here instead of in the switch-specific section below.

1. Check overlapping VLAN pools

Overlapping VLAN blocks across different VLAN pools may result in some forwarding issues such as:

  • Packet loss due to issues in endpoint learning
  • Spanning-tree loop due to BPDU forwarding domain

These issues may suddenly appear after upgrading your switches because switches fetch the policies from scratch after an upgrade and may apply the same VLAN ID from a different pool than what was used prior to the upgrade.

Check the following documents to understand how, and in which scenarios, overlapping VLAN pools become an issue.
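To spot overlaps, you can list every VLAN block together with the pool it belongs to (fvnsEncapBlk is the standard class for VLAN encapsulation blocks); any VLAN range that appears under more than one pool deserves a closer look:

apic1# moquery -c fvnsEncapBlk | grep dn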


2. Check Rogue Endpoint feature

Due to an implementation change in ACI switch release 14.1(x), the rogue endpoint feature may not function correctly during an upgrade of switches to or from 14.1(x). Hence, it is recommended to disable the feature temporarily during such an upgrade. Once the upgrade of all switches is done, you can re-enable the feature regardless of the version the switches are running.
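As a sketch (epControlP should be the class behind "System Settings > Endpoint Controls > Rogue EP Control" on recent releases, but verify on your version), you can check whether the feature is currently enabled with:

apic1# moquery -c epControlP | egrep 'dn|adminSt'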

 

3. Check L3Out BGP Peer Connectivity Profile under a node profile without a loopback

BGP Peer Connectivity Profile can be configured per node profile or per interface. The former is to source the BGP session from a loopback while the latter is to source the BGP session from each interface.

Prior to 4.1(2), when a BGP Peer Connectivity Profile is configured at a node profile without configuring a loopback, the APIC uses another available IP on the same border leaf in the same VRF as the BGP source, such as a loopback IP from another L3Out or an IP configured for an interface. This carries the risk of the BGP source IP unintentionally changing across reboots or upgrades. Hence, CSCvm28482 changed this behavior, and ACI no longer establishes a BGP session via a BGP Peer Connectivity Profile at a node profile when a loopback is not configured in the node profile. Instead, fault F3488 is raised.

Due to this change, when upgrading from an older version to 4.1(2) or newer, a BGP session is no longer established if the session is generated via a BGP Peer Connectivity Profile under a node profile and a loopback is not configured in the node profile. Prior to upgrading to 4.1(2) or later, you must ensure that a node profile with a BGP Peer Connectivity Profile has a loopback configured for all switches in the profile, or ensure that BGP Peer Connectivity Profiles are configured per interface.

When configuring BGP Peer Connectivity Profiles per interface with the same peer IP, you need to configure a node profile for each node separately until the restriction is loosened via CSCvw88636.
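To audit this before the upgrade, one approach (a sketch using standard L3Out classes) is to list all BGP Peer Connectivity Profiles and all configured loopbacks:

apic1# moquery -c bgpPeerP | grep dn
apic1# moquery -c l3extLoopBackIfP | grep dn

A bgpPeerP dn directly under a node profile (lnodep-<name>) is node-profile level, while one under an interface profile (lifp-<name>) is interface level; every node-profile-level peer should have a matching loopback on each node in the profile.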

 

4. Check QSFP-40/100-SRBD usage

Prior to the fix of CSCvm26708 (13.2(3m) as of this writing), QSFP-40/100-SRBD may latch onto 40G speed even when the peer is a 100/40G bidirectional compatible platform. This may occur due to a link flap or a reload/upgrade of the leaf or spine switch. When performing an upgrade with QSFP-40/100-SRBD on older releases such as 13.1, check the interface speed after the upgrade. If interfaces unexpectedly latched onto 40G, flap those links so that they come back up at 100G.

If QSFP-40/100-SRBD is used on an -EX line card on a spine, as a side effect of CSCvm26708 you may be susceptible to CSCvf18506, which can cause packet drops on 40G speed links for traffic roughly larger than 4000 bytes.
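A quick way to verify the operational speed after the upgrade (eth1/49 is an example interface; on the NX-OS-style switch CLI the speed appears in the "full-duplex, 100 Gb/s" line):

leaf101# show interface ethernet 1/49 | grep "Gb/s"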

 

APIC-specific upgrade preparations:

1. Confirm CIMC access for all APICs

This is to avoid two risks:

  1. CIMC 1.5(4e) has a memory-leak defect that can prevent the affected APIC (usually APIC2 and above) from kicking off the upgrade, and can also cause process crashes on APIC1 after the upgrade. Consider the CIMC to have reached this bad state if it becomes unreachable via both GUI and SSH. To recover, reset the CIMC by disconnecting the server's power cables, waiting 3 minutes, and reconnecting them. It is also highly recommended to upgrade the CIMC to a recommended version for your APIC before the APIC upgrade.
  2. Without CIMC access, you cannot reach the APIC console remotely if something goes wrong during the APIC upgrade. Securing CIMC access before the APIC upgrade is critical; see the reachability sketch after this list.
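A simple reachability sweep like the following (the CIMC IPs are placeholders; replace them with your own) can be run from any management host before the upgrade window:

#!/bin/bash
# Replace with your actual CIMC management IPs
for ip in 192.0.2.11 192.0.2.12 192.0.2.13; do
  if ping -c 2 -W 2 "$ip" >/dev/null 2>&1; then
    echo "CIMC $ip reachable"
  else
    echo "CIMC $ip UNREACHABLE - investigate before upgrading"
  fi
done

Note that ping alone does not rule out the CIMC 1.5(4e) memory-leak state; also confirm that the CIMC GUI or SSH login actually works.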


2. Remove all switch upgrade groups prior to upgrading APICs to 4.0 or later 

Prior to APIC release 4.0, there were two types of switch upgrade groups: the firmware group and the maintenance group. Starting from APIC release 4.0, these groups were merged to simplify the configuration. Once the APICs are upgraded to 4.0 or later, users only need to configure maintenance groups for switch upgrades. In later releases, this group may simply be referred to as an upgrade group or an update group.

To avoid any unexpected behavior, it is recommended to remove all switch firmware groups and maintenance groups prior to upgrading the APICs from 3.x or older to 4.0 or later. Once the APICs are upgraded to 4.0 or later, create new switch upgrade groups and proceed with the switch upgrades. Even though the groups are for switches, the object model change happens on the APICs; that is why this precautionary step needs to be performed before the APIC upgrade. Once the APICs are on 4.0 or later, there is no need to perform this operation again for any further APIC upgrade.

An example issue is that if the graceful option is enabled in switch maintenance groups when APICs are upgraded to 4.0 or later from 3.x or older, switches in such maintenance groups will be brought to maintenance mode (the same status as Graceful Insertion and Removal (GIR) ) and stop forwarding traffic. 
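To confirm nothing is left over before the APIC upgrade, you can list both group types (firmwareFwGrp and maintMaintGrp are the classes behind firmware and maintenance groups); both queries should return no objects once the cleanup is done:

apic1# moquery -c firmwareFwGrp | grep name
apic1# moquery -c maintMaintGrp | grep name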

3. Confirm that the APIC process is not locked

A process called Appliance Element (AE), which runs on the APIC, is responsible for triggering the upgrade on that APIC. There is a known bug in the CentOS Intelligent Platform Management Interface (IPMI) stack which can lock the AE process on an APIC. If the AE process is locked, the APIC firmware upgrade will not kick off. The AE process queries the chassis IPMI every 10 seconds, so if it has not done so within the last 10 seconds, the process may be locked.
You can check the AE process log to find the last IPMI query. From the APIC CLI, run the command date to check the current system time, then run grep "ipmi" /var/log/dme/log/svc_ifc_ae.bin.log | tail -5 and check the last time the AE process queried the IPMI. Compare that time against the system time to check whether the last query was within the 10-second window.
If the AE process has failed to query the IPMI within the last 10 seconds, reboot the APIC to recover the AE process before starting the upgrade.
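Putting the steps above together (run from the APIC bash shell; the log path is the one quoted above):

apic1# bash
admin@apic1:~> date
admin@apic1:~> grep "ipmi" /var/log/dme/log/svc_ifc_ae.bin.log | tail -5

Compare the timestamp of the last IPMI query against the current time; a gap larger than roughly 10 seconds suggests the AE process is locked.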

Note: Please do not reboot two or more APICs at the same time, to avoid any cluster issues.

4. Check for a duplicate IP in the APIC OOB network

When another device in the APIC out-of-band (OOB) network is using the same IP address as the APIC, the APIC may fail to bring up its OOB interface as a result of the duplicate IP check via ARP. This may also happen when a firewall using identity NAT is in the APIC OOB network, since such a firewall may answer ARP requests as proxy ARP.

If this happens, the APICs will not be accessible, because the OOB interface remains down even though the upgrade completed successfully.

An enhancement to bypass ARPCHECK on the OOB interface was filed to address this corner case.

CSCvv47374 Add a method for customers to disable duplicate IP detection on APIC management interfaces
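One way to test for this condition before the upgrade (a sketch run from another Linux host on the same OOB subnet; the interface name and IP are placeholders) is to send ARP probes for the APIC's OOB address and see whether anything else answers:

# run from a Linux host on the APIC OOB subnet
arping -c 3 -I eth0 192.0.2.10

If replies come from a MAC address that is not the APIC's, another device (or a proxy-ARP firewall) is claiming the IP.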

 

 

Switch-specific upgrade preparations:

1. Use multiple upgrade groups to maintain redundancy

The best practice is to upgrade the switches in each pod in at least two separate groups, so that half of the leaf and spine nodes in each pod are up at any given time. For example, put the even-numbered leaf and spine nodes of each pod in one group and the odd-numbered ones in another, and upgrade each group separately. Do not upgrade all spine nodes in one pod as the first group and then all leaf nodes as the next group.

Especially when the graceful option is enabled, always make sure that one node of each redundant pair stays up while the other is upgrading. For example, when a spine is upgraded with the graceful option, it shuts all of its IPN connectivity to isolate itself from the traffic flow via maintenance mode, assuming the other spine nodes provide the reachability necessary to complete the upgrade. However, if all spine nodes in a pod are upgraded with the graceful option, all of them lose reachability to the other pods and the APICs in them. This may cause the spines to be stuck in maintenance mode indefinitely.

Although enhancements were added to prevent such scenarios, it is still one of the most important best practices to follow for both leaf and spine upgrades.

Note: If leaf nodes in a vPC pair are placed in the same group, they are still upgraded only one at a time. However, it is recommended to place such leaf nodes in different groups so that you can verify the status of applications and services before proceeding to the next leaf.

Note: Prior to APIC release 4.2(5), the APIC upgraded switches one pod at a time even if the group contained switches from multiple pods. Starting from APIC release 4.2(5), this restriction was lifted and users can upgrade switches from multiple pods in parallel. Still, remember not to upgrade all switches in the same pod at once.

 

2. Create separate groups for BGP Route Reflector Spines

Even when spine nodes in each pod are upgraded in separate groups, it is sometimes overlooked which spine nodes are the route reflectors for the ACI infra MP-BGP. When all route reflector spine nodes in a pod are upgraded at the same time, the pod can no longer distribute L3Out routes from border leaf nodes to other leaf nodes, which results in serious reachability issues. Hence, always check which spine nodes are BGP route reflectors in each pod so that they can be upgraded separately.
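You can list the configured route reflector nodes from the APIC CLI (bgpRRNodePEp should be the class behind the route reflector node policy; verify the class name on your release):

apic1# moquery -c bgpRRNodePEp | grep dn

The node IDs in the returned dn values are your route reflector spines; place them in different upgrade groups.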

3. Check the graceful option for a single spine setup

Due to the self-isolation of a graceful upgrade mentioned above, if a pod has only one spine, you must not use the graceful option to upgrade that spine. Such an upgrade is blocked starting from APIC release 4.1(1).

4. Confirm the switches are not in maintenance mode via manual GIR

You can put switches into maintenance mode to isolate them from the traffic flow by using Graceful Insertion and Removal (GIR). This can be done via manual GIR under "Fabric > Inventory > Fabric Membership" in the GUI, or via the graceful option in the switch update group (i.e., a graceful upgrade).

Even though manual GIR and the graceful upgrade ultimately use the same maintenance mode, it is not supported to upgrade switches that were brought into maintenance mode via manual GIR. When switches need to be isolated before the switch upgrade starts, enable the graceful option when you submit the switch update group.
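As a sketch (fabricRsDecommissionNode is, to the best of my knowledge, the object created when a node is decommissioned or put into maintenance via manual GIR; verify the class and its attributes on your release), you can check for nodes in this state with:

apic1# moquery -c fabricRsDecommissionNode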

5. Pre-download switch images

Starting from APIC release 4.1(1), you can download switch images from the APICs to the switches without triggering the actual upgrade. This is achieved by using the scheduler: instead of "Upgrade Now", select the scheduler in the update group (or the maintenance group), set the target date far in the future (for example, 10 years ahead), and submit the group. This triggers the switch image download from the APICs. Then, during the actual maintenance window, edit the same group, select "Upgrade Now" this time, and submit. This allows the switches to start upgrading immediately, without waiting for the images to be copied during the maintenance window.

Starting from APIC release 5.1, the scheduler option is removed from the GUI for firmware upgrades. Instead, images are always pre-downloaded, and the installation (the actual upgrade) can be triggered separately once the download is done.


 

 

Troubleshooting Upgrade:

If the upgrade fails and troubleshooting is required, always start with APIC1. If APIC1 did not finish its upgrade, do not touch APIC2. If APIC1 is done but APIC2 did not complete, do not touch APIC3. Violating this rule can break the cluster database and force a cluster rebuild.

1. APIC2 or above stuck at 75% even though APIC1 has completed

This problem can happen when APIC1's upgraded version information is not propagated to APIC2 or above. Be aware that svc_ifc_appliance_director is in charge of syncing version information between APICs and storing it in a framework that the upgrade utility (and other processes) can read.

First, make sure APIC1 can ping the rest of the APICs. This determines whether you need to troubleshoot from the leaf switches or can continue on the APICs themselves. If APIC1 cannot ping APIC2, you may want to call TAC to troubleshoot the switches. If APIC1 can ping APIC2, move on to the second step.

Second, since the APICs can talk to each other, APIC1's version info should have been replicated to its peers but was somehow not accepted. The version info is identified by the timestamp that follows it. Run the CLI below to confirm APIC1's version timestamp, both from APIC1 itself and from the APIC2 that is waiting at 75%.

apic1# acidiag avread | grep id=1 | cut -d ' ' -f20-21
version=2.0(2f) lm(t):1(2017-10-25T18:01:04.907+11:00)

apic1# acidiag avread | grep common= | cut -d ' ' -f2
common=2017-10-25T18:01:04.907+11:00

apic2# acidiag avread | grep id=1 | cut -d ' ' -f20-21
version=2.0(1m) lm(t):1(2017-10-25T18:20:04.907+11:00)

 

As shown above on APIC2, the timestamp of APIC1's old version 2.0(1m) is even later than the timestamp of APIC1's new version 2.0(2f). This prevents APIC2 from accepting APIC1's newer version propagation, so the installer on APIC2 thinks that APIC1 has not completed its upgrade yet. Instead of moving to the data-conversion stage, APIC2 keeps waiting for APIC1. There is a workaround, but it must be run from APIC1, and only after APIC1 has successfully completed the upgrade and booted into the new version; never run it from any APIC that is still waiting at 75%, as this would make things much worse. Given the risk, I would suggest calling TAC instead of attempting it yourself.

Comments
Rick1776
Level 5

Awesome write up. I do agree the docs on Cisco's website are a little lacking. This document is concise and I really like the examples.

Rob R.
Level 1

Thanks. 

deloso-dni
Level 4

Thank you for this write-up!

Vlads19718
Level 1

For Multi-Pod deployments: please update the article to also reflect the pod restriction; the APIC will not perform a parallel upgrade of leaf switches that are in different pods if they are inside the same maintenance group.

 

pod1-ifc2# moquery -c maintUpgStatusCont
Total Objects shown: 1

# maint.UpgStatusCont
childAction :
dn : maintupgstatuscont
lcOwn : local
modTs : 2017-12-12T11:24:10.710-08:00
rn : maintupgstatuscont
schedulerOperQualStr : Node: 301, Policy: mt-pod-2, Check constraint: Is any other pod currently upgrading?, Result: fail, Details: Node: 301(pod: 2) cannot be upgraded, as node: 202(otherPod: 1) is upgrading. Rejecting upgrade request, node to retry periodically
schedulerTick : 182657
status :
uid : 0

 

 

It is also a good idea to check disk space on the APICs:

 

apic# df -h

 

and clean up unused data (show techs, old firmware, etc).

 

Regards,

Vladimir

ggeorgas1
Level 1

awesome write up Welkin!

 

 

thanks

george g

welkin
Cisco Employee
Thank you, George. Glad to hear from you…
HAZEM JAD
Level 1

Very useful write up, appreciated ....
