05-25-2025 08:15 PM
Hi Community,
We're currently facing a challenge with upgrading our Cisco ACI Fabric in a production project, and I would greatly appreciate any advice or clarification.
Current APIC version: 5.2
Current Fabric (Leaf/Spine) version: 15.2
Target version: 6.1 (to reduce the number of upgrade hops, since 5.2 is nearing End of Support)
Due to production environment constraints, we have a strict 2-hour maintenance window for this upgrade.
My main question is:
If we upgrade the APICs in advance from 5.2 to 6.1, will the Fabric Leaf/Spine switches require two image downloads (15.2 → intermediate release → 16.1), or can they upgrade directly from 15.2 to the 16.1 NX-OS release that corresponds to APIC 6.1?
Understanding this will help us plan accordingly and ensure the upgrade fits within the limited window.
Thanks in advance for your support!
Best regards,
05-26-2025 03:20 AM - edited 05-26-2025 03:23 AM
Hi,
It really depends on the specific version you’re referring to.
Could you clarify which exact 5.2 release you’re on? For example, if you’re on 5.2(7) or later, you can upgrade directly to 6.1.
I recommend checking the Cisco APIC Upgrade/Downgrade Matrix at the link below to confirm the supported upgrade paths:
https://www.cisco.com/c/en/us/td/docs/Website/datacenter/apicmatrix/index.html
It’s a handy resource that should help you figure this out.
Another useful resource for the upgrade is the ACI Pre-Upgrade Validation Script:
https://github.com/datacenter/ACI-Pre-Upgrade-Validation-Script
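If it helps with that version check, a rough Python sketch like the one below can pull the exact running controller release over the APIC REST API before you consult the matrix. Treat it as a sketch only: the APIC address and credentials are placeholders, and the firmwareCtrlrRunning class name is worth confirming (e.g. in Visore) on your release.

```python
# Rough sketch: read the exact controller release over the APIC REST API.
# APIC address and credentials are placeholders; adjust for your environment.
import requests

APIC = "https://apic.example.com"   # placeholder APIC address
USER, PWD = "admin", "password"     # placeholder credentials

s = requests.Session()
s.verify = False  # quick-test shortcut; use proper certificate validation in production

# Standard aaaLogin endpoint; the session keeps the returned auth cookie.
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": USER, "pwd": PWD}}}).raise_for_status()

# firmwareCtrlrRunning describes the firmware actually running on each controller.
resp = s.get(f"{APIC}/api/node/class/firmwareCtrlrRunning.json")
resp.raise_for_status()
for obj in resp.json().get("imdata", []):
    attrs = obj["firmwareCtrlrRunning"]["attributes"]
    print(attrs["dn"], attrs.get("version"))
```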
Hope this helps.
05-26-2025 03:57 AM
Hi Daniel,
Thank you very much for your reply and for providing the link.
I've rechecked the device versions and the upgrade path. It appears that a direct upgrade from 5.2(7f) to 6.1(x) is supported; however, some of our switch models are not supported on 6.1(x).
Given this, we are now considering a phased approach: first replacing the unsupported devices, then upgrading ACI, and finally performing a group-wide upgrade under this overall framework.
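For reference, this is roughly how we listed each node's model and running version to cross-check against the 6.1(x) hardware support list. It is only a minimal sketch; the APIC address and credentials are placeholders.

```python
# Minimal sketch: list every leaf/spine with its model and running version so the
# hardware can be checked against the 6.1(x) support matrix. Placeholders below.
import requests

APIC = "https://apic.example.com"   # placeholder
USER, PWD = "admin", "password"     # placeholder

s = requests.Session()
s.verify = False  # lab shortcut; validate certificates in production

s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": USER, "pwd": PWD}}}).raise_for_status()

# fabricNode holds every registered node with its role, model, and running version.
nodes = s.get(f"{APIC}/api/node/class/fabricNode.json").json().get("imdata", [])
for n in nodes:
    a = n["fabricNode"]["attributes"]
    if a.get("role") in ("leaf", "spine"):
        print(f'{a["id"]:>4}  {a["role"]:<5}  {a.get("model", ""):<20}  {a.get("version", "")}')
```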
Best regards,
05-26-2025 05:59 AM
Good luck!
05-26-2025 06:12 PM
Sorry, one last question.
We are currently planning to separate the firmware image distribution from the actual upgrade process using the scheduling feature in our ACI environment. This is to minimize potential impact by pre-loading the images to devices ahead of time and performing the upgrade during designated maintenance windows.
We have the following questions regarding this approach:
Is there a maximum number of devices that can be included in a single upgrade group when scheduling a firmware upgrade via APIC?
In one of our POPs, we have 57 switches (a mix of leaf and spine nodes). Would it be supported to include all 57 nodes in a single upgrade group?
Are there any known scalability limitations or best practices related to large upgrade groups, especially regarding APIC performance or device response?
Any clarification or official guidance would be greatly appreciated.
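For completeness, the kind of check we have in mind for confirming that images are pre-staged is sketched below. The maintUpgJob class and its attribute names are my assumption for per-node download/upgrade state and would need to be verified (for example in Visore) on the target release.

```python
# Rough sketch (assumption): poll per-node upgrade job objects to confirm that the
# firmware image has been pre-staged before the maintenance window.
import requests

APIC = "https://apic.example.com"   # placeholder
USER, PWD = "admin", "password"     # placeholder

s = requests.Session()
s.verify = False  # lab shortcut only

s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": USER, "pwd": PWD}}}).raise_for_status()

# Assumed class: maintUpgJob tracks per-node firmware download/upgrade state.
jobs = s.get(f"{APIC}/api/node/class/maintUpgJob.json").json().get("imdata", [])
for j in jobs:
    a = j["maintUpgJob"]["attributes"]
    # Attribute names below are assumptions; .get() keeps the sketch tolerant of
    # differences between releases.
    print(a.get("dn"), a.get("desiredVersion", ""),
          a.get("upgradeStatus", ""), a.get("instlProgPct", ""))
```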
Best regards,
05-26-2025 11:55 PM - edited 05-26-2025 11:55 PM
05-27-2025 12:16 AM
Thank you very much for your reply. Because of the particular constraints around this requirement, we can't complete it the usual way within a short period of time, which is why I framed the question this way.
Here are the constraints:
Including rollback time, the upgrade is limited to just 2 hours, so the typical upgrade approach would result in a very long overall upgrade cycle.
The APIC cluster will be upgraded outside of the maintenance window. Then, during the maintenance window on that day, the OS will be distributed to all devices.
The upgrade will be done by groups based on service groups — for example, upgrading all server-connect Secondary devices within a 2-hour window on a given day.
This approach allows me to significantly reduce the time pressure caused by OS distribution during the upgrade process.
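As a sanity check before triggering anything, reading back which node IDs each maintenance group actually contains could look roughly like this. The class names maintMaintGrp and fabricNodeBlk are assumptions on my side and should be confirmed on the target release; the APIC address and credentials are placeholders.

```python
# Rough sketch (assumption): list each maintenance group and the node IDs assigned
# to it, so the per-window scope can be double-checked before triggering an upgrade.
import requests

APIC = "https://apic.example.com"   # placeholder
USER, PWD = "admin", "password"     # placeholder

s = requests.Session()
s.verify = False  # lab shortcut only

s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": USER, "pwd": PWD}}}).raise_for_status()

# Pull each maintenance group together with its child node blocks in one query.
resp = s.get(f"{APIC}/api/node/class/maintMaintGrp.json",
             params={"rsp-subtree": "children"})
resp.raise_for_status()
for grp in resp.json().get("imdata", []):
    attrs = grp["maintMaintGrp"]["attributes"]
    children = grp["maintMaintGrp"].get("children", [])
    ranges = [f'{blk["fabricNodeBlk"]["attributes"].get("from_")}'
              f'-{blk["fabricNodeBlk"]["attributes"].get("to_")}'
              for blk in children if "fabricNodeBlk" in blk]
    print(attrs.get("name"), ", ".join(ranges) or "(no node blocks)")
```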
05-27-2025 12:55 AM - edited 05-27-2025 12:57 AM
"Including rollback time, the upgrade is limited to just 2 hours,
so the typical upgrade approach would result in a very long overall upgrade cycle."
"The upgrade will be done by groups based on service groups — for example, upgrading
all server-connect Secondary devices within a 2-hour window on a given day."
I am a bit confused here, are you planning to upgrade a single group in a single maintenance
window which is 2 hours?
Meaning, there will be multiple maintenance windows planned, but each one 2 hours only?
Please clarify.
05-27-2025 01:51 AM
Thanks for the clarification — let me explain a bit more about our plan.
Q1: Are you planning to upgrade only one device group within a 2-hour window?
→ Not exactly — it's not limited to just one group. In each 2-hour window, we may upgrade multiple device groups, depending on redundancy and failover readiness.
Q2: Are there multiple 2-hour windows planned to complete the upgrade in batches?
→ Yes, that’s the approach. But our goal is to complete the entire upgrade in the least number of windows possible, while keeping risk under control.
Here’s some context on our environment:
3x Spine switches, 3x Border Leaf switches (L3-OUT)
Other Leafs are divided into service groups: Cloud connect, Server connect, ACI connect, and NMC service connect
No vPC involved
All links are Active/Standby, except for 2 devices using HSRP based on VLAN ID parity (odd/even)
Our tentative plan:
Day 1: Border Leaf #3, Spine #3, NMC connect Leaf (Standby), ACI connect Leaf (Standby)
Day 2: Border Leaf #2, Spine #2, NMC connect Leaf (Active), ACI connect Leaf (Active)
Day 3: Border Leaf #1, Spine #1, Cloud connect Leaf (Standby), Server connect Leaf (Standby)
Day 4: Cloud connect Leaf (Active), Server connect Leaf (Active)
Day 5: HSRP device connect Leafs
My main question is:
Does Cisco recommend a maximum number of devices to upgrade per window from a best practice point of view (e.g., for rollback safety or system stability)?
Any advice or field experience is appreciated!
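P.S. The quick pre/post check we would run around each 2-hour window is roughly the following: confirm every leaf/spine still reports as active and count critical faults before and after the group upgrade. Again, just a sketch; the address and credentials are placeholders.

```python
# Minimal sketch: snapshot node state and critical fault count before/after a window.
import requests

APIC = "https://apic.example.com"   # placeholder
USER, PWD = "admin", "password"     # placeholder

s = requests.Session()
s.verify = False  # lab shortcut only

s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": USER, "pwd": PWD}}}).raise_for_status()

# Any leaf/spine not reporting "active" should block the next window.
nodes = s.get(f"{APIC}/api/node/class/fabricNode.json").json().get("imdata", [])
inactive = [n["fabricNode"]["attributes"]["id"]
            for n in nodes
            if n["fabricNode"]["attributes"].get("role") in ("leaf", "spine")
            and n["fabricNode"]["attributes"].get("fabricSt") != "active"]
print("non-active nodes:", inactive or "none")

# Critical fault count as a coarse before/after health signal.
faults = s.get(f"{APIC}/api/node/class/faultInst.json",
               params={"query-target-filter": 'eq(faultInst.severity,"critical")'}
               ).json().get("imdata", [])
print("critical faults:", len(faults))
```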
05-27-2025 02:59 AM
I’ve never encountered any limitations on the size of upgrade groups or the total number of devices being upgraded in any Cisco ACI documentation. The fabrics I manage typically consist of no more than 40–50 leaves each, so I haven’t performed an upgrade on a large-scale fabric and haven’t come across this topic of upgrade limitations.
Good luck.