
Redundancy issue with Sup 720-3BXL

I have a Cisco 7609 with SSO redundancy configured on two Sup 720-3BXL modules running the 12.2(33)SRC1 image. Even though I've configured SSO, my standby sup remains in COLD state, with the following logs generated.

-     -     -     -     -     -     -     -     -     -     -     -     -     -    

*May 29 17:17:57.597: %SYS-SP-STDBY-5-RESTART: System restarted --
Cisco IOS Software, c7600s72033_sp Software (c7600s72033_sp-ADVENTERPRISEK9-M), Version 12.2(33)SRC1, RELEASE SOFTWARE (fc1)
Technical Support:
Copyright (c) 1986-2008 by Cisco Systems, Inc.
Compiled Fri 23-May-08 07:52 by prod_rel_team
*May 29 22:47:57.901 IST: %SYS-SP-STDBY-6-BOOTTIME: Time taken to reboot after reload =  314 seconds

May 29 22:48:13.360: Config Sync: Bulk-sync failure due to Servicing Incompatibility. Please check full list of mismatched commands via:
  show redundancy config-sync failures mcl

May 29 22:48:13.360: Config Sync: Starting lines from MCL file:

interface Serial8/0/0.1/1/2/3:0
! <submode> "interface"
-13\A3\7859, AIRTEL, NON NOC
! </submode> "interface"

May 29 22:48:13.359 IST: %ISSU-SP-3-INCOMPATIBLE_PEER_UID: Setting image (c7600s72033_sp-ADVENTERPRISEK9-M), version (12.2(33)SRC1) on peer uid (6) as incompatible
May 29 22:48:13.363 IST: %OIR-SP-3-PWRCYCLE: Card in module 6, is being power-cycled (RF request)
May 29 22:48:13.960 IST: %PFREDUN-SP-6-ACTIVE: Standby processor removed or reloaded, changing to Simplex mode
May 29 22:50:06.079 IST: %ISSU-SP-3-PEER_IMAGE_INCOMPATIBLE: Peer image (c7600s72033_sp-ADVENTERPRISEK9-M), version (12.2(33)SRC1) on peer uid (6) is incompatible
May 29 22:50:06.079 IST: %ISSU-SP-3-PEER_IMAGE_INCOMPATIBLE: Peer image (c7600s72033_sp-ADVENTERPRISEK9-M), version (12.2(33)SRC1) on peer uid (6) is incompatible

May 29 22:51:20.616 IST: %PFREDUN-SP-4-INCOMPATIBLE: Defaulting to RPR mode (Runtime incompatible)

-     -     -     -     -     -     -     -     -     -     -     -    

The logs say that, for some reason, the configuration is not being synchronized between the active and standby sups; hence the redundancy mode falls back to RPR, SSO is not achieved, and the standby stays in COLD state.

However, I couldn't find the reason why the configuration is not being synchronized. I've opened a case with Cisco and, as suggested, issued the command redundancy config-sync ignore mismatched-commands. Everything worked fine: SSO was achieved and the standby sup came to HOT state.

Now my questions are:

1. Why was the configuration not synchronized?

2. Since I issued the suggested command, at least some of the mismatched lines in the configuration will be ignored. Will that create a problem when my active sup fails and the standby becomes active?

Kindly suggest.
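For reference, these are the standard show commands I used to inspect the sync state (the first one is the command the log itself points to; output trimmed):

```
Router# show redundancy states
! shows my_state / peer_state and the operational redundancy mode (SSO vs RPR)

Router# show redundancy config-sync failures mcl
! lists the Mismatched Command List entries that caused the bulk-sync failure
```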


Hi Ameen,

Per this section of the release notes, it looks as though there is a bug in the SRC release of the IOS that causes this:

Open Caveats—Cisco IOS Release 12.2(33)SRC

This section describes possibly unexpected behavior by Cisco IOS Release 12.2(33)SRC. All the caveats listed in this section are open in Cisco IOS Release 12.2(33)SRB. This section describes only severity 1, severity 2, and select severity 3 caveats.

Basic System Services


Symptoms: The aaa group server radius subcommand ip radius source-interface will cause the standby to fail to sync.

c10k-6(config)#aaa group server radius RSIM
c10k-6(config-sg-radius)#ip radius source-interface GigabitEthernet6/0/0
c10k-6#hw-module standby-cpu reset

Aug 13 14:49:31.793 PDT: %REDUNDANCY-3-STANDBY_LOST: Standby processor fault
Aug 13 14:49:31.793 PDT: %C10K_ALARM-6-INFO: ASSERT MAJOR RP A Secondary
Aug 13 14:49:31.793 PDT: %REDUNDANCY-3-STANDBY_LOST: Standby processor fault
Aug 13 14:49:31.793 PDT: %REDUNDANCY-3-STANDBY_LOST: Standby processor fault
Aug 13 14:49:31.793 PDT: %REDUNDANCY-3-STANDBY_LOST: Standby processor fault


here is the link for more info:




Dear Reza,

Thanks for your comments.

The mentioned bug is present only in SRC; however, the image running on both sups is SRC1. Also, I don't even have those AAA commands in my running configuration that could cause the standby to fail to sync.




Hi, anyone got any idea?

Thanks in Advance.


Hello Ameen,

According to the Bug Toolkit, the bug mentioned by Reza is solved in 12.2(33)SRC2, not in 12.2(33)SRC1.

I also remember other similar bugs triggered by the presence of a NAM2 module in the chassis, but if I remember correctly those affect 12.2(33)SRD.

A possible match for your case is the following bug, which is triggered by other conditions:

CSCsm44147 Bug Details

Symptoms: The standby WS-SUP720-3BXL failed to boot into SSO mode because of MCL check failure with the FPD configuration command: upgrade fpd path

Conditions: The problem happens when "sup-bootdisk:" is used as the FPD image package directory path argument in the upgrade fpd path pkg-dir-path configuration command for an active WS-SUP720-3BXL that supports the "sup-bootdisk:" filesystem, but the same filesystem is not supported by the standby WS-SUP720-3BXL.

Workaround: For systems that have a mixture of old and new WS-SUP720-3BXL, please do not use "sup-bootdisk:" as the filesystem in the upgrade fpd path pkg-dir-path configuration command; instead use the "sup-bootflash:" filesystem, as this filesystem exists on both old and new WS-SUP720-3BXL.

This is also fixed in 12.2(33)SRC2 and 12.2(33)SRD.

I would suggest upgrading to a later IOS image.
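If CSCsm44147 is indeed the match, the workaround from the bug details would look roughly like this (illustrative sketch; the exact upgrade fpd path line in your running config may differ, so check it first):

```
! see whether the FPD package path currently points at sup-bootdisk:
Router# show running-config | include upgrade fpd

! if it does, repoint it at sup-bootflash:, which exists on both
! old and new WS-SUP720-3BXL, per the bug's workaround
Router# configure terminal
Router(config)# no upgrade fpd path sup-bootdisk:
Router(config)# upgrade fpd path sup-bootflash:
Router(config)# end
```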

Hope to help



Dear Giuseppe,

Thanks for your help.

Now I understand that SSO was not achieved because of the bugs present in the IOS. The workaround suggested by one of the Cisco engineers was to issue the command redundancy config-sync ignore mismatched-commands; after that the issue was resolved and SSO was achieved by the sups.

It means that at least some of the configuration will not be synchronized between the active and standby. Will that cause any problems when my active sup goes down and the standby takes over?

Thanks in advance,



Hi Ameen,

If you have a spare box with 2 sups, set it up and test it with a newer IOS to see if you still have the same issue. It is hard to tell how the switch will behave when the primary goes out, and you don't want that to happen in the middle of the day.
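Before relying on a failover, it is also worth confirming exactly what the standby did and did not receive. A quick check (standard redundancy show commands; output trimmed):

```
Router# show redundancy states
! confirm peer state = STANDBY HOT and operating mode = SSO

Router# show redundancy config-sync failures mcl
! any commands still listed here were excluded from the sync,
! so they will be absent from the standby's config after a switchover
```

Any line that remains on the mismatched list should be reviewed: if it matters operationally (e.g. an interface description is harmless, a routing statement is not), fix or remove it on the active rather than leaving it permanently ignored.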


