
6509 VSS failover behaviour

Toi Seng Chang
Level 1

Hi All,

Does anyone have experience with triggering a failover on a VSS deployment with one VS-720-10G-3C in each chassis?

I tried "redundancy force-switchover", but after that the 20G VSL keeps flapping up and down and will not come up normally.

We have one FWSM in each chassis; is any particular configuration needed for this kind of deployment?

BTW, if I shut down the power source of the VSS active chassis, both the FWSM and the VSS fail over normally. Please advise.

3 Replies

Arumugam Muthaiah
Cisco Employee

Hi Toi,

If you perform "redundancy force-switchover", this reloads the ACTIVE supervisor, which includes the VSL link, so the VSL will go down.

The following sequence of events provides a summary of the failover convergence process (a quick verification sketch follows the list):

  1. Switchover is invoked via the software CLI, by removing the supervisor, by powering down the active switch, or by system initiation.
  2. The active switch relinquishes the unified control plane; the hot-standby initializes the SSO control plane.
  3. All the line cards associated with the active switch are deactivated as the active chassis reboots.
  4. Meanwhile, the new active switch (previous hot-standby switch) restarts the routing protocol and starts the NSF recovery process.
  5. In parallel, the core and access layers rehash the traffic flows depending on the topology. The unicast traffic is directed toward the new active switch, which uses a hardware CEF table to forward traffic. Multicast traffic follows the topology design and either rebuilds the multicast control plane or uses the MMLS hardware table to forward traffic.
  6. The NSF and SSO functions become fully initialized, start learning routing information from the neighboring devices, and update the forwarding and control plane protocol tables as needed.
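If it helps, this is roughly how I would verify the roles and the VSL around a forced switchover. These are standard VSS show commands on 12.2(33)SX; exact output varies by release, so treat it as a sketch:

Router# show switch virtual role         ! which chassis is ACTIVE and which is HOT_STANDBY
Router# show switch virtual link         ! VSL port-channel state and member links
Router# redundancy force-switchover      ! the old ACTIVE chassis reloads; expect the VSL to drop while it reboots
Router# show switch virtual link         ! after the reload the VSL should re-form; persistent flapping points at the VSL member links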

Please see below for the minimum recommended code to support the FWSM module:

Firewall Services Module (FWSM) (WS-SVC-FWM-1-K9)
Minimum Cisco IOS Release: 12.2(33)SXI
Minimum Module Release: 4.0.4
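A quick way to double-check that you are at or above those minimums (a rough sketch; output format differs between releases):

Router# show version | include IOS       ! running IOS release on the VSS
Router# show module switch all           ! module types and software versions in both chassis, including the FWSM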

Refer:

http://www.cisco.com/en/US/products/ps9336/products_tech_note09186a0080a7c72b.shtml#ace_fwsm - Integrate Cisco Service Modules with Cisco Catalyst 6500 Virtual Switching

http://www.cisco.com/en/US/products/ps9336/products_tech_note09186a0080a7c837.shtml - Redundancy with Service Modules

Note: When one of the supervisors fails, the chassis containing that supervisor is effectively down as well. The other supervisor and chassis will take over without any service interruption.

Regards,

Aru

*** Please rate if the post is useful ***


Hi Aru,

Thanks for your information, it really clears things up a lot!

But I forgot to mention that when I use the command line to trigger the VSS failover, the new active supervisor cannot forward traffic to its next hop (the FWSM) normally. Our topology is VSS ---> FWSM ---> Internet. I remember we could not even ping the next hop, even though the service module was confirmed to allow inbound ICMP traffic.

On the other hand, if I shut down the current active supervisor, everything works: the VSS can still forward traffic to the next hop, and we only lose the module in the shut-down chassis.

Any comment on that?

Hi Toi,

If the VSS active chassis or supervisor engine fails, the VSS initiates a stateful switchover (SSO) and the former VSS standby supervisor engine assumes the VSS active role. The failed chassis performs recovery action by reloading the supervisor engine.

If the VSS standby chassis or supervisor engine fails, no switchover is required. The failed chassis performs recovery action by reloading the supervisor engine.

The VSL links are unavailable while the failed chassis recovers. After the chassis reloads, it becomes the new VSS standby chassis and the VSS reinitializes the VSL links between the two chassis.

The switching modules on the failed chassis are unavailable during recovery, so the VSS operates only with the MEC links that terminate on the VSS active chassis. The bandwidth of the VSS is reduced until the failed chassis has completed its recovery and become operational again. Any devices that are connected only to the failed chassis experience an outage.
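A rough way to watch the recovery from the new active chassis (standard show commands, nothing specific to your setup):

Router# show switch virtual link         ! VSL stays down until the failed chassis reloads and rejoins
Router# show module switch all           ! modules in the failed chassis show as down until recovery completes
Router# show etherchannel summary        ! MEC members on the failed chassis drop out until it is back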

Note:
The VSS may experience a brief data path disruption when the switching modules in the VSS standby chassis become operational after the SSO.

After the SSO, much of the processing power of the VSS active supervisor engine is consumed in bringing up a large number of ports simultaneously in the VSS standby chassis. As a result, some links might be brought up before the supervisor engine has configured forwarding for the links, causing traffic to those links to be lost until the configuration is complete. This condition is especially disruptive if the link is an MEC link. Two methods are available to reduce data disruption following an SSO (a rough configuration sketch follows the list):

  1. Beginning in Cisco IOS Release 12.2(33)SXH2, you can configure the VSS to activate non-VSL ports in smaller groups over a period of time rather than all ports simultaneously.
  2. You can defer the load sharing of the peer switch's MEC member ports during reestablishment of the port connections.
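Here is a rough configuration sketch for those two options. I am quoting the keywords from memory of the VSS configuration guide, so please double-check the exact syntax on your release before applying anything; the values are only examples:

! Option 1: bring non-VSL ports up in groups after standby recovery (example values)
Router(config)# switch virtual domain 100
Router(config-vs-domain)# standby port bringup 10 1        ! e.g. 10 ports per cycle, 1 second apart
! Option 2: defer MEC load sharing, configured on the adjacent (peer) switch (example values)
Switch(config)# port-channel load-defer 120
Switch(config)# interface port-channel 1
Switch(config-if)# port-channel port load-defer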

At the same time, I would like to share some best practices about FWSM integration with VSS.

Refer:

http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps9336/white_paper_c11_513360.html
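As a minimal example of the FWSM piece in VSS mode (the VLANs, switch numbers and slot numbers below are made up for illustration; the white paper above has the full procedure):

Router(config)# firewall vlan-group 1 10,20,100            ! VLANs the FWSM is allowed to see (example list)
Router(config)# firewall switch 1 slot 4 vlan-group 1      ! attach the group to the FWSM in chassis 1 (example slot)
Router(config)# firewall switch 2 slot 4 vlan-group 1      ! and to the FWSM in chassis 2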

Please verify the output of "show redundancy states":

Router# show redundancy states
       my state = 13 -ACTIVE
     peer state = 8  -STANDBY HOT
           Mode = Duplex
           Unit = Secondary
        Unit ID = 18

Redundancy Mode (Operational) = sso
Redundancy Mode (Configured)  = sso
Redundancy State              = sso

Do you still see the problem after the old active supervisor becomes standby?

Did you reload the standby supervisor and then check the behavior? Do you see the same issue?
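If you can reproduce it, capturing something like the following from the new active chassis right after the switchover would help narrow it down (just a suggested checklist):

Router# show switch virtual role                 ! confirm the roles after the switchover
Router# show redundancy states                   ! confirm the peer comes back to STANDBY HOT
Router# show module switch all                   ! is the FWSM in each chassis up?
Router# show arp                                 ! is the FWSM next-hop MAC resolved?
Router# ping <FWSM inside interface IP>          ! basic reachability to the next hop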

Regards,

Aru

*** Please rate if the post is useful ***