Upgrade from Sup 720 to Sup 2T with new Chassis

dharmendra2shah
Level 1

All-

We recently purchased two Cisco 6500 series switches (with Sup 2T). These switches will replace our two old 6500 series switches (with Sup 720).

We have 70 VLANs and 90+ closet switches (2900 series) connecting to the core switches.

We have two WLCs connected to the core switches. We also have a 1 x 1 connection to a VSS switch, which in turn connects to our Server Co-Location data center over an IPsec & GRE tunnel.

Our routing protocol is EIGRP.

Our VTP domain at the Server Co-Location is separate from our location “A” campus. I was wondering what the best way is to migrate our core switches at the location “A” campus.

The requirement is to replace these switches with minimal downtime.

Ds

7 Replies

Reza Sharifi
Hall of Fame

Hi,

When moving from the Sup 720 to the Sup 2T, the biggest challenge is QoS. The QoS architecture for the Sup 2T is different from the Sup 720's, so if you have QoS deployed, you need to read the document below:

http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps708/white_paper_c11-652042.html
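To give a rough sense of the difference: the Sup 720 relies on platform "mls qos" commands, while the Sup 2T uses MQC-style class maps and policy maps. A minimal sketch only; the class name, marking, and interface below are made-up examples, not anything from this thread:

! Sup 720 style: platform QoS enabled globally
mls qos
!
! Sup 2T style: MQC class map / policy map applied to an interface
class-map match-any VOICE
 match dscp ef
!
policy-map EDGE-IN
 class VOICE
  set dscp ef
!
interface GigabitEthernet1/1
 service-policy input EDGE-IN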

If you have HSRP or VRRP on your older 6500s, you can simply change one device at a time.

Replace the standby HSRP or VRRP device first. Then fail over HSRP or VRRP from the old switch to the new one, replace the second old switch, and bring it up as the standby.
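As a rough sketch of that failover step, assuming HSRP group 1 on VLAN 10 (the group, VLAN, and addresses are placeholder values): give the new switch a higher priority with preemption so it takes the active role, then retire the old active.

! New switch: bring it up as standby first, then let it preempt to active
interface Vlan10
 ip address 10.1.10.3 255.255.255.0
 standby 1 ip 10.1.10.1
 standby 1 priority 110
 standby 1 preempt
!
! Old switch: lower its priority so it hands over the active role
interface Vlan10
 standby 1 priority 90
!
! Verify the roles before pulling the old box
show standby brief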

You definitely want to do all of this during an outage window.

Also, make sure all your old line cards are compatible with the new Sup.

HTH

Reza-

Thanks for the heads-up on QoS, and I like your idea of using HSRP & VRRP and migrating one Layer 3 interface at a time. However, we are running GLBP on the 6500s right now. I am assuming we should be able to do the same thing you explained above with GLBP as well?

Ds

Hi DS,

GLBP should work the same way: the members of a GLBP group elect one router as the active virtual gateway (AVG), and the other group members act as backups.
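For reference, a minimal GLBP group on an SVI looks much like its HSRP counterpart; the group number, VLAN, and addresses below are placeholders:

interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 glbp 1 ip 10.1.10.1
 glbp 1 priority 110
 glbp 1 preempt
!
! Check which router is the AVG and which virtual forwarders (AVFs) are present
show glbp brief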

Good Luck

HTH

Thanks!!!

Never been bitten by a Sup swap-out, but did have an issue with a chassis swap-out. Was upgrading from a 6509 chassis to a 6509-VE chassis and was also running GLBP. Shut down the old 6K, racked the VE, put the blades in the same slot numbers, and booted up with no issues. What we didn't realize is that the base MAC addresses are derived from the chassis, so when GLBP saw a different MAC it assumed this was AVF #3. There is a 4-hour GLBP AVF timer that removed AVF #1, leaving #2 and #3.

Around 4 hours after the boot-up, our downstream WISMs stopped passing traffic. Their SVIs were anchored on the 6K distribution, and GLBP was used to handle first-hop redundancy. It turned out the WISM was holding the old MAC address of the dead forwarder in memory and would not give it up, so we were black-holed. We could not clear ARP on the WLC and had to reboot the unit to resolve it. That reboot occurred after our downtime window because of the 4-hour AVF timer, and we had already gone home thinking mission accomplished.

We discovered that WLC 6.x code did not handle GLBP properly; this was corrected in 7.x. We had 9 more chassis to swap out, so to avoid this issue we would shut down the SVIs on the chassis that was being replaced 4 hours before the downtime. That way GLBP aged out the dead AVF gracefully, and when we brought up the new VE chassis everything worked fine and the WISM did not have any problems. We never saw any issues with PCs or any other devices, just the controllers running 6.x code.
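For anyone repeating this, a sketch of that workaround (the VLAN and group numbers are examples): shut the SVI on the outgoing chassis well ahead of the window so the stale AVF ages out before the swap. The aging is governed by the GLBP redirect/timeout timers, which default to roughly 10 minutes and 4 hours.

! On the chassis being replaced, about 4 hours before the window
interface Vlan10
 shutdown
!
! Confirm the old virtual forwarder has aged out before the swap
show glbp brief
!
! The timers themselves are tunable per group if needed (redirect / timeout, in seconds)
interface Vlan10
 glbp 1 timers redirect 600 14400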

Thanks for the heads-up, Darren!!! I am sure we would have had the same problem, but now we know what to do.

Ds

We were able to migrate from the old 6500s (Sup 720) to the new 6500s (Sup 2T). Everything is working fine, except some of the applications that use Oracle on the back end are having frequent disconnects.

Is anyone aware of any known problems?
