I am about to undertake a project to migrate our core switches to new hardware and am keen to find out the best way of doing this.
Currently we have three sites: A, B and C.
Site A contains two Cisco 4506 switches that are end of life and are the ones scheduled for replacement. However, this pair of switches has a large number of SVIs, has many other access switches trunked to it, and is running EIGRP, so it holds a large number of routes to other parts of the same overall network.
Site B has switches connected to Site A over a layer 3 VLAN via a LAN extension service. The switches in Site B have local VLANs and SVIs.
Site C has switches connected to Site A over a layer 3 VLAN via fibre. The switches in Site C have local VLANs and SVIs.
Obviously I'd need to keep downtime to a minimum, so is it possible to set up the new core switches in parallel in Site A, trunk them to the old core, set the gateway of last resort to the old core, and then move each SVI one by one? Or maybe run EIGRP on the new core as well and let it work out the routing?
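Roughly what I had in mind on the new core (just a sketch - the interface, VLAN and address numbers here are made up, not our actual config):

```
! Trunk back to the old 4506 core
interface GigabitEthernet1/1
 description Trunk to old core
 switchport mode trunk
!
! Gateway of last resort pointing at the old core
ip route 0.0.0.0 0.0.0.0 10.0.0.1
!
! Example of an SVI once it has been migrated across
interface Vlan100
 ip address 10.1.100.1 255.255.255.0
 no shutdown
```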
Thanks in advance
Unless the VLANs are on a switch-by-switch basis, i.e. you do not have a VLAN spanning multiple switches, you can't use an L3 routed connection with EIGRP.
The reason is that if you migrate a VLAN to the new switches but there are still other VLANs on the access switches, then those access switches still need L2 connections to the existing switches.
And in order for the clients in the migrated VLAN to get to their SVIs on the new switch, there would need to be an L2 trunk between the existing and new switches. An L3 connection wouldn't work.
If each VLAN was only on one switch, then you could migrate the VLAN and at the same time migrate that access switch's uplinks to the new switch.
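If you do go with an L2 trunk between the existing and new switches, it's just a standard dot1q trunk on both sides, e.g. (interface number is only an example):

```
interface GigabitEthernet1/0/48
 description Trunk between existing and new core
 switchport trunk encapsulation dot1q
 switchport mode trunk
```

The encapsulation command is only needed on platforms that still support ISL (e.g. the 4500/6500); switches that only do dot1q will reject it.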
Thanks for your help Jon
The access switches that are currently trunked to the old core will have the same VLANs as the core, but the majority are just client PC VLANs, which I could leave on the old core for now. The SVIs that I would be moving from the old core to the new one would just be, for instance, server VLANs - none of these VLANs would be needed on the access switches as there are no devices in them. Does that make sense?
If the server VLANs only need to be on the new switches then yes, you could run an L3 connection with EIGRP and route between the switches.
But it does mean that when you come to migrate the access switches, if each switch has multiple VLANs then you need to migrate all switches and VLANs at once. An L2 trunk gives you more flexibility in terms of migration, i.e. you can do one VLAN at a time no matter how many access switches that VLAN is on.
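As a rough sketch of the L3 option (addresses and the EIGRP AS number are made up - match them to your existing EIGRP config):

```
! New core side of the routed link to the existing core
interface GigabitEthernet1/1
 no switchport
 ip address 10.255.255.2 255.255.255.252
!
router eigrp 100
 network 10.255.255.0 0.0.0.3
 network 10.1.100.0 0.0.0.255
```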
I have a feeling we'll leave the access switches trunked to the old core for the time being, creating a distribution layer of sorts. Do you think I could get away with just setting the gateway of last resort on the new core to the old core, instead of also running EIGRP on there?
Firstly apologies as I gave some misleading information in my first response.
If you move the server VLANs to the new switch then you have two choices -
1) leave the SVIs for the server VLANs on the existing switches and use a trunk link between the existing switches and the new switches.
Then yes, you can just set the default gateway on the new switch, but that is only used so you can telnet/SSH to it for management.
2) move the SVIs for the server VLANs to the new switches as well. If you do this then you need to route between your existing switches and the new switches.
If the servers are all directly connected to the new switches, or they are on dedicated switches (i.e. no devices other than servers) and those switches are connected to the new switches, then you need to route between the existing and new switches.
You can do this either over a trunk link or you can connect using L3 routed links.
Up to you.
If the server switches also have other clients on them whose SVIs are on the existing switches, then it gets a bit more complicated.
Let me know if that is the case.
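For option 1 the new switch is L2 only, so the default gateway is purely for management access, something like this (addresses are examples only):

```
! Management SVI on the new L2-only switch
interface Vlan10
 ip address 10.0.10.5 255.255.255.0
 no shutdown
!
no ip routing
ip default-gateway 10.0.10.1
```

Note `ip default-gateway` only takes effect while IP routing is disabled; once you enable routing you'd use a static default route instead.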
Thanks for your help
I think I've got a better picture of what I need to do now. It'll have to be phased, starting with the server VLANs. Once they are up and running (with the default gateway as the old core), then I can start to think about moving the rest, and also what to do about EIGRP.
We're about to migrate a couple of 6509-Es to a couple of new 6513-Es.
We have a collapsed redundant core with some physical servers, a couple of Nexus switches and about 40 access stacks. Everything is connected redundantly to the two 6509-Es.
Our plan would be like this:
- First, completely pre-configure the new 6513-Es, except for OSPF and BGP, and shut down all the SVIs
- Label all cables going to the old 6509-Es (we print the new port numbers of the 6513-E on these labels)
- Unplug all these cables from the first 6509-E
- Take the first 6509-E out of the rack
Everything will still be running, since everything is still plugged into the second 6509-E. (But we're running on only one core switch now, so fingers crossed.)
- Build the new 6513-E into the rack & power on
- Build a trunk between the 6509-E and the 6513-E
- Plug the cables into the new 6513 according to the label info.
- Shut down the SVIs (with a script) on the old 6509-E and bring them up on the 6513-E (also with a script). We will then have a short downtime for the connected networks, and no Internet or remote OSPF networks for a few moments
- Deactivate BGP and OSPF on the old 6509-E and activate BGP and OSPF on the new 6513-E.
Now all layer 3 functionality is running on the new 6513-E. We should be up and running again.
- Now unplug the cables from the second 6509-E and take it out of the rack
- Build the 6513 into the rack
- Connect it to the other 6513 with a trunk.
- Connect cables
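The SVI cutover script is essentially just a shut on the old box and a no shut on the new one, e.g. (the VLAN range here is only an example):

```
! On the old 6509-E
interface range Vlan10 - 50
 shutdown
!
! On the new 6513-E
interface range Vlan10 - 50
 no shutdown
```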
Presumably you are doing it this way because the new switches are going into the same rack space as the existing switches?
You will get more downtime than you mentioned, though: when you connect the new 6513 to the 6509 and then connect up the access switch links to the new switch, STP is going to need to recalculate even before you bring up the SVIs etc.
And when you connect up the second 6513 you are going to get the same again.
Not saying there is anything wrong with the plan, but there will be more downtime than you covered.
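One thing that can limit the disruption (assuming you're running rapid-PVST - adjust to your own STP design) is pre-configuring the STP mode and root priority on the new 6513s before you cable them in, so the topology settles the way you want it the first time, e.g.:

```
spanning-tree mode rapid-pvst
spanning-tree vlan 1-4094 priority 4096
```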
Hi Jon, yep, same rack space.
I know about the STP recalcs and should have mentioned them here too.
(They're in the migration plan anyway) ;-)