Migrate from Nexus 3064pq to Nexus 93180YC-FX3

ssell0001
Level 1

We are planning on replacing our Nexus 3ks with 9ks and are curious about any issues we may run into. The 3ks are pure Layer 2, connecting our storage and UCS environments, with uplinks to the Fabric Interconnects and our 7k core. They are running vPC, and all the storage clusters have links to each 3k. I was originally planning to bring up the new 9ks, configure everything to match the 3k configuration, move the uplinks to the core and FI from the 3k that is in the secondary role, and then move each end-device link. However, I now think that will likely create issues because of the vPC. What would be the best way to do this with minimal downtime?

 

The 7k is also configured for vPC. The trunks to the 3ks are configured for vPC, and on the 3ks, the trunks to the Fabric Interconnects are configured for vPC. I have attached a basic diagram.
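For reference, the vPC pieces that would need to be replicated on the 9ks look roughly like this (the domain ID, keepalive addresses, and port-channel numbers below are placeholders, not our actual config):

```
! Sketch of the vPC config to mirror on the new 9ks - all values illustrative
feature vpc
vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1 vrf management

! Peer-link between the two switches in the pair
interface port-channel1
  switchport mode trunk
  vpc peer-link

! Uplink toward the 7k core (vPC on both sides)
interface port-channel20
  switchport mode trunk
  vpc 20

! Downlink toward the Fabric Interconnect
interface port-channel30
  switchport mode trunk
  vpc 30
```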

 

Thank you

Accepted Solution

Reza Sharifi
Hall of Fame

You can configure the 9ks ahead of time with all the configs the same as the 3ks, and in a maintenance window move all the uplink and downlink cables from the 3ks to the 9ks. Hopefully, where the 3ks are installed, you have room to rack the 9ks directly above or below them; that way you don't even have to change the cables. If you get everything ready ahead of time, a one-hour maintenance window should be sufficient to move the cables and test connectivity.
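After the cable move, a quick verification pass on the 9ks should confirm the vPC comes up cleanly before you declare the window done. Something along these lines (standard NX-OS show commands; the interface names are just examples):

```
! Peer-link and keepalive up, all vPCs in "success" state
show vpc
show vpc consistency-parameters global

! Port-channels bundled with the expected member interfaces
show port-channel summary

! No blocked uplinks or unexpected topology changes
show spanning-tree summary

! End devices learned again on the new switches
show mac address-table
```

If `show vpc consistency-parameters global` reports any Type-1 mismatches between the two 9ks, fix those before moving on, since they will keep vPC member ports down.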

 

HTH 


3 Replies


ssell0001
Level 1

Thank you for your quick response. 

 

Would you recommend moving all uplinks on both 3ks first (the uplinks to the 7k and to the FI) and then moving the end-device links, which would have most of the environment down for the duration of the migration, or moving all connections from one 3k first and then all connections from the other 3k? My concern is that if cluster A is still on the 3k and cluster B is on the 9k, but only one 9k is uplinked, there could be a major outage. Our change management will have a hard time approving downtime for the entire storage environment.

Yes, in this case I would recommend moving all uplinks on both 3ks. The reason is that the config will be exactly the same on the 9ks and 3ks, so you don't want any duplicate IP address issues to deal with at that time.

 

"Our change management will have a hard time approving a downtime for the entire storage environment."

Because of this, you really need to plan ahead with a good design and thorough documentation, i.e., make sure you know which port you are disconnecting from which switch, and which switch and port you are reconnecting it to. If not, you will be dealing with an unsuccessful change window, and that does not look good.
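A simple cable map is usually enough for that documentation. Something like this, filled out ahead of time and checked off during the window (switch names and ports below are invented for illustration):

```
Old switch / port      New switch / port      Far-end device / port    Notes
3k-A  Eth1/49    ->    9k-A  Eth1/49          7k-core-1  Eth4/1        vPC 20 member
3k-A  Eth1/50    ->    9k-A  Eth1/50          FI-A       Eth1/17       vPC 30 member
3k-B  Eth1/49    ->    9k-B  Eth1/49          7k-core-2  Eth4/1        vPC 20 member
3k-B  Eth1/1     ->    9k-B  Eth1/1           storage-A  port 0a       vPC 40 member
```

Keeping the port numbers identical between the old and new switches, where possible, also means the pre-staged 9k config needs no last-minute interface changes.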

HTH
