
Need some hints migrating to Nexus

marco1.fabbri
Level 1

Hello,

I am new to the Nexus world, so I'm reading a lot of documents regarding Nexus design and guidelines.

I have to propose and implement a Nexus solution in a server farm/campus currently made of Catalyst switches (6500s).

The topology is made of two core 6500s.

Some routers for branch offices.

Server farm access switches (8 Cat6500s) are L2 only, and the core/distribution is the L3 boundary for the server VLANs (500 to 700 servers).

The campus/user network has both distribution (L3 boundary) and access switches (3750s) (4,000 users).

I need to find a good approach to begin a migration to Nexus technology. Also note that the entire site has to be completely readdressed, so I could take advantage of the migration to complete that as well.

A budget for this project has not been defined yet, and this (unfortunately) leaves me so many possible choices that it is difficult to make a decision.

Basically, I have identified some options to start with, giving priority to migrating the servers first because, in my opinion, that is where the more immediate benefits are:

1) Create an entire parallel Nexus infrastructure connected to the old core: a pair of N7Ks (with core + server farm distribution VDCs) and a pair (or four) of N5548s with double-sided vPC, plus some FEXs (with vPC to the 5548s). (A rough vPC sketch follows this list.)

Notes:

- This requires a big initial investment, but then only the servers need to be migrated, with no further network outages caused by integrating new devices.

- I could also create an additional VDC to host the campus distribution layer.

- Readdressing can be done during the server migration without additional burden.

2) Connect two 5548s (L2 only) plus some FEXs (with vPC to the 5548s) to the existing core using vPC, and start migrating servers.

Notes:

- Low budget required, but increased risk.

- I am not sure whether I can form a vPC toward the core/distribution (6500s) with the peer link sitting on the SF access switches (the 5548s). Technically it is possible, but whether it is recommended is another matter; I did not find anything about this in the documentation. (Note that the 6500 itself has no vPC, so its side would just be a standard port channel; see the sketch after this list.)

- If I want to include server readdressing, it requires additional work (creating new VLANs).

- I don't know how well the 6500s interoperate with Nexus.

3) other?
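To make the server farm side concrete, here is a rough sketch of the 5548 vPC configuration I have in mind for either option. All interface numbers, IPs, and domain IDs are invented for illustration; this is a sketch, not a validated design. Since the 6500 has no vPC, its side of the "double-sided" link would just be a standard LACP port channel (or an MEC if the core pair ran VSS):

! On each N5548 (mirror on the peer, with source/destination IPs swapped)
feature lacp
feature vpc

vpc domain 10
  ! Keepalive over the management VRF, not over the peer link
  peer-keepalive destination 192.168.1.2 source 192.168.1.1 vrf management

! Peer link between the two 5548s
interface port-channel 1
  switchport mode trunk
  vpc peer-link

interface ethernet 1/1-2
  channel-group 1 mode active

! Uplink toward the 6500 core: vPC on the Nexus side,
! a plain LACP port channel on the 6500 side
interface port-channel 20
  switchport mode trunk
  vpc 20

interface ethernet 1/3
  channel-group 20 mode active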

This is a nice challenge, and I hope to receive some hints from you.

Marco

5 Replies

Andras Dosztal
Level 3

Hi Marco,

We're doing almost the same (except the readdressing), and we chose the first option, because

  • it takes less downtime (1 migration instead of 2);
  • if you're considering FabricPath, you can enable it in the pre-migration state (in the 2nd option, enabling it later might require a reboot, which is not a common event in a DC); a minimal enable sequence is sketched after this list;
  • moving a large number of servers is a big project in itself; you don't want to take extra risks.
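On the FabricPath point, a minimal enable sequence looks roughly like this. The switch-id, ports, and VLAN are made up, and licensing and reload behavior depend on the platform and NX-OS version, so treat it as a sketch:

install feature-set fabricpath
feature-set fabricpath

! Hypothetical switch ID; must be unique per switch in the fabric
fabricpath switch-id 11

! Fabric-facing links stop being classic trunks
interface ethernet 1/9-10
  switchport mode fabricpath

! VLANs to be carried over the fabric
vlan 100
  mode fabricpath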

BTW, the Cat6.5k interoperates fine with Nexus; check this white paper.

I'll set notifications for this topic; feel free to ask more.

ogamanya
Level 1

It really depends on your objective. I would suggest that the decision be aligned with your strategic objectives.

The 1st option is an ALL-OUT technology upgrade. Nexus introduces new features into your DC that you could not have on the Cat 6500s, and the 7K design also improves on HA features in the DC. With the mix of 7Ks and N5Ks, the biggest benefits are bandwidth scaling (tons of it) and flexibility (the 5K/FEX combination lets you stretch your DC footprint to any corner of the DC). If there is budget for this, now is a good time to do it, since the product is relatively mature from an investment protection point of view.

The 2nd option of introducing 5Ks will give you L2 features like FabricPath, and combined with FEX it will simplify cabling (top of rack / end of row, depending on server density per rack/row). Whichever the case, I bet it's better than central wiring to a Cat 6500. I would not advise running L3 on the 5Ks just yet.
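To make the 5K/FEX combination concrete, attaching a FEX to a 5548 looks roughly like this (the FEX number and ports are hypothetical, and this shows the straight-through attachment, not the dual-homed variant):

feature fex

! Fabric uplinks from the 5548 to the FEX
interface port-channel 100
  switchport mode fex-fabric
  fex associate 100

interface ethernet 1/17-18
  switchport mode fex-fabric
  fex associate 100
  channel-group 100

! Once the FEX is online, its host ports show up as e100/1/x
interface ethernet 100/1/1
  switchport access vlan 100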

As for the migration process, the main difference between options 1 & 2 is that in option 1 you will need to migrate your core functions from the Cat 6500s to the N7Ks.

As for the 2nd part of your question about IP readdressing, I would say the effort needed is the same whether you choose option 1 or 2, i.e., new subnets, routing for those subnets, new VLANs, and access port configuration for the migrated servers. I assume new cabling too, as the termination point for the servers will move from the 6500 to the Nexus 5K in both scenarios.
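The per-VLAN L3 work for the readdressing is small but repetitive. Assuming the SVIs land on the N7K pair (addresses, VLAN, and HSRP numbers here are invented), each new server subnet needs something like:

feature interface-vlan
feature hsrp

vlan 200
  name servers-readdressed

! New default gateway for the readdressed subnet (HSRP VIP shared by the pair)
interface vlan 200
  no shutdown
  ip address 10.200.0.2/24
  hsrp 200
    priority 110
    ip 10.200.0.1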

regards

Thanks guys,

I definitely have to read more in detail about FabricPath! I thought it was used for remote DC interconnection only.

@Onanis: "It really depends on your objective" => Well, the objective is mainly to introduce a new technology that offers a lot of potential (loop-free topology, increased bandwidth utilization, cabling simplification, etc.) at the price of a complex implementation.

Honestly, I wonder if these benefits are real, because vPC has a lot of caveats that add complexity (the configuration may be technically simple, but the complexity could result in unexpected behaviours and more bug exposure for sure).

Marcofbbr wrote:

I definitely have to read more in detail about FabricPath! I thought it was used for remote DC interconnection only.

That's OTV. FP is basically "L2 routing". Check this for a starter.

Loosely put, FabricPath replaces spanning tree in the DC. It takes care of the L2 topology without needing to block links or ports, and it allows multi-pathing. That fancy layer 2 routing makes forwarding of frames on the network efficient.

As for benefits, take vPC for instance: there is a real need for natively supported multi-chassis EtherChannel. Off the bat you improve resilience in the network. A distribution switch connected to two cores via vPC means you can lose a core device and your layer 2 network won't go belly up. Dropping and inserting links is painless; without vPC, this would cause outages due to spanning tree re-convergence. As you start to unpack the possibilities that vPC offers, you quickly see that it makes operations smoother and more flexible. The impact of most outages is reduced significantly when operating in a dual-homed active/active scenario.

Yes, there are caveats; that is why it is important to build it right from day 1. For all its benefits, the biggest challenge with vPC is total box failure of the primary vPC box; the recovery time for this failure is rather long (around 5 minutes). Addressing things like power redundancy, multiple peer-link members, etc. will ensure you have a stable solution, i.e., do what is necessary so the foreseeable does not bite you.
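To be concrete about "building it right from day 1", these are the vPC domain knobs I mean (the values are examples only; verify each command against your platform and NX-OS version):

vpc domain 10
  ! Keepalive over mgmt0, never over the peer link
  peer-keepalive destination 192.168.1.2 source 192.168.1.1 vrf management
  ! Bring vPCs up after a reload even if the peer never comes back
  auto-recovery
  ! Forward traffic that arrives addressed to the peer's gateway MAC
  peer-gateway
  ! Hold vPC legs down after a reload until the control plane converges
  delay restore 150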

The way I see it, the right failure on the right box is always going to be expensive, regardless of the solution you implement.
