Replace C3750G stacks by Nexus 2000?

parisdooz12
Level 1

Hi,

We have C3750G stacks in our datacenter. As some of the models will go out of support in 2014, I would like to evaluate replacing them with Nexus 2000 (connected to Nexus 5000). At first glance, I thought this could be a good option to save money, but the fact that the N2K doesn't support stacking can make the solution much more expensive than I thought, especially because of the 10G SFP modules and interfaces on the Nexus 5K (and that's without counting the optical fiber that may need to be deployed). Note that we have used 24-port switches up to now.

I would like to have your feedback about Nexus 2K:

- How did you deploy them: top of rack, end/middle of row, or centralized? (We currently have a centralized design with C3750X stacks.)

- Did you place the Nexus 5K at end/middle of row, or centralized?

- Did you make a cost comparison between a C3750X stack and N2K, in terms of equipment & cabling?

Thanks in advance

P.

10 Replies

Surya ARBY
Level 4

Various topologies:

- For a big customer: the POD is made of two rows; N5K in the middle of the first row; N2K top-of-rack in all the other racks in the POD.

- For a mid-sized customer: several 5Ks at the end of the room (in the last row, which is in fact empty); N2Ks as middle-of-row in each row.

About the cost of optical modules: don't use 10G-SR; there are bundles available with the 10G FET (proprietary USR) optics.

Leo Laohoo
Hall of Fame
but the fact that the N2K doesn't support stacking

Yes, you are correct.  But there are rumours coming from the halls of Cisco of the words "Nexus 2K", "stacking" and "PoE".  *wink*, *wink*

How did you deploy them: top of rack, end/middle of row, or centralized? (We currently have a centralized design with C3750X stacks.)

To manage cabling costs, deploy the 2K as top-of-rack (ToR).

Did you place the Nexus 5K at end/middle of row, or centralized?

Depends on the length of your "row" and whether you've invested in a fibre optic tray.  Basically I don't want fibre optic patch cables to be intertwined with the heavier Cat5/6 cables.  With that being said, we've come to a consensus to deploy the 5K/5500 at the centre of every row, with fibre optic cable runs going left and right.

Some people, though, have followed Cisco's recommendation of end-of-row.

Did you make a cost comparison between a C3750X stack and N2K, in terms of equipment & cabling?

Cost has nothing to do with our solution.  Our solution is based around what is being put into the DC.  We are seeing an explosion of servers being put in that are, to put it bluntly, "spoiled".  These servers do NOT want oversubscription.  1 Gbps and 10 Gbps are their speeds of choice, so a 2K (1 Gbps only) and 5500 (1 and 10 Gbps) are a natural choice.

To compare a stacked 3750X solution to a Nexus 5K/2K solution would be unfair.  One of the things the 5K got from the 3750X is the "stacking" feature.  You can have, say, 10 cabinets in a row, each cabinet with one Nexus 2K, and you manage all your 2Ks from the Nexus 5K (see the config sketch below).  This alone is unique.  You can't get a 3750X to span this far; you would need plenty of copper and fibre optic patch cables.

Heck, I don't even encourage anyone stacking 6 or more 3750s.
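
To illustrate what "manage all your 2Ks from the Nexus 5K" looks like in practice, here is a minimal NX-OS sketch of associating one FEX with a parent N5K. The FEX number, interface numbers and VLAN are made-up examples for illustration, not values from this thread:

! Enable FEX support on the parent N5K
feature fex

! Declare the FEX (the N2K itself holds no configuration of its own)
fex 100
  description "Rack-01 ToR N2K"

! Fabric uplinks from the N5K towards the N2K, bundled into a port-channel
interface Ethernet1/1-2
  switchport mode fex-fabric
  fex associate 100
  channel-group 100

interface port-channel100
  switchport mode fex-fabric
  fex associate 100

! Once the FEX comes online, its host ports appear on the N5K as Ethernet100/1/x
interface Ethernet100/1/1
  description server01-nic1
  switchport access vlan 10

All of this lives on (and is upgraded and backed up from) the N5K, which is what makes the 2K behave like a remote line card rather than a separate switch to manage.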

Thanks for your feedback.

Being able to manage all the N2Ks from the N5K is indeed a very good thing.

My hesitation comes from the fact that we already have the C3750Xs centralized in 3 racks, so all the copper cabling arrives there. Such a topology was a very bad idea but, unfortunately, the cabling is already in place, so moving to a top-of-rack (or middle-of-row) topology will carry an additional cost. Oversubscription is not really a problem for us (8:1 is really sufficient).

So I now need to weigh "network management simplification" against "less cabling work".

Just one question: when you say some servers do not want oversubscription, is that a real need, or a client requirement? I ask because, up to now, the only big bandwidth needs I have seen on servers were for backup or synchronization, never for client traffic or inter-server traffic.

Just one question: when you say some servers do not want oversubscription, is that a real need, or a client requirement?

SAN servers don't like oversubscription.

The Nexus solution was developed to overcome this.

For iSCSI the Nexus 5K product line is a poor solution (a lot of drops when polarization occurs on the storage array connection), especially the Nexus 2K; the buffers are too small. That's why Cisco developed the Nexus 2248TP-E with a shared-memory buffer.

Previous best practice for iSCSI-based networks was to use the 4900M or 4948, with large shared memory as buffer space.

Being a cut-through switch doesn't mean that you don't need large buffers.

SAN servers don't like oversubscription. 

The Nexus solution was developed to overcome this.


OK. Our SAN has used dedicated equipment up to now, managed by the server team. I have big hesitations about using the N5K to connect FC, as it means sharing SAN design responsibilities between the network and server teams, which could make administration less easy.

What is your feedback about SAN on Nexus? Do you use FCoE?

If you keep the N5K and the FEX within the same rack you could use inexpensive Twinax cables. If rack space permits, you could place them in the same rack as the current 3750s; this would save you a lot of recabling while decommissioning the old 3750s.

Kevin Turner
Level 1

Just one thing to keep in mind (if you're not already aware): the N2K doesn't switch traffic itself. All traffic needs to go to the upstream 5K or 7K, even if the traffic just needs to go from port 1 to port 2 on the 2K. Just something to keep in mind (as this would not have been an issue with the 3750s) if you do a lot of server-to-server traffic in the same rack. I know a few people who have been caught out by this and have gone the 4948 option. It all depends on the traffic flow.

We use 5Ks a fair bit for FCoE (FC to the SAN and then FCoE to the blades, either IBM using N4K, or UCS). If you're worried about administration, from memory you can set up a SAN admin role on the 5K which will allow the SAN admins to look after the SAN without having access to the Ethernet side of things. There is no FC support in the 2K range, but the 2232PP will do FCoE; depending on your servers you may need to install CNAs to make use of this. The other option is to do all the FC on the 5K and connect straight to the HBA on the server.
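
As a rough sketch of both points above (assuming an N5K/5500 with the FC/FCoE licence), this is roughly what a server-facing FCoE port and a restricted SAN-admin role look like. The VLAN/VSAN numbers, interface numbers and the role definition are illustrative assumptions, not settings from this thread; check "show role feature" for the exact feature names your NX-OS release supports:

! FCoE: map a dedicated VLAN to the FCoE VSAN
feature fcoe
vlan 1002
  fcoe vsan 100
vsan database
  vsan 100

! Server-facing port towards a CNA: Ethernet trunk plus a bound virtual FC interface
interface Ethernet1/10
  switchport mode trunk
  switchport trunk allowed vlan 1,1002
  spanning-tree port type edge trunk

interface vfc10
  bind interface Ethernet1/10
  no shutdown

vsan database
  vsan 100 interface vfc10

! Restrict the SAN team to the storage features so they never touch the Ethernet side
role name SAN-TEAM
  rule 3 permit read-write feature vsan
  rule 2 permit read-write feature zone
  rule 1 permit read
username san-team-user password <password> role SAN-TEAM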

Thanks for this information. What's your feedback about this topology, using FCoE on the N5K and with the N4K?

As far as I know it works well (unfortunately most of the implementations were done before my time, so I can't give you first-hand experience). The 4K is for IBM blades only, so if you have HP or Dell then there's not much you can do there. I believe the 4Ks are a FIP snooping bridge and effectively just pass the traffic up to the 5K (don't quote me on this though).
