
10GbE switching recommendation

keithsauer507
Level 5

Looking to upgrade our VMware storage network from 3750G 1Gbps links to something with 10GbE support.

Looking to add Intel X540-T2 10GbE adapters to 4 servers and connect via Cat 6a to a 10GbE switch in the same rack, which also connects to an EMC VNX via 10GbE. This presents storage to the VMware hosts via NFS datastores. Anyone have a product recommendation?

Thanks.

Sent from Cisco Technical Support iPhone App

9 Replies

Reza Sharifi
Hall of Fame

Since you want to go to 10 Gig, and if you have the budget, I recommend the Nexus 5500 series for storage connectivity. They are low-latency switches designed for data centers, and with VMs and storage you need all the speed you can get. The 5500 is also capable of doing FC in the same switch if you get the 5548UP version.

See data sheet:

http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/data_sheet_c78-618603.html
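
If it helps, here is a minimal NX-OS sketch of what a dedicated storage-VLAN port on a 5548 could look like. The VLAN and interface numbers are placeholders, not anything from your setup, and note that jumbo frames on the 5500 are enabled system-wide through a network-qos policy rather than per interface:

    ! Hypothetical example; VLAN 100 and Ethernet1/1 are assumptions.
    vlan 100
      name NFS-Storage
    interface Ethernet1/1
      description ESXi host 10GbE (twinax or fiber SFP+)
      switchport mode access
      switchport access vlan 100
      spanning-tree port type edge
    ! Jumbo frames (useful for NFS/iSCSI) are set globally:
    policy-map type network-qos jumbo
      class type network-qos class-default
        mtu 9216
    system qos
      service-policy type network-qos jumbo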

HTH

keithsauer507
Level 5

Ok looks nice.

It seems pretty modular, so in this case I would get the 12-port module with Cat 6a / Cat 7 support. Then I can cable my servers to this, and use the active twinax SFP+ modules required by the EMC VNX (one for each storage processor).

Seems pricey, especially since I usually prefer to run TWO of them and use the second interface on each device to a second switch for fault tolerance. However, each system already has working 1GbE interfaces, so perhaps we just fail over to those interfaces on the 3750G. Then, the following year, as budget allows, purchase another Nexus.

This really is entirely for storage access, which can be presented as iSCSI or NFS. We do not have FC storage at this time.

Sent from Cisco Technical Support iPhone App

Use the Nexus 5K series switches with SFP-T modules; then there is no need to add an FC card in the server.

Please rate if you find this post useful.

Looks like when you buy a Nexus, a 5548 for example, you get 32 SFP+ ports.  But the Intel X540 10GbE adapters have an RJ-45 connector on them for use with Cat 6a or Cat 7 cable, and I don't see any SFP+ module that has an RJ-45 jack on it.  I'm only seeing twinax (active and passive) and the multitude of fiber LC connectors for short range or long range.

What kind of server adapter would I need to connect this up?  Is there anything that can do 10GbE over RJ-45 copper Cat 6a/Cat 7 cable for short distances?

paolo bevilacqua
Hall of Fame

Nexus switches are expensive. You can look at the Catalyst 4500-X, with which you can start with 8 ports and then grow.

Wow, that's a good recommendation. It looks like they make 10GBASE-T RJ-45 SFP+ modules to connect servers over Cat 6a/7. Then I can also use active twinax SFP+ to connect the EMC VNX storage array.

Is it stackable? Is it still advisable to run two for failover (failover interfaces are available on both the VMware servers and the EMC VNX)? Or just get a passive twinax SFP+ to connect them together?

I'd be using it purely for IP storage connectivity, and will likely keep the existing 3750G / 3560X for LAN to core.

Sent from Cisco Technical Support iPad App

The 4500-X series can form a VSS pair, so you can cross-connect and run LACP etc.
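
For what it's worth, here is a minimal IOS-XE sketch of how the VSS pairing is formed on two 4500-X switches. The domain number, port-channel number, and VSL ports below are assumptions for illustration only:

    ! On the first switch; the second uses "switch 2", a different
    ! port-channel number, and "switch virtual link 2".
    switch virtual domain 100
     switch 1
    !
    interface port-channel 63
     switchport
     switch virtual link 1
     no shutdown
    !
    interface range TenGigabitEthernet1/15 - 16
     channel-group 63 mode on
     no shutdown
    ! Finally, on both switches: switch convert mode virtual
    ! (each switch reloads and comes up as one logical VSS chassis)

Once the pair is up, a cross-chassis LACP bundle is built the usual way, with "channel-group N mode active" on one interface from each chassis.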

The 4500-X doesn't have the low latency of the Nexus. You only mention a requirement for 10GbE connectivity. Are you looking for improved latency as well? Will the 4500-X's latency be sufficient for any future requirements you have planned?

Well, 10GbE alone will be a big improvement over 1GbE, I would say.  Looking at the traffic graphs of the current 3750G, I'm not really utilizing all of 1GbE.  It's just that I plan to add more VMware host servers, and more servers are going to need to access that shared storage.  Even if dual 1GbE trunks to the current shared storage are OK today, if I add a few more servers I want to improve that backbone to the storage.

I'm not sure how much latency would differ between a 4500-X and a Nexus 5548P, for example.  Is the cost difference really justifiable?

We don't have FC storage; it's all IP-based, mostly iSCSI or NFS.  I'm just looking to widen the pipe to that storage.  Of course I want the best of the best, so I will propose four options, get quotes, and see which one is approved.

Option 1 - Best performance: isolated 10GbE SAN and vMotion traffic on a Cisco Nexus.

Option 2 - Lowest cost: add another 3560X, add a C3KX-NM-10G to both 3560Xs, and use those 10G ports for the storage, keeping the VMware hosts on traditional 1GbE port trunks across the two 3560Xs.  The uplink to the core is 4 x 1Gbps (2 on each switch).  Reuse the 3750G somewhere else, since it doesn't have any expansion bays.

Option 3 - Replace the 3560X and the 3750G with two 3750Xs, stack them, and add the C3KX-NM-10G to both, again using the 10G links just for the shared storage, while traditional port trunks go to the VMware hosts and the core switch infrastructure (a rough config sketch follows this list).

Option 4 - Add a Cisco 4500-X switch isolated solely for storage and vMotion traffic.  Same as Option 1 above, but with this model instead of a Nexus: SFP+ Cat 6a RJ-45 to the servers, active twinax SFP+ to the EMC VNX shared storage.

Everything is contained within one standard rack.
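
To make Option 3 concrete, here is a rough sketch of what the stacked 3750-X pair could look like. The VLAN and interface numbers are placeholders, and jumbo MTU on this platform is a global setting that takes effect after a reload:

    ! Hypothetical Option 3 sketch; VLAN 100 and port numbers are assumptions.
    system mtu jumbo 9000
    vlan 100
     name Storage
    ! One 10G link per VNX storage processor, one per stack member:
    interface TenGigabitEthernet1/1/1
     description EMC VNX SP-A
     switchport mode access
     switchport access vlan 100
    interface TenGigabitEthernet2/1/1
     description EMC VNX SP-B
     switchport mode access
     switchport access vlan 100
    ! Hosts stay on 1GbE access/trunk ports spread across both stack members.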

Well, going over the BOM for each option above, the only way I think we could get a Nexus is if we buy a refurbished one.

I think what would most likely happen is Option 2 above.  At least the storage would be on 10GbE, and maybe in the following years we can slowly migrate to the other solutions.  Option 2 just adds an additional 3560X switch.  I know they are not stackable, but maybe that's a good thing: one bad switch won't take out the entire stack.

It's just a shame that on the C3KX-NM-10G expansion module, if you use an SFP+ 10Gig module, the SFP port next to it becomes disabled, even though each expansion module has 4 ports.  So you can run 2 SFPs plus 1 SFP+, or 2 SFP+, or 4 SFPs; a design oversight, in my opinion.  So I guess on one 3560X we would run an SFP+ to the primary storage processor of the EMC VNX, and on the other 3560X an SFP+ to the failover storage processor.  Two fiber SFPs on each 3560X would tie back into the core LAN switch.

Ports for storage on their own VLAN of course.
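
As a hypothetical sketch of that layout, each 3560X would end up looking something like the below. VLAN and port numbers are assumptions, and which two SFP ports remain usable depends on which port-group the SFP+ occupies, per the limitation above:

    ! One of the two 3560X switches; the other mirrors this toward VNX SP-B.
    vlan 100
     name Storage
    interface TenGigabitEthernet1/1/1
     description SFP+ to EMC VNX SP-A
     switchport mode access
     switchport access vlan 100
    interface range GigabitEthernet1/1/1 - 2
     description Fiber SFP uplinks to the core LAN switch
     switchport mode trunk
     ! keep the storage VLAN off the core-facing trunks so it stays local
     switchport trunk allowed vlan remove 100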
