
Data Center Network Design

WILLIAM STEGMAN
Level 4

I'm looking at a couple of options for a small network in a data center, and I seem to be getting hung up on all the different choices.  One option I'm looking at is end of row, using both 2960S and Blade Center chassis switches: each physical server dual-homed into a 2960S, each ESX server dual-homed into a blade switch, and each switch with a Layer 2 10Gb uplink (20Gb total with EtherChannel) to one of two 4900Ms.  The 4900Ms would then have a Layer 2 link between them to accommodate VLANs that span the access layer switches.  This would be an inverted-U topology.  That's simple enough, and maybe that is where I should leave it, but the now-available stacking feature of the 2960S has me wondering whether there is another option with dual-homing a stack.  Is there such a beast?  Would it be better to stack 2960Ss, or even 3750s, so that each end of row with two redundant switches appears as one logical stack, and then uplink that stack to an aggregation multilayer switch such as a pair of 4900Ms?  Or might that limit me to keeping VLANs within a stack and end of row?

thank you,

Bill

1 Accepted Solution

Accepted Solutions

Collin Clark
VIP Alumni

Hi Bill-

First, I personally would not use the 2960S in the data center, no matter the size. That switch was purpose-built for user access and has some limitations. Also, what you need to accomplish will determine your design. I recently did a design similar to what you are describing. We ended up putting 3750-Xs at the top of rack as a stack. This allows a cross-stack EtherChannel to your servers with both server NICs active. From there we uplinked to a pair of 6509s in VSS. From a Layer 2 point of view this is about as simple as it gets: one switch connected to another switch connected to a server. No spanning tree! If you can't afford stackable switches, you may want to look at routing at the top of rack. However, you will lose the ability to move VLANs between racks, you'll have to rely on server NIC software for active/passive links, and VM mobility could be limited.
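To make the stack side of that design concrete, here is a minimal IOS sketch of a cross-stack EtherChannel on a 3750-X stack. The interface numbers, VLAN, and port-channel ID are hypothetical, and LACP (`mode active`) on the server NICs is assumed:

```
! One member port on each switch in the stack, both going
! to the same dual-homed server
interface GigabitEthernet1/0/1
 switchport mode access
 switchport access vlan 10
 channel-group 1 mode active
!
interface GigabitEthernet2/0/1
 switchport mode access
 switchport access vlan 10
 channel-group 1 mode active
!
! Logical bundle the server sees as a single link
interface Port-channel1
 switchport mode access
 switchport access vlan 10
```

Because the two member ports live on different physical switches in the stack, either switch can fail and the server keeps an active link with no spanning-tree reconvergence.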


6 Replies


Thanks Collin.  How do you see blade switches fitting into that design?  Would the two redundant blade switches use a looped triangle, so that each blade switch had active uplinks to each 6509 and the ESX servers used NIC teaming across those blade switches?  Or does VSS have some other remedy for active/active uplinks?

I was a bit confused on that. Are you using Cisco Blade switches? Pass-thru's? I'd mention that other vendor, but Cisco probably has a filter for those two letters :-) Are your ESX servers the blades or another server chassis?

I have pass-throughs on one chassis, a couple of 4-port Cisco switches on a second chassis, and two 10Gb switches on a third chassis.  The ESX servers are in all three chassis.

All of our blades were pass-thrus and they had their own 3750-X at top of rack. I would do the same for yours, except for the 10Gb switches, which I would probably connect to end of row or the distribution layer.

Thanks Collin.  After doing some research, I do like this design. Besides the VSS benefits from access layer to distribution/core, I like that I can use MEC from the 6500s to another multilayer switch at the edge that could host VLANs with ports leading to the WAN and Internet.  Lots of options.
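For reference, the uplink side of that design would be a Multichassis EtherChannel (MEC) on the 6500 VSS pair. A minimal sketch, assuming one 10Gb link from each VSS chassis and hypothetical interface numbers, VLANs, and port-channel ID:

```
! In VSS, interfaces are named chassis/slot/port, so the two
! members below sit on different physical chassis
interface TenGigabitEthernet1/1/1
 channel-group 20 mode active
!
interface TenGigabitEthernet2/1/1
 channel-group 20 mode active
!
! Downstream devices see one logical trunk to one logical switch
interface Port-channel20
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 10,20
```

Since both members are active in one bundle, there is no blocked link and no spanning-tree topology change if a chassis fails.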
