
Suggestion for Data Center Switch Upgrade

nsateam01
Level 1

Hi Everyone,

 

Our company has grown from a small business to a medium-large company with just over 500 employees. I came on board early this year, and my first project was to migrate to MPLS and get traffic flowing into our Data Center; that project is now complete.

My next project is to add redundant routing paths to the DC. Currently we have no redundancy from a route/switch perspective in our Data Center. We use a single Catalyst 3650 as our core switch, and this has gotten us by so far, but since we are adding another switch, we are upgrading to a more scalable core. The SAN is running on two 3650s as well.
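
To illustrate what I mean by redundant routing paths: the goal is a pair of core switches sharing a virtual gateway, roughly like the sketch below. The VLAN, addresses, and priority are just placeholders, and the syntax shown is NX-OS-style HSRP (on an IOS box like the 3650 the equivalent keyword is "standby"), so treat it as a sketch of the design rather than a config we have settled on.

  ! First-hop redundancy sketch for a two-switch core
  ! (VLAN, addresses, and priority are placeholders)
  feature hsrp
  feature interface-vlan

  interface Vlan100
    description Example server-facing gateway VLAN
    ip address 10.0.100.2/24
    ! HSRP group 100: 10.0.100.1 is the virtual gateway shared by both cores
    hsrp 100
      preempt
      priority 110
      ip 10.0.100.1

  ! The second core runs the same group with its own address (e.g. 10.0.100.3/24)
  ! and the default priority, so it takes over if this switch fails.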

We have been in discussion with our Cisco sales team, and they are of course recommending the Nexus platform: a pair of 9372s and two FEXs (2Ks) for the SAN. However, as many already know, they are costly. So I have been asked to find a middle ground between a scalable, highly available design and a cheaper price tag.
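
For reference, my understanding of the high-availability piece of that proposal is a vPC pair between the two 9372s, roughly along these lines. The port-channel numbers, member ports, and keepalive addresses are placeholders I picked for the sketch, not anything our rep has actually specced.

  ! vPC sketch for the proposed pair of 9372s (mirrored on both switches)
  feature lacp
  feature vpc

  vpc domain 10
    ! keepalive runs over the dedicated mgmt0 ports
    peer-keepalive destination 192.168.0.2 source 192.168.0.1 vrf management

  ! Peer-link between the two 9372s
  interface ethernet 1/49-50
    channel-group 10 mode active

  interface port-channel10
    switchport mode trunk
    vpc peer-link

  ! Each dual-homed downstream device (server or FEX) then gets its own vPC
  interface port-channel20
    switchport mode trunk
    vpc 20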

I have looked at pricing for the Catalyst 4500, and it does not get much better than the Nexus as far as price goes.

Any other options out there that you have put into production? Is anybody using switches in their data center that are not Nexus?

 

Thanks!

 

10 Replies

Leo Laohoo
Hall of Fame
A pair of 9372s and two FEXs (2Ks) for the SAN.

Nexus 9K for 500 employees???? Wow, talk about overkill.

 

The question that needs asking, however, is how many "servers" there are now and how many there will be in, say, the next 5 years.

 

For management purposes, I wouldn't go with the 4500R+E option but rather the 6807-XL and 6800IA switches. I know that Catalyst switches are not really suited to DC work, but this recommendation will work if there are not a lot of high-speed servers. The Catalyst switches can "survive" DC work with a few QoS tweaks.

 

I was also thinking this was overkill the first time it was suggested, but now I have bought into it.

We currently run about 45-50 VM hosts and honestly probably won't add many in the next 5 years. However, our employee base is growing rapidly, which will drive additional traffic into the DC. And let me also note we are still running 1G on our servers; no 10/40G NICs.

At a glance, the 6807-XL looks pretty expensive too. I think for the cost of the blades we would need, the 9Ks and the 2K FEXs would still come out cheaper.

So, is there anything between these platforms? There does not appear to be a middle ground between a branch-grade switch and a data center platform for a mid-size business.

I am not sure why your Cisco sales rep is suggesting both the 9Ks and FEXs, as the Nexus 9Ks can be stand-alone switches and have enough copper and fiber ports to support the number of servers you have.

Reza, the FEXs would be for the SAN. The SAN currently occupies two 24-port 3650 switches, but we need to move up to 48 ports and keep the SAN redundant. The pair of 9Ks would be for our production servers and network equipment.
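
As I understand it, the SAN-facing 2Ks would hang off the 9K parents something like the sketch below. The FEX number, uplink ports, and storage VLAN are placeholders, and whether a given 2K model is supported by the 9372's software release is something we would still confirm with the rep.

  feature fex

  ! Fabric uplinks from the parent 9K to FEX 101
  interface ethernet 1/47-48
    switchport mode fex-fabric
    fex associate 101
    channel-group 101

  interface port-channel101
    switchport mode fex-fabric
    fex associate 101

  ! Once the FEX comes online, its host ports appear on the parent as Ethernet101/1/x
  interface ethernet 101/1/1
    description Example SAN-facing port
    switchport access vlan 200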

I see; that makes sense. Both the 9300s and the FEXs are 1RU devices, so really you need 4RU of rack/cabinet space for everything. Just make sure you check the amount of buffer on the FEXs you are purchasing. I know that the 2232 FEX comes with only 4 Mb of buffer, and we ran into issues with storage and had to replace them.
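
Once it is in, a quick way to keep an eye on that is to watch the drop counters on the FEX host ports facing the storage; something like the following (the interface numbers are just examples, and exactly which counters are exposed varies by FEX model and software release):

  show fex
  show queuing interface ethernet 101/1/1
  show interface ethernet 101/1/1 counters errors

If the output discards on those ports keep climbing under storage load, that is usually the shallow buffer showing up.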

Leo,

The 7700 is way overkill for what he is trying to do, given the number of devices and, of course, the limited budget.

Thanks,

Reza

 

 

Thanks for the information, Reza. I will keep that in mind about the buffer.

 

So I guess, unless we wanted to just go with the highest-end branch-grade platform, the Nexus is our best bet? Nothing in between?

For servers, storage, VMs, etc., Nexus is your best choice. They are designed for lower latency and higher throughput than regular campus switches, so they fit the needs of most data centers. If the organization is growing, this is a good investment.

The Nexus series is also slowly making its way into campus environments. I see more people starting to use them in the distribution/core of their networks.

We currently run about 45-50 VM hosts and honestly probably won't add many in the next 5 years. However, our employee base is growing rapidly, which will drive additional traffic into the DC. And let me also note we are still running 1G on our servers; no 10/40G NICs.

OK, with this statement, Nexus is the way to go, but not a 9K. A Nexus 7700 at the core, with Nexus 5K/6K at distribution and Nexus 2K at access, is ideal.

 

As a matter of fact, a single 7700 chassis with dual supervisor engines would be sufficient. There's really no need for a dual-chassis 7700 unless there's tons of money to burn.

Thanks Leo. Appreciate the feedback and advice!

So, what if 10+ RU is not an option? We have limited space in our racks, and what is appealing about the 9Ks is that they are 1RU.

Additionally, when I spoke with our reps, we were told that 9Ks built out this way would come out cheaper than the 5K or 7K.

 

Additionally, when I spoke with our reps, we were told that 9Ks built out this way would come out cheaper than the 5K or 7K.

Do your maths. The Nexus 9K is not designed for 100 servers; it's designed for very, very large enterprise DC environments.

 

Don't just do the maths on the chassis but on everything: chassis, supervisor cards, line cards, maintenance support, etc. Compare and you'll see the difference.
