12-29-2009 05:09 PM - edited 03-06-2019 09:06 AM
Hello,
I'm spec'ing out a new 4506-E with a SUP 6-E for a LAN upgrade. This will be a core switch in the server room, where there are 20 servers. As I understand it, Cisco recommends the Core/Distribution design approach where the servers would be connected to their own switch, say a 3750 and then uplinked to the 4506. Is there any reason beyond design recommendations that I wouldn't want to add a 24 or 48 port gigabit line card in the 4506 and connect the servers directly? I'm trying to justify costs and understand the implications of doing this. Any feedback is welcome.
Thanks,
Shawn
12-29-2009 06:34 PM
Horses for courses, I would say. In my previous experience, our switches, particularly our core switch, were located in a rack far away from the servers, and cabling each rack of servers to the core switch can be a very messy affair. However, if you have a switch in each rack and, say, fibre-optic cable from the server racks to your core switch, it makes things a lot easier and neater, doesn't it?
If you want to keep costs low, consider a Catalyst 2350 top-of-rack Layer 2 switch if you don't need the Layer 3 capabilities (or StackWise stacking) of a 3750. The 2350 has 10 Gigabit fiber uplinks and also supports TwinGig converters. The Sup6-E also supports 10Gb line cards (do consider the oversubscription rate of each line card).
I personally prefer employing a dedicated switch for the servers, uplinked to a core/distro switch.
NOTE: Cisco has a number of promos to help new clients. One of them involves a swap-out of your non-Cisco switches and wireless APs for a US$50.00 discount on a one-to-one purchase. You may want to talk to an authorized reseller to find out more.
12-29-2009 11:14 PM
Connecting the servers to different blades reduces single points of failure, and you can easily expand in the future.
There could be many reasons for either approach; we would need some more information about the topology to discuss this further.
12-30-2009 01:35 AM
Shawn
So are the servers only singly connected? I ask because most designs dual-home the servers and team the NICs, and this is one of the reasons you have separate switches for the servers, i.e. you have a pair of switches connected via an L2 trunk, you dual-home the servers to these switches, and you run HSRP/GLBP. Having a pair of dedicated switches allows you to keep the L2 trunk off the core switches, i.e. you can interconnect the core switches with an L3 routed link.
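As a rough sketch of what the HSRP side of that looks like on the paired server switches (VLAN number, group number, and addresses are hypothetical):

```
! Switch A - active gateway for the server VLAN
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 standby 10 ip 10.1.10.1
 standby 10 priority 110
 standby 10 preempt
!
! Switch B - standby gateway, default priority 100
interface Vlan10
 ip address 10.1.10.3 255.255.255.0
 standby 10 ip 10.1.10.1
```

The servers all point their default gateway at the virtual address (10.1.10.1 here), so a failure of either switch is transparent to them.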
It's all about fault isolation, really. If you are only singly homing your servers into the 4506 and the 4506 only has one supervisor, you have two single points of failure, so in that case you really get no benefit from using dedicated switches. For your scenario I would say by all means connect them into the 4506 and spread them across the modules within the 4500.
For a more resilient scenario you could look at having two 3750s in the core and either
1) dual-home the servers to each switch
or
2) have a pair of L2-only switches for the servers, dual-home the servers to these switches, and then uplink the switches with cross-stack EtherChannels to the 3750s.
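For option 2, assuming the two core 3750s are stacked with StackWise, the core side of one server switch's uplink pair might look like this (interface and channel-group numbers are hypothetical):

```
! On the 3750 stack: one uplink from the server switch lands on each stack member
interface GigabitEthernet1/0/1
 channel-group 1 mode active
interface GigabitEthernet2/0/1
 channel-group 1 mode active
!
interface Port-channel1
 switchport mode trunk
```

Note that older 3750 software may require "channel-group 1 mode on" (static, no LACP) for cross-stack bundles, so check the release notes for your IOS version.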
Without knowing more about the requirements it's difficult to be more specific.
Jon
12-30-2009 05:04 AM
I agree with these gentlemen: before looking at redundancy for the servers, look at your core switch. If you are installing a single Cisco Catalyst 4500 switch with a single supervisor blade, the benefit of adding edge-layer switches is minimal and you won't notice the results.
If you are looking to provide high availability, your first option is a second supervisor blade in the Cisco Catalyst 4500 for standby functions, and then at least two Gigabit Ethernet blades in the chassis that your servers are dual-homed to. A downside of this design is that if a supervisor blade fails, failover can take a little time, and any issue with the chassis itself will still cause an outage.
The second option is to purchase two Cisco Catalyst 4500 series switches with a single supervisor blade in each and use EIGRP as the routing protocol. EIGRP can typically recover from a failure (say, one of the Catalyst 4500s dropping) faster than a failover to a secondary supervisor card. You would then use HSRP or a similar protocol and dual-home your servers, via Ethernet blades, to both Catalyst 4500 switches.
The last option, which offers the best class of service, is dual Cisco Catalyst 4500s with a single supervisor blade in each, using EIGRP, with some Cisco 3750 series switches as your access layer.
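A minimal sketch of the routed interconnect for the dual-chassis options, on one of the 4500s (AS number, interface, and addressing are hypothetical; the second chassis mirrors this with its own address):

```
! Routed point-to-point link between the two 4500s instead of an L2 trunk
interface TenGigabitEthernet1/1
 no switchport
 ip address 10.0.0.1 255.255.255.252
!
router eigrp 100
 network 10.0.0.0 0.255.255.255
 passive-interface default
 no passive-interface TenGigabitEthernet1/1
```

With both chassis advertising the server subnets into EIGRP, loss of one chassis is handled by routing reconvergence rather than supervisor failover.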
Here are some great Cisco design guides that cover what I mentioned above:
Campus High Availability
http://www.cisco.com/en/US/solutions/ns340/ns414/ns742/ns815/landing_cHi_availability.html
Cisco Overall Campus Design
http://www.cisco.com/en/US/solutions/ns340/ns414/ns742/ns815/landing_cOverall_design.html
12-30-2009 11:05 AM
Thanks for the comments everyone, very useful.
Redundancy is definitely important and the servers have dual nics so I want to take advantage of that. Looking at the comments, I'm going to scope out two options:
1. Single 4507R-E with dual SUP 6-Es and one WS-X4506-GB-T card. Two 3750s, teamed server NICs with one NIC connected to each switch, and dual uplinks to the 4507 in an EtherChannel.
2. Dual 4506-Es using HSRP, with a single SUP 6-E and one WS-X4506-GB-T card in each. Same 3750 and server config, except using all four uplink ports on the switches to connect dual EtherChannels, one to each 4506.
Feedback is welcome as always.
Thanks,
Shawn
12-30-2009 11:16 AM
With your first option, I would recommend two WS-X4506-GB-T cards and running the EtherChannels down to the two 3750s from separate cards.
This will limit your single point of failure only to the chassis itself.
The second option looks good, can't think of anything additional to add.
12-30-2009 12:47 PM
Shawn
If redundancy is important then option 2 is the better design. If you are not using the L3 features of the 3750s, though, you may be able to reduce costs by using L2 switches for the servers; it really depends on the throughput needed and any additional features.
As for connectivity between the switches, it depends on what else is connecting into the 4500 switches. If you connect the 4500s via an L3 connection then you can use both EtherChannel uplinks from the 3750s. If you use an L2 trunk then you can alternate VLANs across the uplinks, assuming you have more than one server VLAN.
Jon
12-30-2009 12:57 PM
Good point on L2 vs. L3. The servers are in a single VLAN and we're going to be restricting access via ACL, so an L3 design will be used.
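For what it's worth, restricting access to the server VLAN at L3 might look something like this on the 4500 SVI (the VLAN, subnets, and permitted service are all hypothetical placeholders):

```
! Only allow HTTPS from the user subnet into the server VLAN
ip access-list extended SERVER-VLAN-IN
 permit tcp 10.1.20.0 0.0.0.255 10.1.10.0 0.0.0.255 eq 443
 deny   ip any any log
!
interface Vlan10
 ip address 10.1.10.1 255.255.255.0
 ip access-group SERVER-VLAN-IN out
```

Applying the ACL outbound on the server SVI filters everything routed toward the servers regardless of which VLAN it came from; the same policy could instead be applied inbound on the user-facing SVIs.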