12-27-2013 09:02 AM - edited 03-03-2019 07:15 AM
Hi guys,
Just wondering: which is best practice? Should the server farm connect into the distribution switches, or directly to the core switches?
I understand that different articles state different methodologies, but from what I see in Cisco network designs, the server farm is always connected to the distribution switches.
What other factors should be considered when connecting to the distribution, and when should connecting directly to the core be considered?
Thanks
12-27-2013 11:16 AM
Best practice is to build a server farm block consisting of top-of-rack switches that are uplinked to distribution switches, which then connect to the core. IMO servers should almost never be connected to a core. A dedicated pair of switches can be purchased relatively cheaply and can terminate the server connections. Connecting servers straight to the core is really a collapsed core design, where the core acts as both the core and the distribution. Breaking the server connectivity out to dedicated switching allows for more flexibility, security, and scalability in the overall design.
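To put a rough number on that uplink trade-off, here is a minimal oversubscription sketch (Python, with made-up port counts and speeds; substitute your own):

```python
# Rough uplink-oversubscription arithmetic for a dedicated server access block.
# The port counts and speeds below are hypothetical examples, only meant to show
# the trade-off: a dedicated server switch is cheap, but its uplinks to the
# distribution layer become the potential bottleneck.

def oversubscription_ratio(edge_ports: int, edge_speed_gbps: float,
                           uplinks: int, uplink_speed_gbps: float) -> float:
    """Worst-case ratio of attached server bandwidth to uplink bandwidth."""
    return (edge_ports * edge_speed_gbps) / (uplinks * uplink_speed_gbps)

if __name__ == "__main__":
    # Example: 48 gig-attached servers on a top-of-rack switch with 2 x 10G uplinks.
    ratio = oversubscription_ratio(edge_ports=48, edge_speed_gbps=1,
                                   uplinks=2, uplink_speed_gbps=10)
    print(f"ToR uplink oversubscription: {ratio:.1f}:1")  # prints 2.4:1
```

Whether a ratio like that is acceptable depends entirely on how busy the servers actually are.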
12-30-2013 10:16 AM
For a 3 tier design, the "book's" best practice would be as Collin describes (except perhaps the server switches don't actually need to be ToR).
However, the principal reason for a 3 tier design is scalability, especially when we only had low port density (slow) routers and access hubs. Such 3 tier designs are still valid for modern equipment when the network is massive, but modern (fast) L3 switches often allow much more compressed designs, such as a collapsed core/distribution 2 tier design. In such a design, again the "book's" best practice would be to have the servers on their own access layer, but if there aren't that many servers, I attach them to the core/distribution device.
For example, one of the sites I support is a call center with four 6509s. Three of the 6509s are running L3 with dual sup32s and seven 96-port 10/100 PoE (classic bus) cards for users (pretty much fully populated). The remaining 6509, which has dual gig Etherchannel to the three user 6509s, has dual sup720s with seven CEF720 cards: two 6708s, two 6724-SFPs, and three copper 6748s hosting servers (pretty much fully populated). These four 6509s, hosting about 2,000 users and about 150 servers, work just fine.
12-30-2013 07:53 PM
Hi Collin and Joseph,
Thanks for your valuable feedback.
This is the puzzling part for me, because some of the designs I have seen in the Borderless Networks material connect the data center and the rest of the network architecture to the core, while others connect it to the distribution.
Whatever the case, I think there is no right or wrong in the design; different requirements produce different designs. Am I right?
I will probably try to stick with the traditional design you both mentioned, but will attach the Internet to the core behind a few tiers of firewalls.
By the way, what is the best practice for a multi-tiered firewall design?
Thanks.
12-31-2013 04:29 AM
Anthony
I haven't seen the design guides you are referring to, but in a multi-site design you may well connect the DC to the core. It's important to distinguish between a single building and a multi-site (campus) setup.
The 3 tier design doesn't work that well in a single building, i.e. a building that is standalone or connected to a WAN at much lower bandwidth. In such a setup, as Joseph mentioned, a collapsed core is often used, i.e. the same pair of switches is used for both functions. This makes sense because what real value would a dedicated core provide? If traffic is routing between vlans, it does this on the distro switches, and the only time it routes anywhere else is to the WAN, and you do not need high-speed core switches for that.
A campus setup is where the 3 tier design is more applicable, i.e. multiple buildings interconnected with high-speed links. Then each site would have distribution switches for routing between vlans in that building, and there would be a high-speed core (usually L3 switches) to route between those sites. In this setup, if the DC is also connected via high-speed links, you would attach it to the core as well, but you would still only connect the DC's distro switches back to the core. The core itself should be left to L3 switch between the sites, i.e. no servers directly attached to the core.
I have also done the same as Joseph and connected servers to a collapsed pair of distro/core switches, primarily because of cost but also because your core/distro switches tend to have the greater throughput, so they are a logical place to put them. In addition, because the servers are on the core/distro switches, you do not have to worry about oversubscribing uplinks from a different pair of switches, although there might still be oversubscription on the core/distro switches themselves.
It really comes down to how many servers you need to connect.
As for multi-tiered firewall designs, the two key elements I have come across are:
1) Use a different vendor for each firewall. This provides additional security in case one vendor's firewall has a bug, etc.
2) Dual-home your servers so each server has one NIC facing the external firewall and one NIC facing the internal firewall. Generally speaking you would disable routing between the NICs, so you manage the servers via their internal-facing NIC, and from the Internet access is only allowed to the external-facing NIC (a quick check for this is sketched below).
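For illustration only, and assuming the dual-homed servers are Linux hosts (the thread doesn't say what they run), the "disable routing between the NICs" part largely comes down to leaving kernel IP forwarding off. A quick sanity check might look like this:

```python
# Illustrative check for a dual-homed Linux server (hypothetical setup): if IP
# forwarding is enabled, the host could forward packets between its external-facing
# and internal-facing NICs, effectively bypassing the firewall tiers.
from pathlib import Path

def ip_forwarding_enabled() -> bool:
    """Return True if the kernel will forward IPv4 packets between interfaces."""
    return Path("/proc/sys/net/ipv4/ip_forward").read_text().strip() == "1"

if __name__ == "__main__":
    if ip_forwarding_enabled():
        print("WARNING: IP forwarding is on; this host can bridge the firewall tiers.")
    else:
        print("OK: IP forwarding is off; the host will not route between its NICs.")
```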
Jon
12-31-2013 11:23 AM
Jon, very nice reply. Thanks for joining this thread.
"This makes sense because what real value would a dedicated core provide? If traffic is routing between vlans, it does this on the distro switches, and the only time it routes anywhere else is to the WAN, and you do not need high-speed core switches for that."
Yep, in my cited example, the 4th 6509 uses its 6708s for two off-site 10g links.
"I have also done the same as Joseph and connected servers to a collapsed pair of distro/core switches, primarily because of cost but also because your core/distro switches tend to have the greater throughput, so they are a logical place to put them. In addition, because the servers are on the core/distro switches, you do not have to worry about oversubscribing uplinks from a different pair of switches, although there might still be oversubscription on the core/distro switches themselves."
Re: cost, again, yep, why buy another box?
Re: greater throughput, also again, yep; for example, note the 4th box is all CEF720, i.e. all fabric, vs. classic bus in the user 6509s.
Re: oversubscription, and again, yep; the 4th 6509's server 6748 cards are 40 Gbps to the fabric, vs. gig or even 10g uplinks from a separate server switch.
Jon, one reason I so enjoyed reading that you've done similar, for similar reasons: late last year the business unit came to me and said they wanted a separate dual core to increase reliability (as call centers are considered critical). I noted that, yes, adding a second "core" chassis (vs. the single chassis with redundant everything else) would reduce expected downtime for the off-site links by about 2 hours a year (expensively, IMO, for those 2 hours, but as they are footing the bill, who am I to say no), but that I didn't see any advantage to adding another "core" box vs. continuing to use the existing box as a collapsed core. Well, I got overruled, because you just can't share a "core" device for anything else.
Unfortunately, I think it's a case of someone reading design guides that say "core" devices do X, and therefore you can never do otherwise. Again, I very much enjoyed reading that someone else doesn't always follow the 3 tier model to the letter.
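For anyone curious where a figure like "about 2 hours a year" comes from, the back-of-the-envelope arithmetic looks something like the sketch below. The availability number is a purely illustrative assumption (not a measured or vendor figure for these chassis), and it assumes independent failures and perfect failover:

```python
# Back-of-the-envelope availability arithmetic with assumed numbers, only to show
# the shape of the calculation, not to state real figures for any chassis.

HOURS_PER_YEAR = 8760

def annual_downtime_hours(availability: float) -> float:
    """Expected downtime per year for a component with the given availability."""
    return (1 - availability) * HOURS_PER_YEAR

# Assume a single, internally redundant chassis is ~99.97% available (hypothetical).
single_chassis = 0.9997

# Two independent chassis in parallel are down only when both are down at once.
dual_chassis = 1 - (1 - single_chassis) ** 2

saved = annual_downtime_hours(single_chassis) - annual_downtime_hours(dual_chassis)
print(f"Single chassis: {annual_downtime_hours(single_chassis):.2f} h/yr expected downtime")
print(f"Dual chassis:   {annual_downtime_hours(dual_chassis):.4f} h/yr expected downtime")
print(f"Downtime avoided by the second chassis: ~{saved:.1f} h/yr")
```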
01-01-2014 06:00 PM
Hi Jon,
Thanks for your valuable input in this thread.
What would be the criteria for choosing which firewall vendor will face the external network and which will face the internal network?
Thanks.
01-02-2014 04:37 AM
Anthony
Entirely up to you. Generally speaking you want stateful firewalls (as opposed to application firewalls) for general traffic, as these are more efficient in terms of packet processing. However, it's been a while since I looked at new firewalls, so there may be application firewalls that perform just as well.
It's more to do with the support and features you need. Decide what you need and where (in terms of which firewall), and then select accordingly.
Jon
01-02-2014 04:43 AM
Joseph
Thanks for the response.
To be honest, I think the collapsed core is a lot more common than a dedicated L3 core nowadays, although that is just from my experience. Apart from the campus setup, where you may need dedicated switches, the only other place I can think of needing it is a DC where you end up with multiple pairs of distribution switches and need a core pair to route between them.
As you quite rightly say, due to the increased performance of modern L3 switches, the 3 tier design is more applicable to very large networks. Otherwise it is very hard to justify purchasing a dedicated pair of high-performance switches which would, in all likelihood, be underutilised.
Jon