01-09-2012 09:10 AM - edited 03-07-2019 04:14 AM
Hi,
This is a long one I guess...
For a multi building (10 buildings) campus which has a few remote sites...
Currently all L2 at the campus...
Is it the right move to have a single dedicated core area (by this I mean a single physical area hosting one or two core devices)? At the moment I have a few L3-capable devices, but the vast majority are L2 (2960s). The campus is set up so that approximately 60% of the buildings connect to everything through one main building, and 40% connect through another. We have two datacenter areas (one in the 40% building and one in another building)...
I am trying to establish whether it is best to have a single core with all routing performed there, or to have L3 routing happen locally in each large building, with L2 in any smaller buildings (fewer than 50 users), because I do not have many L3 devices...
The other issue is that if I did have a single core area, it is hard to know where to place it geographically...
I could place it in the area with the most building connections, but this area only hosts VOIP servers
I could place it in the area with the 40% connections, this has general servers and Internet access
I could place it in the area with the second datacenter area, this has general servers and is to act as a backup to live environment
Can I use HSRP between two separate buildings connected by fibre? Is this not recommended? What happens if the link breaks?
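For reference, HSRP across two buildings is just two SVIs sharing a virtual IP. A minimal sketch, assuming a hypothetical VLAN 10 and example addressing (all names and addresses here are illustrative, not from the original post):

```
! Core switch in building A (preferred active)
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 standby 10 ip 10.1.10.1
 standby 10 priority 110
 standby 10 preempt
!
! Core switch in building B (standby)
interface Vlan10
 ip address 10.1.10.3 255.255.255.0
 standby 10 ip 10.1.10.1
 standby 10 priority 90
 standby 10 preempt
```

The catch with a single inter-building fibre is exactly the "what if the link breaks" worry: if the two peers can no longer hear each other's hellos, both will go active on the same virtual IP (split-brain), so redundant paths between the buildings are strongly advised before stretching HSRP across them.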
The drawback of the centralized core approach for me is that no matter where I put it, if I have issues I will lose something, be it VOIP, the servers, or DR.
Whereas if I go for some kind of distributed approach, if one area is down I only lose one service?
Can anyone comment on my ideas?
01-09-2012 10:41 AM
There are pros and cons to either side of your post (specifically, physical separation vs a centralized area). I have been involved with data center builds, corporate builds and retrofits. Essentially, you have to take each and every consideration into account and weigh it against the effect on the uptime/availability of your network... Let me expand on your specific items:
Centralized vs Distributed routing:
I could say with very good certainty that most admins/engineers would recommend a centralized routing setup WITH a redundant router in case of a disaster. In any event, it does not necessarily matter if your distribution switches are L2 or L3, all of the routing should be handled at the core. Essentially, all of your network routing should be handled at a centralized area, so you can make routing changes as needed in one place and *BOOM* the entire network sees the changes -- having to do that across multiple L3 switches spread throughout a campus would be an administrative nightmare.
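To illustrate the "all routing at the core" point: the access/distribution switches stay L2 and trunk everything up, while the core holds the SVI (default gateway) for every user VLAN. A sketch with made-up VLANs and addresses (purely illustrative):

```
! Core switch: gateways for all user VLANs live here
ip routing
!
interface Vlan20
 description Users-Building-A
 ip address 10.1.20.1 255.255.255.0
!
interface Vlan30
 description Users-Building-B
 ip address 10.1.30.1 255.255.255.0
```

With this model, adding a subnet or changing routing policy is a one-box change at the core, which is the administrative win being described.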
Physical Location:
Your CORE switching should be somewhere central to the physical termination of your network connections, assuming [of course] you have redundant power/HVAC/UPS there (if not - it doesn't matter, put it where the redundancy is). The key reason for centrally locating it is that you want a PHYSICAL end-to-end connection to your distribution gear, not switch cascaded to switch cascaded to switch, etc, etc, etc.
You should then install your EDGE routing in your demarc/server area (I'm assuming they'll probably be close, if not together) and run new redundant fiber between CORE and EDGE....
In a perfect world, it would be really nice to stick your CORE and EDGE in the datacenter, but again, having a direct link to your switches is REALLY important to keeping things simple and operational.
Redundancy:
All of your distribution switches should use some form of EtherChannel link aggregation so you have physical-link redundancy between CORE and DISTRO....
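A minimal EtherChannel sketch, assuming two hypothetical uplink ports bundled with LACP (interface names are examples only):

```
! Distribution switch: bundle two uplinks to the core into one logical link
interface range GigabitEthernet1/0/1 - 2
 channel-group 1 mode active    ! "active" = negotiate via LACP
!
interface Port-channel1
 switchport mode trunk
```

The same two-port bundle would be mirrored on the core side; losing one fibre then only halves bandwidth instead of dropping the link.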
Same premise goes between CORE and EDGE....
I've designed campus networks, WAN networks, etc. I can easily help you with this if you give me a layout of where everything is located and the physical connectivity between locations... MSG me for more information...
Thanks,
Sean Brown
01-09-2012 02:25 PM
Actually now you have a collapsed core+distribution+access.
1. The two buildings (the 40% one and the 60% one) should be the two core locations; routing should be enabled between them, but no firewalls.
2. To these buildings you should connect the datacenters; a firewall is recommended there.
3. All remote buildings, I would say, are collapsed access+distribution: one layer-3 device (a 3560/3750 rather than a 2960) in every remote location, with routing enabled between the collapsed access+distribution and the core.
4. Keep centralized DNS/Dhcp servers.
5. Keep in mind: no single point of failure, but no more than 2 redundant links/devices, to control costs.
This will help you:
1. Adding new remote locations: the design is the same every time - layer 2 inside the location, routed to the core.
2. Users will usually change desks within one building, and having many subnets causes firewall changes when they move from one subnet to another, so keep one subnet per site.
3. If you add more datacenters, just connect them to core.
4. You will know where users are located by their subnet.
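The "one subnet per site, routed to core" pattern in the list above could look like the following on a remote-building L3 switch. This is a sketch with invented addresses and interface names, assuming a /30 point-to-point uplink and a default route toward the core:

```
! Remote-building 3560/3750: one local user VLAN, routed uplink to core
ip routing
!
interface Vlan100
 description Local-users
 ip address 10.5.100.1 255.255.255.0
!
interface GigabitEthernet0/1
 description Uplink-to-core
 no switchport
 ip address 10.0.0.2 255.255.255.252
!
ip route 0.0.0.0 0.0.0.0 10.0.0.1
```

Every new site repeats this template with its own subnet, which is what makes additions (point 1 above) and per-subnet user tracking (point 4) straightforward.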
01-10-2012 06:05 AM
Hi,
I have two very different responses! Currently it is correct that really I have a collapsed core + distribution + access in the three main areas, and simply distribution + access in the others.
My initial thought was to use L3 devices in the three main areas (the 60% building, the 40% building, and the second datacenter area) and link them in a triangle, running a routing protocol between them.
Everything else (All other buildings and remote sites) would simply use L2 (Due to lack of L3 devices) to link to one of the three points of the triangle...
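For what it's worth, the triangle described above would typically be glued together with something like OSPF on each of the three L3 switches. A hedged sketch, with example subnets that are assumptions rather than anything from the thread:

```
! Each of the three L3 switches: advertise its inter-building links
! and its local user subnets into a single OSPF area
router ospf 1
 network 10.0.1.0 0.0.0.3 area 0      ! p2p link to 60% building (example)
 network 10.0.2.0 0.0.0.3 area 0      ! p2p link to 2nd datacenter (example)
 network 10.1.0.0 0.0.255.255 area 0  ! local user subnets (example range)
```

The appeal of the triangle is that losing any one inter-building link still leaves a path through the third corner, which OSPF reconverges around automatically.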
My issue with this approach is that it seems radically different from what others do, and that makes me nervous that there are big issues with it!