Best Routing Protocol? Hub and Spoke / Collapsed Core

mjensen323
Level 1

Hello,

We have a collapsed core design in a K-12 school district. The district has a datacenter currently serving 8 locations from a 3750-X stack, with 1 gig fiber to the sites and 10 gig to the blade chassis / Nexus / Nimble storage. All computing / storage / wireless controllers / filtering, etc. are located in the datacenter and shared out to all locations. Currently, all the VLANs are located on the stack with their respective gateways, ip routing is enabled, and the protocol is NSF aware. Each location is connected with a trunk port, and none of the locations are set up as layer 3 connections, although they could be; all of the locations could be layer 3 switches if necessary. Some locations have only a single data closet with 1-3 switches; some have 2 closets with 2-4 switches. The entire network is 100% Cisco. Everything is a complete hub / spoke / star design. There are no redundant links because of the geographic locations.
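For reference, here is a rough sketch of what that current collapsed-core setup looks like on the stack; the VLAN numbers, interface names, and addressing below are placeholders, not the actual config:

! DC 3750-X stack: SVIs/gateways live here, ip routing is on, branches get a pure L2 trunk
ip routing
!
interface Vlan10
 description Branch-1 data (gateway at the DC)
 ip address 10.1.10.1 255.255.255.0
!
interface Vlan11
 description Branch-1 voice (gateway at the DC)
 ip address 10.1.11.1 255.255.255.0
!
interface GigabitEthernet1/0/1
 description 1G fiber uplink to Branch-1 (layer 2 only today)
 switchport trunk encapsulation dot1q
 switchport mode trunk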

Is this the most efficient design? Would there be an increase in speed by migrating the routing to EIGRP? All of the links back to the datacenter are owned, not leased or ISP links. There are no routers.

This summer, there are plans to migrate the 3750-X stack to a 4500-X and bring the links from 1G to 10/20G based on location size. I had planned to purchase the Enterprise licensing to run EIGRP from the datacenter distribution layer, and then use the base licensing on the 3560/3750s at the head-end locations as EIGRP stubs. Would this make more sense, or is the way things are set up a better design?
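If it helps the discussion, here is a hedged sketch of what the DC side of that plan could look like on the 4500-X; the EIGRP AS number, interface, and addressing are made-up placeholders:

! 4500-X distribution: each branch uplink becomes a routed point-to-point link
ip routing
!
interface TenGigabitEthernet1/1
 description Routed 10G uplink to Branch-1
 no switchport
 ip address 10.255.1.1 255.255.255.252
!
router eigrp 100
 network 10.0.0.0
 passive-interface default
 no passive-interface TenGigabitEthernet1/1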

Thanks in advance.

Matt

7 Replies

Philip D'Ath
VIP Alumni

Will there be an increase in speed migrating to EIGRP? No. The choice of routing protocol will not affect throughput.

Over the next 3 years, is the number of sites likely to increase?

With a layer 2 domain that contains a hundred-odd ports as well as the uplink, you do risk the uplink being saturated by unicast flooding, broadcast storms, etc. So there is a reasonable argument for moving to layer 3 links to the DC.

I think my final choice would depend on the likely network size/growth over the next 3 years.

The number of sites will likely not increase. 

Each location has between 25-75 access points and, over the next 3 years, could have between 600-1500+ wireless clients via a mixture of 2702, 3502, and 3702 access points back to 5508 controllers at the DC, and out to the internet from the DC.

We are more or less trying to "future proof" the design. A lot of the access layer switches will be removed, and only PoE+ switches powering APs and phones will remain.

In that case, definitely convert to using layer 3 links to the sites. That is way too many client devices to put into a single layer 2 domain over a layer 2 trunk port. I would personally use EIGRP to make the addition (or removal) of VLANs much easier.

You are likely to need multiple VLANs at each site as well with that many users.

Your current plan sounds like the way to go to me.
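To illustrate that point about EIGRP, here is a rough branch-side sketch of the stub plan described above (3560/3750 on IP Base); the AS number, ports, and addressing are placeholders. Once the uplink is routed and the SVIs live locally, adding or removing a VLAN at a site is just an SVI change that the network statement picks up, with no trunk or DC changes:

! Branch 3560/3750 (IP Base) as an EIGRP stub -- values are examples only,
! and interface naming varies by model (Gi0/x on a 3560, Gi1/0/x on a 3750)
ip routing
!
interface GigabitEthernet1/0/48
 description Routed uplink to DC 4500-X
 no switchport
 ip address 10.255.1.2 255.255.255.252
!
interface Vlan10
 description Branch-1 data -- gateway now lives at the branch
 ip address 10.1.10.1 255.255.255.0
!
interface Vlan11
 description Branch-1 voice -- gateway now lives at the branch
 ip address 10.1.11.1 255.255.255.0
!
router eigrp 100
 network 10.0.0.0
 eigrp stub connected summary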

Joseph W. Doherty
Hall of Fame

Do you span VLANs across remote sites? Do the remote sites have multiple VLANs, and if they do, is there (much) traffic between those VLANs? If the answers are no, there would be very little immediate benefit to making the links to the branches L3.

For your future-proofing, making your branch links L3 might be somewhat better; but if you're concerned about an L2 design being "bad", it's much less of a concern if it (physically) mimics a Nexus FEX or 6xxx Catalyst IA design.

Replacing your 3750-X with a "better" L3 switch is likely "good", as the 3750-X is a bit light for a distribution/core role.  The 4500-X is a good choice, as might be a 6840.

As to moving your branch links from gig to 10g, what does your current utilization look like?

You mention moving to 20g. What's that, dual links? If so, perhaps a better investment than moving to 10g would be to move to dual gig links that support additional redundancy.
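One way to read "dual gig links that support additional redundancy", assuming the uplinks stay as trunks for now, is to bundle the two 1G strands into an LACP EtherChannel so a single optic or strand failure doesn't drop the site; the interface and port-channel numbers below are placeholders, and the same config would be mirrored on the branch switch:

! DC side: two 1G members bundled into one logical trunk
interface range GigabitEthernet1/0/1 - 2
 description Dual 1G uplinks to Branch-1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 1 mode active
!
interface Port-channel1
 switchport trunk encapsulation dot1q
 switchport mode trunk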

As Philip has already noted, EIGRP alone won't improve speed, although depending on the L2 topology, an L3 topology can respond to some topology changes faster and/or take better advantage of redundancy.

Yes, the VLANs span across to the remote sites via the 1 gig links. Each remote site has 2 VLANs (which all originate at the DC, for voice / data). The WLC SSIDs also each originate at the DC and span down to the remote sites via CAPWAP tunnels.

Current utilization is 20-30%, but will increase since device allocation will multiply by 4x-20x within 3 years.

20g would be 2 x 10g over 4-strand single-mode.

To be clear, are a site's two VLANs unique to it and the DC, or are the same two VLANs found across multiple (or all) of your branch sites? If you're spanning VLANs across multiple branches, it would be better if you did not.

From reading your reply to Philip, do you expect most of your growth to be via wireless? If so, the WLAPs might partially limit the bandwidth growth you expect from the increased number of clients. That said, there's nothing wrong with moving to 10g; fortunately, it's much less expensive than it once was.

If you do want to support additional network redundancy, I would suggest moving your branches to L3.  Although 4500-Xs support VSS, you're probably better served by L3.

A site's VLANs are unique to it. For example, the DC contains VLANs 10, 11, 20, 21, 30, 31 at origination, but branch 1 contains only 10 for data and 11 for voice (although all of them display in "show vlan"), branch 2 is 20 / 21, and so on.

All of the growth will be wireless. All PoE switches are being upgraded to support 10G uplinks to the DC. That way, when APs are hosting 20-30 devices each (switches will have 15-25 APs each), the uplinks should be able to handle the oversubscription.