1810 Views · 30 Helpful · 13 Replies

Tiered Core, Dist, Access layer - SVI configuration.

CiscoBrownBelt
Level 6

So, per this architecture, do most usually create all SVIs on the Dist and all VLANs on the Access switches? And which SVIs, if any, do most put on the Core layer, or what is the best way to distribute traffic from the Cores to the Dist layer?


Reza Sharifi
Hall of Fame

Core, Dist, Access is an old design from the 90s. Most networks now have just 2 tiers (access and core), as there is no longer a need for distribution switches. Now, if you already have a 3-tier design and there is not much you can do about it, it is a good idea to have the SVIs for all VLANs on the distro and layer-3 transit links between distro and core. This reduces the size of your spanning-tree domain and lowers the possibility of bringing the whole network down due to a loop.

HTH
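To make that layout concrete, here is a minimal IOS-style sketch of an SVI on the distro with a routed (L3) transit link up to the core; all interface numbers, VLAN IDs, and addresses are hypothetical:

```
! Distribution switch: SVI acts as the gateway for VLAN 10
interface Vlan10
 ip address 10.1.10.2 255.255.255.0

! Routed transit link toward the core -- "no switchport" means
! no VLANs and no spanning tree ride this uplink
interface GigabitEthernet1/0/1
 no switchport
 ip address 10.0.0.2 255.255.255.252
```

Because the distro-to-core link is routed, an L2 loop at the access layer stays contained below the distro.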

Awesome, thanks! No, this particular project is a lab.

What if, let's say, you run out of connections on the Cores and want to add more switches? Should you just choose a couple of switches at the Access layer and connect the new switches to both of them for redundancy, rather than connecting every new switch to all the access-layer switches?

"What if, let's say, you run out of connections on the Cores and want to add more switches?"

So, when you design a network, you would account for cases where you may need more switches or need to connect other devices directly to the core. Usually, you want to leave 25 to 40% room for growth so the core switches can last you at least 5 years or more. It all depends on the business need and how often the company does lifecycle replacements.

HTH

Yes, but let's say a particular place did not. What is the best approach? Connect the new switches to the access layer, add the applicable VLANs, and point the DG to the Cores?
Also, it is best to hardcode the Cores as the root bridge for all VLANs on the Cores, correct?
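For reference, pinning the root bridge is usually done with the root macro rather than a raw priority value; a sketch assuming the cores carry VLANs 1-100:

```
! Primary core: the macro lowers the bridge priority so this
! switch wins the root election for the listed VLANs
spanning-tree vlan 1-100 root primary

! On the second core, configure the backup root:
! spanning-tree vlan 1-100 root secondary
```

Aligning the STP root with the HSRP active router keeps L2 and L3 forwarding on the same box.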

Joseph W. Doherty
Hall of Fame
In a classic 3 tier design, usually core<>distribution is L3. Traditionally, distribution<>access was L2, but today you might find it L3 too.

As to having "all" VLANs on access switches (even assuming that distribution<>access is L2), you try to restrict VLANs as much as possible. Ideally (with wire-speed L3 switches), the same VLAN doesn't span across a distribution device to more than one logical access switch path.

BTW, Reza is likely correct: most networks today are likely 2 tier (core/distro and access), but large (1,000+) or critical LAN sites will likely be 3 tier, either due to physical equipment limitations or to avoid having L2 in one "core/distro" device.

Yes, so I will be doing HSRP SVIs on the cores; the access switches will have SVIs/VLANs for whatever hosts are connected, and the access switches point to the Cores as the GW.
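A minimal sketch of an HSRP SVI pair on the cores, with hypothetical addressing (the .1 is the virtual gateway the access hosts would point at):

```
! Core 1 -- HSRP active for VLAN 20
interface Vlan20
 ip address 10.1.20.2 255.255.255.0
 standby 20 ip 10.1.20.1
 standby 20 priority 110
 standby 20 preempt

! Core 2 would use the same standby group and virtual IP,
! with its own real address (e.g. 10.1.20.3) and the
! default priority of 100
```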

If I am out of ports on the Cores and need to add switches to the access layer, should I just put all needed layer-2 VLANs on these switches and trunk them to just 2 access switches for redundancy?

Don't connect the access switches together especially since you are routing at the access layer.

Just make sure the core switches are large enough to handle more access switches (room for growth).

HTH

Thanks, but what if you have no more ports for growth on the Cores but the access switches do? What is the best way to configure that?

Is it better just to put all SVIs on the core and have the access layer carry VLANs pointing to the core as the default GW? Or is it OK to put SVIs on access switches for the subnets whose hosts need to talk to each other, even though not all of those subnets leave those particular access switches?

"Yes, so I will be doing HSRP SVIs on the cores; the access switches will have SVIs/VLANs for whatever hosts are connected, and the access switches point to the Cores as the GW."

Could you clarify that statement? If the SVIs are on the cores, with HSRP, and you're routing on the edge access switches, why are you using HSRP on the cores?

"If I am out of ports on the Cores and need to add switches to the access layer, should I just put all needed layer-2 VLANs on these switches and trunk them to just 2 access switches for redundancy?"

Will your additional access switches be routing too? Regardless, if you're short on core ports, you can treat some existing access switches as hybrid distro/access devices. This can be done with L2 or L3, depending on whether the devices are L2 or L3 capable. If doing L2, again, ideally avoid letting the same VLAN's access ports span more than one upstream device hosting the SVI gateway.

"Could you clarify that statement? If the SVIs are on the cores, with HSRP, and you're routing on the edge access switches, why are you using HSRP on the cores?"

So the Cores will route for all subnets, but an access switch with additional SVIs (beyond the management SVI that is on all switches) will have SVIs only for the subnets whose hosts are connected to that particular pair of switches, so they do not have to go to the Core to route between their VLANs (e.g., the vPC Core pair has SVIs 1-10, and one pair of vPC access switches has SVIs 11-15 plus a DG to the Core for everything else).
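A sketch of what that access pair might look like, assuming a hypothetical local VLAN 11 and a core-facing next hop of 10.0.0.1:

```
! Access switch routes locally between its own host subnets
ip routing

interface Vlan11
 ip address 10.1.11.2 255.255.255.0

! Everything not local follows a default route to the core
ip route 0.0.0.0 0.0.0.0 10.0.0.1
```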

"Will your additional access switches be routing too? Regardless, if you're short on core ports, you can treat some existing access switches as hybrid distro/access devices. This can be done with L2 or L3, depending on whether the devices are L2 or L3 capable. If doing L2, again, ideally avoid letting the same VLAN's access ports span more than one upstream device hosting the SVI gateway."

No, they will not. I only want to send out the trunks to the access switches whatever VLANs must be routed at the SVIs on the core. Or would it be better to do SVIs for whatever is needed, like I stated above?

I'm still a bit lost in what you are asking now.

Normally you would use HSRP with vPC, also assuming the edge was only L2, like on FEXs. But you generally don't use vPC when routing between L3 devices. Also, I wouldn't consider vPC architectures classical 2 or 3 tier. (Much as a leaf/spine architecture isn't classical 2 or 3 tier either.)

If yours is a vPC question, you might want to post a new question.

No, so the cores will have almost all SVIs, and the access switches will have only mgmt SVIs and maybe a few SVIs for hosts so they do not have to go to the core to route.
Should I join the cores into a vPC domain, and then the pairs of access switches into their own vPC domains, and just trunk all the uplinks between the switches?
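If you do pair switches with vPC (noting the caveat above about vPC and routed links), a minimal NX-OS sketch; the domain ID and keepalive addresses are hypothetical:

```
feature lacp
feature vpc

vpc domain 10
  peer-keepalive destination 192.168.0.2 source 192.168.0.1

! Peer-link between the two switches in the pair
interface port-channel1
  switchport mode trunk
  vpc peer-link

! Member port-channel trunked down to an access pair
interface port-channel11
  switchport mode trunk
  vpc 11
```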