984 Views · 20 Helpful · 8 Replies
Beginner

2 locations, 2 core switch stacks, fibre in between, equal cost load balancing between?

Hi,

We've recently inherited a job that another company was doing, so we've had our hand slightly forced on the kit and overall topology involved; however, that's all fine and we can make it work.

This is a collapsed core topology with core and access switches, split over three blocks with fibre connections between them; one core switch stack is in block B and the other in block C, with access switches throughout.

They require all access switches to be connected to the core in B and the core in C, and then obviously cross-connects between the two cores.

They state:

"Core switches shall be linked with 2x 1Gbps links bonded into a standard compliant Etherchannel"

"Uplinks between access and core switches shall be non-blocking - for example equal cost load balancing at layer 3, or layer 2 bonded multi-chassis Etherchannel"

The specced kit for the core is Catalyst 3850s. In an ideal world I'd use VSS (Virtual Switching System) to satisfy the above statements beyond dispute, but that's only supported on the 4500/6500 and (as vPC) on Nexus platforms.

Do we think a cross-stack EtherChannel (LACP between both core switch stacks) would satisfy the above statements? Or perhaps the statements are just badly worded...
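For reference, here's a sketch of what a cross-stack LACP bundle looks like when both member ports terminate on one logical switch, i.e. a single 3850 stack (interface and channel-group numbers are illustrative):

```text
! Member ports on two different stack units, bundled with LACP
! (interface numbers are examples only)
interface GigabitEthernet1/0/48
 channel-group 10 mode active
interface GigabitEthernet2/0/48
 channel-group 10 mode active
!
interface Port-channel10
 switchport mode trunk
```

The open question is whether two *independent* stacks can present themselves as one logical switch to terminate such a bundle from an access switch.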

 

I look forward to your thoughts and views on this! Thanks!

1 ACCEPTED SOLUTION

Beginner

It sounds like the requirements you described were the project requirements, which the previous company apparently did not follow, because in order to achieve

"Uplinks between access and core switches shall be non-blocking"

you need at least VSS, as you said, or vPC between the core switches. 3850s are usually access-layer switches (or distribution at most), not normally used at the core layer, and therefore they do not support those technologies.

An EtherChannel between the core switches is the way to go, but it wouldn't satisfy the "non-blocking" requirement: having an L2 uplink to each core would create a redundant topology, and STP would block one of the uplinks.
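For the spec's core interconnect itself, the "2x 1Gbps bonded" link is straightforward either way; an illustrative sketch (interface numbers are assumptions):

```text
! On each core switch: bond the two 1 Gbps inter-core links with LACP
! (interface numbers are examples only)
interface GigabitEthernet1/0/1
 channel-group 1 mode active
interface GigabitEthernet1/0/2
 channel-group 1 mode active
!
interface Port-channel1
 switchport mode trunk
```

LACP ("mode active") is the standards-compliant (802.3ad/802.1AX) option the spec asks for; it's the access-to-core uplinks that remain the problem.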


8 Replies
VIP Mentor

Hello

The 3850s support StackWise, so they can become a virtual switch, and if you use Multiple Spanning Tree (MST) with two instances you can control which VLANs span which uplink and have resiliency on both.
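A minimal sketch of the two-instance MST idea; the region name, VLAN IDs, and priorities are purely illustrative, and core C would mirror the priorities so each core is root for one instance:

```text
! Same MST region definition on all switches
spanning-tree mode mst
spanning-tree mst configuration
 name CAMPUS
 revision 1
 instance 1 vlan 10,20
 instance 2 vlan 30,40
!
! On core B: root for instance 1, backup for instance 2
spanning-tree mst 1 priority 4096
spanning-tree mst 2 priority 8192
```

Each uplink then forwards for one instance's VLANs and blocks for the other's, so traffic is shared across both uplinks rather than one sitting idle.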



kind regards
Paul

Please rate and mark posts accordingly if you have found any of the information provided useful.
It will hopefully assist others with similar issues in the future
Beginner

StackWise is good... but I'm guessing these two "core" switches are in different rooms at least.

If not, my previous post is nonsense :)

VIP Mentor

Hello

I assumed the opposite! -- LOL

kind regards
Paul
Beginner

Thanks for your feedback so far guys; the switches are in different rooms (buildings in fact, but they are adjacent).

I've considered RPVST+ to run a separate STP instance per VLAN (similar in effect to MST), but we would still have some blocked ports in this case.
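For illustration, the per-VLAN load sharing I had in mind would look roughly like this (VLAN IDs and priorities are examples); note each uplink still blocks for the other half of the VLANs:

```text
! Rapid-PVST+ with the root bridge alternated per VLAN
spanning-tree mode rapid-pvst
! On core B: root for VLANs 10,20; backup for 30,40
spanning-tree vlan 10,20 priority 4096
spanning-tree vlan 30,40 priority 8192
! On core C: mirror image
! spanning-tree vlan 30,40 priority 4096
! spanning-tree vlan 10,20 priority 8192
```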

We'll probably have to bite the bullet and upgrade both core switches to support VSS (4500 series) in order to be spec compliant...

Hall of Fame Guru

Just another option, depending on VLAN placement.

If you can isolate VLANs to specific access switches, i.e. you do not need the same VLAN on more than one access switch, then you can meet both requirements by using an L3 EtherChannel link between your core switches.

Both of the uplinks from the access switches would then be forwarding because there is no L2 loop for STP to block.

And they don't specifically say the EtherChannel between the core switches can't be L3 :-)
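A sketch of such a routed EtherChannel between the cores, with illustrative interface numbers and addressing:

```text
! Member ports become routed ports before being bundled
! (interfaces and addresses are examples only)
interface GigabitEthernet1/0/1
 no switchport
 channel-group 1 mode active
interface GigabitEthernet1/0/2
 no switchport
 channel-group 1 mode active
!
interface Port-channel1
 no switchport
 ip address 10.0.0.1 255.255.255.252
```

Because the core interconnect is now a routed link, there is no L2 loop through it, so STP has nothing to block on the access uplinks.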

If you can do this, then running GLBP as your FHRP is a good choice, because you then get full utilisation of both uplinks.

If they need all VLANs on all switches, or even the same VLAN on some of the access switches, this is not a viable option.

If you could isolate the VLANs, you could also, as Joseph suggests, extend L3 to the access switches, but this depends on their capabilities. In addition, the previous suggestion gives you the flexibility of spanning a VLAN across multiple switches if needed at a later date (although there would then be blocking for those VLANs); using L3 to the access layer does not give you this option.

Edit - well, it seemed like a good idea, but checking the 3850 configuration guides, it only seems to support HSRP as an FHRP, which would mean the uplinks would not be equally used per VLAN unless you used MHSRP.
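An illustrative MHSRP arrangement, alternating the active gateway per VLAN (VLANs, addresses, and priorities are all examples):

```text
! On core B: active gateway for VLAN 10, standby for VLAN 20
interface Vlan10
 ip address 10.10.0.2 255.255.255.0
 standby 10 ip 10.10.0.1
 standby 10 priority 110
 standby 10 preempt
interface Vlan20
 ip address 10.20.0.2 255.255.255.0
 standby 20 ip 10.20.0.1
 standby 20 priority 90
! Core C mirrors this with the priorities swapped
```

Hosts in each VLAN then use whichever core is active for that VLAN, which shares load across the uplinks per VLAN rather than per flow.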

Jon

VIP Expert

Disclaimer

The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.

Liability Disclaimer

In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.

Posting

As the others have noted, the 3850s, to stack, are restricted to the length of the longest available stack cable.

As you have noted, VSS physical units would allow the "logical" unit to be far apart.

For a "small" VSS core, the 4500-X might be an ideal unit.  (Other than cost, the 4500 would be a better choice for a core device.)

Something to watch for, or understand, when running VSS: EtherChannel doesn't load balance as it does on a single chassis or stack, because VSS will avoid using the VSL cross-link unless it must.

As many access switches today support basic L3 routing, you might also determine whether an L3 edge would be a suitable alternative. It would allow retention of the 3850s and can offer some advantages even over VSS.  (That said, VSS is very nice [as is the Nexus] for supporting servers with EtherChannels.)
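A rough sketch of such an L3 edge on an access switch, with illustrative addressing and OSPF as an example IGP; both uplinks become equal-cost routed paths, so neither is ever blocked:

```text
! Routed uplinks from an access switch to each core
! (interfaces, addresses, and OSPF process are examples only)
interface GigabitEthernet1/0/47
 no switchport
 ip address 10.1.1.2 255.255.255.252   ! link to core B
interface GigabitEthernet1/0/48
 no switchport
 ip address 10.1.2.2 255.255.255.252   ! link to core C
!
router ospf 1
 network 10.1.0.0 0.0.255.255 area 0
! With equal link costs, OSPF installs both uplinks as
! equal-cost routes and traffic is shared across them
```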

Beginner

Well, we've offered them the 4500 option (to stick with their exact specs), but I've also redrawn their whole topology, suggesting how it should actually be set up, be more resilient, more cost-effective, blah blah blah.

Thanks for your help everyone; there is more than one correct answer among your responses... I'll have a quick re-read through and click on the one I found the most helpful (although there was no single solution to this).
