Beginner

SWV design guidance: using 4 switches in 2 SWV domains as two virtual core switches

I understand that only two switches can make up a SWV cluster/stack. I am working on a design that needs to ensure 100% uptime. From my general networking experience, as well as from reading VSS/SWV configuration and troubleshooting discussions, there are a number of scenarios that may come up where both stack members need to be rebooted in order to effect a change.

 

So I'm thinking that one way to better meet that requirement would be to create, say, two separate SWV stacks, each consisting of two Catalyst 9540 switches. Each stack would have its own SWV domain. I could then connect the stacks together over a very high-speed EtherChannel (say, two or more 40G links). That way, if one entire stack had to go down for maintenance, the other stack could take over during the outage.
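Something like the following is what I have in mind for the two independent stacks, assuming hypothetical domain numbers and SVL port assignments (IOS XE StackWise Virtual syntax; the actual ports and domain IDs are design choices):

```
! Stack A -- configured on both members before reloading into SWV mode
stackwise-virtual
 domain 10
!
interface FortyGigabitEthernet1/0/1
 stackwise-virtual link 1
!
! Stack B -- same structure, but a distinct domain number
stackwise-virtual
 domain 20
!
interface FortyGigabitEthernet1/0/1
 stackwise-virtual link 1
```

Each pair would then boot as its own virtual switch, and the two virtual switches would see each other as ordinary neighbors over the EtherChannel, not as stack members.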

 

Are there any special design aspects to consider with this approach? Would the best practice be to, say, run one two-port Multi-Chassis EtherChannel (MEC) from each distribution switch, landing on each stack member in SWV Domain A, and another MEC from each distribution switch similarly landing on each stack member in SWV Domain B?


15 Replies
Hall of Fame Expert

The larger issue with designing a network for 100% uptime is having the underlying redundant power and cooling infrastructure in place, not so much two switches vs. four. Do you have multiple redundant power feeds (PDUs) for your systems? Do you have a generator or UPS that can handle the load for your business requirements if there is a power outage? Do you have sufficient cooling in place?

The next step is to look at redundant physical locations for your core switches, so that you can put one switch in one location and the other in a different location.

The next layer to look at is the switching itself. Chassis-based switches are usually preferred, as they can come with redundant PSUs, fan trays, and supervisors. With four switches in place, troubleshooting and management also become cumbersome: when something goes wrong, it is harder to know where to look because you have more devices to deal with. So, consider a pair of redundant Catalyst 9600 series switches running as a VSS as an option.

 

https://www.cisco.com/c/en/us/products/collateral/switches/catalyst-9600-series-switches/nb-06-cat9600-series-data-sheet-cte-en.html

HTH

Cisco Employee

RB6502,

Yes, you could do this. As you pointed out with your proposed design:

1. To best utilize VSS/SWV, the Best Practice is to have all devices dual-connected. In your case, you would need to quad-connect them. If you do not, then you will have traffic traversing the SVL within an SWV pair, which is sub-optimal. Sub-optimal does not mean unsupported, but we would not want to design that way if it can be avoided.

2. Following on from #1, you would need to quad-home the two stacks to each other and form a 4-port EtherChannel. This is for the same reason: optimal traffic flows.
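As a rough sketch of #2 (hypothetical 40G ports; interfaces 1/0/x and 2/0/x are on different physical chassis within one SWV pair, so each chassis contributes two members, one to each chassis on the far side):

```
! On SWV pair A: four-member inter-stack bundle
interface range FortyGigabitEthernet1/0/2 - 3 , FortyGigabitEthernet2/0/2 - 3
 channel-group 10 mode active
!
interface Port-channel10
 switchport mode trunk
!
! Mirror the same four-member bundle on SWV pair B
```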

Do you already have the 9500-40X, or will you be purchasing them?

Cheers,
Scott Hodgdon

Senior Technical Marketing Engineer

Enterprise Networking Group

 


Thanks so much for your reply, Scott. I just want to clarify your response of "quad-connect", though. From each distribution switch (which in this case would be 9410s with dual sups), I'm thinking two MECs: the first MEC would terminate on each of the stack members in Domain A, and the other MEC would terminate on each stack member in Domain B, for a total of four physical links (assuming two links per MEC).

Are you instead saying that I need a pair of MECs from each distribution switch to each domain, for a total of eight physical links?
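For what it's worth, my first reading would look something like this on a distribution switch (hypothetical interface and port-channel numbers; each member of a given port-channel lands on a different chassis within its domain):

```
! MEC to SWV Domain A (one link to each chassis in Domain A)
interface range TenGigabitEthernet1/1/1 - 2
 channel-group 11 mode active
!
! MEC to SWV Domain B (one link to each chassis in Domain B)
interface range TenGigabitEthernet1/1/3 - 4
 channel-group 12 mode active
```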

The customer is quite serious about the 100% uptime requirement, and we've been directed to design to that tenet.  The building is yet to be built, and the building facility requirements will be matched to the requirements of meeting that 100% availability goal, in terms of cooling, power, rack space, fiber runs, etc.  It's a welcome change from being asked to value-engineer a design from day 1! :)

The customer has specified 95xx for the core, but nothing has been purchased yet. However, we are fairly limited in switch selection for compliance reasons: 93xx, 94xx, and 95xx only.

Accepted Solution

RB6502,

See attached diagram for explanation (sorry for the crudeness).

They should look at the 9548 instead of the 9500-40X. The price should be the same (or very close); the 9548 has more ports, which run at up to 25Gbps instead of just 10Gbps (and can also run at 1Gbps); it has more scale thanks to its 3rd-generation forwarding engine; and it has 40/100Gbps uplinks (which can be converted to 10G or 25G via adapters).

Cheers,

Scott Hodgdon

Senior Technical Marketing Engineer

Enterprise Networking Group

 



Thanks for the drawing, Scott! I'm pretty sure that the last time I used CCW to kick around various config options, it gave me a message that the 9540 was "less expensive" than the 9548, but I can certainly look again.


@Scott Hodgdon 

 

Good diagram and notes. What happens in the various failure scenarios, and what is the failure domain in each case? It would be good to hear more about this.

Will all four switches be in the same domain, or in different domains?

I saw a similar design in a Cisco Live presentation, and I have a similar scenario to deploy. Please point me to any white papers.

 

 

BB
*** Rate All Helpful Responses ***

@balaji.bandi 

Each SWV is independent of the other, and so each is its own HA / failure domain.

This is the only White Paper of which I know: https://www.cisco.com/c/dam/en/us/products/collateral/switches/catalyst-9000/nb-06-cat-9k-stack-wp-cte-en.pdf. There is also a Cisco Live session, BRKCRS-2650: Enterprise Network Next Generation High Availability, that touches on SWV HA and is delivered by the TME who covers the function for the Cat 9K family.

Cheers,
Scott Hodgdon

Senior Technical Marketing Engineer

Enterprise Networking Group


Just to be clear: the links between the two domains are (or can be) just trunks, not an SVL?

VIP Mentor

It depends on the business and on cost.

If the business can justify it, the Catalyst 9600 is my choice: it has full redundancy, and if you deploy VSS/SVL across two chassis with quad supervisors, you have redundancy at every point.

 

BB
*** Rate All Helpful Responses ***
Hall of Fame Community Legend

Check the pricing for two 9600s (including redundant power supplies and dual supervisor cards) vs. four 9500s.
I'm with Reza & Balaji on this. Aside from the multiple power supplies per 9600 chassis, the 9600 is ideal for your scenario.
It is easier to manage a single VSS pair (two 9600s, each with dual supervisors) than two VSS pairs (2 x 9500 VSS pairs).

@Leo Laohoo yes, you are correct: cost-wise it is too expensive. That is why I said that if the business can accept the cost, the Cat 9600 is my choice. (It is all a business call: the cost of downtime vs. the capital cost.)

 

 

BB
*** Rate All Helpful Responses ***

Hi Leo,

 

Can you explain how a single SWV domain (of any type of switch) protects against say, needing to change/update a Dual-Active Fast Hello link, or a situation where the customer has lost the credentials to the Core switches?  I think both of these scenarios would ultimately result in both core switches having to reboot, which would not be acceptable to the customer.  There are probably other scenarios (both known & corner cases) that would require a wholesale stack reboot, which the customer can't tolerate.  They want multiple levels of redundancy, so while a single SWV stack provides excellent redundancy, there are still cases where it falls down.  Having two SWV domains in the core seems like the best way (although very expensive and more complicated) to get as close as possible to "100% availability".
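For reference, the dual-active fast-hello link I mean is configured per interface, something like the sketch below (hypothetical port number). As I understand it, changes to SVL and dual-active-detection configuration on these platforms only take effect after a reload, which is exactly the scenario I'm worried about:

```
! Dedicated dual-active fast-hello link between the two stack members
! (directly connected, not part of the SVL)
interface TenGigabitEthernet1/0/10
 stackwise-virtual dual-active-detection
```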

I definitely appreciate the recommendations, but I need to ensure that they are appropriate to the task.

Hall of Fame Community Legend

Wait a second ... What kind of downstream clients are these that require 99.999999999% uptime?

The application is such that serious financial loss, significant equipment damage or loss of life could result from downtime.

To make it easier, let's just take the "100%" uptime as a given, since that's what the customer has asked us to design to. We've already explained the redundancy cost vs. downtime tradeoff to the customer, and they want no corners cut.

 
