02-22-2024 07:40 PM
I am going to be replacing a pair of 5520s with a pair of 9800s. My 5520s are in an N+1 design. We did this because it gives me a controller to use for testing configuration changes and OS upgrades.
I've read through the 9800 documentation, and I am leaning towards keeping this design. I'm curious what others are doing; I can't see a reason in our environment to use HA SSO.
For engineers who are using N+1, do you split your APs across the two controllers? Today I keep all APs on one controller, so the other controller just sits idle.
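For reference, my reading of the docs is that on the 9800 the N+1 preference is still set per AP, roughly like this (controller names and IPs are placeholders, and the exact syntax can vary by release):

ap name AP-FLOOR1-01 controller primary WLC-9800-A 10.10.10.11
ap name AP-FLOOR1-01 controller secondary WLC-9800-B 10.10.10.12
! confirm what the AP has been primed with
show ap name AP-FLOOR1-01 config general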
Thoughts?
02-23-2024 12:09 AM
- The drawback of N+1 is that the APs need to rejoin the backup controller when the primary goes down, so client sessions are dropped and have to reconnect.
M.
02-23-2024 01:27 AM
If you want to be able to fail over to a different site that has different subnets, then you will need to look at FlexConnect with local switching. Either way, in any N+1 scenario the configuration should be pretty much the same. This goes for WLANs, mobility, RADIUS, etc. If you had differences in config, then during a failover you would have devices that most likely fail to connect or fail when roaming. You don't want that. So even at site 1, both controllers need to be configured the same if using N+1.
It's not easy to say exactly what you need to do without really understanding the requirements and also understanding where and what each site has. Knowing more might result in a totally different design.
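To give a sense of what "the same" means here, the mobility peering and the WLAN/RADIUS definitions would be mirrored on both boxes. A rough sketch, with made-up names, MAC, and IPs (check the exact syntax against your release):

! on WLC-1, pointing at WLC-2 as a mobility peer (and the mirror image on WLC-2)
wireless mobility group name CAMPUS
wireless mobility group member mac-address f4db.e6aa.bb02 ip 10.20.0.12 group CAMPUS

The WLAN profiles, policy profiles, tags, and RADIUS server definitions should then be identical on both controllers so a failed-over client sees no difference.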
02-23-2024 06:51 AM
It's really just about your availability needs/requirements. HA-SSO gives you seamless failover; N+1 doesn't.
We have an HA-SSO pair in DC1 doing N+1 with an HA-SSO pair in DC2, with APs split across the two pairs. So all the APs have seamless switchover locally, and if we lose one DC, only half the APs have to switch to the other DC.
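If it helps anyone reading along, forming each 9800 HA-SSO pair is only a handful of commands. Roughly like this on 17.x using RMI (the VLAN, addresses, and priority are just examples, and the details differ between the 9800-CL and the appliances):

! config mode, same command on both boxes
redun-management interface Vlan 100 chassis 1 address 10.0.100.11 chassis 2 address 10.0.100.12
! exec mode on the box that will become standby: renumber it to chassis 2
chassis 1 renumber 2
! exec mode on the intended active: give it the higher priority
chassis 1 priority 2

Then reload both and they come up as one active/standby pair.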
02-23-2024 04:58 PM
I started my position here with 3 WLC HA pairs across 2 physical datacenters on the same site (each SSO pair has a chassis in DC A and another in DC B). Each WLC chassis has one uplink to a 6500 VSS in DC A and another to DC B (crisscrossed).
As we transition to 9800 WLCs and 9500 switches, we have the opportunity to change both the SSO config and the VSS (StackWise) switch topology. I wanted to keep SSO for ease of config (3 config points instead of 6) as well as seamless failover in the event of issues such as loss of uplinks, a DC blackout, or one of the VSS or WLC chassis failing. We also elected to keep the WLC fiber uplinks crisscrossed across DCs for better traffic balancing (I can't recall if it was to reduce traffic across the VSS, the upstream routers, or both).
We have plenty of fiber between each DC, making this topology feasible. The 9800s supporting fiber redundancy ports makes this even better, since the RPs no longer have to run through the VSS switches as they did on the 8540s. If we didn't have two DCs, so much fiber, and multiple WLC pairs (which helps for config testing), I might consider a different design.
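For anyone building something similar, the post-build sanity checks are quick with the standard show commands (nothing exotic, just what I'd run after the pair forms):

show chassis
show redundancy
show ap summary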