Would one PSN node be enough to deploy for 5000-6000 users?

ggenti
Level 1

A new customer wants to enable NAC using our existing ISE cluster and has asked to add a Policy Service Node (PSN) in their own datacenter as part of the cluster.

Given they have about 5,000 to 6,000 users, would one PSN be sufficient to handle the authentication load, or would it be better to deploy multiple PSNs? 


8 Replies

@ggenti you should definitely deploy two nodes for resiliency, in a small deployment model (see link below), with both nodes running the PAN, MNT and PSN personas.

https://www.cisco.com/c/en/us/td/docs/security/ise/performance_and_scalability/b_ise_perf_and_scale.html

"NAC" is too generic, as different functions can have different performance implications. Rob already posted the scaling document, which is very important for this planning. But you say there is already an ISE cluster. This likely already has PSN running. If you add a PSN node, it is typically not used exclusively for specific functions. Instead, you spread the load over all your available PSN nodes.

@Karsten Iwen We currently have a six-node deployment consisting of 2 PANs, 2 MNTs, and 2 PSNs. The client would like to add an additional PSN in their own datacenter to reduce latency, with the new node integrated into the main cluster. They plan to configure their switches to use only this new PSN. They are asking if one PSN node would be enough for their users.

> "use only this new PSN"

As already mentioned, there wouldn't be any redundancy here. You should at least configure one of the existing PSNs as a secondary RADIUS server. Other than that, a 3755/3855 should generally do the job.
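A sketch of what that failover could look like on an IOS-XE access switch (server names, IP addresses, and the shared secret below are placeholders, not values from this thread): the new local PSN is listed first in the server group, an existing central PSN second, so the switch fails over automatically if the local node goes down.

```
! Hypothetical names/addresses -- substitute your own PSN IPs and secret
radius server PSN-LOCAL-DC
 address ipv4 10.10.10.11 auth-port 1812 acct-port 1813
 key MySharedSecret
 automate-tester username radius-probe probe-on
!
radius server PSN-CENTRAL
 address ipv4 10.20.20.11 auth-port 1812 acct-port 1813
 key MySharedSecret
!
aaa group server radius ISE-GROUP
 server name PSN-LOCAL-DC
 server name PSN-CENTRAL
!
! Mark a server dead after 5 s / 3 failed tries, skip it for 10 minutes
radius-server dead-criteria time 5 tries 3
radius-server deadtime 10
```

Server order in the `aaa group` defines the failover order, and the dead-criteria/deadtime settings keep the switch from hammering an unreachable PSN on every authentication.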

@Karsten Iwen  Thank you for the feedback!!

The SNS-3815 supports up to 50,000 concurrent sessions, and its smallest NVMe drive is 960 GB, which is far more than you'd need. You can order beefier configurations if needed.

No need for the larger servers unless you have done exact calculations of RADIUS + TACACS log retention (i.e. how many days you MUST retain these logs in the ISE database). The TPS (transactions per second) is also a consideration, but if you build your network right (i.e. think about concurrent users, and when/if to use session re-auth) you might find that 6,000 endpoints are not as chatty as you think. In a wired scenario with a well-designed setup, the chattiest traffic should be RADIUS accounting interim updates.
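To put rough numbers on that chattiness, here is a back-of-envelope estimate; the re-auth and interim-update timers are assumptions for illustration, not values from this thread, so plug in your own policy settings.

```python
# Back-of-envelope steady-state RADIUS load for ~6,000 endpoints.
# Timer values below are assumptions -- adjust to your own policy.

endpoints = 6000
reauth_interval_s = 8 * 3600    # assume session re-auth every 8 hours
interim_interval_s = 15 * 60    # assume accounting interim updates every 15 min

auth_tps = endpoints / reauth_interval_s        # authentications per second
interim_tps = endpoints / interim_interval_s    # interim updates per second

print(f"steady-state auth TPS:  {auth_tps:.2f}")
print(f"accounting interim TPS: {interim_tps:.2f}")
```

Even with both combined you land in the single digits of TPS at steady state, which is why the endpoint count alone rarely stresses a single modern PSN; it's bursts (e.g. a whole building re-authenticating after an outage) that you size for.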

The 3815's AMD EPYC processor is far more powerful than anything Cisco has shipped before, and combined with NVMe storage for the I/O demands, you'd be hard pressed to see that box struggle with 5,000-6,000 concurrent endpoints. Also, I can't tell from your question whether "users" equates to "concurrent users" (i.e. peak active count)?

I have not yet seen the official performance numbers for the 3800 series on the Cisco website - they're probably still testing that.

I didn't like the 3x15 because of the single drive. I was probably slightly paranoid, given that drives really don't fail very often. Now I see that the 3855 also has only one drive for "PSN only." Perhaps I should set that objection aside.

As far as I am concerned, single drive failure in a PSN is annoying but not serious. RMA and rebuild.

In a two node deployment a single drive failure is slightly more annoying but still not serious. RMA and rebuild.

It’s quite rare for a drive with this price tag to fail; mechanical drives have far more issues than SSDs.
And the wear levelling in SSDs these days is so good that if you do the maths, you can hammer them with terabytes of data for many years. By then Cisco will have forced us off that SNS and you’ll be buying the next best thing.
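A quick version of that maths, with every figure assumed for illustration (the endurance rating and daily write volume are placeholders, not published specs for the SNS drive):

```python
# Rough SSD endurance check for a 960 GB drive.
# The DWPD rating and daily write volume are assumptions.

capacity_tb = 0.96
rated_dwpd = 1.0        # assume 1 drive-write-per-day endurance rating
warranty_years = 5

# Total rated terabytes written over the warranty period
rated_tbw = capacity_tb * rated_dwpd * 365 * warranty_years

# Assume ISE logging/replication writes ~50 GB/day to the PSN disk
daily_writes_tb = 0.05
years_to_wear_out = rated_tbw / (daily_writes_tb * 365)

print(f"rated endurance:     {rated_tbw:.0f} TBW")
print(f"years at 50 GB/day:  {years_to_wear_out:.0f}")
```

Under those assumptions the drive's rated endurance outlasts the appliance by decades, which is the point: write wear is not the failure mode to worry about on a PSN.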