Krishnan is correct. We certainly do validate performance and scale across the different appliances. I recently flagged the gap in the missing MAB results and hope to have updated numbers for ISE 2.4 soon (typically in the hundreds of auths/sec per PSN). We test to find the maximum values BEFORE any drops or excessive resource utilization occur for a given feature or method on a given platform. Separately, we test the scale and performance of a complete configuration that matches supported deployment models, with multiple services running on multiple nodes. Other perf/scale testing is done as well, but most of it falls into one of those two categories: individual feature limits versus overall deployment scale.
Another factor to consider is negative inputs. In most production deployments a node is running EAP authentication (possibly for multiple protocols) for a wide range of client types. The noise (bad auth inputs) can be extremely high and can significantly reduce the positive auth rate, since failed attempts consume the same processing resources as successful ones.
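To put a rough number on that effect, here is a back-of-envelope sketch (my own illustrative model, not how ISE actually schedules work): if a PSN can sustain some capacity of all-successful auths/sec, and a fraction of incoming attempts are noise, the sustainable positive rate drops accordingly. The `failure_cost_factor` parameter is an assumption to capture cases where a failed attempt is more expensive than a success (e.g. extra EAP round trips or retries).

```python
def effective_positive_rate(capacity, noise_fraction, failure_cost_factor=1.0):
    """Estimate positive auths/sec a node can sustain under noisy input.

    capacity            -- auths/sec the node handles when every attempt succeeds
    noise_fraction      -- fraction of incoming attempts that are bad inputs (0..1)
    failure_cost_factor -- processing cost of a failed attempt relative to a
                           successful one (assumed; >1 if failures retry/loop)
    """
    good = 1.0 - noise_fraction
    # Total work per attempt, weighted by the mix of good and bad inputs.
    work_per_attempt = good + noise_fraction * failure_cost_factor
    # Attempts/sec the node can absorb, times the fraction that are positive.
    return capacity * good / work_per_attempt

# Example: a node rated at 500 auths/sec with 40% noise and failures
# costing the same as successes sustains only 300 positive auths/sec.
print(round(effective_positive_rate(500, 0.4), 1))  # 300.0
```

If failures are costlier than successes (say `failure_cost_factor=2.0`), the positive rate drops further, which is why a high-noise environment can miss the datasheet numbers by a wide margin.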