9397 Views, 60 Helpful, 8 Replies

ISE VM disk performance requirements

A customer is asking me about ISE VM storage performance requirements. The documentation states the following:

 

"The storage system for the Cisco ISE virtual appliance requires a minimum write performance of 50 MB per second and a read performance of 300 MB per second. Deploy a storage system that meets these performance criteria and is supported by VMware server."

 

The customer, or more specifically the customer's storage team, is asking whether this is a "sustained" requirement, along with the questions below. Is there any published or best-practice information regarding specific I/O performance for ISE that I can share with them?

 

Avg Read Latency Requirement:  < ?ms

Avg Write Latency Requirement:  < ?ms

Peak Read Latency Requirement:  < ?ms

Peak Write Latency Requirement:  < ?ms

Avg IO Size: ?

Write to Read Ratio: ?

Read Throughput: 300 MB/Sec (Is this sustained?)

Write Throughput: 50 MB/Sec (Is this sustained?)

Avg Read IOPS: ?

Avg Write IOPS: ?

Peak Read IOPS: ?

Peak Write IOPS: ?

 

Would the devices function as Active/Passive or Active/Active?

Would the above workload requirements be the sum across both sites, or would these characteristics apply to each appliance running concurrently?
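
For what it's worth, the only way I can see to relate the published MB/s figures to IOPS is to assume an average IO size, which Cisco does not publish. The arithmetic below is just my own back-of-the-envelope sketch, not official guidance:

# Rough throughput-to-IOPS conversion; the 4 KB average IO size is my assumption, not a Cisco figure
write_mb_s=50
read_mb_s=300
io_size_kb=4
echo "Write IOPS ~ $(( write_mb_s * 1024 / io_size_kb ))"   # ~12,800
echo "Read IOPS  ~ $(( read_mb_s * 1024 / io_size_kb ))"    # ~76,800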


8 Replies

hslai
Cisco Employee

Hardware and Virtual Appliance Requirements mentions,

... To achieve performance and scalability comparable to the Cisco ISE hardware appliance, the virtual machine should be allocated system resources equivalent to the Cisco SNS 3515 and 3595 appliances. ...

For SNS hardware appliance specifications, see "Table 1, Product Specifications" in the Cisco Secure Network Server Data Sheet.

 

The minimum disk I/O read and write performance numbers are general guidelines for each ISE node. Yes, the storage should be able to sustain those rates with relatively low latency. However, individual ISE nodes might not always generate high I/O, depending on their personas and workloads. For example, an M&T node uses higher I/O while running long-duration reports.

Other than that, we have no other published info on this topic.
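
If the storage team wants to check whether a given datastore can hold the documented rates in a sustained way, one rough approach is a timed fio run from a scratch Linux VM on the same datastore. This is only an illustration, not a Cisco-published test; the file name, size and runtime below are placeholders:

# Illustrative only: 5-minute sustained sequential write with latency stats,
# run from a throwaway Linux VM on the same datastore (not from the ISE CLI)
fio --name=sustained-write --filename=/opt/fio_test.dat --rw=write --bs=1M \
    --size=4G --direct=1 --ioengine=libaio --runtime=300 --time_based \
    --group_reporting
# The bandwidth line answers "is 50 MB/s sustained?", and the completion latency
# percentiles give the average/peak latency figures the storage team is asking about.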

 

 

The ISE software periodically performs a disk IO check (I think it's once per hour). It just logs the result to a file, and TAC uses it as an argument in cases where there is a performance problem. I think this is a terrible idea. Considering that customers typically run more than one VM, if all of these VMs hammer away at the disk subsystem at once it's quite intrusive. You'll see the spikes in the vSphere performance graphs.

The initial performance check when installing ISE is acceptable in my opinion. But this ongoing perf testing is just not on, especially now that SSDs are becoming more prevalent. ISE should trust the figures it got during install and then move on.

The tests are performed every 3 hours, and each dd takes only about 0.033 seconds on my ISE VM.

@hslai - fair enough if it's on one VM.  One of my customers has 44 nodes and it's quite noticeable to them :)

hslai,

Thanks for your input here. I've forwarded this thread to the customer. From the storage/server team's perspective, they are just worried that if they need to build the VMs for a sustained "minimum write performance of 50 MB per second and a read performance of 300 MB per second," that would be a substantial requirement - personally, I'm not sure what that really translates to on the back end. Damien Miller's comment seems to imply that even the above throughput might not be sufficient, depending on the specifics of the deployment. The other option they're looking at is simply deploying on SNS appliances and ruling out any VM variables in ISE performance.

Thanks!

Thomas

ma.alsaffar
Level 1

Hi,

 

The IO performance is measured during the initial setup of ISE, when you start configuring the VM itself.

A test runs during the initial setup to make sure the VM meets Cisco's IO requirements.

 

The test output will look roughly like the following:

 

>>>> Measuring disk IO performance
>>>> *****************************************
>>>> Average I/O bandwidth writing to disk device: 55 MB/second
>>>> Average I/O bandwidth reading from disk device: 761 MB/second
>>>> I/O bandwidth performance within supported guidelines
>>>> Disk I/O bandwidth filesystem test, writing 300 MB to /opt:
>>>> 314572800 bytes (315 MB) copied, 6.70964 s, 46.9 MB/s
>>>> Disk I/O bandwidth filesystem read test, reading 300 MB from /opt:
>>>> 314572800 bytes (315 MB) copied, 0.0788764 s, 4.0 GB/s
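
For reference, the filesystem lines above look like ordinary dd output. If the VM or storage team wants to reproduce a comparable sequential test from a plain Linux guest on the same datastore (just an approximation - Cisco does not publish the exact commands ISE runs, and the file path here is only an example), something like this produces the same style of numbers:

# 300 MB sequential write, bypassing the page cache so the datastore is really exercised
dd if=/dev/zero of=/opt/ise_io_test.bin bs=1M count=300 oflag=direct
# 300 MB sequential read of the same file
dd if=/opt/ise_io_test.bin of=/dev/null bs=1M iflag=direct
rm /opt/ise_io_test.bin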

 

Even if the IO performance test does not succeed, you will still be able to complete the setup and log in to the CLI/GUI, but the VM team needs to investigate why the IO is slow.

 

And I do not believe there is a published best practice for this part, especially as it is only a performance test and a minimum requirement, so if you are dealing with good infrastructure you will never need to look at it.

 

 

 

Damien Miller
VIP Alumni

For any Cisco BU folks browsing: one of the enhancements I communicated to TAC, and also to my CSE for TAG a few weeks ago, was to improve the throughput tests.

 

One of the TAC cases we opened after moving from 2.1 to 2.4 involved our MNT nodes not having the disk throughput to handle the load. They were not alarming on the 50 MB/s write threshold, but logs were still backing up in the collector. We ended up having to rebuild the MNT node on faster storage; a dedicated UCS C240 chassis with 6x 10k rpm disks was not capable of keeping up. We are stable running one MNT on HyperFlex and the other on Nutanix, but we still haven't been able to go back to our dedicated C240/local disk/VMware setup. Since we had been stable, we tried to rebuild it on the existing hardware a few weeks ago, and it is still an issue that the disk performance test does not catch.

It appears as if the sequential write test, which looks for a minimum of 50 MB/s, does not always translate into enough small transactional write throughput, e.g. 4 KB write speeds for database transactions versus sequential writes.
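
To illustrate the gap between the two profiles, here is a rough sketch of how they could be compared with fio. fio is not part of ISE, so this would be run from a scratch Linux VM on the same datastore, and the file name, sizes and runtimes are just placeholders:

# 1) Sequential 1 MB writes, roughly the profile the install-time check measures
fio --name=seq-write --filename=/opt/fio_test.dat --rw=write --bs=1M --size=2G \
    --direct=1 --ioengine=libaio --runtime=60 --time_based
# 2) Random 4 KB writes, closer to the MnT logging/database pattern described above
fio --name=rand-write-4k --filename=/opt/fio_test.dat --rw=randwrite --bs=4k \
    --size=2G --direct=1 --ioengine=libaio --iodepth=16 --runtime=60 --time_based
# A datastore can easily pass the first test and still fall well short on the second.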

 

Maybe a sanity check is needed on how this disk performance test was originally written.  

Yes, Damien, the tests are for basic sanity. Thanks a lot for sharing the info.