
Measuring Disk IO Performance - Average I/O Bandwidth

Hi,

 I recently had the following Alarm: Insufficient Virtual Machine Resources:

[screenshot: Alarms.png — Insufficient Virtual Machine Resources alarm]

 caused by the Average I/O Bandwidth value:

ise/admin# show tech-support
...
*****************************************
Measuring disk IO performance
*****************************************
Average I/O bandwidth writing to disk device: 422 MB/second
Average I/O bandwidth reading from disk device: 280 MB/second
WARNING: VM I/O PERFORMANCE TESTS FAILED!
WARNING: The bandwidth writing to disk must be at least 50 MB/second,
WARNING: and bandwidth reading from disk must be at least 300 MB/second.
WARNING: This VM should not be used for production use until disk
WARNING: performance issue is addressed.
Disk I/O bandwidth filesystem test, writing 300 MB to /opt:
314572800 bytes (315 MB) copied, 0.786212 s, 400 MB/s
Disk I/O bandwidth filesystem read test, reading 300 MB from /opt:
314572800 bytes (315 MB) copied, 0.140811 s, 2.2 GB/s
...

The Average I/O Bandwidth test is performed every 3 hours (00:00, 03:00, 06:00, 12:00 and 18:00 ... 09:00 is skipped due to CSCvx44981, "VM IO Performance Checks not done at 09:00").

The Alarm - Insufficient Virtual Machine Resources is generated if the 24-hour average is below the requirements of 50 MB/s writing and/or 300 MB/s reading, and because of that it appears on the ISE GUI Dashboard between 03:00 AM and 04:00 AM.

As in prior versions, 300 blocks of 1024K (300 MB) are written to a specific file, and after that a read test runs against it to generate the Disk I/O Bandwidth Filesystem Test (write/read) values.
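The write-then-read sequence described above can be approximated with dd. This is a sketch only: ISE's exact invocation and flags are not shown in the show-tech output, and /tmp is used here instead of the real /opt target to keep the example harmless.

```shell
# Sketch of the filesystem test: write 300 x 1024K blocks, then read
# the same file back. Paths and flags are illustrative, not ISE's exact
# invocation; conv=fdatasync makes the write timing include a flush to disk.
dd if=/dev/zero of=/tmp/iowrites bs=1024k count=300 conv=fdatasync
dd if=/tmp/iowrites of=/dev/null bs=1024k
rm -f /tmp/iowrites
```

Each dd prints a line like "314572800 bytes (315 MB) copied", matching the output seen in show-tech.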

It takes four Disk I/O Write & Read test runs to generate the Average (Write & Read) value.
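As an illustration of the averaging step only (the sample numbers below are hypothetical, not taken from any real run), the mean of four per-run read results works out like this:

```shell
# Hypothetical per-run read bandwidth samples in MB/s; their mean is
# what would be reported as the Average I/O bandwidth value.
samples="270 285 295 270"
echo "$samples" | tr ' ' '\n' | awk '{ sum += $1; n++ } END { printf "%.0f MB/second\n", sum / n }'
# prints: 280 MB/second
```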

I started to question the Average value: in the example above the Disk I/O Bandwidth Filesystem Read Test reports 2.2 GB/s, yet the Average I/O Bandwidth Reading is only 280 MB/s, i.e. a very high number ("individual test") against a very low number (average).

Every time I check Measuring Disk IO Performance the result is the same: a very high "individual test" value against a very low average.

Maybe bug CSCuu07555, ISE 1.3 "sh tech" incorrect values for disk I/O performance, still exists in ISE 2.7 Patch 3.

 

Has anyone else noticed this difference between the "individual" and "average" values?

 

Regards

1 Accepted Solution

hslai (Cisco Employee)
I worked with TAC on your case earlier this week. Below is a brief summary:

ISE show-tech does two different read I/O tests:

(1) dd if=/dev/sda of=/dev/null count=2000000

(2) dd if=/opt/iowrites of=/dev/null bs=1024k, where /opt/iowrites is the output file from the write test immediately preceding it.

The average is taken from (1), which is more accurate than (2).
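For reference, command (1) omits bs=, so dd falls back to its default 512-byte block size, and the total amount read from the raw device works out to about 1 GB:

```shell
# 2,000,000 blocks x 512 bytes each = 1,024,000,000 bytes (~1 GB)
echo $((2000000 * 512))
# prints: 1024000000
```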

 


3 Replies


Hi @hslai ,

 beyond grateful !!!

 It makes sense to me.

 

Regards

Hello hslai et al,

First of all - thank you so much for sharing the commands that are run to test the storage performance. I have to admit I'm quite disappointed, though, since these commands produce results that are far from reality.

1. The first command, "dd if=/dev/sda of=/dev/null count=2000000", does not set a block size, so it runs with dd's default block size of 512 bytes and reads about 1 GB of data (2,000,000 * 512 bytes). Using the default block size negatively affects performance in this case.
2. "dd if=/opt/iowrites of=/dev/null bs=1024k" reads data with a block size of 1 MB, which gives much higher performance than (1).
3. In both cases the results are affected by the read cache. This is evident with (2), where I'm getting up to 3.7 GB/s despite only 10 GbE of bandwidth to the storage.

Bottom line - both of these commands should be run with the option 'iflag=direct' if the uncached speed is to be tested. Also, a block size should be set for (1), and the expected performance threshold of 300 MB/s may then need to be adjusted.
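A sketch of what such adjusted commands might look like (illustrative only: iflag=direct bypasses the page cache, the raw-device read requires root, and O_DIRECT needs block-size-aligned I/O; the count is an arbitrary choice here):

```shell
# Uncached raw-device read with an explicit 1 MB block size
# (run as root; /dev/sda as in the original test, ~1 GB read in total):
dd if=/dev/sda of=/dev/null bs=1024k count=1000 iflag=direct

# Uncached read of the write-test output file:
dd if=/opt/iowrites of=/dev/null bs=1024k iflag=direct
```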

Regards,
Jacek