
Block Storage Benchmark

This benchmark suite uses fio for IO generation, scripts for automation, and graph/PDF generators for reporting to replicate the SNIA Solid State Storage (SSS) Performance Test Specification (PTS) Enterprise v1.1. The specification includes 8 tests, each measuring different block storage performance characteristics. The specification is available here:

https://www.snia.org/sites/default/files/SSS_PTS_Enterprise_v1.1.pdf

This benchmark suite uses gnuplot and wkhtmltopdf to generate test reports based on the SNIA specification. An example report is available here:

https://github.com/cloudharmony/block-storage/blob/master/reports/sample-report.pdf?raw=true

A few notes on cloud/virtualization limitations as they pertain to SNIA test specifications:

  1. Due to virtualization, direct access to hardware commands like ATA secure erase is generally unavailable in cloud environments. Thus, the Purge methods prescribed by the test specification are generally not feasible. As a workaround, where supported, a full device TRIM is applied prior to preconditioning and testing cycles (using blkdiscard for devices and fstrim for mounted volumes). If neither ATA secure erase nor TRIM is supported, devices are zero filled as a last resort. If test targets are rotational devices, zero fill formatting is always used. The purge method used is documented in the test output. The purge order of precedence is thus: 1) ATA secure erase; 2) TRIM (SSD only); 3) zero fill (a minimal sketch of these commands follows this list). These can be overridden using the nosecureerase, notrim and nozerofill test parameters.
  2. The test specification requirement for write caches to be disabled cannot be guaranteed in a cloud environment.
  3. Often, virtual machine block storage devices are partial logical volumes on a physical drive that may contain additional logical volumes in use by other users. Because of this, testing may not span the full LBA of the physical drive (i.e. ActiveRange < 100%). Additionally, the possibly shared state of the physical drive may contribute to higher variability and difficulty or inability to achieve the prescribed +/-10% condition for Steady State verification. This may also restrict the ability to consistently reproduce results across multiple tests.
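
A minimal sketch of the purge fallbacks described in note 1, in order of precedence (the device name, mount point and password are hypothetical; the suite issues its own equivalents of these commands):

# 1) ATA secure erase (requires a previously set security password; see --secureerase_pswd)
sudo hdparm --user-master u --security-erase pswd123 /dev/sdc
# 2) TRIM for non-rotational targets: blkdiscard for devices, fstrim for mounted volumes
sudo blkdiscard /dev/sdc
sudo fstrim /ssd
# 3) zero fill (the last resort, and the default for rotational devices)
sudo dd if=/dev/zero of=/dev/sdc bs=1M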

TESTING PARAMETERS The SNIA test specification allows for several user-configurable parameters. These are described below. To start testing, use run.php. For runtime options, use run.php --help. Each of the parameters below may be specified using CLI arguments (e.g. run.php --test=iops) or bm_param_ prefixed environment variables (e.g. export bm_param_test=iops)
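
For example, the following two invocations are equivalent (the target and test values are illustrative):

./run.php --target=/dev/sdc --test=iops

export bm_param_target=/dev/sdc
export bm_param_test=iops
./run.php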

  • active_range LBA range to use for testing represented as a percentage. Default is 100, or 100% per the SNIA test specification. To test across a smaller range, this parameter may be reduced. If the test targets are not devices, test size is total free space - 100 MB. WARNING: if multiple volume type targets are specified, active range will be limited to the least amount of free space on all targets

  • collectd_rrd If set, collectd rrd stats will be captured from --collectd_rrd_dir. To do so, when testing starts, existing directories in --collectd_rrd_dir will be renamed to .bak, and upon test completion any directories not ending in .bak will be zipped and saved along with other test artifacts (as collectd-rrd.zip). User MUST have sudo privileges to use this option

  • collectd_rrd_dir Location where collectd rrd files are stored - default is /var/lib/collectd/rrd

  • fio Optional explicit path to the fio command - otherwise fio in PATH will be used

  • fio_* Optional fio runtime parameters. By default, fio parameters are generated by the test script per the SNIA test specification. Use this parameter to override default fio settings (e.g. fio_ioengine=sync)
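
    For example, to override the default generated settings with fio's sync IO engine (the target value is illustrative):

    ./run.php --target=/dev/sdc --test=iops --fio_ioengine=sync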

  • font_size The base font size (pt) to use in reports and graphs. All text will be sized relative to this value (i.e. smaller, larger). Default is 9. Graphs use this value + 4 (i.e. 13 by default). Open Sans is the font included with this software. To change it, replace the reports/font.ttf file with your desired font

  • highcharts_js_url URL to highcharts.js. Highcharts is used to render 3D charts in reports. Use 'no3dcharts' to disable 3D charts. Default for this parameter is http://code.highcharts.com/highcharts.js

  • highcharts3d_js_url URL to highcharts-3d.js. Highcharts is used to render 3D charts in reports. Use 'no3dcharts' to disable 3D charts. Default for this parameter is http://code.highcharts.com/highcharts-3d.js

  • jquery_url URL for jquery. jquery is used by Highcharts. Highcharts is used to render 3D charts in reports. Use 'no3dcharts' to disable 3D charts. Default for this parameter is http://code.jquery.com/jquery-2.1.0.min.js

  • meta_burst Optional flag designating whether testing was performed within the burst capabilities of the test volumes. See --wd_sleep_between for details on automating burst testing

  • meta_compute_service The name of the compute service this test pertains to. Used for report headers. May also be specified using the environment variable bm_compute_service

  • meta_compute_service_id The id of the compute service this test pertains to. Added to saved results. May also be specified using the environment variable bm_compute_service_id

  • meta_cpu CPU descriptor to use for report headers. If not specified, it will be set using the 'model name' attribute in /proc/cpuinfo

  • meta_drive_interface Optional drive interface descriptor to use for report headers (e.g. SATA 6Gb/s)

  • meta_drive_model Optional drive model descriptor to use for report headers (e.g. Intel DC S3700)

  • meta_drive_type Optional drive type descriptor to use for report headers (e.g. SATA, SSD, PCIe)

  • meta_encryption Optional flag designating if the test volume had encryption enabled

  • meta_host_cache Optional host caching designation for the test volumes (if applicable). One of the following values: read, rw or write

  • meta_instance_id The compute service instance type this test pertains to (e.g. c3.xlarge). Used for report headers

  • meta_memory Memory descriptor to use for report headers. If not specified, the amount of memory reported by 'free' will be used

  • meta_notes_storage Optional notes to display in the Storage Platform header column

  • meta_notes_test Optional notes to display in the Test Platform header column

  • meta_os Operating system descriptor to use for report headers. If not specified, this metadata will be derived from the first line of /etc/issue

  • meta_piops Optional argument designating the number of provisioned IOPS associated with the test volumes

  • meta_provider The name of the cloud provider this test pertains to. Used for report headers. May also be specified using the environment variable bm_provider

  • meta_provider_id The id of the cloud provider this test pertains to. Added to saved results. May also be specified using the environment variable bm_provider_id

  • meta_pthroughput Optional argument designating the amount of provisioned throughput (MBps) associated with the test volumes

  • meta_region The compute service region this test pertains to. Used for report headers. May also be specified using the environment variable bm_region

  • meta_resource_id An optional benchmark resource identifier. Added to saved results. May also be specified using the environment variable bm_resource_id

  • meta_run_id An optional benchmark run identifier. Added to saved results. May also be specified using the environment variable bm_run_id

  • meta_storage_config Storage configuration descriptor to use for report headers. If not specified, 'Unknown' will be displayed in this column

  • meta_storage_vol_info If testing is against a volume, this optional parameter may be used to describe the setup of that volume (e.g. RAID, file system, etc.). Only displayed when targets are volumes. If this parameter is not specified, the file system type for each target volume will be included in this column

  • meta_test_id Identifier for the test. Used for report headers. If not specified, this header column will be blank

  • meta_test_sw Name/version of the test software. Used for report headers. If not specified, this header column will be blank

  • no3dcharts Don't generate 3D charts. Unlike 2D charts rendered with the free chart utility gnuplot, 3D charts are rendered using Highcharts - a commercial JavaScript charting tool. Highcharts is available for free for non-commercial and development use, and for a nominal fee otherwise. See http://www.highcharts.com for additional licensing information

  • nojson Don't generate JSON result or fio output files

  • nopdfreport Don't generate PDF version of test report - report.pdf. (wkhtmltopdf dependency removed if specified)

  • noprecondition Don't perform the default 2X 128K sequential workload independent preconditioning (per the SNIA test specification). This step precedes workload dependent preconditioning

  • noprecondition_rotational Don't perform preconditioning for rotational test targets

  • nopurge Don't require a purge for testing. If this parameter is not set, and at least 1 target could not be purged, testing will abort. This parameter is implicit if --nosecureerase, --notrim and --nozerofill are all specified

  • nopurge_ignore If set, and a device purge could not be performed, testing will still continue

  • norandom Don't test using random (less compressible) data. Use of random data for IO is a requirement of the SNIA test specification

  • randommap Random maps are allocated at init time to track written blocks and duplicate block writes. When used, random maps must be allocated in memory at init time. The memory allocation for these can be problematic for large test volumes (e.g. a 16TB volume requires ~4.2GB of memory). If this option is not set, random fio tests will be executed using the --norandommap and --randrepeat=0 fio options

  • noreport Don't generate html or PDF test reports - report.zip and report.pdf (gnuplot, wkhtmltopdf and zip dependencies removed if specified)

  • nosecureerase Don't attempt to secure erase device targets prior to test cycles (this is the first choice - hdparm dependency removed if specified). This parameter is implicit if --secureerase_pswd is not provided

  • notrim Don't attempt to TRIM devices/volumes prior to testing cycles (util-linux dependency removed if specified)

  • nozerofill Don't zero fill rotational devices (or SSD devices when TRIM is not supported) prior to testing cycles. Zero fill applies only to device targets

  • nozerofill_non_rotational If set, non-rotational targets will not be zero filled

  • oio_per_thread The outstanding IO per thread (a.k.a. queue depth). This translates to the fio iodepth parameter. Total outstanding IO for a given test is threads * oio_per_thread. Per the SNIA test specification, this is a user definable parameter. For latency tests, this parameter is a fixed value of 1. Default value for this parameter is 64
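
    For example (hypothetical values): with --threads=4 and --oio_per_thread=32, each fio job runs with iodepth=32, giving 4 * 32 = 128 total outstanding IOs per target.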

  • output The output directory to use for writing test artifacts (JSON and reports). If not specified, the current working directory will be used

  • precondition_once If set, preconditioning will be performed only once, instead of prior to each test

  • precondition_passes Number of passes for workload independent preconditioning. Per the SNIA test specification the default is 2X. Use this or the --noprecondition argument to change this default behavior

  • precondition_time Fix each preconditioning pass to this specific duration (seconds)

  • purge_once If set, purge will be performed only once, instead of prior to each test

  • savefio Include results from every fio test job in save output

  • secureerase_pswd In order for ATA secure erase to be attempted for device purge (prior to test invocation), you must first set a security password using the command: sudo hdparm --user-master u --security-set-pass [pswd] /dev/[device]. The password used should be supplied using this test parameter. If it is not supplied, ATA secure erase will not be attempted and the hdparm dependency will be removed
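
    For example (the device and password are illustrative):

    sudo hdparm --user-master u --security-set-pass pswd123 /dev/sdc
    ./run.php --target=/dev/sdc --test=iops --secureerase_pswd=pswd123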

  • sequential_only If set, all random tests will be executed using sequential IO instead

  • skip_blocksize Block sizes to skip during testing. This argument may be repeated. Valid options are: 1m, 128k, 64k, 32k, 16k, 8k, 512b

  • skip_workload Workloads to skip during testing. This argument may be repeated. Valid options are: 100/0, 95/5, 65/35, 50/50, 35/65, 5/95, 0/100

  • ss_max_rounds The maximum number of test cycle iterations to allow for steady state verification. Default is x=25 (per the SNIA test specification). If steady state cannot be reached within this number of test cycles (per the ss_verification ratio), testing will terminate, and results will be designated as having not achieved steady state. This parameter may be used to increase (or decrease) the number of test cycles. A minimum value of 5 is permitted (the minimum number of cycles for the measurement window)

  • ss_verification Ratio to utilize for steady state verification. The default is 10 or 10% per the SNIA test specification. In order to achieve steady state verification, the variance between the current test cycle loop and the 4 that precede it cannot exceed this value. In cloud environments with high IO variability, it may be difficult to achieve the default ratio and thus this value may be increased using this parameter

  • target REQUIRED: The target device or volume to use for testing. This parameter may reference either the physical device (e.g. /dev/sdc) or a mounted volume (e.g. /ssd). TRIM will be attempted using blkdiscard for a device and fstrim for a volume if the targets are non-rotational. For rotational devices, a zero fill will be used (i.e. dd if=/dev/zero). Multiple targets may be specified each separated by a comma. When multiple targets are specified, the threads parameter represents the number of threads per target (i.e. total threads = # of targets * threads). Multiple target tests provide aggregate metrics. With the exception of latency tests, if multiple targets are specified, they will be tested concurrently. Sufficient permissions for the device/volume must exist for the user that initiates testing. WARNING: If a device is specified (e.g. /dev/sdc), all data on that device will be erased during the course of testing. All targets must be of the same type (device or volume). WARNING: if multiple volume type targets are specified, active range will be limited to the least amount of free space on all targets.

                          Multiple devices may also be indicated with an
                          alphanumeric range value enclosed in brackets. A
                          few examples:
                          "/files[1-40]" == "/files1,/files2,...,/files40"
                          "/dev/vd[b-e]" == "/dev/vdb,/dev/vdc,/dev/vdd,/dev/vde"
                          "/dev/sd[c-ap]" == "/dev/sdc,/dev/sdd,...,/dev/sdz,/dev/sdaa,...,/dev/sdap"
                          "/dev/xvdb[a-b]" == "/dev/xvdba,/dev/xvdbb"
    
  • target_skip_not_present If set, targets specified that do not exist will be ignored (so long as at least 1 target exists)

  • test The SNIA SSS PTS tests to perform. One or more of the following:
    iops: IOPS Test - measures IOPS at a range of random block sizes and read/write mixes
    throughput: Throughput Test - measures 128k and 1m sequential read and write throughput (MB/s) in steady state
    latency: Latency Test - measures IO response times for 3 block sizes (0.5k, 4k and 8k) and 3 read/write mixes (100/0, 65/35 and 0/100). If multiple target devices or volumes are specified, latency tests are performed sequentially
    wsat: Write Saturation Test - measures how drives respond to continuous 4k random writes over time and total GB written (TGBW). NOTE: this implementation uses the alternate steady state test method (1 minute SS checks interspersed by 30 minute WSAT test intervals)
    hir: Host Idle Recovery - observes whether the device utilizes background garbage collection, wherein performance increases with the introduction of host idle time between periods of 4k random writes
    xsr: Cross Stimulus Recovery - tests how the device handles transitions from large block sequential writes to small block random writes and back [NOT YET IMPLEMENTED]
    ecw: Enterprise Composite Workload - measures performance in a mixed IO environment [NOT YET IMPLEMENTED]
    dirth: Demand Intensity / Response Time Histogram - measures performance degradation when a device is subject to a super-saturating IO load [NOT YET IMPLEMENTED]
    Multiple tests may be specified, each separated by a comma. Default for this parameter is iops. Some tests like 'hir' are specific to SSD type devices
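
    For example, to run the IOPS, latency and wsat tests in a single invocation (the target is illustrative):

    ./run.php --target=/dev/sdc --test=iops,latency,wsat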

  • threads The number of threads to use for the test cycle. Per the SNIA test specification, this is a user definable parameter. The default value for this parameter is the number of CPU cores. This parameter may contain the token {cpus} which will be replaced with the number of CPU cores present. It may also contain a mathematical expression in conjunction with {cpus} - e.g. {cpus}/2. If target references multiple devices or volumes, this parameter signifies the number of threads per device. Thus total threads is # of targets * threads. Latency tests are fixed at 1 thread. This parameter is used to define the fio --numjobs argument
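
    For example, to use half of the available CPU cores per target (the target is illustrative):

    ./run.php --target=/dev/sdc --test=iops --threads="{cpus}/2"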

  • threads_per_core_max Max number of threads per CPU core - default is 2. If this parameter causes the number of threads per target to be less than 1, threads per target will be increased to 1

  • threads_per_target_max Max number of threads per target - default is 8

  • timeout Max time to permit for testing in seconds. Default is 24 hours (86400 seconds)

  • trim_offset_end When invoking a blkdiscard TRIM, offset the length by this number of bytes

  • verbose Show verbose output - warning: this may produce a lot of output

  • wd_test_duration The test duration for workload dependent test iterations in seconds. Default is 60 per the SNIA test specification

  • wd_sleep_between Optional sleep duration (seconds) to apply between each workload dependent test. This may be a formula containing the following dynamic values: {duration} => duration of the last test (secs); {size} => size of targets (GB); {volumes} => number of target volumes. Ternary operations are also supported. This parameter may be useful for credit based storage platforms like Amazon EC2 EBS volumes: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html The following EBS shortcuts are supported:

                          --wd_sleep_between gp2 => {duration}*({size} >= 1000 ? 0 : ({size} >= 750 ? 0.33 : ({size} >= 500 ? 1 : ({size} >= 250 ? 3 : ({size} >= 214 ? 3.6734 : ({size} >= 100 ? 9 : 29))))))
                          --wd_sleep_between st1 => {duration}*({size} >= 12500 ? 0 : (({size} >= 2000 ? 500 : ({size} >= 1000 ? 250 : 125)) - (({size}/1000)*40))/(({size}/1000)*40))
                          --wd_sleep_between sc1 => {duration}*({size} >= 20833 ? 0 : (({size} >= 3125 ? 250 : ({size} >= 3000 ? 240 : ({size} >= 2000 ? 160 : ({size} >= 1000 ? 80 : 40)))) - (({size}/1000)*12))/(({size}/1000)*12))
                          --wd_sleep_between efs => {duration}*({size} >= 4096 ? 0 : ({size} >= 1024 ? 1 : ({size} >= 512 ? 4 : ({size} >= 256 ? 8 : 200))))
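
    As a worked example of the gp2 shortcut (hypothetical values): for a 100 GB volume ({size}=100) following a 60 second test ({duration}=60), the ternary chain resolves to the multiplier 9 (since 100 >= 100), yielding a sleep of 60*9 = 540 seconds between tests.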
    
  • wd_sleep_between_size Optional path to a file to use in determining a target size for use in the wd_sleep_between sleep interval calculation (the {size} value). May be useful for NFS volumes where the size reported for the volume may not be accurate

  • wkhtml_xvfb If set, wkhtmlto* commands will be prefixed with xvfb-run (which is added as a dependency). This is useful when the wkhtml installation does not support running in headless mode

DEPENDENCIES This benchmark suite uses the following packages:

fio - performs the actual testing; version 2.1+ required
gnuplot - generates graphs per the SNIA test specification; these graphs are used in the PDF report
hdparm - used for ATA secure erase (when supported)
php-cli - test automation scripts (/usr/bin/php)
timeout - used to limit fio runtime when applicable to avoid stuck fio processes
util-linux - for TRIM operations using blkdiscard and fstrim (when supported); not required if test targets are rotational
wkhtmltopdf - generates the PDF version of the report; download from http://wkhtmltopdf.org
xvfb-run - allows wkhtmltopdf to be run in headless mode (required if --nopdfreport is not set and --wkhtml_xvfb is set)
zip - archives the HTML test report into a single zip file
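
On Debian/Ubuntu systems, most of these can be installed from the standard repositories (a sketch; package names assume a recent Debian/Ubuntu release, timeout ships with coreutils, and the packaged wkhtmltopdf may lack headless support, in which case download it from http://wkhtmltopdf.org or set --wkhtml_xvfb):

sudo apt-get install fio gnuplot hdparm php-cli util-linux wkhtmltopdf xvfb zip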

TEST ARTIFACTS Upon successful completion of testing, the following artifacts will be produced in the working directory ([test] replaced with one of the test identifiers - e.g. report-iops.json):

[test].json - JSON formatted job results for [test]. Each test provides different result metrics. This file contains a single level hash of key/value pairs representing these metrics

fio-[test].json - JSON formatted fio job results for [test]. Jobs are in run order, each with a unique job name. For workload independent preconditioning, the job name uses the format 'wipc-N', where N is the preconditioning pass (i.e. N=1 is the first pass, N=2 the second). Unless --precondition_passes specifies otherwise, only 2 wipc jobs should be present (each representing one of the 2X preconditioning passes). For workload dependent preconditioning and other testing, the job name is set by the test. For example, for the IOPS test, the job name format is 'xN-[rw]-[bs]-rand', where N is the iteration number (1-25+), [rw] is the read/write ratio (separated by an underscore) and [bs] is the block size. Jobs that fall within the steady state measurement window have the suffix '-ssmw' (e.g. x5-0_100-4k-rand-ssmw). There may be up to 10 fio-[test].json files corresponding with each of the tests, and 2 files for throughput: test-throughput-128k.json and test-throughput-1024k.json

report.zip - HTML test report (open index.html). The report design and layout is based on the SNIA test specification. In addition, this archive contains the source gnuplot scripts and data files for the report graphs. Graphs are rendered in svg format

report.pdf - PDF version of the test report (generated using wkhtmltopdf)

collectd-rrd.zip collectd RRD files (see --collectd_rrd)
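
To quickly extract a single metric from the JSON results on the command line (a hypothetical example: it assumes jq is installed and that an IOPS run produced iops.json per the [test].json naming above, containing the iops_4k_0_100 metric described in the save schema below):

jq '.iops_4k_0_100' iops.json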

SAVE SCHEMA The following columns are included in CSV files/tables generated by save.sh. Indexed MySQL/PostgreSQL columns are identified by an asterisk (*). Columns without descriptions are documented as runtime parameters above. Data types and indexing used are documented in save/schema/*.json. Columns can be removed using the save.sh --remove parameter
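
For example, to drop columns from the generated CSV/tables (the repeated-argument syntax is an assumption; consult save.sh --help for the exact usage):

./save.sh --remove meta_burst --remove meta_piops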

COMMON COLUMNS (included in all CSV/tables):

active_range
benchmark_version: [benchmark version]
collectd_rrd: [URL to zip file containing collectd rrd files]
fio_version: [fio version]
iteration: [iteration number (used with incremental result directories)]
meta_burst
meta_compute_service
meta_compute_service_id*
meta_cpu: [CPU model info]
meta_cpu_cache: [CPU cache]
meta_cpu_cores: [# of CPU cores]
meta_cpu_speed: [CPU clock speed (MHz)]
meta_drive_interface
meta_drive_model
meta_drive_type
meta_encryption
meta_host_cache
meta_instance_id*
meta_hostname: [system under test (SUT) hostname]
meta_memory
meta_memory_gb: [memory in gigabytes]
meta_memory_mb: [memory in megabytes]
meta_os_info: [operating system name and version]
meta_piops
meta_provider
meta_provider_id*
meta_region*
meta_resource_id
meta_run_id
meta_storage_config*
meta_storage_vol_info
meta_test_id*
meta_test_sw
noprecondition
noprecondition_rotational
nopurge
nopurge_ignore
norandom
nosecureerase
notrim
nozerofill
nozerofill_non_rotational
oio_per_thread: [queue depth per thread]
precondition_once
precondition_passes
purge_methods: [purge methods used for each device]
purge_once
report_pdf: [URL to report PDF file (if --store option used)]
report_zip: [URL to report ZIP file (if --store option used)]
skip_blocksize
skip_workload

Note: ss_* columns are not part of fio or wsat tables

ss_average: [mean value of y axis in steady state window]
ss_largest_data_excursion: [largest data excursion in steady state window]
ss_largest_slope_excursion: [largest slope excursion in steady state window]
ss_linear_fit_formula: [steady state linear fit formula]
ss_max_data_excursion: [max allowed data excursion in steady state window]
ss_max_rounds
ss_max_slope_excursion: [max allowed slope excursion in steady state window]
ss_rounds: [steady state window rounds (e.g. 1 to 5)]
ss_slope: [slope in steady state window]
ss_start: [steady state window start round]
ss_stop: [steady state window end round]
ss_verification
ss_y_intercept: [y intercept in steady state window]
target
target_count: [number of targets]
target_size_gb: [average target size in gigabytes]
target_sizes: [sizes of each target]
test*: [test identifier (e.g. iops)]
test_started*: [when the test started]
test_stopped: [when the test ended]
threads: [number of threads per target]
threads_total: [total number of threads]
timeout
wd_sleep_between
wd_test_duration

FIO CSV/TABLES:

fio_command: [complete fio command]
iodepth_level_1: [distribution of IO with IO depth <= 1]
iodepth_level_2: [distribution of IO with IO depth 2]
iodepth_level_4: [distribution of IO with IO depth 4]
iodepth_level_8: [distribution of IO with IO depth 8]
iodepth_level_16: [distribution of IO with IO depth 16]
iodepth_level_32: [distribution of IO with IO depth 32]
iodepth_level_gte64: [distribution of IO with IO depth >= 64]
jobname: [fio job name]
latency_us_2: [distribution of IO with latency <= 2 microseconds]
latency_us_4: [distribution of IO with latency >2 and <=4 microseconds]
latency_us_10: [distribution of IO with latency >4 and <=10 microseconds]
latency_us_20: [distribution of IO with latency >10 and <=20 microseconds]
latency_us_50: [distribution of IO with latency >20 and <=50 microseconds]
latency_us_100: [distribution of IO with latency >50 and <=100 microseconds]
latency_us_250: [distribution of IO with latency >100 and <=250 microseconds]
latency_us_500: [distribution of IO with latency >250 and <=500 microseconds]
latency_us_750: [distribution of IO with latency >500 and <=750 microseconds]
latency_us_1000: [distribution of IO with latency >750 and <=1000 microseconds]
latency_ms_2: [distribution of IO with latency >1000 microseconds and <=2 milliseconds]
latency_ms_4: [distribution of IO with latency >2 and <=4 milliseconds]
latency_ms_10: [distribution of IO with latency >4 and <=10 milliseconds]
latency_ms_20: [distribution of IO with latency >10 and <=20 milliseconds]
latency_ms_50: [distribution of IO with latency >20 and <=50 milliseconds]
latency_ms_100: [distribution of IO with latency >50 and <=100 milliseconds]
latency_ms_250: [distribution of IO with latency >100 and <=250 milliseconds]
latency_ms_500: [distribution of IO with latency >250 and <=500 milliseconds]
latency_ms_750: [distribution of IO with latency >500 and <=750 milliseconds]
latency_ms_1000: [distribution of IO with latency >750 and <=1000 milliseconds]
latency_ms_2000: [distribution of IO with latency >1000 and <2000 milliseconds]
latency_ms_gte2000: [distribution of IO with latency >=2000 milliseconds]
majf: [major page faults]
minf: [minor page faults]
read_io_bytes: [read IO - KB]
read_bw: [read bandwidth - KB/s]
read_iops: [read IOPS]
read_runtime: [read runtime - ms]
read_slat_min: [min read submission latency - μs]
read_slat_max: [max read submission latency - μs]
read_slat_mean: [mean read submission latency - μs]
read_slat_stddev: [read submission latency standard deviation - μs]
read_clat_min: [min read completion latency - μs]
read_clat_max: [max read completion latency - μs]
read_clat_mean: [mean read completion latency - μs]
read_clat_stddev: [read completion latency standard deviation - μs]
read_clat_percentile_1: [1st percentile read completion latency - μs]
read_clat_percentile_5: [5th percentile read completion latency - μs]
read_clat_percentile_10: [10th percentile read completion latency - μs]
read_clat_percentile_20: [20th percentile read completion latency - μs]
read_clat_percentile_30: [30th percentile read completion latency - μs]
read_clat_percentile_40: [40th percentile read completion latency - μs]
read_clat_percentile_50: [50th percentile read completion latency - μs]
read_clat_percentile_60: [60th percentile read completion latency - μs]
read_clat_percentile_70: [70th percentile read completion latency - μs]
read_clat_percentile_80: [80th percentile read completion latency - μs]
read_clat_percentile_90: [90th percentile read completion latency - μs]
read_clat_percentile_95: [95th percentile read completion latency - μs]
read_clat_percentile_99: [99th percentile read completion latency - μs]
read_clat_percentile_99_5: [99.5th percentile read completion latency - μs]
read_clat_percentile_99_9: [99.9th percentile read completion latency - μs]
read_clat_percentile_99_95: [99.95th percentile read completion latency - μs]
read_clat_percentile_99_99: [99.99th percentile read completion latency - μs]
read_lat_min: [min total read latency - μs]
read_lat_max: [max total read latency - μs]
read_lat_mean: [mean total read latency - μs]
read_lat_stddev: [total read latency standard deviation - μs]
read_bw_min: [min read bandwidth - KB/s]
read_bw_max: [max read bandwidth - KB/s]
read_bw_agg: [aggregate read bandwidth - KB/s]
read_bw_mean: [mean read bandwidth - KB/s]
read_bw_dev: [read bandwidth standard deviation - KB/s]
started: [when the fio job started]
stopped: [when the fio job ended]
usr_cpu: [user CPU usage]
sys_cpu: [system CPU usage]
write_io_bytes: [write IO - KB]
write_bw: [write bandwidth - KB/s]
write_iops: [write IOPS]
write_runtime: [write runtime - ms]
write_slat_min: [min write submission latency - μs]
write_slat_max: [max write submission latency - μs]
write_slat_mean: [mean write submission latency - μs]
write_slat_stddev: [write submission latency standard deviation - μs]
write_clat_min: [min write completion latency - μs]
write_clat_max: [max write completion latency - μs]
write_clat_mean: [mean write completion latency - μs]
write_clat_stddev: [write completion latency standard deviation - μs]
write_clat_percentile_1: [1st percentile write completion latency - μs]
write_clat_percentile_5: [5th percentile write completion latency - μs]
write_clat_percentile_10: [10th percentile write completion latency - μs]
write_clat_percentile_20: [20th percentile write completion latency - μs]
write_clat_percentile_30: [30th percentile write completion latency - μs]
write_clat_percentile_40: [40th percentile write completion latency - μs]
write_clat_percentile_50: [50th percentile write completion latency - μs]
write_clat_percentile_60: [60th percentile write completion latency - μs]
write_clat_percentile_70: [70th percentile write completion latency - μs]
write_clat_percentile_80: [80th percentile write completion latency - μs]
write_clat_percentile_90: [90th percentile write completion latency - μs]
write_clat_percentile_95: [95th percentile write completion latency - μs]
write_clat_percentile_99: [99th percentile write completion latency - μs]
write_clat_percentile_99_5: [99.5th percentile write completion latency - μs]
write_clat_percentile_99_9: [99.9th percentile write completion latency - μs]
write_clat_percentile_99_95: [99.95th percentile write completion latency - μs]
write_clat_percentile_99_99: [99.99th percentile write completion latency - μs]
write_lat_min: [min total write latency - μs]
write_lat_max: [max total write latency - μs]
write_lat_mean: [mean total write latency - μs]
write_lat_stddev: [total write latency standard deviation - μs]
write_bw_min: [min write bandwidth - KB/s]
write_bw_max: [max write bandwidth - KB/s]
write_bw_agg: [aggregate write bandwidth - KB/s]
write_bw_mean: [mean write bandwidth - KB/s]
write_bw_dev: [write bandwidth standard deviation - KB/s]

HIR CSV/TABLES:

hir_steady_state_iops: [mean 4k rand write IOPS during steady state phase of preconditioning]
hir_wait_5s_iops: [mean 4k rand write IOPS during testing with 5 second wait intervals]
hir_wait_10s_iops: [mean 4k rand write IOPS during testing with 10 second wait intervals]
hir_wait_15s_iops: [mean 4k rand write IOPS during testing with 15 second wait intervals]
hir_wait_25s_iops: [mean 4k rand write IOPS during testing with 25 second wait intervals]
hir_wait_50s_iops: [mean 4k rand write IOPS during testing with 50 second wait intervals]

IOPS CSV/TABLES:

iops_1m_100_0: [mean 1m read IOPS in steady state]
iops_128k_100_0: [mean 128k read IOPS in steady state]
iops_64k_100_0: [mean 64k read IOPS in steady state]
iops_32k_100_0: [mean 32k read IOPS in steady state]
iops_16k_100_0: [mean 16k read IOPS in steady state]
iops_8k_100_0: [mean 8k read IOPS in steady state]
iops_4k_100_0: [mean 4k read IOPS in steady state]
iops_512b_100_0: [mean 512b read IOPS in steady state]
iops_1m_95_5: [mean 1m 95/5 rw IOPS in steady state]
iops_128k_95_5: [mean 128k 95/5 rw IOPS in steady state]
iops_64k_95_5: [mean 64k 95/5 rw IOPS in steady state]
iops_32k_95_5: [mean 32k 95/5 rw IOPS in steady state]
iops_16k_95_5: [mean 16k 95/5 rw IOPS in steady state]
iops_8k_95_5: [mean 8k 95/5 rw IOPS in steady state]
iops_4k_95_5: [mean 4k 95/5 rw IOPS in steady state]
iops_512b_95_5: [mean 512b 95/5 rw IOPS in steady state]
iops_1m_65_35: [mean 1m 65/35 rw IOPS in steady state]
iops_128k_65_35: [mean 128k 65/35 rw IOPS in steady state]
iops_64k_65_35: [mean 64k 65/35 rw IOPS in steady state]
iops_32k_65_35: [mean 32k 65/35 rw IOPS in steady state]
iops_16k_65_35: [mean 16k 65/35 rw IOPS in steady state]
iops_8k_65_35: [mean 8k 65/35 rw IOPS in steady state]
iops_4k_65_35: [mean 4k 65/35 rw IOPS in steady state]
iops_512b_65_35: [mean 512b 65/35 rw IOPS in steady state]
iops_1m_50_50: [mean 1m 50/50 rw IOPS in steady state]
iops_128k_50_50: [mean 128k 50/50 rw IOPS in steady state]
iops_64k_50_50: [mean 64k 50/50 rw IOPS in steady state]
iops_32k_50_50: [mean 32k 50/50 rw IOPS in steady state]
iops_16k_50_50: [mean 16k 50/50 rw IOPS in steady state]
iops_8k_50_50: [mean 8k 50/50 rw IOPS in steady state]
iops_4k_50_50: [mean 4k 50/50 rw IOPS in steady state]
iops_512b_50_50: [mean 512b 50/50 rw IOPS in steady state]
iops_1m_35_65: [mean 1m 35/65 rw IOPS in steady state]
iops_128k_35_65: [mean 128k 35/65 rw IOPS in steady state]
iops_64k_35_65: [mean 64k 35/65 rw IOPS in steady state]
iops_32k_35_65: [mean 32k 35/65 rw IOPS in steady state]
iops_16k_35_65: [mean 16k 35/65 rw IOPS in steady state]
iops_8k_35_65: [mean 8k 35/65 rw IOPS in steady state]
iops_4k_35_65: [mean 4k 35/65 rw IOPS in steady state]
iops_512b_35_65: [mean 512b 35/65 rw IOPS in steady state]
iops_1m_5_95: [mean 1m 5/95 rw IOPS in steady state]
iops_128k_5_95: [mean 128k 5/95 rw IOPS in steady state]
iops_64k_5_95: [mean 64k 5/95 rw IOPS in steady state]
iops_32k_5_95: [mean 32k 5/95 rw IOPS in steady state]
iops_16k_5_95: [mean 16k 5/95 rw IOPS in steady state]
iops_8k_5_95: [mean 8k 5/95 rw IOPS in steady state]
iops_4k_5_95: [mean 4k 5/95 rw IOPS in steady state]
iops_512b_5_95: [mean 512b 5/95 rw IOPS in steady state]
iops_1m_0_100: [mean 1m write IOPS in steady state]
iops_128k_0_100: [mean 128k write IOPS in steady state]
iops_64k_0_100: [mean 64k write IOPS in steady state]
iops_32k_0_100: [mean 32k write IOPS in steady state]
iops_16k_0_100: [mean 16k write IOPS in steady state]
iops_8k_0_100: [mean 8k write IOPS in steady state]
iops_4k_0_100: [mean 4k write IOPS in steady state]
iops_512b_0_100: [mean 512b write IOPS in steady state]

LATENCY CSV/TABLES:

latency_8k_100_0_mean: [mean 8k read latency]
latency_8k_100_0_max: [max 8k read latency]
latency_4k_100_0_mean: [mean 4k read latency]
latency_4k_100_0_max: [max 4k read latency]
latency_512b_100_0_mean: [mean 512b read latency]
latency_512b_100_0_max: [max 512b read latency]
latency_8k_65_35_mean: [mean 8k 65/35 rw latency]
latency_8k_65_35_max: [max 8k 65/35 rw latency]
latency_4k_65_35_mean: [mean 4k 65/35 rw latency]
latency_4k_65_35_max: [max 4k 65/35 rw latency]
latency_512b_65_35_mean: [mean 512b 65/35 rw latency]
latency_512b_65_35_max: [max 512b 65/35 rw latency]
latency_8k_0_100_mean: [mean 8k write latency]
latency_8k_0_100_max: [max 8k write latency]
latency_4k_0_100_mean: [mean 4k write latency]
latency_4k_0_100_max: [max 4k write latency]
latency_512b_0_100_mean: [mean 512b write latency]
latency_512b_0_100_max: [max 512b write latency]

THROUGHPUT CSV/TABLES:

throughput_1024k_100_0: [mean 1024k read throughput - MB/s]
throughput_1024k_0_100: [mean 1024k write throughput - MB/s]
throughput_128k_100_0: [mean 128k read throughput - MB/s]
throughput_128k_0_100: [mean 128k write throughput - MB/s]

WSAT CSV/TABLES:

wsat_iops: [mean 4k write IOPS in steady state]

USAGE

perform IOPS test against device /dev/sdc

./run.sh --target=/dev/sdc --test=iops

perform IOPS, Latency and Throughput tests against /dev/sdc and /dev/sdd concurrently, using a maximum of [num CPU cores]*2 threads and 32 OIO per thread

./run.sh --target=/dev/sdc --target=/dev/sdd --test=iops --test=latency --test=throughput --threads="{cpus}*2" --oio_per_thread=32

perform IOPS test against device /dev/sdc but skip the purge step

./run.sh --target=/dev/sdc --test=iops --nopurge

perform IOPS test against device /dev/sdc but skip the purge and workload independent preconditioning

./run.sh --target=/dev/sdc --test=iops --nopurge --noprecondition

perform 5 iterations of the same IOPS test above

for i in {1..5}; do mkdir -p ~/block-storage-testing/$i; ./run.sh --target=/dev/sdc --test=iops --nopurge --noprecondition --output ~/block-storage-testing/$i; done

save.sh saves results to CSV, MySQL, PostgreSQL, BigQuery or via HTTP callback. It can also save artifacts (PDF and ZIP reports) to S3, Azure Blob Storage or Google Cloud Storage

save results to CSV files

./save.sh

save results from the 5 iteration example above

./save.sh ~/block-storage-testing

save results to a PostgreSQL database

./save.sh --db postgresql --db_user dbuser --db_pswd dbpass --db_host db.mydomain.com --db_name benchmarks

save results to BigQuery and artifacts (PDF and ZIP reports) to S3

./save.sh --db bigquery --db_name benchmark_dataset --store s3 --store_key THISIH5TPISAEZIJFAKE --store_secret thisNoat1VCITCGggisOaJl3pxKmGu2HMKxxfake --store_container benchmarks1234