See the comp.benchmarks FAQ, and don't believe everything a vendor tells you.
There's a good paper on a new I/O benchmarking technique that also covers the pitfalls of I/O benchmarking in the Nov. '94 ACM Transactions on Computer Systems -- "A New Approach to I/O Performance Evaluation -- Self-Scaling I/O Benchmarks, Predicted I/O Performance", by Peter Chen and David Patterson.
Bonnie, IOZONE, IOBENCH, nhfsstone, and one of the SPECs (SFS) are all useful for measuring I/O performance. There is also a program called BENCHMARK available from email@example.com -- apparently a standardized set of scripts to test remote access to mass storage systems.
In particular, note that, based on a discussion here recently (8/96), it appears that some magazines (which ought to know better) are using HDT BenchTest as a disk drive performance measure with the I/O sizes set so small that the disk drive's cache covers them all, resulting in anomalously high data rates (especially write rates).
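The cache effect above is easy to demonstrate yourself. The sketch below (my illustration, not part of any of the benchmarks named in this FAQ) times a small sequential write twice: once letting the OS and drive caches absorb the data, and once forcing each block to stable storage with fsync(). On real hardware the "cached" rate can be wildly higher than anything the media can sustain, which is exactly the anomaly the magazines were reporting.

```python
import os
import time
import tempfile

def write_rate(path, total_bytes, block_size, sync_each=False):
    """Write total_bytes in block_size chunks; return apparent MB/s.

    With sync_each=False the writes mostly land in the write cache,
    so the measured rate reflects cache speed, not the drive.
    With sync_each=True, fsync() forces each block toward the media.
    """
    buf = b"\0" * block_size
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_bytes // block_size):
            f.write(buf)
            if sync_each:
                f.flush()
                os.fsync(f.fileno())
    elapsed = time.perf_counter() - start
    return (total_bytes / (1024 * 1024)) / elapsed

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "bench.dat")
    # 8 MB total in 64 KB blocks -- small enough to fit in most caches
    cached = write_rate(path, 8 * 1024 * 1024, 64 * 1024)
    synced = write_rate(path, 8 * 1024 * 1024, 64 * 1024, sync_each=True)
    print("cached: %.1f MB/s   synced: %.1f MB/s" % (cached, synced))
```

The moral for benchmarking: use a working set several times larger than every cache in the path, or force synchronous writes, before believing a data rate.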
home.hkstar.com is the start of a reasonable-looking benchmark for PC hard drives.
==== SPEC SFS ====
SPEC's System-level File Server (SFS) workload measures NFS server performance. It uses one server and two or more "load generator" clients.
SPEC-SFS is not free; it costs US$1,200 from SPEC. There's a FAQ about SPEC posted periodically to comp.benchmarks.
email me at firstname.lastname@example.org
Copyright 1996 Rod Van Meter