Bob Friesenhahn wrote:
On Wed, 15 Jul 2009, Richard Elling wrote:

Unfortunately, "zpool iostat" doesn't really tell you anything about
performance.  All it shows is bandwidth. Latency is what you need
to understand performance, so use iostat.

You are still thinking about this as if it were a hardware-related problem when it is clearly not. Iostat is useful for analyzing hardware-related problems in the case where the workload is too much for the hardware, or the hardware is non-responsive. Anyone who runs this crude benchmark will discover that iostat shows hardly any disk utilization at all, latencies are low, and read I/O rates are low enough that they could be satisfied by a portable USB drive. You can even observe the blinking lights on the front of the drive array and see that it is lightly loaded. This explains why a two-disk mirror is almost able to keep up with a system with 40 fast SAS drives.

heh. What you would be looking for is evidence of prefetching.  If there
is a lot of prefetching, actv (the number of active, outstanding I/Os in
iostat's extended output) will tend to be high and latencies relatively
low.  If there is no prefetching, actv will be low and latencies may be
higher. This also implies that if you use IDE disks, which cannot handle
multiple outstanding I/Os, the performance will look similar for both runs.
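
For reference, the actv and latency columns described above come from
iostat's extended device statistics on Solaris; a typical invocation
(flag choice here is just one reasonable sketch) would be:

```
# -x extended per-device stats, -z suppress idle devices,
# -n use descriptive device names; sample every 1 second.
# Watch the actv column (queued+active I/Os) alongside
# asvc_t (average service time, ms) to spot prefetching.
iostat -xzn 1
```

High actv with low asvc_t suggests many overlapping (likely prefetch)
reads in flight; low actv with higher per-I/O times suggests serialized,
demand-driven reads.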

Or, you could get more sophisticated and use a dtrace script to look at
the I/O behavior and determine the latency between contiguous I/O
requests. Something like iopattern is a good start; though it doesn't
try to measure the time between requests, that would be easy to add.
http://www.richardelling.com/Home/scripts-and-programs-1/iopattern
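
As a rough, untested sketch of the inter-request measurement (not part
of iopattern itself), a small D script using the io provider could key a
timestamp on the device name and quantize the gaps:

```
#!/usr/sbin/dtrace -s
/* Sketch only: distribution of time between successive I/O
   requests, per device.  args[1] is the devinfo_t for the probe. */
io:::start
/last[args[1]->dev_statname] != 0/
{
        @gap[args[1]->dev_statname] =
            quantize(timestamp - last[args[1]->dev_statname]);
}
io:::start
{
        last[args[1]->dev_statname] = timestamp;
}
```

Large gaps between sequential requests would point at the application
(or the lack of prefetch) rather than the hardware.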
-- richard

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss