[revisiting the OP]
On Dec 23, 2009, at 8:27 AM, Auke Folkerts wrote:
Hello,
We have performed several tests to measure the performance
using SSD drives for the ZIL.
Tests are performed using an X4540 "Thor" with a zpool consisting of
three 14-disk RAIDZ2 vdevs. This fileserver is connected to a CentOS 5.4
machine which mounts a filesystem on the zpool via NFS, over a
dedicated, direct, 1 Gb Ethernet link.
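For reference, a sketch of that layout with the SSD added as a separate
intent log; the pool and device names below are made up:

   # Shape of the pool (each raidz2 vdev really has 14 disks):
   zpool create tank \
       raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
       raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 \
       raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0

   # Add the SSD as a separate intent log (slog) device:
   zpool add tank log c4t0d0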
The issue we're trying to resolve by using SSDs is the much-discussed
slow NFS performance when using synchronous I/O. Unfortunately,
asynchronous I/O is not possible, since the Solaris NFS server is
synchronous by default, and the Linux clients are unable to request
asynchronous NFS traffic.
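(The client mount looks roughly like the sketch below; note that the
'async' mount option on a Linux client only affects client-side
write-back caching, and there is no client option that makes the
server commit writes lazily. Server and export names are hypothetical.)

   mount -t nfs -o rw,hard,intr,async,rsize=32768,wsize=32768 \
       thor:/tank/export /mnt/tank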
The SSD devices we've used are OCZ Vertex Turbo 30 GB disks.
Data was gathered using iozone from the centos machine:
(iozone -c -e -i 0 -i 1 -i 2 -o -a).
For the archives, these options are interpreted as:
-a test a number of file and I/O sizes, 64 KB to 512 MB and 4 KB
   to 16 MB respectively.
-c measure close() timing (note: Solaris file systems enforce
   sync-on-close; some Linux file systems make this optional,
   thereby risking data)
-e include flush in timing calculations (note: before you measure
   this, make sure your NFS server actually respects syncs;
   Solaris does)
-i 0 run write/rewrite test
-i 1 run read/re-read test (note: this tests the server-side read
     cache, and is probably not a very useful test unless both your
     client and server are memory constrained)
-i 2 run random read/write test
-o writes are synchronously written to the disk. Files are opened
with the O_SYNC flag. This would make sense if your workload
is a database, like Oracle, which opens its datafiles O_DSYNC.
It is not at all representative of normal file system use.
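As a quick sanity check outside of iozone, GNU dd on the client can
reproduce the same O_SYNC behaviour (paths are hypothetical):

   # Sync writes (O_SYNC, comparable to iozone -o) vs. buffered writes:
   dd if=/dev/zero of=/mnt/tank/sync.dat bs=128k count=1024 oflag=sync
   dd if=/dev/zero of=/mnt/tank/buff.dat bs=128k count=1024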
OK, so you are interested in finding out how fast sync writes can be
written on a Solaris file system and NFS service that ensures sync
writes are flushed to media. To fully understand how this is impacted
by ZFS, you need to understand how the ZIL works and the impact
of the logbias setting. This is why I asked about wsize, because by
default, that is automatically adjusted by an algorithm that you can
override with logbias settings. For tests that do a number of different
I/O sizes, you should be able to see the knee in the curve as a
result of this algorithm. In short, you can optimize for latency or
bandwidth. Don't expect good latency when you are optimized for
bandwidth.
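For the archives, the knobs in question look roughly like this (the
dataset name is hypothetical):

   # Bias the ZIL toward latency (the default) or bandwidth:
   zfs set logbias=latency tank/export     # small sync writes hit the slog
   zfs set logbias=throughput tank/export  # sync writes go to the pool disks
   zfs get logbias tank/export

   # On the CentOS client, check the wsize that was actually negotiated:
   nfsstat -m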
-- richard