Hi, I did some tests on a Sun Fire X4540 with an external J4500 array (connected via two HBA ports), i.e. there are 96 disks in total, configured as seven 12-disk raidz2 vdevs (plus system, spare and unused disks), providing a ~63 TB pool with fletcher4 checksums. The system was recently equipped with a Sun Flash Accelerator F20 with 4 FMod modules to be used as log devices (ZIL). I was running the latest snv_134 software release.

Here are some first performance numbers for extracting an uncompressed 50 MB tarball on a Linux (CentOS 5.4 x86_64) NFS client, which mounted the test filesystem (no compression or dedup) via NFSv3 (rsize=wsize=32k,sync,tcp,hard).

standard ZIL:          7m40s     (ZFS default)
1x SSD ZIL:            4m07s     (Flash Accelerator F20)
2x SSD ZIL:            2m42s     (Flash Accelerator F20)
2x SSD mirrored ZIL:   3m59s     (Flash Accelerator F20)
3x SSD ZIL:            2m47s     (Flash Accelerator F20)
4x SSD ZIL:            2m57s     (Flash Accelerator F20)
disabled ZIL:          0m15s
(local extraction:     0m0.269s)

I was not so much interested in the absolute numbers but rather in the relative
performance differences between the standard ZIL, the SSD ZIL and the disabled
ZIL cases.

Any opinions on the results? I wish the SSD ZIL performance was closer to the
disabled ZIL case than it is right now.
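
The remaining gap between the SSD log and the disabled-ZIL case should essentially come down to the per-commit latency of the log device, since the extraction over NFS is dominated by synchronous operations (file creates plus COMMITs) that the server has to push through the ZIL. A quick way to look at that latency in isolation, independent of tar, is to time fsync() calls on the mounted filesystem; below is a minimal Python sketch (the mount point /mnt/zfstest, the file name and the iteration count are just placeholders, not taken from the setup above):

#!/usr/bin/env python
# Minimal sketch: time individual fsync() calls on the NFS mount. Each fsync
# forces an NFS COMMIT, which the server satisfies with a synchronous ZIL
# (slog) write, so the average here approximates the per-commit latency that
# bounds the tar extraction. Mount point and count are placeholders.
import os, time

MOUNT = "/mnt/zfstest"     # placeholder -- the actual NFS mount point
N = 1000                   # number of synchronous writes to time

path = os.path.join(MOUNT, "fsync_test.dat")
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
buf = b"x" * 4096          # one small block per commit

start = time.time()
for i in range(N):
    os.write(fd, buf)
    os.fsync(fd)           # NFS COMMIT -> synchronous log write on the server
elapsed = time.time() - start

os.close(fd)
os.unlink(path)
print("%d fsyncs in %.2fs -> %.2f ms per sync" % (N, elapsed, 1000.0 * elapsed / N))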

At the moment I tend to use two of the F20 FMods for the log and the other two FMods as L2ARC cache devices (although the system has lots of memory, i.e. the L2ARC is not really necessary). But the speedup from disabling the ZIL altogether is appealing (and would probably be acceptable in this environment).