Hi Mertol,
On 03/07/2009, at 6:49 PM, Mertol Ozyoney wrote:
> ZFS SSD usage behaviour depends heavily on the access pattern, and for
> async ops ZFS will not use the SSDs. I'd suggest you disable the
> SSDs, create a ram disk and use it as the slog device to compare the
> performance. If performance doesn't change, it means that the
> measurement method has some flaws or you haven't configured the slog
> correctly.
I did some tests with a ramdisk slog and the write IOPS seemed to run
around the 4k/s mark, vs about 800/s when using the SSD as slog and
200/s without a slog.
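For anyone wanting to repeat this, the ramdisk slog setup is only a
couple of commands. A minimal sketch, assuming a pool called "tank" and
a 4 GB ramdisk (both placeholders, and for testing only, since a
ramdisk slog gives sync writes no protection at all):

bash-3.2# ramdiskadm -a slogdisk 4g                   # creates /dev/ramdisk/slogdisk
bash-3.2# zpool add tank log /dev/ramdisk/slogdisk    # attach it as a separate log device
# ... run the tests ...
bash-3.2# zpool remove tank /dev/ramdisk/slogdisk     # needs a build with log device removal
bash-3.2# ramdiskadm -d slogdisk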
# osol b117 RAID10+ramdisk slog
#
bash-3.2# { time tar xf zeroes.tar; time rm -rf zeroes/; } 2>&1 | tee /root/zeroes-test-scalzi-dell-ramdisk_slog.txt
# tar
real 1m32.343s
# rm
real 0m44.418s
# linux+XFS on Hardware RAID
bash-3.2# { time tar xf zeroes.tar; time rm -rf zeroes/; } 2>&1 | tee /root/zeroes-test-linux-lsimegaraid_bbwc.txt
# tar
real 2m27.791s
# rm
real 0m46.112s
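The per-device write IOPS quoted above are easy to watch live while a
run is in progress, e.g. with zpool iostat (pool name is a placeholder):

bash-3.2# zpool iostat -v tank 5    # per-vdev ops/sec and bandwidth, sampled every 5 seconds

On the Linux/XFS box, plain "iostat -x 5" does much the same job.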
> Please note that SSDs are way slower than DRAM-based write caches.
> SSDs will show a performance increase when you create load from
> multiple clients at the same time, as ZFS will be flushing the dirty
> cache sequentially. So I'd suggest running the test from a lot of
> clients simultaneously.
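For reference, a multi-client run could be kicked off with something
like the below (client names, the NFS mount point and the tarball path
are all placeholders, not my actual setup):

# fan the same extract out from several clients at once, each into its own directory
for host in client1 client2 client3 client4; do
    ssh "$host" "mkdir -p /mnt/tank/$host && cd /mnt/tank/$host && time tar xf /var/tmp/zeroes.tar" &
done
wait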
I'm sure it will be a more performant system in general; however, it is
this specific set of tests on which I need to maintain or improve
performance.
cheers,
James