On 03.07.09 15:34, James Lever wrote:
Hi Mertol,

On 03/07/2009, at 6:49 PM, Mertol Ozyoney wrote:

ZFS SSD usage behaviour depends heavily on the access pattern, and for async ops ZFS will not use the SSDs. I'd suggest disabling the SSDs, creating a ramdisk, and using it as a slog device to compare performance. If performance doesn't change, it means the measurement method has some flaws or you haven't configured the slog correctly.
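For reference, a minimal sketch of doing that on OpenSolaris; the pool name "tank" and the 2g size are placeholders, not from the original mail:

# create a 2 GB ramdisk (appears as /dev/ramdisk/slogdisk)
ramdiskadm -a slogdisk 2g

# attach it to the pool as a dedicated log (slog) device
zpool add tank log /dev/ramdisk/slogdisk

# when finished testing, detach and destroy it
# (log-device removal requires a build that supports it)
zpool remove tank /dev/ramdisk/slogdisk
ramdiskadm -d slogdisk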

I did some tests with a ramdisk slog, and the write IOPS seemed to run at about the 4k/s mark vs. about 800/s when using the SSD as slog and 200/s without a slog.
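One quick way to confirm where the synchronous writes are actually landing while a test runs (a sketch; "tank" again stands in for the real pool name):

# per-vdev read/write IOPS every second; the log device should
# be absorbing the synchronous writes if the slog is in use
zpool iostat -v tank 1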

# osol b117 RAID10+ramdisk slog
#
bash-3.2# (time tar xf zeroes.tar; time rm -rf zeroes/) 2>&1 | tee /root/zeroes-test-scalzi-dell-ramdisk_slog.txt
# tar
real    1m32.343s
# rm
real    0m44.418s

# linux+XFS on Hardware RAID
bash-3.2# (time tar xf zeroes.tar; time rm -rf zeroes/) 2>&1 | tee /root/zeroes-test-linux-lsimegaraid_bbwc.txt
# tar
real    2m27.791s
# rm
real    0m46.112s

The above results make me question whether your Linux NFS server is really honoring synchronous semantics...

A slog on a ramdisk is analogous to no slog at all with the ZIL disabled (actually, it may be slightly worse). Since you say your old system is 5 years old, the difference in the numbers above may simply reflect differences in CPU and memory speed, which suggests your Linux NFS server is working at memory speed; hence the question. If it does not honor sync semantics, you are really comparing apples with oranges here.
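To rule that out, it is worth checking the effective export options on the Linux server; an export running with the "async" option acknowledges writes before they reach stable storage. A sketch:

# list exports with their effective options; look for sync/async
exportfs -v

# the options actually in effect are also recorded here
cat /var/lib/nfs/etab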

victor
