On Jul 3, 2009, at 8:20 PM, James Lever <j...@jamver.id.au> wrote:
On 03/07/2009, at 10:37 PM, Victor Latushkin wrote:
A slog in a ramdisk is analogous to having no slog at all and disabling the
ZIL (well, it may actually be a bit worse). If, as you say, your old
system is 5 years old, the difference in the numbers above may be due
to differences in CPU and memory speed, which suggests that your
Linux NFS server is working at memory speed, hence the question.
If it does not honor sync semantics, you are really comparing apples
with oranges here.
A slog in a ramdisk is in no way similar to disabling the ZIL. This
is an NFS test, so if I had disabled the ZIL, writes would have to
go directly to disk (not to the ZIL) before returning, which would
potentially be even slower than a ZIL on the zpool.
The Linux NFS server appearing to perform at memory speed may just
be down to the BBWC in the LSI MegaRAID SCSI card. One of the
developers here explicitly ran tests to check these assumptions and
found no evidence that the Linux/XFS sync implementation is lacking,
even though there were issues with it in one previous kernel
revision.
XFS on LVM or EVMS volumes can't do barrier writes because LVM and
EVMS lack barrier support, so it doesn't do a hard cache sync the way
it would on a raw disk partition, which inflates the numbers. With a
battery-backed write cache the risk is negligible, but the numbers
are still higher than those from filesystems that do perform a hard
cache sync.
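
If you want to see whether a hard cache sync is actually happening on a
given setup, a quick test like the rough sketch below (plain C, with a
placeholder path on the filesystem under test) makes it fairly obvious:
fsync times in the microsecond range mean the flush is being skipped or
is being absorbed by the BBWC.

/* Minimal sketch: time fsync() on a test file to see whether the
 * filesystem is really flushing the drive cache.  The path below is a
 * placeholder; point it at the XFS filesystem you are testing.  On a
 * raw partition with working barriers, each fsync should cost at least
 * one cache flush (typically milliseconds on rotating disks). */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/export/test/fsync_probe";  /* placeholder path */
    char buf[4096];
    struct timeval start, end;
    int i, fd;

    memset(buf, 'x', sizeof(buf));
    fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    for (i = 0; i < 100; i++) {
        gettimeofday(&start, NULL);
        if (write(fd, buf, sizeof(buf)) != sizeof(buf)) { perror("write"); return 1; }
        if (fsync(fd) != 0) { perror("fsync"); return 1; }
        gettimeofday(&end, NULL);
        printf("fsync %3d: %ld us\n", i,
               (end.tv_sec - start.tv_sec) * 1000000L +
               (end.tv_usec - start.tv_usec));
    }
    close(fd);
    return 0;
}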
Try XFS on a raw partition and NFS with sync writes enabled and see
how it performs then.
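
Something along these lines (again just a sketch, with a placeholder
path on the NFS mount) run against both servers would give a
like-for-like number for synchronous write performance.

/* Minimal sketch of the comparison: a run of small synchronous writes
 * against a file on the NFS mount.  The mount point is a placeholder;
 * run the same binary against each server.  With sync exports and no
 * write cache in front of the disks, the writes/second figure should
 * be bounded by real commit latency rather than memory speed. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/mnt/nfs/test/sync_probe";  /* placeholder path */
    const int count = 1000;
    char buf[4096];
    struct timeval start, end;
    double secs;
    int i, fd;

    memset(buf, 'x', sizeof(buf));
    fd = open(path, O_WRONLY | O_CREAT | O_TRUNC | O_DSYNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    gettimeofday(&start, NULL);
    for (i = 0; i < count; i++) {
        if (write(fd, buf, sizeof(buf)) != sizeof(buf)) { perror("write"); return 1; }
    }
    gettimeofday(&end, NULL);
    close(fd);

    secs = (end.tv_sec - start.tv_sec) + (end.tv_usec - start.tv_usec) / 1e6;
    printf("%d x 4K O_DSYNC writes in %.2f s (%.0f writes/s)\n",
           count, secs, count / secs);
    return 0;
}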
-Ross