Ben Rockwood wrote:
I was really hoping for some option other than ZIL_DISABLE, but finally gave up 
the fight.  Some people suggested that NFSv4 would help over NFSv3, but it 
didn't... at least not enough to matter.

ZIL_DISABLE was the solution, sadly.  I'm running B43/X86 and hoping to get up 
to B48 or so soonish (I BFU'd it straight to B48 last night and bricked it).

Here are the times.  This is an untar (gtar xfj) of SIDEkick 
(http://www.cuddletech.com/blog/pivot/entry.php?id=491) on NFSv4 on a 20TB 
RAIDZ2 ZFS Pool:

ZIL Enabled:
real    1m26.941s

ZIL Disabled:
real    0m5.789s


I'll update this post again when I finally get B48 or newer on the system and 
try it.  Thanks to everyone for their suggestions.


I imagine what's happening is that tar is a single-threaded application that's basically doing: open, asynchronous write, close. That runs really fast locally. But over NFS, because of the way it does close-to-open cache consistency, the client must make sure the writes are on stable storage at CLOSE time, so it issues a COMMIT, which effectively turns your asynchronous writes into synchronous ones. That leaves you with a single-threaded app doing synchronous writes: roughly half a disk rotation of latency per write.
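You can get a rough feel for the cost locally (this is just an illustration, not the original benchmark; the directory and file counts are made up): writing tiny files through the page cache vs. flushing each one to stable storage before the next starts, which is roughly what a COMMIT-on-close does to a single-threaded untar.

```shell
# Hypothetical local demo: 100 tiny files written asynchronously vs.
# flushed one at a time (dd's conv=fsync), mimicking COMMIT-on-close.
dir=$(mktemp -d)

# Async: the page cache absorbs the writes, so this finishes quickly.
time for i in $(seq 1 100); do
    dd if=/dev/zero of="$dir/async.$i" bs=4k count=1 2>/dev/null
done

# "Sync on close": each file is forced to stable storage before the
# next one starts, so every write pays disk latency.
time for i in $(seq 1 100); do
    dd if=/dev/zero of="$dir/sync.$i" bs=4k count=1 conv=fsync 2>/dev/null
done

rm -rf "$dir"
```

The gap between the two times is the same effect, just without the network round-trips on top.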

Check out 'mount_nfs(1M)' and the 'nocto' option. It might be OK for you to relax the cache consistency on the client's mount while you untar the file(s), then remount without the 'nocto' option once you're done.
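On the client that might look something like the following sketch (the server name and paths are placeholders; check mount_nfs(1M) on your release for the exact option spelling):

```shell
# Hypothetical client-side sequence -- server and path names are made up.
# Relax close-to-open consistency for the duration of the untar:
mount -F nfs -o vers=4,nocto server:/export/pool /mnt/work
(cd /mnt/work && gtar xfj SIDEkick.tar.bz2)

# Then restore normal cache consistency:
umount /mnt/work
mount -F nfs -o vers=4 server:/export/pool /mnt/work
```

Just remember 'nocto' trades consistency for speed, so don't leave it on a mount other clients are actively reading.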

Another option is to run multiple untars together. I'm guessing that you've got I/O to spare from ZFS's point of view.
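The parallel-untar idea can be sketched with background jobs; the archive names here are placeholders, not the actual files:

```shell
# Sketch: run several untars concurrently so ZFS sees more than one
# synchronous write stream at a time. Archive names are placeholders.
for f in part-a.tar.bz2 part-b.tar.bz2 part-c.tar.bz2; do
    d="dest-${f%.tar.bz2}"
    mkdir -p "$d"
    gtar xfj "$f" -C "$d" &    # each extraction is its own background job
done
wait    # block until every background untar has finished
```

Each job still pays the per-write COMMIT latency, but the waits overlap, so aggregate throughput goes up.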

eric

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
