Spencer Shepler writes:
 > On Tue, eric kustarz wrote:
 > > Ben Rockwood wrote:
 > > >I was really hoping for some option other than ZIL_DISABLE, but finally 
 > > >gave up the fight.  Some people suggested NFSv4 helping over NFSv3 but it 
 > > >didn't... at least not enough to matter.
 > > >
 > > >ZIL_DISABLE was the solution, sadly.  I'm running B43/X86 and hoping to 
 > > >get up to 48 or so soonish (I BFU'd it straight to B48 last night and 
 > > >brick'ed it).
 > > >
 > > >Here are the times.  This is an untar (gtar xfj) of SIDEkick 
 > > >(http://www.cuddletech.com/blog/pivot/entry.php?id=491) on NFSv4 on a 
 > > >20TB 
 > > >RAIDZ2 ZFS Pool:
 > > >
 > > >ZIL Enabled:
 > > >real    1m26.941s
 > > >
 > > >ZIL Disabled:
 > > >real    0m5.789s
 > > >
 > > >
 > > >I'll update this post again when I finally get B48 or newer on the system 
 > > >and try it.  Thanks to everyone for their suggestions.
 > > >
 > > 
 > > I imagine what's happening is that tar is a single-threaded application 
 > > and it's basically doing: open, asynchronous write, close.  This will go 
 > > really fast locally.  But for NFS due to the way it does cache 
 > > consistency, on CLOSE, it must make sure that the writes are on stable 
 > > storage, so it does a COMMIT, which basically turns your asynchronous 
 > > write into a synchronous write.  Which means you basically have a 
 > > single-threaded app doing synchronous writes- ~ 1/2 disk rotational 
 > > latency per write.
 > > 
 > > Check out 'mount_nfs(1M)' and the 'nocto' option.  It might be ok for 
 > > you to relax the cache consistency for client's mount as you untar the 
 > > file(s).  Then remount w/out the 'nocto' option once you're done.
 > 
 > This will not correct the problem because tar is extracting and therefore
 > creating files and directories; those creates will be synchronous at
 > the NFS server and there is no method to change this behavior at the
 > client.
 > 
 > Spencer
 > 

Thanks for that (I also thought 'nocto' would help; now I see
it won't).

I would add that this is not a bug or a deficiency in the
implementation. Any NFS implementation tweak to make 'tar x'
go as fast as direct attached will lead to silent data
corruption ('tar x' succeeds but the files don't checksum
ok).

Interestingly, the quality of service of 'tar x' is higher
over NFS than over direct attach: with direct attach there
is no guarantee that the files have reached stable storage,
whereas with NFS there is.

Net net: for single-threaded 'tar x', data-integrity
considerations force NFS to provide a high-quality but slow
service. Direct attach doesn't carry those data-integrity
guarantees, and the community has managed to get by with the
lower-quality, higher-speed service.


 > > 
 > > Another option is to run multiple untars together.  I'm guessing that 
 > > you've got I/O to spare from ZFS's point of view.
 > > 

Or maybe a threaded tar?

To re-emphasise Eric's point, this type of slowness affects
single-threaded loads. There is lots of headroom for higher
performance by using concurrency.
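As a minimal sketch of the concurrent-untar idea, something like the
following would keep several synchronous-create streams in flight at
once (the archive names and DEST path are hypothetical; 'gtar' is GNU
tar as used earlier in the thread):

```shell
# Extract several archives concurrently; each stream's synchronous
# creates overlap with the other streams' waits on the disk.
# Archive names and DEST are hypothetical placeholders.
DEST=${DEST:-/mnt/pool}
for f in part1.tar.bz2 part2.tar.bz2 part3.tar.bz2; do
    gtar xfj "$f" -C "$DEST" &
done
wait    # block until every background extraction has finished
```

Each stream still pays the full synchronous-create latency per file,
but the streams pay it in parallel, so aggregate throughput goes up.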

-r

 > > eric
 > > 
 > > _______________________________________________
 > > zfs-discuss mailing list
 > > zfs-discuss@opensolaris.org
 > > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
