To accelerate NFS (in particular single-threaded loads)
you need, somewhat badly, some *RAM (NVRAM, battery-backed
cache, or similar) between the server FS and its storage;
that *RAM is where NFS-committed data may be stored.

If the *RAM does not survive a server reboot, the client is
at risk of seeing corruption.

For example, UFS over WCE (write-cache-enabled) storage will be
fast and corruption-prone (from the client-side point of view).
ZFS over WCE storage behaves differently because it manages the
write cache in a way that makes serving NFS slow but safe.
zil_disable can be used to make ZFS serve NFS fast and
corruption-prone (from the client-side point of view).
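
For reference, the two ways to flip that switch (a sketch only; the
/etc/system syntax is from memory, so verify it against your build):

  # at runtime, via mdb (if memory serves, it only affects
  # filesystems mounted after the change)
  echo 'zil_disable/W 1' | mdb -kw

  # persistent across reboots, in /etc/system
  set zfs:zil_disable = 1

Either form trades client-visible data integrity for speed.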

-r



Joe Little writes:
 > On 11/21/06, Matthew B Sweeney - Sun Microsystems Inc.
 > <[EMAIL PROTECTED]> wrote:
 > >
 > >  Roch,
 > >
 > >  Am I barking up the wrong tree?  Or is ZFS over NFS not the right 
 > > solution?
 > >
 > 
 > I strongly believe it is. We just are at odds as to some philosophy.
 > Either we need NVRAM-backed storage between NFS and ZFS, battery-backed
 > memory that can survive other subsystem failures, or a change in the
 > code path to allow some discretion here. Currently, the third option is
 > 6280630, ZIL synchronicity, or as I refer to it, sync_deferred
 > functionality.
 > 
 > A combination is best, but the sooner this arrives, the better for
 > anyone who needs a general-purpose file server / NAS that comes
 > anywhere near the competition.
 > 
 > >  As I understand the ZIL's functionality, I may lose updates, but the
 > > filesystem would remain intact
 > >
 > >
 > >  from
 > > http://www.opensolaris.org/jive/thread.jspa?messageID=20935&#20935
 > >
 > > The ZIL is not required for fsckless operation. If you turned off
 > >  the ZIL, all it would mean is that in the event of a crash, it would
 > >  appear that some of the most recent (last few seconds) synchronous
 > >  system calls never happened. In other words, we wouldn't have met
 > >  the O_DSYNC specification, but the filesystem would nevertheless
 > >  still be perfectly consistent on disk.
 > >
 > >  Jeff
 > >  This application isn't anything transactional: a file is read, processed,
 > > and a new (modified) file is written to another store. So if all I'm
 > > risking is the currently open file, I can have the application rewrite it.
 > >
 > >  I haven't had a chance to test this yet; the machines are physically
 > > somewhere else and not networked to the outside world. Should I be using
 > > UFS over NFS?
 > >
 > >
 > >  Thanks
 > >  Matt
 > >
 > >
 > >
 > >  Roch - PAE wrote on 11/21/06 11:28:
 > >  Matthew B Sweeney - Sun Microsystems Inc. writes:
 > >  > Hi
 > >  > I have an application that uses NFS between a Thumper and a 4600. The
 > >  > Thumper exports 2 ZFS filesystems that the 4600 uses as an inqueue and
 > >  > outqueue.
 > >  >
 > >  > The machines are connected via a point-to-point 10GE link; all NFS is
 > >  > done over that link. The NFS performance doing a simple cp from one
 > >  > partition to the other is well below what I'd expect: 58 MB/s. I've
 > >  > tried some NFS tweaks, tweaks to the Neterion cards (soft rings etc.),
 > >  > and tweaks to the TCP stack on both sides, to no avail. Jumbo frames
 > >  > are enabled and working, which improves performance, but doesn't make
 > >  > it fly.
 > >  >
 > >  > I've tested the link with iperf and have been able to sustain 5 - 6
 > >  > Gb/s. The local ZFS file systems (12 disk stripe, 34 disk stripe)
 > >  > perform very well (450 - 500 MB/s sustained).
 > >  >
 > >  > My research points to disabling the ZIL. So far the only way I've found
 > >  > to disable the ZIL is through mdb: echo 'zil_disable/W 1' | mdb -kw. My
 > >  > question is: can I achieve this setting via a /kernel/drv/zfs.conf or
 > >  > /etc/system parameter?
 > >  >
 > >
 > > You may set it in /etc/system. We're thinking of renaming
 > > the variable to
 > >
 > >  set zfs_please_corrupt_my_client's_data = 1
 > >
 > > Just kidding (about the name), but it will corrupt your data.
 > >
 > > -r
 > >
 > >
 > >  >
 > >  > Thanks
 > >  > Matt
 > >  >
 > >
 > >  --
 > >
 > > Matt Sweeney
 > > Engagement Architect
 > > Sun Microsystems
 > > 585-368-5930/x29097 desk
 > > 585-727-0573 cell
 > >

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
