Spencer,

        Summary: I am not sure that v4 would have a significant
        advantage over v3 or v2 in all environments. I just believe it
        can have a significant advantage (with no or minimal drawbacks),
        and one should use it if at all possible to verify that it is
        not the bottleneck.

        So, no, I cannot say that NFSv3 has the same performance
        as v4. At its worst, I don't believe that v4 performs below
        v3, and at its best, it can perform 2x or more better
        than v3.
        
        So, the assumptions are:

        -  v4 is being actively worked on,
        -  v3 is stable, but no major changes are being made to it,
        -  leases,
        -  better data caching (delegations and client callbacks),
        -  state behaviour,
        -  compound NFS requests (procs) to remove the sequential rtt
           of individual NFS requests,
        -  significantly improved lookups for path resolution
           (multi-component lookup) and the attr requests that follow;
           I am sure that the attr calls are/were a significant
           percentage of NFS ops,
        -  etc...
                ** I am not telling Spencer anything he shouldn't
                   already know.

        So, with the compound procs in v4, the increased latencies of
        some of the ops might show a different congestion behaviour:
        it scales better across more environments and lets the IO
        bandwidth, rather than round trips, become more of the issue.
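
        As a rough, hypothetical sketch only (the path depth, op mix,
        and rtt are assumptions, not measurements), this is the kind of
        round-trip arithmetic I mean, comparing per-op v3 RPCs with a
        single v4 COMPOUND:

        # Hypothetical sketch (Python): round trips to resolve and stat
        # /a/b/c/file as individual NFSv3 RPCs versus one NFSv4 COMPOUND.
        # The rtt and op list are assumptions, not measured values.

        RTT_MS = 0.5                                  # assumed network rtt

        v3_ops = ["LOOKUP a", "LOOKUP b", "LOOKUP c",
                  "LOOKUP file", "GETATTR file", "ACCESS file"]
        v3_time = len(v3_ops) * RTT_MS                # one round trip per op

        v4_compound = ["PUTROOTFH", "LOOKUP a", "LOOKUP b", "LOOKUP c",
                       "LOOKUP file", "GETATTR", "ACCESS"]
        v4_time = 1 * RTT_MS                          # one round trip total

        print("v3: %d round trips, ~%.1f ms" % (len(v3_ops), v3_time))
        print("v4: 1 round trip (%d ops in one COMPOUND), ~%.1f ms"
              % (len(v4_compound), v4_time))

        The point is only that the sequential rtt cost of the lookups
        and the attr calls that follow them can collapse into one
        exchange.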

        So, yes, my assumption is that NFSv4 has a good chance of
        significantly outperforming v3. Either way, I know of no
        degradation in any op when moving to v4.

        So, again, if we are tuning a setup, I would rather see what
        ZFS does with v4, knowing that a few performance holes were
        closed or nearly closed relative to v3. I don't think this is
        specific to Sun; it would apply to all NFSv4 environments.

        ** Yes, however, even when the public (Paw, Spencer, etc.) NFSv4
        paper was done, the SFS work was stated as not yet done.

        -- LASTLY, I would also be interested in the actual timings
           of the different TCP segments: to see whether acks are
           constantly in the pipeline between the dst and src, or
           whether "slow-start restart" behaviour is occurring. It is
           also a theoretical possibility that, with delayed acks on
           the dst, the number of acks is reduced, which reduces the
           bandwidth (IO ops) of subsequent data bursts. Also, is
           Allman's ABC being used in the TCP implementation?
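
        As a rough sketch only (the segment size, initial window, and
        delayed-ack pattern are assumptions, not measurements of this
        setup), this is the effect I mean: with delayed acks, classic
        per-ack slow start grows the window more slowly, while ABC
        (Allman, RFC 3465) grows it by the bytes acked instead:

        # Hypothetical sketch (Python) of slow-start window growth with
        # delayed acks: classic per-ack counting vs Appropriate Byte
        # Counting (RFC 3465). All numbers are assumptions.

        SMSS = 1460                       # assumed sender max segment size
        ABC_LIMIT = 2 * SMSS              # RFC 3465 increment cap (L = 2)

        def slow_start(rounds, abc):
            cwnd = 2 * SMSS                         # assumed initial window
            for _ in range(rounds):
                segments = max(1, cwnd // SMSS)
                acks = max(1, segments // 2)        # delayed acks: 1 per 2 segs
                bytes_per_ack = 2 * SMSS            # each ack covers 2 segments
                for _ in range(acks):
                    if abc:
                        # ABC: grow by bytes acked, capped at 2*SMSS
                        cwnd += min(bytes_per_ack, ABC_LIMIT)
                    else:
                        # classic: grow by one SMSS per ack received
                        cwnd += SMSS
            return cwnd

        for name, abc in (("per-ack", False), ("ABC", True)):
            print("%7s: cwnd after 4 rounds = %6d bytes"
                  % (name, slow_start(4, abc)))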

        Mitchell Erblich
        ----------------

        

Spencer Shepler wrote:
> 
> On Apr 21, 2007, at 9:46 AM, Andy Lubel wrote:
> 
> > so what you are saying is that if we were using NFS v4 things
> > should be dramatically better?
> 
> I certainly don't support this assertion (if it was being made).
> 
> NFSv4 does have some advantages from the perspective of enabling
> more aggressive file data caching; that will enable NFSv4 to
> outperform NFSv3 in some specific workloads.  In general, however,
> NFSv4 performs similarly to NFSv3.
> 
> Spencer
> 
> >
> > do you think this applies to any NFS v4 client or only Sun's?
> >
> >
> >
> > -----Original Message-----
> > From: [EMAIL PROTECTED] on behalf of Erblichs
> > Sent: Sun 4/22/2007 4:50 AM
> > To: Leon Koll
> > Cc: zfs-discuss@opensolaris.org
> > Subject: Re: [zfs-discuss] Re: ZFS+NFS on storedge 6120 (sun t4)
> >
> > Leon Koll,
> >
> >       As a knowledgeable outsider I can say something.
> >
> >       The benchmark (SFS) page specifies NFSv3,v2 support, so I question
> >       whether you ran NFSv4. I would expect a major change in
> >       performance just from moving to NFS version 4 with ZFS.
> >
> >       The benchmark seems to stress your configuration enough that
> >       the latency to service NFS ops increases to the point where
> >       NFS requests go unserviced. However, you don't know the byte
> >       count per IO op. Reads are bottlenecked against the rtt of
> >       the connection, and writes are normally sub 1K with a later
> >       commit. However, many ops are probably just file handle
> >       verifications, which again are limited by your connection
> >       rtt (round trip time). So, my initial guess is that the number
> >       of NFS threads is somewhat related to the number of stateless
> >       (v4 now has state) per-file-handle ops. Thus, if a 64k
> >       ZFS block is being modified by 1 byte, COW would require a
> >       64k byte read, a 1 byte modify, and then allocation of another
> >       64k block. So, for every write op, you COULD be writing a
> >       full ZFS block.
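> >
> >       As a rough worked example only (the record size and write size
> >       are assumptions, not taken from this benchmark), the per-write
> >       cost of that read/modify/allocate cycle looks like this:
> >
> >       # Hypothetical sketch (Python): bytes actually moved for one small
> >       # write landing in a 64k copy-on-write block. Sizes are assumptions.
> >       recordsize = 64 * 1024          # assumed ZFS block (record) size
> >       app_write  = 1                  # assumed application write, 1 byte
> >
> >       bytes_read    = recordsize      # read the old block in
> >       bytes_written = recordsize      # allocate and write the new block
> >       amplification = (bytes_read + bytes_written) / float(app_write)
> >
> >       print("bytes moved: %d, amplification: %.0fx"
> >             % (bytes_read + bytes_written, amplification))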
> >
> >       This COW philosophy works best with extending delayed writes, etc.,
> >       where later reads make the trade-off of increased latency for
> >       the larger block on a read op versus being able to minimize
> >       the number of seeks on the write and the read. Basically, it is
> >       increasing the block size from, say, 8k to 64k. Thus, your read
> >       latency goes up just to get the data off the disk while
> >       minimizing the number of seeks, and the read-ahead logic for
> >       the needed 8k-to-64k file offset range can be dropped.
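> >
> >       As a rough illustration only (the seek time and transfer rate
> >       are assumed, not measured), the seek-versus-transfer trade-off
> >       of the larger block looks roughly like this:
> >
> >       # Hypothetical sketch (Python): per-read latency and seeks needed
> >       # to read 1 MB sequentially with 8k versus 64k blocks. Assumed
> >       # disk numbers only.
> >       seek_ms, xfer_mb_per_s = 8.0, 60.0    # assumed disk characteristics
> >
> >       for blk in (8 * 1024, 64 * 1024):
> >           per_read_ms = seek_ms + (blk / (xfer_mb_per_s * 1024 * 1024)) * 1000
> >           seeks_per_mb = (1024 * 1024) // blk   # worst case: one seek per block
> >           print("%5dk block: ~%.2f ms per read, %d seeks per MB"
> >                 % (blk // 1024, per_read_ms, seeks_per_mb))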
> >
> >       I do NOT know whether that 4000 IO ops load would match your
> >       maximal load, or whether your actual load would ever increase
> >       past 2000 IO ops. Secondly, jumping from 2000 to 4000 seems to
> >       be too big a jump for your environment; going to 2500 or 3000
> >       might be more appropriate. Lastly, wrt the benchmark, some
> >       remnants (NFS and/or ZFS and/or the benchmark) seem to remain
> >       that have a negative impact.
> >
> >       Finally, my guess is that this NFS and the benchmark are
> >       stressing small partial block writes, and that is probably one
> >       of the worst case scenarios for ZFS. So, my guess is the proper
> >       analogy is trying to kill a gnat with a sledgehammer. Each
> >       write IO op really needs to be equal to a full size ZFS block
> >       to get the full benefit of ZFS on a per byte basis.
> >
> >       Mitchell Erblich
> >       Sr Software Engineer
> >       -----------------
> >
> >
> >
> >
> >
> > Leon Koll wrote:
> >>
> >> Welcome to the club, Andy...
> >>
> >> I tried several times to attract the attention of the community to
> >> the dramatic performance degradation (about 3 times) of the NFS/ZFS
> >> vs. NFS/UFS combination - without any result:
> >> [1] http://www.opensolaris.org/jive/thread.jspa?messageID=98592
> >> [2] http://www.opensolaris.org/jive/thread.jspa?threadID=24015
> >>
> >> Just look at the two graphs in my posting dated August, 2006
> >> (http://napobo3.blogspot.com/2006/08/spec-sfs-bencmark-of-zfsufsvxfs.html)
> >> to see how bad the situation was; unfortunately, this situation
> >> hasn't changed much recently:
> >> http://photos1.blogger.com/blogger/7591/428/1600/sfs.1.png
> >>
> >> I don't think the storage array is a source of the problems you
> >> reported. It's somewhere else...
> >>
> >> -- leon
> >>
> >>
> >> This message posted from opensolaris.org
> >
> >
> 
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
