On Nov 19, 2007 10:08 PM, Richard Elling <[EMAIL PROTECTED]> wrote:
> James Cone wrote:
> > Hello All,
> >
> > Here's a possibly-silly proposal from a non-expert.
> >
> > Summarising the problem:
> >    - there's a conflict between small ZFS record size, for good random
> > update performance, and large ZFS record size for good sequential read
> > performance
> >
>
> Poor sequential read performance has not been quantified.

I think this is a good point.  A lot of solutions are being thrown
around, but the problems are still only theoretical at the moment.
Conventional solutions may not even be appropriate for something like
ZFS.
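
Just to put rough numbers on the trade-off being debated, here is some
back-of-envelope Python.  The record sizes, seek time and transfer rate
are assumptions picked for the example, not anything measured on ZFS,
and the read case is the absolute worst case in which every record
needs its own seek:

# Back-of-envelope only: sizes, seek time and transfer rate are assumed,
# not measured on ZFS.

def update_bytes_written(recordsize, update_size):
    """Under copy-on-write the whole record is rewritten, even for a
    small random update inside it."""
    return max(recordsize, update_size)

def worst_case_seq_read_ms(file_size, recordsize, seek_ms, mb_per_s):
    """Fully fragmented worst case: one seek per record, plus the plain
    transfer time for the data itself."""
    seeks = file_size / recordsize
    transfer_ms = file_size / (mb_per_s * 1024 * 1024) * 1000
    return seeks * seek_ms + transfer_ms

FILE = 1 * 1024**3       # 1 GiB file (assumed)
UPDATE = 8 * 1024        # 8 KiB random update (assumed)
for rs in (8 * 1024, 128 * 1024):
    amp = update_bytes_written(rs, UPDATE) / UPDATE
    read_s = worst_case_seq_read_ms(FILE, rs, seek_ms=8.0, mb_per_s=60.0) / 1000
    print(f"recordsize {rs // 1024:4d} KiB: {amp:2.0f}x write amplification, "
          f"{read_s:6.1f} s worst-case sequential read")

Whether that worst case ever shows up on a real pool is exactly the
part that has not been quantified.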

The point that makes me skeptical is this: blocks do not need to be
allocated in strict logical order to end up (nearly) physically
contiguous.  As long as rewritten blocks are reallocated close to the
originals, chances are that a sequential scan of the file will still be
mostly physically contiguous reads anyway.  ZFS's intelligent prefetch,
together with the disk's track cache, should allow for good performance
even in this case.
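
As a toy illustration of that argument, here's a quick simulation
sketch.  It is not ZFS's real allocator: the nearest-free-slot policy,
the scattered initial free space, and the 64-slot "prefetch window" are
all assumptions made up for the example, and the "anywhere" policy is
just a strawman for comparison.

import random

def simulate_rewrites(nblocks=10_000, free_frac=0.3, rewrite_frac=0.2,
                      window=64, policy="nearest", seed=1):
    """Toy model, not ZFS's actual allocator.  A file's blocks start out
    in logical order on disk with some free space scattered among them,
    then a random subset of the blocks is copy-on-written to a new slot.
    Returns the fraction of logically adjacent block pairs that end up
    within `window` physical slots of each other (a stand-in for
    prefetch / track-cache reach)."""
    random.seed(seed)
    slots = nblocks + int(nblocks * free_frac)
    layout = sorted(random.sample(range(slots), nblocks))
    location = list(layout)                    # block i -> physical slot
    free = set(range(slots)) - set(layout)     # scattered free space

    for i in random.sample(range(nblocks), int(nblocks * rewrite_frac)):
        old = location[i]
        if policy == "nearest":
            # assumed policy: nearest free slot to the old copy
            new = min(free, key=lambda slot: abs(slot - old))
        else:
            # strawman policy: any free slot, anywhere on the disk
            new = random.choice(tuple(free))
        free.remove(new)
        free.add(old)          # old copy freed, i.e. no snapshot pins it
        location[i] = new

    close = sum(1 for i in range(nblocks - 1)
                if abs(location[i + 1] - location[i]) <= window)
    return close / (nblocks - 1)

for policy in ("nearest", "anywhere"):
    print(f"reallocation policy {policy!r}: "
          f"{simulate_rewrites(policy=policy):.1%} of logically adjacent "
          f"pairs stay within the prefetch window")

The "nearest" number is basically the whole argument: as long as new
copies land near the old ones, a sequential scan is still almost
entirely short hops that prefetch and the track cache can hide.  The
toy deliberately ignores snapshots pinning old copies and long-term
churn, which is where the caveats below come in.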

ZFS may or may not already do this; I haven't checked.  Obviously, you
won't want to keep a year's worth of snapshots, or run the pool near
capacity.  With a few minor tweaks, though, it should work quite well.
Talking about fundamental ZFS design flaws at this point seems
unnecessary, to say the least.

Chris