On Thu, 18 Sep 2008, Nils Goroll wrote:
>
> On the other hand, isn't there room for improvement here? If it was possible
> to break large writes into smaller blocks with individual checksums (for
> instance those which are larger than a preferred_read_size parameter), we
> could still write all of these with a single RAIDZ(2) line, avoid the RAIDx
> write penalty and improve read performance because we'd only need to issue a
> single read I/O for each requested block - needing to access the full RAIDZ
> line only for the degraded RAID case.
>
> I think that this could make a big difference for write-once read many random
> access-type applications like DSS systems etc.

I imagine that this is indeed possible, but that the law of diminishing 
returns would prevail.  The per-block overhead would become much 
greater, so sequential throughput would be reduced and more disk 
space would be wasted.
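To make the diminishing-returns point concrete, here is a rough back-of-envelope sketch. The 128-byte per-block metadata figure is purely illustrative (on the order of a ZFS block pointer), not an exact on-disk number, and the block sizes are arbitrary examples:

```python
# Illustrative sketch: metadata overhead as a large write is split
# into smaller, independently checksummed blocks.  The 128-byte
# per-block cost is an assumption for illustration only.

PER_BLOCK_METADATA = 128  # bytes; assumed, not an actual ZFS figure

def overhead_pct(write_size, block_size):
    """Per-block metadata as a percentage of the data written."""
    nblocks = -(-write_size // block_size)  # ceiling division
    return 100.0 * nblocks * PER_BLOCK_METADATA / write_size

# Splitting a 1 MiB write into ever-smaller blocks:
for bs in (128 * 1024, 16 * 1024, 4 * 1024, 512):
    print(f"{bs:>7}-byte blocks: {overhead_pct(1 << 20, bs):.2f}% overhead")
```

The overhead grows from a fraction of a percent at 128 KiB blocks to 25% at 512-byte blocks, which is the sense in which ever-finer checksummed sub-blocks stop paying for themselves.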

You can be sure that the ZFS inventors thoroughly explored all of 
these issues and it would surprise me if someone didn't prototype it 
to see how it actually performs.

ZFS is designed for the present and the future.  Legacy filesystems 
were designed for the past.  In the present, the cost of memory is 
dramatically reduced, and in the future it will fall further still. 
This means that systems will contain massive cache RAM, which 
dramatically reduces the number of read (and write) accesses.  Also, 
solid state disks (SSDs) will eventually become common, and since SSDs 
don't exhibit a seek penalty, designing the filesystem to avoid seeks 
does not carry over into the long-term future.

Bob
======================================
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss