On Tue, May 30, 2006 at 08:13:56AM -0700, Anton B. Rang wrote:
> Well, I don't know about his particular case, but many QFS clients
> have found the separation of data and metadata to be invaluable. The
> primary reason is that it avoids disk seeks. We have QFS customers who
                         ^^^^^^^^^^^^^^^^^^^^
Are you talking about reads or writes?

Anyway, for reads, separating data and meta-data helps, sure, but so
would adding mirrors.  And separating meta-data and data _caching_ may
make just as much of a difference.
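
To make the seek argument concrete, here's a back-of-the-envelope
sketch (Python; the layout and numbers are entirely made up, this has
nothing to do with QFS or ZFS internals).  It just counts how often a
single disk head has to jump when the inode area shares a spindle with
the data, versus when the data disk can stream uninterrupted:

def count_seeks(layout):
    """Count the position jumps a single disk head would make."""
    seeks, pos = 0, None
    for addr in layout:
        if pos is not None and addr != pos + 1:
            seeks += 1
        pos = addr
    return seeks

NFILES, BLOCKS_PER_FILE = 100, 64

# Shared spindle: inode table at the front of the disk, data extents
# after it, so every file touched means a hop back to the inode area.
data_start = NFILES
interleaved = []
for f in range(NFILES):
    interleaved.append(f)                                  # this file's inode
    ext = data_start + f * BLOCKS_PER_FILE
    interleaved.extend(range(ext, ext + BLOCKS_PER_FILE))  # its data

# Separate devices: meta-data lives elsewhere, the data disk streams.
separated = list(range(data_start, data_start + NFILES * BLOCKS_PER_FILE))

print("seeks, shared spindle:  ", count_seeks(interleaved))
print("seeks, separate devices:", count_seeks(separated))

Mirroring would cut the read seeks in a similar way by letting the
other copy service the meta-data reads, which is why I don't think the
separation is the only way to get there.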

> are running at over 90% of theoretical bandwidth on a medium-sized set
> of FibreChannel controllers and need to maintain that streaming rate.
> Taking a seek to update the on-disk inodes once a minute or so slowed
> down transfers enough that QFS was invented.  ;-)

So we're talking about writes, then, in which case ZFS should not need
to seek, because there are no fixed inode locations (there are fixed
root block locations, though).
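
A toy illustration of why (just a sketch of copy-on-write allocation in
general, not ZFS's actual code or on-disk format): every block, data or
meta-data, is written at the current allocation frontier, so the write
stream stays sequential, and the only fixed-location write is the root
block update once per transaction group:

class CowDevice:
    """Append-only toy: every write goes to the allocation frontier."""
    def __init__(self):
        self.frontier = 0
        self.writes = []                     # (address, kind) in issue order

    def alloc_write(self, kind):
        addr = self.frontier
        self.frontier += 1
        self.writes.append((addr, kind))
        return addr

dev = CowDevice()
for txg in range(3):                         # three transaction groups
    for _ in range(8):
        dev.alloc_write("data")
    dev.alloc_write("indirect")              # block pointers for the data
    dev.alloc_write("dnode")                 # per-file meta-data, also COW
dev.writes.append((0, "root"))               # the one fixed-location update

addrs = [a for a, _ in dev.writes[:-1]]
print("sequential until the root update:",
      all(b == a + 1 for a, b in zip(addrs, addrs[1:])))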

> (For what it's worth, the current 128K-per-I/O policy of ZFS really
> hurts its performance for large writes. I imagine this would not be
> too difficult to fix if we allowed multiple 128K blocks to be
> allocated as a group.)

I've been following the thread on this and that's not clear yet.

Sure, the block size may be 128KB, but ZFS can bundle more than one
block per file per transaction, so the block size shouldn't matter so
much.  There may be a meta-data and read I/O trade-off, but it should
not have much impact on write performance.  It may be that,
implementation-wise, the 128KB block size does affect write
performance, but design-wise I don't see why it should.
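
Here's roughly what I mean by bundling, as a sketch (the 1MB
aggregation limit and the assumption that the dirty blocks of a file
get allocated contiguously are mine, not something I've verified in the
code): the record size caps the unit of allocation and checksumming at
128K, but adjacent allocations can still be coalesced into larger
device I/Os before they're issued:

RECORDSIZE = 128 * 1024
MAX_IO     = 1024 * 1024    # assumed aggregation limit, not a ZFS constant

def coalesce(extents, max_io=MAX_IO):
    """Merge adjacent (offset, length) extents into I/Os up to max_io."""
    ios = []
    for off, length in sorted(extents):
        if ios and off == ios[-1][0] + ios[-1][1] and ios[-1][1] + length <= max_io:
            ios[-1] = (ios[-1][0], ios[-1][1] + length)
        else:
            ios.append((off, length))
    return ios

# 32 dirty 128K records of one file, allocated back to back in one txg.
records = [(i * RECORDSIZE, RECORDSIZE) for i in range(32)]
print([length // 1024 for _, length in coalesce(records)])  # [1024, 1024, 1024, 1024]

If the implementation really does issue one device I/O per 128K block
with no aggregation, that would be an implementation limit, not a
consequence of the block size itself.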
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
