On Thu, 14 Aug 2008, Richard L. Hamilton wrote:
>
> Ok, but that leaves the question what a better value would be.  I gather
> that HFS+ operates in terms of 512-byte sectors but larger allocation units;
> however, unless those allocation units are a power of two between 512 and 128k
> inclusive _and_ are accordingly aligned within the device (or actually, with
> the choice of a proper volblocksize can be made to correspond to blocks in
> the underlying zvol), it seems to me that a larger volblocksize would not help;
> it might well mean that a single allocation-unit write by HFS+ turned into two
> blocks read and written by zfs because the alignment didn't match, whereas with
> the smallest volblocksize there should never be a need to read/merge/write.

More often than not, the default value (8K in this case, rather than 
the 128K I mentioned earlier) is the best choice. :-)
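
For reference, here is a minimal sketch of creating a zvol with the 
block size made explicit (the pool and volume names are hypothetical, 
and note that volblocksize can only be set at creation time):

  # create a sparse 20 GB zvol with an explicit 8K volume block size
  zfs create -s -V 20g -o volblocksize=8k tank/iscsivol

  # confirm the value that was applied
  zfs get volblocksize tank/iscsivol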

Volumes on zfs are interesting creatures, since zfs provides massive 
caching in memory (the ZFS ARC) and many accesses may be satisfied 
from cache rather than going to disk.  Non-synchronous writes go to 
server memory before being sent to disk at a more convenient time.  
The concern over 512-byte sectors is not well founded, because most 
modern filesystems use large blocks or extents.  It is true that if 
the alignment of volume blocks does not match what the client 
filesystem uses there will be more overhead, but perfect alignment is 
likely impossible.
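
As a rough back-of-the-envelope check of that alignment point (the 
offsets and sizes here are purely illustrative), the first and last 
8K volume blocks touched by a client write can be computed as:

  volblock=8192
  offset=6144; length=4096
  echo $(( offset / volblock )) $(( (offset + length - 1) / volblock ))

With offset=6144 this prints "0 1": the 4K write straddles a block 
boundary, so zfs has to read, merge, and rewrite two blocks.  An 
aligned write at offset=8192 prints "1 1" and touches only one block.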

> booted from a USB drive on a Mac Mini.  Still, I want to know if the pausing
> with iscsitgtd is in part something I can tune down to being non-obnoxious,
> or is (as I suspect) in some sense a real bug.

It may in fact be a real bug, or just a tuning problem.  It could 
easily be an issue with TCP settings or the network, and not with zfs 
at all.  We have seen quite a few reports on this list of zfs 
performance problems that ended up being caused by something in the 
networking path.

Try testing between the client and server with network benchmark 
software like 'ttcp' or 'netperf' and see if it shows any hiccups at 
the network level.
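
For example (the hostname here is just a placeholder), with the 
matching receiver already started on the server:

  # ttcp: first run 'ttcp -r -s' on the server, then on the client:
  ttcp -t -s zfsserver

  # netperf: first run 'netserver' on the server, then a 60-second
  # TCP stream test from the client:
  netperf -H zfsserver -l 60

Steady throughput here points back at zfs or the iSCSI target; stalls 
or wildly varying numbers suggest a problem in the network path.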

Bob
======================================
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
