> On Wed, 13 Aug 2008, Richard L. Hamilton wrote:
> >
> > Reasonable enough guess, but no, no compression, nothing like that;
> > nor am I running anything particularly demanding most of the time.
> >
> > I did have the volblocksize set down to 512 for that volume, since I
> > thought that for the purpose, that reflected hardware-like behavior.
> > But maybe there's some reason that's not a good idea.
> 
> Yes, that would normally be a very bad idea.  The default is 128K.
> The main reason to want to reduce it is if you have an application
> doing random-access I/O with small block sizes (e.g. 8K is common for
> applications optimized for UFS).  In that case the smaller block sizes
> decrease overhead since zfs reads and updates whole blocks.  If the
> block size is 512 then that means you are normally performing more
> low-level I/Os, doing more disk seeks, and wasting disk space.
> 
> The hardware itself does not really deal with 512 bytes any more since
> buffering on the disk drive is sufficient to buffer entire disk tracks,
> and when data is read, it is common for the disk drive to read the
> entire track into its local buffer.  A hardware RAID controller often
> extends that 512 bytes to a somewhat larger value for its own purposes.
> 
> Bob

Ok, but that leaves the question of what a better value would be.  I gather
that HFS+ operates in terms of 512-byte sectors but larger allocation units;
however, unless those allocation units are a power of two between 512 and 128k
inclusive _and_ are aligned accordingly within the device (or rather, can, with
the choice of a proper volblocksize, be made to correspond to blocks in the
underlying zvol), it seems to me that a larger volblocksize would not help.
It might well mean that a one-allocation-unit write by HFS+ turned into two
blocks read and written by zfs, because the alignment didn't match, whereas at
least with the smallest volblocksize there should never be a need to
read/merge/write.
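
To make that concrete, something like the following is what I have in mind on
the Solaris side (the pool/volume names and sizes are just placeholders, and 4k
is only a guess until I know the actual HFS+ allocation unit):

    # see what the existing zvol was created with
    zfs get volblocksize tank/macvol

    # volblocksize can only be set at creation time (a power of two between
    # 512 and 128k), so changing it means creating a new zvol and
    # re-initializing it from the Mac
    zfs create -V 100g -o volblocksize=4k tank/macvol2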

I'm having trouble figuring out how to get the information needed to make a
better choice on the HFS+ side; maybe I'll just fire up Wireshark and see if it
knows how to interpret iSCSI, and/or run truss on iscsitgtd to see what it is
actually reading from and writing to the zvol.  If there is a consistent least
common aligned blocksize, I would expect the latter especially to reveal it,
and probably the former to confirm it.
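
For the truss route, I'm picturing something along these lines (assuming
iscsitgtd does its zvol I/O with ordinary read/write/pread/pwrite calls, which
I haven't verified):

    # attach to the running target daemon and log every read/write it issues;
    # a consistent least common aligned size should show up in the arguments
    truss -t read,write,pread,pwrite -p `pgrep iscsitgtd`

If the offsets and sizes all land on some power of two, that would presumably
be the volblocksize worth matching.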

I did string Ethernet; I think that sped things up a bit, but it didn't change
the annoying pauses.  In the end, I found a 500GB USB drive on sale for $89.95
(US) and put that on the Mac, with one partition for backups and one each for
possible future [Open]Solaris x86, Linux, and Windows OSs, assuming they can be
booted from a USB drive on a Mac Mini.  Still, I want to know whether the
pausing with iscsitgtd is in part something I can tune down to being
non-obnoxious, or is (as I suspect) in some sense a real bug.

cc-ing zfs-discuss, since I suspect the problem might be there at least as much
as with iscsitgtd (not that the latter is a prize-winner, having core-dumped
with an assert() somewhere a number of times).
 
 