On Wed, 19 Mar 2008, Bill Moloney wrote:

> When application IO sizes get small, the overhead in ZFS goes
> up dramatically.

Thanks for the feedback.  However, from what I have observed, that is 
not the full story.  On my own system, when a new file is written, the 
write block size does not make a significant difference to the write 
speed.  Similarly, read block size does not make a significant 
difference to the sequential read speed.  I do see a large difference 
in rates when an existing file is updated sequentially, and an 
orders-of-magnitude difference for random-I/O style updates.

I think that there are some rather obvious reasons for the difference 
between writing a new file and updating an existing one.  When 
writing a new file, the system can buffer up to a disk block's worth 
of data before issuing a disk I/O, or it can immediately write what 
it has; since the write is sequential, it does not need to read 
prior to writing (though there may be additional metadata I/Os).  When 
updating part of a disk block, there needs to be a read prior to the 
write if the block is not cached in RAM.
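The read-modify-write effect can be sketched with a toy block-cache model 
(this is not ZFS code; the 128 KiB block size and all names here are 
illustrative assumptions):

```python
# Toy model (not ZFS code): partial overwrites of uncached blocks
# force a read before the write; full-block or cached writes do not.

BLOCK_SIZE = 128 * 1024  # assumed block size for illustration


def write_range(storage, cache, offset, data):
    """Write `data` at byte `offset`; return (reads, writes) issued."""
    reads = writes = 0
    end = offset + len(data)
    pos = offset - (offset % BLOCK_SIZE)  # align to block boundary
    while pos < end:
        blk = pos // BLOCK_SIZE
        lo = max(offset, pos) - pos              # span within this block
        hi = min(end, pos + BLOCK_SIZE) - pos
        if blk not in cache:
            if hi - lo < BLOCK_SIZE:
                # Partial overwrite of an uncached block: must read the
                # old contents first (the read-modify-write penalty).
                cache[blk] = bytearray(storage.get(blk, bytes(BLOCK_SIZE)))
                reads += 1
            else:
                # Full-block overwrite: no read needed.
                cache[blk] = bytearray(BLOCK_SIZE)
        cache[blk][lo:hi] = data[pos + lo - offset : pos + hi - offset]
        storage[blk] = bytes(cache[blk])
        writes += 1
        pos += BLOCK_SIZE
    return reads, writes
```

Writing a new file sequentially in full blocks issues zero reads, while a 
small update landing inside an uncached block issues one read per touched 
block, which is one way the small-update penalty shows up.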

If the system is short on RAM, it may be that ZFS issues many more 
write I/Os than if it has a lot of RAM.

Bob
======================================
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss