> On Fri, 5 Sep 2008, Marcelo Leal wrote:
> > 4 - The last one... ;-)  For the FSB allocation, how does ZFS know
> > the file size, to know whether the file is smaller than the FSB?
> > Is it something related to the txg? When the write goes to disk,
> > does ZFS know (in some way) whether that write is a whole file or
> > a piece of it?
> 
> For synchronous writes (file opened with the O_DSYNC option), ZFS
> must write the data exactly as it is provided in each write, so at
> any point in time the quality of the result (the amount of data in
> the tail block) depends on the application's requests.  However, if
> the application continues to extend the file via synchronous writes,
> the existing data in the under-sized "tail" block will be re-written
> to a new location (due to ZFS COW) with the extra data added.  This
> means that the filesystem block size is more important for
> synchronous writes, particularly if there is insufficient RAM to
> cache the already-written block.

 If I understand correctly, the recordsize is really important for big files. 
With small files and small updates, we have a good chance of the data being 
well organized on disk. I think the problem is big files with tiny updates. At 
pool creation time the recordsize is 128k, but I don't know whether that limit 
is real when, say, we are copying a DVD image. I think the recordsize could be 
larger. If so, could large files have a recordsize of, say, 1 MB? And what 
happens if we change it afterwards, to 1k?
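For reference, a file larger than the dataset's recordsize is simply stored as a sequence of full-size records plus one smaller tail record. A rough sketch of the arithmetic, assuming the default 128K recordsize and a ~4.7 GB DVD image (the image size here is an invented example, not a figure from this thread):

```python
RECORDSIZE = 128 * 1024      # ZFS default recordsize: 128 KiB
dvd_image = 4_700_000_000    # assumed size of a ~4.7 GB DVD image

# A large file is stored as `full` records of RECORDSIZE bytes,
# plus one smaller tail record holding the remainder.
full, tail = divmod(dvd_image, RECORDSIZE)
print(full, tail)
```

Changing the recordsize property afterwards only affects newly written files, so the layout of data already on disk stays as it was.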
> 
> For asynchronous writes, ZFS will buffer writes in RAM for up to
> five seconds before actually writing them.  This buffering allows
> ZFS to make better-informed decisions about how to write the data,
> so that the data is written to full blocks as contiguously as
> possible.  If the application writes asynchronously but then issues
> an fsync() call, any cached data will be committed to disk at that
> time.
> 
> It can be seen that for asynchronous writes, the quality of the
> written data layout depends somewhat on how much RAM the system has
> available and how fast the data is written.  With more RAM, there
> can be more useful write caching (up to five seconds) and ZFS can
> make better decisions when it writes the data, so that the data in a
> file can be written optimally, even under the pressure of multi-user
> writes.
> 
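To make the two write paths above concrete, here is a minimal sketch of how an application selects them (assuming a POSIX platform where Python exposes os.O_DSYNC; the file name is invented for illustration):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.dat")

# Synchronous path: O_DSYNC makes each write() reach stable storage
# before returning, so ZFS must commit whatever tail block the write
# size implies at that moment.
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DSYNC, 0o644)
os.write(fd, b"x" * 4096)
os.close(fd)

# Asynchronous path: plain writes are buffered in RAM (up to the
# transaction-group interval); an explicit fsync() commits the
# cached data to disk right away.
fd = os.open(path, os.O_WRONLY | os.O_APPEND)
os.write(fd, b"y" * 4096)
os.fsync(fd)                 # flush buffered data to disk now
size = os.fstat(fd).st_size
os.close(fd)
```

With the second pattern, ZFS gets a chance to coalesce the buffered data into full records before the fsync() forces it out.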

 Agree. 
 Any other ZFS experts care to answer the first questions? ;-)

> Bob
> ======================================
> Bob Friesenhahn
> [EMAIL PROTECTED],
> http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,
>    http://www.GraphicsMagick.org/
> _____________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Thanks bfriesen!

 Leal
--
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
