Hello Roch!

> Leave the default recordsize. With 128K recordsize, files smaller than
> 128K are stored as a single record tightly fitted to the smallest
> possible # of disk sectors. Reads and writes are then managed with
> fewer ops.
On writes ZFS is dynamic, but what about reads? If I have many small files (smaller than 128K), am I not wasting time reading 128K for each one? And once ZFS has allocated, say, a 64K FSB for a file, will it keep using 64K blocks if that file later grows?
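For what it's worth, this is easy to check empirically. A minimal sketch, assuming a dataset with the default 128K recordsize (the pool and file names below are made up): compare a file's logical size with its actual allocation before and after it grows.

    # compare logical size vs. allocated space on a 128K-recordsize dataset
    echo hello > /tank/fs/tiny                        # a few bytes of data
    ls -l /tank/fs/tiny ; du -k /tank/fs/tiny         # allocation: a few KB, not 128K
    dd if=/dev/urandom of=/tank/fs/tiny bs=1k count=300 2>/dev/null
    du -k /tank/fs/tiny                               # allocation after growing past 128K

(du output includes metadata and reflects compression, so treat the numbers as approximate.)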
 
> Not tuning the recordsize is very generally more space efficient and
> more performant. Large DBs (fixed-size, aligned accesses to an
> uncacheable working set) are the exception here (tuning recordsize
> helps), and a few other corner cases.
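(For reference, the database exception usually just means matching recordsize to the database's page/IO size on the dataset before the data files are written, since the property only affects newly written files. The dataset name and the 8K page size below are only examples.)

    zfs create tank/db
    zfs set recordsize=8K tank/db    # match the database page size, e.g. 8K
    zfs get recordsize tank/db       # verify; applies to files written from now on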
> 
> -r
> 
> 
> On 15 Sept. 08, at 04:49, Peter Eriksson wrote:
> 
> > I wonder if there exists some tool that can be used to figure out an
> > optimal ZFS recordsize configuration? Specifically for a mail server
> > using Maildir (one ZFS filesystem per user). I.e., lots of small
> > files (one file per email).
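Regarding Peter's original question: the input any recordsize decision would depend on is just the message-size distribution, which standard commands can produce. A rough sketch (the Maildir path is a placeholder):

    # bucket message sizes into power-of-two size classes
    find /var/mail/maildirs -type f -exec ls -ln {} + |
      awk '{ b = 512; while (b < $5) b *= 2; n[b]++ }
           END { for (b in n) printf "%8d bytes: %d files\n", b, n[b] }' |
      sort -n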
--
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
