On Wed, Feb 24, 2010 at 03:31:51PM -0600, Bob Friesenhahn wrote:
> With millions of such tiny files, it makes sense to put the small
> files in a separate zfs filesystem which has its recordsize property
> set to a size not much larger than the size of the files. This should
> reduce waste, resulting in reduced potential for fragmentation in the
> rest of the pool.
Tuning the dataset recordsize down does not help in this case. Files
smaller than the recordsize are already stored in a single block sized
to the file, so the recordsize cap never comes into play for them.

Nico
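If anyone wants to see this for themselves, below is a rough sketch in
Python (standard library only) that writes a few files of different
sizes into a directory on a ZFS dataset and compares the logical size
from stat with the allocated size (st_blocks * 512). The
/tank/smallfiles mountpoint is just a placeholder; the numbers assume
compression is off, depend on ashift, and ZFS may not report the
allocated blocks until the transaction group commits.

    #!/usr/bin/env python3
    """Rough check of how much space small files consume on a ZFS
    dataset: files smaller than the recordsize get one block sized
    (roughly) to the file, not a full recordsize block."""

    import os
    import tempfile
    import time

    # Hypothetical mountpoint of a ZFS dataset; point this at a real one.
    # Assumes compression=off so allocated size tracks logical size.
    DATASET_PATH = "/tank/smallfiles"

    for size in (1 << 10, 4 << 10, 64 << 10, 256 << 10):
        fd, path = tempfile.mkstemp(dir=DATASET_PATH)
        try:
            os.write(fd, b"x" * size)
            os.fsync(fd)
        finally:
            os.close(fd)
        # ZFS may not update st_blocks until the txg syncs, so wait a
        # few seconds before looking at the allocated size.
        time.sleep(6)
        st = os.stat(path)
        # st_blocks is in 512-byte units on Solaris and Linux.
        print(f"logical {st.st_size:>7} B -> allocated {st.st_blocks * 512:>7} B")
        os.unlink(path)

With the default 128k recordsize you should see the small files charged
roughly their own size (plus metadata), while the 256k file lands in two
full 128k records, which is the point: lowering recordsize only changes
what happens to files bigger than it.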