On Tue, 20 Oct 2009, Matthias Appel wrote:
> OK, that means that, over time, data will be distributed across all mirrors (assuming every block is eventually rewritten)?
Yes, but it is quite rare for all files to be rewritten. If you have reliable storage somewhere else, you could send your existing pool to it and then re-create the pool from scratch.

ZFS's existing limitations are a good reason to over-provision the pool from the start rather than waiting until it is nearly full before adding more disks. Regardless, the only real loss is the IOPS boost you would get if new writes could be spread across all of the disks.
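The evacuate-and-rebuild approach could look something like the following sketch. The pool name, receiving host, and destination dataset are all hypothetical; the commands are printed rather than executed so nothing destructive happens by accident:

```shell
# Sketch: evacuate a pool to remote storage before rebuilding it.
# POOL, DEST, and the backup dataset name are assumptions, not real systems.
POOL=tank            # hypothetical source pool
DEST=backuphost      # hypothetical host with reliable storage
SNAP="$POOL@evacuate"

# -R sends the whole pool hierarchy (datasets, snapshots, properties).
SEND_CMD="zfs send -R $SNAP | ssh $DEST zfs receive -d backup/$POOL"

# Print the commands for review; remove the echoes to actually run them.
echo "zfs snapshot -r $SNAP"
echo "$SEND_CMD"
```

After the receive completes and is verified, the original pool can be destroyed, re-created with the desired vdev layout, and the data sent back the same way in reverse.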
> I think a useful extension to ZFS would be a background task which distributes all used blocks across all vdevs.
Yes, that would be a useful option. It could be combined with a file optimizer which attempts to re-lay out large files for the most efficient access.
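Absent such a background task, a crude manual workaround is to rewrite files in place: a fresh copy allocates new blocks under the pool's current allocation policy, so newly added vdevs receive their share. A minimal sketch (the function name and temp-file suffix are my own; this is generic file rewriting, not a ZFS feature):

```shell
# Sketch: force reallocation of a file's blocks by rewriting it.
# rebalance_file is a hypothetical helper, not a ZFS command.
rebalance_file() {
    f="$1"
    tmp="$f.rebalance.$$"
    cp -p "$f" "$tmp"    # the copy allocates fresh blocks
    mv "$tmp" "$f"       # atomically replace the original (same filesystem)
}
```

Note the caveats: this breaks snapshot block sharing (the rewritten blocks are charged again), and it should not be run against files that are open for writing.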
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss