On 2012-11-19 20:28, Peter Jeremy wrote:
> Yep - that's the fallback solution. With 1874 snapshots spread over 54
> filesystems (including a couple of clones), that's a major undertaking.
> (And it loses timestamp information).
Well, as long as you still have the base snapshots for the clones and
know which ones they are, you can recreate the clones at the same
branching point on the new copy too.
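Roughly, once the parent filesystem has been replicated up to the
clone's origin snapshot, something like (names below are placeholders,
not yours):

  # snapshot the replicated parent at the clone's branching point,
  # then branch the clone off it and fill it in as described below
  zfs snapshot newpool/fs@base
  zfs clone newpool/fs@base newpool/fs-clone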
Remember to use something like "rsync -cavPHK --delete-after --inplace
src/ dst/" to do the copy, so that files removed from the source
snapshot are also removed on the target, changes are detected by
file checksum comparison (not just size and timestamp), and changes
are written in place within the target's copy of the file (rather than
rsync's default write-a-temporary-copy-and-rename behaviour), so that
the retained snapshot history stays sensible and space-efficient.
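To carry the snapshot history over, the basic idea is a loop like the
sketch below - snapshot and dataset names are placeholders you would
replace with your own, in creation order:

  # replay each snapshot of one filesystem onto the new copy, oldest first
  for snap in daily-2010-01-01 daily-2010-01-02 daily-2010-01-03; do
      rsync -cavPHK --delete-after --inplace \
          /oldpool/fs/.zfs/snapshot/$snap/ /newpool/fs/
      zfs snapshot newpool/fs@$snap    # freeze the target at the same point
  done

(The snapshots' own creation timestamps cannot be recreated this way,
as you noted.)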
Also, while you are at it, you can use different settings on the new
pool, based on what you have since learned about your data - perhaps
using better compression (IMHO stale old data that has become mostly
read-only is a good candidate for gzip-9), setting proper block sizes
for database files and disk images, maybe choosing better checksums,
and - if you have enough RAM and your data is repetitive enough -
perhaps employing dedup (run "zdb -S" on the source pool to simulate
dedup; if it predicts savings better than 3x, it may become worthwhile).
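For example (dataset names and values here are only placeholders - pick
them to match your own data):

  zdb -S oldpool                           # simulate dedup, prints estimated ratio
  zfs set compression=gzip-9 newpool/archive
  zfs set recordsize=16k newpool/db        # e.g. match the database page size
  zfs set checksum=sha256 newpool/data
  zfs set dedup=on newpool/data            # only if the simulation justified it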
But, yes, this will take quite a while if you do a plain rsync from
each snapdir, effectively walking your pool several thousand times.
Perhaps, if "zfs diff" performs reasonably for you, you can feed its
output to rsync as the list of files to transfer and save many cycles
that way.
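Roughly like this (a sketch only - the field parsing is an assumption,
and paths with spaces, renames and deletions would all need more
careful handling):

  # list paths that changed between two consecutive snapshots
  zfs diff -H oldpool/fs@prev oldpool/fs@this |
      awk '{print $2}' | sed 's|^/oldpool/fs/||' > /tmp/changed.list
  # copy only those paths out of the newer snapshot's snapdir
  rsync -cavPHK --inplace --files-from=/tmp/changed.list \
      /oldpool/fs/.zfs/snapshot/this/ /newpool/fs/
  zfs snapshot newpool/fs@this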
Good luck,
//Jim Klimov