2011-11-05 2:12, HUGE | David Stahl wrote:
> Our problem is that we need to use -R to snapshot and send all the
> child zvols, yet since we have a lot of data (3.5 TB), the hourly
> snapshots are cleaned up on the sending side, which breaks the
> script while it is running.
In recent OpenSolaris and Illumos releases you can use the
"zfs hold" command to protect a snapshot from deletion.
So before sending you'd walk the snapshots you want to
send and "hold" them; after the send is complete you'd
"zfs release" them so they can actually be deleted. It
would make sense to wrap all of this into a script...
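A minimal sketch of that sequence for a single snapshot
(the hold tag, snapshot name and receive target below are
placeholders, not something from your setup):

  #!/bin/sh
  # Hold a snapshot for the duration of a send, then release it.
  SNAP="pool/export@zfs-auto-snap:frequent-2011-11-05-17:00"  # placeholder
  TAG="sending"                  # arbitrary user-chosen hold tag
  zfs hold "$TAG" "$SNAP"        # 'zfs destroy' fails on a held snapshot
  zfs send "$SNAP" | ssh backuphost "zfs receive -d backuppool"
  zfs release "$TAG" "$SNAP"     # drop the hold so cleanup can proceed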
You can review the latest snapshots for a tree with a
one-liner like this:
# zfs list -t all -H -o name -r pool/export | grep -v @ | \
  while read DS; do zfs list -t snapshot -d 1 "$DS" | tail -1; done
pool/export@zfs-auto-snap:frequent-2011-11-05-17:00           0  -    22K  -
pool/export/distr@zfs-auto-snap:frequent-2011-11-05-17:00     0  -  4.81G  -
pool/export/home@zfs-auto-snap:frequent-2011-11-05-17:00      0  -   396M  -
pool/export/home/jim@zfs-auto-snap:frequent-2011-11-05-17:00  0  -  24.7M  -
If you only need filesystem OR volume datasets, you can
replace the first line with one of these:
# zfs list -t filesystem -H -o name -r pool/export | \
# zfs list -t volume -H -o name -r pool/export | \
For a recursive send you'd probably need to catch all the
identically-named snapshots in the tree at once, as sketched below.
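"zfs hold -r" does exactly that: it places one hold on the
identically-named snapshot in every descendant dataset, which
pairs naturally with "zfs send -R". Again, the names below are
placeholders:

  #!/bin/sh
  # Hold the whole snapshot tree, send it recursively, release the holds.
  SNAP="pool/export@zfs-auto-snap:frequent-2011-11-05-17:00"  # placeholder
  TAG="sending"                   # arbitrary user-chosen hold tag
  zfs hold -r "$TAG" "$SNAP"      # hold this @snapname down the whole tree
  zfs send -R "$SNAP" | ssh backuphost "zfs receive -d backuppool"
  zfs release -r "$TAG" "$SNAP"   # drop all of the recursive holds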
Another workaround is to keep more copies of the snapshots
you need, e.g. not 24 "hourlies" but 100 or so. That would
look like:
# svccfg -s hourly listprop | grep zfs/keep
zfs/keep  astring  24
# svccfg -s hourly setprop zfs/keep = 100
# svcadm refresh hourly
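After that, listprop should confirm the new value:

# svccfg -s hourly listprop | grep zfs/keep
zfs/keep  astring  100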
You could also use the zfs-auto-snapshot SMF-instance
attributes like "zfs/backup-save-cmd" to hook in a script
which would place a "hold" on the snapshot, then send it
and release the hold.
So you have a number of options almost out-of-the-box ;)
//Jim