This is what I've done, but I'm still a bit stuck, as it doesn't quite work!

I check the zpool list for the drive (I created backup1/data and backup2/data on 
the two USB drives):

/usr/sbin/zpool import backup2
/usr/sbin/zfs snapshot -r rpool@20090715033358
/usr/sbin/zfs destroy rpool/swap@20090715033358
/usr/sbin/zfs destroy rpool/dump@20090715033358
/usr/sbin/zfs send -R rpool@20090715033358 | /usr/sbin/zfs recv -d -F backup2/dump
/usr/sbin/zfs unmount -f /backup2   # one of the rpool bits is shared; if I
                                    # don't do this it refuses to export
/usr/sbin/zpool export backup2
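
For what it's worth, here is the same sequence wrapped up as a small script -- just a sketch; I'm assuming the destroyed snapshots were the swap and dump volumes (the names were mangled above), and the timestamp is pulled into a variable so it only appears once:

```shell
#!/bin/sh
# Sketch of the backup sequence; SNAP is the snapshot timestamp.
set -e                                  # stop at the first failed command
SNAP=20090715033358

/usr/sbin/zpool import backup2
/usr/sbin/zfs snapshot -r rpool@$SNAP
# Drop the swap/dump snapshots -- no point replicating those volumes.
/usr/sbin/zfs destroy rpool/swap@$SNAP
/usr/sbin/zfs destroy rpool/dump@$SNAP
/usr/sbin/zfs send -R rpool@$SNAP | /usr/sbin/zfs recv -d -F backup2/dump
/usr/sbin/zfs unmount -f /backup2       # a shared dataset blocks export otherwise
/usr/sbin/zpool export backup2
```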


The send/recv bit isn't working.  It moans:

/usr/sbin/zfs send -R rpool@20090715033358 | /usr/sbin/zfs recv -d -F backup2/dump
cannot receive new filesystem stream: destination has snapshots (eg.
backup2/dump@zfs-auto-snap:monthly-2009-06-29-12:47)
must destroy them to overwrite it.

I get dozens of auto-snapshots in there, and I'm not sure how they got there.
I haven't got Time Slider set to create anything in backup2/, so I think
they're being created when the send/recv runs?
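
One thing I could check (this is a guess on my part): `zfs send -R` replicates every snapshot that exists on the source, so if Time Slider is snapshotting rpool itself, those auto-snapshots would simply ride along into backup2 rather than being created there. Listing the source's snapshots would confirm it:

```shell
# List every snapshot under rpool; if the zfs-auto-snap:* names show up
# here, they are being carried across by 'zfs send -R', not created on
# the backup pool.
zfs list -H -t snapshot -o name -r rpool | grep zfs-auto-snap
```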

When I try again after deleting all the snapshots on backup2/ (it would be nice
if zfs destroy took multiple arguments!), it cheerfully recreates all those
snapshots, but I really only want it to grab the one I took.
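
In the meantime the multi-destroy can be faked with a loop -- a sketch, and note it destroys *every* snapshot under backup2, so check the list before piping it into destroy:

```shell
# zfs destroy only takes one snapshot at a time, so feed it a list:
zfs list -H -t snapshot -o name -r backup2 | xargs -n1 /usr/sbin/zfs destroy
```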

zfs list shows:
NAME                              USED  AVAIL  REFER  MOUNTPOINT
backup2                          15.4G   898G    23K  /backup2
backup2/dump                     15.4G   898G  84.5K  /backup2/dump
backup2/dump/ROOT                15.1G   898G    21K  legacy
backup2/dump/ROOT/b118           15.0G   898G  11.3G  /
backup2/dump/ROOT/opensolaris    37.4M   898G  5.02G  /
backup2/dump/ROOT/opensolaris-1  88.2M   898G  11.2G  /
backup2/dump/cashmore             194K   898G    22K  /backup2/dump/cashmore
backup2/dump/export               232M   898G    23K  /export
backup2/dump/export/home          232M   898G   737K  /export/home
backup2/dump/export/home/carl     228M   898G   166M  /export/home/carl
rpool                            17.4G   896G  84.5K  /rpool
rpool/ROOT                       15.1G   896G    19K  legacy
rpool/ROOT/b118                  15.0G   896G  11.3G  /
rpool/ROOT/opensolaris           37.7M   896G  5.02G  /
rpool/ROOT/opensolaris-1         88.4M   896G  11.2G  /
rpool/cashmore                    199K   896G    22K  /rpool/cashmore
rpool/dump                       1018M   896G  1018M  -
rpool/export                      232M   896G    23K  /export
rpool/export/home                 232M   896G   736K  /export/home
rpool/export/home/carl            228M   896G   166M  /export/home/carl
rpool/swap                       1018M   897G   101M  -

Now if I try again to send to it, is there a magic incantation for recv that 
says "incremental update"?  Is this the right way to do what I'm trying to do?
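
For the record, the incremental form I've been trying to puzzle out would presumably look something like this -- a sketch, assuming the previous snapshot (here @20090715033358) still exists on both rpool and backup2/dump:

```shell
# Take a new recursive snapshot, then send only the delta since the
# last snapshot that both sides have in common:
NEWSNAP=$(date +%Y%m%d%H%M%S)
/usr/sbin/zfs snapshot -r rpool@$NEWSNAP
/usr/sbin/zfs send -R -i rpool@20090715033358 rpool@$NEWSNAP | \
    /usr/sbin/zfs recv -d -F backup2/dump
```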
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
