Hi Don,
I'm no snapshot expert, but I think you will have to remove the previous
receiving-side snapshots, at least.
I created a file system hierarchy that includes a lower-level snapshot,
created a recursive snapshot of that hierarchy, and sent it over to
a backup pool. Then I did the same steps again. See the example below.
You can see from my example that this process fails if I don't remove
the existing snapshots on the receiving side first. And because I didn't
remove the original recursive snapshots on the sending side, both the old
and new snapshots accumulate on the receiving side.
I'm sure someone else has better advice.
I had an example of sending root pool snapshots on the ZFS
troubleshooting wiki, but it was removed, so I will try to restore that
example.
Thanks,
Cindy
# zfs list -r tank/home
NAME                          USED  AVAIL  REFER  MOUNTPOINT
tank/home                    1.12M  66.9G    25K  /tank/home
tank/home@snap2                  0      -    25K  -
tank/home/anne                280K  66.9G   280K  /tank/home/anne
tank/home/anne@snap2             0      -   280K  -
tank/home/bob                 280K  66.9G   280K  /tank/home/bob
tank/home/bob@snap2              0      -   280K  -
tank/home/cindys              561K  66.9G   281K  /tank/home/cindys
tank/home/cindys@snap2           0      -   281K  -
tank/home/cindys/dir1         280K  66.9G   280K  /tank/home/cindys/dir1
tank/home/cindys/dir1@snap1      0      -   280K  -
tank/home/cindys/dir1@snap2      0      -   280K  -
# zfs send -R tank/home@snap2 | zfs recv -d bpool
# zfs list -r bpool/home
NAME                           USED  AVAIL  REFER  MOUNTPOINT
bpool/home                    1.12M  33.2G    25K  /bpool/home
bpool/home@snap2                  0      -    25K  -
bpool/home/anne                280K  33.2G   280K  /bpool/home/anne
bpool/home/anne@snap2             0      -   280K  -
bpool/home/bob                 280K  33.2G   280K  /bpool/home/bob
bpool/home/bob@snap2              0      -   280K  -
bpool/home/cindys              561K  33.2G   281K  /bpool/home/cindys
bpool/home/cindys@snap2           0      -   281K  -
bpool/home/cindys/dir1         280K  33.2G   280K  /bpool/home/cindys/dir1
bpool/home/cindys/dir1@snap1      0      -   280K  -
bpool/home/cindys/dir1@snap2      0      -   280K  -
# zfs snapshot -r tank/home@snap3
# zfs send -R tank/home@snap3 | zfs recv -dF bpool
cannot receive new filesystem stream: destination has snapshots (eg. bpool/home@snap2)
must destroy them to overwrite it
# zfs destroy -r bpool/home@snap2
# zfs destroy bpool/home/cindys/dir1@snap1
# zfs send -R tank/home@snap3 | zfs recv -dF bpool
# zfs list -r bpool
NAME                           USED  AVAIL  REFER  MOUNTPOINT
bpool                         1.35M  33.2G    23K  /bpool
bpool/home                    1.16M  33.2G    25K  /bpool/home
bpool/home@snap2                  0      -    25K  -
bpool/home@snap3                  0      -    25K  -
bpool/home/anne                280K  33.2G   280K  /bpool/home/anne
bpool/home/anne@snap2             0      -   280K  -
bpool/home/anne@snap3             0      -   280K  -
bpool/home/bob                 280K  33.2G   280K  /bpool/home/bob
bpool/home/bob@snap2              0      -   280K  -
bpool/home/bob@snap3             0      -   280K  -
bpool/home/cindys              582K  33.2G   281K  /bpool/home/cindys
bpool/home/cindys@snap2           0      -   281K  -
bpool/home/cindys@snap3           0      -   281K  -
bpool/home/cindys/dir1         280K  33.2G   280K  /bpool/home/cindys/dir1
bpool/home/cindys/dir1@snap1      0      -   280K  -
bpool/home/cindys/dir1@snap2      0      -   280K  -
bpool/home/cindys/dir1@snap3      0      -   280K  -
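To collect the steps above in one place, here is a rough sketch as a shell helper. The names SRC, DST, and replicate() are my own for this example, not ZFS commands, and the function only prints each command in order so the sequence can be reviewed (or piped to sh) before touching a real pool:

```shell
#!/bin/sh
# Sketch of the cycle above: snapshot, clear the receiving side,
# then send a full replication stream. SRC, DST, and replicate()
# are hypothetical names for this example, not part of ZFS.
SRC=tank/home
DST=bpool/home

replicate() {
    prev="$1"   # snapshot already on the receiving side (e.g. snap2)
    new="$2"    # new snapshot to create and send (e.g. snap3)

    # 1. Recursive snapshot of the whole source hierarchy.
    echo "zfs snapshot -r $SRC@$new"

    # 2. Clear the receiving side first, or the full-stream receive
    #    fails with "destination has snapshots ... must destroy them
    #    to overwrite it". Note that snapshots existing only on a
    #    deeper dataset (like dir1@snap1 above) need their own destroy.
    echo "zfs destroy -r $DST@$prev"

    # 3. Full replication stream: -R includes the children, -d strips
    #    the source pool name on receive, -F forces a rollback of the
    #    destination first.
    echo "zfs send -R $SRC@$new | zfs recv -dF ${DST%%/*}"
}

# Prints the three commands; pipe the output to sh to actually run them.
replicate snap2 snap3
```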
On 12/01/10 11:30, Don Jackson wrote:
Hello,
I am attempting to move a bunch of zfs filesystems from one pool to another.
Mostly this is working fine, but one collection of file systems is causing me problems,
and repeated re-reading of "man zfs" and the ZFS Administrators Guide is not
helping. I would really appreciate some help/advice.
Here is the scenario.
I have a nested hierarchy of ZFS file systems.
Some of the deeper file systems are snapshotted.
All of this exists on the source zpool.
First I recursively snapshotted the whole subtree:
zfs snapshot -r naspool@xfer-11292010
Here is a subset of the source zpool:
# zfs list -r naspool
NAME                                     USED  AVAIL  REFER  MOUNTPOINT
naspool                                 1.74T  42.4G  37.4K  /naspool
naspool@xfer-11292010                       0      -  37.4K  -
naspool/openbsd                          113G  42.4G  23.3G  /naspool/openbsd
naspool/openbsd@xfer-11292010               0      -  23.3G  -
naspool/openbsd/4.4                     21.6G  42.4G  2.33G  /naspool/openbsd/4.4
naspool/openbsd/4.4@xfer-11292010           0      -  2.33G  -
naspool/openbsd/4.4/ports                592M  42.4G   200M  /naspool/openbsd/4.4/ports
naspool/openbsd/4.4/ports@patch000      52.5M      -   169M  -
naspool/openbsd/4.4/ports@patch006      54.7M      -   194M  -
naspool/openbsd/4.4/ports@patch007      54.9M      -   194M  -
naspool/openbsd/4.4/ports@patch013      55.1M      -   194M  -
naspool/openbsd/4.4/ports@patch016      35.1M      -   200M  -
naspool/openbsd/4.4/ports@xfer-11292010     0      -   200M  -
Now I want to send this whole hierarchy to a new pool.
# zfs create npool/openbsd
# zfs send -R naspool/openbsd@xfer-11292010 | zfs receive -Fv npool/openbsd
receiving full stream of naspool/openbsd@xfer-11292010 into npool/openbsd@xfer-11292010
received 23.5GB stream in 883 seconds (27.3MB/sec)
cannot receive new filesystem stream: destination has snapshots (eg. npool/openbsd@xfer-11292010)
must destroy them to overwrite it
What am I doing wrong? What is the proper way to accomplish my goal here?
And I have a follow up question:
I had to snapshot the source zpool file systems in order to zfs send them.
Once they are received on the new zpool, I really neither need nor want this
"snapshot" on the receiving side.
Is it OK to zfs destroy that snapshot?
I've been pounding my head against this problem for a couple of days, and I
would definitely appreciate any tips/pointers/advice.
Don
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss