Uwe Dippel wrote:
[EMAIL PROTECTED]:/u01/home# zfs snapshot u01/[EMAIL PROTECTED]
[EMAIL PROTECTED]:/u01/home# zfs send u01/[EMAIL PROTECTED] | zfs receive u02/home

One caveat here is that I could not find a way to back up the base of
the zpool "u01" into the base of zpool "u02". i.e.

zfs snapshot [EMAIL PROTECTED]
zfs send [EMAIL PROTECTED] | zfs receive u02

This does not work, because "u02" already exists; the receive must be done
into a brand-new filesystem. (zfs receive will create it.)
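To illustrate the workaround (all pool and snapshot names below are hypothetical, not taken from the thread), the trick is to receive into a child dataset that does not exist yet:

```shell
# This fails, because the pool's root dataset 'u02' already exists
# (bug 6358519):
#   zfs send u01@backup | zfs receive u02

# Workaround: receive into a not-yet-existing child dataset;
# 'zfs receive' creates it as part of the operation.
zfs snapshot u01@backup
zfs send u01@backup | zfs receive u02/u01-copy
```

This requires root privileges and live zpools, so treat it as a sketch of the pattern rather than a copy-paste recipe.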

FYI, this is bug 6358519.

P.S. I think the "zfs backup" functionality was replaced by "zfs send" --
zfs send just writes to stdout, so you can pipe it to ssh to send it to
another machine, redirect it to a file, etc.
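For example (the hostnames, dataset names, and file paths here are made up for illustration):

```shell
# Save the stream to a file:
zfs send tank/home@monday > /backup/home-monday.stream

# Pipe it over ssh to replicate onto another machine:
zfs send tank/home@monday | ssh backuphost zfs receive pool/home

# Later, restore the saved stream into a new dataset:
zfs receive tank/home-restored < /backup/home-monday.stream
```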

'zfs send' is simply the new name for 'zfs backup'. It more clearly expresses the intent -- to send your fs to another pool. This can be used for backups too, but it is not a complete backup solution.

I wonder if I should start a new thread for this, but to me, as a
'cool-eyed' third-party reviewer, ZFS has lost focus considerably: what
had been intended as a high-level filesystem 'language' or API has
recently - so it seems to me - regressed into a bunch of low-level,
atomistic commands.

Are there any other examples? backup->send is not relevant here (see above / below).

The removal of 'backup' is a good example: backup filesystem1 filesystem2
is a high-level approach. Now we / you are back to send / receive. Thirty
years ago, dump had exactly the same: dump / restore. The only difference
is that the word 'dump' carries a negative connotation. So a word with a
negative connotation was replaced with a misleading word: 'send'. What
progress!?

As mentioned above, this was a simple name change, intended to make it *more* clear what the intended use and functionality are. Calling it 'backup' is misleading because it is not a complete backup solution (e.g., it doesn't handle tape drives, restoring individual files, managing multiple backups, etc.).

Look at all the proposals here in response to my questions on how to get
an identical copy of a filesystem onto another partition!

Did you read the zfs(1m) manpage, including the following example?

     Example 12 Remotely Replicating ZFS Data

     The following commands send a full stream and then an incre-
     mental  stream  to  a  remote  machine,  restoring them into
     "poolB/received/fs@a"  and  "poolB/received/fs@b",   respec-
     tively.    "poolB"    must    contain    the   file   system
     "poolB/received",   and   must   not    initially    contain
     "poolB/received/fs".

       # zfs send pool/fs@a | \
         ssh host zfs receive poolB/received/fs@a
       # zfs send -i a pool/fs@b | ssh host \
         zfs receive poolB/received/fs


I can only urgently suggest reviewing the work done and, if the desire
to offer a high-level command set actually prevails, reverting to
high-level commands. 'backup' could be a great asset, as in: backup
[-f] pool|filesystem pool|filesystem. *That* would help the admin:
backing up a live pool into another pool, or into another filesystem.

I'm not sure that this is a typical "backup" scenario.

That said, this will be very easy to do once 6421958 "want recursive zfs send ('zfs send -r')" is integrated.

Actually, it is something like: "First, you have to make a snapshot.
Then you send this snapshot to a ZFS filesystem that exists. Then, you
can receive the file resulting from this action into a non-existing
filesystem". Sorry, that is *worse* than dump / restore! No, I don't
have to newfs the new drive any longer, but if the target exists, I
have to destroy it before I can receive the snapshot. That's not much
progress!

In order to support a more powerful and flexible model, old concepts (e.g., volume management) must sometimes be replaced with new concepts (e.g., pooled storage).

As I mentioned, we are working on making this easier to use. Your use case makes a number of assumptions that we didn't want to make for the general case (e.g., that the stream will be sent to and stored on a raw device on the same machine as the zpool). However, our infrastructure allows us to provide the type of simple, limited-use functionality you are requesting. That said, we have limited resources and we need to evaluate what will be most useful to the most customers. We must learn to walk before we can run.

And I don't even dare to attack all those fabulous underpinnings and
the huge development effort and progress of the work done. I do dare
to question, though, the party who signed off on the interface, and its
deviation from high-level, comprehensive commands to a piecemeal,
atomic, can-(and-must)-do-everything-and-anything approach.

You have mentioned one example, which I feel you have simply misunderstood. If there are any others, we'd be happy to hear them.

--matt
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
