comment below...

Uwe Dippel wrote:
Dear Richard,

 > > Could it be that you are looking for the zfs clone subcommand?
 >
 > I'll have to look into it !

I *did* look into it.
man zfs, /clone. This is what I read:

Clones
    A clone is a writable volume or file system whose initial contents are
    the same as another dataset. As with snapshots, creating a clone is
    nearly instantaneous, and initially consumes no additional space.

    Clones can only be created from a snapshot. When a snapshot is cloned,
    it creates an implicit dependency between the parent and child. Even
    though the clone is created somewhere else in the dataset hierarchy,
    the original snapshot cannot be destroyed as long as a clone exists.
    The "origin" property exposes this dependency, and the destroy command
    lists any such dependencies, if they exist.

    The clone parent-child dependency relationship can be reversed by using
    the "promote" subcommand. This causes the "origin" file system to become
    a clone of the specified file system, which makes it possible to destroy
    the file system that the clone was created from.
...
zfs clone snapshot filesystem|volume

    Creates a clone of the given snapshot. See the "Clones" section for
    details. The target dataset can be located anywhere in the ZFS
    hierarchy, and is created as the same type as the original.
...
Example 9 Creating a ZFS Clone

The following command creates a writable file system whose initial contents are the same as "pool/home/bob@yesterday".

         # zfs clone pool/home/bob@yesterday pool/clone

Richard, I can read and usually understand Shakespeare, though my mother tongue is not English. And I've been in computers for 25 years, but this is definitely above my head.

Yeah, I know what you mean.  And I don't think that you wanted to clone
when a simple copy would suffice.

In order to understand clones, you need to understand snapshots.  In my
mind a clone is a writable snapshot, similar to a fork in source code
management.  This is not what you currently need.
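To make the snapshot/clone relationship concrete, here is a sketch (the pool, dataset, and snapshot names are made up for illustration):

```shell
# Take a read-only, point-in-time snapshot of an existing file system.
zfs snapshot tank/ws@before-fork

# Create a writable clone from that snapshot, elsewhere in the pool.
# Initially it shares all its blocks with the snapshot, so it is
# near-instant and consumes no extra space.
zfs clone tank/ws@before-fork tank/ws-experiment

# The snapshot cannot be destroyed while the clone depends on it;
# "promote" reverses the dependency if the clone becomes the real copy.
zfs promote tank/ws-experiment
```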

The latter comes closest to being understood, but it does not address my persistent problem: my slices live on other disks, and I don't want a new pool inside my file system.

zpools are composed of devices.
ZFS file systems are created inside zpools.
Historically, a file system was created on one device, and there was only
one file system per device.  If you don't understand this simple change,
then the rest gets very confusing.
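A minimal sketch of that layering (the device and pool names are hypothetical):

```shell
# One pool built from two whole disks (here mirrored);
# the device names are just examples.
zpool create tank mirror c0t0d0 c0t1d0

# Many file systems can then be created inside the one pool;
# they all draw from the pool's shared storage, with no slicing.
zfs create tank/home
zfs create tank/home/uwe
```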

To me it currently looks like a 'dead' invention, like so many other great ideas in the history of mankind. Seriously, I saw the Flash presentation and knew ZFS is *the* file system for at least as long as I live! On the other hand, it needs a 'handle'; it needs to solve legacy problems. To me, the worst decision taken so far is that we cannot readily associate an arbitrary disk partition or slice - even one formatted as ZFS - with a mount point in our systems; do something that we control; and then relinquish the association.

See previous point.

In order to be accepted broadly, IMHO a new file system - as much as it shines - can only succeed if it offers a transition from what we system admins have been doing all along, and then adds all those fantastic features. Look, I was feeling kind of bad and stupid about my initial post, because I'd answer RTFM myself if someone asked this on a BSD or Linux list. And the desire is so straightforward:
- replicating an existing, 'live' file system onto another drive, any other drive

tar, cpio, rsync, rdist, cp, pax, zfs send/receive,... take your pick.
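Of those, zfs send/receive is the ZFS-native route. A sketch (pool and snapshot names are made up):

```shell
# Snapshot the live file system, then replicate it to a second pool.
zfs snapshot tank/home@replica1
zfs send tank/home@replica1 | zfs receive backup/home

# Later, send only the changes made since the first snapshot
# (-i sends an incremental stream between the two snapshots).
zfs snapshot tank/home@replica2
zfs send -i tank/home@replica1 tank/home@replica2 | zfs receive backup/home
```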

- associate (mount) any slice from an arbitrary other drive to a branch in my file system

Perhaps you are getting confused over the default mount point for ZFS
file systems?  You can set a specific mount point for each ZFS file system
as a "mountpoint" property.  There is an example of this in the zfs(1m)
man page:
  EXAMPLES
       Example 1 Creating a ZFS File System Hierarchy

       The  following  commands  create   a   file   system   named
       "pool/home"  and  a  file  system named "pool/home/bob". The
       mount point "/export/home" is set for the parent  file  sys-
       tem, and automatically inherited by the child file system.
         # zfs create pool/home
         # zfs set mountpoint=/export/home pool/home
         # zfs create pool/home/bob

What you end up with in this example is:
        ZFS file system "pool/home" mounted as "/export/home" (rather than
          the default "/pool/home")
        ZFS file system "pool/home/bob" mounted as "/export/home/bob"
IMHO, this isn't clear from the example :-(
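One way to see the resulting mapping for yourself (a sketch, reusing the pool names from the example above):

```shell
# Show each dataset with the mount point it actually uses.
zfs list -o name,mountpoint

# The "mountpoint" property is inherited, so pool/home/bob
# picks up /export/home/bob automatically from its parent.
zfs get -r mountpoint pool/home
```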
 -- richard
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
