i made a mistake and created my zpool on a partition (c2t0d0p0).  now i
can't attach another identical whole drive (c3t0d0) to this pool; i get
an error that the new drive is too small (i'd have thought the whole
drive would be bigger than the partition!)
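for reference, the attach that fails, roughly (assuming the pool is
named 'data', since the copy below is data2 -- that name is my guess):

```shell
# attaching a whole disk to the partition-backed vdev fails:
zpool attach data c2t0d0p0 c3t0d0
# -> "device is too small"; my guess is that labeling the whole
#    disk (EFI) reserves some space, leaving slightly less usable
#    room than the bare p0 partition had
```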

the mount point of the "top" dataset is 'none', and various datasets
in the pool have fixed (not inherited) mount points.

if i do 'zfs send -R d...@0 | zfs recv -dF data2', it stops at the first
filesystem whose mount point is not empty.  i.e., as soon as it receives
a dataset and sets the mountpoint property, it actually tries to mount
the new replicated dataset and fails, i guess because zfs won't shadow
(mount over) a non-empty directory.

i believe the last example at
<http://docs.sun.com/app/docs/doc/817-2271/gfwqb?a=view>
works because at the top level (users), the mountpoint is just the
default, not explicit, so when replicated to users2, it becomes an
implicit /users2 instead of an explicit /users.
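a quick way to see whether a mountpoint is default or explicitly set
(users is the dataset name from that doc example):

```shell
# the 'source' column shows where the property value came from:
zfs get -o name,value,source mountpoint users
# source 'default' tracks the dataset name (/users becomes /users2
# after recv into users2); source 'local' means an explicit setting
# that gets replicated verbatim and collides with the original
```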

so, is there a way to tell zfs not to perform the mounts for data2?
or is there another way to replicate the pool on the same host without
exporting the original pool?
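for the record, what i'd try if my build supports it -- newer zfs recv
is documented to take a -u flag that skips mounting the received
filesystems (i'm not sure which build introduced it):

```shell
# sketch: receive the whole stream without mounting anything
zfs send -R d...@0 | zfs recv -duF data2
# then adjust or inherit mountpoints on data2 before mounting, e.g.:
# zfs inherit -r mountpoint data2
```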

somewhat related question: is there any way to tell zfs it's ok to
shadow a directory?  i would like to create datasets for the /usr/local
dirs in each sparse zone; however, because /usr is inherited and the
global zone's /usr/local is populated, when a zone boots with a dataset
whose mountpoint is /usr/local, it won't mount.  if i made /usr/local
a separate dataset in the global zone, would that work?  (i can't
test this right now.)
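one thing i'd test when i can: 'zfs mount -O' (on recent solaris
builds, i believe) requests an overlay mount, which might let the
dataset shadow the populated directory.  a sketch, with a hypothetical
dataset name data/zones/z1/usr-local:

```shell
# hypothetical dataset for one zone's /usr/local; canmount=noauto
# so it isn't mounted automatically at creation:
zfs create -o canmount=noauto \
    -o mountpoint=/zones/z1/root/usr/local data/zones/z1/usr-local
# -O asks for an overlay mount, i.e. mount even though the
# underlying directory is not empty (shadowing its contents):
zfs mount -O data/zones/z1/usr-local
```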

-frank
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss