On 9/1/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
Marlanne DeLaSource wrote:
> Thanks for all your answers.
>
> The initial idea was to make a dataset/snapshot and clone (fast) and then
> separate the clone from its snapshot. The clone could then be used as a new
> independent dataset.
>
> The send/receive subcommands are probably the only way to duplicate a dataset.

I'm still not sure I understand what about clones makes you not want to
use them.  What do you mean by "separate the clone from its snapshot"?
Is it that you want to destroy the filesystem that the clone was created
from?  To do that you can use 'zfs promote'.  Is it that you want to
guarantee space availability to overwrite it?  To do that you can use
'zfs set reservation'.
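
For anyone following the thread, a minimal sketch of both suggestions (the pool and dataset names "tank", "tank/data", and the 10g figure are hypothetical, and these commands need a live pool to run):

```shell
# Clone from a snapshot of a hypothetical filesystem tank/data:
zfs snapshot tank/data@snap1
zfs clone tank/data@snap1 tank/data-copy

# 'zfs promote' reverses the clone/origin dependency, so the former
# origin filesystem can then be destroyed if that's the goal:
zfs promote tank/data-copy
zfs destroy tank/data

# Or, to guarantee enough space to completely overwrite the clone:
zfs set reservation=10g tank/data-copy
```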

A couple of scenarios from environments that I work in, using "legacy"
file systems and volume managers:

1) Various test copies need to be on different spindles to remove any
perceived or real performance impact imposed by one or the other.
Arguably by having the IO activity spread across all the spindles
there would be fewer bottlenecks.  However, if you are trying to
simulate the behavior of X production spindles, doing so with 1.3X
or 2X spindles is not a proper comparison.  Hence being wasteful and
getting suboptimal performance may be desirable.  If you don't
understand that logic, you haven't worked in a big enough company or
studied Dilbert enough.  :)

2) One of the copies of the data needs to be portable to another
system while the original stays put.  This could be done to refresh
non-production instances from production, or to perform backups in
such a way that they put no load on the production spindles,
networks, etc.
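
That second scenario maps naturally onto send/receive.  A sketch, where the pool names "tank" and "otherpool" and the host "otherhost" are hypothetical, and the commands need real pools on both machines:

```shell
# Snapshot production data and stream it to another system over ssh:
zfs snapshot tank/prod@refresh
zfs send tank/prod@refresh | ssh otherhost zfs receive otherpool/nonprod

# The received copy sits on otherhost's own spindles, so test runs or
# backups against it generate no I/O on the production pool.
```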

Mike

--
Mike Gerdts
http://mgerdts.blogspot.com/
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
