On Tue, April 5, 2011 14:38, Joe Auty wrote:

> Migrating to a new machine I understand is a simple matter of ZFS
> send/receive, but reformatting the existing drives to host my existing
> data is an area I'd like to learn a little more about. In the past I've
> asked about this and was told that it is possible to do a send/receive
> to accommodate this, and IIRC this doesn't have to be to a ZFS server
> with the same number of physical drives?

The internal structure of the pool (how many vdevs, and what kind) is
irrelevant to zfs send / receive.  So I routinely send from a pool of 3
mirrored pairs of disks to a pool of one large drive, for example (it's
how I do my backups).   I've also gone the other way once :-( (It's good
to have backups).
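Something like this is all it takes (pool and snapshot names here are invented for illustration):

```shell
# Snapshot the whole source pool, then replicate it to a pool with a
# completely different vdev layout; the shapes don't have to match.
# ("tank" and "backup" are hypothetical pool names.)
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -Fdu backup
```

The -R on send builds a replication stream (all child datasets, snapshots, and properties); -F on receive rolls the target back if needed, and -u keeps the received datasets from being mounted on top of anything.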

I'm not 100.00% sure I understand what you're asking; does that answer it?

Mind you, this can be slow.  On my little server (under 1TB filled) the
full backup takes about 7 hours (largely because the single large external
drive is a USB drive; the bottleneck is the USB).  Luckily an incremental
backup is rather faster.
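The incremental case just names the previous snapshot as the starting point (again, names invented):

```shell
# Take a new snapshot, then send only the blocks that changed since
# the previous backup snapshot. "backup1"/"backup2" are hypothetical.
zfs snapshot -r tank@backup2
zfs send -R -i tank@backup1 tank@backup2 | zfs receive -Fdu backup
```

Since only changed blocks cross the wire, even a slow USB target finishes in a fraction of the full-backup time.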

> How about getting a little more crazy... What if this entire server
> temporarily hosting this data was a VM guest running ZFS? I don't
> foresee this being a problem either, but with so much at stake I thought
> I would double check :) When I say temporary I mean simply using this
> machine as a place to store the data long enough to wipe the original
> server, install the new OS to the original server, and restore the data
> using this VM as the data source.

I haven't run ZFS extensively in VMs (mostly just short-lived small test
setups).  From my limited experience, and what I've heard on the list,
it's solid and reliable, though, which is what you need for that
application.

> Also, more generally, is ZFS send/receive mature enough that when you do
> data migrations you don't stress about this? Piece of cake? The
> difficulty of this whole undertaking will influence my decision and the
> whole timing of all of this.

A full send / receive has been reliable for a long time.  With a real
(large) data set, it's often a long run.  It's often done over a network,
and any network outage can break the run, and at that point you start
over, which can be annoying.  If the servers themselves can't stay up for
10 or 20 hours you presumably aren't ready to put them into production
anyway :-).
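The usual network form is just a pipe through ssh, which is also exactly where the fragility lives (host and dataset names are made up):

```shell
# Typical over-the-network migration. If the ssh session drops
# mid-stream, the partial receive is thrown away and you start over.
zfs send tank/data@migrate | ssh backuphost zfs receive -F backup/data
```
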

> I'm also thinking that a ZFS VM guest might be a nice way to maintain a
> remote backup of this data, if I can install the VM image on a
> drive/partition large enough to house my data. This seems like it would
> be a little less taxing than rsync cronjobs?

I'm a big fan of rsync, in cronjobs or wherever.  What it won't do,
though, is properly preserve ZFS ACLs or ZFS snapshots.  I moved from using
rsync to using zfs send/receive for my backup scheme at home, and had
considerable trouble getting that all working (using incremental
send/receive when there are dozens of snapshots new since last time).  But
I did eventually get up to recent enough code that it's working reliably
now.
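The dozens-of-new-snapshots case is what the capital -I flag is for (snapshot names invented):

```shell
# -I (capital i) sends every intermediate snapshot between the two
# named ones, so all the snapshots taken since the last backup arrive
# on the target in a single stream.
zfs send -I tank@last-backup tank@today | zfs receive -Fdu backup
```

With lowercase -i you'd get only the delta between the two endpoints and lose the intermediate snapshots on the target, which is the sort of thing that made my early attempts painful.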

If you can provision big enough data stores for your VM to hold what you
need, that seems a reasonable approach to me, but I haven't tried anything
much like it, so my opinion is, if you're very lucky, maybe worth what you
paid for it.
-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss