On Tue, 2006-05-09 at 00:49 +0200, Patrick wrote:
> > In my experience, the approach and solution for "remote mirroring"
> > really depend on two things:
> >         1. are you doing disaster recovery, versus mirroring diversity?
> 
> I'm not actually sure. I'm currently mirroring from one disk device
> to the other over the network to cater for hardware failures and
> "software" failures (such as a kernel panic); the idea was to have
> an 'offline' machine that would hold a full copy of the data. How
> useful that would be in the real world is still to be decided.
> Currently I've got it clipped into a few other bits like Heartbeat,
> so it'll do a full failover and move, but I'm probably going to
> remove that due to the 'extra layer of complexity creating more
> complex problems'.
> 
> so I suppose that'd put me into the 'disaster recovery' class.

No.  You are describing a more common mirroring diversity setup.

> >         2. how far apart are the mirrored devices?
> 
> about 15cm-20cm (via crossover on separate interfaces, not network
> osmosis) (V20z's, btw)

Definitely not disaster recovery... if a tornado hit one box, it
would likely also hit the other.  Also, in disaster recovery
scenarios we often consider a required time delay between committing
data on the primary versus the secondary, in order to protect
against accidental data loss (e.g. rm *).

There are a couple of ways to do this, some of which aren't quite
ready for release and aren't part of the OpenSolaris tree, yet.

The Sun StorageTek Availability Suite software provides block-device
level replication and should work out of the box with ZFS (I dunno
for sure, but since they operate at different levels in the stack,
they should interoperate ok).  
http://www.sun.com/storagetek/management_software/data_protection/availability/

For a real cluster solution, which it sounds like you don't want,
Sun Cluster will have failover file system support for ZFS in an
upcoming release, much like what currently exists with other file
systems (UFS, QFS, VxFS).

It seems to me that DRBD is a compromise between the previous two.
I would say that they are doing the easy parts but putting off the
hard parts.

For more of a point-in-time snapshot solution, you could use zfs
send/receive.
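
As a rough sketch (the pool, file system, and host names here are
just placeholders), a periodic snapshot plus send/receive over ssh
looks something like:

    # First pass: full copy of a snapshot to the other box.
    # "tank/data", "tank/copy" and "backuphost" are made-up names.
    zfs snapshot tank/data@monday
    zfs send tank/data@monday | ssh backuphost zfs receive tank/copy

    # Later passes: send only the changes between two snapshots.
    # (Leave the receiving file system untouched in between.)
    zfs snapshot tank/data@tuesday
    zfs send -i tank/data@monday tank/data@tuesday | \
        ssh backuphost zfs receive tank/copy

Scheduling the receive some time after the snapshot also gives you
the commit-delay window mentioned above for surviving an accidental
rm *.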

There has also been a discussion on the requirements for a multi-node
ZFS implementation.  The window for comments may still be open.  See
the ZFS discuss forum archive for the threads.
 -- richard


