2012-10-04 21:19, Dan Swartzendruber writes:
Sorry to be dense here, but I'm not getting how this is a cluster setup,
or what your point wrt authoritative vs replication meant.  In the
scenario I was looking at, one host is providing access to clients - on
the backup host, no services are provided at all.  The master node does
mirrored writes to the local disk and the network disk.  The mirrored
write does not return until the backup host confirms the data is safely
written to disk.  If a failover event occurs, there should not be any
writes that the client was told had completed but that were not completed
on both sides.  The master node stops responding to the virtual IP, and the
backup starts responding to it.  Any pending NFS writes will presumably
be retried by the client, and the new master node has completely
up-to-date data on disk to respond with.  Maybe I am focusing too narrowly
here, but in the case I am looking at, there is only a single node which
is active at any time, and it is responsible for replication and access
by clients, so I don't see the failure modes you allude to.  Maybe I
need to shell out for that book :)
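
For reference, the layout you describe maps fairly naturally onto a
ZFS pool mirrored across the local disk and a LUN served by the backup
head (iSCSI is just one way to do it; every device, target and address
name below is made up for illustration):

  # on the master: local disk mirrored with a LUN from the backup head
  iscsiadm modify discovery --static enable
  iscsiadm add static-config iqn.2012-10.com.example:backup,192.168.10.2
  zpool create tank mirror c0t1d0 c0t2d0    # local half + iSCSI half
  # leave the pool's "sync" property at its default (standard) so that
  # synchronous NFS writes really hit both halves before returning

  # on failover, the surviving head imports its half and takes the VIP
  zpool import -f tank
  ifconfig e1000g0 addif 192.168.10.100/24 up

The questions below are about the cases where that tidy picture breaks: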

What if the backup host is down (i.e. the ex-master after the failover)?
Will your failed-over pool accept no writes until both storage machines
are working?

What if the network link between these two heads has a glitch, and as
a result both of them become masters of their private copies (mirror
halves), and perhaps both even manage to accept writes from clients?

This is the clustering part, which involves "fencing" off the node
that is considered dead, perhaps including a hardware reset request
just to make sure it really is dead, before taking over the resources
it used to master (STONITH: Shoot The Other Node In The Head).  In
particular, cluster designs suggest that the heartbeats used to confirm
both machines really are alive run over at least two separate links
(e.g. serial and LAN), with no active hardware (switches) in between,
separate from the data networking.
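
As a rough illustration of the "two separate heartbeat paths" idea, in
the style of a Linux-HA heartbeat ha.cf (timings, interface and node
names are only placeholders, not a recommendation):

  # /etc/ha.d/ha.cf (sketch): heartbeats over a crossover LAN cable and
  # a null-modem serial line, both independent of the data network
  keepalive 2
  deadtime 30
  serial /dev/ttyS0
  baud 19200
  bcast eth1
  auto_failback off
  node master-head
  node backup-head
  # plus a STONITH device (IPMI card, managed PDU, ...) so the survivor
  # can power-cycle its peer before importing the pool

The point of the separate wires is that the cluster stack should never
conclude its peer is dead merely because a switch rebooted.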


HTH,
//Jim
