On 4/25/12 6:57 PM, Paul Kraus wrote:
On Wed, Apr 25, 2012 at 9:07 PM, Nico Williams <n...@cryptonector.com> wrote:
On Wed, Apr 25, 2012 at 7:37 PM, Richard Elling
<richard.ell...@gmail.com> wrote:
Nothing's changed. Automounter + data migration -> rebooting clients
(or close enough to rebooting). I.e., outage.
Uhhh, not if you design your automounter architecture correctly
and (as Richard said) have NFS clients that are not lame, to which I'll
add: automounters that actually work as advertised. I was designing
And applications that don't pin the mount points, and can be idled
during the migration. If your migration is due to a dead server, and you
have pending writes, you have no choice but to reboot the client(s) (and
accept the data loss, of course).
Which is why we use AFS for RO replicated data, and NetApp clusters with
SnapMirror and VIPs for RW data.
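To make the automounter point concrete, something like the following
replicated indirect map is what client-side failover looks like (the
hostnames and paths here are made up, and replicas only help for
read-only data):

    # /etc/auto_master (hypothetical entry)
    /tools  auto_tools  -ro,hard,intr

    # /etc/auto_tools -- each key lists replica servers; the client
    # mounts whichever replica answers and can fail over between them
    gcc   nfs-a:/export/tools/gcc  nfs-b:/export/tools/gcc
    # same idea with explicit weights (lower number = preferred)
    perl  -ro  nfs-a(1),nfs-b(2):/export/tools/perl

Before a planned migration, something like "fuser -c /tools/gcc" will
show which processes are pinning the mount so they can be idled first.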
To bring this back to ZFS: sadly, ZFS doesn't support NFS HA without
shared (or block-replicated) storage, because ZFS send / recv can't
preserve the data needed to generate the same NFS filehandles, so
failing over to a send / recv replica leaves clients with stale NFS
filehandles. Which frustrates me, because the technology needed for NFS
shadow copy (which is possible in Solaris - not sure about the open
source forks) is a superset of what's needed for HA, but can't be used
for HA.
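For reference, the send / recv replication I mean looks roughly like
this (pool, dataset, and host names are made up); the copy is fine, but
it doesn't come back with the identity needed to hand out the same NFS
filehandles, so clients that fail over to it see stale handles:

    # initial full replication from the primary
    zfs snapshot -r tank/export/home@repl-1
    zfs send -R tank/export/home@repl-1 | \
        ssh replica zfs receive -Fu tank/export/home

    # periodic incrementals thereafter
    zfs snapshot -r tank/export/home@repl-2
    zfs send -R -i @repl-1 tank/export/home@repl-2 | \
        ssh replica zfs receive -Fu tank/export/home

    # on the replica, after moving the service IP across:
    zfs set sharenfs=rw tank/export/home
    # ...at which point existing client filehandles for the old
    # server are stale, and the clients have to remount.

With shared storage (or block-level replication underneath, e.g. AVS),
the failover head imports the very same on-disk datasets, which is why
that path works for HA and a send / recv replica doesn't.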
--
Carson