On 29/11/2012 14:51, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Enda O'Connor - Oracle Ireland
Say I have an LDoms guest that is using a mirrored ZFS root pool, where the two
sides of the mirror come from two separate vds servers, that is
mirror-0
c3d0s0
c4d0s0
where c3d0s0 is served by one vds server, and c4d0s0 is served by
another vds server.
Now if, for some reason, this physical rig loses power, how do I know which
side of the mirror to boot from, i.e. which side is most recent?
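(For reference, the plumbing behind that looks roughly like the following; the
service, volume and domain names here are simplified placeholders, not my
actual config:)

    # on the first I/O domain, export a backend through its virtual disk service
    ldm add-vdsdev /dev/dsk/c0t0d0s2 rootvol0@primary-vds0
    ldm add-vdisk vdisk0 rootvol0@primary-vds0 ldg1

    # on the second I/O domain, export a second backend through its own vds
    ldm add-vdsdev /dev/dsk/c0t1d0s2 rootvol1@secondary-vds0
    ldm add-vdisk vdisk1 rootvol1@secondary-vds0 ldg1

    # inside the guest, attach the second vdisk to the root pool to form the mirror
    zpool attach rpool c3d0s0 c4d0s0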
If one storage host goes down, it should be no big deal: one side of the mirror
becomes degraded, and later, when the host comes back up, it resilvers.
If one storage host goes down, the OS continues running for a while, and then
*everything* goes down... Later you bring up both sides of the storage and boot
the OS, and the OS will know which side is more current because of the higher
TXG (transaction group number). So the OS will resilver the stale side.
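If you want to see that for yourself, you can compare the transaction group
recorded in the ZFS labels on each side. A rough sketch, using the device names
from your example (exact output varies by release):

    # dump the vdev labels and pick out the most recently synced txg on each side
    zdb -l /dev/dsk/c3d0s0 | grep txg
    zdb -l /dev/dsk/c4d0s0 | grep txg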
If one storage host goes down, the OS continues running for a while, and then
*everything* goes down... Later you bring up only one half of the storage and
boot the OS. Then the pool will refuse to mount, because with missing devices
it doesn't know whether the other side is more current.
As long as one side of the mirror disappears and reappears while the OS is
still running, no problem.
As long as all the devices are present during boot, no problem.
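In those cases there's nothing special to do; you can just watch the pool heal
from inside the guest, e.g.:

    # quick health check; prints "all pools are healthy" once resilvering is done
    zpool status -x
    # detailed view of the mirror, including resilver progress
    zpool status rpool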
The only problem is when you try to boot from one side of a broken mirror. If you need to do
this, you should mark the broken side as broken before shutting down - detach would
certainly do the trick, and perhaps "offline" would as well.
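Something along these lines, again using the device names from your example:

    # before shutting down, make it explicit that c4d0s0 is not to be trusted
    zpool detach rpool c4d0s0    # drops it from the mirror entirely
    # ...or, less drastically:
    zpool offline rpool c4d0s0

    # later, once the second vds/disk is back:
    zpool online rpool c4d0s0         # if it was offlined
    zpool attach rpool c3d0s0 c4d0s0  # if it was detached; kicks off a resilver
    # (on a SPARC root pool you may also need installboot on the re-attached disk)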
Thanks. From my testing, it appears that if a disk goes into the UNAVAIL state
and further data is written to the other disk, then even if I boot from the
stale side of the mirror, the boot process detects this and actually mounts
the good side and resilvers the side I passed as the boot argument.
If the disk is FAULTED, then booting from it results in ZFS panicking and
telling me to boot the other side.
So it appears that some failure modes are handled well, while others result in
a panic loop.
I have both sides in boot-device and both disks are available to OBP at
boot time in my testing.
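(In case it's useful to anyone else, I check/set that either from inside the
guest or from the control domain; "disk0"/"disk1" and "ldg1" below are just
placeholder alias/domain names:)

    # from inside the guest
    eeprom boot-device
    eeprom boot-device="disk0 disk1"

    # or from the control domain
    ldm set-variable boot-device="disk0 disk1" ldg1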
I'm just trying to determine the optimal value for autoboot in my LDoms
guests in the face of various failure modes.
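(The knob I mean is the OBP auto-boot? variable, which I can flip from the
control domain, e.g. for a placeholder guest "ldg1":)

    # don't boot automatically; let an admin pick a side after a failure
    ldm set-variable auto-boot\?=false ldg1
    # or boot on its own and let ZFS work out which side is current
    ldm set-variable auto-boot\?=true ldg1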
thanks for the info
Enda
Does that answer it?
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss