On 09/17/2013 04:18 AM, Lars Marowsky-Bree wrote:
> On 2013-09-16T16:36:38, Tom Parker <[email protected]> wrote:
>
>>> Can you kindly file a bug report here so it doesn't get lost
>>> https://github.com/ClusterLabs/resource-agents/issues ?
>> Submitted (Issue #308)
> Thanks.
>
>> It definitely leads to data corruption, and I think it has to do with
>> the locking not working properly on my LVM partitions.
> Well, not really an LVM issue. The RA thinks the guest is gone, the
> cluster reacts and schedules it to be started (perhaps elsewhere); and
> then the hypervisor starts it locally again *too*.
I mean the locking of the LVs.  I should not be able to mount the same
LV in two places.  I know I can lock each LV exclusively to a node, but
I am not sure how to tell the RA to do that for me.  At the moment I am
activating a VG with the LVM RA, and that VG is shared across all my
physical machines.  If I do exclusive activation, I think that locks
the whole VG to a particular node rather than the individual LVs.
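For reference, a minimal sketch of exclusive activation with the LVM
RA, using the crm shell.  The VG name "vg_guests" is hypothetical, and
this assumes the per-VG granularity described above: with
exclusive="true" the whole VG is activated on one node only, so per-LV
locking would mean putting each guest's LV in its own small VG (or
using the RA's tag-based activation).

```
# Hypothetical example: activate vg_guests exclusively on one node.
# Note: exclusive activation applies to the whole VG, not single LVs.
primitive p_vg_guests ocf:heartbeat:LVM \
    params volgrpname="vg_guests" exclusive="true" \
    op monitor interval="30s" timeout="30s"
```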
>
> I think changing those libvirt lifecycle settings to "destroy" should
> work - the cluster will then restart the guest appropriately, rather
> than the hypervisor restarting it behind the cluster's back.
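The libvirt settings in question are the per-domain lifecycle events.
A sketch of what the domain XML would look like with them set to
"destroy", so that libvirt never restarts the guest itself and the
cluster stays in control (the domain name is hypothetical):

```xml
<domain type='kvm'>
  <name>guest1</name>
  <!-- Lifecycle events: "destroy" tells libvirt to tear the domain
       down instead of restarting it, leaving recovery to the cluster. -->
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>destroy</on_reboot>
  <on_crash>destroy</on_crash>
</domain>
```

With on_reboot set to "destroy", even a guest-initiated reboot stops
the domain; the RA's monitor then sees it as down and the cluster
schedules the restart, avoiding the double-start race described above.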
>
>
> Regards,
>     Lars
>

_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
