sorry, here it is:
br
miha
On 8/18/2014 11:33 AM, emmanuel segura wrote:
Can you share your cman /etc/cluster/cluster.conf?
2014-08-18 7:08 GMT+02:00 Miha :
Dear all,
I'm in the process of setting up my first four-node cluster. I'm
using CentOS7 with PCS/Pacemaker/Corosync.
I've got everything set up with shared storage using GlusterFS. The
cluster is running and I'm in the process of adding resources. My
intention for the cluster is to use it to
Yes, there is:
stonith_admin --confirm=
I know you will confirm this, but it needs to be stated how critical it
is that you really have confirmed the node is off.
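For illustration, a minimal invocation might look like the following; the node
name is only an example, so substitute the node you have personally verified is
powered off:

    stonith_admin --confirm=node2

After that, Pacemaker treats the node as successfully fenced and will recover
its resources on the surviving node(s).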
digimer
On 18/08/14 02:01 PM, Felix Schrage wrote:
Thanks for the quick answer. I'll have a look at that.
Is there a way to manually force a failover when I can be sure the other
machine is down?
Kind regards
Felix
-----Original Message-----
From: Digimer [mailto:li...@alteeve.ca]
Sent: Monday, 18 August 2014 19:57
To: The Pacemaker
On 18/08/14 01:50 PM, Felix Schrage wrote:
Hi,
I'm building a two-node cluster running XenServer, pacemaker and DRBD. There's
a problem when testing failover by powering off the currently active node.
When using the fence_xenapi agent, the resource ClusterIP will not be moved to
the 2nd node until the first node has been successfully shut down.
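For context, a ClusterIP resource in a setup like this is usually an
ocf:heartbeat:IPaddr2 primitive; a minimal sketch, assuming the pcs shell and an
illustrative address, would be:

    pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=192.168.0.120 cidr_netmask=24 op monitor interval=30s

The behaviour described above is expected: Pacemaker will not recover the IP on
the second node until fencing of the first node has succeeded or has been
manually confirmed.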
Can you share your cman /etc/cluster/cluster.conf?
2014-08-18 7:08 GMT+02:00 Miha :
> Hi Emmanuel,
>
> this is my config:
>
>
> Pacemaker Nodes:
> sip1 sip2
>
> Resources:
> Master: ms_drbd_mysql
> Meta Attrs: master-max=1 master-node-max=1 clone-max=2 clone-node-max=1
> notify=true
> Resource: p_drbd_my
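For reference, a master/slave DRBD resource like the one quoted above is
typically created with pcs roughly as follows; the agent (ocf:linbit:drbd) and
the DRBD resource name (mysql) are assumptions here, since the quoted config is
cut off at "p_drbd_my":

    pcs resource create p_drbd_mysql ocf:linbit:drbd drbd_resource=mysql op monitor interval=30s
    pcs resource master ms_drbd_mysql p_drbd_mysql master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true

The meta attributes on the second command match those shown for ms_drbd_mysql
in the quoted configuration.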