Hi folks,
I'm running a 2-node cluster with Pacemaker, dual-primary DRBD and OCFS2.
Now I'm trying to set up STONITH correctly, but my STONITH resources don't
start. I did some research but didn't find a solution to my problem.
This is my cib:
node server1
node server2
primitive DLM ocf:pacemaker:controld
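[The snippet cuts off here. For reference, a minimal sketch of what a
working STONITH setup for such a 2-node cluster might look like, in crm
shell syntax; the IPMI addresses and credentials below are assumptions,
not values from the original post:]

  primitive stonith-server1 stonith:external/ipmi \
        params hostname="server1" ipaddr="192.168.0.101" \
              userid="admin" passwd="secret" interface="lan" \
        op monitor interval="60s"
  primitive stonith-server2 stonith:external/ipmi \
        params hostname="server2" ipaddr="192.168.0.102" \
              userid="admin" passwd="secret" interface="lan" \
        op monitor interval="60s"
  # a fencing device must not run on the node it is meant to fence
  location l-stonith-server1 stonith-server1 -inf: server1
  location l-stonith-server2 stonith-server2 -inf: server2
  property stonith-enabled="true"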
Hi, I'm running a 2-node dual-primary DRBD cluster for some tests.
Everything works fine, but my cluster doesn't do file locking.
This is my cib configuration:
node server1
node server2
primitive DLM ocf:pacemaker:controld \
op monitor interval="120s"
primitive DRBD ocf:linbit:drbd \
params drbd_
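[The configuration is cut off here. For OCFS2 file locking to work, both
the DLM and the O2CB control daemons have to run on every node, i.e. as
clones. A minimal sketch of the usual locking stack, assuming the
ocf:pacemaker:o2cb agent is installed; the group and clone names are made
up:]

  primitive O2CB ocf:pacemaker:o2cb \
        op monitor interval="120s"
  # start DLM before O2CB, and clone the pair across all nodes
  group g_locking DLM O2CB
  clone cl_locking g_locking \
        meta interleave="true"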
2010/7/12 Florian Haas
> On 2010-07-12 17:17, Matteo wrote:
> > Hi, I'm running a DRBD 2 node dual-primary cluster for some tests.
> > Everything works fine, but my cluster doesn't do file locking.
>
> What evidence do you have that it doesn't?
>
If I o
an auto "yes" in the command line.
Thank you guys,
Matteo
No, the nodes are OK; I shouldn't delete them.
But to use "crm configure erase" I would need to stop any resource
that's running, correct?
Which is better? Is cibadmin too invasive?
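[For reference, a sketch of the two approaches being compared; both wipe
the whole configuration, so either way the resources should be stopped
first:]

  # erase the configuration from the crm shell
  crm configure erase
  # or do the same at a lower level
  cibadmin --erase --force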
On 06/08/2012 03:05 AM, Dejan Muhamedagic wrote:
Hi,
On Thu, Jun 07, 2012 at 12:01:16P
Hi guys,
I am having trouble understanding why this happens:
I have a cluster with 2 nodes; when I put one node in standby or
crash the resource, it is correctly promoted to master on the second machine.
But if I UNPLUG the first machine from the network, the resource won't
promote on the other o
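[A likely cause, assuming the defaults are in place: unplugging the
network splits the 2-node cluster and neither half has quorum, so with
the default no-quorum-policy="stop" the surviving node refuses to promote
anything. The usual sketch for 2-node clusters of this era is below;
fencing is still required to make this safe:]

  property no-quorum-policy="ignore" \
        stonith-enabled="true"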
e GatewayStatus, can I configure the cluster, telling
it that IF the network fails I want ResourceCustom, rseries:Master and
Gateway to be together? I ask because when I tried putting
GatewayStatusClone in the colocation... havoc happened... and nothing
worked. Did I do something wrong, and it shou
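[The usual pattern here, sketched below: don't colocate with the status
clone itself; keep the three resources together with pairwise
colocations and tie the master role to the connectivity attribute the
clone maintains. This assumes GatewayStatus is a ping-type resource
writing the default "pingd" attribute; the constraint names are made up:]

  colocation c_custom_with_master inf: ResourceCustom rseries:Master
  colocation c_gateway_with_master inf: Gateway rseries:Master
  # keep the master role off nodes with no gateway connectivity
  location l_master_on_connected rseries \
        rule $role="Master" -inf: not_defined pingd or pingd lte 0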
On two machines (A and B) I've created three identical LVM
partitions (DRBD backing devices) called srv, home and software.
The fs on all of them is ext4.
The home fs has quotas.
srv, home and software are exported via NFS.
Both A and B also have an extra locally mounted fs (data1 and
data2 respectively).
on \
op start interval="0" timeout="60" \
op stop interval="0" timeout="240" \
op monitor interval="20"
primitive p_service_nfs-kernel-server lsb:nfs-kernel-server \
op start interval="0" timeout="60" \
op stop int
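[The configuration is cut off here. For reference, a sketch of how the
NFS part of such a setup is commonly completed, with one
ocf:heartbeat:exportfs resource per exported filesystem; the directory,
client range and fsid below are assumptions, not values from the thread:]

  primitive p_exportfs_home ocf:heartbeat:exportfs \
        params directory="/home" \
              clientspec="192.168.0.0/24" \
              options="rw,no_root_squash" fsid="1" \
        op monitor interval="30s"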
0 expected votes
Obviously the cluster has no quorum :).
Any idea where the value 3075348770 comes from?
Cheers,
Matteo
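[In case it helps with the same symptom, a sketch of pinning the
expected vote count on a 2-node cluster of this era, assuming quorum is
computed by Pacemaker; expected-quorum-votes is a standard cluster
property:]

  crm configure property expected-quorum-votes="2"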
Hi,
from this tutorial http://clusterlabs.org/wiki/Debian_Lenny_HowTo it seems
that this version of the Debian package,
pacemaker-openais 1.0.4,
is working... is that true?
Cheers,
Matteo
Andrew Beekhof wrote:
On Sun, Oct 18, 2009 at 11:40 AM, wrote:
So for a working Debian
Hi all,
I have the same problem.
I use corosync 1.0.0-5~bpo50+1 and
pacemaker-openais 1.0.5+hg20090915-1~bpo50+1 on
Debian Lenny.
Regards,
Matteo
On 21/10/2009 15:11, Michael Schwartzkopff wrote:
Hi,
perhaps this is the wrong list, but anyway:
I have corosync-1.1.1 and