I figured it out; it turns out there are some undocumented properties for
ping. It was using the default monitor timeout of 20 seconds and killing
the ping process when that time expired, before the ping run had actually
finished.
See:
http://hg.clusterlabs.org/pacemaker/stable-1.0/ra
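For reference, a hedged crm-shell sketch of the fix described above: give the ping monitor an explicit timeout longer than the worst-case ping run (attempts × per-host ping time) instead of relying on the default that kills the process mid-run. Host list and all values here are hypothetical:

```
# Hypothetical values: tune host_list, dampen, and the monitor op so
# that timeout > attempts * per-host ping time.
primitive p_ping ocf:pacemaker:ping \
    params host_list="192.168.1.1" dampen="5s" multiplier="100" \
    op monitor interval="15s" timeout="60s"
clone cl_ping p_ping meta globally-unique="false"
```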
On Sun, Feb 12, 2012 at 10:01 PM, diego fanesi wrote:
> Hi,
>
> I'm trying to install corosync with pacemaker using drbd + gfs2 with cman
> support.
Why?
GFS2 on dual-primary DRBD with Pacemaker 1.1.6 is working very well
in squeeze-backports with the dlm_controld.pcmk and gfs_controld.pcmk
da
I'm sorry, the problem is now a little bit different. Now I can find cman
support in the pacemakerd features:
# pacemakerd --features
Pacemaker 1.1.6 (Build: 9971ebba4494012a93c03b40a2c58ec0eb60f50c)
Supporting: generated-manpages agent-manpages ncurses cman
corosync-quorum heartbeat corosync snmp
I tried writing to the Debian maintainers' mailing list but didn't receive
an answer.
Now I'm trying to recompile the Debian package with the option --with-cman
in debian/rules, but the result is the same. I have installed libcman-dev,
libfence-dev, libstonith1-dev, and all the other dev packages that pacemaker
co
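For what it's worth, the usual Squeeze rebuild loop looks roughly like this (a sketch assuming the stock `pacemaker` source package; the edit to debian/rules is where --with-cman goes):

```shell
# Sketch of rebuilding the Debian pacemaker package with cman support.
apt-get source pacemaker            # fetch the source package
sudo apt-get build-dep pacemaker    # install the build dependencies
cd pacemaker-1.1.6*                 # directory name depends on the version
# edit debian/rules here to pass --with-cman to ./configure
dpkg-buildpackage -us -uc -b        # build unsigned binary packages
```

If --with-cman still seems to have no effect, checking config.log from the build for the cman probe can show whether libcman-dev was actually found by configure.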
Also, in my constraints section, the location definitions for the ping
connectivity resource don't specify a node attribute on rsc_location.
What is the default value of node then?
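For comparison, a connectivity-based location constraint typically names no node at all; it scores on the attribute the ping clone publishes. A hedged crm-shell sketch with hypothetical resource names:

```
# No node attribute: the rule itself decides where the constraint
# applies. Any node where pingd is undefined or 0 scores -INFINITY.
location loc_on_connected_node p_webserver \
    rule -inf: not_defined pingd or pingd lte 0
```

When no node is given, the constraint isn't bound to a particular node; the rule's expression is evaluated on each node to decide the score.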
Anlu
On Fri, Feb 17, 2012 at 10:57 AM, Anlu Wang wrote:
> I'm running 1.0.8. In accordance with the bug in the pos
I'm running 1.0.8. In accordance with the bug in the post you linked, I
changed the config so that interval is greater than dampen. Here is the
relevant section now:
Hi there!
I wrote documentation about how to build an active-active LDAP cluster
using Pacemaker on Debian Squeeze. In the examples I use a 2-node
cluster, but I have deployed it on a 3-node cluster with success, IPv4
& IPv6.
You can find the documentation here:
Part 1:
(not technical)
Part
Hi,
On Thu, Feb 16, 2012 at 07:57:14PM -0800, Anlu Wang wrote:
> I have three machines named anlutest1, anlutest2, and anlutest3 that I'm
> trying to get IP failover working on. I'm using heartbeat for the messaging
> layer, and everything works great when a machine goes down. But I also
> would l
Hi,
On Fri, Feb 17, 2012 at 10:37:39AM +0100, fatcha...@gmx.de wrote:
> Hi,
>
> we are using pacemaker 1.0.5-4.7 with heartbeat 3.0.0-33.10
> and a DRBD device on a CentOS 5.7 webserver cluster. When the
> active node comes under heavy load, the cluster sometimes starts
> to fail over.
What does
Hi,
we are using pacemaker 1.0.5-4.7 with heartbeat 3.0.0-33.10 and a DRBD device
on a CentOS 5.7 webserver cluster. When the active node comes under heavy load,
the cluster sometimes starts to fail over. Is there a way to make this behavior
less sensitive, like changing the retry/recheck time/c
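One knob worth checking first, assuming classic heartbeat ha.cf tuning (the values below are illustrative, not recommendations): raising deadtime gives a heavily loaded node more slack before its peers declare it dead and start a failover.

```
# /etc/ha.d/ha.cf -- illustrative timing values only
keepalive 2     # seconds between heartbeat packets
warntime 10     # log a warning when heartbeats arrive late
deadtime 30     # declare a node dead after this much silence
initdead 60     # extra slack while the cluster is starting up
```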
Hi Jiaju,
I am manually testing the behavior of the revoke implementation, and I
have some questions about revoke.
If I revoke the ticket on the site it was granted to, that site is freed
from the granted status,
but the other site then runs an automatic failover!
Is that the correct specification?
I think the expire timer is free in al