# HG changeset patch
# User Rainer Weikusat
# Date 1316036167 -3600
# Branch stable-1.0
# Node ID ea611ef8c1e6a9d294d9d0dff6db2f317232292b
# Parent a15ead49e20f047e129882619ed075a65c1ebdfe
This is an alternate fix for Bug #2528 based on a patch to the
Debian Squeeze pacemaker package used to prov
Hi,
Sorry for the rather lame question, but I see that Debian/Squeeze has
pacemaker 1.0.11-1~bpo60+1 in the backports. Can anyone point me to a
list of feature changes from the current 'stable' version
1.0.9.1+hg15626-1?
thanks
jf
On 2011-09-14 17:29, Schaefer, Diane E wrote:
> Hi,
>
> We are running a two-node cluster using pacemaker 1.1.5-18.1 with
> heartbeat 3.0.4-41.1. I am confused about the correct syntax to use when
> adding a location constraint using the crm shell. I would like a
> resource to always run on a particular node.
Hi,
We are running a two-node cluster using pacemaker 1.1.5-18.1 with heartbeat
3.0.4-41.1. I am confused about the correct syntax to use when adding a location
constraint using the crm shell. I would like a resource to always run on a
particular node. Here are the results of my experiments:
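For reference, the usual crm shell form for pinning a resource to one node is
a location constraint with an INFINITY score; a minimal sketch, assuming a
hypothetical resource p-db and node node1:

    crm configure location loc-db-on-node1 p-db inf: node1

A positive inf: score prefers node1 as strongly as possible but still lets the
resource fail over when node1 goes down; a finite score such as 100: expresses
a weaker preference.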
Hi,
On Thu, Sep 01, 2011 at 02:11:11PM +0000, Max Williams wrote:
> Hi All,
> I am wondering if there has been any further testing or development of the
> sg_persist RA over the last 6 months?
> Link here:
> https://github.com/nif/ClusterLabs__resource-agents/commit/d0c46fb35338d28de3e2c20c11d0ad
Hi,
On Fri, Aug 26, 2011 at 05:07:13PM -0600, Chris Redekop wrote:
> I'm attempting to set up a master/slave database cluster where the master is
> R/W and the slave is R/O. The master failure scenario works fine (the slave
> becomes master, the master VIP moves over); however, when the slave resource
>
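For context, the usual crm shell shape for such a setup is an ms resource with
the VIP colocated with, and ordered after, the master role; a sketch with the
hypothetical names p-db and p-vip:

    crm configure ms ms-db p-db meta master-max=1 clone-max=2 notify=true
    crm configure colocation vip-with-master inf: p-vip ms-db:Master
    crm configure order vip-after-promote inf: ms-db:promote p-vip:start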
On 09/13/2011 10:36 PM, Brad Johnson wrote:
> Yes, the suggested approach has the problem that when both nodes drop to a
> score of zero, the resource cannot run anywhere. I have gone back to my
> original "best connectivity" approach, but now using my own ping RA
> which uses a different dampening delay
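For reference, the dampening and scoring knobs being discussed are exposed by
the stock ocf:pacemaker:ping agent; a minimal "best connectivity" sketch, with
a placeholder ping target and a hypothetical resource p-db:

    crm configure primitive p-ping ocf:pacemaker:ping \
        params host_list="192.168.1.1" dampen=30s multiplier=100 \
        op monitor interval=10s
    crm configure clone c-ping p-ping
    # each node's pingd attribute value becomes its score, so the
    # best-connected node wins
    crm configure location loc-best-ping p-db rule pingd: defined pingd

The stricter variant, rule -inf: not_defined pingd or pingd lte 0, is the one
that leaves the resource with nowhere to run once every node's score drops to
zero.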
On 2011-09-14 10:40, kari pahula wrote:
> Sep 14 11:17:29 mgr-testcluster-2 pengine: [8218]: notice: stage6: Cannot
> fence unclean nodes until quorum is attained (or no-quorum-policy is set to
> ignore)
A 2-node cluster with "no-quorum-policy" not set to "ignore" is never going
to fence successfully: as soon as one node fails, the survivor loses quorum.
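The property in question is set cluster-wide; with the crm shell:

    crm configure property no-quorum-policy=ignore

For two-node clusters of this vintage, ignoring quorum loss and relying on
stonith to resolve split-brain is the documented recommendation.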
Hi. I'm trying to set up a two node cluster with stonith, but I'm
having trouble setting up fencing. I'm trying it out with meatware
stonith, running killall -9 corosync on the other node, but instead of
seeing an "OPERATIOR INTERVENTION REQUIRED" message I get "No match for
shutdown action".
Hi,
Pacemaker 1.1 shows the same behavior.
It seems that the following changeset introduced the problem:
http://hg.clusterlabs.org/pacemaker/stable-1.0/diff/281c8c03a8c2/pengine/native.c
I could get the expected behavior with the latest Pacemaker 1.0 after
reverting the above change.
Thanks,
Junko
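The revert she describes amounts to backing the changeset out; with Mercurial,
roughly (hash taken from the URL above):

    hg backout -r 281c8c03a8c2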