What I'm looking for is a way to pass parameters to my resource stop operation.
My first attempt has been to set the parameter with crm_resource and
then stop the resource.
1) crm_resource --resource myres --set-parameter myparam
--parameter-value myvalue
2) crm_resource --resource myres --set-param
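For reference, the full two-step sequence I'm attempting looks roughly like this sketch (resource and attribute names are placeholders; using target-role to stop the resource is my assumption about the right mechanism):

```shell
# 1) set the instance attribute the stop operation should see
crm_resource --resource myres --set-parameter myparam --parameter-value myvalue

# 2) then stop the resource via its target-role meta attribute
crm_resource --resource myres --meta \
    --set-parameter target-role --parameter-value Stopped
```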
et access to an internal state in pacemaker that can
distinguish between 'start in progress' and 'start completed'?
Alan
On Thu, Mar 24, 2011 at 2:12 PM, Lars Ellenberg
wrote:
> On Thu, Mar 24, 2011 at 01:21:19PM -0700, Alan Jones wrote:
>> What is the best way to query p
What is the best way to query pacemaker for the state the resource is
in when the RA's start function has returned success?
(... as opposed to when the RA's start function has been called.)
I've tried:
/usr/sbin/crm resource show
/usr/sbin/crm_resource --resource --locate
and even
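For anyone following along, these are the query commands I've been experimenting with (output varies by version, and none of them distinguishes 'start in progress' from 'start completed'; "myres" is a placeholder):

```shell
# list resources and their configured state
crm resource show

# report which node a given resource is active on
crm_resource --resource myres --locate

# one-shot snapshot of full cluster status
crm_mon -1
```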
I'm looking for a simple way to disable failover, i.e. prevent
Pacemaker from migrating resources due to node or resource failures.
Simple meaning that I can easily revert back to the previous state
with failover enabled without modifying all the resources, for
example.
Barring that I'd like a way
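One approach I'm considering (a sketch; I'm not yet sure it preserves monitoring the way I want) is the cluster-wide maintenance-mode property, which is trivially reversible:

```shell
# tell Pacemaker to stop managing (starting/stopping/migrating) resources
crm configure property maintenance-mode=true

# revert to normal managed/failover behavior
crm configure property maintenance-mode=false
```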
m!
Alan
On Mon, Nov 29, 2010 at 7:16 PM, Tim Serong wrote:
> On 11/30/2010 at 10:11 AM, Alan Jones wrote:
>> On Thu, Nov 25, 2010 at 6:32 AM, Tim Serong wrote:
>> > Can you elaborate on why you want this particular behaviour? Maybe
>> > there's some other way
On Thu, Nov 25, 2010 at 6:32 AM, Tim Serong wrote:
> Can you elaborate on why you want this particular behaviour? Maybe
> there's some other way to approach the problem?
I have explained the issue as clearly as I know how. The problem is fundamental
to the design of the policy engine in Pacemak
> Instead of:
>
> colocation resX-resY -2: resX resY
>
> Try:
>
> colocation resX-resY -2: resY resX
>
That works fine, as you describe, for placing resY when resX is
limited by the -inf rule; but not the reverse.
In my configuration the -inf constraints come from an external source
and I wish p
On Sat, Nov 20, 2010 at 1:05 AM, Andrew Beekhof wrote:
> Then -2 obviously isn't big enough is it.
I need a value strictly between -inf and -2 that will work.
All the values I've tried don't, so I'm open to suggestions.
> Please read and understand:
>
> http://www.clusterlabs.org/doc
10 at 11:18 PM, Andrew Beekhof wrote:
> On Fri, Nov 5, 2010 at 4:07 AM, Vadym Chepkov wrote:
>>
>> On Nov 4, 2010, at 12:53 PM, Alan Jones wrote:
>>
>>> If I understand you correctly, the role of the second resource in the
>>> colocation command was defaultin
>> I have tried larger values. If you know of a value that *should*
>> work, please share it.
>
> INFINITY
My understanding is that a colocation score of minus infinity will
prevent the resources from running on the same node, which in my
configuration would result in a loss of availability. The
On Sat, Nov 13, 2010 at 3:20 AM, Andrew Beekhof wrote:
> On Fri, Nov 12, 2010 at 5:27 PM, Alan Jones wrote:
>> On Thu, Nov 11, 2010 at 11:31 PM, Andrew Beekhof wrote:
>>>> colocation X-Y -2: X Y
>>>> colocation Y-X -2: Y X
>>>
>>> the
I've looked into the code more and added more logging, etc.
The pengine essentially walks the list of constraints, applying
weights, and then walks the list of resources and tallies the weights.
In my example, it ends up walking the resources backward, i.e. it
assigns a node to Y and then assigns a
On Thu, Nov 11, 2010 at 11:31 PM, Andrew Beekhof wrote:
>> colocation X-Y -2: X Y
>> colocation Y-X -2: Y X
>
> the second one is implied by the first and is therefore redundant
If only that were true!
What happens with the first rule is that other constraints that force
Y to a node will evict X
How do I express symmetric anti-colocation in Pacemaker 1.0.9.1?
I'd like to write two rules:
colocation X-Y -2: X Y
colocation Y-X -2: Y X
The idea is that external conditions could place either resource and
I'd like Pacemaker to place the other accordingly.
Unfortunately, Pacemaker will only app
hepkov wrote:
>>
>> On Nov 4, 2010, at 12:53 PM, Alan Jones wrote:
>>
>> > If I understand you correctly, the role of the second resource in the
>> > colocation command was defaulting to that of the first "Master" which
>> > is not defined or
This question should be on the openais list; however, I happen to know
the answer.
To get up and running quickly you can configure broadcast with the
version you have.
Corosync distinguishes separate clusters by the multicast address and
port, which are carried in every message.
The patch you r
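A minimal totem sketch for broadcast operation (addresses and port are placeholders; with broadcast enabled, the mcastaddr setting is not used):

```
totem {
    version: 2
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.1.0
        broadcast: yes
        mcastport: 5405
    }
}
```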
If I understand you correctly, the role of the second resource in the
colocation command was defaulting to that of the first "Master" which
is not defined or is untested for non-ms resources.
Unfortunately, after changing that line to:
colocation mystateful-ms-loc inf: mystateful-ms:Master myprim:
I'm running with Pacemaker 1.0.9.1 and Corosync 1.2.7.
I have a simple config below where colocation seems to have the opposite effect.
Note that if you force myprim's location then mystateful's Master will
colocate correctly.
The command I use to force is: location myprim-loc myprim -inf:
node-not-t
Hi,
Pacemaker 1.0.9.1, Corosync 1.2.7
I have a sane master/slave configuration that gives me normal looking
notify() calls when I standby each node in turn.
However, when I configure the master/slave on a group of three
resources, things look pretty strange.
Note that I get no "post" calls at all.
I'm trying to configure a simple resource that depends on a local clone.
The configuration is below.
For those familiar with the Veritas Cluster Server, I'm trying to get
something like permanent resources.
Unfortunately, the simple resource (foo) will not start until *both* bar
clones are up.
The
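For comparison, my understanding is that the clone meta attribute interleave="true" is what makes a dependent resource wait only for its local clone instance rather than all of them; a sketch using my resource names:

```
clone bar-clone bar meta interleave="true"
colocation foo-with-bar inf: foo bar-clone
order bar-before-foo inf: bar-clone foo
```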
Has anyone configured pacemaker to simulate multiple nodes with multiple
process instances?
Ideally, I'd like to bind corosync daemons to different loopback IPs (e.g.
127.0.0.1, 127.0.0.2, etc) and somehow direct the pacemaker instances to
separate corosync processes.
Any thoughts or comments welco
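To sketch what I have in mind (the alias addresses and the per-instance config mechanism are assumptions I'd need to verify):

```shell
# create extra loopback addresses, one per simulated node
ip addr add 127.0.0.2/8 dev lo

# point each corosync instance at its own config, which sets
# bindnetaddr to that instance's loopback address
COROSYNC_MAIN_CONFIG_FILE=/etc/corosync/node2.conf corosync
```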
the "--with-ais" in pacemaker's configure command line options.
Do you know the answer?
Alan
On Wed, Aug 4, 2010 at 1:18 AM, Andrew Beekhof wrote:
> On Tue, Aug 3, 2010 at 2:23 AM, Alan Jones wrote:
> > I'd like to configure pacemaker to use corosync without the openais
I'm trying to work around a cib seg fault in init_ais_connection() for
pacemaker 1.0.9.
The 1.0.8 version of this function is pretty straightforward, calling
one of the comm stack's connect functions depending on the config.
In 1.0.9, however, it appears to be a recursive call that never ends.
There is al
I'd like to configure pacemaker to use corosync without the openais package.
We have our own custom Linux distro, so I'm trying to compile:
Pacemaker-1-0-Pacemaker-1.0.9.tar.bz2
Reusable-Cluster-Components-8286b46c91e3.tar.bz2
corosync-1.2.7.tar.gz
The relevant options seem to be:
- configure op
I can imagine several options for configuring Pacemaker to run on a single
node.
My first cut is to configure it with openais/corosync using a loopback
network (localhost).
What would you recommend? Has anyone done this?
Alan
___
Pacemaker mailing list:
On Wed, Apr 7, 2010 at 6:38 AM, Andrew Beekhof wrote:
> > It seems there are only two configuration options for pacemaker as
> > started by corosync: use_logd which I've enabled and use_mgmtd
> > which I don't understand.
>
> pacemaker also uses the logging options of corosync.conf
> though
\
(void)(level); \
cl_log(LOG_INFO, fmt, ##args); \
} while(0)
#endif
On Thu, Apr 1, 2010 at 4:03 PM, Alan Jones wrote:
> Hi,
> I would like to set debugging levels higher than zero with
> p
Hi,
I would like to set debugging levels higher than zero with
pacemaker/corosync.
[r...@fc12-a heartbeat]# ./crmd version
CRM Version: 1.0.5 (ee19d8e83c2a5d45988f1cee36d334a631d84fc7)
[r...@fc12-a heartbeat]# corosync -v
Corosync Cluster Engine, version '1.1.2' SVN revision '2539'
Copyright (c) 2
Friends,
The ocf:pacemaker:Dummy example resource agent script specifies a
default monitoring interval (10), which I assume is 10 seconds.
This seems like the appropriate place to specify this interval, i.e.
the resource implementation knows how heavyweight the monitor is and
what is a good comprom
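To make the distinction concrete: the interval in the RA metadata (sketch below) appears to be advisory only; what actually schedules monitoring is the op definition on the primitive:

```
# advisory, from the Dummy RA's meta-data:
#   <action name="monitor" timeout="20" interval="10" depth="0" />

# what Pacemaker actually acts on:
primitive d1 ocf:pacemaker:Dummy op monitor interval="10s"
```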
that I had to add points for each resource on each node to overcome the
negative colocation value to allow them both to run on one node.
If there is a more elegant solution, let me know.
Alan
On Tue, Mar 23, 2010 at 8:24 AM, Andrew Beekhof wrote:
> On Mon, Mar 22, 2010 at 9:18 PM, Alan Jones
I software
packages tools.
Alan
On Tue, Mar 23, 2010 at 3:47 PM, Alan Jones wrote:
> The following rules give me the behavior I was looking for:
>
> primitive master ocf:pacemaker:Dummy meta resource-stickiness="INFINITY"
> is-managed="true"
> location l-ma
AM, Dejan Muhamedagic wrote:
> Hi,
>
> On Mon, Mar 22, 2010 at 09:29:50AM -0700, Alan Jones wrote:
> > Friends,
> > I have what should be a simple goal. Two resources to run on two nodes.
> > I'd like to configure them to run on separate nodes when available, ie.
Friends,
I have what should be a simple goal: two resources to run on two nodes.
I'd like to configure them to run on separate nodes when available,
i.e. active-active, and provide for them to run together on either
node when one fails, i.e. failover.
Up until this point I have assumed that this wou
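The configuration shape I have in mind is roughly this sketch (names and the finite negative score are placeholders; as the rest of this thread shows, picking the score is the hard part):

```
primitive resA ocf:pacemaker:Dummy
primitive resB ocf:pacemaker:Dummy
# finite negative score: prefer separate nodes, but allow
# both on one node when only one node is available
colocation resA-apart -200: resA resB
```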
On Wed, Mar 17, 2010 at 1:39 PM, Andrew Beekhof wrote:
> On Wed, Mar 17, 2010 at 7:23 PM, Alan Jones
> wrote:
> > Is there any interest among people working with Pacemaker to provide for
> > restarting crmd locally without failover and rediscovering resource agent
Is there any interest among people working with Pacemaker to provide for
restarting crmd locally without failover and rediscovering resource agent
states through their monitor scripts?
Alan
I'm trying to follow the code in lib/ais/plugin.c
In many functions the first argument "conn" is assigned to a local
"async_conn" which is never modified, e.g.:
void pcmk_notify(void *conn, ais_void_ptr *msg)
{
    const AIS_Message *ais_msg = msg;
    char *data = get_ais_data(ais_msg);
    void
On Fri, Feb 26, 2010 at 2:57 AM, Andrew Beekhof wrote:
> Not only safe but essential, otherwise B would kill A for no reason.
> Transitional memberships are just corosync's way of saying "I'm still
> working, but this is what I have so far".
>
The best explanation I've found for the transitional
It appears from the code in lib/ais/plugin.c:pcmk_peer_update() that
Pacemaker ignores
transitional membership updates from Corosync. It is my understanding that
this information
tells you which members have maintained synchronized state during
transitions. For example,
view AB on both A and B fo
Below I have the commands that built this configuration and then some errors
I encounter during failover.
Should I add constraints to tell Pacemaker that these resources are only to
run on one node (each the other)?
Do you know what the PE processing error is about? I've attached the first
bz2 file
I haven't tried it yet, but there are several man pages under corosync for
qdisk.
Alan
On Mon, Feb 8, 2010 at 7:23 AM, jimbob palmer wrote:
> Hello,
>
> I've searched the mailing lists for qdisk, quorum disk and quorum
> partition but found nothing.
>
> How can I configure a two node cluster to u
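For completeness, the usual Pacemaker-side answer for two-node clusters, independent of any quorum-disk mechanism, is to relax the quorum policy (a sketch):

```shell
# let a two-node cluster keep running resources after losing quorum
crm configure property no-quorum-policy=ignore
```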
The answer is to use configure options to get the different projects
to agree on where var is.
Alan
On Fri, Feb 5, 2010 at 11:28 AM, Alan Jones wrote:
> Ok, hb_report only collects existing logs so it won't help me *get* crmd to
> use the logd.
> However, I am making progress read
ib_rw
srwxrwxrwx 1 hacluster root 0 Feb 5 11:21 pengine-bash-3.00
# ls -l /var/run
drwxr-x--- 2 hacluster haclient 4096 Feb 5 11:21 crm
drwxr-xr-x 2 hacluster haclient 4096 Feb 5 10:48 heartbeat
On Fri, Feb 5, 2010 at 9:53 AM, Alan Jones wrote:
> Hi,
> I'm trying to use the fo
Hi,
I'm trying to use the following software together:
cluster-glue-1.0.2-1.fc13.src.rpm
corosync-1.2.0-1.fc13.src.rpm
openais-1.1.2-1.fc13.src.rpm
pacemaker-1.0.5-5.fc13.src.rpm
I'm having trouble with crmd as I wrote earlier:
Feb 4 17:57:42 dd690-42 crmd: [1910]: WARN: lrm_signon: can not initia
I'm trying to run with corosync 1.2.0 and pacemaker 1.0.5 and get the
following repeatedly in /var/log/messages:
Feb 4 17:57:42 dd690-42 crmd: [1910]: WARN: lrm_signon: can not initiate
connection
Feb 4 17:57:42 dd690-42 crmd: [1910]: WARN: do_lrm_control: Failed to sign
on to the LRM 1 (30 max)