Hi,
it seems to me that patch
http://hg.clusterlabs.org/pacemaker/stable-1.0/rev/8241f689bf9f
broke timeouts for stop operations. The observable effect is that the
timeout for stop operations is always 125s, regardless of what was
specified in the CIB. Reverting the part of the patch that cha
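For reference, a stop timeout is normally declared per operation in the CIB;
a minimal crm shell sketch (the resource name res_Example and the 60s value
are hypothetical, just to show where the timeout lives):
primitive res_Example ocf:heartbeat:Dummy \
        op stop interval="0" timeout="60s"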
Hi,
Just a quick question, who generates the very first cib.xml when
pacemaker processes are initialized?
Thanks
Shravan
On Thu, Sep 30, 2010 at 4:22 AM, Andrew Beekhof wrote:
> On Tue, Sep 28, 2010 at 11:47 AM, Andrew Beekhof wrote:
>> On Mon, Sep 27, 2010 at 6:26 AM, Shravan Mishra
>> wrote
Hi,
I observed the following in Pacemaker versions 1.1.3 and tip up to patch
10258.
In a small test environment to study fail-count behavior I have one
"anything" resource doing sleep 600, with a monitoring interval of 10 secs.
The failure-timeout is 300.
I would expect to never see a failcount highe
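For context, a setup like the one described would look roughly like this in
crm shell syntax (the resource name res_sleep and the exact parameter values
are illustrative assumptions, not taken from the original post):
primitive res_sleep ocf:heartbeat:anything \
        params binfile="/bin/sleep" cmdline_options="600" \
        meta failure-timeout="300s" \
        op monitor interval="10s"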
On Fri, Oct 1, 2010 at 4:00 AM, wrote:
> Hi Andrew,
>
> Thank you for comment.
>
>> During crmd startup, one could read all the values from attrd into the
>> hashtable.
>> So the hashtable would only matter if attrd alone went down.
>
> If attrd communicates with crmd at the time of start an
Hi,
It seems that this happens every time the PE wants to check the configuration:
09:23:55 crmd: [3473]: info: crm_timer_popped: PEngine Recheck Timer
(I_PE_CALC) just popped!
and then check_rsc_parameters() wants to reset my resources
09:23:55 pengine: [3979]: notice: check_rsc_parameters: Forcing restart of
p
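For what it's worth, the recheck timer in that log is driven by the
cluster-recheck-interval cluster property; a minimal sketch of adjusting it
with the crm shell (the 15min value is just an example):
crm configure property cluster-recheck-interval="15min"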
Hi Andrew,
thanks for your answer. I still need syslog-ng to restart on all nodes after
the ClusterIp moves. I tried it like this:
Resource:
primitive res_SyslogNG lsb:syslog-ng \
op monitor interval="15s" timeout="20s" start-delay="15s"
Clone:
clone cl-SyslogNG res_Sy
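One possible way to tie the clone to the IP, assuming the IP resource is
named res_ClusterIP (the constraint name is hypothetical and this is an
untested sketch), is a mandatory ordering constraint so the clone instances
are restarted whenever the IP is stopped and started:
# restart cl-SyslogNG whenever res_ClusterIP is stopped/started
order o_ip_before_syslog inf: res_ClusterIP cl-SyslogNG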
Hi
Could this be related to a possible bug mentioned here [1]?
BTW, here is the Pacemaker configuration:
node $id="b8ad13a6-8a6e-4304-a4a1-8f69fa735100" node-02
node $id="d5557037-cf8f-49b7-95f5-c264927a0c76" node-01
node $id="e5195d6b-ed14-4bb3-92d3-9105543f9251" node-03
primitive drbd_01 ocf:linbit:drbd \
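(The primitive above is cut off. For reference, a typical ocf:linbit:drbd
definition plus its master/slave wrapper looks roughly like the sketch
below; the DRBD resource name r0 and the ms name are assumptions, not taken
from the original post.)
primitive drbd_01 ocf:linbit:drbd \
        params drbd_resource="r0" \
        op monitor interval="15s" role="Master" \
        op monitor interval="30s" role="Slave"
# master/slave wrapper so one instance is promoted
ms ms_drbd_01 drbd_01 \
        meta master-max="1" clone-max="2" notify="true"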