Hi Dejan,
There seems to be a problem caused by a recent change.
An error occurs unless I specify --enable-fatal-warnings=no.
[r...@x3650e Pacemaker-1-0-efdc0d8143dd]# ./autogen.sh && ./configure
--prefix=$PREFIX
--localstatedir=/var --with-lcrso-dir=$LCRSODIR
(snip)
[r...@x3650e Pacem
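For reference, a minimal sketch of the workaround, assuming the stock Pacemaker 1.0
autotools build (same $PREFIX and $LCRSODIR as in the command above; sketch only):

  ./autogen.sh && ./configure --prefix=$PREFIX \
      --localstatedir=/var --with-lcrso-dir=$LCRSODIR \
      --enable-fatal-warnings=no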
On Mon, Mar 8, 2010 at 9:37 PM, hj lee wrote:
> Hi,
>
> In a typical multi-state resource agent, the agent changes the master score on demote
> or promote. Each change in the master score triggers a PE calculation. Suppose the
> following scenario.
> 1. Pacemaker initiates demote/promote
> 2. demote is issued an
On Tue, Mar 9, 2010 at 12:27 AM, Erich Weiler wrote:
> I think I may have found an answer. I had this in my config:
>
> order LDAP-after-IP inf: LDAP-IP LDAP-clone
>
> And, according to the logs, it *looks* like what happens when genome-ldap1
> goes down is that the IP goes over to genome-ldap2, AND THE
2010/3/9 :
> Hi Andrew,
>
>> This is normal for constraints with scores < INFINITY.
>> Anything < INFINITY is "preferable but not mandatory"
>
> Sorry, my question was poorly phrased.
>
> As of STEP9, is it possible to configure the cluster so that a resource of
> UMgroup01 does not start?
Only if you chan
Hi Andrew,
> This is normal for constraints with scores < INFINITY.
> Anything < INFINITY is "preferable but not mandatory"
Sorry, my question was poorly phrased.
As of STEP9, is it possible to configure the cluster so that a resource of
UMgroup01 does not start?
I am not setting INFINITY in ci
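For illustration, a hedged sketch of the distinction Andrew describes, in crm shell
syntax (other-rsc is a hypothetical resource name; UMgroup01 as above):

  # score 100: preferable but not mandatory; may be overridden
  colocation col-advisory 100: UMgroup01 other-rsc
  # score inf: mandatory; UMgroup01 is not allowed to run without other-rsc
  colocation col-mandatory inf: UMgroup01 other-rsc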
Hi Dejan,
> Anything not to upset the operator :) Similar patch applied.
Thanks.
> BTW, can't recall seeing this error. Still not clear to me when
> did you encounter it.
In the case of ssh/external, the error seems to occur when I deliberately make the
status operation sleep for a slightly long time.
Durin
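For anyone reproducing this, one way to exercise a plugin's status operation directly
is the stonith(8) tool from cluster-glue; a hedged sketch (node names are hypothetical,
and the exact flags should be checked against stonith -h):

  # list the parameters the plugin expects
  stonith -t external/ssh -n
  # query device status with a given hostlist
  stonith -t external/ssh hostlist="node1 node2" -S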
I think I may have found an answer. I had this in my config:
order LDAP-after-IP inf: LDAP-IP LDAP-clone
And, according to the logs, it *looks* like what happens when
genome-ldap1 goes down is that the IP goes over to genome-ldap2, AND THEN tries
to start LDAP there, even though LDAP is already star
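If the mandatory ordering is what forces that restart, a hedged alternative to test is
an advisory (score 0) order, which only sequences actions that happen to be pending at
the same time and should not restart an already-running clone (same resource names as
above; a sketch, not a verified fix):

  order LDAP-after-IP 0: LDAP-IP LDAP-clone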
I tried that (after setting 'property symmetric-cluster="true"'), didn't
seem to make a difference...
Thanks for the suggestion though!
hj lee wrote:
I don't think it's a good idea to put a location constraint on a clone. The clone is
designed to run equally on every node. Please remove these and see if that
h
I don't think it's a good idea to put a location constraint on a clone. The clone is
designed to run equally on every node. Please remove these and see if that
helps.
location LDAP-IP-placement-2 LDAP-IP 50: genome-ldap2
location LDAP-placement-1 LDAP-clone 100: genome-ldap1
location LDAP-placement-2 LDAP-clone 100
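If the goal is simply to drop those constraints, a hedged sketch of doing so with the
crm shell (constraint ids as listed above):

  crm configure delete LDAP-IP-placement-2
  crm configure delete LDAP-placement-1
  crm configure delete LDAP-placement-2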
Hi,
In a typical multi-state resource agent, the agent changes the master score on demote
or promote. Each change in the master score triggers a PE calculation. Suppose the
following scenario.
1. Pacemaker initiates demote/promote
2. demote is issued and lowers the master score on the demoted node.
3. promote is
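For context, a hedged sketch of how a multi-state agent typically moves the master
score in its demote action, using the crm_master helper shipped with Pacemaker (the
value 10 and the do_demote_service helper are illustrative only):

  demote() {
      # lower this node's master preference; note that each change
      # triggers a new PE calculation, as described above
      crm_master -l reboot -v 10 || return $OCF_ERR_GENERIC
      do_demote_service          # hypothetical service-specific step
      return $OCF_SUCCESS
  }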
Hi All,
I have a (hopefully) simple problem that I need to fix, but I feel like
I'm missing a key concept here that is causing problems. I have 2
nodes, genome-ldap1 and genome-ldap2. Using latest corosync, pacemaker
and openais from the epel and clusterlabs repos, CentOS 5.4.
Both nodes a
On Mon, March 8, 2010 3:30 pm, Lars Marowsky-Bree wrote:
> On 2010-03-02T13:12:25, Rasto Levrinc wrote:
>
>> Thanks lmb. I see a place for Hawk as a lightweight tool to quickly
>> make some changes and I could even somehow integrate in the DRBD-MC.
>
> "Integrate"? I'm not sure how that would wor
Hi,
On Fri, Mar 05, 2010 at 10:50:30PM +0800, Martin Aspeli wrote:
> Hi Dejan,
>
> Dejan Muhamedagic wrote:
> >Hi,
> >
> >On Fri, Mar 05, 2010 at 10:00:06AM +0800, Martin Aspeli wrote:
[...]
> >> - I'm not sure we need to use Pacemaker to manage HAProxy on slave;
> >>it will simply not be used u
Hi,
On Mon, Mar 08, 2010 at 06:08:37PM +0100, Sander van Vugt wrote:
> Hi,
>
> > > Is this still current? Can anyone point me to any documentation or
> > > examples of configuring iDRAC 6 Enterprise for STONITH, if indeed
> > > it's possible?
> >
> > It should be possible, but I can't say. Perha
Hi,
> > Is this still current? Can anyone point me to any documentation or
> > examples of configuring iDRAC 6 Enterprise for STONITH, if indeed
> > it's possible?
>
> It should be possible, but I can't say. Perhaps you can try with
> drac5. If that won't do, then somebody has to write a stonith
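For anyone wanting to experiment along those lines, a hedged starting point is to check
what the cluster-glue stonith plugins already offer before writing a new one (plugin
name to be verified on your install; nothing here is iDRAC 6-specific):

  # list available plugin types, then the parameters the drac5 plugin expects
  stonith -L | grep -i drac
  stonith -t external/drac5 -n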
Hi Hideo-san,
> On Mon, Mar 08, 2010 at 11:21:19AM +0900, renayama19661...@ybb.ne.jp wrote:
> Hi,
>
> We checked the stonithd log with a configuration in which the stonith operation
> period was long.
>
> When STONITH is carried out with a configuration in which the period of the
> operation of
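For reference, a hedged sketch of where such operation periods and timeouts are usually
set on a stonith resource, in crm syntax (resource name and values are hypothetical):

  primitive st-ssh stonith:external/ssh \
      params hostlist="node1 node2" \
      op monitor interval="3600s" timeout="300s"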
On Sun, Mar 7, 2010 at 9:00 PM, Martin Aspeli wrote:
> Hi,
>
> We have a two-node cluster of Dell servers. They have an iDRAC 6 Enterprise
> each. The cluster is also backed up by a UPS with a diesel generator.
Don't forget that to make it reliable you have to back up with the UPS not
only the cluster no
Hi,
On Mon, Mar 08, 2010 at 12:00:44PM +0800, Martin Aspeli wrote:
> Hi,
>
> We have a two-node cluster of Dell servers. They have an iDRAC 6
> Enterprise each. The cluster is also backed up by a UPS with a
> diesel generator.
>
> I realise on-board devices like the DRAC are not ideal for fencin
Matthew Palmer wrote:
On Mon, Mar 08, 2010 at 03:21:32PM +0800, Martin Aspeli wrote:
Matthew Palmer wrote:
What is the normal way to handle this? Do people have one floating IP
address per service?
This is how I prefer to do it. RFC1918 IP addresses are cheap, IPv6 addresses
quintuply so. Havi
On 2010-03-02T13:12:25, Rasto Levrinc wrote:
> > cool stuff. It's sad that we end up with a competing thingy ... Maybe we
> > could keep Tim's pure web-ui for the monitoring and most simple bits and
> > have drbd-mc replace the python UI.
> Thanks lmb. I see a place for Hawk as a lightweight tool
On 2010-03-02T12:18:44, Michael Schwartzkopff wrote:
> > cool stuff. It's sad that we end up with a competing thingy ... Maybe we
> > could keep Tim's pure web-ui for the monitoring and most simple bits and
> > have drbd-mc replace the python UI.
> I'd prefer python-UI AND MC. Some like Java, som
On Fri, Feb 26, 2010 at 4:11 PM, Raphael Daum wrote:
> dear all,
>
> thanks for valuable feedback.
>
Ante Karamatic schrieb am 24.02.2010 um 09:25:
>> On 22.02.2010 21:23, Andrew Beekhof wrote:
>>
>>> Wait a second, thats not right. You want:
>>>
>>> primitive gfsd ocf:pacemaker:controld
>>>
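For completeness, a hedged sketch of how that primitive is typically run as a clone on
every node (the clone id and meta attribute are illustrative):

  primitive gfsd ocf:pacemaker:controld
  clone gfsd-clone gfsd meta interleave="true"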
Not sure about this bit:
+if(failcount > 0) {
+ printed = TRUE;
+ print_as(": Resource is failure!!");
+}
+
Was there any reason you didn't use node->details->attrs (or
utilization) directly?
That would be simpler and wouldn't require (incorrectly) assuming that
there is only
2010/3/5 :
> Hi All,
>
> We are testing a complicated colocation configuration.
>
> We configured the resources to start together under a colocation constraint.
>
> However, in a certain procedure, the constrained resource starts even when the
> resource we specified does not start.
>
> We did the follow
On Fri, Mar 5, 2010 at 3:38 PM, Kees wrote:
> Hi,
>
> When I start the cluster software with /etc/init.d/corosync start, I see the
> whole stack in my process list:
>
> 31838 ? Ssl 0:06 /usr/sbin/corosync
> 31849 ? SLs 0:00 \_ /usr/lib/heartbeat/stonithd
> 31850 ? S
On Mon, Mar 08, 2010 at 03:21:32PM +0800, Martin Aspeli wrote:
> Matthew Palmer wrote:
>>> What is the normal way to handle this? Do people have one floating IP
>>> address per service?
>>
>> This is how I prefer to do it. RFC1918 IP addresses are cheap, IPv6 addresses
>> quintuply so. Having every
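A hedged illustration of the one-floating-IP-per-service pattern, in crm syntax
(resource names and RFC1918 addresses are hypothetical):

  primitive ip-ldap ocf:heartbeat:IPaddr2 \
      params ip="192.168.1.10" cidr_netmask="24" \
      op monitor interval="10s"
  primitive ip-smtp ocf:heartbeat:IPaddr2 \
      params ip="192.168.1.11" cidr_netmask="24" \
      op monitor interval="10s"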
>>>On 3/8/2010 at 04:45 PM, Andrew Beekhof wrote:
> Thanks!
> I've pushed it as http://hg.clusterlabs.org/pacemaker/devel/rev/b9de8aa8b2ef
Thanks for taking care of it!
Regards,
Yan
Yan Gao
Software Engineer
China Server Team, OPS Engineering, Novell, Inc.
Thanks!
I've pushed it as http://hg.clusterlabs.org/pacemaker/devel/rev/b9de8aa8b2ef
On Mon, Mar 8, 2010 at 5:45 AM, Yan Gao wrote:
> Hi Andrew,
On 3/5/2010 at 10:45 PM, Andrew Beekhof wrote:
>> On Thu, Mar 4, 2010 at 5:53 PM, Yan Gao wrote:
>> > Hi Andrew,
>> > You were reading the earlies