Re: [Linux-HA] Oracle RAC + Linux-HA on OCFS2 filesystems.

2010-02-16 Thread Enrique Sanchez
Serge,

The project manager insisted on using filesystems; he argued something
about SAP not supporting RAC on raw devices on Linux. Whether that is
true or false is beyond me; now I need to move forward.

thanks,
esv.




On Mon, Feb 15, 2010 at 9:33 PM, Serge Dubrouski  wrote:
> For me it completely doesn't, sorry. Oracle RAC is a very complex (and
> very expensive) product that has all that you need without using any
> additional products. Use Oracle RAC + Oracle Clusterware + Oracle ASM.
> Oracle 11g supports a shared filesystem over ASM, so you don't have to
> use OCFS2. It'll be complex enough to support without bringing in the
> additional complexity of Pacemaker or Heartbeat.
>
> On Mon, Feb 15, 2010 at 2:16 PM, Enrique Sanchez
>  wrote:
>> does this make sense at all?
>>
>> thanks,
>> esv.
>>
>> --
>> Enrique Sanchez Vela
>> --
>>
>
>
>
> --
> Serge Dubrouski.
>



-- 
Enrique Sanchez Vela
--


Re: [Linux-HA] Need some help on a problem with Pacemaker/corosync on fc12

2010-02-16 Thread Dejan Muhamedagic
Hi,

On Tue, Feb 16, 2010 at 08:49:30AM +0100, Alain.Moulle wrote:
> Hi,
> 
> the other messages are only :
> 
> Feb 16 08:00:39 node3 pengine: [3876]: WARN: unpack_rsc_op: Processing 
> failed op restofencenode1_start_0 on node3: unknown error (1)
> Feb 16 08:00:39 node3 pengine: [3876]: notice: native_print: 
> restofencenode1#011(stonith:external/ipmi):#011Stopped
> Feb 16 08:00:39 node3 pengine: [3876]: info: get_failcount: 
> restofencenode1 has failed 100 times on node3
> Feb 16 08:00:39 node3 pengine: [3876]: WARN: common_apply_stickiness: 
> Forcing restofencenode1 away from node3 after 100 failures (max=100)
> Feb 16 08:00:39 node3 pengine: [3876]: WARN: native_color: Resource 
> restofencenode1 cannot run anywhere
> Feb 16 08:00:39 node3 pengine: [3876]: notice: LogActions: Leave 
> resource restofencenode1#011(Stopped)

No messages from stonithd, strange.

> knowing that I had set the following constraints :
> 
>rsc="restofencenode1" score="+INFINITY"/>
>rsc="restofencenode1" score="-INFINITY"/>
>rsc="restofencenode3" score="+INFINITY"/>
>rsc="restofencenode3" score="-INFINITY"/>
> 
> 
> 
> Either I made a big mistake, or there is something wrong in these
> releases, because I think I did quite the same thing with the releases
> on fc11 and it worked fine ... it was on openais (openais.conf) and
> not yet on corosync (corosync.conf), but it seems to be a problem of
> resource management, not of cluster management ...
> 
> Many thanks for any help, because I'm stalled ...

Can't say what's going on. Please make an hb_report from the
time the resources failed (was that the first cluster start
after the upgrade?). If the tarball is too big to post, just open
a bugzilla.
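
(For example -- timestamp and destination here are illustrative, see
hb_report(8) for the accepted time formats:

  hb_report -f "2010-02-16 08:00" /tmp/restofence-report

which should leave a tarball like /tmp/restofence-report.tar.bz2 to
attach to the bug.)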

Thanks,

Dejan

> Regards
> Alain
> > On Mon, Feb 15, 2010 at 04:17:27PM +0100, Alain.Moulle wrote:
> >   
> >> > Hi,
> >> > 
> >> > I've retrieved the following rpms for fc12 :
> >> > cluster-glue-1.0.3-1.fc12.x86_64.rpm
> >> > cluster-glue-debuginfo-1.0.3-1.fc12.x86_64.rpm
> >> > cluster-glue-libs-1.0.3-1.fc12.x86_64.rpm
> >> > corosync-1.2.0-1.fc12.x86_64.rpm
> >> > corosync-debuginfo-1.2.0-1.fc12.x86_64.rpm
> >> > corosynclib-1.2.0-1.fc12.x86_64.rpm
> >> > heartbeat-3.0.2-2.fc12.x86_64.rpm
> >> > heartbeat-debuginfo-3.0.2-2.fc12.x86_64.rpm
> >> > heartbeat-libs-3.0.2-2.fc12.x86_64.rpm
> >> > pacemaker-1.0.7-4.fc12.x86_64.rpm
> >> > pacemaker-debuginfo-1.0.7-4.fc12.x86_64.rpm
> >> > pacemaker-libs-1.0.7-4.fc12.x86_64.rpm
> >> > 
> >> > and I'm facing a problem starting any resource:
> >> > restofencenode1_start_0 (node=node3, call=4, rc=1, status=complete):
> >> > unknown error
> >> > 
> >> > /var/log/messages gives:
> >> > Feb 15 15:30:39 node3 pengine: [3876]: WARN: unpack_rsc_op: Processing 
> >> > failed op restofencenode1_start_0 on node3: unknown error (1)
> >> 
> >
> > There should be more logs. Just grep for the resource id. If not,
> > then please make a hb_report.
> >
> > Thanks,
> >
> > Dejan
> >
> >   
> >> > and crm_verify -L -V :
> >> > crm_verify[3991]: 2010/02/15_15:37:20 WARN: unpack_rsc_op: Processing 
> >> > failed op restofencenode1_start_0 on node3: unknown error (1)
> >> > crm_verify[3991]: 2010/02/15_15:37:20 WARN: common_apply_stickiness: 
> >> > Forcing restofencenode1 away from node3 after 100 failures 
> >> > (max=100)
> >> > crm_verify[3991]: 2010/02/15_15:37:20 WARN: native_color: Resource 
> >> > restofencenode1 cannot run anywhere
> >> > crm_verify[3991]: 2010/02/15_15:37:20 WARN: native_color: Resource 
> >> > restofencenode3 cannot run anywhere
> >> > Warnings found during check: config may not be valid
> >> > 
> >> > I can't find the problem, as I do just as I did with the fc11 rpms;
> >> > my restofencenode1 is in cib.xml like:
> >> > 
> >> > <primitive id="restofencenode1" class="stonith" type="external/ipmi">
> >> >   <instance_attributes>
> >> >     <nvpair name="target-role" value="Started"/>
> >> >     <nvpair name="hostname" value="node3"/>
> >> >     <nvpair name="ipaddr" value="12.1.1.121"/>
> >> >     <nvpair name="userid" value="mylogin"/>
> >> >     <nvpair name="password" value="mypass"/>
> >> >     <nvpair name="interface" value="lan"/>
> >> >   </instance_attributes>
> >> > </primitive>
> >> > I've also tried with a simple lsb resource, and I've got the same
> >> > result.
> >> > 
> >> > Note that I have the same problem with the official releases provided
> >> > with fc12:
> >> > cluster-glue-1.0-0.11.b79635605337.hg.fc12.x86_64.rpm
> >> > cluster-glue-libs-1.0-0.11.b79635605337.hg.fc12.x86_64.rpm
> >> > corosync-1.1.2-1.fc12.x86_64.rpm
> >> > corosynclib-1.1.2-1.fc12.x86_64.rpm
> >> > heartbeat-3.0.0-0.5.0daab7da36a8.hg.fc12.x86_64.rpm
> >> > heartbeat-libs-3.0.0-0.5.0daab7da36a8.hg.fc12.x86_64.rpm
> >> > openais-1.1.0-1.fc12.x86_64.rpm
> >> > openaislib-1.1.0-1.fc12.x86_64.rpm
> >> > pacemaker-1.0.5-4.fc12.x86_64.rpm
> >> > pacemaker-libs-1.0.5-4.fc12.x86_64.rpm
> >> > pacemaker-libs-devel-1.0.5-4.fc12.x86_64.rpm
> >> > 
> 

Re: [Linux-HA] Oracle RAC + Linux-HA on OCFS2 filesystems.

2010-02-16 Thread Dejan Muhamedagic
Hi,

On Tue, Feb 16, 2010 at 05:42:27AM -0500, Enrique Sanchez wrote:
> Serge,
> 
> The project manager insisted on using filesystems; he argued something
> about SAP not supporting RAC on raw devices on Linux. Whether that is
> true or false is beyond me; now I need to move forward.

No experience with RAC here, but there are oracle and oralsnr
resource agents. I don't know which RAC features you need, or
whether not using RAC is an option for you.
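
(If single-instance failover were acceptable, a setup with those
agents might look roughly like this in the crm shell -- the SID and
monitor intervals are made up:

  primitive oradb ocf:heartbeat:oracle \
    params sid="SAPDB" op monitor interval="120s"
  primitive oralsnr ocf:heartbeat:oralsnr \
    params sid="SAPDB" op monitor interval="60s"
  group oracle-grp oradb oralsnr

That gives you active/passive failover of one instance, not RAC's
active/active operation.)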

Thanks,

Dejan

> thanks,
> esv.
> 
> 
> 
> 
> On Mon, Feb 15, 2010 at 9:33 PM, Serge Dubrouski  wrote:
> > For me it completely doesn't, sorry. Oracle RAC is a very complex (and
> > very expensive) product that has all that you need without using any
> > additional products. Use Oracle RAC + Oracle Clusterware + Oracle ASM.
> > Oracle 11g supports a shared filesystem over ASM, so you don't have to
> > use OCFS2. It'll be complex enough to support without bringing in the
> > additional complexity of Pacemaker or Heartbeat.
> >
> > On Mon, Feb 15, 2010 at 2:16 PM, Enrique Sanchez
> >  wrote:
> >> does this make sense at all?
> >>
> >> thanks,
> >> esv.
> >>
> >> --
> >> Enrique Sanchez Vela
> >> --
> >>
> >
> >
> >
> > --
> > Serge Dubrouski.
> >
> 
> 
> 
> -- 
> Enrique Sanchez Vela
> --


Re: [Linux-HA] ERROR: ocf:heartbeat:ldirectord: no such resource agent

2010-02-16 Thread Dejan Muhamedagic
Hi,

On Tue, Feb 16, 2010 at 03:08:53PM +1100, Simon Horman wrote:
> On Mon, Feb 15, 2010 at 10:34:44PM +1100, Simon Horman wrote:
> > On Mon, Feb 15, 2010 at 10:33:26AM +0100, Thomas Baumann wrote:
> > > If you use following patch to the source, then all will be OK:
> > > 
> > > diff -uNr resource-agents.orig/ldirectord/OCF/ldirectord.in
> > > resource-agents/ldirectord/OCF/ldirectord.in
> > > --- resource-agents.orig/ldirectord/OCF/ldirectord.in   2010-02-15
> > > 10:18:17.0 +0100
> > > +++ resource-agents/ldirectord/OCF/ldirectord.in2010-02-15
> > > 10:19:26.0 +0100
> > > @@ -48,8 +48,8 @@
> > > 
> > >  . ${OCF_ROOT}/resource.d/heartbeat/.ocf-shellfuncs
> > > 
> > > -LDIRCONF=${OCF_RESKEY_configfile:-@sbindir@/ldirectord/ldirectord.cf}
> > > -LDIRECTORD=${OCF_RESKEY_ldirectord:-@sysconfdir@/ldirectord}
> > > +LDIRCONF=${OCF_RESKEY_configfile:-@sysconfdir@/ha.d/ldirectord.cf}
> > > +LDIRECTORD=${OCF_RESKEY_ldirectord:-@sbindir@/ldirectord}
> > 
> > That looks good to me.
> 
> Does anyone object? If not, I'll put a version of this into hg.

No objections. The patch is probably good, but I still don't
understand how setting different defaults can fix a bug. Or has
this bug already been fixed by Andreas?
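
(In the meantime, a workaround is to pass both paths explicitly so the
compiled-in defaults never apply -- the paths below are the usual
locations, adjust to your build:

  primitive ldir ocf:heartbeat:ldirectord \
    params configfile="/etc/ha.d/ldirectord.cf" \
           ldirectord="/usr/sbin/ldirectord")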

Thanks,

Cheers


Re: [Linux-HA] compiling glue on solaris 10 problem

2010-02-16 Thread Dejan Muhamedagic
Hi,

On Tue, Feb 16, 2010 at 08:10:52AM +0200, Yossi Nachum wrote:
> after I run
> ./configure --enable-ansi=no --enable-fatal-warnings=no
> make completed successfully, I hope it will work now because in the process
> I get many warnings

Well, there's a reason for having fatal-warnings enabled.

Thanks,

Dejan

> Thanks for the help
> Yossi
> 
> On Mon, Feb 15, 2010 at 4:55 PM, Dejan Muhamedagic wrote:
> 
> > Hi,
> >
> > On Mon, Feb 15, 2010 at 04:42:57PM +0200, Yossi Nachum wrote:
> > > Hi
> > > I now have the first problem from the already reported bug:
> > >
> > > libtool: compile:  gcc -std=gnu99 -DHAVE_CONFIG_H -I. -I../../include
> > > -I../../include -I../../include -I../../linux-ha -I../../linux-ha
> > > -I../../libltdl -I../../libltdl -I/usr/include/glib-2.0
> > > -I/usr/lib/glib-2.0/include -I/usr/include/libxml2 -g -O2 -Wall
> > > -Wmissing-prototypes -Wmissing-declarations -Wstrict-prototypes
> > > -Wdeclaration-after-statement -Wpointer-arith -Wwrite-strings -Wcast-qual
> > > -Wcast-align -Wbad-function-cast -Winline -Wmissing-format-attribute
> > > -Wformat=2 -Wformat-security -Wformat-nonliteral -Wno-long-long
> > > -Wno-strict-aliasing -ggdb3 -funsigned-char -ggdb3 -O0 -Wall
> > > -Waggregate-return -Wbad-function-cast -Wcast-qual -Wcast-align
> > > -Wdeclaration-after-statement -Wendif-labels -Wfloat-equal -Wformat=2
> > > -Wformat-security -Wformat-nonliteral -Winline -Wmissing-prototypes
> > > -Wmissing-declarations -Wmissing-format-attribute -Wnested-externs
> > > -Wno-long-long -Wno-strict-aliasing -Wpointer-arith -Wstrict-prototypes
> > > -Wwrite-strings -ansi -D_GNU_SOURCE -DANSI_ONLY -Werror -g -O2 -Wall
> > > -Wmissing-prototypes -Wmissing-declarations -Wstrict-prototypes
> > > -Wdeclaration-after-statement -Wpointer-arith -Wwrite-strings -Wcast-qual
> > > -Wcast-align -Wbad-function-cast -Winline -Wmissing-format-attribute
> > > -Wformat=2 -Wformat-security -Wformat-nonliteral -Wno-long-long
> > > -Wno-strict-aliasing -ggdb3 -funsigned-char -ggdb3 -O0 -Wall
> > > -Waggregate-return -Wbad-function-cast -Wcast-qual -Wcast-align
> > > -Wdeclaration-after-statement -Wendif-labels -Wfloat-equal -Wformat=2
> > > -Wformat-security -Wformat-nonliteral -Winline -Wmissing-prototypes
> > > -Wmissing-declarations -Wmissing-format-attribute -Wnested-externs
> > > -Wno-long-long -Wno-strict-aliasing -Wpointer-arith -Wstrict-prototypes
> > > -Wwrite-strings -ansi -D_GNU_SOURCE -DANSI_ONLY -Werror -MT pils.lo -MD
> > -MP
> > > -MF .deps/pils.Tpo -c pils.c  -fPIC -DPIC -o .libs/pils.o
> > > pils.c: In function `PIL_strerror':
> > > pils.c:1299: warning: implicit declaration of function `snprintf'
> > > pils.c:1299: warning: nested extern declaration of `snprintf'
> >
> > snprintf should be declared in stdio.h which is included at the
> > top of pils.c. You could try to copy the above gcc line, add
> > the -E option to get the cpp output, replace -o .libs/pils.o
> > with -o pils.i, and check the resulting file (i.e. search for
> > snprintf).
> >
> > > make[2]: *** [pils.lo] Error 1
> > > make[2]: Leaving directory
> > > `/usr/local/src/linux-ha/Reusable-Cluster-Components-glue-1.0.3/lib/pils'
> > > make[1]: *** [all-recursive] Error 1
> > > make[1]: Leaving directory
> > > `/usr/local/src/linux-ha/Reusable-Cluster-Components-glue-1.0.3/lib'
> > > make: *** [all-recursive] Error 1
> > >
> > > when I make an old version of heartbeat (2.1.4) it's working...
> >
> > That's odd. pils.c didn't change much since 2005.
> >
> > Thanks,
> >
> > Dejan
> >
> > > On Mon, Feb 15, 2010 at 1:20 PM, Dejan Muhamedagic  > >wrote:
> > >
> > > > Hi,
> > > >
> > > > On Mon, Feb 15, 2010 at 01:15:49PM +0200, Yossi Nachum wrote:
> > > > > I solved this by adding /usr/ccs/bin to my PATH,
> > > > > but I have more problems...
> > > > > I am now trying to compile it with GNU ld, which is in
> > > > > /usr/local/bin/ld. Do you know how I tell configure to use it?
> > > > > It always picks the Sun version of ld ...
> > > >
> > > > Not sure. Perhaps by fixing your PATH to look first in
> > > > /usr/local/bin.
> > > >
> > > > Thanks,
> > > >
> > > > Dejan
> > > >
> > > > > On Mon, Feb 15, 2010 at 12:24 PM, Dejan Muhamedagic <
> > deja...@fastmail.fm
> > > > >wrote:
> > > > >
> > > > > > Hi,
> > > > > >
> > > > > > On Thu, Feb 11, 2010 at 10:53:24PM +0200, Yossi Nachum wrote:
> > > > > > > I solved this error by installing more dependencies packages
> > > > > > > but now I get this error
> > > > > > >
> > > > > > > make[1]: *** [libreplace.la] Error 1
> > > > > > > make[1]: Leaving directory
> > > > > > >
> > > >
> > `/usr/local/src/linux-ha/Reusable-Cluster-Components-glue-1.0.3/replace'
> > > > > > > make: *** [all-recursive] Error 1
> > > > > >
> > > > > > This doesn't tell us anything. Isn't there more?
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > > Dejan

[Linux-HA] About hb_gui

2010-02-16 Thread Alain.Moulle
Hi

I can no longer find, on www.clusterlabs.org/rpm and for any Linux
distribution, the pacemaker-mgmt package which gave us hb_gui ...

Is it gone from the repositories for good?
Or where is it hidden?

Thanks
Regards
Alain


Re: [Linux-HA] Simple 2 nodes Linux-HA scenario

2010-02-16 Thread fabio.anton...@kaskonetworks.it
Hi Andrew,
thanks a lot for your time.
I have read the document you wrote and I have understood many things
that were not so clear to me before. I have added a resources section
within the cib.xml. The cib.xml file now appears as:

# cat /var/lib/heartbeat/crm/cib.xml
  [the XML content was stripped by the list archiver]
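
(For reference, a minimal resources section for the heartbeat 2.x CRM
would look roughly like this -- the ids are illustrative:

  <resources>
    <primitive id="resource_ipaddr" class="ocf" provider="heartbeat"
               type="IPaddr2">
      <instance_attributes id="resource_ipaddr_attrs">
        <attributes>
          <nvpair id="resource_ipaddr_ip" name="ip"
                  value="192.168.2.154"/>
        </attributes>
      </instance_attributes>
    </primitive>
  </resources>
)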

The crm_mon command output is:


Last updated: Tue Feb 16 16:42:57 2010
Current DC: ws-mythtv-9 (4b6c73b2-990e-4fcf-9c13-78c3c3648801)
2 Nodes configured.
1 Resources configured.


Node: ws-mythtv-9 (4b6c73b2-990e-4fcf-9c13-78c3c3648801): online
Node: fabio-laptop (05f71d5a-e249-4f9c-a7d5-0dcd954578de): online

But I cannot see the eth0:0 interface with IP 192.168.2.154. What's
wrong, in your opinion? Can I monitor something else to understand
what's happening?
Forgive me for these trivial questions, but I need some help to start up.
Thanks a lot in advance
Fabio

Andrew Beekhof wrote:
> On Mon, Feb 15, 2010 at 1:06 PM, fabio.anton...@kaskonetworks.it
>  wrote:
>   
>> Hi all
>> I'm a newbie of Linux-HA.
>> My final target is to set up a simple 2-node cluster with only one
>> virtual IP address.
>> I have a couple of PCs running heartbeat 2.1.4-2 (Ubuntu 9.0.4).
>> The PCs are named
>> fabio-laptop and ws-mythtv-9
>> Their IP addresses are respectively
>> fabio-laptop: 192.168.2.10
>> ws-mythtv-9:  192.168.2.15
>>
>> My ha.cf file is
>>
>> autojoin any
>> udpport 694
>> bcast eth0
>> warntime 5
>> deadtime 15
>> initdead 60
>> keepalive 2
>> node fabio-laptop
>> node ws-mythtv-9
>> auto_failback off
>> debugfile /var/log/ha-debug
>> logfile /var/log/ha-log
>> # Logging
>> debug 1
>> coredumps true
>> logfacility daemon
>> crm respawn
>>
>> Of course this file has been replicated on both the PCs.
>>
>> My haresources is:
>> 
>
> haresources isn't used for crm based clusters (indicated by "crm
> respawn" in ha.cf)
> have a read of:
>
> http://www.clusterlabs.org/mediawiki/images/7/7d/Configuration_Explained_0.6.pdf
>
>   
>> fabio-laptop 192.168.2.154
>>
>> In other words I would like to have a virtual IP address 192.168.2.154
>> in high availability.
>> I run both the heartbeat daemons but I cannot see the eth0:0 assigned to
>> the Active PC (fabio-laptop). Should I see the eth0:0 created at boot
>> time or not?
>> The heartbeat channel seems to be working, because if I shut down one
>> of the Linux boxes (either fabio-laptop or ws-mythtv-9) I see that the
>> remote part detects that event. Anyway, I cannot see the eth0:0 virtual
>> interface defined in any case.
>> Probably I have a basic setup problem. Do I have to write some
>> additional file or to provide some additional info to the heartbeat
>> and crm daemons?
>> Please let me know if you need some more details or log info.
>> Any help will be appreciated.
>> Regards
>>
>> fabio antonini
>>
>> --
>> Fabio Antonini PhD
>> SW Designer
>> Kasko Networks srl
>> Loc.Boschetto, zona ind.le di Pile
>> 67100 L'Aquila
>>
>>
>> 
>
>
>
>   


-- 
Fabio Antonini PhD
SW Designer
Kasko Networks srl
Loc.Boschetto, zona ind.le di Pile
67100 L'Aquila



Re: [Linux-HA] Simple 2 nodes Linux-HA scenario

2010-02-16 Thread Darren.Mansell
Try with:

ip addr show
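
(IPaddr2 adds the address via iproute2, so it will not show up as an
"eth0:0" alias in ifconfig; if the resource is running you should see
a secondary address on eth0, something like -- prefix length
illustrative:

  inet 192.168.2.154/24 scope global secondary eth0
)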

-Original Message-
From: linux-ha-boun...@lists.linux-ha.org
[mailto:linux-ha-boun...@lists.linux-ha.org] On Behalf Of
fabio.anton...@kaskonetworks.it
Sent: 16 February 2010 15:46
To: General Linux-HA mailing list
Subject: Re: [Linux-HA] Simple 2 nodes Linux-HA scenario

Hi Andrew,
thanks a lot for your time.
I have read the document you wrote and I have understood many things
that were not so clear to me before. I have added a resources section
within the cib.xml. The cib.xml file now appears as:

# cat /var/lib/heartbeat/crm/cib.xml
  [the XML content was stripped by the list archiver]

The crm_mon command output is:


Last updated: Tue Feb 16 16:42:57 2010
Current DC: ws-mythtv-9 (4b6c73b2-990e-4fcf-9c13-78c3c3648801)
2 Nodes configured.
1 Resources configured.


Node: ws-mythtv-9 (4b6c73b2-990e-4fcf-9c13-78c3c3648801): online
Node: fabio-laptop (05f71d5a-e249-4f9c-a7d5-0dcd954578de): online

But I cannot see the eth0:0 interface with IP 192.168.2.154. What's
wrong, in your opinion? Can I monitor something else to understand
what's happening?
Forgive me for these trivial questions, but I need some help to start up.
Thanks a lot in advance
Fabio

Andrew Beekhof wrote:
> On Mon, Feb 15, 2010 at 1:06 PM, fabio.anton...@kaskonetworks.it
>  wrote:
>   
>> Hi all
>> I'm a newbie of Linux-HA.
>> My final target is to set up a simple 2-node cluster with only one
>> virtual IP address.
>> I have a couple of PCs running heartbeat 2.1.4-2 (Ubuntu 9.0.4).
>> The PCs are named
>> fabio-laptop and ws-mythtv-9
>> Their IP addresses are respectively
>> fabio-laptop: 192.168.2.10
>> ws-mythtv-9:  192.168.2.15
>>
>> My ha.cf file is
>>
>> autojoin any
>> udpport 694
>> bcast eth0
>> warntime 5
>> deadtime 15
>> initdead 60
>> keepalive 2
>> node fabio-laptop
>> node ws-mythtv-9
>> auto_failback off
>> debugfile /var/log/ha-debug
>> logfile /var/log/ha-log
>> # Logging
>> debug 1
>> coredumps true
>> logfacility daemon
>> crm respawn
>>
>> Of course this file has been replicated on both the PCs.
>>
>> My haresources is:
>> 
>
> haresources isn't used for crm based clusters (indicated by "crm
> respawn" in ha.cf)
> have a read of:
>
http://www.clusterlabs.org/mediawiki/images/7/7d/Configuration_Explained
_0.6.pdf
>
>   
>> fabio-laptop 192.168.2.154
>>
>> In other words I would like to have a virtual IP address 192.168.2.154
>> in high availability.
>> I run both the heartbeat daemons but I cannot see the eth0:0 assigned
>> to the Active PC (fabio-laptop). Should I see the eth0:0 created at
>> boot time or not?
>> The heartbeat channel seems to be working, because if I shut down one
>> of the Linux boxes (either fabio-laptop or ws-mythtv-9) I see that the
>> remote part detects that event. Anyway, I cannot see the eth0:0
>> virtual interface defined in any case.
>> Probably I have a basic setup problem. Do I have to write some
>> additional file or to provide some additional info to the heartbeat
>> and crm daemons?
>> Please let me know if you need some more details or log info.
>> Any help will be appreciated.
>> Regards
>>
>> fabio antonini
>>
>> --
>> Fabio Antonini PhD
>> SW Designer
>> Kasko Networks srl
>> Loc.Boschetto, zona ind.le di Pile
>> 67100 L'Aquila
>>
>>
>> 
>
>
>
>   


-- 
Fabio Antonini PhD
SW Designer
Kasko Networks srl
Loc.Boschetto, zona ind.le di Pile
67100 L'Aquila



[Linux-HA] Question about "group" in Pacemaker

2010-02-16 Thread Alain.Moulle
Hi,

Suppose we have 4 resources in a group, res1, res2, res3 and res4.

By default, if res4 fails (or stops), there is no impact on the others,
whereas if res2 fails (or stops), res3 and res4 will be stopped and
there is no impact on res1.

But I would like the whole group to migrate to another node in the
cluster if any of the 4 resources fails (or stops).

Can this be configured with a group? Or do I have to give up the group
and use colocation constraints?

Thanks
Regards
Alain


Re: [Linux-HA] hb_reset in HB 2.x?

2010-02-16 Thread David Lang
On Mon, 15 Feb 2010, Dejan Muhamedagic wrote:

> On Thu, Feb 11, 2010 at 10:04:04AM -0800, David Lang wrote:
>> on my old systems that are still running heartbeat 1.x there is a hb_reset
>> command that moves all resources to the node they are configured to start on
>> (assuming auto_fallback is turned off)
>>
>> however in a recent 2.x build I do not find this program, is there a way to 
>> do
>> this (I am still using 1.x configs as they do the job so I don't need the
>> complexity of the 2.x crm stuff)
>
> Can't recall seeing this program ever. You should be able to do a
> bit of scripting around ResourceManager (listkeys/takegroup/givegroup).

At this point we are just using 1.x configs.

I have a setup with a pair of boxes running a distributed database; the
data is on both boxes, so if one box goes down both VIPs move to the
surviving box. I don't want the failed box to get the VIP when it first
comes back up (as I need to replicate data to it first), so I don't
want to use auto_failback.
But once I bring the failed box back online, how do I get the VIPs
distributed like they would be if the boxes had booted at the same time?

My haresources looks like:

system-p 192.168.1.1
system-b 192.168.1.2

David Lang


Re: [Linux-HA] Oracle RAC + Linux-HA on OCFS2 filesystems.

2010-02-16 Thread Serge Dubrouski
On Tue, Feb 16, 2010 at 5:07 AM, Dejan Muhamedagic  wrote:
> Hi,
>
> On Tue, Feb 16, 2010 at 05:42:27AM -0500, Enrique Sanchez wrote:
>> Serge,
>>
>> The project manager insisted on using filesystems; he argued something
>> about SAP not supporting RAC on raw devices on Linux. Whether that is
>> true or false is beyond me; now I need to move forward.
>
> No experience with RAC here, but there are oracle and oralsnr
> resource agents. I don't know which RAC features you need, or
> whether not using RAC is an option for you.
>

Oracle RAC allows you to build an active/active cluster, which I don't
think would be possible with the oracle and oralsnr resource agents.

And even if you need to use OCFS2 you still don't have to use
Pacemaker to ensure high availability and integrity of those
resources. Oracle Clusterware has its own features for that.

-- 
Serge Dubrouski.


Re: [Linux-HA] hb_reset in HB 2.x?

2010-02-16 Thread Lars Ellenberg
On Tue, Feb 16, 2010 at 09:30:35AM -0800, David Lang wrote:
> On Mon, 15 Feb 2010, Dejan Muhamedagic wrote:
> 
> > On Thu, Feb 11, 2010 at 10:04:04AM -0800, David Lang wrote:
> >> on my old systems that are still running heartbeat 1.x there is a hb_reset
> >> command that moves all resources to the node they are configured to start 
> >> on
> >> (assuming auto_failback is turned off)
> >>
> >> however in a recent 2.x build I do not find this program, is there a way 
> >> to do
> >> this (I am still using 1.x configs as they do the job so I don't need the
> >> complexity of the 2.x crm stuff)
> >
> > Can't recall seeing this program ever. You should be able to do a
> > bit of scripting around ResourceManager (listkeys/takegroup/givegroup).
> 
> At this point we are just using 1.x configs.
> 
> I have a setup where I have a pair of boxes running a distributed database, 
> the 
> data is on both boxes so if one box goes down I have both VIPs go over to the 
> single box. I don't want the failed box to get the VIP when it first comes up 
> (as I need to replicate data to it first), so I don't want to use 
> auto_failback. 
> But once I bring the failed box back online, how do I get the VIPs
> distributed like they would be if the boxes had booted at the same time?

hb_standby foreign
hb_takeover local
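
(i.e., once replication to the rejoined box has caught up, either run
"hb_standby foreign" on the surviving box to hand back the resources
that belong to the other node, or run "hb_takeover local" on the
rejoined box -- e.g. system-b -- to pull its own VIP back.)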

-- 
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.


[Linux-HA] Pacemaker and LVS

2010-02-16 Thread Ciro Iriarte
What is the recommended setup for a highly available LVS service?
Should we run ldirectord under Pacemaker? Should we run ipvsadm
outside the cluster?

Regards,
CI.-


Re: [Linux-HA] Pacemaker and LVS

2010-02-16 Thread Michael Schwartzkopff
On Tuesday, 16 February 2010 at 19:25:47, Ciro Iriarte wrote:
> What is the recommended setup for a highly available LVS service?
> Should we run ldirectord under Pacemaker? Should we run ipvsadm
> outside the cluster?
>
> Regards,
> CI.-

Hi,

I would run the virtual IP (VIP) of the LVS server as an IPaddr2
resource of the cluster. If you have a NAT setup, you would also run
the director IP (DIP) as a cluster resource.

I also like to integrate ldirectord into the cluster, since the cluster
can then check whether the daemon is still alive.
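
(In crm shell terms, something like the following -- names, address
and interval are illustrative:

  primitive vip ocf:heartbeat:IPaddr2 \
    params ip="10.0.0.10" cidr_netmask="24"
  primitive director ocf:heartbeat:ldirectord \
    op monitor interval="20s"
  group lvs-director vip director
)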

Greetings,

-- 
Dr. Michael Schwartzkopff
MultiNET Services GmbH
Addresse: Bretonischer Ring 7; 85630 Grasbrunn; Germany
Tel: +49 - 89 - 45 69 11 0
Fax: +49 - 89 - 45 69 11 21
mob: +49 - 174 - 343 28 75

mail: mi...@multinet.de
web: www.multinet.de

Sitz der Gesellschaft: 85630 Grasbrunn
Registergericht: Amtsgericht München HRB 114375
Geschäftsführer: Günter Jurgeneit, Hubert Martens

---

PGP Fingerprint: F919 3919 FF12 ED5A 2801 DEA6 AA77 57A4 EDD8 979B
Skype: misch42


Re: [Linux-HA] Pacemaker and LVS

2010-02-16 Thread Ciro Iriarte
2010/2/16 Michael Schwartzkopff :
> On Tuesday, 16 February 2010 at 19:25:47, Ciro Iriarte wrote:
>> What is the recommended setup for a highly available LVS service?
>> Should we run ldirectord under Pacemaker? Should we run ipvsadm
>> outside the cluster?
>>
>> Regards,
>> CI.-
>
> Hi,
>
> I would run the virtual IP (VIP) of the LVS server as an IPaddr2
> resource of the cluster. If you have a NAT setup, you would also run
> the director IP (DIP) as a cluster resource.
>
> I also like to integrate ldirectord into the cluster, since the
> cluster can then check whether the daemon is still alive.
>
> Greetings,
>
> --
> Dr. Michael Schwartzkopff
> MultiNET Services GmbH
> Addresse: Bretonischer Ring 7; 85630 Grasbrunn; Germany
> Tel: +49 - 89 - 45 69 11 0
> Fax: +49 - 89 - 45 69 11 21
> mob: +49 - 174 - 343 28 75
>

So, your recommendation would be to run ldirectord as a resource?
What about connection table synchronization on failover and
failback events? I was initially planning to use LVS-NAT, but I'll
give LVS-DR a try...

CI.-


Re: [Linux-HA] Pacemaker and LVS

2010-02-16 Thread Michael Schwartzkopff
On Tuesday, 16 February 2010 at 20:50:33, Ciro Iriarte wrote:
> 2010/2/16 Michael Schwartzkopff :
> > On Tuesday, 16 February 2010 at 19:25:47, Ciro Iriarte wrote:
> >> What is the recommended setup for a highly available LVS service?
> >> Should we run ldirectord under Pacemaker? Should we run ipvsadm
> >> outside the cluster?
> >>
> >> Regards,
> >> CI.-
> >
> > Hi,
> >
> > I would run the virtual IP (VIP) of the LVS server as an IPaddr2
> > resource of the cluster. If you have a NAT setup, you would also run
> > the director IP (DIP) as a cluster resource.
> >
> > I also like to integrate ldirectord into the cluster, since the
> > cluster can then check whether the daemon is still alive.
> >
> > Greetings,
> >
> > --
> > Dr. Michael Schwartzkopff
> > MultiNET Services GmbH
> > Addresse: Bretonischer Ring 7; 85630 Grasbrunn; Germany
> > Tel: +49 - 89 - 45 69 11 0
> > Fax: +49 - 89 - 45 69 11 21
> > mob: +49 - 174 - 343 28 75
>
> So, your recommendation would be to run ldirectord as a resource?
> What about connection table synchronization on failover and
> failback events? I was initially planning to use LVS-NAT, but I'll
> give LVS-DR a try...
>
> CI.-

Connection table sync is the job of ipvsadm. Since this is part of a
kernel module, the cluster (userspace!) would not help much here.

I'd start the sync daemons (master and backup) on both machines
together with ipvsadm; the corresponding option in Debian is "both",
so each machine runs both the master and the backup daemon. Since only
one machine has the VIP, this setup works.
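
(Outside Debian you can start the same daemons by hand -- interface
and syncid are illustrative:

  ipvsadm --start-daemon master --mcast-interface eth0 --syncid 1
  ipvsadm --start-daemon backup --mcast-interface eth0 --syncid 1
)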

Greetings,

-- 
Dr. Michael Schwartzkopff
MultiNET Services GmbH
Addresse: Bretonischer Ring 7; 85630 Grasbrunn; Germany
Tel: +49 - 89 - 45 69 11 0
Fax: +49 - 89 - 45 69 11 21
mob: +49 - 174 - 343 28 75

mail: mi...@multinet.de
web: www.multinet.de

Sitz der Gesellschaft: 85630 Grasbrunn
Registergericht: Amtsgericht München HRB 114375
Geschäftsführer: Günter Jurgeneit, Hubert Martens

---

PGP Fingerprint: F919 3919 FF12 ED5A 2801 DEA6 AA77 57A4 EDD8 979B
Skype: misch42


Re: [Linux-HA] Simple 2 nodes Linux-HA scenario

2010-02-16 Thread Andrew Beekhof
On Tue, Feb 16, 2010 at 4:46 PM, fabio.anton...@kaskonetworks.it
 wrote:
> Hi Andrew,
> thanks a lot for your time.
> I have read the document you wrote and I have understood many things
> that were not so clear to me before. I have added a resources section
> within the cib.xml. The cib.xml file now appears as:

I'd need the full cib from the live cluster:
   cibadmin -Ql

Looks like the services are failing though.
Logs should also tell you more... look for "IPaddr2" and/or "resource_ipaddr".
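
(For example, with the logfile path from the ha.cf earlier in this
thread:

  grep -E "IPaddr2|resource_ipaddr" /var/log/ha-log | tail -n 50
)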


Re: [Linux-HA] Question about "group" in Pacemaker

2010-02-16 Thread Andrew Beekhof
On Tue, Feb 16, 2010 at 5:00 PM, Alain.Moulle  wrote:
> Hi,
>
> Suppose we have 4 resources in a group, res1, res2, res3 and res4.
>
> By default, if res4 fails (or stops), there is no impact on the others,

What version are you using?
Recent versions shouldn't be behaving like that.

> whereas if res2 fails (or stops), res3 and res4 will be stopped and
> there is no impact on res1.
>
> But I would like the whole group to migrate to another node in the
> cluster if any of the 4 resources fails (or stops).
>
> Can this be configured with a group? Or do I have to give up the group
> and use colocation constraints?
>
> Thanks
> Regards
> Alain
>


Re: [Linux-HA] About hb_gui

2010-02-16 Thread Andrew Beekhof
On Tue, Feb 16, 2010 at 2:54 PM, Alain.Moulle  wrote:
> Hi
>
> I can no longer find, on www.clusterlabs.org/rpm and for any Linux
> distribution, the pacemaker-mgmt package which gave us hb_gui ...
>
> Is it gone from the repositories for good?
> Or where is it hidden?
>

I've just not had time to add it to the build list yet


Re: [Linux-HA] Restart a resourse if other is migrated

2010-02-16 Thread Andrew Beekhof
On Fri, Feb 5, 2010 at 11:04 AM, Marian Marinov  wrote:
> Hello,
> sorry for the stupid question, but I can't seem to find a solution.
>
> I have an lsb resource which is a clone in my cluster. I want this lsb
> resource to be restarted each time heartbeat moves a certain resource
> (for example an IP address). I need all instances of the cloned
> resource to be restarted on all nodes.
>
> Does anyone know how I can achieve this?
>

If you made an ordering constraint which told the clone to start after
the IP, that would probably do what you need.
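
(With the crm shell, something like this -- resource names are
placeholders:

  crm configure order clone-after-ip inf: ip-rsc lsb-clone
)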


Re: [Linux-HA] Pacemaker and LVS

2010-02-16 Thread Ciro Iriarte
2010/2/16 Michael Schwartzkopff :
> On Tuesday, 16 February 2010 at 20:50:33, Ciro Iriarte wrote:
>> 2010/2/16 Michael Schwartzkopff :
>> > On Tuesday, 16 February 2010 at 19:25:47, Ciro Iriarte wrote:
>> >> What is the recommended setup for a highly available LVS service?
>> >> Should we run ldirectord under Pacemaker? Should we run ipvsadm
>> >> outside the cluster?
>> >>
>> >> Regards,
>> >> CI.-
>> >
>> > Hi,
>> >
>> > I would run the virtual IP (VIP) of the LVS server as an IPaddr2
>> > resource of the cluster. If you have a NAT setup, you would also
>> > run the director IP (DIP) as a cluster resource.
>> >
>> > I also like to integrate ldirectord into the cluster, since the
>> > cluster can then check whether the daemon is still alive.
>> >
>> > Greetings,
>> >
>> > --
>> > Dr. Michael Schwartzkopff
>> > MultiNET Services GmbH
>> > Addresse: Bretonischer Ring 7; 85630 Grasbrunn; Germany
>> > Tel: +49 - 89 - 45 69 11 0
>> > Fax: +49 - 89 - 45 69 11 21
>> > mob: +49 - 174 - 343 28 75
>>
>> So, your recommendation would be to run ldirectord as a resource?
>> What about connection table synchronization on failover and
>> failback events? I was initially planning to use LVS-NAT, but I'll
>> give LVS-DR a try...
>>
>> CI.-
>
> Connection table sync is the job of ipvsadm. Since this is part of a
> kernel module, the cluster (userspace!) would not help much here.
>
> I'd start the sync daemons (master and backup) on both machines
> together with ipvsadm; the corresponding option in Debian is "both",
> so each machine runs both daemons. Since only one machine has the VIP,
> this setup works.
>
> Greetings,
>
> --
> Dr. Michael Schwartzkopff
>
So having that working outside the cluster and ldirectord as a
resource would still give me connection table sync, right?

Thanks!
-- 
Ciro Iriarte
http://cyruspy.wordpress.com
--


Re: [Linux-HA] Pacemaker and LVS

2010-02-16 Thread Michael Schwartzkopff
On Tuesday, 16 February 2010 at 22:01:36, Ciro Iriarte wrote:
> 2010/2/16 Michael Schwartzkopff :
> > On Tuesday, 16 February 2010 at 20:50:33, Ciro Iriarte wrote:
> >> 2010/2/16 Michael Schwartzkopff :
> >> > On Tuesday, 16 February 2010 at 19:25:47, Ciro Iriarte wrote:
> >> >> What is the recommended setup for a highly available LVS service?
> >> >> Should we run ldirectord under Pacemaker? Should we run ipvsadm
> >> >> outside the cluster?
> >> >>
> >> >> Regards,
> >> >> CI.-
> >> >
> >> > Hi,
> >> >
> >> > I would run the virtual IP (VIP) of the LVS server as an IPaddr2
> >> > resource of the cluster. If you have a NAT setup, you would also
> >> > run the director IP (DIP) as a cluster resource.
> >> >
> >> > I also like to integrate ldirectord into the cluster, since the
> >> > cluster can then check whether the daemon is still alive.
> >> >
> >> > Greetings,
> >> >
> >> > --
> >> > Dr. Michael Schwartzkopff
> >> > MultiNET Services GmbH
> >> > Addresse: Bretonischer Ring 7; 85630 Grasbrunn; Germany
> >> > Tel: +49 - 89 - 45 69 11 0
> >> > Fax: +49 - 89 - 45 69 11 21
> >> > mob: +49 - 174 - 343 28 75
> >>
> >> So, your recommendation would be to run ldirectord as a resource?
> >> What about connection table synchronization on failover and
> >> failback events? I was initially planning to use LVS-NAT, but I'll
> >> give LVS-DR a try...
> >>
> >> CI.-
> >
> > Connection table sync is the job of ipvsadm. Since this is part of a
> > kernel module, the cluster (userspace!) would not help much here.
> >
> > I'd start the sync daemons (master and backup) on both machines
> > together with ipvsadm; the corresponding option in Debian is "both",
> > so each machine runs both daemons. Since only one machine has the
> > VIP, this setup works.
> >
> > Greetings,
> >
> > --
> > Dr. Michael Schwartzkopff
>
> So having that working outside the cluster and ldirectord as a
> resource would still give me connection table sync, right?
>
> Thanks!

Yes. Perhaps you could add some lines to your ldirectord OCF script to
check the state of the sync daemon.

In any case, this setup works when I run it in my classes.
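
(A rough, untested sketch of such a check inside the agent's monitor
action:

  if ! ipvsadm --list --daemon | grep -q master; then
      ocf_log err "LVS master sync daemon is not running"
      return $OCF_ERR_GENERIC
  fi
)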

-- 
Dr. Michael Schwartzkopff
MultiNET Services GmbH
Addresse: Bretonischer Ring 7; 85630 Grasbrunn; Germany
Tel: +49 - 89 - 45 69 11 0
Fax: +49 - 89 - 45 69 11 21
mob: +49 - 174 - 343 28 75

mail: mi...@multinet.de
web: www.multinet.de

Sitz der Gesellschaft: 85630 Grasbrunn
Registergericht: Amtsgericht München HRB 114375
Geschäftsführer: Günter Jurgeneit, Hubert Martens

---

PGP Fingerprint: F919 3919 FF12 ED5A 2801 DEA6 AA77 57A4 EDD8 979B
Skype: misch42


Re: [Linux-HA] ERROR: ocf:heartbeat:ldirectord: no such resource agent

2010-02-16 Thread Simon Horman
On Tue, Feb 16, 2010 at 01:09:42PM +0100, Dejan Muhamedagic wrote:
> Hi,
> 
> On Tue, Feb 16, 2010 at 03:08:53PM +1100, Simon Horman wrote:
> > On Mon, Feb 15, 2010 at 10:34:44PM +1100, Simon Horman wrote:
> > > On Mon, Feb 15, 2010 at 10:33:26AM +0100, Thomas Baumann wrote:
> > > > If you use following patch to the source, then all will be OK:
> > > > 
> > > > diff -uNr resource-agents.orig/ldirectord/OCF/ldirectord.in
> > > > resource-agents/ldirectord/OCF/ldirectord.in
> > > > --- resource-agents.orig/ldirectord/OCF/ldirectord.in   2010-02-15
> > > > 10:18:17.0 +0100
> > > > +++ resource-agents/ldirectord/OCF/ldirectord.in2010-02-15
> > > > 10:19:26.0 +0100
> > > > @@ -48,8 +48,8 @@
> > > > 
> > > >  . ${OCF_ROOT}/resource.d/heartbeat/.ocf-shellfuncs
> > > > 
> > > > -LDIRCONF=${OCF_RESKEY_configfile:-@sbindir@/ldirectord/ldirectord.cf}
> > > > -LDIRECTORD=${OCF_RESKEY_ldirectord:-@sysconfdir@/ldirectord}
> > > > +LDIRCONF=${OCF_RESKEY_configfile:-@sysconfdir@/ha.d/ldirectord.cf}
> > > > +LDIRECTORD=${OCF_RESKEY_ldirectord:-@sbindir@/ldirectord}
> > > 
> > > That looks good to me.
> > 
> > Does anyone object? If not, I'll put a version of this into hg.
> 
> No objections. The patch is probably good, but I still don't
> understand how setting different defaults can fix a bug. Or has
> this bug already been fixed by Andreas?

I'm not sure either.


Re: [Linux-HA] Question about "group" in Pacemaker

2010-02-16 Thread Alain.Moulle
Hi Andrew,
the releases are those officially delivered with fc12:
pacemaker-1.0.5-4.fc12
and :
cluster-glue-1.0-0.11.b79635605337.hg.fc12
corosync-1.1.2-1.fc12
heartbeat-3.0.0-0.5.0daab7da36a8.hg.fc12
openais-1.1.0-1.fc12
all in x86_64 .

Thanks
Alain

> Hi,
> >
> > Suppose we have 4 resources in a group, res1, res2, res3 and res4.
> >
> > By default, if res4 fails (or stops), there is no impact on the others,
>   
>
> What version are you using?
> Recent versions shouldn't be behaving like that.
>
>   
>> > whereas if res2 fails (or stops), res3 and res4 will be stopped and
>> > there is no impact on res1.
>> >
>> > But I would like the whole group to migrate to another node in the
>> > cluster if any of the 4 resources fails (or stops).
>> >
>> > Can this be configured with a group? Or do I have to give up the
>> > group and use colocation constraints?
>> >
>> > Thanks
>> > Regards
>> > Alain