On Thu, Dec 31, 2009 at 11:35 AM, Daniel Qian wrote:
> I am using pacemaker, corosync and ocfs2 on Fedora 12 to build an
> active/active cluster. When I try to start up the o2cb resource with
> ocfs2-tools-pcmk-1.4.3-3.fc12.x86_64, which comes with Fedora 12, it produces
> the following errors:
>
> Dec 3
I am using pacemaker, corosync and ocfs2 on Fedora 12 to build an active/active
cluster. When I try to start up the o2cb resource with
ocfs2-tools-pcmk-1.4.3-3.fc12.x86_64, which comes with Fedora 12, it produces the
following errors:
Dec 30 22:06:29 ilo150 corosync[3866]: [pcmk ] info: pcmk_notify
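For reference, the layout that ocfs2-tools-pcmk targets usually pairs the o2cb agent with the DLM control daemon, both cloned and ordered. A rough sketch in crm shell syntax (resource and constraint names here are only illustrative, not taken from the poster's setup):

    # DLM controld and o2cb, each cloned across the nodes
    primitive dlm ocf:pacemaker:controld \
            op monitor interval="120s"
    primitive o2cb ocf:ocfs2:o2cb \
            op monitor interval="120s"
    clone dlm-clone dlm meta interleave="true"
    clone o2cb-clone o2cb meta interleave="true"
    # o2cb must run where DLM runs, and only after it
    colocation o2cb-with-dlm inf: o2cb-clone dlm-clone
    order dlm-before-o2cb inf: dlm-clone o2cb-clone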
I found a few problems in your configuration.
First, why do you set master-max=2? So both nodes are promoted and both nodes
are masters. Then your colocation constraint does not make sense, because
Hosting has to be started on both nodes. If this is not what you want, then
please remove master-max=
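For comparison, a single-master layout with the colocation tied to the Master role would look roughly like this in crm syntax (Hosting and ms_drbd_r0 are taken from the thread, the constraint names are invented):

    ms ms_drbd_r0 drbd_r0 \
            meta notify="true" master-max="1" clone-max="2"
    # Hosting follows the promoted instance only
    colocation hosting-on-drbd-master inf: Hosting ms_drbd_r0:Master
    order hosting-after-drbd inf: ms_drbd_r0:promote Hosting:start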
On Wed, Dec 30, 2009 at 10:54:59AM +0100, f...@fredleroy.com wrote:
> Many thanks for your help!
>
> Just one question about your MySQL IP.
> Do you use a dedicated IP for MySQL? Why not just refer to localhost?
We have a strong policy of "one service, one IP", on the basis that sooner
or late
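In practice that policy usually turns into a dedicated IPaddr2 resource grouped with the service. A rough sketch in crm syntax (the address and resource names here are placeholders, not the poster's actual values):

    primitive mysql-ip ocf:heartbeat:IPaddr2 \
            params ip="192.168.0.50" cidr_netmask="24" \
            op monitor interval="30s"
    primitive mysql-server ocf:heartbeat:mysql \
            op monitor interval="30s"
    # group keeps the address and the daemon on the same node, IP first
    group grp-mysql mysql-ip mysql-server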
Hi,
On Wed, Dec 30, 2009 at 12:59:07AM -0500, Daniel Qian wrote:
>
> ----- Original Message -----
> From: hj lee
> To: pacemaker@oss.clusterlabs.org ; Daniel Qian
> Sent: Wednesday, December 30, 2009 12:16 AM
> Subject: Re: [Pacemaker] parse error in config: The consensus timeout
>
Hi,
On Wed, Dec 30, 2009 at 01:31:27PM +0100, Bernd Schubert wrote:
> Hello Dejan,
>
> On Thursday 24 December 2009, Dejan Muhamedagic wrote:
>
> [...]
>
> > > > > In a pacemaker cluster with correctly enabled STONITH the cluster
> > > > > manager takes care that the resource is only mounted on
Hi,
I've replaced /usr/lib/ocf/resource.d/linbit/drbd with the version from git
as you suggested. Lars, you can get a diff if you wish. I've also
changed the preference scores on all definitions.
The Hosting resource (mounted on ms_drbd_r0/primary) still gets restarted if
_anything_ happens to the peer's drbd
Hello Dejan,
On Thursday 24 December 2009, Dejan Muhamedagic wrote:
[...]
> > > > In a pacemaker cluster with correctly enabled STONITH the cluster
> > > > manager takes care that the resource is only mounted on one node,
> > > > doesn't it? At least in my understanding it should. Only after getti
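For context, correctly enabling STONITH means both a fencing primitive and the cluster property. A minimal sketch with an IPMI device as an example (the agent and its parameters depend entirely on the hardware actually available):

    primitive st-node1 stonith:external/ipmi \
            params hostname="node1" ipaddr="10.0.0.1" userid="admin" passwd="secret" \
            op monitor interval="60m"
    # never run a node's own fencing device on that node
    location st-node1-placement st-node1 -inf: node1
    property stonith-enabled="true"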
On Wed, Dec 30, 2009 at 10:34:17AM +0100, Martin Gombač wrote:
> I just fixed interleave:
>
> ms ms_drbd_r0 drbd_r0 \
>         meta notify="true" master-max="2" interleave="true"
>
> then started the other node. Before it promoted drbd on the second node,
> it stopped the Hosting resource on the firs
On 12/30/2009 10:54 AM, f...@fredleroy.com wrote:
> Many thanks for your help!
>
> Just one question about your MySQL IP.
> Do you use a dedicated IP for MySQL? Why not just refer to localhost?
because we've prepared the setup for an active-active configuration,
where the mysql server needs to
Many thanks for your help!
Just one question about your MySQL IP.
Do you use a dedicated IP for MySQL? Why not just refer to localhost?
--
Frédéric LEROY
Mobile : +33 6 85 84 49 07
> On 12/30/2009 10:04 AM, f...@fredleroy.com wrote:
>> Hi all,
>>
>> I'm a real newbie to pacemaker and after qu
On 12/30/2009 10:04 AM, f...@fredleroy.com wrote:
> Hi all,
>
> I'm a real newbie to pacemaker and after quite a bit of reading, I believe my
> setup would be the following:
> - 2 node cluster active/passive
> - using debian lenny, 1 nic per node, hard raid1 on each node
> - plan to use the corosync
I just fixed interleave:

ms ms_drbd_r0 drbd_r0 \
        meta notify="true" master-max="2" interleave="true"

then started the other node. Before it promoted drbd on the second node,
it stopped the Hosting resource on the first one and then ran it again.
Dec 30 10:26:27 ibm1 pengine: [6984]: notice
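For anyone following along, the usual complete form of that fix is interleave on both the master/slave set and the dependent clone, plus colocation/order against the Master role. A sketch assuming Hosting itself runs as a clone on both nodes (constraint and clone names invented):

    ms ms_drbd_r0 drbd_r0 \
            meta notify="true" master-max="2" interleave="true"
    clone cl_Hosting Hosting \
            meta interleave="true"
    # with interleave, each Hosting instance only reacts to its local drbd instance
    colocation hosting-with-drbd-master inf: cl_Hosting ms_drbd_r0:Master
    order hosting-after-drbd inf: ms_drbd_r0:promote cl_Hosting:start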
On Wed, Dec 30, 2009 at 10:04:43AM +0100, f...@fredleroy.com wrote:
> Hi all,
>
> I'm a real newbie to pacemaker and after quite a bit of reading, I believe my
> setup would be the following:
> - 2 node cluster active/passive
> - using debian lenny, 1 nic per node, hard raid1 on each node
> - plan t
I have it set to ignore already, because I do resource-level fencing
with a custom app, which is run/triggered by drbd when it loses connection.

property $id="cib-bootstrap-options" \
dc-version="1.0.6-f709c638237cdff7556cb6ab615f32826c0f8c06" \
cluster-infrastructure="Heartbeat" \
ston
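For reference, the stock way to wire resource-level fencing into Pacemaker, instead of a custom app, is DRBD's own fence-peer handler. A sketch in drbd.conf syntax, assuming a drbd version that ships crm-fence-peer.sh and a resource named r0:

    resource r0 {
        disk {
            # fence only the resource, not the whole node
            fencing resource-only;
        }
        handlers {
            # add/remove a location constraint in the CIB on connection loss/resync
            fence-peer          "/usr/lib/drbd/crm-fence-peer.sh";
            after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
        }
    }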
Hi all,
I'm a real newbie to pacemaker and after quite a bit of reading, I believe my
setup would be the following:
- 2 node cluster active/passive
- using debian lenny, 1 nic per node, hard raid1 on each node
- plan to use the corosync/pacemaker package
- each node will host drbd (protocol C), ip,
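As a starting point, that kind of active/passive DRBD + filesystem + IP stack usually ends up looking roughly like this in crm shell (device, mount point, address and all names here are placeholders):

    primitive drbd0 ocf:linbit:drbd \
            params drbd_resource="r0" \
            op monitor interval="29s" role="Master" \
            op monitor interval="31s" role="Slave"
    ms ms_drbd0 drbd0 \
            meta master-max="1" notify="true"
    primitive fs0 ocf:heartbeat:Filesystem \
            params device="/dev/drbd0" directory="/srv" fstype="ext3"
    primitive ip0 ocf:heartbeat:IPaddr2 \
            params ip="192.168.0.10" cidr_netmask="24"
    # filesystem and service IP move together, only on the DRBD master
    group grp_svc fs0 ip0
    colocation svc-on-drbd-master inf: grp_svc ms_drbd0:Master
    order svc-after-drbd inf: ms_drbd0:promote grp_svc:start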