Devin Reade writes:
>
> --On Wednesday, July 20, 2011 09:19:33 AM pskrap wrote:
>
> > I have a cluster where some of the resources cannot run on the same node.
> > All resources must be running to provide a functioning service. This
> > means that a certain number of nodes needs to b
Okay, this configuration works on one node (I am waiting for a hardware problem
to be fixed before testing with the second node):
node cnode-1-3-5
node cnode-1-3-6
primitive glance-drbd ocf:linbit:drbd \
        params drbd_resource="glance-repos-drbd" \
        op start interval="0" timeout="240" \
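(The listing cuts the resource definition off at this point. For reference, a
DRBD primitive like this is normally finished with stop/monitor operations and
wrapped in a master/slave resource, along the lines of the sketch below. The
timeouts are illustrative; only the ms_drbd name is taken from the constraints
quoted elsewhere on this page.)

primitive glance-drbd ocf:linbit:drbd \
        params drbd_resource="glance-repos-drbd" \
        op start interval="0" timeout="240" \
        op stop interval="0" timeout="100" \
        op monitor interval="20" role="Master" timeout="20" \
        op monitor interval="30" role="Slave" timeout="20"
ms ms_drbd glance-drbd \
        meta master-max="1" master-node-max="1" \
        clone-max="2" clone-node-max="1" notify="true"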
--On Wednesday, July 20, 2011 09:19:33 AM pskrap wrote:
> I have a cluster where some of the resources cannot run on the same node.
> All resources must be running to provide a functioning service. This
> means that a certain number of nodes needs to be up before it makes
> sense for the
One correction:
I removed the "location" constraint and simply went with this:
colocation coloc-rule-w-master inf: glance-repos ms_drbd:Master glance-repos-fs-group
order glance-order-fs-after-drbd inf: glance-repos:start ms_drbd:promote glance-repos-fs-group:start
order gla
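(The listing truncates the constraint set here. For comparison, the pattern the
DRBD User's Guide recommends for running a filesystem on the DRBD master is a
single colocation with the Master role plus one promote-before-start order; the
constraint IDs below are only illustrative:)

colocation fs-group-on-drbd-master inf: glance-repos-fs-group ms_drbd:Master
order fs-group-after-drbd-promote inf: ms_drbd:promote glance-repos-fs-group:start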
Hi, Andrew
I confirmed that the problem has been fixed.
Many thanks!!
Yuusuke
(2011/07/19 10:42), Andrew Beekhof wrote:
This should now be fixed in:
http://hg.clusterlabs.org/pacemaker/devel/rev/960a7e3da680
It's based on your patches but is a little more generic.
On Mon, Jul 11, 2011 at 10:22
Hi, Andrew
I confirmed that the problem has been fixed.
Many thanks!!
Yuusuke
(2011/07/19 10:42), Andrew Beekhof wrote:
This should also now be fixed in:
http://hg.clusterlabs.org/pacemaker/devel/rev/960a7e3da680
On Tue, Jul 5, 2011 at 9:43 PM, Yuusuke IIDA wrote:
Hi, Andrew
I know that the
Hi group,
I am running a 6-node system, 4 of which mount the LUNs for my Lustre file
system. I currently have 29 LUNs per server set up in 4 Resource Groups. I
understand the default startup/shutdown order of the resources, but I was
wondering if there is a way to override that and have all the res
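(The listing cuts the question off. Assuming the goal is to start a group's
members in parallel rather than one after another, the group meta attribute
"ordered" is the usual knob, depending on the Pacemaker version; the group and
LUN resource names below are made up purely for illustration:)

# hypothetical group of LUN mounts, started without internal ordering
group oss1-luns lun-1 lun-2 lun-3 lun-4 \
        meta ordered="false"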
On 07/20/2011 11:24 AM, Hugo Deprez wrote:
> Hello Andrew,
>
> in fact DRBD was in StandAlone mode, but the cluster was still working:
>
> Here is the syslog of the DRBD split brain:
>
> Jul 15 08:45:34 node1 kernel: [1536023.052245] block drbd0: Handshake successful: Agreed network protocol vers
Hello Andrew,
in fact DRBD was in StandAlone mode, but the cluster was still working:
Here is the syslog of the DRBD split brain:
Jul 15 08:45:34 node1 kernel: [1536023.052245] block drbd0: Handshake successful: Agreed network protocol version 91
Jul 15 08:45:34 node1 kernel: [1536023.052267] block
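(The log is cut off here. For reference, the manual split-brain recovery
described in the DRBD User's Guide is to pick a victim node whose changes get
discarded and then reconnect; the resource name r0 below is a placeholder, and
the exact flag placement depends on the DRBD release:)

# on the node whose changes will be discarded (the split-brain victim):
drbdadm secondary r0
drbdadm connect --discard-my-data r0
# (on DRBD 8.3.x the equivalent form is: drbdadm -- --discard-my-data connect r0)

# on the surviving node, only if it is also stuck in StandAlone:
drbdadm connect r0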
Hi,
I have a cluster where some of the resources cannot run on the same node. All
resources must be running to provide a functioning service. This means that a
certain number of nodes needs to be up before it makes sense for the cluster to
start any resources. All works fine after enough nodes h
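(The listing truncates the question here. The "cannot run on the same node"
requirement is normally expressed with -inf colocation constraints; the
resource names below are made up purely for illustration:)

# never let these services share a node
colocation keep-svcA-off-svcB -inf: svcA svcB
colocation keep-svcA-off-svcC -inf: svcA svcC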