Re: [Pacemaker] Initial quorum

2011-07-20 Thread pskrap
Devin Reade writes: > > --On Wednesday, July 20, 2011 09:19:33 AM + pskrap > wrote: > > > I have a cluster where some of the resources cannot run on the same node. > > All resources must be running to provide a functioning service. This > > means that a certain number of nodes needs to b

[Pacemaker] Fw: Fw: Configuration for FS over DRBD over LVM

2011-07-20 Thread Bob Schatz
Okay, this configuration works on one node (I am waiting for a hardware problem to be fixed before testing with the second node):
node cnode-1-3-5
node cnode-1-3-6
primitive glance-drbd ocf:linbit:drbd \
        params drbd_resource="glance-repos-drbd" \
        op start interval="0" timeout="240" \
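The preview cuts off mid-primitive. For reference, a complete FS-over-DRBD stack in crm shell syntax usually looks roughly like the sketch below; the stop/monitor operations, the master/slave meta attributes, and the Filesystem parameters (device, mount point, fstype) are assumptions for illustration, not taken from Bob's actual post, with the ms_drbd name borrowed from his later correction.

    # Sketch only: timeouts, monitor intervals and Filesystem params are assumed
    primitive glance-drbd ocf:linbit:drbd \
            params drbd_resource="glance-repos-drbd" \
            op start interval="0" timeout="240" \
            op stop interval="0" timeout="100" \
            op monitor interval="20" role="Slave" timeout="20" \
            op monitor interval="10" role="Master" timeout="20"
    ms ms_drbd glance-drbd \
            meta master-max="1" master-node-max="1" clone-max="2" \
            clone-node-max="1" notify="true"
    primitive glance-repos-fs ocf:heartbeat:Filesystem \
            params device="/dev/drbd0" directory="/srv/glance" fstype="ext4" \
            op monitor interval="20" timeout="40"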

Re: [Pacemaker] Initial quorum

2011-07-20 Thread Devin Reade
--On Wednesday, July 20, 2011 09:19:33 AM + pskrap wrote: > I have a cluster where some of the resources cannot run on the same node. > All resources must be running to provide a functioning service. This > means that a certain number of nodes needs to be up before it makes > sense for the

[Pacemaker] Fw: Configuration for FS over DRBD over LVM

2011-07-20 Thread Bob Schatz
One correction: I removed the "location" constraint and simply went with this:
      colocation coloc-rule-w-master inf: glance-repos ms_drbd:Master glance-repos-fs-group
      order glance-order-fs-after-drbd inf: glance-repos:start ms_drbd:promote glance-repos-fs-group:start
      order gla
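The third order constraint is truncated, but the usual pattern for a filesystem group that must follow the DRBD master needs only one colocation and one order. A minimal sketch using the resource names from this thread (scores and constraint IDs here are illustrative, not Bob's exact configuration):

    colocation fs-group-with-drbd-master inf: glance-repos-fs-group ms_drbd:Master
    order fs-group-after-drbd-promote inf: ms_drbd:promote glance-repos-fs-group:start

With mandatory (inf) scores, the group is placed only where ms_drbd is Master and is started only after the promote has completed.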

Re: [Pacemaker] The placement strategy of the group resource does not work well

2011-07-20 Thread Yuusuke IIDA
Hi Andrew, I confirmed that the problem has been fixed. Many thanks!! Yuusuke (2011/07/19 10:42), Andrew Beekhof wrote: This should now be fixed in: http://hg.clusterlabs.org/pacemaker/devel/rev/960a7e3da680 It's based on your patches but is a little more generic. On Mon, Jul 11, 2011 at 10:22

Re: [Pacemaker] A question and demand to a resource placement strategy function

2011-07-20 Thread Yuusuke IIDA
Hi Andrew, I confirmed that the problem has been fixed. Many thanks!! Yuusuke (2011/07/19 10:42), Andrew Beekhof wrote: This should also now be fixed in: http://hg.clusterlabs.org/pacemaker/devel/rev/960a7e3da680 On Tue, Jul 5, 2011 at 9:43 PM, Yuusuke IIDA wrote: Hi Andrew, I know that the

Re: [Pacemaker] Fw: Configuration for FS over DRBD over LVM

2011-07-20 Thread Bob Schatz
One correction: I removed the "location" constraint and simply went with this:
      colocation coloc-rule-w-master inf: glance-repos ms_drbd:Master glance-repos-fs-group
      order glance-order-fs-after-drbd inf: glance-repos:start ms_drbd:promote glance-repos-fs-group:start
      order glanc

[Pacemaker] Resource Group Questions - Start/Stop Order

2011-07-20 Thread Bobbie Lind
Hi group, I am running a 6-node system, 4 of which mount the LUNs for my Lustre file system. I currently have 29 LUNs per server set up in 4 Resource Groups. I understand the default startup/shutdown order of the resources but I was wondering if there is a way to override that and have all the res
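One way this is commonly handled, assuming the Pacemaker version in use supports it, is the group's ordered meta attribute, which keeps the members colocated but lets them start and stop in parallel; the group and member names below are made up for illustration and should be checked against the actual CIB:

    # Sketch only: hypothetical names; verify that 'ordered' is honoured by your Pacemaker release
    group lustre-oss1-group ost-lun-01 ost-lun-02 ost-lun-03 \
            meta ordered="false"

The alternative is to drop the group and keep only the constraints that are really required, i.e. explicit order/colocation statements between the individual LUN primitives.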

Re: [Pacemaker] Cluster with DRBD : split brain

2011-07-20 Thread Digimer
On 07/20/2011 11:24 AM, Hugo Deprez wrote: > Hello Andrew, > > in fact DRBD was in standalone mode but the cluster was working: > > Here is the syslog of DRBD's split brain: > > Jul 15 08:45:34 node1 kernel: [1536023.052245] block drbd0: Handshake > successful: Agreed network protocol vers

Re: [Pacemaker] Cluster with DRBD : split brain

2011-07-20 Thread Hugo Deprez
Hello Andrew, in fact DRBD was in standalone mode but the cluster was working: Here is the syslog of DRBD's split brain: Jul 15 08:45:34 node1 kernel: [1536023.052245] block drbd0: Handshake successful: Agreed network protocol version 91 Jul 15 08:45:34 node1 kernel: [1536023.052267] block
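Separate from why the cluster kept running with DRBD in StandAlone, the manual split-brain recovery itself is normally done with drbdadm roughly as sketched below; the resource name r0 is hypothetical and the --discard-my-data syntax differs between DRBD 8.3 and 8.4, so check the installed version:

    # On the node whose changes are to be discarded (split-brain victim):
    drbdadm secondary r0
    drbdadm -- --discard-my-data connect r0   # DRBD 8.3 form; 8.4 uses: drbdadm connect --discard-my-data r0
    # On the surviving node, if it has dropped to StandAlone:
    drbdadm connect r0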

[Pacemaker] Initial quorum

2011-07-20 Thread pskrap
Hi, I have a cluster where some of the resources cannot run on the same node. All resources must be running to provide a functioning service. This means that a certain number of nodes needs to be up before it makes sense for the cluster to start any resources. All works fine after enough nodes h
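Two parts of this setup can be written directly in crm shell syntax: the "cannot run on the same node" requirement is a -inf colocation, and the stock quorum behaviour (no resources started until the partition has quorum) is controlled by no-quorum-policy. The resource names below are hypothetical, and this does not by itself give a configurable "at least N nodes up" threshold:

    # Sketch only: svc-a and svc-b are hypothetical resource names
    property no-quorum-policy="stop"
    colocation svc-a-apart-from-svc-b -inf: svc-a svc-b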