On Tue, Mar 23, 2010 at 11:51 PM, Alan Jones <falanclus...@gmail.com> wrote:
> BTW: The order matters in the colocation rule.  When I configure:
> colocation colo-master_worker -1: master worker
> then "failback" is blocked by the stickiness.  In my opinion this is a bug,
> but others may have an explanation.
The order is significant:

  colocation colo-master_worker -1: master worker

is not the same as:

  colocation colo-master_worker -1: worker master

But there may be something else going on, so I'll certainly take a look if
you file a bug.  Be sure to include the full CIB when the cluster is in the
state you described (i.e. preventing failback).

> This is the default version that installs on FC12 using the GUI software
> package tools.
> Alan
>
> On Tue, Mar 23, 2010 at 3:47 PM, Alan Jones <falanclus...@gmail.com> wrote:
>>
>> The following rules give me the behavior I was looking for:
>>
>> primitive master ocf:pacemaker:Dummy meta resource-stickiness="INFINITY" is-managed="true"
>> location l-master_a master 1: fc12-a
>> location l-master_b master 1: fc12-b
>> primitive worker ocf:pacemaker:Dummy
>> location l-worker_a worker 1: fc12-a
>> location l-worker_b worker 1: fc12-b
>> colocation colo-master_worker -1: worker master
>>
>> To recap, the goal is an active-active two-node cluster where "master" is
>> sticky, and "master" and "worker" anti-colocate when possible for
>> performance.
>> Note that I had to add points for each resource on each node to overcome
>> the negative colocation score and allow them both to run on one node.
>> If there is a more elegant solution, let me know.
>> Alan
>>
>> On Tue, Mar 23, 2010 at 8:24 AM, Andrew Beekhof <and...@beekhof.net> wrote:
>>>
>>> On Mon, Mar 22, 2010 at 9:18 PM, Alan Jones <falanclus...@gmail.com> wrote:
>>> > Well, I guess my configuration is not as common.
>>> > In my case, one of these resources, say resource A, suffers greater
>>> > disruption if it is moved.
>>> > So, after a failover I would prefer that resource B move, reversing
>>> > the node placement.
>>> > Is this possible to express?
>>>
>>> Make A stickier than B.
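[A note on the constraint semantics behind the two replies above: in crm colocation syntax the first resource is the dependent one, placed relative to the second, which is why the argument order interacts with stickiness. A minimal sketch, using the thread's resource names; the comments are one reading of the behavior, not part of the original mails:]

```
# Syntax: colocation <id> <score>: <dependent-rsc> <with-rsc>
# The dependent resource is the one placed relative to the other.
colocation colo-master_worker -1: worker master
# Here "worker" avoids whichever node holds "master", so a sticky
# "master" stays put after failover and "worker" is the one that moves.
# Reversing the arguments ("master worker") makes master the dependent
# resource, which is what blocked failback in the report above.
```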
>>>
>>> Please google for the following keywords:
>>>   site:clusterlabs.org resource-stickiness
>>>
>>> > Alan
>>> >
>>> > On Mon, Mar 22, 2010 at 11:10 AM, Dejan Muhamedagic <deja...@fastmail.fm> wrote:
>>> >>
>>> >> Hi,
>>> >>
>>> >> On Mon, Mar 22, 2010 at 09:29:50AM -0700, Alan Jones wrote:
>>> >> > Friends,
>>> >> > I have what should be a simple goal: two resources to run on two
>>> >> > nodes.
>>> >> > I'd like to configure them to run on separate nodes when available,
>>> >> > i.e. active-active,
>>> >> > and provide for them to run together on either node when one fails,
>>> >> > i.e. failover.
>>> >> > Up until this point I have assumed that this would be a base use
>>> >> > case for Pacemaker; however, it seems from the discussion on:
>>> >> > http://wiki.lustre.org/index.php/Using_Pacemaker_with_Lustre
>>> >> > ... that it is not (see below).  Any ideas?
>>> >>
>>> >> Why not just two location constraints (aka node preferences):
>>> >>
>>> >> location l1 rsc1 100: node1
>>> >> location l2 rsc2 100: node2
>>> >>
>>> >> Thanks,
>>> >>
>>> >> Dejan
>>> >>
>>> >> > Alan
>>> >> >
>>> >> > *Note:* Use care when setting up your point system.  You can use the
>>> >> > point system if your cluster has at least three nodes or if the
>>> >> > resource can acquire points from other constraints.  However, in a
>>> >> > system with only two nodes and no way to acquire points, the
>>> >> > constraint in the example above will result in an inability to
>>> >> > migrate a resource from a failed node.
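[A rough score tally makes the two-node warning in the quoted *Note* concrete. The numbers are illustrative; the underlying rule, which the thread's experience also suggests, is that a node whose net score for a resource is negative is not eligible to run it:]

```
# Two nodes, colocation rsc1-with-rsc2 at -100, no location points.
# After node1 fails while rsc2 runs on node2:
#   node1 (failed):    -INFINITY
#   node2 (has rsc2):  0 + (-100) = -100   -> negative: rsc1 cannot run
#
# Adding a +200 location preference on each node changes the tally:
#   node2 (has rsc2):  200 + (-100) = 100  -> positive: failover allowed
```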
>>> >> >
>>> >> > The example they refer to is similar to yours:
>>> >> >
>>> >> > # crm configure colocation colresOST1resOST2 -100: resOST1 resOST2
>>> >> >
>>> >> > _______________________________________________
>>> >> > Pacemaker mailing list
>>> >> > Pacemaker@oss.clusterlabs.org
>>> >> > http://oss.clusterlabs.org/mailman/listinfo/pacemaker
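[Pulling the thread's suggestions together, one possible consolidated configuration; the scores, the -50 colocation value, and the node names are illustrative, not a tested recipe:]

```
# Sticky resource: expensive to move, so it stays put after failover.
primitive master ocf:pacemaker:Dummy meta resource-stickiness="INFINITY"
primitive worker ocf:pacemaker:Dummy
# Per-node preferences give each resource positive points everywhere,
# so a net-negative score never bans it from the surviving node.
location l-master_a master 100: fc12-a
location l-master_b master 100: fc12-b
location l-worker_a worker 100: fc12-a
location l-worker_b worker 100: fc12-b
# Finite anti-colocation: "worker" (the dependent resource) avoids
# master's node when both nodes are up, but the +100 location points
# outweigh -50, so both may share a node after a failure.
colocation colo-master_worker -50: worker master
```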