On 05/27/14 05:38, K Mehta wrote:
One more question.
With crmsh, it was easy to add a constraint restricting a resource to run
only on a subset of nodes (say vsanqa11 and vsanqa12) using the following
command:

crm configure location ms-${uuid}-nodes ms-$uuid rule -inf: \#uname ne vsanqa11 and \#uname ne vsanqa12
[root@vsanqa11 ~]# pcs constraint show --full
Location Constraints:
   Resource: ms-c6933988-9e5c-419e-8fdf-744100d76ad6
     Constraint: ms-c6933988-9e5c-419e-8fdf-744100d76ad6-nodes
       Rule: score=-INFINITY (id:ms-c6933988-9e5c-419e-8fdf-744100d76ad6-nodes-rule)
         Expression: #uname ne vsanqa11 (id:ms-c6933988-9e5c-419e-8fdf-744100d76ad6-nodes-expression)
         Expression: #uname ne vsanqa12 (id:ms-c6933988-9e5c-419e-8fdf-744100d76ad6-nodes-expression-0)
Ordering Constraints:
Colocation Constraints:

So both expressions are part of the same rule, as expected.



With pcs, I am not sure how to write an 'avoid' constraint when I need a
resource to run on vsanqa11 and vsanqa12 and not on any other node.
So I tried adding a location constraint as follows:

pcs -f $CLUSTER_CREATE_LOG constraint location vha-$uuid rule score=-INFINITY \#uname ne vsanqa11 and \#uname ne vsanqa12

No error is thrown, but the condition after "and" is silently dropped, as
shown below:

[root@vsanqa11 ~]# pcs constraint show --full
Location Constraints:
   Resource: ms-c6933988-9e5c-419e-8fdf-744100d76ad6
     Constraint: location-vha-c6933988-9e5c-419e-8fdf-744100d76ad6
       Rule: score=-INFINITY (id:location-vha-c6933988-9e5c-419e-8fdf-744100d76ad6-rule)
         Expression: #uname ne vsanqa11 (id:location-vha-c6933988-9e5c-419e-8fdf-744100d76ad6-rule-expr)
Ordering Constraints:
Colocation Constraints:


Then I tried the following:

pcs -f $CLUSTER_CREATE_LOG constraint location vha-$uuid rule score=-INFINITY \#uname ne vsanqa11
pcs -f $CLUSTER_CREATE_LOG constraint location vha-$uuid rule score=-INFINITY \#uname ne vsanqa12

but running these two commands did not help either: the expressions were
added to two separate rules. With two independent -INFINITY rules, every node
fails at least one of them (vsanqa11 matches "ne vsanqa12" and vice versa),
so the resource cannot run anywhere.

[root@vsanqa11 ~]# pcs constraint show --full
Location Constraints:
   Resource: ms-c6933988-9e5c-419e-8fdf-744100d76ad6
     Constraint: location-vha-c6933988-9e5c-419e-8fdf-744100d76ad6-1
       Rule: score=-INFINITY (id:location-vha-c6933988-9e5c-419e-8fdf-744100d76ad6-1-rule)
         Expression: #uname ne vsanqa12 (id:location-vha-c6933988-9e5c-419e-8fdf-744100d76ad6-1-rule-expr)
     Constraint: location-vha-c6933988-9e5c-419e-8fdf-744100d76ad6
       Rule: score=-INFINITY (id:location-vha-c6933988-9e5c-419e-8fdf-744100d76ad6-rule)
         Expression: #uname ne vsanqa11 (id:location-vha-c6933988-9e5c-419e-8fdf-744100d76ad6-rule-expr)
Ordering Constraints:
Colocation Constraints:


I also tried using the multi-state resource name:

[root@vsanqa11 ~]# pcs constraint location ms-c6933988-9e5c-419e-8fdf-744100d76ad6 rule score=-INFINITY \#uname ne vsanqa11
Error: 'ms-c6933988-9e5c-419e-8fdf-744100d76ad6' is not a resource


Can anyone tell me the correct command for this?

Which version of pcs are you using (and on what distribution)? This has been fixed upstream. (Below is a test from my system using the upstream pcs.)

[root@rh7-1 pcs]# pcs constraint location D1 rule score=-INFINITY \#uname ne vsanqa11 and \#uname ne vsanqa12
[root@rh7-1 pcs]# pcs constraint
Location Constraints:
  Resource: D1
    Constraint: location-D1
      Rule: score=-INFINITY boolean-op=and
        Expression: #uname ne vsanqa11
        Expression: #uname ne vsanqa12
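
If upgrading pcs is not an option, the same single-rule constraint can be
loaded directly into the CIB with cibadmin. A rough sketch (untested; the
ids are illustrative, pcs would normally generate them):

cat > /tmp/ms-nodes-constraint.xml <<EOF
<rsc_location id="ms-$uuid-nodes" rsc="ms-$uuid">
  <rule id="ms-$uuid-nodes-rule" score="-INFINITY" boolean-op="and">
    <expression id="ms-$uuid-nodes-expr-1" attribute="#uname" operation="ne" value="vsanqa11"/>
    <expression id="ms-$uuid-nodes-expr-2" attribute="#uname" operation="ne" value="vsanqa12"/>
  </rule>
</rsc_location>
EOF
cibadmin -C -o constraints -x /tmp/ms-nodes-constraint.xml

This is essentially the XML that the crmsh command at the top of the thread
generates, so both expressions end up under one rule.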

Thanks,
Chris

On Tue, May 27, 2014 at 11:01 AM, Andrew Beekhof <and...@beekhof.net> wrote:


    On 27 May 2014, at 2:37 pm, K Mehta <kiranmehta1...@gmail.com> wrote:

     > So is globally-unique=false correct in my case?

    yes
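
    For what it's worth, globally-unique is a meta attribute of the
    clone/master, so it is normally set when the master resource is created.
    A sketch only (exact pcs syntax varies by version):

        pcs resource master ms-${uuid} vha-${uuid} master-max=1 globally-unique=false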

     >
     >
     > On Tue, May 27, 2014 at 5:30 AM, Andrew Beekhof <and...@beekhof.net> wrote:
     >
     > On 26 May 2014, at 9:56 pm, K Mehta <kiranmehta1...@gmail.com> wrote:
     >
     > > What I understand from "globally-unique=false" is as follows: the
     > > agent handling the resource does exactly the same processing on all
     > > nodes. For this resource, the agent on every node uses exactly the
     > > same resources (files, processes, parameters to agent entry points,
     > > etc.).
     > >
     > > In the case of my resource, the agent on every node executes the same
     > > "command" to find a score. The driver present on all nodes makes sure
     > > that the node to be promoted is the one that reports the highest
     > > score as the output of that "command". The score is reported to the
     > > CRM using /usr/sbin/crm_master -Q -l reboot -v $score in the monitor
     > > entry point. Until this score is reported, the agent on the other
     > > node just deletes the score using /usr/sbin/crm_master -Q -l reboot -D
     > > in its monitor entry point.
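     > >
     > > A minimal sketch of that monitor logic (get_score is a stand-in for
     > > the agent-specific "command" and is hypothetical; the crm_master
     > > calls are the ones quoted above):
     > >
     > >     score=$(get_score)
     > >     if [ -n "$score" ]; then
     > >         /usr/sbin/crm_master -Q -l reboot -v "$score"
     > >     else
     > >         /usr/sbin/crm_master -Q -l reboot -D
     > >     fi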
     > >
     > >
     > >
     > >
     > > I want to make sure that the resource does not run on nodes other
     > > than $node1 and $node2. To achieve this I use the following commands:
     > >
     > >     pcs -f $CLUSTER_CREATE_LOG constraint location vha-${uuid} prefers $node1
     > >     pcs -f $CLUSTER_CREATE_LOG constraint location vha-${uuid} prefers $node2
     > >     pcs -f $CLUSTER_CREATE_LOG constraint location ms-${uuid} prefers $node1
     > >     pcs -f $CLUSTER_CREATE_LOG constraint location ms-${uuid} prefers $node2
     > >
     > > Any issue here?
     >
     > Perhaps this is not intuitive, but you'd need to specify 'avoids'
     > constraints for the nodes it must not run on. 'prefers' only says
     > that, of all the available nodes, this one is the best.
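     >
     > For example, to pin it to $node1 and $node2 you would ban the
     > remaining nodes explicitly; a sketch, assuming vsanqa13 and vsanqa14
     > are the other cluster nodes:
     >
     >     pcs constraint location vha-${uuid} avoids vsanqa13 vsanqa14
     >     pcs constraint location ms-${uuid} avoids vsanqa13 vsanqa14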
     >
     > >
     > > Regards,
     > >  Kiran
     > >
     > >
     > >
     > > On Mon, May 26, 2014 at 8:54 AM, Andrew Beekhof <and...@beekhof.net> wrote:
     > >
     > > On 22 May 2014, at 11:20 pm, K Mehta <kiranmehta1...@gmail.com> wrote:
     > >
     > > > > May 13 01:38:36 vsanqa28 pengine[4310]:   notice: LogActions: Promote vha-924bf029-93a2-41a0-adcf-f1c1a42956e5:0#011(Slave -> Master vsanqa28)
     > > > > May 13 01:38:36 vsanqa28 pengine[4310]:   notice: LogActions: Demote  vha-924bf029-93a2-41a0-adcf-f1c1a42956e5:1#011(Master -> Slave vsanqa27)  <<<<< Why did this happen ?
     > > >
     > > > Attach the file mentioned on the next line and we might be able to find out.
     > > >
     > >
     > > Quick question: do you understand what globally-unique=false means,
     > > and are you sure you want it? If the answer is 'yes and yes', are
     > > you sure that your agent is using crm_master correctly?
     > >
     > > If I run 'tools/crm_simulate -Sx ~/Downloads/pe-input-818.bz2 -s | grep vha-924bf029-93a2-41a0-adcf-f1c1a42956e5', I see:
     > >
     > > vha-924bf029-93a2-41a0-adcf-f1c1a42956e5:0 promotion score on vsanqa28: INFINITY
     > > vha-924bf029-93a2-41a0-adcf-f1c1a42956e5:1 promotion score on vsanqa27: 2
     > >
     > >
     > > Although much of the 'INFINITY' is probably from:
     > >
     > >       <rsc_location id="location-ms-924bf029-93a2-41a0-adcf-f1c1a42956e5-vsanqa28-INFINITY" node="vsanqa28" rsc="ms-924bf029-93a2-41a0-adcf-f1c1a42956e5" score="INFINITY"/>
     > >
     > > This is somewhat odd to include for a clone/master resource.
     > >


_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
