Hi All,

The cluster configuration behaves correctly once we add a redundant resource and 
change the constraints as follows.
However, we do not want to use this redundant setting.

(snip)
      <primitive class="ocf" id="vipCheck" provider="pacemaker" type="Dummy">
        <instance_attributes id="vipCheck-instance_attributes">
        </instance_attributes>
        <operations>
          <op id="vipCheck-start-0s" interval="0s" name="start" 
on-fail="restart" start-delay="4s" timeout="90s"/>
        </operations>
      </primitive>
      <primitive class="ocf" id="vipCheck2" provider="heartbeat" type="Dummy">
        <instance_attributes id="vipCheck2-instance_attributes">
        </instance_attributes>
        <operations>
          <op id="vipCheck2-start-0s" interval="0s" name="start" 
on-fail="restart" start-delay="4s" timeout="90s"/>
        </operations>
      </primitive>
(snip)
      <rsc_colocation id="rsc_colocation-7" rsc="vipCheck" score="INFINITY" 
with-rsc="msPostgresql" with-rsc-role="Master"/>
      <rsc_colocation id="rsc_colocation-8" rsc="vipCheck" score="INFINITY" 
with-rsc="vipCheck2"/>
      <rsc_order first="vipCheck" first-action="start" id="rsc_order-8" 
score="INFINITY" then="vipCheck2" then-action="start"/>
      <rsc_order first="vipCheck2" first-action="start" id="rsc_order-7" 
score="INFINITY" then="msPostgresql" then-action="promote"/>
(snip)
============
Last updated: Mon Jul  2 18:35:19 2012
Stack: Heartbeat
Current DC: rh62-test1 (90e5d5b7-d217-4386-a03d-069111772b54) - partition with 
quorum
Version: 1.0.12-unknown
1 Nodes configured, unknown expected votes
3 Resources configured.
============

Online: [ rh62-test1 ]

 Master/Slave Set: msPostgresql
     Slaves: [ rh62-test1 ]
     Stopped: [ postgresql:1 ]

Migration summary:
* Node rh62-test1: 
   vipCheck: migration-threshold=1 fail-count=1000000

Failed actions:
    vipCheck_start_0 (node=rh62-test1, call=5, rc=1, status=complete): unknown 
error
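
For comparison, the non-redundant form we would prefer (with the colocation 
reversed as Phillip suggested, so that the Master role depends on vipCheck 
rather than the other way around) would look something like this. This is an 
untested sketch; the rsc-role attribute is the standard CIB way to express the 
role on the dependent side:

      <rsc_colocation id="rsc_colocation-7" rsc="msPostgresql" rsc-role="Master" 
score="INFINITY" with-rsc="vipCheck"/>
      <rsc_order first="vipCheck" first-action="start" id="rsc_order-7" 
score="INFINITY" then="msPostgresql" then-action="promote"/>

In a colocation constraint the placement of rsc follows with-rsc, so making 
msPostgresql the dependent resource should keep the Master role off a node 
where vipCheck cannot start.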


Best Regards,
Hideo Yamauchi.

--- On Mon, 2012/7/2, renayama19661...@ybb.ne.jp <renayama19661...@ybb.ne.jp> 
wrote:

> Hi Phillip,
> 
> Thank you for comment.
> However, the result was the same even when I used a group resource.
> 
> (snip)
>       <group id="GrpvipCheck">
>         <primitive class="ocf" id="vipCheck" provider="pacemaker" type="Dummy">
>           <instance_attributes id="vipCheck-instance_attributes">
>           </instance_attributes>
>           <operations>
>             <op id="vipCheck-start-0s" interval="0s" name="start" 
> on-fail="restart" start-delay="4s" timeout="90s"/>
>           </operations>
>         </primitive>
>       </group>
> (snip)
>       <rsc_order first="GrpvipCheck" first-action="start" id="rsc_order-7" 
> score="INFINITY" then="msPostgresql" then-action="promote"/>
> (snip)
> 
> ============
> Last updated: Mon Jul  2 17:56:27 2012
> Stack: Heartbeat
> Current DC: rh62-test1 (6d534d4e-a3a1-4a92-86c7-eadf6c2f7570) - partition 
> with quorum
> Version: 1.0.12-unknown
> 1 Nodes configured, unknown expected votes
> 2 Resources configured.
> ============
> 
> Online: [ rh62-test1 ]
> 
>  Master/Slave Set: msPostgresql
>      Masters: [ rh62-test1 ]
>      Stopped: [ postgresql:1 ]
> 
> Migration summary:
> * Node rh62-test1: 
>    vipCheck: migration-threshold=1 fail-count=1000000
> 
> Failed actions:
>     vipCheck_start_0 (node=rh62-test1, call=4, rc=1, status=complete): 
> unknown error
> 
> Best Regards,
> Hideo Yamauchi.
> 
> --- On Fri, 2012/6/29, Phillip Frost <p...@macprofessionals.com> wrote:
> 
> > 
> > On Jun 28, 2012, at 10:26 PM, renayama19661...@ybb.ne.jp wrote:
> > 
> > >> We set the order constraint as follows.
> > > 
> > >      <rsc_colocation id="rsc_colocation-7" rsc="vipCheck" 
> > >score="INFINITY" with-rsc="msPostgresql" with-rsc-role="Master"/>
> > >      <rsc_order first="vipCheck" first-action="start" id="rsc_order-7" 
> > >score="INFINITY" then="msPostgresql" then-action="promote"/>
> > > 
> > >> However, the promote was carried out even though the primitive 
> > >> resource failed to start.
> > >> 
> > >> Online: [ rh62-test1 ]
> > >> 
> > >> Master/Slave Set: msPostgresql
> > >>      Masters: [ rh62-test1 ]
> > >>      Stopped: [ postgresql:1 ]
> > >> 
> > >> Migration summary:
> > >> * Node rh62-test1: 
> > >>    vipCheck: migration-threshold=1 fail-count=1000000
> > >> 
> > >> Failed actions:
> > >>     vipCheck_start_0 (node=rh62-test1, call=4, rc=1, status=complete): 
> > >>unknown error
> > 
> > What happens if you reverse the order of the colocation constraint? 
> > You've told pacemaker to decide where to place msPostgresql:Master first, 
> > and not to run vipCheck if that cannot run, yet to start them in the 
> > opposite order. I'm not sure an order constraint will prevent one resource 
> > from running if another fails to start, but a colocation constraint will, 
> > if you get it in the right order.
> > 
> > You could also use a resource group, which combines colocation and order 
> > constraints in the order you'd expect.
> > 
> 

_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
