I have it set to ignore already, because I do resource-level fencing with a custom app, which is run/triggered by DRBD when it loses its connection.
>
property $id="cib-bootstrap-options" \
   dc-version="1.0.6-f709c638237cdff7556cb6ab615f32826c0f8c06" \
   cluster-infrastructure="Heartbeat" \
   stonith-enabled="false" \
   no-quorum-policy="ignore" \
   default-resource-stickiness="1000" \
   last-lrm-refresh="1262007932"
>
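For context, the custom app is wired in as a DRBD fence-peer handler. A rough sketch of the relevant drbd.conf bits (the script paths are just placeholders, not my actual paths):

resource r0 {
   disk {
      # on replication loss, outdate the peer's data rather than shooting the node
      fencing resource-only;
   }
   handlers {
      # called by DRBD when it needs to fence the peer (placeholder path)
      fence-peer "/usr/local/sbin/drbd-fence-app";
      # undo the fencing once the peer has resynced (placeholder path)
      after-resync-target "/usr/local/sbin/drbd-unfence-app";
   }
}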

I have just noticed an error with ms_drbd_r0.
>
ms ms_drbd_r0 drbd_r0 \
   meta notify="true" master-max="2" inteleave="true"
>
It should be interleave, not inteleave, correct? :-)
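The corrected stanza would read:

ms ms_drbd_r0 drbd_r0 \
   meta notify="true" master-max="2" interleave="true"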
If that doesn't help, I'll also try the master-node-max and/or clone(-node)-max meta options.

Will let you know if it works out.

Regards,
M.

hj lee wrote:
Hi,

Maybe this is related to no-quorum-policy. What is your no-quorum-policy? You can check it in the output of the "crm configure show" command. If it does not appear there, it is "stop" by default. If that is your case, please set it to ignore with "crm configure property no-quorum-policy=ignore" and try again.
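For example, something like:

# check the current setting (no output means it is still the default, "stop")
crm configure show | grep no-quorum-policy
# set it to ignore
crm configure property no-quorum-policy=ignore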

Thanks
hj

On Tue, Dec 29, 2009 at 6:43 AM, Martin Gombač <mar...@isg.si> wrote:

    Hi guys once again :-)

    my resource Hosting on top of ms_drbd_r0 keeps restarting even when
    the changes aren't local to the node.
    By that I mean Hosting gets restarted on node1 even if I restart,
    or outdate and demote, node2.

    My constraints:
    colocation Hosting_on_ms_drbd_r0 inf: Hosting ms_drbd_r0:Master
    order ms_drbd_r0_b4_Hosting inf: ms_drbd_r0:promote Hosting:start

    My resources:
    primitive Hosting ocf:heartbeat:Xen \
      params xmfile="/etc/xen/Hosting.cfg" \
      meta target-role="Started" allow-migrate="true" is-managed="true" \
      op monitor interval="120s" timeout="300s"
    primitive drbd_r0 ocf:linbit:drbd \
      params drbd_resource="r0" \
      op monitor interval="15s" role="Master" timeout="30s" \
      op monitor interval="30s" role="Slave" timeout="30"
    ms ms_drbd_r0 drbd_r0 \
      meta notify="true" master-max="2" inteleave="true"

    I use a location constraint to pin it:
    location cli-prefer-Hosting Hosting \
      rule $id="cli-prefer-rule-Hosting" inf: #uname eq ibm1
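    This cli-prefer-* constraint is, if I remember right, the one that
    "crm resource migrate Hosting ibm1" leaves behind; it could be removed
    again with something like:

      crm resource unmigrate Hosting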

    Example:
    Shutting down heartbeat on node2/ibm2, while the resource is running
    perfectly fine on node1, makes resource Hosting restart on node1.

    Dec 29 14:21:28 ibm1 pengine: [3716]: notice: native_print: Hosting (ocf::heartbeat:Xen): Started ibm1
    Dec 29 14:21:28 ibm1 pengine: [3716]: notice: clone_print: Master/Slave Set: ms_drbd_r0
    Dec 29 14:21:28 ibm1 pengine: [3716]: notice: short_print: Masters: [ ibm1 ibm2 ]
    Dec 29 14:21:28 ibm1 pengine: [3716]: WARN: native_color: Resource drbd_r0:1 cannot run anywhere
    Dec 29 14:21:28 ibm1 pengine: [3716]: info: master_color: Promoting drbd_r0:0 (Master ibm1)
    Dec 29 14:21:28 ibm1 pengine: [3716]: info: master_color: ms_drbd_r0: Promoted 1 instances of a possible 2 to master
    Dec 29 14:21:28 ibm1 pengine: [3716]: info: master_color: Promoting drbd_r0:0 (Master ibm1)
    Dec 29 14:21:28 ibm1 pengine: [3716]: info: master_color: ms_drbd_r0: Promoted 1 instances of a possible 2 to master
    Dec 29 14:21:28 ibm1 pengine: [3716]: info: stage6: Scheduling Node ibm2 for shutdown
    Dec 29 14:21:28 ibm1 pengine: [3716]: notice: LogActions: Restart resource Hosting (Started ibm1)
    Dec 29 14:21:28 ibm1 pengine: [3716]: notice: LogActions: Leave resource drbd_r0:0 (Master ibm1)
    Dec 29 14:21:28 ibm1 pengine: [3716]: notice: LogActions: Demote drbd_r0:1 (Master -> Stopped ibm2)
    Dec 29 14:21:28 ibm1 pengine: [3716]: notice: LogActions: Stop resource drbd_r0:1 (ibm2)
    ...
    Dec 29 14:21:28 ibm1 crmd: [3713]: info: te_rsc_command: Initiating action 8: stop Hosting_stop_0 on ibm1 (local)
    Dec 29 14:21:28 ibm1 crmd: [3713]: info: do_lrm_rsc_op: Performing key=8:4:0:13282265-c62f-4341-9fa5-363cd30ddd3e op=Hosting_stop_0 )
    Dec 29 14:21:28 ibm1 lrmd: [3710]: info: rsc:Hosting:19: stop
    ...
    Dec 29 14:21:34 ibm1 crmd: [3713]: info: match_graph_event: Action Hosting_stop_0 (8) confirmed on ibm1..si (rc=0)
    Dec 29 14:21:34 ibm1 crmd: [3713]: info: te_rsc_command: Initiating action 9: start Hosting_start_0 on ibm1..si (local)
    Dec 29 14:21:34 ibm1 lrmd: [3710]: info: rsc:Hosting:21: start
    ...


    Please advise me on how to configure the ordering and colocation
    constraints so that the Hosting resource does not get restarted every
    time something happens to the backup DRBD resource on the second node.


    Thank you.
    Martin

--
Dream with longterm vision!
kerdosa
