On Mon, Jul 30, 2012 at 2:08 PM, <renayama19661...@ybb.ne.jp> wrote:
> Hi Andrew,
>
> Thank you for the comments.
>
>> > Online: [ drbd1 drbd2 ]
>> >
>> > Master/Slave Set: msDrPostgreSQLDB
>> >     Masters: [ drbd2 ]
>> >     Slaves: [ drbd1 ]  -------------------------------> Started and in Slave status.
>>
>> Yep, looks like a bug.  I'll follow up on the bugzilla.
>
> I talked with David on Bugzilla.
>
> And I confirmed that both of the following methods worked well.
>
> The first method)
>  * Set a colocation between clnPingd and msDrPostgreSQLDB.
>
> The second method)
>  * Set the interleave option on clnPingd.
>
> Do my two methods include a mistake?
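For reference, the two methods described above could look roughly like this in the CIB XML. This is only a sketch based on the resource names quoted in this thread; the constraint and meta-attribute IDs and the primitive definition are assumptions, not the actual configuration:

    <!-- First method (sketch): colocate the whole msDrPostgreSQLDB set with
         clnPingd (no rsc-role="Master"), so that Slave instances also need a
         running pingd on their node -->
    <rsc_colocation id="rsc_colocation-db-with-pingd" rsc="msDrPostgreSQLDB"
                    score="INFINITY" with-rsc="clnPingd"/>

    <!-- Second method (sketch): set interleave="true" on the clnPingd clone,
         so that constraints against it are evaluated per node rather than for
         the clone as a whole; the primitive shown here is assumed -->
    <clone id="clnPingd">
      <meta_attributes id="clnPingd-meta_attributes">
        <nvpair id="clnPingd-interleave" name="interleave" value="true"/>
      </meta_attributes>
      <primitive id="prmPingd" class="ocf" provider="pacemaker" type="pingd"/>
    </clone>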
No. Looking closer, the initial constraint says only that the Master must be
on a node running clnPingd. Slaves are free to run anywhere :)

> If you suspect a bug, please write a comment in Bugzilla.
>
> Many Thanks,
> Hideo Yamauchi.
>
>
> --- On Mon, 2012/7/30, Andrew Beekhof <and...@beekhof.net> wrote:
>
>> On Mon, Jul 23, 2012 at 9:43 AM, <renayama19661...@ybb.ne.jp> wrote:
>> > Hi David,
>> >
>> > Thank you for the comments.
>> >
>> >> http://www.clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/ch06s03s02.html
>> >
>> > I confirmed it with score="INFINITY".
>> >
>> > (snip)
>> > <rsc_colocation id="rsc_colocation-1" rsc="msDrPostgreSQLDB"
>> >     rsc-role="Master" score="INFINITY" with-rsc="clnPingd"/>
>> > <rsc_order first="clnPingd" id="rsc_order-1" score="INFINITY"
>> >     symmetrical="false" then="msDrPostgreSQLDB"/>
>> > (snip)
>> >
>> >
>> > When only one node was started, the start was controlled well.
>> >
>> > [root@drbd1 ~]# crm_mon -1 -f
>> > ============
>> > Last updated: Mon Jul 23 08:24:38 2012
>> > Stack: Heartbeat
>> > Current DC: NONE
>> > 1 Nodes configured, unknown expected votes
>> > 2 Resources configured.
>> > ============
>> >
>> > Online: [ drbd1 ]
>> >
>> >
>> > Migration summary:
>> > * Node drbd1:
>> >     prmPingd:0: migration-threshold=1 fail-count=1000000
>> >
>> > Failed actions:
>> >     prmPingd:0_start_0 (node=drbd1, call=4, rc=1, status=complete): unknown error
>> >
>> >
>> > However, the problem occurs when I send the cib after the Slave node has also been started.
>> >
>> > ============
>> > Last updated: Mon Jul 23 08:35:41 2012
>> > Stack: Heartbeat
>> > Current DC: drbd2 (6d4b04de-12c0-499a-b388-febba50eaec2) - partition with quorum
>> > Version: 1.0.12-unknown
>> > 2 Nodes configured, unknown expected votes
>> > 2 Resources configured.
>> > ============
>> >
>> > Online: [ drbd1 drbd2 ]
>> >
>> > Master/Slave Set: msDrPostgreSQLDB
>> >     Masters: [ drbd2 ]
>> >     Slaves: [ drbd1 ]  -------------------------------> Started and in Slave status.
>>
>> Yep, looks like a bug.  I'll follow up on the bugzilla.
>>
>> > Clone Set: clnPingd
>> >     Started: [ drbd2 ]
>> >     Stopped: [ prmPingd:0 ]
>> >
>> > Migration summary:
>> > * Node drbd1:
>> >     prmPingd:0: migration-threshold=1 fail-count=1000000
>> > * Node drbd2:
>> >
>> > Failed actions:
>> >     prmPingd:0_start_0 (node=drbd1, call=4, rc=1, status=complete): unknown error
>> >
>> > Best Regards,
>> > Hideo Yamauchi.
>> >
>> >
>> >
>> > --- On Sat, 2012/7/21, David Vossel <dvos...@redhat.com> wrote:
>> >
>> >>
>> >>
>> >> ----- Original Message -----
>> >> > From: renayama19661...@ybb.ne.jp
>> >> > To: "PaceMaker-ML" <pacemaker@oss.clusterlabs.org>
>> >> > Sent: Friday, July 20, 2012 1:39:51 AM
>> >> > Subject: [Pacemaker] [Problem] Order which combined a master with clone is invalid.
>> >> >
>> >> > Hi All,
>> >> >
>> >> > We confirmed the behaviour of an order constraint that combines a master with a clone.
>> >> > We tested it with a very simple combination.
>> >> >
>> >> > Step 1) We change the Dummy resource so that it produces a start error.
>> >> >
>> >> > (snip)
>> >> > dummy_start() {
>> >> >     return $OCF_ERR_GENERIC
>> >> >     dummy_monitor
>> >> > (snip)
>> >> >
>> >> > Step 2) We start one node and send the cib.
>> >> >
>> >> >
>> >> > However, the master resource is started even though the start of the clone fails,
>> >> > and it ends up in the Slave state.
>> >>
>> >> Not a bug. You are using advisory ordering in your order constraint.
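To illustrate the point about advisory ordering, the two forms of the order constraint can be sketched as follows. The IDs here are made up for the example; only the score values are the point, and this is an illustration rather than the configuration from this thread:

    <!-- Advisory ordering (sketch): score="0" makes the order a preference
         only, so msDrPostgreSQLDB can still be started even when clnPingd
         never started successfully -->
    <rsc_order id="order-advisory" score="0"
               first="clnPingd" then="msDrPostgreSQLDB"/>

    <!-- Mandatory ordering (sketch): score="INFINITY" makes msDrPostgreSQLDB
         wait for clnPingd to start; note that without interleave this is
         evaluated for the clone as a whole, not per node -->
    <rsc_order id="order-mandatory" score="INFINITY" symmetrical="false"
               first="clnPingd" then="msDrPostgreSQLDB"/>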
>> >>
>> >> http://www.clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/ch06s03s02.html
>> >>
>> >> >
>> >> > ============
>> >> > Last updated: Fri Jul 20 15:36:10 2012
>> >> > Stack: Heartbeat
>> >> > Current DC: NONE
>> >> > 1 Nodes configured, unknown expected votes
>> >> > 2 Resources configured.
>> >> > ============
>> >> >
>> >> > Online: [ drbd1 ]
>> >> >
>> >> > Master/Slave Set: msDrPostgreSQLDB
>> >> >     Slaves: [ drbd1 ]
>> >> >     Stopped: [ prmDrPostgreSQLDB:1 ]
>> >> >
>> >> > Migration summary:
>> >> > * Node drbd1:
>> >> >     prmPingd:0: migration-threshold=1 fail-count=1000000
>> >> >
>> >> > Failed actions:
>> >> >     prmPingd:0_start_0 (node=drbd1, call=4, rc=1, status=complete): unknown error
>> >> >
>> >> >
>> >> > We also confirmed it in Pacemaker 1.1.7, just to make sure.
>> >> > However, the problem was the same.
>> >> >
>> >> > ============
>> >> > Last updated: Fri Jul 20 22:53:22 2012
>> >> > Last change: Fri Jul 20 22:53:09 2012 via cibadmin on fedora17-1
>> >> > Stack: corosync
>> >> > Current DC: fedora17-1 (1) - partition with quorum
>> >> > Version: 1.1.7-e6922a70f742d3eab63d7e22f3ea0408b54b5dae
>> >> > 1 Nodes configured, unknown expected votes
>> >> > 4 Resources configured.
>> >> > ============
>> >> >
>> >> > Online: [ fedora17-1 ]
>> >> >
>> >> > Master/Slave Set: msDrPostgreSQLDB [prmDrPostgreSQLDB]
>> >> >     Slaves: [ fedora17-1 ]
>> >> >     Stopped: [ prmDrPostgreSQLDB:1 ]
>> >> >
>> >> > Migration summary:
>> >> > * Node fedora17-1:
>> >> >     prmPingd:0: migration-threshold=1 fail-count=1000000
>> >> >
>> >> > Failed actions:
>> >> >     prmPingd:0_start_0 (node=fedora17-1, call=14, rc=1, status=complete): unknown error
>> >> >
>> >> >
>> >> > I think that this problem is similar to a bug that I reported before.
>> >> >
>> >> > * http://bugs.clusterlabs.org/show_bug.cgi?id=5075
>> >> >
>> >> > Is this problem a bug?
>> >> > Or can it be avoided by configuration?
>> >>
>> >> see advisory ordering
>> >>
>> >> >
>> >> > Best Regards,
>> >> > Hideo Yamauchi.

_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org