>> On Fri, Aug 13, 2010 at 01:44:28PM +0000, Chris Picton wrote:
>> I have a drbd backed mysql server which has the following resources:
>>
>> drbd0 -> lvm_data -> mount_data
>> drbd1 -> lvm_logs -> mount_logs
>> mysqld
>> floatingip
>>
>> I would like the drbd based filesystems to start up in parallel. Once
>> they have started, start mysql and the ip address. Obviously the
>> reverse should happen on standby or shutdown.
>>
>> all_drbd inf: ms_drbd1:Master ms_drbd0:Master
>
> You want both on one node?
Yes - I need both masters on the same node. They are different filesystems on
different physical disks, one for the database binlogs, one for the database
data.

>> if I do a crm resource migrate MySQL, the active node shuts down the
>> MySQL resource, but never releases the drbd masters or dependant
>> resources
>
> The only thing that happened is mysql stop? Did you check logs for
> errors?

Yes, MySQL stopped, and the dependent MailAlert and FloatingIP. The logs
indicate there was no attempt to stop the other resources at all.

>>  Master/Slave Set: ms_drbd0
>>      Masters: [ chris-test-02.ecntelecoms.za.net ]
>>      Slaves: [ chris-test-01.ecntelecoms.za.net ]
>>  Master/Slave Set: ms_drbd1
>>      Masters: [ chris-test-02.ecntelecoms.za.net ]
>>      Slaves: [ chris-test-01.ecntelecoms.za.net ]
>>  Resource Group: group_mysql
>>      MySQL (ocf::ecn:MySQL.ocf): Stopped
>>      MailAlert (ocf::heartbeat:ECNAlert): Stopped
>>      FloatingIP (ocf::heartbeat:IPaddr2): Stopped
>>  Resource Group: group_data
>>      lvm_data (ocf::heartbeat:LVM): Started chris-test-02.ecntelecoms.za.net
>>      mount_data (ocf::heartbeat:Filesystem): Started chris-test-02.ecntelecoms.za.net
>>  Resource Group: group_logs
>>      lvm_logs (ocf::heartbeat:LVM): Started chris-test-02.ecntelecoms.za.net
>>      mount_logs (ocf::heartbeat:Filesystem): Started chris-test-02.ecntelecoms.za.net
>>  Clone Set: STONITH-clone
>>      Started: [ chris-test-02.ecntelecoms.za.net chris-test-01.ecntelecoms.za.net ]
>>
>> Is there a better/more concise/more correct way of specifying the
>> colocations and orderings?
>
> The constraints look OK.
>
> Something like this should also work (example for the logs db):
>
> order order_logs inf: ms_drbd1:promote group_logs group_mysql FloatingIP
> collocation col_logs inf: ms_drbd1:Master group_logs group_mysql FloatingIP

I have tried that, and the resources start up in the correct order and
together on the same node. However, if I standby the 'master' node, I see
the following in the logs a few seconds apart:

info: rsc:FloatingIP:150: stop
info: rsc:drbd0:0:151: demote
info: rsc:drbd1:0:152: demote
info: rsc:MailAlert:153: stop
info: rsc:MySQL:154: stop

I.e., it is trying to demote the drbd resources before attempting to stop
MySQL, which can't be done, as the drbd devices are still mounted. Do I also
need to specify the demote ordering, as well as the Slave colocations (to say
MySQL cannot run where drbd is a Slave)?

I had added:

colocation col_data inf: ms_drbd0:Master group_data group_mysql FloatingIP
colocation col_logs inf: ms_drbd1:Master group_logs group_mysql FloatingIP
order order_data inf: ms_drbd0:promote group_data group_mysql FloatingIP
order order_logs inf: ms_drbd1:promote group_logs group_mysql FloatingIP
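
A minimal, untested sketch of the same dependencies expressed as pairwise
constraints rather than resource sets, which sidesteps the set-ordering
ambiguity. The constraint names below are placeholders; the resource names
are the ones from the config above, and FloatingIP is assumed to stay inside
group_mysql so it needs no constraints of its own:

# Placeholder constraint names; resource names taken from the thread.
colocation col_data_on_drbd0 inf: group_data ms_drbd0:Master
colocation col_logs_on_drbd1 inf: group_logs ms_drbd1:Master
colocation col_mysql_on_data inf: group_mysql group_data
colocation col_mysql_on_logs inf: group_mysql group_logs
order ord_data_fs    inf: ms_drbd0:promote group_data:start
order ord_logs_fs    inf: ms_drbd1:promote group_logs:start
order ord_mysql_data inf: group_data:start group_mysql:start
order ord_mysql_logs inf: group_logs:start group_mysql:start

If the default symmetrical ordering applies here as I expect, the inverse
sequence (stop group_mysql, then stop the filesystem groups, then demote the
DRBD masters) should be implied on standby or shutdown without separate
demote constraints, and the Master colocations already keep everything off a
node where drbd is only a Slave. That is an assumption, though, so it would
be worth checking the computed transition with ptest before relying on it.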