Hi,

I am using the following installation under CentOS:

corosync-1.4.1-7.el6_3.1.x86_64
resource-agents-3.9.2-12.el6.x86_64

and the following configuration for a Master/Slave MySQL resource:

primitive mysqld ocf:heartbeat:mysql \
        params binary="/usr/bin/mysqld_safe" config="/etc/my.cnf" \
               socket="/var/lib/mysql/mysql.sock" datadir="/var/lib/mysql" \
               user="mysql" replication_user="root" \
               replication_passwd="testtest" \
        op monitor interval="5s" role="Slave" timeout="31s" \
        op monitor interval="6s" role="Master" timeout="30s"
ms ms_mysql mysqld \
        meta master-max="1" master-node-max="1" clone-max="2" \
             clone-node-max="1" notify="true"
property $id="cib-bootstrap-options" \
        dc-version="1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14" \
        cluster-infrastructure="openais" \
        expected-quorum-votes="2" \
        no-quorum-policy="ignore" \
        stonith-enabled="false" \
        last-lrm-refresh="1359026356" \
        start-failure-is-fatal="false" \
        cluster-recheck-interval="60s"
rsc_defaults $id="rsc-options" \
        failure-timeout="50s"

With only one node online (the Master; the problem also occurs with a
Slave online, but for simplicity I've left only the Master online),

I run into the following problem:
- Stopping the mysql process once results in corosync restarting mysql
and promoting it to Master again.
- Stopping the mysql process a second time results in nothing: the
failure is not detected, corosync takes no action and still sees the
node as Master and mysql as running.
- The monitor operation is no longer running after the first failure,
as there are no further log entries of the type: INFO: MySQL monitor
succeeded (master).
- Changing something in the configuration makes corosync immediately
detect that mysql is not running and promote it again. The monitor
operation then runs until the next failure, at which point the same
problem occurs.
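For reference, these are roughly the commands I use to trigger and
observe the failure (the exact way of stopping mysql shouldn't matter):

        # stop the mysql process (first time: detected, restarted, promoted)
        killall mysqld
        # one-shot cluster status including fail counts
        crm_mon -1 -f
        # look for the monitor log entries mentioned above
        grep "MySQL monitor" /var/log/messages

After the second kill, crm_mon still shows ms_mysql as Master and the
grep shows no new monitor entries.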

If you need more information, let me know. I can also attach the
relevant log entries from the messages file.

Thanks for now,
Radu.

-- 
View this message in context: 
http://old.nabble.com/Master-Slave---Master-node-not-monitored-after-a-failure-tp34939865p34939865.html
Sent from the Linux-HA mailing list archive at Nabble.com.

_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
