Hi all,

I measured the performance of Pacemaker with the following combination of versions:

Pacemaker-1.1.11.rc1
libqb-0.16.0
corosync-2.3.2
All nodes are KVM virtual machines. After starting 14 nodes, I forcibly stopped the vm01 node from outside the guest, using "virsh destroy vm01". Then, in addition to the forcibly stopped node, other nodes were also separated from the cluster, and corosync output "Retransmit List:" log messages in large quantities.

Why are nodes on which no failure occurred marked as "lost"? Please advise if there is a problem somewhere in my configuration.

I have attached a report from when the problem occurred:
https://drive.google.com/file/d/0BwMFJItoO-fVMkFWWWlQQldsSFU/edit?usp=sharing

Regards,
Yusuke

--
----------------------------------------
METRO SYSTEMS CO., LTD

Yusuke Iida
Mail: yusk.i...@gmail.com
----------------------------------------
_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org