Oh no, sorry, my mistake, it doesn't work... :(

Ivan

________________________________
From: Ivan Coronado [mailto:icoron...@epcge.com]
Sent: Friday, May 14, 2010 9:02
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] two nodes fenced when drbd link fails

Thanks! It works!!!

Ivan

________________________________

From: Vadym Chepkov [mailto:vchep...@gmail.com]
Sent: Friday, May 14, 2010 4:03
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] two nodes fenced when drbd link fails

On May 13, 2010, at 1:37 PM, Ivan Coronado wrote:

Hello everybody,

I have a problem with my corosync.conf setup. I have a DRBD service running on eth3, and the general network and the STONITH device (iDRAC6) on eth0. If I unplug eth3 to simulate a network failure, both nodes are fenced (first the slave, followed by the master). If I leave only ringnumber 0 in corosync.conf, I don't have this problem. Is this normal operation? Here is the section of corosync.conf where I have the problem, and thanks for the help.

rrp_mode: active
interface { # eth0
        ringnumber: 0
        bindnetaddr: 200.200.201.0
        mcastaddr: 226.94.1.1
        mcastport: 5405
}
interface { # eth3
        ringnumber: 1
        bindnetaddr: 192.168.2.0
        mcastaddr: 226.94.1.2
        mcastport: 5406
}

-----
Ivan

I read on the open...@lists.osdl.org list that setting the ports at least two apart helps (5405, 5407).

Vadym
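For context on Vadym's suggestion: corosync uses two UDP ports per ring, mcastport and mcastport - 1, so with ring 0 on 5405 and ring 1 on 5406, ring 1's second port (5405) collides with ring 0. A sketch of the interface section with the ports two apart, keeping the original addresses (whether this alone resolves the fencing in this setup was not confirmed in the thread):

```
rrp_mode: active
interface { # eth0 -- ring 0 uses UDP ports 5405 and 5404
        ringnumber: 0
        bindnetaddr: 200.200.201.0
        mcastaddr: 226.94.1.1
        mcastport: 5405
}
interface { # eth3 -- ring 1 uses UDP ports 5407 and 5406, no overlap with ring 0
        ringnumber: 1
        bindnetaddr: 192.168.2.0
        mcastaddr: 226.94.1.2
        mcastport: 5407
}
```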
_______________________________________________ Pacemaker mailing list: Pacemaker@oss.clusterlabs.org http://oss.clusterlabs.org/mailman/listinfo/pacemaker Project Home: http://www.clusterlabs.org Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf