On 26 Jun 2014, at 8:18 am, Gianluca Cecchi <gianluca.cec...@gmail.com> wrote:

> 
> On Sun, Jun 22, 2014 at 1:51 AM, Digimer <li...@alteeve.ca> wrote:
> Excellent.
> 
>   Please note: with IPMI-only fencing, you may find that killing all power to 
> the node causes fencing to fail, as the IPMI's BMC will lose power as 
> well (unless it has its own battery, but most don't).
> 
>   If you find this, then the solution I would recommend is to get a pair of 
> switched PDUs (I like the APC AP7900; it is very fast, and the fence_apc_snmp 
> agent is very well tested). With these, you can then set up STONITH levels:
> 
> http://clusterlabs.org/wiki/STONITH_Levels
> 
>   With this, if the IPMI fails, Pacemaker will move on and try fencing by 
> cutting power to the lost node, giving you a backup fencing method. If you 
> use stacked switches, put the PDUs on one switch and the IPMI interfaces on 
> the other, and you will still have reliable fencing when a switch fails, too.
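> 
>   With pcs, the levels could be set up with something like the following 
> (assuming an IPMI device and a PDU device have already been created for each 
> node; the resource names fence_srvmgmt01_ipmi and fence_srvmgmt01_pdu are 
> just placeholders):
> 
> # level 1: try the IPMI first
> pcs stonith level add 1 srvmgmt01.localdomain.local fence_srvmgmt01_ipmi
> # level 2: only attempted if every level-1 device fails
> pcs stonith level add 2 srvmgmt01.localdomain.local fence_srvmgmt01_pdu
> 
> Repeat the same two commands for srvmgmt02 with its own devices.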
> 
>   Cheers!
> 
> 
> Good points. At the moment this is a lab environment so it is not crucial, 
> but I'll keep it in mind for production use.
> 
> One point: after running some tests and deliberately causing node failures, I 
> see this behaviour for the fencing resources:
> 
> normal behaviour
> [root@srvmgmt02 ~]# crm_mon -1
> ...
> [snip]
>  fence_srvmgmt01    (stonith:fence_intelmodular):    Started srvmgmt01.localdomain.local 
>  fence_srvmgmt02    (stonith:fence_intelmodular):    Started srvmgmt02.localdomain.local 
> 
> after fencing of srvmgmt01 (because of a DRBD problem I deliberately produced 
> on it)
> [root@srvmgmt02 ~]# crm_mon -1
> ...
> [snip]
>  fence_srvmgmt01    (stonith:fence_intelmodular):    Started srvmgmt02.localdomain.local 
>  fence_srvmgmt02    (stonith:fence_intelmodular):    Started srvmgmt02.localdomain.local 
> 
> and the output above remains the same while srvmgmt01 is rebooting, and also 
> after it has completed startup and rejoined the cluster.
> So I presume I have to set a location constraint rule so that each fence 
> resource can only run on its own node, correct?

Not really. It doesn't matter much which node is running the fencing device - 
that's mostly just the node that checks the device is still healthy/correctly 
configured.
Every node can use the device's configuration when it needs to fence.
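
If you want to see that in practice, something like the following, run from 
either node, should work regardless of which node currently shows the fence 
resource as Started (note that it really power-cycles the target, so only do 
it on a test cluster):

# run this from srvmgmt02 even while fence_srvmgmt01 is Started elsewhere
pcs stonith fence srvmgmt01.localdomain.local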

> 
> something like
> pcs constraint location fence_srvmgmt01 prefers 
> srvmgmt01.localdomain.local=INFINITY
> pcs constraint location fence_srvmgmt02 prefers 
> srvmgmt02.localdomain.local=INFINITY
> 
> Gianluca


_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
