Lars, thanks for answering. Actually, there was no fencing. The reason was that even if one node "went wild" and started its own Oracle instance, we could discard the faulty node without a problem: invalidate its drbd resource and resync. However, this will not work with shared storage: now we cannot afford to start an instance when network connectivity is lost but shared-storage access is still available.
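(For context, the drbd-based recovery I mean was roughly the following; a sketch only, with the resource name "r0" as a placeholder:)

```shell
# On the node that went wild, after stopping the rogue Oracle instance:
drbdadm secondary r0     # demote the stale local copy
drbdadm invalidate r0    # mark the local data as outdated
drbdadm connect r0       # reconnect; a full resync pulls good data from the peer
```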
fence_sanlock seems to be ideologically the same as sbd. However, there are no sound tutorials/explanations for it. I am even thinking of writing a custom RA to implement resource-level fencing (I have other resources and do not want to restart the whole node).

2014-03-19 20:14 GMT+04:00 Lars Marowsky-Bree <l...@suse.com>:

> On 2014-03-19T19:20:35, Саша Александров <shurr...@gmail.com> wrote:
>
> > Now, we got shared storage over multipath FC there, so we need to move
> > from drbd to shared storage. And I got totally confused now - I cannot
> > find a guide on how to set things up. I see two options:
> > - use gfs2
> > - use ext4 with sbd
>
> If you don't need concurrent access from both nodes to the same file
> system, using ext4/XFS in a fail-over configuration is to be preferred
> over the complexity of a cluster file system like GFS2/OCFS2.
>
> RHT has chosen to not ship sbd, unfortunately, so you can't use this
> very reliable fencing mechanism on CentOS/RHEL. Or you'd have to build
> it yourself. Assuming you have hardware fencing right now, you can
> continue to use that too.
>
> Regards,
> Lars
>
> --
> Architect Storage/HA
> SUSE LINUX Products GmbH, GF: Jeff Hawn, Jennifer Guild, Felix
> Imendörffer, HRB 21284 (AG Nürnberg)
> "Experience is the name everyone gives to their mistakes." -- Oscar Wilde
>
> _______________________________________________
> Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org

--
Regards, ААА.
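P.S. For the archives, since sound sbd tutorials are hard to find: a minimal sbd setup on a shared LUN looks roughly like the following. This is a sketch only; the device path /dev/mapper/sbd_lun is a placeholder, and as Lars notes, on CentOS/RHEL you would first have to build sbd yourself.

```shell
# Initialize a small (~1 MB) shared LUN as the sbd device (this wipes it):
sbd -d /dev/mapper/sbd_lun create
sbd -d /dev/mapper/sbd_lun list      # verify the per-node message slots

# Point the sbd daemon at the device (e.g. SBD_DEVICE in /etc/sysconfig/sbd),
# then add the matching fencing resource to the cluster:
crm configure primitive stonith-sbd stonith:external/sbd \
    params sbd_device="/dev/mapper/sbd_lun"
crm configure property stonith-enabled=true
```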
_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org