Hi!
Well, since I needed one thing - only one node starts the database on
shared storage - I made an ugly, dirty hack :-) that seems to work for me.
I wrote a custom RA that relies on frequent 'monitor' actions and simply
writes a timestamp+hostname to a physical partition. In case it detects that
s
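The heartbeat idea described above can be sketched roughly like this (this is not the author's actual RA; DEV, the function names, and the file path are illustrative assumptions - a real RA would write to a raw shared partition and map the results onto OCF return codes):

```shell
#!/bin/sh
# Sketch: on each 'monitor', write "epoch hostname" to a shared location
# and check whether this node was the last writer.
# DEV defaults to a temp file here; the real thing would be e.g. /dev/sdX1.
DEV="${DEV:-/tmp/ra_heartbeat}"

heartbeat_write() {
    # Record when and from where the heartbeat was written.
    printf '%s %s\n' "$(date +%s)" "$(hostname)" > "$DEV"
}

heartbeat_check() {
    # Succeed (return 0) only if this host wrote the last heartbeat.
    read -r stamp owner < "$DEV" || return 1
    [ "$owner" = "$(hostname)" ]
}

heartbeat_write
if heartbeat_check; then
    echo "we own the heartbeat"
else
    echo "another node owns the heartbeat"
fi
```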
Hi!
I removed all cluster-related stuff and installed from
http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/x86_64/
However, stonith-ng here only ships the fence_* agents... So I cannot put into crmsh
primitive stonith_sbd stonith:external/sbd
:-(
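To see what is actually available on that build, crmsh can list the installed stonith agents; if only fence_* agents show up, a primitive would have to use one of those instead (fence_ipmilan and its parameters below are purely illustrative, not a recommendation for this cluster):

```shell
# List the stonith agents this installation actually provides.
crm ra list stonith

# Illustrative only: if a matching fence agent exists, the crmsh
# primitive would take roughly this shape.
crm configure primitive st-ipmi stonith:fence_ipmilan \
    params ipaddr=10.0.0.1 login=admin passwd=secret \
    op monitor interval=60s
```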
2014-03-19 20:14
Lars,
thanks for answering.
Actually, there was no fencing. The reason: even if one node "goes
wild" and starts its own Oracle instance, we could discard the faulty one,
no problem, then invalidate the drbd resource and resync.
However, this will not do with shared storage: now we cannot affor
There is no sbd on CentOS (Red Hat); in your case you can use cman+pacemaker.
After that, create a volume group using the multipath device; now
you can create the filesystem using the new volume group and copy the
Oracle data. If you are using HP hardware you can use the iLO force fencing
and i
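The volume-group and filesystem steps above might look like this (device names, VG/LV names, and mount point are example assumptions; run only with Oracle shut down):

```shell
# Initialize the multipath device for LVM, build a VG and LV on it.
pvcreate /dev/mapper/mpatha
vgcreate vg_oracle /dev/mapper/mpatha
lvcreate -n lv_oradata -l 100%FREE vg_oracle

# Create the filesystem on the new volume group and mount it.
mkfs.ext4 /dev/vg_oracle/lv_oradata
mount /dev/vg_oracle/lv_oradata /u01/oradata

# Copy the Oracle data, preserving ownership and permissions.
cp -a /old/oradata/. /u01/oradata/
```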
On 2014-03-19T19:20:35, Саша Александров wrote:
> Now, we got shared storage over multipath FC there, so we need to move from
> drbd to shared storage. And I got totally confused now - I cannot find a
> guide on how to set things up. I see two options:
> - use gfs2
> - use ext4 with sbd
If you