> OK, so either patch PAF yourself (not recommended) or choose something
> else. Note that two other ways work with Pacemaker:
> * the pgsql resource agent (see the FAQ of PAF)
> * a shared disk architecture (no pgsql replication)

Probably, I will be interested in the "patroni-etcd-haproxy" solution you
suggested.
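From what I understand of that stack so far, each node runs a Patroni agent
that exposes a small REST API, and HAProxy polls it to find out which node
currently holds the leader lock in etcd, so only the primary receives the
write traffic. Just to check that I got the idea right, here is a rough
sketch of that health check in Python; the node names are made up and 8008
is only the Patroni REST port I have seen mentioned in the documentation:

import requests

# Made-up container names for the future cluster; adjust to the real ones.
NODES = ["pg-node1", "pg-node2", "pg-node3"]

def find_primary(nodes):
    """Return the first node whose Patroni agent claims to be the leader."""
    for host in nodes:
        try:
            # As far as I understand, Patroni answers HTTP 200 on /master
            # only on the current leader and 503 on replicas; HAProxy's
            # httpchk relies on the same endpoint to route writes.
            r = requests.get("http://%s:8008/master" % host, timeout=2)
            if r.status_code == 200:
                return host
        except requests.RequestException:
            continue  # node unreachable, try the next one
    return None

print(find_primary(NODES))

At least that is how I read it; I still have to build a test cluster to
confirm.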
> I suppose you could find fencing agents for:
> * the blade itself, but it would fence all the containers running on it
> * the access to the SAN from the failing container
> I don't know if a fencing agent exists for a container itself. Note that
> I'm not familiar with the container world, I lack a lot of knowledge on
> this technology.

I am not familiar with this either. I heard about it for the first time a
few days ago while reading about how to improve HA... :-)

Many thanks for your opinions/advice.

On Wed, Sep 5, 2018 at 3:44 PM, Jehan-Guillaume (ioguix) de Rorthais <
iog...@free.fr> wrote:

> On Wed, 5 Sep 2018 15:06:21 +0200
> Thomas Poty <thomas.p...@gmail.com> wrote:
>
> > > In fact, PAF does not support slots. So it is not a good candidate if
> > > slots are a requirement.
> > Effectively, slots are a requirement we prefer to keep.
>
> OK, so either patch PAF yourself (not recommended) or choose something
> else. Note that two other ways work with Pacemaker:
> * the pgsql resource agent (see the FAQ of PAF)
> * a shared disk architecture (no pgsql replication)
>
> > > > a proxy HAproxy and
> > > > for fencing, I am a bit disappointed, I don't know what to do/use
> > > Depends on your hardware or your virtualization technology.
> > Our production cluster (master and slave) runs on LXC containers. Each
> > LXC container runs on an HPE Blade Server. The storage is on a SAN 3PAR
> > array. Any advice?
>
> I suppose you could find fencing agents for:
>
> * the blade itself, but it would fence all the containers running on it
> * the access to the SAN from the failing container
>
> I don't know if a fencing agent exists for a container itself. Note that
> I'm not familiar with the container world, I lack a lot of knowledge on
> this technology.
>
> ++
>
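PS: since keeping replication slots is what ruled out PAF in the first
place, I also wrote a small check to spot slots whose consumer has
disappeared (for example a container that has been fenced), because an
abandoned slot makes the primary retain WAL indefinitely. It is only a
sketch using the PostgreSQL 10 function names (9.6 would need
pg_current_xlog_location / pg_xlog_location_diff instead), and the
connection string is a placeholder:

import psycopg2

# Placeholder connection string; point it at the current primary.
conn = psycopg2.connect("host=pg-node1 dbname=postgres user=postgres")

with conn, conn.cursor() as cur:
    # restart_lsn is the oldest WAL position a slot still pins; the
    # difference with the current WAL position is roughly how much WAL
    # the primary keeps on disk because of that slot.
    cur.execute("""
        SELECT slot_name,
               active,
               pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)
        FROM pg_replication_slots
    """)
    for slot_name, active, retained_bytes in cur.fetchall():
        state = "active" if active else "INACTIVE"
        print("%s: %s, ~%s bytes of WAL retained" % (slot_name, state,
                                                     retained_bytes))

conn.close()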