On Thu, Mar 28, 2013 at 10:43 PM, Rainer Brestan wrote:
> Hi John,
> to get Corosync/Pacemaker running during anaconda installation, I have
> created a configuration RPM package which does a few actions before starting
> Corosync and Pacemaker.
>
> An excerpt of the post install of this RPM.
> # m
Ok, then. I learned something new. Thanks.
d.p.
On Thu, Mar 28, 2013 at 6:28 PM, Andrew Beekhof wrote:
> On Fri, Mar 29, 2013 at 7:42 AM, David Pendell wrote:
> > I have a two-node CentOS 6.4-based cluster, using Pacemaker 1.1.8 with a
> > cman backend running primarily libvirt-controlled KVM
On Fri, Mar 29, 2013 at 4:12 AM, Benjamin Kiessling
wrote:
> Hi,
>
> we've got a small pacemaker cluster running which controls an
> active/passive router. On this cluster we've got a semi-large (~30)
> number of primitives which are grouped together. On migration it takes
> quite a long time unti
On Fri, Mar 29, 2013 at 7:42 AM, David Pendell wrote:
> > I have a two-node CentOS 6.4-based cluster, using Pacemaker 1.1.8 with a
> > cman backend running primarily libvirt-controlled KVM VMs. For the VMs, I am
> using clvm volumes for the virtual hard drives and a single gfs2 volume for
> shared sto
I have a two-node CentOS 6.4-based cluster, using Pacemaker 1.1.8 with a
cman backend running primarily libvirt-controlled KVM VMs. For the VMs, I
am using clvm volumes for the virtual hard drives and a single gfs2 volume
for shared storage of the config files for the VMs and other shared data.
For
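A setup like the one described could be expressed roughly as follows in crm shell syntax (Pacemaker 1.1.x era). All resource names, device paths, and the mount point below are illustrative, not taken from the post:

```
primitive shared-fs ocf:heartbeat:Filesystem \
    params device="/dev/vg_cluster/lv_shared" directory="/shared" fstype="gfs2"
clone cl-shared-fs shared-fs
primitive vm1 ocf:heartbeat:VirtualDomain \
    params config="/shared/config/vm1.xml" hypervisor="qemu:///system" \
    meta allow-migrate="true" \
    op monitor interval="30s" timeout="30s"
order vm1-after-fs inf: cl-shared-fs vm1
```

With allow-migrate="true", Pacemaker live-migrates the VM instead of stopping and restarting it when the resource moves between nodes.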
Hi,
we've got a small pacemaker cluster running which controls an
active/passive router. On this cluster we've got a semi-large (~30)
number of primitives which are grouped together. On migration it takes
quite a long time until each resource is brought up again because they
are started sequential
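The slow failover follows from how groups work: a group is ordered, so each member waits for the previous one to finish starting. One possible alternative (assuming the ~30 primitives are mutually independent) is to replace the group with a colocation constraint over an unordered resource set, so they can start in parallel on the same node; names below are placeholders:

```
# a group implies strict start order: ip2 waits for ip1, ip3 for ip2, ...
group router-grp ip1 ip2 ip3
# alternative: keep the resources together but drop the ordering; the
# parentheses mark an unordered set in crmsh constraint syntax
colocation router-together inf: ( ip1 ip2 ip3 )
```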
On 13-03-25 03:50 PM, Jacek Konieczny wrote:
>
> The first node to notice that the other is unreachable will fence (kill)
> the other, making sure it is the only one operating on the shared data.
Right. But with typical two-node clusters setting no-quorum-policy=ignore, because
quorum is being ignored, as so
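The trade-off being discussed is usually expressed through two cluster properties; a sketch of the typical two-node settings (illustrative, not quoted from the thread):

```sh
crm configure property no-quorum-policy=ignore  # a lone node can never have >50%
crm configure property stonith-enabled=true     # so fencing must arbitrate instead
```

With quorum ignored, fencing is the only thing preventing both nodes from running the resources at once, which is exactly the split-brain scenario under discussion.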
Hi Steve,
when the pre-promote notify is called, Pacemaker has already selected a node to promote; you cannot stop this sequence any more.
As the node with the highest score should have been promoted, this code is there to fail slaves coming up after the promotion node has been selected.
If you try to force another
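The "score" mentioned here is the master (promotion) score that the resource agent itself publishes, normally via crm_master, a wrapper around crm_attribute. The values below are illustrative:

```sh
crm_master -l reboot -v 100   # preferred promotion target
crm_master -l reboot -v 10    # eligible, but lower priority
crm_master -l reboot -D       # delete the score: never promote this node
```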
Hi Steve,
I think you have misunderstood how IP addresses are used with this setup; PGVIP should start after promotion.
Take a look at Takatoshi's wiki:
https://github.com/t-matsuo/resource-agents/wiki/Resource-Agent-for-PostgreSQL-9.1-streaming-replication
The promotion sequence is very s
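The ordering Rainer describes (the VIP only starting after promotion) would look roughly like this in crm syntax; resource names and the address are placeholders loosely modelled on the wiki setup, not copied from it:

```
ms msPostgresql pgsql \
    meta master-max="1" clone-max="2" notify="true"
primitive PGVIP ocf:heartbeat:IPaddr2 \
    params ip="192.0.2.10" cidr_netmask="24"
colocation vip-with-master inf: PGVIP msPostgresql:Master
order vip-after-promote inf: msPostgresql:promote PGVIP:start
```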
Hi John,
to get Corosync/Pacemaker running during anaconda installation, I have created a configuration RPM package which does a few actions before starting Corosync and Pacemaker.
An excerpt of the post install of this RPM.
# mount /dev/shm if not already existing, otherwise openais cannot
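A minimal sketch of what such a %post scriptlet might look like (an assumption: the author's actual scriptlet is not shown beyond the truncated comment above):

```sh
%post
# mount /dev/shm if not already mounted; inside the anaconda chroot it may
# be missing, and openais/corosync cannot start without it
if ! grep -q ' /dev/shm ' /proc/mounts; then
    mkdir -p /dev/shm
    mount -t tmpfs tmpfs /dev/shm
fi
```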
I'm reading the additions that you made to the pgsql resource agent to allow
for streaming replication in Postgres 9.1+. I'm trying to determine if your
resource agent will compensate if the promoted node (the new master) does not
have the newest data.
From the looks of the pgsql_pre_promote funct
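On PostgreSQL 9.1, the relative freshness of a standby can be checked with the xlog location functions; an illustrative check (not the agent's actual code):

```sh
psql -Atc "SELECT pg_last_xlog_receive_location();"
psql -Atc "SELECT pg_last_xlog_replay_location();"
# comparing these locations across standbys shows which node holds the
# newest data and therefore deserves the highest master score
```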