Yes, it avoids the crashes. Thanks! But I am still seeing spurious VM
migrations/shutdowns when I stop/start a VM with a remote pacemaker
(similar to my last update, only no core dumped while fencing, nor indeed
does any fencing happen, even though I've now verified that fence_node
works again).
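A hedged sketch: on a cman/Pacemaker stack, fencing can be exercised manually from both layers to confirm it still works; the node name vm-node1 below is only a placeholder.
  fence_node vm-node1              # cman's fencing path
  stonith_admin --reboot vm-node1  # Pacemaker's stonith path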
On 2013-07-10T14:32:04, Andrew Morgan wrote:
> First of all, setting the 3rd host to be a standby (this was done before
> any of the resources were created) didn't stop Pacemaker attempting to
> start the resources there (that fails as MySQL isn't installed on that
> server)
It didn't start
- Original Message -
> From: "Lindsay Todd"
> To: "The Pacemaker cluster resource manager"
> Sent: Wednesday, July 10, 2013 12:11:00 PM
> Subject: Re: [Pacemaker] Pacemaker remote nodes, naming, and attributes
>
> Hmm, I'll still submit the bug report, but it seems like crmd is dumping c
Hmm, I'll still submit the bug report, but it seems like crmd is dumping
core while attempting to fence a node. If I use fence_node to fence a real
cluster node, that also causes crmd to dump core. But apart from that, I
don't really see why pacemaker is trying to fence anything.
On Wed, Jul 10
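A hedged sketch of gathering the data such a crash report usually needs (node name, time window, and core/binary paths are placeholders based on typical RHEL locations):
  fence_node node2                                            # reproduce the crmd crash
  crm_report --from "2013-07-10 12:00" --to "2013-07-10 13:00" /tmp/crmd-crash
  gdb -batch -ex 'bt full' /usr/libexec/pacemaker/crmd \
      /var/lib/pacemaker/cores/core.12345                     # backtrace from the core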
Thanks! But there is still a problem.
I am now working from the master branch and building RPMs (well, I have to
also rebuild from the srpm to change the build number, since the RPMs built
directly are always 1.1.10-1). The patch is in the git log, and indeed
things are better ... But I still s
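A hedged sketch of one way to bump the release when rebuilding from the srpm (version and release strings are illustrative):
  rpm -i pacemaker-1.1.10-1.*.src.rpm          # unpack sources and spec into ~/rpmbuild
  sed -i 's/^Release:.*/Release: 2%{?dist}/' ~/rpmbuild/SPECS/pacemaker.spec
  rpmbuild -ba ~/rpmbuild/SPECS/pacemaker.spec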
- Original Message -
> From: "David Vossel"
> To: "The Pacemaker cluster resource manager"
> Sent: Friday, July 5, 2013 4:06:16 PM
> Subject: Re: [Pacemaker] Pacemaker remote nodes, naming, and attributes
>
> - Original Message -
> > From: "David Vossel"
> > To: "The Pacemak
What is the suggested way to configure a highly available NFS service using
cman/pacemaker in RHEL6.4?
RHEL 6.4 has two init.d scripts - nfs and nfslock. The available pacemaker resource agent script in
/usr/lib/ocf/resource.d/heartbeat/nfsserver has just a single parameter to point to the NFS i
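A hedged sketch of one common layout with pcs (group/resource names, IP, and directories are placeholders; nfs_init_script points at the init.d script mentioned above):
  pcs resource create nfs-daemon ocf:heartbeat:nfsserver \
      nfs_init_script=/etc/init.d/nfs nfs_shared_infodir=/var/lib/nfs_shared \
      --group nfs-group
  pcs resource create nfs-ip ocf:heartbeat:IPaddr2 ip=192.168.122.200 \
      cidr_netmask=24 --group nfs-group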
Hi Johan,
I often forget it too, but as always
you have to give detailed information about your stack:
- OS
- corosync version (or heartbeat)
- pacemaker version
- agent version
- etc.
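A hedged sketch of commands that collect most of this on an RPM-based system (package names can differ per distribution):
  cat /etc/redhat-release
  rpm -q pacemaker corosync resource-agents
  pacemakerd --version
  corosync -v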
Best regards
Andreas
-Original Message-
From: Johan Huysmans [mailto:johan.huysm.
Hi All,
I have a setup with a cloned resource and a resource group.
I also configured some colocation and order rules in such a way that the
group can only run where the cloned resource is running.
On a 2 node setup I have no problems. The group moves away when the
local clone resource fails.
T
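A hedged sketch of the kind of constraints being described, using placeholder names my-clone and my-group (pcs syntax):
  pcs constraint colocation add my-group with my-clone INFINITY
  pcs constraint order start my-clone then start my-group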
First of all, setting the 3rd host to be a standby (this was done before
any of the resources were created) didn't stop Pacemaker attempting to
start the resources there (that fails as MySQL isn't installed on that
server)
[root@drbd1 billy]# pcs status
Last updated: Wed Jul 10 13:56:20 2013
L
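A hedged sketch of two ways to keep resources off the third host, using placeholder names drbd3 (node) and mysql-server (resource), with pcs syntax of that era:
  pcs cluster standby drbd3
  pcs constraint location mysql-server avoids drbd3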
Hi All,
Every time a resource fails or recovers, or any other action is performed,
I see the following messages in my log.
What could be the cause of this problem, and how can I see more information
about this message (i.e. view the patch/diff which is failing)?
stonith-ng[25994]: warning: cib_process
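A hedged sketch of one way to see what is actually changing in the CIB around those warnings (file paths are placeholders):
  cibadmin --query > /tmp/cib-before.xml
  # ... trigger the failure/recovery ...
  cibadmin --query > /tmp/cib-after.xml
  diff -u /tmp/cib-before.xml /tmp/cib-after.xml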
On 09/07/2013, at 3:59 PM, Andrew Morgan wrote:
>
>
>
> On 9 July 2013 04:11, Andrew Beekhof wrote:
>
> On 08/07/2013, at 11:35 PM, Andrew Morgan wrote:
>
> > Thanks Florian.
> >
> > The problem I have is that I'd like to define an HA configuration that isn't
> > dependent on a specific s