On 17/07/14 02:39 PM, Alex Samad - Yieldbroker wrote:
> -Original Message-
> From: Digimer [mailto:li...@alteeve.ca]
> Sent: Thursday, 17 July 2014 3:00 PM
> To: The Pacemaker cluster resource manager
> Subject: Re: [Pacemaker] clusters on virtualised platforms
>
> On 17/07/14 01:41 PM, Alex Samad - Yieldbroker wrote:
> >
> >
> >> -Origin
On 17/07/14 01:41 PM, Alex Samad - Yieldbroker wrote:
> -Original Message-
> From: Digimer [mailto:li...@alteeve.ca]
> Sent: Thursday, 17 July 2014 2:02 PM
> To: The Pacemaker cluster resource manager
> Subject: Re: [Pacemaker] clusters on virtualised platforms
>
> Don't confuse quorum and fencing (stonith), they serve different purposes.
>
Don't confuse quorum and fencing (stonith); they serve different
purposes. Basically, quorum is useful when things are working; fencing
is required when things go wrong. So regardless of a quorum disk, you
still need to be able to fence. This requires that each VM be able to
call the hypervisor a
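For context, a minimal sketch of the kind of fencing setup being described,
assuming the fence_vmware_soap agent and the pcs shell are available; the
vCenter address, credentials and VM names below are placeholders, not values
from this thread:

   # one fence device per VM, pointed at vCenter/ESXi
   pcs stonith create fence_node1 fence_vmware_soap \
       ipaddr=vcenter.example.com ssl=1 login=fenceuser passwd=secret \
       port="vm-node1" pcmk_host_list="node1"
   pcs stonith create fence_node2 fence_vmware_soap \
       ipaddr=vcenter.example.com ssl=1 login=fenceuser passwd=secret \
       port="vm-node2" pcmk_host_list="node2"

A restricted vCenter account that can only power-cycle those two VMs keeps
the "rights to restart the other node" exposure small.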
Hi
I wonder if there is a best practice or how-to on running clusters on, say, VMware.
I have built a 2-node cluster with 2 VMs. I am hesitant to give each
machine the rights to restart the other node. Plus I would have to install the vmware lib
and
So what do other people do?
Quick search giv
I removed it, and let Corosync decide itself which address will be used.
It works well.
Teenigma
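Not from the original mail, but for anyone following along: if this is a
cman-based setup (cluster.conf), "letting Corosync decide" usually amounts
to dropping the explicit address/altname from the node entries and relying
on name resolution alone, roughly:

   <clusternode name="node1" nodeid="1"/>
   <clusternode name="node2" nodeid="2"/>

Corosync then binds to whatever address node1/node2 resolve to on each host.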
On Wed, Jul 16, 2014 at 7:05 PM, Jan Friesse wrote:
> Teerapatr
>
>> Dear Honza,
>>
>> Sorry to say this, but I found a new error again. LOL
>>
>> This time, I have already installed the 1.4.1-17 as your
Hi,
my first post on this might have been too complicated. I broke it down into
a test case.
I have four resources: A1, B1, C1 and B2. B1 is a Master/Slave.
The complete group should run on node korfwf01 (preferably) or on node
korfwf02, not on korfwm01, not on korfwm02.
B1:Master depends on A1, C
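Not part of the original post, but a rough crm-shell sketch of the constraint
set being described, assuming the master/slave resource is named ms_B1 and
using made-up scores and IDs:

   location pref-korfwf01 A1  100: korfwf01
   location pref-korfwf02 A1   50: korfwf02
   location ban-korfwm01  A1 -inf: korfwm01
   location ban-korfwm02  A1 -inf: korfwm02
   colocation b1-master-with-a1 inf: ms_B1:Master A1
   order a1-before-b1-promote inf: A1:start ms_B1:promote

The same location scores would be repeated (or expressed with a resource set)
for C1 and B2 so the whole group follows the preference.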
On 16/07/14 12:23, Tony Atkinson wrote:
I'm getting some weirdness from nodes coming back from standby
Sorry, disregard.
Issue has gone away (not sure what I did, but hey-ho)
Teerapatr
> Dear Honza,
>
> Sorry to say this, but I found a new error again. LOL
>
> This time, I have already installed the 1.4.1-17 as you advised.
> And the nodename, without altname, is mapped to IPv6 using the hosts file.
> Everything is fine, but the 2 nodes can't communicate with each other.
> So I add the
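For illustration only (node names and addresses made up), mapping the node
names to IPv6 in /etc/hosts on both nodes would look something like:

   # /etc/hosts
   fd00::10   node1
   fd00::11   node2

It is worth confirming that each host resolves the *other* node's name to the
IPv6 address (e.g. ping6 node2 from node1) before blaming corosync for the
two nodes not seeing each other.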
Hi,
I'm getting some weirdness from nodes coming back from standby, and was
wondering if anyone could have a look over my config to see if there are
any obvious errors?
2-node active/active cluster serving virtual machines (libvirt/KVM)
Storage is via DRBD
When I bring a node out of standby it
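A quick way to reproduce the standby/online cycle being described, assuming
the crm shell is in use (the node name is a placeholder):

   crm node standby nodeA     # drain the node
   crm node online nodeA      # bring it back from standby
   crm_mon -1                 # see where resources end up afterwards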