On Tue, Sep 20, 2011 at 1:58 PM, Brian J. Murrell wrote:
> On 11-09-19 11:02 PM, Andrew Beekhof wrote:
>> On Wed, Aug 24, 2011 at 6:56 AM, Brian J. Murrell
>> wrote:
>>>
>>> 2. preventing the active node from being STONITHed when the resource
>>> is moved back to its failed-and-restored node
On 11-09-19 11:02 PM, Andrew Beekhof wrote:
> On Wed, Aug 24, 2011 at 6:56 AM, Brian J. Murrell
> wrote:
>>
>> 2. preventing the active node from being STONITHed when the resource
>> is moved back to its failed-and-restored node after a failover.
>> IOW: BAR1 is available on foo1, which fail
On Sun, Sep 11, 2011 at 2:30 AM, Vadym Chepkov wrote:
>
> On Sep 8, 2011, at 3:40 PM, Florian Haas wrote:
>
On 09/08/11 20:59, Brad Johnson wrote:
> We have a 2 node cluster with a single resource. The resource must run
> on only a single node at one time. Using the ocf:pacemaker:ping
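For reference, the usual ocf:pacemaker:ping pattern is to clone the ping
resource and tie placement to the pingd attribute it maintains. A crm shell
sketch with hypothetical addresses and resource names:

    primitive p_ping ocf:pacemaker:ping \
        params host_list="192.168.0.1" multiplier="100" \
        op monitor interval="15s"
    clone cl_ping p_ping
    # run the resource only where at least one ping target answers
    location l_connected my_resource \
        rule -inf: not_defined pingd or pingd lte 0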
On Thu, Sep 1, 2011 at 8:22 PM, Prakash Velayutham
wrote:
> Hi,
>
> Is there a way to control which node in the cluster stoniths which other node
> based on some sort of connectivity information (using the heartbeat network)?
Use the same method as described here:
http://www.clusterlabs.org/
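That method, as commonly configured, boils down to one STONITH resource per
target node, placed with location constraints. A sketch assuming
stonith:external/ipmi, with hypothetical node and address details:

    primitive st_node1 stonith:external/ipmi \
        params hostname="node1" ipaddr="10.0.1.1" userid="admin" passwd="secret"
    # a node must never be responsible for fencing itself
    location l_st_node1 st_node1 -inf: node1

Repeat per node; connectivity-based placement can then be layered on with
further location rules (e.g. on a pingd attribute).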
On Wed, Aug 24, 2011 at 6:56 AM, Brian J. Murrell wrote:
> Hi All,
>
> I am trying to configure pacemaker (1.0.10) to make a single filesystem
> highly available across two nodes (please don't be distracted by the dangers
> of multiply mounted filesystems and clustering filesystems, etc., as I
> am ab
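The skeleton of such a configuration is a single ocf:heartbeat:Filesystem
primitive; the device and mountpoint below are hypothetical:

    primitive fs_data ocf:heartbeat:Filesystem \
        params device="/dev/sdb1" directory="/mnt/data" fstype="ext3" \
        op monitor interval="20s" timeout="40s"

Pacemaker then starts it on exactly one of the two nodes, and fencing keeps
the other node from mounting it concurrently.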
On Tue, Aug 23, 2011 at 2:44 AM, Bobbie Lind wrote:
> I tested this out on our duplicate dev system to see if it was old scores or
> a bad configuration. I got the exact same ptest scores on a new cluster set
> up exactly the same way. This led me to believe that it IS my
> configuration that n
Hi, Andrew
When disk utilization on the DC node reached 100%, I found that pengine
was consuming memory in large quantities.
This memory consumption seems to happen when pengine fails to write out the
pe-input file.
When the write fails, the following log is output:
Sep 1 14:15:50 sb
On Thu, Aug 25, 2011 at 6:41 PM, ihjaz Mohamed wrote:
> I'm facing the same issue with pacemaker-1.1.5 as well. Does this mean newer
> versions of Pacemaker no longer support Heartbeat?
>
>
On RHEL and its derivatives - yes.
Unless you recompile it yourself.
Newer versions also print "heartbeat" ins
The RA should be calling crm_master to tell Pacemaker which instance
should be promoted.
Alternatively, you can use a location constraint with role=Master.
2011/9/6 项磊 :
> A master/slave resource stays in the Slave role when it starts running;
> this behavior began with Pacemaker 1.1.5.
> How do I promote one of
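Both of the suggested approaches, sketched with hypothetical resource and
node names:

    # inside the RA, typically from the monitor/start actions:
    crm_master -l reboot -v 100    # bid for promotion on this node
    crm_master -l reboot -D        # withdraw the bid

    # or a one-off constraint in the crm shell:
    location l_prefer_master ms_my_resource \
        rule $role=Master 100: #uname eq node1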
Thanks, I'll apply it shortly
2011/9/6 Yuusuke IIDA :
> Hi, Andrew
>
> I found a small memory leak in pengine.
> It appears in both Pacemaker-1.1 and Pacemaker-1.0.
>
> Best Regards,
> Yuusuke
> --
>
> METRO SYSTEMS CO., LTD
>
> Yuusuke Iida
> Mail: iida
On Thu, Sep 8, 2011 at 5:40 AM, Patrik Plank
wrote:
> Hello Andrew,
>
> First of all thank you for the information!
>
> Now I have a stupid question:
>
> Why this configuration works only with a stonith device?
Because the FS needs the node to be confirmed completely dead,
not just missing.
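Hence a minimal setup needs fencing enabled and at least one working device;
a sketch with hypothetical IPMI details:

    property stonith-enabled="true"
    primitive st_ipmi stonith:external/ipmi \
        params hostname="node2" ipaddr="10.0.1.2" userid="admin" passwd="secret"

Until the lost node has actually been fenced, Pacemaker will not mount the
filesystem on the survivor.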
On Mon, Sep 19, 2011 at 1:29 AM, Filip Sakáloš wrote:
>> Hello there,
>>
>> I am currently developing a cluster whose main principle is to run some
>> resources on one node and other resources on another node. This is quite
>> simple to configure, but I have encountered a problem with configu
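The static placement part is just a pair of location constraints; a crm shell
sketch with hypothetical resource and node names:

    location l_res1_node1 res1 100: node1
    location l_res2_node2 res2 100: node2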
On Tue, Sep 13, 2011 at 7:09 PM, Thilo Uttendorfer
wrote:
> Hi,
>
> to check the constraints of a Pacemaker resource, I execute:
> crm_resource --constraints -r res1
>
> while this command returns the information I want, syslog show these
> messages:
>
> Sep 13 10:57:00 server01 crm_resource: [
Hello Everyone,
I have been experiencing some problems getting pacemaker going with
DRBD and MySQL
The Config:
primitive drbd_mysql ocf:linbit:drbd \
    params drbd_resource="mysql" \
    op monitor interval="15s"
ms ms_drbd_mysql drbd_mysql \
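The ms definition above is cut off; the conventional continuation for a DRBD
master/slave set looks like this (these meta values follow the usual DRBD
documentation pattern, not necessarily the poster's exact config):

    ms ms_drbd_mysql drbd_mysql \
        meta master-max="1" master-node-max="1" \
             clone-max="2" clone-node-max="1" notify="true"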
Hi,
On Sun, Sep 18, 2011 at 05:29:56PM +0200, Filip Sakáloš wrote:
> > Hello there,
> >
> > I am currently developing a cluster whose main principle is to run some
> > resources on one node and other resources on another node. This is quite
> > simple to configure, but I have encountered a p