On Thu, 2009-01-22 at 21:03 +0100, Michael Schwartzkopff wrote:
> Hi,
>
> I am quite desperate already. I compiled my Pacemaker-GUI. Compile and install
> are OK, apart from a small error with gv.py. (I finally removed gv from the
> import line)
>
> When I call /usr/heartbeat-gui/haclient.py
On Thu, 2009-01-22 at 16:43 +0100, Michael Schwartzkopff wrote:
> Hi,
>
> Configuring, compiling, and installing a fresh Pacemaker-Python-GUI works
> for me. But calling
> # /usr/lib/heartbeat-gui/haclient.py
>
> results in:
>
> Traceback (most recent call last):
> File "/usr/lib/heartbeat
Alex Strachan schrieb:
-----Original Message-----
From: linux-ha-boun...@lists.linux-ha.org [mailto:linux-ha-
boun...@lists.linux-ha.org] On Behalf Of Hell, Robert
Sent: Thursday, 22 January 2009 9:33 PM
To: linux-ha@lists.linux-ha.org
Subject: [Linux-HA] Two-node clusters in split-sites
Hi,
2009/1/23 Michael Schwartzkopff :
> Hi,
>
> I am quite desperate already. I compiled my Pacemaker-GUI. Compile and install
> are OK, apart from a small error with gv.py. (I finally removed gv from the
> import line)
gv is used when you try to visualize the transition graph.
The GUI can run fine as
> Linux-HA mailing list
> Linux-HA@lists.linux-ha.org
> http://lists.linux-ha.org/mailman/listinfo/linux-ha
> See also: http://linux-ha.org/ReportingProblems
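Since gv is only needed for visualizing the transition graph, the usual fix is to make the import optional rather than removing it outright. A minimal sketch of that pattern; `HAS_GV` and `show_transition_graph` are illustrative names, not the actual haclient.py code:

```python
# Make the graphviz binding optional: the GUI core does not need it,
# only the transition-graph viewer does.
try:
    import gv  # SWIG graphviz bindings; may be absent
    HAS_GV = True
except ImportError:
    HAS_GV = False

def show_transition_graph(dot_text):
    """Render the transition graph only when the gv bindings are present."""
    if not HAS_GV:
        return "transition graph disabled: python 'gv' module not installed"
    graph = gv.readstring(dot_text)  # parse the DOT text
    gv.layout(graph, "dot")          # compute node positions
    return graph
```

With this guard the GUI starts normally on systems without the python-graphviz package, and only the graph viewer degrades.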
On Thu, Jan 22, 2009 at 09:31:15PM +0100, Michael Schwartzkopff wrote:
> Am Donnerstag, 22. Januar 2009 21:17:45 schrieb Andrew Beekhof:
> > On Thu, Jan 22, 2009 at 21:03, Michael Schwartzkopff
> wrote:
> > > Hi,
> > >
> > > I am quite desperate already. I compiled my Pacemaker-GUI. Compile and
>
Hi,
On Thu, Jan 22, 2009 at 04:43:58PM +0100, Michael Schwartzkopff wrote:
> Hi,
>
> Configuring, compiling, and installing a fresh Pacemaker-Python-GUI works
> for me. But calling
> # /usr/lib/heartbeat-gui/haclient.py
>
> results in:
>
> Traceback (most recent call last):
> File "/usr/l
Hi,
this is my first attempt at setting up an HA cluster, and I see strange behavior
on my drbd disks.
When I fail over the cluster, the DRBD Master role moves correctly but the Slave
one doesn't start.
To simulate the failover I change the weight of the preferred master node.
As you can see in my confi
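For context, a DRBD master/slave pair in the crm shell of that era is typically configured along these lines, and the Master is usually moved by changing a location score. This is a sketch only; the names (`drbd0`, `ms_drbd0`, `r0`, `node1`) are illustrative, not taken from the poster's configuration:

```
# Sketch of a Pacemaker (crm shell) DRBD master/slave setup.
primitive drbd0 ocf:linbit:drbd \
    params drbd_resource="r0" \
    op monitor interval="15s" role="Master" \
    op monitor interval="30s" role="Slave"
ms ms_drbd0 drbd0 \
    meta master-max="1" master-node-max="1" \
         clone-max="2" clone-node-max="1" notify="true"
# Changing this score is the usual way to prefer a Master node:
location prefer_node1_master ms_drbd0 \
    rule $role="Master" 100: #uname eq node1
```

With `clone-max="2"` a Slave instance should still be allowed on the other node after a move; if it does not start, a common culprit is a leftover -INFINITY location constraint (for example one created by an earlier `crm resource migrate`) pinning the resource away from that node.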
Hi ,
I am not sure of the cause of the behaviour below; any help or pointers would
be appreciated.
Do I need to modify some parameters, or change the way I did the setup?
The VMs configured as HA resources go into unmanaged mode when the DC node is
powered down.
My two node Linux HA setup cons
On Thu, Jan 15, 2009 at 23:56, alexus wrote:
> Hello,
>
> Would someone help me try to understand the problem i'm having
> I'm using:
>
> [r...@mail2 ~]# cat /etc/redhat-release
> Red Hat Enterprise Linux Server release 5.2 (Tikanga)
> [r...@mail2 ~]# uname -a
> Linux uftwfmail2 2.6.18-92.el5xen #
Am Donnerstag, 22. Januar 2009 21:17:45 schrieb Andrew Beekhof:
> On Thu, Jan 22, 2009 at 21:03, Michael Schwartzkopff
wrote:
> > Hi,
> >
> > I am quite desperate already. I compiled my Pacemaker-GUI. Compile and
> > install are OK, apart from a small error with gv.py. (I finally removed gv
> > from
On Thu, Jan 22, 2009 at 21:03, Michael Schwartzkopff wrote:
> Hi,
>
> I am quite desperate already. I compiled my Pacemaker-GUI. Compile and install
> are OK, apart from a small error with gv.py. (I finally removed gv from the
> import line)
>
> When I call /usr/heartbeat-gui/haclient.py I get the
Hi,
I am quite desperate already. I compiled my Pacemaker-GUI. Compile and install
are OK, apart from a small error with gv.py. (I finally removed gv from the
import line)
When I call /usr/heartbeat-gui/haclient.py I get the error:
Traceback (most recent call last):
File "/usr/lib/heartbeat-gui
Hi,
Configuring, compiling, and installing a fresh Pacemaker-Python-GUI works
for me. But calling
# /usr/lib/heartbeat-gui/haclient.py
results in:
Traceback (most recent call last):
File "/usr/lib/heartbeat-gui/haclient.py", line 27, in ?
import sys, os, string, socket, syslog, webbrow
On Thu, 2009-01-22 at 13:51 +0100, Andrew Beekhof wrote:
>
> > If I wanted
> > the most stable version on my SLES 10 SP2 cluster should I install
> > packages from outside of the Suse official packages?
>
> This is possible, but does void your support contract.
> The best option is to get the new
On Thu, Jan 22, 2009 at 10:21, Darren Mansell
wrote:
> On Wed, 2009-01-21 at 23:11 +0100, Andrew Beekhof wrote:
>> Please don't use master/slave resources with that version of heartbeat
>> - it's old and full of bugs.
>> Please use the latest version of Pacemaker instead.
>
> I seem to be using that
On Fri, Jan 16, 2009 at 20:47, Michael Mohr wrote:
> Thanks for your reply.
>
> I'm trying to configure LinuxHA to run a group of resources G_multi on
> BOTH members of a 2-node cluster, but apparently I cannot use clones
> because G_multi contains only LSB ResourceAgents:
>
> Requirements for Res
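For the record, a group of LSB resources can be cloned: the restriction applies to globally-unique clones (which need OCF agents that understand instance parameters), not to ordinary anonymous clones. The init scripts must simply tolerate start/stop/status being run on every node. A minimal crm-shell sketch, with `httpd` and `myapp` as stand-in LSB scripts rather than the poster's actual resources:

```
# Sketch: cloning a group of LSB resources (names are illustrative).
primitive web lsb:httpd op monitor interval="30s"
primitive app lsb:myapp op monitor interval="30s"
group G_multi web app
# An anonymous clone: one copy of the whole group per node.
clone cl_multi G_multi \
    meta clone-max="2" clone-node-max="1" interleave="true"
```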
On Wed, Jan 21, 2009 at 17:47, Darren Mansell
wrote:
> Hi all.
>
> I've got a 2-node cluster set up using Heartbeat, CRM and Ldirectord.
>
> Most of the time it works great but randomly, resources won't fail over
> to the other node.
It looks related to the use of quorumd - I didn't think we ship
Am Donnerstag, 22. Januar 2009 12:32:57 schrieb Hell, Robert:
> Hi,
>
>
>
> we are planning to run multiple two-node clusters (with different
> functions: database cluster, samba cluster, ...) in two different sites
> with a fast but unreliable network connection. All cluster nodes
> replicate da
Hi,
we are planning to run multiple two-node clusters (with different
functions: database cluster, samba cluster, ...) in two different sites
with a fast but unreliable network connection. All cluster nodes
replicate data with drbd.
I'm especially afraid of split-brain in this scenario becaus
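The usual defenses in a two-node split-site setup are fencing plus DRBD-level fencing hooks, so a node that loses sight of its peer cannot promote stale data. A sketch of the commonly recommended pieces; resource and script names are from drbd 8.3 conventions and may differ on other versions:

```
# Pacemaker side (crm shell): two nodes can never retain quorum when split,
# so quorum loss must not stop resources -- fencing provides the protection.
property stonith-enabled="true" no-quorum-policy="ignore"
```

```
# drbd.conf side: fence the peer's Master role through the CIB when
# replication breaks (crm-fence-peer.sh ships with drbd 8.3+).
resource r0 {
  disk {
    fencing resource-only;
  }
  handlers {
    fence-peer          "/usr/lib/drbd/crm-fence-peer.sh";
    after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
  }
}
```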
On Wed, 2009-01-21 at 23:11 +0100, Andrew Beekhof wrote:
> Please don't use master/slave resources with that version of heartbeat
> - it's old and full of bugs.
> Please use the latest version of Pacemaker instead.
I seem to be using that version as it is part of SLES 10 SP2. Surely it
should be the