Thanks for pointing that out. I'll give the current cluster-glue a try
tomorrow.
Keep up the great work!
Oliver
On Thursday, 24 June 2010 15:43:35, Dejan Muhamedagic wrote:
> Hi,
>
> On Thu, Jun 24, 2010 at 02:31:24PM +0200, Oliver Heinz wrote:
> > I'm
I'm trying to rebuild the Debian packages for the current stable-1.0 branch in the
repository on Debian squeeze.
Rebuilding the existing Debian packages (pacemaker-1.0.8+hg15494) works fine, so the build
dependencies should be satisfied (unless library version requirements have changed).
I get:
Making all in pengine
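For reference, the rebuild itself is roughly this (a sketch; the directory names are
just the unpacked squeeze source and my local hg checkout):
apt-get build-dep pacemaker                  # pull in the build dependencies of the squeeze package
cd pacemaker-1.0.8+hg15494                   # existing squeeze source package, builds fine
dpkg-buildpackage -rfakeroot -us -uc -b
cd ../pacemaker-stable-1.0                   # hg checkout of stable-1.0 with the same debian/ directory copied in
dpkg-buildpackage -rfakeroot -us -uc -b      # this is the build in question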
On Monday, 14 June 2010 16:43:54, Dejan Muhamedagic wrote:
> Hi,
>
> On Mon, Jun 14, 2010 at 02:26:57PM +0200, Oliver Heinz wrote:
> > I configured an sbd fencing device on the shared storage to prevent data
> > corruption. It basically works, but when I pull the
I configured an sbd fencing device on the shared storage to prevent data
corruption. It basically works, but when I pull the network plugs on one node
to simulate a failure, one of the nodes is fenced (not necessarily the one that
was unplugged). After the fenced node reboots, it fences the other
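For reference, a minimal sketch of how such an sbd partition is initialised (the
device path and <peer-node> are placeholders):
sbd -d /dev/disk/by-id/<shared-lun>-part1 create                  # write the sbd slot table
sbd -d /dev/disk/by-id/<shared-lun>-part1 list                    # both nodes should see the same slots
sbd -d /dev/disk/by-id/<shared-lun>-part1 message <peer-node> test   # peer should log the test message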
On Monday, 31 May 2010 09:45:26, marc genou wrote:
> Hi
>
> I am trying to deploy an active/active cluster but I ran into some trouble.
> When I try to mount a GFS2 filesystem on top of DRBD I get this error:
>
> gfs_controld join connect error: Connection refused
> error mounting lockproto l
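(With the pacemaker-based stack this error usually just means gfs_controld.pcmk is not
running; dlm_controld.pcmk and gfs_controld.pcmk are normally run as cluster-managed
clones. A sketch along the lines of the Clusters from Scratch guide, with placeholder
resource names:)
primitive resDLM ocf:pacemaker:controld \
        op monitor interval="120s"
primitive resGFSD ocf:pacemaker:controld \
        params daemon="gfs_controld.pcmk" args="-g 0" \
        op monitor interval="120s"
clone cloneDLM resDLM meta interleave="true"
clone cloneGFSD resGFSD meta interleave="true"
colocation colGFSDwithDLM inf: cloneGFSD cloneDLM
order ordDLMbeforeGFSD inf: cloneDLM cloneGFSD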
Just ignore my previous posting. If I don't name the resource...
primitive resSBD stonith:external/sbd \
        params sbd_device="/dev/disk/by-id/dm-uuid-part1-mpath-3600c0ff000d8d78802faa14b0100"
works much better.
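A quick sanity check after committing the above (resource name as above):
crm configure show resSBD    # double-check the committed definition
crm_mon -1                   # the stonith resource should show as Started on one node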
Oliver
On Wednesday, 19 May 2010 14:27:27 ...
On Wednesday, 19 May 2010 10:25:10, Florian Haas wrote:
> Read up on resource placement based on ping connectivity and the
> ocf:pacemaker:ping RA, and on the external/sbd STONITH plugin. That
> should get you started.
Thanks for the hints.
I tried to configure sbd according to the sources I
I'm having trouble creating a 2-node cluster with shared storage and a cluster FS
that behaves the way I want.
The decision should be based on network connectivity. If node A loses
network connectivity (i.e. I pull the plugs), node B should take over all
resources because it is able to ping i.e. t
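Roughly the kind of configuration I'm aiming for, as a sketch: an ocf:pacemaker:ping
clone that maintains the pingd attribute, plus a location rule that keeps the resources
off a node that cannot ping. The resource names, the resFS resource being constrained
and the ping target address are only placeholders:
primitive resPing ocf:pacemaker:ping \
        params host_list="192.168.1.254" multiplier="1000" \
        op monitor interval="15s"
clone clonePing resPing
location locOnConnectedNode resFS \
        rule -inf: not_defined pingd or pingd lte 0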
On Wednesday, 28 April 2010 20:25:53, Andreas Gabriel wrote:
> Hi,
>
> we are currently testing a cluster environment on Debian lenny with
> a self-compiled native kernel (2.6.32.2 amd64) because clvm
> hangs on start on the second node if we are using Debian's kernel.
> Kill -9 cannot stop c
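(For completeness, the usual first thing to check with clvmd, in case it plays a role
here: LVM must be switched to cluster-wide locking on both nodes. A sketch:)
grep -E 'locking_type|fallback_to_local_locking' /etc/lvm/lvm.conf
# clvmd needs locking_type = 3 (and typically fallback_to_local_locking = 0) on every node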
pacemaker.c
> @@ -123,7 +123,7 @@ void dlm_process_node(gpointer key, gpointer value, gpointer user_data)
> } else if(rc == 0) {
> do_remove = TRUE;
>
> -} else if(is_active) {
> +} else if(is_active && node->addr) {
> do_add = TRUE;
>
On Tuesday, 27 April 2010 13:00:27, Lars Ellenberg wrote:
> On Tue, Apr 27, 2010 at 09:33:27AM +0200, Oliver Heinz wrote:
..
> > I installed every -dbg package that is available for any package
> > installed on the system (just to be sure). There are no debug packages
>
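For a backtrace from the core file, roughly (the binary path is only a guess, adjust it
to whatever actually dumped the core):
gdb /usr/sbin/dlm_controld.pcmk core.2606
(gdb) bt full
(gdb) thread apply all bt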
On Tuesday, 27 April 2010 14:18:27, Ante Karamatić wrote:
> On 27.04.2010 12:58, Oliver Heinz wrote:
> > I use bonding, VLANs and bridge interfaces. No DHCP, just fixed addresses.
>
> As a workaround, you could:
>
> sudo update-rc.d -f corosync disable S
>
> add
On Tuesday, 27 April 2010 13:01:36, Ante Karamatić wrote:
> On 27.04.2010 12:58, Oliver Heinz wrote:
> > I use bonding, VLANs and bridge interfaces. No DHCP, just fixed addresses.
>
> Two other people with this issue also use bonding, and after disabling
> bonding the issue was gone
On Tuesday, 27 April 2010 12:44:57, Ante Karamatić wrote:
> On 27.04.2010 09:41, Oliver Heinz wrote:
> > Pål Simensen reported that he has the same segfault error, so it would be
> > interesting if the new packages fixed it for him. Did they?
>
>
On Tuesday, 27 April 2010 09:15:58, Ante Karamatić wrote:
> On 26.04.2010 16:37, Oliver Heinz wrote:
> > Still segfaults.
>
> Urgh... I cannot reproduce this. I'm using virtualized 64-bit KVM
> machines with ppa:ubuntu-ha/lucid-cluster and I get no segfaults. Your
>
On Monday, 26 April 2010 18:50:24, Lars Ellenberg wrote:
> On Mon, Apr 26, 2010 at 04:37:37PM +0200, Oliver Heinz wrote:
> > On Monday, 26 April 2010 15:58:51, Ante Karamatić wrote:
> > > On 26.04.2010 14:42, Oliver Heinz wrote:
> > > > Thanks for that infor
On Monday, 26 April 2010 15:58:51, Ante Karamatić wrote:
> On 26.04.2010 14:42, Oliver Heinz wrote:
> > Thanks for that information. I rebuilt the complete stack with
> > cluster-glue 1.0.5 (which made it to the PPA repository an hour ago).
> > But dlm_controld.pcmk is stil
On Monday, 26 April 2010 11:46:43, Lars Ellenberg wrote:
...
> Note that
> http://ppa.launchpad.net/ubuntu-ha/lucid-cluster/ubuntu/pool/main/c/cluster-glue/cluster-glue_1.0.3+hg2366.orig.tar.gz
>
> unfortunately contains the binary incompatibility we reverted for glue
> 1.0.5. Details:
> https
r pacemaker
report: http://users.fbihome.de/~oheinz/ha-cluster/report_1.tar.bz2
core-file: http://users.fbihome.de/~oheinz/ha-cluster/core.2606.bz2
I cc:ed the ubuntu-ha list as it might be packaging related.
TIA,
Oliver
>
> On Sun, Apr 25, 2010 at 1:15 PM, Oliver Heinz wrote:
> >
to current 3.0.11 still
segfaulting. So trial and error doesn't seem to work. Maybe someone with a little
more understanding of what's going on can make an educated guess?
TIA,
Oliver
>
> On Sat, Apr 24, 2010 at 12:25 PM, Oliver Heinz wrote:
> > Hi,
> >
> > when rebootin