On 06/11/14 16:35, Kostiantyn Ponomarenko wrote:
> And that is like roulette: if we lose the node with the lowest nodeid,
> we lose everything.
> So I can lose only the node which doesn't have the lowest nodeid?
> And it's not useful in a 2-node cluster.
> Am I correct?
It may be useful. If you define roles of the no
On 2014-01-07 13:33, Bauer, Stefan (IZLBW Extern) wrote:
> How can I check if the current node I'm connected to is the active one?
>
> It should be parseable because i want to use it in a script.
Pacemaker is not limited to Active-Passive setups, in fact it has no
notion of 'Active' node – every node
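One way to script such a check is to locate a specific resource and compare against the local hostname. A minimal sketch, assuming a hypothetical resource named `vip`; the hard-coded sample line stands in for real `crm_resource --resource vip --locate` output, whose exact wording may vary between Pacemaker versions:

```shell
# Sketch: decide whether the local node currently hosts resource "vip".
# "vip" is a hypothetical resource name; the sample line below mimics
# 'crm_resource --resource vip --locate' output.
locate_output="resource vip is running on: node1"
active_node=${locate_output##*: }      # keep only the node name after ": "
if [ "$active_node" = "$(uname -n)" ]; then
    echo "local node hosts vip"
else
    echo "vip runs on $active_node"
fi
```

In a real script, replace the hard-coded sample with `locate_output=$(crm_resource --resource vip --locate)`.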
On Wed, 26 Jun 2013 18:38:37 +1000
Andrew Beekhof wrote:
> >> trace Jun 25 13:40:10 gio_read_socket(366):0: 0xa6c140.4 1
> >> (ref=1) trace Jun 25 13:40:10 lrmd_ipc_accept(89):0: Connection
> >> 0xa6d110 info Jun 25 13:40:10 crm_client_new(276):0: Connecting
> >> 0xa6d110 for uid=17 gid=0 p
On Wed, 26 Jun 2013 14:35:03 +1000
Andrew Beekhof wrote:
> Urgh:
>
> info Jun 25 13:40:10 lrmd_ipc_connect(913):0: Connecting to lrmd
> trace Jun 25 13:40:10 pick_ipc_buffer(670):0: Using max message
> size of 51200 error Jun 25 13:40:10
> qb_sys_mmap_file_open(92):2147483648: couldn't ope
On Tue, 25 Jun 2013 20:24:00 +1000
Andrew Beekhof wrote:
> On 25/06/2013, at 5:56 PM, Jacek Konieczny wrote:
>
> > On Tue, 25 Jun 2013 10:50:14 +0300
> > Vladislav Bogdanov wrote:
> >> I would recommend qb 1.4.4. 1.4.3 had at least one nasty bug which
> >&
On Tue, 25 Jun 2013 10:50:14 +0300
Vladislav Bogdanov wrote:
> I would recommend qb 1.4.4. 1.4.3 had at least one nasty bug which
> affects pacemaker.
Just tried that. It didn't help.
Jacek
___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
htt
On Tue, 25 Jun 2013 08:59:19 +0200
Jacek Konieczny wrote:
> On Tue, 25 Jun 2013 16:43:54 +1000
> Andrew Beekhof wrote:
> >
> > Ok, I was just checking Pacemaker was built for the running version
> > of libqb.
>
> Yes it was. corosync 2.2.0 and libqb 0.14.0 both on
On Tue, 25 Jun 2013 16:43:54 +1000
Andrew Beekhof wrote:
>
> Ok, I was just checking Pacemaker was built for the running version
> of libqb.
Yes it was. corosync 2.2.0 and libqb 0.14.0 both on the build system and
on the cluster systems.
Hmm… I forgot libqb is a separate package… I guess I shou
On Tue, 25 Jun 2013 10:10:13 +1000
Andrew Beekhof wrote:
> On 24/06/2013, at 9:31 PM, Jacek Konieczny wrote:
>
> >
> > After I have upgraded Pacemaker from 1.1.8 to 1.1.9 on a node I get
> > the following errors in my syslog and Pacemaker doesn't seem to be
>
After I have upgraded Pacemaker from 1.1.8 to 1.1.9 on a node I get the
following errors
in my syslog and Pacemaker doesn't seem to be able to start services on this
node.
Jun 24 13:19:44 dev1n2 crmd[5994]:error: qb_sys_mmap_file_open: couldn't
open file /dev/shm/qb-lrmd-request-5991-5994-
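When `qb_sys_mmap_file_open` fails like this, a first diagnostic step (a generic sketch, not specific to this report) is to inspect the tmpfs where libqb keeps its IPC ring buffers:

```shell
# Sketch: libqb creates its IPC ring buffers as /dev/shm/qb-* files.
# A full tmpfs, wrong permissions, or stale files left behind by
# crashed daemons can all make qb_sys_mmap_file_open fail.
df /dev/shm 2>/dev/null || true        # is the tmpfs full?
qb_files=$(ls /dev/shm/qb-* 2>/dev/null | wc -l)
echo "qb files present: $qb_files"     # remove stale ones only with the cluster stack stopped
```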
On Thu, 13 Jun 2013 15:50:26 +0400
Andrey Groshev wrote:
> 11.06.2013, 22:52, "Michael Schwartzkopff" :
>
> On Tuesday, 11 June 2013, 22:33:32, Andrey Groshev wrote:
>
> > Hi,
>
> > I want to make Postgres cluster.
>
> > As far as I understand, for the proper functioning of the cluster
> >
On Fri, 29 Mar 2013 11:37:37 +1100
Andrew Beekhof wrote:
> On Thu, Mar 28, 2013 at 10:43 PM, Rainer Brestan
> wrote:
> > Hi John,
> > to get Corosync/Pacemaker running during anaconda installation, I
> > have created a configuration RPM package which does a few actions
> > before starting Corosy
On Mon, 25 Mar 2013 20:01:28 +0100
"Angel L. Mateo" wrote:
> >quorum {
> > provider: corosync_votequorum
> > expected_votes: 2
> > two_node: 1
> >}
> >
> >Corosync will then manage quorum for the two-node cluster and
> >Pacemaker
>
> I'm using corosync 1.1 which is the one provided
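For corosync 1.x, which predates the votequorum `two_node` option quoted above, the usual two-node workaround sits at the Pacemaker layer instead. A configuration sketch, not a drop-in recipe; note that with quorum ignored, fencing becomes the only remaining protection against split brain:

```
crm configure property no-quorum-policy=ignore
crm configure property stonith-enabled=true
```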
On Mon, 25 Mar 2013 13:54:22 +0100
> My problem is how to avoid split brain situation with this
> configuration, without configuring a 3rd node. I have read about
> quorum disks, external/sbd stonith plugin and other references, but
> I'm too confused with all this.
>
> For example, [
On Wed, 06 Mar 2013 22:41:51 +0100
Sven Arnold wrote:
> In fact, disabling the udev rule
>
> SUBSYSTEM=="block", ACTION=="add|change",
> ENV{ID_FS_TYPE}=="lvm*|LVM*",\ RUN+="watershed sh -c '/sbin/lvm
> vgscan; /sbin/lvm vgchange -a y'"
>
> seems to resolve the problem for me.
This rule looks l
Hi,
It used to be possible to access Pacemaker's CIB as any user in
the 'haclient' group, but after one of the upgrades it stopped working
(I didn't care much about this issue then, so I cannot recall the exact
point). Now I would like to restore the cluster state overview
functionality in
On Thu, 24 Jan 2013 09:04:14 +0100
Jacek Konieczny wrote:
> I should probably upgrade my CIB somehow
Indeed. 'cibadmin --upgrade --force' solved my problem.
Thanks for all the hints.
Greets,
Jacek
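A quick way to confirm that the schema upgrade took effect is to check the CIB's `validate-with` attribute. A sketch: the sample header below stands in for real `cibadmin --query` output, and the schema name `pacemaker-1.2` is illustrative:

```shell
# Sketch: after 'cibadmin --upgrade --force', the CIB header's
# validate-with attribute should name the new schema.
# Sample header shown; in practice use: cibadmin --query | head -n 1
cib_header='<cib validate-with="pacemaker-1.2" epoch="5" num_updates="0"/>'
schema=$(echo "$cib_header" | sed -n 's/.*validate-with="\([^"]*\)".*/\1/p')
echo "current schema: $schema"
```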
Hi,
On Wed, 23 Jan 2013 18:52:20 +0100
Dejan Muhamedagic wrote:
> >
> >
>
> Not sure if an id can start with a digit.
Corosync node IDs are always digits-only.
> This should really work with versions >= v1.2.4
Yeah… I have looked into the crmsh code and it has explicit support for
On Wed, 23 Jan 2013 16:44:45 +0100
Lars Marowsky-Bree wrote:
> On 2013-01-23T16:31:20, Jacek Konieczny wrote:
>
> > I have recently upgraded Pacemaker on one of my clusters from
> > 1.0.something to 1.1.8 and installed crmsh to manage it as I used
> > to.
>
> It
Hi,
I have recently upgraded Pacemaker on one of my clusters from
1.0.something to 1.1.8 and installed crmsh to manage it as I used to.
crmsh mostly works for me, until I try to change the configuration with
'crm configure'. Any change, even a trivial one, shows verification
errors and fails to commit:
On Thu, Nov 01, 2012 at 11:05:04AM +1100, Andrew Beekhof wrote:
> On Thu, Nov 1, 2012 at 7:40 AM, Jacek Konieczny wrote:
> > On Wed, Oct 31, 2012 at 05:33:03PM +1100, Andrew Beekhof wrote:
> >> I haven't seen that before. What version?
> >
> > Pacemaker 1.1.8, coros
On Wed, Oct 31, 2012 at 05:33:03PM +1100, Andrew Beekhof wrote:
> I haven't seen that before. What version?
Pacemaker 1.1.8, corosync 2.1.0, cluster-glue 1.0.11
> On Wed, Oct 31, 2012 at 12:42 AM, Jacek Konieczny wrote:
> > Hello,
> >
> > Probably this is not a critica
Hello,
Probably this is not a critical problem, but it became annoying during
my cluster setup and testing:
Whenever I restart corosync with 'systemctl restart corosync.service' I
get a message about stonithd crashing with SIGSEGV:
> stonithd[3179]: segfault at 10 ip 00403144 sp 7fffe