Hello,
Can somebody tell me the status of this issue?
Regards,
Vlad.
On 06/10/14 04:23, renayama19661...@ybb.ne.jp wrote:
Hi Andrew,
lrmd[1632]: error: crm_abort: crm_glib_handler: Forked child 1840 to record non-fatal assert at logging.c:73 : Source ID 51 was not found when attempting to r
On Sat, 25 Oct 2014 19:11:02 -0400
Digimer wrote:
> On 25/10/14 06:35 PM, Vladimir wrote:
> > On Sat, 25 Oct 2014 17:30:07 -0400
> > Digimer wrote:
> >
> >> On 25/10/14 05:09 PM, Vladimir wrote:
> >>> Hi,
> >>>
> >>> currently
On Sat, 25 Oct 2014 17:30:07 -0400
Digimer wrote:
> On 25/10/14 05:09 PM, Vladimir wrote:
> > Hi,
> >
> > Currently I'm testing a 2-node setup using Ubuntu Trusty.
> >
> > # The scenario:
> >
> > All communication links between the 2 nodes are cu
Hi,
Currently I'm testing a 2-node setup using Ubuntu Trusty.
# The scenario:
All communication links between the 2 nodes are cut off. This results
in a split-brain situation and both nodes bring their resources online.
When the communication links come back, I see the following behaviour:
On drbd l
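(For reference: once the links are back, a DRBD split brain is usually resolved by hand. A sketch only, assuming a DRBD resource named r0 and that one node has been chosen as the victim whose changes are discarded:)
# On the victim node (its local changes are thrown away):
drbdadm secondary r0
drbdadm connect --discard-my-data r0   # DRBD 8.4 syntax; 8.3 uses:
                                       # drbdadm -- --discard-my-data connect r0
# On the survivor, only if it shows StandAlone:
drbdadm connect r0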
On Thu, Jun 12, 2014 at 2:44 AM, Jay Pipes wrote:
> On 05/26/2014 11:16 AM, Vladimir Kuklin wrote:
>
>> Hi all
>>
>> We are working on HA solutions for OpenStack(-related) services and
>> figured out that sometimes we need clones to be notified if one of the
fies other clones on node offline/fence/cold_shutdown?
--
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com <http://www.mirantis.ru/>
www.mirantis.ru
vkuk
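(A minimal crm sketch of the notification mechanism under discussion, with hypothetical names. Note that with notify=true the agent only receives notify calls around clone start/stop/promote events, which is exactly why node fence/cold-shutdown cases are being raised here:)
primitive p_service ocf:pacemaker:Dummy
clone cl_service p_service \
    meta notify="true" interleave="true"
# The agent's "notify" action can then inspect variables such as
# $OCF_RESKEY_CRM_meta_notify_stop_uname to learn which peers stopped.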
anks in advance.
Kind regards
Vladimir
On Wed, 19 Mar 2014 16:09:12 -0500
Chris Feist wrote:
> On 03/19/2014 09:17 AM, Vladimir wrote:
> > Hey everyone,
> >
> > does anybody know if there is pcs already available on debian
> > wheezy?
> >
> > I first tried to ask on debian-ha-maintainers (su
On Wed, 19 Mar 2014 15:32:50 +0100
Kristoffer Grönlund wrote:
> On Wed, 19 Mar 2014 15:17:21 +0100
> Vladimir wrote:
>
> > Hey everyone,
> >
> > does anybody know if there is pcs already available on debian
> > wheezy?
> >
> > I first tried to ask
heezy (if
possible at all).
Is it possible to first use crmsh and later switch to pcs? Is the
actual CIB the same, with crmsh and pcs just being different frontends
(like cibadmin)?
Thanks in advance.
Kind regards
Vladimir
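(Both shells do edit the same underlying CIB XML; a quick way to see that for yourself, assuming both tools are installed:)
cibadmin --query > /tmp/cib.xml   # the raw CIB XML itself
crm configure show                # crmsh rendering of the same CIB
pcs cluster cib                   # pcs dump of the same XML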
Hi,
try the kamailio mailing list. The OCF script for kamailio was posted
and discussed in this thread:
[sr-dev] kamailio OCF resource agent for pacemaker - Jan 7 10:12:58
CET 2014
http://lists.sip-router.org/pipermail/sr-dev/2014-January/022639.html
Hello everyone,
Is there a built-in mechanism in Pacemaker to trigger a pre or post
script, or do the OCF resource agents provide something like that?
I could also create a kind of dummy resource ordered after a primitive
resource. But I asked myself if there is another/better way.
Thanks.
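(A sketch of the dummy-resource approach mentioned above, in crm syntax with hypothetical names: a second resource, whose agent's start action would run the post-script, colocated with and ordered after the real one:)
primitive p_app ocf:heartbeat:apache \
    params configfile="/etc/apache2/apache2.conf"
primitive p_post_hook ocf:pacemaker:Dummy   # or a small custom OCF agent
colocation col_hook_with_app inf: p_post_hook p_app
order ord_app_then_hook inf: p_app p_post_hook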
does not appear to exist in configuration
[root@sbct1 ~]# crm_node --list
sbct1 sbct2
Thanks in advance,
-Vladimir
anaged" status on the node? is it
possible to force this behavior in any way?
Here some specs of the software used on our cluster nodes:
node1:~# lsb_release -d && dpkg -l pacemaker | awk '/ii/{print $2,$3}'
&& uname -ri Descriptio
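(Two common ways to get there, with a hypothetical node name; exact subcommand spelling varies with the crmsh/pacemaker version:)
crm node standby node1    # evacuate: resources are stopped/moved off the node
crm_attribute --node node1 --name maintenance --update true
                          # maintenance: resources keep running but unmanaged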
On Fri, 8 Mar 2013 19:48:11 +0100
Lars Marowsky-Bree wrote:
> > What are your experiences? Is it possible to combine Resource Groups
> > and colocations? Or do I have to give up Resource Groups when using
> > colocations? If so I maybe have to restructure my resource setup to
> > colocations only.
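(They can be combined; a group is itself just shorthand for pairwise colocation plus ordering. A sketch with hypothetical resources:)
group g_web p_ip p_fs p_apache
# behaves the same as the explicit constraints:
colocation col_fs_with_ip inf: p_fs p_ip
colocation col_apache_with_fs inf: p_apache p_fs
order ord_ip_then_fs inf: p_ip p_fs
order ord_fs_then_apache inf: p_fs p_apache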
On Fri, 8 Mar 2013 12:08:05 +0100
Lars Marowsky-Bree wrote:
> On 2013-03-08T11:59:33, Vladimir wrote:
>
> > Colocations were exactly what I was trying to avoid. The setup is
> > planned to grow to >15 resources (and an upper limit is not defined).
> > I think it would get
On Fri, 8 Mar 2013 11:05:01 +0100
Lars Marowsky-Bree wrote:
> On 2013-03-07T21:34:47, Vladimir wrote:
>
> > All resources are only able to run if they are distributed in the
> > right combination. A working example could look like:
>
> The algorithm is somewhat simplistic,
Hey everyone,
---
I built up a two node test setup using utilization attributes. In brief
it consists of:
node-1: provides cores="4"
node-2: provides cores="4"
res-dummy01 requires cores="1"
res-dummy02 requires cores="2"
res-dummy03 requires cores="3"
res-dummy04 requires cores="2"
---
All res
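(In crm syntax the setup above would look roughly like this; note that placement-strategy must be set, or the utilization attributes are ignored:)
node node-1 utilization cores="4"
node node-2 utilization cores="4"
primitive res-dummy01 ocf:pacemaker:Dummy utilization cores="1"
primitive res-dummy02 ocf:pacemaker:Dummy utilization cores="2"
primitive res-dummy03 ocf:pacemaker:Dummy utilization cores="3"
primitive res-dummy04 ocf:pacemaker:Dummy utilization cores="2"
property placement-strategy="balanced"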
That's why I'm wondering! The "crm resource cleanup fs_movies" doesn't
work, but running the same with crm_resource DOES work.
Regards,
Vladimir.
On Thu, 2012-11-15 at 09:26 +0100, Dejan Muhamedagic wrote:
> On Wed, Nov 14, 2012 at 07:45:33PM +0100, Vladimir Elis
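(For the record, the direct invocation that worked here; long options as in pacemaker 1.1, while 1.0.x spells them -C -r, with -H for the host:)
crm_resource --cleanup --resource fs_movies
# optionally limited to one node (name taken from later in the thread):
crm_resource --cleanup --resource fs_movies --node srv2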
t (I thought) we fixed it quite some time ago.
>
> On Thu, Nov 15, 2012 at 5:45 AM, Vladimir Elisseev wrote:
> > Thanks! I've tried "crm resource cleanup fs_movies", but it didn't work,
> > while your command did the trick. What is the difference? However,
I'm running pacemaker-1.0.10. This is the latest "stable" version for
Gentoo.
Regards,
Vladimir.
On Thu, 2012-11-15 at 14:00 +1100, Andrew Beekhof wrote:
> What version of pacemaker?
> I recall that error but (I thought) we fixed it quite some time ago.
>
> On T
ribute does not exist
Regards,
Vlad.
On Wed, 2012-11-14 at 13:35 -0500, David Vossel wrote:
>
> - Original Message -
> > From: "Vladimir Elisseev"
> > To: "pacemaker"
> > Sent: Wednesday, November 14, 2012 12:03:09 PM
> > Subject: [Pac
stickiness: Forcing fs_movies away from srv2 after 100 failures (max=100)
I'd appreciate it if somebody would help me clean up these errors.
Regards,
Vladimir.
stems and I had to mask glib-2.32.4-r1 and use
> glib-2.30.3 in order to get corosync/pacemaker to work. I had the same
> problem. Nodes couldn't talk to each other. Sorry I didn't notice this
> thread earlier, as I might have been able to help.
>
> Pat.
>
>
lib-2.32.4-CVE-2012-3524.patch?view=markup
For the moment I simply masked this particular glib version. Hopefully
I'll be able to find time to do a complete debug as you described.
Regards,
Vlad.
On Sun, 2012-11-04 at 13:54 +0300, Vladislav Bogdanov wrote:
> 03.11.2012 18:22, Vladi
Vladislav,
Thanks for the hint! Upgrading glib from 2.30.3 to 2.32.4 triggers this
behavior of corosync. Do you know where I can find more info regarding
this problem?
Vlad.
On Sat, 2012-11-03 at 16:22 +0300, Vladislav Bogdanov wrote:
> 03.11.2012 15:26, Vladimir Elisseev wrote:
> >
Oops... Is this a known fact? What is the basis of your guess?
Regards,
Vlad.
On Sat, 2012-11-03 at 16:22 +0300, Vladislav Bogdanov wrote:
> 03.11.2012 15:26, Vladimir Elisseev wrote:
> > I've been able to reproduce the problem. Herewith I've attached
> > crm_report
Yes, hb_report is there, thanks!
On Thu, 2012-11-01 at 11:40 +1100, Andrew Beekhof wrote:
> On Tue, Oct 30, 2012 at 4:35 PM, Vladimir Elisseev wrote:
> > Thanks for trying to help! Currently I can't provide crm_report from the
> > failed node, as I've decided to resto
ds to
this behavior. BTW, how can I create crm_report? I can't find this
binary anywhere on the system. Let me know what kind of input you'll
need if I'm able to reproduce this problem.
Regards,
Vlad.
On Tue, 2012-10-30 at 16:00 +1100, Andrew Beekhof wrote:
> On Sun, Oct 28,
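(crm_report is a shell script shipped with pacemaker, typically in /usr/sbin and packaged with the pacemaker command-line tools; a typical call, with placeholder times and destination:)
crm_report -f "2012-10-28 10:00" -t "2012-10-28 11:00" /tmp/report-node1
# bundles the logs, CIB and cluster status for that window into a tarball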
Hello,
I'm having a problem: after a reboot, one cluster node can't join the
cluster anymore. From the log file I can't understand what is actually
going on. I can only see that cib and crm are both respawned frequently.
I'd appreciate any help. Below is the relevant part of the log file:
Oct 28 10:52:22
Hello,
I'm trying to find a good solution to properly shut down a DRBD
(master/slave) cluster in case of outages. The problem, at least for me,
is that if you simply shut down one node, when it comes back you'll end
up with a split-brain scenario in 99% of the cases. So for me the proper
logic in cas
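(One approach that avoids the problem: let the cluster demote and stop everything before the OS goes down. A sketch with a hypothetical node name:)
crm node standby node1    # cluster demotes DRBD and stops resources here
# wait until "crm_mon -1" shows the node in standby and DRBD demoted
shutdown -h now
# after the node boots and rejoins:
crm node online node1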
The only solution I know of is to change the *clone-node-max* param on the fly.
See http://oss.clusterlabs.org/pipermail/pacemaker/2010-November/008148.html
for details.
Vladimir
On Tue, Nov 9, 2010 at 7:51 PM, Chris Picton wrote:
> From a previous thread (crm_resource - migrating/
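(The on-the-fly change itself, with a hypothetical clone name; crm_resource's --meta flag targets meta attributes such as clone-node-max:)
crm_resource --resource ClusterIP-clone --meta \
    --set-parameter clone-node-max --parameter-value 2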
Hi everybody,
I've fixed my problem by inserting the following code into the
application consistency checking script.
Please let me know whether you found a better solution.
PS. Thanks for replies
Vladimir
...
check_marker(){
# Hardcoded to avoid extra forks.
MARKER_CO
Hello,
On Fri, Oct 29, 2010 at 12:35 PM, Dan Frincu wrote:
> Hi,
>
>
> Vladimir Legeza wrote:
>
> Hello folks.
>
> I'm trying to set up four IP-balanced nodes, but I didn't find the right
> way to balance load between nodes when some of them have failed.
>
Hello folks.
I'm trying to set up four IP-balanced nodes, but I didn't find the right
way to balance load between nodes when some of them have failed.
I've done:
[r...@node1 ~]# crm configure show
node node1
node node2
node node3
node node4
primitive ClusterIP ocf:heartbeat:IPaddr2 \
params ip="
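(The usual shape of such a setup, reconstructed as a sketch since the post is cut off here; the address and names are placeholders. Per the clone-node-max thread above, raising clone-node-max is what lets surviving nodes take over instances from failed ones:)
primitive ClusterIP ocf:heartbeat:IPaddr2 \
    params ip="10.0.0.100" cidr_netmask="24" clusterip_hash="sourceip" \
    op monitor interval="30s"
clone cl_ClusterIP ClusterIP \
    meta globally-unique="true" clone-max="4" clone-node-max="1"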