You only show a piece of your config. I think you have the XML of your VM under Filesystem_CDrive1; that filesystem needs to be available on both nodes.
2015-12-04 17:14 GMT+01:00 Klecho :
> Hi list,
> My issue is the following:
>
> I have very stable cluster, using Corosync 2.1.0.26 and Pacemaker 1
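As a sketch, a shared filesystem for VM images could look something like this in crmsh (the device, mount point and fstype here are invented; an active/active mount needs a cluster filesystem such as OCFS2):

```
primitive Filesystem_CDrive1 ocf:heartbeat:Filesystem \
    params device="/dev/vg0/lv_vmstore" directory="/var/lib/libvirt/images" \
        fstype="ocfs2"
clone fs_clone Filesystem_CDrive1 meta interleave="true"
```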
I don't remember well, but I think on Red Hat 6.5 you need to use cman+pacemaker. Please post your config, and make sure you have fencing configured.
2015-11-24 11:18 GMT+01:00 Cayab, Jefrey E. :
> Hi all,
>
> I searched online but couldn't find a detailed answer. OS is RHEL 6.5.
>
> Problem
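On RHEL 6 with cman, fencing is declared in /etc/cluster/cluster.conf. A hedged sketch (node and device names are invented, and the fence device itself must be defined in a matching <fencedevices> section):

```
<clusternode name="node1" nodeid="1">
  <fence>
    <method name="ipmi">
      <device name="ipmi_node1"/>
    </method>
  </fence>
</clusternode>
```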
If I remember correctly, there is a way to give the Pacemaker processes real-time priority, but I don't know if that can help you. You need to tell us whether your server's high load is in kernel (system) space or user space.
2015-10-05 19:53 GMT+02:00 Radoslaw Garbacz
:
> Hi,
>
> I have a situation, when resource monitor operations timeout o
From what I see, he is using heartbeat.
2015-08-03 17:14 GMT+02:00 Thomas Meagher :
>
> Sounds similar to the issue I described here last week. We also had two
> nodes, and lost network connection between the two nodes while one was
> starting up after a fence. Although we had stonith resources
Are you sure your processes are not using /home/cluster/virt as their working directory?
I'm using SUSE 11 SP2 and I don't know if the agent is the same in Red Hat 6, but I think so. Anyway, to unmount the fs the script uses the following call chain: Filesystem_stop -> fs_stop -> signal_processes. In fs_stop
Map your cluster IPs to hostnames using /etc/hosts and try to use an example like this:
http://clusterlabs.org/doc/fr/Pacemaker/1.1-pcs/html/Clusters_from_Scratch/_sample_corosync_configuration.html
2015-04-25 12:19 GMT+02:00 Patrick Zwahlen :
>> Are you sure your cluster hostnames are ok?
>>
>> get_
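For example, a minimal /etc/hosts mapping could look like this (addresses and names are invented):

```
192.168.122.101   node1
192.168.122.102   node2
```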
Are you sure your cluster hostnames are ok?
get_node_name: Could not obtain a node name for corosync nodeid 2
2015-04-24 17:09 GMT+02:00 Patrick Zwahlen :
> Hi,
>
> I'm running a CentOS 7.0 2-nodes cluster providing iSCSI/SAN features. In
> order to upgrade to CentOS 7.1, I'm testing the whole p
Have you tried checking your system resources? I had the same message, "corosync [TOTEM ] Process pause detected for 5370 ms", some time ago; the problem was that kernel (system) CPU usage was very high. You can check with sar. My system was SUSE 11 SP1, like yours.
2015-04-15 14:06 GMT+02:00 Timi :
> Hi guys,
>
> w
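To tell kernel-side from user-side load, something like this can be used (the sar line below is a made-up sample; on a live system run `sar -u 1 5` instead):

```shell
# Made-up sample line in "sar -u" format:
# time AM/PM CPU %user %nice %system %iowait %steal %idle
sample='12:00:01 AM all 2.31 0.00 91.20 0.10 0.00 6.39'
# Field 6 is %system; flag anything above 80% as kernel-side overload
echo "$sample" | awk '{ if ($6 > 80) print "WARN: %system=" $6; else print "OK: %system=" $6 }'
# prints: WARN: %system=91.20
```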
need to configure the cluster
> fencing".
>
> On 11 February 2015 at 00:11, emmanuel segura wrote:
>>
>> try to change your controld daemon
>>
>> OCF_ROOT=/usr/lib/ocf /usr/lib/ocf/resource.d/pacemaker/controld meta-data
>>
>>
>>
>> The
ILOVER-ADDR (ocf::heartbeat:IPaddr2): Started nodo1
> Master/Slave Set: WebDataClone [WebData]
> Masters: [ nodo1 ]
> Slaves: [ nodo2 ]
> Clone Set: dlm_clone [dlm]
> Started: [ nodo2 nodo1 ]
>
>
> El estado de DRBD es:
>
> Nodo 1
> 1: cs:
I'm using debian 7
apt-cache show gfs-pcmk
..
This package contains the GFS module for pacemaker.
...
2015-02-10 8:55 GMT+01:00 José Luis Rodríguez Rodríguez :
> Hello,
>
> I would like to create an active/active cluster by using pacemaker and
> corosync on Debian. I have followed the do
}
:::
Thanks
2015-01-30 12:47 GMT+01:00 Lars Marowsky-Bree :
> On 2015-01-30T08:29:18, emmanuel segura wrote:
>
>> from one of two:
>>
>> /dev/sdX and /dev/sdY
>>
>> sbd -d "/dev/sdX;/dev/sdY" message node1 exit
>> sbd -d
from one of two:
/dev/sdX and /dev/sdY
sbd -d "/dev/sdX;/dev/sdY" message node1 exit
sbd -d "/dev/sdX;/dev/sdY" message node2 exit
sbd -d /dev/sdA create && sbd -d /dev/sdB create
Now in every cluster node
sbd -d "/dev/sdA;/dev/sdB" -D -W watch
If you specify your sbd devices in /etc/sysconf
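A sketch of what that sysconfig file can contain (SUSE-style variable names; the device paths are the same placeholders as above):

```
SBD_DEVICE="/dev/sdA;/dev/sdB"
SBD_OPTS="-W"
```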
please show your configuration and your logs.
2015-01-27 14:24 GMT+01:00 Andrea :
> emmanuel segura writes:
>
>>
>>
>> if you are using cman+pacemaker you need to enabled the stonith and
> configuring that in you crm config
>>
>>
>> 2015-01-27 1
If you are using cman+pacemaker you need to enable stonith and configure it in your crm config.
2015-01-27 14:05 GMT+01:00 Vinod Prabhu :
> is stonith enabled in crm conf?
>
> On Tue, Jan 27, 2015 at 6:21 PM, emmanuel segura
> wrote:
>
>> When a node is dead th
When a node is dead the registration key is removed.
2015-01-27 13:29 GMT+01:00 Andrea :
> emmanuel segura writes:
>
>>
>> sorry, but i forgot to tell you, you need to know the fence_scsi
>> doesn't reboot the evicted node, so you can combine fence_vmware with
>&
Sorry, I forgot to tell you: you need to know that fence_scsi doesn't reboot the evicted node, so you can combine fence_vmware with fence_scsi as the second option.
2015-01-27 11:44 GMT+01:00 emmanuel segura :
> In normal situation every node can in your file system, fence_scsi is
>
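In crmsh, combining the two agents as fencing levels can be sketched like this (the stonith resource and node names are invented; level 1 is tried first):

```
fencing_topology \
    node1: fence-scsi fence-vmware \
    node2: fence-scsi fence-vmware
```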
In a normal situation every node can access your file system; fence_scsi is used when your cluster is in split-brain, when one node doesn't communicate with the other. I don't think it is a good idea.
2015-01-27 11:35 GMT+01:00 Andrea :
> Andrea writes:
>
>>
>> Michael Schwartzkopff ...> writes:
>>
>> >
Maybe you can use sar to check whether your server was short on resources?
Jan 25 04:10:30 lb02 lrmd: [9972]: info: RA output:
(Nginx-rsc:monitor:stderr) Killed
/usr/lib/ocf/resource.d//heartbeat/nginx: 910:
/usr/lib/ocf/resource.d//heartbeat/nginx: Cannot fork
2015-01-26 18:22 GMT+01:00 Oscar Sa
If you want to see the timestamps of your resource operations without using failure-timeout (which is only used to reset the failcount), you can use the "-t" option:
crm_mon -Arf1t
..
..
Operations:
* Node node01:
sambaip: migration-threshold=100
+ (30) start: rc=0 (ok)
Dumm
de-vmdata_fs vmdata_fs 200: master
> location SecondaryNode-libvirt libvirt 10: slave
> location SecondaryNode-vmdata_fs vmdata_fs 10: slave
> colocation libvirt-with-fs inf: libvirt vmdata_fs
> colocation services_colo inf: vmdata_f
I think Pacemaker doesn't care about the sbd resource status when it needs to make a fencing call; that's what I think, but I hope someone will give me more information.
Thanks
2014-11-26 15:11 GMT+01:00 Dejan Muhamedagic :
> On Wed, Nov 26, 2014 at 11:13:41AM +0100, emmanuel segu
aging delay: 40
Thanks
2014-11-26 10:26 GMT+01:00 Dejan Muhamedagic :
> Hi,
>
> On Tue, Nov 25, 2014 at 04:20:32PM +0100, e
Hi list,
Last night I had a cluster in a fencing race using sbd as the stonith device. I would like to know the effect of using start-delay in my stonith resource in this way:
primitive stonith-sbd stonith:external/sbd \
params sbd_device="/dev/mapper/SBD \
op start interval=
You need to configure fencing in Pacemaker.
2014-11-14 13:04 GMT+01:00 Heiner Meier :
> Hello,
>
> i now have configured fencing in drbd:
>
> disk {
> fencing resource-only;
> }
> handlers {
> fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
And you need to configure your cluster fencing, and be sure to configure DRBD to use the Pacemaker fencing:
http://www.drbd.org/users-guide/s-pacemaker-fencing.html
2014-11-13 14:58 GMT+01:00 Dejan Muhamedagic :
> Hi,
>
> On Thu, Nov 13, 2014 at 01:57:08PM +0100, Heiner Meier wrote:
I think you don't have fencing configured in your cluster.
2014-11-10 17:02 GMT+01:00 Daniel Dehennin :
> Daniel Dehennin writes:
>
>> Hello,
>
> Hello,
>
>> I just have an issue on my pacemaker setup, my dlm/clvm/gfs2 was
>> blocked.
>>
>> The “dlm_tool ls” command told me “wait ringid”.
>
> It
For guest fencing you can use something like this: http://www.daemonzone.net/e/3/. Rather than running a full cluster stack in your guests, you can try pacemaker-remote for your virtual guests.
2014-10-02 18:41 GMT+02:00 Daniel Dehennin :
> Hello,
>
> I'm setting up a 3 nodes OpenNebula[1] cluster
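A sketch of the pacemaker-remote approach for a guest (the VM name and config path are invented; the remote-node meta attribute is what turns the guest into a cluster node):

```
primitive vm1 ocf:heartbeat:VirtualDomain \
    params config="/etc/libvirt/qemu/vm1.xml" hypervisor="qemu:///system" \
    meta remote-node="vm1"
```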
I don't know if you can use the Dummy primitive as a master/slave (MS) resource:
egrep "promote|demote" /usr/lib/ocf/resource.d/pacemaker/Dummy
echo $?
1
2014-10-02 12:02 GMT+02:00 Andrei Borzenkov :
> According to documentation (Pacemaker 1.1.x explained) "when
> [Master/Slave] the resource is started, it must come up in t
Try using the interleave meta attribute in your clone definition:
http://www.hastexo.com/resources/hints-and-kinks/interleaving-pacemaker-clones
2014-09-28 9:56 GMT+02:00 Andrei Borzenkov :
> I have two node cluster with single master/slave resource (replicated
> database) using pacemaker+openais on
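A minimal sketch in crmsh (resource names are invented):

```
clone dlm_clone dlm \
    meta interleave="true"
```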
telnet oracle-test 3121 ?
2014-09-12 9:55 GMT+02:00 Саша Александров :
> Hi!
>
> I am trying to set up a cluster on two servers with several KVM virtual
> machines. (CentOS 6.5, Pacemaker 1.10 from native repo)
> Using this as guideline:
> http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html-single
systemctl enable pcsd.service ?
2014-09-09 9:37 GMT+02:00 Sihan Goi :
> Hi,
>
> I had a basic HA setup working with 2 nodes previously running a simple
> Apache web server on a private local network. However, I'm having trouble
> getting it to work right now, and I haven't changed anything other t
s will not affect sip2?
>
> sorry for my noob question but I must be careful as this is in production ;)
>
> So, "fence_bladecenter_snmp reboot" right?
>
> br
> miha
>
> Dne 8/19/2014 11:53 AM, piše emmanuel segura:
>
>> sorry,
>>
>> That
Why on sip2 cluster service is not running if
> still virual ip and etc are all properly running?
>
> tnx
> miha
>
>
> Dne 8/19/2014 9:08 AM, piše emmanuel segura:
>
>> Your config look ok, have you tried to use fence_bladecenter_snmp by
>> had for poweroff sp1?
>&
it:
> br
> miha
>
> Dne 8/18/2014 11:33 AM, piše emmanuel segura:
>>
>>
t; (id:c_mysql_on_drbd)
> opensips with g_mysql (INFINITY) (id:c_opensips_on_mysql)
>
> Cluster Properties:
> cluster-infrastructure: cman
> dc-version: 1.1.10-14.el6-368c726
> no-quorum-policy: ignore
> stonith-enabled: true
> Node Attributes:
> sip1: standby
8/14/2014 2:35 PM, piše emmanuel segura:
>
>> Node sip2: UNCLEAN (offline) is unclean because the cluster fencing
>> failed to complete the operation
>>
>> 2014-08-14 14:13 GMT+02:00 Miha :
>>>
>>> hi.
>>>
>>> another thing.
>>>
t; everything was working fine till now. Now I need to find out what realy
>> heppend beffor I do something stupid.
>>
>>
>>
>> tnx
>>
>> Dne 8/14/2014 1:58 PM, piše emmanuel segura:
>>>
>>> are you sure your cluster fencing is worki
are you sure your cluster fencing is working?
2014-08-14 13:40 GMT+02:00 Miha :
> Hi,
>
> I noticed today that I am having some problem with cluster. I noticed the
> master server is offilne but still virutal ip is assigned to it and all
> services are running properly (for production).
>
> If I d
Hello Lars,
I was just trying in my virtual lab.
Thanks anyway
2014-07-15 20:48 GMT+02:00 Lars Marowsky-Bree :
> On 2014-07-15T14:03:00, emmanuel segura wrote:
>
>> Thanks Lars for you answer,
>>
>> I was using crm_resource, because using crmsh i can't start a r
Thanks Lars for your answer. I was using crm_resource because with crmsh I can't start a resource on a specific node.
example:
crm(live)resource# start Dummy2 node02
usage: start
2014-07-15 12:59 GMT+02:00 Lars Marowsky-Bree :
> On 2014-07-15T12:44:49, emmanuel segura wrote:
&g
ot;stopped" \
params target_role="stopped"
:::
So, from my point of view, the man page is wrong for two reasons: the meta attribute target_role should be target-role, and the --meta option is missing.
2014-07-14
02:00 Lars Marowsky-Bree :
> On 2014-07-10T21:57:24, emmanuel segura wrote:
>
>> I know heartbeat is deprecated, but we have an old cluster, and today
>> i tryed to disable the cluster monitoring for maintance on a resource
>> using the following command "crm_resource -r myre
blackblox
2014-07-09 0:52 GMT+02:00 Andrew Beekhof :
>
> On 9 Jul 2014, at 2:51 am, emmanuel segura wrote:
>
>> Hello,
>>
>> Reading the pacemaker Changelog i saw:
>>
>> - Features added since Pacemaker-1.1.10
>>
>> .
>> +
Hello List,
I know heartbeat is deprecated, but we have an old cluster, and today I tried to disable the cluster monitoring for maintenance on a resource using the following command: "crm_resource -r myresource -t primitive -p is_managed -v off". But after this the cluster stopped the resource, so m
Hello,
Reading the pacemaker Changelog i saw:
- Features added since Pacemaker-1.1.10
.
+ Core: Allow blackbox logging to be disabled with SIGUSR2
Is it possible now?
2014-04-04 3:45 GMT+02:00 Andrew Beekhof :
>
> On 24 Mar 2014, at 10:07 pm, emmanuel segura wrote:
http://clusterlabs.org/doc/en-US/Pacemaker/1.0/html/Pacemaker_Explained/s-failure-migration.html
2014-07-06 14:39 GMT+02:00 david :
> Hello everyone:
>
> What happened if an resource agent monitor return fail. Retry start it or
> move it to another node?
>
> Thanks
>
>
> __
You say "I'm trying to set up a three node cluster using pacemaker+corosync", but you are trying to use cman. Depending on your distro you can use a different cluster stack, e.g. cman+pacemaker or corosync+pacemaker.
2014-06-27 2:22 GMT+02:00 Vijay B :
> Hi,
>
> I'm trying to set up a three node cluster using pacemak
Why don't you use a group for the fs, ip and apache resources?
example:
colocation fs_colocation inf: mygroup ms_drbd:Master
order fs_order inf: ms_drbd:promote mygroup:start
2014-06-26 15:27 GMT+02:00 Xzarth :
> I have a pacemaker cluster with following config:
>
> crm(live)configure# show
>
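A sketch with invented resource names (the colocation and order lines follow the example above):

```
group mygroup fs_res ip_res apache_res
colocation fs_colocation inf: mygroup ms_drbd:Master
order fs_order inf: ms_drbd:promote mygroup:start
```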
I think your Terracotta script needs to be installed on every node, because Pacemaker checks whether the resource is running on more than one node; in any case the script should be present on all nodes.
2014-06-25 17:17 GMT+02:00 Digimer :
> The reason I mentioned it now is that, many times, strange proble
did you try "apt-get install corosync pacemaker"
2014-06-24 23:50 GMT+02:00 Vijay B :
> Hi,
>
> This is my first time using/trying to setup pacemaker with corosync and I'm
> trying to set it up on 2 ubuntu 13.10 VMs. I'm following the instructions in
> this link - http://clusterlabs.org/quickstart
o we need Thanachit to clarify.
>>
>> If EL7, then no, there is no cman. You can use pcsd to form the cluster
>> initially, but also, that version should not attempt to start cman at all,
>> so I am still thinking it is RHEL/CentOS 6.
>>
>>> On 16/06/14 12
From the previous output I saw this: "Version: 1.1.10-5.el7-9abe687", so my question is: is cman used on Red Hat 7?
2014-06-16 18:10 GMT+02:00 Digimer :
> You need to setup a skeleton cluster.conf file. You can use the 'ccs' tool,
> here is an example (I use for my cluster, adjust the names to sui
If you are using Red Hat 7, you should use corosync+pacemaker as the cluster stack.
2014-06-16 13:13 GMT+02:00 Thanachit Wichianchai :
> Hello Pacemaker guys,
>
> I am doing a proof of concept of active/passive HA cluster on RHEL 6.5
> Since the customer who I am working for will finally get support f
or powered off, can you suggest if using ocfs2 in here is
> good option or not.
>
> Thank you
>
>
> On Tue, Jun 3, 2014 at 6:31 PM, emmanuel segura
> wrote:
>
>> maybe i wrong, but i think you forgot the cluster logs
>>
>>
>> 2014-06-03 14:34 GMT+02
Maybe I'm wrong, but I think you forgot the cluster logs.
2014-06-03 14:34 GMT+02:00 kamal kishi :
> Hi all,
>
> I'm sure many have come across same question and yes i've gone
> through most of the blogs and mailing list without much results.
> I'm trying to configure XEN HVM DOMU on DRBD r
I was doing a NIC firmware upgrade and forgot to stop the cluster on the node where I was working, but something strange happened: both nodes were fenced at the same time.
I'm using sbd as the stonith device, with the following parameters:
watchdog timeout = 10 ; msgwait = 20 ; stonith-timeout = 40 (pacemaker)
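As a rule of thumb (my assumption, not an official formula): msgwait should be roughly twice the watchdog timeout, and Pacemaker's stonith-timeout must be larger than msgwait, which the values above satisfy. The on-disk timeouts can be read back with:

```
sbd -d /dev/mapper/SBD dump
```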
This isn't related to your problem, but I saw this in your cluster config: primitive cluster-ntp lsb:ntp. I don't think it is a good idea to have ntp in failover (it is a local service); in a cluster the time needs to be synchronized on all nodes.
2014-05-22 9:31 GMT+02:00 Danilo Malcangio :
> Hi everyone
d Pacemaker configs as attached,
> still no results.
>
> The Exit Code is 3 now
>
> Have attached logs too
>
>
>
> On Thu, May 15, 2014 at 2:48 PM, emmanuel segura wrote:
>
>> Are you sure that your drbd work by hand? this is from your log
>>
>> May 15
"ocfs2" \
>
> instead of
>
> params device="/dev/drbd/by-res/r0" directory="/cluster" fstype="ocfs2" \
>
> even that failed.
>
> But my doubt is that i'm able to manually work with DRBD without any issue
> but why can't vi
;
address 10.1.1.32:7789;
meta-disk internal;
}
}
I don't know if this is the problem, but the resource r0 is declared outside the global tag.
2014-05-15 10:21 GMT+02:00 emmanuel segura :
> You don't declared your drbd resource r0 in the configuration, read this
> htt
You didn't declare your DRBD resource r0 in the configuration; read this:
http://www.drbd.org/users-guide/s-configure-resource.html
2014-05-15 9:33 GMT+02:00 kamal kishi :
> Hi All,
>
> My configuration is simple and straight, UBUNTU 12.04 used to run
> pacemaker.
> Pacemaker runs DRBD and OCFS2.
I found the monitor operation remains active if the resource is in unmanaged state. My stupid question: what is the relation between monitor and unmanaged? Sorry, I looked around and couldn't find any document about this.
2014-05-09 17:13 GMT+02:00 emmanuel segura :
> Hello List,
>
>
Hello List,
I would like to know if it's normal that Pacemaker performs the monitor action while the resource is in unmanaged state.
Thanks
___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker
Proj
Why are you using ssh as stonith? I don't think the fencing is working, because your nodes are in an unclean state.
2014-05-08 5:37 GMT+02:00 :
> Hi All,
>
> I composed Master/Slave resource of three nodes that set
> quorum-policy="freeze".
> (I use Stateful in Master/Slave resource.)
>
> ---
Hello Jan,
Thanks very much for your help :), i will try to read the patches you posted
Emmanuel
2014-05-05 16:14 GMT+02:00 Jan Friesse :
> Emmanuel,
>
> emmanuel segura napsal(a):
> > Helllo Jan,
> >
> > I'm using corosync+pacemaker on Sles 11 Sp1 and this is
7 GMT+02:00 Jan Friesse :
> Emmanuel,
>
> emmanuel segura napsal(a):
> > Hello Jan,
> >
> > Thanks for the explanation, but i saw this in my log.
> >
> >
> :::
en was lost but during gather state all nodes replied, then there is
> no change of membership and no need to fence.
>
> I believe your situation was:
> - one node is little overloaded
> - token lost
> - overload over
> - gather state
> - every node is alive
> -> no fencin
two hp blades
2014-04-30 7:46 GMT+02:00 Andrey Groshev :
>
>
> 24.04.2014, 21:12, "emmanuel segura" :
>
>
> Hello List,
>
> I have this two lines in my cluster logs, somebody can help to know what
> this means.
>
>
>
Hello Jan,
I found this problem on two HP blade systems, and the strange thing is the fencing was triggered :(
2014-04-25 9:27 GMT+02:00 Jan Friesse :
> Emanuel,
>
> emmanuel segura napsal(a):
>
> Hello List,
>>
>> I have this two lines in my cluster logs, somebody ca
Hello Jan,
Forget the last mail. I found this problem on two HP blade systems, and the strange thing is the fencing was not triggered :(, though it is enabled.
2014-04-25 18:36 GMT+02:00 emmanuel segura :
> Hello Jan,
>
> I found this problem in two hp blade system and the
Hello List,
I have these two lines in my cluster logs; can somebody help me understand what they mean?
::
corosync [TOTEM ] Process pause detected for 577 ms, flushing me
Thanks
This is ok
2014-04-07 14:10 GMT+02:00 Kristoffer Grönlund :
> On Mon, 7 Apr 2014 12:36:47 +0200
> emmanuel segura wrote:
>
> > Hello,
> >
> > Sorry if this has been asked, but i would like to know how do that,
> > using the crm shell.
> >
>
Hello,
Sorry if this has been asked, but I would like to know how to do that using the crm shell.
Thanks
Hello List,
I'm trying to install a virtual cluster using Red Hat 7 beta, and the first thing I noticed is this:
:::
[root@localhost ~]# cibadmi
ssic openais (with plugin)" \
> expected-quorum-votes="2" \
> stonith-enabled="false" \
> default-action-timeout="240"
>
>
> On Sun, Mar 30, 2014 at 3:23 PM, emmanuel segura wrote:
>
>> where is your cluster config?
>>
>>
>&
where is your cluster config?
2014-03-30 13:52 GMT+02:00 Dennis Zheleznyak :
> Hi,
>
> I had no experience with Corosync / Pacemaker before, therefore, I
> followed the following guide to setup a Corosync/Pacemaker HA solution for
> an apache website:
>
> http://www.unixmen.com/adding-deleting-c
Where is your config? Anyway, why does your log show you are using 127.0.0.1 as the cluster IP?
2014-03-27 17:54 GMT+01:00 Stefan Bauer :
> Hi Developers & Users,
>
> I'm trying to use Corosync 2.3.3 in combination with pacemaker 1.1 to
> build new packages for Debian but have problems to start pacemaker
But will it be implemented?
2014-03-24 2:22 GMT+01:00 Andrew Beekhof :
>
> On 24 Mar 2014, at 11:04 am, emmanuel segura wrote:
>
> > how can i turn off the debug without reboot the pacemaker?
>
> you cant.
>
> >
> >
> > 2014-03-24 0:36 GMT+01:00 Andr
How can I turn off debug logging without restarting Pacemaker?
2014-03-24 0:36 GMT+01:00 Andrew Beekhof :
>
> On 20 Mar 2014, at 11:24 pm, Andreas Mock wrote:
>
> > Hi all,
> >
> > today I faced a problem which I couldn't solve reading
> > several man pages and other found hint on the web.
> >
> >
There is no sbd on CentOS (Red Hat). In your case you can use cman+pacemaker; after that, create a volume group using the multipath devices, then create the filesystem on the new volume group and copy the Oracle data. If you are using HP hardware you can use iLO for fencing and i
do you have stonith configured?
2014-03-18 13:07 GMT+01:00 Alex Samad - Yieldbroker <
alex.sa...@yieldbroker.com>:
> Im not expert but
>
>
>
> Current DC: linux02 - partition WITHOUT quorum
> Version: 1.1.10-14.el6_5.2-368c726
> 2 Nodes configured, 2 expected votes
>
>
>
> I think your 2nd node
Maybe you are missing the CRM development libraries.
2014-03-14 13:39 GMT+01:00 Stephan Buchner :
> Hey everyone!
> I am trying to compile pacemaker from source for some time - but i keep
> getting the same errors, despite using different versions.
>
> I did the following to get this:
>
> 1. ./autogen.sh
> 2.
I see this in your pacemaker config:
stonith-enabled="false"
2014-03-05 11:54 GMT+01:00 Anne Nicolas :
> Le 05/03/2014 11:26, emmanuel segura a écrit :
> > because you don't have fencing configured
>
> As said I've added it in /etc/drbd.d/global_comm
because you don't have fencing configured
2014-03-05 9:28 GMT+01:00 Anne Nicolas :
> Hi
>
> I'm having trouble setting a very simple cluster with 2 nodes. After all
> reboot I'm getting split brain that I have to solve by hand then.
> Looking for a solution for that one...
>
> Both nodes have 4
example: colocation ipwithpgsql inf: virtualip psql:Master
2014-02-17 6:25 GMT+01:00 Tomasz Kontusz :
>
>
> Andrew Beekhof napisał:
> >
> >On 16 Feb 2014, at 6:53 am, emmanuel segura wrote:
> >
> >> i think, if you use pacemaker_remote inside the containe
I think if you use pacemaker_remote inside the container, the container will be a normal node of your cluster, so you can run pgsql + the VIP in it.
2014-02-15 19:40 GMT+01:00 Tomasz Kontusz :
> Hi
> I'm setting up a cluster which will use OpenVZ containers for separating
> resource's environments.
>
https://bugzilla.redhat.com/show_bug.cgi?id=566573
2014/1/24 Parveen Jain
> while starting the cman(service cman start), I was getting following error:
>
> dlm_controld[2665]: daemon cpg_join error retrying
> gfs_controld[2733]: daemon cpg_join error retrying
>
> Mine is just a 2 node cluster,
Maybe you are missing the log from when the node was fenced? I think clvmd hung because your node is in an unclean state; use dlm_tool ls to see if you have any pending fencing operation.
2014/1/1 Bob Haxo
> Greetings ... Happy New Year!
>
> I am testing a configuration that is created from exam
Your shorewall script cannot handle master/slave (MS) operations, because it is an LSB script. If you want your script to act as an MS resource like DRBD, look at that agent and write your own OCF resource agent.
2013/12/22 Gaëtan Slongo
> Hi !
>
> Someone has any idea ?
>
> Thanks !
>
>
> Le 18/12/13 15:08, Gaëtan Slongo a écrit :
>
>
ker deals with the failing logic loop, resulting in a re-start of the
> VM.
>
> I hoping that "Unfortunately we still don't have a good answer for you."
> is no longer the case, and that there is a fix or that there is a community
> accepted workaround for the issue.
Maybe the problem is this: the cluster tries to start the VM while libvirtd isn't started.
2013/12/19 emmanuel segura
> if don't set your vm to start at boot time, you don't to put in cluster
> libvirtd, maybe the problem isn't this, but why put the os services in
&g
If you don't set your VM to start at boot time, you don't need to put libvirtd in the cluster. Maybe the problem isn't this, but why put OS services, for example crond, in the cluster? :)
2013/12/19 Bob Haxo
> Hello,
>
> Earlier emails related to this topic:
> [pacemaker] chicken-egg-problem with libv
Anybody have experience with the noop scheduler? I know the sbd process is a real-time process, so with the default Linux I/O scheduler (cfq) sbd works very well under heavy I/O load. The noop scheduler only merges I/O requests into a FIFO queue; can I have problems with my cluster using the noop sched
find /var -name '*cib.xml*'
2013/12/15 Thomaz Luiz Santos
> use the http://lcmc.sourceforge.net/
>
>
> On Sun, Dec 15, 2013 at 7:46 PM, Michał Margula wrote:
>
>> Hello,
>>
>> I want to get configuration file from Pacemaker cluster, but corosync is
>> offline for now (because it was causing s
ent in loopback interfaces of other two nodes which are real backend
> servers for IPVS.
>
> Thanks
> Eswar
>
>
>
> On Wed, Dec 11, 2013 at 8:04 PM, emmanuel segura wrote:
>
>> why you have cluster-ip and nvp_vip primitivies with same ip address and
>>
Why do you have the cluster-ip and nvp_vip primitives with the same IP address, and why did you then clone nvp_vip? What are you trying to achieve?
2013/12/11 ESWAR RAO
> small update.
> I could enforce changes using -F option so that I could get rid of " Do
> you still want to commit?"
>
>
> On Wed, Dec 11, 2013
gt; Where should I configure the heartbeat interconnect interfaces?
>
>
>
> With corosync-cfgtool –s it shows the wrong ip? But where does that come
> from?
>
>
>
> Andreas
>
>
>
> *Von:* emmanuel segura [mailto:emi2f...@gmail.com]
> *Gesendet:* Montag, 9. Dezem
I think they should be:
pcs constraint location ipmi-fencing-sv2837 prefers sv2837=-INFINITY
pcs constraint location ipmi-fencing-sv2836 prefers sv2836=-INFINITY
2013/12/9 Michael Schwartzkopff
> Am Montag, 9. Dezember 2013, 14:58:13 schrieben Sie:
> > > > pcs stonith create ipmi-fencing-sv2837
;
> Duplex: full
>
> Link Failure Count: 0
>
> Permanent HW addr: c8:1f:66:d7:3b:fe
>
> Slave queue ID: 0
>
>
>
> Slave Interface: p3p3
>
> MII Status: up
>
> Speed: 1000 Mbps
>
> Duplex: full
>
> Link Failure Count: 0
>
> Permanent HW