On Fri, Oct 12, 2012 at 3:18 AM, Andrew Beekhof wrote:
> This has been a topic that has popped up occasionally over the years.
> Unfortunately we still don't have a good answer for you.
>
> The "least worst" practice has been to have the RA return OCF_STOPPED
> for non-recurring monitor operations
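A minimal sketch of that pattern in a shell RA's monitor action; foo_is_running and the /usr/sbin/food binary check are hypothetical, and the standard ocf-shellfuncs constant for "cleanly stopped" is OCF_NOT_RUNNING:

    . ${OCF_ROOT}/resource.d/heartbeat/.ocf-shellfuncs   # path varies by version

    foo_monitor() {
        if [ ! -x /usr/sbin/food ]; then    # hypothetical "not installed" check
            # On a probe (non-recurring monitor, interval 0), report
            # "cleanly stopped" rather than a hard error:
            if [ "${OCF_RESKEY_CRM_meta_interval:-0}" = "0" ]; then
                return $OCF_NOT_RUNNING
            fi
            return $OCF_ERR_INSTALLED
        fi
        foo_is_running || return $OCF_NOT_RUNNING   # hypothetical status check
        return $OCF_SUCCESS
    }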
On 10/02/2012 08:22 AM, Michael Schwartzkopff wrote:
>> Hi all,
>>
>> I'm running several pacemaker clusters in KVM virtual machines (everything
>> based on Debian 6) and now it's time to configure fencing...
>>
>> I've found that I have to use "fence-virt" for that task
>> (http://www.clusterlabs.o
On Mon, Jun 25, 2012 at 1:40 PM, Andrew Beekhof wrote:
> I've added the concept of a 'system service' that expands to whatever
> standard the local machine supports.
> So you could say, in xml,
> and the cluster would use 'lsb' on RHEL, 'upstart' on Ubuntu and 'systemd' on
> newer fedora relea
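The resource class that expands this way is spelled "service"; a minimal sketch in CIB XML, with the id and type as placeholders:

    <primitive id="p_foo" class="service" type="foo"/>

Pacemaker then maps "service" on each node to lsb, upstart or systemd, whichever that machine supports.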
On 06/15/12 16:37, Andrew Beekhof wrote:
> On Fri, Jun 15, 2012 at 12:19 AM, Stallmann, Andreas
> wrote:
>> Hi!
>>
>>
>>
>> Excuse my blindness; I found the "Stateful" script, which is obviously the
>> template / skeleton I was looking for. Unfortunately it comes without
>> explanation. Does anyo
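For reference, the Stateful agent is typically exercised with a master/slave (ms) wrapper; a sketch in crm shell syntax, resource ids being placeholders:

    primitive p_stateful ocf:pacemaker:Stateful
    ms ms_stateful p_stateful \
        meta master-max="1" master-node-max="1" \
             clone-max="2" clone-node-max="1" notify="false"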
On 06/10/12 22:21, Arnold Krille wrote:
> On 10.06.2012 21:48, Florian Haas wrote:
>> However, why do you want automatic failback? If your cluster nodes are
>> interchangeable in terms of performance, you shouldn't need to care
>> which node is the master. In other wor
On Sun, Jun 10, 2012 at 4:13 PM, Jake Smith wrote:
> I ran into that (scheduler change) also after upgrading. I only accidentally
> stumbled onto that fact. I wish Ubuntu had made it a little clearer that not
> having a separate server kernel had more implications than just kernel!
It's correct t
On Sun, Jun 10, 2012 at 3:07 PM, "Stefan Günther" wrote:
>
> Hello,
>
> I have a general question about the features of pacemaker.
>
> We are planning to setup a HA solution with pacemaker, corosync and drbd.
>
> After a failure of the master and later its recovery, drbd will sync the
> data from t
On Fri, Jun 8, 2012 at 1:01 PM, Juan M. Sierra wrote:
> Problem with state: UNCLEAN (OFFLINE)
>
> Hello,
>
> I'm trying to bring up an ldirectord service with pacemaker.
>
> But, I found a problem with the unclean (offline) state. The initial state
> of my cluster was this:
>
> Online: [ node2 node1 ]
On Fri, Jun 8, 2012 at 8:49 AM, Sébastien Riccio wrote:
> Hi,
>
> while reading the corosync log file i'm seeing a lot of these entries:
>
> Jun 08 04:11:43 filer-01-b pengine: [13718]: ERROR:
> create_notification_boundaries: Creating boundaries for ms_DATA1
> Jun 08 04:11:43 filer-01-b pengine:
On Wed, Jun 6, 2012 at 4:56 PM, Paul Damken wrote:
> Thanks Florian and Emmanuel,
>
> Yes I have,
>
> "tnsnames.ora" file:
> ->
> XIB11_HA.WORLD =
> (DESCRIPTION =
> (ADDRESS_LIST =
> (ADDRESS = (PROTOCOL = TCP)(HOST = aztecavip.domainame)(PORT = 1521))
> )
>
On Wed, Jun 6, 2012 at 12:44 AM, Paul Damken wrote:
> I'm facing issues with my cluster setup. "N+1"
> Pacemaker Hosting Oracle 11g Instances. Node name "azteca"
>
> I cannot get "oralsnr" to start my DB listener; it refuses on both nodes.
> The "Oracle" RA is starting first, after all file systems and
On Tue, Jun 5, 2012 at 1:43 AM, Andrew Beekhof wrote:
> On Mon, Jun 4, 2012 at 9:02 PM, Lars Marowsky-Bree wrote:
>> On 2012-06-04T11:21:57, Andrew Beekhof wrote:
>>
>> Hi Andrew,
>>
>> I am getting a slightly defensive-to-aggressive vibe from your response
>> to Florian. Can we tune that down?
On Mon, Jun 4, 2012 at 3:21 AM, Andrew Beekhof wrote:
> On Sat, Jun 2, 2012 at 12:56 AM, Florian Haas wrote:
>> On Fri, Jun 1, 2012 at 1:40 AM, Chris Feist wrote:
>>> I'd like to announce the existence of the "Pacemaker/Corosync configuration
>>> system&
On Mon, Jun 4, 2012 at 1:02 PM, Lars Marowsky-Bree wrote:
> I am getting a slightly defensive-to-aggressive vibe from your response
> to Florian. Can we tune that down? I much prefer to do the shouting at
> each other in person, because then the gestures come across much more
> vividly and the foo
On Tue, Jun 5, 2012 at 1:55 AM, Cliff Massey wrote:
> My config is:
>
> http://pastebin.com/5qYiHe56
Yep, you completely forgot your order and colo constraints. You need
those to tie your foo-kvm primitive to its corresponding ms-foo
master/slave set.
http://www.drbd.org/users-guide-8.3/s-pacema
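In crm shell syntax the missing constraints would look roughly like this, reusing the foo-kvm and ms-foo names from above (the constraint ids are placeholders):

    colocation c_foo-kvm-on-drbd inf: foo-kvm ms-foo:Master
    order o_drbd-before-foo-kvm inf: ms-foo:promote foo-kvm:start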
On Mon, Jun 4, 2012 at 9:51 PM, Cliff Massey wrote:
>
> I am trying to set up a cluster consisting of KVM, DRBD and pacemaker. Without
> pacemaker DRBD and KVM are working. I can even stop everything on one node,
> promote the other to drbd primary and start the KVM machine on the other.
>
> Howe
On Fri, Jun 1, 2012 at 1:40 AM, Chris Feist wrote:
> I'd like to announce the existence of the "Pacemaker/Corosync configuration
> system", PCS.
Be warned, I will surely catch flak for what I'm about to say. Nothing
of this should be understood in a personal way; my critique is about
the work not
On Fri, May 25, 2012 at 11:38 AM, Lars Ellenberg
wrote:
> On Fri, May 25, 2012 at 11:15:32AM +0200, Florian Haas wrote:
>> On Fri, May 25, 2012 at 10:45 AM, Lars Ellenberg
>> wrote:
>> > Sorry, sent too early.
>> >
>> > That would not catch the case of
On Fri, May 25, 2012 at 10:45 AM, Lars Ellenberg
wrote:
> Sorry, sent too early.
>
> That would not catch the case of cluster partitions joining,
> only the pacemaker startup with fully connected cluster communication
> already up.
>
> I thought about a dc-priority default of 100,
> and only trigge
On Mon, May 21, 2012 at 8:14 PM, Matthew O'Connor wrote:
> On 05/21/2012 05:43 AM, Florian Haas wrote:
>> Does it have "fencing resource-and-stonith" in the DRBD configuration,
>> and stonith_admin-fence-peer.sh as its fence-peer handler?
> That was the problem.
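For context, the drbd.conf stanzas referred to above look roughly like this; the resource name and the handler path are assumptions that vary by distribution:

    resource r0 {
        disk {
            fencing resource-and-stonith;
        }
        handlers {
            fence-peer "/usr/lib/drbd/stonith_admin-fence-peer.sh";
        }
    }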
On Sun, May 20, 2012 at 6:40 AM, Matthew O'Connor wrote:
> After using the tutorial on the Hastexo site for setting up stonith via
> libvirt, I believe I have it working correctly...but...some strange things
> are happening. I have two nodes, with shared storage provided by a
> dual-primary DRBD
On Mon, May 21, 2012 at 1:36 AM, Christoph Bartoschek
wrote:
> Hi,
>
> we currently have the problem that when the NFS server is under heavy load the
> heartbeat:exportfs monitor script fails with a timeout because it cannot
> write the rmtab to the exported filesystem within the given time.
So, how a
On Sun, May 20, 2012 at 12:05 PM, Christoph Bartoschek
wrote:
> Hi,
>
> we have a two node setup with drbd below LVM and an Ext4 filesystem that is
> shared via NFS. The system shows low performance and lots of timeouts
> resulting in unnecessary failovers from pacemaker.
>
> The connection between
On Sat, May 12, 2012 at 2:49 AM, Steve Davidson
wrote:
> We want to run the Corosync heartbeat on the private net and, as a backup
> heartbeat, allow Corosync heartbeat on our "public" net as well.
>
> Thus in /etc/corosync/corosync.conf we need something like:
>
> bindaddr_primary: 192.168.57.0
>
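Corosync's actual directives are named differently; a redundant-ring sketch for a 1.x corosync.conf, with the second network and the multicast details as placeholders:

    totem {
        version: 2
        rrp_mode: passive
        interface {
            ringnumber: 0
            bindnetaddr: 192.168.57.0
            mcastaddr: 239.255.1.1
            mcastport: 5405
        }
        interface {
            ringnumber: 1
            bindnetaddr: 10.0.0.0      # placeholder "public" network
            mcastaddr: 239.255.2.1
            mcastport: 5407
        }
    }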
On Mon, Apr 2, 2012 at 7:00 AM, Ruwan Fernando wrote:
> Hi,
>
> I was required to build an Oracle cluster, so I configured pacemaker+
> corosync+drbd+ocfs2 and built an active-active cluster.
Why?
pacemaker+corosync+drbd+xfs+oracle works just fine and is fully
integrated with Pacemaker.
RAC is primari
On Tue, Apr 3, 2012 at 5:53 PM, David Vossel wrote:
> I see the same thing. I'm using the latest pacemaker source from the master
> branch, so this definitely still exists. For me the file leak occurs every
> time I issue a "cibadmin --replace --xml-file" command. The shell is doing
> the sa
On Mon, Apr 2, 2012 at 12:32 PM, Andrew Beekhof wrote:
>> Well, but you did read the technical reason I presented here?
>
> Yes, and it boiled down to "don't let the user hang themselves".
> Which is a noble goal, I just don't like the way we're achieving it.
>
> Why not advertise the requirements
On Mon, Apr 2, 2012 at 11:54 AM, Andrew Beekhof wrote:
> On Fri, Mar 30, 2012 at 7:34 PM, Florian Haas wrote:
>> On Fri, Mar 30, 2012 at 1:12 AM, Andrew Beekhof wrote:
>>> Because it was felt that RAs shouldn't need to know.
>>> Those options change pa
On Mon, Apr 2, 2012 at 11:34 AM, Hugo Deprez wrote:
> Dear community,
>
> I am using a puppet module in order to manage my cluster.
> I get a weird thing with the start & stop of the corosync daemon.
>
> When I modify the corosync.conf file, puppet is asked to restart / reload
> corosync, but it fai
On Mon, Apr 2, 2012 at 11:33 AM, Andrew Beekhof wrote:
> On Fri, Mar 30, 2012 at 8:33 PM, Florian Haas wrote:
>> On Fri, Mar 30, 2012 at 10:37 AM, Andrew Beekhof wrote:
>>> I blogged about it, which automatically got sent to twitter, and I
>>> updated the IRC channel
On Fri, Mar 30, 2012 at 8:26 PM, Brian J. Murrell wrote:
> In my cluster configuration, each resource can be run on one of two nodes
> and I designate a "primary" and a "secondary" using location constraints
> such as:
>
> location FOO-primary FOO 20: bar1
> location FOO-secondary FOO 10: bar2
>
>
On Fri, Mar 30, 2012 at 7:45 PM, Gregg Stock wrote:
> The full shutdown and restart fixed it.
Hrm. So it's transient after all. Andrew, think you nailed that one
with the commit I referred to upthread, or do you call heisenbug?
Cheers,
Florian
--
Need help with High Availability?
http://www.ha
On Fri, Mar 30, 2012 at 6:09 PM, Gregg Stock wrote:
> That looks good. They were all the same and had the correct ip addresses.
So you've got both healthy rings, and all 5 nodes have 5 members in
the membership list?
Then this would make it a Pacemaker problem. IIUC the code causing
Pacemaker to
On Fri, Mar 30, 2012 at 5:38 PM, Gregg Stock wrote:
> I took the last 200 lines of each.
Can you check the health of the Corosync membership, as per this URL?
http://www.hastexo.com/resources/hints-and-kinks/checking-corosync-cluster-membership
Do _all_ nodes agree on the health of the rings, a
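The checks behind that URL boil down to running, on every node (corosync 1.x tooling):

    corosync-cfgtool -s                                 # ring status on this node
    corosync-objctl runtime.totem.pg.mrp.srp.members    # current membership list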
On Fri, Mar 30, 2012 at 10:37 AM, Andrew Beekhof wrote:
> I blogged about it, which automatically got sent to twitter, and I
> updated the IRC channel topic, but alas I forgot to mention it here
> :-)
>
> So in case you missed it, 1.1.7 is finally out.
> Special mention is due to David and Yan for
On Fri, Mar 30, 2012 at 1:12 AM, Andrew Beekhof wrote:
> Because it was felt that RAs shouldn't need to know.
> Those options change pacemaker's behaviour, not the RAs.
>
> But subsequently, in lf#2391, you convinced us to add notify since it
> allowed the drbd agent to error out if they were not
On Thu, Mar 29, 2012 at 8:35 AM, Andrew Beekhof wrote:
> On Thu, Mar 29, 2012 at 5:28 PM, Vladislav Bogdanov
> wrote:
>> Hi Andrew, all,
>>
>> Pacemaker restarts resources when resource they depend on (ordering
>> only, no colocation) is migrated.
>>
>> I mean that when I do crm resource migrate
Lars (lmb), or Andrew -- maybe one of you remembers what this was all about.
In this commit, Lars enabled the
OCF_RESKEY_CRM_meta_{ordered,notify,interleave} attributes to be
injected into the environment of RAs:
https://github.com/ClusterLabs/pacemaker/commit/b0ba01f61086f073be69db3e6beb0914642f7
On Thu, Mar 29, 2012 at 11:40 AM, Vladislav Bogdanov
wrote:
> Hi Florian,
>
> 29.03.2012 11:54, Florian Haas wrote:
>> On Thu, Mar 29, 2012 at 10:07 AM, Vladislav Bogdanov
>> wrote:
>>> Hi Andrew, all,
>>>
>>> I'm continuing experiments with
On Thu, Mar 29, 2012 at 10:07 AM, Vladislav Bogdanov
wrote:
> Hi Andrew, all,
>
> I'm continuing experiments with lustre on stacked drbd, and see the
> following problem:
At the risk of going off topic, can you explain *why* you want to do
this? If you need a distributed, replicated filesystem with
a
On Wed, Mar 28, 2012 at 5:07 PM, Brian J. Murrell wrote:
> On 12-03-28 10:39 AM, Florian Haas wrote:
>>
>> Probably because your resource agent reports OCF_SUCCESS on a probe
>> operation
>
> To be clear, is this the "status" $OP in the agent?
Nope, monit
On Wed, Mar 28, 2012 at 4:26 PM, Brian J. Murrell wrote:
> We occasionally find crm_resource reporting that a resource is running
> on multiple (usually all!) nodes when we query right after adding it:
>
> # crm_resource --resource chalkfs-OST_3 --locate
> resource chalkfs-OST00
On Fri, Mar 23, 2012 at 6:07 PM, Lajos Pajtek wrote:
>
>
> Hi,
>
> I am building a two-node, active-standby cluster with shared storage. I think
> I got the basic primitives right, but fencing, implemented using SCSI
> persistent reservations, gives me some headache. First, I am unable to get
>
Hi everyone,
for those interested in contributing to a community documentation
project focusing on performance optimization in high availability
clusters, please take a look at the following URLs:
https://github.com/fghaas/hp-ha-guide (GitHub repo)
http://www.hastexo.com/node/173 (blog post -- fe
Hi everyone,
apologies for the cross-post; I believe this might be interesting to
people on both the openstack and the pacemaker lists. Please see
below.
On Tue, Feb 14, 2012 at 9:07 AM, i3D.net - Tristan van Bokkem
wrote:
> Hi Stackers,
>
> It seems running Openstack components in High Availabi
On Tue, Mar 20, 2012 at 4:18 PM, Fiorenza Meini wrote:
> Hi there,
> has anybody successfully configured the RA named in the subject of this
> message?
>
> I got this error: if_eth0_monitor_0 (node=fw1, call=2297, rc=-2,
> status=Timed Out): unknown exec error
Your ethmonitor RA missed its 50-s
On Tue, Mar 20, 2012 at 11:15 AM, Rasto Levrinc wrote:
> 2012/3/20 Mars gu :
>> Hi,
>> I want to execute the command, and this problem occurred:
>>
>> [root@h10_148 ~]# ptest
>> -bash: ptest: command not found
>>
>> How can I preview the shadow configuration?
>
> ptest has been replaced by crm_simul
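Its replacement can be pointed at the live CIB or at a saved CIB file; a short sketch using crm_simulate's standard flags, with shadow.xml as a placeholder:

    crm_simulate -L -s              # live CIB, showing allocation scores
    crm_simulate -x shadow.xml -S   # simulate transitions against a CIB file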
On Mon, Mar 19, 2012 at 9:00 PM, Phil Frost wrote:
> On Mar 19, 2012, at 15:22 , Florian Haas wrote:
>> On Mon, Mar 19, 2012 at 8:00 PM, Phil Frost
>> wrote:
>>> I'm attempting to automate my cluster configuration with Puppet. I'm
>>> already using P
On Mon, Mar 19, 2012 at 8:14 PM, Mathias Nestler
wrote:
> Hi everyone,
>
> I am trying to set up an active/passive (2-node) Linux-HA cluster with
> corosync and pacemaker to keep a PostgreSQL database up and running. It works
> via DRBD and a service-ip. If node1 fails, node2 should take over. T
On Mon, Mar 19, 2012 at 8:00 PM, Phil Frost wrote:
> I'm attempting to automate my cluster configuration with Puppet. I'm already
> using Puppet to manage the configuration of my Xen domains. I'd like to
> instruct puppet to apply the configuration (via cibadmin) to a shadow config,
> but I can
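One way to script that, sketched with crm_shadow; the shadow name and file path are placeholders, and --batch suppresses the interactive subshell crm_shadow normally spawns:

    crm_shadow --batch --create puppet      # copy the live CIB into a shadow
    export CIB_shadow=puppet                # point CIB-aware tools at the shadow
    cibadmin --replace --xml-file /etc/puppet/cib.xml
    crm_shadow --batch --commit puppet      # push the shadow to the live cluster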
On Fri, Mar 16, 2012 at 4:55 PM, Lars Marowsky-Bree wrote:
> On 2012-03-16T13:36:34, Florian Haas wrote:
>
>> > Would this not be more readily served by a simple while loop doing the
>> > monitoring, even if systemd/upstart aren't around? Pacemaker is kind of
>&g
On Fri, Mar 16, 2012 at 12:24 PM, ruslan usifov wrote:
> Lustre looks very cool and stable, but it doesn't provide a scalable block
> device (Ceph allows it through RBD) and requires a patched kernel (I couldn't
> find more modern patched kernels for ubuntu lucid), so I think that it isn't
> acceptable f
On Fri, Mar 16, 2012 at 12:42 PM, Lars Marowsky-Bree wrote:
> On 2012-03-16T11:28:36, Florian Haas wrote:
>
>> > is there a reason for integrating ceph with pacemaker? ceph does
>> > internal monitoring of OSTs etc anyway, doesn't it?
>> Assuming you're
On Fri, Mar 16, 2012 at 12:50 PM, ruslan usifov wrote:
> When it crashes I have the following stack trace
How about taking that to the ceph-devel list?
Florian
--
Need help with High Availability?
http://www.hastexo.com/now
On Fri, Mar 16, 2012 at 11:14 AM, Lars Marowsky-Bree wrote:
> On 2012-03-16T11:13:17, Florian Haas wrote:
>
>> Which Ceph version are you using? Both the Ceph daemons and RBD are
>> fully integrated into Pacemaker in upstream git.
>>
>> https://github.com/ceph/ceph
On Fri, Mar 16, 2012 at 11:06 AM, Vladislav Bogdanov
wrote:
> 16.03.2012 12:13, ruslan usifov wrote:
>> Hello
>>
>> I'm searching for a solution for a scalable block device (a disk that can
>> extend if we add some machines to the cluster). The only thing I found
>> acceptable for my task is ceph + RBD, but ceph on my test
On Fri, Mar 16, 2012 at 10:13 AM, ruslan usifov wrote:
> Hello
>
> I'm searching for a solution for a scalable block device (a disk that can
> extend if we add some machines to the cluster). The only thing I found
> acceptable for my task is ceph + RBD, but in my tests ceph is very
> unstable (regular crashes of all its daemons)
On Wed, Mar 14, 2012 at 5:55 PM, Phillip Frost
wrote:
> On Mar 14, 2012, at 12:33 PM, Florian Haas wrote:
>
>>> However, sometimes pacemakerd will not stop cleanly.
>>
>> OK. Whether this is related to your original problem or not is a completely
>> open question
On Wed, Mar 14, 2012 at 4:58 PM, Phillip Frost
wrote:
>> Can you confirm that you're running the ~bpo60+2 (note trailing "2")
>> build, that you're actually running an lrmd binary from that version
>> (meaning: that you properly killed your lrmd prior to installing that
>> package), _and_ that "lr
On Wed, Mar 14, 2012 at 2:37 PM, Phillip Frost
wrote:
> On Mar 14, 2012, at 9:25 AM, Florian Haas wrote:
>
>>> Do you have upstart at all? In that case, the debian package
>>> shouldn't have the upstart enabled when building cluster-glue.
>>
>> Th
On Wed, Mar 14, 2012 at 2:16 PM, Dejan Muhamedagic wrote:
> Hi,
>
> On Tue, Mar 13, 2012 at 05:59:35PM -0400, Phillip Frost wrote:
>> On Mar 13, 2012, at 2:21 PM, Jake Smith wrote:
>>
>> >> From: "Phillip Frost"
>> >> Subject: [Pacemaker] getting started - crm hangs when adding resources,
>> >
On Fri, Mar 9, 2012 at 11:24 PM, Scott Piazza
wrote:
> I have a two-node active/passive pacemaker cluster running with a single
> DRBD resource set up as master-slave. Today, we restarted both servers
> in the cluster, and when they came back up, both started pacemaker and
> corosync correctly, b
On Sat, Mar 10, 2012 at 12:39 AM, Larry Brigman wrote:
> I have looked and cannot seem to find the pre-built 1.1.6 rpm set in the
> clusterlabs repo.
It ships with RHEL/CentOS 6.2.
On RHEL 5 however, 1.1.6 doesn't build. If you don't want to wait for
1.1.7, you'll either need to apply this post-
On Tue, Mar 6, 2012 at 1:49 PM, Florian Crouzat
wrote:
> I have Florian's rsyslog config:
> https://github.com/fghaas/pacemaker/blob/syslog/extra/rsyslog/pacemaker.conf.in
I should mention that that rsyslog configuration is no longer being
considered for upstream inclusion. See the discussion on
Jiaju,
would you mind pushing your git tags to your GitHub booth repo?
Currently, as far as I can see, there are no tags in that repo at all.
It would be nice to be able to find out what exactly is the git
revision that you guys ship in SP2. Thanks!
Cheers,
Florian
--
Need help with High Availabil
On Thu, Mar 1, 2012 at 12:11 AM, Andrew Beekhof wrote:
> On Thu, Mar 1, 2012 at 8:08 AM, Florian Haas wrote:
>> Andrew,
>>
>> just a quick question out of curiosity: the ocf:pacemaker:o2cb resource
>> and ocfs2_controld.pcmk require the OpenAIS CKPT service which is
Andrew,
just a quick question out of curiosity: the ocf:pacemaker:o2cb resource
and ocfs2_controld.pcmk require the OpenAIS CKPT service which is
currently deprecated (as all of OpenAIS) and going away completely
(IIUC) with Corosync 2.0. Does that mean that OCFS2 will be unsupported
from Corosync
Jean-François,
I realize I'm late to this discussion; however, allow me to chime in here anyhow:
On Mon, Feb 27, 2012 at 11:45 PM, Jean-Francois Malouin
wrote:
>> Have you looked at fence_virt? http://www.clusterlabs.org/wiki/Guest_Fencing
>
> Yes I did.
>
> I had a quick go last week at compilin
On Wed, Feb 29, 2012 at 8:21 AM, Andrew Beekhof wrote:
> 2012/2/27 Ante Karamatić :
>> On 27.02.2012 12:27, Florian Haas wrote:
>>
>>> Alas, to the best of my knowledge the only way to change a specific
>>> job's respawn policy is by modifying its job definit
2012/2/27 Ante Karamatić :
> On 27.02.2012 12:27, Florian Haas wrote:
>
>> Alas, to the best of my knowledge the only way to change a specific
>> job's respawn policy is by modifying its job definition. Likewise,
>> that's the only way to enable or disable startin
On 02/27/12 11:37, Andrew Beekhof wrote:
> On Sun, Feb 26, 2012 at 8:54 AM, Ante Karamatic wrote:
>> On 23.02.2012 23:52, Andrew Beekhof wrote:
>>
>>> On Thu, Feb 23, 2012 at 6:43 PM, Ante Karamatic wrote:
Well... Upstart actually does notice if the job failed and respawns it -
dependin
On 02/27/12 11:31, Andrew Beekhof wrote:
> On Mon, Feb 27, 2012 at 6:16 PM, Florian Haas wrote:
>> On 02/27/12 06:15, Andrew Beekhof wrote:
>>> Excellent, now that we have Florian's blessing I can merge this today.
>>
>> You seem to operate on a very interestin
On 02/27/12 06:15, Andrew Beekhof wrote:
> Excellent, now that we have Florian's blessing I can merge this today.
You seem to operate on a very interesting definition of "blessing."
Florian
--
Need help with High Availability?
http://www.hastexo.com/now
On Sun, Feb 26, 2012 at 1:08 AM, David Vossel wrote:
> - Original Message -
>> From: "Florian Haas"
>> Yeah, actually using a resource type that is capable of running in
>> master/slave mode would be a good start. :) Use
>> ocf:pacemaker:Stateful
>
On Sat, Feb 25, 2012 at 12:31 AM, David Vossel wrote:
> Hey,
>
> I have a 2 node cluster with a multi-state master/slave resource. When the
> multi-state resources start up on each node they enter the Slave role. At
> that point I can't figure out how to promote the resource to activate the
>
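For the record, a multi-state resource can be promoted by hand from the crm shell (the ms resource name is a placeholder), although in normal operation promotion is driven by master scores the RA sets itself:

    crm resource promote ms_stateful
    crm resource demote ms_stateful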
On 02/23/12 23:48, Andrew Beekhof wrote:
> On Thu, Feb 23, 2012 at 6:31 PM, Ante Karamatic wrote:
>> On 23.02.2012 00:10, Andrew Beekhof wrote:
>>
>>> Do you still have LSB scripts on a machine thats using upstart?
>>
>> Yes, some LSB scripts can't be easily converted to upstart jobs. Or,
>> let's
On 02/24/12 09:21, Johan Rosing Bergkvist wrote:
> Sorry parameter, you're right.
> But it still didn't mount until I added the drbdconf parameter.
>
> primitive clusterDRBD ocf:linbit:drbd \
>     params drbd_resource="cluster-ocfs" drbdconf="/etc/drbd.conf" \
>     # drbdconf is what I added
>
On 02/24/12 08:50, Johan Rosing Bergkvist wrote:
> Hi
> Just an update.
> So I upgraded to pacemaker 1.1.6 and tried to configure it all again,
> without dlm.
> It didn't work; I still got the "OCF_ERR_INSTALLED" error, so I started
> looking through the setup and found that I didn't specify the drbd.conf
On 02/24/12 02:53, Andrew Beekhof wrote:
> We're about to lock in the syntax for cluster tickets (used for
> multi-site clusters).
>
> The syntax rules are at:
>
> https://github.com/gao-yan/pacemaker/commit/9e492f6231df2d8dd548f111a2490f02822b29ea
>
> And its use, along with some examples, can
On Wed, Feb 22, 2012 at 5:06 PM, Jean-Francois Malouin
wrote:
> Hi,
>
> I have a question about colocation.
>
> (This is on a tiny 2-node virtual test cluster with pacemaker-1.1.6
> from Debian Squeeze backports)
>
> I've read the Pacemaker_Explained doc (1.1) and the thread
>
> http://www.gossam
>
> Last updated: Wed Feb 22 11:51:22 2012
> Stack: openais
> Current DC: cluster01 - partition with quorum
> Version: 1.0.9-74392a28b7f31d7ddc86689598bd23114f58978b
Please go to 1.1.5, at least, as soon as you can.
> primitive Cluster-FS-DLM ocf:pacemaker:controld \
> op mon
On Tue, Feb 21, 2012 at 4:22 PM, Dejan Muhamedagic wrote:
> Hi,
>
> On Tue, Feb 21, 2012 at 02:26:31PM +0100, Florian Haas wrote:
>> On 02/21/12 13:39, Johan wrote:
>> >
>> > I keep getting the:
>> > info: RA output: (Cluster-FS-Mount:1:start:stderr) FAT
Jake,
sorry, I missed your original post due to travel; let me toss in one
more thing here:
On Tue, Feb 21, 2012 at 3:32 PM, Jake Smith wrote:
>> > Are upstart jobs expected to conform to the LSB spec with regards
>> > to exit codes, etc?
>> > Is there any reference documentation using upstart r
On Tue, Feb 21, 2012 at 3:57 PM, Pieter Baele wrote:
> After upgrading a node (RHEL 6.1 to 6.2), my /var/log/messages grows
> really, really fast because of this error; what could be wrong?
So you upgraded just one node, and the other is still unchanged? Can
you give the Pacemaker and Corosync ver
On 02/21/12 13:39, Johan wrote:
> I've been following this tutorial on how to set up a pacemaker xen cluster:
> http://publications.jbfavre.org/virtualisation/cluster-xen-corosync-pacemaker-drbd-ocfs2.en
> I'm all new to this, so please bear with me.
> The big problem is that when I get to th
On Mon, Feb 20, 2012 at 10:16 PM, diego fanesi wrote:
> Actually I'm studying this technology. This is only a test. I'm trying to
> understand all possible configurations and at the moment I have some problems
> understanding the differences among file systems. Now I'm trying to use
> ocfs2 and it
On 02/18/12 10:59, diego fanesi wrote:
> are you saying I can install drbd + gfs2 + pacemaker without using cman?
> It seems that gfs2 depends on cman...
Only on RHEL/CentOS/Fedora. Not on Debian.
> I want to build an active/active cluster and I'm following the document
> "cluster from scratch" th
On Sat, Feb 18, 2012 at 7:19 PM, David Coulson wrote:
> I have an active/active LVS cluster, which uses pacemaker for managing IP
> resources. Currently I have one environment running on it which utilizes ~30
> IP addresses, so a group was created so all resources could be
> stopped/started togeth
On Sun, Feb 12, 2012 at 10:01 PM, diego fanesi wrote:
> Hi,
>
> I'm trying to install corosync with pacemaker using drbd + gfs2 with cman
> support.
Why?
GFS2 with dual-Primary DRBD with Pacemaker 1.1.6 is working very well
in squeeze-backports with the dlm_controld.pcmk and gfs_controld.pcmk
da
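A hedged sketch of the matching crm configuration with those daemons; the resource ids are placeholders, and the daemon parameter of ocf:pacemaker:controld selects which control daemon to run:

    primitive p_dlm ocf:pacemaker:controld \
        params daemon="dlm_controld.pcmk"
    primitive p_gfs ocf:pacemaker:controld \
        params daemon="gfs_controld.pcmk"
    clone cl_dlm p_dlm meta interleave="true"
    clone cl_gfs p_gfs meta interleave="true"
    colocation c_gfs-with-dlm inf: cl_gfs cl_dlm
    order o_dlm-before-gfs inf: cl_dlm cl_gfs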
On Fri, Feb 10, 2012 at 1:38 PM, Nick Khamis wrote:
> May I ask where the original blog resides? The one
> with the "bizerk blog comments"
http://www.lmgtfy.com/?q=percona+replication+manager&l=1
SCNR. :)
Florian
--
Need help with High Availability?
http://www.hastexo.com/now
On Wed, Feb 8, 2012 at 2:29 PM, Hugo Deprez wrote:
> Dear community,
>
> I am currently running several corosync / drbd clusters using VMs running on
> a vmware esxi host.
> Guest OS is Debian Squeeze.
>
> The active member of the cluster just froze; the VM was unreachable.
> But the resources didn
On Wed, Feb 8, 2012 at 3:09 PM, Dan Frincu wrote:
> I've reviewed both files, made some minor additions and fixed a
> couple of typos; other than that it looks great.
>
> One question though, shouldn't these have been in Docbook format?
Should be easy to generate that from Asciidoc. "asciidoc -b
On Wed, Jan 25, 2012 at 6:49 PM, Gregg Stock wrote:
> Hi,
> I'm trying to set up a 5-node cluster, the same topology as described in
> Roll Your Own Cloud: Enterprise Virtualization with KVM, DRBD, iSCSI and
> Pacemaker
> http://blip.tv/linuxconfau/roll-your-own-cloud-enterprise-virtualization-with
On Thu, Jan 26, 2012 at 12:43 AM, Peter Scott wrote:
> Hello. Our problem is that a Corosync restart on the idle machine in a
> 2-node cluster shuts down the mysqld process there, and we need it to stay
> up for replication.
Well if you just want to restart Corosync by administrative
interventio
On Mon, Jan 16, 2012 at 10:59 AM, Andrew Beekhof wrote:
> By "Nuclear", I meant nothing at all from Pacemaker.
Which is not what it does.
> If thats what you want, there's a far easier way to achieve this and
> keep usable logs around for debugging, set facility to none and add a
> logfile.
No
On Sun, Jan 15, 2012 at 9:27 PM, Andrew Beekhof wrote:
> On Thu, Jan 12, 2012 at 11:01 PM, Florian Haas wrote:
>> On Thu, Jan 5, 2012 at 10:15 PM, Florian Haas wrote:
>>> Florian Haas (2):
>>> extra: add rsyslog configuration snippet
>>> extra:
On Thu, Jan 12, 2012 at 2:15 PM, Vladislav Bogdanov
wrote:
> I marked that message as "Important" and will include it in my builds
> even if it does not go upstream.
>
> One question - does it break default hb_report and crm_report behavior?
Good point. I presume it would make sense to include any
On Thu, Jan 5, 2012 at 10:15 PM, Florian Haas wrote:
> Florian Haas (2):
> extra: add rsyslog configuration snippet
> extra: add logrotate configuration snippet
>
> configure.ac | 4 +++
> extra/Makefile.am | 2 +-
On Tue, Jan 10, 2012 at 10:24 PM, Arnold Krille wrote:
> Is it possible for slaves to modify their score for promotion? I think that
> would be an interesting feature.
>
> Probably something like that could already be achieved with dependency-rules
> and variables. But I think a function for resou
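They can, via the master score; a minimal sketch using crm_master, the crm_attribute wrapper meant to be called from inside the RA (typically from its monitor action):

    crm_master -l reboot -v 100    # bid for promotion on this node
    crm_master -l reboot -D        # withdraw the bid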
On Wed, Jan 11, 2012 at 1:44 AM, Andrew Beekhof wrote:
> On Wed, Jan 11, 2012 at 3:30 AM, Andrew Martin wrote:
>> 3. Limit the DRBD, nfs, and smbd resources to only node1 and node2 by adding
>> a location rule for the g_nfs group (which includes p_fs_drbd0
>> p_lsb_nfsserver p_exportfs_drbd0 p_ip
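Such a location rule, sketched in crm shell syntax with the names from the post (the constraint id is a placeholder):

    location l_g_nfs-nodes g_nfs \
        rule -inf: #uname ne node1 and #uname ne node2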