Thanks Fernando,
is this correct?
===
primitive OracleFS ocf:heartbeat:Filesystem \
    params device="/dev/drbd/by-res/OraData" directory="/u01" fstype="ext1"
===
ext1?
On 02 April 2012 05:50, Ruwan Fernando wrote:
Dear community,
I am using a Puppet module to manage my cluster.
I am seeing something odd with the start & stop of the corosync daemon.
When I modify the corosync.conf file, Puppet is asked to restart/reload
corosync, but it fails on the command:
start-stop-daemon --stop --quiet --retry fo
On Fri, Mar 30, 2012 at 8:33 PM, Florian Haas wrote:
> On Fri, Mar 30, 2012 at 10:37 AM, Andrew Beekhof wrote:
>> I blogged about it, which automatically got sent to twitter, and I
>> updated the IRC channel topic, but alas I forgot to mention it here
>> :-)
>>
>> So in case you missed it, 1.1.7
On Fri, Mar 30, 2012 at 7:34 PM, Florian Haas wrote:
> On Fri, Mar 30, 2012 at 1:12 AM, Andrew Beekhof wrote:
>> Because it was felt that RAs shouldn't need to know.
>> Those options change pacemaker's behaviour, not the RAs.
>>
>> But subsequently, in lf#2391, you convinced us to add notify sinc
On Mon, Apr 2, 2012 at 11:33 AM, Andrew Beekhof wrote:
> On Fri, Mar 30, 2012 at 8:33 PM, Florian Haas wrote:
>> On Fri, Mar 30, 2012 at 10:37 AM, Andrew Beekhof wrote:
>>> I blogged about it, which automatically got sent to twitter, and I
>>> updated the IRC channel topic, but alas I forgot to
On Mon, Apr 2, 2012 at 11:34 AM, Hugo Deprez wrote:
> Dear community,
>
> I am using a Puppet module to manage my cluster.
> I am seeing something odd with the start & stop of the corosync daemon.
>
> When I modify the corosync.conf file, Puppet is asked to restart/reload
> corosync, but it fai
On Mon, Apr 2, 2012 at 11:54 AM, Andrew Beekhof wrote:
> On Fri, Mar 30, 2012 at 7:34 PM, Florian Haas wrote:
>> On Fri, Mar 30, 2012 at 1:12 AM, Andrew Beekhof wrote:
>>> Because it was felt that RAs shouldn't need to know.
>>> Those options change pacemaker's behaviour, not the RAs.
>>>
>>> Bu
On Mon, Apr 2, 2012 at 8:05 PM, Florian Haas wrote:
> On Mon, Apr 2, 2012 at 11:54 AM, Andrew Beekhof wrote:
>> On Fri, Mar 30, 2012 at 7:34 PM, Florian Haas wrote:
>>> On Fri, Mar 30, 2012 at 1:12 AM, Andrew Beekhof wrote:
Because it was felt that RAs shouldn't need to know.
Those op
On Mon, Apr 2, 2012 at 12:32 PM, Andrew Beekhof wrote:
>> Well, but you did read the technical reason I presented here?
>
> Yes, and it boiled down to "don't let the user hang themselves".
> Which is a noble goal, I just don't like the way we're achieving it.
>
> Why not advertise the requirements
Hi everyone.
I have 2 nodes running on ESX hosts in 2 geographically diverse data
centres. The link between them is a DWDM fibre link which is the only
thing I can think of as being the cause of this.
SLES 11 SP1 with HAE. All latest updates.
If Corosync is set to Multicast on the defau
Hello,
I'm just looking to verify that I'm understanding/configuring SBD
correctly. It works great in the controlled cases where you unplug a node
from the network (it gets fenced via SBD) or remove its access to the
shared disk (the node suicides). However, in the event of a hardware
failure or
On 2012-04-02T11:34:23, Hugo Deprez wrote:
> I am using a Puppet module to manage my cluster.
> I am seeing something odd with the start & stop of the corosync daemon.
>
> When I modify the corosync.conf file, Puppet is asked to restart/reload
> corosync, but it fails on the command:
>
> s
On 2012-04-02T14:53:53, darren.mans...@opengi.co.uk wrote:
> I have 2 nodes running on ESX hosts in 2 geographically diverse data
> centres. The link between them is a DWDM fibre link which is the only
> thing I can think of as being the cause of this.
>
> SLES 11 SP1 with HAE. All latest updates
On 2012-04-02T09:33:22, mark - pacemaker list wrote:
> Hello,
>
> I'm just looking to verify that I'm understanding/configuring SBD
> correctly. It works great in the controlled cases where you unplug a node
> from the network (it gets fenced via SBD) or remove its access to the
> shared disk (
Hi Lars,
On Mon, Apr 2, 2012 at 10:35 AM, Lars Marowsky-Bree wrote:
> On 2012-04-02T09:33:22, mark - pacemaker list
> wrote:
>
> > Hello,
> >
> > I'm just looking to verify that I'm understanding/configuring SBD
> > correctly. It works great in the controlled cases where you unplug a node
>
> On 2012-04-02T14:53:53, darren.mans...@opengi.co.uk wrote:
>
> > I have 2 nodes running on ESX hosts in 2 geographically diverse data
> > centres. The link between them is a DWDM fibre link which is the only
> > thing I can think of as being the cause of this.
> >
On Mon, Apr 2, 2012 at 9:04 PM, Florian Haas wrote:
> On Mon, Apr 2, 2012 at 12:32 PM, Andrew Beekhof wrote:
>>> Well, but you did read the technical reason I presented here?
>>
>> Yes, and it boiled down to "don't let the user hang themselves".
>> Which is a noble goal, I just don't like the way
Sorry, it should be ext4. I reformatted that text to fit the email
content and it got changed by mistake.
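For reference, the corrected resource definition (identical to the snippet quoted earlier in the thread, with only the fstype changed to ext4) would read:

===
primitive OracleFS ocf:heartbeat:Filesystem \
    params device="/dev/drbd/by-res/OraData" directory="/u01" fstype="ext4"
===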
___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker
Project Home: http://w
Resolved this issue by upgrading pacemaker to 1.1.6 and by adding the following
to corosync.conf:
aisexec {
    user: root
    group: root
}
service {
    name: pacemaker
    ver: 0
    use_mgmtd: yes
}