Hi Andrew,
I don't think SBD can be used for persistent reservation. The SBD disk is
just used as a heartbeat disk: nodes use it to communicate fencing
requests. I was trying to configure this some time back with help from
Dejan. If a node loses access to the SBD disk, it resets itself. we can
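For reference, a minimal SBD setup of that era looks roughly like the
sketch below; the device path is a placeholder and none of it is taken
from this thread:

    # Initialize the shared disk as an SBD message area (device is an example)
    sbd -d /dev/disk/by-id/scsi-EXAMPLE create

    # Tell the sbd daemon which disk to watch
    echo 'SBD_DEVICE="/dev/disk/by-id/scsi-EXAMPLE"' > /etc/sysconfig/sbd

    # Register SBD as a STONITH device (crm shell syntax)
    crm configure primitive stonith-sbd stonith:external/sbd \
        params sbd_device="/dev/disk/by-id/scsi-EXAMPLE"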
On Mon, Feb 16, 2009 at 10:10:59PM +0100, Andrew Beekhof wrote:
> Far later than I had hoped, but here is the latest and greatest from the
> Pacemaker project.
>
> As mentioned previously, this release has only been verified with the
> OpenAIS stack (in order not to delay it any more than it has
Far later than I had hoped, but here is the latest and greatest from
the Pacemaker project.
As mentioned previously, this release has only been verified with the
OpenAIS stack (in order not to delay it any more than it has been
already).
There is no reason to suspect it doesn't work with Heartbeat
I believe it's called SBD, but I'm no expert on it.
On Feb 16, 2009, at 4:15 PM, Glory Smith wrote:
Hi Andrew,
How do we configure persistent reservation fencing in SUSE 11?
Thanks,
On Mon, Feb 16, 2009 at 4:48 PM, Glory Smith
wrote:
I get the feeling that by "resource fencing", you just
Hi Andrew,
How do we configure persistent reservation fencing in SUSE 11?
Thanks,
On Mon, Feb 16, 2009 at 4:48 PM, Glory Smith wrote:
>
>
>> I get the feeling that by "resource fencing", you just mean scsi
>> reservations which are already possible in the current framework.
>>
>
> Yes, I want persistent scsi reservation.
Hi Andrew,
and thanks for the reply.
On Mon, Feb 16, 2009 at 02:16:08PM +0100, Andrew Beekhof wrote:
> indeed. i've fixed a few memory leaks recently, I'll have to check if
> pingd was affected.
I'm using the 090212 snapshot; according to mercurial, you fixed some
leaks before that date, but none after.
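If it helps, the changelog can be searched for leak fixes since the
snapshot with a generic mercurial query like this (the date is just the
snapshot's):

    # List changesets mentioning "leak" committed after 2009-02-12
    hg log -k leak -d ">2009-02-12"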
On Feb 16, 2009, at 2:19 PM, Andreas Kurz wrote:
You can have multiple stonith devices. If one fails, we just try
the next.
That sounds interesting ... is there an example somewhere of how to
configure this? Especially how to configure the order of precedence?
I forget the details, but De
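Absent those details, here is a rough sketch of two STONITH devices in crm
shell syntax; the device types, hostnames and parameters are made up for
illustration, and how precedence between them is decided is
version-dependent:

    # Primary fencing device: IPMI-based power fencing (parameters are examples)
    crm configure primitive fence-ipmi stonith:external/ipmi \
        params hostname="node1" ipaddr="10.0.0.90" userid="admin" passwd="secret"

    # Fallback fencing device: SBD over shared storage (device is an example)
    crm configure primitive fence-sbd stonith:external/sbd \
        params sbd_device="/dev/disk/by-id/scsi-EXAMPLE"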
Andrew Beekhof wrote:
>
> On Feb 16, 2009, at 11:53 AM, Glory Smith wrote:
>
>>
>>
>> having an unreliable stonith mechanism is worse than not having one at
>> all.
>>
>> what if your resource fencing has a bug? It's the same problem.
>>
>> reliable fencing is a fundamental requirement of the cluster.
Hi Andrew,
I've been forced to install the latest 1.0 snapshot on one of our
production servers, since pingd in 1.0.1 was unusable for me. So I'm
already testing what's going to be the 1.0.2 release :)
Everything works as expected (even the pingd with multiple NICs).
But today I noticed that there might be one
On Feb 16, 2009, at 2:07 PM, Nikola Ciprich wrote:
Hi Andrew,
I've been forced to install the latest 1.0 snapshot on one of our
production servers, since pingd in 1.0.1 was unusable for me. So I'm
already testing what's going to be the 1.0.2 release :)
Everything works as expected (even the pingd with multiple NICs).
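For context, a typical pingd setup in crm shell syntax looks something
like the following; the host list, scores and resource names are
illustrative, not taken from this thread:

    # Clone of the pingd daemon, pinging a gateway on each NIC (examples)
    crm configure primitive pingd ocf:pacemaker:pingd \
        params host_list="192.168.1.1 10.0.0.1" multiplier="100" \
        op monitor interval="15s"
    crm configure clone pingd-clone pingd

    # Keep a (hypothetical) resource off nodes with no connectivity
    crm configure location www-connected www-server \
        rule -inf: not_defined pingd or pingd lte 0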
>
> I get the feeling that by "resource fencing", you just mean scsi
> reservations which are already possible in the current framework.
>
Yes, I want persistent SCSI reservation. It's really great that it is
possible in the current framework, but I could not find it, and when I
posted a query about this
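For anyone following along, SCSI-3 persistent reservations themselves can
be exercised with sg_persist from sg3_utils; the device and keys below are
placeholders, and this is the raw mechanism rather than a Pacemaker
configuration:

    # Register a reservation key for this initiator (key/device are examples)
    sg_persist --out --register --param-sark=0x1 /dev/sdb

    # Take a "write exclusive, registrants only" reservation (type 5)
    sg_persist --out --reserve --param-rk=0x1 --prout-type=5 /dev/sdb

    # Read back the registered keys and the active reservation
    sg_persist --in --read-keys /dev/sdb
    sg_persist --in --read-reservation /dev/sdb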
On Feb 16, 2009, at 11:53 AM, Glory Smith wrote:
having an unreliable stonith mechanism is worse than not having one
at all.
what if your resource fencing has a bug? It's the same problem.
reliable fencing is a fundamental requirement of the cluster.
I understand. But if I can do both (
having an unreliable stonith mechanism is worse than not having one at all.
>
> what if your resource fencing has a bug? It's the same problem.
>
> reliable fencing is a fundamental requirement of the cluster.
I understand. But if I can do both (node as well as resource fencing), then
there is ve
On Feb 16, 2009, at 11:24 AM, Glory Smith wrote:
we kill the node with STONITH.
very hard for a machine to write to shared media when it's powered off.
we can kill nodes when:
- nodes become unresponsive
- nodes are not part of the cluster that has quorum
- resources fail to stop when instructed
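As a reminder of the moving parts (a generic crm shell sketch; the device
type and names are examples, not from this thread), none of that fencing
happens unless it is enabled cluster-wide and a STONITH resource exists:

    # Enable fencing cluster-wide
    crm configure property stonith-enabled="true"

    # A sample STONITH device for node1 (external/ipmi parameters are examples)
    crm configure primitive fence-node1 stonith:external/ipmi \
        params hostname="node1" ipaddr="10.0.0.90" userid="admin" passwd="secret"

    # Don't let node1 run the device that is supposed to shoot it
    crm configure location fence-node1-placement fence-node1 -inf: node1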
Glory Smith wrote:
>>
>>
>>
>> we kill the node with STONITH.
>> very hard for a machine to write to shared media when it's powered off.
>>
>>
>> we can kill nodes when:
>> - nodes become unresponsive
>> - nodes are not part of the cluster that has quorum
>> - resources fail to stop when instructed
>
>
>
>
> we kill the node with STONITH.
> very hard for a machine to write to shared media when it's powered off.
>
>
> we can kill nodes when:
> - nodes become unresponsive
> - nodes are not part of the cluster that has quorum
> - resources fail to stop when instructed
> - resources fail in any way
On Mon, Feb 9, 2009 at 17:19, Raoul Bhatia [IPAX] wrote:
> Raoul Bhatia [IPAX] wrote:
>>>
>>> Last updated: Fri Dec 5 19:53:33 2008
>>> Current DC: NONE
>>> 2 Nodes configured.
>>> 9 Resources configured.
>>>
>>>
>>> Node: wc02 (f36760d8-d84a-46b2-b452-4c8cac8b3396): online
On Feb 16, 2009, at 9:40 AM, Glory Smith wrote:
Many thanks, Andrew.
The same way heartbeat did (if you're familiar with that).
Check out the section on STONITH in the configuration explained
document (http://clusterlabs.org/wiki/Documentation)
Unfortunately this document's stonith chapter is blank.
Many thanks, Andrew.
The same way heartbeat did (if you're familiar with that).
>> Check out the section on STONITH in the configuration explained document (
>> http://clusterlabs.org/wiki/Documentation)
>
>
Unfortunately this document's stonith chapter is blank. Anyway, I have gone
through some ot
On Thu, Feb 12, 2009 at 15:18, Vit Pelcak wrote:
> Hi people.
>
> I'd like to ask you for help with a problem.
>
> I have to test scsi_reservation in pacemaker.
>
> Currently, I have two operating systems in vmware + 2 shared disks set
> with scsi_reservation. On each shared disk there is lvm w
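A rough way to verify the reservation behaviour by hand in such a setup,
using generic sg_persist/dd invocations with placeholder keys and devices
(nothing Pacemaker-specific):

    # On node A: register a key and take a "write exclusive" reservation (type 1)
    sg_persist --out --register --param-sark=0xa /dev/sdb
    sg_persist --out --reserve --param-rk=0xa --prout-type=1 /dev/sdb

    # On node B: a direct write should now fail with a reservation conflict
    dd if=/dev/zero of=/dev/sdb bs=512 count=1 oflag=direct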
On Feb 16, 2009, at 8:48 AM, Glory Smith wrote:
Hi all,
I need urgent help. I am planning to use a SUSE 11 openais
cluster, and I need to ask one very basic question: how does it handle
data integrity?
The same way heartbeat did (if you're familiar with that).
Check out the section on ST