On the resource's start action, set requires=quorum
I think that should work
> On 23 Jan 2015, at 1:25 pm, Rahim Millious wrote:
>
> To clarify, I want to do this only for the one resource and keep the others
> running. I am currently using ignore quorum loss. Sorry for being vague in my
> previous email.
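A minimal crmsh sketch of the requires=quorum suggestion above, assuming a
hypothetical resource name and agent (nothing else about the actual primitive
appears in this thread):

# Only requires=quorum on the start operation is the suggestion from the
# thread; agent name, timeouts and monitor interval are placeholders.
primitive mySshDependentRsc ocf:custom:myAgent \
        op start interval=0 timeout=60s requires=quorum \
        op stop interval=0 timeout=60s \
        op monitor interval=30s timeout=30s

If a given crmsh release rejects requires= on an op line, the same attribute
can still be set on the corresponding <op> element by editing the CIB XML
directly.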
To clarify, I want to do this only for the one resource and keep the others
running. I am currently using ignore quorum loss. Sorry for being vague in my
previous email.
On January 22, 2015 6:21:34 PM MST, Andrew Beekhof wrote:
>
>> On 23 Jan 2015, at 8:50 am, Rahim Millious wrote:
>>
>> Hello,
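"Ignore quorum loss" presumably refers to the cluster-wide property, roughly
(crmsh syntax):

# Keep the rest of the cluster running when quorum is lost
property no-quorum-policy=ignore

Combined with requires=quorum on this one resource's start action, as
suggested above, that is the shape of the setup being discussed: the other
resources stay up through quorum loss while this single resource is gated on
quorum.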
> On 23 Jan 2015, at 8:50 am, Rahim Millious wrote:
>
> Hello,
>
> I am hoping someone can help me. I have a custom resource agent which
> requires access (via ssh) to the passive node in order to function correctly.
> Is it possible to stop the resource when quorum is lost and restart it when
> it is regained? Thanks.
< snip >
It sounds like default-resource-stickiness does not kick in; with a default
resource-stickiness of 1 that is expected (10 > 6). The documentation says
default-resource-stickiness is deprecated, so maybe it is ignored in your
version altogether? What does "ptest -L -s" show?
I see now that defaul
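For reference, a hedged sketch of the two knobs being discussed, in crmsh
syntax with an illustrative value:

# Per-resource-defaults stickiness, instead of the deprecated
# default-resource-stickiness cluster property
rsc_defaults resource-stickiness=100

# Show the allocation scores computed from the live CIB
ptest -L -s
# (crm_simulate -sL reports the same scores on builds that no longer ship ptest)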
Hello,
I am hoping someone can help me. I have a custom resource agent which requires
access (via ssh) to the passive node in order to function correctly. Is it
possible to stop the resource when quorum is lost and restart it when it is
regained? Thanks.
Rahim
On Thu, 22-01-2015 at 11:20 +0100, Dejan Muhamedagic wrote:
> Hi,
>
> On Wed, Jan 21, 2015 at 05:14:48PM +0100, A.Rubio wrote:
> > Hello
> >
> > I have
> >
> > CentOS 7
> > Pacemaker 1.1.10-32.el7_0.1
> > Corosync Cluster Engine, version '2.3.3'
> > libvirtd (libvirt) 1.1.1
> >
> > with a virtual machine defined
On Wed, Jan 21, 2015 at 11:06 PM, brook davis wrote:
> Hi,
>
> I've got a master-slave resource and I'd like to achieve the following
> behavior with it:
>
> * Only ever run (as master or slave) on 2 specific nodes (out of N possible
> nodes). These nodes are predetermined and are specified at re
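The message is cut off here, but for the visible requirement (only ever run,
as master or slave, on 2 predetermined nodes out of N), a hedged crmsh sketch
with hypothetical resource and node names would be something like:

# Hypothetical master/slave resource limited to two instances
ms msExample rscExample \
        meta master-max=1 clone-max=2 notify=true
# Allow it on the two predetermined nodes...
location msExample-on-nodeA msExample 100: nodeA
location msExample-on-nodeB msExample 100: nodeB
# ...and ban it from every other node
location msExample-nowhere-else msExample \
        rule -inf: #uname ne nodeA and #uname ne nodeB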
Hi,
On Wed, Jan 21, 2015 at 05:14:48PM +0100, A.Rubio wrote:
> Hello
>
> I have
>
> CentOS 7
> Pacemaker 1.1.10-32.el7_0.1
> Corosync Cluster Engine, version '2.3.3'
> libvirtd (libvirt) 1.1.1
>
> with a virtual machine defined
>
> Resource: srvdev02 (class=ocf provider=heartbeat type=VirtualDomain)
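The resource definition is truncated above; purely as a reference point, a
heartbeat:VirtualDomain primitive in crmsh typically looks roughly like the
following (the config path, hypervisor URI and timeouts are assumptions, not
taken from the poster's setup):

primitive srvdev02 ocf:heartbeat:VirtualDomain \
        params config="/etc/libvirt/qemu/srvdev02.xml" \
               hypervisor="qemu:///system" \
        op start timeout=120s interval=0 \
        op stop timeout=120s interval=0 \
        op monitor interval=30s timeout=30s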
Michael Schwartzkopff writes:
>
> On Thursday, 22 January 2015 at 10:03:38, E. Kuemmerle wrote:
> > On 21.01.2015 11:18 Digimer wrote:
> > > On 21/01/15 08:13 AM, Andrea wrote:
> > >> > Hi All,
> > >> >
> > >> > I have a question about stonith
> > >> > In my scenario, I have to create a 2-node cluster
On Thursday, 22 January 2015 at 10:03:38, E. Kuemmerle wrote:
> On 21.01.2015 11:18 Digimer wrote:
> > On 21/01/15 08:13 AM, Andrea wrote:
> >> > Hi All,
> >> >
> >> > I have a question about stonith
> >> > In my scenario, I have to create a 2-node cluster, but I don't have any
> >> > hardware device for stonith.
Andrew Beekhof writes:
>>> you'll want a recurring monitor with role=Stopped
>>>
>>
>> How is it done?
>
> I don't know the crmsh syntax. Sorry
>
>>
>> I've tried on 1.1.12 with:
>> primitive Nginx lsb:nginx \
>> op monitor interval=2s \
>> op monitor interval=3s role=Stopped
>>
>>
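The snippet ends here, but the quoted crmsh attempt already has the right
shape for this technique: two recurring monitor operations with distinct
intervals, one of them with role=Stopped so the cluster also checks, on the
nodes where the resource should be stopped, that it is in fact not running.
A sketch with explicit timeouts added (the timeout values are illustrative):

primitive Nginx lsb:nginx \
        op monitor interval=2s timeout=20s \
        op monitor interval=3s timeout=20s role=Stopped

Note that the two monitors must use different intervals; Pacemaker identifies
recurring operations by action name plus interval.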
On 21.01.2015 11:18 Digimer wrote:
> On 21/01/15 08:13 AM, Andrea wrote:
>> > Hi All,
>> >
>> > I have a question about stonith
>> > In my scenario, I have to create a 2-node cluster, but I don't have any
>> > hardware device for stonith. No APC, no IPMI, etc.; none of the ones in the
>> > list returned
>> > b