hi,
found several threads about that in the archive - no solution for me.
---
i'm running pacemaker 1.1.12 (+ corosync 2.3.3 on sles 11.3), compiled
with esmtp support for crm_mon.
crm_mon --help shows the mailing parameters. i tried 2 ClusterMon agent
implementations for notification: one with -T/-F/-H and one with -E.
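For illustration, the two ClusterMon variants could look roughly like this
(a sketch only; resource names, addresses, mail host and script path are
placeholders, not values from this setup):
primitive ClusterMon-SMTP ocf:pacemaker:ClusterMon \
        params extra_options="-T admin@example.com -F crm_mon@example.com -H mailhost.example.com" \
        op monitor interval="60s"
primitive ClusterMon-ExtAgent ocf:pacemaker:ClusterMon \
        params extra_options="-E /usr/local/bin/crm_notify.sh" \
        op monitor interval="60s"
the -T/-F/-H variant relies on crm_mon being built with esmtp support; the
-E variant hands each cluster event to the external script via the
CRM_notify_* environment variables.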
From: Andrew Beekhof
To: The Pacemaker cluster resource manager
Date: 30.07.2014 10:54
Subject: Re: [Pacemaker] Pacemaker 1.1.12 - crm_mon email notification
On 30 Jul 2014, at 6:08 pm, philipp.achmuel...@arz.at wrote:
>> hi,
>>
>> found several threads about that in the archive
hi,
is it possible to set up different move types for VMs?
- INFINITY colocation with the pingd clone -> when it fails on one node,
live-migrate the VM(s) to the remaining nodes
- INFINITY colocation with the LVM clone -> when it fails on one node,
cold-migrate (stop/restart) the VM(s) on the remaining nodes
(see the sketch below)
thank you!
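For illustration, the two constraints could be written roughly like this
(a sketch; vm1, cl-pingd, cl-lvm and the config path are placeholders, and
as far as I know whether a move is a live migration or a stop/start is
decided by the resource's allow-migrate meta attribute and agent
capabilities, not by the constraint that triggers the move):
primitive vm1 ocf:heartbeat:VirtualDomain \
        params config="/etc/libvirt/qemu/vm1.xml" \
        meta allow-migrate="true"
colocation col-vm1-ping inf: vm1 cl-pingd
colocation col-vm1-lvm inf: vm1 cl-lvm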
hi,
configuration and behavior:
$ crm configure show
node lnx0012a \
        attributes standby="off"
node lnx0012b \
        attributes standby="on"
primitive pingd ocf:heartbeat:pingd \
        params host_list="10.1.236.100" multiplier="100" \
        op monitor interval="15s" timeout="20s"
pr
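The usual companion to such a pingd primitive is a clone plus a
connectivity-based location rule, roughly like this (a sketch; the clone
name, constraint name and vm1 are placeholders):
clone cl-pingd pingd
location loc-vm1-connected vm1 \
        rule -inf: not_defined pingd or pingd lte 0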
I am out of the office from 23.03.2010. You can reach me again on
25.03.2010.
I will answer your message after my return. In urgent cases please contact
my colleague Sammer Bernhard (ext. 1443) or the UNIX hotline, ext. 1444.
hi,
i have the following configuration:
node lnx0047a
node lnx0047b
primitive lnx0101a ocf:heartbeat:KVM \
        params name="lnx0101a" \
        meta allow-migrate="1" target-role="Started" \
        op migrate_from interval="0" timeout="3600s" \
        op migrate_to interval="0" timeout="3600s" \
>> any ideas on the "unrunnable" problem?
>That's expected: one can't run operations on a node which is offline.
i would expect a failover of the resources to node lnx0047b. since
lnx0047a is stonith'ed, the resources should start on the remaining node.
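One way to see which actions the policy engine schedules and why some are
marked unrunnable is crm_simulate against the live CIB (a sketch):
# show the current transition plus allocation scores
crm_simulate -sL
# replay the transition with lnx0047a forced down
crm_simulate -SL --node-down lnx0047a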
>> any ideas on the stonith problem?
> We'd ne
i removed the clone and set the global cluster property stonith-timeout.
the nodes need about 3-5 minutes to start up after they get "shot".
i did some more tests and found out that if the node which runs the
resource sbd_fence gets "shot", the remaining node sees the stonith
resource online on both
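For reference, the property is set like this; with sbd the usual guidance
is to make stonith-timeout larger than the msgwait timeout stored on the
sbd device (the value and device path below are placeholders):
# check the timeouts written to the sbd device
sbd -d /dev/disk/by-id/<sbd-device> dump
# set the global fencing timeout accordingly
crm configure property stonith-timeout="300s"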
hi,
any recommendation/documentation for a reliable fencing implementation on
a multi-node cluster (4 or 6 nodes on 2 sites)?
i am thinking of implementing multiple node-fencing devices for each host
to stonith the remaining nodes on the other site? (see the sketch below)
thank you!
Philipp
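For reference, per-node fencing levels can be expressed in crmsh roughly
like this, so each host has a primary device plus a fallback (a sketch;
node and stonith resource names are placeholders for whatever devices
exist per site):
fencing_topology \
        node-site1-a: stonith-ipmi-1a stonith-sbd \
        node-site2-a: stonith-ipmi-2a stonith-sbd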
hi,
From: Dejan Muhamedagic
To: The Pacemaker cluster resource manager
Date: 28.10.2014 16:45
Subject: Re: [Pacemaker] fencing with multiple node cluster
>
>
>Hi,
>
>On Tue, Oct 28, 2014 at 09:51:02AM -0400, Digimer wrote:
>>> On 28/10/14 05:59 AM, philipp.achmuel...@arz.at w
hi,
how to clean up the cib on a node after an unexpected system halt?
the failed node still thinks it is running the VirtualDomain resource,
which is already running on another node in the cluster (successful
takeover). executing "pcs cluster start" -
Apr 8 13:41:10 lnx0083a daemon:info lnx0083a
VirtualDomain(lnx0
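The stale state for that resource is usually cleared with crm_resource or
pcs (a sketch; the VirtualDomain resource name is truncated above, so it
is shown as a placeholder):
# clear the failed/stale state for one resource on the rejoining node
crm_resource --cleanup --resource <vm-resource> --node lnx0083a
# or with pcs
pcs resource cleanup <vm-resource>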
>On Thursday, 9 April 2015, 10:27:51, you wrote:
>(...)
> why does pacemaker try to move the VM to the joining node?
> ...
(...)
>
> role="Started" rsc="lnx0106a" score="-INFINITY"/>
> ...
>You ordered pacemaker to do so, probably by a "crm resource migrate"
>command.
>Location constraints tha
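For completeness, the administrative constraint left behind by a migrate
is normally removed again like this (a sketch):
# drop the -INFINITY location constraint created by 'crm resource migrate'
crm resource unmigrate lnx0106a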
From: Michael Schwartzkopff
To: The Pacemaker cluster resource manager
Date: 08.04.2015 17:12
Subject: Re: [Pacemaker] update cib after fence
On Wednesday, 8 April 2015, 15:03:48, philipp.achmuel...@arz.at wrote:
> hi,
>
> how to clean up the cib on a node after an unexpected system