Hi Andreas!
On 15.05.2013 22:55, Andreas Kurz wrote:
On 2013-05-15 15:34, Klaus Darilion wrote:
On 15.05.2013 14:51, Digimer wrote:
On 05/15/2013 08:37 AM, Klaus Darilion wrote:
primitive st-pace1 stonith:external/xen0 \
params hostlist="pace1" dom0="xentest1"
On 15.05.2013 14:51, Digimer wrote:
On 05/15/2013 08:37 AM, Klaus Darilion wrote:
primitive st-pace1 stonith:external/xen0 \
params hostlist="pace1" dom0="xentest1" \
op start start-delay="15s" interval="0"
Try;
primitive st-pace1
Hi!
I have a 2-node cluster: a simple test setup with an
ocf:heartbeat:IPaddr2 resource, using xen VMs and stonith:external/xen0.
Please see the complete config below.
Basically everything works fine, except in the case of broken corosync
communication between the nodes (simulated by shuttin
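For reference, the pattern hinted at by the quoted start-delay is to delay one of the two fencing resources so that, on a split brain, both nodes do not shoot each other at the same time. A minimal sketch (not the actual suggestion from the thread, which is cut off above; the second node's name and the location constraints are assumptions):
primitive st-pace1 stonith:external/xen0 \
        params hostlist="pace1" dom0="xentest1" \
        op start start-delay="15s" interval="0"
primitive st-pace2 stonith:external/xen0 \
        params hostlist="pace2" dom0="xentest1"
location l-st-pace1 st-pace1 -inf: pace1
location l-st-pace2 st-pace2 -inf: pace2
The location constraints keep each fencing resource off the node it is meant to fence.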
Just for the record: I had forgotten to set up an "order" constraint to
start the filesystem after the promotion of the master.
order drbd_before_grp_database inf: ms_drbd0:promote grp_database:start
regards
Klaus
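For completeness, such an order constraint is usually paired with a colocation constraint so that the group also runs on the DRBD master. A sketch reusing the resource names from the order line above (not quoted from the thread):
colocation grp_database_on_drbd_master inf: grp_database ms_drbd0:Master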
On 09.06.2011 16:18, Klaus Darilion wrote:
>
>
> On 09.06.
On 09.06.2011 01:05, Anton Altaparmakov wrote:
> Hi Klaus,
>
> On 8 Jun 2011, at 22:21, Klaus Darilion wrote:
>> Hi!
>>
>> Currently I have a 2 node cluster and I want to add a 3rd node to use
>> quorum to avoid split brain.
>>
>> The service (DRBD
Klaus Darilion wrote:
Hi!
Currently I have a 2 node cluster and I want to add a 3rd node to use
quorum to avoid split brain.
The service (DRBD+DB) should only run either on node1 or node2. Node3
can not provide the service - it should just help the other nodes to
find out if their network is
Hi!
Currently I have a 2 node cluster and I want to add a 3rd node to use
quorum to avoid split brain.
The service (DRBD+DB) should only run either on node1 or node2. Node3
can not provide the service - it should just help the other nodes to
find out if their network is broken or the other node's
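One common way to let the third node contribute only quorum is a -INF location constraint on the service, or simply keeping that node permanently in standby. A sketch (the resource and node names are assumptions, not taken from the post):
location db_not_on_node3 grp_database -inf: node3
# alternatively, keep node3 in standby so it only votes for quorum:
crm node standby node3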
Hi!
Recently I got hit by running out of inodes due to too many files in
/var/lib/pengine.
I wonder, are there other hidden stumbling blocks that may hit me?
Thanks
Klaus
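The number of policy-engine files kept in /var/lib/pengine can be capped with cluster properties, which avoids the inode problem described above. A sketch; the limits are arbitrary example values:
crm configure property pe-input-series-max="100"
crm configure property pe-warn-series-max="100"
crm configure property pe-error-series-max="100"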
On 14.03.2011 12:49, Pavel Levshin wrote:
> On 14.03.2011 12:27, Klaus Darilion wrote:
>>>> 2. before adding the IP address, it will delete the IP address if the
>>>> address is already configured (on any interface, with any netmask).
>>>> Thus
On 12.03.2011 10:36, Pavel Levshin wrote:
> 10.03.2011 21:25, Klaus Darilion:
>> 2. before adding the IP address, it will delete the IP address if the
>> address is already configured (on any interface, with any netmask). Thus
>> the "add" will always work.
Hi!
I wonder what a proper value for "dampen" would be. Dampen is documented as:
# attrd_updater --help|grep dampen
-d, --delay=value The time to wait (dampening) in seconds further
changes occur
So, I would read this as the delay to forward changes, e.g. to not
trigger fail-over on the f
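In the context of a ping/pingd resource, dampen is the time attrd waits before pushing the changed connectivity attribute into the CIB, so that a short blip does not immediately trigger a failover. A minimal sketch (the host_list value is a placeholder):
primitive p_ping ocf:pacemaker:ping \
        params host_list="192.0.2.1" multiplier="1000" dampen="30s" \
        op monitor interval="10s"
clone cl_ping p_ping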
Hi!
For maintenance reasons (e.g. updating pacemaker) it might be necessary
to shut down pacemaker. But in such cases I want the services to keep
running.
Is it possible to shut down pacemaker but keep the current service
state, i.e. all services should keep running on their current node?
th
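One approach is to put the cluster into maintenance mode before stopping the stack, so the resources are left untouched. A sketch, assuming a release that already supports the maintenance-mode property (on older versions, marking all resources unmanaged via rsc_defaults is-managed="false" serves the same purpose):
crm configure property maintenance-mode="true"
# stop the cluster stack (corosync/pacemaker init scripts, depending on the setup),
# do the maintenance, start the stack again, then:
crm configure property maintenance-mode="false"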
I fixed a bug in the patch.
regards
Klaus
On 10.03.2011 19:25, Klaus Darilion wrote:
> On 08.03.2011 13:32, Klaus Darilion wrote:
>> Hi!
>>
>> Instead of adding a virtual IP address to an interface
>> (ocf:heartbeat:IPaddr2), how do I manage a physical interfac
On 08.03.2011 13:32, Klaus Darilion wrote:
> Hi!
>
> Instead of adding a virtual IP address to an interface
> (ocf:heartbeat:IPaddr2), how do I manage a physical interface? Are there
> any special resource scripts?
Hi!
I tried to reuse ocf:heartbeat:IPaddr2 to manage an IP add
Hi!
I installed Pacemaker (Debian Squeeze) but starting fails (see below).
The only difference from my other installation is that this one is
running in an OpenVZ container. Are there any known issues?
Memory shouldn't be the problem: # cat /proc/meminfo
MemTotal:       2433592 kB
MemFree:
Hi!
Are there any Debian packages of the 1.1 branch available somewhere? If not,
are there instructions somewhere on how to build them? I tried but failed.
regards
Klaus
Hi!
Instead of adding a virtual IP address to an interface
(ocf:heartbeat:IPaddr2), how do I manage a physical interface? Are there
any special resource scripts?
Thanks
Klaus
Hi!
I have node A (armani) and node B (bulgari). I have configured a virtual
IP address and Kamailio (SIP proxy) as a cloned resource running on both
nodes. If Kamailio fails two times on the node with the IP address, the
IP address is migrated.
The problem happens when Kamailio fails on both node
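A sketch of the kind of configuration described above (the IP address is a placeholder and lsb:kamailio is assumed for the init script): after two failures the Kamailio instance is banned from that node, and the colocated IP has to move to the other node:
primitive failover-ip ocf:heartbeat:IPaddr \
        params ip="192.0.2.10" \
        op monitor interval="10s"
primitive kamailio lsb:kamailio \
        op monitor interval="10s" \
        meta migration-threshold="2" failure-timeout="60s"
clone cl_kamailio kamailio
colocation ip_with_kamailio inf: failover-ip cl_kamailio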
Hi!
I just updated my Debian box to wheezy to test Pacemaker 1.0.10. dpkg
reports version 1.0.10 but crm_mon reports version 1.0.9. So, which
version is really running? Is 1.0.9 really running, or is this due to the
previously used 1.0.9 version?
# dpkg -l|grep pacem
ii  pacemaker   1.0.1
Hi!
I still suffer from the problem that the fail-count is not cleared after
failure-timeout. After the second failure of Kamailio the IP address is
moved to the other node, and restarted on the previous node after
failure-timeout.
But as the fail-count is not cleared, subsequent failures will cause
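A few commands that help when debugging this (resource and node names taken from the earlier Kamailio post, otherwise assumptions): the fail-count can be inspected and cleared by hand, and failure-timeout is only evaluated when the cluster rechecks its state, so cluster-recheck-interval also plays a role:
crm resource failcount kamailio show armani
crm resource cleanup kamailio
crm configure property cluster-recheck-interval="60s"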
On 14.02.2011 16:43, ruslan usifov wrote:
> I have two internet providers, one of them preferred (the main provider),
> and I want that when the main provider is down, the second provider takes
> over and internet access still works. I see this roughly as follows:
>
> define 2 primitives through "ocf:pacemaker:ping
On 14.02.2011 14:45, Raoul Bhatia [IPAX] wrote:
> On 02/14/2011 02:37 PM, Klaus Darilion wrote:
>> Somehow pacemaker does not react as I would expect it. My config is:
>>
>> primitive failover-ip ocf:heartbeat:IPaddr \
>> params ip="83.136.32.161"
On 14.02.2011 14:45, Raoul Bhatia [IPAX] wrote:
> On 02/14/2011 02:37 PM, Klaus Darilion wrote:
> ps. i'd very much love to see a ocf compatible ra instead of the lsb
> script ;)
But if the LSB script is compliant, will I get better results? I will be
replacing the lsb with an ocf
On 11.02.2011 16:13, Raoul Bhatia [IPAX] wrote:
> On 02/11/2011 03:07 PM, Klaus Darilion wrote:
...
>> Or, how should pacemaker behave if Kamailio on the active node crashes.
>> Shall it just restart Kamailio or shall it migrate the IP address to the
>> other node an
Hi!
I'm using pacemaker on Lenny with packages from lenny-backports and the man
pages are missing. Is there a certain package I need to install to get the man
pages, or is it just a deficiency of the packages?
regards
Klaus
Florian Haas wrote:
On 02/11/2011 07:58 PM, paul harford wrote:
Hi Florian
I had seen apache 2 in one of the pacemaker mails, it may have been a
typo but I just wanted to check, thanks for your help
Welcome. And I noticed I left out a "2" in the dumbest of places in my
original reply, but I tr
Hi Raoul!
On 11.02.2011 16:13, Raoul Bhatia [IPAX] wrote:
> On 02/11/2011 03:07 PM, Klaus Darilion wrote:
...
>> Is there some protection in pacemaker to not endlessly try to restart
>> such a broken service?
>>
>> Or, how should pacemaker behave if Kamailio
On 11.02.2011 11:27, Raoul Bhatia [IPAX] wrote:
> hi,
>
> On 02/09/2011 03:04 PM, Klaus Darilion wrote:
> ...
>>
>>
>>     server1              server2
>>       ip1                  ip2
>>         <-virtual-IP-
Hi!
I wonder if someone can give me some ideas on how to achieve automatic
failover between my redundant load balancers. The setup I want to
implement is:
    server1              server2
      ip1                  ip2
        <-virtual-IP--->
The load-balanc
ns" \
dc-version="1.0.9-74392a28b7f31d7ddc86689598bd23114f58978b" \
cluster-infrastructure="openais" \
expected-quorum-votes="2" \
stonith-enabled="false" \
no-quorum-policy="ignore"
rsc_defaults $id="rsc-opt
On 08.02.2011 18:20, Florian Haas wrote:
> On 02/08/2011 06:03 PM, Klaus Darilion wrote:
>> Now I put server2 online again: # crm node online server2.
>> That means, server2 is online and has ping connectivity, server1 is
>> online and doesn't have ping connectivity.
forgot to mention that armani=server1 and bulgari=server2 (showing some
respect to fashion brands :-)
On 09.02.2011 10:16, Klaus Darilion wrote:
>
>
> On 08.02.2011 19:17, Michael Schwartzkopff wrote:
>> On Tuesday 08 February 2011 18:03:31 Klaus Darilion wrote:
> ...
>
On 09.02.2011 09:48, Florian Haas wrote:
> On 2011-02-09 09:25, Klaus Darilion wrote:
>> On 08.02.2011 19:17, Michael Schwartzkopff wrote:
>>>> Then I changed node server2 to standby: # crm node standby server2.
>>>>>
>>>>>
>
On 08.02.2011 19:17, Michael Schwartzkopff wrote:
> On Tuesday 08 February 2011 18:03:31 Klaus Darilion wrote:
...
>> 3. Now, server1, hosting the virtual-IP, lost connectivity to the ping
>> target (I inserted a firewall rule) -> The virtual-IP stayed with server1.
On 08.02.2011 18:20, Florian Haas wrote:
> On 02/08/2011 06:03 PM, Klaus Darilion wrote:
>> Hi!
>>
>> I'm a newbie and have a problem with a simple virtual-IP config. I want
>> the virtual-IP to be either on server1 or server2, depending on which of
>> the
On 08.02.2011 19:17, Michael Schwartzkopff wrote:
>> Then I changed node server2 to standby: # crm node standby server2.
>> >
>> >
>> > Node server2: standby
>> > Online: [ server1 ]
>> >
>> > failover-ip (ocf::heartbeat:IPaddr): Started server1
>> > Clone Set: clonePing
>> 3. Now, server1, hosting the virtual-IP, lost connectivity to the ping
>> target (I inserted a firewall rule) -> The virtual-IP stayed with server1.
>>
>> Now I put server2 online again: # crm node online server2.
>> That means, server2 is online and has ping connectivity, server1 is
>> online
Hi!
I'm a newbie and have a problem with a simple virtual-IP config. I want
the virtual-IP to be either on server1 or server2, depending on which of
the servers has network connectivity (ping) to the outside. My
config is:
node server1 \
attributes standby="off"
node server2 \
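The usual way to tie the virtual IP to ping connectivity is a ping clone plus a location rule on the pingd attribute. A sketch, not the complete configuration from this thread (IP addresses are placeholders):
primitive failover-ip ocf:heartbeat:IPaddr \
        params ip="192.0.2.10" \
        op monitor interval="10s"
primitive p_ping ocf:pacemaker:ping \
        params host_list="192.0.2.1" multiplier="1000" dampen="15s" \
        op monitor interval="10s"
clone clonePing p_ping
location ip-on-connected-node failover-ip \
        rule -inf: not_defined pingd or pingd lte 0
With this rule the IP may only run on a node that currently reaches the ping target.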