On 02/09/2011 02:48 PM, Stephan-Frank Henry wrote:
> Hello again,
>
> after fixing up my VirtualIP problem, I have been doing some Split Brain
> tests and while everything 'returns to normal', it is not quite what I had
> desired.
>
> My scenario:
> Acive/Passive 2 node cluster (serverA & serve
> Active/Passive 2 node cluster (serverA & serve
2011/1/31 José Luis Rodríguez García:
> Is pacemaker compatible with Solaris 10 x64 using IPV6?
Never tried it, but if you can get corosync or heartbeat running
there, pacemaker should work just fine.
>
> Anyone tried it?
>
>
> Best regards
>
On Wed, Feb 9, 2011 at 2:48 PM, Stephan-Frank Henry wrote:
> Hello again,
>
> after fixing up my VirtualIP problem, I have been doing some Split Brain
> tests and while everything 'returns to normal', it is not quite what I had
> desired.
>
> My scenario:
> Active/Passive 2 node cluster (serverA
You can try crm_mon -o.
That will at least tell you what Pacemaker did with your resource.
At the very least there should be a _monitor_0 operation.
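For reference, a one-shot invocation that also shows fail counts looks like this (flag spellings as in the 1.0-era tools):

  # -1: run once and exit, -o: list operations per node, -f: show fail counts
  crm_mon -1 -o -f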
On Wed, Feb 9, 2011 at 3:58 PM, Testuser SST wrote:
> Hi Vladislav,
>
> no, selinux is disabled right now.
>
> Kind Regards
>
> Sven
>
>
> --
On Wed, Feb 9, 2011 at 3:59 PM, wrote:
> Hi There,
>
> After a network and power shutdown, my LAMP cluster servers were totally
> screwed up.
>
> Now crm status gives me
>
> crm status
>
> Last updated: Wed Feb 9 09:44:17 2011
> Stack: Heartbeat
> Current DC: arsvr2 (bc6bf61d-6b5f-
The "default timeout" referred to here is a cluster-wide one.
The warning indicates that this default is insufficient for the resource
they are configuring.
-ENOTABUG
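For anyone hitting the same warning: the clean fix is to give the affected resource per-operation timeouts that meet the agent's advertised minimums, rather than raising the cluster-wide default. A minimal sketch, with an invented resource and illustrative values:

  primitive p_fs ocf:heartbeat:Filesystem \
      params device="/dev/drbd0" directory="/srv/data" fstype="ext3" \
      op start timeout="60s" \
      op stop timeout="60s" \
      op monitor interval="20s" timeout="40s"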
On Thu, Feb 10, 2011 at 12:49 AM, Simon Horman wrote:
> Hi Pacemaker upstream people,
>
> could someone comment on this bug report.
>
Hi Pacemaker upstream people,
could someone comment on this bug report.
The bug report can be seen at http://bugs.debian.org/612682
CCing 612...@bugs.debian.org should append any responses
to the bug report.
Thanks
- Forwarded message from Michael Prokop -
Date: Wed, 09 Feb 2011 23:34
Hi Pacemaker upstream people,
could someone comment on this bug report.
The bug report can be seen at http://bugs.debian.org/612678
CCing 612...@bugs.debian.org should append any responses
to the bug report.
Thanks
- Forwarded message from Michael Prokop -
Date: Wed, 09 Feb 2011 23:23
Hi,
On 09.02.2011 08:43, u.schmel...@online.de wrote:
I don't use ocf:heartbeat:apache because of my special configuration:
apache is active on both nodes, access is via haproxy, and we have a
URL http://localhost:80 which accesses the local apache via SSL and
delivers a static page (returning
On 02/09/2011 05:47 PM, Pentarh Udi wrote:
> I noticed that pacemaker does not correctly fail over nodes under heavy
> load when they go into deep swap or heavy IO.
>
> I configured more than one node running apache with MaxClients big enough to
> swap out the node, putting some heavy php scripts there (Wo
I noticed that pacemaker does not correctly fail over nodes under heavy load
when they go into deep swap or heavy IO.
I configured more than one node running apache with MaxClients big enough to swap
out the node, putting some heavy PHP scripts there (WordPress ^_^), and then
ran heavy webserver benchmarks.
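Whether Pacemaker reacts to a bogged-down node largely comes down to the resource monitor: if the operation times out, it counts as a failure and recovery kicks in. A minimal sketch under that assumption (resource name and values are illustrative, not from this thread):

  primitive WebServer ocf:heartbeat:apache \
      params configfile="/etc/httpd/conf/httpd.conf" \
      op monitor interval="15s" timeout="30s"
  # Push the resource off the node after two monitor failures
  rsc_defaults migration-threshold="2" failure-timeout="120s"

A node that is merely swapping hard but still answering cluster membership traffic will not be fenced, so resource-level recovery is usually the first lever to try.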
Hi,
I need a technical solution for the following challenge:
I need a logging daemon which accepts remote logs from other servers and
writes them to shared space in an active/passive cluster. If we do this
with syslog-ng not monitored by the cluster, we can't fail over when the
logging continu
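One common shape for this, sketched below with invented names and addresses: put the shared storage, the service IP the remote loggers send to, and the daemon itself into one group so they fail over together (this assumes syslog-ng ships an LSB init script; an OCF agent would be preferable if one is available):

  primitive p_log_fs ocf:heartbeat:Filesystem \
      params device="/dev/drbd0" directory="/var/log/remote" fstype="ext3"
  primitive p_log_ip ocf:heartbeat:IPaddr2 \
      params ip="192.168.1.50" cidr_netmask="24" \
      op monitor interval="10s"
  primitive p_syslog_ng lsb:syslog-ng \
      op monitor interval="30s"
  # Group members start in order and stay colocated: fs, then IP, then daemon
  group g_logging p_log_fs p_log_ip p_syslog_ng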
Forgot to mention that the pair of nodes worked before. And I can still run "crm
configure show". Here is the configuration.
node $id="bc6bf61d-6b5f-4307-85f3-bf7bb11531bb" arsvr2 \
attributes standby="off"
node $id="bf0e7394-9684-42b9-893b-5a9a6ecddd7e" arsvr1 \
attributes standby=
Hi There,
After a network and power shutdown, my LAMP cluster servers were totally
screwed up.
Now crm status gives me
crm status
Last updated: Wed Feb 9 09:44:17 2011
Stack: Heartbeat
Current DC: arsvr2 (bc6bf61d-6b5f-4307-85f3-bf7bb11531bb) - partition with
quorum
Version: 1.0.
Hi Vladislav,
no, selinux is disabled right now.
Kind Regards
Sven
Original Message
> Date: Wed, 09 Feb 2011 16:22:49 +0200
> From: Vladislav Bogdanov
> To: pacemaker@oss.clusterlabs.org
> Subject: Re: [Pacemaker] problem with apache coming up
> 08.02.2011 17:13, Testuser
2011/2/7 Yuusuke IIDA:
> Hi,
>
> I am going to manage virtual machines in a Pacemaker-1.0.10 environment.
>
> However, the problem that two virtual machines are started occurs when a
> "Live Migration" transition graph is canceled partway through.
>
> I show below a procedure to generate the prob
08.02.2011 17:13, Testuser SST wrote:
> Hi,
>
> I'm implementing a two node webserver on CentOS 5 with
> heartbeat/pacemaker and DRBD. The first newly installed node works
> fine, but when the second node becomes active, there seems to be a
> problem with apache starting up. Is there a way to ge
Hi Paul,
I'm not quite sure about the first point, but the second is used by the
resource agent to monitor the health of the httpd/apache process.
Kind Regards
Sven
Original Message
> Date: Tue, 8 Feb 2011 23:05:47 +
> From: paul harford
> To: The Pacemaker cluster
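To make that concrete: on each monitor, the agent fetches a status URL and checks the response against a regex. A hedged sketch of wiring that up explicitly (URL and regex are illustrative; statusurl and testregex are the stock parameter names of ocf:heartbeat:apache):

  primitive Apache ocf:heartbeat:apache \
      params configfile="/etc/httpd/conf/httpd.conf" \
          statusurl="http://localhost/server-status" \
          testregex="</html>" \
      op monitor interval="30s" timeout="20s"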
Hi Raoul,
1. Configuration
node $id="0454dc6c-bcd0-49c4-8c8c-5b8e5e99068d" astinos \
attributes standby="off"
node $id="4f254de7-c369-4066-8d5e-bf7bb2d1f128" daxos \
attributes standby="off"
primitive Apache ocf:heartbeat:apache \
params configfile="/etc/httpd/conf/httpd.c
Hi!
I wonder if someone could give me some ideas on how to achieve automatic
failover between my redundant load balancers. The setup I want to
implement is:
server1            server2
  ip1                ip2
     <-virtual-IP--->
The load-balanc
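A hedged sketch of the usual pattern for this (names and addresses invented): a cloned ping resource writes a connectivity attribute on each node, and a location rule keeps the virtual IP off nodes that lose it:

  primitive failover-ip ocf:heartbeat:IPaddr2 \
      params ip="10.0.0.100" cidr_netmask="24" \
      op monitor interval="10s"
  primitive p_ping ocf:pacemaker:ping \
      params host_list="10.0.0.1" multiplier="100" \
      op monitor interval="15s"
  clone c_ping p_ping
  # Ban the IP from any node whose ping attribute is missing or zero
  location l_ip_on_connected failover-ip \
      rule -inf: not_defined pingd or pingd lte 0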
Hello again,
after fixing up my VirtualIP problem, I have been doing some Split Brain tests
and while everything 'returns to normal', it is not quite what I had desired.
My scenario:
Active/Passive 2 node cluster (serverA & serverB) with Corosync, DRBD & PGSQL.
The resources are configured as Mas
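For readers following along, such a setup usually centers on a DRBD master/slave set that the database depends on; a minimal sketch with invented names (g_pgsql standing in for the database group):

  primitive p_drbd ocf:linbit:drbd \
      params drbd_resource="pgdata" \
      op monitor interval="15s" role="Master" \
      op monitor interval="30s" role="Slave"
  ms ms_drbd p_drbd \
      meta master-max="1" master-node-max="1" clone-max="2" notify="true"
  colocation c_pgsql_on_drbd inf: g_pgsql ms_drbd:Master
  order o_drbd_before_pgsql inf: ms_drbd:promote g_pgsql:start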
Howdy,
> On Mon, 7 Feb 2011 16:36:46 +0100, Dejan Muhamedagic wrote:
> Hi,
>
> On Mon, Feb 07, 2011 at 02:01:11PM +0100, Stephan-Frank Henry wrote:
> > Hello again,
> >
> > I am having some possible problems with Corosync and IPAddr.
> > To be more specific, when I do a /etc/init.d/corosync stop,
On Wed, Feb 9, 2011 at 1:36 PM, Klaus Darilion
wrote:
> Hi!
>
> I managed to solve the problem by using the 'ping' OCF resource instead
> of 'pingd'. Although pingd is deprecated I thought it should work.
It should, but kinda doesn't, and it's not clear why/how we can fix it.
Which is why we ditche
Hi!
I managed to solve the problem by using the 'ping' OCF resource instead
of 'pingd'. Although pingd is deprecated I thought it should work.
Anyway, for the record, my config which seems to work now (some
tweaking of ping checks is still missing):
node server1 \
attributes standby="o
On 08.02.2011 18:20, Florian Haas wrote:
> On 02/08/2011 06:03 PM, Klaus Darilion wrote:
>> Now I put server2 online again: # crm node online server2.
>> That means, server2 is online and has ping connectivity, server1 is
>> online and doesn't have ping connectivity. But the virtual-IP stayed
>
> So I compared the /etc/ais/openais.conf in non-sp1 with
> /etc/corosync/corosync.conf from sp1 and found this bit missing which
> could be quite useful...
>
> service {
> # Load the Pacemaker Cluster Resource Manager
> ver: 0
> name: pacemaker
>
Forgot to mention that armani=server1 and bulgari=server2 (showing some
respect to fashion brands :-)
On 09.02.2011 10:16, Klaus Darilion wrote:
>
>
> On 08.02.2011 19:17, Michael Schwartzkopff wrote:
>> On Tuesday 08 February 2011 18:03:31 Klaus Darilion wrote:
> ...
>
>>> 3. Now, server1,
On 2011-02-09 10:17, jiaju liu wrote:
>
> Hi list
> I find we all use drbd with mysql and pacemaker to realize HA. If I use
> unison to synchronize the database, and then add mysql to the HA cluster. Is it OK?
That sounds like a terrible idea to me.
If you don't want to use DRBD, use Pacemaker managed My
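Presumably the truncated suggestion is Pacemaker-managed MySQL replication via the stock agent; a hedged sketch with invented credentials and names (note the per-role monitor intervals must differ):

  primitive p_mysql ocf:heartbeat:mysql \
      params binary="/usr/sbin/mysqld" \
          replication_user="repl" replication_passwd="secret" \
      op monitor interval="30s" role="Slave" \
      op monitor interval="29s" role="Master"
  ms ms_mysql p_mysql \
      meta master-max="1" clone-max="2" notify="true"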
On 09.02.2011 09:48, Florian Haas wrote:
> On 2011-02-09 09:25, Klaus Darilion wrote:
>> On 08.02.2011 19:17, Michael Schwartzkopff wrote:
Then I changed node server2 to standby: # crm node standby server2.
>
>
> Node server2: standby
> Online: [ server1 ]
>
>
Hi,
On Tue, Feb 1, 2011 at 6:55 PM, paul harford wrote:
> Hi Again :-)
>
> I think my main problem is my location configuration: when I bring down eth0
> on node1 and watch crm_mon -f, the count on node 2 never increases
>
> Could anyone help me out with the pingd / location restraints requi
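One way to see whether the connectivity attribute is being updated at all is to query it directly on each node (this assumes the default attribute name pingd; flag spellings per the 1.0-era crm_attribute):

  # Query the transient, status-section attribute on node2
  crm_attribute -N node2 -n pingd -G -l reboot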
Hi list,
I find we all use drbd with mysql and pacemaker to realize HA. If I use
unison to synchronize the database, and then add mysql to the HA cluster, is it OK?
On 08.02.2011 19:17, Michael Schwartzkopff wrote:
> On Tuesday 08 February 2011 18:03:31 Klaus Darilion wrote:
...
>> 3. Now, server1, hosting the virtual-IP, lost connectivity to the ping
>> target (I inserted a firewall rule) -> The virtual-IP stayed with server1.
>>
>> Now I put server2 on
On 2011-02-09 09:25, Klaus Darilion wrote:
> On 08.02.2011 19:17, Michael Schwartzkopff wrote:
>>> Then I changed node server2 to standby: # crm node standby server2.
Node server2: standby
Online: [ server1 ]
failover-ip (ocf::heartbeat:IPaddr):   Star
On 08.02.2011 18:20, Florian Haas wrote:
> On 02/08/2011 06:03 PM, Klaus Darilion wrote:
>> Hi!
>>
>> I'm a newbie and have a problem with a simple virtual-IP config. I want
>> the virtual-IP to be either on server1 or server2, depending on which of
>> the server is having network connectivity
On 08.02.2011 19:17, Michael Schwartzkopff wrote:
>> Then I changed node server2 to standby: # crm node standby server2.
>> >
>> >
>> > Node server2: standby
>> > Online: [ server1 ]
>> >
>> > failover-ip (ocf::heartbeat:IPaddr):   Started server1
>> >Clone Set: clonePing
The solution was to change the resource constraint to reference a single resource
in the group, and not have the constraint between dlm-clvm-clone and the
group itself; constraining the group directly doesn't work (sketched below).
kr patrik
Mit freundlichen Grüßen / Best Regards
Patrik Rapposch, BSc
System Administration
KNAPP Systemintegrat
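In crm syntax that amounts to roughly the following (dlm-clvm-clone is from the post; the group and member names are invented):

  # Works: constrain the first member of the group against the clone
  order o_clvm_before_fs inf: dlm-clvm-clone p_fs
  colocation c_fs_with_clvm inf: p_fs dlm-clvm-clone
  # Reportedly did not work here: the same constraints against the whole group
  # order o_clvm_before_grp inf: dlm-clvm-clone g_services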
>> 3. Now, server1, hosting the virtual-IP, lost connectivity to the ping
>> target (I inserted a firewall rule) -> The virtual-IP stayed with server1.
>>
>> Now I put server2 online again: # crm node online server2.
>> That means, server2 is online and has ping connectivity, server1 is
>> online
> >> Liang Ma
> >> Contractuel | Consultant | SED Systems Inc.
> >> Ground Systems Analyst
> >> Agence spatiale canadienne | Canadian Space Agency
> >> 6767, Route de l'Aéroport, Longueuil (St-Hubert), QC, Canada, J3Y 8Y9
> >> Tél/Tel : (450) 926-5099 | Téléc/Fax: (450) 926