Hi All,
I have registered this issue in Bugzilla:
* http://bugs.clusterlabs.org/show_bug.cgi?id=5074
I have also attached an hb_report.
Best Regards,
Hideo Yamauchi.
--- On Wed, 2012/6/27, renayama19661...@ybb.ne.jp
wrote:
> Hi All,
>
> We have confirmed a problem with a negative colocation setting.
>
> It
Hi All,
We have confirmed a problem with a negative colocation setting.
It involves a colocation constraint with a with-rsc-role designation.
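For reference, the constraint has this general shape (the resource names below are only illustrative, not the actual configuration):

    <!-- the dependent resource must NOT run where the Master role of the ms resource runs -->
    <rsc_colocation id="col_dummy_not_with_master" rsc="prmDummy"
                    with-rsc="msStateful" with-rsc-role="Master"
                    score="-INFINITY"/>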
We verified it with the following procedure.
Step 1) Start the first node, and send the CIB.
Last updated: Wed Jun 27 22:58:15 2012
Stack: Heartbeat
Current DC: rh62-test1 (4a7c480
On Wed, Jun 27, 2012 at 6:45 AM, Brian J. Murrell wrote:
> So, I have an 18 node cluster here (so a small haystack, indeed, but
> still a haystack in which to try to find a needle) where a certain
> set of (yet unknown, figuring that out is part of this process)
> operations are pooching pacemaker
On Tue, Jun 26, 2012 at 3:30 PM, Brent Harsh wrote:
> Seems like bug http://bugs.clusterlabs.org/show_bug.cgi?id=5040 and an
> earlier thread:
> http://thread.gmane.org/gmane.linux.highavailability.pacemaker/13185/focus=13321
I believe we've finally got to the bottom of this one.
Looks like it w
So, I have an 18 node cluster here (so a small haystack, indeed, but
still a haystack in which to try to find a needle) where a certain
set of (yet unknown, figuring that out is part of this process)
operations are pooching pacemaker. The symptom is that on one or
more nodes I get the following ki
Hi,
Initially I had the following resource setup.
1. Clone -> Group -> Primitives (ocf:pacemaker:controld, ocf:ocfs2:o2cb)
2. Clone -> Primitives (ocf:heartbeat:Filesystem)
3. Group -> Primitives (ocf:heartbeat:IPaddr2, ocf:heartbeat:mysql)
I had a resource colocation where 3, 2 and 1 would all
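A colocation chain of that shape would presumably be expressed roughly like this (IDs and resource names are only a sketch of the setup described above):

    <!-- 2 (Filesystem clone) runs where 1 (controld/o2cb clone) runs -->
    <rsc_colocation id="col_fs_with_o2cb" rsc="cl_fs"
                    with-rsc="cl_dlm_o2cb" score="INFINITY"/>
    <!-- 3 (IPaddr2/mysql group) runs where 2 (Filesystem clone) runs -->
    <rsc_colocation id="col_mysql_with_fs" rsc="grp_mysql"
                    with-rsc="cl_fs" score="INFINITY"/>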
Hello,
I am setting up a 3 node cluster with Corosync + Pacemaker on Ubuntu 12.04
server. Two of the nodes are "real" nodes, while the 3rd is in standby mode as
a quorum node. The two "real" nodes each have two NICs, one that is connected
to a shared LAN and the other that is directly connect
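The quorum node is held in standby with a node attribute, roughly like this (node id and uname here are placeholders):

    <node id="3" uname="quorum-node">
      <instance_attributes id="nodes-3">
        <nvpair id="nodes-3-standby" name="standby" value="on"/>
      </instance_attributes>
    </node>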
On 06/26/2012 12:59 PM, Velayutham, Prakash wrote:
> Hi,
> I have a Corosync (1.3.0-5.6.1) / Pacemaker (1.1.5-5.5.5) cluster
> where I am using a Time-based rule for resource stickiness
> (http://www.clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/s-rules-cluster-options.html).
> Ever
Hi,
I have a Corosync (1.3.0-5.6.1) / Pacemaker (1.1.5-5.5.5) cluster where I am
using a Time-based rule for resource stickiness
(http://www.clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/s-rules-cluster-options.html).
Everything works as expected, except that the resources ge
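For context, the rule follows the pattern from that document, roughly like this (the scores and hours shown are the documented example, not necessarily my exact values):

    <rsc_defaults>
      <meta_attributes id="core-hours" score="2">
        <rule id="core-hour-rule" score="0">
          <date_expression id="nine-to-five-Mon-to-Fri" operation="date_spec">
            <date_spec id="nine-to-five-Mon-to-Fri-spec" hours="9-16" weekdays="1-5"/>
          </date_expression>
        </rule>
        <nvpair id="core-stickiness" name="resource-stickiness" value="INFINITY"/>
      </meta_attributes>
      <meta_attributes id="after-hours" score="1">
        <nvpair id="after-stickiness" name="resource-stickiness" value="0"/>
      </meta_attributes>
    </rsc_defaults>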
On 06/26/2012 03:49 PM, coma wrote:
> Hello,
>
> I am running a 2-node cluster with Corosync & DRBD in active/passive
> mode for MySQL high availability.
>
> The cluster is working fine (failover/failback & replication OK), I have no
> network outage (the network is monitored and I've not seen any failu
On 06/22/2012 05:58 AM, Sergey Tachenov wrote:
> Why is the score for the DBIP -INFINITY on srvplan2? The only INF
> rule in my config is the collocation rule for the postgres group.
This sounds not unlike an issue I'm experiencing. See here:
http://oss.clusterlabs.org/pipermail/pacemaker/2012-
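For what it's worth, a colocation of this shape (names guessed from your description) is enough to produce that score: if the postgres group cannot run on srvplan2, the dependent DBIP inherits -INFINITY there:

    <rsc_colocation id="col_dbip_with_postgres" rsc="DBIP"
                    with-rsc="postgres" score="INFINITY"/>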
Look here
http://www.clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/ch09s03s03.html
:-)
2012/6/26 coma
> Hello,
>
> I am running a 2-node cluster with Corosync & DRBD in active/passive mode
> for MySQL high availability.
>
> The cluster is working fine (failover/failback & replic
On 06/22/2012 04:40 AM, Andreas Kurz wrote:
I took a look at the cib in case2 and saw this in the status for storage02
[the quoted XML status snippet was not preserved in the archive]:
storage02 will not give up the drbd master since it has a higher score than
storage01. This coupled with
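(The master preference appears in the status section as a transient node attribute, something like the following; the resource name and value here are illustrative:)

    <transient_attributes id="storage02">
      <instance_attributes id="status-storage02">
        <nvpair id="status-storage02-master-drbd0"
                name="master-drbd0" value="10000"/>
      </instance_attributes>
    </transient_attributes>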
Hello,
I am running a 2-node cluster with Corosync & DRBD in active/passive mode
for MySQL high availability.
The cluster is working fine (failover/failback & replication OK), I have no
network outage (the network is monitored and I've not seen any failure) but
split-brain occurs very often and I don't
Hello,
I have been testing a classic pacemaker+corosync cluster for virtualization, but
it lacked a cluster filesystem to host the virtual machine config files.
(I was using the guidelines I found at Linbit to set up a virtualization
cluster.)
Therefore I would like to add a GFS2 filesyst
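The rough plan is a cloned controld plus a cloned Filesystem resource; the device and mount point below are placeholders, and the exact controld stack depends on the distribution:

    <clone id="cl_dlm">
      <primitive id="p_dlm" class="ocf" provider="pacemaker" type="controld"/>
    </clone>
    <clone id="cl_fs_gfs2">
      <primitive id="p_fs_gfs2" class="ocf" provider="heartbeat" type="Filesystem">
        <instance_attributes id="p_fs_gfs2-ia">
          <nvpair id="p_fs_gfs2-device" name="device" value="/dev/vg0/lv_gfs2"/>
          <nvpair id="p_fs_gfs2-dir" name="directory" value="/var/lib/libvirt/images"/>
          <nvpair id="p_fs_gfs2-fstype" name="fstype" value="gfs2"/>
        </instance_attributes>
      </primitive>
    </clone>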
On Tue, Jun 26, 2012 at 4:44 PM, Florian Haas wrote:
> On Mon, Jun 25, 2012 at 1:40 PM, Andrew Beekhof wrote:
>> I've added the concept of a 'system service' that expands to whatever
>> standard the local machine supports.
>> So you could say, in xml,
>> and the cluster would use 'lsb' on RHEL
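(The quoted XML snippet did not survive in the archive; presumably it refers to the 'service' resource class, something along these lines:)

    <primitive id="Email" class="service" type="exim"/>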
Gentlemen,
I am having trouble with NFS failover times when the NFS clients are not
located in the same network as the HA NFS server (the connection
between the nfs client and the nfs server is routed and not switched).
The HA NFS server is a Pacemaker+Corosync cluster which consists of 2
NFS nodes (a
Hi Jiaju,
I found out how to add your address to the CC list:
we need to create a user account on bugs.clusterlabs.org in order to add a
mail address to the CC list.
If necessary, please create a user account.
Sincerely,
Yuichi
--
Yuichi SEINO
METROSYSTEMS CORPORATION
E-mail:seino.clust...@gmail.com