On Thu, May 27, 2010 at 2:54 PM, Vadym Chepkov wrote:
>
> On May 27, 2010, at 7:21 AM, Andrew Beekhof wrote:
>
>> On Wed, May 26, 2010 at 9:07 PM, Vadym Chepkov wrote:
>>> Hi,
>>>
>>> What would be the proper way to shut down members of a two-node cluster in
>>> case of a power outage?
>>> I assume
On Thu, May 27, 2010 at 6:41 PM, Gianluca Cecchi wrote:
> On Tue, May 25, 2010 at 3:39 PM, Dejan Muhamedagic wrote:
>>
>> Hi,
>> [snip]
>> > So I presume the problem could be caused by the fact that the second
>> > part is
>> > a clone and not a resource? or a bug?
>> > I can eventually send th
On Tue, May 25, 2010 at 3:39 PM, Dejan Muhamedagic wrote:
> Hi,
>
> On Thu, May 20, 2010 at 06:09:01PM +0200, Gianluca Cecchi wrote:
>> Hello,
>> manual for 1.0 (and 1.1) reports this for Advisory Ordering:
>>
>> On the other hand, when score="0" is specified for a constraint, the
>> constraint is
Basically atomic.
Changes are immediately sync'd to the other hosts as part of the CIB
protocol (not at the file level).
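As an illustration (the resource name "vip" and the IP value below are placeholders, not from the thread), a parameter update issued on any node is committed to the CIB and propagated to the other nodes by the cluster itself:
crm_resource --resource vip --set-parameter ip --parameter-value 192.168.122.10
crm_resource --resource vip --get-parameter ip   # any node reads back the same value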
On Thu, May 27, 2010 at 3:58 PM, Jean-Francois Le Breton wrote:
> Hello,
>
> We are wondering if setting a value to a named parameter of a resource
> through the crm_resource
On May 27, 2010, at 11:40 AM, Diego Remolina wrote:
> Is there any workaround for this? Perhaps a slightly older version of the
> rpms? If so where do I find those?
chkconfig corosync off
chkconfig heartbeat on
Unfortunately, that's what I had to do on PPC64 RHEL5
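A rough sketch of the full switch on one node, assuming the stock RHEL5 init scripts and that the node's resources are already stopped:
service corosync stop      # if the non-working stack is still running
chkconfig corosync off
chkconfig heartbeat on
service heartbeat start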
>
> I cannot get the opens
On 05/27/2010 10:20 AM, Gianluca Cecchi wrote:
On Thu, May 27, 2010 at 5:50 PM, Steven Dake wrote:
On 05/27/2010 08:40 AM, Diego Remolina wrote:
Is there any workaround for this? Perhaps a slightly older version of
the rpms? If so where do I
On Thu, May 27, 2010 at 5:50 PM, Steven Dake wrote:
> On 05/27/2010 08:40 AM, Diego Remolina wrote:
>
>> Is there any workaround for this? Perhaps a slightly older version of
>> the rpms? If so where do I find those?
>>
>>
> Corosync 1.2.1 doesn't have this issue apparently. With corosync 1.2.1,
On Tue, May 25, 2010 at 3:39 PM, Dejan Muhamedagic wrote:
> Hi,
> [snip]
> > So I presume the problem could be caused by the fact that the second part
> is
> > a clone and not a resource? or a bug?
> > I can eventually send the whole config.
>
> Looks like a bug to me. Clone or resource, constrain
On 05/27/2010 08:40 AM, Diego Remolina wrote:
Is there any workaround for this? Perhaps a slightly older version of
the rpms? If so where do I find those?
Corosync 1.2.1 doesn't have this issue apparently. With corosync 1.2.1,
please don't use "debug: on" keyword in your config options. I a
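In corosync.conf that means leaving debug disabled in the logging section, for example (a minimal excerpt; other options left at their defaults):
logging {
        to_syslog: yes
        debug: off
}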
Ok,
So for now the fix seems to be to remove the latest version of corosync:
1.2.2-1.1 and install the older rpms 1.2.1-1
Here is what I did:
[r...@phys-ha01 corosync]# rpm -e --nodeps corosynclib corosync
[r...@phys-ha01 corosync]# rpm -ivh
http://www.clusterlabs.org/rpm/epel-5/x86_64/corosy
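Afterwards it is worth confirming the downgrade took (a generic check, nothing specific to this setup):
rpm -q corosync corosynclib    # both should report 1.2.1-1
service corosync start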
Is there any workaround for this? Perhaps a slightly older version of
the rpms? If so where do I find those?
I cannot get the opensuse-ha rpms any more so I am stuck with a
non-functioning cluster.
Diego
Steven Dake wrote:
This is a known issue on some platforms, although the exact cause is
unknown. I have tried RHEL 5.5 as well as CentOS 5.5 with clusterlabs repo
rpms and been unable to reproduce. I'll keep looking.
Regards
-steve
On 05/27/2010 06:07 AM, Diego Remolina wrote:
Hi,
I was running the old rpms from
On 2010-05-27, at 10:21 AM, Florian Haas wrote:
>
>
> On 2010-05-27 16:12, daniel qian wrote:
>>
>> On 2010-05-27, at 5:06 AM, Florian Haas wrote:
>>
>>> On 2010-05-26 16:26, daniel qian wrote:
I followed this link to set up a two-node cluster on Ubuntu 10.04 -
https://wiki.ubuntu.co
On 2010-05-27 16:12, daniel qian wrote:
>
> On 2010-05-27, at 5:06 AM, Florian Haas wrote:
>
>> On 2010-05-26 16:26, daniel qian wrote:
>>> I followed this link to set up a two-node cluster on Ubuntu 10.04 -
>>> https://wiki.ubuntu.com/ClusterStack/LucidTesting#Pacemaker,%20drbd8%20and%20OCFS2%2
On 2010-05-27, at 5:06 AM, Florian Haas wrote:
> On 2010-05-26 16:26, daniel qian wrote:
>> I followed this link to set up a two-node cluster on Ubuntu 10.04 -
>> https://wiki.ubuntu.com/ClusterStack/LucidTesting#Pacemaker,%20drbd8%20and%20OCFS2%20or%20GFS2
>>
>> Everything is working fine except
Hello,
We are wondering if setting a value to a named parameter of a resource
through the crm_resource command is an atomic operation across the cluster.
In other words, is there any underlying consensus protocol (such as Paxos),
or are the CIB modifications "lazy replicated" on all the
Hi,
I was running the old rpms from the opensuse repo and wanted to change
over to the latest packages from the clusterlabs repo in my RHEL 5.5
machines.
Steps I took
1. Disabled the old repo
2. Set the nodes to standby (two node drbd cluster) and turned off openais (see the sketch below)
3. Enabled the new repo.
4.
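Roughly, step 2 amounts to something like this on each node (a sketch, not the exact commands used):
crm node standby           # with no node name the crm shell acts on the local node
service openais stop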
On May 27, 2010, at 7:21 AM, Andrew Beekhof wrote:
> On Wed, May 26, 2010 at 9:07 PM, Vadym Chepkov wrote:
>> Hi,
>>
>> What would be the proper way to shut down members of a two-node cluster in case
>> of a power outage?
>> I assume as soon as I issue 'crm node standby node-1 reboot' resources will
On Wed, May 26, 2010 at 9:07 PM, Vadym Chepkov wrote:
> Hi,
>
> What would be the proper way to shut down members of a two-node cluster in case
> of a power outage?
> I assume as soon as I issue 'crm node standby node-1 reboot' resources will
> start to fail over to the second node and,
> first of all
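For reference, the second argument in the quoted command is the standby lifetime understood by the crm shell (using the node name from the quoted command):
crm node standby node-1 reboot    # standby only until node-1 next reboots
crm node standby node-1 forever   # standby until it is explicitly cleared
crm node online node-1            # clear standby and bring the node back online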
Hi, Dejan
I registered myself with Bugzilla.
Please confirm it.
http://developerbugs.linux-foundation.org/show_bug.cgi?id=2428
Best Regards,
Yuusuke IIDA
(2010/05/26 19:37), Dejan Muhamedagic wrote:
Hi,
On Wed, May 26, 2010 at 08:31:12AM +0200, Andrew Beekhof wrote:
I think the shell also h
On 2010-05-26 16:26, daniel qian wrote:
> I followed this link to set up a two-node cluster on Ubuntu 10.04 -
> https://wiki.ubuntu.com/ClusterStack/LucidTesting#Pacemaker,%20drbd8%20and%20OCFS2%20or%20GFS2
>
> Everything is working fine except for running MySQL on both nodes with MySQL
> datadir