On Fri, Jan 14, 2011 at 4:59 PM, Bob Haxo wrote:
>> Were there (m)any logs containing the text "crm_abort" ...
>
> Sorry Andrew,
>
> Since I'm testing installations, all of the nodes in the cluster have
> been installed several times since I solved this issue, and the original
> log files are gone.
>
> I did not see "crm_abort" logged, otherwise I would have ...
On Thu, Jan 13, 2011 at 9:31 PM, Bob Haxo wrote:
> Hi Tom (and Andrew),
>
> I figured out an easy fix for the problem that I encountered. However,
> there would seem to be a problem lurking in the code.

Were there (m)any logs containing the text "crm_abort" from the PE in
your history (on the b...
Hi Tom (and Andrew),
I figured out an easy fix for the problem that I encountered. However,
there would seem to be a problem lurking in the code.
Here is what I found. On one of the servers that was online and hosting
resources:

r2lead1:~ # netstat -a | grep crm
Proto RefCnt Flags Type ...
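For reference, a minimal sketch of the kind of comparison being made here,
assuming Pacemaker's IPC sockets live under /var/run/crm on this SLES 11
build. The cleanup step is hypothetical and is not necessarily the fix Bob
found:

  # On a healthy node and on the failed node, compare the CRM sockets:
  netstat -a | grep crm
  ls -l /var/run/crm/

  # Hypothetical cleanup: with the cluster stack stopped, remove stale
  # socket files left over from an unclean shutdown, then restart:
  /etc/init.d/openais stop
  rm -f /var/run/crm/*
  /etc/init.d/openais start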
So, Tom ... how do you get the failed node online?

I've re-installed with the same image that is running on three other
nodes, but it still fails. This node was quite happy for the past 3
months. As I'm testing installs, this and other nodes have been
installed a significant number of times without ...
I don't know. I still have this issue (and it seems that I'm not the
only one...). I'll have a look to see if there are pacemaker updates
available through the zypper update channel (sles11-sp1).

Regards,
Tom
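A quick sketch of that check, assuming a registered SLES 11 SP1 system
with the standard update repositories (package names are the usual SLE
ones):

  # Refresh repository metadata, then look for cluster-stack updates:
  zypper refresh
  zypper list-updates | egrep 'pacemaker|openais|corosync'

  # Or compare installed vs. available versions directly:
  zypper info pacemaker openais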
2011/1/13 Bob Haxo :
> Tom, others,
>
> Please, what was the solution to this issue?
>
> Thanks,
Tom, others,

Please, what was the solution to this issue?

Thanks,
Bob Haxo

On Mon, 2010-09-06 at 09:50 +0200, Tom Tux wrote:
Yes, corosync is running after the reboot. It comes up with the
regular init-procedure (runlevel 3 in my case).
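A minimal sketch of verifying that boot-time start, assuming the SLES 11
sysvinit setup in which the "openais" init script wraps the corosync/
pacemaker stack:

  # Is the stack enabled for the default runlevels?
  chkconfig --list openais

  # Did it actually come up after the reboot?
  /etc/init.d/openais status
  ps -ef | grep -v grep | grep corosync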
2010/9/6 Andrew Beekhof :
> Is corosync running?
On Mon, Sep 6, 2010 at 7:57 AM, Tom Tux wrote:
> No, I don't have such failure messages. In my case, the "Connection to
> our AIS plugin" was established.
>
> The /dev/shm is also not full.

Is corosync running?
No, I don't have such failure messages. In my case, the "Connection to
our AIS plugin" was established.

The /dev/shm is also not full.
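For the /dev/shm check, a minimal sketch (corosync and the Pacemaker
daemons allocate their IPC buffers there, so a full /dev/shm can break
new connections to the stack; segment names vary by version):

  # How full is it?
  df -h /dev/shm

  # Any leftover corosync/pacemaker segments from an unclean shutdown?
  ls -l /dev/shm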
Kind regards,
Tom
2010/9/3 Michael Smith :
> Tom Tux wrote:
>> If I disjoin one clusternode (node01) for maintenance purposes
>> (/etc/init.d/openais stop) and reboot this node, ...
Tom Tux wrote:
> If I disjoin one clusternode (node01) for maintenance purposes
> (/etc/init.d/openais stop) and reboot this node, it will not rejoin
> the cluster automatically. After the reboot, I have the following
> error and warning messages in the log:
>
> Sep 3 07:34:15 node01 mgmtd: ...
Hi

If I disjoin one clusternode (node01) for maintenance purposes
(/etc/init.d/openais stop) and reboot this node, it will not rejoin
the cluster automatically. After the reboot, I have the following
error and warning messages in the log:

Sep 3 07:34:09 node01 mgmtd: [9201]: ERROR: ...
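A sketch of the maintenance sequence being described, using the SLES 11
openais init script (node01 is the node under maintenance):

  # On node01: leave the cluster cleanly before maintenance.
  /etc/init.d/openais stop

  # ... do the maintenance work, then reboot ...
  reboot

  # After boot, the init script should start the stack again in
  # runlevel 3 and the node should rejoin on its own. If it does not,
  # start it by hand and watch the log:
  /etc/init.d/openais start
  tail -f /var/log/messages | egrep 'corosync|crmd|mgmtd'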