) and before I start samba and set the
virtual IP.
Hope you can help.
Thanks
Michael Smith
___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker
Project Home: http://www.clusterlabs.org
Michael Schwartzkopff wrote:
Imagine the consequences for a cloud cluster of 30 nodes hosting 100
virtual machines. All machines would be migrated onto the smallest
possible number of physical hosts during the night, when there is no
work to do. The next morning, when work starts, the virtual ma...
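[For reference: the packing behaviour described here is governed by
Pacemaker's placement-strategy cluster property. A minimal sketch with
hypothetical node and resource names; "balanced" spreads load, while
"minimal" consolidates onto as few hosts as possible:]

  crm configure property placement-strategy=balanced

  # Utilization-based placement also needs capacities on the nodes
  # and requirements on the resources, e.g.:
  crm configure node xen-host1 utilization memory=32768
  crm configure primitive vm1 ocf:heartbeat:Xen \
      params xmfile=/etc/xen/vm1 \
      utilization memory=2048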
Bart Coninckx wrote:
By the way: things seem better when I change the monitor timeout to 30
seconds instead of 10 seconds. Very strange though, because the resource
agent basically does an "xm list --long" while monitoring, which takes
less than half a second in a console.
I think sometimes...
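[A sketch of how that longer timeout might be set, assuming a Xen
primitive named vm1:]

  # Raise only the monitor timeout; the check itself normally returns
  # in well under a second, but apparently can stall.
  crm configure primitive vm1 ocf:heartbeat:Xen \
      params xmfile=/etc/xen/vm1 \
      op monitor interval=10s timeout=30s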
On Wed, 22 Sep 2010, Andrew Beekhof wrote:
> On Tue, Sep 21, 2010 at 3:28 PM, Vadym Chepkov wrote:
> > On Tue, Sep 21, 2010 at 9:14 AM, Dan Frincu wrote:
> >> However I don't know of any automatic method to clear the failcount.
> > in pacemaker 1.0 nothing will clean failcount automatically, th...
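[For reference, a sketch of clearing the failcount by hand with the crm
shell; resource and node names are placeholders:]

  # Show, then delete, the failcount for one resource on one node:
  crm resource failcount my_resource show node01
  crm resource failcount my_resource delete node01

  # Cleaning up the resource's status also resets it:
  crm resource cleanup my_resource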
Phil Armstrong wrote:
Hi,
This is my first post to this list, so if I'm doing this wrong, please
be patient. I am using pacemaker-1.1.2-0.2.1 on SLES11 SP1. Thanks in
advance for any help anyone can give me.
> Sep 21 10:35:45 pry crmd: [5601]: info: abort_transition_graph:
> need_abort:59 - Tr...
Andrew Beekhof wrote:
I spoke to Steve, and the only thing he could come up with was that
the group might not be correct.
When the cluster is in this state, please run:
ps x -o pid,euser,ruser,egroup,rgroup,command
And compare it to the "normal" output.
Also, confirm that there is only one...
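[A sketch of the comparison: capture the listing in each state and diff
the two files:]

  # While the cluster is in the bad state:
  ps x -o pid,euser,ruser,egroup,rgroup,command > ps-broken.txt
  # After a clean start, on the same node:
  ps x -o pid,euser,ruser,egroup,rgroup,command > ps-normal.txt
  diff ps-broken.txt ps-normal.txt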
Michael Smith wrote:
On Mon, 6 Sep 2010, Andrew Beekhof wrote:
> Is /dev/shm full (or not mounted) by any chance?
No - I tried clearing that out, too.
> And corosync is actually running?
Yes, it's logging "[IPC ] Invalid IPC credentials." when cib tries to
connect.
For wha...
On Mon, 6 Sep 2010, Andrew Beekhof wrote:
> >> Is /dev/shm full (or not mounted) by any chance?
> >
> > No - I tried clearing that out, too.
>
> And corosync is actually running?
Yes, it's logging "[IPC ] Invalid IPC credentials." when cib tries to
connect.
Mike
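[The "Invalid IPC credentials" error generally means corosync is
rejecting a client whose uid/gid it does not allow. A sketch of the
corosync.conf stanzas that grant access, assuming the usual
hacluster/haclient accounts the Pacemaker daemons run under:]

  # /etc/corosync/corosync.conf
  aisexec {
      user: root
      group: root
  }
  # cib runs as hacluster:haclient, so corosync's IPC must be told
  # to accept that identity:
  uidgid {
      uid: hacluster
      gid: haclient
  }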
Tom Tux wrote:
If I take one cluster node (node01) out of the cluster for maintenance
(/etc/init.d/openais stop) and reboot it, it does not automatically
rejoin the cluster. After the reboot, I see the following error and
warning messages in the log:
Sep 3 07:34:15 node01 mgmtd:...
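[One possible cause, though purely an assumption here, is that the
openais init script is not enabled at boot on that node; on SLES this
can be checked and fixed with:]

  chkconfig openais        # prints the current on/off state
  chkconfig openais on     # enable start at boot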
On Thu, 2 Sep 2010, Andrew Beekhof wrote:
> On Mon, Aug 30, 2010 at 10:04 PM, Michael Smith wrote:
> > Hi,
> >
> > I have a pacemaker/corosync setup on a bunch of fully patched SLES11 SP1
> > systems. On one of the systems, if I /etc/init.d/openais stop, then
> ...
Hi,
I have a pacemaker/corosync setup on a bunch of fully patched SLES11 SP1
systems. On one of the systems, if I /etc/init.d/openais stop, then
/etc/init.d/openais start, pacemaker fails to come up:
Aug 30 15:48:09 xen-test1 cib: [5858]: info: crm_cluster_connect:
Connecting to OpenAIS
Aug...
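[A sketch of the quick checks that come up later in this thread:]

  # Is /dev/shm mounted, and does it have free space?
  mount | grep /dev/shm
  df -h /dev/shm

  # Is corosync actually running?
  ps ax | grep '[c]orosync'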
Tim Serong wrote:
On 8/27/2010 at 03:22 PM, Michael Smith wrote:
I have a pacemaker setup using the Xen resource agent and I've found
something weird during migration: if a VM is in the middle of
live-migrating from node 1 to node 2, and I stop the resource in crm,
pacemaker forgets about the migration and immediately thinks the
resource is stopped...
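[The sequence being described, as crm shell commands; resource and node
names are hypothetical:]

  # Start a live migration, then stop the resource mid-flight:
  crm resource migrate vm1 node2
  crm resource stop vm1    # issued while the migration is still running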
Tim Serong wrote:
On 8/27/2010 at 03:37 PM, Michael Smith wrote:
I think I'd consider it a bug: I've disabled stonith, so dlm shouldn't
wait forever for a fence operation that isn't going to happen.
I reckon if you set the args parameter of your ocf:pacemaker:controld...
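[Presumably along these lines; whether dlm_controld honours -f 0 as a
"disable fencing" switch on this build is an assumption, so check
dlm_controld(8) first:]

  # ocf:pacemaker:controld passes "args" straight to dlm_controld;
  # -f 0 would drop its dependency on fencing completing:
  crm configure primitive dlm ocf:pacemaker:controld \
      params args="-q 0 -f 0" \
      op monitor interval=60s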
On Thu, 26 Aug 2010, Tim Serong wrote:
> > for now I have stonith-enabled="false" in
> > my CIB. Is there a way to make clvmd/dlm respect it?
>
> No. At least, I don't think so, and/or I hope not :)
I think I'd consider it a bug: I've disabled stonith, so dlm shouldn't
wait forever for a fence operation that isn't going to happen.
Hi,
I have a pacemaker setup using the Xen resource agent and I've found
something weird during migration: if a VM is in the middle of
live-migrating from node 1 to node 2, and I stop the resource in crm,
pacemaker forgets about the migration and immediately thinks the resource
is stopped, alt...
On Thu, 26 Aug 2010, Tim Serong wrote:
> > Aug 26 18:31:51 xen-test1 cluster-dlm[8870]: fence_node_time: Node
> > 236655788/xen-test2 has not been shot yet
> Do you have STONITH configured? Note that it says "xen-test2 has not
> been shot yet" and "clvmd ... not fenced". It's just going to si...
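[If STONITH were configured, that wait would eventually resolve. A
minimal sketch of enabling it with one IPMI device per node; addresses
and credentials are placeholders:]

  crm configure primitive fence-xen-test2 stonith:external/ipmi \
      params hostname=xen-test2 ipaddr=192.168.1.12 \
             userid=admin passwd=secret interface=lan
  # Keep the device off the node it is meant to fence:
  crm configure location l-fence-xen-test2 fence-xen-test2 -inf: xen-test2
  crm configure property stonith-enabled=true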
> Xinwei Hu writes:
>
> > That sounds worrying actually.
> > I think this is logged as bug 585419 on SLES' bugzilla.
> > If you can reproduce this issue, it's worth reopening, I think.
I've got a pair of fully patched SLES11 SP1 nodes and they're showing
what I guess is the same behaviour: if...
Xinwei Hu writes:
> 2010/8/16 Rainer Lutz :
> > Xinwei Hu writes:
> >> This sounds a like a fixed issue for SLE11SP1 indeed.
> > Well, it is not fixed with SP1, but with some patch after SP1; I
> > don't know which one, though, as the clvmd is the same for SP1
> > before and after online patches.
> T...