Hi All,
First of all, lrmd does not return the result of a cancellation request
that was made pending.
However, crmd needs the result of the cancellation.
Wouldn't the following correction be necessary?
(The correction does not handle errors properly; it is temporary.)
example :
(s
Replying to myself
On 11/11/2011 12:43 AM, Andreas Kurz wrote:
> On 11/10/2011 07:59 PM, Dmitry Golubev wrote:
> Actually, it's called ManageVE. Well, I wonder what else an RA
> could do other than manage :->
ah, yes .. ManageVE ... thx
>>>
>>> What I meant to say is that i
> ManageVE has migration support using chkpt/restore since resource-agents
> version 1.0.4. But if I understand the OpenVZ migration concept
> correctly ... please someone correct me if I'm wrong! ... there is no need
> for a shared storage.
>
> The vzmigrate script rsyncs complete data, config
On 11/10/2011 07:59 PM, Dmitry Golubev wrote:
Actually, it's called ManageVE. Well, I wonder what else an RA
could do other than manage :->
>>>
>>> ah, yes .. ManageVE ... thx
>>
>> What I meant to say is that it's really hard to recall such a
>> name.
>
> With all due respect, ManageVE
On 11/10/2011 04:28 PM, Senftleben, Stefan (itsc) wrote:
> Thx, I tried to tune the parameter, but the errors still arrived.
> After restarting the corosync daemon on each node the errors disappeared.
Restarting corosync for changes to take effect? Should really not be
necessary.
>
> That brings
Hi there,
I have a problem with pacemaker 1.0.9 on heartbeat on a two-node cluster.
I created some resources and a dummy resource to move around.
Here is the config:
node $id="07c9653f-22c3-47be-8b52-cd8da183ce2e" debian60-clnode1
node $id="f97d4817-7043-4644-bcf0-57159233acea" debian60-clnode2
> > > Actually, it's called ManageVE. Well, I wonder what else an RA
> > > could do other than manage :->
> >
> > ah, yes .. ManageVE ... thx
>
> What I meant to say is that it's really hard to recall such a
> name.
With all due respect, ManageVE does not have live migration. It just manages
st
On 10/11/2011 16:30, Dejan Muhamedagic wrote:
On Tue, Nov 08, 2011 at 07:24:48PM +0100, Mailing List SVR wrote:
On 08/11/2011 16:30, Dejan Muhamedagic wrote:
Hi,
On Mon, Nov 07, 2011 at 06:48:22PM +0100, Mailing List SVR wrote:
Hi,
thanks for your answer
On 07/11/2011 18:27, Dejan
On Thu, Nov 10, 2011 at 04:10:29PM +0100, Andreas Kurz wrote:
> On 11/10/2011 03:45 PM, Dejan Muhamedagic wrote:
> > Hi,
> >
> > On Thu, Nov 10, 2011 at 01:01:12PM +0100, Andreas Kurz wrote:
> >> On 11/10/2011 12:48 PM, Dmitry Golubev wrote:
> >>> Hi,
> >>>
> > I am trying to make a live migra
Sorry to bump an old thread, but I just thought I'd post an update on how I
solved this issue with Pacemaker. It's a bit of a work-around, but it does work.
I configured an "ocf:heartbeat:anything" resource that runs "rndc reload", like
this:
primitive rndc_reload ocf:heartbeat:anything params bin
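The config line above is truncated in the archive; a complete resource of this kind might look roughly like the sketch below. The binfile and cmdline_options values are assumptions for illustration, not taken from the original post:

```
# hypothetical values -- the original post's parameters are truncated
primitive rndc_reload ocf:heartbeat:anything \
    params binfile="/usr/sbin/rndc" cmdline_options="reload"
```

Note that ocf:heartbeat:anything normally expects a long-running daemon; for a one-shot command like "rndc reload" the monitor semantics are a known limitation of this work-around.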
On Tue, Nov 08, 2011 at 07:24:48PM +0100, Mailing List SVR wrote:
> On 08/11/2011 16:30, Dejan Muhamedagic wrote:
>> Hi,
>>
>> On Mon, Nov 07, 2011 at 06:48:22PM +0100, Mailing List SVR wrote:
>>> Hi,
>>>
>>> thanks for your answer
>>>
>>> On 07/11/2011 18:27, Dejan Muhamedagic wrote:
>>>
Thx, I tried to tune the parameter, but the errors still arrived.
After restarting the corosync daemon on each node the errors disappeared.
That brings me to the next question:
--> How do I activate new configurations that I have made with crm-cmd on a
resource?
Regards,
Stefan
-Original Message-
On 11/10/2011 03:45 PM, Dejan Muhamedagic wrote:
> Hi,
>
> On Thu, Nov 10, 2011 at 01:01:12PM +0100, Andreas Kurz wrote:
>> On 11/10/2011 12:48 PM, Dmitry Golubev wrote:
>>> Hi,
>>>
> I am trying to make a live migration of virtual machines (custom resource
> agent). The idea is to suspend
Hi,
On Thu, Nov 10, 2011 at 01:01:12PM +0100, Andreas Kurz wrote:
> On 11/10/2011 12:48 PM, Dmitry Golubev wrote:
> > Hi,
> >
> >>> I am trying to make a live migration of virtual machines (custom resource
> >>> agent). The idea is to suspend the virtual machine on one node, remount a
> >>> files
On 11/10/2011 01:26 PM, Senftleben, Stefan (itsc) wrote:
> Okay, do I need to tune something to get rid of the log entries?
increase the interval or tune the timeout/retry settings for your ping
resources ... I guess the defaults in your ping RA version don't fit
when using such a short interval
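Such tuning could look roughly like this in crm shell; all values here are illustrative assumptions, not taken from the thread:

```
# illustrative values -- pick interval/dampen/timeout to fit your network
primitive pri_ping ocf:pacemaker:ping \
    params host_list="192.168.0.1" dampen="30s" multiplier="1000" \
    op monitor interval="30s" timeout="60s"
clone cln_ping pri_ping meta interleave="true"
```

Raising the monitor interval and the dampen value gives the ping checks room to complete before the next operation is scheduled.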
After discussing with a teammate: a second network connection to the
other datacenter exists, so my question is answered. Thanks to all.
-Original Message-
From: Andreas Kurz [mailto:andr...@hastexo.com]
Sent: Thursday, 10 November 2011 13:26
To: pacemaker@oss.clusterlabs.or
Dejan:
> How long does the monitor take? I didn't see your configuration,
> but if it takes longer than the interval you set for monitor,
> this looks like exactly that case.
I have run monitor several times and it has never taken as long as the
interval. Under heavy load it might take as long, I su
On 11/10/2011 10:21 AM, Senftleben, Stefan (itsc) wrote:
> Hello,
>
> first of all I want to say hello to all recipients of the pacemaker
> mailing list!
>
> I manage a two-node active-passive cluster with an ms-drbd resource and
> a dependent resource.
> Each node is located in a separate datacent
Okay, do I need to tune something to get rid of the log entries?
thx and regards
Stefan
-Original Message-
From: Andreas Kurz [mailto:andr...@hastexo.com]
Sent: Thursday, 10 November 2011 12:53
To: pacemaker@oss.clusterlabs.org
Subject: Re: [Pacemaker] Repeating Log entrie
> Thanks,
> what is your opinion of this:
> I configure a third node and let the quorum votes decide which node is
> active?
>
> Regards
> Stefan
You would need a third host in a third datacenter and two independent links to
both of the other datacenters. So I doubt this will work in your case if y
2011/11/10 Andreas Kurz :
> hello,
>
> On 11/10/2011 10:29 AM, JiaQiang Xu wrote:
>> Hi,
>>
>> I'm running pacemaker 1.0.9.
>> I configured a resource and want it to be stopped if monitor returns
>> OCF_ERR_GENERIC.
>> But I found pacemaker tries to restart the resource if monitor fails.
>> And if
On 11/10/2011 12:48 PM, Dmitry Golubev wrote:
> Hi,
>
>>> I am trying to make a live migration of virtual machines (custom resource
>>> agent). The idea is to suspend the virtual machine on one node, remount a
>>> filesystem (on top of DRBD) on another node and resume the virtual machine.
>>
>> Ma
Hi,
> > I am trying to make a live migration of virtual machines (custom resource
> > agent). The idea is to suspend the virtual machine on one node, remount a
> > filesystem (on top of DRBD) on another node and resume the virtual machine.
>
> May I ask: Why? There is already the VirtualDomain RA
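For reference, a live-migration-capable VirtualDomain resource is typically configured along these lines; the paths and values below are placeholders, not part of the thread:

```
# placeholder paths/values -- adjust for your libvirt setup
primitive vm_example ocf:heartbeat:VirtualDomain \
    params config="/etc/libvirt/qemu/example.xml" \
        hypervisor="qemu:///system" migration_transport="ssh" \
    meta allow-migrate="true" \
    op monitor interval="30s" timeout="30s"
```

The meta attribute allow-migrate="true" is what tells Pacemaker to call the agent's migrate_to/migrate_from actions instead of stop/start when moving the resource.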
hello,
On 11/10/2011 11:35 AM, Senftleben, Stefan (itsc) wrote:
> Hello again,
>
> A clone of ping resources runs on a two-node active-passive cluster.
>
> In corosync.log the following entries reappear at short intervals. Is this
> an error?
as the logs explain, it's an info ;-)
>
> Rega
On 11/09/2011 10:46 PM, Dmitry Golubev wrote:
> Hi,
>
> I am trying to make a live migration of virtual machines (custom resource
> agent). The idea is to suspend the virtual machine on one node, remount a
> filesystem (on top of DRBD) on another node and resume the virtual machine.
May I ask: Wh
hello,
On 11/10/2011 10:29 AM, JiaQiang Xu wrote:
> Hi,
>
> I'm running pacemaker 1.0.9.
> I configured a resource and want it to be stopped if monitor returns
> OCF_ERR_GENERIC.
> But I found pacemaker tries to restart the resource if monitor fails.
> And if I configure monitor on-fail action to
Hello again,
A clone of ping resources runs on a two-node active-passive cluster.
In corosync.log the following entries reappear at short intervals. Is this
an error?
Regards,
Stefan.
log entries:
Nov 10 11:29:38 lxds07 lrmd: [5478]: info: perform_op:2883: postponing all ops
on resource
Hello again,
A clone of ping resources runs on a two-node active-passive cluster.
In corosync.log the following entries reappear at short intervals. Is this
an error?
Nov 10 11:29:38 lxds07 lrmd: [5478]: info: perform_op:2883: postponing all ops
on resource pri_ping:1 by 1000 ms
Nov 10 11:2
Thanks,
what is your opinion of this:
I configure a third node and let the quorum votes decide which node is active?
Regards
Stefan
-Original Message-
From: Michael Schwartzkopff [mailto:mi...@schwartzkopff.org]
Sent: Thursday, 10 November 2011 10:37
To: The Pacemaker cluste
On 11/09/2011 12:04 AM, Attila Megyeri wrote:
> -Original Message-
> From: Attila Megyeri [mailto:amegy...@minerva-soft.com]
> Sent: 8 November 2011 16:13
> To: The Pacemaker cluster resource manager
> Subject: Re: [Pacemaker] Multinode cluster question
>
> -Original Message-
>
> Hello,
>
> first of all I want to say hello to all recipients of the pacemaker
> mailing list!
>
> I manage a two-node active-passive cluster with an ms-drbd resource and a
> dependent resource. Each node is located in a separate datacenter,
> connected by a single dwdm connection; a second network
Hi,
I'm running pacemaker 1.0.9.
I configured a resource and want it to be stopped if monitor returns
OCF_ERR_GENERIC.
But I found pacemaker tries to restart the resource if monitor fails.
And if I configure monitor on-fail action to "stop" manually, the
resource gets stopped on fail.
As I read in
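The per-operation failure behaviour described above can be expressed in crm shell roughly as follows; the resource name and agent are placeholders, the point is the on-fail setting on the monitor operation:

```
# placeholder resource -- the relevant part is on-fail="stop"
primitive res_example ocf:heartbeat:Dummy \
    op monitor interval="30s" on-fail="stop"
```

Without an explicit on-fail, Pacemaker's default reaction to a failed monitor is to recover (restart) the resource, which matches the behaviour the poster observed.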
Hello,
first of all I want to say hello to all recipients of the pacemaker mailing list!
I manage a two-node active-passive cluster with an ms-drbd resource and a
dependent resource.
Each node is located in a separate datacenter, connected by a single
dwdm connection; a second network connection is