Thank you, Tim, for your detailed response. This information was exactly what I
was looking for. Even though parsing "crm_mon -o -1" would be faster initially,
the XML is probably less prone to sudden changes, and parsing it is easier to
fix when changes are made. I think I will take the XML approach.
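For what it's worth, a minimal sketch of the XML route (assumptions: that crm_mon is invoked with an XML output flag spelled "--as-xml", and that the output nests resource elements with id/role attributes and node children with a name attribute; element and attribute names vary between Pacemaker versions, so treat this as illustrative rather than definitive):

```python
import subprocess
import xml.etree.ElementTree as ET

def parse_resource_states(xml_text):
    """Return {resource id: (role, [node names])} from crm_mon XML output."""
    root = ET.fromstring(xml_text)
    states = {}
    for res in root.iter("resource"):
        nodes = [n.get("name") for n in res.findall("node")]
        states[res.get("id")] = (res.get("role"), nodes)
    return states

def cluster_resource_states():
    # One-shot XML snapshot of the whole cluster; the exact flag spelling
    # ("--as-xml") is an assumption -- check crm_mon --help on your version.
    out = subprocess.run(["crm_mon", "-1", "--as-xml"],
                         capture_output=True, text=True, check=True)
    return parse_resource_states(out.stdout)

# Trimmed sample in the layout assumed above:
sample = """<crm_mon><resources>
  <resource id="vip" role="Started"><node name="node01"/></resource>
  <resource id="db" role="Stopped"/>
</resources></crm_mon>"""
print(parse_resource_states(sample))
# -> {'vip': ('Started', ['node01']), 'db': ('Stopped', [])}
```

Keeping the parsing in one small function like this also localizes the breakage if the schema does change between releases.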
On Thu, Aug 11, 2011 at 8:51 PM, Larry Brigman wrote:
> On Thu, Aug 11, 2011 at 5:37 PM, Andrew Beekhof wrote:
>
>> On Fri, Aug 12, 2011 at 1:13 AM, Larry Brigman
>> wrote:
>> > On Wed, Aug 10, 2011 at 10:50 PM, Marco van Putten
>> > wrote:
>> >>
>> >> On 08/10/2011 06:23 PM, David Coulson wrote:
On Thu, Aug 11, 2011 at 9:23 PM, pskrap wrote:
> Andrew Beekhof writes:
>
>>
>> On Thu, Jul 21, 2011 at 4:13 PM, pskrap wrote:
>> > Devin Reade writes:
>> >
>> >>
>> >> --On Wednesday, July 20, 2011 09:19:33 AM pskrap wrote:
>> >>
>> >> > I have a cluster where some of the resources cannot run on the same node.
We'd need access to the files in /var/lib/pengine/ from the DC too.
On Tue, Aug 2, 2011 at 7:08 PM, Matt Anderson wrote:
>
> Hi!
>
> Sorry for the repost, but the links in my previous message expired.
> Now these new ones shouldn't do that. I also added the DC's log at the end
> of this message.
On Fri, Aug 12, 2011 at 12:27 AM, Patrik Plank wrote:
>
> Hello!
>
> I don't really understand?!
> What do you mean by
>
> "should be running everywhere"?
I don't know how to rearrange the words any differently.
Try reading Clusters from Scratch for some background on these kinds of setups.
On Fri, Aug 12, 2011 at 1:13 AM, Larry Brigman wrote:
> On Wed, Aug 10, 2011 at 10:50 PM, Marco van Putten
> wrote:
>>
>> On 08/10/2011 06:23 PM, David Coulson wrote:
>>>
>>> On 8/10/11 11:43 AM, Marco van Putten wrote:
Thanks Andreas. But our managers insist on using Red Hat.
>>>
>>> I
On 08/11/2011 03:05 AM, Sebastian Kaps wrote:
> Hi,
>
> On 04.08.2011, at 18:21, Steven Dake wrote:
>
>>> Jul 31 03:51:02 node01 corosync[5870]: [TOTEM ] Process pause detected
>>> for 11149 ms, flushing membership messages.
>>
>> This process pause message indicates the scheduler doesn't schedule
>> corosync for 11 seconds.
I have discovered that sometimes when migrating a VM, the migration itself will
succeed, but the migrate_from call on the target node will fail, apparently
because the status hasn't settled down yet. This is more likely to happen when
stopping pacemaker on a node, which causes all of its VMs to migrate away.
On 08/11/2011 12:58 PM, Alex Forster wrote:
> I have a two node Pacemaker/Corosync cluster with no resources configured yet.
> I'm running RHEL 6.1 with the official 1.1.5-5.el6 package.
>
> While doing various network configuration, I happened to notice that if I
> issue
> a "service network res
I have a two node Pacemaker/Corosync cluster with no resources configured yet.
I'm running RHEL 6.1 with the official 1.1.5-5.el6 package.
While doing various network configuration, I happened to notice that if I issue
a "service network restart" on one node, then approx. four seconds later issue
On Wed, Aug 10, 2011 at 10:50 PM, Marco van Putten <marco.vanput...@tudelft.nl> wrote:
> On 08/10/2011 06:23 PM, David Coulson wrote:
>
>> On 8/10/11 11:43 AM, Marco van Putten wrote:
>>
>>>
>>> Thanks Andreas. But our managers insist on using Red Hat.
>>>
>>
>> I think the idea would be to take
Hello!
I don't really understand?!
What do you mean by
"should be running everywhere"?
s/dlm-clone/fs-clone/
Thank you.
Best regards,
Patrik
___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo
On 11/08/11 21:51, pskrap wrote:
> Hi,
> I have a setup with tens of resources over several nodes. The interface that is
> used to administer the system has a page showing all resources, their state and
> which node they are running on.
> I can get the information of one resource using 'crm_resource -W
Hi,
I have a setup with tens of resources over several nodes. The interface that is
used to administer the system has a page showing all resources, their state and
which node they are running on.
I can get the information of one resource using 'crm_resource -W -r ' but
running this command over every resource is slow.
Andrew Beekhof writes:
>
> On Thu, Jul 21, 2011 at 4:13 PM, pskrap wrote:
> > Devin Reade writes:
> >
> >>
> >> --On Wednesday, July 20, 2011 09:19:33 AM pskrap wrote:
> >>
> >> > I have a cluster where some of the resources cannot run on the same node.
> >> > All resources must b
Hi,
On 04.08.2011, at 18:21, Steven Dake wrote:
>> Jul 31 03:51:02 node01 corosync[5870]: [TOTEM ] Process pause detected
>> for 11149 ms, flushing membership messages.
>
> This process pause message indicates the scheduler doesn't schedule
> corosync for 11 seconds, which is greater than the fa
I suspect this function is going to become very similar to sort_clone_instance.
I'll commit a slightly more involved patch shortly. Thanks for
pointing out the problem.
2011/8/2 Yuusuke IIDA:
> Hi, Yan
> Hi, Andrew
>
> When I operate the resource, I do not want to affect the
> irre