Thank you very much for your reply, sir. But I am sorry, I did not respond
after I corrected it. I managed it with the pingd resource itself. I am
going to try it with the ping resource as well. I have already downloaded the
resource, since it is not available out of the box in my '2.99' version
available in
Hi Dejan,
It seems to be difficult to make the VNC setting of the guest automatic one
way or another.
The problem can be worked around by assigning a port to each guest.
Thanks,
Hideo Yamauchi.
--- renayama19661...@ybb.ne.jp wrote:
> Hi Dejan,
>
> > IIRC, that port has to do with
Hi Andrew,
Let me ask you one more question.
Our real resource configuration is a little more complicated.
We colocate clones (clnG3dummy1, clnG3dummy2) that do not handle updates of
an attribute such as pingd.
(snip)
Hi Andrew,
Please check it one more time, because I revised the loop processing you
pointed out.
I do not understand in what way this processing is not good.
>>> Not sure about this bit:
>>>
>>> +if(failcount>0) {
>>> + printed = TRUE;
>>> + print_as(": Resource is failur
Hi Andrew,
Thank you for your comment.
> I was suggesting:
>
> with-rsc="clnUMgroup01" score="INFINITY"/>
>
>
>
>operation="not_defined"/>
>attribute="clnPingd" operation="lt" type="integer" value="1"/>
>attribute="clnPingd2" operation="not_defined"/>
>attribu
Hi Andrew,
Thank you for your comment.
> So if I can summarize, you're saying that clnUMdummy02 should not be
> allowed to run on srv01 because the combined number of failures is 6
> (and clnUMdummy02 is a non-unique clone).
>
> And that the current behavior is that clnUMdummy02 continues to run.
>
I would like to know if it is possible to configure more than two resources
within a colocation group.
Simply put, I have 10 Virtual IPs that will need to migrate from Node A to
Node B in the event of any failures. I also need these IPs to all start on
the same Node in the event that both Nodes ar
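One way to get that behavior, sketched here with made-up resource names and
addresses (not taken from this thread), is to put all the virtual IPs into a
single group so they start on the same node and fail over together; a
colocation listing more than two resources (a resource set) works as well:

primitive vip1 ocf:heartbeat:IPaddr2 \
        params ip="192.168.0.101" cidr_netmask="24" \
        op monitor interval="30s"
primitive vip2 ocf:heartbeat:IPaddr2 \
        params ip="192.168.0.102" cidr_netmask="24" \
        op monitor interval="30s"
primitive vip3 ocf:heartbeat:IPaddr2 \
        params ip="192.168.0.103" cidr_netmask="24" \
        op monitor interval="30s"
group g-vips vip1 vip2 vip3
# or, keeping the primitives separate:
# colocation col-vips inf: vip1 vip2 vip3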
The following rules give me the behavior I was looking for:
primitive master ocf:pacemaker:Dummy meta resource-stickiness="INFINITY"
is-managed="true"
location l-master_a master 1: fc12-a
location l-master_b master 1: fc12-b
primitive worker ocf:pacemaker:Dummy
location l-worker_a worker 1: fc12-a
BTW: The order matters in the colocation rule. When I configure:
colocation colo-master_worker -1: master worker
Then "failback" is blocked by the stickiness. In my opinion this is a bug,
but others may have an explanation.
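If I read the colocation semantics right (the first resource listed is placed
relative to the second), the ordering that leaves the sticky resource in place
and lets the other one move would be the one below; this is only a sketch, not
something verified here:

colocation colo-master_worker -1: worker master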
This is the default version that installs on FC12 using the GUI software
On Tue, Mar 23, 2010 at 10:13 PM, Florian Haas wrote:
> On 03/23/2010 10:00 PM, Andrew Beekhof wrote:
>> On Tue, Mar 23, 2010 at 9:47 PM, Matthias Schlarb
>> wrote:
>>> Hi,
>>>
>>>
>>>
>>> I'm aware of the external/vmware plugin and want to ask if someone did
>>> already some tests with it and w
On 03/23/2010 10:00 PM, Andrew Beekhof wrote:
> On Tue, Mar 23, 2010 at 9:47 PM, Matthias Schlarb wrote:
>> Hi,
>>
>>
>>
>> I'm aware of the external/vmware plugin and want to ask if someone did
>> already some tests with it and would share the results.
>
> I was using it a while ago, but it was
On Tue, Mar 23, 2010 at 9:47 PM, Matthias Schlarb wrote:
> Hi,
>
>
>
> I'm aware of the external/vmware plugin and want to ask if someone did
> already some tests with it and would share the results.
I was using it a while ago, but it was mostly just something I used
for release testing.
It could
On Tue, 2010-03-23 at 16:26 +0100, Andrew Beekhof wrote:
> > Killed Corosync on data01, the node goes down as expected and the
> > resource fails over to data02. After data01 is up again the failover-ip
> > moves back to data01.
> >
> > Any ideas?
>
> yes, you told it to:
>
> > location cli-pref
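For reference, a "cli-" location constraint like that is the kind left behind
by "crm resource migrate", and it is what pulls the resource back to its
preferred node. A sketch of the usual cleanup, with assumed resource and
constraint names:

# names below are assumptions, not from this thread
# remove the pinning constraint left by "crm resource migrate":
#   crm resource unmigrate failover-ip
# or delete it directly:
#   crm configure delete cli-prefer-failover-ip
# and give resources a default preference for staying put after recovery:
rsc_defaults resource-stickiness="100"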
Hi,
I'm aware of the external/vmware plugin and want to ask whether anyone has
already run some tests with it and would share the results.
In Q2 this year I will start tests regarding stonith for VMware guests and how
the cluster is affected by vMotion and FT operations. If someone is interested
in su
We'd need a stack trace; that screen dump doesn't help much, I'm afraid.
Try using hb_report to grab the logs etc. It also includes backtraces
from any cores it finds.
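For what it's worth, a typical invocation (the time window and report name
here are only examples) looks like:

hb_report -f "2010-03-23 18:00" /tmp/dlm-segfault

which collects the logs, the CIB/configuration and related system information
from all nodes into a single tarball.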
On Tue, Mar 23, 2010 at 6:55 PM, wrote:
> Hi,
>
> Some tests today...
> If I switch off my network interface (ifdown eth0) or if
2010/3/19 :
> Hi Andrew,
>
>> I've been extremely busy.
>> Sometimes I defer more complex questions until I have time to give
>> them my full attention.
>
> I understand that you are busy.
> Thank you for comment.
>
>> I don't really understand the question here.
>
> Sorry..
> I made a mistake in
Hi,
Some tests today...
If I switch off my network interface (ifdown eth0) or if I kill (-9) corosync,
I get a segfault of dlm_controld and the node reboots.
Is this normal? Are my tests too harsh?
Thanks a lot ;-)
Regards
----- Original Mail -----
From: r...@free.fr
To: pacemaker@oss.clusterla
On Tue, Mar 23, 2010 at 6:11 PM, Andrew Beekhof wrote:
> Hard to say with no logs.
>
> But you should be using ocf:pacemaker:ping instead of ocf:heartbeat:pingd
>
Oh, I see the logs now.
But the part about using ocf:pacemaker:ping stands.
pingd has proven to be quite troublesome and is easily r
Hard to say with no logs.
But you should be using ocf:pacemaker:ping instead of ocf:heartbeat:pingd
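A minimal sketch of the suggested ocf:pacemaker:ping setup (the host_list,
scores, and resource names below are placeholders, not taken from this
thread):

primitive p-ping ocf:pacemaker:ping \
        params host_list="192.168.0.1" multiplier="100" \
        op monitor interval="10s"
clone cl-ping p-ping
# dependent resources can then be tied to connectivity via the "pingd"
# attribute the agent maintains:
location loc-needs-net myResource \
        rule -inf: not_defined pingd or pingd lte 0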
On Tue, Feb 23, 2010 at 9:21 AM, Jayakrishnan wrote:
> Sir,
> Thank you for your advice, but my resources still can't run anywhere as per
> crm_verify -LV.
> My slony resources are dependent of vir-
On Tue, Mar 9, 2010 at 4:04 PM, Erich Weiler wrote:
> Thanks for the reply! Yes, I have checked that my LSB scripts are
> compliant.
I'm pretty sure it fails step 3 of
http://www.clusterlabs.org/doc/en-US/Pacemaker/1.0/html/Pacemaker_Explained/ap-lsb.html
> If this can provide any insight,
2010/3/12 :
> Hi,
>
> We tested a failure of the clone.
>
> I confirmed it with the following procedure.
>
> Step 1) I start all nodes and update cib.xml.
>
>
> Last updated: Fri Mar 12 14:53:38 2010
> Stack: openais
> Current DC: srv01 - partition with quorum
> Version: 1.0.7-049006f172774f
On Fri, Mar 19, 2010 at 2:32 AM, Junko IKEDA wrote:
> Hi,
>
>>> # crm_mon -1
>>>
>>>
>>> Stack: openais
>>> Current DC: cspm01 - partition with quorum
>>> Version: 1.0.8-2a76c6ac04bc stable-1.0 tip
>>> 2 Nodes configured, 2 expected votes
>>> 2 Resources configured.
>>>
>
I plan to move the mailing list to a new server over the weekend.
The new server is already in place, but the move may still involve a short
amount of downtime while I copy the latest data and/or a couple of test emails.
So if you're having trouble sending over the weekend, please be patient.
Go
On Tue, Mar 23, 2010 at 3:58 PM, frank wrote:
>
> Hey Guys,
> wondering why resource stickiness does not work.
>
> node data01 \
> attributes standby="off"
> node data02 \
> attributes standby="off"
> primitive data01-stonith stonith:external/riloe \
> params hostlist="data01"
On Mon, Mar 22, 2010 at 9:18 PM, Alan Jones wrote:
> Well, I guess my configuration is not as common.
> In my case, one of these resources, say resource A, suffers greater
> disruption if it is moved.
> So, after a failover I would prefer that resource B move, reversing the node
> placement.
> Is
On Tue, Mar 23, 2010 at 3:16 PM, Mario Giammarco wrote:
> Hello,
> I am trying again to setup a dual node drbd/iscsi/pacemaker.
>
> Do I need a quorumd?
No. Some are quite vocal about it being busted by design.
But also, it doesn't support corosync.
> If yes ( as I suppose ) I would like to run t
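For a two-node cluster the usual approach, rather than an external quorum
daemon, is to tell Pacemaker to keep running without quorum and rely on
STONITH for safety; a sketch:

property no-quorum-policy="ignore" stonith-enabled="true"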
Hey Guys,
wondering why resource stickiness does not work.
node data01 \
attributes standby="off"
node data02 \
attributes standby="off"
primitive data01-stonith stonith:external/riloe \
params hostlist="data01" ilo_user="root"
ilo_hostname="data01-ilo" ilo_password="x
Hello,
I am trying again to set up a dual-node drbd/iscsi/pacemaker cluster.
Do I need a quorumd?
If yes ( as I suppose ) I would like to run the quorumd on an embedded node,
like an openwrt router or similar.
Have you some suggestions?
Thanks in advance for any help!
Mario
Rather than expressing it directly, is it possible to create a resource
(maybe anything) that runs on failover to modify the configuration to make
the resource stick to the current node?
Cheers,
Jie
On Tue, Mar 23, 2010 at 11:44 PM, Dejan Muhamedagic wrote:
> Hi,
>
> On Mon, Mar 22, 2010 at 01
Hi,
On Mon, Mar 22, 2010 at 01:18:35PM -0700, Alan Jones wrote:
> Well, I guess my configuration is not as common.
> In my case, one of these resources, say resource A, suffers greater
> disruption if it is moved.
> So, after a failover I would prefer that resource B move, reversing the node
> pla
I am out of the office from 23.03.2010 and will be reachable again on
25.03.2010.
I will answer your message after my return. In urgent cases please contact my
colleague Sammer Bernhard (ext. 1443) or the UNIX hotline, ext. 1444.