>> debug Jul 23 03:10:51 stonith_choose_peer(765):0: Couldn't find anyone
>> to fence an-c03n02.alteeve.ca with fence_n02_psu1_off
>>
>> psu != pdu
>
> *sigh*
>
> Probably means there is a matching bug on the wiki. I'll look/fix.
>
> --
> Digimer
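The "psu != pdu" hint suggests the fencing request referenced a device named with "psu" while the defined stonith resource uses "pdu" (or the other way around); the name used when fencing is requested has to match the configured primitive. A minimal sketch of such a PDU device in crm shell, with the PDU address and outlet assumed:

    primitive fence_n02_pdu1_off stonith:fence_apc_snmp \
            params ipaddr="an-pdu01.alteeve.ca" port="2" action="off" \
                   pcmk_host_list="an-c03n02.alteeve.ca"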
lol... You knew when you were working on it that it is supposed to be a
Master/Slave concept... if I am not wrong...
Let me try with a DRBD and GFS setup...
On Mon, Jul 22, 2013 at 10:09 PM, Gopalakrishnan N <
gopalakrishnan...@gmail.com> wrote:
> Now I got one more issue: when I stop the complete Pacemaker application,
> the other node automatically
Hi,
I have currently set up 3 machines with Pacemaker 2.3 and Corosync 1.19. I
have tested some scenarios and have encountered some problems which I hope
to get some advice on.
My scenario is as follows:
The 3 machines, named A, B and C, are all running, with A being the node which
started the resource
On 22/07/13 20:25, Andrew Beekhof wrote:
On 23/07/2013, at 3:27 AM, Digimer wrote:
On 21/07/13 20:53, Andrew Beekhof wrote:
Announcing the seventh release candidate for Pacemaker 1.1.10
https://github.com/ClusterLabs/pacemaker/releases/Pacemaker-1.1.10-rc7
This RC is a result of bugfi
On 23/07/2013, at 3:27 AM, Digimer wrote:
> On 21/07/13 20:53, Andrew Beekhof wrote:
>> Announcing the seventh release candidate for Pacemaker 1.1.10
>>
>> https://github.com/ClusterLabs/pacemaker/releases/Pacemaker-1.1.10-rc7
>>
>> This RC is a result of bugfixes to the policy engine, fen
On 2013-07-22T16:25:29, "Tomcsányi, Domonkos" wrote:
> You were right, the version in the Ubuntu repository is outdated, so I
> decided to install it from a PPA-repo. Now I have the latest version of
> everything, but naturally I already have an error:
>
> ERROR: running cibadmin -Ql: Could
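The quoted error is cut off here, but a rough first check after installing from a PPA is whether the cluster stack is actually running before crmsh tries to read the CIB; a sketch, assuming a corosync/pacemaker init-script setup:

    service corosync start
    service pacemaker start
    crm_mon -1        # should list the nodes once the cluster is up
    cibadmin -Q       # dumps the live CIB if the connection works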
I suppose that since one of the two servers in the cluster will be master,
I only need this line. The only possible slave (3rd node) is not in the
cluster, so I am guessing that I don't need that config:
op monitor interval="29s" role="Master" timeout="30s"
On 07/21/2013 10:11 PM, Andre
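For what it's worth, the common pattern for a DRBD-style master/slave resource is one monitor op per role, with different intervals so the two monitors don't collide; a minimal sketch in crm shell with assumed resource names:

    primitive p_drbd_r0 ocf:linbit:drbd \
            params drbd_resource="r0" \
            op monitor interval="29s" role="Master" timeout="30s" \
            op monitor interval="31s" role="Slave" timeout="30s"
    ms ms_drbd_r0 p_drbd_r0 \
            meta master-max="1" clone-max="2" notify="true"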
On 21/07/13 20:53, Andrew Beekhof wrote:
Announcing the seventh release candidate for Pacemaker 1.1.10
https://github.com/ClusterLabs/pacemaker/releases/Pacemaker-1.1.10-rc7
This RC is a result of bugfixes to the policy engine, fencing daemon
and crmd. We've squashed a bug involving const
On 2013.07.22. 12:16, Lars Marowsky-Bree wrote:
On 2013-07-22T12:09:22, "Tomcsányi, Domonkos" wrote:
crm(live)configure# colocation by_color inf: HTTPS_SERVICE_GROUP
OPENVPN_SERVICE_GROUP node-attribute=color
ERROR: 4: constraint by_color references a resource node-attribute=color
Now I got one more issue: when I stop the complete Pacemaker application,
the other node automatically takes over the http service.
But when I stop the http service alone on the node which ClusterIP is
pointing to, the page does not open.
Basically, crm_mon -1 shows it is always pointed
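If stopping httpd by hand goes unnoticed, the apache resource is likely missing a monitor operation, or is not tied to ClusterIP at all; a minimal sketch in crm shell, with the apache resource name and config path assumed:

    primitive WebServer ocf:heartbeat:apache \
            params configfile="/etc/httpd/conf/httpd.conf" \
            op monitor interval="30s"
    colocation web-with-ip inf: WebServer ClusterIP
    order ip-before-web inf: ClusterIP WebServer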
Hello,
I'm trying to debug the following scenario:
My cluster setup is two AWS instances providing an Elastic IP as an HA
resource. I have written a custom resource script for managing that; the
script passes all the tests specified in ocf-tester. It behaves properly in
other test scenarios.
In
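A rough sketch of exercising such a custom agent with ocf-tester outside the cluster, with the agent path and parameter name assumed:

    ocf-tester -n test_eip \
            -o elastic_ip="203.0.113.10" \
            /usr/lib/ocf/resource.d/custom/elastic-ip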
Hi Andrew,
On 07/19/2013 12:22 AM, Andrew Beekhof wrote:
>> I've added the PKG_CONFIG_PATH and the two libqb_ lines in an attempt to
>> make things work, as recommended by the configure help. So far, no
>> dice. Is this something that needs to be fixed in the autoconf/autogen
>> stuff? So
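A sketch of what those lines usually amount to when libqb lives in a non-default prefix, assuming /usr/local here:

    export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH
    ./autogen.sh
    ./configure
    make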
On 2013-07-22T14:00:34, Thibaut Pouzet wrote:
> NPS-8HD20-2
> The fence agent fence_wti that is shipped with
> "fence-agents-3.1.5-25.el6_4.2.x86_64" on CentOS 6.4 cannot work with named
> port groups. It only works with single outlets. We have patched fence_wti in
> order to make it also work wi
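For reference, a minimal sketch of how the stock fence_wti agent is typically wired up per single outlet (address, credentials and outlet number assumed), which is the limitation the patch works around:

    primitive fence_node1_wti stonith:fence_wti \
            params ipaddr="192.0.2.50" passwd="secret" port="4" action="off" \
                   pcmk_host_list="node1"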
Hi,
There have been a lot of discussions lately regarding pacemaker's
configuration to support multiple PDUs for fencing. These discussions
apply to two possible setups (see the sketch after this list):
* One node dually powered by two physically separated PDUs that are on
two separate power supply circuits
* One node dually
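The usual way to express "both PDUs must be switched off" is to put the two devices into the same fencing-topology level; a minimal sketch in crm shell with assumed device and node names (the full off/on sequencing discussed in the thread needs more care than this):

    fencing_topology \
            node1: fence_node1_ipmi fence_node1_pdu1_off,fence_node1_pdu2_off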
On 2013-07-22T12:09:22, "Tomcsányi, Domonkos" wrote:
> crm(live)configure# colocation by_color inf: HTTPS_SERVICE_GROUP
> OPENVPN_SERVICE_GROUP node-attribute=color
> ERROR: 4: constraint by_color references a resource node-attribute=color
> which doesn't exist
Which crmsh version are you us
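If the installed crmsh simply does not know the node-attribute keyword yet, the same constraint can be loaded as raw XML (for example with cibadmin -C -o constraints -X '...'), since node-attribute is an ordinary attribute of rsc_colocation; a sketch:

    <rsc_colocation id="by_color" score="INFINITY"
                    rsc="HTTPS_SERVICE_GROUP" with-rsc="OPENVPN_SERVICE_GROUP"
                    node-attribute="color"/>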
On 2013.07.22. 11:15, Lars Marowsky-Bree wrote:
On 2013-07-19T23:18:29, Lars Ellenberg wrote:
You may use node attributes in colocation constraints.
Ohhh, good thinking. I had forgotten about that too.
But I wonder if that really is bi-directional; is the PE smart enough to
figure ou
On 2013-07-19T23:18:29, Lars Ellenberg wrote:
> You may use node attributes in colocation constraints.
Ohhh, good thinking. I had forgotten about that too.
But I wonder if that really is bi-directional; is the PE smart enough to
figure out where resources need to go if one of them can't run on
On 2013.07.20. 17:32, Lars Marowsky-Bree wrote:
On 2013-07-19T16:49:21, "Tomcsányi, Domonkos" wrote:
Now the behaviour I would like to achieve:
If NODE 1 goes offline its services should get migrated to NODE 2 AND NODE
3's services should get migrated to NODE 4.
If NODE 3 goes offl
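One ingredient of the node-attribute approach is tagging the nodes so that each pair shares a value, which a constraint like the by_color one above can then key on; a sketch of setting such an attribute (attribute name, values and node names assumed):

    crm_attribute --type nodes --node node1 --name color --update red
    crm_attribute --type nodes --node node2 --name color --update red
    crm_attribute --type nodes --node node3 --name color --update blue
    crm_attribute --type nodes --node node4 --name color --update blue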
On 2013.07.22. 4:57, Andrew Beekhof wrote:
On 20/07/2013, at 7:18 AM, Lars Ellenberg wrote:
On Fri, Jul 19, 2013 at 04:49:21PM +0200, "Tomcsányi, Domonkos" wrote:
Hello everyone,
I have been struggling with this issue for quite some time so I
decided to ask you to see if maybe you ca