On Wednesday, 6 August 2014 at 20:08:24, Thomas Müller wrote:
> >>> I've got a 2-node cluster with 4 VMs. If a node fails, 2 of them
> >>> should be stopped (the dev machines) to prevent the physical machine
> >>> from swapping.
> >>
> >> Do this with utilization and priority:
> >>
> >> http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/_utili
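The utilization-plus-priority approach suggested above can be sketched in crmsh roughly as follows. This is an illustration only: the VM names, memory figures, and VirtualDomain parameters are invented placeholders, not taken from the thread.

```
# Hypothetical sketch (names and values are placeholders).
# With placement-strategy=utilization, a node only hosts resources that
# fit its declared capacity; when the surviving node cannot hold all
# four VMs, the lowest-priority resources (the dev machines) are
# stopped first.
node node1 utilization memory=16384
node node2 utilization memory=16384
primitive vm_prod1 ocf:heartbeat:VirtualDomain \
    params config=/etc/libvirt/qemu/prod1.xml \
    utilization memory=4096 \
    meta priority=100
primitive vm_dev1 ocf:heartbeat:VirtualDomain \
    params config=/etc/libvirt/qemu/dev1.xml \
    utilization memory=4096 \
    meta priority=1
property placement-strategy=utilization
```

The `priority` meta attribute decides which resources survive when capacity runs out; the utilization attributes decide how capacity is accounted.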
Hi,
Sorry, I have managed to fix this now. I noticed in the log line:
Aug 6 13:26:23 ldb03 cibadmin[2140]: notice: crm_log_args: Invoked:
cibadmin -M -c -o status --xml-text
that the id is ldb03, not the node's ID, 12303.
I removed it using: crm_node -R "ldb03" --force
and rebooted.
Nodes are
Hi,
I have set up a 2-node cluster using the following packages:
pacemaker 1.1.10+git20130802-1ubuntu2
corosync 2.3.3-1ubuntu1
My cluster config is as follows:
node $id="12303" ldb03
node $id="12304" ldb04
primitive p_fence_ldb03 stonith:external/
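The config above is cut off after `stonith:external/`, so the actual fencing agent is unknown. Purely for illustration, a complete two-node fencing pair might look like the sketch below; `external/ipmi` and every parameter value are assumptions, not from the original post.

```
node $id="12303" ldb03
node $id="12304" ldb04
# external/ipmi and all params below are invented placeholders
primitive p_fence_ldb03 stonith:external/ipmi \
    params hostname=ldb03 ipaddr=192.168.0.3 userid=admin passwd=secret
primitive p_fence_ldb04 stonith:external/ipmi \
    params hostname=ldb04 ipaddr=192.168.0.4 userid=admin passwd=secret
# A node should never run its own fence device
location l_fence_ldb03 p_fence_ldb03 -inf: ldb03
location l_fence_ldb04 p_fence_ldb04 -inf: ldb04
```

The negative location constraints keep each fence device off the node it is meant to fence, which is the usual pattern for per-node stonith primitives.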
Hello everyone,
My tool versions:
pacemaker: 1.1.10
corosync: 1.4.5
crmsh: 2.0
I have 2 nodes, node1 and node2. The resource Test must run on node1,
and Test should not run on node2 if node1 is offline. So I have the
following config:
location TestOnNode1 Test INFINITY: node1
If node1 and no
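The message is cut off here, but the stated goal (Test runs only on node1, and stops rather than migrates when node1 goes down) can be expressed by pairing the positive constraint with an explicit negative one. A minimal crmsh sketch, with the second constraint name invented for illustration:

```
# The original positive preference for node1:
location TestOnNode1 Test inf: node1
# Added -INFINITY constraint (hypothetical name): Test is never
# allowed on node2, so it stops when node1 becomes unavailable.
location TestNotOnNode2 Test -inf: node2
```

Alternatively, setting `symmetric-cluster=false` and only opting Test in on node1 achieves the same effect cluster-wide.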