Hi all,
Thank you very much for pointing that out.
Stefan
-Original Message-
From: Andrew Beekhof [mailto:and...@beekhof.net]
Sent: Friday, 26 July 2013 01:24
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] order required if group is present?
On 26
(13.07.25 18:03), Kazunori INOUE wrote:
(13.07.25 11:00), Andrew Beekhof wrote:
On 24/07/2013, at 7:40 PM, Kazunori INOUE wrote:
(13.07.18 19:23), Andrew Beekhof wrote:
On 17/07/2013, at 6:53 PM, Kazunori INOUE wrote:
(13.07.16 21:18), Andrew Beekhof wrote:
On 16/07/2013, at 7:04 PM,
Hi
My report is too late for 1.1.10 :(
I am using pacemaker 1.1.10-0.1.ab2e209.git.
It seems that the master's monitor is stopped when the slave is started.
Has anyone encountered the same problem?
I have attached a log and my settings.
Thanks,
Takatoshi MATSUO
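(For context, a minimal sketch of the kind of master/slave configuration
involved, assuming crmsh and made-up resource names, pgsql parameters
omitted; note that the Master and Slave monitor operations need distinct
intervals, otherwise only one of them takes effect:)

  primitive p_pgsql ocf:heartbeat:pgsql \
    op monitor interval="10s" role="Master" \
    op monitor interval="11s" role="Slave"
  ms ms_pgsql p_pgsql \
    meta master-max="1" clone-max="2" notify="true"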
2013/7/26 Digimer :
> Congrats!! I know this was a long
Congrats!! I know this was a long time in the making.
digimer
On 25/07/13 20:43, Andrew Beekhof wrote:
Announcing the release of Pacemaker 1.1.10
https://github.com/ClusterLabs/pacemaker/releases/Pacemaker-1.1.10
There were three changes of note since rc7:
+ Bug cl#5161 - crmd: Preven
Oops, sorry for providing the wrong version number. I am using Corosync
2.3.0-1 and Pacemaker 1.1.9-8. What should be done to properly clear
resources that are still running when pacemaker isn't running?
On Thu, Jul 25, 2013 at 10:39 AM, Andrew Beekhof wrote:
>
> On 24/07/2013, a
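(One common approach, once pacemaker is running again, is to clean up the
affected resource so the cluster forgets its stored state and re-probes
what is actually active; a minimal sketch assuming crmsh and a made-up
resource name:)

  crm resource cleanup p_myresource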
Announcing the release of Pacemaker 1.1.10
https://github.com/ClusterLabs/pacemaker/releases/Pacemaker-1.1.10
There were three changes of note since rc7:
+ Bug cl#5161 - crmd: Prevent memory leak in operation cache
+ cib: Correctly read back archived configurations if the primary is corru
On 26/07/2013, at 12:59 AM, Andreas Mock wrote:
> Hi Stefan,
>
> a) yes, the ordered behaviour is intentional.
> b) In former versions you could change this behaviour with an attribute,
> but that attribute is deprecated in newer versions of pacemaker.
> c) The solution for parallel starting r
- Original Message -
> From: "Digimer"
> To: "The Pacemaker cluster resource manager"
> Sent: Thursday, July 25, 2013 10:53:27 AM
> Subject: Re: [Pacemaker] Two-Nodes Cluster fencing : Best Practices
>
> With two-node clusters, quorum can't be used. This is fine *if* you have
> good f
Hi Stefan,
a) yes, the ordered behaviour is intentional.
b) In former versions you could change this behaviour with an attribute,
but that attribute is deprecated in newer versions of pacemaker.
c) The solution for starting resources in parallel is resource sets.
Best regards
Andreas Mock
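(For illustration, a minimal sketch of such a resource set in crmsh syntax,
reusing the resource names from the original posting; members inside ( )
form a set with sequential=false, so the two VLAN resources may start in
parallel:)

  order o_cluster1 inf: p_bond1 ( p_vlan118 p_vlan119 ) p_openvpn p_conntrackd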
With two-node clusters, quorum can't be used. This is fine *if* you have
good fencing. If the nodes partition (i.e. network failure), both will
try to fence the other. In theory, the faster node will power off the
other node before the slower node can kill the faster node. In practice,
this isn'
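(A common way to avoid such a fence duel is to give one node a head start;
a minimal sketch, assuming fence_ipmilan and made-up node names, addresses
and credentials:)

  # fencing of node1 is delayed 15s, so node1 survives a simultaneous duel
  primitive fence_n1 stonith:fence_ipmilan \
    params ipaddr="10.0.0.1" login="admin" passwd="secret" \
           pcmk_host_list="node1" delay="15"
  primitive fence_n2 stonith:fence_ipmilan \
    params ipaddr="10.0.0.2" login="admin" passwd="secret" \
           pcmk_host_list="node2"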
Hi Andrew.
You are right. I renamed the vmware machines to have lowercase names and it
worked. I also tested a dash and the bracket [ . With the unusual characters
I mentioned, stonith failed.
However, my vmware machine gets rebooted infinitely, over and over. I've
found somewhere on the web that this was a bug of
Hello,
As suggested I'm trying to add the nodes via corosync-objctl.
My current config file is this one:
https://gist.github.com/therobot/4327cd0a2598d1d6bb93 using 5001 as the
nodeid on the second node.
Then I try to add the nodes with the following commands:
corosync-objctl -n totem.interface
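(A side note, as an assumption: if this cluster is on corosync 2.x, the
object database and corosync-objctl are gone, and runtime node entries
would instead be written through the cmap; a hypothetical sketch with a
made-up address:)

  corosync-cmapctl -s nodelist.node.1.nodeid u32 5001
  corosync-cmapctl -s nodelist.node.1.ring0_addr str 192.168.1.2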
Hi List,
I have 5 resources configured (p_bond1, p_conntrackd, p_vlan118, p_vlan119,
p_openvpn).
Additionally, I have put all of them in a group with:
group cluster1 p_bond1 p_vlan118 p_vlan119 p_openvpn p_conntrackd
Because of this, crm starts the resources in the order in which the group
is defined (p_bo
19.07.2013 14:38, Howley, Tom wrote:
Hi,
I have been doing some testing of a fairly standard pacemaker/corosync setup
with DRBD (with resource-level fencing) and have noticed the following in
relation to testing network failures:
- Handling of all ports being blocked is OK, based on hundreds
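(For context, DRBD resource-level fencing here typically means the standard
handler hooks in drbd.conf; a minimal sketch, resource name assumed:)

  resource r0 {
    disk {
      fencing resource-only;
    }
    handlers {
      fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
      after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
    }
  }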
Some modifications to my first mail:
After some research I found that external/ipmi isn't available on my
system, so I must use fence-agents.
My second question must be modified to reflect this change, like this:
configure primitive pStN1 stonith:fence_ipmilan params
ipaddr=192.16
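(A fuller sketch of what such a primitive might look like, with made-up
address and credentials, plus a location rule so the device never runs on
the node it is meant to fence:)

  primitive pStN1 stonith:fence_ipmilan \
    params ipaddr="192.168.0.11" login="admin" passwd="secret" \
           lanplus="1" pcmk_host_list="node1" \
    op monitor interval="60s"
  location l_pStN1 pStN1 -inf: node1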
(13.07.25 11:00), Andrew Beekhof wrote:
On 24/07/2013, at 7:40 PM, Kazunori INOUE wrote:
(13.07.18 19:23), Andrew Beekhof wrote:
On 17/07/2013, at 6:53 PM, Kazunori INOUE wrote:
(13.07.16 21:18), Andrew Beekhof wrote:
On 16/07/2013, at 7:04 PM, Kazunori INOUE wrote:
(13.07.15 11:00)
Hi,
I've just made a two-node Active/Passive cluster to have an iSCSI
failover SAN.
Some details about my configuration:
- I have two nodes with 2 bonds: one for DRBD replication and one
for communication
- the iSCSI Target, iSCSI Lun and VirtualIP are constrained
together
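(For illustration, a minimal sketch of that stack in crmsh, with made-up
names and values, assuming a DRBD master/slave resource ms_drbd and the
usual heartbeat agents:)

  primitive p_target ocf:heartbeat:iSCSITarget \
    params iqn="iqn.2013-07.example:san"
  primitive p_lun ocf:heartbeat:iSCSILogicalUnit \
    params target_iqn="iqn.2013-07.example:san" lun="1" \
           path="/dev/drbd0"
  primitive p_vip ocf:heartbeat:IPaddr2 \
    params ip="192.168.1.100" cidr_netmask="24"
  group g_san p_target p_lun p_vip
  colocation c_san_on_drbd inf: g_san ms_drbd:Master
  order o_drbd_before_san inf: ms_drbd:promote g_san:start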