On Nov 7, 2013, at 8:59 PM, Sean Lutner wrote:
>
> On Nov 7, 2013, at 8:34 PM, Andrew Beekhof wrote:
>
>>
>> On 8 Nov 2013, at 4:45 am, Sean Lutner wrote:
>>
>>> I have a confusing situation that I'm hoping to get help with. Last night
>>> after configuring STONITH on my two-node cluster,
On 07/11/13 20:38, Jean-Francois Malouin wrote:
> Hi,
> After a few smooth years with a very simple but sturdy two-node HA
> cluster running pacemaker/drbd/Xen, I've been given the task of building
> another one, but the hardware they dropped in my lap doesn't have
> IPMI and I will definitely requ
On Nov 7, 2013, at 8:34 PM, Andrew Beekhof wrote:
>
> On 8 Nov 2013, at 4:45 am, Sean Lutner wrote:
>
>> I have a confusing situation that I'm hoping to get help with. Last night
>> after configuring STONITH on my two-node cluster, I suddenly have a "ghost"
>> node in my cluster. I'm lookin
On 6 Nov 2013, at 9:36 am, emmanuel segura wrote:
> Hello everybody,
>
> On Fedora 20 I got a crm_mon segmentation fault with the following
> configuration: http://ur1.ca/fzndq
> Maybe my configuration is wrong, but in any case this is what I saw with
> gdb: http://ur1.ca/fznf2
Best to include it in t
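For anyone hitting a similar crash, a full backtrace against the unstripped binary is what's most useful in a report. A minimal sketch, assuming core dumps land in the current directory with a core.<pid> name (install the matching pacemaker debuginfo package first for readable symbols):

    # allow core dumps in this shell before reproducing the crash
    ulimit -c unlimited
    # reproduce the segfault (one-shot mode)
    crm_mon -1
    # extract a full backtrace from the resulting core file
    gdb -batch -ex 'bt full' "$(which crm_mon)" core.<pid>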
On 7 Nov 2013, at 9:30 pm, Robert H. wrote:
>> This does a reasonable job of explaining:
>> http://blog.clusterlabs.org/blog/2013/pacemaker-and-rhel-6-dot-4/
>
> I see, thanks for the hint ... small step for man, huge step for mankind ..
> (or something like this :))
>
>> I would be interes
On 8 Nov 2013, at 4:45 am, Sean Lutner wrote:
> I have a confusing situation that I'm hoping to get help with. Last night
> after configuring STONITH on my two-node cluster, I suddenly have a "ghost"
> node in my cluster. I'm looking to understand the best way to remove this
> node from the c
On 7 Nov 2013, at 9:34 pm, s.oreilly wrote:
> Having some trouble getting a location rule to work.
>
> Here is my current config:
>
>
> Resources:
> Master: master_drbd
> Meta Attrs: master-max=1 master-node-max=1 clone-max=2 clone-node-max=1
> notify=true
> Resource: drbd_mysql (class=ocf
Something seems very wrong with this at the corosync level.
Even fenced and the dlm are having issues.
Jan: Could this be firewall related?
On 27 Sep 2013, at 10:44 pm, Bartłomiej Wójcik wrote:
> On 2013-09-27 04:26, Andrew Beekhof wrote:
>> On 26/09/2013, at 8:35 PM, Bartłomiej Wójcik
>>
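If it does turn out to be firewall related, the usual suspects are the corosync totem ports and the dlm port. A hedged sketch for RHEL/CentOS 6 iptables, assuming the default mcastport of 5405 (totem also uses mcastport - 1) and dlm's default TCP port 21064:

    # corosync totem traffic (mcastport and mcastport-1)
    iptables -I INPUT -p udp --dport 5404:5405 -j ACCEPT
    # dlm inter-node communication
    iptables -I INPUT -p tcp --dport 21064 -j ACCEPT
    # persist across reboots
    service iptables save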
On 21 Aug 2013, at 1:50 pm, Wen Wen (NCS) wrote:
> Hi all,
> I am practicing with a two-node setup.
> I use CentOS 6.3 x86_64 with Pacemaker, DRBD, and GFS2 for my cluster.
> I have already tested many times, and I have a design question.
>
> Here is my crm status on one node after I set this node from standby to onli
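For context, the standby/online transition described above can be driven from crmsh; a sketch, where node1 is a placeholder for the actual node name:

    # put the node into standby (resources migrate away)
    crm node standby node1
    # bring it back online (resources may or may not move back,
    # depending on resource-stickiness)
    crm node online node1
    # compare placement before and after
    crm status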
On 8 Nov 2013, at 10:27 am, Andrew Beekhof wrote:
>
> On 7 Oct 2013, at 5:52 pm, Mailing List SVR wrote:
>
>> On 07/10/2013 04:16, Andrew Beekhof wrote:
>>> On 05/10/2013, at 7:11 AM, Mailing List SVR
>>> wrote:
>>>
>>>
Hi,
I have a pacemaker cluster running fine since
On 7 Oct 2013, at 5:52 pm, Mailing List SVR wrote:
> On 07/10/2013 04:16, Andrew Beekhof wrote:
>> On 05/10/2013, at 7:11 AM, Mailing List SVR
>> wrote:
>>
>>
>>> Hi,
>>>
>>> I have a pacemaker cluster that has been running fine for two months. I
>>> noticed that in the folder /var/lib/pacemaker/co
Hi, PPL!
I need help. I do not understand why this has stopped working.
This configuration works on another cluster, but one running corosync 1.
So: a PostgreSQL cluster with master/slave.
Classic config, as in the wiki.
I build the cluster and start it; it works.
Next I kill postgres on the master with signal 6, as if "disk sp
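A sketch of the failure-injection step being described; the PGDATA path is an assumption, adjust to your layout:

    # postmaster.pid stores the postmaster's PID on its first line;
    # signal 6 is SIGABRT
    kill -6 "$(head -1 /var/lib/pgsql/data/postmaster.pid)"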
On 8 Nov 2013, at 12:10 am, yusuke iida wrote:
> Hi, Andrew
>
> The code shown does not seem to process correctly.
> I wrote a correction.
> Please check.
> https://github.com/yuusuke/pacemaker/commit/3b90af1b11a4389f8b4a95a20ef12b8c259e73dc
Ah, yes that looks better.
Did it help at all?
>
> Regar
Hello,
On Thu, Nov 7, 2013 at 1:38 PM, Jean-Francois Malouin
<jean-francois.malo...@bic.mni.mcgill.ca> wrote:
> ... the hardware that they dropped in my lap doesn't have
> IPMI and I will definitely require stonith.
>
> What would you recommend? A switchable PDU/power fencing?
>
>
Do you have sh
Hi,
After a few smooth years with a very simple but sturdy two-node HA
cluster running pacemaker/drbd/Xen, I've been given the task of building
another one, but the hardware they dropped in my lap doesn't have
IPMI and I will definitely require stonith.
What would you recommend? A switchable PDU/p
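For a switchable PDU, the configuration typically looks something like the following. A minimal sketch using crmsh and the fence_apc agent; the address, credentials, and node-to-outlet mapping are placeholders:

    crm configure primitive pdu-fence stonith:fence_apc \
        params ipaddr=10.0.0.10 login=apc passwd=apc \
        pcmk_host_map="node1:1;node2:2" \
        op monitor interval=60s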
I have a confusing situation that I'm hoping to get help with. Last night, after
configuring STONITH on my two-node cluster, I suddenly had a "ghost" node in
my cluster. I'm looking to understand the best way to remove this node from the
config.
I'm using the fence_ec2 device for STONITH. I
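One common way to clear a stale node entry is roughly the following. A hedged sketch; "ghost" stands in for the unexpected node name shown in crm status:

    # ask the cluster to forget the node
    crm_node -R ghost --force
    # if entries linger in the CIB, delete them directly
    cibadmin --delete --xml-text '<node uname="ghost"/>'
    cibadmin --delete --xml-text '<node_state uname="ghost"/>'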
Hi, Andrew
The code shown does not seem to process correctly.
I wrote a correction.
Please check.
https://github.com/yuusuke/pacemaker/commit/3b90af1b11a4389f8b4a95a20ef12b8c259e73dc
Regards,
Yusuke
2013/11/7 Andrew Beekhof :
>
> On 7 Nov 2013, at 12:43 pm, yusuke iida wrote:
>
>> Hi, Andrew
>>
>> 20
> When the PREFERRED_SRC_IP resource was started, the following commands
> were failing:
>
>> stderr: + 09:21:44: srca_start:184: ip route replace 192.168.0.0/24 dev em2 src 192.168.0.6
>> stderr: + 09:21:44: srca_start:187: ip route change to default via 192.168.0.1 dev em1 src 192.168.0.
Having some trouble getting a location rule to work.
Here is my current config:
Resources:
Master: master_drbd
Meta Attrs: master-max=1 master-node-max=1 clone-max=2 clone-node-max=1
notify=true
Resource: drbd_mysql (class=ocf provider=linbit type=drbd)
Attributes: drbd_resource=cluster
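For a location rule tied to the master role, the general pcs shape is as follows. A sketch, assuming a hypothetical node name node1; the backslash before #uname keeps the shell from treating it as a comment:

    pcs constraint location master_drbd rule role=master score=50 \#uname eq node1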
This does a reasonable job of explaining:
http://blog.clusterlabs.org/blog/2013/pacemaker-and-rhel-6-dot-4/
I see, thanks for the hint ... small step for man, huge step for
mankind .. (or something like this :))
I would be interested to know what for.
We have a setup of load balancing v
> I noticed you didn't create an order constraint between the IPaddr and the
> IPsrcaddr resources. You'll want to guarantee the IP address starts before
> setting it as the IPsrcaddr.
>
> pcs constraint order VIP_EM1 then PREFERRED_SRC_IP
>
> If that doesn't help anything, we'll need some debug
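Spelling the suggestion out as commands, plus the matching colocation so the source-address resource runs where the VIP does (a sketch using the resource names from this thread):

    pcs constraint order start VIP_EM1 then start PREFERRED_SRC_IP
    pcs constraint colocation add PREFERRED_SRC_IP with VIP_EM1 INFINITY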