On 03.07.2013 at 06:43, Takatoshi MATSUO wrote:
> Hi Stefano
>
> 2013/7/2 Stefano Sasso :
>
>> Hello folks,
>> I have the following setup in mind, but I need some advice and one hint on
>> how to realize a particular function.
>>
>> I have an N-node (N >= 2) cluster, with data storage on postgresql.
Hi Stefano
2013/7/2 Stefano Sasso :
> Hello folks,
> I have the following setup in mind, but I need some advice and one hint on
> how to realize a particular function.
>
> I have an N-node (N >= 2) cluster, with data storage on postgresql.
> I would like to manage postgres master-slave replication
- Original Message -
> From: "Lindsay Todd"
> To: "The Pacemaker cluster resource manager"
> Sent: Tuesday, July 2, 2013 5:36:43 PM
> Subject: Re: [Pacemaker] Pacemaker remote nodes, naming, and attributes
>
> You didn't notice that after setting attributes on "db02", the remote node
On 21/06/2013, at 11:36 PM, andreas graeper wrote:
> hi,
> n1, the active node, is started and everything works fine, but after n2
> reboots, drbd is not started by pacemaker.
Are you sure?
I see a couple of start operations:
Jun 21 15:10:29 [5093] n2 lrmd:debug: operation_finished:
dr
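To double-check (the log path and the grep pattern are only generic suggestions; they assume the resource id contains "drbd" and that cluster messages go to /var/log/messages), grepping n2's lrmd entries and taking a one-shot crm_mon snapshot usually settles whether a start was attempted:
# grep -E "lrmd.*drbd.*(start|promote)" /var/log/messages
# crm_mon -1 -rf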
I wouldn't be doing anything without corosync2 and its option that requires all
nodes to be online before quorum is granted.
Otherwise I can imagine ways that the old master might try to promote itself.
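For reference, the corosync 2 option alluded to here is presumably votequorum's wait_for_all; a minimal corosync.conf sketch (the vote count is only an example):
    quorum {
        provider: corosync_votequorum
        expected_votes: 3
        # stay inquorate after a full cluster stop until every
        # configured node has been seen at least once
        wait_for_all: 1
    }
With that in place, a node that boots up alone cannot gain quorum, so a stale master has no chance to promote itself.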
On 02/07/2013, at 7:18 PM, Michael Schwartzkopff wrote:
> On Tuesday, 2 July 2013, 09:47:3
You didn't notice that after setting attributes on "db02", the remote node
"db02" went offline as "unclean", even though vm-db02 was still running?
That strikes me as wrong! Once it gets into this state, I can order
vm-db02 to stop, but it never will. Indeed, pacemaker doesn't do much at
this po
- Original Message -
> From: "Lindsay Todd"
> To: "The Pacemaker cluster resource manager"
> Sent: Tuesday, July 2, 2013 4:05:22 PM
> Subject: Re: [Pacemaker] Pacemaker remote nodes, naming, and attributes
>
> Sorry for the delayed response, but I was out last week. I've applied this
> p
Sorry for the delayed response, but I was out last week. I've applied this
patch to 1.1.10-rc5 and have been testing:
# crm_attribute --type status --node "db02" --name "service_postgresql"
--update "true"
# crm_attribute --type status --node "db02" --name "service_postgresql"
scope=status name=
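One way to see whether the attribute actually landed in the status section for that node is to pull its node_state entry straight from the CIB; the XPath below assumes the remote node really is recorded under the name db02:
# cibadmin --query --xpath "//status/node_state[@uname='db02']"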
On 07/02/2013 04:02 AM, Dejan Muhamedagic wrote:
> On Mon, Jul 01, 2013 at 11:53:29AM -0400, Digimer wrote:
>> On 07/01/2013 04:52 AM, Dejan Muhamedagic wrote:
>>> Right. It is often missed that actually more than one failure is
>>> required for that setup to fail. In case of dual PDU/PSU/UPS an
>>
On 07/02/2013 04:14 AM, Dejan Muhamedagic wrote:
> On Mon, Jul 01, 2013 at 12:18:23PM -0400, Digimer wrote:
>> On 07/01/2013 08:19 AM, Vladislav Bogdanov wrote:
>>>> Well, it's possible right now; it's "just" not super pretty to configure.
>>> I already set "Important" IMAP flag on that message and rea
On 2013-07-02T21:58:57, Andrew Beekhof wrote:
> Ah, that's probably the issue.
> Occasionally a diff doesn't apply correctly (think ordering changes) and your
> copy of the error handling code results in cli_config_update() being called
> with a NULL pointer.
>
> Fix the case statements and you
On 02/07/2013, at 8:20 PM, Lars Marowsky-Bree wrote:
> On 2013-07-02T20:12:08, Andrew Beekhof wrote:
>
>>> It seems related to the number of times I poll the CIB, too; I seem to
>>> hit a transient window there, maybe. Since I dropped the number of polls
>>> (instead of requesting the full CIB
On 2013-07-02T20:12:08, Andrew Beekhof wrote:
> > It seems related to the number of times I poll the CIB, too; I seem to
> > hit a transient window there, maybe. Since I dropped the number of polls
> > (instead of requesting the full CIB once per second) it hasn't
> > reproduced. But I'll reinsta
On 02.07.2013 at 01:35, Andreas Mock wrote:
> Hi Leon,
>
> thank you for the pointer to the manuals. I have read them already.
>
> My 2-node cluster does not seem to fence the other node
> at startup, and I do not have an explanation. That's the reason
> I asked (after reading the docs).
>
> - CMAN_QUORU
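Two standard Pacemaker cluster properties worth checking when startup fencing does not happen (a general suggestion, not a diagnosis of this particular cluster):
# crm_attribute --type crm_config --name stonith-enabled --query
# crm_attribute --type crm_config --name startup-fencing --query
startup-fencing defaults to true; if it has been set to false, nodes whose state is unknown at startup are not fenced.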
On 02/07/2013, at 7:54 PM, Lars Marowsky-Bree wrote:
> On 2013-07-02T08:25:18, Andrew Beekhof wrote:
>
>>> if (cli_config_update(&cib_copy, NULL, FALSE) == FALSE) {
>> Also, change FALSE -> TRUE here so that you see the validation errors.
>
> OK.
>
>>> What could cause cli_config_updat
On 2013-07-02T08:25:18, Andrew Beekhof wrote:
> >if (cli_config_update(&cib_copy, NULL, FALSE) == FALSE) {
> Also, change FALSE -> TRUE here so that you see the validation errors.
OK.
> > What could cause cli_config_update() to fail in this way?
> Beats me. Can you log the xml before th
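Putting the two suggestions together (the NULL guard implied by the earlier message and the FALSE -> TRUE change), the call site might look roughly like the sketch below; this is only an illustration reusing the variable name from the snippet above:
    /* cib_copy can legitimately end up NULL when a diff fails to apply,
     * so guard it before handing it to cli_config_update() */
    if (cib_copy == NULL) {
        crm_err("Empty CIB copy, skipping schema upgrade");

    } else if (cli_config_update(&cib_copy, NULL, TRUE) == FALSE) {
        /* third argument changed to TRUE as suggested above,
         * so the validation errors become visible */
        crm_err("Could not upgrade the local CIB copy");
        free_xml(cib_copy);
        cib_copy = NULL;
    }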
On 2013-07-02T08:56:06, Andrew Beekhof wrote:
> > My *very* initial testing of op monitor="30" didn't detect the failure
> > or recovery of the fence device.
> That might come down to the quality of the monitor action in the agent though.
Would be my best guess - suggest to file a bugzilla for t
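To see what the agent's monitor action actually checks, it can be run by hand against the device; the address and credentials below are placeholders, and -o monitor (or -o status, depending on the agent version) asks the agent for its own health check:
# fence_ipmilan -a 10.0.0.10 -l admin -p secret -P -o monitor; echo $?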
On 02/07/2013, at 7:32 PM, Lars Marowsky-Bree wrote:
> On 2013-07-02T10:46:09, Andrew Beekhof wrote:
>
>>> Our problem is that if I give "crm resource stop vm1" and immediately after
>>> "crm resource stop vm2"
>>> it happens that pacemaker begins to stop vm2 only after vm1 is stopped.
>> Ther
On 2013-07-02T10:46:09, Andrew Beekhof wrote:
> > Our problem is that if I give "crm resource stop vm1" and immediately after
> > "crm resource stop vm2"
> > it happens that pacemaker begins to stop vm2 only after vm1 is stopped.
> There's nothing in the config to suggest that this would happen.
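One way to check whether the two stops really end up serialized is to look at the transition the cluster computes, for example with crm_simulate (the dot-file name is arbitrary):
# crm_simulate --live-check --save-dotfile stops.dot
The resulting graph shows whether any ordering edge actually connects the two stop actions.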
On Tuesday, 2 July 2013, at 09:47:31, Stefano Sasso wrote:
> Hello folks,
> I have the following setup in mind, but I need some advice and one hint
> on how to realize a particular function.
>
> I have an N-node (N >= 2) cluster, with data storage on postgresql.
> I would like to manage postgres m
On Mon, Jul 01, 2013 at 12:18:23PM -0400, Digimer wrote:
> On 07/01/2013 08:19 AM, Vladislav Bogdanov wrote:
> >> Well, it's possible right now; it's "just" not super pretty to configure.
> > I already set the "Important" IMAP flag on that message and am really willing
> > to copy that into my internal wiki ;
On Mon, Jul 01, 2013 at 11:53:29AM -0400, Digimer wrote:
> On 07/01/2013 04:52 AM, Dejan Muhamedagic wrote:
> > Right. It is often missed that actually more than one failure is
> > required for that setup to fail. In case of dual PDU/PSU/UPS,
> > IPMI-based fencing is sufficient.
>
> You are rig
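For illustration, a per-node IPMI fence device in crm shell syntax might look like the sketch below; all names, addresses and credentials are placeholders:
primitive fence-node1 stonith:fence_ipmilan \
        params pcmk_host_list="node1" ipaddr="192.168.100.1" \
               login="admin" passwd="secret" lanplus="1" \
        op monitor interval="60s"
location l-fence-node1 fence-node1 -inf: node1
The location rule simply keeps the device off the node it is meant to fence.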
Hello folks,
I have the following setup in mind, but I need some advice and one hint
on how to realize a particular function.
I have an N-node (N >= 2) cluster, with data storage on postgresql.
I would like to manage postgres master-slave replication in this way: one
node is the "master", one is t
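For what it's worth, the usual shape of such a setup with the ocf:heartbeat:pgsql agent in replication mode is roughly the sketch below; every value is a placeholder and the agent's metadata documents the full parameter list:
primitive pgsql ocf:heartbeat:pgsql \
        params rep_mode="sync" node_list="node1 node2" \
               master_ip="192.168.0.100" restart_on_promote="true" \
        op monitor interval="10s" role="Slave" \
        op monitor interval="9s" role="Master"
ms ms-pgsql pgsql \
        meta master-max="1" clone-max="2" notify="true"
primitive vip-master ocf:heartbeat:IPaddr2 \
        params ip="192.168.0.100" nic="eth0"
colocation col-vip-with-master inf: vip-master ms-pgsql:Master
order ord-promote-before-vip inf: ms-pgsql:promote vip-master:start
The virtual IP follows the Master role, and master_ip tells the slaves where to stream from.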
02.07.2013 03:10, Andrew Beekhof wrote:
>
> On 02/07/2013, at 8:51 AM, Andrew Beekhof wrote:
>
>>
>> On 01/07/2013, at 10:19 PM, Vladislav Bogdanov wrote:
>>
>>> 01.07.2013 15:10, Andrew Beekhof wrote:
>>>
>>>> And if people start using it, then we might look at simplifying it.
>>>
>>> May