On Thu, Mar 11, 2010 at 05:26:19PM +0800, Martin Aspeli wrote:
> Matthew Palmer wrote:
>> On Thu, Mar 11, 2010 at 03:34:50PM +0800, Martin Aspeli wrote:
>>> I was wondering, though, if fencing at the DRBD level would get around
>>> the possible problem with a full power outage taking the fencing device down.
On Thu, Mar 11, 2010 at 06:58:30AM +1100, Matthew Palmer wrote:
> On Wed, Mar 10, 2010 at 11:10:31PM +0800, Martin Aspeli wrote:
> > Dejan Muhamedagic wrote:
> >> ocfs2 introduces an extra level of complexity. You don't want
> >> that unless really necessary.
> >
> > How would that complexity manifest?
On Thu, Mar 11, 2010 at 03:34:50PM +0800, Martin Aspeli wrote:
> I was wondering, though, if fencing at the DRBD level would get
> around the possible problem with a full power outage taking the
> fencing device down.
>
> In my poor understanding of things, it'd work like this:
>
> - Pacemaker r
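Fencing at the DRBD level is usually wired up through DRBD's own fence-peer hooks rather than a power switch. A minimal drbd.conf sketch (the resource name r0 is made up; the handler scripts ship with DRBD 8.3 and later):

```
resource r0 {
  disk {
    # resource-and-stonith would be needed for dual-primary;
    # resource-only merely blocks the peer from promoting stale data.
    fencing resource-only;
  }
  handlers {
    # These scripts add/remove a Pacemaker location constraint
    # against the outdated peer instead of power-cycling it.
    fence-peer          "/usr/lib/drbd/crm-fence-peer.sh";
    after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
  }
}
```

With this in place, even if an outage takes the external fencing device down, DRBD itself refuses to promote a node whose data is outdated.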
Serge Dubrouski wrote:
On Wed, Mar 10, 2010 at 6:59 PM, Martin Aspeli wrote:
Serge Dubrouski wrote:
On Wed, Mar 10, 2010 at 5:30 PM, Martin Aspeli
wrote:
Martin Aspeli wrote:
Hi folks,
Let's say have a two-node cluster with DRBD and OCFS2, with a database
server that's supposed to be active on one node at a time, using the
OCFS2 partition for its data store.
On Wed, Mar 10, 2010 at 11:10:31PM +0800, Martin Aspeli wrote:
> Dejan Muhamedagic wrote:
>> ocfs2 introduces an extra level of complexity. You don't want
>> that unless really necessary.
>
> How would that complexity manifest?
Have you noticed the number of extra daemons and kernel bits that have
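The daemons Matthew is alluding to can be checked for directly. The names below are from the 2010-era Pacemaker/OCFS2 stack and are an assumption (distros vary); a plain ext3-on-DRBD setup needs none of them:

```shell
#!/bin/sh
# Sketch: extra processes an OCFS2-on-Pacemaker stack typically adds.
# Daemon names (o2cb, ocfs2_controld.pcmk, dlm_controld.pcmk) are
# assumptions from the era of this thread; check your distribution.
for d in o2cb ocfs2_controld.pcmk dlm_controld.pcmk; do
    if pgrep -x "$d" >/dev/null 2>&1; then
        echo "$d: running"
    else
        echo "$d: not running"
    fi
done
```

Each of these is one more thing that can fail, hang, or fence your node, which is the complexity being warned about.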
[Up-front disclaimer: I'm not a fan of cluster filesystems, having had large
chunks of my little remaining sanity shredded by GFS. So what I say is
likely tinged with lingering loathing, although I do *try* to stay factual]
On Wed, Mar 10, 2010 at 09:01:01PM +0800, Martin Aspeli wrote:
> Matthew
On Thu, Mar 11, 2010 at 08:30:29AM +0800, Martin Aspeli wrote:
> Martin Aspeli wrote:
>> Hi folks,
>>
>> Let's say have a two-node cluster with DRBD and OCFS2, with a database
>> server that's supposed to be active on one node at a time, using the
>> OCFS2 partition for its data store.
>>
>> If we detect a failure on the active node and fail the database over to
>> the other node, we need to fence o
On Wed, Mar 10, 2010 at 11:26:41AM -, darren.mans...@opengi.co.uk wrote:
>
> On Wed, Mar 10, 2010 at 02:32:05PM +0800, Martin Aspeli wrote:
> > Florian Haas wrote:
> >> On 03/09/2010 06:07 AM, Martin Aspeli wrote:
> >>> Hi folks,
> >>>
> >>> Let's say have a two-node cluster with DRBD and OCFS2
Dejan Muhamedagic wrote:
Hi,
On Wed, Mar 10, 2010 at 11:10:31PM +0800, Martin Aspeli wrote:
Dejan Muhamedagic wrote:
Hi,
On Wed, Mar 10, 2010 at 09:02:48PM +0800, Martin Aspeli wrote:
Lars Ellenberg wrote:
Or, if this is as infrequent as you say it is, have those blobs in a
regular file system on a regular partition or LV, and replace every
"echo > blob" with "echo > blob && csync2 -x blob" (you get the idea).
darren.mans...@opengi.co.uk wrote:
Please forgive my ignorance, I seem to have missed the specifics about
using OCFS2 on DRBD dual-primary but what are the main issues? How can
you use PgSQL on dual-primary without OCFS2?
For the record, we are *not* using dual primary in our setup. We'll have
Lars Ellenberg wrote:
Or, if this is as infrequent as you say it is, have those blobs in a
regular file system on a regular partition or LV, and replace every
"echo> blob" with "echo> blob&& csync2 -x blob" (you get the idea).
Unfortunately, that'd mean modifying software I don't really hav
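Lars's write-then-sync idea can be wrapped once so the calling software barely changes. A hypothetical sketch (it assumes /etc/csync2.cfg on both nodes already covers the blob path; the `blob_write` helper is made up for illustration):

```shell
#!/bin/sh
# Hypothetical wrapper around Lars's "echo > blob && csync2 -x blob".
# Assumes a csync2 group on both nodes already includes the blob path.
blob_write() {
    path=$1; shift
    printf '%s' "$*" > "$path" || return 1
    # Push just this file to the peer; skipped silently when csync2
    # is not installed (e.g. on a test box).
    if command -v csync2 >/dev/null 2>&1; then
        csync2 -x "$path"
    fi
}

blob_write /tmp/demo.blob "opaque blob payload"
cat /tmp/demo.blob
```

If the application writes blobs through its own code paths rather than a shell, the same write-then-sync pattern would have to be hooked in there instead, which is the modification Martin is reluctant to make.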
Matthew Palmer wrote:
On Wed, Mar 10, 2010 at 02:32:05PM +0800, Martin Aspeli wrote:
Florian Haas wrote:
On 03/09/2010 06:07 AM, Martin Aspeli wrote:
Hi folks,
Let's say have a two-node cluster with DRBD and OCFS2, with a database
server that's supposed to be active on one node at a time, using the
OCFS2 partition for its data store.
Florian Haas wrote:
On 03/09/2010 06:07 AM, Martin Aspeli wrote:
Hi folks,
Let's say have a two-node cluster with DRBD and OCFS2, with a database
server that's supposed to be active on one node at a time, using the
OCFS2 partition for its data store.
*cringe* Which database is this?
Postgres
Hi folks,
Let's say have a two-node cluster with DRBD and OCFS2, with a database
server that's supposed to be active on one node at a time, using the
OCFS2 partition for its data store.
If we detect a failure on the active node and fail the database over to
the other node, we need to fence o
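Given where the thread ends up (no dual-primary, database active on one node), the usual single-primary shape looks roughly like this in the crm shell. All resource names, the device, and the mount point are made up for illustration; a sketch, not a drop-in config:

```
primitive p_drbd ocf:linbit:drbd \
    params drbd_resource="r0" \
    op monitor interval="15" role="Master"
ms ms_drbd p_drbd \
    meta master-max="1" clone-max="2" notify="true"
primitive p_fs ocf:heartbeat:Filesystem \
    params device="/dev/drbd0" directory="/var/lib/pgsql" fstype="ext3"
primitive p_pg ocf:heartbeat:pgsql
group g_db p_fs p_pg
colocation col_db_on_drbd inf: g_db ms_drbd:Master
order ord_drbd_first inf: ms_drbd:promote g_db:start
```

The colocation/order pair pins the filesystem and the database to whichever node holds the DRBD Master role, so only one node ever mounts the data and no cluster filesystem is needed.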
27 matches