On Fri, Aug 17, 2007 at 09:59:26AM -0400, Tom Lane wrote:
> "Mikko Partio" <[EMAIL PROTECTED]> writes:
> > This was my original intention. I'm still quite hesitant to trust the
> > fencing device's ability to guarantee that only one postmaster at a
> > time is running, because of the disastrous possibility of corrupting
> > the whole database.
On Fri, Aug 17, 2007 at 04:19:57PM +0200, Hannes Dorbath wrote:
> On 17.08.2007 15:59, Tom Lane wrote:
> > On the other side of the coin, I have little confidence in DRBD
> > providing the storage semantics we need (in particular guaranteeing
> > write ordering). So that path doesn't sound exactly risk-free either.
Hi,
> On the other side of the coin, I have little confidence in DRBD
> providing the storage semantics we need (in particular guaranteeing
> write ordering). So that path doesn't sound exactly risk-free either.
DRBD seems to enforce strict write ordering on both sides of the link
according to
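The mode under discussion is what DRBD calls protocol C: a write is
acknowledged only after it has reached stable storage on both nodes, which
is what gives the ordering/durability guarantee. A minimal drbd.conf sketch
follows; the resource name, hostnames, devices, and addresses are
placeholder assumptions, not taken from this thread:

    # drbd.conf sketch -- protocol C gives synchronous replication:
    # the local write completes only after the peer also has the data.
    resource r0 {
      protocol C;
      on node1 {
        device    /dev/drbd0;
        disk      /dev/sda7;
        address   192.168.1.1:7788;
        meta-disk internal;
      }
      on node2 {
        device    /dev/drbd0;
        disk      /dev/sda7;
        address   192.168.1.2:7788;
        meta-disk internal;
      }
    }

Protocols A and B acknowledge earlier (after the local write, or after the
data reaches the peer's buffer) and trade durability for latency, so
protocol C is the relevant mode for a database.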
On 17.08.2007 15:59, Tom Lane wrote:
> On the other side of the coin, I have little confidence in DRBD
> providing the storage semantics we need (in particular guaranteeing
> write ordering). So that path doesn't sound exactly risk-free either.
To my understanding, DRBD provides this. I think a discu
"Mikko Partio" <[EMAIL PROTECTED]> writes:
> This was my original intention. I'm still quite hesitant to trust the
> fencing device's ability to guarantee that only one postmaster at a
> time is running, because of the disastrous possibility of corrupting
> the whole database.
Making that guarantee
On 8/17/07, Hannes Dorbath <[EMAIL PROTECTED]> wrote:
>
> On 17.08.2007 11:12, Mikko Partio wrote:
> > Maybe I'm just better off using the simpler (crude?) method of
> > drbd + heartbeat?
>
> Crude? Use what you like to use, but you should keep one thing in mind:
> If you don't know the software you are running in each and every detail,
On 17.08.2007 11:12, Mikko Partio wrote:
> Maybe I'm just better off using the simpler (crude?) method of
> drbd + heartbeat?
Crude? Use what you like to use, but you should keep one thing in mind:
If you don't know the software you are running in each and every detail,
how it behaves in each
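For the drbd + heartbeat route, the classic Heartbeat v1 configuration
expresses the whole failover as one haresources line: on failover the
surviving node promotes the DRBD resource, mounts it, and starts
PostgreSQL. A sketch, where the node name, DRBD resource, mount point, and
init script name are placeholder assumptions:

    # /etc/ha.d/haresources sketch -- node1 is the preferred primary.
    # drbddisk promotes the DRBD resource, Filesystem mounts it, and
    # the postgresql init script starts the postmaster last.
    node1 drbddisk::r0 \
          Filesystem::/dev/drbd0::/var/lib/pgsql/data::ext3 \
          postgresql

The ordering matters: resources are started left to right and stopped
right to left, so the postmaster never runs without its filesystem.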
On 8/16/07, Douglas McNaught <[EMAIL PROTECTED]> wrote:
>
> Devrim GÜNDÜZ <[EMAIL PROTECTED]> writes:
>
> >> What I'm pondering here is whether the cluster is able to keep the
> >> postmasters synchronized at all times so that the database won't
> >> get corrupted.
> >
> > Keep all the $PGDATA on the shared disk. That would minimize data loss
Devrim GÜNDÜZ <[EMAIL PROTECTED]> writes:
>> What I'm pondering here is whether the cluster is able to keep the
>> postmasters synchronized at all times so that the database won't get
>> corrupted.
>
> Keep all the $PGDATA on the shared disk. That would minimize data loss
> (Of course, there is still a risk of data loss -- the postmasters are
> not aware of each other and they don't share each other's buffers,
> etc.)
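Concretely, "keep all the $PGDATA on the shared disk" means the data
directory lives on the shared volume and only the active node has it
mounted and the postmaster running. A manual-takeover sketch, with the
device and paths as placeholder assumptions:

    # Run on the node taking over, only after the old active node is
    # known to be down (or has been fenced) -- mounting while the other
    # postmaster is still alive is exactly the corruption scenario.
    mount /dev/mapper/shared-pgdata /var/lib/pgsql/data
    pg_ctl -D /var/lib/pgsql/data start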
Hi,
On Thu, 2007-08-16 at 10:05 +0300, Devrim GÜNDÜZ wrote:
> (Of course, there is still a risk of data loss -- the postmasters are
> not aware of each other and they don't share each other's buffers,
> etc.)
Err... I was talking about uncommitted transactions, and of course this
does not mean a
Hi,
On Thu, 2007-08-16 at 09:42 +0300, Mikko Partio wrote:
> The idea would be that the cluster programs with gfs (and HP ilo)
> would make sure that only one postmaster at a time would be able to
> access the shared disk, and in case the active node fails the cluster
> software would shift the service to the standby node.
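The "HP ilo" part is the fencing piece: before the standby may touch the
shared GFS volume, the cluster power-cycles the failed node through its
iLO management board, so a half-dead postmaster cannot keep writing. A Red
Hat Cluster Suite cluster.conf sketch; the node names, iLO hostnames, and
credentials are placeholder assumptions:

    <?xml version="1.0"?>
    <!-- cluster.conf sketch: each node is fenced through its own iLO
         board, so the survivor can safely take over the shared disk. -->
    <cluster name="pgcluster" config_version="1">
      <clusternodes>
        <clusternode name="node1" nodeid="1">
          <fence><method name="1"><device name="ilo1"/></method></fence>
        </clusternode>
        <clusternode name="node2" nodeid="2">
          <fence><method name="1"><device name="ilo2"/></method></fence>
        </clusternode>
      </clusternodes>
      <fencedevices>
        <fencedevice agent="fence_ilo" name="ilo1"
                     hostname="node1-ilo" login="admin" passwd="secret"/>
        <fencedevice agent="fence_ilo" name="ilo2"
                     hostname="node2-ilo" login="admin" passwd="secret"/>
      </fencedevices>
    </cluster>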
On 16.08.2007 08:42, Mikko Partio wrote:
> I have a mission to implement a two-node active-passive PostgreSQL
> cluster. The databases in the cluster are rather large (hundreds of
> GBs), which prompts me to consider a shared disk environment. I know
> this is not natively supported with PostgreSQL, but I h