I've been able to reproduce the behavior described here:
http://archives.postgresql.org/pgsql-general/2011-03/msg00538.php
It's specific to UTF8 locales on Mac OS X. I'm not sure if the
problem can manifest anywhere else; considering that OS X's UTF8
locales have a general reputation of being brok
Hello hom,
Frankly I am a learner as well. The experts here are almost always ready
to help and would be a better source of information.
I am also using Eclipse, but I do not use it for building the
source. I use it only as a source code browser (it's easy in a GUI,
isn't it?). I am trying
Hi,
We encountered a deadlock involving VACUUM FULL (surprise surprise!
:)) in PG 8.3.13 (and still not fixed in 9.0 AFAICS although the
window appears much smaller). The call spits out the following
deadlock info:
ERROR: SQLSTATE 40P01: deadlock detected
DETAIL: Process 12479 waits for AccessE
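For readers hitting the same error: SQLSTATE 40P01 is deadlock_detected, and the standard client-side mitigation is to retry the aborted transaction. A minimal sketch of that retry loop, using a stand-in exception class rather than any real driver (the `DatabaseError` type and its `pgcode` attribute are assumptions modeled on common Python drivers, not code from this thread):

```python
import time

class DatabaseError(Exception):
    """Stand-in for a driver exception carrying an SQLSTATE code."""
    def __init__(self, pgcode):
        super().__init__(pgcode)
        self.pgcode = pgcode

DEADLOCK_DETECTED = "40P01"

def with_deadlock_retry(txn, max_attempts=3):
    """Run `txn`, retrying with backoff when SQLSTATE 40P01 is raised."""
    for attempt in range(1, max_attempts + 1):
        try:
            return txn()
        except DatabaseError as e:
            if e.pgcode != DEADLOCK_DETECTED or attempt == max_attempts:
                raise
            time.sleep(0.01 * attempt)  # simple linear backoff before retrying

# Demo: fail once with a deadlock, then succeed on the retry.
calls = {"n": 0}
def flaky_txn():
    calls["n"] += 1
    if calls["n"] == 1:
        raise DatabaseError(DEADLOCK_DETECTED)
    return "committed"

print(with_deadlock_retry(flaky_txn))  # prints "committed"
```

This only papers over deadlocks at the client; the thread itself is about whether VACUUM FULL should be taking the conflicting locks at all.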
Robert Haas writes:
> On Fri, Mar 18, 2011 at 10:19 PM, Andrew Dunstan wrote:
>> On 03/18/2011 09:18 PM, Robert Haas wrote:
>>> "all balls" seems like a colloquialism best avoided in our documentation.
>> It's already there, although I agree it's infelicitous.
> I vote for taking it out. I thi
I'm making pretty good progress on the task of splitting input and
output collations for expression nodes. There remains one case in the
regression tests that is giving a non-expected result. It involves this
function:
CREATE FUNCTION dup (f1 anyelement, f2 out anyelement, f3 out anyarray)
A
On Fri, Mar 18, 2011 at 10:25 PM, Robert Haas wrote:
> On Tue, Mar 8, 2011 at 7:05 AM, Fujii Masao wrote:
>> * Smart shutdown
>> Smart shutdown should wait for all the waiting backends to be acked, and
>> should not cause them to forcibly exit. But this causes shutdown to get
>> stuck indefinitely i
On Tue, Mar 8, 2011 at 7:05 AM, Fujii Masao wrote:
> * Smart shutdown
> Smart shutdown should wait for all the waiting backends to be acked, and
> should not cause them to forcibly exit. But this causes shutdown to get
> stuck indefinitely if there is no walsender at that time. To enable them to be a
On Fri, Mar 18, 2011 at 9:35 PM, Robert Haas wrote:
> On Fri, Mar 18, 2011 at 7:23 PM, Bruce Momjian wrote:
>> I just applied a doc patch for pg_last_xact_replay_timestamp, and the
>> text now says:
>>
>> Get timestamp of last transaction replayed during recovery.
>> This is the time
On Fri, Mar 18, 2011 at 7:23 PM, Bruce Momjian wrote:
> I just applied a doc patch for pg_last_xact_replay_timestamp, and the
> text now says:
>
> Get timestamp of last transaction replayed during recovery.
> This is the time at which the commit or abort WAL record for that
> t
On Fri, Mar 18, 2011 at 1:19 PM, Erik Rijkers wrote:
> This is OK and expected. But then it continues (in the logfile) with:
>
> FATAL: lock file "postmaster.pid" already exists
> HINT: Is another postmaster (PID 20519) running in data directory
> "/var/data1/pg_stuff/pg_installations/pgsql.van
Robert Haas writes:
> As a side note, it's not very obvious why some parts of PostmasterMain
> report problems by doing write_stderr() and exit() while other parts
> use ereport(ERROR). This check and the nearby checks on WAL level are
> immediately preceded and followed by other checks that use
Fujii Masao writes:
> On Fri, Mar 18, 2011 at 1:17 AM, Robert Haas wrote:
>>> Sorry, I've not been able to understand the point well yet. We should
>>> just use elog(ERROR) instead? But since ERROR in startup process
>>> is treated as FATAL, I'm not sure whether it's worth using ERROR
>>> instead
On 3/18/11 11:15 AM, Jim Nasby wrote:
> To take the opposite approach... has anyone looked at having the OS just
> manage all caching for us? Something like MMAPed shared buffers? Even if we
> find the issue with large shared buffers, we still can't dedicate serious
> amounts of memory to them b
"Kevin Grittner" wrote on Thursday 17 March 2011 22:02:18:
> Radosław Smogura wrote:
> > I have implemented an initial concept of a 2nd-level cache. The idea
> > is to keep some segments of shared memory for special buffers (e.g.
> > indices) to prevent those being overwritten by other operations. I
> > added those function
I just applied a doc patch for pg_last_xact_replay_timestamp, and the
text now says:
Get timestamp of last transaction replayed during recovery.
This is the time at which the commit or abort WAL record for that
transaction was generated on the primary.
If no transact
On Fri, Mar 18, 2011 at 5:48 PM, Kevin Grittner
wrote:
> Robert Haas wrote:
>> Well, the idea is that we don't want to let people depend on the
>> value until it's guaranteed to be durably committed.
>
> OK, so if you see it on the replica, you know it is in at least two
> places. I guess that m
Responding to this again, somewhat out of order...
On Fri, Mar 18, 2011 at 1:15 PM, Simon Riggs wrote:
> Together that's about a >20% hit in performance in Yeb's tests. I think
> you should spend a little time thinking how to retune that.
I've spent some time playing around with pgbench and so f
"Kevin Grittner" wrote:
> I'm still looking at whether it's sane to try to issue a warning
> when an HTAB exceeds the number of entries declared as its
> max_size when it was created.
I think this does it.
If nothing else, it might be instructive to use it while testing the
SSI patch. Would
Robert Haas wrote:
> Well, the idea is that we don't want to let people depend on the
> value until it's guaranteed to be durably committed.
OK, so if you see it on the replica, you know it is in at least two
places. I guess that makes sense. It kinda "feels" wrong to see a
view of the repli
On Fri, Mar 18, 2011 at 5:24 PM, Kevin Grittner
wrote:
> Robert Haas wrote:
>
>> Since the current solution is intended to support data-loss-free
>> failover, but NOT to guarantee a consistent view of the world from
>> a SQL level, I doubt it's worth paying any price for this.
>
> Well, that brin
On Fri, Mar 18, 2011 at 2:55 PM, Alvaro Herrera
wrote:
> Excerpts from Robert Haas's message of vie mar 18 14:25:16 -0300 2011:
>> On Fri, Mar 18, 2011 at 1:15 PM, Simon Riggs wrote:
>
>> > SyncRepUpdateSyncStandbysDefined() is added into walwriter, which means
>> > waiters won't be released if w
On Fri, 2011-03-18 at 16:24 -0500, Kevin Grittner wrote:
> Robert Haas wrote:
>
> > Since the current solution is intended to support data-loss-free
> > failover, but NOT to guarantee a consistent view of the world from
> > a SQL level, I doubt it's worth paying any price for this.
>
> Well, t
On Fri, 2011-03-18 at 17:08 -0400, Aidan Van Dyk wrote:
> On Fri, Mar 18, 2011 at 3:41 PM, Markus Wanner wrote:
> > On 03/18/2011 08:29 PM, Simon Riggs wrote:
> >> We could do that easily enough, actually, if we wished.
> >>
> >> Do we wish?
> >
> > I personally don't see any problem letting a sta
Robert Haas wrote:
> Since the current solution is intended to support data-loss-free
> failover, but NOT to guarantee a consistent view of the world from
> a SQL level, I doubt it's worth paying any price for this.
Well, that brings us back to the question of why we would want to
suppress the
On Fri, Mar 18, 2011 at 3:29 PM, Simon Riggs wrote:
> On Fri, 2011-03-18 at 20:19 +0100, Markus Wanner wrote:
>> Simon,
>>
>> On 03/18/2011 05:19 PM, Simon Riggs wrote:
>> >>> Simon Riggs wrote:
>> In PostgreSQL other users cannot observe the commit until an
>> acknowledgement has been
On Fri, Mar 18, 2011 at 2:15 PM, Jim Nasby wrote:
> +1
>
> To take the opposite approach... has anyone looked at having the OS just
> manage all caching for us? Something like MMAPed shared buffers? Even if we
> find the issue with large shared buffers, we still can't dedicate serious
> amounts
While investigating Simon's complaint about my patch of a few days
ago, I discovered that synchronous replication appears to slow to a
crawl if fsync is turned off on the standby.
I'm not sure why this is happening or what the right behavior is in
this case, but I think some kind of adjustment is
Dan Ports wrote:
> I am surprised to see that error message without SSI's hint about
> increasing max_predicate_locks_per_xact.
After reviewing this, I think something along the following lines
might be needed, for a start. I'm not sure the Asserts are actually
needed; they basically are chec
On 03/18/2011 08:29 PM, Simon Riggs wrote:
> We could do that easily enough, actually, if we wished.
>
> Do we wish?
I personally don't see any problem letting a standby show a snapshot
before the master. I'd consider it unneeded network traffic. But then
again, I'm completely biased.
Regards
Simon Riggs wrote:
> On Fri, 2011-03-18 at 20:19 +0100, Markus Wanner wrote:
>> >>> Simon Riggs wrote:
>> In PostgreSQL other users cannot observe the commit until an
>> acknowledgement has been received.
>>
>> On other nodes as well? To me that means the standby needs to
>> hold ba
On Fri, 2011-03-18 at 20:19 +0100, Markus Wanner wrote:
> Simon,
>
> On 03/18/2011 05:19 PM, Simon Riggs wrote:
> >>> Simon Riggs wrote:
> In PostgreSQL other users cannot observe the commit until an
> acknowledgement has been received.
>
> On other nodes as well? To me that means the
On 03/18/2011 05:27 PM, Kevin Grittner wrote:
> Basically, what Heikki addresses. It has to be committed after
> crash and recovery, and deal with replicas which may or may not have
> been notified and may or may not have applied the transaction.
Huh? I'm not quite following here. Committing ad
Simon,
On 03/18/2011 05:19 PM, Simon Riggs wrote:
>>> Simon Riggs wrote:
In PostgreSQL other users cannot observe the commit until an
acknowledgement has been received.
On other nodes as well? To me that means the standby needs to hold back
COMMIT of an ACKed transaction, until receiv
On 03/18/2011 06:35 PM, Greg Stark wrote:
> I think promising that the COMMIT doesn't return until the transaction
> and all previous transactions are replicated is enough. We don't have
> to promise that nobody else will see it either. Those same
> transactions eventually have to commit as well
N
Excerpts from Robert Haas's message of vie mar 18 14:25:16 -0300 2011:
> On Fri, Mar 18, 2011 at 1:15 PM, Simon Riggs wrote:
> > SyncRepUpdateSyncStandbysDefined() is added into walwriter, which means
> > waiters won't be released if we do a sighup during a fast shutdown,
> > since the walwriter
On Mar 18, 2011, at 11:19 AM, Robert Haas wrote:
> On Fri, Mar 18, 2011 at 11:14 AM, Kevin Grittner
> wrote:
> A related area that could use some looking at is why performance tops
> out at shared_buffers ~8GB and starts to fall thereafter. InnoDB can
> apparently handle much larger buffer pools
It would probably also be worth monitoring the size of pg_locks to see
how many predicate locks are being held.
On Fri, Mar 18, 2011 at 12:50:16PM -0500, Kevin Grittner wrote:
> Even with the above information it may be far from clear where
> allocations are going past their maximum, since one HT
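The check being discussed, warning when a hash table grows past its declared maximum, is easy to illustrate in miniature. This toy `BoundedHash` is purely illustrative and is not PostgreSQL's dynahash/HTAB code:

```python
import warnings

class BoundedHash(dict):
    """Dict that warns once when entries exceed a declared max_size,
    roughly analogous to checking an HTAB against its creation-time limit."""
    def __init__(self, max_size):
        super().__init__()
        self.max_size = max_size
        self._warned = False

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        if len(self) > self.max_size and not self._warned:
            self._warned = True  # warn only on the first overflow
            warnings.warn(f"hash table exceeded declared max_size="
                          f"{self.max_size} (now {len(self)} entries)")

h = BoundedHash(max_size=2)
h["a"] = 1
h["b"] = 2          # at the limit: no warning yet
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    h["c"] = 3      # past the limit: one warning fires
print(len(caught))  # prints 1
```

Warning only once keeps the check cheap and the log quiet, which matters if it is to run during something like SSI testing.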
All,
I've heard from JPUG and all directors are OK. Not sure about all
members, though. All staff of SRA are also OK.
--
-- Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com
YAMAMOTO Takashi wrote:
> thanks for quickly fixing problems.
Thanks for the rigorous testing. :-)
> I tested the later version
> (a2eb9e0c08ee73208b5419f5a53a6eba55809b92) and the only errors I got
> were "out of shared memory". I'm not sure if they were caused by SSI
> activity or not.
> PG_
On Fri, Mar 18, 2011 at 4:33 PM, Robert Haas wrote:
> The fundamental problem here is that once you update CLOG and flush
> the corresponding WAL record, there is no going backward. You can
> hold the system in some intermediate state where the transaction still
> holds locks and is excluded from
On Fri, Mar 18, 2011 at 1:15 PM, Simon Riggs wrote:
> On Thu, 2011-03-17 at 09:33 -0400, Robert Haas wrote:
>> Thanks for the review!
>
> Let's have a look here...
>
> You've added a test inside the lock to see if there is a standby, which
> I took out for performance reasons. Maybe there's another
I am not sure the following pg_ctl behaviour is really a bug, but I find
it unexpected enough to report.
I was testing synchronous replication in a test setup on a single machine.
(After all, one could have different instances on different arrays, right?
If you think this is an unlikely use-c
On Thu, 2011-03-17 at 09:33 -0400, Robert Haas wrote:
> Thanks for the review!
Let's have a look here...
You've added a test inside the lock to see if there is a standby, which
I took out for performance reasons. Maybe there's another way, I know
that code is fiddly.
You've also added back in th
Excerpts from rsmogura's message of vie mar 18 11:57:48 -0300 2011:
> Actually the idea of this patch was like this:
> Some operations require many buffers; PG uses "clock sweep" to get the
> next free buffer, so it may overwrite an index buffer. From the point
> of view of good database design we should
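For context, the "clock sweep" mentioned above is PostgreSQL's buffer replacement algorithm: each buffer carries a usage count that the sweeping hand decrements as it passes, and only a buffer whose count has dropped to zero is evicted. A simplified standalone model of that behavior (not the actual freelist.c code):

```python
class ClockSweep:
    """Toy clock-sweep buffer pool: pinning a buffer bumps its usage
    count; the sweeping hand decrements counts and evicts the first
    buffer found at zero."""
    def __init__(self, nbuffers):
        self.usage = [0] * nbuffers
        self.hand = 0

    def pin(self, buf, max_count=5):
        # Accessing a buffer bumps its usage count (capped, as in PG).
        self.usage[buf] = min(self.usage[buf] + 1, max_count)

    def evict(self):
        # Sweep until a buffer with usage_count == 0 is found.
        while True:
            buf = self.hand
            self.hand = (self.hand + 1) % len(self.usage)
            if self.usage[buf] == 0:
                return buf
            self.usage[buf] -= 1

pool = ClockSweep(4)
pool.pin(0); pool.pin(0); pool.pin(2)   # buffers 1 and 3 stay unused
print(pool.evict())  # prints 1: the first zero-usage buffer past the hand
```

The complaint in the thread is precisely that this scheme is workload-blind: a burst of activity can drive index buffers' usage counts to zero and evict them, which is what the proposed second-level cache tries to prevent.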
Robert Haas wrote:
> Simon Riggs wrote:
>> No, only in the case where you choose not to failover to the
>> standby when you crash, which would be a fairly strange choice
>> after the effort to set up the standby. In a correctly configured
>> and operated cluster what I say above is fully correc
On Thu, Mar 10, 2011 at 11:25 AM, Robert Haas wrote:
> On Thu, Mar 10, 2011 at 10:59 AM, Tom Lane wrote:
>> Robert Haas writes:
>> Speaking of running scripts, I think we should run pgindent now.
>>
>>> Yeah, +1 for doing it as soon as Tom is at a good stopping point. It
>>> makes things a
On Fri, Mar 18, 2011 at 11:58 AM, Heikki Linnakangas
wrote:
> On 18.03.2011 17:38, Jeff Davis wrote:
>>
>> On Fri, 2011-03-18 at 10:27 -0400, Robert Haas wrote:
>>>
>>> ERRCODE_(WARNING_?)REPLICATION_WAIT_CANCELLED
>>>
>>> ...which might have something to recommend it.
>>
>> Works for me.
>
> Yes,
On Fri, Mar 18, 2011 at 12:19 PM, Simon Riggs wrote:
> On Fri, 2011-03-18 at 17:47 +0200, Heikki Linnakangas wrote:
>> On 18.03.2011 16:52, Kevin Grittner wrote:
>> > Simon Riggs wrote:
>> >
>> >> In PostgreSQL other users cannot observe the commit until an
>> >> acknowledgement has been received
> On 18.03.2011 16:52, Kevin Grittner wrote:
>> Simon Riggs wrote:
>>
>>> In PostgreSQL other users cannot observe the commit until an
>>> acknowledgement has been received.
>>
>> Really? I hadn't picked up on that. That makes for a lot of
>> complication on crash-and-recovery of a master, but
On Fri, Mar 18, 2011 at 11:14 AM, Kevin Grittner
wrote:
> Maybe the thing to focus on first is the oft-discussed "benchmark
> farm" (similar to the "build farm"), with a good mix of loads, so
> that the impact of changes can be better tracked for multiple
> workloads on a variety of platforms and
On Fri, 2011-03-18 at 17:47 +0200, Heikki Linnakangas wrote:
> On 18.03.2011 16:52, Kevin Grittner wrote:
> > Simon Riggs wrote:
> >
> >> In PostgreSQL other users cannot observe the commit until an
> >> acknowledgement has been received.
> >
> > Really? I hadn't picked up on that. That makes fo
On Fri, Mar 18, 2011 at 2:37 PM, Markus Wanner wrote:
> Hi,
>
> On 03/18/2011 02:40 PM, Kevin Grittner wrote:
>> Then the only thing you would consider sync replication, as far as I
>> can see, is two phase commit
>
> I think waiting for the ACK before actually making the changes from the
> transa
On 18.03.2011 17:38, Jeff Davis wrote:
On Fri, 2011-03-18 at 10:27 -0400, Robert Haas wrote:
ERRCODE_(WARNING_?)REPLICATION_WAIT_CANCELLED
...which might have something to recommend it.
Works for me.
Yes, sounds reasonable. Without "WARNING_", please.
--
Heikki Linnakangas
EnterpriseDB
On Fri, Mar 18, 2011 at 2:19 PM, Markus Wanner wrote:
> Their documentation [1] isn't entirely clear on that first: "the master
> blocks after the commit is done and waits until at least one
> semisynchronous slave acknowledges that it has received all events for
> the transaction" and the "slave
On 18.03.2011 16:52, Kevin Grittner wrote:
Simon Riggs wrote:
In PostgreSQL other users cannot observe the commit until an
acknowledgement has been received.
Really? I hadn't picked up on that. That makes for a lot of
complication on crash-and-recovery of a master, but if we can pull
it of
On 03/18/2011 03:52 PM, Kevin Grittner wrote:
> Really? I hadn't picked up on that. That makes for a lot of
> complication on crash-and-recovery of a master
What complication do you have in mind here?
I think of it the opposite way (at least for Postgres, that is):
committing a transaction that
On Fri, 2011-03-18 at 11:07 -0400, Robert Haas wrote:
> On Fri, Mar 18, 2011 at 10:55 AM, Greg Stark wrote:
> > On Thu, Mar 17, 2011 at 5:46 PM, Robert Haas wrote:
> >> What makes more sense to me after having thought about this more
> >> carefully is to simply make a blanket rule that when
> >>
On Fri, 2011-03-18 at 10:27 -0400, Robert Haas wrote:
> ERRCODE_(WARNING_?)REPLICATION_WAIT_CANCELLED
>
> ...which might have something to recommend it.
Works for me.
Regards,
Jeff Davis
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subs
rsmogura wrote:
> Yes, there is some change, and I looked at this more carefully, as
> my performance results weren't what I expected. I found PG uses a
> BufferAccessStrategy for sequential scans, so my test query took
> only 32 buffers from the pool and didn't overwrite the index pool too
> much. T
On Fri, Mar 18, 2011 at 10:55 AM, Greg Stark wrote:
> On Thu, Mar 17, 2011 at 5:46 PM, Robert Haas wrote:
>> What makes more sense to me after having thought about this more
>> carefully is to simply make a blanket rule that when
>> synchronous_replication=on, synchronous_commit has no effect. T
On Thu, 17 Mar 2011 16:02:18 -0500, Kevin Grittner wrote:
Radosław Smogura wrote:
I have implemented an initial concept of a 2nd-level cache. The idea is
to keep some segments of shared memory for special buffers (e.g.
indices) to prevent those being overwritten by other operations. I
added those functionality
On Thu, Mar 17, 2011 at 5:46 PM, Robert Haas wrote:
> What makes more sense to me after having thought about this more
> carefully is to simply make a blanket rule that when
> synchronous_replication=on, synchronous_commit has no effect. That is
> easy to understand and document.
For what it's w
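Concretely, the blanket rule being proposed would make a configuration like the following behave as fully synchronous (parameter names as they stood in the 9.1 development cycle; this fragment is illustrative only, not from the thread):

```ini
# postgresql.conf fragment (hypothetical, per the proposal being discussed)
synchronous_replication = on   # commit waits for the standby's acknowledgement
synchronous_commit = off       # proposed rule: ignored while the line above is on
```

The appeal is that there is only one switch to reason about: asking for a standby acknowledgement implies a local WAL flush, so the weaker setting silently loses its meaning rather than producing a contradictory combination.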
Simon Riggs wrote:
> In PostgreSQL other users cannot observe the commit until an
> acknowledgement has been received.
Really? I hadn't picked up on that. That makes for a lot of
complication on crash-and-recovery of a master, but if we can pull
it off, that's really cool. If we do that and
2011/3/18 Vaibhav Kaushal :
> Hi,
> That was the question I was facing 5 months ago and trust me I am doing it
> even now. With an average of 6+ hours going into PostgreSQL Code, even with
> best practices (as suggested by the developers) I still think I know less
> than 10 percent. It is too huge
On 18 March 2011 14:23, Robert Haas wrote:
> On Thu, Mar 17, 2011 at 1:59 PM, Thom Brown wrote:
>> On 17 March 2011 17:55, Robert Haas wrote:
>>> On Thu, Mar 17, 2011 at 1:24 PM, Thom Brown wrote:
errdetail("The transaction has already been committed locally but
might have not been re
Hi,
On 03/18/2011 02:40 PM, Kevin Grittner wrote:
> Then the only thing you would consider sync replication, as far as I
> can see, is two phase commit
I think waiting for the ACK before actually making the changes from the
transaction visible (COMMIT) would suffice for disallowing such an
incons
2011/3/18 Brendan Jurd :
> On 18 March 2011 01:57, hom wrote:
>> I try to known how a database is implemented
>
> This objective is so vast and so vague that it's difficult to give
> meaningful help.
>
> I'd emphasise Kevin Grittner's very worthwhile advice. Try to break
> your question down int
2011/3/18 Markus Wanner :
> Hom,
>
> On 03/17/2011 04:49 PM, Kevin Grittner wrote:
>> That's ambitious.
>
> Absolutely, yes. Exercise patience with yourself.
>
> A method that hasn't been mentioned, yet, is digging out your debugger
> and attaching it to a connected Postgres backend. You can then is
On Fri, Mar 18, 2011 at 10:17 AM, Jeff Davis wrote:
> On Fri, 2011-03-18 at 08:27 -0400, Robert Haas wrote:
>> On Thu, Mar 17, 2011 at 6:00 PM, Jeff Davis wrote:
>> > On Wed, 2011-03-16 at 13:35 -0400, Robert Haas wrote:
>> >> 2. If a query cancel interrupt is received (pg_cancel_backend or ^C),
On Thu, Mar 17, 2011 at 1:59 PM, Thom Brown wrote:
> On 17 March 2011 17:55, Robert Haas wrote:
>> On Thu, Mar 17, 2011 at 1:24 PM, Thom Brown wrote:
>>> errdetail("The transaction has already been committed locally but
>>> might have not been replicated to the standby.")));
>>> errdetail("The t
Mark,
On 03/18/2011 02:16 PM, MARK CALLAGHAN wrote:
> We didn't invent the term, we just implemented something that Heikki
> Tuuri briefly described, for example:
> http://bugs.mysql.com/bug.php?id=7440
Oh, okay, good to know who to blame ;-) However, I didn't mean to
offend anybody.
> I do not
On Fri, 2011-03-18 at 13:16 +, MARK CALLAGHAN wrote:
> On Fri, Mar 18, 2011 at 9:27 AM, Markus Wanner wrote:
> > Google invented the term "semi-synchronous" for something that's
> > essentially the same as what we have now, I think. However, I
> > wholeheartedly hate that term (based on the reas
On Fri, 2011-03-18 at 08:27 -0400, Robert Haas wrote:
> On Thu, Mar 17, 2011 at 6:00 PM, Jeff Davis wrote:
> > On Wed, 2011-03-16 at 13:35 -0400, Robert Haas wrote:
> >> 2. If a query cancel interrupt is received (pg_cancel_backend or ^C),
> >> then cancel the sync rep wait and issue a warning bef
On Fri, 2011-03-18 at 09:39 -0400, Robert Haas wrote:
> On Thu, Mar 17, 2011 at 5:32 PM, Robert Haas wrote:
> > On Thu, Mar 17, 2011 at 5:29 PM, Andrew Dunstan wrote:
> >>> Is this really intended?
> >>
> >> I sure hope not.
> >
> > That's a bug. Not sure if it's a psql bug or a backend bug, but
2011/3/17 Kevin Grittner :
> hom wrote:
>
>> I try to known how a database is implemented and I have been
>> reading PG source codes for a month.
>
> That's ambitious.
>
> find -name '*.h' -or -name '*.c' \
> | egrep -v '^\./src/test/.+/tmp_check/' \
> | xargs cat | wc -l
> 1059144
>
> Depending
2011/3/17 Bruce Momjian :
> hom wrote:
>> Hi,
>>
>> I try to known how a database is implemented and I have been reading
>> PG source codes for a month.
>>
>> Now, I only know a little about how PG work. :(
>>
>> I just know PG work like this but I don't know why PG work like this. :( :(
>>
>>
On Mon, Mar 7, 2011 at 3:44 AM, Fujii Masao wrote:
> On Mon, Mar 7, 2011 at 5:27 PM, Fujii Masao wrote:
>> On Mon, Mar 7, 2011 at 7:51 AM, Simon Riggs wrote:
>>> Efficient transaction-controlled synchronous replication.
>>> If a standby is broadcasting reply messages and we have named
>>> one or
On Mon, Mar 14, 2011 at 9:56 AM, Robert Haas wrote:
> On Sat, Mar 12, 2011 at 12:56 AM, Bruce Momjian wrote:
>>> Presumably the point of deprecating the feature is that we'd
>>> eventually remove it. If 4 major releases isn't long enough, what is?
>>
>> Good point.
>
> Unless there are further o
MARK CALLAGHAN wrote:
> Markus Wanner wrote:
>> Google invented the term "semi-synchronous" for something that's
>> essentially the same as what we have now, I think. However, I
>> wholeheartedly hate that term (based on the reasoning that there's no
>> semi-pregnant, either).
To be fair, what
On Thu, Mar 17, 2011 at 5:32 PM, Robert Haas wrote:
> On Thu, Mar 17, 2011 at 5:29 PM, Andrew Dunstan wrote:
>>> Is this really intended?
>>
>> I sure hope not.
>
> That's a bug. Not sure if it's a psql bug or a backend bug, but it's
> definitely a bug.
It's a backend bug. Prior to Simon's pat
On Fri, Mar 18, 2011 at 9:16 AM, MARK CALLAGHAN wrote:
> On Fri, Mar 18, 2011 at 9:27 AM, Markus Wanner wrote:
>> Google invented the term "semi-synchronous" for something that's
>> essentially the same as what we have now, I think. However, I
>> wholeheartedly hate that term (based on the reasonin
On Fri, Mar 18, 2011 at 9:27 AM, Markus Wanner wrote:
> Google invented the term "semi-synchronous" for something that's
> essentially the same as what we have now, I think. However, I
> wholeheartedly hate that term (based on the reasoning that there's no
> semi-pregnant, either).
We didn't invent
On Fri, Mar 18, 2011 at 8:27 AM, Heikki Linnakangas
wrote:
> You could also argue for "log a warning, continue until we can open for Hot
> standby, then pause".
I don't like that one much.
> I can write the patch once we know what we want. All of those options sound
> reasonable to me. This is s
On Thu, Mar 17, 2011 at 6:00 PM, Jeff Davis wrote:
> On Wed, 2011-03-16 at 13:35 -0400, Robert Haas wrote:
>> 2. If a query cancel interrupt is received (pg_cancel_backend or ^C),
>> then cancel the sync rep wait and issue a warning before acknowledging
>> the commit.
>
> When I saw this commit, I
On 18.03.2011 14:14, Robert Haas wrote:
On Fri, Mar 18, 2011 at 3:22 AM, Heikki Linnakangas
wrote:
If recovery target is set to before its consistent, ie. before
minRecoveryPoint, we should throw an error before recovery even starts. I'm
not sure if we check that at the moment.
I don't see h
On Fri, Mar 18, 2011 at 2:25 AM, Fujii Masao wrote:
> In the first place, I think that it's complicated to keep those two parameters
> separately. What about merging them to one parameter? What I'm thinking
> is to remove synchronous_replication and to increase the valid values of
> synchronous_co
On Fri, Mar 18, 2011 at 3:52 AM, Simon Riggs wrote:
>> I agree to get rid of write_location.
>
> No, don't remove it.
>
> We seem to be just looking for things to tweak without any purpose.
> Removing this adds nothing for us.
>
> We will have the column in the future, it is there now, so leave it
On Fri, Mar 18, 2011 at 3:22 AM, Heikki Linnakangas
wrote:
> If recovery target is set to before its consistent, ie. before
> minRecoveryPoint, we should throw an error before recovery even starts. I'm
> not sure if we check that at the moment.
I don't see how you could check that anyway. How do
On 18.03.2011 10:48, Heikki Linnakangas wrote:
On 17.03.2011 21:39, Robert Haas wrote:
On Mon, Jan 31, 2011 at 10:45 PM, Fujii Masao
wrote:
On Tue, Feb 1, 2011 at 1:31 AM, Heikki Linnakangas
wrote:
Hmm, good point. It's harmless, but creating the history file in the
first
place sure seems lik
Hi,
That was the question I was facing 5 months ago and trust me I am doing it
even now. With an average of 6+ hours going into PostgreSQL Code, even with
best practices (as suggested by the developers) I still think I know less
than 10 percent. It is too huge to be swallowed at once.
I too had t
On 18 March 2011 01:57, hom wrote:
> I try to known how a database is implemented
This objective is so vast and so vague that it's difficult to give
meaningful help.
I'd emphasise Kevin Grittner's very worthwhile advice. Try to break
your question down into smaller, more specific ones. With a
Hom,
On 03/17/2011 04:49 PM, Kevin Grittner wrote:
> That's ambitious.
Absolutely, yes. Exercise patience with yourself.
A method that hasn't been mentioned, yet, is digging out your debugger
and attaching it to a connected Postgres backend. You can then issue a
query you are interested in and fo
Hi,
sorry for being late to join that bike-shedding discussion.
On 03/07/2011 05:09 PM, Alvaro Herrera wrote:
> I think these terms are used inconsistently enough across the industry
> that what would make the most sense would be to use the common term and
> document accurately what we mean by it,
On 17.03.2011 21:39, Robert Haas wrote:
On Mon, Jan 31, 2011 at 10:45 PM, Fujii Masao wrote:
On Tue, Feb 1, 2011 at 1:31 AM, Heikki Linnakangas
wrote:
Hmm, good point. It's harmless, but creating the history file in the first
place sure seems like a waste of time.
The attached patch change
On Fri, 2011-03-18 at 14:45 +0900, Fujii Masao wrote:
> On Fri, Mar 18, 2011 at 2:52 AM, Robert Haas wrote:
> > On Thu, Mar 10, 2011 at 3:04 PM, Robert Haas wrote:
> >>> - /* Let the master know that we received some
> >>> data. */
> >>> - XLogWalRcvSe
On 18.03.2011 07:13, Fujii Masao wrote:
On Fri, Mar 18, 2011 at 1:17 AM, Robert Haas wrote:
One thing I'm not quite clear on is what happens if we reach the
recovery target before we reach consistency. i.e. create restore
point, flush xlog, abnormal shutdown, try to recover to named restore
po
Hi,
thanks for quickly fixing the problems.
I tested the later version (a2eb9e0c08ee73208b5419f5a53a6eba55809b92)
and the only errors I got were "out of shared memory". I'm not sure if
they were caused by SSI activity or not.
YAMAMOTO Takashi
the following is a snippet from my application log:
PG_DIAG_