On Mon, Apr 2, 2012 at 12:58 PM, Kevin Grittner wrote:
> I can't help thinking that the "background hinter" I had ideas about
> writing would prevent many of the reads of old CLOG pages, taking a
> lot of pressure off of this area. It just occurred to me that the
> difference between that idea an
On Sun, Apr 1, 2012 at 12:31 PM, Heikki Linnakangas wrote:
> Currently, only regular backends set the stack base pointer, for the
> check_stack_depth() mechanism, in PostgresMain. We don't have stack overrun
> protection in auxiliary processes. However, autovacuum workers at least can
> run arbitr
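For context, here is a minimal self-contained sketch of the stack-base/stack-depth technique under discussion. The names set_stack_base() and check_stack_depth(), the 2MB limit, and the layout are illustrative assumptions for this sketch, not the server's actual code:

    /* Sketch only: record an approximate stack base address at process start,
     * then have recursive code compare a current stack address against it
     * before descending further. */
    #include <stdio.h>
    #include <stdlib.h>

    static char *stack_base_ptr = NULL;
    static long  max_stack_depth_bytes = 2 * 1024 * 1024;  /* hypothetical 2MB limit */

    static void
    set_stack_base(void)
    {
        char here;                 /* address of a local approximates the stack top */

        stack_base_ptr = &here;
    }

    static void
    check_stack_depth(void)
    {
        char here;
        long depth = stack_base_ptr - &here;   /* stack grows down on most platforms */

        if (depth < 0)
            depth = -depth;                    /* handle stacks that grow up */
        if (depth > max_stack_depth_bytes)
        {
            fprintf(stderr, "stack depth limit exceeded\n");
            exit(1);
        }
    }

    static void
    recurse(int n)
    {
        check_stack_depth();       /* errors out before overrunning the stack */
        if (n > 0)
            recurse(n - 1);
    }

    int
    main(void)
    {
        set_stack_base();          /* an auxiliary process would need an equivalent call */
        recurse(1000);
        printf("done\n");
        return 0;
    }

The point of the thread is that only regular backends currently make the equivalent of the set_stack_base() call, so the depth check has no baseline in auxiliary processes.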
On Mon, Apr 2, 2012 at 8:14 AM, Peter Geoghegan wrote:
> While the graph that I produced was about the same shape as yours, the
> underlying hardware was quite different, and indeed with my benchmark
> group commit's benefits are more apparent earlier - at 32 clients,
> throughput has more-than do
On Apr 2, 2012, at 3:16 PM, Simon Riggs wrote:
> Agreed, though I think it means the fsync is happening on a filesystem
> that causes a full system fsync. That time is not normal.
It's ext4, which AFAIK does not have that problem.
>
...Robert
On Apr 2, 2012, at 3:04 PM, Tom Lane wrote:
> Seems like basically what you've proven is that this code path *is* a
> performance issue, and that we need to think a bit harder about how to
> avoid doing the fsync while holding locks.
Hmm, good idea. I wonder if we couldn't just hand off the fsync
On Mon, Apr 2, 2012 at 12:04 PM, Tom Lane wrote:
> Robert Haas writes:
>> Long story short, when a CLOG-related stall happens,
>> essentially all the time is being spent in this here section of code:
>
>> /*
>> * If not part of Flush, need to fsync now. We assume this happens
>> *
>
>> I'm getting HEAD errors on my build farm animal fennec.
> Oh, I looked at that the other day. The animal started failing after
> you installed a new libxml in /usr/local.
Ah, okay, that makes sense. So MediaWiki wanted a new version but
On Mon, Apr 2, 2012 at 8:16 PM, Simon Riggs wrote:
> Agreed, though I think it means the fsync is happening on a filesystem
> that causes a full system fsync. That time is not normal.
I don't know what you mean. It looks like there are two cases where
this code path executes. Either more than 16
"Greg Sabino Mullane" writes:
> I'm getting HEAD errors on my build farm animal fennec.
Oh, I looked at that the other day. The animal started failing after
you installed a new libxml in /usr/local. It looks like it is compiling
against the /usr/local copy but still executing against the .so i
Simon Riggs writes:
> I suggest we optimise that by moving the dirty block into shared
> buffers and leaving it as dirty. That way we don't need to write or
> fsync at all and the bgwriter can pick up the pieces. So my earlier
> patch to get the bgwriter to clean the clog would be superfluous.
[
I'm getting HEAD errors on my build farm animal fennec.
I've narrowed it down to this test case:
greg=# CREATE TEMP TABLE boom AS SELECT 'ABC'::bytea;
greg=# SELECT table_to_xml('boom',false,false,'');
server closed the connection unexpectedly
On Mon, Apr 2, 2012 at 8:04 PM, Tom Lane wrote:
> Robert Haas writes:
>> Long story short, when a CLOG-related stall happens,
>> essentially all the time is being spent in this here section of code:
>
>> /*
>> * If not part of Flush, need to fsync now. We assume this happens
>> * i
Robert Haas writes:
> Long story short, when a CLOG-related stall happens,
> essentially all the time is being spent in this here section of code:
> /*
> * If not part of Flush, need to fsync now. We assume this happens
> * infrequently enough that it's not a performance issue.
>
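To make the contention pattern concrete, here is a hedged, greatly simplified sketch (not the actual slru.c code) contrasting fsync-under-lock with fsync-after-unlock; whether the lock can safely be released before the fsync is exactly what the rest of the thread debates:

    #include <pthread.h>
    #include <unistd.h>

    static pthread_mutex_t control_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Current pattern: write and fsync while the lock is held, so every
     * process waiting on the same control lock stalls for the full fsync. */
    void
    write_page_holding_lock(int fd, const char *page, size_t len, off_t off)
    {
        pthread_mutex_lock(&control_lock);
        if (pwrite(fd, page, len, off) < 0)
        {
            /* real code would report the error */
        }
        fsync(fd);                  /* all waiters stall for the fsync duration */
        pthread_mutex_unlock(&control_lock);
    }

    /* One variation discussed in the thread: only the write happens under the
     * lock; the slow fsync happens afterwards (or is handed to a background
     * process) without blocking other lock waiters. */
    void
    write_page_then_fsync(int fd, const char *page, size_t len, off_t off)
    {
        pthread_mutex_lock(&control_lock);
        if (pwrite(fd, page, len, off) < 0)
        {
            /* real code would report the error */
        }
        pthread_mutex_unlock(&control_lock);
        fsync(fd);
    }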
On Apr 2, 2012, at 11:58 AM, Tom Lane wrote:
>> Sounds like a lot of work for core to maintain various version comparison
>> schemes
>
> Well, the primary argument for avoiding version comparison semantics to
> begin with was exactly that we didn't want to mandate a particular
> version-numberin
"David E. Wheeler" writes:
> On Apr 2, 2012, at 11:24 AM, Peter Eisentraut wrote:
>> Or an extension could specify itself which version numbering scheme it
>> uses. This just has to be a reference to a type, which in turn could be
>> semver, debversion, or even just numeric or text (well, maybe n
On Apr 2, 2012, at 11:24 AM, Peter Eisentraut wrote:
> Or an extension could specify itself which version numbering scheme it
> uses. This just has to be a reference to a type, which in turn could be
> semver, debversion, or even just numeric or text (well, maybe name).
> Then you'd just need to
On tor, 2012-03-29 at 14:48 -0400, Robert Haas wrote:
> Frankly, I'm not sure we bet on the right horse in not mandating a
> version numbering scheme from the beginning. But given that we
> didn't, we probably don't want to get too forceful about it too
> quickly. However, we could ease into it b
On Mon, Apr 2, 2012 at 5:29 AM, Jay Levitt wrote:
> So this is pointless to the discussion now, but if you want to engage
> off-list, I'd frankly love to be reconvinced:
It may not be an unreasonable thing for an individual user to do to
their own machine. But it's not really Postgres's place to
Andrew Dunstan writes:
> On 04/02/2012 12:44 PM, Tom Lane wrote:
>> You could do something like having a list of pending chunks for each
>> value of (pid mod 256). The length of each such list ought to be plenty
>> short under ordinary circumstances.
> Yeah, ok, that should work. How big would w
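A rough illustration of the "(pid mod 256)" idea under discussion; the struct and function names here are hypothetical, but the point is that each partial log message is filed into a short per-bucket list keyed by sender pid instead of a fixed-size global array:

    #include <stdlib.h>
    #include <string.h>
    #include <sys/types.h>

    #define PENDING_BUCKETS 256

    typedef struct PendingChunk
    {
        pid_t       pid;            /* backend the partial message came from */
        size_t      len;            /* bytes accumulated so far */
        char       *data;           /* partial message text */
        struct PendingChunk *next;
    } PendingChunk;

    static PendingChunk *pending[PENDING_BUCKETS];

    static PendingChunk *
    find_pending(pid_t pid)
    {
        PendingChunk *p;

        /* only this pid's bucket is scanned, and it should stay short */
        for (p = pending[pid % PENDING_BUCKETS]; p != NULL; p = p->next)
            if (p->pid == pid)
                return p;
        return NULL;
    }

    static void
    append_pending(pid_t pid, const char *buf, size_t len)
    {
        PendingChunk *p = find_pending(pid);

        if (p == NULL)
        {
            /* first chunk from this pid: add a new entry to its bucket
             * (error handling omitted in this sketch) */
            p = malloc(sizeof(PendingChunk));
            p->pid = pid;
            p->len = 0;
            p->data = NULL;
            p->next = pending[pid % PENDING_BUCKETS];
            pending[pid % PENDING_BUCKETS] = p;
        }
        p->data = realloc(p->data, p->len + len);
        memcpy(p->data + p->len, buf, len);
        p->len += len;
    }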
Robert Haas wrote:
> This particular example shows the above chunk of code taking >13s
> to execute. Within 3s, every other backend piles up behind that,
> leading to the database getting no work at all done for a good ten
> seconds.
>
> My guess is that what's happening here is that one backe
On 04/02/2012 12:44 PM, Tom Lane wrote:
Andrew Dunstan writes:
On 04/02/2012 12:00 PM, Tom Lane wrote:
This seems like it isn't actually fixing the problem, only pushing out
the onset of trouble a bit. Should we not replace the fixed-size array
with a dynamic data structure?
But maybe your
Andrew Dunstan writes:
> On 04/02/2012 12:00 PM, Tom Lane wrote:
>> This seems like it isn't actually fixing the problem, only pushing out
>> the onset of trouble a bit. Should we not replace the fixed-size array
>> with a dynamic data structure?
> But maybe your're right. If we do that and stic
On Mon, Apr 2, 2012 at 7:01 AM, Simon Riggs wrote:
> Do you consider this proof that it can only be I/O? Or do we still
> need to find out?
I stuck a bunch more debugging instrumentation into the SLRU code. It
was fairly clear from the previous round of instrumentation that the
problem was that
On 04/02/2012 12:00 PM, Tom Lane wrote:
Andrew Dunstan writes:
On 04/01/2012 06:34 PM, Andrew Dunstan wrote:
Some of my PostgreSQL Experts colleagues have been complaining to me
that servers under load with very large queries cause CSV log files
that are corrupted,
We could just increase CH
On Fri, Mar 30, 2012 at 12:48:07AM +0200, Boszormenyi Zoltan wrote:
> On 2012-03-29 19:03, Noah Misch wrote:
one of the new sections about readahead should somehow reference the hazard
around volatile functions.
>>> Done.
>> I don't see the mention in your latest patch. You do m
David Johnston wrote:
> Just trying to bridge an apparent gap since the original e-mail seems to
> have come across as so adversarial that the underlying thoughts have
> been overlooked. Trying to contribute in my own way with my current
> resources.
Thanks, but it's my own fault for basing a h
Andrew Dunstan writes:
> On 04/01/2012 06:34 PM, Andrew Dunstan wrote:
>> Some of my PostgreSQL Experts colleagues have been complaining to me
>> that servers under load with very large queries cause CSV log files
>> that are corrupted,
> We could just increase CHUNK_SLOTS in syslogger.c, but I
On 04/01/2012 06:34 PM, Andrew Dunstan wrote:
Some of my PostgreSQL Experts colleagues have been complaining to me
that servers under load with very large queries cause CSV log files
that are corrupted, because lines are apparently multiplexed. The log
chunking protocol between the errlog rou
On Mon, Apr 2, 2012 at 8:17 AM, Jay Levitt wrote:
>
> Sure, and if humans read docs, instead of just glancing at them, that'd be
> all you needed. In any case, I could counter myself that nobody reads the
> doc period, so it doesn't matter what version is listed; that's just the
> source of my own
:) yah that makes sense no big deal. i'll probably just push this head
build of pg_dump onto the production machines till it comes out.
Thanks again!
On Sat, Mar 31, 2012 at 3:44 PM, Tom Lane wrote:
> Mike Roest writes:
> > Any idea when 9.1.4 with this change will be out so we can pull the
Dave Page wrote:
On Mon, Apr 2, 2012 at 12:29 AM, Jay Levitt wrote:
Just as an FYI, a large percentage of the PostgreSQL developers are
Mac users, including myself. They're also the company standard at
EnterpriseDB - so we're not entirely unfamiliar with software
development on them.
Good to k
On 1 April 2012 06:41, Robert Haas wrote:
> There seem to be two relevant differences between your test and mine:
> (1) your test is just a single insert per transaction, whereas mine is
> pgbench's usual update, select, update, update, insert and (2) it
> seems that, to really see the benefit of
On 04/02/2012 05:23 AM, Dave Page wrote:
There are hundreds of thousands of pieces of malware for Windows that
relied on the ability to write to "system" directories like this to do
their misdeeds. Anywhere they can write (or modify existing) software
that may get executed at boot time or by an
On Mon, Apr 2, 2012 at 11:49 AM, Greg Stark wrote:
> On Mon, Apr 2, 2012 at 8:15 AM, Simon Riggs wrote:
>> Not true, please refer to code at line 544, as I already indicated.
>>
>> My understanding of the instrumentation is that the lock acquired at
>> line 526 will show as the blocker until we r
On Mon, Apr 2, 2012 at 8:15 AM, Simon Riggs wrote:
> Not true, please refer to code at line 544, as I already indicated.
>
> My understanding of the instrumentation is that the lock acquired at
> line 526 will show as the blocker until we reach line 555, so anything
> in between could be responsib
On Mon, Apr 2, 2012 at 12:29 AM, Jay Levitt wrote:
>
> At this point I agree with you, but I'm still going to go into detail,
> because I think there are two markets for Postgres, and the database
> community has been so focused around enterprise for so long that you're
> missing opportunities wit
On Mon, Apr 2, 2012 at 8:36 AM, Simon Riggs wrote:
> On Mon, Apr 2, 2012 at 1:17 AM, Joe Van Dyk wrote:
>
>> Anyone else want event scheduling / cron / temporal triggers in
>> postgresql? http://dev.mysql.com/doc/refman/5.1/en/events-overview.html
>> shows how it works in mysql.
>>
>> Can we thro
On Sat, Mar 31, 2012 at 6:37 AM, Dobes Vandermeer wrote:
> On Sat, Mar 31, 2012 at 1:44 AM, Daniel Farina wrote:
>>
>> On Fri, Mar 30, 2012 at 10:21 AM, Daniel Farina wrote:
>> > Any enhancement here that can't be used with libpq via, say, drop-in
>> > .so seems unworkable to me, and that's why
On Mon, Apr 2, 2012 at 12:00 AM, Greg Stark wrote:
> On Sun, Apr 1, 2012 at 4:05 AM, Robert Haas wrote:
>> My guess based on previous testing is
>> that what's happening here is (1) we examine a tuple on an old page
>> and decide we must look up its XID, (2) the relevant CLOG page isn't
>> in cac
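For readers following the thread, a toy sketch of the two-step mechanism described above; every name here (TupleHeader, the clog_* stubs) is an illustrative stand-in, not PostgreSQL's real code:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef uint32_t TransactionId;

    typedef struct
    {
        TransactionId xmin;            /* inserting transaction */
        bool          hint_committed;  /* cached "inserter committed" flag */
    } TupleHeader;

    /* Stand-in stubs; the real code reads pg_clog pages, possibly from disk. */
    static bool clog_page_in_cache(TransactionId xid)    { return xid % 2 == 0; }
    static bool clog_status_from_cache(TransactionId xid) { (void) xid; return true; }
    static bool clog_status_from_disk(TransactionId xid)  { (void) xid; return true; } /* slow path */

    static bool
    tuple_inserter_committed(TupleHeader *tup)
    {
        /* A set hint bit avoids touching the CLOG at all. */
        if (tup->hint_committed)
            return true;

        /* (1) old, unhinted tuple: we must look up its XID in the CLOG */
        if (!clog_page_in_cache(tup->xmin))
        {
            /* (2) the CLOG page isn't cached: reading it in (and possibly
             * writing and fsyncing an evicted dirty page) is where the
             * stall, and the pile-up behind it, occurs */
            bool committed = clog_status_from_disk(tup->xmin);

            if (committed)
                tup->hint_committed = true;   /* a "background hinter" would set
                                               * this proactively, avoiding the
                                               * lookup on future scans */
            return committed;
        }
        return clog_status_from_cache(tup->xmin);
    }

    int
    main(void)
    {
        TupleHeader tup = {12345, false};

        printf("committed: %d\n", tuple_inserter_committed(&tup));
        return 0;
    }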
On Mon, Apr 2, 2012 at 1:17 AM, Joe Van Dyk wrote:
> Anyone else want event scheduling / cron / temporal triggers in
> postgresql? http://dev.mysql.com/doc/refman/5.1/en/events-overview.html
> shows how it works in mysql.
>
> Can we throw money at someone to get this in postgres? Is there work
>
On Sun, Apr 1, 2012 at 11:12 PM, Greg Stark wrote:
> On Sun, Apr 1, 2012 at 10:27 PM, Simon Riggs wrote:
>> So lock starvation on the control lock would cause a long wait after
>> each I/O, making it look like an I/O problem.
>
> Except that both of the locks involved in his smoking gun occur
> *
Robert Haas wrote:
> I suppose one interesting question is to figure out if there's a way I
> can optimize the disk configuration in this machine, or the Linux I/O
> scheduler, or something, so as to reduce the amount of time it spends
> waiting for the disk.
I'd be curious to know if using the de