Tom Lane writes:
> What I intend to do over the next day or so is commit the prefetch
> infrastructure and the bitmap scan prefetch logic, but I'm bouncing the
> indexscan part back for rework. I think that it should be implemented
> in or near index_getnext() and pay attention to
> effective_io_concurrency.
Gregory Stark writes:
> Tom Lane writes:
>> 2. I fixed it so that setting effective_io_concurrency to zero disables
>> prefetching altogether; there was no way to do that in the patch as
>> submitted.
> Hm. the original intent was that effective_io_concurrency 1 meant no
> prefetching because th
Tom Lane writes:
> "Robert Haas" writes:
>> OK, here's an update of Greg's patch with the runtime configure test
>> ripped out, some minor documentation tweaks, and a few unnecessary
>> whitespace diff hunks quashed. I think this is about ready for
>> committer review.
>
> I've applied most of
"Robert Haas" writes:
> OK, here's an update of Greg's patch with the runtime configure test
> ripped out, some minor documentation tweaks, and a few unnecessary
> whitespace diff hunks quashed. I think this is about ready for
> committer review.
I've applied most of this, with a couple of notab
"Robert Haas" writes:
> FWIW, following your first set of commits, I extracted, cleaned up,
> and re-posted the uncommitted portion of the patch last night. Based
> on this it doesn't sound like there's much point in continuing to do
> that, but I'm happy to do so if anyone thinks otherwise.
Pro
Tom Lane writes:
> "Robert Haas" writes:
>> OK, here's an update of Greg's patch with the runtime configure test
>> ripped out, some minor documentation tweaks, and a few unnecessary
>> whitespace diff hunks quashed. I think this is about ready for
>> committer review.
>
> I've started to look
> * As coded, it generates prefetch bursts that are much too large and too
> widely spaced to be effective, not to mention that they entirely
> ignore the effective_io_concurrency control knob as well as the order
> in which the pages will actually be needed. I wonder now whether
> Robert's inability
"Robert Haas" writes:
> OK, here's an update of Greg's patch with the runtime configure test
> ripped out, some minor documentation tweaks, and a few unnecessary
> whitespace diff hunks quashed. I think this is about ready for
> committer review.
I've started to look through this, and the only p
> I think at a minimum there should be a manual configuration switch
> (ie something in pg_config_manual.h) to allow the builder to disable
> use of posix_fadvise, even if configure thinks it's there. Depending
> on buildfarm results we may have to do more than that.
This seems pretty pointless,
Tom Lane wrote:
> Peter Eisentraut writes:
> > The way I read this is that this was a temporary kernel/libc mismatch in
> > a development version of Debian 3 years ago that was fixed within 2
> > months of being reported and was never released to the general public.
> > So it would be on the same level as any of a million
Peter Eisentraut writes:
> The way I read this is that this was a temporary kernel/libc mismatch in
> a development version of Debian 3 years ago that was fixed within 2
> months of being reported and was never released to the general public.
> So it would be on the same level as any of a million
Gregory Stark wrote:
> Peter Eisentraut writes:
>> On Friday 02 January 2009 06:49:57 Greg Stark wrote:
>>> The guc run-time check is checking for known-buggy versions of glibc
>>> using sysconf to check what version of glibc you have.
>> Could you please mention the bug number in the relevant source code comments?
Peter Eisentraut writes:
> On Friday 02 January 2009 06:49:57 Greg Stark wrote:
>> The guc run-time check is checking for known-buggy versions of glibc
>> using sysconf to check what version of glibc you have.
>
> Could you please mention the bug number in the relevant source code comments?
It
On Friday 02 January 2009 06:49:57 Greg Stark wrote:
> The guc run-time check is checking for known-buggy versions of glibc
> using sysconf to check what version of glibc you have.
Could you please mention the bug number in the relevant source code comments?
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
On Fri, Jan 2, 2009 at 11:13 PM, Robert Haas wrote:
> When I did that, it when back from 50 s to 33 s, which I think means
> that posix_fadvise is getting called and that that is what is making
> it slower.
>
>> And is this on a system with multiple spindles? How many?
>
> Latitude D830 laptop. S
Tom Lane writes:
> The point of the suggestion is to prove that the patch works as
> advertised. How wide the sweet spot is for this test isn't nearly as
> interesting as proving that there *is* a sweet spot. If you can't
> find one it suggests that either the patch or the local posix_fadvise
> Any chance you could put back the code in explain.c which showed
> whether posix_fadvise is actually getting used? Another thing I did
> when testing was attaching with strace to see if posix_fadvise (the
> syscall on linux was actually fadvise64 iirc) is actually getting
> called.
I tried chang
On Fri, Jan 2, 2009 at 8:42 PM, Robert Haas wrote:
>> Hm, what were those plans? You might want to put the old code back in
>> explain.c to print the prefetching target to see how well it's doing.
>
> Well, bad news. Here's one where prefetching seems to make it WORSE.
>
> rhaas=# explain select
> Hm, what were those plans? You might want to put the old code back in
> explain.c to print the prefetching target to see how well it's doing.
Well, bad news. Here's one where prefetching seems to make it WORSE.
rhaas=# explain select sum(1) from enormous where l_shipdate in
('1992-01-01', '199
Gregory Stark writes:
> Tom Lane writes:
>> In principle you should be able to adjust the constant so that vmstat
>> shows about 50% CPU busy, and then enabling fadvise should improve
>> matters significantly.
> I think in practice individual queries don't interleave much cpu with i/o
> work. A
On Fri, Jan 2, 2009 at 5:36 PM, Robert Haas wrote:
>> I've got a stack of hardware I can do performance testing of this patch on,
>> what I haven't been able to find time for is setting up any sort of test
>> harness right now. If you or Greg have any benchmark or test program you
> could suggest that should show off the improvements here
> I've got a stack of hardware I can do performance testing of this patch on,
> what I haven't been able to find time for is setting up any sort of test
> harness right now. If you or Greg have any benchmark or test program you
> could suggest that should show off the improvements here, I'd be glad
Greg Smith wrote:
> On Fri, 2 Jan 2009, Tom Lane wrote:
>
> > ISTM that you *should* be able to see an improvement on even
> > single-spindle systems, due to better overlapping of CPU and I/O effort.
>
> The earlier synthetic tests I did:
>
> http://archives.postgresql.org/pgsql-hackers/2008-09/msg01401.php
Tom Lane writes:
> In principle you should be able to adjust the constant so that vmstat
> shows about 50% CPU busy, and then enabling fadvise should improve
> matters significantly.
I think in practice individual queries don't interleave much cpu with i/o
work. A single random page fetch is 5ms
Greg Smith writes:
> On Fri, 2 Jan 2009, Tom Lane wrote:
>> ISTM that you *should* be able to see an improvement on even
>> single-spindle systems, due to better overlapping of CPU and I/O effort.
> The earlier synthetic tests I did:
> http://archives.postgresql.org/pgsql-hackers/2008-09/msg01401.php
On Fri, 2 Jan 2009, Tom Lane wrote:
ISTM that you *should* be able to see an improvement on even
single-spindle systems, due to better overlapping of CPU and I/O effort.
The earlier synthetic tests I did:
http://archives.postgresql.org/pgsql-hackers/2008-09/msg01401.php
Showed a substantial
Greg Smith writes:
> On Thu, 1 Jan 2009, Robert Haas wrote:
>> The only thing I haven't been able to do is demonstrate that this change
>> actually produces a performance improvement. Either I'm testing the
>> wrong thing, or it just doesn't provide any benefit on a single-spindle
>> system.
On Thu, 1 Jan 2009, Robert Haas wrote:
The only thing I haven't been able to do is demonstrate that this change
actually produces a performance improvement. Either I'm testing the
wrong thing, or it just doesn't provide any benefit on a single-spindle
system.
When I did a round of testing o
On Thu, Jan 1, 2009 at 11:49 PM, Greg Stark wrote:
> Sorry for top-posting -- phone mail client sucks.
>
> I thought the autoconf ac_run_check was the test that people were
> questioning. That calls posix_fadvise to see if it crashes at configure
> time.
Yes, that's what I removed.
> The guc run-time check is checking for known-buggy versions of glibc
Sorry for top-posting -- phone mail client sucks.
I thought the autoconf ac_run_check was the test that people were
questioning. That calls posix_fadvise to see if it crashes at
configure time.
The guc run-time check is checking for known-buggy versions of glibc
using sysconf to check what version of glibc you have.
> Now that there's an actual run-time sysconf check for the buggy glibc called
> by the guc function we arguably don't need the autoconf check_run check
> anymore anyways.
Isn't that the check I just removed for you, or are you talking about
some other check that can also be removed?
...Robert
-
In theory there should be no benefit on a single spindle system. There
could be a slight benefit due to reordering of I/o but only on a raid
array would you see a significant speedup -- which should be about
equal to the number of spindles.
What would be interesting is whether you see a not
On Thu, Jan 1, 2009 at 3:55 PM, Tom Lane wrote:
> "Robert Haas" writes:
>> Am I correct in thinking that the only thing we're really checking for
>> here is whether a trivial posix_fadvise() call returns success? If
>> so, is this test really worth doing?
>
> Runtime tests performed during configure are generally a bad idea to start with
"Robert Haas" writes:
> Am I correct in thinking that the only thing we're really checking for
> here is whether a trivial posix_fadvise() call returns success? If
> so, is this test really worth doing?
Runtime tests performed during configure are generally a bad idea to
start with --- it's impossible
I tried this on my laptop running FC9, and because I forgot to run
autoconf, I got this error message when I tried to turn on
posix_fadvise.
rhaas=# set effective_io_concurrency to 3;
ERROR: could not determine if this system has a working posix_fadvise
DETAIL: Check configure.log produced by configure.
Here's the update
I also skimmed through and cleaned a couple other things. There's *still* a
function prototype which I don't see what header file to put it in, that's the
one in port/posix_fadvise.c which contains one function with one caller, guc.c.
posix_fadvise_v23.diff.gz
Description: Binary data
On Thu, Dec 11, 2008 at 4:29 PM, Tom Lane wrote:
> Greg Stark writes:
>>> A variable prefetch_pages is defined as "unsigned" or "int"
>>> in some places. Why don't you define it only once in a header
>>> and include the header in source files?
>
>> Just... Which header?
>
> MHO: the header that goes with the source file
Greg Stark <[EMAIL PROTECTED]> writes:
>> A variable prefetch_pages is defined as "unsigned" or "int"
>> in some places. Why don't you define it only once in a header
>> and include the header in source files?
> Just... Which header?
MHO: the header that goes with the source file that is most con
I'll send another path with at least 1 and 3 fixed and hunt around
again for a header file to put this guc into.
On 10 Dec 2008, at 04:22, ITAGAKI Takahiro <[EMAIL PROTECTED]> wrote:
Hello,
Gregory Stark <[EMAIL PROTECTED]> wrote:
Here's an update to eliminate two small bitrot conflicts.
Hello,
Gregory Stark <[EMAIL PROTECTED]> wrote:
> Here's an update to eliminate two small bitrot conflicts.
I read your patch with interest, but found some trivial bad manners.
* LET_OS_MANAGE_FILESIZE is already obsoleted.
You don't have to cope with the option.
* Type mismatch in prefetch_pages
Here's an update to eliminate two small bitrot conflicts.
posix_fadvise_v22.diff.gz
Description: Binary data
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's RemoteDBA services!