On Aug 18, 2008, at 11:49 AM, Tom Lane wrote:
Perhaps what's also needed here is to measure just how accurate the cpu_*
costs are. Perhaps they need to be raised somewhat if we're underestimating
the cost of digging through 200 tuples on a heap page and the benefit of a
binary search on the index.
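One way to get at that, sketched here against the single-page, 200-row
oneblock table used elsewhere in the thread (names assumed from that test,
not from this message): force each plan shape and compare the estimated
costs with the measured times.

    -- force the seq scan over the one heap page
    SET enable_indexscan = off; SET enable_bitmapscan = off;
    EXPLAIN ANALYZE SELECT value FROM oneblock WHERE id = 200;
    RESET enable_indexscan; RESET enable_bitmapscan;

    -- force the index probe for the same row
    SET enable_seqscan = off;
    EXPLAIN ANALYZE SELECT value FROM oneblock WHERE id = 200;
    RESET enable_seqscan;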
Gregory Stark <[EMAIL PROTECTED]> writes:
> "Tom Lane" <[EMAIL PROTECTED]> writes:
>> I'm not necessarily opposed to making this change --- it does sound
>> kinda plausible --- but I want to see some hard evidence that it does
>> more good than harm before we put it in.
> I don't want to see this
On Mon, 2008-08-18 at 16:44 +0100, Gregory Stark wrote:
> "Tom Lane" <[EMAIL PROTECTED]> writes:
>
> > Gregory Stark <[EMAIL PROTECTED]> writes:
> >>> On Tue, 2008-08-12 at 15:46 -0400, Tom Lane wrote:
> This is only going to matter for a table of 1 block (or at least very
> few blocks), and for such a table it's highly likely that it's in RAM
> anyway.
"Tom Lane" <[EMAIL PROTECTED]> writes:
> Gregory Stark <[EMAIL PROTECTED]> writes:
>>> On Tue, 2008-08-12 at 15:46 -0400, Tom Lane wrote:
This is only going to matter for a table of 1 block (or at least very
few blocks), and for such a table it's highly likely that it's in RAM
anyway.
On Aug 13, 2008, at 10:45 PM, Andrew Gierth wrote:
You could likely expose a difference using LIMIT 1 in the subselect,
but that doesn't tell us anything we didn't already know (which is
that yes, index scan is much faster than seqscan even for 1-block
tables, except in the rare case when neither
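For concreteness, the LIMIT 1 variant would be something like this (a
sketch; table and column names follow Decibel's test quoted below):

    EXPLAIN ANALYZE
    SELECT (SELECT value FROM oneblock WHERE id = i LIMIT 1)
    FROM generate_series(1,200) i;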
> "Decibel" == Decibel! <[EMAIL PROTECTED]> writes:
Decibel> OK, ran the test again via this query:
Decibel> explain analyze select (select value from oneblock where id = i)
Decibel> from generate_series(1,1) i, generate_series(1,10) j;
Decibel> changing 1,1 to 200,200 as needed. I
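A self-contained version of that test, for anyone who wants to rerun it
(the table definition is an assumption; the thread only says oneblock
holds about 200 rows on a single heap page):

    CREATE TABLE oneblock (id int, value text);
    INSERT INTO oneblock SELECT i, 'row ' || i FROM generate_series(1,200) i;
    CREATE INDEX oneblock_id ON oneblock (id);
    VACUUM ANALYZE oneblock;  -- 200 short rows should fit in one 8K page

    EXPLAIN ANALYZE
    SELECT (SELECT value FROM oneblock WHERE id = i)
    FROM generate_series(1,1) i, generate_series(1,10) j;  -- then 200,200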
Decibel! <[EMAIL PROTECTED]> writes:
> It'd be useful to get strace data on this, but OS X doesn't have
> that :/ (and I'm on 10.4 so no dtrace either).
See ktrace.
regards, tom lane
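Roughly, on 10.4 that would be (the backend pid here is a placeholder;
kdump decodes the ktrace.out file that ktrace writes):

    ktrace -p <backend_pid>   # start tracing the backend process
    # ... run the test query in that backend's session ...
    ktrace -C                 # turn tracing off
    kdump | less              # decode ktrace.out; look for read()/lseek() calls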
On Aug 13, 2008, at 3:52 PM, Decibel! wrote:
The problem is that by looking for a constant row, you're actually
eliminating the entire effect being tested, because the uncorrelated
subselect is run ONCE as an initplan, and the entire query time is
then spent elsewhere. The differences in runtime
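The two shapes are visible directly in EXPLAIN output (same assumed test
table as above):

    -- constant key: uncorrelated sublink, planned as an InitPlan, run once
    EXPLAIN SELECT (SELECT value FROM oneblock WHERE id = 1)
    FROM generate_series(1,200) i;

    -- outer reference: correlated sublink, planned as a SubPlan, run per row
    EXPLAIN SELECT (SELECT value FROM oneblock WHERE id = i)
    FROM generate_series(1,200) i;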
Gregory Stark <[EMAIL PROTECTED]> writes:
> That means going to the index meta page, find the fast root pointer, look up
> that page, look at the single leaf page pointer, look up that page, and do a
> binary search of the 200 leaf pointers. Once you find the resulting match,
> look up the heap page.
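With contrib/pageinspect installed, that index shape can be checked
directly (a sketch; the index name oneblock_id is an assumption):

    SELECT * FROM bt_metap('oneblock_id');
    -- expect level 0 with root = fastroot: the root page is the single leaf
    SELECT count(*) FROM bt_page_items('oneblock_id', 1);
    -- ~200 items on that one leaf page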
"Decibel!" <[EMAIL PROTECTED]> writes:
> Makes sense, and yeah, I was wondering a bit about that. I'll try to
> fake it out with offset 0 later on if no one beats me to it; I do still
> think we could just be seeing the effect of slogging through 200 tuples
> instead of going directly to the one we want.
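What that would look like (a sketch of the idea as stated; whether OFFSET 0
really keeps the constant-key sublink from becoming a one-shot InitPlan is
exactly what remained to be verified):

    EXPLAIN ANALYZE
    SELECT (SELECT value FROM oneblock WHERE id = 200 OFFSET 0)
    FROM generate_series(1,200) i;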
On Wed, Aug 13, 2008 at 07:33:40PM +0100, Andrew Gierth wrote:
> The following message is a courtesy copy of an article
> that has been posted to pgsql.hackers as well.
>
> > "Decibel!" == Decibel! <[EMAIL PROTECTED]> writes:
>
> Decibel> Roughly what I get on my MBP (I'm about a factor of 2
> Decibel> slower).
> "Decibel!" == Decibel! <[EMAIL PROTECTED]> writes:
Decibel> Roughly what I get on my MBP (I'm about a factor of 2
Decibel> slower). This makes me think it's an issue of having to slog
Decibel> through an entire page one row at a time vs just using the
Decibel> index. To test this I test
On Aug 12, 2008, at 4:52 PM, Andrew Gierth wrote:
"Tom" == Tom Lane <[EMAIL PROTECTED]> writes:
Proposal: Make the first block of a seq scan cost
random_page_cost, then after that every additional block costs
seq_page_cost.
Tom> This is only going to matter for a table of 1 block (or at least
Tom> very few blocks), and for such a table it's highly likely that it's
Tom> in RAM anyway.
> > > Proposal: Make the first block of a seq scan cost random_page_cost, then
> > > after that every additional block costs seq_page_cost.
+1
> AFAICS the cost cross-over is much higher than the actual elapsed time
> cross-over for both narrow and wide tables.
Which makes absolute sense, since
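A back-of-envelope with the default settings (seq_page_cost = 1,
random_page_cost = 4, cpu_tuple_cost = 0.01, cpu_operator_cost = 0.0025)
illustrates the gap: a seq scan of one 200-row page is costed at about
1 + 200 * (0.01 + 0.0025) = 3.5, while an index probe is charged roughly
two random page fetches, i.e. around 8. So the planner keeps picking the
seq scan well past the point where the index probe already wins on
elapsed time.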
On Tue, 2008-08-12 at 23:22 -0400, Tom Lane wrote:
> Gregory Stark <[EMAIL PROTECTED]> writes:
> >> On Tue, 2008-08-12 at 15:46 -0400, Tom Lane wrote:
> >>> This is only going to matter for a table of 1 block (or at least very
> >>> few blocks), and for such a table it's highly likely that it's in RAM
> >>> anyway.
On Tue, 2008-08-12 at 23:58 +0100, Gregory Stark wrote:
> "Simon Riggs" <[EMAIL PROTECTED]> writes:
>
> > On Tue, 2008-08-12 at 15:46 -0400, Tom Lane wrote:
> >> Simon Riggs <[EMAIL PROTECTED]> writes:
> >> > Proposal: Make the first block of a seq scan cost random_page_cost, then
> >> > after that every additional block costs seq_page_cost.
Gregory Stark <[EMAIL PROTECTED]> writes:
>> On Tue, 2008-08-12 at 15:46 -0400, Tom Lane wrote:
>>> This is only going to matter for a table of 1 block (or at least very
>>> few blocks), and for such a table it's highly likely that it's in RAM
>>> anyway. So I'm unconvinced that the proposed change
On Tue, 2008-08-12 at 23:58 +0100, Gregory Stark wrote:
> People lower random_page_cost because we're not doing a good job
> estimating how much of a table is in cache.
Is it because of a bad estimate about how much of a table is in cache,
or a bad assumption about the distribution of access to
"Simon Riggs" <[EMAIL PROTECTED]> writes:
> On Tue, 2008-08-12 at 15:46 -0400, Tom Lane wrote:
>> Simon Riggs <[EMAIL PROTECTED]> writes:
>> > Proposal: Make the first block of a seq scan cost random_page_cost, then
>> > after that every additional block costs seq_page_cost.
>>
>> This is only going to matter for a table of 1 block (or at least very
>> few blocks), and for such a table it's highly likely that it's in RAM
>> anyway.
> "Tom" == Tom Lane <[EMAIL PROTECTED]> writes:
>> Proposal: Make the first block of a seq scan cost
>> random_page_cost, then after that every additional block costs
>> seq_page_cost.
Tom> This is only going to matter for a table of 1 block (or at least
Tom> very few blocks), and for such a table it's highly likely that it's
Tom> in RAM anyway.
On Tue, 2008-08-12 at 15:46 -0400, Tom Lane wrote:
> Simon Riggs <[EMAIL PROTECTED]> writes:
> > Proposal: Make the first block of a seq scan cost random_page_cost, then
> > after that every additional block costs seq_page_cost.
>
> This is only going to matter for a table of 1 block (or at least very
> few blocks), and for such a table it's highly likely that it's in RAM
> anyway.
Simon Riggs <[EMAIL PROTECTED]> writes:
> Proposal: Make the first block of a seq scan cost random_page_cost, then
> after that every additional block costs seq_page_cost.
This is only going to matter for a table of 1 block (or at least very
few blocks), and for such a table it's highly likely that it's in RAM
anyway.
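In cost terms the proposal changes the seq scan's I/O estimate from
B * seq_page_cost to random_page_cost + (B - 1) * seq_page_cost. With the
defaults a 1-block scan goes from 1.0 to 4.0, the same as a single random
heap fetch, while big scans are essentially unaffected (1000 blocks: 1003
instead of 1000).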