On Mon, Oct 17, 2005 at 09:30:24PM +0100, Simon Riggs wrote:
> On Mon, 2005-10-17 at 14:55 -0500, Jim C. Nasby wrote:
> > On Tue, Oct 11, 2005 at 10:58:58AM +0100, Simon Riggs wrote:
> > > On Mon, 2005-10-10 at 15:14 -0500, Kevin Grittner wrote:
> > > > We are looking at doing much more with PostgreSQL …

On Mon, 2005-10-17 at 14:55 -0500, Jim C. Nasby wrote:
> On Tue, Oct 11, 2005 at 10:58:58AM +0100, Simon Riggs wrote:
> > On Mon, 2005-10-10 at 15:14 -0500, Kevin Grittner wrote:
> > > We are looking at doing much more with PostgreSQL over the
> > > next two years, and it seems likely that this issue …

On Tue, Oct 11, 2005 at 10:58:58AM +0100, Simon Riggs wrote:
> On Mon, 2005-10-10 at 15:14 -0500, Kevin Grittner wrote:
> > We are looking at doing much more with PostgreSQL over the
> > next two years, and it seems likely that this issue will come up
> > again where it is more of a problem. It sounded …

On Mon, Oct 10, 2005 at 09:10:38PM -0400, Tom Lane wrote:
> I recall thinking about changing the formula to more accurately count
> the number of pages touched; but I desisted when I realized that it
> would drastically increase the cost estimates for index searches, and
> that's surely the wrong direction …
You don't get to read part of a page, but you may be dealing with
probabilities. For example, consider a case where there are ten data
pages, and you are going to read 15% of the tuples. There is a 50%
chance that your scan will start in the first half of a leaf page and
only need two leaf pages.

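The arithmetic above can be sanity-checked with a small simulation. This is only an illustrative sketch (the function name and the uniform-start assumption are mine, not from the thread): it assumes the matching tuples are contiguous in the index and that the scan's start position is uniform within a leaf page.

```python
import math
import random

def leaf_pages_touched(tuple_fraction, n_leaf_pages, start_offset):
    """Leaf pages read by a contiguous scan of tuple_fraction of the
    index, starting start_offset (in page units, 0 <= start_offset < 1)
    into its first leaf page."""
    span = tuple_fraction * n_leaf_pages  # 0.15 * 10 = 1.5 pages' worth of tuples
    return math.floor(start_offset + span) - math.floor(start_offset) + 1

random.seed(42)
trials = [leaf_pages_touched(0.15, 10, random.random()) for _ in range(100_000)]
print(sum(t == 2 for t in trials) / len(trials))  # about 0.5: half the scans need only 2 leaf pages
print(sum(trials) / len(trials))                  # expected pages is about 2.5, not a fixed whole number
```

So for the example in the mail, half the scans touch 2 leaf pages and half touch 3, for an expected 2.5 pages, which is exactly the kind of fractional quantity the ceil discussion below is about.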
On Mon, 2005-10-10 at 15:14 -0500, Kevin Grittner wrote:
> We are looking at doing much more with PostgreSQL over the
> next two years, and it seems likely that this issue will come up
> again where it is more of a problem. It sounded like there was
> > some agreement on HOW this was to be fixed, yet …

"Kevin Grittner" <[EMAIL PROTECTED]> writes:
> A couple questions occur to me, though. I'm not clear on why
> ceil is called -- do we need to eliminate the fraction here?
Well, you don't get to read part of a page. In particular, fetching 1.0
index tuples requires fetching 1.0 pages, not (say) 0 …

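For context, the estimate under discussion is roughly of this shape: scale the index's total pages by the fraction of index tuples fetched, then round up. This is a toy paraphrase with an invented function name, not the planner's actual code:

```python
import math

def estimated_index_pages(num_index_tuples, index_pages, index_tuples):
    # You don't get to read part of a page: fetching even 1.0 index
    # tuples costs a whole page, which is why the result is ceil'd
    # rather than left as a fraction.
    return math.ceil(num_index_tuples * index_pages / index_tuples)

print(estimated_index_pages(1.0, 10_000, 1_000_000))     # -> 1 (0.01 of a page still costs a full page)
print(estimated_index_pages(2500.0, 10_000, 1_000_000))  # -> 25
```

Without the ceil, very selective index probes would be charged a small fraction of a page, understating the real I/O of the probe.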
Hmmm... With that much direction, I really have no excuse
not to try a change and provide some test cases, do I?
A couple questions occur to me, though. I'm not clear on why
ceil is called -- do we need to eliminate the fraction here?
It seems to me that it wouldn't matter much except when the …

"Kevin Grittner" <[EMAIL PROTECTED]> writes:
> Tom Lane <[EMAIL PROTECTED]> 10/06/05 9:28 PM >>>
>> There's a known issue that the planner tends to overestimate the cost of
>> inner-index-scan nestloops, because it doesn't allow for the strong
>> caching effects associated with repeated scans of the …

Thanks, Tom.
I spent a few hours trying different searches in the archives, and
found three very interesting threads on the topic. All were from
2003. Should I keep digging for more recent threads, or would
these probably represent the current state of the issue?
These left me somewhat concerned …

I am absolutely sure that I never changed any of the enable_xxx
settings other than enable_mergejoin during these tests. The only
other setting I played with was random_page_cost, but the nested
loop query always chose the plain index scan in my tests.
There was one odd thing. I started testing …

"Kevin Grittner" <[EMAIL PROTECTED]> writes:
> In both the 8.1beta2 and using a build from this morning's
> dev snapshot, this query ran slower than expected:
There's a known issue that the planner tends to overestimate the cost of
inner-index-scan nestloops, because it doesn't allow for the strong
caching effects associated with repeated scans of the …

In both the 8.1beta2 and using a build from this morning's
dev snapshot, this query ran slower than expected:
select count(*)
from "DbTranRepository" AS "dtr"
inner join "DbTranLogRecord" AS "dtlr"
on ("dtlr"."countyNo" = "dtr"."countyNo"
and "dtlr"."tranImageSeqNo" = "dtr"."tranImageSeqNo" …

13 matches