Bruce Momjian writes:
> On Mon, Jan 14, 2013 at 12:56:37PM -0500, Tom Lane wrote:
>> Remember also that "enable_seqscan=off" merely adds 1e10 to the
>> estimated cost of seqscans. For sufficiently large tables this is not
>> exactly a hard disable, just a thumb on the scales. But I don't know
>>
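For context, here is a minimal sketch of the mechanism being described, assuming only what the mail itself says: turning enable_seqscan off does not remove the path, it just adds the 1e10 disable cost to the estimate. The function name and the sample numbers are invented for illustration; this is not the actual costsize.c code.

#include <stdbool.h>
#include <stdio.h>

static const double disable_cost = 1.0e10;  /* the 1e10 figure mentioned above */

/* Sketch: enable_seqscan=off only inflates the estimate; it does not veto the path. */
static double
seqscan_cost_sketch(double run_cost, bool enable_seqscan)
{
    double cost = run_cost;

    if (!enable_seqscan)
        cost += disable_cost;   /* a thumb on the scales, not a hard disable */
    return cost;
}

int
main(void)
{
    /* Invented numbers: with a big enough table, the penalized seqscan can still win. */
    printf("seqscan, enable_seqscan=off: %.3e\n", seqscan_cost_sketch(5.0e9, false));
    printf("hypothetical index path:     %.3e\n", 2.0e10);
    return 0;
}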
On Mon, Jan 14, 2013 at 12:56:37PM -0500, Tom Lane wrote:
> > The reported behavior was that the planner would prefer to
> > sequential-scan the table rather than use the index, even if
> > enable_seqscan=off. I'm not sure what the query looked like, but it
> > could have been something best imple
Robert Haas writes:
> On Mon, Jan 14, 2013 at 12:23 PM, Tom Lane wrote:
>> The old code definitely had an unreasonably large charge for indexes
>> exceeding 1e8 or so tuples. This wouldn't matter that much for simple
>> single-table lookup queries, but I could easily see it putting the
>> kibosh
On Mon, Jan 14, 2013 at 12:23 PM, Tom Lane wrote:
> Robert Haas writes:
>> I'm not sure I have anything intelligent to add to this conversation -
>> does that make me the wisest of all the Greeks? - but I do think it
>> worth mentioning that I have heard occasional reports within EDB of
>> the qu
Robert Haas writes:
> I'm not sure I have anything intelligent to add to this conversation -
> does that make me the wisest of all the Greeks? - but I do think it
> worth mentioning that I have heard occasional reports within EDB of
> the query planner refusing to use extremely large indexes no ma
On Thu, Jan 10, 2013 at 8:07 PM, Tom Lane wrote:
> Comments?
I'm not sure I have anything intelligent to add to this conversation -
does that make me the wisest of all the Greeks? - but I do think it
worth mentioning that I have heard occasional reports within EDB of
the query planner refusing to
I wrote:
> I'm fairly happy with the general shape of this formula: it has a
> principled explanation and the resulting numbers appear to be sane.
> The specific cost multipliers obviously are open to improvement based
> on future evidence. (In particular, I intend to code it in a way that
> doesn
On Mon, Jan 7, 2013 at 3:27 PM, Tom Lane wrote:
>
> One issue that needs some thought is that the argument for this formula
> is based entirely on thinking about b-trees. I think it's probably
> reasonable to apply it to gist, gin, and sp-gist as well, assuming we
> can get some estimate of tree
Simon Riggs writes:
> On 7 January 2013 17:35, Tom Lane wrote:
>> That gives a formula of
>> cpu_operator_cost * log2(N) + cpu_operator_cost * 50 * (H+2)
> Again, this depends on N and H, so that's good.
> I think my retinas detached while reading your explanation, but I'm a
> long way from
On 7 January 2013 17:35, Tom Lane wrote:
> That gives a formula of
>
> cpu_operator_cost * log2(N) + cpu_operator_cost * 50 * (H+2)
>
> This would lead to the behavior depicted in the attached plot, wherein
> I've modified the comparison lines (historical, 9.2, and HEAD behaviors)
> to in
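To get a feel for the numbers this formula produces, the following small standalone C program evaluates cpu_operator_cost * log2(N) + cpu_operator_cost * 50 * (H+2) with the default cpu_operator_cost of 0.0025. The tree-height guess here (roughly one level per factor of 256 tuples) is a crude stand-in used only for this illustration, not the estimate the patch itself uses.

#include <math.h>
#include <stdio.h>

int
main(void)
{
    const double cpu_operator_cost = 0.0025;    /* PostgreSQL default */
    double  n;

    for (n = 1e3; n <= 1e9; n *= 100)
    {
        int     h = (int) ceil(log(n) / log(256.0));    /* assumed height for the example */
        double  cost = cpu_operator_cost * log2(n) +
                       cpu_operator_cost * 50.0 * (h + 2);

        printf("N = %.0e  H = %d  added cost = %.4f\n", n, h, cost);
    }
    return 0;
}

With these assumptions the added cost stays well under 1.0 even at a billion index tuples, which is consistent with the claim upthread that the resulting numbers appear sane.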
I wrote:
> Now, if we consider only CPU costs of index descent, we expect
> about log2(P) comparisons to be needed on each of the H upper pages
> to be descended through, that is we have total descent cost
> cpu_index_tuple_cost * H * log2(P)
> If we ignore the ceil() step as being a second-o
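A small worked example of that descent cost, using the default cpu_index_tuple_cost of 0.005 and an assumed 256 index tuples per upper page; both H and P here are illustrative values, not measurements from any real index.

#include <math.h>
#include <stdio.h>

int
main(void)
{
    const double cpu_index_tuple_cost = 0.005;  /* PostgreSQL default */
    const double P = 256.0;     /* assumed index tuples per upper page */
    int     H;

    /* log2(P) binary-search comparisons on each of the H upper pages descended. */
    for (H = 1; H <= 5; H++)
        printf("H = %d  descent cost = %.4f\n",
               H, cpu_index_tuple_cost * H * log2(P));
    return 0;
}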
On 6 January 2013 23:03, Tom Lane wrote:
> Simon Riggs writes:
>> On 6 January 2013 18:58, Tom Lane wrote:
>>> IIRC, one of my very first attempts to deal with this was to charge
>>> random_page_cost per level of index descended. This was such a horrid
>>> overestimate that it never went anywhe
I wrote:
> [ slightly bogus graph ]
Ooops, it seems the ^ operator doesn't do what I thought in gnuplot.
Here's a corrected version.
regards, tom lane
set terminal png small color
set output 'new_fudge.png'
set xlabel "Index tuples"
set ylabel "Added cost"
set logscale
Simon Riggs writes:
> On 6 January 2013 18:58, Tom Lane wrote:
>> IIRC, one of my very first attempts to deal with this was to charge
>> random_page_cost per level of index descended. This was such a horrid
>> overestimate that it never went anywhere. I think that reflects that in
>> practical
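A back-of-the-envelope comparison of the two charging schemes being discussed, using only the stock defaults (random_page_cost = 4.0, cpu_operator_cost = 0.0025) and an assumed four-level index with about 256 tuples per upper page. The per-level I/O charge comes out a couple of hundred times larger than a CPU-comparison charge, which illustrates why it overestimates so badly when the upper pages are essentially always cached.

#include <math.h>
#include <stdio.h>

int
main(void)
{
    const double random_page_cost = 4.0;    /* PostgreSQL default */
    const double cpu_operator_cost = 0.0025;/* PostgreSQL default */
    const int    H = 4;                     /* example: four descended levels */
    const double P = 256.0;                 /* assumed tuples per upper page */

    double io_charge = random_page_cost * H;              /* 16.0  */
    double cpu_charge = cpu_operator_cost * H * log2(P);  /* 0.08  */

    printf("random_page_cost per level: %.2f\n", io_charge);
    printf("CPU comparison charge:      %.2f\n", cpu_charge);
    return 0;
}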
On 6 January 2013 18:58, Tom Lane wrote:
> Simon Riggs writes:
>> On 5 January 2013 22:18, Tom Lane wrote:
>>> No. The argument is that if we don't have some such correction, the
>>> planner is liable to believe that different-sized indexes have *exactly
>>> the same cost*, if a given query wou
Simon Riggs writes:
> On 5 January 2013 22:18, Tom Lane wrote:
>> No. The argument is that if we don't have some such correction, the
>> planner is liable to believe that different-sized indexes have *exactly
>> the same cost*, if a given query would fetch the same number of index
>> entries.
>
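A toy model of the tie being described, not planner code: if a small partial index and a much larger full index would return the same number of entries, a cost model with no size-dependent term scores them identically, so even a tiny charge that grows with total index size is enough to make the planner prefer the smaller index. The 1e-5 factor below is arbitrary and purely illustrative.

#include <stdio.h>

/* entries_fetched drives the "real" cost; the tiny size term only breaks ties. */
static double
index_cost_sketch(double entries_fetched, double total_index_pages)
{
    const double cpu_index_tuple_cost = 0.005;  /* PostgreSQL default */
    const double tie_break = 1.0e-5;            /* arbitrary illustrative factor */

    return entries_fetched * cpu_index_tuple_cost +
           total_index_pages * tie_break;
}

int
main(void)
{
    /* Same 1000 entries fetched from a 100-page and a 100000-page index. */
    printf("small index: %.4f\n", index_cost_sketch(1000.0, 100.0));
    printf("large index: %.4f\n", index_cost_sketch(1000.0, 100000.0));
    return 0;
}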
On 6 January 2013 16:29, Jeff Janes wrote:
> Worse, this over-punishment of bloat is more likely to penalize partial
> indexes. Since they are vacuumed on the table's schedule, not their own
> schedule, they likely get vacuumed less often relative to the amount of
> turn-over they experience an
On 5 January 2013 22:18, Tom Lane wrote:
>> But I am wondering if it should be present at all in 9.3. When it was
>> introduced, the argument seemed to be that smaller indexes might be easier
>> to keep in cache.
>
> No. The argument is that if we don't have some such correction, the
> planner
Jeff Janes writes:
> On Saturday, January 5, 2013, Tom Lane wrote:
>> Jeff Janes writes:
>>> One thing which depends on the index size which, as far as I can tell, is
>>> not currently being counted is the cost of comparing the tuples all the way
>>> down the index. This would be proportional t
On Saturday, January 5, 2013, Tom Lane wrote:
> Jeff Janes writes:
> > [moved to hackers]
> > On Wednesday, December 5, 2012, Tom Lane wrote:
> >> Hm. To tell you the truth, in October I'd completely forgotten about
> >> the January patch, and was thinking that the 1/1 cost had a lot
> >> o
Jeff Janes writes:
> [moved to hackers]
> On Wednesday, December 5, 2012, Tom Lane wrote:
>> Hm. To tell you the truth, in October I'd completely forgotten about
>> the January patch, and was thinking that the 1/1 cost had a lot
>> of history behind it. But if we never shipped it before 9.2
On Tue, Dec 18, 2012 at 5:05 PM, Jeff Janes wrote:
Sorry for the malformed and duplicate post. I was not trying to be
emphatic; I was testing out gmail offline. Clearly the test didn't go
too well.
Jeff
[moved to hackers]
On Wednesday, December 5, 2012, Tom Lane wrote:
> Jeff Janes writes:
> > I now see where the cost is coming from. In commit 21a39de5809 (first
> > appearing in 9.2) the "fudge factor" cost estimate for large indexes
> > was increased by about 10 fold, which really hits this i