John Siracusa <[EMAIL PROTECTED]> writes:
> I want to do something that will convince Postgres that using the date
> index is, by far, the best plan when running my queries, even when the
> date column correlation stat drops well below 1.0.
Have you tried experimenting with random_page_cost?
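[Editor's note] The interplay Tom is pointing at can be sketched numerically. The planner interpolates index-scan I/O cost between a best case (perfectly correlated column, near-sequential fetches) and a worst case (uncorrelated, every fetch a random page), weighting by the square of the correlation statistic. The function below is a simplified toy model of that behavior, not the real costsize.c formula; the names and numbers are illustrative only.

```python
def index_io_cost(pages, correlation, random_page_cost=4.0, seq_page_cost=1.0):
    """Toy model of index-scan I/O costing.

    Interpolates between the best case (correlation = +/-1, pages read
    roughly in order) and the worst case (correlation = 0, every page
    fetch priced at random_page_cost), weighted by correlation squared.
    """
    min_io = pages * seq_page_cost      # best case: fully clustered
    max_io = pages * random_page_cost   # worst case: fully scattered
    csq = correlation * correlation
    return max_io + csq * (min_io - max_io)
```

Under this model, lowering random_page_cost shrinks the worst-case term, which is exactly why it makes the index plan look cheaper when the correlation stat drops well below 1.0.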
Kevin Brown <[EMAIL PROTECTED]> writes:
> One problem I've been running into is the merge join spilling to disk
> because sort_mem isn't big enough. The problem isn't that this is
> happening, it's that I think the planner is underestimating the impact
> that doing this will have on the time the merge join will take.
Ed L. wrote:
> How do you know the merge join is spilling to disk? How are you identifying
> that? Just assuming from vmstat? iostat?
The existence of files in $PG_DATA/base//pgsql_tmp while the
query is running, combined with the EXPLAIN output (which shows what
sorts and joins are being performed).
On Tuesday February 24 2004 1:14, Kevin Brown wrote:
>
> One problem I've been running into is the merge join spilling to disk
> because sort_mem isn't big enough. The problem isn't that this is
> happening, it's that I think the planner is underestimating the impact
> that doing this will have on the time the merge join will take.
Tom Lane wrote:
> Josh Berkus <[EMAIL PROTECTED]> writes:
> > Kevin,
> >> 1. set enable_seqscan = on
> >> 2. set random_page_cost =
> >> 3. EXPLAIN ANALYZE query
> >> 4. record the ratio of estimated to actual scan times.
> >> 5. set enable_seqscan = off
> >> 6. set random_page_cost =
> >>
Josh Berkus <[EMAIL PROTECTED]> writes:
> Kevin,
>> 1. set enable_seqscan = on
>> 2. set random_page_cost =
>> 3. EXPLAIN ANALYZE query
>> 4. record the ratio of estimated to actual scan times.
>> 5. set enable_seqscan = off
>> 6. set random_page_cost =
>> 7. EXPLAIN ANALYZE query
>> 8. record the actual index scan time(s)
Kevin,
> 1. set enable_seqscan = on
> 2. set random_page_cost =
> 3. EXPLAIN ANALYZE query
> 4. record the ratio of estimated to actual scan times.
> 5. set enable_seqscan = off
> 6. set random_page_cost =
> 7. EXPLAIN ANALYZE query
> 8. record the actual index scan time(s)
> 9. tweak random_page_cost
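[Editor's note] Steps 4 and 8 of Josh's recipe amount to parsing two numbers out of each EXPLAIN ANALYZE plan line: the planner's estimated total cost and the measured total time. A sketch of that bookkeeping; the regex, `est_actual_ratio`, and the sample line are illustrative (the sample extends the truncated line quoted later in the thread with made-up actual figures).

```python
import re

LINE_RE = re.compile(
    r"cost=(?P<c0>[\d.]+)\.\.(?P<c1>[\d.]+).*?"
    r"actual time=(?P<t0>[\d.]+)\.\.(?P<t1>[\d.]+)")

def est_actual_ratio(plan_line):
    """Ratio of estimated total cost to measured total time (ms)."""
    m = LINE_RE.search(plan_line)
    if m is None:
        raise ValueError("no cost/actual time found in plan line")
    return float(m.group("c1")) / float(m.group("t1"))

# Hypothetical EXPLAIN ANALYZE line (actual-time figures invented):
line = ("Index Scan using mytable_date_idx on mytable "
        "(cost=0.00..3071.70 rows=52155 width=23) "
        "(actual time=2.610..1688.130 rows=50000 loops=1)")
ratio = est_actual_ratio(line)
```

Cost units and milliseconds aren't directly comparable; the point of the recipe is to tweak random_page_cost until the seqscan ratio and the index-scan ratio roughly agree, so the planner's relative costs track reality.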
Josh Berkus wrote:
> John,
>
> > I think the key is to get the planner to correctly ballpark the number of
> > rows in the date range. If it does, I can't imagine it ever deciding to
> > read 1,000,000 rows instead of 1,000 with any sane "cost" setting. I'm
> > assuming the defaults are sane :)
John Siracusa <[EMAIL PROTECTED]> writes:
> Does the planner get estimates from both plans before deciding whether or
> not to use the one that references the date index?
The rowcount estimate is made prior to the plan cost estimate, much less
the plan selection. So you'd see the same number either way.
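[Editor's note] Tom's point, as arithmetic: the rowcount estimate is derived from the table statistics and the predicate's selectivity before any access path is costed, so every candidate plan is priced against the same row estimate. A toy sketch; `estimated_rows` and the numbers are made up, but the ordering (rows first, plan costs later) is the point.

```python
def estimated_rows(reltuples, selectivity):
    """Rowcount estimate from table size and clause selectivity,
    computed once, before any plan is costed."""
    return max(1, int(reltuples * selectivity))

# Both candidate plans are then costed against the same estimate:
rows_for_seqscan = rows_for_indexscan = estimated_rows(1_000_000, 0.001)
```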
John,
> I think the key is to get the planner to correctly ballpark the number of
> rows in the date range. If it does, I can't imagine it ever deciding to
> read 1,000,000 rows instead of 1,000 with any sane "cost" setting. I'm
> assuming the defaults are sane :)
The default for random_page_cost is 4.
On 2/22/04 6:40 PM, Tom Lane wrote:
> John Siracusa <[EMAIL PROTECTED]> writes:
>> I think the key is to get the planner to correctly ballpark the number of
>> rows in the date range.
>
> I thought it was. What you showed was
>
> -> Index Scan using mytable_date_idx on mytable (cost=0.00..3071.70 rows=52155 width=23)
John Siracusa <[EMAIL PROTECTED]> writes:
> I think the key is to get the planner to correctly ballpark the number of
> rows in the date range.
I thought it was. What you showed was
-> Index Scan using mytable_date_idx on mytable (cost=0.00..3071.70 rows=52155
width=23) (actual time=2.610..1688
On 2/22/04 5:06 PM, Tom Lane wrote:
> John Siracusa <[EMAIL PROTECTED]> writes:
>> I want to do something that will convince Postgres that using the date
>> index is, by far, the best plan when running my queries, even when the
>> date column correlation stat drops well below 1.0.
>
> Have you tried experimenting with random_page_cost?
On 2/22/04 2:05 PM, Josh Berkus wrote:
> On Saturday 21 February 2004 16:18, John Siracusa wrote:
>> Next, thanks to my earlier thread, I clustered the table on the date
>> column and then "SET STATISTICS" on the date column to be 100. That
>> did the trick, and I stopped explicitly disabling seqscan.
On Saturday 21 February 2004 16:18, John Siracusa wrote:
John,
> Next, thanks to my earlier thread, I clustered the table on the date
> column and then "SET STATISTICS" on the date column to be 100. That
> did the trick, and I stopped explicitly disabling seqscan.
100? Are you sure you don't me
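[Editor's note] The correlation statistic that CLUSTER drives toward 1.0 is, roughly, the statistical correlation between a column's values and the rows' physical order; ANALYZE computes it from a sample (sized by the SET STATISTICS target). A from-scratch Pearson-correlation sketch of the idea; PostgreSQL's actual computation differs in detail.

```python
def correlation(values):
    """Pearson correlation between column values and their physical
    positions 0..n-1. Clustered (sorted) data gives ~1.0."""
    n = len(values)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(values) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, values))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in values) ** 0.5
    return cov / (vx * vy)   # assumes values are not all identical
```

After CLUSTER on the date column, the on-disk date values are sorted, so this statistic sits near 1.0 and the planner prices the index scan's page fetches as nearly sequential.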