On Thu, 12 Feb 2004, Bas Scheffers wrote:

> Hi Scot,
> 
> As "unrealistic" as it should be, I need <1 before Postgres takes the
> bait. Initially 0.7, to be exact, but later it also worked at a slightly
> higher setting of 1. I have given PG 96MB of memory to play with, so
> likely all my data will be in cache. So no very fast disk (6MB/sec reads),
> but loads of RAM.
> 
> Should I try tweaking any of the other parameters?

Yes.  Drop cpu_index_tuple_cost by a factor of 10 or 100 or so:

cpu_index_tuple_cost = 0.001
to
cpu_index_tuple_cost = 0.0001
or
cpu_index_tuple_cost = 0.00001
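
You can try these per session before committing anything to postgresql.conf;
a minimal sketch (substitute your own problem query for the SELECT):

```sql
-- Lower the index cost for this session only; nothing persists
-- past the current connection until you edit postgresql.conf.
SET cpu_index_tuple_cost = 0.0001;
EXPLAIN ANALYZE SELECT ...;   -- re-run the query and compare plans
```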

Also up effective_cache_size.  It's measured in 8k blocks, so for a 
machine with 1 gig of RAM, with 700 meg of that in the kernel cache, you'd 
want approximately 90000.  Note that this is not an exact measure, and 
it's OK to make it even larger if you like, to ensure the database 
thinks we have gobs of RAM.
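
For example, 700MB of kernel cache works out to 700 * 1024kB / 8kB per
block = ~89600 blocks, so in postgresql.conf:

```
# postgresql.conf -- effective_cache_size is measured in 8k blocks.
# 700MB of kernel cache / 8kB per block = ~89600, rounded up:
effective_cache_size = 90000
```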

> > performance of seq versus index.  you'll often find that a query that
> > screams when the caches are full of your data is quite slow when the cache
> > is empty.
> True, but as this single query is going to be the work horse of the web
> service I am developing, it is likely all data will always be in memory,
> even if I'd have to stick several gigs of ram in.

Note that rather than "set enable_seqscan=off" for the whole database, you 
can always set it for just this session / query.
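
A sketch of the per-session form (the SET lasts only for the current
connection, so the rest of the database keeps normal planning):

```sql
SET enable_seqscan = off;   -- affects only this session
-- run the workhorse query here...
SET enable_seqscan = on;    -- or simply disconnect
```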

When you run explain analyze <query>, are any of the row estimates way 
off versus the real number of rows?  If so, you may need to analyze more 
often, or raise the column's statistics target to get a good estimate.  
Some query plans just don't have any way of knowing, so they guess, and 
there's no way to change what they're guessing; in that case setting 
random_page_cost to <1 may be the only answer.
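
For example, assuming the badly-estimated column is foo.bar (table and
column names here are placeholders), raising its stats target and
re-analyzing looks like:

```sql
-- Raise the per-column statistics target (the default is quite low),
-- then re-gather stats so the planner sees better row estimates.
ALTER TABLE foo ALTER COLUMN bar SET STATISTICS 100;
ANALYZE foo;

-- If the estimates still can't improve, nudge the cost model instead:
SET random_page_cost = 0.8;
```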


---------------------------(end of broadcast)---------------------------
TIP 5: Have you checked our extensive FAQ?

               http://www.postgresql.org/docs/faqs/FAQ.html
