On 5/13/13 6:36 PM, Mike McCann wrote:
> stoqs_march2013_s=# explain analyze select * from stoqs_measuredparameter order by datavalue;
>                                 QUERY PLAN
> ------------------------------------------------------------------------------------------------------------------------------------------
>  Sort  (cost=422106.15..430560.68 rows=3381814 width=20) (actual time=2503.078..2937.130 rows=3381814 loops=1)
>    Sort Key: datavalue
>    Sort Method: quicksort  Memory: 362509kB
>    ->  Seq Scan on stoqs_measuredparameter  (cost=0.00..55359.14 rows=3381814 width=20) (actual time=0.016..335.745 rows=3381814 loops=1)
>  Total runtime: 3094.601 ms
> (5 rows)
>
> I tried changing random_page_cost from 4 to 1 and saw no change.
Have you tried putting an index on datavalue for this table? Once you've
done that, lowering random_page_cost will make using that index look
less expensive to the planner. Sorting chews through a good bit of CPU
time, and that's where all of your runtime is being spent, at least once
you've increased work_mem high enough for the sort to run in memory.
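A minimal sketch of that suggestion, reusing the table and column names from the plan above (the exact index name and the 1.5 setting are illustrative choices, and the resulting plan depends on your data and cache state):

```sql
-- Build a btree index on the sort column; build time depends on table size.
CREATE INDEX stoqs_measuredparameter_datavalue_idx
    ON stoqs_measuredparameter (datavalue);

-- A lower random_page_cost makes index scans look cheaper to the planner;
-- values near 1 are reasonable when the data is mostly cached in RAM.
SET random_page_cost = 1.5;

-- Re-check the plan: with the index available, the explicit Sort can be
-- replaced by an Index Scan that returns rows already ordered by datavalue.
EXPLAIN ANALYZE
SELECT * FROM stoqs_measuredparameter ORDER BY datavalue;
```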
> I'm wondering now what changes might get this query to run in less
> than one second. If all the data is in memory, will a faster CPU and
> faster memory be the things that help?
You're trying to fix a fundamental design issue with hardware. That
usually doesn't go well. Once you get a box big enough to hold the
whole database in RAM, the remaining differences between server
systems are relatively small.
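As a rough way to check whether the data can stay memory-resident, you can compare the database and table sizes against shared_buffers (and the OS page cache); pg_database_size, pg_total_relation_size, and pg_size_pretty are standard PostgreSQL functions:

```sql
-- Total on-disk size of the current database, human-readable.
SELECT pg_size_pretty(pg_database_size(current_database()));

-- Size of the table being sorted, including its indexes and TOAST data.
SELECT pg_size_pretty(pg_total_relation_size('stoqs_measuredparameter'));

-- Compare against the PostgreSQL buffer cache setting.
SHOW shared_buffers;
```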
--
Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com