Wei Weng wrote:
I have a table that has roughly 200,000 entries and many columns.
The query is very simple:
SELECT Field1, Field2, Field3... FieldN FROM TargetTable;
TargetTable has an index on Field1.
The thing is, on this machine with 1GB of RAM the above query still
takes about 20 seconds to finish, and I need it to run faster, ideally
around 5 seconds.
---------------------------------------------------------------------------
 Seq Scan on TargetTable  (cost=0.00..28471.72 rows=210872 width=988) (actual time=0.037..6084.385 rows=211286 loops=1)
 Total runtime: 6520.499 ms
That's 988 bytes/row * 211286 rows =~ 200MB of data. Since the explain
analyse completes in 6.5 seconds, the remaining 13.5 of your 20 seconds
are spent building the result-set, transferring it and processing it at
the client end.
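For reference, the arithmetic is easy to check in psql (the width and
row count are taken straight from the plan above):

  -- 988 bytes/row (plan width) * 211286 actual rows, expressed in MB
  SELECT 988 * 211286 / (1024.0 * 1024.0) AS approx_result_mb;
  -- => roughly 199 MB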
Holding all of that at once will take up at least 400MB of RAM (two
~200MB copies of the data between server and client, and realistically
more). I'd suggest you'd be better off with a cursor, unless you really
need the whole thing in one go.
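A minimal sketch of the cursor approach (the cursor name and batch size
are my own; the column list is abbreviated as in your query):

  BEGIN;
  DECLARE tt_cur CURSOR FOR
      SELECT Field1, Field2, Field3 /* ... FieldN */ FROM TargetTable;
  -- repeat this FETCH until it returns no rows
  FETCH 1000 FROM tt_cur;
  CLOSE tt_cur;
  COMMIT;

That way the client only ever holds one batch in memory at a time,
rather than the whole ~200MB result-set.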
If you do need all the data at once, you'll want a faster CPU and
faster RAM, I guess.
--
Richard Huxton
Archonet Ltd