On Dec 5, 2006, at 21:56 , Wei Weng wrote:

I have a table that has roughly 200,000 entries and many columns.

The query is very simple:

SELECT Field1, Field2, Field3... FieldN FROM TargetTable;

This is the very definition of a sequential scan: you're reading all 200,000 rows from that table, and its performance is constrained by how quickly PostgreSQL can read the data from (at worst) disk or (at best) the disk cache. A query like that is bound to be slow on any database system.
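If you want to confirm that, run the query under EXPLAIN ANALYZE (the column and table names here are just the placeholders from your example):

Test=> EXPLAIN ANALYZE SELECT Field1, Field2, Field3 FROM TargetTable;

The plan should consist of a single Seq Scan node, and the actual time reported on it tells you how much of the total runtime is spent simply reading the table.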

Test=> show shared_buffers ;
 shared_buffers
----------------
 60800
(1 row)

That's 60800 buffers of 8kB each, i.e. about 475MB. How large is your table? You can find out with this query:

select relpages * 8192 from pg_class where relname ilike 'TargetTable';
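Keep in mind that relpages is only updated by VACUUM and ANALYZE, so run one of those first if the table has changed a lot, and that the 8192 assumes the default block size. If you also want the planner's row estimate alongside the size, something along these lines should do:

select relpages, reltuples, relpages * 8192 as bytes
from pg_class where relname ilike 'TargetTable';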

Note that unlike the 8.x series, 7.4 apparently doesn't make much use of shared_buffers for caching data pages across queries. In other words, a large shared_buffers setting might not have much of an effect.

Also: How much memory is left available to the OS cache?

Alexander.
