Richard Huxton wrote:

> On Wednesday 14 January 2004 11:11, Sezai YILMAZ wrote:
>
>> Hi,
>>
>> I use PostgreSQL 7.4 to store a huge amount of data, for example 7
>> million rows. But when I run the query "select count(*) from table;",
>> it only returns after about 120 seconds. Is that normal for such a
>> huge table? Are there any methods to speed up the query? The table
>> has an integer primary key and some other indexes on other columns.
>
> PG uses MVCC to manage concurrency. A downside of this is that to get
> the exact number of rows in a table it has to visit them all and check
> each row's visibility; it cannot answer count(*) from an index alone.
>
> There's plenty on this in the archives, and probably the FAQ too.
>
> What are you using the count() for?
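
Because an exact count() has to visit every row, a common workaround when
a rough figure is good enough is to read the planner's row estimate from
the pg_class catalog instead of scanning the table. This is only a sketch
of that idea (the table name big_table is a placeholder), and the number
is an estimate refreshed by VACUUM or ANALYZE, not an exact count:

    -- Approximate row count from the planner's statistics.
    -- reltuples is only as fresh as the last VACUUM or ANALYZE.
    SELECT reltuples::bigint AS approx_rows
      FROM pg_class
     WHERE relname = 'big_table';   -- replace with your table name

On a 7-million-row table this returns immediately, since it reads a single
catalog row rather than the table itself.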


I use count() for some statistics, just to show how many records have been collected so far.

-sezai
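
If that running statistic has to be exact, one standard approach is a
trigger-maintained counter table, so the figure is a single-row lookup
instead of a full scan. The sketch below is illustrative rather than
anything proposed in this thread: the names big_table, row_counts and
bump_row_count are made up, and PL/pgSQL must be installed in the
database. (7.4 has no dollar quoting, so quotes inside the function body
are doubled.)

    -- Side table holding the current row count (one row per counted table).
    CREATE TABLE row_counts (table_name text PRIMARY KEY, n bigint NOT NULL);
    INSERT INTO row_counts
        SELECT 'big_table', count(*) FROM big_table;

    -- Trigger function: bump the counter on every insert or delete.
    CREATE FUNCTION bump_row_count() RETURNS trigger AS '
    BEGIN
        IF TG_OP = ''INSERT'' THEN
            UPDATE row_counts SET n = n + 1 WHERE table_name = TG_RELNAME;
        ELSIF TG_OP = ''DELETE'' THEN
            UPDATE row_counts SET n = n - 1 WHERE table_name = TG_RELNAME;
        END IF;
        RETURN NULL;   -- return value is ignored for AFTER triggers
    END;
    ' LANGUAGE plpgsql;

    CREATE TRIGGER big_table_count
        AFTER INSERT OR DELETE ON big_table
        FOR EACH ROW EXECUTE PROCEDURE bump_row_count();

    -- The statistic then becomes a single-row lookup:
    SELECT n FROM row_counts WHERE table_name = 'big_table';

The trade-off is that every insert and delete now updates the same counter
row, so heavily concurrent writers will serialize on it; whether that is
acceptable depends on the insert rate.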
