Hi Postgres users/developers,
Can anyone explain why PostgreSQL (version 7.4.5 on Linux) does a full table scan to compute a count(*) on a base table, even right after a VACUUM ANALYZE when no subsequent updates could have outdated the statistics? Strangely, the EXPLAIN command does report the correct number of tuples instantaneously from the catalog, as one would expect; still, the planner thinks it needs a full table scan to do the count.
See example below:
------8<---------------------------------------------------------------------------------------------
TestDB=# \d test_tbl
      Table "public.test_tbl"
 Column |  Type   | Modifiers
--------+---------+-----------
 pre    | integer | not null
 name   | text    | not null
Indexes:
    "test_tbl_pkey" primary key, btree (pre)
    "test_tbl_pre_index" unique, btree (pre)
    "test_tbl_name_index" btree (name)
TestDB=# explain select count(*) from test_tbl;
                               QUERY PLAN
----------------------------------------------------------------------------
 Aggregate  (cost=34293.60..34293.60 rows=1 width=0)
   ->  Seq Scan on test_tbl  (cost=0.00..34293.60 rows=166558 width=0)
(2 rows)
Time: 25.188 ms
TestDB=# select count(*) from test_tbl;
 count
--------
 166558
(1 row)

Time: 1024.803 ms
TestDB=#
------8<---------------------------------------------------------------------------------------------
The consequence of this seemingly odd count implementation is that count(*) on large tables is very slow, as the timings above show.
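For what it's worth, the row estimate that EXPLAIN shows appears to come from the planner statistics in pg_class, so if an approximate count is good enough it can be read directly from the catalog. A minimal sketch, assuming the statistics are reasonably fresh after VACUUM ANALYZE:

------8<---------------------------------------------------------------------------------------------
-- Approximate row count from the planner statistics (maintained by VACUUM/ANALYZE).
-- Note: this is only an estimate, not the exact value a SELECT count(*) would return.
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'test_tbl';
------8<---------------------------------------------------------------------------------------------

That only covers the "fast approximate count" side, of course; it still leaves open why an exact count(*) has to scan the whole table.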
Regards,
Henk Ernst Blok
--
address: DB group, Computer Science, EEMCS Dept., University of Twente,
         PO Box 217, 7500 AE, ENSCHEDE, THE NETHERLANDS
phone:   ++31 (0)53 489 3754 (if no response: 3690)
email:   [EMAIL PROTECTED]
WWW:     http://www.cs.utwente.nl/~blokh