On Wed, 2005-10-26 at 15:41, aurora wrote:
> I am running PostgreSQL 7.4 on FreeBSD. The main table has 2 million
> records (we would like to do at least 10 million or more). It is mainly a
> FIFO structure, with maybe 200,000 new records coming in each day that
> displace the older records.
>
> We have a GUI that lets users browse through the records page by page,
> about 25 records at a time. (Don't ask me why, but we have to have this
> GUI.) This translates to something like
>
>   select count(*) from table   <-- to give feedback about the DB size
>   select * from table order by date limit 25 offset 0
>
> The tables seem properly indexed, and vacuum and analyze are run regularly.
> Still, these very basic queries take up to a minute to run.
>
> I read in some recent messages that select count(*) needs a full table
> scan in PostgreSQL. That's disappointing, but I can accept an
> approximation if there is some way to get one. And how can I optimize
> select * from table order by date limit x offset y? One-minute
> response time is not acceptable.
Have you run your script without the select count(*) part and timed it?

What does

  explain analyze select * from table order by date limit 25 offset 0

say? Is date indexed?
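If an approximate count is acceptable, one cheap option is to read the planner's
row estimate from pg_class instead of counting. This is only a sketch: reltuples
is just the statistic maintained by VACUUM/ANALYZE, so its accuracy depends on how
recently those ran, and 'yourtable' below is a placeholder for your real table name.

  -- approximate row count from the statistics kept by VACUUM/ANALYZE
  SELECT reltuples::bigint AS approx_rows
  FROM pg_class
  WHERE relname = 'yourtable';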
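If date is not indexed, the ORDER BY date LIMIT 25 has to sort the whole table; a
btree index on date lets the planner read rows in order and stop after 25. Note that
large OFFSET values will still slow down as the user pages forward, because all the
rows before the offset are read and discarded. One alternative is to remember the
last date shown and filter on it ("keyset" paging). A rough sketch, assuming the
table is called yourtable and the date values are distinct enough to page on:

  -- let ORDER BY date LIMIT n walk the index instead of sorting 2M rows
  CREATE INDEX yourtable_date_idx ON yourtable (date);

  -- first page
  SELECT * FROM yourtable ORDER BY date LIMIT 25;

  -- next page: filter on the last date shown instead of using a large OFFSET
  SELECT * FROM yourtable
  WHERE date > '2005-10-25 12:34:56'   -- last date from the previous page
  ORDER BY date
  LIMIT 25;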