[PERFORM] Obtaining the exact size of the database.

2010-06-20 Thread venu madhav
Hi All, I am using Postgres 8.1.9 for my application. My application also has a cleanup module which deletes a specified percentage of the total database size at regular intervals. Now the problem is that I use pg_database_size to obtain the size of the database. After deleting the records, we run
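A minimal sketch of the size check described above (pg_database_size and pg_size_pretty both ship with 8.1; the table name is a placeholder). Note that a plain DELETE leaves dead tuples behind, so the reported size does not drop until the space is actually rewritten:

    -- Report the on-disk size of the current database
    SELECT pg_size_pretty(pg_database_size(current_database()));

    -- Reclaim the space freed by DELETE (takes an exclusive lock):
    VACUUM FULL event_log;   -- 'event_log' is a hypothetical table name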

Re: [PERFORM] Autovacuum in postgres.

2010-06-02 Thread venu madhav
Thanks for the reply. I am using Postgres 8.01, and since it runs on a client box, I can't upgrade it. I've set the autovacuum naptime to 3600 seconds. On Thu, May 27, 2010 at 8:03 PM, Bruce Momjian wrote: > venu madhav wrote: > > Hi All, > >In my
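For reference, a sketch of the relevant postgresql.conf settings, assuming the integrated autovacuum of 8.1 (on 8.0 the equivalent was the contrib pg_autovacuum daemon, configured via command-line switches instead):

    # postgresql.conf (8.1) -- illustrative values only
    stats_start_collector = on   # both stats settings must be on
    stats_row_level = on         # for autovacuum to work in 8.1
    autovacuum = on
    autovacuum_naptime = 3600    # seconds between autovacuum cycles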

Re: [PERFORM] Autovacuum in postgres.

2010-05-27 Thread venu madhav
One more question: "Is it expected?" On Fri, May 21, 2010 at 3:08 PM, venu madhav wrote: > Hi All, >In my application we are using Postgres, which runs on an embedded > box. I have configured autovacuum to run once every hour. It has 5 > different databases

[PERFORM] Autovacuum in postgres.

2010-05-27 Thread venu madhav
Hi All, In my application we are using Postgres, which runs on an embedded box. I have configured autovacuum to run once every hour. It has 5 different databases in it. When I looked at the log messages, I found that it runs autovacuum on only one database each hour. As a result, on my da
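This matches 8.1's documented behaviour: the autovacuum daemon wakes once per autovacuum_naptime and processes a single database per cycle, so with five databases each one is visited only every fifth cycle. A sketch of the arithmetic:

    # One database is processed per cycle, so with 5 databases a naptime
    # of 3600 / 5 = 720 seconds visits each database roughly hourly.
    autovacuum_naptime = 720   # illustrative value only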

Re: [PERFORM] Performance issues when the number of records are around 10 Million

2010-05-14 Thread venu madhav
On Wed, May 12, 2010 at 7:26 PM, Kevin Grittner wrote: > venu madhav wrote: > > >> > If the records are more in the interval, > >> > >> How do you know that before you run your query? > >> > > I calculate the count first. > > This a
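A hypothetical reconstruction of the "count first" step being discussed, using the column names visible in the quoted queries (the table name and alias are assumptions); note that count(*) has to scan every row in the interval, which is itself expensive at this scale:

    SELECT count(*)
      FROM events e
     WHERE e.timestamp >= 1270449180
       AND e.timestamp <  1273473180;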

Re: [PERFORM] Performance issues when the number of records are around 10 Million

2010-05-14 Thread venu madhav
On Wed, May 12, 2010 at 5:25 PM, Kevin Grittner wrote: > venu madhav wrote: > > >>> AND e.timestamp >= '1270449180' > >>> AND e.timestamp < '1273473180' > >>> ORDER BY > >>> e.cid DESC, > >>> e.cid D
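Duplicate ORDER BY column aside, a query of this shape can skip sorting the whole interval if an index matches the sort. A sketch with assumed names; 8.1 has no DESC index keys, but the planner can scan a plain btree backward:

    -- Lets "ORDER BY e.cid DESC LIMIT 20" run as a backward index scan
    CREATE INDEX events_cid_idx ON events (cid);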

Re: [PERFORM] Performance issues when the number of records are around 10 Million

2010-05-14 Thread venu madhav
On Wed, May 12, 2010 at 3:20 AM, Kevin Grittner wrote: > venu madhav wrote: > > > When I try to get the last twenty records from the database, it > > takes around 10-15 mins to complete the operation. > > Making this a little easier to read (for me, at least) I ge
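The usual first step with a query this slow is to capture a plan with timings, which is what the reformatted output in this thread came from. A sketch of the kind of statement involved (table and column names are assumptions):

    EXPLAIN ANALYZE
    SELECT e.*, s.name
      FROM events e
      JOIN signature s ON s.sig_id = e.sig_id
     ORDER BY e.cid DESC
     LIMIT 20;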

Re: [PERFORM] Performance issues when the number of records are around 10 Million

2010-05-14 Thread venu madhav
097 rows=36 loops=1) -> Seq Scan on signature s (cost=0.00..2.35 rows=35 width=191) (actual time=0.005..0.045 rows=36 loops=1) Total runtime: 1463829.145 ms (10 rows) Thank you, Venu Madhav. > > Thanks, > > Shrirang Chitnis > Sr. Manager, Applications D
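For scale, the total runtime in the fragment above is 1463829.145 ms / 60000 ≈ 24.4 minutes for a single query, while the seq scan on the 36-row signature table accounts for well under a millisecond of it, so the time is being spent elsewhere in the plan.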

Re: [PERFORM] Performance issues when the number of records are around 10 Million

2010-05-14 Thread venu madhav
stats_row_level = on #stats_reset_on_server_start = off Please let me know if you are referring to something else. > Which version of Postgres are you running? Which OS? > [Venu] Postgres version 8.1, and CentOS 5.1 is the operating system. Thank you, Venu > > > > >
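A quick way to confirm such settings against the running server, rather than reading the file (both commands are standard):

    SHOW stats_row_level;   -- reports the active value of the setting
    SELECT version();       -- reports the exact server version and build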

[PERFORM] Performance issues when the number of records are around 10 Million

2010-05-11 Thread venu madhav
Hi all, In my database application, I have a table whose record count can reach 10M, and insertions can happen at a high rate, around 100 per second at peak times. I configured Postgres to run autovacuum on an hourly basis. I have a frontend GUI application in CGI which displays the data fro
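Given the described load (10M rows, roughly 100 inserts per second at peak), a GUI that pages through results is usually better served by keyset pagination than by a large OFFSET, since OFFSET re-reads every row it skips. A sketch with assumed table and column names:

    -- Remember the last cid shown on the previous page, then fetch:
    SELECT *
      FROM events
     WHERE cid < 9500000     -- hypothetical last-seen cid value
     ORDER BY cid DESC
     LIMIT 20;               -- next page via a backward index scan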