Re: [PERFORM] Planning hot/live backups?

2008-03-24 Thread Matthew T. O'Connor
Miguel Arroz wrote: Going a bit off topic, but one quick question: to avoid storing GB of WAL files that will probably take a lot of time to reload, how can the backup be "reset"? I suspect that it's something like stopping the WAL archiving, doing a new base backup, and restarting archiving, but
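
The "reset" being asked about can be sketched in shell. This is an illustrative sketch only (directory and segment names are invented): once a fresh base backup completes, archived WAL segments whose names sort before the backup's starting segment are no longer needed for recovery, and WAL file names are designed so a plain lexicographic compare is enough. In practice START_WAL would come from the .backup history file that pg_stop_backup() leaves in the archive.

```shell
# Illustrative only -- paths and segment names are made up.
ARCHIVE_DIR=$(mktemp -d)                       # stand-in for the real WAL archive
touch "$ARCHIVE_DIR"/0000000100000000000000{03,05,07}
START_WAL="000000010000000000000005"           # first segment of the new base backup

for f in "$ARCHIVE_DIR"/*; do
  seg=$(basename "$f")
  if [ "$seg" \< "$START_WAL" ]; then
    echo "obsolete: $seg"                      # swap echo for rm once verified
  else
    echo "keep: $seg"
  fi
done
```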

Re: [PERFORM] slow pg_connect()

2008-03-24 Thread Chris
* Read about configuring and using persistent database connections (http://www.php.net/manual/en/function.pg-pconnect.php) with PHP Though make sure you understand the ramifications of using persistent connections. You can quickly exhaust your connections by using this and also cause other
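
The exhaustion Chris warns about is simple arithmetic. A back-of-envelope sketch (every figure below is invented for illustration): pg_pconnect keeps one connection open per Apache worker per distinct connection string, and the worst case has to fit under the server's max_connections.

```shell
# All figures are hypothetical, for illustration only.
WORKERS=150          # e.g. Apache MaxClients
COMBOS=2             # distinct host/db/user strings used with pg_pconnect
MAX_CONNECTIONS=100  # postgresql.conf max_connections

NEEDED=$((WORKERS * COMBOS))
echo "worst-case persistent connections: $NEEDED"
if [ "$NEEDED" -gt "$MAX_CONNECTIONS" ]; then
  echo "this would exhaust max_connections ($MAX_CONNECTIONS)"
fi
```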

Re: [PERFORM] waiting for harddisk

2008-03-24 Thread Scott Marlowe
On Mon, Mar 24, 2008 at 7:05 AM, petchimuthu lingam <[EMAIL PROTECTED]> wrote: > I am using postgresql 8.1.8, > > Following configurations: > shared_buffers = 5000 > work_mem = 65536 > maintenance_work_mem = 65536 > effective_cache_size = 16000 >

Re: [PERFORM] Planning hot/live backups?

2008-03-24 Thread Miguel Arroz
Hi! Going a bit off topic, but one quick question: to avoid storing GB of WAL files that will probably take a lot of time to reload, how can the backup be "reset"? I suspect that it's something like stopping the WAL archiving, doing a new base backup, and restarting archiving, but I've nev

Re: [PERFORM] Planning hot/live backups?

2008-03-24 Thread Steve Poe
Tom, So, are you saying we need to get to at least 8.1.x before considering PITR for a production environment? Unfortunately, the vendor/supplier of our veterinary application does not support higher versions. We would be proceeding "at our own risk". Is there anything else we can do with 8.0.15 ve

Re: [PERFORM] Planning hot/live backups?

2008-03-24 Thread Matthew T. O'Connor
Steve Poe wrote: At this point, I am just moving the pg_dumpall file to another server. Pardon my question: how would you 'ship the log files'? [ You should cc the mailing list so that everyone can benefit from the conversation. ] RTM: http://www.postgresql.org/docs/8.3/interactive/conti

Re: [PERFORM] Planning hot/live backups?

2008-03-24 Thread Tom Lane
"Matthew T. O'Connor" <[EMAIL PROTECTED]> writes: > Steve Poe wrote: >> The owners of the animal hospital where I work at want to consider live/hot >> backups through out the day so we're less likely to lose a whole >> day of transaction. We use Postgresql 8.0.15. We do 3AM >> backups, using pg_du

Re: [PERFORM] Planning hot/live backups?

2008-03-24 Thread paul rivers
Matthew T. O'Connor wrote: Steve Poe wrote: The owners of the animal hospital where I work want to consider live/hot backups throughout the day so we're less likely to lose a whole day of transactions. We use Postgresql 8.0.15. We do 3AM backups, using pg_dumpall, to a file when there is v

Re: [PERFORM] Planning hot/live backups?

2008-03-24 Thread Matthew T. O'Connor
Steve Poe wrote: The owners of the animal hospital where I work want to consider live/hot backups throughout the day so we're less likely to lose a whole day of transactions. We use Postgresql 8.0.15. We do 3AM backups, using pg_dumpall, to a file when there is very little activity. You p

Re: [PERFORM] Planning hot/live backups?

2008-03-24 Thread Campbell, Lance
I back up around 10 Gig of data every half hour using pg_dump. I don't backup the entire database at once. Instead I backup at the schema namespace level. But I do all of them every half hour. It takes four minutes. That includes the time to copy the files to the backup server. I do each schem
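
A minimal dry-run sketch of that per-schema approach (database and schema names are made up; in practice the schema list would be queried from pg_namespace):

```shell
# Dry run of a per-schema dump cycle: echo the commands instead of running them.
DB=mydb                           # hypothetical database name
SCHEMAS="clinic billing labs"     # hypothetical; real list via psql -Atc on pg_namespace
STAMP=$(date +%Y%m%d%H%M)

for s in $SCHEMAS; do
  # -n restricts pg_dump to a single schema namespace
  echo pg_dump -n "$s" -f "/backups/${DB}_${s}_${STAMP}.dump" "$DB"
done
```

Dumping per schema also means a restore can target just the namespace that changed, instead of replaying the whole database.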

[PERFORM] Planning hot/live backups?

2008-03-24 Thread Steve Poe
The owners of the animal hospital where I work want to consider live/hot backups throughout the day so we're less likely to lose a whole day of transactions. We use Postgresql 8.0.15. We do 3AM backups, using pg_dumpall, to a file when there is very little activity. The hospital enjoys the ove

Re: [PERFORM] Turn correlated in subquery into join

2008-03-24 Thread Tom Lane
Dennis Bjorklund <[EMAIL PROTECTED]> writes: > Looks like the mysql people found a subquery that postgresql doesn't > handle as well as possible: > http://s.petrunia.net/blog/ > Is there some deeper issue here that I fail to see or is it simply that > it hasn't been implemented but is fairly

Re: [PERFORM] waiting for harddisk

2008-03-24 Thread PFC
I am using postgresql 8.1.8, with the following configuration: shared_buffers = 5000 work_mem = 65536 maintenance_work_mem = 65536 effective_cache_size = 16000 random_page_cost = 0.1 The CPU iowait percentage goes up to 50%, and query result
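
One setting in the quoted configuration stands out: random_page_cost = 0.1 tells the planner that random I/O is ten times cheaper than sequential I/O, the opposite of how spinning disks behave, which can push the planner toward index scans that thrash the disk and drive iowait up. A more conventional 8.1-era starting point might look like this (values are illustrative, not a recommendation for any specific machine):

```
# postgresql.conf fragment -- illustrative values only
random_page_cost = 4.0         # the default; keep it >= 1.0, never below the seq. cost
effective_cache_size = 16000   # in 8 kB pages; size roughly to the OS disk cache
shared_buffers = 5000
work_mem = 65536               # in kB on 8.1 (64 MB)
```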

[PERFORM] waiting for harddisk

2008-03-24 Thread petchimuthu lingam
I am using postgresql 8.1.8, with the following configuration: shared_buffers = 5000 work_mem = 65536 maintenance_work_mem = 65536 effective_cache_size = 16000 random_page_cost = 0.1 The CPU iowait percentage goes up to 50%, and query result c

Re: [PERFORM] slow pg_connect()

2008-03-24 Thread Thomas Pundt
Hi, [EMAIL PROTECTED] wrote: Please, how long does your connection to postgres take? $starttimer=time()+microtime(); $dbconn = pg_connect("host=localhost port=5432 dbname=xxx user=xxx password=xxx") or die("Couldn't Connect".pg_last_error()); $stoptimer = time()+microtime(); echo "Gene

Re: [PERFORM] slow pg_connect()

2008-03-24 Thread Tommy Gildseth
[EMAIL PROTECTED] wrote: Hi, I'm using postgres 8.1 at P4 2.8GHz with 2GB RAM. (web server + database on the same server) Please, how long does your connection to postgres take? It takes more than 0.05s :( This function alone limits the server to at most 20 requests per second. I tried running the

Re: [PERFORM] increasing shared buffer slow downs query performance.

2008-03-24 Thread Andreas Kretschmer
petchimuthu lingam <[EMAIL PROTECTED]> wrote: > Hi friends, > > I am using postgresql 8.1, I have shared_buffers = 5, now I execute the > query, it takes 18 seconds to do a sequential scan; when I reduced it to 5000, it > takes only 10 seconds. Why? Wild guess: the second time the data are in th

[PERFORM] increasing shared buffer slow downs query performance.

2008-03-24 Thread petchimuthu lingam
Hi friends, I am using postgresql 8.1, I have shared_buffers = 5, now I execute the query, it takes 18 seconds to do a sequential scan; when I reduced it to 5000, it takes only 10 seconds. Why? Can anyone explain the reason? (Is any other configuration needed in postgresql.conf?) -- With

Re: [PERFORM] slow pg_connect()

2008-03-24 Thread vincent
> > It takes more than 0.05s :( > > This function alone limits the server to at most 20 requests per second. First, benchmarking using only PHP is not very accurate, you're probably also measuring some work that PHP needs to do just to get started in the first place. Second, this 20r/s is not requests
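
vincent's point about what the 20/s figure really measures can be made concrete (the 0.05 s figure is the one quoted in the thread): if every page request opens a fresh connection, the connect time alone caps a single serial client's throughput.

```shell
# Upper bound on new connections per second for one serial client,
# using the ~0.05 s connect time reported in the thread.
awk 'BEGIN { t = 0.05; printf "max connects/sec (serial): %.0f\n", 1/t }'
```

With pooling or persistent connections only the first request pays this cost, which is why the replies in this thread all point in that direction.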

Re: [PERFORM] slow pg_connect()

2008-03-24 Thread Craig Ringer
Craig Ringer wrote: [EMAIL PROTECTED] wrote: It takes more then 0.05s :( Only this function reduce server speed max to 20request per second. If you need that sort of frequent database access, you might want to look into: - Doing more work in each connection and reducing the number of con

Re: [PERFORM] slow pg_connect()

2008-03-24 Thread Craig Ringer
[EMAIL PROTECTED] wrote: It takes more then 0.05s :( Only this function reduce server speed max to 20request per second. If you need that sort of frequent database access, you might want to look into: - Doing more work in each connection and reducing the number of connections required; -

[PERFORM] Turn correlated in subquery into join

2008-03-24 Thread Dennis Bjorklund
Looks like the mysql people found a subquery that postgresql doesn't handle as well as possible: http://s.petrunia.net/blog/ Is there some deeper issue here that I fail to see or is it simply that it hasn't been implemented but is fairly straightforward? In the link above they do state that
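
The kind of transformation being asked about can be shown on a toy schema (table and column names are invented for illustration): a correlated EXISTS subquery and its join rewrite, which are equivalent here as long as customers.id is unique.

```sql
-- Correlated subquery: conceptually re-evaluated once per outer row.
SELECT o.id
FROM orders o
WHERE EXISTS (SELECT 1 FROM customers c
              WHERE c.id = o.customer_id AND c.active);

-- Join rewrite: lets the planner consider hash and merge strategies.
SELECT o.id
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE c.active;
```

PostgreSQL of that era already flattened many IN/EXISTS subqueries into semi-joins internally; whether a given correlated form gets flattened depends on its shape, which is what the linked blog post is probing.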

[PERFORM] slow pg_connect()

2008-03-24 Thread firerox
Hi, I'm using postgres 8.1 at P4 2.8GHz with 2GB RAM. (web server + database on the same server) Please, how long does your connection to postgres take? $starttimer=time()+microtime(); $dbconn = pg_connect("host=localhost port=5432 dbname=xxx user=xxx password=xxx") or die("Couldn't Connect"