We are currently trying to use PostgreSQL in a high-throughput, OLTP-style application. Without going into all the details of what we are doing (there is quite a bit), I just want to explain the problem and, in general terms, what we are doing. I am sure this is something you will have come across before.
We install PostgreSQL, install our schema and stored procedures, and populate the database with a lot of test data. We run vacuumdb against the database to generate statistics, remove all the test data, and stop the server. We then tar up all the files. Fine so far; this is our set of files for deployment (a rough sketch of these steps is at the end of this mail).

We deploy the database and the rest of our applications and start them all up. Our application then sends lots of transactions to the database over three separate connections. It takes between 200 and 2000 ms for PostgreSQL to service each of our transactions. It does not seem to matter how long we leave it running or how many times we vacuum; this is how long the transactions take.

We then stop the database and our application, start the database by itself, remove all the data, and start our application again. The exact same transactions that took 200 to 2000 ms now take between 20 and 40 ms.

This happens on nearly every deployment, but I cannot find or understand the root of the problem. Any ideas?

Thanks in advance.

Clive Deal
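P.S. For reference, the preparation of the deployment tarball looks roughly like the sketch below. The database name, file names, and paths are placeholders rather than our actual script:

    # Sketch of how we build the deployment image of the data directory
    initdb -D /var/lib/pgsql/data
    pg_ctl -D /var/lib/pgsql/data start

    createdb appdb
    psql appdb -f schema.sql            # schema and stored procedures
    psql appdb -f load_test_data.sql    # populate with test data
    vacuumdb --analyze appdb            # vacuum and generate planner statistics
    psql appdb -f remove_test_data.sql  # remove the test data again

    pg_ctl -D /var/lib/pgsql/data stop
    tar czf pgdata.tar.gz -C /var/lib/pgsql data   # the files we ship for deployment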