On 5/01/2010 3:30 PM, Yan Cheng Cheok wrote:
What is the actual problem you are trying to solve?

I am currently developing a database system for a high-speed measurement
machine.

The time taken to perform a measurement on each unit is on the order of ~30
milliseconds. We need to record the measurement result for every single unit.
Hence, the time taken to record the measurement result must be far less than a
millisecond, so that it has nearly zero impact on the machine speed (if not,
the machine must wait for the database to finish writing before performing the
measurement on the next unit).

The commit_delay and synchronous_commit parameters may help you if you want to do each insert as a separate transaction. Note that with these parameters there's some risk of very recently committed data being lost if the server OS crashes or the server hardware is powered off or power-cycled unexpectedly. PostgreSQL itself crashing shouldn't cause loss of the committed data, though.
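
For example, a rough sketch of the synchronous_commit approach in Python with psycopg2 (the DSN, table, and column names are just placeholders for illustration):

    import psycopg2

    conn = psycopg2.connect("dbname=measurements")  # adjust DSN for your setup
    cur = conn.cursor()
    # Relax durability for this session only: COMMIT returns before the WAL
    # is flushed to disk, so an OS crash can lose the last few commits.
    cur.execute("SET synchronous_commit = off")
    conn.commit()

    def record(unit_id, value):
        # Each insert is still its own transaction, but the commit no longer
        # waits on an fsync, so latency is typically well under a millisecond.
        cur.execute(
            "INSERT INTO measurement (unit_id, value) VALUES (%s, %s)",
            (unit_id, value),
        )
        conn.commit()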

Alternately, you can accumulate small batches of measurements in your app and do multi-valued INSERTs once you have a few (say 10) collected up, as sketched below. You'd have to be prepared to lose those buffered measurements if the app crashed, though.
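
A minimal sketch of that batching approach, again in Python/psycopg2 (execute_values expands the rows into one multi-valued INSERT; names are illustrative):

    import psycopg2
    from psycopg2.extras import execute_values

    conn = psycopg2.connect("dbname=measurements")
    cur = conn.cursor()

    BATCH_SIZE = 10
    pending = []  # measurements buffered in the app; lost if the app crashes

    def record(unit_id, value):
        pending.append((unit_id, value))
        if len(pending) >= BATCH_SIZE:
            flush()

    def flush():
        # One INSERT ... VALUES (...), (...), ... and one commit per batch.
        execute_values(
            cur,
            "INSERT INTO measurement (unit_id, value) VALUES %s",
            pending,
        )
        conn.commit()
        pending.clear()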

Another option is to keep using your flat file and have a "reader" process tail it, inserting new records into the database as they become available. The reader could batch inserts intelligently, keep an on-disk record of its progress, rotate the flat file periodically, and so on.
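
One way that reader might look, sketched in Python (the file layout, offset file, and CSV-style record format are all assumptions for illustration; it also assumes the writer appends complete lines):

    import os
    import time
    import psycopg2
    from psycopg2.extras import execute_values

    LOG = "measurements.log"       # flat file the machine appends to
    OFFSET_FILE = "reader.offset"  # on-disk record of how far we've read

    conn = psycopg2.connect("dbname=measurements")
    cur = conn.cursor()

    # Resume from the last recorded position, or start at the beginning.
    offset = 0
    if os.path.exists(OFFSET_FILE):
        with open(OFFSET_FILE) as f:
            offset = int(f.read())

    with open(LOG) as log:
        log.seek(offset)
        while True:
            lines = log.readlines()   # everything appended since last pass
            if not lines:
                time.sleep(0.1)       # nothing new yet; poll again shortly
                continue
            # Each line is "unit_id,value"; batch them into one INSERT.
            rows = [tuple(line.strip().split(",")) for line in lines]
            execute_values(
                cur,
                "INSERT INTO measurement (unit_id, value) VALUES %s",
                rows,
            )
            conn.commit()
            # Persist progress only after the commit, so a crash re-reads
            # (rather than skips) the last batch.
            with open(OFFSET_FILE, "w") as f:
                f.write(str(log.tell()))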

--
Craig Ringer
