On 20 March 2012 19:45, Michael Nolan <htf...@gmail.com> wrote:
>
>>
>> Right now I have about 7000 tables, one per stock, and I use Perl to
>> do the inserts; it's very slow. I would like to use COPY or another
>> bulk-loading tool to load the daily raw gz data, but then I need to
>> split the file into per-stock files before bulk loading, which I
>> consider a bit messy.
>
>
> Are you committing each insert separately or doing them in batches using
> 'begin transaction' and 'commit'?
>
> I have a database that I load from a text file using inserts. Doing a
> commit every 1000 inserts, rather than one per insert, cut the time by
> over 90%.
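
If I understand the suggestion, the batched loop would look roughly like
this (the ticks table, its columns, and the connection details are just
placeholders, not my real schema):

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# AutoCommit off, so DBI keeps rows in one open transaction until commit.
my $dbh = DBI->connect('dbi:Pg:dbname=market', 'user', 'pass',
                       { AutoCommit => 0, RaiseError => 1 });

my $sth = $dbh->prepare(
    'INSERT INTO ticks (symbol, ts, price, volume) VALUES (?, ?, ?, ?)');

my $batch = 1000;   # rows per commit
my $n     = 0;

while (my $line = <STDIN>) {
    chomp $line;
    my ($symbol, $ts, $price, $volume) = split /,/, $line;
    $sth->execute($symbol, $ts, $price, $volume);
    $dbh->commit if ++$n % $batch == 0;
}
$dbh->commit;       # flush the final partial batch
$dbh->disconnect;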

At the moment I use Perl DBI with a prepared statement, and I set the
following when I do the inserts:

shared_buffers = 4GB
work_mem = 1GB
synchronous_commit = off
effective_cache_size = 8GB
fsync = off
full_page_writes = off
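
On the COPY side, it looks like DBD::Pg can feed COPY FROM STDIN straight
from the script via pg_putcopydata/pg_putcopyend, so perhaps the
per-stock split could be avoided by loading the daily file into a single
staging table first. Roughly (again, the names and CSV layout are only
placeholders):

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:Pg:dbname=market', 'user', 'pass',
                       { AutoCommit => 0, RaiseError => 1 });

# Stream the compressed daily dump without writing intermediate files.
open my $fh, '-|', 'zcat daily.csv.gz' or die "zcat failed: $!";

$dbh->do('COPY ticks (symbol, ts, price, volume) FROM STDIN WITH CSV');
while (my $line = <$fh>) {
    $dbh->pg_putcopydata($line);   # lines already end with a newline
}
$dbh->pg_putcopyend();
$dbh->commit;
$dbh->disconnect;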

Thanks!


> --
> Mike Nolan
