On 27 April 2010 09:11, <a.bhattacha...@sungard.com> wrote:

>  Dear All Experts,
>
> I am using a PostgreSQL 8.3.5 database on a Windows 64-bit OS.
>
> I have a batch program written in Java which processes the data
> and populates it into tables in the Postgres database.
>
> I have *622,000 records*, but it is taking almost *4 and a half
> hours* to load this data into the tables.
>
> I have a simple function in the db which is called from the Java batch
> program to populate the records into the tables from flat files.
>
> I have the following system configuration for my database server:
>
> Database Server:    PostgreSQL v8.3.5
> Operating System:   Windows 2003 Server 64-bit, Service Pack 2
> CPU:                2 * Quad Core AMD Opteron Processor 2376 @ 2.29 GHz
> Memory:             16 GB RAM
> Disk Space:         2.5 TB total [C drive 454 GB & D drive 1.99 TB]
>
>  I have set my postgresql.conf parameters as below.
>
> ======================================================================
>
> #------------------------------------------------------------------------------
> # RESOURCE USAGE (except WAL)
> #------------------------------------------------------------------------------
>
> # - Memory -
>
> shared_buffers = 1GB
> temp_buffers = 256MB
> max_prepared_transactions = 100
> work_mem = 512MB
> maintenance_work_mem = 512MB
>
> # - Free Space Map -
>
> max_fsm_pages = 1600000
> max_fsm_relations = 10000
>
> #------------------------------------------------------------------------------
> # WRITE AHEAD LOG
> #------------------------------------------------------------------------------
>
> wal_buffers = 5MB                                            # min 32kB
> checkpoint_segments = 32
> checkpoint_completion_target = 0.9
>
> #------------------------------------------------------------------------------
> # QUERY TUNING
> #------------------------------------------------------------------------------
>
> # - Planner Method Configuration -
>
> enable_hashagg = on
> enable_hashjoin = on
> enable_indexscan = on
> enable_mergejoin = on
> enable_nestloop = on
> enable_seqscan = on
> enable_sort = on
>
> effective_cache_size = 8GB
>
> ======================================================================
>
> Please advise me on the best way to set these parameters to achieve
> better performance.
>
> Also note that when I set *shared_buffers = 2GB* or higher, Postgres
> throws the error “*shared_buffer size cannot be more than size_t*”.
>
> I would be very grateful if anyone could help me with this.
>
> Many thanks
>
Are these all being sent in 1 transaction?  Can't you use COPY to bulk
insert into the database?  If not, can you insert in batches (like 1,000 at
a time)?  Have you got any triggers/constraints/complicated domains on the
table you're inserting into?
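
As a rough illustration of both suggestions, here's a minimal Java sketch
using the JDBC driver's CopyManager API. The connection URL, credentials,
table name (my_table), column names and file path are all placeholders for
whatever your batch program actually uses, and the COPY variant assumes the
flat files are in a format COPY can read (CSV here):

import java.io.FileReader;
import java.io.Reader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

import org.postgresql.copy.CopyManager;
import org.postgresql.core.BaseConnection;

public class BulkLoadSketch {

    // Fastest option: stream the flat file straight in with COPY.
    static void loadWithCopy(Connection conn, String path) throws Exception {
        // The cast works on a plain (non-pooled) PostgreSQL connection.
        CopyManager copier = new CopyManager((BaseConnection) conn);
        Reader reader = new FileReader(path);
        try {
            // The whole file is streamed in a single COPY command.
            long rows = copier.copyIn("COPY my_table FROM STDIN WITH CSV", reader);
            System.out.println("loaded " + rows + " rows via COPY");
        } finally {
            reader.close();
        }
    }

    // Fallback: plain INSERTs, but batched and inside a single transaction.
    static void loadWithBatches(Connection conn, Iterable<String[]> rows) throws Exception {
        PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO my_table (col_a, col_b) VALUES (?, ?)");
        int n = 0;
        for (String[] row : rows) {
            ps.setString(1, row[0]);
            ps.setString(2, row[1]);
            ps.addBatch();
            if (++n % 1000 == 0) {
                ps.executeBatch();   // one network round trip per 1,000 rows
            }
        }
        ps.executeBatch();           // flush the remainder
        ps.close();
    }

    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "secret");
        conn.setAutoCommit(false);   // don't pay a WAL flush per row
        loadWithCopy(conn, "/path/to/flatfile.csv");
        conn.commit();
        conn.close();
    }
}

The autocommit setting is the big one: if your function is being called once
per record with a commit each time, that's 622,000 WAL flushes, which by
itself could explain a multi-hour load.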

Thom
