On 27 Apr 2010, at 10:11, <a.bhattacha...@sungard.com> wrote:

> Dear All Experts,
>  
> I am using a PostgreSQL 8.3.5 database on a Windows 64-bit OS.
> I have a batch program written in Java which processes the data and 
> populates it into tables in the Postgres database.
>  
> I have 622,000 records, but it is taking almost 4 and a half hours to 
> load this data into the tables.

Wow, that averages out to only about 38 inserts per second, or roughly 26 ms 
per insert! I don't think running every insert in its own transaction explains 
all of that delay by itself. Not that fixing that wouldn't help, but there's 
probably more going on here.
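For what it's worth, here is a minimal sketch of what a single-transaction 
load looks like in plain JDBC (the connection details and the table and column 
names are made up for illustration; it assumes the PostgreSQL JDBC driver is 
on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchLoad {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "secret");
        conn.setAutoCommit(false);  // one transaction for the whole load

        PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO my_table (id, payload) VALUES (?, ?)");
        for (int i = 0; i < 622000; i++) {
            ps.setInt(1, i);
            ps.setString(2, "row " + i);
            ps.addBatch();
            if (i % 10000 == 0) {
                ps.executeBatch();  // ship rows to the server in chunks
            }
        }
        ps.executeBatch();
        conn.commit();  // one WAL flush here instead of one per row
        conn.close();
    }
}

With autocommit on, every INSERT pays for its own commit, and each commit is a 
WAL flush to disk, which on ordinary disks limits you to a rather low number 
of transactions per second.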

Maybe you have some foreign key constraints involved that don't have indexes 
on the referencing columns? In that case I'd expect that you either have a lot 
of foreign key references, or only a few but from rather large tables (several 
million rows at least), or something in between.
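To illustrate (with entirely made-up names): Postgres requires a unique index 
on the referenced columns and creates it for you via the primary key, but it 
does not index the referencing column. Any check that has to scan the 
referencing table from the other side (deletes or updates on the parent, 
cascades) then becomes a sequential scan. The fix is a single statement, shown 
here through JDBC to match your setup:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class IndexForeignKey {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "secret");
        Statement stmt = conn.createStatement();
        // Hypothetical: orders.customer_id references customers(id),
        // and orders.customer_id has no index yet.
        stmt.execute("CREATE INDEX orders_customer_id_idx"
                + " ON orders (customer_id)");
        conn.close();
    }
}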

Any other constraints that could be relevant?

> I have a simple function in db which is called from the Java batch program 
> to populate the records into the tables from flat files.

Did you verify that most of the time is spent waiting on the database?
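A crude way to check is to wall-clock just the database call from the Java 
side (load_from_flat_file is a made-up stand-in for whatever your function is 
actually called):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class TimeDbCall {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "secret");
        // Made-up stand-in for your "simple function in db".
        PreparedStatement ps = conn.prepareStatement(
                "SELECT load_from_flat_file(?)");
        ps.setString(1, "/data/batch/file_001.txt");

        long t0 = System.nanoTime();
        ps.executeQuery();
        long elapsed = System.nanoTime() - t0;

        System.out.printf("spent %.1f s waiting on the database%n",
                elapsed / 1e9);
        conn.close();
    }
}

If that accounts for nearly all of the four and a half hours, the problem is 
on the database side; if not, look at the Java side first.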

Also, I'm not entirely sure what you mean by the quoted part above. Do you 
have a batch program that starts the Java interpreter once for each flat file?

Are they running sequentially (one after the other) or in parallel? In the 
latter case you may be waiting on locks.
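You can check for that while the load is running; pg_locks is a standard 
system view, and anything this returns is a backend waiting for a lock another 
backend holds:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ShowWaitingLocks {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "secret");
        Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery(
                "SELECT locktype, relation::regclass, pid, mode"
                + " FROM pg_locks WHERE NOT granted");
        while (rs.next()) {
            System.out.printf("%s on %s: pid %d waits for %s%n",
                    rs.getString(1), rs.getString(2),
                    rs.getInt(3), rs.getString(4));
        }
        conn.close();
    }
}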

Is the data your program needs for one insert on a single line in the flat 
file, or does it need to collect data from multiple lines throughout the file?

How much memory does your Java program use? Could it be that it causes 
Postgres to get swapped out?

Did you do any benchmarking on your "simple function in db" or on the queries 
it performs (assuming it does perform any)?
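If not, that is where I would look next. EXPLAIN ANALYZE on the statements the 
function runs usually points out the expensive one quickly. Two caveats: it 
actually executes the query, and it won't look inside a pl/pgsql function, so 
you need to test the function's statements individually. A sketch, again with 
a made-up query:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ExplainQuery {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "secret");
        Statement stmt = conn.createStatement();
        // Substitute one of the statements your function executes.
        ResultSet rs = stmt.executeQuery(
                "EXPLAIN ANALYZE SELECT * FROM orders"
                + " WHERE customer_id = 42");
        while (rs.next()) {
            System.out.println(rs.getString(1));  // one plan line per row
        }
        conn.close();
    }
}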

> I have the below system configuration for my database server.
> Database Server:   PostgreSQL v8.3.5
> Operating System:  Windows 2003 Server 64-bit, Service Pack 2
> CPU:               2 * Quad Core AMD Opteron Processor 2376 @ 2.29 GHz
> Memory:            16 GB RAM
> Disk Space:        2.5 TB total [C drive 454 GB & D drive 1.99 TB]

A system like that should be able to insert that small a number of records in 
no time.

> Also note that when I set shared_buffers = 2GB or higher, Postgres throws 
> an error: “shared_buffers size cannot be more than size_t”

That's odd... Is this a 32-bit Postgres build or is a 64-bit Windows incapable 
of assigning more than a 32-bit number for the amount of shared memory? Are you 
running in some kind of 32-bit compatibility mode maybe (PAE comes to mind)?
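In any case, very large shared_buffers values are generally reported to help 
less on Windows than on Unix; the usual advice for 8.3-era Windows is to keep 
it modest and let the OS file cache do the rest. Roughly like this in 
postgresql.conf (the numbers are a guess for your 16 GB box, not a measured 
recommendation):

# postgresql.conf
shared_buffers = 512MB         # modest values tend to behave best on Windows
effective_cache_size = 8GB     # planner hint only; nothing is allocated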

That said, I haven't used Windows for anything more serious than gaming since 
last century - I'm not exactly an expert on its behaviour.

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.

