Sezai YILMAZ wrote:
> Test Hardware:
> IBM Thinkpad R40
> CPU: Pentium 4 Mobile 1993 MHz (full powered)
> RAM: 512 MB
> OS: GNU/Linux, Fedora Core 1, kernel 2.4.24

> A test program developed with libpq inserts 200.000 rows into the table
> logs. Insertions are made with 100 rows per transaction (2.000
> transactions in total).
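The original test program is not shown; a minimal libpq sketch of the batching described above (100 INSERTs per transaction) might look like the following. The table columns (id, msg) and the connection string are assumptions, not taken from the post, and it needs libpq headers and a running server to build and run:

```c
/* Sketch of the batched-insert test: 2.000 transactions of 100 rows each.
 * Table columns and conninfo are assumed for illustration.
 * Build with: gcc insert_test.c -lpq
 */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connect failed: %s", PQerrorMessage(conn));
        return 1;
    }

    char sql[256];
    for (int tx = 0; tx < 2000; tx++) {        /* 2.000 transactions */
        PQclear(PQexec(conn, "BEGIN"));
        for (int i = 0; i < 100; i++) {        /* 100 rows per transaction */
            snprintf(sql, sizeof sql,
                     "INSERT INTO logs (id, msg) VALUES (%d, 'row %d')",
                     tx * 100 + i, i);
            PGresult *res = PQexec(conn, sql);
            if (PQresultStatus(res) != PGRES_COMMAND_OK)
                fprintf(stderr, "insert failed: %s", PQerrorMessage(conn));
            PQclear(res);
        }
        PQclear(PQexec(conn, "COMMIT"));
    }

    PQfinish(conn);
    return 0;
}
```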

> Some parameter changes from the postgresql.conf file follow:
> ----------------------------------------------------------------
> shared_buffers = 2048           # min max_connections*2 or 16, 8KB each

I suggest you raise that to, say, 10000 buffers.
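That is, in postgresql.conf (10000 buffers at 8 KB each is about 80 MB, which fits comfortably in the 512 MB of RAM):

```
shared_buffers = 10000          # 8KB each, ~80MB total
```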


> max_fsm_relations = 20000       # min 10, fsm is free space map, ~40 bytes
> max_fsm_pages = 200000          # min 1000, fsm is free space map, ~6 bytes
> max_locks_per_transaction = 256 # min 10
> wal_buffers = 64                # min 4, typically 8KB each
> sort_mem = 32768                # min 64, size in KB

You need to pull that one down a little, I guess. How about 8 or 16 MB?
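That would be something like this in postgresql.conf (sort_mem is allocated per sort operation, per backend, so 32 MB per sort is risky with several connections on a 512 MB machine; the exact value here is a suggestion, not from the post):

```
sort_mem = 8192                 # 8MB per sort; was 32768 (32MB)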


> vacuum_mem = 16384              # min 1024, size in KB

Not required. 1024 would do, since you are testing inserts anyway. Of course, it only matters when you run vacuum.


> effective_cache_size = 2000     # typically 8KB each

Is that true? That tells PostgreSQL it has only around 16 MB of cache. Set it to around 15000 so that 100 MB+ is assumed; that might change the results of index scans. I always prefer to set it to whatever is available.
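The arithmetic: effective_cache_size is measured in 8 KB pages, so 2000 pages is only 2000 × 8 KB = 16 MB, while 15000 pages is about 117 MB. In postgresql.conf:

```
effective_cache_size = 15000    # 8KB pages; ~117MB of assumed OS cache
```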


> The test was made with both PostgreSQL 7.3.4 and PostgreSQL 7.4.1 (the
> test program was recompiled during version changes).
>
> The results are below (average inserted rows per second).
>
>                           speed for          speed for
> # of EXISTING RECORDS     PostgreSQL 7.3.4   PostgreSQL 7.4.1
> =============================================================
> 0 initial records         1086 rows/s        1324 rows/s
> 200.000 initial records    781 rows/s         893 rows/s
> 400.000 initial records    576 rows/s         213 rows/s
> 600.000 initial records    419 rows/s         200 rows/s
> 800.000 initial records    408 rows/s        not tested (bad results)

Do you mean 800000? I believe the '.' is a thousands separator here, but I'm not too sure.. :-)


> When the logs table was reconstructed with only one index (the primary
> key), a speed of 2941 rows/s was reached. But I need all seven indexes.

> The question is: why is PostgreSQL 7.4.1 so slow under heavy work?

Can you run vmstat and see where things get stalled? You could probably also raise the number of WAL segments and try again.
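A sketch of both suggestions; the segment count below is a guess, not a value from the post:

```
# Sample system activity once per second while the test runs; a high 'wa'
# (I/O wait) or 'b' (blocked processes) column points at disk stalls:
vmstat 1

# In postgresql.conf (7.x), allow more WAL segments between checkpoints,
# so the bulk inserts trigger fewer checkpoint stalls:
#   checkpoint_segments = 16    # default is 3; each segment is 16MB
```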


> Is there a way to speed up inserts without eliminating indexes?

> What about concurrent inserts into the same table (running spare
> instances of the test program concurrently)? It did not work.

What do you mean, it didn't work? Any errors?


HTH

Shridhar


