On 08/21/2016 11:53 PM, Павел Филонов wrote:
Greetings to everybody!

I recently came across an observation that I cannot explain: why can
insertion throughput drop as the batch size increases?

Brief description of the experiment.

  * PostgreSQL 9.5.4 as server
  * https://github.com/sfackler/rust-postgres library as client driver
  * one relation with two indices (schema in attachment)

Experiment steps:

  * populate the DB with 259200000 random records
  * start insertion for 60 seconds with one client thread and batch size = m
  * record insertions per second (ips) in the client's code (see the sketch below)

Plot of median ips versus m for m in [2^0, 2^1, ..., 2^15] (in attachment).
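
For concreteness, the measurement loop might look roughly like the sketch
below. This is only a minimal sketch: it uses the current `postgres` crate
API (the 2016-era rust-postgres API differs), the table, index, and column
names are hypothetical stand-ins for the attached schema, and a batch of
size m is modeled as one transaction of m single-row inserts, which may not
match the original client exactly.

// Minimal sketch of the benchmark loop. Assumptions: current `postgres`
// crate (the 2016 rust-postgres API differs), a hypothetical table
// `samples` with two indices standing in for the attached schema, and
// "batch of size m" modeled as one transaction of m single-row inserts.
use std::time::{Duration, Instant};

use postgres::{Client, Error, NoTls};

fn main() -> Result<(), Error> {
    let mut client =
        Client::connect("host=localhost user=postgres dbname=bench", NoTls)?;

    // Hypothetical relation with two indices.
    client.batch_execute(
        "CREATE TABLE IF NOT EXISTS samples (ts bigint, value double precision);
         CREATE INDEX IF NOT EXISTS samples_ts_idx ON samples (ts);
         CREATE INDEX IF NOT EXISTS samples_value_idx ON samples (value);",
    )?;

    let batch_size: i64 = 256;              // m in the experiment
    let run_for = Duration::from_secs(60);  // measurement window

    let start = Instant::now();
    let mut inserted: i64 = 0;

    while start.elapsed() < run_for {
        // One batch = one transaction of `batch_size` single-row inserts.
        let mut tx = client.transaction()?;
        for i in 0..batch_size {
            let ts = inserted + i;
            let value = ts as f64 * 0.001;  // stand-in for the random payload
            tx.execute(
                "INSERT INTO samples (ts, value) VALUES ($1, $2)",
                &[&ts, &value],
            )?;
        }
        tx.commit()?;
        inserted += batch_size;
    }

    let ips = inserted as f64 / start.elapsed().as_secs_f64();
    println!("batch size {}: {:.0} inserts/s", batch_size, ips);
    Ok(())
}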


On the figure we can see that from m = 128 to m = 256 the throughput drops
from 13,000 ips to 5,000.

I hope someone can help me understand the reason for this behavior.

To add to Jeff's questions:

You say you are measuring the IPS in the client's code.

Where is the client: on the same machine, on the same network, or on a remote network?


--
Best regards
Filonov Pavel





--
Adrian Klaver
adrian.kla...@aklaver.com

