Finally we solved our problem by using a kind of trick: we have two kinds
of tables, online tables for reads and temp tables for mass-inserting our
data. We work on the temp tables (5 different tables) to insert all the
data without any index, which goes really fast compared to the previous
method, then we create the indexes.
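For the archives, a minimal sketch of that pattern over JDBC. The table
names (items, items_staging), the indexed column (file_id), and the final
ALTER TABLE ... RENAME swap are all assumptions, since the original message
is cut off before the last step:

    import java.sql.DriverManager

    object TempTableSwap {
      def main(args: Array[String]): Unit = {
        // Connection details are placeholders; adjust to your environment.
        val conn = DriverManager.getConnection(
          "jdbc:postgresql://localhost:5432/mydb", "user", "secret")
        val st = conn.createStatement()
        try {
          // Stage into an unindexed copy of the online table: no index
          // maintenance happens during the mass insert.
          st.execute("CREATE TABLE items_staging (LIKE items INCLUDING DEFAULTS)")

          // ... mass COPY / INSERT into items_staging goes here ...

          // Build the indexes once, after all the data is in.
          st.execute("CREATE INDEX ON items_staging (file_id)")

          // Swap the staged table in for the online one in one transaction.
          conn.setAutoCommit(false)
          st.execute("ALTER TABLE items RENAME TO items_old")
          st.execute("ALTER TABLE items_staging RENAME TO items")
          conn.commit()
        } finally {
          st.close()
          conn.close()
        }
      }
    }

Loading into a table with no indexes avoids per-row index maintenance,
which is usually the dominant cost of a mass insert.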
Please keep the list on CC: in your responses.
Benjamin Dugast wrote:
> 2014-07-18 13:11 GMT+02:00 Albe Laurenz:
>> This sounds a lot like checkpoint I/O spikes.
>>
>> Check with the database server log if the freezes coincide with checkpoints.
>>
>> You can increase checkpoint_segments when you load data so that
>> checkpoints happen less often.
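As a sketch of what to look at (connection details are placeholders): the
pg_settings view shows the current values, and with log_checkpoints = on
the server log timestamps every checkpoint, so the freezes can be
correlated with them.

    import java.sql.DriverManager

    object ShowCheckpointSettings {
      def main(args: Array[String]): Unit = {
        val conn = DriverManager.getConnection(
          "jdbc:postgresql://localhost:5432/mydb", "user", "secret")
        val st = conn.createStatement()
        try {
          // Inspect the checkpoint-related settings on a 9.3 server.
          val rs = st.executeQuery(
            "SELECT name, setting FROM pg_settings " +
            "WHERE name IN ('checkpoint_segments', " +
            "'checkpoint_completion_target', 'log_checkpoints')")
          while (rs.next())
            println(rs.getString("name") + " = " + rs.getString("setting"))
        } finally {
          st.close()
          conn.close()
        }
      }
    }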
On Fri, Jul 18, 2014 at 3:52 AM, Benjamin Dugast
wrote:
> Hello,
>
> I'm working on Postgres 9.3.4 for a project.
>
> We are using Scala, Akka and JDBC to insert data in the database; we have
> around 25M inserts to do, which are basically lines from 5000 files. We
> issue a DELETE according to the file (mandatory) and then a COPY each time
> a file has been processed.
Benjamin Dugast writes:
> - fsync to off (that helped but we can't do this)
Not exactly your question, but maybe synchronous_commit=off is a nice
enough intermediate solution for you (it may give better performance
elsewhere too, at an affordable cost).
--
Guillaume Cottenceau
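A minimal sketch of applying that per session over JDBC (connection
details are placeholders), so only the loading connection relaxes commit
durability:

    import java.sql.DriverManager

    object AsyncCommitSession {
      def main(args: Array[String]): Unit = {
        val conn = DriverManager.getConnection(
          "jdbc:postgresql://localhost:5432/mydb", "user", "secret")
        val st = conn.createStatement()
        try {
          // Unlike fsync = off, this only risks losing the last few commits
          // after a crash; it cannot corrupt the database.
          st.execute("SET synchronous_commit TO off")
          // ... DELETE + COPY work for this session goes here ...
        } finally {
          st.close()
          conn.close()
        }
      }
    }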
Benjamin Dugast wrote:
> I'm working on Postgres 9.3.4 for a project.
>
> We are using Scala, Akka and JDBC to insert data in the database; we have
> around 25M inserts to do, which are basically lines from 5000 files. We
> issue a DELETE according to the file (mandatory) and then a COPY each time
> a file has been processed.
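To make the described workload concrete, a minimal sketch of the per-file
DELETE-then-COPY cycle using the PostgreSQL JDBC driver's CopyManager. The
items table, its source_file column, and the CSV format are assumptions:

    import java.io.FileReader
    import java.sql.DriverManager
    import org.postgresql.PGConnection

    object PerFileLoader {
      def main(args: Array[String]): Unit = {
        val conn = DriverManager.getConnection(
          "jdbc:postgresql://localhost:5432/mydb", "user", "secret")
        val copy = conn.unwrap(classOf[PGConnection]).getCopyAPI
        try {
          for (path <- args) {
            // Mandatory step: drop whatever was previously loaded
            // from this file.
            val del = conn.prepareStatement(
              "DELETE FROM items WHERE source_file = ?")
            del.setString(1, path)
            del.executeUpdate()
            del.close()

            // Bulk-load the file's lines with COPY instead of row-by-row
            // INSERTs, assuming each line already carries all the columns.
            val reader = new FileReader(path)
            try copy.copyIn("COPY items FROM STDIN WITH (FORMAT csv)", reader)
            finally reader.close()
          }
        } finally {
          conn.close()
        }
      }
    }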