Steve Crawford wrote:
[EMAIL PROTECTED] wrote:
Hello
i have a python script to update 600,000 rows in one table from a csv
file in my
postgres database and it takes 5 hours to do the transaction...
Let's see if I guessed correctly.
Your Python script is stepping through a 600,000 row csv file and
issuing a separate UPDATE for each row.
On Sat, 15 Dec 2007, [EMAIL PROTECTED] wrote:
First, when I run htop I see that the memory used never exceeds 150 MB.
I don't understand, in that case, why setting the shmall and shmmax kernel
parameters to 16 GB of memory (the server has 32 GB) would speed up the
transaction so much.
Loïc Marteau <[EMAIL PROTECTED]> wrote ..
> Steve Crawford wrote:
> > If this
> > is correct, I'd first investigate simply loading the csv data into a
> > temporary table, creating appropriate indexes, and running a single
> > query to update your other table.
My experience is that this is MUCH faster.
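
The load-into-a-temp-table approach above can be sketched in Python. This is
a minimal illustration using the stdlib sqlite3 module as a stand-in for
PostgreSQL (with Postgres you would typically connect via psycopg2 and bulk-load
with COPY instead of executemany); the table and column names here are
made up for the example:

```python
import csv
import io
import sqlite3

# Stand-in database; with PostgreSQL you'd connect via psycopg2 instead.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [(i, 0.0) for i in range(1, 6)])

# Pretend this is the 600,000-row csv file.
csv_data = io.StringIO("id,balance\n1,10.5\n2,20.0\n3,30.25\n")

# 1. Load the csv into a temporary table
#    (in Postgres: COPY temp_accounts FROM ... via psycopg2's copy support).
conn.execute("CREATE TEMP TABLE temp_accounts (id INTEGER, balance REAL)")
rows = [(int(r["id"]), float(r["balance"])) for r in csv.DictReader(csv_data)]
conn.executemany("INSERT INTO temp_accounts VALUES (?, ?)", rows)

# 2. Create an appropriate index on the join column.
conn.execute("CREATE INDEX temp_accounts_id ON temp_accounts (id)")

# 3. Run a single set-based UPDATE instead of one UPDATE per csv row.
conn.execute("""
    UPDATE accounts
       SET balance = (SELECT t.balance
                        FROM temp_accounts t
                       WHERE t.id = accounts.id)
     WHERE id IN (SELECT id FROM temp_accounts)
""")
conn.commit()
```

The point of the single UPDATE is that the planner can do one indexed join
over the whole batch, rather than paying per-statement parse, plan, and
round-trip costs 600,000 times.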
Mark Mielke wrote:
> Asynchronous I/O is no more a magic bullet than threading. It requires a
> lot of work to get it right, and if one gets it wrong, it can be slower
> than the regular I/O or single threaded scenarios. Both look sexy on
> paper, neither may be the solution to your problem.