Hi Gavin – thanks, I hadn't realized that about psycopg. I'm on the earlier version, so I can't use what you recommended at this point. But I did use copy_expert.
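For illustration, a minimal sketch of the copy_expert pattern in question (the connection string, table, columns, and file are placeholders, not the actual load job):

    import psycopg2

    # Sketch of a COPY-based load via psycopg2's copy_expert.
    # Connection string, table, and file path are hypothetical.
    conn = psycopg2.connect("dbname=mydb user=me")
    with conn, conn.cursor() as cur, open("rows.csv") as f:
        # COPY ... FROM STDIN streams the file over the client connection
        cur.copy_expert(
            "COPY my_table (col1, col2, col3) FROM STDIN WITH (FORMAT csv)",
            f,
        )
    conn.close()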
Interestingly enough, the performance of the COPY statement is only slightly better than the inserts, as I was running multi-row INSERTs with 5000 VALUES clauses each. In the end, the current config couldn't keep up with the WAL creation, so I turned all that off. But still no perf gains. I also turned off fsync and set the kernel dirty-page settings to 10% and 98%… I wonder if there's a better load product than COPY? But I'd still like to know what separates COPY from bulk inserts…

pf

From: Gavin Roy <gav...@aweber.com>
Sent: Wednesday, November 24, 2021 1:50 PM
To: Godfrin, Philippe E <philippe.godf...@nov.com>
Cc: pgsql-general@lists.postgresql.org
Subject: [EXTERNAL] Re: Inserts and bad performance

On Wed, Nov 24, 2021 at 2:15 PM Godfrin, Philippe E <philippe.godf...@nov.com> wrote:

> Greetings
> I am inserting a large number of rows, 5, 10, 15 million. The python code commits every 5000 inserts. The table has partitioned children.

On the Python client side, if you're using psycopg, you should consider using COPY instead of INSERT if you're not:

https://www.psycopg.org/psycopg3/docs/basic/copy.html#copy

And if using psycopg2, execute_batch might be of value:

https://www.psycopg.org/docs/extras.html?highlight=insert#psycopg2.extras.execute_batch

Regards,

Gavin
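For reference, the psycopg 3 COPY interface from the first link looks roughly like this (connection string, table, and rows are illustrative):

    import psycopg

    # Sketch of the psycopg (3.x) COPY API from the linked docs;
    # the connection string, table, and rows are placeholders.
    with psycopg.connect("dbname=mydb user=me") as conn:
        with conn.cursor() as cur:
            with cur.copy("COPY my_table (id, name) FROM STDIN") as copy:
                for record in [(1, "a"), (2, "b")]:
                    copy.write_row(record)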
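And a minimal sketch of the psycopg2 execute_batch approach from the second link (again, all names are placeholders):

    import psycopg2
    from psycopg2.extras import execute_batch

    # execute_batch groups many parameterized INSERTs into fewer
    # round trips to the server. Connection string, table, and
    # rows below are hypothetical.
    conn = psycopg2.connect("dbname=mydb user=me")
    rows = [(1, "a"), (2, "b"), (3, "c")]
    with conn, conn.cursor() as cur:
        execute_batch(
            cur,
            "INSERT INTO my_table (id, name) VALUES (%s, %s)",
            rows,
            page_size=5000,  # statements batched per round trip
        )
    conn.close()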