Re: [GENERAL] Re: The fastest way to update thousands of rows in moderately sized table

2015-07-24 Thread 林士博
Thanks for sharing the results of your experiment!

[GENERAL] Re: The fastest way to update thousands of rows in moderately sized table

2015-07-24 Thread twoflower
In fact I did not partition by *fk_job* but by the *id* (primary key) instead, though partitioning by *fk_job* was my first idea. I use various columns when querying the table; *fk_job* is not always there. -- View this message in context: http://postgresql.nabble.com/The-fastest-way-to-update-…

Re: [GENERAL] Re: The fastest way to update thousands of rows in moderately sized table

2015-07-24 Thread 林士博
Yes, partitioning by fk_job can significantly improve the performance of this update, and all the SELECTs with a specified fk_job can be faster. What you should check carefully are the SELECTs without a specified fk_job. 2015-07-24 17:18 GMT+09:00 twoflower: > Thank you, I will look into those sugges…
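The suggestion above could be sketched as list partitioning on fk_job. This is a minimal, hypothetical sketch using PostgreSQL 10+ declarative partitioning (at the time of this 2015 thread, the same idea would be implemented with inheritance-based partitioning); only the table name and the fk_job, fk_assignment, and id columns come from the thread — everything else is assumed.

```sql
-- Hypothetical sketch: one partition per job (PostgreSQL 10+ syntax).
-- Column types are assumptions; only the names appear in the thread.
CREATE TABLE "TRANSLATION" (
    id            bigint NOT NULL,
    fk_job        bigint NOT NULL,
    fk_assignment bigint
) PARTITION BY LIST (fk_job);

CREATE TABLE "TRANSLATION_job_1000" PARTITION OF "TRANSLATION"
    FOR VALUES IN (1000);
```

With this layout, an UPDATE or SELECT that filters on fk_job touches only one partition, which is why queries *without* a fk_job predicate are the ones to re-check: they must scan every partition.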

[GENERAL] Re: The fastest way to update thousands of rows in moderately sized table

2015-07-24 Thread twoflower
Thank you, I will look into those suggestions. Meanwhile, I started experimenting with partitioning the table into smaller tables, each holding rows whose IDs span 1 million values; using this approach, the UPDATE takes 300 ms. I have to check if all the SELECTs I am issuing against the origi…
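The experiment described above — splitting by primary key into 1-million-row ranges — could look like the following. This is a hedged sketch in PostgreSQL 10+ declarative syntax (the 2015 thread would have used inheritance); column types and partition names are assumptions.

```sql
-- Hypothetical sketch: range partitioning by id, 1 million ids per partition.
CREATE TABLE "TRANSLATION" (
    id            bigint NOT NULL,
    fk_job        bigint NOT NULL,
    fk_assignment bigint
) PARTITION BY RANGE (id);

CREATE TABLE "TRANSLATION_p0" PARTITION OF "TRANSLATION"
    FOR VALUES FROM (0) TO (1000000);
CREATE TABLE "TRANSLATION_p1" PARTITION OF "TRANSLATION"
    FOR VALUES FROM (1000000) TO (2000000);
```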

Re: [GENERAL] Re: The fastest way to update thousands of rows in moderately sized table

2015-07-24 Thread 林士博
OK. In your case, I cannot see any reason why using a temp table to do a join-update would be faster than updating directly. And from the execution plan, the index scan takes very little time; most of the time is spent doing the insert/delete. As you mentioned, fk_assignment is updated frequently, and di…
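The temp-table join-update being compared against a direct UPDATE above could be sketched as follows; the staging-table name and the way it is populated are assumptions, only "TRANSLATION", id, and fk_assignment come from the thread.

```sql
-- Hypothetical sketch of a join-update via a temp table.
CREATE TEMP TABLE new_values (
    id            bigint PRIMARY KEY,
    fk_assignment bigint
);
-- ... populate new_values with the target rows and their new values ...

UPDATE "TRANSLATION" t
SET    fk_assignment = n.fk_assignment
FROM   new_values n
WHERE  t.id = n.id;
```

As the reply notes, this buys nothing here: PostgreSQL's MVCC means every updated row is written as a new row version regardless (the "insert/delete" cost in the plan), so the join only adds overhead when a plain indexed WHERE clause already finds the rows quickly.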

[GENERAL] Re: The fastest way to update thousands of rows in moderately sized table

2015-07-23 Thread twoflower
林士博 wrote: > Can you post the execution plan of the original update SQL? EXPLAIN (ANALYZE ON, BUFFERS ON) update "TRANSLATION" set fk_assignment … where fk_job = 1000; Here it is: Update on "TRANSLATION" (cost=0.56..9645.13 rows=3113 width=391) (actual time=35091.036..35091.036 rows=0 loops=1) Buffe…

Re: [GENERAL] Re: The fastest way to update thousands of rows in moderately sized table

2015-07-23 Thread 林士博
Sorry, my mistake. The execution plan of an UPDATE seems to always report rows=0. Can you post the execution plan of the original update SQL? EXPLAIN (ANALYZE ON, BUFFERS ON) update "TRANSLATION" set fk_assignment … where fk_job = 1000;

[GENERAL] Re: The fastest way to update thousands of rows in moderately sized table

2015-07-23 Thread twoflower
林士博 wrote: > Try creating an index on TRANSLATION fk_job. The index is already there. -- View this message in context: http://postgresql.nabble.com/The-fastest-way-to-update-thousands-of-rows-in-moderately-sized-table-tp5859144p5859191.html Sent from the PostgreSQL - general mailing list archiv…

Re: [GENERAL] Re: The fastest way to update thousands of rows in moderately sized table

2015-07-23 Thread 林士博
Try creating an index on TRANSLATION fk_job. From the execution plan you posted: -> (actual time=43262.266..43262.266 rows=0 loops=1) Actually, there is no row to be updated. So if you have an index on TRANSLATION fk_job, the update SQL below should finish within a few seconds. -> updat…
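The index suggested above is a one-liner; the table and column names come from the thread, the index name is PostgreSQL's default.

```sql
-- Index so the UPDATE's WHERE fk_job = ... becomes an index scan
-- instead of a sequential scan over the whole table.
CREATE INDEX ON "TRANSLATION" (fk_job);
```

Without it, an `UPDATE ... WHERE fk_job = 1000` must scan every row even when zero rows match, which is consistent with the ~43-second actual time for rows=0 quoted above.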

[GENERAL] Re: The fastest way to update thousands of rows in moderately sized table

2015-07-23 Thread twoflower
Adrian Klaver-4 wrote: > Have you tried wrapping the above in a BEGIN/COMMIT block? Yes, I am running the tests inside a BEGIN TRANSACTION / ROLLBACK block. -- View this message in context: http://postgresql.nabble.com/The-fastest-way-to-update-thousands-of-rows-in-moderately-sized-table-tp5859…
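The timing harness described above can be sketched as follows; the SET value is hypothetical (the thread never shows it), everything else comes from the messages.

```sql
-- Repeatable benchmark: run the UPDATE in a transaction and roll it back,
-- so every test run starts from identical data.
BEGIN;
\timing on
UPDATE "TRANSLATION" SET fk_assignment = NULL WHERE fk_job = 1000;  -- SET value is an assumption
ROLLBACK;
```

Note that ROLLBACK discards the row versions logically but the new tuples were still physically written, so repeated runs bloat the table until VACUUM reclaims the space — worth keeping in mind when comparing timings across runs.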