Thanks for sharing the results of your experiment!
In fact I did not partition by *fk_job* but by the *id* (primary key) instead,
though partitioning by *fk_job* was my first idea.
I use various columns when querying the table; *fk_job* is not always among them.
Yes, partitioning by fk_job can significantly improve the performance of this
update, and all the SELECTs with a specified fk_job can be faster as well.
What you should check carefully are the SELECTs without a specified fk_job.
2015-07-24 17:18 GMT+09:00 twoflower:
> Thank you, I will look into those suggestions.
Thank you, I will look into those suggestions.
Meanwhile, I started experimenting with partitioning the table into smaller
tables, each holding rows with IDs spanning 1 million values, and using this
approach the UPDATE takes 300 ms. I have to check if all the SELECTs I am
issuing against the original table still perform well.
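For reference, the ID-range scheme described above can be sketched with
PostgreSQL's declarative partitioning (available since PostgreSQL 10; at the
time of this thread it would have been done with table inheritance plus CHECK
constraints). The column list beyond id, fk_job and fk_assignment is an
assumption:

```sql
-- Sketch: partition "TRANSLATION" by ranges of 1 million id values.
-- Declarative syntax (PostgreSQL 10+); columns are illustrative only.
CREATE TABLE "TRANSLATION" (
    id            bigint NOT NULL,
    fk_job        bigint,
    fk_assignment bigint
    -- ... remaining columns of the real table
) PARTITION BY RANGE (id);

-- One child table per 1M-id slice; queries that filter on id are
-- pruned down to a single slice by the planner.
CREATE TABLE "TRANSLATION_0" PARTITION OF "TRANSLATION"
    FOR VALUES FROM (0) TO (1000000);
CREATE TABLE "TRANSLATION_1" PARTITION OF "TRANSLATION"
    FOR VALUES FROM (1000000) TO (2000000);
```

SELECTs that do not filter on id have to visit every partition, which is
exactly the caveat raised earlier in the thread.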
OK.
In your case, I cannot see any reason why
using a temp table to do a join-update would be faster than updating directly.
And from the execution plan, the index scan takes very little time;
most of the time is spent on the insert/delete.
As you mentioned, fk_assignment is updated frequently,
and di
林士博 wrote
> Can you post the execution plan of the original update SQL. EXPLAIN (ANALYZE
> ON, BUFFERS ON) UPDATE "TRANSLATION" SET fk_assignment … WHERE fk_job = 1000;
Here it is:
Update on "TRANSLATION" (cost=0.56..9645.13 rows=3113 width=391) (actual
time=35091.036..35091.036 rows=0 loops=1)
Buffe
Sorry, my mistake.
The execution plan of an UPDATE always seems to report rows=0.
Can you post the execution plan of the original update SQL:
EXPLAIN (ANALYZE ON, BUFFERS ON) UPDATE "TRANSLATION" SET
fk_assignment … WHERE fk_job = 1000;
林士博 wrote
> Try creating an index on TRANSLATION fk_job.
The index is already there.
--
View this message in context:
http://postgresql.nabble.com/The-fastest-way-to-update-thousands-of-rows-in-moderately-sized-table-tp5859144p5859191.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.
Try creating an index on TRANSLATION fk_job.
From the execution plan you posted,
->(actual time=43262.266..43262.266 rows=0 loops=1)
there is actually no row to be updated.
So, if you have an index on TRANSLATION fk_job, the update SQL below
should finish within several seconds.
->updat
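The suggestion above amounts to something like the following; the index name
is made up here for illustration:

```sql
-- Index on the column used in the WHERE clause, so the
-- fk_job = … lookup becomes an index scan instead of a
-- sequential scan over the whole table.
CREATE INDEX translation_fk_job_idx ON "TRANSLATION" (fk_job);

-- Re-check the plan afterwards; it should now show an Index Scan node.
EXPLAIN (ANALYZE ON, BUFFERS ON)
SELECT * FROM "TRANSLATION" WHERE fk_job = 1000;
```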
Adrian Klaver-4 wrote
> Have you tried wrapping the above in a BEGIN/COMMIT block?
Yes, I am running the tests inside a BEGIN TRANSACTION / ROLLBACK block.
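For the record, a test run wrapped that way looks roughly like the following
sketch (the SET value is a placeholder, since the original statement is
truncated in the archive):

```sql
BEGIN;
-- Time the update; EXPLAIN ANALYZE actually executes it
-- inside the transaction.
EXPLAIN (ANALYZE ON, BUFFERS ON)
UPDATE "TRANSLATION" SET fk_assignment = NULL  -- placeholder value
WHERE fk_job = 1000;
-- Discard the changes so the test can be repeated from the same state.
ROLLBACK;
```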