On Mon, 14 Sep 2020 at 17:36, Michael Lewis wrote:
> Just curious, are you doing this in a trigger or in your application code?
> Either way, I'd think you could use the table record type to compare the temp
> vs real table values as an entire unit.
Application code - if I put it in a trigger, [...]
Just curious, are you doing this in a trigger or in your application code?
Either way, I'd think you could use the table record type to compare the
temp vs real table values as an entire unit.
with cte_indexes as(
select * from pg_indexes limit 10
)
select i1.indexdef, i2.tablename
from cte_indexes i1
join cte_indexes i2 using (indexname)
where i1 is distinct from i2;
For future reference, in the end I have just added a WHERE clause
comparing all fields with IS DISTINCT FROM. This seems to work well,
prevents any locks/WAL-generation and is about as fast as the previous
solution with suppress_redundant_updates_trigger().
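A minimal sketch of that final approach (the table and column names here are invented for illustration; the real tables will have more columns):

```sql
-- Update only rows that actually differ. Row constructors compared
-- with IS DISTINCT FROM treat NULLs sanely, so identical rows
-- (including NULL-for-NULL matches) are skipped and generate no
-- heap write or WAL.
UPDATE main_table m
SET    col1 = t.col1,
       col2 = t.col2
FROM   temp_table t
WHERE  m.id = t.id
  AND  (m.col1, m.col2) IS DISTINCT FROM (t.col1, t.col2);
```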
Kind regards,
Mike
So it seems that when BEFORE triggers are handled, a SELECT FOR
UPDATE-style row-level lock is taken before the triggers are run. This
causes a write to the heap, as row-level locks are recorded in the
tuple header on the heap page. This has the unfortunate effect that
suppress_redundant_updates_trigger() is not able to prevent all
writes.
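For reference, the built-in trigger function discussed here is attached roughly like this (the table name sync_target is a placeholder):

```sql
-- suppress_redundant_updates_trigger() is a standard Postgres trigger
-- function that silently skips an UPDATE when the new row is
-- byte-identical to the old one. "sync_target" is a placeholder name.
-- The "z_" prefix makes it fire last among BEFORE triggers, which sort
-- alphabetically.
CREATE TRIGGER z_suppress_redundant
BEFORE UPDATE ON sync_target
FOR EACH ROW
EXECUTE FUNCTION suppress_redundant_updates_trigger();
```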
On Mon, 14 Sep 2020 at 06:32, Mike Noordermeer wrote:
> So it seems
> suppress_redundant_updates_trigger() does not entirely avoid writing
> _something_ to the blocks, and I don't know what it is and how to
> avoid it.
Looking at the pg_waldump output, it seems to write something
concerning locking.
On Mon, 14 Sep 2020 at 06:03, Laurenz Albe wrote:
> These are probably the "hint bits" set on newly committed rows by the first
> reader.
> Note that the blocks are dirtied during the sequential scan, not during the
> update.
>
> You could try VACUUMing the tables before the update (which will set
> the hint bits).
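Per that suggestion, a pre-update VACUUM would look like this (the table name is illustrative):

```sql
-- VACUUM visits every heap page, sets the commit hint bits on newly
-- committed tuples and updates the visibility map, so the later
-- sequential scan of the update does not have to dirty those pages.
VACUUM main_table;
```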
On Sun, 2020-09-13 at 13:49 +0200, Mike Noordermeer wrote:
> Hi,
>
> I am currently working on a data sync solution, where data is synced
> from external systems to Postgres. As I do not know what data changed
> in the source data store, I have opted for the following flow:
>
> - Load all data in a temp table
Hi,
I am currently working on a data sync solution, where data is synced
from external systems to Postgres. As I do not know what data changed
in the source data store, I have opted for the following flow:
- Load all data in a temp table
- Update existing rows in the main table based on the contents of the temp table
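The steps above can be sketched as follows (all names and the file path are placeholders, and insert/delete handling for new and removed rows is omitted):

```sql
-- 1. Load all source data into a temp table with the same shape
--    as the main table.
CREATE TEMP TABLE sync_stage (LIKE main_table INCLUDING ALL);
COPY sync_stage FROM '/path/to/export.csv' (FORMAT csv);

-- 2. Update existing rows in the main table from the staged data,
--    restricted to rows that actually changed so unchanged rows
--    cause no heap writes or WAL.
UPDATE main_table m
SET    col1 = s.col1,
       col2 = s.col2
FROM   sync_stage s
WHERE  m.id = s.id
  AND  (m.col1, m.col2) IS DISTINCT FROM (s.col1, s.col2);
```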