Hi,
On 2020-02-13 16:16:14 -0500, Tom Lane wrote:
> In principle perhaps we could improve the granularity of dead-row
> detection, so that if a row version is both created and deleted by
> the current transaction, and we have no live snapshots that could
> see it, we could go ahead and mark the row dead.
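As a rough illustration of where those row versions go, here is a sketch using the contrib extension pgstattuple (the table name "t" and the update counts are invented for this example, not taken from the thread): while the transaction is open the old versions cannot be pruned, because the transaction that obsoleted them is still in progress; after commit they show up as dead tuples until VACUUM or opportunistic pruning reclaims them.

CREATE EXTENSION IF NOT EXISTS pgstattuple;

CREATE TABLE t (id int PRIMARY KEY, counter int DEFAULT 0);
INSERT INTO t VALUES (1);

BEGIN;
UPDATE t SET counter = counter + 1 WHERE id = 1;
UPDATE t SET counter = counter + 1 WHERE id = 1;
-- ... imagine thousands of these; each one leaves behind another row
-- version that cannot be reclaimed while this transaction is still open ...
COMMIT;

-- After the commit the obsolete versions are counted as dead tuples
-- and are removed by (auto)vacuum or opportunistic page pruning.
SELECT tuple_count, dead_tuple_count FROM pgstattuple('t');
VACUUM t;
SELECT tuple_count, dead_tuple_count FROM pgstattuple('t');  -- dead count back near zero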
If your trigger is supposed to change certain fields, you could return OLD
instead of NEW if those fields have not been changed by the trigger. You
could also check an updated_on timestamp field to verify if the row has
already been modified and potentially skip the trigger altogether. Just a
couple of ideas.
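A minimal sketch of that kind of guard, with invented table and column names (my_table, payload, updated_on). Instead of returning OLD, this variant returns NULL from the BEFORE UPDATE trigger when nothing relevant changed, which cancels the update of that row entirely so no new row version is written; PostgreSQL also ships that check ready-made as suppress_redundant_updates_trigger().

-- Invented schema, for illustration only.
CREATE TABLE my_table (
    id         int PRIMARY KEY,
    payload    text,
    updated_on timestamptz
);

CREATE OR REPLACE FUNCTION my_table_guard() RETURNS trigger AS $$
BEGIN
    -- The column this trigger cares about did not change: return NULL so
    -- the UPDATE is skipped for this row and no new row version is written.
    IF NEW.payload IS NOT DISTINCT FROM OLD.payload THEN
        RETURN NULL;
    END IF;

    -- Row was already stamped earlier in this transaction (now() is fixed
    -- at transaction start), so skip the extra bookkeeping but let the
    -- caller's changes through.
    IF OLD.updated_on IS NOT DISTINCT FROM now() THEN
        RETURN NEW;
    END IF;

    NEW.updated_on := now();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER my_table_guard
    BEFORE UPDATE ON my_table
    FOR EACH ROW EXECUTE FUNCTION my_table_guard();

Note this only cuts the trigger's own overhead and redundant writes; an UPDATE that genuinely changes the row still produces a new row version inside the transaction.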
Thank you for the explanation.
That is pretty much what I suspected, but I held out hope that there was
some functionality I could use to clear the bloat as the transaction
progresses and bring the UPDATE time back down again.
"dont do that" is sensible, but much more easily said than done, as the
Karl Düüna writes:
> -- TL; DR;
> UPDATE on a row takes a relatively constant amount of time outside a
> transaction block, but running UPDATE on a single row over and over inside
> a transaction gets slower and slower as the number of UPDATE operations
> increases.
Yeah, that's unsurprising.
On Thu, Feb 13, 2020 at 1:42 PM Karl Düüna wrote:
> It really isn't noticeable until about 5k UPDATEs on a single row.
>
Don't know why, and never dealt with a scenario where this would even come
up, but that this doesn't perform well inside a transaction isn't
surprising to me. Kinda surprised
Hi
I recently came across a performance problem with a big transaction block,
which doesn't make sense to me and hopefully someone more knowledgeable can
explain the reasons and point out a direction for a solution.
-- TL; DR;
UPDATE on a row takes a relatively constant amount of time outside a
transaction block, but running UPDATE on a single row over and over inside
a transaction gets slower and slower as the number of UPDATE operations
increases.
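For what it's worth, here is a rough, self-contained way to reproduce the effect (the table "t" and the loop counts are made up for this sketch, not taken from the original report). A DO block runs as a single transaction; now() stays pinned to the transaction start while clock_timestamp() keeps advancing, so the NOTICE lines should show the cumulative elapsed time growing faster and faster as the updates pile up.

CREATE TABLE t (id int PRIMARY KEY, counter int NOT NULL DEFAULT 0);
INSERT INTO t VALUES (1);

DO $$
BEGIN
    FOR i IN 1..20000 LOOP
        UPDATE t SET counter = counter + 1 WHERE id = 1;
        IF i % 5000 = 0 THEN
            -- now() is frozen at transaction start, clock_timestamp() is not,
            -- so this prints the total time spent so far.
            RAISE NOTICE '% updates, % elapsed', i, clock_timestamp() - now();
        END IF;
    END LOOP;
END
$$;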