On 04/04/2025 10:41 pm, Melanie Plageman wrote:
On Fri, Apr 4, 2025 at 3:27 PM Konstantin Knizhnik <knizh...@garret.ru> wrote:
From a logical point of view I agree with you: taking the number of inserted
tuples into account makes sense if it allows pages to be marked as all-visible.
So `ins_since_vacuum` would better be renamed to `ins_all_visible_since_vacuum`
and count only all-visible tuples. If a newly inserted tuple is not visible to
all, then it should not be counted in the statistic and should not trigger
autovacuum. But I have no idea how to maintain such a counter efficiently: we
would have to keep track of the xids of all recently inserted tuples and, on
each transaction commit, determine which of them have become all-visible.
And your suggestion to simply not reset `ins_since_vacuum` until all of them
become all-visible may be easier to implement, but under a permanent workload it
can lead to a situation where `ins_since_vacuum` is never reset and causes the
table to be vacuumed on every autovacuum iteration. This may cause significant
performance degradation, even without long-living transactions.
Right, you definitely can't just never reset it to 0. As it is
currently defined, the behavior is correct: that is how many tuples
were inserted since the last vacuum.
As for your proposed patch, I agree with Robert that triggering
vacuums using the stat you proposed just adds to the problem we know
we already have with wasted vacuuming work due to long-running
transactions.
If we already have a problem with wasted vacuuming, then IMHO it should be
addressed (for example by keeping a last_autovacuum_xid_horizon). I do not
think that the proposed patch adds anything new to that problem. Yes, it will
trigger autovacuum after every N visibility checks, and if there is still some
long-running transaction, then this vacuum is useless. But the same is true for
the criterion based on the number of dead tuples.
And from my point of view, extra autovacuum runs are less critical than missed
autovacuum runs (when the table contains garbage or the VM is not up to date,
but vacuum is not launched).
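To be concrete about last_autovacuum_xid_horizon, what I have in mind is
roughly the sketch below. The field name is invented and would have to be
remembered per table; GetOldestNonRemovableTransactionId() is the existing
horizon function, everything else is illustration only:

    #include "postgres.h"
    #include "access/transam.h"
    #include "storage/procarray.h"
    #include "utils/rel.h"

    /*
     * Sketch: skip a new (auto)vacuum of a table while the xid horizon has
     * not advanced past the horizon the previous vacuum already worked with,
     * because such a vacuum could not clean or mark anything more than the
     * previous one did.
     */
    static bool
    vacuum_would_be_wasted(Relation rel,
                           TransactionId last_autovacuum_xid_horizon)
    {
        TransactionId current_horizon = GetOldestNonRemovableTransactionId(rel);

        return !TransactionIdFollows(current_horizon,
                                     last_autovacuum_xid_horizon);
    }

Such a check would suppress wasted runs regardless of which statistic triggered
them, so it seems orthogonal to the patch itself.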
A more targeted solution to your specific problem would be to update
the visibility map on access. Then, the first time you have to fetch
that heap page, you could mark it all-visible (assuming the long
running transaction has ended) and then the next index-only scan
wouldn't have to do the same heap fetch. It doesn't add any overhead
in the case that the long running transaction has not ended, unlike
trying to trigger another autovacuum.
I really considered this alternative when thinking about a solution to the
problem; it is more consistent with the hint-bit approach.
I declined it in favor of this solution for the following reasons:
1. An index-only scan holds a read-only (shared) lock on the heap page. In
order to update it, we need to upgrade this lock to exclusive.
2. We need to check visibility of all tuples on the page (actually do something
like `heap_page_is_all_visible`), but if there is a large number of tuples on
the page this can be quite expensive, and I am afraid it can slow down the
index-only scan. Yes, only in the "slow case", when the scan has to access the
heap to perform the visibility check, but it may still not be acceptable. Also,
it is not clear how to mark the page as already checked; otherwise we would
have to repeat this check for every tid referring to this page.
3. `heap_page_is_all_visible` is local to vacuumlazy.c, so to use it in an
index-only scan we would have to either make it global or copy&paste its code.
Just removing "static" is not possible, because it uses the local `LVRelState`,
so some refactoring is needed in any case.
4. We need to WAL-log the VM page and the heap page when setting the
all-visible bit, which is quite an expensive operation. Doing it inside an
index-only scan can significantly increase the time of the select. Certainly
Postgres is not a real-time DBMS, but it is still better to provide predictable
query execution time. This is why I think it is better to do such work in the
background (in vacuum). A rough sketch of what an in-scan update would involve
is shown below.
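Just to illustrate points 1, 2 and 4, an in-scan VM update would have to do
roughly the following. This is only a simplified sketch under my assumptions,
not working code: there is no recheck of what happened to the page while the
lock was released, and page_really_all_visible() is a made-up placeholder for
the tuple-by-tuple logic of `heap_page_is_all_visible`:

    #include "postgres.h"
    #include "access/transam.h"
    #include "access/visibilitymap.h"
    #include "access/xlogdefs.h"
    #include "storage/bufmgr.h"
    #include "storage/bufpage.h"
    #include "utils/rel.h"

    /* Placeholder for the (currently static) heap_page_is_all_visible() logic. */
    static bool page_really_all_visible(Relation rel, Buffer buf,
                                        TransactionId *visibility_cutoff_xid);

    static void
    try_set_all_visible_during_ios(Relation rel, Buffer heapbuf, Buffer *vmbuf)
    {
        BlockNumber   blkno = BufferGetBlockNumber(heapbuf);
        Page          page = BufferGetPage(heapbuf);
        TransactionId cutoff_xid = InvalidTransactionId;

        /* (1) trade the shared lock held for the visibility check for an
         * exclusive one */
        LockBuffer(heapbuf, BUFFER_LOCK_UNLOCK);
        LockBuffer(heapbuf, BUFFER_LOCK_EXCLUSIVE);

        /* (2) re-examine every tuple on the page, as vacuum does */
        if (page_really_all_visible(rel, heapbuf, &cutoff_xid))
        {
            /*
             * (4) dirty the heap page and set the VM bit; visibilitymap_set()
             * WAL-logs the change itself when passed InvalidXLogRecPtr.
             */
            PageSetAllVisible(page);
            MarkBufferDirty(heapbuf);
            visibilitymap_pin(rel, blkno, vmbuf);
            visibilitymap_set(rel, blkno, heapbuf, InvalidXLogRecPtr,
                              *vmbuf, cutoff_xid, VISIBILITYMAP_ALL_VISIBLE);
        }

        /* drop back to the shared lock the scan expects */
        LockBuffer(heapbuf, BUFFER_LOCK_UNLOCK);
        LockBuffer(heapbuf, BUFFER_LOCK_SHARE);
    }

Even in this simplified form it shows the lock dance and the WAL traffic that
would land on the index-only-scan path, which is exactly what I would prefer to
keep in background vacuum.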