On 14.08.2017 23:56, Andres Freund wrote:
Alternatively we could do something without marker files, with some
added complexity: Keep track of all "uncommitted new files" in memory,
and log them every checkpoint. Commit/abort records clear elements of
that list. Since we always start replay at the beginning of a
checkpoint, we'd always reach a moment with such an up2date list of
pending-action files before reaching end-of-recovery. At end-of-recovery
we can delete all unconfirmed files.  To avoid out-of-memory due to too
many tracked relations, we'd possibly still have to have marker files...


Hi, hackers.

I'm sorry to bump this thread, but there is still no good solution to this problem. I see there are a few threads with undo-based approaches, which look preferable but have some pitfalls. Is there any chance of returning to the non-undo approaches partially discussed here? What do you think about the following solutions?

1) Make `pendingDeletes` shared and let the postmaster clean up all garbage when a child process dies. Cons: does not work if the postmaster itself dies; we would also have to take care that the `pendingDeletes` pointers stay valid.

2) Catch and store all records carrying a relfilenode during WAL replay, and delete all orphaned nodes at the end of replay. Cons: the final delete may operate on an incomplete list of nodes, since files may have been created before the latest checkpoint. There is also a general opacity problem: we would remove files without a corresponding WAL record (and possibly in the wrong place altogether).

3) This is close to the approach quoted above and combines the two ideas: `pendingDeletes` is shared, each checkpoint writes a WAL record listing the open transactions and the nodes they have created, and WAL replay uses that list as a base, adding nodes from newer records. The final delete then operates on a complete list of orphaned nodes (a rough sketch follows below). Cons: complexity(?). Others(?).
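To make (3) a bit more concrete, here is a minimal, self-contained C sketch of the bookkeeping only. All names (PendingNode, pending_record_create, pending_sweep, ...) are hypothetical, not existing PostgreSQL APIs; the real list would live in shared memory, and the checkpoint snapshot would be an actual WAL record rather than printf output:

#include <stdint.h>
#include <stdio.h>

/*
 * Toy model of approach 3: a shared list of relfilenodes created by
 * still-open transactions.  In PostgreSQL proper this would live in
 * shared memory and the checkpoint snapshot would be emitted as a WAL
 * record; here both are simulated with a plain array and printf.
 */

#define MAX_PENDING 64

typedef struct PendingNode
{
	uint32_t	xid;			/* creating transaction */
	uint32_t	relfilenode;	/* file created by that transaction */
} PendingNode;

static PendingNode pending[MAX_PENDING];
static int	npending = 0;

/* A new relation file was created: remember who created it. */
static void
pending_record_create(uint32_t xid, uint32_t relfilenode)
{
	if (npending < MAX_PENDING)
	{
		pending[npending].xid = xid;
		pending[npending].relfilenode = relfilenode;
		npending++;
	}
}

/* Commit/abort record seen: the fate of xid's files is now decided. */
static void
pending_clear_xid(uint32_t xid)
{
	int			i = 0;

	while (i < npending)
	{
		if (pending[i].xid == xid)
			pending[i] = pending[--npending];	/* unordered remove */
		else
			i++;
	}
}

/* Checkpoint: log the full list so replay can start from a known base. */
static void
pending_checkpoint_snapshot(void)
{
	printf("checkpoint snapshot: %d pending file(s)\n", npending);
	for (int i = 0; i < npending; i++)
		printf("  xid %u -> relfilenode %u\n",
			   pending[i].xid, pending[i].relfilenode);
}

/* End of recovery: whatever is still listed belongs to no live xact. */
static void
pending_sweep(void)
{
	for (int i = 0; i < npending; i++)
		printf("would unlink relfilenode %u (orphan of xid %u)\n",
			   pending[i].relfilenode, pending[i].xid);
	npending = 0;
}

int
main(void)
{
	pending_record_create(10, 100);	/* xact 10 creates file 100 */
	pending_record_create(11, 200);	/* xact 11 creates file 200 */
	pending_checkpoint_snapshot();	/* both land in the checkpoint record */
	pending_clear_xid(10);			/* xact 10 commits: file 100 is safe */
	/* crash here; replay rebuilds the list and then sweeps */
	pending_sweep();				/* only file 200 is removed */
	return 0;
}

Note that the fixed-size array makes the out-of-memory concern from the quoted message visible: if the list overflows, we would presumably have to fall back to marker files, as suggested there.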

Can it work? Are any of these approaches still relevant?

I'm sorry for the earlier HTML-formatted message; our web-based mail app is too smart.

--
Regards,
Alex Go, C developer
g...@arenadata.io, www.arenadata.tech
