Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Interesting failure mode.  While reading it I was suddenly struck by the
> thought that overwriting storage managers may somehow be more resistant
> to this kind of failure.  This may well be true, because there is never
> a need for a VACUUM process, which could fail to correctly determine
> whether a tuple is truly dead or not; but in the end, concurrent
> processes have to follow t_ctid chains anyway.

Yeah.  I think the Oracle-style approach has essentially the same issues
if it tries to reuse space in the rollback segment.

> I also considered whether the correct test was xmin=xmax, or whether a
> transaction-tree test was needed instead.  Then I realized that it's
> not possible for a transaction to create a tuple chain crossing a
> subtransaction boundary.  So the xmin=xmax test is correct.

Actually, I thought of a counterexample: consider a tuple updated twice
in the same xact:

                XMIN    XMAX    t_ctid
        T1      X0      X1      -> T2
        T2      X1      X1      -> T3
        T3      X1      -       -> T3 (self)

If we remove T2, we'll be unable to chain from T1 to T3, which would
definitely be wrong: T2 has XMIN = XMAX, so the special case would let
VACUUM remove it even though the chain from T1 still runs through it.
So I'm now thinking that the special case in HeapTupleSatisfiesVacuum
has to go, too.
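
For illustration, here's a minimal self-contained sketch (invented
names, not actual backend code) of the invariant a chain follower has
to enforce: a hop is trustworthy only if the next tuple's XMIN equals
the previous tuple's XMAX, which fails if the intermediate member was
removed and its slot reused:

    #include <stdio.h>
    #include <stdbool.h>
    #include <stdint.h>

    typedef uint32_t TransactionId;

    /* Toy stand-in for a heap tuple header: only the fields that
       matter for chain following. */
    typedef struct
    {
        TransactionId xmin;     /* inserting transaction */
        TransactionId xmax;     /* updating transaction, 0 if none */
    } MiniTuple;

    /* A hop from tuple a to its t_ctid successor b is trustworthy
       only if b was really created by the transaction that updated
       a; otherwise the slot may have been reused for an unrelated
       tuple. */
    static bool
    hop_is_valid(const MiniTuple *a, const MiniTuple *b)
    {
        return a->xmax != 0 && b->xmin == a->xmax;
    }

    int
    main(void)
    {
        /* The example above: X0 = 100 inserts T1, X1 = 101 updates
           it twice, producing the chain T1 -> T2 -> T3. */
        MiniTuple t1 = { 100, 101 };
        MiniTuple t2 = { 101, 101 };
        MiniTuple t3 = { 101, 0 };
        /* Unrelated tuple occupying T2's slot after removal. */
        MiniTuple reused = { 200, 0 };

        printf("T1 -> T2: %d\n", hop_is_valid(&t1, &t2));         /* 1 */
        printf("T2 -> T3: %d\n", hop_is_valid(&t2, &t3));         /* 1 */
        printf("T1 -> reused: %d\n", hop_is_valid(&t1, &reused)); /* 0 */
        return 0;
    }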

>> This is going to require a number of changes since there are several
>> places that follow t_ctid chains.

> I wonder whether this should be refactored so all of them use a single
> piece of code.

Most of those places end up feeding into EvalPlanQual, but passing the
original tuple's XMAX down to it will require changing the APIs of
heap_update, heap_delete, and heap_lock_tuple (sigh).
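
For concreteness, a hypothetical sketch of the sort of signature this
implies (the priorXmax parameter name is invented; nothing is settled):

    /* Callers that follow a t_ctid pointer would pass along the XMAX
       they saw on the old tuple, so that EvalPlanQual can check that
       the tuple version it fetches has a matching XMIN before
       trusting the chain. */
    extern TupleTableSlot *
    EvalPlanQual(EState *estate, Index rti,
                 ItemPointer tid, TransactionId priorXmax);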

                        regards, tom lane
