Hi, 

On February 19, 2022 7:56:53 PM PST, Peter Geoghegan <p...@bowt.ie> wrote:
>On Sat, Feb 19, 2022 at 7:47 PM Andres Freund <and...@anarazel.de> wrote:
>> Non-DEAD orphaned versions shouldn't cause a problem in lazy_scan_prune(). The
>> problem here is DEAD orphaned HOT tuples, and those we should be able to
>> delete with the new page pruning logic, right?
>
>Right. But what good does that really do? The problematic page had a
>third tuple (at offnum 3) that was LIVE. If we could have done
>something about the problematic tuple at offnum 2 (which is where we
>got stuck), then we'd still be left with a very unpleasant choice
>about what happens to the third tuple.

Why does anything need to happen to it from vacuum's POV? It won't be a
problem for freezing etc. Until it's deleted, vacuum doesn't need to care.

Probably worth a WARNING, and amcheck definitely needs to detect it, but 
otherwise I think it's fine to just continue.
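
Roughly what I have in mind, purely as an illustration (the helper that decides
whether a heap-only tuple is orphaned, and where this would get called from,
are made up for the sketch, not actual patch code):

#include "postgres.h"

#include "access/htup_details.h"
#include "storage/bufpage.h"
#include "storage/itemid.h"
#include "utils/rel.h"

/* assumed helper: true if no root line pointer reaches this heap-only tuple */
extern bool is_orphaned_hot(Page page, OffsetNumber offnum);

/*
 * Hypothetical sketch: walk a page's line pointers, WARN about orphaned
 * heap-only tuples, and otherwise just keep going.
 */
static void
warn_orphaned_heap_only_tuples(Relation rel, BlockNumber blkno, Page page)
{
	OffsetNumber offnum,
				maxoff = PageGetMaxOffsetNumber(page);

	for (offnum = FirstOffsetNumber; offnum <= maxoff; offnum++)
	{
		ItemId		itemid = PageGetItemId(page, offnum);
		HeapTupleHeader htup;

		if (!ItemIdIsNormal(itemid))
			continue;

		htup = (HeapTupleHeader) PageGetItem(page, itemid);

		if (HeapTupleHeaderIsHeapOnly(htup) && is_orphaned_hot(page, offnum))
			ereport(WARNING,
					(errmsg("orphaned heap-only tuple at (%u,%u) in relation \"%s\"",
							blkno, offnum, RelationGetRelationName(rel))));

		/* nothing else to do; vacuum can simply continue */
	}
}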


>> I think it might be worth getting rid of the need for the retry approach by
>> reusing the same HTSV status array between heap_page_prune and
>> lazy_scan_prune. Then the only legitimate reason for seeing a DEAD item in
>> lazy_scan_prune() would be some form of corruption. And it'd be a pretty
>> decent performance boost; HTSV ain't cheap.
>
>I guess it doesn't actually matter if we leave behind an aborted DEAD tuple
>that we could have pruned away but didn't. The important thing is to be
>consistent at the level of the page.

That's not OK, because it opens up the danger of such tuples being interpreted
differently after XID wraparound etc.

But I don't see any case where that would happen with the new pruning logic in
your patch combined with sharing the HTSV status array?
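
As a rough illustration of what I mean by sharing the verdicts (the struct
name, the int8 encoding, and the helper below are made up for the sketch, not
code from your patch):

#include "postgres.h"

#include "access/heapam.h"
#include "access/htup_details.h"
#include "storage/off.h"

/*
 * Hypothetical per-page map of the HTSV_Result that pruning already computed
 * for each offset.  -1 marks line pointers that never got a verdict
 * (unused/redirected/dead line pointers).
 */
typedef struct PruneHTSVMap
{
	int8		htsv[MaxHeapTuplesPerPage + 1]; /* indexed by OffsetNumber */
} PruneHTSVMap;

/*
 * On the heap_page_prune side, each verdict would be stashed as it is
 * computed, e.g.:
 *
 *     map->htsv[offnum] = (int8) HeapTupleSatisfiesVacuum(&tup, OldestXmin, buf);
 *
 * lazy_scan_prune then reads the saved verdict instead of calling
 * HeapTupleSatisfiesVacuum() again, so a DEAD entry surviving to that point
 * can only mean corruption (or a bug), not a concurrently aborted xact.
 */
static inline HTSV_Result
saved_htsv(const PruneHTSVMap *map, OffsetNumber offnum)
{
	Assert(map->htsv[offnum] != -1);
	return (HTSV_Result) map->htsv[offnum];
}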

Andres


-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

