On Fri, Apr 8, 2022 at 12:04 PM Robert Haas <robertmh...@gmail.com> wrote:
> I meant wasting space in the page. I think that's a real concern.
> Imagine you allowed 1000 line pointers per page. Each one consumes 2
> bytes. So now you could have ~25% of each page in the table storing
> dead line pointers. That sounds awful, and just running VACUUM won't
> fix it once it's happened, because the still-live line pointers are
> likely to be at the end of the line pointer array and thus truncating
> it won't necessarily be possible.
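(For reference, here is a back-of-the-envelope sketch of the figures involved, using the default 8KB-page constants from PostgreSQL's bufpage.h and htup_details.h. One caveat: sizeof(ItemIdData) is actually 4 bytes, not 2, since a line pointer packs lp_off, lp_flags, and lp_len into 32 bits -- so 1000 dead line pointers would consume closer to half the page, which only makes the concern stronger.)

```python
# Rough arithmetic for line pointer space on a default 8KB heap page,
# with constants as defined in PostgreSQL's headers.
BLCKSZ = 8192              # default page size
PAGE_HEADER = 24           # SizeOfPageHeaderData
ITEM_ID = 4                # sizeof(ItemIdData): lp_off(15) + lp_flags(2) + lp_len(15) bits
TUPLE_HEADER = 24          # MAXALIGN(SizeofHeapTupleHeader): 23 rounded up to 8 bytes

# The current per-page limit of 291: each tuple must fit at least a line
# pointer plus an aligned, minimal heap tuple header.
max_tuples_per_page = (BLCKSZ - PAGE_HEADER) // (TUPLE_HEADER + ITEM_ID)
print(max_tuples_per_page)                 # -> 291

# Page space consumed by 1000 dead line pointers.
dead_lp_fraction = 1000 * ITEM_ID / BLCKSZ
print(f"{dead_lp_fraction:.0%}")           # -> 49%
```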
I see. That's a legitimate concern, though one that I believe can be addressed. I have learned to dread any kind of bloat that's irreversible, no matter how minor it might seem when viewed as an isolated event, so I'm certainly sympathetic to these concerns.

You can make a similar argument in favor of a higher MaxHeapLinePointersPerPage limit, though -- and that's why I believe an increase of some kind makes sense. The argument goes like this: what if we miss the opportunity to systematically keep successor versions of a given logical row on the same heap page over time, due only to the current low MaxHeapLinePointersPerPage limit of 291? If we had been able to absorb just a few extra versions in the short term, we would have had stability (in the sense of being able to preserve locality among related logical rows) in the long term. We could have kept everything together, if only we hadn't overreacted to what were actually rare, short-term perturbations.

--
Peter Geoghegan