On 2/21/23 3:12 PM, Andres Freund wrote:
> Hi,
>
> On 2023-02-21 15:00:15 -0600, Jim Nasby wrote:
>> Some food for thought: I think it's also completely fine to extend any
>> relation over a certain size by multiple blocks, regardless of concurrency.
>> E.g. 10 extra blocks on an 80MB relation is 0.1%. I don't have a good feel
>> for what algorithm would make sense here; maybe something along the lines of
>> extend = min(relpages / 2048, 128); if extend < 8, extend = 1 (presumably
>> extending by just a couple of extra pages doesn't help much without
>> concurrency).
>
> I previously implemented just that. It's not easy to get right. You can easily
> end up with several backends each extending the relation by quite a bit at
> the same time (or you re-introduce contention), which can leave the relation
> considerably larger than necessary if data loading stops at some point.
>
> We might want that as well at some point, but the approach implemented in the
> patchset is precise and thus always a win, so it should be the baseline.

Yeah, what I was suggesting would only make sense when there *wasn't* contention.

