On Fri, Jun 21, 2019 at 6:54 PM Tom Lane <t...@sss.pgh.pa.us> wrote:
> > That's not a bad goal, although invoking a user-supplied callback
> > while holding a buffer lock is a little scary.
>
> I nominate Robert for Understater of the Year.  I think there's pretty
> much 0 chance of that working reliably.
It's an honor to be nominated, although I am pretty sure this is not my
best work in category, even for 2019.

There are certainly useful things that could be done by such a callback
without doing anything that touches shared memory and without doing
anything that consumes more than a handful of CPU cycles, so it doesn't
seem utterly crazy to think that such a design might survive.  However,
the constraints we'd have to impose might chafe.

I am more inclined to ditch the callback model altogether in favor of
putting any necessary looping logic on the caller side.  That seems a lot
more flexible, and the only trick is figuring out how to keep it cheap.
Providing some kind of context object that can hold onto one or more pins
seems like the most reasonable approach.  Last week it seemed to me that
we would need several, but at the moment I can't think of a reason why we
would need more than one.  I think we just want to optimize the case
where several undo lookups in quick succession are actually reading from
the same page, and we don't want to go to the expense of looking that
page up multiple times.  It doesn't seem at all likely that we would have
a chain of undo records that leaves a certain page and then comes back to
it later, because this is a log that grows forward, not some kind of
random-access thing.  So a cache of size >1 probably wouldn't help.
Unless I'm still confused.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
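
For concreteness, here is a minimal sketch of the kind of single-pin
context object discussed above.  All of the names (UndoReadContext,
undo_read_context_get_page, and so on) are invented for illustration and
are not from the actual undo patches; the real undo layer would read its
buffers through its own machinery rather than plain ReadBuffer, so treat
this only as an outline of the caching idea.

#include "postgres.h"

#include "storage/block.h"
#include "storage/bufmgr.h"
#include "utils/rel.h"

/*
 * Hypothetical sketch: remember the single most recently pinned undo
 * page so that consecutive lookups hitting the same page can reuse the
 * pin instead of looking the buffer up again.
 */
typedef struct UndoReadContext
{
	Buffer		buf;			/* currently pinned buffer, or InvalidBuffer */
	BlockNumber blkno;			/* block number that 'buf' refers to */
} UndoReadContext;

static void
undo_read_context_init(UndoReadContext *ctx)
{
	ctx->buf = InvalidBuffer;
	ctx->blkno = InvalidBlockNumber;
}

/*
 * Return a pinned buffer for the requested block.  If the caller asks
 * for the same block as last time, reuse the existing pin; otherwise
 * drop the old pin and acquire a new one.  (ReadBuffer stands in for
 * whatever buffer-reading interface the undo layer actually uses.)
 */
static Buffer
undo_read_context_get_page(UndoReadContext *ctx, Relation rel,
						   BlockNumber blkno)
{
	if (BufferIsValid(ctx->buf) && ctx->blkno == blkno)
		return ctx->buf;		/* cache hit: same page as the last lookup */

	if (BufferIsValid(ctx->buf))
		ReleaseBuffer(ctx->buf);	/* drop the pin on the previous page */

	ctx->buf = ReadBuffer(rel, blkno);
	ctx->blkno = blkno;
	return ctx->buf;
}

static void
undo_read_context_release(UndoReadContext *ctx)
{
	if (BufferIsValid(ctx->buf))
		ReleaseBuffer(ctx->buf);
	ctx->buf = InvalidBuffer;
	ctx->blkno = InvalidBlockNumber;
}

The caller would init the context, run its loop of undo-record lookups
against it, and call undo_read_context_release() when the batch is done,
so the last pin is never held longer than the loop itself.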