On Thu, Feb 22, 2024 at 6:25 AM James Coleman <jtc...@gmail.com> wrote:
>
> > ...I think the current approach is just plain dead, because of this
> > issue. We can't take an approach that creates an unbounded number of
> > unclear reentrancy issues and then insert hacks one by one to cure
> > them (or hack around them, more to the point) as they're discovered.
> >
> > The premise has to be that we only allow logging the query plan at
> > points where we know it's safe, rather than, as at present, allowing
> > it in places that are unsafe and then trying to compensate with code
> > elsewhere. That's not likely to ever be as stable as we want
> > PostgreSQL to be.
>
> This is potentially a bit of a wild idea, but I wonder if having some
> kind of argument to CHECK_FOR_INTERRUPTS() signifying we're in
> "normal" as opposed to "critical" (using that word differently than
> the existing critical sections) would be worth it.
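If I understand the idea correctly, it would look roughly like this
(a hypothetical sketch; CHECK_FOR_INTERRUPTS_EXT, ProcessInterruptsExt
and the InterruptContext enum are made-up names, LogQueryPlanPending
stands in for whatever pending flag the patch uses, and only
InterruptPending is the existing global):

    #include "postgres.h"
    #include "miscadmin.h"

    /* Made-up flag standing in for the patch's pending-request flag. */
    extern volatile sig_atomic_t LogQueryPlanPending;

    typedef enum InterruptContext
    {
        INTERRUPT_CONTEXT_NORMAL,   /* arbitrary interrupt actions are safe */
        INTERRUPT_CONTEXT_CRITICAL  /* only minimal actions are safe */
    } InterruptContext;

    extern void ProcessInterruptsExt(InterruptContext ctx);

    #define CHECK_FOR_INTERRUPTS_EXT(ctx) \
    do { \
        if (InterruptPending) \
            ProcessInterruptsExt(ctx); \
    } while (0)

    /* Existing call sites keep today's conservative behavior. */
    #define CHECK_FOR_INTERRUPTS() \
        CHECK_FOR_INTERRUPTS_EXT(INTERRUPT_CONTEXT_CRITICAL)

    void
    ProcessInterruptsExt(InterruptContext ctx)
    {
        /* ... existing ProcessInterrupts() logic ... */

        /* Only act on a plan-logging request at a "normal" call site. */
        if (LogQueryPlanPending && ctx == INTERRUPT_CONTEXT_NORMAL)
        {
            LogQueryPlanPending = false;
            /* build and log the EXPLAIN output here */
        }
    }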
My hunch is that this will end up being a maintenance burden, since every
caller has to decide (carefully) whether the call happens under normal
conditions or not. Developers will tend to take the safe approach and flag
calls as critical. More importantly, what's normal for one interrupt
action may be critical for another, and vice versa. Whether the approach
is useful depends on how easy it is to pin down the definition of
"normal".

Here's an alternative: if a query executes for longer than a user-defined
threshold (a session-level GUC? or the same value as the auto_explain
parameter), the executor proactively prepares an EXPLAIN output and keeps
it handy in case it is asked for. It can do so at a "known" safe place and
time rather than at some random time and location. The extra time spent
creating the EXPLAIN output should not be noticeable in a long-running
query. The EXPLAIN output could be saved in pg_stat_activity or a similar
place, which would avoid signaling the backend altogether. A rough sketch
of what I have in mind follows.
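Something along these lines, called from a known safe point in the
executor (proactive_explain_min_duration, pgstat_report_query_plan() and
MaybeReportQueryPlan() are made-up names; the ExplainState and timestamp
routines are existing APIs):

    #include "postgres.h"
    #include "access/xact.h"
    #include "commands/explain.h"
    #include "executor/execdesc.h"
    #include "utils/timestamp.h"

    /* Assumed GUC: threshold in ms, -1 disables the feature. */
    static int  proactive_explain_min_duration = -1;

    /* Would need to be reset at the start of each statement. */
    static bool plan_reported = false;

    /* Assumed reporting function, e.g. backing a pg_stat_activity column. */
    extern void pgstat_report_query_plan(const char *plan_text);

    static void
    MaybeReportQueryPlan(QueryDesc *queryDesc)
    {
        ExplainState *es;

        if (plan_reported || proactive_explain_min_duration < 0)
            return;

        /* Has the statement been running longer than the threshold? */
        if (!TimestampDifferenceExceeds(GetCurrentStatementStartTimestamp(),
                                        GetCurrentTimestamp(),
                                        proactive_explain_min_duration))
            return;

        /* Produce the plan text at a place and time we know are safe. */
        es = NewExplainState();
        es->format = EXPLAIN_FORMAT_TEXT;
        ExplainBeginOutput(es);
        ExplainPrintPlan(es, queryDesc);
        ExplainEndOutput(es);

        /* Publish it so another backend can read it without signaling us. */
        pgstat_report_query_plan(es->str->data);
        plan_reported = true;
    }

A monitoring session could then just read the stored text instead of
interrupting the running backend at an arbitrary point.

-- 
Best Wishes,
Ashutosh Bapat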