I wrote:
> I'm trying to reproduce this now, but it's sounding pretty plausible.
Yeah, that's definitely it. I was able to reproduce the failure
semi-reliably (every two or three tries) after adding
-DRELCACHE_FORCE_RELEASE -DCATCACHE_FORCE_RELEASE and inserting a
"pg_sleep(1)" just after the manual VACUUM.
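
For anyone who wants to repeat the experiment: rebuild with
CPPFLAGS="-DRELCACHE_FORCE_RELEASE -DCATCACHE_FORCE_RELEASE", then drop a
sleep into the regression script right after the existing manual vacuum.
A minimal sketch, assuming the script involved is the one that builds
atest12 and that the manual command is a plain VACUUM ANALYZE (both are
assumptions):

    -- existing manual vacuum in the test script (exact statement assumed)
    VACUUM ANALYZE atest12;
    -- inserted for reproduction: widen the window so a concurrent
    -- autovacuum worker has time to come along and redo the stats
    SELECT pg_sleep(1);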
David Rowley writes:
> On Tue, 9 Jun 2020 at 15:41, Tom Lane wrote:
>> Hmm ... that's a plausible theory, perhaps. I forget: does autovac
>> recheck, after acquiring the requisite table lock, whether the table
>> still needs to be processed?
> It does, but I wondered if there was a window after the manual vacuum
> in which an autovacuum could still sneak in and rebuild the stats.
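
If that window is real, the textbook way to take autovacuum out of the
picture for a single regression table is a per-table reloption; a minimal
sketch (whether that would be the right fix here is a separate question):

    -- keep the autovacuum daemon away from this table so that only the
    -- manual VACUUM/ANALYZE ever rebuilds its statistics
    ALTER TABLE atest12 SET (autovacuum_enabled = off);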
On Tue, 9 Jun 2020 at 15:41, Tom Lane wrote:
>
> David Rowley writes:
> > It does seem plausible, given how slow prion is, that autovacuum might
> > be triggered after the manual vacuum somehow and build stats with
> > just 1k buckets instead of 10k.
>
> Hmm ... that's a plausible theory, perhaps. I forget: does autovac
> recheck, after acquiring the requisite table lock, whether the table
> still needs to be processed?
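
A quick way to check whether that is what happened on a given run is to
compare the configured per-column target with when, and by which path, the
stats were last rebuilt; a rough diagnostic along these lines (nothing here
is specific to this failure, it just shows where to look):

    -- per-column target; -1 means "use default_statistics_target"
    SELECT attname, attstattarget
    FROM pg_attribute
    WHERE attrelid = 'atest12'::regclass AND attnum > 0 AND NOT attisdropped;

    -- did a manual ANALYZE or an autoanalyze run most recently?
    SELECT last_analyze, last_autoanalyze
    FROM pg_stat_user_tables
    WHERE relname = 'atest12';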
David Rowley writes:
> I see 0c882e52a did change the statistics targets on that
> table. The first failure was on the commit directly after that one.
> I'm not sure what instability Tom meant when he wrote "-- results
> below depend on having quite accurate stats for atest12".
See [1],
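
For reference, bumping a per-column statistics target and rebuilding the
stats looks like this; the column name and target value below are
illustrative, not necessarily what 0c882e52a actually used:

    -- raise the per-column target above the default (default_statistics_target)
    ALTER TABLE atest12 ALTER COLUMN a SET STATISTICS 10000;
    ANALYZE atest12;  -- rebuild pg_statistic at the new resolution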
On Tue, 9 Jun 2020 at 14:27, Thomas Munro wrote:
> Two recent failures show plan changes in RLS queries on master. Based
> on nearby comments, the chosen plan is being used to verify access (or
> lack of access) to row estimates, so I guess that means something
> could be amiss here. (Or it could be nothing.)
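
Plan-shape checks in the regression tests are typically written with
EXPLAIN (COSTS OFF), so any change in the underlying row estimates that
flips the chosen plan shows up as a diff against the expected output;
schematically it is a check of this form (the column and qual below are
placeholders, not the actual test query):

    -- the recorded output is the plan text itself, so an estimate change
    -- that turns, say, a seqscan into an index scan fails the test
    EXPLAIN (COSTS OFF) SELECT * FROM atest12 WHERE a = 1;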
Thomas Munro writes:
> Two recent failures show plan changes in RLS queries on master.
Yeah. I assume this is related to 0c882e52a, but I'm not sure how.
The fact that we've only seen it on prion (which runs
-DRELCACHE_FORCE_RELEASE -DCATCACHE_FORCE_RELEASE) is suggestive,
but it's not clear why that would matter.