Andres Freund <and...@anarazel.de> writes:
> On 2021-05-11 12:03:33 -0400, Tom Lane wrote:
>> In some recent threads I complained about how CLOBBER_CACHE_ALWAYS
>> test runs have gotten markedly slower over the past couple of release
>> cycles [1][2][3].

> I wonder if the best way to attack this in a more fundamental manner would be
> to handle nested invalidations different than we do today. Not just for
> CCA/CCR performance, but also to make invalidations easier to understand in
> general.

I spent some time thinking along those lines too, but desisted after
concluding that it would fundamentally break the point of CCA testing,
namely to be sure we survive when a cache flush occurs at
$any-random-point.  Sure, in practice it will not be the case that a
flush occurs at EVERY random point.  But I think if you try to optimize
away a rebuild at point B on the grounds that you just did one at
point A, you will fail to cover the scenario where flush requests
arrive at exactly points A and B.

> IMO the problem largely stems from eagerly rebuilding *all* relcache entries
> during invalidation processing.

Uh, we don't do that; we do it only for relations that are pinned, which
we know are in use.  What it looked like to me, in an admittedly cursory
bit of perf testing, was that most of the cycles were going into
fetching cache entries from catalogs over and over.  But that is hard
to avoid.

I did wonder for a bit about doing something like moving cache entries
to another physical place rather than dropping them.  I don't really
like that either, though, because then the behavior that CCA is testing
would not have much at all to do with real system behavior.

			regards, tom lane
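[Editorial note: the point-A/point-B argument above can be illustrated with a toy model. Everything below is invented for illustration (ToyCache, catalog_version, the coalescing rule); it is not PostgreSQL code. It sketches a harness that injects a flush at every cache access, versus a hypothetical "optimized" harness that skips the rebuild at B because one just happened at A.]

```python
# Toy model: why coalescing consecutive rebuilds weakens flush-at-every-point
# testing.  All names here are invented for illustration, not PostgreSQL code.

catalog_version = 0  # stand-in for shared catalog state changed by other backends

class ToyCache:
    def __init__(self):
        self.entry = None
        self.just_rebuilt = False

    def access(self, coalesce):
        # The test harness injects a cache flush at every access point.
        # The hypothetical optimization skips the rebuild when the entry
        # was already rebuilt at the immediately preceding point.
        if coalesce and self.just_rebuilt:
            self.just_rebuilt = False      # skip: rebuilt at the previous point
        else:
            self.entry = catalog_version   # re-fetch the entry from the "catalog"
            self.just_rebuilt = True
        return self.entry

def run(coalesce):
    """Entry values observed at access points A and B when a concurrent
    catalog change lands between them."""
    global catalog_version
    catalog_version = 1
    cache = ToyCache()
    a = cache.access(coalesce)   # flush injected at point A
    catalog_version = 2          # concurrent DDL between A and B
    b = cache.access(coalesce)   # flush injected at point B
    return (a, b)

print(run(coalesce=False))  # (1, 2): the entry visibly changes between A and B
print(run(coalesce=True))   # (1, 1): that scenario is never exercised
```

With coalescing, the "flush requests arrive at exactly points A and B" scenario is simply never tested: code between A and B never sees the entry change underneath it.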