Amit Kapila <amit.kapil...@gmail.com> writes:
> On Fri, Jan 10, 2020 at 9:31 AM Amit Kapila <amit.kapil...@gmail.com> wrote:
>> ... So, we have the below options:
>> (a) remove this test entirely from all branches and once we found the
>> memory leak problem in back-branches, then consider adding it again
>> without max_files_per_process restriction.
>> (b) keep this test without max_files_per_process restriction till v11
>> and once the memory leak issue in v10 is found, we can back-patch to
>> v10 as well.
> I am planning to go with option (a) and attached are patches to revert
> the entire test on HEAD and back branches.  I am planning to commit
> these by Tuesday unless someone has a better idea.

Makes sense to me.  We've certainly found out something interesting
from this test, but not what it was expecting to find ;-).

I think that there could be scope for two sorts of successor tests:

* I still like my idea of directly constraining max_safe_fds through
some sort of debug option.  But to my mind, we want to run the entire
regression suite with that restriction, not just one small test.

* The seeming bug in v10 suggests that we aren't testing large enough
logical-decoding cases, or at least aren't noticing leaks in that
area.  I'm not sure what a good design is for testing that.  I'm not
thrilled with just using a larger (and slower) test case, but it's
not clear to me how else to attack it.

			regards, tom lane