On 11/2/18, Robert Haas <robertmh...@gmail.com> wrote:
> On Fri, Nov 2, 2018 at 10:07 AM Tom Lane <t...@sss.pgh.pa.us> wrote:
>> Robert Haas <robertmh...@gmail.com> writes:
>> > That's not what I'm saying.  If we don't have the FSM, we have to
>> > check every page of the table.  If there's a workload where that
>> > happens a lot on a table that is just under the size threshold for
>> > creating the FSM, then it's likely to be a worst case for this patch.
>>
>> Hmm, you're assuming something not in evidence: why would that be the
>> algorithm?
>
> I think it's in evidence, in the form of several messages mentioning a
> flag called try_every_block.
Correct.

> Just checking the last page of the table doesn't sound like a good
> idea to me.  I think that will just lead to a lot of stupid bloat.  It
> seems likely that checking every page of the table is fine for npages
> <= 3, and that would still be a win in a very significant number of
> cases, since lots of instances have many empty or tiny tables.  I was
> merely reacting to the suggestion that the approach should be used for
> npages <= 32; that threshold sounds way too high.

To be clear, no one suggested that. The patch has always had 8 or 10 as
a starting point, and I've mentioned 4 and 8 as good possibilities based
on the COPY tests upthread.

It turned out I didn't need to recompile a bunch of binaries with
different thresholds. All I had to do was compile with a threshold much
larger than required, and then test inserting into X number of pages to
simulate a threshold of X. I increased X until I saw a regression.
That's where the 32 came from; sorry if that was misleading, but in my
head it was obvious.

I'd be happy to test other scenarios. I'm not sure how to test redo --
it seems more difficult to get meaningful results there than in the
normal case.

-John Naylor
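
P.S. For anyone skimming the thread, here is a rough, self-contained
sketch of the behavior being discussed. This is not the patch code: the
threshold value, the struct, and the function names are made up for
illustration. The point is only that a relation at or below the
threshold has no FSM, so a backend looking for free space tries every
block before deciding to extend the relation.

/*
 * Toy model of the "no FSM below a size threshold" idea.  Names and the
 * threshold value are illustrative only, not taken from the patch.
 */
#include <stdio.h>

#define FSM_CREATION_THRESHOLD 4	/* illustrative value */

typedef struct
{
	int		nblocks;
	int		freespace[32];		/* bytes free on each page (toy numbers) */
} ToyRelation;

/* Return a block with at least 'needed' bytes free, or -1 to extend. */
static int
find_free_block(const ToyRelation *rel, int needed)
{
	if (rel->nblocks <= FSM_CREATION_THRESHOLD)
	{
		/* No FSM: the "try every block" case -- check each page in turn. */
		for (int blk = 0; blk < rel->nblocks; blk++)
		{
			if (rel->freespace[blk] >= needed)
				return blk;
		}
		return -1;				/* nothing fits; caller extends the relation */
	}

	/* Larger relations would consult the FSM here; elided in this toy. */
	return -1;
}

int
main(void)
{
	ToyRelation rel = {.nblocks = 3, .freespace = {0, 120, 400}};

	printf("block for 100-byte tuple: %d\n", find_free_block(&rel, 100));
	printf("block for 500-byte tuple: %d\n", find_free_block(&rel, 500));
	return 0;
}

Compiled and run, it reports block 1 for the 100-byte request and -1
(extend) for the 500-byte one, since no page has that much room.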