On Sun, Mar 31, 2019 at 8:20 AM Peter Geoghegan <p...@bowt.ie> wrote:
> On Sat, Mar 30, 2019 at 8:44 AM Robert Haas <robertmh...@gmail.com> wrote:
> > Overall I'm inclined to think that we're making the same mistake here
> > that we did with work_mem, namely, assuming that you can control a
> > bunch of different prefetching behaviors with a single GUC and things
> > will be OK. Let's just create a new GUC for this and default it to 10
> > or something and go home.
>
> I agree. If you invent a new GUC, then everybody notices, and it
> usually has to be justified quite rigorously. There is a strong
> incentive to use an existing GUC, if only because the problem that
> this creates is harder to measure than the supposed problem that it
> avoids. This can perversely work against the goal of making the system
> easy to use. Stretching the original definition of a GUC is bad.
>
> I take issue with the general assumption that not adding a GUC at
> least makes things easier for users. In reality, it depends entirely
> on the situation at hand.
I'm not sure I understand why this is any different from the bitmap
heapscan case, though, or in fact why we are adding 10 in this case. In
both cases we will soon be reading the referenced buffers, and it makes
sense to queue up prefetch requests for the blocks if they aren't
already in shared buffers.

In both cases, the number of prefetch requests we want to send to the
OS is somehow linked to the number of IO requests we think the OS can
handle concurrently (since that's one factor determining how fast it
drains them), but it's not necessarily the same as that number, AFAICS.
It's useful to queue some number of prefetch requests even if you have
no IO concurrency at all (a single old-school spindle), just because
the OS will chew on that queue in the background while we're also doing
stuff, which is probably what that "+ 10" is expressing. But that seems
to apply to bitmap heapscan too, doesn't it?

--
Thomas Munro
https://enterprisedb.com