On Mon, Apr 20, 2015 at 04:19:22PM -0300, Alvaro Herrera wrote:
> Bruce Momjian wrote:
> 
> > I think the limit has to be in terms of a percentage of the table size.
> > For example, if we do one SELECT on a table with all non-dirty pages, it
> > would be good to know that 5% of the pages were pruned --- that tells me
> > that another 19 SELECTs will totally prune the table, assuming no future
> > writes.
> 
> This seems simple to implement: keep two counters, where the second one
> is pages we skipped cleanup in.  Once that counter hits SOME_MAX_VALUE,
> reset the first counter so that further 5 pages will get HOT pruned.  5%
> seems a bit high though.  (In Simon's design, SOME_MAX_VALUE is
> essentially +infinity.)
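For concreteness, here is a minimal standalone sketch of how I read that
two-counter scheme.  The names (PruneBudget, prune_budget_allows, PRUNE_BURST,
SKIP_LIMIT) are invented for illustration, and SKIP_LIMIT = 95 is picked only
so that the burst of 5 works out to roughly 5% of the prunable pages a scan
visits; it is not meant as actual backend code:

/*
 * Standalone sketch (not PostgreSQL code) of the two-counter throttle:
 * prune a small burst of pages, then skip prunable pages until the skip
 * counter reaches a limit, then reset and allow the next burst.
 */
#include <stdbool.h>
#include <stdio.h>

#define PRUNE_BURST 5   /* pages a scan may HOT-prune before backing off */
#define SKIP_LIMIT  95  /* prunable pages to skip before the next burst */

typedef struct PruneBudget
{
    int pruned;         /* pages pruned in the current burst */
    int skipped;        /* prunable pages declined since that burst */
} PruneBudget;

/* Called once per prunable page; returns true if the scan may prune it. */
static bool
prune_budget_allows(PruneBudget *b)
{
    if (b->pruned < PRUNE_BURST)
    {
        b->pruned++;
        return true;                /* still inside the burst */
    }

    if (++b->skipped >= SKIP_LIMIT)
    {
        /* skipped enough pages: reset so the next burst gets pruned */
        b->pruned = 0;
        b->skipped = 0;
    }
    return false;                   /* leave this page for a later scan */
}

int
main(void)
{
    PruneBudget budget = {0, 0};
    int         prunable = 1000;    /* prunable pages seen by one scan */
    int         pruned = 0;

    for (int page = 0; page < prunable; page++)
    {
        if (prune_budget_allows(&budget))
            pruned++;
    }

    /* prints "pruned 50 of 1000 pages", i.e. the ~5% being discussed */
    printf("pruned %d of %d pages\n", pruned, prunable);
    return 0;
}

With those made-up numbers a scan that sees 1000 prunable pages ends up
pruning 50 of them, i.e. the 5% figure under discussion; shrinking
PRUNE_BURST or growing SKIP_LIMIT lowers that percentage.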
Oh, I pulled 5% out of the air.  Thinking of a SELECT-only workload, which
would be our worst case, I was trying to estimate how many SELECTs walking
HOT update chains it would take to be slower than generating the WAL to
prune the page.  I see the percentage as something we could reasonably
balance, while a fixed page count couldn't be analyzed that way.

-- 
  Bruce Momjian  <br...@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + Everyone has their own god. +