Heikki Linnakangas <heikki.linnakan...@enterprisedb.com> wrote:

> Kevin Grittner wrote:
>> Tom Lane <t...@sss.pgh.pa.us> wrote:
>>> So for example we might try resetting the search to the start of
>>> the relation with probability 0.01.
>>
>> If I understand the heuristic you propose, and my math skills
>> haven't eroded too badly from lack of use, every 229 spots
>> considered would cause a 90% chance of reset. That means that the
>> odds of getting past 50,000 spots (the number of pages with
>> available free space at which I generally start to get worried)
>> without resetting is about 1 in 10^218 -- which is a risk I'm
>> willing to accept. ;-)
>
> The FSM currently uses a clock sweep kind of algorithm to hand out
> pages, and the clock hand is reset to 0 at every vacuum. The purpose
> of resetting the clock hand is precisely to bias towards
> lower-numbered pages, to allow truncation later on.
>
> That's a bit of an over-simplification, though; there's actually a
> separate "clock hand" on each FSM page (next_slot pointer).
>
> We probably could do with more bias. For example, we still prefer
> pages close to the page we last inserted to, by searching for free
> space in the same FSM page first, before starting the search from
> the top of the tree. And of course, each backend tries first to
> insert to the last page it inserted to before even consulting the
> FSM. A mode to disable those behaviors might indeed be useful, along
> with the random reset.

That fleshes out the description given by Tom, but doesn't really
make me think I misunderstood the heuristic or miscalculated the odds
of getting far from the start with a 1% probability of a reset on
each advance of the sweep. (Of course, it's possible I'm being dense
and taking statements to mean what I expect, despite evidence to the
contrary.)

The way I figure it, if there is a 0.01 chance of resetting the sweep
on each advance, then there is a 0.99 chance of continuing the sweep
from the last position.
0.99^229 is about 0.1, which means there is a 10% chance of not
having reset after that many attempts to advance. Every further 229
attempts multiplies that by 0.1 again, so after 229*6 attempts (for
example) the odds of not having reset are about one in a million. Am
I saying something stupid here? (I used to be good at these sorts of
calculations....)

-Kevin
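For what it's worth, the arithmetic above is easy to check directly. This is a minimal sketch (not anything from the FSM code itself) that just evaluates the probabilities under the stated assumption of a 0.01 reset chance per advance of the sweep:

```python
import math

p_reset = 0.01          # assumed probability of resetting the sweep on each advance
p_cont = 1 - p_reset    # probability of continuing from the last position

# Chance of advancing 229 times without a single reset: about 0.1
p_229 = p_cont ** 229
print(f"P(no reset in 229 steps)   = {p_229:.4f}")

# Each further 229 steps multiplies by ~0.1 again, so 229*6 steps
# leaves roughly a one-in-a-million chance of never resetting.
p_1374 = p_cont ** (229 * 6)
print(f"P(no reset in 1374 steps)  = {p_1374:.2e}")

# And getting past 50,000 spots without a reset: about 1 in 10^218.
log10_p_50k = 50_000 * math.log10(p_cont)
print(f"log10 P(no reset in 50,000 steps) = {log10_p_50k:.0f}")
```

Running it confirms the figures quoted above: roughly 0.10, roughly 1e-6, and an exponent of about -218 for the 50,000-page case.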
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers