"Kevin Grittner" <kevin.gritt...@wicourts.gov> writes: > The way I figure it, if there is a 0.01 chance to reset the sweep, > then there's a 0.99 percent chance to continue the sweep from the last > position. 0.99^229 is about 0.1, which means there is a 10% chance > not to have reset after that many attempts to advance.
Right, so the odds would be that a backend will confine its insertion
attempts to the first 229 pages containing a usable amount of free
space.  As long as the number of backends concurrently inserting into
the relation is well under 229, this seems perfectly fine.  (Hm, so we
might want to make the probability depend on max_connections?)

A possible downside of keeping things compact this way is that you'd
probably get a longer average search distance, because of all the early
pages tending to remain full.  Maybe what we want is some bias against
inserting in the last half or quarter of the table, or some such rule,
rather than necessarily heading for the start of the relation.

			regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
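[Editor's note: the bias rule suggested above might be sketched as below. All names here (next_sweep_page, RESET_PROB) are hypothetical illustrations, not PostgreSQL APIs; on a reset, the sweep jumps to a random page in the first three quarters of the relation instead of always to page zero.]

```c
/* Sketch of a biased sweep reset: with probability RESET_PROB, jump
 * back to a random page in the first 3/4 of the relation (avoiding
 * the tail); otherwise keep advancing from the current position.
 * Hypothetical illustration only, not backend code. */
#include <stdlib.h>

#define RESET_PROB 0.01

static unsigned
next_sweep_page(unsigned cur_page, unsigned nblocks)
{
    if ((double) rand() / RAND_MAX < RESET_PROB)
        return (unsigned) rand() % (nblocks * 3 / 4);  /* biased reset */
    return (cur_page + 1) % nblocks;                   /* keep sweeping */
}
```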