There was some discussion in pgsql-performance about the problem that the live- and dead-tuple counts that ANALYZE reports to the stats collector don't properly reflect insert-in-progress tuples:
http://archives.postgresql.org/pgsql-performance/2007-11/msg00225.php

I proposed a patch here:
http://archives.postgresql.org/pgsql-patches/2007-11/msg00169.php
to switch ANALYZE over to using HeapTupleSatisfiesVacuum() to determine tuple status, which would at least give it awareness of tuples that are in-doubt because of concurrent transactions.
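For reference, here is a standalone sketch of the kind of classification this buys us. It is not the patch itself: the enum mirrors the HTSV_Result states from utils/tqual.h, but classify() and the sample page contents are invented for illustration.

#include <stdio.h>

/* Stand-in for the real HTSV_Result enum; only the state names match. */
typedef enum
{
    HEAPTUPLE_DEAD,                 /* tuple is dead and removable */
    HEAPTUPLE_LIVE,                 /* tuple is definitely live */
    HEAPTUPLE_RECENTLY_DEAD,        /* dead, but not removable yet */
    HEAPTUPLE_INSERT_IN_PROGRESS,   /* inserting xact still open */
    HEAPTUPLE_DELETE_IN_PROGRESS    /* deleting xact still open */
} HTSV_Result;

/*
 * Where a snapshot-based visibility test can only say "visible or not",
 * the vacuum-style test exposes the two in-doubt states explicitly, so
 * the caller can apply whatever accounting rule it likes to them.
 */
static void
classify(const HTSV_Result *tuples, int ntuples,
         int *live, int *dead, int *indoubt)
{
    int     i;

    for (i = 0; i < ntuples; i++)
    {
        switch (tuples[i])
        {
            case HEAPTUPLE_LIVE:
                (*live)++;
                break;
            case HEAPTUPLE_DEAD:
            case HEAPTUPLE_RECENTLY_DEAD:
                (*dead)++;
                break;
            case HEAPTUPLE_INSERT_IN_PROGRESS:
            case HEAPTUPLE_DELETE_IN_PROGRESS:
                (*indoubt)++;       /* a concurrent xact owns the outcome */
                break;
        }
    }
}

int
main(void)
{
    HTSV_Result page[] = {
        HEAPTUPLE_LIVE, HEAPTUPLE_LIVE, HEAPTUPLE_DEAD,
        HEAPTUPLE_INSERT_IN_PROGRESS, HEAPTUPLE_DELETE_IN_PROGRESS
    };
    int     live = 0, dead = 0, indoubt = 0;

    classify(page, (int) (sizeof(page) / sizeof(page[0])),
             &live, &dead, &indoubt);
    printf("live=%d dead=%d in-doubt=%d\n", live, dead, indoubt);
    return 0;
}

The open question is what to do with the in-doubt bucket, which is where it gets interesting.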
On further thought, though, that's not the whole story, and in fact VACUUM itself isn't doing very well at accounting for in-doubt tuples. The current implementation is that whatever live- and dead-tuple totals are arrived at by a VACUUM or ANALYZE are sent to the stats collector and simply overwrite its counters on arrival. Meanwhile, the transaction that is responsible for an in-doubt tuple will send a stats message to increment either the live-tuple or dead-tuple count as appropriate when it commits or aborts. If that happens before the VACUUM or ANALYZE completes, the increment will get overwritten by VACUUM/ANALYZE's total; if afterwards, the increment will get added onto the total. So, ideally, we'd count the state change as already done if we knew the other transaction would commit first, and otherwise not.

Since we don't know whether the other transaction will commit or abort, much less when, it seems hopeless to have any really accurate accounting for in-doubt tuples. We could avoid the whole problem if we gave up on having VACUUM/ANALYZE reset the stats counters (instead, VACUUM would just send an incremental dead-tuple-count reduction, and ANALYZE would do nothing), as indeed was already proposed by Itagaki-san. But I am still convinced that that cure would be worse than the disease --- if the counters are driven entirely by incremental updates, then there is no way to correct the errors that will inevitably creep in, and the accumulation of errors will allow the counts to diverge arbitrarily far from reality. I'm happier with the idea that the error in the counts can be bounded by the number of in-doubt tuples seen by the last VACUUM/ANALYZE.

So, having said all that, we need to devise heuristic rules that minimize the error. In that light, I suggest that the accounting rules assume that the transactions responsible for in-doubt tuples will finish after the VACUUM/ANALYZE finishes. That probably sounds wrong, because in most of the sorts of applications we think about, updating transactions are short and are likely to finish before the VACUUM/ANALYZE does. But what I'm thinking is that a short transaction by definition can't change very many rows. The cases that can seriously hurt the accuracy of our counts are long-running bulk inserts or updates, and for those it seems entirely likely that ANALYZE in particular could finish first.

Given that assumption, the rule should be to not count INSERT_IN_PROGRESS tuples at all --- they will be added to the appropriate count when the inserting transaction commits or aborts. And DELETE_IN_PROGRESS tuples should be counted as live --- if the deleting transaction commits, it will increment the dead-tuple count at that time, and if it aborts, the tuple remains live.

I feel fairly comfortable with this analysis for ANALYZE, and the patch I posted yesterday can easily be adjusted to accommodate it. However, what of VACUUM? As that code stands, every non-removable tuple (including RECENTLY_DEAD ones) is counted as live, and the dead-tuple count gets reset to zero. That seems clearly bogus. But the other-transaction-commits-second hypothesis seems a good bit more dubious for VACUUM than it is for ANALYZE. Should we attempt to adjust VACUUM's accounting as well, or leave it for 8.4? For that matter, should adjusting ANALYZE be left for 8.4?

Thoughts?

regards, tom lane
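PS: For concreteness, here is a toy model of the overwrite-versus-increment race described at the top, assuming ANALYZE saw exactly one DELETE_IN_PROGRESS tuple and, per the proposed rule, reported it as live (so dead = 0). Nothing here is pgstat code --- report_total() and deleter_commit() are invented stand-ins for the two kinds of stats messages.

#include <stdio.h>

/* VACUUM/ANALYZE report: overwrites the collector's counter outright. */
static void
report_total(int *dead_count, int analyze_total)
{
    *dead_count = analyze_total;
}

/* Committing deleter: sends an incremental dead-tuple increment. */
static void
deleter_commit(int *dead_count)
{
    (*dead_count)++;
}

int
main(void)
{
    int     commit_after = 0;
    int     commit_before = 0;

    /* Case 1: the deleter commits after ANALYZE's report arrives. */
    report_total(&commit_after, 0);
    deleter_commit(&commit_after);      /* ends at dead = 1: correct */

    /* Case 2: the deleter commits first; its increment is overwritten. */
    deleter_commit(&commit_before);
    report_total(&commit_before, 0);    /* ends at dead = 0: off by one */

    printf("deleter commits second: dead = %d\n", commit_after);
    printf("deleter commits first:  dead = %d\n", commit_before);
    return 0;
}

If the deleter really does commit second, the collector converges on the right count; if it commits first, the count is off by exactly the one in-doubt tuple --- which is the bounded-error behavior argued for above.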