The more I think about this vacuum i/o problem, the more I think we have it wrong. The added i/o from vacuum really ought not be any worse than a single full table scan. And there's probably the occasional query doing full table scans in those systems already.
For the folks having this issue: if you run "select count(*) from bigtable", is there as big a hit to transaction performance? And does the vacuum performance hit kick in right away, or only after it's been running for a bit?
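For what it's worth, a rough way to compare the two (just a sketch, assuming psql and that the table really is called bigtable):

    -- time a forced sequential read of the whole table
    \timing on
    SELECT count(*) FROM bigtable;

    -- then, separately, time a plain vacuum of the same table
    VACUUM VERBOSE bigtable;

While each one runs, watch transaction latency from the application side (or iostat/vmstat on the server) and see whether the hit is comparable, and whether it shows up immediately or only after a while.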
Is the vacuum cost really the same as a full table scan ( select count(*) )? Why not do a sort of "vacuum" whenever a full table scan happens ( during a simple select that involves a full table scan, for example )?
Regards,
Gaetano Mendola