Robert Haas wrote:
> Now, in the case where you are setting an overall limit, there is at
> least an argument to be made that you can determine the overall rate
> of autovacuum-induced I/O activity that the system can tolerate, and
> set your limits to stay within that budget, and then let the system
> decide how to divide that I/O up between workers.  But if you're
> overriding a per-table limit, I don't really see how that holds any
> water.  The system I/O budget doesn't go up just because one
> particular table is being vacuumed rather than any other.  The only
> plausible use case for setting a per-table rate that I can see is when
> you actually want the system to use that exact rate for that
> particular table.
Yeah, this makes sense to me too -- at least as long as you only have
one such table.  But if you have more than one, and by bad luck they
happen to be vacuumed concurrently, they will eat a larger share of
your I/O bandwidth budget than you anticipated, which you might not
like.  That is why I am saying those should be scaled down too, to
avoid such peaks.

Now, my proposal above mentioned subtracting the speed of tables under
the limit from the speed of those above the limit; maybe we can just
rip that part out.  Then we end up with the behavior you want, namely
that a fast table is vacuumed at exactly its configured rate when it's
the only fast table being vacuumed; and also with what I am saying,
which is that if you have two of them, the two balance the I/O
consumption among themselves (but not with the slow ones).

Since figuring out this subtraction is the only thing missing from the
patch I posted, ISTM we could have something committable with very
little extra effort if we agree on this.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
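
[For illustration, a minimal sketch of the balancing rule described
above.  This is not the actual autovacuum.c code: the WorkerInfo
struct, its field names, and the balance_cost() helper are hypothetical
names invented for this sketch.  The rule it encodes is the one from
the discussion: workers on tables with an explicit per-table cost limit
balance I/O only among themselves, while the remaining workers divide
the global limit, with no subtraction between the two groups.]

#include <stdbool.h>

#define Max(a, b) ((a) > (b) ? (a) : (b))

typedef struct WorkerInfo
{
	int		wi_cost_limit_base;		/* limit configured for this table (or the GUC) */
	bool	wi_has_table_override;	/* true if a per-table cost limit is set */
	int		wi_cost_limit;			/* effective limit after balancing */
} WorkerInfo;

/*
 * Rebalance cost limits across running autovacuum workers.  Tables with a
 * per-table override share I/O only among themselves; everyone else splits
 * the system-wide budget.  One possible division rule, for illustration.
 */
void
balance_cost(WorkerInfo *workers, int nworkers, int global_limit)
{
	int		nfast = 0;
	int		nslow = 0;

	for (int i = 0; i < nworkers; i++)
	{
		if (workers[i].wi_has_table_override)
			nfast++;
		else
			nslow++;
	}

	for (int i = 0; i < nworkers; i++)
	{
		if (workers[i].wi_has_table_override)
		{
			/*
			 * Fast tables balance only among themselves: a single fast
			 * table runs at exactly its configured rate; two concurrent
			 * ones split it, and so on.
			 */
			workers[i].wi_cost_limit =
				Max(workers[i].wi_cost_limit_base / nfast, 1);
		}
		else
		{
			/*
			 * The remaining workers divide the global budget among
			 * themselves, unaffected by what the fast tables are doing.
			 */
			workers[i].wi_cost_limit =
				Max(global_limit / nslow, 1);
		}
	}
}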