On 27 September 2014 03:55, Jeff Janes <jeff.ja...@gmail.com> wrote:
> On Fri, Sep 26, 2014 at 11:47 AM, Alvaro Herrera <alvhe...@2ndquadrant.com>
> wrote:
>>
>> Gavin Flower wrote:
>>
>> > Curious: would it be both feasible and useful to have multiple
>> > workers process a 'large' table, without complicating things too
>> > much? They could each start at a different position in the file.
>>
>> Feasible: no. Useful: maybe, we don't really know. (You could just as
>> well have a worker at double the speed, i.e. double vacuum_cost_limit.)
>
> vacuum_cost_delay is already 0 by default, so unless you have changed
> that, vacuum_cost_limit does not take effect under vacuumdb.
>
> It is pretty easy for vacuum to be CPU-limited, and even easier for
> analyze to be CPU-limited (it does a lot of sorting). I think analyze is
> the main use case for this patch: shortening the pg_upgrade window. At
> least, that is how I anticipate using it.
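[For context, a minimal sketch of the behaviour Jeff describes, assuming
stock defaults; the database name "mydb" is illustrative, and the final
commented-out line shows the patch's *proposed* -j option, not something
that existed at the time:]

    psql -d mydb -c "SHOW vacuum_cost_delay"   # "0" by default: cost-based throttling is off
    psql -d mydb -c "SHOW vacuum_cost_limit"   # "200", but ignored while the delay is 0

    # Only once the delay is nonzero does Alvaro's "double
    # vacuum_cost_limit" lever do anything:
    psql -d mydb -c "ALTER SYSTEM SET vacuum_cost_delay = '10ms'"
    psql -d mydb -c "SELECT pg_reload_conf()"

    # Jeff's use case -- shortening the post-pg_upgrade analyze step --
    # with the patch's proposed parallel option (hypothetical here):
    # vacuumdb --analyze-only -j 4 -d mydb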
I've been trying to review this thread with the thought "what does this
give me?". I am keen to encourage contributions and keen to extend our
feature set, but I do not wish to complicate our code base. Dilip's work
does seem to be of good quality; what I question is whether we want this
feature at all.

This patch seems to allow me to run multiple VACUUMs at once. But I can
already do that with autovacuum. Is there anything this patch can do that
cannot already be done with autovacuum?

--
Simon Riggs                   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
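[A hedged sketch of the autovacuum alternative Simon refers to: its
concurrency is governed by the worker count. The value below is
illustrative, not a recommendation, and changing it requires a server
restart:]

    psql -c "SHOW autovacuum_max_workers"                  # 3 by default
    psql -c "ALTER SYSTEM SET autovacuum_max_workers = 6"  # takes effect only after restart
    pg_ctl restart -D "$PGDATA"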