On Fri, Sep 26, 2014 at 11:47 AM, Alvaro Herrera <alvhe...@2ndquadrant.com>
wrote:

> Gavin Flower wrote:
>
> > Curious: would it be both feasible and useful to have multiple
> > workers process a 'large' table, without complicating things too
> > much?  They could each start at a different position in the file.
>
> Feasible: no.  Useful: maybe, we don't really know.  (You could just as
> well have a worker at double the speed, i.e. double vacuum_cost_limit).
>

vacuum_cost_delay is already 0 by default, which disables cost-based
throttling.  So unless you have changed that, vacuum_cost_limit has no
effect under vacuumdb.
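For reference, the settings in question live in postgresql.conf; a minimal
sketch of re-enabling cost-based throttling (the parameter names are real,
the values purely illustrative, not recommendations):

```
# postgresql.conf -- illustrative values only
vacuum_cost_delay = 10ms    # default 0, which disables cost-based throttling
vacuum_cost_limit = 2000    # default 200; cost credits spent before each sleep
```

With the delay at its default of 0, vacuum runs unthrottled and the limit is
never consulted.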

It is pretty easy for vacuum to be CPU limited, and even easier for analyze
to be CPU limited (it does a lot of sorting).  I think analyzing is the
main use case for this patch, to shorten the pg_upgrade window.  At least,
that is how I anticipate using it.
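For the pg_upgrade case, the invocation I have in mind would look roughly
like this (a sketch; it assumes the parallel-jobs option this patch adds and
a running cluster):

```
# Rebuild planner statistics after pg_upgrade, using several workers.
# --analyze-only skips the vacuum phase; --jobs is the option under discussion.
vacuumdb --all --analyze-only --jobs=4
```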

Cheers,

Jeff
