On 13.01.2012 22:50, Josh Berkus wrote:
Hackers,

It occurs to me that I would find it quite personally useful if the
vacuumdb utility were multiprocess-capable.

For example, just today I needed to manually analyze a database with
over 500 tables, on a server with 24 cores.  And I needed to know when
the analyze was done, because it was part of a downtime.  I had to
resort to a Python script.

I'm picturing doing this in the simplest way possible: get the list of
tables and indexes, divide them by the number of processes, and give
each child process its own list.
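
For illustration, here is a minimal Python sketch of that approach;
the database name "mydb" and the worker count are placeholders, and it
assumes psql is on the PATH. Rather than handing each child a fixed
list, this variant feeds workers from a shared pool, so a few huge
tables can't leave the other workers idle:

import subprocess
from concurrent.futures import ThreadPoolExecutor

DBNAME = "mydb"    # hypothetical; substitute the real database
WORKERS = 24       # e.g. one worker per core

def list_tables():
    # Ask the catalog for every ordinary user table.
    out = subprocess.run(
        ["psql", "-At", "-d", DBNAME, "-c",
         "SELECT quote_ident(schemaname) || '.' || quote_ident(tablename)"
         " FROM pg_tables"
         " WHERE schemaname NOT IN ('pg_catalog', 'information_schema')"],
        check=True, capture_output=True, text=True)
    return [t for t in out.stdout.splitlines() if t]

def analyze(table):
    # Each ANALYZE runs in its own backend via a separate psql call.
    subprocess.run(["psql", "-q", "-d", DBNAME, "-c", "ANALYZE " + table],
                   check=True)

if __name__ == "__main__":
    tables = list_tables()
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        list(pool.map(analyze, tables))
    print("analyzed %d tables with %d workers" % (len(tables), WORKERS))

The pool.map call blocks until every table is finished, which also
answers the "when is the analyze done" question during a downtime.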

Any reason not to hack on this for 9.3?


Hello,

I like the idea - but ...
I would prefer an option that lets the user say how many cores
vacuumdb may use. Something like --share-cores=N.

The default would be the machine's total core count, but users should
be able to say: OK, my machine has 24 cores, but I want vacuumdb to
use only 12 of them.

Especially at startups you find machines that aren't database-only;
often the database and the web server share a single machine.

You could also run several clusters on the same machine to offer your
business in different languages (one cluster per language). I have
already seen such a setup.

There may be other work competing for the cores, so it should be
possible for the user to decide how many cores to share with vacuumdb.
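
To make the semantics concrete, here is a small Python sketch of how
the option could behave (the flag name --share-cores is just the
proposal above; vacuumdb itself is written in C, so this only
illustrates the intent):

import argparse
import os

def decide_workers(argv=None):
    parser = argparse.ArgumentParser(prog="vacuumdb")
    parser.add_argument("--share-cores", type=int,
                        default=os.cpu_count(),
                        help="cores vacuumdb may use (default: all)")
    args = parser.parse_args(argv)
    # Clamp so 0 or an oversized value cannot do harm.
    return max(1, min(args.share_cores, os.cpu_count()))

if __name__ == "__main__":
    print("using %d worker(s)" % decide_workers())

So on the 24-core machine above, --share-cores=12 would leave half
the cores for the web server.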

Susanne


--
Dipl. Inf. Susanne Ebrecht - 2ndQuadrant
PostgreSQL Development, 24x7 Support, Training and Services
www.2ndQuadrant.com

