On Wed, Oct 16, 2019 at 6:50 AM Masahiko Sawada <sawada.m...@gmail.com> wrote:
>
> On Tue, Oct 15, 2019 at 6:33 PM Amit Kapila <amit.kapil...@gmail.com> wrote:
> >
> Attached updated patch set. 0001 patch introduces new index AM field
> amcanparallelvacuum. All index AMs except for gist sets true for now.
> 0002 patch incorporated the all comments I got so far.
>
I haven't studied the latest patch in detail, but it seems you are still assuming that all indexes will have the same amount of shared memory for index stats and are copying it in the same way. I thought we agreed that each index AM should do this on its own. The basic problem is that as of now we see this issue only with the gist index, but some other index AMs could have a similar problem.

Another major problem with the previous and this patch version is that the cost-based vacuum concept seems to be entirely broken. Basically, each parallel vacuum worker operates independently w.r.t. vacuum delay and cost. Assume that the overall I/O allowed for a vacuum operation is X, after which it will sleep for some time, reset the balance and continue. In the patch, each worker will be allowed to perform X before it sleeps, and there is also no coordination for the same with the master backend. This is somewhat similar to the memory usage problem, but a bit more tricky because here we can't easily split the I/O among the workers.

One idea could be that we somehow map the vacuum costing related parameters to the shared memory (dsm) which the vacuum operation is using and then allow workers to coordinate. This way the master and worker processes will have the same view of the cost balance and can act accordingly.

The other idea could be that we come up with some smart way to split the I/O among workers. Initially, I thought we could try something like what we do for autovacuum workers (see autovac_balance_cost), but I think that will require much more math. Before launching workers, we would need to compute the remaining I/O (the heap operation would have used some) after which we need to sleep and continue the operation, and then somehow split it equally across workers. Once the workers are finished, they would need to let the master backend know how much I/O they have consumed, and then the master backend can add it to its current I/O consumed.
I think this problem matters because the vacuum delay is useful for large vacuums, and this patch is trying to solve exactly that case, so we can't ignore it. I am not yet sure what the best solution to this problem is, but I think we need to do something about it.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com