On Thu, Oct 17, 2019 at 2:12 PM Masahiko Sawada <sawada.m...@gmail.com> wrote:
>
> On Thu, Oct 17, 2019 at 5:30 PM Amit Kapila <amit.kapil...@gmail.com> wrote:
> >
> > On Thu, Oct 17, 2019 at 12:21 PM Amit Kapila <amit.kapil...@gmail.com> wrote:
> > >
> > > On Thu, Oct 17, 2019 at 10:56 AM Masahiko Sawada <sawada.m...@gmail.com> wrote:
> > > >
> > > > I guess that the concept of vacuum delay contradicts the concept of
> > > > parallel vacuum. The concept of parallel vacuum is to use more
> > > > resources to make vacuum faster. Vacuum delay balances I/O during
> > > > vacuum in order to avoid I/O spikes caused by vacuum, whereas parallel
> > > > vacuum rather concentrates I/O in a shorter duration.
> > > >
> > >
> > > You have a point, but the way it is currently working in the patch
> > > doesn't make much sense.
> > >
> >
> > Another point in this regard is that the user anyway has an option to
> > turn off the cost-based vacuum. By default, it is anyway disabled.
> > So, if the user enables it we have to provide some sensible behavior.
> > If we can't come up with anything, then, in the end, we might want to
> > turn it off for a parallel vacuum and mention the same in the docs, but I
> > think we should try to come up with a solution for it.
>
> I finally got your point and now understand the need. The idea I
> proposed doesn't work well here.
>
> So you mean that all workers share the cost count, and if a parallel
> vacuum worker increases the cost and it reaches the limit, only that
> one worker sleeps? Is that okay even though the other parallel workers
> are still running, so the sleep might not help?
>

I agree with this point.  There is a possibility that some of the
workers that are doing heavy I/O continue to work, while other workers
that are doing very little I/O become the victims and have their
operations delayed unnecessarily.
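To make the concern concrete, here is a rough sketch of the naive
shared-balance scheme as I understand it.  This is only an illustration,
not what the patch does; the struct and function names below are made up
by me, only VacuumCostLimit, VacuumCostDelay and the atomics are existing
PostgreSQL facilities.

/*
 * Rough sketch only: all workers add their cost to one shared balance;
 * whichever worker happens to push it past VacuumCostLimit is the one
 * that sleeps.
 */
#include "postgres.h"
#include "miscadmin.h"          /* VacuumCostLimit, VacuumCostDelay */
#include "port/atomics.h"

typedef struct ParallelVacuumCostShared
{
    pg_atomic_uint32 cost_balance;  /* cost accumulated by all workers */
} ParallelVacuumCostShared;

static void
parallel_vacuum_cost_delay(ParallelVacuumCostShared *shared, uint32 my_cost)
{
    uint32      balance;

    /* publish this worker's recently accumulated cost */
    balance = pg_atomic_add_fetch_u32(&shared->cost_balance, my_cost);

    if (balance >= (uint32) VacuumCostLimit)
    {
        /*
         * Only the worker that crossed the limit sleeps.  A worker doing
         * very little I/O can cross the limit because of cost added by
         * heavy-I/O workers, so it becomes the victim while the others
         * keep going -- the problem described above.
         */
        pg_atomic_write_u32(&shared->cost_balance, 0);
        pg_usleep((long) (VacuumCostDelay * 1000));
    }
}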
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com