On Mon, 11 Nov 2019 at 17:57, Dilip Kumar <dilipbal...@gmail.com> wrote:
>
> On Tue, Oct 29, 2019 at 12:37 PM Masahiko Sawada <sawada.m...@gmail.com> wrote:
> > I realized that v31-0006 patch doesn't work fine so I've attached the
> > updated version patch that also incorporated some comments I got so
> > far. Sorry for the inconvenience. I'll apply your 0001 patch and also
> > test the total delay time.
> >
> While reviewing the 0002, I got one doubt related to how we are
> dividing the maintenance_work_mem:
>
> +prepare_index_statistics(LVShared *lvshared, Relation *Irel, int nindexes)
> +{
> +    /* Compute the new maintenance_work_mem value for index vacuuming */
> +    lvshared->maintenance_work_mem_worker =
> +        (nindexes_mwm > 0) ? maintenance_work_mem / nindexes_mwm :
> +        maintenance_work_mem;
> +}
>
> Is it fair to consider only the number of indexes which use
> maintenance_work_mem, or do we need to consider the number of workers
> as well? My point is: suppose there are 10 indexes which will use
> maintenance_work_mem, but we are launching just 2 workers; then what is
> the point in dividing the maintenance_work_mem by 10?
>
> IMHO the calculation should be like this:
>
> lvshared->maintenance_work_mem_worker = (nindexes_mwm > 0) ?
>     maintenance_work_mem / Min(nindexes_mwm, nworkers) :
>     maintenance_work_mem;
>
> Am I missing something?
No, I think you're right. On the other hand, dividing it by the number of indexes that will use maintenance_work_mem still makes sense when the parallel degree is greater than the number of such indexes. Suppose the table has 2 such indexes and there are 10 workers; then we should divide maintenance_work_mem by 2 rather than 10, because at most 2 indexes that use maintenance_work_mem can be processed in parallel at a time.

--
Masahiko Sawada
http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services