On Tue, May 15, 2018 at 08:45:07AM +0530, Amit Kapila wrote:
> No, it is not like that. We divide the scan among workers and each
> worker should perform projection of the rows it scanned (after
> applying filter). Now, if the expensive functions are part of target
> lists, then we can push the computation of expensive functions (as
> part of target list) in workers which will divide the work.
>
> > Really? Do
> > we run each column in its own worker or do we split the result set into
> > parts and run those in parallel? How do we know, just the function call
> > costs?
> >
>
> The function's cost can be determined via pg_proc->procost. For this
> particular case, you can refer the call graph -
> create_pathtarget->set_pathtarget_cost_width->cost_qual_eval_node->cost_qual_eval_walker->get_func_cost
>
> > I can admit I never saw that coming.
> >
>
> I think the use case becomes interesting with parallel query because
> now you can divide such cost among workers.
>
> Feel free to ask more questions if above doesn't clarify the usage of
> these features.
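
To make the target-list case concrete, here is a minimal sketch (the
function and table names are made up for illustration, and the plan shape
shown is only indicative): a parallel-safe function whose cost, recorded in
pg_proc.procost via the COST clause, is high enough that evaluating it in
each worker's projection step pays off.

    -- Hypothetical expensive, parallel-safe function; COST sets pg_proc.procost.
    CREATE FUNCTION slow_square(x integer) RETURNS integer
        LANGUAGE plpgsql PARALLEL SAFE COST 10000
    AS $$
    BEGIN
        PERFORM pg_sleep(0.001);   -- stand-in for real work
        RETURN x * x;
    END;
    $$;

    -- With parallel query enabled, the expensive target-list expression can be
    -- evaluated by each worker for the rows it scanned, below the Gather node,
    -- instead of by the leader alone.  Roughly:
    EXPLAIN (VERBOSE, COSTS OFF)
    SELECT slow_square(id) FROM big_table;
    --  Gather
    --    Output: (slow_square(id))
    --    Workers Planned: 2
    --    ->  Parallel Seq Scan on big_table
    --          Output: slow_square(id)

Whether the planner actually picks such a plan depends on table size, the
function's cost, and settings like max_parallel_workers_per_gather.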
OK, I have added the following release note item for both of these:

    2017-11-16 [e89a71fb4] Pass InitPlan values to workers via Gather (Merge).
    2018-03-29 [3f90ec859] Postpone generate_gather_paths for topmost scan/join rel
    2018-03-29 [11cf92f6e] Rewrite the code that applies scan/join targets to paths

    Allow single-evaluation queries, e.g. <literal>FROM</literal> clause
    queries, and functions in the target list to be parallelized (Amit
    Kapila, Robert Haas)

--
  Bruce Momjian  <br...@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+                      Ancient Roman grave inscription +
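
For the "single-evaluation query" half of the item above, a rough
illustration (the table and column names are hypothetical, and the plan
shape is only indicative): an uncorrelated subquery is planned as an
InitPlan, and its result can now be computed once and passed to the
parallel workers through the Gather node rather than forcing a serial plan.

    -- Hypothetical example: the subquery is evaluated once as InitPlan 1,
    -- and its value ($0) is made available to the workers below Gather.
    EXPLAIN (COSTS OFF)
    SELECT *
    FROM big_table
    WHERE id = (SELECT max(ref_id) FROM small_table);
    --  Gather
    --    Workers Planned: 2
    --    InitPlan 1 (returns $0)
    --      ->  Aggregate
    --            ->  Seq Scan on small_table
    --    ->  Parallel Seq Scan on big_table
    --          Filter: (id = $0)
    -- (The exact placement of the InitPlan and whether a parallel plan is
    -- chosen depend on the server version, table sizes, and parallel settings.)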