Richard Guo <guofengli...@gmail.com> writes:
> Currently subquery scan is using rel->rows (if no parameterization),
> which I believe is not correct. That's not the size the subquery scan
> node in each worker needs to handle, as the rows have been divided
> across workers by the parallel-aware node.
Really?  Maybe I misunderstand the case under consideration, but what
I think will be happening is that each worker will re-execute the
pushed-down subquery in full.  Otherwise it can't compute the correct
answer.  What gets divided across the set of workers is the total
*number of executions* of the subquery, which should be independent of
the number of workers, so that the cost is (more or less) the same as
the non-parallel case.

At least that's true for a standard correlated subplan, which is
normally run again for each row processed by the parent node.  For
hashed subplans and initplans, what would have been "execute once"
semantics becomes "execute once per worker", creating a strict cost
disadvantage for parallelization.  I don't know whether the current
costing model accounts for that.  But if it does that wrong,
arbitrarily altering the number of rows won't make it better.

			regards, tom lane
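[Editor's note: the cost relationship described above can be sketched as a
back-of-envelope model.  This is purely illustrative; the function names and
numbers are invented and this is not PostgreSQL's actual costing code.]

```python
# Toy model of aggregate subplan cost under parallelism.
# Assumption (from the argument above): parent rows are divided across
# workers, but each worker re-executes the subplan in full per row.

def correlated_subplan_total_cost(parent_rows, per_exec_cost, workers):
    # Each worker runs the subplan once per parent row it handles.
    # The parent rows are split across workers, so the *total* number
    # of executions is independent of the worker count.
    rows_per_worker = parent_rows / workers
    return workers * rows_per_worker * per_exec_cost

def initplan_total_cost(per_exec_cost, workers):
    # "Execute once" semantics becomes "execute once per worker".
    return workers * per_exec_cost

# Correlated subplan: aggregate cost is the same at any worker count.
assert correlated_subplan_total_cost(1000, 2.0, 1) == \
       correlated_subplan_total_cost(1000, 2.0, 4)

# Initplan / hashed subplan: aggregate cost grows linearly with the
# number of workers -- a strict disadvantage for parallelization.
assert initplan_total_cost(2.0, 4) == 4 * initplan_total_cost(2.0, 1)
```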