On 18 December 2014 at 16:03, Amit Kapila <amit.kapil...@gmail.com> wrote:
> On Thu, Dec 18, 2014 at 9:22 PM, Amit Kapila <amit.kapil...@gmail.com> wrote:
> > On Mon, Dec 8, 2014 at 10:40 AM, Amit Kapila <amit.kapil...@gmail.com> wrote:
> > > On Sat, Dec 6, 2014 at 5:37 PM, Stephen Frost <sfr...@snowman.net> wrote:
> > >
> > > So to summarize my understanding, below is the set of things
> > > which I should work on, in the order they are listed:
> > >
> > > 1. Push down qualification
> > > 2. Performance data
> > > 3. Improve the way to push down the information related to workers.
> > > 4. Dynamic allocation of work for workers.
> >
> > I have worked on the patch to accomplish the above-mentioned points
> > 1, 2 and partly 3, and would like to share the progress with the community.
>
> Sorry, I forgot to attach the updated patch in the last mail; attaching it now.

When attempting to recreate the plan in your example, I get an error:

➤ psql://thom@[local]:5488/pgbench
# create table t1(c1 int, c2 char(500)) with (fillfactor=10);
CREATE TABLE
Time: 13.653 ms

➤ psql://thom@[local]:5488/pgbench
# insert into t1 values(generate_series(1,100),'amit');
INSERT 0 100
Time: 4.796 ms

➤ psql://thom@[local]:5488/pgbench
# explain select c1 from t1;
ERROR:  could not register background process
HINT:  You may need to increase max_worker_processes.
Time: 1.659 ms

➤ psql://thom@[local]:5488/pgbench
# show max_worker_processes ;
 max_worker_processes
----------------------
 8
(1 row)

Time: 0.199 ms

# show parallel_seqscan_degree ;
 parallel_seqscan_degree
-------------------------
 10
(1 row)

Do I really need to increase max_worker_processes to >=
parallel_seqscan_degree?  If so, shouldn't the hint mention
parallel_seqscan_degree as well, rather than just max_worker_processes?
And should the error be produced at all when only a *plan* is being
requested?

Also, I noticed that where a table is partitioned, the plan isn't
parallelised:

# explain select distinct bid from pgbench_accounts;
                                        QUERY PLAN
------------------------------------------------------------------------------------------
 HashAggregate  (cost=1446639.00..1446643.99 rows=499 width=4)
   Group Key: pgbench_accounts.bid
   ->  Append  (cost=0.00..1321639.00 rows=50000001 width=4)
         ->  Seq Scan on pgbench_accounts  (cost=0.00..0.00 rows=1 width=4)
         ->  Seq Scan on pgbench_accounts_1  (cost=0.00..4279.00 rows=100000 width=4)
         ->  Seq Scan on pgbench_accounts_2  (cost=0.00..2640.00 rows=100000 width=4)
         ->  Seq Scan on pgbench_accounts_3  (cost=0.00..2640.00 rows=100000 width=4)
         ->  Seq Scan on pgbench_accounts_4  (cost=0.00..2640.00 rows=100000 width=4)
         ->  Seq Scan on pgbench_accounts_5  (cost=0.00..2640.00 rows=100000 width=4)
         ->  Seq Scan on pgbench_accounts_6  (cost=0.00..2640.00 rows=100000 width=4)
         ->  Seq Scan on pgbench_accounts_7  (cost=0.00..2640.00 rows=100000 width=4)
...
         ->  Seq Scan on pgbench_accounts_498  (cost=0.00..2640.00 rows=100000 width=4)
         ->  Seq Scan on pgbench_accounts_499  (cost=0.00..2640.00 rows=100000 width=4)
         ->  Seq Scan on pgbench_accounts_500  (cost=0.00..2640.00 rows=100000 width=4)
(504 rows)

Is this expected?

Thom
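
P.S. For anyone else trying the patch out: my assumption, going only by
the HINT above and not by anything the patch documents, is that
parallel_seqscan_degree has to fit within max_worker_processes, so a
configuration along these lines ought to sidestep the error (hypothetical
values, untested on my side):

    # postgresql.conf -- sketch, assuming the degree must not exceed
    # the background worker limit
    max_worker_processes = 16        # up from the default of 8; changing
                                     # this requires a server restart
    parallel_seqscan_degree = 10     # now within the worker limit

Lowering parallel_seqscan_degree to 8 or below would presumably work
just as well, if that GUC is settable per-session.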