On Mon, Apr 11, 2016 at 11:27 AM, Julien Rouhaud <julien.rouh...@dalibo.com> wrote:
> On 11/04/2016 15:56, tushar wrote:
>> On 04/08/2016 08:53 PM, Robert Haas wrote:
>>> On Fri, Apr 8, 2016 at 1:22 AM, Amit Kapila <amit.kapil...@gmail.com> wrote:
>>>> Other than that, patch looks good and I have marked it as Ready For
>>>> Committer.  Hope, we get this for 9.6.
>>> Committed.  I think this is likely to make parallel query
>>> significantly more usable in 9.6.
>>>
>> While testing, I observed a couple of things -
>>
>> Case 1 = Not accepting parallel seq scan when parallel_degree is set to 0
>>
>> postgres=# create table fok2(n int) with (parallel_degree=0);
>> CREATE TABLE
>> postgres=# insert into fok2 values (generate_series(1,1000000)); analyze fok2; vacuum fok2;
>> INSERT 0 1000000
>> ANALYZE
>> VACUUM
>> postgres=# set max_parallel_degree = 5;
>> SET
>> postgres=# explain analyze verbose select * from fok2 where n<=10;
>>                                                    QUERY PLAN
>> --------------------------------------------------------------------------------------------------------------
>>  Seq Scan on public.fok2  (cost=0.00..16925.00 rows=100 width=4) (actual time=0.027..217.882 rows=10 loops=1)
>>    Output: n
>>    Filter: (fok2.n <= 10)
>>    Rows Removed by Filter: 999990
>>  Planning time: 0.084 ms
>>  Execution time: 217.935 ms
>> (6 rows)
>>
>> I am assuming parallel_degree=0 is the same as not using it at all, i.e.
>> create table fok2(n int) with (parallel_degree=0);  =  create table fok2(n int);
>>
>> so in this case it should have accepted the parallel seq scan.
>>
> No, setting it to 0 means to force not using parallel workers (but
> considering the parallel path IIRC).

I'm not sure what the parenthesized bit means, because you can't use
parallelism without workers.  But while committing this, I think I
should have made the docs clearer that 0 means "don't parallelize
scans of this table".  Maybe we should go add a sentence about that.
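For the archives, a quick sketch of how to put such a table back in
play for parallel scans -- assuming ALTER TABLE SET/RESET handles this
reloption like any other storage parameter (untested here):

    -- drop the per-table override and let the planner decide again
    ALTER TABLE fok2 RESET (parallel_degree);

    -- or ask for workers explicitly; the value is still capped by max_parallel_degree
    ALTER TABLE fok2 SET (parallel_degree = 4);
    SET max_parallel_degree = 5;
    EXPLAIN (ANALYZE, VERBOSE) SELECT * FROM fok2 WHERE n <= 10;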
> Even if you set a per-table parallel_degree higher than
> max_parallel_degree, it'll be maxed at max_parallel_degree.
>
> Then, the explain shows that the planner assumed it'll launch 9 workers,
> but only 8 were available (or needed perhaps) at runtime.

We should probably add the number of workers actually obtained to the
EXPLAIN ANALYZE output.  That's been requested before.

>> postgres=# set max_parallel_degree = 2624444;
>> ERROR:  2624444 is outside the valid range for parameter
>> "max_parallel_degree" (0 .. 262143)
>>
>> postgres=# set max_parallel_degree = 262143;
>> SET
>>
>> postgres=# explain analyze verbose select * from abd where n<=1;
>> ERROR:  requested shared memory size overflows size_t
>>
>> If we remove the analyze keyword, the query runs successfully.
>>
>> Expected = Is it not better to throw the error at the time of setting
>> max_parallel_degree, if it can't be supported?
>
> +1

It surprises me that that request overflowed size_t.  I guess we should
look into why that's happening.  Did you test this on a 32-bit system?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers