On Wed, Jul 17, 2019 at 12:57 PM Thomas Munro <thomas.mu...@gmail.com> wrote:
> On Wed, Jul 17, 2019 at 12:44 PM Thomas Munro <thomas.mu...@gmail.com> wrote:
> > > #11 0x000055666e0359df in ExecShutdownNode (node=node@entry=0x55667033a6c8)
> > >     at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execProcnode.c:830
> > > #12 0x000055666e04d0ff in ExecLimit (node=node@entry=0x55667033a428)
> > >     at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeLimit.c:139
> >
> > https://github.com/postgres/postgres/blob/REL9_6_STABLE/src/backend/executor/nodeLimit.c#L139
> >
> > Limit thinks it's OK to "shut down" the subtree, but if you shut down a
> > Gather node you can't rescan it later because it destroys its shared
> > memory.  Oops.  Not sure what to do about that yet.
>
> CCing Amit and Robert, authors of commits 19df1702 and 69de1718.
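To make the lifecycle problem concrete, here's a toy standalone C model
(not PostgreSQL code; gather_init/gather_shutdown/gather_rescan are
made-up stand-ins for the Gather node's startup, ExecShutdownGather and
ExecReScanGather paths): once the shutdown step has released the shared
state, a later rescan dereferences it, which is roughly the sequence the
backtrace above shows happening under Limit.

/*
 * Toy model of the lifecycle bug, not real executor code.  "Shutdown"
 * stands in for ExecShutdownGather, which destroys the node's shared
 * memory for good; "rescan" stands in for ExecReScanGather, which
 * assumes the parallel state is still usable.
 */
#include <stdlib.h>

typedef struct Gather
{
	int		   *shared;		/* stand-in for the shared memory segment */
} Gather;

static void
gather_init(Gather *g)
{
	g->shared = malloc(sizeof(int));
	*g->shared = 0;
}

static void
gather_shutdown(Gather *g)
{
	/* tear the shared state down for good */
	free(g->shared);
	g->shared = NULL;
}

static void
gather_rescan(Gather *g)
{
	/* no "was I shut down?" check here: use after shutdown */
	*g->shared = 0;
}

int
main(void)
{
	Gather		g;

	gather_init(&g);
	gather_shutdown(&g);	/* Limit hit its bound: subtree "shut down" */
	gather_rescan(&g);		/* outer side rescans the limited subquery */
	return 0;
}

Run it and it crashes at the rescan step, for the same structural reason
as the frames above.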
Here's a repro (I'm sure you can find a shorter one; this one's hacked up
from join_hash.sql, basically just adding a LIMIT):

create table join_foo as select generate_series(1, 3000) as id, 'xxxxx'::text as t;
alter table join_foo set (parallel_workers = 0);
create table join_bar as select generate_series(0, 10000) as id, 'xxxxx'::text as t;
alter table join_bar set (parallel_workers = 2);
set parallel_setup_cost = 0;
set parallel_tuple_cost = 0;
set max_parallel_workers_per_gather = 2;
set enable_material = off;
set enable_mergejoin = off;
set work_mem = '1GB';
select count(*) from join_foo
  left join (select b1.id, b1.t from join_bar b1 join join_bar b2 using (id) limit 1000) ss
  on join_foo.id < ss.id + 1 and join_foo.id > ss.id - 1;

--
Thomas Munro
https://enterprisedb.com