On Sun, May 19, 2019 at 7:07 PM Thomas Munro <thomas.mu...@gmail.com> wrote:
> Unfortunately that bits-in-order scheme doesn't work for parallel
> hash, where the SharedTuplestore tuples seen by each worker are
> non-deterministic.  So perhaps in that case we could use the
> HEAP_TUPLE_HAS_MATCH bit in the outer tuple header itself, and write
> the whole outer batch back out each time through the loop.  That'd
> keep the tuples and match bits together, but it seems like a lot of
> IO...
So, I think the case you're worried about here is something like:

Gather
-> Parallel Hash Left Join
    -> Parallel Seq Scan on a
    -> Parallel Hash
        -> Parallel Seq Scan on b

If I understand ExecParallelHashJoinPartitionOuter correctly, we're
going to hash all of a and put it into a set of batch files before we
even get started, so it's possible to identify precisely which tuple
we're talking about by just giving the batch number and the position
of the tuple within that batch.  So while it's true that the
individual workers can't use the number of tuples they've read to
know where they are in the SharedTuplestore, maybe the
SharedTuplestore could just tell them.  Then they could maintain a
paged bitmap of the tuples that they've matched to something, indexed
by position-within-the-tuplestore, and those bitmaps could be OR'd
together at the end.

Crazy idea, or...?
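For illustration, here is a rough, standalone sketch of the kind of
paged match bitmap I have in mind: each worker sets a bit for every
outer-tuple position it matches (with the position handed to it by the
SharedTuplestore), and the per-worker bitmaps get OR'd together at the
end so that any position still unset can be emitted null-extended.
All of the names here (MatchBitmap, mb_set, and so on) are invented
for illustration; none of this is actual SharedTuplestore or executor
API.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_BITS   (8192 * 8)      /* one 8 kB bitmap page = 65536 tuples */

typedef struct MatchBitmap
{
    uint8_t   **pages;          /* lazily allocated bitmap pages */
    size_t      npages;         /* length of the page directory */
} MatchBitmap;

/* Return the page with index 'pageno', growing the directory as needed. */
static uint8_t *
mb_get_page(MatchBitmap *mb, size_t pageno)
{
    if (pageno >= mb->npages)
    {
        size_t  newn = pageno + 1;

        mb->pages = realloc(mb->pages, newn * sizeof(uint8_t *));
        memset(mb->pages + mb->npages, 0,
               (newn - mb->npages) * sizeof(uint8_t *));
        mb->npages = newn;
    }
    if (mb->pages[pageno] == NULL)
        mb->pages[pageno] = calloc(PAGE_BITS / 8, 1);
    return mb->pages[pageno];
}

/* Record that the tuple at position 'pos' in the tuplestore was matched. */
static void
mb_set(MatchBitmap *mb, uint64_t pos)
{
    uint8_t    *page = mb_get_page(mb, pos / PAGE_BITS);
    uint64_t    bit = pos % PAGE_BITS;

    page[bit / 8] |= (uint8_t) (1 << (bit % 8));
}

/* Has the tuple at position 'pos' been matched by anyone? */
static int
mb_test(const MatchBitmap *mb, uint64_t pos)
{
    size_t      pageno = pos / PAGE_BITS;
    uint64_t    bit = pos % PAGE_BITS;

    if (pageno >= mb->npages || mb->pages[pageno] == NULL)
        return 0;
    return (mb->pages[pageno][bit / 8] >> (bit % 8)) & 1;
}

/* OR worker bitmap 'src' into 'dst', page by page, at end of join. */
static void
mb_union(MatchBitmap *dst, const MatchBitmap *src)
{
    for (size_t p = 0; p < src->npages; p++)
    {
        if (src->pages[p] == NULL)
            continue;

        uint8_t    *dpage = mb_get_page(dst, p);

        for (size_t i = 0; i < PAGE_BITS / 8; i++)
            dpage[i] |= src->pages[p][i];
    }
}

int
main(void)
{
    MatchBitmap w1 = {0}, w2 = {0}, merged = {0};

    /* Two workers match different outer-tuple positions. */
    mb_set(&w1, 3);
    mb_set(&w2, 70000);         /* falls on a second bitmap page */

    mb_union(&merged, &w1);
    mb_union(&merged, &w2);

    printf("pos 3: %d, pos 70000: %d, pos 5: %d\n",
           mb_test(&merged, 3), mb_test(&merged, 70000),
           mb_test(&merged, 5));
    return 0;
}

In practice the bitmaps would presumably live in DSM or be spilled per
batch rather than sitting in local malloc'd memory, but the shape of
the idea is the same.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company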