On Tue, May 6, 2025 at 12:12 PM Tomas Vondra <to...@vondra.me> wrote:
> On 5/6/25 01:11, Tom Lane wrote:
> > The attached patch is a response to the discussion at [1], where
> > it emerged that lots of rows with null join keys can send a hash
> > join into too-many-batches hell, if they are on the outer side
> > of the join so that they must be null-extended not just discarded.
> > This isn't really surprising given that such rows will certainly
> > end up in the same hash bucket, and no amount of splitting can
> > reduce the size of that bucket.  (I'm a bit surprised that the
> > growEnabled heuristic didn't kick in, but it seems it didn't,
> > at least not up to several million batches.)

Good idea.  I haven't reviewed it properly, but one observation is
that trapping the null-key tuples in per-worker tuple stores creates
unfairness.  That could be fixed by using a SharedTuplestore instead,
but unfortunately SharedTuplestore always spills to disk at the
moment, so maybe I should think about how to give it some memory for
small sets, like a regular Tuplestore has.  I'll look more closely after
Montreal.
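In the meantime, here is a sketch of what I mean (illustrative only,
not a patch):

    /*
     * A regular Tuplestore keeps small sets in memory because
     * tuplestore_begin_heap() takes a memory budget in kilobytes and
     * only spills to disk once that is exceeded:
     */
    Tuplestorestate *ts = tuplestore_begin_heap(false,      /* randomAccess */
                                                false,      /* interXact */
                                                work_mem);  /* in-memory budget */

    /*
     * SharedTuplestore (sharedtuplestore.c) has no equivalent in-memory
     * phase today: sts_initialize() backs it with a SharedFileSet right
     * away, so even a handful of NULL-key tuples per worker would touch
     * disk.  The idea would be to teach it to buffer small sets in
     * memory first, the way Tuplestore does.
     */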

> I don't think that's too surprising - growEnabled depends on all tuples
> getting into the same batch during a split. But even if there are many
> duplicate values, real-world data sets often have a couple more tuples
> that just happen to fall into that bucket too. And then during the split
> some get into one batch and some get into another.

Yeah.
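For reference, the check being discussed is the one at the end of
ExecHashIncreaseNumBatches() (paraphrasing nodeHash.c from memory):

    /*
     * After repartitioning, if we moved either none or all of the
     * in-memory tuples to new batches, further splitting evidently
     * can't help, so disable growth permanently.
     */
    if (nfreed == 0 || nfreed == ninmemory)
        hashtable->growEnabled = false;

So a single stray tuple that happens to land in the same batch is
enough to keep growth enabled, which matches your experience.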

> My personal experience is that the growEnabled heuristic is overly
> sensitive, and probably does not trigger very often. It can also get set
> to "true" too early, but that's (much) harder to hit.
>
> I have suggested making growEnabled less strict in [2], i.e. to
> calculate the threshold as a percentage of the batch, and not disable
> growth permanently. But it was orthogonal to what that thread did.

+1.  I also started a thread with a draft patch along those lines at
some point, which I'll try to dig up after Montreal.
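The general shape of that idea, as an illustrative sketch only (not
the proposal from [2] and not my old draft), would be to stop treating
a split as useless only in the all-or-nothing case:

    /*
     * Hypothetical variant: give up on (or postpone) growth only when
     * the split moved less than some small fraction of the batch,
     * instead of requiring that it moved nothing at all, and consider
     * re-enabling growth later rather than disabling it permanently.
     */
    if (nfreed < ninmemory * 0.05 || nfreed > ninmemory * 0.95)
        hashtable->growEnabled = false;     /* or just skip growth for now */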

> But more importantly, wasn't the issue discussed in [1] about parallel
> hash joins? I got quite confused while reading the thread ... I'm asking
> because growEnabled is checked only in ExecHashIncreaseNumBatches, not
> in ExecParallelHashIncreaseNumBatches. So AFAICS the parallel hash joins
> don't use growEnabled at all, no?

There is an equivalent mechanism, but it's slightly more complicated
because it requires consensus (see the code near PHJ_GROW_BATCHES_DECIDE).
It's possible that it's not working quite as well as it should.  It's
definitely less deterministic in some edge cases, since tuples are
packed into chunks differently and the memory used can vary slightly
from run to run, but the tuple count should be stable.  I've made a
note to review that logic again too.
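For reference, paraphrasing that decision logic from memory (details
may be slightly off), the elected worker does roughly this after a
repartitioning pass:

    /* In ExecParallelHashIncreaseNumBatches(), PHJ_GROW_BATCHES_DECIDE. */
    bool    space_exhausted = false;
    bool    extreme_skew_detected = false;

    for (int i = 0; i < hashtable->nbatch; ++i)
    {
        ParallelHashJoinBatch *batch = hashtable->batches[i].shared;

        if (batch->space_exhausted ||
            batch->estimated_size > pstate->space_allowed)
        {
            /*
             * Did this child batch receive all of its parent batch's
             * tuples?  Then the hash values must be (nearly) identical
             * and splitting again can't help.
             */
            int     parent = i % pstate->old_nbatch;

            space_exhausted = true;
            if (batch->ntuples == hashtable->batches[parent].shared->old_ntuples)
                extreme_skew_detected = true;
        }
    }

    if (extreme_skew_detected)
        pstate->growth = PHJ_GROWTH_DISABLED;   /* analogue of growEnabled = false */
    else if (space_exhausted)
        pstate->growth = PHJ_GROWTH_NEED_MORE_BATCHES;
    else
        pstate->growth = PHJ_GROWTH_OK;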

Note also that v16 is the first release that could put NULLs in a
shared memory hash table (11c2d6fdf5 enabled Parallel Hash Right|Full
Join).  Non-parallel hash joins have been able to do that for a long
time, but those plans couldn't be used in a parallel query, so I guess
it's possible that this is coming up now because, given the plans
available in older releases, that strategy wasn't often picked for
problems likely to generate interesting numbers of NULLs and exceed
the limits.  See also the related bug fix in 98c7c7152, spotted soon
after this plan type escaped into the field.

While thinking about that, I wanted to note that we have more things
to improve in PHRJ:

(1) Parallelism of the unmatched scan: a short but not entirely
satisfying patch was shared on the PHRJ thread but not committed with
the main feature.  I already had some inklings of how to do much
better, which I recently described in a bit more detail (in vapourware
form) on the PBHS thread, where parallel fairness came up again.  "La
perfection est le mortel ennemi du bien", or whatever it is they say
in the language of Montreal, but the easy patch for unmatched scan
parallelism really wasn't bon enough: deadlock-avoidance arcana made
it non-deterministic how many processes could participate, creating
run-to-run variation that I'd expect Tomáš to find empirically and
reject in one of his benchmarking expeditions :-).

(2) Bogus asymmetries in estimation/planning: I wrote some analysis
of why we don't use PHRJ as much as we could/should near Richard Guo's
work on anti/semi joins, which went in around the same time.  My idea
there is to debogify the parallel degree logic more generally; it's
just that PHRJ brought key aspects of it into relief for me, i.e. the
bogosity of the rule-based "driving table" concept.

I'll try to write these projects up on the wiki, instead of in random
threads :-)

In other words, if you just use local Tuplestores as you showed, it
would actually be an improvement in fairness over the status quo,
because (1) isn't solved yet... but it will be solved, hence mentioning
it in this context.

