> On 26 June 2018 at 22:56, Andres Freund <and...@anarazel.de> wrote:
> On 2018-06-26 21:55:07 +0100, Andrew Gierth wrote:
>> >>>>> "Dmitry" == Dmitry Dolgov <9erthali...@gmail.com> writes:
>>
>>  Dmitry> Yep, my bad, forgot to turn it on. Now I see what's the
>>  Dmitry> problem, one of the null fields is screwed up, will try to
>>  Dmitry> figure out why is that.
>>
>> The handling of nulls in grouping set results is a bit icky, see
>> prepare_projection_slot in nodeAgg.c. The comment there lists a number
>> of assumptions which may or may not hold true under JIT which might give
>> a starting point to look for problems. (Unfortunately I'm not currently
>> in a position to test on a JIT build)
>
> I probably just screwed up a bit of code generation. I can't see any of
> the more fundamental assumptions being changed by the way JITing is
> done.
So far I have found out that in agg_retrieve_hash_table, when scanning the
TupleHashEntryData entries (which carry an AggStatePerGroup structure in
their "additional" field), it is possible to read back garbage data (or at
least transValue is lost). It happens when agg_retrieve_direct has done

    ReScanExprContext(aggstate->aggcontexts[i]);

before that point. Apparently, the reason is that the JIT code stores
aggcontext into curaggcontext:

    v_aggcontext = l_ptr_const(op->d.agg_trans.aggcontext,
                               l_ptr(StructExprContext));

    /* set aggstate globals */
    LLVMBuildStore(b, v_aggcontext, v_curaggcontext);

I haven't found anything similar in the original code or in the other
branches of the aggregation logic. I can't say I fully understand the idea
behind it, but it looked suspicious to me, and when I removed this store
the problem disappeared.
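For reference, here is how I currently read those two generated lines. This
is just my own hedged C rendering, not code from the tree, and it assumes
that v_curaggcontext is a pointer to aggstate->curaggcontext (I haven't
traced where that value is built):

    /*
     * Hedged reading of the generated store above: the address in
     * op->d.agg_trans.aggcontext is baked into the compiled code as a
     * pointer constant at codegen time, and the store then writes it into
     * AggState->curaggcontext when the compiled agg_trans step runs.
     */
    aggstate->curaggcontext = op->d.agg_trans.aggcontext;

In other words, the address of one particular ExprContext gets frozen into
the compiled code as a constant and installed as curaggcontext at run time,
which is the part that looked suspicious to me, since I couldn't spot an
equivalent elsewhere.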