On Wed, 17 Jul 2024 at 17:12, Andrei Lepikhov <lepi...@gmail.com> wrote:
> I generally like the idea of a support function. But as I can see, the
> can_minmax_aggs() rejects if any of the aggregates don't pass the
> checks. The prosupport feature is designed to be applied to each
> function separately. How do you think to avoid it?
You wouldn't avoid it. The prosupport function would be called once for
each Aggref in the query. Why is that a problem?

> Also, I don't clearly understand the case you mentioned here - does it
> mean that you want to nullify orders for other aggregate types if they
> are the same as the incoming order?

No. I mean unconditionally nullify Aggref->aggorder and
Aggref->aggdistinct for aggregate functions where ORDER BY / DISTINCT in
the Aggref makes no difference to the result. I think that's ok for
max() and min() on everything besides NUMERIC.

For aggorder, we'd have to *not* optimise sum() and avg() on
floating-point types, as that could change the result. sum() and avg()
on INT2, INT4 and INT8 seem fine. I'd need to check, but I think
sum(numeric) is ok too, as the dscale should end up the same regardless
of the order. Obviously, aggdistinct can't be removed for sum() or avg()
on any type.

It also seems possible to adjust count(non-nullable-var) into count(*).
If done early enough in planning, that could help significantly, both by
reducing expression evaluation during execution and possibly by reducing
tuple deformation if that Var has a higher varattno than anything else
used in the relation. That would require checking that varnullingrels is
empty and that the respective RelOptInfo's notnullattnums mentions the
Var.

David
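ps: a quick standalone illustration (Python, not planner code - just to
show which of the rewrites above are value-preserving and which aren't):

```python
from decimal import Decimal

# Floating-point addition is not associative, so dropping the ORDER BY
# from sum(float8)/avg(float8) could change the result:
assert sum([0.1, 0.2, 0.3]) != sum([0.3, 0.2, 0.1])

# Integer addition is exact, so sum()/avg() on INT2/INT4/INT8 give the
# same answer regardless of input order:
assert sum([3, 1, 2]) == sum([2, 1, 3])

# Decimal arithmetic (analogous to NUMERIC, assuming no rounding kicks
# in) is also exact here, and the scale of the result comes out the
# same either way:
a = sum([Decimal("0.10"), Decimal("0.2"), Decimal("0.3")])
b = sum([Decimal("0.3"), Decimal("0.2"), Decimal("0.10")])
assert a == b and a.as_tuple().exponent == b.as_tuple().exponent

# count(non-nullable-var) counts non-NULL values, so when the column
# can't contain NULLs it's equivalent to count(*), the plain row count:
col = [10, 20, 30]  # stand-in for a column with no NULLs possible
assert sum(1 for v in col if v is not None) == len(col)
```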