gabotechs commented on code in PR #20019:
URL: https://github.com/apache/datafusion/pull/20019#discussion_r2735993938
##########
datafusion/physical-plan/src/aggregates/mod.rs:
##########
@@ -144,6 +144,23 @@ pub enum AggregateMode {
/// This mode requires that the input has more than one partition, and is
/// partitioned by group key (like FinalPartitioned).
SinglePartitioned,
+ /// Combine multiple partial aggregations to produce a new partial
+ /// aggregation.
+ ///
+ /// Input is intermediate accumulator state (like Final), but output is
+ /// also intermediate accumulator state (like Partial). This enables
+ /// tree-reduce aggregation strategies where partial results from
+ /// multiple workers are combined in multiple stages before a final
+ /// evaluation.
+ ///
+ /// ```text
+ /// Final
+ /// / \
+ /// PartialReduce PartialReduce
+ /// / \ / \
+ /// Partial Partial Partial Partial
+ /// ```
+ PartialReduce,
Review Comment:
Regarding distributed engines, this offers a good alternative to
shuffling, and I imagine that for systems that rely on materializing shuffle
results it can be a significant win. However, I also imagine that
in-memory streaming-based distributed systems like Trino or
https://github.com/datafusion-contrib/datafusion-distributed might not benefit
as much from this pattern, because:
- Shuffling is already about as cheap as it can get, since intermediate
results are zero-copied over the network between workers
- Shuffling allows running the final aggregation step concurrently across
multiple workers, whereas a tree-reduce approach funnels the final aggregation
into a single worker (see the sketch below)
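
To make that trade-off concrete, here is a minimal, self-contained sketch of
the tree-reduce idea, assuming a toy `PartialState` for an AVG aggregate. The
names (`PartialState`, `merge`, `tree_reduce`) are illustrative only, not
DataFusion APIs:

```rust
// Toy intermediate accumulator state for AVG: a running count and sum.
#[derive(Debug, Clone)]
struct PartialState {
    count: u64,
    sum: i64,
}

// Merging two intermediate states yields another intermediate state, which
// is the contract `AggregateMode::PartialReduce` describes: Final-style
// input, Partial-style output.
fn merge(a: &PartialState, b: &PartialState) -> PartialState {
    PartialState {
        count: a.count + b.count,
        sum: a.sum + b.sum,
    }
}

// Pairwise-combine the partial states stage by stage until one remains; each
// `while` iteration corresponds to one PartialReduce level in the plan tree.
fn tree_reduce(mut states: Vec<PartialState>) -> Option<PartialState> {
    while states.len() > 1 {
        states = states
            .chunks(2)
            .map(|pair| match pair {
                [a, b] => merge(a, b),
                [a] => a.clone(),
                _ => unreachable!(),
            })
            .collect();
    }
    states.pop()
}

fn main() {
    // Four "workers", each having produced a partial state (the Partial
    // leaves in the diagram above).
    let partials = vec![
        PartialState { count: 3, sum: 10 },
        PartialState { count: 2, sum: 7 },
        PartialState { count: 5, sum: 21 },
        PartialState { count: 1, sum: 4 },
    ];
    // The final evaluation (AVG = sum / count) runs exactly once, at the
    // root: this is the single-worker bottleneck the second bullet points at.
    if let Some(root) = tree_reduce(partials) {
        println!("avg = {}", root.sum as f64 / root.count as f64);
    }
}
```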
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]