njsmith commented on code in PR #20019:
URL: https://github.com/apache/datafusion/pull/20019#discussion_r2739811343


##########
datafusion/physical-plan/src/aggregates/mod.rs:
##########
@@ -144,6 +144,23 @@ pub enum AggregateMode {
     /// This mode requires that the input has more than one partition, and is
     /// partitioned by group key (like FinalPartitioned).
     SinglePartitioned,
+    /// Combine multiple partial aggregations to produce a new partial
+    /// aggregation.
+    ///
+    /// Input is intermediate accumulator state (like Final), but output is
+    /// also intermediate accumulator state (like Partial). This enables
+    /// tree-reduce aggregation strategies where partial results from
+    /// multiple workers are combined in multiple stages before a final
+    /// evaluation.
+    ///
+    /// ```text
+    ///               Final
+    ///            /        \
+    ///     PartialReduce   PartialReduce
+    ///     /         \      /         \
+    ///  Partial   Partial  Partial   Partial
+    /// ```
+    PartialReduce,

Review Comment:
   If you have heterogeneous links between your compute (e.g. a bunch of 
machines, each containing a bunch of cores, where same-machine cores 
have much higher bandwidth/lower latency between them than the cross-machine 
links), then this is still useful in the shuffle world, I think?
   Each node partitions local data into N threads -> Partial each partition -> 
within-machine shuffle -> PartialReduce -> cross-machine shuffle -> Final gives 
a 1/N reduction in network transfer, right? (compared to shuffling each 
thread's Partial result directly)
   Whether this would be worth the extra complexity depends on a bunch of 
deployment-specific constants, of course.
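   To make the 1/N claim concrete, here is a minimal sketch (not DataFusion APIs; the `PartialState` type and helper functions are hypothetical) modeling an AVG accumulator's intermediate state as a (sum, count) pair and counting how many states would cross the network with and without a per-machine PartialReduce stage:

```rust
// Hypothetical model of a tree-reduce AVG aggregation; not DataFusion code.
#[derive(Clone, Copy, Debug, PartialEq)]
struct PartialState {
    sum: i64,
    count: i64,
}

// Partial: aggregate raw values into intermediate accumulator state.
fn partial(values: &[i64]) -> PartialState {
    PartialState {
        sum: values.iter().sum(),
        count: values.len() as i64,
    }
}

// PartialReduce: merge intermediate states into a new intermediate state.
fn partial_reduce(states: &[PartialState]) -> PartialState {
    states.iter().fold(
        PartialState { sum: 0, count: 0 },
        |acc, s| PartialState {
            sum: acc.sum + s.sum,
            count: acc.count + s.count,
        },
    )
}

// Final: merge the remaining states and evaluate the aggregate.
fn final_eval(states: &[PartialState]) -> f64 {
    let merged = partial_reduce(states);
    merged.sum as f64 / merged.count as f64
}

fn main() {
    let machines = 4; // M machines
    let threads = 8;  // N threads per machine
    // Every thread produces a Partial state from its local rows.
    let per_machine: Vec<Vec<PartialState>> = (0..machines)
        .map(|_| (0..threads).map(|_| partial(&[1, 2, 3])).collect())
        .collect();

    // Without PartialReduce: every thread's state crosses the network (M * N).
    let naive_messages = machines * threads;

    // With PartialReduce: one merged state per machine crosses the network (M).
    let reduced: Vec<PartialState> =
        per_machine.iter().map(|states| partial_reduce(states)).collect();
    let reduced_messages = reduced.len();

    let avg = final_eval(&reduced);
    println!("naive={naive_messages} reduced={reduced_messages} avg={avg}");
}
```

   With M=4 machines and N=8 threads, 32 states cross the network naively versus 4 with the extra stage, and the final AVG is unchanged because merging intermediate states is associative.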



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
