> <https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.expressions.Aggregator>.
> You probably want to run in update mode if you are looking for it to output
> any group that has changed in the batch.
>
> On Wed, Oct 25, 2017 at 5:52 PM, Piyush Mukati wrote:
>
Hi,
we are migrating some jobs from DStreams to Structured Streaming.
Currently, to handle aggregations, we call map and reduceByKey on each RDD,
like:
rdd.map(event => (event._1, event)).reduceByKey((a, b) => merge(a, b))
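
For comparison, here is a minimal sketch of the same aggregation expressed in
Structured Streaming, following the update-mode advice in the reply above. The
socket source, the "key,value" line format, and the use of sum as the merge
function are assumptions for illustration, not details from this thread:

  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.functions.sum
  import org.apache.spark.sql.streaming.OutputMode

  object StructuredAggSketch {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder().appName("structured-agg-sketch").getOrCreate()
      import spark.implicits._

      // Hypothetical source: a socket stream of "key,value" lines.
      val lines = spark.readStream
        .format("socket")
        .option("host", "localhost")
        .option("port", "9999")
        .load()

      // Parse into (key, value) pairs, mirroring
      // rdd.map(event => (event._1, event)).
      val pairs = lines.as[String].map { s =>
        val Array(k, v) = s.split(",", 2)
        (k, v.toLong)
      }.toDF("key", "value")

      // groupBy + agg plays the role of reduceByKey; sum stands in for the
      // merge function from the original snippet.
      val totals = pairs.groupBy($"key").agg(sum($"value").as("total"))

      // Update mode emits only the groups whose aggregate changed in each
      // micro-batch, rather than the complete result table.
      val query = totals.writeStream
        .outputMode(OutputMode.Update())
        .format("console")
        .start()

      query.awaitTermination()
    }
  }

With update mode, the sink receives one row per changed key each micro-batch,
so the per-batch merge into the sink that the DStream job performs today is
handled by the engine's state store instead.
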
The final output of each RDD is merged to the sink with support for
aggregation at