Thank you Aljoscha,
I guessed that I should use the reduce method. However, I am not looking
for window aggregations; I want to do this on a grouped stream.
The problem is that we work with Lists instead of tuples and thus cannot
use the pre-implemented aggregates.
So the idea is to call it like this:
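(Only a sketch; the (String, List[Double]) element type, the keys, and the element-wise sum are placeholder assumptions about our actual data.)

import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment

val input: DataStream[(String, List[Double])] = env.fromElements(
  ("sensor-1", List(1.0, 2.0)),
  ("sensor-1", List(3.0, 4.0)),
  ("sensor-2", List(5.0, 6.0)))

// Custom aggregate on the grouped stream: combine the Lists per key,
// here by summing them element-wise.
val aggregated = input
  .groupBy(_._1)
  .reduce { (a, b) =>
    (a._1, a._2.zip(b._2).map { case (x, y) => x + y })
  }

aggregated.print()
env.execute()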
Hi,
Alternatively, if you would like to create continuous aggregates per key, you
can use ds.groupBy().reduce(..) or one of the stateful functions in
the Scala API, such as mapWithState.
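A minimal sketch of the groupBy/reduce variant (the (String, Double) element type and the running sum are just example choices):

import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment

val ds: DataStream[(String, Double)] = env.fromElements(
  ("a", 1.0), ("b", 2.0), ("a", 3.0))

// Continuous aggregate per key: every incoming element emits an
// updated value for its key (here a running sum).
val runningSums = ds
  .groupBy(_._1)
  .reduce((a, b) => (a._1, a._2 + b._2))

runningSums.print()
env.execute()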
For a rolling average per key you can check this example:
https://github.com/gyfora/summer-school/blob/master
Hi,
with the current API this should do what you are after:
val input = ...
val result = input
  .window(...)
  .groupBy(...)
  .reduceWindow( /* your reduce function */ )
With the reduce function you should be able to implement any custom
aggregation. You can also use foldWindow() if you want.
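For example, filled in (just a sketch; the element type, the 5 second window, and the max aggregate are made-up placeholders, and the exact window helper imports depend on your Flink version):

import java.util.concurrent.TimeUnit

import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.helper.Time

val env = StreamExecutionEnvironment.getExecutionEnvironment

val input: DataStream[(String, Double)] = env.fromElements(
  ("a", 1.0), ("a", 4.0), ("b", 2.0))

// Per key and per 5-second window, keep the element with the
// maximum value via a custom reduce function.
val result = input
  .window(Time.of(5, TimeUnit.SECONDS))
  .groupBy(0)
  .reduceWindow((a, b) => if (a._2 >= b._2) a else b)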
Hello community,
how do I define a custom aggregate function in Flink Streaming (Scala)?
Could you please provide an example on how to do that?
Thank you and best regards,
Philipp