Hi Fabian:
Thanks for your answer - it is starting to make sense to me now.
On Thursday, January 4, 2018 12:58 AM, Fabian Hueske
wrote:
Hi,
the ReduceFunction holds the last emitted record as state. When a new
record arrives, it reduces the new record and last emitted record, updates
its state, and emits the new result.
Therefore, a ReduceFunction emits one output record for each input record,
i.e., it is triggered for each input record.
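The behavior Fabian describes can be sketched in a few lines. This is plain Python, not Flink code: a minimal simulation of a keyed rolling reduce, where the per-key state is the last emitted record and every input record produces one output record. The function and event names are hypothetical.

```python
def rolling_reduce(stream, key_fn, reduce_fn):
    """Simulate Flink's rolling reduce on a keyed stream (sketch, not Flink)."""
    state = {}  # per-key "last emitted record", as described above
    for record in stream:
        key = key_fn(record)
        if key in state:
            # Reduce the new record with the last emitted record for this key.
            state[key] = reduce_fn(state[key], record)
        else:
            # First record for this key is emitted as-is.
            state[key] = record
        # Emit on every single input record -- no window is involved.
        yield state[key]

# Hypothetical input: (key, value) pairs; the reduce sums values per key.
events = [("a", 1), ("b", 10), ("a", 2), ("a", 3), ("b", 20)]
out = list(rolling_reduce(events,
                          key_fn=lambda r: r[0],
                          reduce_fn=lambda acc, r: (acc[0], acc[1] + r[1])))
print(out)  # one output per input: [('a', 1), ('b', 10), ('a', 3), ('a', 6), ('b', 30)]
```

Note that the output has exactly as many records as the input, which is the point Fabian is making: the reduce is applied eagerly on every arrival, not once per window.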
Hi Stefan:
Thanks for your response.
A follow-up question: in a streaming environment, we invoke the reduce
operation and then output the results to the sink. Does this mean reduce will
be executed once on every trigger per partition, over all the items in each
partition?
Thanks
On Wednesday
Hi,
I would interpret this as: the reduce produces an output for every new reduce
call, emitting the updated value. There is no need for a window because it
kicks in on every single invocation.
Best,
Stefan
> Am 31.12.2017 um 22:28 schrieb M Singh :
Hi:
Apache Flink documentation
(https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/stream/operators/index.html)
describes the reduce function on a KeyedStream as follows:
A "rolling" reduce on a keyed data stream. Combines the current element with
the last reduced value and emits the new value.