Hi,
Thanks for the hint. The infinite loop was the solution and my pipeline
works now.
Regards
Klemens
On 24.11.20 16:59, Timo Walther wrote:
For debugging you can also implement a simple non-parallel source using
`org.apache.flink.streaming.api.functions.source.SourceFunction`. You
would need to implement the run() method with an endless loop after
emitting all your records.
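A minimal sketch of such a debug source (the class name DebugSource, the String record type, and the record count are just placeholders):

import org.apache.flink.streaming.api.functions.source.SourceFunction;

public class DebugSource implements SourceFunction<String> {

    private volatile boolean running = true;

    @Override
    public void run(SourceContext<String> ctx) throws Exception {
        // Emit all test records first.
        for (int i = 0; i < 100; i++) {
            ctx.collect("record-" + i);
        }
        // Then idle forever so the job keeps running and
        // processing-time windows still get a chance to fire.
        while (running) {
            Thread.sleep(1000);
        }
    }

    @Override
    public void cancel() {
        running = false;
    }
}

You would then plug it in via env.addSource(new DebugSource()).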
Regards,
Timo
On 24.11.20 16:07, Klemens Muthmann wrote:
Hi,
Thanks for your reply. I am using processing time instead of event time,
since we do get the events in batches and some might arrive days later.
But for my current dev setup I just use a CSV dump of finite size as
input. I will hand over the pipeline to some other guys, who will need to
Hi Klemens,
what you are observing is one of the reasons why event time should be
preferred over processing time. Event time uses the timestamps of your
data, while processing time is too basic for many use cases. Especially
when you want to reprocess historic data, you want to do that at full
speed instead of being throttled by the wall clock.
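For reference, switching the windowing to event time could look roughly like this (a sketch only; MyEvent, getTimestamp(), getKey(), and the 10-second out-of-orderness bound are assumptions, not part of your pipeline):

import java.time.Duration;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.streaming.api.windowing.assigners.EventTimeSessionWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

stream
    // Derive watermarks from the data's own timestamps; the 10-second
    // bound for out-of-order events is an assumption.
    .assignTimestampsAndWatermarks(
        WatermarkStrategy
            .<MyEvent>forBoundedOutOfOrderness(Duration.ofSeconds(10))
            .withTimestampAssigner((event, ts) -> event.getTimestamp()))
    .keyBy(MyEvent::getKey)
    .window(EventTimeSessionWindows.withGap(Time.seconds(50)))
    .aggregate(new CustomAggregator())
    .print();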
Hi,
I have written an Apache Flink Pipeline containing the following piece
of code (Java):
stream
    .window(ProcessingTimeSessionWindows.withGap(Time.seconds(50)))
    .aggregate(new CustomAggregator())
    .print();
If I run the pipeline using local execution, I see the following
behavior. The "CustomAggregator"