s if it is possible to tell the local execution
environment to wait for that window to be closed, instead of just
shutting down.
Thanks and Regards
Klemens Muthmann
e the
`FileProcessingMode.PROCESS_CONTINUOUSLY`. But what kind of connector
are you currently using?
Regards,
Timo
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/datastream_api.html#data-sources
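If the data is coming from files, a hedged sketch of the continuous mode mentioned above might look like the following (the path, input format, and scan interval are placeholder assumptions, and Flink 1.11 must be on the classpath):

```java
// Hypothetical sketch: re-read a directory continuously so the source
// never finishes and downstream windows keep firing.
import org.apache.flink.api.java.io.TextInputFormat;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.FileProcessingMode;

public class ContinuousFileJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        String path = "/tmp/input"; // placeholder directory
        TextInputFormat format = new TextInputFormat(new Path(path));

        DataStream<String> lines = env.readFile(
                format, path,
                FileProcessingMode.PROCESS_CONTINUOUSLY, // keep watching the path
                1000L);                                  // rescan every second

        lines.print();
        env.execute("continuous-file-read");
    }
}
```

With `PROCESS_CONTINUOUSLY` the job does not shut down after the initial read, which is what keeps processing-time windows alive.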
On 24.11.20 09:59, Klemens Muthmann wrote:
Hi,
I have written an Apache Flink Pipeline containin
`. You
would need to implement the run() method with an endless loop after
emitting all your records.
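A sketch of such a source against the Flink 1.11 `SourceFunction` interface (the class name and sample records are invented for illustration):

```java
// Hypothetical sketch: emit everything up front, then idle in an endless
// loop so the job stays running and processing-time windows can close.
import org.apache.flink.streaming.api.functions.source.SourceFunction;

public class EmitThenIdleSource implements SourceFunction<String> {

    private volatile boolean running = true;

    @Override
    public void run(SourceContext<String> ctx) throws Exception {
        // Emit all records first (placeholder data).
        for (String record : new String[] {"a", "b", "c"}) {
            synchronized (ctx.getCheckpointLock()) {
                ctx.collect(record);
            }
        }
        // Endless loop after the last record: the source stays "alive"
        // until the job is cancelled.
        while (running) {
            Thread.sleep(100L);
        }
    }

    @Override
    public void cancel() {
        running = false;
    }
}
```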
Regards,
Timo
On 24.11.20 16:07, Klemens Muthmann wrote:
Hi,
Thanks for your reply. I am using processing time instead of event
time, since we do get the events in batches and some might arrive
e posted the same question on StackOverflow
<https://stackoverflow.com/questions/67071254/running-apache-flink-1-12-jobs-on-apple-m1-silicon>.
So if you’d like some points there, you are free to head over and post a reply.
;)
Thanks and Regards
Klemens Muthmann
Hi,
We are loading our JSON from a Mongo database. But we also found no readily
available way to stream JSON data into a Flink pipeline. I guess this would be
hard to implement since you have to know details about the JSON structure to do
this. So I guess your best bet would be to implement you
the command line interface to upload the
> job to the cluster (that's not happening when executing the job from the IDE))
> My first guess is that a newer netty version might be required? Or maybe
> there's some DEBUG log output that's helpful in understanding the issu
Hi,
I guess this is more of a Java problem than a Flink problem. If you want it
quick and dirty you could implement a class such as:
public class Value {
private boolean isLongSet = false;
private long longValue = 0L;
private boolean isIntegerSet = false;
private int intValue = 0;
}
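A self-contained variant of this quick-and-dirty holder, with setter and accessor names that are my own illustration rather than from the mail:

```java
// Holds either a long or an int; flags record which one was set last.
public class LongOrInt {
    private boolean isLongSet = false;
    private long longValue = 0L;
    private boolean isIntegerSet = false;
    private int intValue = 0;

    public void setLong(long value) {
        longValue = value;
        isLongSet = true;
        isIntegerSet = false; // only one of the two may be set at a time
    }

    public void setInt(int value) {
        intValue = value;
        isIntegerSet = true;
        isLongSet = false;
    }

    // Widening an int to long is always safe, so callers can read
    // whichever value was stored through a single accessor.
    public long asLong() {
        if (isLongSet) return longValue;
        if (isIntegerSet) return intValue;
        throw new IllegalStateException("no value set");
    }
}
```

With this shape the caller never needs to know which of the two types arrived; `asLong()` widens the stored int when that is what was set.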