> Afaik, the close() call should not take forever, as the system might
> interrupt your thread if it doesn't finish closing on time (30s is the
> default for "cluster.services.shutdown-timeout")
>
> Best,
> Robert
>
>
> On Tue, Jan 21, 2020 at 10:02 AM Do
Hi Gordon,
Thanks for your reply / help.
Yes, following the savepoint road would certainly do the job, even if it
complicates the picture.
We might go that way in the future, but so far we have followed an easier
one, through eventual consistency:
* if some referential data is not (yet) loaded
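For illustration, here is a minimal sketch of what such an eventual-consistency
variant could look like, assuming the latest reference record is kept in keyed
state and events that arrive before their reference data are routed to a side
output instead of blocking. The Event, RefData and EnrichedEvent types, the
state name and the tag name are placeholders, not the actual code from this
thread:

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.co.KeyedCoProcessFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

public class EventualConsistencyJoin
        extends KeyedCoProcessFunction<String, Event, RefData, EnrichedEvent> {

    // Events whose reference data has not been read yet end up here.
    public static final OutputTag<Event> UNMATCHED = new OutputTag<Event>("unmatched") {};

    private transient ValueState<RefData> refState;

    @Override
    public void open(Configuration parameters) {
        refState = getRuntimeContext().getState(
                new ValueStateDescriptor<>("ref", RefData.class));
    }

    @Override
    public void processElement1(Event event, Context ctx, Collector<EnrichedEvent> out)
            throws Exception {
        RefData ref = refState.value();
        if (ref != null) {
            out.collect(new EnrichedEvent(event, ref));   // reference already known
        } else {
            ctx.output(UNMATCHED, event);                 // not loaded yet: handle later
        }
    }

    @Override
    public void processElement2(RefData ref, Context ctx, Collector<EnrichedEvent> out)
            throws Exception {
        refState.update(ref);   // latest record from the compacted topic wins
    }
}

Whatever ends up in the side output can be retried, logged, or reconciled
later, which is what makes the approach eventually consistent.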
Hi,
I am looking for a "stream" join between:
* new data from a Kafka topic
* reference data from a compacted Kafka topic
DataStream.connect() works, but for a key, a join may occur before the
corresponding reference data has been read by Flink. And this is, of
course, a problem.
Is there a mea
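One way to work around that ordering problem while staying with connect() is
to key both streams by the join key and buffer events in state until their
reference record has been read, then flush. This is a hedged sketch, not code
from the original post; Event, RefData and EnrichedEvent are again placeholder
types:

import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.co.KeyedCoProcessFunction;
import org.apache.flink.util.Collector;

public class BufferingJoin
        extends KeyedCoProcessFunction<String, Event, RefData, EnrichedEvent> {

    private transient ValueState<RefData> refState;
    private transient ListState<Event> pending;   // events seen before their reference data

    @Override
    public void open(Configuration parameters) {
        refState = getRuntimeContext().getState(
                new ValueStateDescriptor<>("ref", RefData.class));
        pending = getRuntimeContext().getListState(
                new ListStateDescriptor<>("pending", Event.class));
    }

    @Override
    public void processElement1(Event event, Context ctx, Collector<EnrichedEvent> out)
            throws Exception {
        RefData ref = refState.value();
        if (ref != null) {
            out.collect(new EnrichedEvent(event, ref));
        } else {
            pending.add(event);                   // park the event until the reference arrives
        }
    }

    @Override
    public void processElement2(RefData ref, Context ctx, Collector<EnrichedEvent> out)
            throws Exception {
        refState.update(ref);
        for (Event event : pending.get()) {       // flush everything that was waiting
            out.collect(new EnrichedEvent(event, ref));
        }
        pending.clear();
    }
}

Wired up as dataStream.connect(referenceStream).keyBy(...).process(new
BufferingJoin()), this trades some extra state for results that do not depend
on which stream happens to be read first.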
Hi,
For a Flink batch job, some values are written to Kafka through a Producer.
I want to register a hook for closing the Kafka producer a worker is using at
the end of the job, the hook being executed, of course, on the worker side.
Is there a way to do so?
Thanks.
Regards,
Dominique
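For what it's worth, one common way to get such a hook is to let a Rich*
function own the producer: open() runs on the worker when the task starts and
close() runs on the worker when the task finishes (and should also be invoked
on failure as part of task cleanup). A hedged sketch, with the broker address
and topic name as placeholders:

import java.util.Properties;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class KafkaForwardingMapper extends RichMapFunction<String, String> {

    private transient KafkaProducer<String, String> producer;

    @Override
    public void open(Configuration parameters) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");               // placeholder address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        producer = new KafkaProducer<>(props);                       // created on the worker
    }

    @Override
    public String map(String value) {
        producer.send(new ProducerRecord<>("output-topic", value));  // placeholder topic
        return value;
    }

    @Override
    public void close() {
        if (producer != null) {
            producer.flush();    // push out anything still buffered
            producer.close();    // executed on the worker side when the task ends
        }
    }
}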
Hi,
Is there a way to rewind Kafka cursors (Kafka being consumed here as a source)
from inside a Flink job?
Use case [plan A]
* The Flink job would listen to 1 main "data" topic + 1 secondary "event" topic
* In case of a given event, the Flink job would rewind all Kafka cursors of
the "data" topic