Hi,
I now realize that you are using the batch API, and I gave you an answer
for the streaming API :(
The mapPartition function (when implemented as a RichMapPartitionFunction)
also has a close() method, which you can use to implement the same pattern.
With a JVM shutdown hook, you are assuming that the TaskManager is shutting
down at the end of the job, which is not guaranteed: a TaskManager can stay
alive and run further jobs.
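A minimal sketch of that pattern (the broker address and topic name below are
placeholders, not from this thread):

import org.apache.flink.api.common.functions.RichMapPartitionFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

// Create the producer in open(), write in mapPartition(), and release it
// in close(), which Flink invokes on the worker when the task finishes.
public class KafkaWritingMapper extends RichMapPartitionFunction<String, String> {

    private transient KafkaProducer<String, String> producer;

    @Override
    public void open(Configuration parameters) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        producer = new KafkaProducer<>(props);
    }

    @Override
    public void mapPartition(Iterable<String> values, Collector<String> out) {
        for (String value : values) {
            producer.send(new ProducerRecord<>("my-topic", value)); // placeholder topic
            out.collect(value);
        }
    }

    @Override
    public void close() {
        if (producer != null) {
            producer.close(); // runs on the worker at task shutdown
        }
    }
}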
Hi Robert,
Thanks for your hint and help.
So far I have not tested your way (maybe next), but I tried another one:
* use mapPartitions
-- at the beginning, get a KafkaProducer
-- the KafkaProducerFactory class I use is lazy and caches the first
instance created, so there is reuse
* register a JVM shutdown hook that closes the producer (a sketch follows
below)
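Roughly, the factory looks like this (a sketch only; the real class and
method names in Dominique's code may differ):

import org.apache.kafka.clients.producer.KafkaProducer;

import java.util.Properties;

// Lazy, caching factory: the first call creates the producer and registers
// a JVM shutdown hook to close it; subsequent calls reuse the same instance.
public final class KafkaProducerFactory {

    private static volatile KafkaProducer<String, String> cached;

    private KafkaProducerFactory() {}

    public static KafkaProducer<String, String> get(Properties props) {
        KafkaProducer<String, String> p = cached;
        if (p == null) {
            synchronized (KafkaProducerFactory.class) {
                p = cached;
                if (p == null) {
                    p = new KafkaProducer<>(props);
                    cached = p;
                    final KafkaProducer<String, String> toClose = p;
                    // close the cached producer when the worker JVM exits
                    Runtime.getRuntime().addShutdownHook(new Thread(toClose::close));
                }
            }
        }
        return p;
    }
}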
Hi,
Flink's ProcessFunction has a close() method, which is executed on shutdown
of the workers. (You could also use any of the Rich* functions for that
purpose).
If you add a ProcessFunction with the same parallelism before the
KafkaSink, it'll be executed on the same machines as the Kafka producer.
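A minimal sketch of that idea (the cleanup body is a placeholder):

import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;

// Pass-through function whose close() runs on the same workers as the
// Kafka sink that follows it in the pipeline.
public class CleanupBeforeSink extends ProcessFunction<String, String> {

    @Override
    public void processElement(String value, Context ctx, Collector<String> out) {
        out.collect(value); // forward records unchanged
    }

    @Override
    public void close() {
        // cleanup to run on worker shutdown, e.g. releasing shared resources
    }
}

and then, roughly: stream.process(new CleanupBeforeSink()).addSink(kafkaSink);
with matching parallelism so both run on the same workers.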
Hi,
For a Flink batch job, some values are written to Kafka through a producer.
I want to register a hook that closes the Kafka producer a worker is using
at the end of the job; the hook should be executed, of course, on the
worker side.
Is there a way to do so?
Thanks.
Regards,
Dominique