Hi Yuta,
You can use cancel-with-savepoint to stop your application and save its state in
a savepoint, then update your jar and restart the application from that
savepoint. Checkpointing is an automatic mechanism for recovering from runtime
failures, while savepoints are designed for manual restarts.
…this sink, the duplicated messages have not been read, so everything is OK.
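For reference, a minimal sketch of the cancel-with-savepoint workflow using the Flink CLI (the savepoint directory, job id, and jar name below are placeholder examples, not values from this thread):

```shell
# Cancel the running job and trigger a savepoint in one step.
# The job id comes from `flink list`; the target directory is an example path.
flink cancel -s hdfs:///flink/savepoints <jobId>

# After replacing the jar, resume from the savepoint path printed by the
# cancel command above.
flink run -s hdfs:///flink/savepoints/savepoint-XXXXXX updated-job.jar
```

The savepoint path is printed on successful cancellation, so it can be passed directly to `flink run -s` for the upgraded jar.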
Kind regards,
Nastaran Motavalli
From: Piotr Nowojski
Sent: Thursday, November 29, 2018 3:38:38 PM
To: Nastaran Motavali
Cc: user@flink.apache.org
Subject: Re: Duplicated messages in kafka
From: Kostas Kloudas
Sent: Thursday, November 29, 2018 1:22:12 PM
To: Nastaran Motavali
Cc: user
Subject: Re: Memory does not be released after job cancellation
Hi Nastaran,
Could you specify what additional information you need?
From the discussion that you posted:
1) If you have batch jobs, then Flink
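As an aside, one setting often involved in this behavior (an assumption about the cause, since the message above is cut off): when taskmanager.memory.preallocate is enabled, Flink allocates its managed memory up front and holds it for the lifetime of the TaskManager, so it is not returned when a job is cancelled. A flink-conf.yaml sketch with example values:

```yaml
# flink-conf.yaml (example values, not from this thread)
# With preallocation off, managed memory segments are allocated lazily
# and can be garbage-collected after the job releases them.
taskmanager.memory.preallocate: false
```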
Hi,
I have a Flink streaming job implemented in Java which reads messages
from a Kafka topic, transforms them, and finally sends them to another Kafka
topic.
The Flink version is 1.6.2 and the Kafka version is 0.11. I pass the
Semantic.EXACTLY_ONCE parameter to the producer. The problem is that I see
duplicated messages in the destination topic.
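For context, the setup described above might look like the following sketch (topic names, the schema, and the job structure are placeholder assumptions; only the Semantic.EXACTLY_ONCE parameter is taken from the message). Note that EXACTLY_ONCE writes records inside Kafka transactions, so the raw topic also contains records from aborted transactions:

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011;
import org.apache.flink.streaming.util.serialization.KeyedSerializationSchemaWrapper;

public class ExactlyOnceJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Checkpointing must be enabled: EXACTLY_ONCE commits its Kafka
        // transactions on checkpoint completion.
        env.enableCheckpointing(60_000);

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");

        DataStream<String> input = env.addSource(
            new FlinkKafkaConsumer011<>("source-topic", new SimpleStringSchema(), props));

        // EXACTLY_ONCE: records are written transactionally and only become
        // visible to read_committed consumers after the checkpoint commits.
        FlinkKafkaProducer011<String> producer = new FlinkKafkaProducer011<>(
            "sink-topic",
            new KeyedSerializationSchemaWrapper<>(new SimpleStringSchema()),
            props,
            FlinkKafkaProducer011.Semantic.EXACTLY_ONCE);

        input.map(String::toUpperCase).addSink(producer);
        env.execute("exactly-once-job");
    }
}
```

With this setup, downstream consumers of the sink topic must set isolation.level=read_committed in their Kafka consumer properties; a read_uncommitted consumer (the Kafka default) also sees records from aborted transactions, which appear as duplicates.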
Hi,
I have a simple Java application that uses Flink 1.6.2.
When I run the jar file, I can see that the job consumes part of the host's
main memory. If I cancel the job, the consumed memory is not released
until I stop the whole cluster. How can I release the memory after cancellation?
I have f