Hi,

first a few quick questions: I assume you are running in HA mode, right? Also,
what version of Flink are you running?

In case you are not running HA, nothing is automatically recovered. With HA,
you would need to manually remove the corresponding entry from ZooKeeper. If
this is the problem, I suggest using Flink’s ZooKeeper namespaces feature to
isolate different runs of a job.
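
For reference, both steps could be sketched roughly like this. The ZooKeeper address, paths, and namespace value below are placeholders, and the exact configuration keys depend on your Flink version (these follow the 1.x HA documentation), so please check the docs for your release before running anything:

```shell
# Inspect and remove the stale job entry from ZooKeeper. The path is an
# example -- it is derived from high-availability.zookeeper.path.root
# (default /flink) in your flink-conf.yaml:
bin/zkCli.sh -server zk-host:2181
# then, inside the ZooKeeper shell:
#   ls /flink/default/jobgraphs
#   rmr /flink/default/jobgraphs/<job-id>

# Alternatively, give each run of the cluster its own namespace in
# flink-conf.yaml, so a restarted cluster cannot see old entries:
#   high-availability: zookeeper
#   high-availability.zookeeper.quorum: zk-host:2181
#   high-availability.zookeeper.path.namespace: /my-job-run-2
```

The namespace can also be passed per invocation via the CLI's `-z <namespace>` option, which avoids editing the config file for every run.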

Best,
Stefan


> Am 07.12.2016 um 13:20 schrieb Al-Isawi Rami <rami.al-is...@comptel.com>:
> 
> Hi,
> 
> I have a faulty Flink streaming program running on a cluster that is consuming 
> from Kafka, so I brought the cluster down. Now I have a new version that has 
> the fix. If I bring up the Flink cluster again, the old faulty program 
> will be recovered and it will consume and stream faulty results. How can I 
> cancel it before bringing up the cluster again? There are a million Kafka 
> messages waiting to be consumed and I do not want the old program to consume 
> them. The cluster is backed by S3 and I found some blobs there that Flink 
> will recover the old program from, but it sounds like a bad idea to just 
> delete them.
> 
> Any ideas?
> 
> 
> Regards,
> -Rami
