Hmm, I did not realize that.

I was planning, when upgrading a job (consuming from Kafka), to cancel it
with a savepoint and then restart it from that savepoint. But the
savepoint was giving me the apparently false impression that I would not
lose anything? My understanding was that I might process some events
twice in this case, but certainly not miss events entirely.

Did I misunderstand this thread?

If not, this sounds pretty annoying. Do people have some sort of
workaround for it?
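For reference, this is the sequence I had in mind with the Flink CLI (the
job ID, savepoint directory, and jar name below are placeholders):

```shell
# Take a savepoint and cancel the job in one step;
# the CLI prints the path of the savepoint it created.
flink cancel -s hdfs:///flink/savepoints <jobId>

# Resume the upgraded job from that savepoint.
flink run -s hdfs:///flink/savepoints/savepoint-xxxx my-upgraded-job.jar
```

My assumption was that because the Kafka offsets are part of the
savepoint state, the restarted job would pick up where the old one left
off.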

Thanks,
--
Christophe



On Mon, Feb 19, 2018 at 5:50 PM, Till Rohrmann <trohrm...@apache.org> wrote:

> Hi Bart,
>
> you're right that Flink currently does not support a graceful stop
> mechanism for the Kafka source. The community already has a good idea of
> how to solve it in the general case and will hopefully add it to Flink soon.
>
> Concerning the StoppableFunction: this interface was introduced quite some
> time ago and currently only works for some batch sources. To make it work
> with streaming, we need to add more functionality to the engine so that a
> job can be properly stopped and a savepoint taken.
>
> Cheers,
> Till
>
> On Mon, Feb 19, 2018 at 3:36 PM, Bart Kastermans <fl...@kasterma.net>
> wrote:
>
>> In https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/cli.html
>> it is shown that
>> for gracefully stopping a job you need to implement the StoppableFunction
>> interface.  This
>> appears not (yet) implemented for Kafka consumers.  Am I missing
>> something, or is there a
>> different way to gracefully stop a job using a kafka source so we can
>> restart it later without
>> losing any (in flight) events?
>>
>> - bart
>>
>
>
