Hi,
Sorry for entering the discussion somewhat late, but I commented on the issue
you created; please have a look.
Best,
Aljoscha
> On 20. Oct 2017, at 16:56, Antoine Philippot wrote:
Hi Piotrek,
I come back to you with a Jira ticket that I created and a proposal
the ticket : https://issues.apache.org/jira/browse/FLINK-7883
the proposal :
https://github.com/aphilippot/flink/commit/9c58c95bb4b68ea337f7c583b7e039d86f3142a6
I'm open to any comments or suggestions.
Antoine
Hi,
That’s good to hear :)
I quickly went through the code and it seems reasonable. I think we need to
think a little more about how this cancel checkpoint should be exposed to the
operators and what the default action should be - right now the cancel flag is
ignored by default, I wou
Thanks for your advice, Piotr.
Firstly, yes, we are aware that even with a clean shutdown we can end up with
duplicated messages after a crash, and that is acceptable as it is rare and
unintentional, unlike deploying new business code or scaling up/down.
I made a fork of the 1.2.1 version which we curren
We are planning to work on this clean shut down after releasing Flink 1.4.
Implementing this properly would require some work, for example:
- adding checkpoint options that carry information about a "closing"/"shutting
down" event
- adding clean shutdown to the source functions API
- implement handling o
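The steps above suggest a shutdown path where the source stops emitting and one final snapshot covers everything that was emitted. A minimal, plain-Java sketch of that idea (all names here are hypothetical, this is not Flink's actual API):

```java
// Hypothetical sketch of a "clean shutdown" source loop: when a stop is
// requested, the loop stops emitting, and one final snapshot is taken AFTER
// the last emitted record, so a restore from that savepoint replays nothing.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

public class CleanShutdownSketch {
    final AtomicBoolean stopRequested = new AtomicBoolean(false);
    final List<Integer> emitted = new ArrayList<>();
    int lastCheckpointedOffset = -1;

    void run(int maxRecords) {
        for (int i = 0; i < maxRecords && !stopRequested.get(); i++) {
            emitted.add(i); // emit a record downstream
        }
        // clean shutdown: snapshot state after the last emitted record
        lastCheckpointedOffset = emitted.size() - 1;
    }

    void requestStop() {
        stopRequested.set(true);
    }

    public static void main(String[] args) {
        CleanShutdownSketch s = new CleanShutdownSketch();
        s.run(5);
        // every emitted record is covered by the final snapshot
        System.out.println("final checkpointed offset = " + s.lastCheckpointedOffset);
        // prints: final checkpointed offset = 4
    }
}
```

The point of the sketch is only the ordering: stop emitting first, snapshot last, so the savepoint's offsets match exactly what was produced.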
Thanks Piotr for your answer. We sadly can't use Kafka 0.11 for now (and
won't be able to for a while).
We cannot afford tens of thousands of duplicated messages for each
application upgrade. Can I help by working on this feature?
Do you have any hint or details on this part of that "todo list" ?
Hi,
For failure recovery with Kafka 0.9 it is not possible to avoid duplicated
messages. Using Flink 1.4 (not yet released) combined with Kafka 0.11, it will
be possible to achieve end-to-end exactly-once semantics when writing to Kafka.
However, this is still a work in progress:
https://issues.apach
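For background, the Kafka 0.11 exactly-once sink work is based on a two-phase commit over Kafka transactions. A toy, self-contained sketch of the idea (hypothetical names, not Flink's actual TwoPhaseCommitSinkFunction):

```java
// Toy two-phase-commit sink: records live in an open "transaction", are
// pre-committed when a checkpoint barrier arrives, and only become visible
// to consumers once the checkpoint completes. A failure before commit
// aborts the pre-committed data, so nothing is ever seen twice.
import java.util.ArrayList;
import java.util.List;

public class TwoPhaseCommitToy {
    List<String> openTxn = new ArrayList<>();
    List<String> preCommitted = new ArrayList<>();
    final List<String> visible = new ArrayList<>(); // what consumers can read

    void write(String record) {
        openTxn.add(record);
    }

    // phase 1: on checkpoint barrier, seal the current transaction
    void preCommit() {
        preCommitted = openTxn;
        openTxn = new ArrayList<>();
    }

    // phase 2: on checkpoint-complete notification, publish it
    void commit() {
        visible.addAll(preCommitted);
        preCommitted = new ArrayList<>();
    }

    // on failure before commit: pre-committed data is never made visible
    void abort() {
        preCommitted.clear();
        openTxn.clear();
    }

    public static void main(String[] args) {
        TwoPhaseCommitToy t = new TwoPhaseCommitToy();
        t.write("a");
        t.write("b");
        t.preCommit(); // checkpoint barrier arrives
        t.write("c");  // belongs to the next transaction
        t.commit();    // checkpoint completed
        System.out.println("visible = " + t.visible); // prints: visible = [a, b]
    }
}
```

Records written after the barrier ("c" above) stay in the next transaction, which is what makes the visible output line up exactly with checkpoint boundaries.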
Hi,
I'm working on a Flink streaming app with a Kafka 0.9 to Kafka 0.9 use case
which handles around 100k messages per second.
To upgrade our application, we used to run a flink cancel with savepoint
command followed by a flink run with the previously saved savepoint and the
new application fat jar as
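For reference, the cancel-with-savepoint upgrade described above typically looks like this on the CLI (the job id, savepoint directory, and jar name below are placeholders):

```shell
# Trigger a savepoint and cancel the job in one step
flink cancel -s hdfs:///flink/savepoints <jobID>

# Resume the upgraded job from the savepoint path the previous command printed
flink run -s hdfs:///flink/savepoints/savepoint-XXXX new-app-fat.jar
```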