The goal is to do it before the end of this year. For this to happen, the first
release candidate would need to be available by the end of November/beginning of
December.
There is an ongoing discussion here:
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Schedule-and-Scope-fo
Hi,
Thank you Ufuk.
Hmm, out of curiosity:
Is there any idea when 1.2 will be released?
Best Regards,
Daniel Santos
On November 7, 2016 12:45:51 PM GMT+00:00, Ufuk Celebi wrote:
>On 7 November 2016 at 13:06:16, Daniel Santos (dsan...@cryptolab.net)
>wrote:
>> I believe the job won't star
On 7 November 2016 at 13:06:16, Daniel Santos (dsan...@cryptolab.net) wrote:
> I believe the job won't start from the last savepoint, is that correct?
> On versions ( > 1.2 ), will it start afresh?
Yes, with 1.2 you will be able to take a savepoint and then resume from that
savepoint with different parallelism.
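Roughly, that means triggering a savepoint with "flink savepoint <jobId>" and then
resubmitting the job with "flink run -s <savepointPath> -p <newParallelism> your-job.jar"
(the jar name is just a placeholder, and the exact flags may differ slightly between
versions).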
Hi,
Thank you very much.
I see. Ok it makes sense.
I believe there is kind of a catch with parallelism.
Say one takes a savepoint and then changes the parallelism.
I believe the job won't start from the last savepoint, is that correct?
On versions ( > 1.2 ), will it start afresh?
Best Regards,
Daniel Santos
Hi,
the state of the window is kept by the WindowOperator (which uses the state
descriptor you mentioned to access the state). The FoldFunction does not
itself keep the state but is only used to update the state inside the
WindowOperator, if you will.
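To make that concrete, here is a rough sketch (the event type and names are made up,
imports omitted) for a keyed stream of String events: the running Long accumulator
lives in the window state, while the FoldFunction only computes the new value:

DataStream<Long> counts = events
    .keyBy(new KeySelector<String, String>() {
        @Override
        public String getKey(String value) {
            return value; // key by the event itself, just for illustration
        }
    })
    .timeWindow(Time.minutes(10))
    .fold(0L, new FoldFunction<String, Long>() {
        @Override
        public Long fold(Long accumulator, String value) {
            // updates the accumulator held by the WindowOperator; no state kept in here
            return accumulator + 1;
        }
    });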
When you say restart, are you talking about st
Hello Aljoscha,
Thank you for your reply.
But I believe, reading the docs, that any user function has to be a
Rich Function if it wishes to have state.
However, a Rich Function cannot be used or accepted on a Window.
For instance, looking at the Flink source for version 1.1.3, which is the one I'm
Hi Daniel,
Flink will checkpoint the state of all operators (in your case to HDFS).
Flink has several APIs for dealing with state in user functions:
https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/state.html
The window operator also internally uses these APIs.
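For reference, a minimal sketch of the keyed-state API from that page (class name,
state name, and types are just illustrative):

import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;

// Used on a keyed stream; Flink checkpoints the ValueState to the configured backend.
public class RunningSum extends RichFlatMapFunction<Tuple2<String, Long>, Tuple2<String, Long>> {

    private transient ValueState<Long> sum;

    @Override
    public void open(Configuration parameters) {
        // Default value 0L is returned when no state exists yet for the current key.
        sum = getRuntimeContext().getState(
            new ValueStateDescriptor<>("sum", Long.class, 0L));
    }

    @Override
    public void flatMap(Tuple2<String, Long> input,
                        Collector<Tuple2<String, Long>> out) throws Exception {
        long updated = sum.value() + input.f1;
        sum.update(updated);
        out.collect(Tuple2.of(input.f0, updated));
    }
}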
Let me know if you n
Hello,
I have a question that has been bugging me.
Let's say we have a Kafka Source.
Checkpointing is enabled, with a period of 5 seconds.
We have an FsStateBackend ( Hadoop ).
Now imagine we have a tumbling window of 10 minutes.
For simplicity we are going to say that we are counting all elements
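Sketched in code, the setup I mean looks roughly like this (broker, topic, HDFS path,
and the Kafka connector version are placeholders, not our actual configuration):

import java.util.Properties;

import org.apache.flink.api.common.functions.FoldFunction;
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class WindowCountJob {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.enableCheckpointing(5000); // checkpoint every 5 seconds
        env.setStateBackend(new FsStateBackend("hdfs:///flink/checkpoints")); // placeholder path

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "kafka:9092"); // placeholder broker
        props.setProperty("group.id", "window-count");

        DataStream<String> events = env.addSource(
            new FlinkKafkaConsumer09<>("events", new SimpleStringSchema(), props));

        // Count all elements per 10-minute tumbling window (non-keyed, for simplicity).
        DataStream<Long> counts = events
            .timeWindowAll(Time.minutes(10))
            .fold(0L, new FoldFunction<String, Long>() {
                @Override
                public Long fold(Long count, String value) {
                    return count + 1;
                }
            });

        counts.print();
        env.execute("window count");
    }
}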