Hi Ufuk,
Thanks for this information - this is good news!
Updating Flink to 1.1 is not really in our hands, but will hopefully
happen soon :-)
Thank you and best regards
Konstantin
On 26.10.2016 16:07, Ufuk Celebi wrote:
> On Wed, Oct 26, 2016 at 3:06 PM, Konstantin Gregor
> wrote:
>> We are still using 1.0.1 so this is an expected behavior, but I just
>> wondered whether there are any news concerning this topic.
On Wed, Oct 26, 2016 at 3:06 PM, Konstantin Gregor
wrote:
> We are still using 1.0.1 so this is an expected behavior, but I just
> wondered whether there are any news concerning this topic.
Yes, we will add an option to ignore this while restoring. This will
be added to the upcoming 1.1.4 and 1.2 releases.
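For reference, a sketch of what that option can look like once it exists
(this assumes the SavepointRestoreSettings API and flag name of later
releases; the savepoint path is made up). On the command line the same
thing is exposed as "flink run -s <savepointPath> --allowNonRestoredState <jar>":

    import org.apache.flink.runtime.jobgraph.JobGraph;
    import org.apache.flink.runtime.jobgraph.SavepointRestoreSettings;

    public class RestoreSettingsSketch {
        public static void configure(JobGraph jobGraph) {
            // true = state in the savepoint that has no matching operator in the
            // new job is skipped instead of failing the restore
            jobGraph.setSavepointRestoreSettings(
                    SavepointRestoreSettings.forPath(
                            "hdfs:///flink/savepoints/savepoint-1234", true));
        }
    }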
Hi everyone,
I found this thread while examining an issue where Flink could not start
from a savepoint. The problem was that we removed an operator, pretty much
the same thing that occurred to Josh earlier in this thread.
We are still using 1.0.1 so this is an expected behavior, but I just
wondered whether there are any news concerning this topic.
Hi,
I think it would probably be a good idea to make these tunable from the
command line. Otherwise we might run into the problem of accidentally
restoring a job that should fail, as it does now.
Gyula
Stephan Ewen wrote (on Tue, 2 Aug 2016 at 17:17):
> +1 to ignore unmatched state.
>
+1 to ignore unmatched state.
Also +1 to allow programs that resume partially (add some new state that
starts empty)
Both are quite important for program evolution.
On Tue, Aug 2, 2016 at 2:58 PM, Ufuk Celebi wrote:
> No, unfortunately this is the same for 1.1. The idea was to be explicit
> about what works and what not.
No, unfortunately this is the same for 1.1. The idea was to be explicit
about what works and what not. I see that this is actually a pain for this
use case (which is very nice and reasonable ;)). I think we can either
always ignore state that does not match the new job or, if that is too
aggressive
+Ufuk, looping him in directly
Hmm, I think this was changed for the 1.1 release. Ufuk, could you please
comment?
On Mon, 1 Aug 2016 at 08:07 Josh wrote:
> Cool, thanks - I've tried out the approach where we replay data from the
> Kafka compacted log, then take a savepoint and switch to the live stream.
Cool, thanks - I've tried out the approach where we replay data from the
Kafka compacted log, then take a savepoint and switch to the live stream.
It works but I did have to add in a dummy operator for every operator that
was removed. Without doing this, I got an exception:
java.lang.IllegalStateException
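To make that workaround concrete, a minimal sketch of such a dummy operator
(this is not Josh's actual job; the source, pipeline, and uid are made up).
The point is only to keep a no-op operator around under the removed
operator's uid so the old state still has somewhere to be mapped:

    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class DummyOperatorJob {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();

            DataStream<String> events = env.socketTextStream("localhost", 9999);

            // Identity map standing in for the operator that was removed from the
            // job. It keeps the removed operator's uid so the savepoint restore
            // can still map the old state to something.
            DataStream<String> placeholder = events
                    .map(new MapFunction<String, String>() {
                        @Override
                        public String map(String value) {
                            return value; // no-op
                        }
                    })
                    .uid("removed-operator-uid");

            placeholder.print();
            env.execute("Job with a placeholder for a removed operator");
        }
    }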
Hi,
I have to try this to verify, but I think the approach works if you give the
two sources different UIDs. The reason is that Flink will ignore state for
which it doesn't have an operator to assign it to. Therefore, the state of
the "historical Kafka source" should be silently discarded.
Cheers,
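A minimal sketch of what "different UIDs" can look like in practice (topic
names, uids, the Kafka 0.8 connector, and the phase flag are all assumptions,
not code from this thread): the warm-up run uses the historical source, the
follow-up run uses the live source, and the two sources never share a uid:

    import java.util.Properties;

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer08;
    import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

    public class WarmUpThenSwitchJob {
        public static void main(String[] args) throws Exception {
            boolean warmUp = args.length > 0 && "warmup".equals(args[0]);

            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();

            Properties props = new Properties();
            props.setProperty("zookeeper.connect", "localhost:2181");
            props.setProperty("bootstrap.servers", "localhost:9092");
            props.setProperty("group.id", "warmup-demo");

            // Warm-up run reads the compacted/historical topic, the normal run
            // reads the live topic; each source has its own uid.
            DataStream<String> events = warmUp
                    ? env.addSource(new FlinkKafkaConsumer08<>(
                              "events-compacted", new SimpleStringSchema(), props))
                          .uid("historical-kafka-source")
                    : env.addSource(new FlinkKafkaConsumer08<>(
                              "events-live", new SimpleStringSchema(), props))
                          .uid("live-kafka-source");

            events.print();
            env.execute(warmUp ? "Warm-up from compacted topic" : "Live processing");
        }
    }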
@Aljoscha - The N-input operator way sounds very nice. For now I think I'll
try and get something quick running the hacky way, then if we decide to
make this a permanent solution maybe I can work on the proper solution. I
was wondering about your suggestion for "warming up" the state and then
taking
Aljoscha's approach is probably better, but to answer your questions...
> How do you send a request from one Flink job to another?
All of our different Flink jobs communicate over Kafka. So the main Flink
job would be listening to both a "live" Kafka source and a "historical"
Kafka source. The h
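Roughly what that two-source setup could look like (a sketch, not Jason's
code; topic names, the Kafka 0.8 connector, and the union are assumptions):

    import java.util.Properties;

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer08;
    import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

    public class LiveAndHistoricalJob {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();

            Properties props = new Properties();
            props.setProperty("zookeeper.connect", "localhost:2181");
            props.setProperty("bootstrap.servers", "localhost:9092");
            props.setProperty("group.id", "live-and-historical");

            // Live events, produced continuously.
            DataStream<String> live = env
                    .addSource(new FlinkKafkaConsumer08<>(
                            "events-live", new SimpleStringSchema(), props))
                    .uid("live-source");

            // Replayed events, published by the separate "historical" job on request.
            DataStream<String> historical = env
                    .addSource(new FlinkKafkaConsumer08<>(
                            "events-historical", new SimpleStringSchema(), props))
                    .uid("historical-source");

            // Downstream logic sees one stream and does not care where an event
            // came from.
            live.union(historical).print();
            env.execute("Live + historical Kafka sources");
        }
    }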
Hi,
This might be useful to you
https://www.mapr.com/blog/savepoints-apache-flink-stream-processing-whiteboard-walkthrough
Thanks,
Jagat Singh
On 29 July 2016 at 20:59, Aljoscha Krettek wrote:
> Hi,
> I think the exact thing you're trying to do is not possible right now but
> know of a workaround that some people have used.
Hi,
I think the exact thing you're trying to do is not possible right now but I
know of a workaround that some people have used.
For "warming up" the state from the historical data, you would run your
regular Flink job but replace the normal Kafka source by a source that
reads from the historical data.
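One possible shape of that warm-up variant (a sketch only; the archive path,
the file-based source, and the pipeline are assumptions, not Aljoscha's code):
the live Kafka source is swapped for a read of archived events while the
downstream operators, and their uids, stay the same, so the state they build
up is what the later savepoint captures:

    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class WarmUpFromArchiveJob {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();

            // Historical source: archived events read as text files instead of
            // the live Kafka topic.
            DataStream<String> historical = env.readTextFile("hdfs:///archive/events/");

            // The downstream operator (and its uid) is the same one the live job
            // uses, so the state it builds here is what the savepoint captures.
            DataStream<String> enriched = historical
                    .map(new MapFunction<String, String>() {
                        @Override
                        public String map(String value) {
                            return value.toLowerCase(); // stand-in for the real per-event logic
                        }
                    })
                    .uid("enrichment-operator");

            enriched.print();
            env.execute("Warm-up from archived events");
        }
    }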
Hi Jason,
Thanks for the reply - I didn't quite understand all of it though!
> it sends a request to the historical Flink job for the old data
How do you send a request from one Flink job to another?
> It continues storing the live events until all the events from the
> historical job have been processed.
Hey Josh,
The way we replay historical data is we have a second Flink job that
listens to the same live stream, and stores every single event in Google
Cloud Storage.
When the main Flink job that is processing the live stream gets a request
for a specific data set that it has not been processing, it sends a request
to the historical Flink job for the old data.
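A sketch of the archiving side of such a setup (not Jason's code; the topic
name, bucket path, and the RollingSink/GCS Hadoop connector choice are all
assumptions): a small job that just copies every live event into rolling
files in object storage:

    import java.util.Properties;

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.fs.RollingSink;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer08;
    import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

    public class ArchiveToStorageJob {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();
            // Checkpointing so the rolling sink can finalize completed files.
            env.enableCheckpointing(60000);

            Properties props = new Properties();
            props.setProperty("zookeeper.connect", "localhost:2181");
            props.setProperty("bootstrap.servers", "localhost:9092");
            props.setProperty("group.id", "event-archiver");

            DataStream<String> events = env
                    .addSource(new FlinkKafkaConsumer08<>(
                            "events-live", new SimpleStringSchema(), props))
                    .uid("archive-kafka-source");

            // Every event is appended to rolling files under the bucket path
            // (assumes the gs:// filesystem is configured via the Hadoop connector).
            events.addSink(new RollingSink<String>("gs://my-event-archive/events"));

            env.execute("Archive live events to object storage");
        }
    }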
Hi all,
I was wondering what approaches people usually take with reprocessing data
with Flink - specifically the case where you want to upgrade a Flink job,
and make it reprocess historical data before continuing to process a live
stream.
I'm wondering if we can do something similar to the 'simpl