[...] between the 2 running instances of the job (since they both would
belong to the same consumer group). So when I stop the older version of the
job, I stand to lose data (in spite of the fact that my downstream consumer
is idempotent).

If I used a different consumer group for the new job version (and start it
from a savepoint), will the savepoint ensure that the second job instance
starts from the correct offset? Do I need to do anything extra to make this
work (for example, set the uid on the source of the job)?

Thanks!
Moiz
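As far as I know, Flink's Kafka source snapshots its partition offsets into the savepoint itself rather than relying on the Kafka consumer group, which is why a restore can resume correctly even under a different group id. The toy, Flink-free sketch below (class and method names are made up for illustration) models just that idea: a "savepoint" is a map of partition-to-offset, and the restored job resumes from it.

```java
import java.util.HashMap;
import java.util.Map;

public class SavepointOffsetDemo {
    // Toy stand-in for the per-partition offsets a savepoint would store
    // for a Kafka source. In real Flink this lives in operator state.
    static Map<Integer, Long> savepoint = new HashMap<>();

    // "Old" job consumes up to some offset per partition, then is
    // cancelled after drawing a savepoint.
    static void runOldJob() {
        savepoint.put(0, 42L);  // partition 0 consumed through offset 42
        savepoint.put(1, 17L);  // partition 1 consumed through offset 17
    }

    // "New" job, restored from the savepoint, resumes from the stored
    // offsets -- independent of which consumer group it uses.
    static long resumeOffset(int partition) {
        return savepoint.getOrDefault(partition, 0L);
    }

    public static void main(String[] args) {
        runOldJob();
        System.out.println(resumeOffset(0)); // 42
        System.out.println(resumeOffset(1)); // 17
    }
}
```

The sketch also illustrates why a stable uid on the source matters: the restored job must be able to match the saved state back to the same operator.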
Hi!
I think in many cases it is more convenient to have a savepoint-and-stop
operation to use for upgrading the cluster/job, but it should not be
required. If the output of your job needs to be exactly-once and you don't
have an external deduplication mechanism, then even the current
fault-tolerance [...]
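The "external deduplication mechanism" mentioned above can be sketched with a write-if-absent sink guard. This is a toy illustration only (the `DedupSink` class and its id store are hypothetical; a real deployment would keep the seen-id set in an external store such as a database keyed on event id):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DedupSink {
    private final Set<String> seen = new HashSet<>();   // stand-in for an external id store
    private final List<String> written = new ArrayList<>();

    // Write-if-absent: records replayed after a restore (or emitted by an
    // overlapping second job instance) are recognized by id and dropped.
    public boolean write(String eventId, String payload) {
        if (!seen.add(eventId)) {
            return false;       // duplicate from a replay; skip
        }
        written.add(payload);
        return true;
    }

    public List<String> contents() {
        return written;
    }
}
```

With such a guard downstream, at-least-once delivery from the job becomes effectively-once at the sink.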
Hi Greg,
yes certainly, there are more requirements to this than the quick sketch I
gave above, and that seems to be one of them.
Cheers,
Aljoscha
On Thu, 22 Dec 2016 at 17:54 Greg Hogan wrote:
> Aljoscha,
>
> For the second, possible solution is there also a requirement that the
> data sinks handle out-of-order writes?
Aljoscha,
For the second, possible solution is there also a requirement that the data
sinks handle out-of-order writes? If the new job outpaces the old job which
is then terminated, the final write from the old job could have overwritten
"newer" writes from the new job.
Greg
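Greg's overwrite concern can be sketched as a version-guarded write: each record carries a logical version (for example, an event timestamp), and the store refuses writes older than what it already holds, so a straggling write from the terminated old job cannot clobber newer output. This is a toy, Flink-free sketch with made-up names, not anything the thread's participants proposed verbatim:

```java
import java.util.HashMap;
import java.util.Map;

public class VersionedStore {
    // A stored value together with the logical version that produced it.
    static final class Entry {
        final long version;
        final String value;
        Entry(long version, String value) { this.version = version; this.value = value; }
    }

    private final Map<String, Entry> store = new HashMap<>();

    // Apply a write only if its version is at least the stored one,
    // making out-of-order (stale) writes harmless.
    public void put(String key, long version, String value) {
        Entry cur = store.get(key);
        if (cur == null || version >= cur.version) {
            store.put(key, new Entry(version, value));
        }
    }

    public String get(String key) {
        Entry e = store.get(key);
        return e == null ? null : e.value;
    }
}
```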
On Tue, Dec 20, 2016, Aljoscha Krettek wrote:
Hi Stephan -
I agree that the savepoint-shutdown-restart model is nominally the same as a
rolling restart, with one notable exception: a lack of atomicity. There is a
gap between invoking the savepoint command and the shutdown command. My problem
isn't fortunate enough to have idempotent operations. [...]
Hi Andrew!
Would be great to know if what Aljoscha described works for you. Ideally,
this costs no more than a failure/recovery cycle, which one typically also
gets with rolling upgrades.
Best,
Stephan
On Tue, Dec 20, 2016 at 6:27 PM, Aljoscha Krettek wrote:

> Hi,
> zero-downtime updates are currently not supported. [...]
Hi,
zero-downtime updates are currently not supported. What is supported in
Flink right now is a savepoint-shutdown-restore cycle. With this, you first
draw a savepoint (which is essentially a checkpoint with some metadata),
then you cancel your job, then you do whatever you need to do (update
machines [...]), and then restore from the savepoint.
Hi. Does Apache Flink currently have support for zero downtime or the
ability to do rolling upgrades?
If so, what are concerns to watch for and what best practices might
exist? Are there version management and data inconsistency issues to
watch for?