This was the other thread, right?

http://search-hadoop.com/m/Flink/VkLeQ0dXIf1SkHpY?subj=Re+Does+job+restart+resume+from+last+known+internal+checkpoint+

On Wed, Jul 19, 2017 at 9:02 AM, Moiz Jinia <moiz.ji...@gmail.com> wrote:

> Yup! Thanks.
>
> Moiz
>
> —
> sent from phone
>
> On 19-Jul-2017, at 9:21 PM, Aljoscha Krettek wrote:
>
> This was now answered in your other thread, right?
>
> Best,
> Aljoscha
>
> On 18 Jul 2017, at 11:37, Moiz Jinia wrote:
>
> Aljoscha Krettek wrote
>
> Hi,
> zero-downtime updates are currently not supported. What is supported in
> Flink right now is a savepoint-shutdown-restore cycle. With this, you first
> draw a savepoint (which is essentially a checkpoint with some metadata),
> then you cancel your job, then you do whatever you need to do (update
> machines, update Flink, update the job), and finally restore from the
> savepoint.
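>
> For example, with the command-line client the cycle could roughly look like
> this (the job id, savepoint directory, and jar path are placeholders for
> whatever your deployment actually uses):
>
>   # trigger a savepoint for the running job
>   bin/flink savepoint <jobID> hdfs:///flink/savepoints
>
>   # cancel the job once the savepoint has been written
>   bin/flink cancel <jobID>
>
>   # after updating machines / Flink / the job, resume from the savepoint
>   bin/flink run -s hdfs:///flink/savepoints/savepoint-<id> path/to/your-job.jar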
>
> A possible solution for a zero-downtime update would be to do a savepoint,
> then start a second Flink job from that savepoint, and then shut down the
> first job. With this, your data sinks would need to be able to handle being
> written to by 2 jobs at the same time, i.e. writes should probably be
> idempotent.
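>
> As a rough sketch of what an idempotent write could look like (the
> Tuple2<String, String> element type and the in-memory map standing in for
> an external upsert-capable store are purely illustrative assumptions, not
> anything from a specific setup):
>
>   import java.util.concurrent.ConcurrentHashMap;
>
>   import org.apache.flink.api.java.tuple.Tuple2;
>   import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
>
>   // Keyed upsert: writing the same (key, value) twice leaves the store in
>   // the same state, so the old and the new job can overlap safely.
>   public class UpsertSink extends RichSinkFunction<Tuple2<String, String>> {
>
>       // stand-in for an external store that supports upserts (e.g. a K/V DB)
>       private static final ConcurrentHashMap<String, String> STORE =
>               new ConcurrentHashMap<>();
>
>       @Override
>       public void invoke(Tuple2<String, String> record) throws Exception {
>           STORE.put(record.f0, record.f1); // upsert by deterministic key
>       }
>   }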
>
> This is the link to the savepoint doc:
> https://ci.apache.org/projects/flink/flink-docs-release-1.2/setup/savepoints.html
>
> Does that help?
>
> Cheers,
> Aljoscha
>
> On Fri, 16 Dec 2016 at 18:16, Andrew Hoblitzell <ahoblitzell@...> wrote:
>
> Hi. Does Apache Flink currently have support for zero downtime or the
> ability to do rolling upgrades?
>
> If so, what are concerns to watch for and what best practices might
> exist? Are there version management and data inconsistency issues to
> watch for?
>
>
> When a second job instance is started in parallel from a savepoint, my
> incoming Kafka messages would get sharded between the 2 running instances
> of the job (since they both would belong to the same consumer group). So
> when I stop the older version of the job, I stand to lose data (in spite of
> the fact that my downstream consumer is idempotent).
>
> If I use a different consumer group for the new job version (and start it
> from a savepoint), will the savepoint ensure that the second job instance
> starts from the correct offset? Do I need to do anything extra to make this
> work (for example, set the uid on the source of the job, as in the sketch
> below)?
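>
> (For concreteness, by "set the uid on the source" I mean something like the
> following; the topic, group id, bootstrap servers, and the 0.10 connector
> are placeholders, not necessarily what the job actually uses:)
>
>   import java.util.Properties;
>
>   import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
>   import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;
>   import org.apache.flink.streaming.util.serialization.SimpleStringSchema;
>
>   public class JobV2 {
>       public static void main(String[] args) throws Exception {
>           StreamExecutionEnvironment env =
>                   StreamExecutionEnvironment.getExecutionEnvironment();
>
>           Properties props = new Properties();
>           props.setProperty("bootstrap.servers", "kafka:9092"); // placeholder
>           props.setProperty("group.id", "my-job-v2");           // placeholder new group
>
>           env.addSource(new FlinkKafkaConsumer010<>(
>                           "my-topic", new SimpleStringSchema(), props))
>              .uid("kafka-source") // stable uid so savepoint state maps to this source
>              .print();
>
>           env.execute("my-job-v2");
>       }
>   }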
>
> Thanks!
> Moiz
>
