Hi Timo,

Thanks for the update. Flink pipeline deployment is currently very ad hoc and
hard to maintain as a service platform.
Each team may update its pipeline frequently, and those pipelines are a
complete black box to the platform. The ideal situation would be a repo where
all the code lives, mapped to running pipelines via configuration. When new
code lands (without breaking topology compatibility), the job manager should
be notified so it can pick up and load the new classes.

Is there any doc I can follow to wire a user classloader up for a prototype? :)
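
For context, the kind of prototype I have in mind would build on plain JDK
URLClassLoader parent delegation. A minimal sketch (the empty jar list is a
placeholder for the uploaded user jar(s); this is not Flink's actual
user-code classloader API):

```java
import java.net.URL;
import java.net.URLClassLoader;

public class LoaderDemo {
    public static void main(String[] args) throws Exception {
        // In practice these URLs would point at the newly landed user jar(s);
        // with an empty list the loader simply delegates to its parent.
        URL[] jars = new URL[0];
        try (URLClassLoader userLoader =
                 new URLClassLoader(jars, LoaderDemo.class.getClassLoader())) {
            // Classes not found in the jars fall back to the parent loader.
            Class<?> c = Class.forName("java.util.ArrayList", true, userLoader);
            System.out.println(c == java.util.ArrayList.class); // true
        }
    }
}
```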

Thanks,
Chen

On Mon, Oct 2, 2017 at 2:20 AM, Timo Walther <twal...@apache.org> wrote:

> Hi Chen,
>
> I think in a long-term perspective it makes sense to support things like
> this. The next big step is dynamic scaling without stopping the execution.
> Partial upgrades could be addressed afterwards, but I'm not aware of any
> plans.
>
> Until then, I would recommend a different architecture: use connect() and
> stream new logic in dynamically. This is especially interesting for
> ML models etc.
>
> Regards,
> Timo
>
>
> On 10/1/17 at 3:03 AM, Chen Qin wrote:
>
>> Hi there,
>>
>> So far, a Flink job is interpreted and deployed during the bootstrap
>> phase. Once the pipeline runs, it's very hard to do a partial upgrade
>> without stopping execution (and savepoints are heavy). Is there any plan
>> to allow uploading an annotated jar package that hints which stream task
>> implementations CAN BE partially upgraded after the next checkpoint
>> succeeds, without worrying about backfill?
>>
>>
>> Thanks,
>> Chen
>>
>>
>
