I had actually come across flink-deployer, but somehow didn't want to
"learn" it... (versus just writing a bunch of lines in a script)

When we have more bandwidth, we should migrate to it and standardize on
flink-deployer (and later bring this into mainstream Flink :P).

---
Oytun Tez

*M O T A W O R D*
The World's Fastest Human Translation Platform.
oy...@motaword.com — www.motaword.com


On Thu, Apr 25, 2019 at 3:14 AM Marc Rooding <m...@webresource.nl> wrote:

> Hi Steven, Oytun
>
> You may find the tool we open-sourced last year useful. It supports
> deploying and updating jobs with savepointing.
>
> You can find it on Github: https://github.com/ing-bank/flink-deployer
>
> There’s also a Docker image available on Docker Hub.
>
> Marc
> On 24 Apr 2019, 17:29 +0200, Oytun Tez <oy...@motaword.com> wrote:
>
> Hi Steven,
>
> As far as I am aware:
> 1) There is no update call. Our build flow feels a little clunky to us as
> well; it definitely requires scripting.
> 2) We are using the Flink management (REST) API remotely in our build flow
> to 1) get jobs, 2) savepoint them, 3) cancel them, etc. I am going to
> release a Python script for this soon.
>
> ---
> Oytun Tez
>
> *M O T A W O R D*
> The World's Fastest Human Translation Platform.
> oy...@motaword.com — www.motaword.com
>
>
> On Wed, Apr 24, 2019 at 11:06 AM Steven Nelson <snel...@sourceallies.com>
> wrote:
>
>> Hello!
>>
>> I am working on automating our deployments to our Flink cluster. I had a
>> couple questions about the flink cli.
>>
>> 1) I thought there was an "update" command that would internally manage
>> the cancel-with-savepoint, upload-new-jar, restart-from-savepoint process.
>>
>> 2) Is there a way to get the Flink CLI to output its results in JSON
>> format? Right now I would need to parse the results of the "flink list"
>> command to get the job id, cancel the job with a savepoint, parse the
>> results of that to get the savepoint filename, and then restore using
>> that. Parsing the output seems brittle to me.
>>
>> Thoughts?
>> -Steve
>>
>>
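The REST-based flow Oytun describes (get jobs, savepoint, cancel) can be sketched against Flink's monitoring REST API, which runs on the JobManager and returns JSON, avoiding CLI output parsing entirely. The endpoints below (GET /jobs/overview, POST /jobs/:jobid/savepoints with "cancel-job", GET /jobs/:jobid/savepoints/:triggerid) are from that API; the base URL and savepoint directory are illustrative assumptions, and this is a minimal sketch, not a hardened deployment script:

```python
import json
from urllib import request

# Assumption: JobManager reachable on its default REST port.
BASE = "http://localhost:8081"


def list_running_job_ids(overview_json):
    """Extract ids of RUNNING jobs from a GET /jobs/overview response."""
    return [j["jid"] for j in overview_json["jobs"] if j["state"] == "RUNNING"]


def savepoint_request_body(target_dir, cancel=True):
    """Body for POST /jobs/<jid>/savepoints; cancel-job=true also stops the job."""
    return json.dumps({"target-directory": target_dir, "cancel-job": cancel})


def savepoint_location(status_json):
    """Read the savepoint path from GET /jobs/<jid>/savepoints/<trigger-id>.

    Returns None while the async operation is still IN_PROGRESS.
    """
    if status_json["status"]["id"] != "COMPLETED":
        return None
    return status_json["operation"]["location"]


def trigger_savepoint(jid, target_dir):
    """Fire the actual HTTP call (needs a reachable JobManager)."""
    req = request.Request(
        f"{BASE}/jobs/{jid}/savepoints",
        data=savepoint_request_body(target_dir).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        # The response carries a trigger id to poll for completion.
        return json.load(resp)["request-id"]
```

A build script would call `trigger_savepoint`, poll the status endpoint until `savepoint_location` returns a path, then resubmit the new jar restoring from that path.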
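For comparison, the brittle approach Steve describes — scraping the job id out of `flink list` text output before running `flink cancel -s <dir> <jobid>` and `flink run -s <savepoint>` — would look roughly like this. The line format assumed by the regex is based on Flink 1.x output and is an assumption; it is exactly the kind of thing that breaks between versions, which is the argument for the JSON-returning REST API instead:

```python
import re

# Assumed `flink list` line shape (Flink 1.x):
#   24.04.2019 17:29:00 : <32-hex-char job id> : JobName (RUNNING)
LIST_LINE = re.compile(
    r"^\s*[\d.]+ [\d:]+\s*:\s*(?P<jid>[0-9a-f]{32})"
    r"\s*:\s*(?P<name>.+?)\s*\((?P<state>\w+)\)\s*$"
)


def parse_flink_list(output):
    """Return {job_name: job_id} for RUNNING jobs in `flink list` output."""
    jobs = {}
    for line in output.splitlines():
        m = LIST_LINE.match(line)
        if m and m.group("state") == "RUNNING":
            jobs[m.group("name")] = m.group("jid")
    return jobs
```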
