Thanks Niels, actually I also created one :) We will fix this on
master and in the 1.1.2 release.
On Thu, Aug 25, 2016 at 5:14 PM, Niels Basjes wrote:
I have this with a pretty recent version of the source version (not a
release).
Would be great if you see a way to fix this.
I consider it fine if this requires an extra call to the system indicating
that this is a 'multiple job' situation.
I created https://issues.apache.org/jira/browse/FLINK-44
Hi Niels,
This is with 1.1.1? We could fix this in the upcoming 1.1.2 release by
only using automatic shut down for detached jobs. In all other cases
we should be able to shutdown from the client side after running all
jobs. The only downside I see is that Flink clusters may actually
never be shut down.
Hi,
I created a small application that needs to run multiple (batch) jobs on
Yarn and then terminate.
In this case I'm exporting data from a list of HBase tables
Essentially, what I do right now is the following:
flink run -m yarn-cluster -yn 10 bla.jar ...
And in my main I do
foreach thing I need to
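The driver pattern described above can be sketched as follows. This is a hedged illustration, not code from the thread: the class name, table names, and the `runExport` helper are hypothetical stand-ins for building a Flink `ExecutionEnvironment` plan and calling `env.execute()` once per table, so the sketch runs without a Flink dependency.

```java
// Hypothetical sketch of a multi-job driver: one YARN session, one batch
// job submitted per HBase table. runExport() stands in for constructing a
// Flink plan and invoking env.execute(), which submits a separate job.
import java.util.Arrays;
import java.util.List;

public class MultiJobDriver {
    static int jobsSubmitted = 0;

    // Placeholder for: build the export plan for `table`, then env.execute()
    static void runExport(String table) {
        jobsSubmitted++;
        System.out.println("Submitted export job for table " + table);
    }

    public static void main(String[] args) {
        // Illustrative table list; in practice this would come from HBase.
        List<String> tables = Arrays.asList("table_a", "table_b", "table_c");
        for (String t : tables) {
            runExport(t);
        }
        // The fix discussed above would let the client shut the YARN cluster
        // down here, after ALL jobs have run, rather than after the first.
        System.out.println("All " + jobsSubmitted + " jobs done; cluster can shut down.");
    }
}
```

The point of the thread is the last step: with automatic shutdown reserved for detached jobs, the client stays in control and only tears the cluster down once the whole loop has finished.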