Hi Matthias,

The streaming folks can probably answer these questions better, but I'll
write something to bring this message back to their attention ;)

1) Which exceptions are you seeing? Flink should be able to shut down
cleanly.
2) As far as I can see, the execute() method (of the Streaming API) got a
JobExecutionResult return type in the latest master. That contains the
accumulator results (see the first sketch below).
3) I think the cancel() method is there for exactly that purpose (see the
second sketch below). If the job is shutting down before the cancel()
method is called, that is probably a bug.
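
Regarding 2), here is a minimal sketch of how reading the accumulator
results could look. It assumes the latest master as mentioned above; the
job name and the accumulator name "my-counter" are just placeholders:

    import org.apache.flink.api.common.JobExecutionResult;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();
    // ... build the streaming topology here ...

    // execute() blocks until the job has finished and returns the result
    JobExecutionResult result = env.execute("my job");

    // accumulator values are available on the returned object
    Long count = result.getAccumulatorResult("my-counter");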
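
Regarding 3), this is roughly the source pattern you describe, written
out. It is only a sketch: the exact SourceFunction package and run()
signature depend on the Flink version you are on, and RandomSource is a
made-up name:

    import java.util.Random;
    import org.apache.flink.streaming.api.functions.source.SourceFunction;

    public class RandomSource implements SourceFunction<Long> {

        // volatile, because cancel() is called from a different thread
        private volatile boolean running = true;

        @Override
        public void run(SourceContext<Long> ctx) throws Exception {
            Random rnd = new Random();
            while (running) {
                ctx.collect(rnd.nextLong());
            }
            // leaving run() regularly lets the job shut down cleanly
        }

        @Override
        public void cancel() {
            running = false;
        }
    }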


Robert



On Fri, Mar 27, 2015 at 10:07 PM, Matthias J. Sax <
mj...@informatik.hu-berlin.de> wrote:

> Hi,
>
> I am trying to run an infinite streaming job (i.e., one that does not
> terminate because it generates output data randomly on the fly). I
> kill this job with the .stop() or .shutdown() method of
> ForkableFlinkMiniCluster.
>
> I did not find any example using a similar setup. In the provided
> examples, each job terminates automatically, because only a finite
> input is processed and the source returns after all data is emitted.
>
>
> I have multiple questions about my setup:
>
>  1) The job never terminates "cleanly", i.e., I get some exceptions. Is
> this behavior desired?
>
>  2) Is it possible to get a result back? Similar to
> JobClient.submitJobAndWait(...)?
>
>  3) Is it somehow possible to send a signal to the running job such
> that the source can terminate regularly, as if finite input were being
> processed? Right now, I use a while(running) loop and set 'running' to
> false in the .cancel() method.
>
>
>
> Thanks for your help!
>
> -Matthias
>
>
>
