Hi,

The best way to do so is to use a Flink feature called savepoints. You can find 
more here:

https://ci.apache.org/projects/flink/flink-docs-master/apis/streaming/savepoints.html
 
<https://ci.apache.org/projects/flink/flink-docs-master/apis/streaming/savepoints.html>

In a nutshell, a savepoint takes a consistent snapshot of the state of your 
job at the moment it is triggered, and you can later resume execution from 
that point.

Using this, you can write your initial job, and whenever you want to add the 
new operator, you take a savepoint. After adding the new operator, you can 
start the new job from the point where the old one stopped. In addition, the 
old job can keep running in case you still need it, so there will be no 
downtime.

If this does not cover your use case, it would be helpful if you shared some 
more information about what exactly you want to do, so that we can figure out 
a solution that fits your needs.

Kostas

> On Jul 7, 2016, at 1:25 PM, adamlehenbauer <adam.lehenba...@gmail.com> wrote:
> 
> Hi, I'm exploring using Flink to replace an in-house micro-batch application.
> Many of the features and concepts are perfect for what I need, but the
> biggest gap is that there doesn't seem to be a way to add new operations at
> runtime after execute(). 
> 
> What is the preferred approach for adding new operations, windows, etc to a
> running application? Should I start multiple execution contexts?
> 
> 
> 
> 
> --
> View this message in context: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Adding-and-removing-operations-after-execute-tp7863.html
> Sent from the Apache Flink User Mailing List archive at Nabble.com.
