Hi,
changing the parallelism of a running job is currently not possible. 
To change the parallelism you would have to take a savepoint and then 
restore from that savepoint with a different parallelism.

This is the savepoints documentation: 
https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/savepoints.html
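
With the CLI the steps would look roughly like this (the job ID, savepoint path, and jar name below are just placeholders):

  # take a savepoint of the running job (optionally giving a target directory)
  bin/flink savepoint <jobId> [savepointDirectory]

  # cancel the running job (or combine both steps with: bin/flink cancel -s [savepointDirectory] <jobId>)
  bin/flink cancel <jobId>

  # resubmit the job from the savepoint with a different parallelism
  bin/flink run -s <savepointPath> -p 12 your-job.jar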

Best,
Aljoscha
> On 21. Apr 2017, at 15:22, Dominik Safaric <dominiksafa...@gmail.com> wrote:
> 
> Hi all,
> 
> Is it possible to set the operator parallelism using the Flink CLI while a job 
> is running? 
> 
> I have a cluster of 4 worker nodes, where each node has 4 CPUs, hence the 
> number of task slots is set to 4 and parallelism.default to 16. 
> 
> However, if a worker fails while the job is configured at the system level 
> to run with 16 task slots, the exception “Not enough free slots available 
> to run the job.” is raised and the job does not continue but aborts instead. 
> 
> Is this the expected behaviour? Shouldn’t Flink continue the job execution 
> with, in this case, only 12 slots available? If not, can someone change the 
> parallelism of a job while it is in restart mode in order to allow the job to 
> continue? 
> 
> Thanks,
> Dominik
