On 16 Mar 2015, at 08:27, Emmanuel <ele...@msn.com> wrote:

> Hello,
> 
> In my understanding, the flink-conf.yaml is the one config file to configure
> a cluster.
> The slaves file lists the slave nodes.
> They must both be on every node.

The slaves file is only used by the bin/start-cluster.sh startup script. The 
other configuration files (flink-conf.yaml, log4j.properties, etc.) need to be 
available on each worker node if you want to run a custom configuration, 
that's true.
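For illustration, the slaves file (conf/slaves) simply lists one worker 
hostname per line; the hostnames below are placeholders:

    worker1.example.com
    worker2.example.com
    worker3.example.com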

The usual setup is to start the system from a shared directory that is mounted 
on each node. If you don't have that in place, it would make sense to write a 
small script that syncs the Flink directory to the different nodes of your 
setup; a sketch follows below. How do you do it currently? You need to 
transfer the Flink files anyway, no?
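A minimal sketch of such a sync script, assuming passwordless SSH to the 
workers, rsync on all machines, and the same install path everywhere (the 
path below is a placeholder):

    #!/usr/bin/env bash
    # Push the local Flink directory to every worker listed in conf/slaves.
    FLINK_HOME=/path/to/flink   # placeholder: adjust to your install location
    while read -r worker; do
        rsync -az --delete "$FLINK_HOME/" "$worker:$FLINK_HOME/"
    done < "$FLINK_HOME/conf/slaves"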

> Does the cluster need to be restarted to take the new nodes into account? It 
> seems like it.
> Having to replicate the file on all nodes is not super convenient. Restarting 
> is even more trouble.
> Is there a way to scale a live cluster? If so how?

Thanks for the pointer. I think it's a good idea to add documentation for this.

You can add new worker nodes at runtime. You need to use the bin/taskmanager.sh 
script on the new worker node, though:

path/to/bin/taskmanager.sh start &

The new worker will be available to all programs submitted after it has 
registered with the master.
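You can verify the registration in the JobManager log or web frontend. A 
worker started this way can later be taken down again with the same script 
(a sketch, assuming the standard startup scripts):

    path/to/bin/taskmanager.sh stop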

– Ufuk
