Thanks Max and Stephan for the response.

On Wednesday, February 17, 2016, Stephan Ewen <se...@apache.org> wrote:

> Hi Deepak!
>
> The "slaves" file is only used by the SSH script to start a standalone
> cluster.
>
> As Max said, TaskManagers register dynamically at the JobManager.
>
> Discovery works via:
>    - config in non-HA mode
>    - ZooKeeper in HA mode
>
>
>
> On Wed, Feb 17, 2016 at 10:11 AM, Maximilian Michels <m...@apache.org> wrote:
>
> > Hi Deepak,
> >
> > The job manager doesn't have to know about task managers. They will
> > simply register at the job manager using the provided configuration.
> > In HA mode, they will first look up the currently leading job manager
> > and then connect to it. The job manager can then assign work.
> >
> > Cheers,
> > Max
> >
> > On Tue, Feb 16, 2016 at 10:41 PM, Deepak Jha <dkjhan...@gmail.com> wrote:
> > > Hi All,
> > > I have a question on scaling up/scaling down a Flink cluster.
> > > As per the documentation, in order to scale up the cluster, I can add
> > > a new taskmanager on the fly and the jobmanager can assign work to it.
> > > Assuming I have Flink HA, in the event of a master JobManager failure,
> > > how are these taskmanager details going to get transferred? I believe
> > > the new master will just read the contents of the slaves config file.
> > > Can anyone give more clarity on how this is done? Or is it the union
> > > of slaves and the taskmanagers that are added on the fly?
> > >
> > > --
> > > Thanks,
> > > Deepak
> >
>
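For anyone following along, the two discovery modes Stephan describes map to a few
flink-conf.yaml entries. A minimal sketch, assuming a Flink version from this era
(the `recovery.*` keys) and with hypothetical hostnames and ZooKeeper addresses:

```yaml
# Non-HA mode: TaskManagers find the JobManager via static config.
# "jobmanager-host" is a made-up hostname for illustration.
jobmanager.rpc.address: jobmanager-host
jobmanager.rpc.port: 6123

# HA mode: TaskManagers instead look up the current leader in ZooKeeper,
# so a newly elected master learns about registered TaskManagers without
# ever reading the slaves file. Quorum addresses below are placeholders.
recovery.mode: zookeeper
recovery.zookeeper.quorum: zk1:2181,zk2:2181,zk3:2181
recovery.zookeeper.path.root: /flink
```

Either way, the slaves file only matters to the start-cluster.sh SSH script;
TaskManagers added by hand on the fly register with the (leading) JobManager
exactly the same way.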


-- 
Sent from Gmail Mobile
