Hi,
Thanks for your help. Since our data source is organized as database tables,
I am thinking of following the 'JDBCInputFormat' approach in Flink. It would
be great if you could provide some information on how the JDBCInputFormat
execution happens.
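
For context, this is roughly how I imagine wiring it up. It is only a sketch:
the driver, URL, query, and column types below are made-up placeholders, and
the exact builder methods may differ between Flink versions:

    import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.api.java.io.jdbc.JDBCInputFormat;
    import org.apache.flink.api.java.typeutils.RowTypeInfo;
    import org.apache.flink.types.Row;

    public class JdbcSourceSketch {
        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

            // Placeholder connection details and query -- not a real database.
            JDBCInputFormat inputFormat = JDBCInputFormat.buildJDBCInputFormat()
                    .setDrivername("org.h2.Driver")
                    .setDBUrl("jdbc:h2:mem:analytics")
                    .setQuery("SELECT id, name FROM some_table")
                    .setRowTypeInfo(new RowTypeInfo(
                            BasicTypeInfo.LONG_TYPE_INFO,
                            BasicTypeInfo.STRING_TYPE_INFO))
                    .finish();

            // The format reads the query results in parallel as Rows.
            DataSet<Row> rows = env.createInput(inputFormat);
            rows.print();
        }
    }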

Thanks,
Pawan

On Mon, Jan 23, 2017 at 4:18 PM, Fabian Hueske <fhue...@gmail.com> wrote:

> Hi Pawan,
>
> I don't think this works. The InputSplits are generated by the JobManager,
> i.e., by a single process rather than in parallel.
> After the parallel InputFormats have been started on the TaskManagers, they
> request InputSplits and open() them. If there are no InputSplits, there is
> no work to be done and open() will not be called.
> You can tweak the behavior by implementing your own InputSplits and
> InputSplitAssigner which assigns exactly one input split to each task.
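>
> A rough sketch of what such an assigner could look like (the class and
> field names are placeholders, not an existing Flink class, and depending
> on your Flink version the InputSplitAssigner interface may require
> additional methods):
>
>     import org.apache.flink.core.io.InputSplit;
>     import org.apache.flink.core.io.InputSplitAssigner;
>
>     // Hypothetical assigner: every parallel task gets exactly one
>     // pre-built split, identified by its task index.
>     public class OneSplitPerTaskAssigner implements InputSplitAssigner {
>
>         private final InputSplit[] splits;    // one split per subtask
>         private final boolean[] assigned;
>
>         public OneSplitPerTaskAssigner(InputSplit[] splits) {
>             this.splits = splits;
>             this.assigned = new boolean[splits.length];
>         }
>
>         @Override
>         public synchronized InputSplit getNextInputSplit(String host, int taskIndex) {
>             // Hand out the matching split once; afterwards return null so
>             // the task knows there is no more work for it.
>             if (taskIndex < splits.length && !assigned[taskIndex]) {
>                 assigned[taskIndex] = true;
>                 return splits[taskIndex];
>             }
>             return null;
>         }
>     }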
>
> Fabian
>
> 2017-01-23 8:44 GMT+01:00 Pawan Manishka Gunarathna <
> pawan.manis...@gmail.com>:
>
> > Hi,
> >
> > When implementing the Flink *InputFormat* interface, if the *input split
> > creation* part is already handled by our data analytics server APIs, can
> > we go directly to the second phase of the InputFormat execution?
> >
> > Basically, I need to know whether we can read those InputSplits directly,
> > without generating the InputSplits inside the InputFormat itself. It
> > would be great if you could provide any kind of help.
> >
> > Thanks,
> > Pawan
> >
> > --
> >
> > *Pawan Gunaratne*
> > *Mob: +94 770373556*
> >
>



-- 

*Pawan Gunaratne*
*Mob: +94 770373556*
