Thanks a lot Zhu Zhu for such a detailed explanation.
On Mon, 21 Oct 2019 at 08:33, Zhu Zhu wrote:
> Sources of batch jobs process InputSplits. Each InputSplit can be a file
> or a file block, depending on the FileSystem (for HDFS it is a file
> block). Sources retrieve the InputSplits to process from the
> InputSplitAssigner on the JM.
> In this way, the assignment of InputSplits to source tasks can take data
> locality into account.
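The locality-preferring assignment described above can be sketched roughly like this (a toy Java sketch, not Flink's actual InputSplitAssigner; the Split/Assigner names and node names are invented for illustration):

```java
import java.util.*;

// Toy sketch of locality-aware split assignment (NOT Flink's real
// InputSplitAssigner; class, method, and host names here are invented).
public class LocalitySketch {

    // A split knows which hosts store its data (like an HDFS block).
    record Split(int id, Set<String> hosts) {}

    static class Assigner {
        private final List<Split> unassigned;

        Assigner(List<Split> splits) {
            unassigned = new ArrayList<>(splits);
        }

        // Called by a source task running on `host` when it needs work:
        // prefer a split whose data is local to that host, and fall back
        // to any remaining split so no task sits idle.
        synchronized Optional<Split> nextSplit(String host) {
            for (Iterator<Split> it = unassigned.iterator(); it.hasNext(); ) {
                Split s = it.next();
                if (s.hosts().contains(host)) {
                    it.remove();
                    return Optional.of(s);
                }
            }
            return unassigned.isEmpty()
                    ? Optional.empty()
                    : Optional.of(unassigned.remove(0));
        }
    }

    public static void main(String[] args) {
        Assigner a = new Assigner(List.of(
                new Split(0, Set.of("node1")),
                new Split(1, Set.of("node2")),
                new Split(2, Set.of("node1"))));
        // A task on node2 gets its local split first (id 1), then a
        // remote one once the local splits are exhausted.
        System.out.println(a.nextSplit("node2").get().id()); // 1
        System.out.println(a.nextSplit("node2").get().id()); // 0
    }
}
```

Tasks pull splits from the central assigner as they finish, so fast tasks naturally take more work; locality is a preference here, not a placement constraint.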
Hi Zhu Zhu,
Thanks for your detailed answer.
Can you please help me understand how a Flink task processes the data
locally on the data nodes first?
I want to understand how Flink determines which processing should run on
which data nodes.
Regards,
Pritam.
On Sat, 19 Oct 2019 at 08:16, Zhu Zhu wrote:
Hi Pritam,
Flink does not deploy tasks to specific nodes according to source data
locations.
Instead, it lets each task process local input splits (splits whose data is
on the same node) first.
So if your parallelism is large enough to spread tasks across all the data
nodes, most data can be processed locally.
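As a rough illustration of that point (a toy simulation with invented node names and counts, not Flink code): with one split per node and one source task per node, every task finds a local split.

```java
import java.util.*;

// Toy simulation (invented numbers, not Flink code): n splits, one per
// node, and one source task per node. Counts how many splits are read
// locally when each task takes a split from its own node first.
public class LocalReads {

    static int simulate(int n) {
        // number of splits remaining on each node
        Map<String, Integer> splitsOnHost = new HashMap<>();
        for (int i = 0; i < n; i++) splitsOnHost.put("node" + i, 1);

        int local = 0;
        for (int task = 0; task < n; task++) {
            String host = "node" + task; // task i runs on node i
            if (splitsOnHost.getOrDefault(host, 0) > 0) {
                splitsOnHost.merge(host, -1, Integer::sum);
                local++; // this split is read from the local node
            }
        }
        return local;
    }

    public static void main(String[] args) {
        System.out.println("local reads: " + simulate(16) + "/16"); // 16/16
    }
}
```

With fewer tasks than nodes, some splits would necessarily be fetched remotely, which is why matching the parallelism to the data distribution helps.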
Thanks,
Hi,
I am trying to process data stored on HDFS using Flink batch jobs.
Our data is split across 16 data nodes.
I am curious to know how data will be pulled from the data nodes when the
parallelism is set to the same number as the data splits on HDFS, i.e. 16.
Is the flink task being executed locally on