Re: Data processing with HDFS local or remote

2019-10-21 Thread Pritam Sadhukhan
Thanks a lot Zhu Zhu for such a detailed explanation. On Mon, 21 Oct 2019 at 08:33, Zhu Zhu wrote: > Sources of batch jobs process InputSplits. Each InputSplit can be a file or a file block, depending on the FileSystem (for HDFS it is a file block). Sources retrieve the InputSplits to process from the InputSplitAssigner at the JM.

Re: Data processing with HDFS local or remote

2019-10-20 Thread Zhu Zhu
Sources of batch jobs process InputSplits. Each InputSplit can be a file or a file block, depending on the FileSystem (for HDFS it is a file block). Sources retrieve the InputSplits to process from the InputSplitAssigner at the JM. In this way, the assignment of InputSplits to source tasks can take data locality into account.
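
A minimal sketch of the split side of this, written against Flink's 2019-era batch APIs. The HDFS path and the request for 16 splits are placeholders; the point is only that each FileInputSplit created for an HDFS file carries the host names of the data nodes storing its block, which is the locality information the InputSplitAssigner on the JM later works with:

    import org.apache.flink.api.java.io.TextInputFormat;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.core.fs.FileInputSplit;
    import org.apache.flink.core.fs.Path;

    public class SplitInspector {
        public static void main(String[] args) throws Exception {
            // Placeholder HDFS path; for HDFS, split boundaries follow block boundaries.
            TextInputFormat format =
                    new TextInputFormat(new Path("hdfs://namenode:8020/data/input"));
            format.configure(new Configuration());

            // Ask for at least 16 splits; a file stored in 16 HDFS blocks
            // typically yields one FileInputSplit per block.
            FileInputSplit[] splits = format.createInputSplits(16);

            for (FileInputSplit split : splits) {
                // Each split records the data nodes that host its block.
                System.out.println(split + " -> hosts: "
                        + String.join(", ", split.getHostnames()));
            }
        }
    }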

Re: Data processing with HDFS local or remote

2019-10-20 Thread Pritam Sadhukhan
Hi Zhu Zhu, Thanks for your detailed answer. Can you please help me understand how a Flink task processes the data locally on the data nodes first? I want to understand how Flink determines that the processing should happen at the data nodes. Regards, Pritam. On Sat, 19 Oct 2019 at 08:16, Zhu Zhu wrote: >

Re: Data processing with HDFS local or remote

2019-10-18 Thread Zhu Zhu
Hi Pritam, Flink does not deploy tasks to particular nodes according to the source data locations. Instead, it lets a task process local input splits (data on the same node) first. So if your parallelism is large enough to be distributed across all the data nodes, most of the data can be processed locally. Thanks, Zhu Zhu
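
The local-first behaviour described here lives in the locality-aware split assigner on the JobManager. The sketch below only illustrates that preference, with made-up host names (node-0, node-1, node-2, node-9) and hand-built splits: a task asking from a host that stores a block is handed that local split, while a host with no local block still receives work, just read remotely.

    import org.apache.flink.api.common.io.LocatableInputSplitAssigner;
    import org.apache.flink.core.io.LocatableInputSplit;

    public class LocalityFirstAssignmentDemo {
        public static void main(String[] args) {
            // Made-up splits: split i is stored on data node "node-i".
            LocatableInputSplit[] splits = {
                    new LocatableInputSplit(0, "node-0"),
                    new LocatableInputSplit(1, "node-1"),
                    new LocatableInputSplit(2, "node-2")
            };
            LocatableInputSplitAssigner assigner = new LocatableInputSplitAssigner(splits);

            // A source task running on node-1 asks first and gets its local split.
            System.out.println(assigner.getNextInputSplit("node-1", 0));

            // A task on a host that stores no block still gets work,
            // but that split has to be read remotely.
            System.out.println(assigner.getNextInputSplit("node-9", 1));

            // The assigner keeps counters for local vs. remote assignments.
            System.out.println("local:  " + assigner.getNumberOfLocalAssignments());
            System.out.println("remote: " + assigner.getNumberOfRemoteAssignments());
        }
    }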

Data processing with HDFS local or remote

2019-10-17 Thread Pritam Sadhukhan
Hi, I am trying to process data stored on HDFS using Flink batch jobs. Our data is split across 16 data nodes. I am curious to know how the data will be pulled from the data nodes when the parallelism is set to the same number as the data splits on HDFS, i.e. 16. Is the Flink task executed locally on the data nodes?
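
For reference, a job of the kind described here might look like the sketch below, written against the DataSet API that was current in 2019. The HDFS path and the filter are placeholders; the only relevant piece is setting the source parallelism to 16 to match the 16 blocks / data nodes, so that the scheduler can spread one source subtask onto each node.

    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;

    public class HdfsBatchJob {
        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

            // Match the source parallelism to the 16 HDFS blocks / data nodes so
            // that one source subtask can run on each node and pick up the block
            // stored there.
            DataSet<String> lines = env
                    .readTextFile("hdfs://namenode:8020/data/input") // placeholder path
                    .setParallelism(16);

            // Placeholder downstream processing; count() triggers the execution.
            long nonEmpty = lines.filter(line -> !line.isEmpty()).count();
            System.out.println("Non-empty lines: " + nonEmpty);
        }
    }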