Hi Fabian,
Thank you, I think I have a better picture of this now. If I set the number
of DataSource tasks (a config option, I guess?) equal to the number of input
splits, that would do what I expected.
Yes, I will keep it in the same place across nodes.
Thank you,
Saliya
On Mon, Jan 25, 2016 at 10:59 AM, Fabian Hueske
Hi Saliya,
the number of parallel splits is controlled by the number of input splits
returned by the InputFormat.createInputSplits() method. This method
receives a parameter minNumSplits which is equal to the number of DataSource
tasks.
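To make that concrete, here is a minimal sketch (not code from this thread)
of a format that returns exactly one split per DataSource task; the class
name is made up:

import java.io.IOException;
import org.apache.flink.api.common.io.GenericInputFormat;
import org.apache.flink.core.io.GenericInputSplit;

// Emits its split number once, so each parallel task produces one record.
public class SplitPerTaskFormat extends GenericInputFormat<Integer> {

    private int splitNumber;
    private boolean done;

    @Override
    public GenericInputSplit[] createInputSplits(int minNumSplits) throws IOException {
        // minNumSplits equals the number of DataSource tasks, so this
        // creates exactly one split per task.
        GenericInputSplit[] splits = new GenericInputSplit[minNumSplits];
        for (int i = 0; i < minNumSplits; i++) {
            splits[i] = new GenericInputSplit(i, minNumSplits);
        }
        return splits;
    }

    @Override
    public void open(GenericInputSplit split) throws IOException {
        super.open(split);
        this.splitNumber = split.getSplitNumber();
        this.done = false;
    }

    @Override
    public boolean reachedEnd() {
        return done;
    }

    @Override
    public Integer nextRecord(Integer reuse) {
        done = true;
        return splitNumber;
    }
}

With env.createInput(new SplitPerTaskFormat()).setParallelism(p) you then
get p splits, one per task.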
Flink handles input splits a bit differently from Hadoop. In Ha
Hi Fabian,
Thank you for the information.
So, is there a way I can get the task number within the InputFormat? That
way I can use it to offset the block of rows.
The file is too large to fit in a single process's memory, so the current
setups in MPI and Hadoop use the rank (task number) info to m
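One thing that can serve the same purpose as the MPI rank is the split
number, which is handed to the format in open(). A sketch under the
assumption of fixed-length rows of doubles (ROW_DOUBLES is made up):

import java.io.DataInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import org.apache.flink.api.common.io.FileInputFormat;
import org.apache.flink.core.fs.FileInputSplit;

// Reads one block of fixed-length rows per split.
public class RowBlockFormat extends FileInputFormat<double[]> {

    private static final int ROW_DOUBLES = 1024; // assumed values per row

    private DataInputStream in;
    private long bytesLeft;

    @Override
    public void open(FileInputSplit split) throws IOException {
        super.open(split); // the stream is now at split.getStart()
        // split.getSplitNumber() plays the role of the rank (task number).
        this.in = new DataInputStream(stream);
        this.bytesLeft = split.getLength();
    }

    @Override
    public boolean reachedEnd() {
        return bytesLeft <= 0;
    }

    @Override
    public double[] nextRecord(double[] reuse) throws IOException {
        byte[] buf = new byte[ROW_DOUBLES * 8];
        in.readFully(buf); // one fixed-length row
        bytesLeft -= buf.length;
        double[] row = new double[ROW_DOUBLES];
        ByteBuffer.wrap(buf).asDoubleBuffer().get(row);
        return row;
    }
}

This assumes the splits themselves were created on row boundaries, as in
the split-computation sketch further down the thread.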
Hi Saliya,
yes, that is possible; however, the requirements for reading a binary file
from the local fs are basically the same as for reading it from HDFS.
In order to be able to start reading different sections of a file in
parallel, you need to know the different starting positions. This can be
done b
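For fixed-length records the starting positions fall out of simple
arithmetic. A sketch (the names and the record size are assumptions):

import org.apache.flink.core.fs.FileInputSplit;
import org.apache.flink.core.fs.Path;

// Derive start positions for numSplits parallel readers over a file of
// fixed-length records; RECORD_LEN is an assumed record size in bytes.
static FileInputSplit[] computeSplits(Path file, long fileLen, int numSplits) {
    final long RECORD_LEN = 8192;
    long numRecords = fileLen / RECORD_LEN;
    long perSplit = (numRecords + numSplits - 1) / numSplits; // ceiling
    FileInputSplit[] splits = new FileInputSplit[numSplits];
    for (int i = 0; i < numSplits; i++) {
        long start = Math.min(i * perSplit * RECORD_LEN, fileLen);
        long len = Math.min(perSplit * RECORD_LEN, fileLen - start);
        splits[i] = new FileInputSplit(i, file, start, len, null);
    }
    return splits;
}

Starting each split on a record boundary is what makes the parallel reads
independent of one another.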
There should be an env.readBinaryFile() IIRC; check that.
Sent from my iPhone
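The method name above is from memory; what certainly exists is
env.readFile() with a custom FileInputFormat, e.g. the RowBlockFormat
sketched earlier in this thread:

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
// The path is a placeholder; readFile wires the format to the file.
DataSet<double[]> rows = env.readFile(new RowBlockFormat(), "file:///data/matrix.bin");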
Thank you for the response on this, but I still have some doubt. Simply, the
file is not in HDFS; it's in local storage. In Flink, if I run the program
with, say, 5 parallel tasks, what I would like to do is read a block of rows
in each task, as shown below. I looked at the simple CSV reader and w
With readHadoopFile you can use all of Hadoop’s FileInputFormats, and thus
you can do everything with Flink that you can do with Hadoop. Simply use the
same Hadoop FileInputFormat that you would use for your MapReduce job.
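For example (the path and the Writable types here are placeholders):

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapred.SequenceFileInputFormat;

ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

// The same Hadoop FileInputFormat a MapReduce job would use:
DataSet<Tuple2<LongWritable, BytesWritable>> input = env.readHadoopFile(
    new SequenceFileInputFormat<LongWritable, BytesWritable>(),
    LongWritable.class, BytesWritable.class,
    "hdfs:///path/to/matrix.seq");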
Cheers,
Till
On Wed, Jan 20, 2016 at 3:16 PM, Saliya Ekanayake
Thank you, I saw the readHadoopFile, but I was not sure how it can be used
for the following, which is what I need. The logic of the code requires an
entire row to operate on, so in our current implementation with P tasks,
each of them will read a rectangular block of (N/P) x N from the matrix. Is
t
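The block arithmetic itself is straightforward; a sketch assuming the matrix
is stored row-major as 8-byte doubles and P divides N:

// Byte range of the (N/P) x N block of rows that task p should read.
static long[] blockByteRange(long n, int p, int numTasks) {
    long rowsPerTask = n / numTasks;           // N/P rows per task
    long startByte = p * rowsPerTask * n * 8L; // first byte of task p's block
    long numBytes = rowsPerTask * n * 8L;      // size of one block of rows
    return new long[] { startByte, numBytes };
}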
Hi Saliya,
You can use an input format from Hadoop in Flink by using the readHadoopFile
method. The method returns a DataSet whose elements are of type Tuple2. Note
that the MapReduce-equivalent transformation in Flink is composed of map,
groupBy, and reduceGroup.
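A sketch of that composition, assuming input is the
DataSet<Tuple2<LongWritable, Text>> that readHadoopFile returns for a text
file; it counts identical lines:

import org.apache.flink.api.common.functions.GroupReduceFunction;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.util.Collector;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;

DataSet<Tuple2<String, Integer>> counts = input
    // the Hadoop "map" phase: emit (line, 1)
    .map(new MapFunction<Tuple2<LongWritable, Text>, Tuple2<String, Integer>>() {
        public Tuple2<String, Integer> map(Tuple2<LongWritable, Text> kv) {
            return new Tuple2<>(kv.f1.toString(), 1);
        }
    })
    // the shuffle: group by the line
    .groupBy(0)
    // the Hadoop "reduce" phase: sum the ones per group
    .reduceGroup(new GroupReduceFunction<Tuple2<String, Integer>, Tuple2<String, Integer>>() {
        public void reduce(Iterable<Tuple2<String, Integer>> values,
                           Collector<Tuple2<String, Integer>> out) {
            String line = null;
            int sum = 0;
            for (Tuple2<String, Integer> v : values) {
                line = v.f0;
                sum += v.f1;
            }
            out.collect(new Tuple2<>(line, sum));
        }
    });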
> On Jan 20, 2016, at 3:04 PM, Suneel Marthi
I guess you are looking for Flink's BinaryInputFormat to be able to read
blocks of data from HDFS:
https://ci.apache.org/projects/flink/flink-docs-release-0.10/api/java/org/apache/flink/api/common/io/BinaryInputFormat.html
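BinaryInputFormat is abstract; a subclass only has to say how one record is
deserialized. A sketch (the record layout, a single long, is made up):

import java.io.IOException;
import org.apache.flink.api.common.io.BinaryInputFormat;
import org.apache.flink.core.memory.DataInputView;

public class LongRecordFormat extends BinaryInputFormat<Long> {
    @Override
    protected Long deserialize(Long reuse, DataInputView in) throws IOException {
        return in.readLong(); // decode one record from the current block
    }
}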
On Wed, Jan 20, 2016 at 12:45 AM, Saliya Ekanayake
wrote:
> Hi,
>
> I am trying