Hi Steve,
Thanks for the pointers. I looked at FileSystemContractBaseTest and
MainOperationsBaseTest and wrote the setUp() functions to set up NFS
underneath. The extensions of these test classes aren't very clean, and I
hope I can use the rest of the team's help in writing them correctly.
In my
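[Editor's note: a setUp() of the kind described above might look roughly like the sketch below. This is only an illustration: the NFSv3FileSystem class name, the fs.nfs.impl key, and the nfs:// URI are assumptions, not the connector's actual names.]

```java
// Sketch: run Hadoop's FileSystem contract suite against an NFS-backed
// FileSystem by assigning the protected 'fs' field in setUp().
// NFSv3FileSystem and the nfs:// URI below are hypothetical.
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileSystemContractBaseTest;

public class TestNFSFileSystemContract extends FileSystemContractBaseTest {
  @Override
  protected void setUp() throws Exception {
    Configuration conf = new Configuration();
    // Map the (hypothetical) nfs:// scheme to the connector implementation
    conf.set("fs.nfs.impl", "org.apache.hadoop.fs.nfs.NFSv3FileSystem");
    fs = FileSystem.get(URI.create("nfs://nfsserver:2049/export"), conf);
    super.setUp();
  }
}
```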
Gokul,
What we expect from a filesystem is defined in (a) the HDFS code, (b) the
filesystem spec as derived from (a), and (c) contract tests derived from
(a) and (b):
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html
There's a wiki page to go with this
Hi Colin,
Yeah, I should add the reasons to the README. We tried LocalFileSystem when
we started out, but we think we can do tighter Hadoop integration if we
write a connector.
Some examples include:
1. Limiting over-prefetching of data - MapReduce splits jobs into 128MB
splits, and standard NFS d
Hi Niels,
I agree that direct-attached storage seems more economical for many users.
As an HDFS developer, I certainly have a dog in this fight as well :)
But we should be respectful towards people trying to contribute code to
Hadoop and evaluate the code on its own merits. It is up to our users
Why not just use LocalFileSystem with an NFS mount (or several)? I read
through the README but I didn't see that question answered anywhere.
best,
Colin
On Tue, Jan 13, 2015 at 1:35 PM, Gokul Soundararajan wrote:
> Hi,
>
> We (Jingxin Feng, Xing Lin, and I) have been working on providing a
> F
Hi Niels,
Thanks for your comments. My goal in designing the NFS connector is *not*
to replace HDFS. HDFS is ideally suited for Hadoop (otherwise why was it
built?).
The problem is that we have people who have PBs (10PB to 50PB) of data on
NFS storage that they would like to process using Hadoop. Suc
Hi,
We (Jingxin Feng, Xing Lin, and I) have been working on providing a
FileSystem implementation that allows Hadoop to utilize an NFSv3 storage
server as a filesystem. It leverages code from the hadoop-nfs project for
all the request/response handling. We would like your help to add it as part of
hado