Hi HCFS Community :)

This is Jay... some of you know me. I hack on a broad range of file
system and Hadoop ecosystem interoperability stuff. I just wanted to
introduce myself and let you folks know I'm going to be working to help
clean up the existing unit testing frameworks for the FileSystem and
FileContext APIs. I've listed some starting points below.

- Bytecode-inspection-based code coverage for the file system APIs, using a
tool such as Cobertura.

- Addressing HADOOP-9361, which points out that there are many different
types of file systems to test against.

- Creating mock file systems that can be used to validate API tests and
that emulate different FS semantics (atomic directory creation, eventual
consistency, strict consistency, POSIX compliance, append support, etc.);
a rough sketch follows this list.
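
To make the mock file system idea concrete, here is a minimal, purely
hypothetical sketch: a wrapper around a real FileSystem that hides newly
created directories for a fixed window, emulating the eventually
consistent read-after-write behavior some blob stores exhibit. The class
name EventuallyConsistentFS and the delay constant are mine, not existing
Hadoop code:

  import java.io.FileNotFoundException;
  import java.io.IOException;
  import java.util.Map;
  import java.util.concurrent.ConcurrentHashMap;

  import org.apache.hadoop.fs.FileStatus;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.FilterFileSystem;
  import org.apache.hadoop.fs.Path;

  // Hypothetical sketch, not existing Hadoop code: delegates everything
  // to a real FileSystem but delays metadata visibility of new dirs.
  public class EventuallyConsistentFS extends FilterFileSystem {

    private static final long VISIBILITY_DELAY_MS = 5000;
    private final Map<Path, Long> createTimes =
        new ConcurrentHashMap<Path, Long>();

    public EventuallyConsistentFS(FileSystem realFs) {
      super(realFs);
    }

    @Override
    public boolean mkdirs(Path p) throws IOException {
      boolean ok = super.mkdirs(p);
      if (ok) {
        createTimes.put(p, System.currentTimeMillis());
      }
      return ok;
    }

    @Override
    public FileStatus getFileStatus(Path p) throws IOException {
      Long created = createTimes.get(p);
      if (created != null
          && System.currentTimeMillis() - created < VISIBILITY_DELAY_MS) {
        // Emulate a metadata service that hasn't converged yet;
        // FileSystem.exists() goes through getFileStatus(), so the new
        // directory stays invisible until the window elapses.
        throw new FileNotFoundException("Not yet visible: " + p);
      }
      return super.getFileStatus(p);
    }
  }

A test suite could then run the same assertions against both the strict
and the eventually consistent wrapper, to check which guarantees each API
test actually relies on.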

Is anyone interested in the above issues, or does anyone have opinions on
how/where I should get started?

Our end goal is a more transparent and portable set of test APIs for
Hadoop file system implementors across the board, so that we can all test
our individual implementations confidently.
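
As a strawman for what such a portable test API could look like: an
abstract JUnit contract test that each implementation subclasses,
supplying its own FileSystem. The class and method names below are
illustrative only, not an existing Hadoop API:

  import static org.junit.Assert.assertTrue;

  import java.io.IOException;

  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;
  import org.junit.Test;

  // Hypothetical sketch: shared contract tests, with one subclass per
  // implementation (HDFS, GlusterFS, S3, ...) providing the FS under test.
  public abstract class AbstractFSContractTest {

    /** Each implementation supplies its own configured FileSystem. */
    protected abstract FileSystem getTestFileSystem() throws IOException;

    @Test
    public void testMkdirsIsVisible() throws IOException {
      FileSystem fs = getTestFileSystem();
      Path dir = new Path("/contract-test/" + System.nanoTime());
      assertTrue("mkdirs should succeed", fs.mkdirs(dir));
      assertTrue("new directory should be visible", fs.exists(dir));
    }
  }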

So, anywhere I can lend a hand, let me know. I think this effort will
require all of us in the file system community to join forces, and it will
benefit us all immensely in the long run.

-- 
Jay Vyas
http://jayunit100.blogspot.com
