Hello Ravi,
Thank you for your response. I have another question:
I am trying to trace a call from org.apache.hadoop.fs.FsShell to the NameNode.
I am running a simple "ls" to understand how the mechanism works.
What I want to know is which classes are involved along the way when the "ls"
is executed.
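For context, here is roughly how I am driving the command so I can attach a
debugger and step down into the client code (the fs.defaultFS URI below is
just my local single-node setup, adjust as needed):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FsShell;
import org.apache.hadoop.util.ToolRunner;

public class TraceLs {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder NameNode address for a local single-node setup.
        conf.set("fs.defaultFS", "hdfs://localhost:9000");
        // Running "-ls /" through FsShell programmatically makes it easy to
        // set breakpoints and follow the call into the client RPC layer.
        int rc = ToolRunner.run(conf, new FsShell(), new String[] {"-ls", "/"});
        System.exit(rc);
    }
}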
Hi Yasin!
Without knowing more about your project, here are answers to your
questions.
It's trivially easy to start only the DataNode. The HDFS code is very
modular.
https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/Da
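For what it's worth, on the command line "hdfs datanode" runs just that
daemon. Programmatically, something along these lines should work as a rough
sketch; the createDataNode entry point and the config values here are only
assumptions on my part, and the exact signature can vary between versions,
so check DataNode.java in your tree:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.server.datanode.DataNode;

public class StandaloneDataNode {
    public static void main(String[] args) throws Exception {
        Configuration conf = new HdfsConfiguration();
        // The DataNode still registers with a NameNode, so these values are
        // placeholders for whatever plays that role in your setup.
        conf.set("fs.defaultFS", "hdfs://localhost:9000");
        conf.set("dfs.datanode.data.dir", "/tmp/dn-data");
        // Start the daemon threads and block until the DataNode shuts down.
        DataNode dn = DataNode.createDataNode(new String[] {}, conf);
        dn.join();
    }
}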
Hello All,
I am working on a P2P storage project for research purposes.
I want to use the HDFS DataNode as part of this project.
One possibility is to use only the DataNode as a storage engine and do
everything else at an upper level. In this case I would handle all the
metadata management and replication myself.