Assuming you mean the HDFS file-writing code, look at DFSClient
and its use of DFSOutputStream (see the write(…) paths).
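
For example, here is a minimal sketch (untested, and assuming a standard
HDFS configuration is on the classpath) of a client-side write through the
public FileSystem API. When the filesystem is HDFS, the stream returned by
create() is backed by a DFSOutputStream, which is where the block allocation
and datanode pipelining you are asking about happens:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWriteExample {
      public static void main(String[] args) throws Exception {
        // Picks up fs.defaultFS etc. from core-site.xml / hdfs-site.xml.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // On HDFS, create() returns an FSDataOutputStream wrapping a
        // DFSOutputStream (created via DFSClient).
        FSDataOutputStream out = fs.create(new Path("/tmp/example.txt"));

        // Writes are buffered into packets; when a block fills up,
        // DFSOutputStream asks the NameNode to allocate the next block,
        // and the NameNode's reply names the target datanodes to which
        // the data is then pipelined.
        out.write("hello hdfs".getBytes("UTF-8"));

        out.close();
        fs.close();
      }
    }

So the client (DFSClient/DFSOutputStream) drives the writes, while the
NameNode only chooses which datanodes each block lands on.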

On Sun, Nov 11, 2012 at 4:36 PM, salmakhalil <salma_7...@hotmail.com> wrote:
>
>
> Hi,
>
> I am trying to find the part of Hadoop that is responsible for distributing
> the input file fragments to the datanodes. I need to understand the source
> code that handles this distribution.
>
> Can anyone help me locate this part of the code? I tried to read the
> namenode.java file but I could not find anything that helped.
>
> Thanks in advance,
> Salam
>
>
>
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/which-part-of-Hadoop-is-responsible-of-distributing-the-input-file-fragments-to-datanodes-tp4019530.html
> Sent from the Hadoop lucene-dev mailing list archive at Nabble.com.



-- 
Harsh J
