HDFS Source Code

2010-06-07 Thread Vidur Goyal
Hi All, I have been trying to understand the source code of HDFS, but all the classes seem to be highly coupled. Can somebody help me understand it? I have read the documentation on the HDFS architecture. Thanks, Vidur

overwriting a file in hdfs

2010-06-08 Thread Vidur Goyal
Hi, Whenever I overwrite a file in HDFS, the block IDs change. The old blocks are not listed in .Trash or anywhere else in HDFS. Can somebody explain what happens to the old blocks? Thanks, Vidur
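
For reference, an overwrite from the client API is just FileSystem#create with the overwrite flag set. A minimal sketch (the path and configuration below are placeholders, not taken from the thread): the NameNode replaces the old file entry and allocates fresh blocks, the replaced blocks are invalidated on the DataNodes in the background, and nothing passes through .Trash, since Trash is implemented in the shell client rather than in the file system itself.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: overwrite a path twice; each create() with overwrite=true starts a
// brand-new file with freshly allocated blocks. The path is a placeholder.
public class OverwriteDemo {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path p = new Path("/tmp/overwrite-demo.txt");

        // First version of the file.
        FSDataOutputStream out = fs.create(p, true /* overwrite */);
        out.writeBytes("first version\n");
        out.close();

        // Overwrite: the NameNode drops the old file entry, allocates new blocks,
        // and schedules the old blocks for deletion on the DataNodes.
        out = fs.create(p, true /* overwrite */);
        out.writeBytes("second version\n");
        out.close();

        fs.close();
    }
}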

how can we know the statistics of namenode and datanode using APIs

2010-06-10 Thread Vidur Goyal
Hi All, I have been trying to access the statistics of FSNamesystem using FSNamesystemMetrics, but I have not been able to do it yet. Am I doing it right? If not, kindly guide me; I am stuck. Thanks, Vidur

Re: how can we know the statistics of namenode and datanode using APIs

2010-06-11 Thread Vidur Goyal
Thanks, Vidur > Hey Vidur, > Do you need to access it directly in Java, or can you just parse them out of (for example) Ganglia? There are JMX statistics, but I am not familiar enough with JMX at the code level to give you decent advice. > Brian > On Jun 11, 2010
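
A minimal sketch of the JMX route Brian mentions, using only the standard javax.management client API. The host, port, and MBean name pattern are assumptions: remote JMX has to be enabled for the NameNode JVM (for example with the usual com.sun.management.jmxremote.* options in hadoop-env.sh), and the exact ObjectName of the FSNamesystem metrics bean varies between Hadoop versions, so the code wildcards on service=NameNode and prints whatever attributes it finds.

import java.util.Set;
import javax.management.MBeanAttributeInfo;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class NameNodeJmxDump {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint; replace with the NameNode's JMX host and port.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://namenode-host:8004/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // List every MBean registered under service=NameNode and dump its attributes.
            // If nothing matches on your version, broaden the pattern to "*:*".
            Set<ObjectName> names =
                    mbs.queryNames(new ObjectName("*:service=NameNode,*"), null);
            for (ObjectName name : names) {
                System.out.println(name);
                for (MBeanAttributeInfo attr : mbs.getMBeanInfo(name).getAttributes()) {
                    try {
                        System.out.println("  " + attr.getName() + " = "
                                + mbs.getAttribute(name, attr.getName()));
                    } catch (Exception e) {
                        // Some attributes are not readable; skip them.
                    }
                }
            }
        } finally {
            connector.close();
        }
    }
}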

setting up hadoop 0.20.1 development environment

2010-06-14 Thread Vidur Goyal
Hi, I am trying to set up a development environment for Hadoop 0.20.1 in Eclipse. I used this URL http://svn.apache.org/repos/asf/hadoop/common/tags/release-0.20.1/ to check out the source. I built the "compile", "compile-core-test", and "eclipse-files" targets using ant. Then when I build the project, I am getting

Overwriting the same block instead of creating a new one

2010-06-21 Thread Vidur Goyal
Hi All, In FSNamesystem#startFileInternal, whenever the overwrite flag is set, why is the INode removed from the namespace and a new INodeFileUnderConstruction created? Why can't we convert the same INode to an INodeFileUnderConstruction and start writing to the same blocks at the

Re: Overwriting the same block instead of creating a new one

2010-06-21 Thread Vidur Goyal
remaining blocks. -Vidur > HDFS assumes in hundreds of places that blocks never shrink. So there is no option to truncate a block. > -Todd > On Mon, Jun 21, 2010 at 9:41 PM, Vidur Goyal wrote: >> Hi All, >> In FSNamesystem#startFileInternal

Re: Overwriting the same block instead of creating a new one

2010-06-22 Thread Vidur Goyal
>> I'm not following. The "overwrite" flag causes the file to be overwritten starting at offset 0 - it doesn't allow you to retain any bit of the preexisting file. It's equivalent to a remove followed by a create. Think of

Re: Overwriting the same block instead of creating a new one

2010-06-22 Thread Vidur Goyal
C. > -Todd > On Mon, Jun 21, 2010 at 10:03 PM, Vidur Goyal wrote: >> Dear Todd, >> By truncating I meant removing unused *blocks* from the namespace and letting them be garbage collected. There will be no truncation of the last

How is a block allocated?

2010-06-27 Thread Vidur Goyal
Hi all, If I have a LocatedBlock object and I want to link it with a file, how should I proceed? What is the process by which a block gets linked with a file? Regards, Vidur
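
Roughly, a client never links a LocatedBlock to a file itself: the NameNode owns the file-to-block mapping, allocates each new block for a writer through the ClientProtocol addBlock call as the previous block fills up, and hands located blocks to readers on request. A minimal sketch of the public, read-only view of that mapping, assuming a running cluster and an existing file path passed on the command line:

import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListBlockLocations {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path p = new Path(args[0]);                        // e.g. an existing HDFS file
        FileStatus status = fs.getFileStatus(p);

        // Ask the NameNode which blocks back this file and where their replicas live.
        BlockLocation[] blocks =
                fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation block : blocks) {
            System.out.println("offset=" + block.getOffset()
                    + " length=" + block.getLength()
                    + " hosts=" + Arrays.toString(block.getHosts()));
        }
        fs.close();
    }
}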

create error

2010-07-03 Thread Vidur Goyal
Hi, I am trying to create a file in HDFS. I am calling create from an instance of DFSClient. This is part of the code I am using: byte[] buf = new byte[65536]; int len; while ((len = dis.available()) != 0) { if (le
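
The preview cuts off before the error, but one common pitfall in this pattern is the loop condition: InputStream#available() can return 0 before the stream is exhausted, so it is not a reliable way to drive a copy. A minimal sketch of the usual approach, with placeholder file names, going through the public FileSystem facade rather than DFSClient (which applications normally do not call directly) and looping on read() until it returns -1:

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.InputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: copy a local file (args[0]) into a new HDFS file (args[1]).
public class CopyIntoHdfs {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        InputStream in = new BufferedInputStream(new FileInputStream(args[0]));
        FSDataOutputStream out = fs.create(new Path(args[1]), true /* overwrite */);
        try {
            byte[] buf = new byte[65536];
            int len;
            // read() returns -1 at end of stream; write exactly the bytes read.
            while ((len = in.read(buf)) != -1) {
                out.write(buf, 0, len);
            }
        } finally {
            out.close();
            in.close();
            fs.close();
        }
    }
}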