[jira] Resolved: (HDFS-1227) UpdateBlock fails due to unmatched file length

2010-06-22 Thread Todd Lipcon (JIRA)
[ https://issues.apache.org/jira/browse/HDFS-1227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Todd Lipcon resolved HDFS-1227. --- Resolution: Duplicate Going to resolve this as invalid. If you can reproduce after HDFS-1186 is commi

[jira] Resolved: (HDFS-1226) Last block is temporary unavailable for readers because of crashed appender

2010-06-22 Thread Todd Lipcon (JIRA)
[ https://issues.apache.org/jira/browse/HDFS-1226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Todd Lipcon resolved HDFS-1226. --- Resolution: Duplicate > Last block is temporary unavailable for readers because of crashed appender >

[jira] Created: (HDFS-1261) Important methods and fields of BlockPlacementPolicyDefault.java shouldn't be private

2010-06-22 Thread Rodrigo Schmidt (JIRA)
Important methods and fields of BlockPlacementPolicyDefault.java shouldn't be private - Key: HDFS-1261 URL: https://issues.apache.org/jira/browse/HDFS-1261 Project: H

[jira] Resolved: (HDFS-1223) DataNode fails stop due to a bad disk (or storage directory)

2010-06-22 Thread Konstantin Shvachko (JIRA)
[ https://issues.apache.org/jira/browse/HDFS-1223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko resolved HDFS-1223. --- Resolution: Duplicate This is fixed as Todd mentions. BTW sometimes the behavior you d

[jira] Created: (HDFS-1260) 0.20: Block lost when multiple DNs trying to recover it to different genstamps

2010-06-22 Thread Todd Lipcon (JIRA)
0.20: Block lost when multiple DNs trying to recover it to different genstamps -- Key: HDFS-1260 URL: https://issues.apache.org/jira/browse/HDFS-1260 Project: Hadoop HDFS

[jira] Created: (HDFS-1259) getCorruptFiles() should double check blocks that are not really corrupted

2010-06-22 Thread Rodrigo Schmidt (JIRA)
getCorruptFiles() should double check blocks that are not really corrupted -- Key: HDFS-1259 URL: https://issues.apache.org/jira/browse/HDFS-1259 Project: Hadoop HDFS Is

[jira] Created: (HDFS-1258) Clearing namespace quota on "/" corrupts FS image

2010-06-22 Thread Aaron T. Myers (JIRA)
Clearing namespace quota on "/" corrupts FS image - Key: HDFS-1258 URL: https://issues.apache.org/jira/browse/HDFS-1258 Project: Hadoop HDFS Issue Type: Bug Components: name-node

[jira] Resolved: (HDFS-1239) All datanodes are bad in 2nd phase

2010-06-22 Thread Konstantin Shvachko (JIRA)
[ https://issues.apache.org/jira/browse/HDFS-1239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko resolved HDFS-1239. --- Resolution: Invalid > All datanodes are bad in 2nd phase > ---

[jira] Created: (HDFS-1257) Race condition introduced by HADOOP-5124

2010-06-22 Thread Ramkumar Vadali (JIRA)
Race condition introduced by HADOOP-5124 Key: HDFS-1257 URL: https://issues.apache.org/jira/browse/HDFS-1257 Project: Hadoop HDFS Issue Type: Bug Components: name-node Reporter:

[jira] Resolved: (HDFS-18) NameNode startup failed

2010-06-22 Thread Konstantin Shvachko (JIRA)
[ https://issues.apache.org/jira/browse/HDFS-18?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko resolved HDFS-18. - Resolution: Cannot Reproduce Please feel free to reopen if you see this as a problem again.

Re: Source Code Question about blockReport

2010-06-22 Thread Todd Lipcon
On Mon, Jun 21, 2010 at 10:59 PM, Jeff Zhang wrote: > Hi Hadoop Devs, > > I have one question about the blockReport a DataNode sends to the NameNode. I > think the NameNode gets the blockReport from the DataNode, and can then tell > the DataNode which blocks are invalid and which blocks should be replicated. > But I loo

Re: Overwriting the same block instead of creating a new one

2010-06-22 Thread Todd Lipcon
On Mon, Jun 21, 2010 at 10:31 PM, Vidur Goyal wrote: > I know about the current behaviour of HDFS. I am proposing the new > behaviour which I mentioned in my first mail. > > If you're going to propose new behavior, you should be prepared to explain why that new behavior is useful, and what the ne

Re: Overwriting the same block instead of creating a new one

2010-06-22 Thread Vidur Goyal
I know about the current behaviour of HDFS. I am proposing the new behaviour which I mentioned in my first mail. In Hadoop-0.20.2, a new block is allocated and stored on the datanodes and a new INode is created in the namespace. Why is an overwrite treated as a file creation operation? -vidur > Hi V
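To make the behaviour under discussion concrete, here is a small client-side sketch against the public FileSystem API (the path and file contents are purely illustrative). An HDFS "overwrite" is expressed as create(path, overwrite=true): the old file's blocks are invalidated and brand-new blocks are allocated for the new data, rather than the existing replicas being rewritten in place, which is why the namenode sees an overwrite as a delete followed by a create.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class OverwriteExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path p = new Path("/tmp/overwrite-demo.txt");   // illustrative path

        // First write: a new INode is created and new block(s) are allocated.
        FSDataOutputStream out = fs.create(p, false);
        out.writeBytes("version 1\n");
        out.close();

        // "Overwrite": create(path, true) removes the old file and allocates
        // fresh blocks for the new contents; the old replicas are invalidated
        // on the datanodes, not rewritten in place.
        out = fs.create(p, true);
        out.writeBytes("version 2\n");
        out.close();

        fs.close();
      }
    }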

Re: Overwriting the same block instead of creating a new one

2010-06-22 Thread Todd Lipcon
On Mon, Jun 21, 2010 at 10:42 PM, Vidur Goyal wrote: > Like in any other filesystem such as ext4, in the case of an overwrite why don't we > update the existing physical blocks in place? Why is there a need to allocate > new blocks every time an overwrite takes place? Isn't this an overhead? > > I think the issue is

Source Code Question about blockReport

2010-06-22 Thread Jeff Zhang
Hi Hadoop Devs, I have one question about the blockReport a DataNode sends to the NameNode. I think the NameNode gets the blockReport from the DataNode, and can then tell the DataNode which blocks are invalid and which blocks should be replicated. But when I look at the source code of the blockReport method of NameNode, it always
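A minimal, self-contained sketch of the reconciliation the question is about may help; the class and method names below are made up for illustration and are not the actual Hadoop source. The idea is that a block report is compared against the namenode's picture of the namespace: reported blocks that belong to no file are queued for deletion, and known blocks with too few replicas are queued for re-replication. Note that in the real protocol most of these commands reach the datanode through heartbeat replies rather than the return value of blockReport itself.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    // Hypothetical, simplified model of block-report reconciliation -- not the
    // actual Hadoop source. Real HDFS keeps this state inside the namenode
    // (FSNamesystem) and hands most work back to datanodes via heartbeat replies.
    public class BlockReportSketch {

      // blockId -> target replication factor for every block that belongs to a file
      private final Map<Long, Integer> expectedReplication = new HashMap<Long, Integer>();
      // blockId -> number of replicas reported so far across all datanodes
      private final Map<Long, Integer> observedReplicas = new HashMap<Long, Integer>();

      /** Process one datanode's full block report and return commands for it. */
      public List<String> processReport(String datanodeId, Set<Long> reportedBlockIds) {
        List<String> commands = new ArrayList<String>();
        for (Long blockId : reportedBlockIds) {
          if (!expectedReplication.containsKey(blockId)) {
            // The block no longer belongs to any file: ask the datanode to delete it.
            commands.add("INVALIDATE block " + blockId + " on " + datanodeId);
          } else {
            Integer count = observedReplicas.get(blockId);
            observedReplicas.put(blockId, count == null ? 1 : count + 1);
          }
        }
        // Any block with fewer replicas than its target is queued for re-replication.
        for (Map.Entry<Long, Integer> e : expectedReplication.entrySet()) {
          Integer have = observedReplicas.get(e.getKey());
          int haveCount = have == null ? 0 : have;
          if (haveCount > 0 && haveCount < e.getValue()) {
            commands.add("REPLICATE block " + e.getKey() + ", missing " + (e.getValue() - haveCount));
          }
        }
        return commands;
      }
    }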

Re: Overwriting the same block instead of creating a new one

2010-06-22 Thread Vidur Goyal
Like in any other filesystem such as ext4, in the case of an overwrite why don't we update the existing physical blocks in place? Why is there a need to allocate new blocks every time an overwrite takes place? Isn't this an overhead? > I know about the current behaviour of HDFS. I am proposing the new > behaviour w

[jira] Resolved: (HDFS-1254) 0.20: mark dfs.support.append to be true by default for the 0.20-append branch

2010-06-22 Thread dhruba borthakur (JIRA)
[ https://issues.apache.org/jira/browse/HDFS-1254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dhruba borthakur resolved HDFS-1254. Hadoop Flags: [Reviewed] Resolution: Fixed I just committed this. > 0.20: mark dfs.su