[jira] Created: (HDFS-702) Add Hdfs Impl for the new file system interface

2009-10-12 Thread Sanjay Radia (JIRA)
Add Hdfs Impl for the new file system interface --- Key: HDFS-702 URL: https://issues.apache.org/jira/browse/HDFS-702 Project: Hadoop HDFS Issue Type: New Feature Affects Versions: 0.22.0

[jira] Created: (HDFS-701) DataNode's volumeMap should be private.

2009-10-12 Thread Konstantin Shvachko (JIRA)
DataNode's volumeMap should be private. --- Key: HDFS-701 URL: https://issues.apache.org/jira/browse/HDFS-701 Project: Hadoop HDFS Issue Type: Bug Components: data-node Affects Versions: 0.21

[jira] Created: (HDFS-700) BlockReceiver is ignoring java.io.InterruptedIOException.

2009-10-12 Thread Tsz Wo (Nicholas), SZE (JIRA)
BlockReceiver is ignoring java.io.InterruptedIOException. - Key: HDFS-700 URL: https://issues.apache.org/jira/browse/HDFS-700 Project: Hadoop HDFS Issue Type: Bug Components:

[jira] Created: (HDFS-699) Primary datanode should compare replicas' on disk lengths

2009-10-12 Thread Tsz Wo (Nicholas), SZE (JIRA)
Primary datanode should compare replicas' on disk lengths - Key: HDFS-699 URL: https://issues.apache.org/jira/browse/HDFS-699 Project: Hadoop HDFS Issue Type: Bug Components:

[jira] Created: (HDFS-698) BlockSender should compare on disk file length with replica length instead of replica visible length.

2009-10-12 Thread Konstantin Shvachko (JIRA)
BlockSender should compare on disk file length with replica length instead of replica visible length. - Key: HDFS-698 URL: https://issues.apache.org/jira/browse/HD

[jira] Created: (HDFS-697) Enable asserts for tests by default

2009-10-12 Thread Eli Collins (JIRA)
Enable asserts for tests by default --- Key: HDFS-697 URL: https://issues.apache.org/jira/browse/HDFS-697 Project: Hadoop HDFS Issue Type: Test Reporter: Eli Collins See HADOOP-6309. Let's make t
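For context (not part of the JIRA text, only an illustration of why the flag matters): Java assert statements are compiled in but skipped unless the JVM running the tests is started with -ea, so a test run without that flag silently bypasses every assertion check.

public class AssertDemo {
  public static void main(String[] args) {
    System.out.println("assertions are "
        + (AssertDemo.class.desiredAssertionStatus() ? "enabled" : "disabled"));
    int x = -1;
    // Throws AssertionError only when the JVM was started with -ea
    // ("java -ea AssertDemo"); a plain "java AssertDemo" runs to completion.
    assert x >= 0 : "x must be non-negative, got " + x;
  }
}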

[jira] Created: (HDFS-696) Assert in chooseNodes fails when running TestAccessTokenWithDFS

2009-10-12 Thread Eli Collins (JIRA)
Assert in chooseNodes fails when running TestAccessTokenWithDFS --- Key: HDFS-696 URL: https://issues.apache.org/jira/browse/HDFS-696 Project: Hadoop HDFS Issue Type: Test

[jira] Created: (HDFS-695) RaidNode should read in configuration from hdfs-site.xml

2009-10-12 Thread dhruba borthakur (JIRA)
RaidNode should read in configuration from hdfs-site.xml --- Key: HDFS-695 URL: https://issues.apache.org/jira/browse/HDFS-695 Project: Hadoop HDFS Issue Type: Bug Components: cont

[jira] Created: (HDFS-694) Add a test to make sure that node decommission does not get blocked by underreplicated blocks in an unclosed file

2009-10-12 Thread Hairong Kuang (JIRA)
Add a test to make sure that node decommission does not get blocked by underreplicated blocks in an unclosed file Key: HDFS-694 URL: https://issues.apa

Re: About the memory file system, any suggestions?

2009-10-12 Thread Jason Venner
You could use the JVM reuse feature; static objects will persist across tasks, but they will not persist across jobs. In the Pro Hadoop book example code, there is a JVM reuse example that demonstrates this: com.apress.hadoopbook.examples.advancedtechniques.JVMReuseAndStaticInitializers On Sun, Oc
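A minimal sketch (not the book's code) of the pattern that example relies on, assuming the old org.apache.hadoop.mapred API of that era: a static field is initialized once per child JVM, and every task scheduled into the same reused JVM sees the already-initialized value.

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class JvmReuseMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, Text> {

  // Set once per child JVM. With JVM reuse enabled, later tasks running in
  // the same JVM see the same timestamp and the incremented counter; a new
  // job gets fresh child JVMs, so nothing carries over between jobs.
  private static final long JVM_START_MS = System.currentTimeMillis();
  private static int tasksInThisJvm = 0;

  @Override
  public void configure(JobConf conf) {
    tasksInThisJvm++;  // counts tasks that have run in this child JVM
  }

  @Override
  public void map(LongWritable key, Text value,
                  OutputCollector<Text, Text> out, Reporter reporter)
      throws IOException {
    out.collect(value,
        new Text("jvmStart=" + JVM_START_MS + " taskNo=" + tasksInThisJvm));
  }
}

JVM reuse itself is opt-in per job, e.g. conf.setNumTasksToExecutePerJvm(-1) (the mapred.job.reuse.jvm.num.tasks property); without it each task gets its own JVM and the statics are reinitialized every time.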

Re: HDFS Client under Windows

2009-10-12 Thread Allen Wittenauer
On 10/11/09 2:26 PM, "Tobias N. Sasse" wrote: > I am planning to integrate the HDFS libraries into a program, which runs > on a variety of platforms. Be aware that HDFS (or MR for that matter) is not wire compatible with itself across versions.

Re: HDFS Client under Windows

2009-10-12 Thread Amr Awadallah
Unfortunately the Hadoop code base (both client and server) has a lot of dependencies on Unix-like commands, so the only way to make it work would be to rewrite it to remove all such dependencies, which is a major effort. -- amr Hi Amr, I am planning to integrate the HDFS libraries into a

About the memory file system, any suggestions?

2009-10-12 Thread 曹楠楠
Hi all: I am trying to use a memory file system in Hadoop. The idea is very simple: I want to use a memory file system for the map intermediate files. It works like this: 1. When memory is limited, the data will be written to disk. 2. If a file in memory is deleted and there is space in memory, the d
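A minimal, hypothetical sketch of the spill policy described in points 1 and 2 above (plain Java, not Hadoop's actual map-output buffer; all names here are made up): buffer writes in memory up to a limit, then fall back to a local file.

import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class SpillingOutputStream extends OutputStream {
  private final long memoryLimitBytes;
  private final File spillFile;
  private ByteArrayOutputStream memory = new ByteArrayOutputStream();
  private OutputStream disk;  // non-null once the buffer has spilled

  public SpillingOutputStream(long memoryLimitBytes, File spillFile) {
    this.memoryLimitBytes = memoryLimitBytes;
    this.spillFile = spillFile;
  }

  @Override
  public void write(int b) throws IOException {
    if (disk == null && memory.size() + 1 > memoryLimitBytes) {
      // Memory is exhausted: move what is buffered to disk and keep writing there.
      disk = new FileOutputStream(spillFile);
      memory.writeTo(disk);
      memory = null;
    }
    if (disk != null) {
      disk.write(b);
    } else {
      memory.write(b);
    }
  }

  @Override
  public void close() throws IOException {
    if (disk != null) {
      disk.close();
    }
  }
}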