Add Hdfs Impl for the new file system interface
---
Key: HDFS-702
URL: https://issues.apache.org/jira/browse/HDFS-702
Project: Hadoop HDFS
Issue Type: New Feature
Affects Versions: 0.22.0
DataNode's volumeMap should be private.
---
Key: HDFS-701
URL: https://issues.apache.org/jira/browse/HDFS-701
Project: Hadoop HDFS
Issue Type: Bug
Components: data-node
Affects Versions: 0.21
BlockReceiver is ignoring java.io.InterruptedIOException.
---
Key: HDFS-700
URL: https://issues.apache.org/jira/browse/HDFS-700
Project: Hadoop HDFS
Issue Type: Bug
Components:
Primary datanode should compare replicas' on disk lengths
---
Key: HDFS-699
URL: https://issues.apache.org/jira/browse/HDFS-699
Project: Hadoop HDFS
Issue Type: Bug
Components:
BlockSender should compare on disk file length with replica length instead of
replica visible length.
---
Key: HDFS-698
URL: https://issues.apache.org/jira/browse/HD
Enable asserts for tests by default
---
Key: HDFS-697
URL: https://issues.apache.org/jira/browse/HDFS-697
Project: Hadoop HDFS
Issue Type: Test
Reporter: Eli Collins
See HADOOP-6309. Let's make t
Assert in chooseNodes fails when running TestAccessTokenWithDFS
---
Key: HDFS-696
URL: https://issues.apache.org/jira/browse/HDFS-696
Project: Hadoop HDFS
Issue Type: Test
RaidNode should read in configuration from hdfs-site.xml
---
Key: HDFS-695
URL: https://issues.apache.org/jira/browse/HDFS-695
Project: Hadoop HDFS
Issue Type: Bug
Components: cont
Add a test to make sure that node decommission does not get blocked by
underreplicated blocks in an unclosed file
Key: HDFS-694
URL: https://issues.apa
You could use the JVM reuse feature, and static objects will persist across
tasks.
They will not persist across jobs.
In the Pro Hadoop book example code, there is a JVM reuse example that
demonstrates this.
com.apress.hadoopbook.examples.advancedtechniques.JVMReuseAndStaticInitializers
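The pattern that example relies on can be shown without any Hadoop dependencies. The sketch below uses a plain static field: it is initialized once per JVM, so when the framework reuses a JVM for several tasks, each task sees the same object. Class and method names here are illustrative stand-ins, not code from the book.

```java
// Illustrative sketch (no Hadoop dependencies) of the static-initializer
// pattern: static state is created once per JVM, so when a JVM is reused
// for several tasks, the tasks share it. All names here are hypothetical.
import java.util.HashMap;
import java.util.Map;

public class StaticCacheDemo {
    // Created once when the class loads, i.e. once per JVM.
    private static final Map<String, String> CACHE = new HashMap<>();
    private static int tasksServed = 0;

    // Stands in for one task's setup/map work running in this JVM.
    public static int runTask(String key, String value) {
        CACHE.putIfAbsent(key, value); // visible to the next task in this JVM
        return ++tasksServed;
    }

    public static void main(String[] args) {
        // Two "tasks" executed in the same JVM share the static cache;
        // a separate job starts fresh JVMs and would see an empty one.
        System.out.println("task count: " + runTask("dict", "loaded"));
        System.out.println("task count: " + runTask("dict", "loaded"));
        System.out.println("cache size: " + CACHE.size());
    }
}
```

With task JVM reuse enabled (the `mapred.job.reuse.jvm.num.tasks` property in Hadoop of this era), a static cache like this survives between tasks of the same job, but a new job gets fresh JVMs and rebuilds it.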
On Sun, Oc
On 10/11/09 2:26 PM, "Tobias N. Sasse" wrote:
> I am planning to integrate the HDFS libraries into a program, which runs
> on a variety of platforms.
Be aware that HDFS (or MR for that matter) is not wire compatible with
itself across versions.
unfortunately the hadoop code base (both client and server) have a lot
of dependencies on unix-like commands, so the only way to make it work
would be to rewrite it to remove all such dependencies, which is a major
effort.
-- amr
Hi Amr,
I am planning to integrate the HDFS libraries into a
Hi all:
I am trying to use a memory file system in Hadoop. The idea is very simple: I
want to use a memory file system for the map intermediate files. It works like
this: 1. If memory is limited, the data will be written to disk. 2. If the
file in memory is deleted and there is space in memory, the d
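The spill-to-disk policy described in point 1 could be sketched as follows. This is a hypothetical illustration, not a real Hadoop class: bytes are buffered in memory up to a budget, and written through to a disk file once the budget is exceeded.

```java
// Hypothetical sketch of a spill policy: keep intermediate bytes in memory
// up to a budget, then flush to a disk file and append there afterwards.
// All names are illustrative; this is not an actual Hadoop class.
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class SpillingBuffer {
    private final int memoryBudget; // bytes allowed in memory
    private final ByteArrayOutputStream inMem = new ByteArrayOutputStream();
    private final File spillFile;
    private boolean spilled = false;

    public SpillingBuffer(int memoryBudget, File spillFile) {
        this.memoryBudget = memoryBudget;
        this.spillFile = spillFile;
    }

    public void write(byte[] data) throws IOException {
        if (!spilled && inMem.size() + data.length > memoryBudget) {
            // Memory budget exceeded: flush what we have and spill from now on.
            try (FileOutputStream out = new FileOutputStream(spillFile)) {
                inMem.writeTo(out);
            }
            spilled = true;
        }
        if (spilled) {
            try (FileOutputStream out = new FileOutputStream(spillFile, true)) {
                out.write(data); // append to the on-disk spill file
            }
        } else {
            inMem.write(data); // still fits in memory
        }
    }

    public boolean isSpilled() { return spilled; }

    public int inMemorySize() { return spilled ? 0 : inMem.size(); }
}
```

A real implementation would also need the eviction step from point 2 (reclaiming memory when in-memory files are deleted), which is omitted here.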