Re: NFSv3 Filesystem Connector

2015-01-14 Thread Gokul Soundararajan
Hi Colin, Yeah, I should add the reasons to the README. We tried LocalFileSystem when we started out, but we think we can do tighter Hadoop integration if we write a connector. Some examples include: 1. Limit over-prefetching of data - MapReduce splits a job into 128MB splits, and standard NFS d
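As a rough illustration of the over-prefetching point (the connector's actual code is not shown in this truncated preview), a reader that knows its split boundaries can confine itself to the 128MB split and hint a small readahead where the stream supports Hadoop's CanSetReadahead; the mount point and split geometry below are hypothetical:

import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SplitBoundedRead {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    // Hypothetical file on an NFS mount; 128MB matches the MapReduce split
    // size mentioned above.
    Path file = new Path("file:///mnt/nfs/data/part-00000");
    long splitStart = 0L;
    long splitLength = 128L * 1024 * 1024;

    FileSystem fs = FileSystem.get(URI.create("file:///"), conf);
    try (FSDataInputStream in = fs.open(file)) {
      in.seek(splitStart);
      try {
        // Hint a modest readahead so the client does not prefetch far past
        // what this task will actually consume.
        in.setReadahead(4L * 1024 * 1024);
      } catch (UnsupportedOperationException e) {
        // Streams over a plain local/NFS mount may not honor the hint;
        // reads still work without it.
      }
      byte[] buf = new byte[8192];
      long remaining = splitLength;  // never read past the split end
      int n;
      while (remaining > 0 &&
             (n = in.read(buf, 0, (int) Math.min(buf.length, remaining))) > 0) {
        remaining -= n;
      }
    }
  }
}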

[jira] [Created] (HDFS-7614) Implement COMPLETE state of erasure coding block groups

2015-01-14 Thread Zhe Zhang (JIRA)
Zhe Zhang created HDFS-7614: --- Summary: Implement COMPLETE state of erasure coding block groups Key: HDFS-7614 URL: https://issues.apache.org/jira/browse/HDFS-7614 Project: Hadoop HDFS Issue Type: S

[jira] [Created] (HDFS-7613) Block placement policy for erasure coding groups

2015-01-14 Thread Zhe Zhang (JIRA)
Zhe Zhang created HDFS-7613: --- Summary: Block placement policy for erasure coding groups Key: HDFS-7613 URL: https://issues.apache.org/jira/browse/HDFS-7613 Project: Hadoop HDFS Issue Type: Sub-task

[jira] [Created] (HDFS-7612) TestOfflineEditsViewer.testStored() uses incorrect default value for cacheDir

2015-01-14 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-7612: - Summary: TestOfflineEditsViewer.testStored() uses incorrect default value for cacheDir Key: HDFS-7612 URL: https://issues.apache.org/jira/browse/HDFS-7612 P

[jira] [Created] (HDFS-7611) TestFileTruncate.testTruncateEditLogLoad times out waiting for Mini HDFS Cluster to start

2015-01-14 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-7611: - Summary: TestFileTruncate.testTruncateEditLogLoad times out waiting for Mini HDFS Cluster to start Key: HDFS-7611 URL: https://issues.apache.org/jira/browse/HDFS-7611

[jira] [Created] (HDFS-7610) Should use StorageDirectory.getCurrentDIr() to construct FsVolumeImpl

2015-01-14 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HDFS-7610: --- Summary: Should use StorageDirectory.getCurrentDIr() to construct FsVolumeImpl Key: HDFS-7610 URL: https://issues.apache.org/jira/browse/HDFS-7610 Project: Hadoop HDFS

Re: NFSv3 Filesystem Connector

2015-01-14 Thread Colin P. McCabe
Hi Niels, I agree that direct-attached storage seems more economical for many users. As an HDFS developer, I certainly have a dog in this fight as well :) But we should be respectful towards people trying to contribute code to Hadoop and evaluate the code on its own merits. It is up to our users

Re: NFSv3 Filesystem Connector

2015-01-14 Thread Colin McCabe
Why not just use LocalFileSystem with an NFS mount (or several)? I read through the README but I didn't see that question answered anywhere. best, Colin On Tue, Jan 13, 2015 at 1:35 PM, Gokul Soundararajan wrote: > Hi, > > We (Jingxin Feng, Xing Lin, and I) have been working on providing a > F
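For reference, the alternative Colin is asking about looks roughly like this: mount the export at the OS level and let Hadoop address it through LocalFileSystem with file:// URIs (the mount point below is hypothetical):

import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LocalFsOverNfs {
  public static void main(String[] args) throws IOException {
    // The export is mounted outside Hadoop, e.g.:
    //   mount -t nfs filer:/export /mnt/nfs
    // After that, LocalFileSystem treats it like any other local directory.
    Configuration conf = new Configuration();
    FileSystem local = FileSystem.get(URI.create("file:///"), conf);
    for (FileStatus st : local.listStatus(new Path("file:///mnt/nfs/input"))) {
      System.out.println(st.getPath() + "\t" + st.getLen());
    }
  }
}

Job input and output paths can then be given as file:///mnt/nfs/... URIs; the rest of the thread is about what such a setup does and does not give you.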

[jira] [Resolved] (HDFS-7586) HFTP does not work when namenode bind on wildcard

2015-01-14 Thread Daryn Sharp (JIRA)
[ https://issues.apache.org/jira/browse/HDFS-7586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daryn Sharp resolved HDFS-7586. --- Resolution: Not a Problem > HFTP does not work when namenode bind on wildcard > ---

Re: NFSv3 Filesystem Connector

2015-01-14 Thread Gokul Soundararajan
Hi Niels, Thanks for your comments. My goal in designing the NFS connector is *not* to replace HDFS. HDFS is ideally suited for Hadoop (otherwise why was it built?). The problem is that we have people who have PBs (10PB to 50PB) of data on NFS storage that they would like to process using Hadoop. Suc

Hadoop-Hdfs-trunk - Build # 2005 - Failure

2015-01-14 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2005/ ### ## LAST 60 LINES OF THE CONSOLE ### [...truncated 11008 lines...] [INFO] [INFO] --- maven-source-plugin:2.

Build failed in Jenkins: Hadoop-Hdfs-trunk #2005

2015-01-14 Thread Apache Jenkins Server
See Changes: [xgong] MAPREDUCE-6173. Document the configuration of deploying MR over [cnauroth] HDFS-7570. SecondaryNameNode need twice memory when calling reloadFromImageFile. Contributed by zhaoyunjiong. [jianhe] YARN-2637. Fixed

Hadoop-Hdfs-trunk-Java8 - Build # 70 - Still Failing

2015-01-14 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/70/ ### ## LAST 60 LINES OF THE CONSOLE ### [...truncated 11234 lines...] main: [mkdir] Created dir: /home

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #70

2015-01-14 Thread Apache Jenkins Server
See Changes: [xgong] MAPREDUCE-6173. Document the configuration of deploying MR over [cnauroth] HDFS-7570. SecondaryNameNode need twice memory when calling reloadFromImageFile. Contributed by zhaoyunjiong. [jianhe] YARN-2637. F

[jira] [Created] (HDFS-7609) startup used too much time to load edits

2015-01-14 Thread Carrey Zhan (JIRA)
Carrey Zhan created HDFS-7609: - Summary: startup used too much time to load edits Key: HDFS-7609 URL: https://issues.apache.org/jira/browse/HDFS-7609 Project: Hadoop HDFS Issue Type: Improvement

[jira] [Created] (HDFS-7608) hdfs dfsclient newConnectedPeer has no read or write timeout

2015-01-14 Thread zhangshilong (JIRA)
zhangshilong created HDFS-7608: -- Summary: hdfs dfsclient newConnectedPeer has no read or write timeout Key: HDFS-7608 URL: https://issues.apache.org/jira/browse/HDFS-7608 Project: Hadoop HDFS
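The JIRA preview above is truncated, so the DFSClient code path itself is not shown here; as a general illustration of the read timeout the report says is missing, a blocking socket read only fails fast if a timeout has been set (the host, port, and 60-second values below are hypothetical):

import java.io.IOException;
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.net.Socket;

public class SocketReadTimeout {
  public static void main(String[] args) throws IOException {
    try (Socket s = new Socket()) {
      // Connect timeout bounds how long connection setup may take.
      s.connect(new InetSocketAddress("datanode.example.com", 50010), 60000);
      // Read timeout: without it, a read() on a silent peer can block indefinitely.
      s.setSoTimeout(60000);
      InputStream in = s.getInputStream();
      int first = in.read(); // throws SocketTimeoutException after 60s of silence
      System.out.println("first byte: " + first);
    }
  }
}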