[jira] [Resolved] (HDFS-6258) Namenode server-side storage for XAttrs

2014-05-02 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G resolved HDFS-6258.
---------------------------------------

      Resolution: Fixed
   Fix Version/s: HDFS XAttrs (HDFS-2006)
Target Version/s: HDFS XAttrs (HDFS-2006)  (was: 3.0.0)
    Hadoop Flags: Reviewed

I have just committed the patch to branch, thanks all.

> Namenode server-side storage for XAttrs
> ---------------------------------------
>
> Key: HDFS-6258
> URL: https://issues.apache.org/jira/browse/HDFS-6258
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6258.1.patch, HDFS-6258.2.patch, HDFS-6258.3.patch, 
> HDFS-6258.4.patch, HDFS-6258.5.patch, HDFS-6258.6.patch, HDFS-6258.patch
>
>
> Namenode Server-side storage for XAttrs: FSNamesystem and friends.
> Refine XAttrConfigFlag and AclConfigFlag to ConfigFlag.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Hadoop-Hdfs-trunk - Build # 1748 - Still Failing

2014-05-02 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1748/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
###################################################################################
[...truncated 12865 lines...]

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [2:01:36.331s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [3.531s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2:01:41.215s
[INFO] Finished at: Fri May 02 13:37:17 UTC 2014
[INFO] Final Memory: 43M/297M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HADOOP-10562
Updating HDFS-6252
Updating HDFS-6289
Updating YARN-1696
Updating HDFS-6304
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
###################################################################################
1 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup.testBalancerWithRackLocality

Error Message:
expected:<1800> but was:<1814>

Stack Trace:
java.lang.AssertionError: expected:<1800> but was:<1814>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup.testBalancerWithRackLocality(TestBalancerWithNodeGroup.java:253)




Build failed in Jenkins: Hadoop-Hdfs-trunk #1748

2014-05-02 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1748/
Changes:

[arp] HADOOP-10562. Undo previous commit.

[arp] HADOOP-10562. Namenode exits on exception without printing stack trace in 
AbstractDelegationTokenSecretManager. (Contributed by Suresh Srinivas)

[wheat9] HDFS-6252. Phase out the old web UI in HDFS. Contributed by Haohui Mai.

[vinodkv] YARN-1696. Added documentation for ResourceManager fail-over. 
Contributed by Karthik Kambatla, Masatake Iwasaki, Tsuyoshi OZAWA.

[atm] HDFS-6289. HA failover can fail if there are pending DN messages for DNs 
which no longer exist. Contributed by Aaron T. Myers.

[wheat9] HDFS-6304. Consolidate the logic of path resolution in FSDirectory. 
Contributed by Haohui Mai.

--
[...truncated 12672 lines...]
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 92.449 sec - in 
org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestFileCreationEmpty
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.614 sec - in 
org.apache.hadoop.hdfs.TestFileCreationEmpty
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.369 sec - in 
org.apache.hadoop.hdfs.TestSetrepIncreasing
Running org.apache.hadoop.hdfs.TestEncryptedTransfer
Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 142.547 sec - 
in org.apache.hadoop.hdfs.TestEncryptedTransfer
Running org.apache.hadoop.hdfs.TestDFSUpgrade
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.297 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgrade
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.82 sec - in 
org.apache.hadoop.hdfs.TestCrcCorruption
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.03 sec - in 
org.apache.hadoop.hdfs.TestHFlush
Running org.apache.hadoop.hdfs.TestBlockReaderFactory
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.675 sec - in 
org.apache.hadoop.hdfs.TestBlockReaderFactory
Running org.apache.hadoop.hdfs.TestFileAppendRestart
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.129 sec - in 
org.apache.hadoop.hdfs.TestFileAppendRestart
Running org.apache.hadoop.hdfs.TestDatanodeReport
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.896 sec - in 
org.apache.hadoop.hdfs.TestDatanodeReport
Running org.apache.hadoop.hdfs.TestRestartDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.473 sec - in 
org.apache.hadoop.hdfs.TestRestartDFS
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.576 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.492 sec - in 
org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.332 sec - in 
org.apache.hadoop.hdfs.TestHDFSTrash
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.881 sec - in 
org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestHttpPolicy
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.271 sec - in 
org.apache.hadoop.hdfs.TestHttpPolicy
Running org.apache.hadoop.hdfs.TestQuota
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.175 sec - in 
org.apache.hadoop.hdfs.TestQuota
Running org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.1 sec - in 
org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Running org.apache.hadoop.hdfs.TestDatanodeRegistration
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.399 sec - in 
org.apache.hadoop.hdfs.TestDatanodeRegistration
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.043 sec - in 
org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestDFSShell
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.517 sec - 
in org.apache.hadoop.hdfs.TestDFSShell
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.628 sec - in 
org.apache.hadoop.hdfs.TestListFilesInDFS
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 35.675 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
Running org.apache.hadoop.hdfs.TestPeerCache
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.329 sec - in 
org.apache.hadoop.hdfs.TestPeerCache
Running

[jira] [Created] (HDFS-6324) Shift XAttr helper code out for reuse.

2014-05-02 Thread Yi Liu (JIRA)
Yi Liu created HDFS-6324:


 Summary: Shift XAttr helper code out for reuse.
 Key: HDFS-6324
 URL: https://issues.apache.org/jira/browse/HDFS-6324
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: HDFS XAttrs (HDFS-2006)
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor
 Fix For: HDFS XAttrs (HDFS-2006)


Shift XAttr helper code out for reuse: in DFSClient and WebHDFS.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HDFS-6303) HDFS implementation of FileContext API for XAttrs.

2014-05-02 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G resolved HDFS-6303.
---------------------------------------

  Resolution: Fixed
Hadoop Flags: Reviewed

I have committed this to branch.

> HDFS implementation of FileContext API for XAttrs.
> --------------------------------------------------
>
> Key: HDFS-6303
> URL: https://issues.apache.org/jira/browse/HDFS-6303
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6303.2.patch, HDFS-6303.3.patch, HDFS-6303.patch
>
>
> HDFS implementation of AbstractFileSystem and FileContext for XAttrs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HDFS-6255) fuse_dfs will not adhere to ACL permissions in some cases

2014-05-02 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HDFS-6255.
---------------------------------

Resolution: Not a Problem

Hi, [~schu].  I'm going to resolve this based on my last comment about fuse 
itself likely rejecting access before fuse_dfs gets involved at all.  If you 
find that this isn't what's happening in your environment and it really does 
look like a bad interaction with HDFS ACLs, please feel free to reopen.  Thank 
you.
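
For illustration, a toy model of that hypothesis (deliberately simplified; fuse's actual check happens in the kernel, not in Java): a permission check driven only by the POSIX mode bits reported by getattr cannot see the ACL entry group:jenkins:rwx.

{code}
// Toy model of a mode-bits-only check: jenkins is neither the owner (hdfs)
// nor in the owning group (nobody), so only the "other" bits apply, and they
// are "---". The ACL entry granting jenkins rwx is invisible at this layer.
public class ModeBitsOnlySketch {
  public static void main(String[] args) {
    String mode = "drwxrwx---"; // what getattr reports for /tmp/acl_dir
    boolean otherExecute = mode.charAt(9) == 'x';
    System.out.println("cd allowed by mode bits alone: " + otherExecute); // false
  }
}
{code}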

> fuse_dfs will not adhere to ACL permissions in some cases
> ---------------------------------------------------------
>
> Key: HDFS-6255
> URL: https://issues.apache.org/jira/browse/HDFS-6255
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Stephen Chu
>Assignee: Chris Nauroth
>
> As hdfs user, I created a directory /tmp/acl_dir/ and set permissions to 700. 
> Then I set a new acl group:jenkins:rwx on /tmp/acl_dir.
> {code}
> [jenkins@hdfs-vanilla-1 ~]$ hdfs dfs -getfacl /tmp/acl_dir
> # file: /tmp/acl_dir
> # owner: hdfs
> # group: supergroup
> user::rwx
> group::---
> group:jenkins:rwx
> mask::rwx
> other::---
> {code}
> Through the FsShell, the jenkins user can list /tmp/acl_dir as well as create 
> a file and directory inside.
> {code}
> [jenkins@hdfs-vanilla-1 ~]$ hdfs dfs -touchz /tmp/acl_dir/testfile1
> [jenkins@hdfs-vanilla-1 ~]$ hdfs dfs -mkdir /tmp/acl_dir/testdir1
> [jenkins@hdfs-vanilla-1 ~]$ hdfs dfs -ls /tmp/acl_dir/
> Found 2 items
> drwxr-xr-x   - jenkins supergroup  0 2014-04-17 19:11 
> /tmp/acl_dir/testdir1
> -rw-r--r--   1 jenkins supergroup  0 2014-04-17 19:11 
> /tmp/acl_dir/testfile1
> [jenkins@hdfs-vanilla-1 ~]$ 
> {code}
> However, as the same jenkins user, when I try to cd into /tmp/acl_dir using a 
> fuse_dfs mount, I get permission denied. Same permission denied when I try to 
> create or list files.
> {code}
> [jenkins@hdfs-vanilla-1 tmp]$ ls -l
> total 16
> drwxrwx--- 4 hdfs    nobody 4096 Apr 17 19:11 acl_dir
> drwx------ 2 hdfs    nobody 4096 Apr 17 18:30 acl_dir_2
> drwxr-xr-x 3 mapred  nobody 4096 Mar 11 03:53 mapred
> drwxr-xr-x 4 jenkins nobody 4096 Apr 17 07:25 testcli
> -rwx------ 1 hdfs    nobody    0 Apr  7 17:18 tf1
> [jenkins@hdfs-vanilla-1 tmp]$ cd acl_dir
> bash: cd: acl_dir: Permission denied
> [jenkins@hdfs-vanilla-1 tmp]$ touch acl_dir/testfile2
> touch: cannot touch `acl_dir/testfile2': Permission denied
> [jenkins@hdfs-vanilla-1 tmp]$ mkdir acl_dir/testdir2
> mkdir: cannot create directory `acl_dir/testdir2': Permission denied
> [jenkins@hdfs-vanilla-1 tmp]$ 
> {code}
> The fuse_dfs debug output doesn't show any error for the above operations:
> {code}
> unique: 18, opcode: OPENDIR (27), nodeid: 2, insize: 48
>unique: 18, success, outsize: 32
> unique: 19, opcode: READDIR (28), nodeid: 2, insize: 80
> readdir[0] from 0
>unique: 19, success, outsize: 312
> unique: 20, opcode: GETATTR (3), nodeid: 2, insize: 56
> getattr /tmp
>unique: 20, success, outsize: 120
> unique: 21, opcode: READDIR (28), nodeid: 2, insize: 80
>unique: 21, success, outsize: 16
> unique: 22, opcode: RELEASEDIR (29), nodeid: 2, insize: 64
>unique: 22, success, outsize: 16
> unique: 23, opcode: GETATTR (3), nodeid: 2, insize: 56
> getattr /tmp
>unique: 23, success, outsize: 120
> unique: 24, opcode: GETATTR (3), nodeid: 3, insize: 56
> getattr /tmp/acl_dir
>unique: 24, success, outsize: 120
> unique: 25, opcode: GETATTR (3), nodeid: 3, insize: 56
> getattr /tmp/acl_dir
>unique: 25, success, outsize: 120
> unique: 26, opcode: GETATTR (3), nodeid: 3, insize: 56
> getattr /tmp/acl_dir
>unique: 26, success, outsize: 120
> unique: 27, opcode: GETATTR (3), nodeid: 3, insize: 56
> getattr /tmp/acl_dir
>unique: 27, success, outsize: 120
> unique: 28, opcode: GETATTR (3), nodeid: 3, insize: 56
> getattr /tmp/acl_dir
>unique: 28, success, outsize: 120
> {code}
> In other scenarios, ACL permissions are enforced successfully. For example, 
> as hdfs user I create /tmp/acl_dir_2 and set permissions to 777. I then set 
> the acl user:jenkins:--- on the directory. On the fuse mount, I am not able 
> to ls, mkdir, or touch to that directory as jenkins user.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6325) Append should fail if the last block has an insufficient number of replicas

2014-05-02 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-6325:
--------------------------------------

 Summary: Append should fail if the last block has an insufficient number of replicas
 Key: HDFS-6325
 URL: https://issues.apache.org/jira/browse/HDFS-6325
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.2.0
Reporter: Konstantin Shvachko
 Fix For: 2.5.0


Currently append() succeeds on a file whose last block has no replicas. But the 
subsequent updatePipeline() fails, as there are no replicas, with the exception 
"Unable to retrieve blocks locations for last block". This leaves the file 
unclosed, and others cannot do anything with it until its lease expires.
The solution is to check the replicas of the last block on the NameNode and fail 
during append() rather than during updatePipeline().
How many replicas should be present before the NN allows an append? I see two 
options (both sketched below):
# min-replication: allow append if the last block is minimally replicated (1 by 
default)
# full-replication: allow append if the last block is fully replicated (3 by 
default)
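
A standalone model of the proposed check (illustrative only; the names and default thresholds echo the two options above and are not actual NameNode code):

{code}
// Toy model: reject append() up front when the last block has fewer live
// replicas than the chosen policy requires, instead of failing later in
// updatePipeline(). Not Hadoop code; names are illustrative.
public class AppendReplicaCheckSketch {
  static void checkAppend(String src, int liveReplicas, int required) {
    if (liveReplicas < required) {
      throw new IllegalStateException("append rejected for " + src + ": last block has "
          + liveReplicas + " live replicas, " + required + " required");
    }
  }

  public static void main(String[] args) {
    checkAppend("/f", 1, 1); // option 1, min-replication (default 1): allowed
    checkAppend("/f", 1, 3); // option 2, full-replication (default 3): throws
  }
}
{code}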



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6326) WebHdfs ACL compatibility is broken

2014-05-02 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-6326:
------------------------------

 Summary: WebHdfs ACL compatibility is broken
 Key: HDFS-6326
 URL: https://issues.apache.org/jira/browse/HDFS-6326
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.4.0, 3.0.0
Reporter: Daryn Sharp
Priority: Blocker


2.4 ACL support is completely incompatible with <2.4 webhdfs servers.  The NN 
throws an {{IllegalArgumentException}}.

{code}
hadoop fs -ls webhdfs://nn/
Found 21 items
ls: Invalid value for webhdfs parameter "op": No enum constant 
org.apache.hadoop.hdfs.web.resources.GetOpParam.Op.GETACLSTATUS
[... 20 more times...]
{code}
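
The message is exactly what {{Enum.valueOf}} produces, so a toy reproduction of the failure mode (a made-up enum standing in for the real {{GetOpParam.Op}}) is:

{code}
// Toy reproduction (not the actual GetOpParam class): a pre-2.4 server's op
// enum has no GETACLSTATUS constant, so parsing the new client's request
// throws IllegalArgumentException ("No enum constant ...").
public class OldOpParamSketch {
  enum Op { GETFILESTATUS, LISTSTATUS, OPEN } // GETACLSTATUS absent before 2.4

  public static void main(String[] args) {
    Op op = Op.valueOf("GETACLSTATUS"); // java.lang.IllegalArgumentException
  }
}
{code}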




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6327) Clean up FSDirectory

2014-05-02 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-6327:


 Summary: Clean up FSDirectory
 Key: HDFS-6327
 URL: https://issues.apache.org/jira/browse/HDFS-6327
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Haohui Mai
Assignee: Haohui Mai


This is an umbrella jira that covers the cleanup work on the FSDirectory class.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6328) Simplify code in FSDirectory

2014-05-02 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-6328:


 Summary: Simplify code in FSDirectory
 Key: HDFS-6328
 URL: https://issues.apache.org/jira/browse/HDFS-6328
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai


This jira proposes:

# Cleaning up dead code in FSDirectory.
# Simplifying the control flows that IntelliJ flags as warnings.
# Moving functions related to resolving paths into one place.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6329) WebHdfs does not work if HA is enabled on NN but logical URI is not configured.

2014-05-02 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-6329:


 Summary: WebHdfs does not work if HA is enabled on NN but logical 
URI is not configured.
 Key: HDFS-6329
 URL: https://issues.apache.org/jira/browse/HDFS-6329
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Kihwal Lee
Priority: Blocker


After HDFS-6100, the namenode unconditionally puts the logical name (nameservice 
id) as the token service when redirecting webhdfs requests to datanodes, if it 
detects HA.

For HA configurations with no client-side failover proxy provider (e.g. IP 
failover), webhdfs does not work, since the clients do not use the logical name.
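
A toy model of the mismatch (mechanism deliberately simplified; names are illustrative, not the actual token-selection code): the token is stamped with the logical name, but a client without a logical URI looks it up by the address it actually dials.

{code}
import java.util.HashMap;
import java.util.Map;

// Toy model: the NN stamps the token service with the logical name, but an
// IP-failover client selects tokens by the address it connects to, so the
// lookup misses. Not the real webhdfs/token code.
public class LogicalNameTokenSketch {
  public static void main(String[] args) {
    Map<String, String> tokensByService = new HashMap<>();
    tokensByService.put("mycluster", "delegation-token"); // logical name from the NN
    String connectAddress = "10.0.0.1:50070"; // client configured with a plain IP
    System.out.println(tokensByService.get(connectAddress)); // null -> cannot authenticate
  }
}
{code}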



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6330) Move mkdir() to FSNamesystem

2014-05-02 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-6330:


 Summary: Move mkdir() to FSNamesystem
 Key: HDFS-6330
 URL: https://issues.apache.org/jira/browse/HDFS-6330
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai


Currently mkdir() automatically creates all ancestors of a directory. This is 
implemented in FSDirectory by calling unprotectedMkdir() along the path. This 
jira proposes moving that logic to FSNamesystem, to simplify the primitive that 
FSDirectory needs to provide.
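
A standalone model of the loop in question (method names echo the description above; this is not FSDirectory/FSNamesystem code):

{code}
import java.nio.file.Path;
import java.nio.file.Paths;

// Toy model of "mkdir creates all ancestors": one call to a single-directory
// primitive per path component. In the proposal, FSNamesystem would drive
// this loop and FSDirectory would only expose the single-level primitive.
public class MkdirAncestorsSketch {
  static void mkdirRecursive(String src) {
    Path path = Paths.get(src);
    Path current = path.getRoot();
    for (Path component : path) {
      current = current.resolve(component);
      System.out.println("unprotectedMkdir(" + current + ")");
    }
  }

  public static void main(String[] args) {
    mkdirRecursive("/a/b/c"); // one primitive call each for /a, /a/b, /a/b/c
  }
}
{code}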



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6331) ClientProtocol#setXAttr should not be annotated idempotent

2014-05-02 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-6331:
------------------------------

 Summary: ClientProtocol#setXAttr should not be annotated idempotent
 Key: HDFS-6331
 URL: https://issues.apache.org/jira/browse/HDFS-6331
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS XAttrs (HDFS-2006)
Reporter: Andrew Wang


ClientProtocol#setXAttr is annotated @Idempotent, but this is incorrect since 
subsequent retries need to throw different exceptions based on the passed flags 
(e.g. CREATE, REPLACE).
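
A standalone model of the non-idempotency (a toy map standing in for NameNode state; not the real ClientProtocol):

{code}
import java.util.HashMap;
import java.util.Map;

// Toy model: the first CREATE is applied on the server but its reply is lost,
// so the client's retransmission of the exact same request now fails. A truly
// idempotent operation would succeed both times with the same outcome.
public class SetXAttrRetrySketch {
  enum Flag { CREATE, REPLACE }
  static final Map<String, byte[]> xattrs = new HashMap<>();

  static void setXAttr(String name, byte[] value, Flag flag) {
    if (flag == Flag.CREATE && xattrs.containsKey(name)) {
      throw new IllegalStateException("XAttr already exists: " + name);
    }
    if (flag == Flag.REPLACE && !xattrs.containsKey(name)) {
      throw new IllegalStateException("XAttr does not exist: " + name);
    }
    xattrs.put(name, value);
  }

  public static void main(String[] args) {
    setXAttr("user.a", new byte[0], Flag.CREATE); // applied; suppose the ack is lost
    setXAttr("user.a", new byte[0], Flag.CREATE); // retry of the same call: throws
  }
}
{code}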



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6332) Support protobuf-based directory listing output suitable for OIV

2014-05-02 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-6332:
------------------------------

 Summary: Support protobuf-based directory listing output suitable 
for OIV
 Key: HDFS-6332
 URL: https://issues.apache.org/jira/browse/HDFS-6332
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.4.0
Reporter: Andrew Wang






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6333) REST API for fetching directory listing file from NN

2014-05-02 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-6333:
------------------------------

 Summary: REST API for fetching directory listing file from NN
 Key: HDFS-6333
 URL: https://issues.apache.org/jira/browse/HDFS-6333
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.4.0
Reporter: Andrew Wang


It'd be convenient if the NameNode supported fetching the directory listing via 
HTTP.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6334) Client failover proxy provider for IP failover based NN HA

2014-05-02 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-6334:


 Summary: Client failover proxy provider for IP failover based NN HA
 Key: HDFS-6334
 URL: https://issues.apache.org/jira/browse/HDFS-6334
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Kihwal Lee
Assignee: Kihwal Lee


With RPCv9 and improvements in the SPNEGO auth handling, it is possible to set 
up a pair of HA namenodes utilizing IP failover as the client-request fencing 
mechanism.

This jira will make it possible to configure HA without requiring the use of a 
logical URI, and will provide a simple IP failover proxy provider.  The change 
will allow any old implementation of {{FailoverProxyProvider}} to continue to 
work.
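
A toy sketch of the idea (the interface shape is illustrative, not Hadoop's actual {{FailoverProxyProvider}} API): with IP failover, client-side "failover" is essentially a no-op, because the service IP itself moves between the namenodes.

{code}
// Toy model: the provider always hands back the same physical address, and
// performFailover() does nothing, since retrying the same address reaches
// whichever NN currently holds the floating IP. Not Hadoop's real API.
public class IpFailoverSketch {
  interface FailoverProxyProvider<T> {
    T getProxy();
    void performFailover(T current);
  }

  static class IpFailoverProxyProvider implements FailoverProxyProvider<String> {
    private final String serviceAddress = "hdfs://10.0.0.100:8020"; // floating IP

    @Override public String getProxy() { return serviceAddress; }
    @Override public void performFailover(String current) { /* no proxy switch needed */ }
  }

  public static void main(String[] args) {
    FailoverProxyProvider<String> provider = new IpFailoverProxyProvider();
    System.out.println(provider.getProxy());
    provider.performFailover(provider.getProxy()); // no-op by design
    System.out.println(provider.getProxy()); // same address; the IP moved, not the URI
  }
}
{code}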



--
This message was sent by Atlassian JIRA
(v6.2#6252)