[jira] [Reopened] (HDFS-6134) Transparent data at rest encryption
[ https://issues.apache.org/jira/browse/HDFS-6134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Abdelnur reopened HDFS-6134:
--------------------------------------

[cross-posting with HADOOP-10150]

Reopening HDFS-6134. After some offline discussions with Yi, Tianyou, ATM, Todd, Andrew and Charles, we think it makes more sense to implement encryption for HDFS directly in the DistributedFileSystem client, and to use CryptoFileSystem to support encryption for FileSystems that don't have native encryption. The reasons for this change of course are:

* If we want to add support for HDFS transparent compression, the compression should be done before the encryption (encrypted data has high entropy and compresses poorly). If compression is to be handled by the HDFS DistributedFileSystem, then the encryption has to be handled afterwards (in the write path).
* The proposed CryptoSupport abstraction significantly complicates the implementation of CryptoFileSystem and the wiring in the HDFS FileSystem client.
* Building it directly into the HDFS FileSystem client may allow us to avoid an extra copy of data.

Because of this, the idea is now:

* A common set of Crypto Input/Output streams. They would be used by CryptoFileSystem, HDFS encryption, and MapReduce intermediate data and spills. Note we cannot use the JDK Cipher Input/Output streams directly because we need to support the additional interfaces that the Hadoop FileSystem streams implement (Seekable, PositionedReadable, ByteBufferReadable, HasFileDescriptor, CanSetDropBehind, CanSetReadahead, HasEnhancedByteBufferAccess, Syncable).
* CryptoFileSystem, to support encryption in arbitrary FileSystems.
* HDFS client encryption, to support transparent HDFS encryption.

Both CryptoFileSystem and HDFS client encryption implementations would be built using the Crypto Input/Output streams, xAttributes and the KeyProvider API.
> Transparent data at rest encryption
> -----------------------------------
>
>                 Key: HDFS-6134
>                 URL: https://issues.apache.org/jira/browse/HDFS-6134
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: security
>    Affects Versions: 2.3.0
>            Reporter: Alejandro Abdelnur
>            Assignee: Alejandro Abdelnur
>         Attachments: HDFSDataAtRestEncryption.pdf
>
> Because of privacy and security regulations, for many industries, sensitive
> data at rest must be in encrypted form. For example: the healthcare industry
> (HIPAA regulations), the card payment industry (PCI DSS regulations) or the
> US government (FISMA regulations).
> This JIRA aims to provide a mechanism to encrypt HDFS data at rest that can
> be used transparently by any application accessing HDFS via the Hadoop FileSystem
> Java API, Hadoop libhdfs C library, or WebHDFS REST API.
> The resulting implementation should be able to be used in compliance with
> different regulation requirements.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
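A minimal sketch of why seekable crypto streams are feasible, and why the JDK's CipherInputStream (which cannot seek) is not enough: with AES/CTR the cipher can be re-initialized at any block-aligned counter, so a Seekable/PositionedReadable decrypting stream can be layered over a FileSystem stream. This class is illustrative only, not the proposed implementation: it reads from an in-memory byte array standing in for the underlying stream, and it ignores counter carry into the high IV bits for brevity.

```java
import java.nio.ByteBuffer;
import java.security.GeneralSecurityException;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

/**
 * Illustrative sketch of a seekable decrypting stream over AES/CTR.
 * Seeking is possible because the CTR counter for any byte offset can
 * be derived from the base IV, unlike with CipherInputStream.
 */
public class SeekableCryptoStream {
    private static final int AES_BLOCK = 16;
    private final byte[] ciphertext;  // stands in for the underlying FS stream
    private final SecretKeySpec key;
    private final byte[] baseIv;

    public SeekableCryptoStream(byte[] ciphertext, byte[] keyBytes, byte[] iv) {
        this.ciphertext = ciphertext;
        this.key = new SecretKeySpec(keyBytes, "AES");
        this.baseIv = iv.clone();
    }

    /** Derive the CTR counter block for a byte offset (low 64 bits only). */
    private IvParameterSpec ivAtOffset(long offset) {
        long blockIndex = offset / AES_BLOCK;
        ByteBuffer buf = ByteBuffer.wrap(baseIv.clone());
        buf.putLong(8, buf.getLong(8) + blockIndex);
        return new IvParameterSpec(buf.array());
    }

    /** Decrypt starting at an arbitrary offset, i.e. a "seek then read". */
    public byte[] readAt(long offset, int len) throws GeneralSecurityException {
        Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key, ivAtOffset(offset));
        // Feed the partial block prefix so the keystream lines up, then
        // keep only the bytes the caller asked for.
        int skip = (int) (offset % AES_BLOCK);
        byte[] in = new byte[skip + len];
        System.arraycopy(ciphertext, (int) offset - skip, in, 0, skip + len);
        byte[] out = cipher.doFinal(in);
        byte[] result = new byte[len];
        System.arraycopy(out, skip, result, 0, len);
        return result;
    }
}
```

Random-access reads like this are what the Hadoop stream interfaces (PositionedReadable in particular) require, which is the motivation for a common set of Crypto streams rather than the JDK ones.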
[jira] [Created] (HDFS-6378) NFS: when portmap/rpcbind is not available, NFS registration should timeout instead of hanging
Brandon Li created HDFS-6378:
--------------------------------

             Summary: NFS: when portmap/rpcbind is not available, NFS registration should timeout instead of hanging
                 Key: HDFS-6378
                 URL: https://issues.apache.org/jira/browse/HDFS-6378
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: nfs
            Reporter: Brandon Li
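The fix direction could look like the sketch below: wrap the potentially-hanging registration call in a bounded wait so startup fails fast when portmap/rpcbind is unreachable. This is illustrative, not the Hadoop NFS gateway code; registerWithTimeout and its Runnable argument are hypothetical stand-ins for the real RPC registration call.

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

/**
 * Sketch: bound a blocking registration call with a timeout instead of
 * letting startup hang when the peer (portmap/rpcbind) is not available.
 */
public class BoundedRegistration {
    /** Returns true on successful registration, false on timeout or error. */
    public static boolean registerWithTimeout(Runnable register, long timeoutMs) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<?> f = executor.submit(register);
        try {
            f.get(timeoutMs, TimeUnit.MILLISECONDS);
            return true;
        } catch (TimeoutException e) {
            f.cancel(true);  // interrupt the hung registration attempt
            return false;
        } catch (InterruptedException | ExecutionException e) {
            return false;
        } finally {
            executor.shutdownNow();
        }
    }
}
```

A caller would then abort (or retry with backoff) instead of blocking forever when registration does not complete within the deadline.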
[jira] [Resolved] (HDFS-6380) TestBalancerWithNodeGroup is failing in trunk
[ https://issues.apache.org/jira/browse/HDFS-6380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uma Maheswara Rao G resolved HDFS-6380.
---------------------------------------
    Resolution: Duplicate

> TestBalancerWithNodeGroup is failing in trunk
> ---------------------------------------------
>
>                 Key: HDFS-6380
>                 URL: https://issues.apache.org/jira/browse/HDFS-6380
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 3.0.0
>            Reporter: Uma Maheswara Rao G
>
> Error Message
> expected:<1800> but was:<1814>
> Stacktrace
> {noformat}
> java.lang.AssertionError: expected:<1800> but was:<1814>
> 	at org.junit.Assert.fail(Assert.java:88)
> 	at org.junit.Assert.failNotEquals(Assert.java:743)
> 	at org.junit.Assert.assertEquals(Assert.java:118)
> 	at org.junit.Assert.assertEquals(Assert.java:144)
> 	at org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup.testBalancerWithRackLocality(TestBalancerWithNodeGroup.java:253)
> {noformat}
[jira] [Created] (HDFS-6381) Fix a typo in INodeReference.java
Binglin Chang created HDFS-6381:
-----------------------------------

             Summary: Fix a typo in INodeReference.java
                 Key: HDFS-6381
                 URL: https://issues.apache.org/jira/browse/HDFS-6381
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Binglin Chang
            Assignee: Binglin Chang
            Priority: Trivial

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java
{code}
  * For example,
- * (1) Support we have /abc/foo, say the inode of foo is inode(id=1000,name=foo)
+ * (1) Suppose we have /abc/foo, say the inode of foo is inode(id=1000,name=foo)
{code}
[jira] [Created] (HDFS-6393) User settable xAttr to stop HDFS admins from reading/chowning a file
Alejandro Abdelnur created HDFS-6393:
----------------------------------------

             Summary: User settable xAttr to stop HDFS admins from reading/chowning a file
                 Key: HDFS-6393
                 URL: https://issues.apache.org/jira/browse/HDFS-6393
             Project: Hadoop HDFS
          Issue Type: Sub-task
          Components: namenode, security
            Reporter: Alejandro Abdelnur
            Assignee: Charles Lamb

A user should be able to set an xAttr on any file in HDFS to stop an HDFS admin user from reading the file. The blacklist for chown/chgrp would also be enforced. This will stop an HDFS admin from gaining access to job token files and getting HDFS DelegationTokens that would allow him/her to read an encrypted file.
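In spirit, the enforcement could look like the toy sketch below. The names here (the xattr key and the isAllowed helper) are hypothetical, not the eventual HDFS API; the point is only that a user-set xattr on a file flips admin read/chown access off while leaving the owner unaffected.

```java
import java.util.Map;

/**
 * Toy sketch of the proposed check: a user-settable xattr that
 * blacklists admin access to a file. All names are hypothetical.
 */
public class AdminBlacklistCheck {
    // Hypothetical xattr key a user would set on a sensitive file.
    static final String ADMIN_BLACKLIST_XATTR = "user.admin.blacklist";

    /**
     * Deny read/chown/chgrp when the caller is an admin (but not the
     * file's owner) and the file carries the blacklist xattr.
     */
    static boolean isAllowed(boolean callerIsAdmin, boolean callerIsOwner,
                             Map<String, byte[]> xattrs) {
        if (callerIsAdmin && !callerIsOwner
                && xattrs.containsKey(ADMIN_BLACKLIST_XATTR)) {
            return false;  // blocked by the user-set xattr
        }
        return true;  // normal permission checks would still apply
    }
}
```

The real design would hook this kind of check into the NameNode's permission enforcement alongside the existing ACL and ownership checks.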
Created branch for FileSystem encryption work
HDFS devs, I've just created the branch fs-encryption for HADOOP-10150 and HDFS-6134 work. Thanks. -- Alejandro
[jira] [Created] (HDFS-6396) Remove support for ACL feature from INodeSymlink
Andrew Wang created HDFS-6396:
---------------------------------

             Summary: Remove support for ACL feature from INodeSymlink
                 Key: HDFS-6396
                 URL: https://issues.apache.org/jira/browse/HDFS-6396
             Project: Hadoop HDFS
          Issue Type: Improvement
    Affects Versions: 2.4.0
            Reporter: Andrew Wang
            Assignee: Charles Lamb
            Priority: Minor

Symlinks cannot have ACLs, but we still have support for the ACL feature in INodeSymlink because of class inheritance. Let's remove this support for code consistency.
[jira] [Created] (HDFS-6367) Domain<E>#parse in EnumSetParam fails for parameter containing more than one enum.
Yi Liu created HDFS-6367:
----------------------------

             Summary: Domain<E>#parse in EnumSetParam fails for parameter containing more than one enum.
                 Key: HDFS-6367
                 URL: https://issues.apache.org/jira/browse/HDFS-6367
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: webhdfs
    Affects Versions: 3.0.0
            Reporter: Yi Liu
             Fix For: 3.0.0
         Attachments: HDFS-6367.patch

Fails because of an additional ",":
{noformat}
java.lang.IllegalArgumentException: No enum const class org.apache.hadoop.fs.Options$Rename.,OVERWRITE
	at java.lang.Enum.valueOf(Enum.java:196)
	at org.apache.hadoop.hdfs.web.resources.EnumSetParam$Domain.parse(EnumSetParam.java:85)
	at org.apache.hadoop.hdfs.web.resources.RenameOptionSetParam.<init>(RenameOptionSetParam.java:45)
	at org.apache.hadoop.hdfs.web.resources.TestParam.testRenameOptionSetParam(TestParam.java:355)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
	at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
{noformat}
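The stack trace suggests that an empty token produced by a leading comma reaches Enum.valueOf, which then throws. A tolerant comma-separated parser, a sketch of the fix idea rather than the actual patch, would skip blank tokens:

```java
import java.util.EnumSet;

/**
 * Sketch of a tolerant comma-separated EnumSet parser: empty tokens
 * (from a leading, trailing, or doubled comma) are skipped instead of
 * being passed to Enum.valueOf, which would throw.
 */
public class EnumSetParser {
    public static <E extends Enum<E>> EnumSet<E> parse(Class<E> clazz, String str) {
        EnumSet<E> set = EnumSet.noneOf(clazz);
        if (str != null) {
            for (String token : str.split(",")) {
                String t = token.trim();
                if (!t.isEmpty()) {  // the missing guard behind the failure above
                    set.add(Enum.valueOf(clazz, t));
                }
            }
        }
        return set;
    }
}
```

With this guard, a value like ",OVERWRITE" parses to a one-element set instead of throwing IllegalArgumentException.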
[jira] [Created] (HDFS-6385) Show when block deletion will start after NameNode startup in WebUI
Jing Zhao created HDFS-6385:
-------------------------------

             Summary: Show when block deletion will start after NameNode startup in WebUI
                 Key: HDFS-6385
                 URL: https://issues.apache.org/jira/browse/HDFS-6385
             Project: Hadoop HDFS
          Issue Type: Sub-task
            Reporter: Jing Zhao

HDFS-6186 provides functionality to delay block deletion for a period of time after NameNode startup. Currently we only show the number of pending block deletions in the WebUI. We should also show when the block deletion will start in the WebUI.