[jira] [Created] (HDFS-10863) hadoop superusergroup supergroup issue
www.jbigdata.fr created HDFS-10863:
---------------------------------------

             Summary: hadoop superusergroup supergroup issue
                 Key: HDFS-10863
                 URL: https://issues.apache.org/jira/browse/HDFS-10863
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: namenode
    Affects Versions: 2.7.2
         Environment: $ hadoop version
                      Hadoop 2.7.2
            Reporter: www.jbigdata.fr
            Priority: Minor

I want to map my Unix user and group (hduser:hadoop) to HDFS.

For the user I use the environment variable:

$ echo $HADOOP_HDFS_USER
hduser

For the group I use hdfs-site.xml:

<property>
  <name>dfs.permissions.superusergroup</name>
  <value>hadoop</value>
</property>

The namenode log file shows the expected user/group values:

INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = hduser (auth:SIMPLE)
INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup = hadoop
INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true

Everything seems to be OK, but when I copy a file from the local filesystem to HDFS the group is not correct: it keeps the default value, supergroup. These shell commands show the issue:

$ ll /srv/downloads/zk.tar
-rw-r--r-- 1 hduser hadoop 41984000 Aug 18 13:25 /srv/downloads/zk.tar
$ hdfs dfs -put /srv/downloads/zk.tar /tmp
$ hdfs dfs -ls /tmp/zk.tar
-rw-r--r-- 2 hduser supergroup 41984000 2016-09-14 12:47 /tmp/zk.tar

I have:
-rw-r--r-- 2 hduser supergroup 41984000 2016-09-14 12:47 /tmp/zk.tar

I expect:
-rw-r--r-- 2 hduser hadoop 41984000 2016-09-14 12:47 /tmp/zk.tar

Why is the HDFS group not the value of the dfs.permissions.superusergroup property?

@jbigdata.fr
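A minimal sketch of one way to check the likely cause. It assumes (this is an assumption, not something stated in the report) that HDFS gives a new file the group of its parent directory, per the BSD rule in the HDFS permissions model, while dfs.permissions.superusergroup only names the group of super-users. The class name and paths are illustrative; the calls are the public FileSystem API.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative sketch (hypothetical class name): compare the group of the
// parent directory with the group of the newly copied file. If /tmp carries
// group "supergroup", files created under it inherit "supergroup" no matter
// what dfs.permissions.superusergroup is set to (assumption stated above).
public class GroupCheck {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());

    FileStatus parent = fs.getFileStatus(new Path("/tmp"));
    FileStatus file = fs.getFileStatus(new Path("/tmp/zk.tar"));

    System.out.println("/tmp group:        " + parent.getGroup());
    System.out.println("/tmp/zk.tar group: " + file.getGroup());

    // If the parent group is the cause, a one-time change such as
    //   fs.setOwner(new Path("/tmp"), null, "hadoop");
    // (or "hdfs dfs -chgrp hadoop /tmp") should make new files under /tmp
    // come out with group "hadoop".
  }
}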
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/164/

[Sep 13, 2016 9:53:24 AM] (rohithsharmaks) YARN-5631. Missing refreshClusterMaxPriority usage in rmadmin help
[Sep 13, 2016 2:41:27 PM] (jlowe) YARN-5630. NM fails to start after downgrade from 2.8 to 2.7.
[Sep 13, 2016 4:38:12 PM] (aengineer) HDFS-10599. DiskBalancer: Execute CLI via Shell. Contributed by Manoj
[Sep 13, 2016 6:02:36 PM] (wang) HDFS-10837. Standardize serializiation of WebHDFS DirectoryListing.
[Sep 13, 2016 6:12:52 PM] (jing9) HADOOP-13546. Override equals and hashCode of the default retry policy
[Sep 13, 2016 7:42:10 PM] (aengineer) HDFS-10562. DiskBalancer: update documentation on how to report issues
[Sep 13, 2016 7:54:14 PM] (lei) HDFS-10636. Modify ReplicaInfo to remove the assumption that replica
[Sep 14, 2016 2:14:31 AM] (aajisaka) HADOOP-13598. Add eol=lf for unix format files in .gitattributes.
[Sep 15, 2016 2:46:00 AM] (kai.zheng) HADOOP-13218. Migrate other Hadoop side tests to prepare for removing

-1 overall

The following subsystems voted -1:
    asflicense mvnsite unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    Failed junit tests:
        hadoop.hdfs.TestEncryptionZones
        hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices
        hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
        hadoop.yarn.server.TestContainerManagerSecurity

    cc:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/164/artifact/out/diff-compile-cc-root.txt [4.0K]

    javac:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/164/artifact/out/diff-compile-javac-root.txt [168K]

    checkstyle:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/164/artifact/out/diff-checkstyle-root.txt [16M]

    mvnsite:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/164/artifact/out/patch-mvnsite-root.txt [112K]

    pylint:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/164/artifact/out/diff-patch-pylint.txt [16K]

    shellcheck:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/164/artifact/out/diff-patch-shellcheck.txt [20K]

    shelldocs:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/164/artifact/out/diff-patch-shelldocs.txt [12K]

    whitespace:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/164/artifact/out/whitespace-eol.txt [11M]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/164/artifact/out/whitespace-tabs.txt [1.3M]

    javadoc:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/164/artifact/out/diff-javadoc-javadoc-root.txt [2.2M]

    unit:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/164/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [192K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/164/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt [12K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/164/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt [268K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/164/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt [124K]

    asflicense:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/164/artifact/out/patch-asflicense-problems.txt [4.0K]

Powered by Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org
[jira] [Reopened] (HDFS-10745) Directly resolve paths into INodesInPath
     [ https://issues.apache.org/jira/browse/HDFS-10745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zhe Zhang reopened HDFS-10745:
------------------------------

Sorry to reopen the JIRA; testing the branch-2.7 patch.

> Directly resolve paths into INodesInPath
> -----------------------------------------
>
>                 Key: HDFS-10745
>                 URL: https://issues.apache.org/jira/browse/HDFS-10745
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: hdfs
>            Reporter: Daryn Sharp
>            Assignee: Daryn Sharp
>             Fix For: 2.8.0, 3.0.0-alpha1
>
>         Attachments: HDFS-10745.2.patch, HDFS-10745.branch-2.patch, HDFS-10745.patch
>
> The intermediate resolution to a string, only to be decomposed by {{INodesInPath}} back into a byte[][], can be eliminated by resolving directly to an IIP. The IIP will contain the resolved path if required.
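A rough sketch of the idea, using hypothetical stub types rather than the real NameNode classes (FSDirectory and INodesInPath internals are not reproduced here): instead of first resolving the client path to an intermediate String that is then split back into byte[][] components, resolve it once, directly into an IIP-like object, which can still produce the resolved path if a caller asks for it. The real change presumably works against the inode tree; this only shows the shape of removing the String round trip.

import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.StringJoiner;

// Hypothetical, simplified illustration only; names and signatures are
// stand-ins, not the actual HDFS NameNode code.
public class IipResolutionSketch {

  /** Stand-in for INodesInPath: holds the components; path derived on demand. */
  static class Iip {
    final byte[][] components;

    Iip(byte[][] components) {
      this.components = components;
    }

    /** The resolved path is still available if a caller needs it. */
    String getPath() {
      StringJoiner joiner = new StringJoiner("/", "/", "");
      for (byte[] c : components) {
        joiner.add(new String(c, StandardCharsets.UTF_8));
      }
      return joiner.toString();
    }
  }

  /** Old shape: produce an intermediate String, then decompose it again. */
  static Iip resolveViaString(String src) {
    String resolved = src.replaceAll("/+", "/"); // pass 1: build a String
    return new Iip(split(resolved));             // pass 2: re-parse that String
  }

  /** New shape: parse the client path once, directly into the IIP. */
  static Iip resolveDirect(String src) {
    return new Iip(split(src));
  }

  private static byte[][] split(String path) {
    List<byte[]> parts = new ArrayList<>();
    for (String p : path.split("/+")) {
      if (!p.isEmpty()) {
        parts.add(p.getBytes(StandardCharsets.UTF_8));
      }
    }
    return parts.toArray(new byte[0][]);
  }
}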
Re: [Release thread] 2.6.5 release activities
We ported 16 issues to branch-2.6. We will go ahead and start the release
process, including cutting the release branch. If you have any critical
change that should be made part of 2.6.5, please reach out to us and commit
the changes.

Thanks!
Sangjin

On Mon, Sep 12, 2016 at 3:24 PM, Sangjin Lee wrote:

> Thanks Chris!
>
> I'll help Chris get those JIRAs marked in his spreadsheet committed.
> We'll cut the release branch shortly after that. If you have any critical
> change that should be made part of 2.6.5 (CVE patches included), please
> reach out to us and commit the changes. If all goes well, we'd like to
> cut the branch in a few days.
>
> Thanks,
> Sangjin
>
> On Fri, Sep 9, 2016 at 1:24 PM, Chris Trezzo wrote:
>
>> Hi all,
>>
>> I wanted to give an update on the Hadoop 2.6.5 release efforts.
>>
>> Here is what has been done so far:
>>
>> 1. I have gone through all of the potential backports and recorded the
>> commit hashes for each of them from the branch that seems the most
>> appropriate (i.e. if there was a backport to 2.7.x, then I used the hash
>> from the backport).
>>
>> 2. I verified whether the cherry-pick for each commit is clean. This was
>> best effort, as some of the patches are in parts of the code that I am
>> less familiar with. This is recorded in the public spreadsheet here:
>> https://docs.google.com/spreadsheets/d/1lfG2CYQ7W4q3olWpOCo6EBAey1WYC8hTRUemHvYPPzY/edit?usp=sharing
>>
>> I am going to need help from committers to get these backports
>> committed. If there are any committers that have some spare cycles,
>> especially if you were involved with the initial commit for one of these
>> issues, please look at the spreadsheet and volunteer to backport one of
>> the issues.
>>
>> As always, please let me know if you have any questions or feel that I
>> have missed something.
>>
>> Thank you!
>> Chris Trezzo
>>
>> On Mon, Aug 15, 2016 at 10:55 AM, Allen Wittenauer
>> <a...@effectivemachines.com> wrote:
>>
>>> > On Aug 12, 2016, at 8:19 AM, Junping Du wrote:
>>> >
>>> > In this community, we are aggressive enough to drop Java 7 support in
>>> > the 3.0.x release. Why, then, are we so conservative about continuing
>>> > to release new bits that support Java 6?
>>>
>>> I don't view a group of people putting bug fixes into a micro release
>>> as particularly conservative. If a group within the community wasn't
>>> interested in doing it, 2.6.5 wouldn't be happening.
>>>
>>> But let's put the releases into context, because I think it tells a
>>> more interesting story.
>>>
>>> * hadoop 2.6.x = EOLed JREs (6, 7)
>>> * hadoop 2.7 -> hadoop 2.x = transitional (7, 8)
>>> * hadoop 3.x = JRE 8
>>> * hadoop 4.x = JRE 9
>>>
>>> There are groups of people still using JDK6 and they want bug fixes in
>>> a maintenance release. Boom, there's 2.6.x.
>>>
>>> Hadoop 3.x has been pushed off for years for "reasons". So we still
>>> have releases coming off of branch-2. If 2.7 had been released as 3.x,
>>> this chart would look less weird. But it wasn't, thus 2.x has this weird
>>> wart in the middle that supports both JDK7 and JDK8. Given the public
>>> policy and roadmaps of at least one major vendor at the time of this
>>> writing, we should expect to see JDK7 support for at least the next two
>>> years after 3.x appears. Bang, there's 2.x, where x is some large number.
>>>
>>> Then there is the future. People using JRE 8 want to use newer
>>> dependencies. A reasonable request. Some of these dependency updates
>>> won't work with JRE 7, and we can't update them in hadoop 2.x in any
>>> sort of compatible way without breaking the universe. (Tons of JIRAs on
>>> this point.) This means we can only do it in 3.x (re: Hadoop
>>> Compatibility Guidelines). Kapow, there's 3.x.
>>>
>>> The log4j community has stated that v1 won't work with JDK9. In turn,
>>> this means we'll need to upgrade to v2 at some point. Upgrading to v2
>>> will break the log4j properties file (and maybe other things?). That is
>>> another incompatible change, and it likely won't appear until Apache
>>> Hadoop v4 unless someone takes the initiative to fix it before v3 hits
>>> store shelves. This makes JDK9 the likely target for Apache Hadoop v4.
>>>
>>> Having major release cadences tied to JRE updates isn't necessarily a
>>> bad thing: it definitely forces the community to a) actually stop
>>> beating around the bush on majors, and b) makes it relatively easy to
>>> determine, to some degree, what the schedule looks like.