Re: [VOTE] Apache Hadoop Ozone 0.5.0-beta RC2
+1 binding.

- Verified hashes and signatures
- Built from source
- Verified that the last commit of the release binary matches the tag
- Started an Ozone cluster with docker-compose
- Ran ozone sh and scmcli commands
- Ran freon to create 100k keys with validation

Thanks Dinesh for driving the release.

Best,
Sammi

On Mon, Mar 16, 2020 at 10:27 AM Dinesh Chitlangia wrote:
> Hi Folks,
>
> We have put together RC2 for Apache Hadoop Ozone 0.5.0-beta.
>
> The RC artifacts are at:
> https://home.apache.org/~dineshc/ozone-0.5.0-rc2/
>
> The public key used for signing the artifacts can be found at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> The maven artifacts are staged at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1262
>
> The RC tag in git is at:
> https://github.com/apache/hadoop-ozone/tree/ozone-0.5.0-beta-RC2
>
> This release contains 800+ fixes/improvements [1].
> Thanks to everyone who put in the effort to make this happen.
>
> *The vote will run for 7 days, ending on March 22nd 2020 at 11:59 pm PST.*
>
> Note: This release is beta quality, it’s not recommended to use in
> production but we believe that it’s stable enough to try out the feature
> set and collect feedback.
>
> [1] https://s.apache.org/ozone-0.5.0-fixed-issues
>
> Thanks,
> Dinesh Chitlangia
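The hash and signature checks in the list above follow the usual Apache release-candidate verification flow. A minimal shell sketch of the checksum step (the file name below is an illustrative stand-in, and the signature/tag steps are shown as comments because they need the real artifacts and the KEYS file):

```shell
# Stand-in for the downloaded release tarball (illustrative only).
printf 'release-bits' > ozone-0.5.0-beta-example.tar.gz

# 1. Checksum: compute SHA-512 and verify it against the recorded value.
sha512sum ozone-0.5.0-beta-example.tar.gz > ozone-0.5.0-beta-example.tar.gz.sha512
sha512sum -c ozone-0.5.0-beta-example.tar.gz.sha512 && echo "checksum OK"

# 2. Signature: import the project KEYS, then verify the detached signature.
#    curl -O https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
#    gpg --import KEYS
#    gpg --verify <tarball>.asc <tarball>

# 3. Tag check: confirm the RC tag resolves to the expected commit.
#    git -C hadoop-ozone rev-parse ozone-0.5.0-beta-RC2
```

In a real run the published `.sha512` file is downloaded alongside the tarball rather than generated locally, so `sha512sum -c` compares against the release manager's value.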
Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/

No changes

-1 overall

The following subsystems voted -1:
    asflicense findbugs hadolint pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    XML :

       Parsing Error(s):
       hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
       hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

    FindBugs :

       module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
       Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean) At ColumnRWHelper.java:[line 335]

    Failed junit tests :

       hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
       hadoop.hdfs.TestMultipleNNPortQOP
       hadoop.hdfs.TestRollingUpgrade
       hadoop.hdfs.server.namenode.TestDecommissioningStatus
       hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
       hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
       hadoop.registry.secure.TestSecureLogins
       hadoop.mapreduce.v2.TestUberAM
       hadoop.tools.TestDistCpSystem

   cc:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt [4.0K]

   javac:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt [328K]

   cc:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/diff-compile-cc-root-jdk1.8.0_242.txt [4.0K]

   javac:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/diff-compile-javac-root-jdk1.8.0_242.txt [308K]

   checkstyle:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/diff-checkstyle-root.txt [16M]

   hadolint:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/diff-patch-hadolint.txt [4.0K]

   pathlen:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/pathlen.txt [12K]

   pylint:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/diff-patch-pylint.txt [24K]

   shellcheck:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/diff-patch-shellcheck.txt [56K]

   shelldocs:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/diff-patch-shelldocs.txt [8.0K]

   whitespace:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/whitespace-eol.txt [12M]
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/whitespace-tabs.txt [1.3M]

   xml:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/xml.txt [12K]

   findbugs:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html [8.0K]

   javadoc:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt [16K]
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_242.txt [1.1M]

   unit:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [240K]
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt [12K]
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry.txt [12K]
       https://builds.apache.org/job/hadoop-qbt-b
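The "Boxed value is unboxed and then immediately reboxed" FindBugs warning in the report above flags a common autoboxing round trip. A minimal illustration of the pattern and its fix — this is invented demo code, not the actual `ColumnRWHelper` source:

```java
import java.util.HashMap;
import java.util.Map;

public class BoxingDemo {
    public static void main(String[] args) {
        Map<Long, String> results = new HashMap<>();

        // The flagged pattern: a boxed Long is unboxed into a primitive,
        // then immediately re-boxed when used as a map key.
        Long timestamp = Long.valueOf(335L);
        long unboxed = timestamp;        // unbox
        results.put(unboxed, "value");   // re-box on put

        // Fix: keep the boxed value and skip the round trip entirely.
        results.put(timestamp, "value2");

        System.out.println(results.get(335L));
    }
}
```

The round trip is only a minor inefficiency, but FindBugs reports it because it usually signals that the author did not intend the extra allocation.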
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1446/

[Mar 21, 2020 4:44:55 PM] (tasanuma) HDFS-15214. WebHDFS: Add snapshot counts to Content Summary. Contributed

-1 overall

The following subsystems voted -1:
    asflicense findbugs pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    XML :

       Parsing Error(s):
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

    FindBugs :

       module:hadoop-cloud-storage-project/hadoop-cos
       Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) At BufferPool.java:[line 66]
       org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
       Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
       Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
       org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream; obligation to clean up resource created at CosNativeFileSystemStore.java:[line 252] is not discharged

    Failed CTEST tests :

       remote_block_reader
       memcheck_remote_block_reader
       bad_datanode
       memcheck_bad_datanode

    Failed junit tests :

       hadoop.io.compress.snappy.TestSnappyCompressorDecompressor
       hadoop.io.compress.TestCompressorDecompressor
       hadoop.yarn.applications.distributedshell.TestDistributedShell
       hadoop.mapreduce.TestMapreduceConfigFields

   cc:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1446/artifact/out/diff-compile-cc-root.txt [32K]

   javac:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1446/artifact/out/diff-compile-javac-root.txt [428K]

   checkstyle:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1446/artifact/out/diff-checkstyle-root.txt [16M]

   pathlen:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1446/artifact/out/pathlen.txt [12K]

   pylint:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1446/artifact/out/diff-patch-pylint.txt [24K]

   shellcheck:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1446/artifact/out/diff-patch-shellcheck.txt [20K]

   shelldocs:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1446/artifact/out/diff-patch-shelldocs.txt [44K]

   whitespace:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1446/artifact/out/whitespace-eol.txt [13M]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1446/artifact/out/whitespace-tabs.txt [1.9M]

   xml:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1446/artifact/out/xml.txt [20K]

   findbugs:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1446/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]

   javadoc
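The two "reliance on default encoding" findings in the report above concern `new String(byte[])`, which decodes with the platform default charset and so can behave differently across JVMs. A minimal sketch of the problem and the fix — demo code, not the actual `CosNativeFileSystemStore` source:

```java
import java.nio.charset.StandardCharsets;

public class EncodingDemo {
    public static void main(String[] args) {
        byte[] raw = "md5-digest".getBytes(StandardCharsets.UTF_8);

        // Flagged pattern: decoding with the platform default charset.
        // Results depend on the JVM's file.encoding setting.
        String fragile = new String(raw);

        // Fix: name the charset explicitly so the result is deterministic.
        String stable = new String(raw, StandardCharsets.UTF_8);

        System.out.println(stable);
    }
}
```

The same rule applies in the other direction: `String.getBytes()` without a charset argument triggers the identical FindBugs warning.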
Re: [VOTE] Apache Hadoop Ozone 0.5.0-beta RC2
+1

- Deployed a 3 node cluster
- Tried ozone shell and filesystem commands
- Ran freon load generator

Thanks Dinesh for working on the RC2.

On Sun, Mar 15, 2020 at 7:27 PM Dinesh Chitlangia wrote:
> Hi Folks,
>
> We have put together RC2 for Apache Hadoop Ozone 0.5.0-beta.
>
> The RC artifacts are at:
> https://home.apache.org/~dineshc/ozone-0.5.0-rc2/
>
> The public key used for signing the artifacts can be found at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> The maven artifacts are staged at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1262
>
> The RC tag in git is at:
> https://github.com/apache/hadoop-ozone/tree/ozone-0.5.0-beta-RC2
>
> This release contains 800+ fixes/improvements [1].
> Thanks to everyone who put in the effort to make this happen.
>
> *The vote will run for 7 days, ending on March 22nd 2020 at 11:59 pm PST.*
>
> Note: This release is beta quality, it’s not recommended to use in
> production but we believe that it’s stable enough to try out the feature
> set and collect feedback.
>
> [1] https://s.apache.org/ozone-0.5.0-fixed-issues
>
> Thanks,
> Dinesh Chitlangia
[jira] [Reopened] (HDFS-15113) Missing IBR when NameNode restart if open processCommand async feature
[ https://issues.apache.org/jira/browse/HDFS-15113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang reopened HDFS-15113:
------------------------------------

Reopen to have the addendum tested.

> Missing IBR when NameNode restart if open processCommand async feature
> ----------------------------------------------------------------------
>
>                 Key: HDFS-15113
>                 URL: https://issues.apache.org/jira/browse/HDFS-15113
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>            Reporter: Xiaoqiao He
>            Assignee: Xiaoqiao He
>            Priority: Blocker
>             Fix For: 3.3.0
>
>         Attachments: HDFS-15113.001.patch, HDFS-15113.002.patch, HDFS-15113.003.patch, HDFS-15113.004.patch, HDFS-15113.005.patch, HDFS-15113.addendum.patch
>
>
> Recently, I hit a case where the NameNode was missing blocks after a restart, which is related to HDFS-14997.
> a. During NameNode restart, the NameNode returns the `DNA_REGISTER` command to a DataNode when it receives certain RPC requests from that DataNode.
> b. When the DataNode receives the `DNA_REGISTER` command, it runs #reRegister asynchronously.
> {code:java}
> void reRegister() throws IOException {
>   if (shouldRun()) {
>     // re-retrieve namespace info to make sure that, if the NN
>     // was restarted, we still match its version (HDFS-2120)
>     NamespaceInfo nsInfo = retrieveNamespaceInfo();
>     // and re-register
>     register(nsInfo);
>     scheduler.scheduleHeartbeat();
>     // HDFS-9917, Standby NN IBR can be very huge if standby namenode is down
>     // for sometime.
>     if (state == HAServiceState.STANDBY || state == HAServiceState.OBSERVER) {
>       ibrManager.clearIBRs();
>     }
>   }
> }
> {code}
> c. As we know, #register triggers a block report (BR) immediately.
> d. Because #reRegister runs asynchronously, we cannot be sure which happens first: sending the FBR or clearing the IBRs. If clearing the IBRs runs first, everything is OK. But if the FBR is sent first and the IBRs are cleared afterwards, any blocks received between those two points are missing until the next FBR.
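The ordering hazard in step (d) can be shown with a toy sketch. This is not DataNode code — the queue and method names below are invented purely to illustrate why "report, then clear" loses updates while "clear, then report" does not:

```java
import java.util.ArrayList;
import java.util.List;

public class IbrRaceSketch {
    // Stand-in for the pending incremental-block-report queue.
    static List<String> pendingIbrs = new ArrayList<>();

    // Stand-in for sending a full block report of the blocks known right now.
    static void sendFullBlockReport() { /* reports current blocks to the NN */ }

    public static void main(String[] args) {
        // Safe order: clear stale IBRs first, then send the FBR. Any block
        // that arrives after the clear is either covered by the FBR snapshot
        // or stays queued for the next IBR, so nothing is lost.
        pendingIbrs.clear();
        sendFullBlockReport();

        // Unsafe order (the bug): FBR first, then clear. A block received in
        // the window between the two calls is dropped from the queue, and the
        // NameNode does not learn about it until the next FBR.
        sendFullBlockReport();
        pendingIbrs.add("blk_2"); // arrives in the window
        pendingIbrs.clear();      // blk_2 silently discarded

        System.out.println(pendingIbrs.size());
    }
}
```

Because the real #reRegister runs asynchronously, the DataNode cannot rely on any particular interleaving, which is why the fix has to enforce the safe ordering rather than hope for it.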
--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1447/

[Mar 22, 2020 6:14:18 AM] (ayushsaxena) HDFS-15227. NPE if the last block changes from COMMITTED to COMPLETE
Re: [VOTE] Apache Hadoop Ozone 0.5.0-beta RC2
+1 binding

- Downloaded source and verified the signature.
- Verified build and documents.
- Deployed an 11 node cluster (3 OMs with HA, 6 datanodes, 1 SCM and 1 S3G).
- Verified multiple RATIS-3 pipelines are created as expected.
- Tried ozone shell commands via o3 and o3fs, focusing on security and HA.
- Ran freon with random key generation.

Only found a few minor issues that we can fix in follow-up JIRAs.

1) ozone getconf -ozonemanagers does not return all the OM instances.

bash-4.2$ ozone getconf -ozonemanagers
0.0.0.0

2) The documentation on specifying the service ID can be improved. More specifically, the URI help should give examples for the Service ID in HA; currently it only mentions host/port.

ozone sh vol create /vol1
Service ID or host name must not be omitted when ozone.om.service.ids is defined.

bash-4.2$ ozone sh vol create --help
Usage: ozone sh volume create [-hV] [--root] [-q=] [-u=]
Creates a volume for the specified user
URI of the volume.
Ozone URI could start with o3:// or without prefix. URI may contain the host and port of the OM server. Both are optional. If they are not specified it will be identified from the config files.

3) ozone scmcli container list seems to report incorrect numberOfKeys and usedBytes. Also, the container owner is set to the current leader OM (om3); should we use the OM service ID here instead?

bash-4.2$ ozone scmcli container list
{
  "state" : "OPEN",
  "replicationFactor" : "THREE",
  "replicationType" : "RATIS",
  "usedBytes" : 3813,
  "numberOfKeys" : 1,
  ...

bash-4.2$ ozone sh key list o3://id1/vol1/bucket1/
{
  "volumeName" : "vol1",
  "bucketName" : "bucket1",
  "name" : "k1",
  "dataSize" : 3813,
  "creationTime" : "2020-03-23T03:23:30.670Z",
  "modificationTime" : "2020-03-23T03:23:33.207Z",
  "replicationType" : "RATIS",
  "replicationFactor" : 3
}
{
  "volumeName" : "vol1",
  "bucketName" : "bucket1",
  "name" : "k2",
  "dataSize" : 3813,
  "creationTime" : "2020-03-23T03:18:46.735Z",
  "modificationTime" : "2020-03-23T03:20:15.005Z",
  "replicationType" : "RATIS",
  "replicationFactor" : 3
}

Thanks Dinesh for driving the release of beta RC2.

Xiaoyu

On Sun, Mar 22, 2020 at 2:51 PM Aravindan Vijayan wrote:
> +1
> Deployed a 3 node cluster
> Tried ozone shell and filesystem commands
> Ran freon load generator
>
> Thanks Dinesh for working on the RC2.
>
> On Sun, Mar 15, 2020 at 7:27 PM Dinesh Chitlangia wrote:
> >
> > Hi Folks,
> >
> > We have put together RC2 for Apache Hadoop Ozone 0.5.0-beta.
> >
> > The RC artifacts are at:
> > https://home.apache.org/~dineshc/ozone-0.5.0-rc2/
> >
> > The public key used for signing the artifacts can be found at:
> > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> >
> > The maven artifacts are staged at:
> > https://repository.apache.org/content/repositories/orgapachehadoop-1262
> >
> > The RC tag in git is at:
> > https://github.com/apache/hadoop-ozone/tree/ozone-0.5.0-beta-RC2
> >
> > This release contains 800+ fixes/improvements [1].
> > Thanks to everyone who put in the effort to make this happen.
> >
> > *The vote will run for 7 days, ending on March 22nd 2020 at 11:59 pm PST.*
> >
> > Note: This release is beta quality, it’s not recommended to use in
> > production but we believe that it’s stable enough to try out the feature
> > set and collect feedback.
> >
> > [1] https://s.apache.org/ozone-0.5.0-fixed-issues
> >
> > Thanks,
> > Dinesh Chitlangia
> >
Re: [VOTE] Apache Hadoop Ozone 0.5.0-beta RC2
+1 (non-binding)

* Built from source
* Verified checksums
* Ran some basic shell commands

Thanks Dinesh for driving the release.

-Ayush

On Mon, 16 Mar 2020 at 07:57, Dinesh Chitlangia wrote:
> Hi Folks,
>
> We have put together RC2 for Apache Hadoop Ozone 0.5.0-beta.
>
> The RC artifacts are at:
> https://home.apache.org/~dineshc/ozone-0.5.0-rc2/
>
> The public key used for signing the artifacts can be found at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> The maven artifacts are staged at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1262
>
> The RC tag in git is at:
> https://github.com/apache/hadoop-ozone/tree/ozone-0.5.0-beta-RC2
>
> This release contains 800+ fixes/improvements [1].
> Thanks to everyone who put in the effort to make this happen.
>
> *The vote will run for 7 days, ending on March 22nd 2020 at 11:59 pm PST.*
>
> Note: This release is beta quality, it’s not recommended to use in
> production but we believe that it’s stable enough to try out the feature
> set and collect feedback.
>
> [1] https://s.apache.org/ozone-0.5.0-fixed-issues
>
> Thanks,
> Dinesh Chitlangia
[jira] [Created] (HDFS-15232) Some CTESTs are failing after HADOOP-16054
Akira Ajisaka created HDFS-15232:
------------------------------------

             Summary: Some CTESTs are failing after HADOOP-16054
                 Key: HDFS-15232
                 URL: https://issues.apache.org/jira/browse/HDFS-15232
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: native
            Reporter: Akira Ajisaka

Failed CTEST tests after HADOOP-16054:
* remote_block_reader
* memcheck_remote_block_reader
* bad_datanode
* memcheck_bad_datanode