Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/583/

[Feb 23, 2022 3:42:01 AM] (Wei-Chiu Chuang) HDFS-11041. Unable to unregister FsDatasetState MBean if DataNode is shutdown twice. Contributed by Wei-Chiu Chuang.

[Error replacing 'FILE' - Workspace is not accessible]

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/791/

[Feb 23, 2022 7:38:10 PM] (noreply) HADOOP-18071. ABFS: Set driver global timeout for ITestAzureBlobFileSystemBasics (#3866)

[Error replacing 'FILE' - Workspace is not accessible]
Re: [VOTE] Release Apache Hadoop 3.3.2 - RC5
+1 (non-binding)

Using hadoop-vote.sh:
* Signature: ok
* Checksum: ok
* Rat check (1.8.0_301): ok
  - mvn clean apache-rat:check
* Built from source (1.8.0_301): ok
  - mvn clean install -DskipTests
* Built tar from source (1.8.0_301): ok
  - mvn clean package -Pdist -DskipTests -Dtar -Dmaven.javadoc.skip=true
* Basic functional testing on pseudo-distributed cluster (carry-forwarded from RC4): HDFS, MapReduce, ATSv2, HBase (2.x)
* Jira fixVersions seem consistent with git commits

On Tue, Feb 22, 2022 at 10:47 AM Chao Sun wrote:
> Hi all,
>
> Here's Hadoop 3.3.2 release candidate #5:
>
> The RC is available at: http://people.apache.org/~sunchao/hadoop-3.3.2-RC5
> The RC tag is at:
> https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC5
> The Maven artifacts are staged at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1335
>
> You can find my public key at:
> https://downloads.apache.org/hadoop/common/KEYS
>
> CHANGELOG is the only difference between this and RC4. Therefore, the tests
> I've done in RC4 are still valid:
> - Ran all the unit tests
> - Started a single node HDFS cluster and tested a few simple commands
> - Ran all the tests in Spark using the RC5 artifacts
>
> Please evaluate the RC and vote, thanks!
>
> Best,
> Chao
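The checksum step above can be reproduced by hand. A minimal sketch, using a locally generated stand-in file rather than the real release tarball (the /tmp paths here are illustrative, not part of hadoop-vote.sh):

```shell
# Stand-in artifact: in a real vote this would be the tarball downloaded
# from the RC directory, alongside its published .sha512 file.
printf 'release-bytes' > /tmp/sample-artifact.tar.gz

# Publisher side: record the SHA-512 digest next to the artifact.
sha512sum /tmp/sample-artifact.tar.gz > /tmp/sample-artifact.tar.gz.sha512

# Voter side: -c recomputes the digest and compares; prints "OK" on a match.
sha512sum -c /tmp/sample-artifact.tar.gz.sha512
```

The signature check works the same way, with `gpg --verify` against the KEYS file linked in the vote mail.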
[jira] [Created] (HDFS-16482) ObserverNamenode throw FileNotFoundException on addBlock
cp created HDFS-16482:
-
Summary: ObserverNamenode throw FileNotFoundException on addBlock
Key: HDFS-16482
URL: https://issues.apache.org/jira/browse/HDFS-16482
Project: Hadoop HDFS
Issue Type: Bug
Components: dfsclient
Affects Versions: 3.2.1
Reporter: cp

A DFSClient call to the ObserverNamenode throws FileNotFoundException:
```
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2898)
at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.analyzeFileState(FSDirWriteFileOp.java:599)
at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.validateAddBlock(FSDirWriteFileOp.java:171)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2777)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:892)
```
Should `addBlock` be a coordinated call?

--
This message was sent by Atlassian Jira
(v8.20.1#820001)
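The question makes sense in light of how observer reads stay consistent: the client remembers the last transaction id it saw on the Active, and a coordinated call on the Observer waits until the Observer has replayed edits at least that far. An uncoordinated call can race ahead of the edit tail and not see a file the client just created. A minimal sketch of that race, with hypothetical names (not Hadoop's actual classes):

```java
// Hypothetical simulation of Active/Observer state-id coordination.
// A write on the Active bumps its txid; the Observer lags until it replays
// edits. A coordinated read first catches the Observer up to the client's
// last-seen txid; an uncoordinated read may look at stale state and fail
// (in HDFS, as a FileNotFoundException from checkLease/analyzeFileState).
class ObserverReadSketch {
    static long activeTxId = 0;    // edits applied on the Active NameNode
    static long observerTxId = 0;  // edits replayed so far on the Observer

    // e.g. create(): performed on the Active, returns the txid the client saw
    static long writeOnActive() {
        return ++activeTxId;
    }

    // Returns true if the file written at clientSeenTxId is visible.
    static boolean readOnObserver(long clientSeenTxId, boolean coordinated) {
        if (coordinated) {
            // stand-in for waiting until the Observer catches up
            observerTxId = Math.max(observerTxId, clientSeenTxId);
        }
        return observerTxId >= clientSeenTxId;
    }
}
```

Under this model, an uncoordinated `addBlock` right after `create` can miss the new file, which matches the reported stack trace.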
[jira] [Resolved] (HDFS-16397) Reconfig slow disk parameters for datanode
[ https://issues.apache.org/jira/browse/HDFS-16397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Takanobu Asanuma resolved HDFS-16397.
-
Fix Version/s: 3.4.0
Resolution: Fixed

Merged to trunk. I will try to backport it into branch-3.3 later.

> Reconfig slow disk parameters for datanode
> --
>
> Key: HDFS-16397
> URL: https://issues.apache.org/jira/browse/HDFS-16397
> Project: Hadoop HDFS
> Issue Type: New Feature
> Reporter: tomscut
> Assignee: tomscut
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.4.0
>
> Time Spent: 3h
> Remaining Estimate: 0h
>
> In large clusters, a rolling restart of datanodes takes a long time. We can make
> the slow peer parameters and slow disk parameters in the datanode reconfigurable to
> facilitate cluster operation and maintenance.
[jira] [Created] (HDFS-16483) RBF: DataNode talk to Router requesting block info in WebHDFS
Fengnan Li created HDFS-16483:
-
Summary: RBF: DataNode talk to Router requesting block info in WebHDFS
Key: HDFS-16483
URL: https://issues.apache.org/jira/browse/HDFS-16483
Project: Hadoop HDFS
Issue Type: Bug
Components: webhdfs
Reporter: Fengnan Li
Assignee: Fengnan Li

In WebHDFS, before the Router redirects the OPEN call to a DataNode, it attaches the namenoderpcaddress param. When the DataNode's WebHdfsHandler takes the call, it constructs a DFSClient based on that address, which points to the Router. This is fine when the Router and DataNode are both secure or both nonsecure. However, when the DN is not secure but the Router is, there will be:

org.apache.hadoop.security.AccessControlException: SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]

Comments are welcome on how to fix this. One option is to always have the DataNode construct the DFSClient based on the default FS, since the default FS is always the NameNode in the same cluster, which should have the same security settings as the DataNode.
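The proposed fix boils down to a choice of which address the DataNode hands to its DFSClient. A minimal sketch of that decision, with hypothetical names (not the actual patch or Hadoop's API):

```java
// Hypothetical helper: choose the NameNode address for the DN-side DFSClient.
// The proposal is to prefer the DataNode's own fs.defaultFS, which always
// names the local cluster's NameNode and therefore shares the DataNode's
// security settings, over the namenoderpcaddress query parameter, which in
// an RBF deployment may point at a (possibly secure) Router.
class NameNodeAddressSketch {
    static String chooseNameNodeAddress(String namenodeRpcParam, String defaultFs) {
        if (defaultFs != null && !defaultFs.isEmpty()) {
            return defaultFs; // same-cluster NameNode, matching security config
        }
        return namenodeRpcParam; // fall back to the WebHDFS query parameter
    }
}
```

With this preference, a nonsecure DataNode behind a secure Router would never try SIMPLE authentication against the Router's RPC endpoint.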
[jira] [Created] (HDFS-16484) fix an infinite loop bug in SPSPathIdProcessor thread
qinyuren created HDFS-16484:
---
Summary: fix an infinite loop bug in SPSPathIdProcessor thread
Key: HDFS-16484
URL: https://issues.apache.org/jira/browse/HDFS-16484
Project: Hadoop HDFS
Issue Type: Sub-task
Reporter: qinyuren