[
https://issues.apache.org/jira/browse/HDFS-15961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Shashikant Banerjee resolved HDFS-15961.
Resolution: Fixed
> standby namenode failed to start ordered snapshot deletion is e
Haiyang Hu created HDFS-15998:
-
Summary: Fix NullPointerException In listOpenFiles
Key: HDFS-15998
URL: https://issues.apache.org/jira/browse/HDFS-15998
Project: Hadoop HDFS
Issue Type: Bug
Aff
For more details, see
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/489/
[Apr 25, 2021 6:31:51 AM] (noreply) HDFS-15978. Solve
DatanodeManager#getBlockRecoveryCommand() printing IOException. (#2913)
Contributed by JiangHua Zhu.
[Apr 25, 2021 5:33:55 PM] (noreply) HADOOP-
Siyao Meng created HDFS-15997:
-
Summary: Implement dfsadmin -provisionSnapshotTrash -all
Key: HDFS-15997
URL: https://issues.apache.org/jira/browse/HDFS-15997
Project: Hadoop HDFS
Issue Type: Sub-task
For more details, see
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/280/
No changes
-1 overall
The following subsystems voted -1:
asflicense findbugs hadolint mvnsite pathlen unit
The following subsystems voted -1 but
were configured to be filtered/ignored
[
https://issues.apache.org/jira/browse/HDFS-15621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Stephen O'Donnell resolved HDFS-15621.
--
Resolution: Fixed
> Datanode DirectoryScanner uses excessive memory
> -
Thanks. I created a Jenkins job to upload the SNAPSHOT to Apache nexus
repository.
https://ci-hadoop.apache.org/view/Hadoop/job/Hadoop-thirdparty-trunk-Commit/
I can see the new artifacts uploaded. Let's see if the main Hadoop repo
precommit can consume the bits.
On Mon, Apr 26, 2021 at 6:02 PM A
Yep, you have to do it manually
-Ayush
> On 26-Apr-2021, at 3:23 PM, Wei-Chiu Chuang wrote:
>
>
> Does anyone know how we publish hadoop-thirdparty SNAPSHOT artifacts?
>
> The main Hadoop artifacts are published by this job
> https://ci-hadoop.apache.org/view/Hadoop/job/Hadoop-trunk-Commit/
Does anyone know how we publish hadoop-thirdparty SNAPSHOT artifacts?
The main Hadoop artifacts are published by this job
https://ci-hadoop.apache.org/view/Hadoop/job/Hadoop-trunk-Commit/ after
every commit.
However, we don't seem to publish hadoop-thirdparty regularly. (Apache
nexus:
https://repos
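Ayush's point that hadoop-thirdparty snapshots are published manually can be sketched as a plain Maven deploy. This is a minimal, illustrative sketch, not the project's documented procedure: it assumes Nexus credentials are already configured in ~/.m2/settings.xml, and the ASF server id mentioned in the comment is an assumption here.

```shell
# Minimal sketch of a manual SNAPSHOT publish, run from a hadoop-thirdparty
# checkout. Assumption: deploy credentials for repository.apache.org are
# configured in ~/.m2/settings.xml (the ASF parent POM conventionally uses
# the server id "apache.snapshots.https" for snapshot deploys).
DEPLOY_CMD="mvn clean deploy -DskipTests"

# Guard: without Maven settings, the deploy would fail authentication.
if [ ! -f "$HOME/.m2/settings.xml" ]; then
  echo "warning: ~/.m2/settings.xml not found; deploy credentials missing" >&2
fi

# Echo rather than execute, since this is only a sketch of the command.
echo "manual publish: $DEPLOY_CMD"
```

A CI job like the Hadoop-thirdparty-trunk-Commit job Wei-Chiu created would run the same deploy goal after each commit instead of by hand.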
[
https://issues.apache.org/jira/browse/HDFS-15991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Takanobu Asanuma resolved HDFS-15991.
-
Fix Version/s: 3.4.0
Resolution: Fixed
> Add location into datanode info for NameN
If you are seeing precommit failures, like this:
[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-compiler-plugin:3.1:compile
(default-compile) on project hadoop-annotations: Compilation failure
[ERROR] javac: invalid flag: -Xmaxwarns=
[ERROR] Usage: javac
[ERROR] use -help for
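One plausible reading of the "invalid flag: -Xmaxwarns=" failure is that a Maven property supplying the warning limit resolved to empty, so javac received a dangling "=" with no value. A hedged sketch of a maven-compiler-plugin stanza that avoids this by passing the flag and its value as separate arguments (the value 10000 is illustrative, not Hadoop's actual setting):

```xml
<!-- Illustrative only: passing -Xmaxwarns and its value as separate <arg>
     entries avoids handing javac a bare "=" if a property is empty. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <configuration>
    <compilerArgs>
      <arg>-Xmaxwarns</arg>
      <arg>10000</arg>
    </compilerArgs>
  </configuration>
</plugin>
```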