I am not sure what qualifies as a hack and what doesn't; I thought reflection
is part of Java.
Whatever the solution, I am -1 to pulling just the HDFS-specific stuff up into
FileSystem for Ozone simply because the HBase folks didn't agree and we have
people in Hadoop whom we can convince.
Wei-Hsiang Lin created HADOOP-18672:
---
Summary: ask: abfs connector to support checksum
Key: HADOOP-18672
URL: https://issues.apache.org/jira/browse/HADOOP-18672
Project: Hadoop Common
Issue
[
https://issues.apache.org/jira/browse/HADOOP-18668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Wei-Chiu Chuang resolved HADOOP-18668.
--
Fix Version/s: 3.4.0
Resolution: Fixed
> Path capability probe for truncate is
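The probe discussed here follows Hadoop's PathCapabilities pattern: ask the filesystem whether it supports an operation on a path before invoking it. The sketch below models that pattern in a self-contained way; the interface shape and the capability key mirror the real API (`FileSystem.hasPathCapability` and keys like `fs.capability.paths.truncate`), but the classes are hypothetical stand-ins, not Hadoop code:

```java
// Self-contained model of Hadoop's PathCapabilities probe pattern.
// Everything here is a hypothetical stand-in, not the real Hadoop API.
public class ProbeDemo {

    interface PathCapabilities {
        boolean hasPathCapability(String path, String capability);
    }

    /** Toy filesystem that advertises truncate support via a capability key. */
    static class TruncatingFs implements PathCapabilities {
        // Assumed key, analogous to Hadoop's "fs.capability.paths.truncate".
        static final String FS_TRUNCATE = "fs.capability.paths.truncate";

        @Override
        public boolean hasPathCapability(String path, String capability) {
            return FS_TRUNCATE.equals(capability);
        }

        boolean truncate(String path, long newLength) {
            return true; // a real implementation would shorten the file
        }
    }

    /** Probe first, instead of catching UnsupportedOperationException. */
    static boolean safeTruncate(TruncatingFs fs, String path, long len) {
        if (!fs.hasPathCapability(path, TruncatingFs.FS_TRUNCATE)) {
            return false; // caller can fall back to copy-and-replace
        }
        return fs.truncate(path, len);
    }

    public static void main(String[] args) {
        System.out.println(safeTruncate(new TruncatingFs(), "/data/log", 0L)); // prints true
    }
}
```

The point of the probe is that clients can discover support up front and choose a fallback, rather than attempting the operation and handling a runtime failure.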
'HBase doesn't want to add Ozone as a dependency' sounds to me like HBase
having resistance against the people proposing it, or against Ozone itself.
Anyway, doesn't ViewDistributedFileSystem solve this Ozone problem? I remember
Uma pursuing that to solve exactly these problems.
Pulling up the core HDFS A
For more details, see
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/972/
No changes
ERROR: File 'out/email-report.txt' does not exist
Wei-Chiu Chuang created HADOOP-18671:
Summary: Add recoverLease(), setSafeMode(), isFileClosed() APIs to
FileSystem
Key: HADOOP-18671
URL: https://issues.apache.org/jira/browse/HADOOP-18671
Projec
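One way the proposed recoverLease()/isFileClosed() APIs could surface is as an optional interface that capable filesystems implement, rather than new methods on FileSystem itself. The sketch below is illustrative only: all names are assumptions, and the toy in-memory filesystem exists just to show the intended semantics (lease recovery forces an open file closed):

```java
// Illustrative sketch of the proposed lease-recovery APIs as an optional
// interface; names and semantics here are assumptions, not the real design.
public class LeaseDemo {

    interface LeaseRecoverable {
        /** Try to take over the lease on an open file; true once recovery completes. */
        boolean recoverLease(String path);
        /** True when no client holds the file open for writing. */
        boolean isFileClosed(String path);
    }

    /** Toy in-memory filesystem tracking which paths are open for write. */
    static class InMemoryFs implements LeaseRecoverable {
        private final java.util.Set<String> openForWrite = new java.util.HashSet<>();

        void create(String path) {
            openForWrite.add(path);
        }

        @Override
        public boolean recoverLease(String path) {
            // Recovery "revokes" the current writer and closes the file.
            openForWrite.remove(path);
            return true;
        }

        @Override
        public boolean isFileClosed(String path) {
            return !openForWrite.contains(path);
        }
    }

    public static void main(String[] args) {
        InMemoryFs fs = new InMemoryFs();
        fs.create("/wal/000001.log");
        System.out.println(fs.isFileClosed("/wal/000001.log")); // false: writer holds the lease
        fs.recoverLease("/wal/000001.log");
        System.out.println(fs.isFileClosed("/wal/000001.log")); // true after recovery
    }
}
```

This is the pattern HBase-style clients need for write-ahead-log recovery: after a writer crashes, the new process recovers the lease and waits until the file is closed before replaying it.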
For more details, see
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1171/
No changes
-1 overall
The following subsystems voted -1:
blanks hadolint pathlen spotbugs unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc chec
Thank you. Makes sense to me. Yes, as part of this effort we are going to
need contract tests.
On Fri, Mar 17, 2023 at 3:52 AM Steve Loughran
wrote:
>1. I think a new interface would be good as FileContext could do the
>same thing
>2. using PathCapabilities probes should still be man
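The two points above (a new optional interface, plus retaining PathCapabilities probes) suggest clients would check both: the static type says the implementation class offers the operation, while the per-path probe says it actually works for this path (important for wrapper filesystems whose mount targets differ). A self-contained sketch of that dual check, with every name and capability key assumed for illustration:

```java
// Sketch of combining an interface check with a per-path capability probe.
// All interface names and the capability key are hypothetical.
public class DualCheckDemo {

    interface LeaseRecoverable {
        boolean recoverLease(String path);
    }

    interface PathCapabilities {
        boolean hasPathCapability(String path, String capability);
    }

    // Assumed capability key, for illustration only.
    static final String LEASE_RECOVERY = "fs.capability.lease.recovery";

    static class DemoFs implements LeaseRecoverable, PathCapabilities {
        @Override
        public boolean recoverLease(String path) {
            return true;
        }

        @Override
        public boolean hasPathCapability(String path, String capability) {
            return LEASE_RECOVERY.equals(capability);
        }
    }

    /** Check the static type AND probe the path before calling the operation. */
    static boolean tryRecover(Object fs, String path) {
        if (fs instanceof LeaseRecoverable
                && fs instanceof PathCapabilities
                && ((PathCapabilities) fs).hasPathCapability(path, LEASE_RECOVERY)) {
            return ((LeaseRecoverable) fs).recoverLease(path);
        }
        return false; // unsupported filesystem or path
    }

    public static void main(String[] args) {
        System.out.println(tryRecover(new DemoFs(), "/tmp/f")); // true
        System.out.println(tryRecover(new Object(), "/tmp/f")); // false
    }
}
```

The probe matters because an implementation like a view or wrapper filesystem can implement the interface while only some of its mounted targets support the operation.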
+1
Thank you for the release candidate, Steve!
* Verified all checksums.
* Verified all signatures.
* Built from source, including native code on Linux.
* mvn clean package -Pnative -Psrc -Drequire.openssl -Drequire.snappy
-Drequire.zstd -DskipTests
* Tests passed.
* mvn --fail-never clea
[
https://issues.apache.org/jira/browse/HADOOP-18670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Steve Loughran resolved HADOOP-18670.
-
Resolution: Invalid
Hadoop's Guava on 3.3.0+ is 27+; therefore this is unlikely to be a d
+1(Binding)
* Built from source (x86 & ARM)
* Successful Native Build (x86 & ARM)
* Verified Checksums (x86 & ARM)
* Verified Signature (x86 & ARM)
* Checked the output of hadoop version (x86 & ARM)
* Verified the output of hadoop checknative (x86 & ARM)
* Ran some basic HDFS shell commands.
* Ran
+1
* Verified signature and checksum of the source tarball.
* Built the source code on Ubuntu and OpenJDK 11 by `mvn clean package
-DskipTests -Pnative -Pdist -Dtar`.
* Setup pseudo cluster with HDFS and YARN.
* Ran simple FsShell commands - mkdir/put/get/mv/rm (including EC) and checked
the results.
* Run exam