On Sat, 5 Oct 2024 at 01:58, Wei-Chiu Chuang <weic...@apache.org> wrote:

> Hey, the 3.4.1 tarball is a whopping 929MB! The corresponding docker image
> is over 1.1GB. Two years ago, 3.2.3 was less than 500MB; a year ago, 3.3.6
> was less than 700MB.
> The AWS SDK v2 bundle jar by itself is more than 500MB.
>

for you: https://issues.apache.org/jira/browse/HADOOP-19083

we now rip apart the tar, remove the bundle.jar, retar it and sign it again
under a different path.
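
For the curious, the mechanics are roughly this (a sketch only -- the real
logic lives in the release scripts, and the exact bundle path may differ on
your build):

  # sketch: build a "lean" tarball without the AWS SDK v2 bundle
  tar -xzf hadoop-3.4.1.tar.gz
  rm hadoop-3.4.1/share/hadoop/tools/lib/bundle-2.*.jar
  tar -czf hadoop-3.4.1-lean.tar.gz hadoop-3.4.1
  gpg --armor --detach-sign hadoop-3.4.1-lean.tar.gz
  sha512sum hadoop-3.4.1-lean.tar.gz > hadoop-3.4.1-lean.tar.gz.sha512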


> One issue I found with Ozone is the protobuf classpath.
>
> this test is failing
>
> https://github.com/jojochuang/ozone/actions/runs/11187465927/job/31105329742
> because
>
> hdfs dfs -put /opt/hadoop/NOTICE.txt o3fs://bucket1.volume1.om//ozone-50948
>
> Exception in thread "main" java.lang.NoClassDefFoundError: com/google/protobuf/ServiceException
>         at org.apache.hadoop.ozone.om.protocolPB.Hadoop3OmTransportFactory.createOmTransport(Hadoop3OmTransportFactory.java:33)
>         at org.apache.hadoop.ozone.om.protocolPB.OmTransportFactory.create(OmTransportFactory.java:45)
>         at org.apache.hadoop.ozone.client.rpc.RpcClient.createOmTransport(RpcClient.java:414)
>         at org.apache.hadoop.ozone.client.rpc.RpcClient.<init>(RpcClient.java:261)
>         at org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:248)
>         at org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:231)
>         at org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:94)
>         at org.apache.hadoop.fs.ozone.BasicOzoneClientAdapterImpl.<init>(BasicOzoneClientAdapterImpl.java:190)
>         at org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl.<init>(OzoneClientAdapterImpl.java:51)
>         at org.apache.hadoop.fs.ozone.OzoneFileSystem.createAdapter(OzoneFileSystem.java:109)
>         at org.apache.hadoop.fs.ozone.BasicOzoneFileSystem.initialize(BasicOzoneFileSystem.java:200)
>         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3615)
>         at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:172)
>         at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3716)
>         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3667)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:557)
>         at org.apache.hadoop.fs.Path.getFileSystem(Path.java:366)
>         at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:347)
>         at org.apache.hadoop.fs.shell.CommandWithDestination.getRemoteDestination(CommandWithDestination.java:210)
>         at org.apache.hadoop.fs.shell.CopyCommands$Put.processOptions(CopyCommands.java:289)
>         at org.apache.hadoop.fs.shell.Command.run(Command.java:191)
>         at org.apache.hadoop.fs.FsShell.run(FsShell.java:327)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:82)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:97)
>         at org.apache.hadoop.fs.FsShell.main(FsShell.java:390)
> Caused by: java.lang.ClassNotFoundException: com.google.protobuf.ServiceException
>         at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>         at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>         ... 25 more
>
>
> The classpath is
> sh-4.2$ hadoop classpath
>
> /opt/hadoop/etc/hadoop:/opt/hadoop/share/hadoop/common/lib/*:/opt/hadoop/share/hadoop/common/*:/opt/hadoop/share/hadoop/hdfs:/opt/hadoop/share/hadoop/hdfs/lib/*:/opt/hadoop/share/hadoop/hdfs/*:/opt/hadoop/share/hadoop/mapreduce/*:/opt/hadoop/share/hadoop/yarn:/opt/hadoop/share/hadoop/yarn/lib/*:/opt/hadoop/share/hadoop/yarn/*:/opt/ozone/share/ozone/lib/ozone-filesystem-hadoop3-client-1.5.0-SNAPSHOT.jar
>
> Looks like this was caused by HADOOP-18487
> <https://issues.apache.org/jira/browse/HADOOP-18487>, and I guess Ozone will
> need to declare an explicit dependency on protobuf 2.5 to avoid this problem.
> This is fine.
>

yes, afraid so. there is one still lurking in timelineservice, but that is
gone in trunk as we removed hbase 1 support there.

./hadoop-3.4.1/share/hadoop/yarn/csi/lib/grpc-protobuf-1.53.0.jar
./hadoop-3.4.1/share/hadoop/yarn/csi/lib/grpc-protobuf-lite-1.53.0.jar
./hadoop-3.4.1/share/hadoop/yarn/csi/lib/proto-google-common-protos-2.9.0.jar
./hadoop-3.4.1/share/hadoop/yarn/timelineservice/lib/protobuf-java-2.5.0.jar
./hadoop-3.4.1/share/hadoop/yarn/timelineservice/lib/hbase-protocol-1.7.1.jar
./hadoop-3.4.1/share/hadoop/common/lib/hadoop-shaded-protobuf_3_25-1.3.0.jar
./hadoop-3.4.1/share/hadoop/hdfs/lib/hadoop-shaded-protobuf_3_25-1.3.0.jar

This change went in with 3.4.0.

the good news: you get to choose which protobuf version to use; if you use
our shaded one then you can bind to the (fixed) shaded version, as they all
coexist with each other.
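
In the meantime, a quick client-side workaround (a sketch only; reuse
whichever protobuf-java-2.5.0.jar your distribution already ships -- the
timelineservice copy above is one) is to put protobuf 2.5 back on the client
classpath:

  # sketch: restore com.google.protobuf 2.5 for clients which still need it
  export HADOOP_CLASSPATH=/opt/hadoop/share/hadoop/yarn/timelineservice/lib/protobuf-java-2.5.0.jar
  hdfs dfs -put /opt/hadoop/NOTICE.txt o3fs://bucket1.volume1.om//ozone-50948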


> Apart from that, there was one problem with s3a running against the Ozone s3
> gateway with Hadoop 3.4.0; however, 3.4.1 is good.
>
> I believe this error means that during PutObject or MPU, Ozone treats the
> s3 object names as file system path names, but the client didn't give a
> proper fs path name.
> One of them is test/testOverwriteNonEmptyDirectory;
> the other is test/testOverwriteEmptyDirectory.
>
>
> https://github.com/jojochuang/ozone/actions/runs/11169844919/job/31094862369
>
>

We added a new performance flag for create which is effectively "turn off the
safety checks"; it saves the LIST call that stops the caller overwriting a
directory. It looks like your store has a lot more of the "proper" filesystem
semantics, so that overwrite check happens at the far end anyway. This is a
good thing: it means we can save on the LIST call while still stopping
directory overwrites.

I think we could add a flag which declares that the far end does these checks
itself, because then we could skip them in the live system too. In your store
there is no need at all to do the safety check, even when the performance
flag is not set.

Create a new JIRA and for 3.4.2 we will let you add an option to declare that
your store does the safety checks; we can then use this in production
everywhere and modify this test.
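
For reference, on 3.4.x the flag can be set per-command while testing (a
sketch; fs.s3a.create.performance is the option name as I recall it, and the
bucket/path here are placeholders -- check the s3a docs for your build):

  # sketch: skip the overwrite-safety probes on create for this command only
  hadoop fs -Dfs.s3a.create.performance=true \
    -put NOTICE.txt s3a://mybucket/test/dest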

OK, this is what's happening:

> Error: Tests run: 32, Failures: 0, Errors: 2, Skipped: 8, Time elapsed:
> 18.272 s <<< FAILURE! - in
> org.apache.hadoop.fs.contract.s3a.ITestS3AContractCreate
>
> Error: testOverwriteNonEmptyDirectory[1](org.apache.hadoop.fs.contract.s3a.ITestS3AContractCreate)
> Time elapsed: 0.247 s <<< ERROR!
> org.apache.hadoop.fs.s3a.AWSBadRequestException: Writing Object on
> test/testOverwriteNonEmptyDirectory:
> software.amazon.awssdk.services.s3.model.S3Exception: An error occurred
> (InvalidRequest) when calling the PutObject/MPU PartUpload operation:
> ozone.om.enable.filesystem.paths is enabled Keys are considered as Unix
> Paths. Path has Violated FS Semantics which caused put operation to fail.
> (Service: S3, Status Code: 400, Request ID:
> b6532d68-ee29-4b74-8b2f-94d12a5ab4f2, Extended Request ID:
> j4wgGna6gqWdHB):InvalidRequest: An error occurred (InvalidRequest) when
> calling the PutObject/MPU PartUpload operation:
> ozone.om.enable.filesystem.paths is enabled Keys are considered as Unix
> Paths. Path has Violated FS Semantics which caused put operation to fail.
> (Service: S3, Status Code: 400, Request ID:
> b6532d68-ee29-4b74-8b2f-94d12a5ab4f2, Extended Request ID: j4wgGna6gqWdHB)
>         at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:260)
>         at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:124)
>         at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$4(Invoker.java:376)
>         at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:468)
>         at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:372)
>         at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:347)
>         at org.apache.hadoop.fs.s3a.WriteOperationHelper.retry(WriteOperationHelper.java:205)
>         at org.apache.hadoop.fs.s3a.WriteOperationHelper.putObject(WriteOperationHelper.java:523)
>         at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.lambda$putObject$0(S3ABlockOutputStream.java:620)
>         at org.apache.hadoop.thirdparty.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:131)
>         at org.apache.hadoop.thirdparty.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:75)
>         at org.apache.hadoop.thirdparty.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:82)
>         at org.apache.hadoop.util.SemaphoredDelegatingExecutor$RunnableWithPermitRelease.run(SemaphoredDelegatingExecutor.java:225)
>         at org.apache.hadoop.util.SemaphoredDelegatingExecutor$RunnableWithPermitRelease.run(SemaphoredDelegatingExecutor.java:225)
>         at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>         at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>         at java.base/java.lang.Thread.run(Thread.java:829)
>
>
> Error: testOverwriteEmptyDirectory[1](org.apache.hadoop.fs.contract.s3a.ITestS3AContractCreate)
> Time elapsed: 0.226 s <<< ERROR!
> org.apache.hadoop.fs.s3a.AWSBadRequestException: Writing Object on
> test/testOverwriteEmptyDirectory:
> software.amazon.awssdk.services.s3.model.S3Exception: An error occurred
> (InvalidRequest) when calling the PutObject/MPU PartUpload operation:
> ozone.om.enable.filesystem.paths is enabled Keys are considered as Unix
> Paths. Path has Violated FS Semantics which caused put operation to fail.
> (Service: S3, Status Code: 400, Request ID:
> 501f3dba-c8b1-4476-9367-29493be35c63, Extended Request ID:
> OStjB6k6vsjvO):InvalidRequest: An error occurred (InvalidRequest) when
> calling the PutObject/MPU PartUpload operation:
> ozone.om.enable.filesystem.paths is enabled Keys are considered as Unix
> Paths. Path has Violated FS Semantics which caused put operation to fail.
> (Service: S3, Status Code: 400, Request ID:
> 501f3dba-c8b1-4476-9367-29493be35c63, Extended Request ID: OStjB6k6vsjvO)
>         at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:260)
>         at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:124)
>         at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$4(Invoker.java:376)
>         at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:468)
>         at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:372)
>         at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:347)
>         at org.apache.hadoop.fs.s3a.WriteOperationHelper.retry(WriteOperationHelper.java:205)
>         at org.apache.hadoop.fs.s3a.WriteOperationHelper.putObject(WriteOperationHelper.java:523)
>         at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.lambda$putObject$0(S3ABlockOutputStream.java:620)
>         at org.apache.hadoop.thirdparty.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:131)
>         at org.apache.hadoop.thirdparty.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:75)
>         at org.apache.hadoop.thirdparty.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:82)
>         at org.apache.hadoop.util.SemaphoredDelegatingExecutor$RunnableWithPermitRelease.run(SemaphoredDelegatingExecutor.java:225)
>         at org.apache.hadoop.util.SemaphoredDelegatingExecutor$RunnableWithPermitRelease.run(SemaphoredDelegatingExecutor.java:225)
>         at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>         at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>         at java.base/java.lang.Thread.run(Thread.java:829)
>
>
> On Tue, Oct 1, 2024 at 12:53 PM Wei-Chiu Chuang <weic...@apache.org>
> wrote:
>
> > Hi, I'm late to the party, but I'd like to build and test this release
> > with Ozone and HBase.
> >
> > On Tue, Oct 1, 2024 at 2:12 AM Mukund Madhav Thakur
> > <mtha...@cloudera.com.invalid> wrote:
> >
> >> Thanks @Dongjoon Hyun <dongjoon.h...@gmail.com> for trying out the RC
> >> and finding this bug. This has to be fixed.
> >> It would be great if others could give the RC a try so that we know of
> >> any issues earlier.
> >>
> >> Thanks
> >> Mukund
> >>
> >> On Tue, Oct 1, 2024 at 2:21 AM Steve Loughran <ste...@cloudera.com.invalid>
> >> wrote:
> >>
> >> > ok, we will have to consider that a -1
> >> >
> >> > Interestingly we haven't seen that on any of our internal QE; maybe none
> >> > of the requests were overlapping.
> >> >
> >> > I was just looking towards a +0 because of
> >> >
> >> > https://issues.apache.org/jira/browse/HADOOP-19295
> >> >
> >> > *Unlike the v1 SDK, PUT/POST of data now shares the same timeout as all
> >> > other requests, and on a slow network connection requests time out.
> >> > Furthermore, large file uploads can generate the same failure
> >> > condition, because the competing block uploads reduce the bandwidth for
> >> > the others.*
> >> >
> >> > I'll describe more on the JIRA - the fix is straightforward: set a much
> >> > longer timeout, such as 15 minutes. It does mean that problems with other
> >> > calls will not time out until that same interval has passed.
> >> >
> >> > Note that in previous releases that request timeout *did not* apply to
> >> > the big upload; that exemption has been reverted.
> >> >
> >> > This is not a regression from 3.4.0; it had the same problem, just nobody
> >> > had noticed. That's what comes from doing a lot of the testing within AWS,
> >> > and other people doing the testing (me) not trying to upload files > 1GB.
> >> > I have now.
> >> >
> >> > Anyway, I do not consider that a -1, because it wasn't a regression and
> >> > it's straightforward to work around in a site configuration.
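> >> >
> >> > For anyone who needs the workaround now: bump the request timeout, either
> >> > in core-site.xml or per command (a sketch; the property is
> >> > fs.s3a.connection.request.timeout -- if your build doesn't accept
> >> > duration suffixes, use a millisecond value; bucket/path are placeholders):
> >> >
> >> >   hadoop fs -Dfs.s3a.connection.request.timeout=15m \
> >> >     -put large-file.bin s3a://mybucket/uploads/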
> >> >
> >> > Other than that, my findings were:
> >> >
> >> > - -Pnative breaks enforcer on macOS (build only; fix is to upgrade the
> >> > enforcer version)
> >> >
> >> > - native code probes on my Ubuntu Raspberry Pi 5 (don't laugh - this is
> >> > the most powerful computer I personally own) warn about a missing link in
> >> > the native checks.
> >> > I haven't yet set up the OpenSSL bindings for s3a and abfs to see if they
> >> > actually work.
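> >> >
> >> > (For reference, the probe here is just the stock native-library check:
> >> >
> >> >   hadoop checknative -a
> >> >
> >> > which is what printed the warning below.)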
> >> >
> >> >   [hadoopq] 2024-09-27 19:52:16,544 WARN crypto.OpensslCipher: Failed to load OpenSSL Cipher.
> >> >   [hadoopq] java.lang.UnsatisfiedLinkError: EVP_CIPHER_CTX_block_size
> >> >   [hadoopq]     at org.apache.hadoop.crypto.OpensslCipher.initIDs(Native Method)
> >> >   [hadoopq]     at org.apache.hadoop.crypto.OpensslCipher.<clinit>(OpensslCipher.java:90)
> >> >   [hadoopq]     at org.apache.hadoop.util.NativeLibraryChecker.main(NativeLibraryChecker.
> >> >
> >> > Your one looks like it is. A pity - but thank you for the testing. Give
> >> > it a couple more days to see if people report any other issues.
> >> >
> >> > Mukund has been doing all the work on this; I'll see how much I can do
> >> > myself to share the joy.
> >> >
> >> > On Sun, 29 Sept 2024 at 06:24, Dongjoon Hyun <dongj...@apache.org>
> >> wrote:
> >> >
> >> > > Unfortunately, it turns out to be a regression in addition to a
> >> > > breaking change.
> >> > >
> >> > > In short, HADOOP-19098 (or more) makes Hadoop 3.4.1 fail even when
> >> > > users give disjoint ranges.
> >> > >
> >> > > I filed a Hadoop JIRA issue and a PR. Please take a look at that.
> >> > >
> >> > > - HADOOP-19291. `CombinedFileRange.merge` should not convert disjoint
> >> > > ranges into overlapped ones
> >> > > - https://github.com/apache/hadoop/pull/7079
> >> > >
> >> > > I believe this is a Hadoop release blocker from both the Apache ORC
> >> > > and Apache Parquet project perspectives.
> >> > >
> >> > > Dongjoon.
> >> > >
> >> > > On 2024/09/29 03:16:18 Dongjoon Hyun wrote:
> >> > > > Thank you for 3.4.1 RC2.
> >> > > >
> >> > > > HADOOP-19098 (Vector IO: consistent specified rejection of overlapping
> >> > > > ranges) seems to be a hard breaking change at 3.4.1.
> >> > > >
> >> > > > Do you think we can have an option to handle the overlapping ranges
> >> > > > in the Hadoop layer, instead of introducing a breaking change to the
> >> > > > users in a maintenance release?
> >> > > >
> >> > > > Dongjoon.
> >> > > >
> >> > > > On 2024/09/25 20:13:48 Mukund Madhav Thakur wrote:
> >> > > > > Apache Hadoop 3.4.1
> >> > > > >
> >> > > > >
> >> > > > > With help from Steve I have put together a release candidate (RC2)
> >> > > > > for Hadoop 3.4.1.
> >> > > > >
> >> > > > >
> >> > > > > What we would like is for anyone who can to verify the tarballs,
> >> > > > > especially anyone who can try the arm64 binaries, as we want to
> >> > > > > include them too.
> >> > > > >
> >> > > > >
> >> > > > > The RC is available at:
> >> > > > >
> >> > > > > https://dist.apache.org/repos/dist/dev/hadoop/hadoop-3.4.1-RC2/
> >> > > > >
> >> > > > >
> >> > > > > The git tag is release-3.4.1-RC2, commit
> >> > > > > b3a4b582eeb729a0f48eca77121dd5e2983b2004
> >> > > > >
> >> > > > >
> >> > > > > The maven artifacts are staged at
> >> > > > >
> >> > > > >
> >> > > > > https://repository.apache.org/content/repositories/orgapachehadoop-1426
> >> > > > >
> >> > > > >
> >> > > > > You can find my public key at:
> >> > > > >
> >> > > > > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> >> > > > >
> >> > > > >
> >> > > > > Change log
> >> > > > >
> >> > > > >
> >> > > > > https://dist.apache.org/repos/dist/dev/hadoop/hadoop-3.4.1-RC2/CHANGELOG.md
> >> > > > >
> >> > > > >
> >> > > > > Release notes
> >> > > > >
> >> > > > >
> >> > > > > https://dist.apache.org/repos/dist/dev/hadoop/hadoop-3.4.1-RC2/RELEASENOTES.md
> >> > > > >
> >> > > > >
> >> > > > > This is off branch-3.4.
> >> > > > >
> >> > > > >
> >> > > > > Key changes include
> >> > > > >
> >> > > > >
> >> > > > > * Bulk Delete API. https://issues.apache.org/jira/browse/HADOOP-18679
> >> > > > >
> >> > > > > * Fixes and enhancements in Vectored IO API.
> >> > > > >
> >> > > > > * Improvements in Hadoop Azure connector.
> >> > > > >
> >> > > > > * Fixes and improvements post upgrade to AWS V2 SDK in S3AConnector.
> >> > > > >
> >> > > > > * This release includes Arm64 binaries. Please can anyone with
> >> > > > >
> >> > > > >   compatible systems validate these.
> >> > > > >
> >> > > > >
> >> > > > > Note, because the arm64 binaries are built separately on a different
> >> > > > > platform and JVM, their jar files may not match those of the x86
> >> > > > > release - and therefore the maven artifacts. I don't think this is
> >> > > > > an issue (the ASF actually releases source tarballs; the binaries are
> >> > > > > there for help only, though with the maven repo that's a bit blurred).
> >> > > > >
> >> > > > > The only way to be consistent would be to actually untar the
> >> > > > > x86.tar.gz, overwrite its binaries with the arm stuff, retar, sign
> >> > > > > and push out for the vote. Even automating that would be risky.
> >> > > > >
> >> > > > >
> >> > > > > Please try the release and vote. The vote will run for 5 days.
> >> > > > >
> >> > > > >
> >> > > > >
> >> > > > > Thanks,
> >> > > > >
> >> > > > > Mukund
> >> > > > >
> >> > > >
> >> > > >
> >> >
> >>
> >
>
