Hi,
I am not sure about this, but is there any requirement to use S3A at all?
Regards,
Gourav
On Tue, Jul 21, 2020 at 12:07 PM Steve Loughran wrote:
On Tue, 7 Jul 2020 at 03:42, Stephen Coy wrote:
Hi Steve,
While I understand your point regarding the mixing of Hadoop jars, this does
not address the java.lang.ClassNotFoundException.
Prebuilt Apache Spark 3.0 builds are only available for Hadoop 2.7 or Hadoop
3.2. Not Hadoop 3.1.
The only place that I have found that missing class is in t
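One way to check whether a given Spark distribution ships that class at all is to scan its jars directory. This is only a sketch; it assumes SPARK_HOME points at the unpacked prebuilt distribution and that the class lives under the org.apache.spark.internal.io.cloud package named in the stack trace:

  # look for a hadoop-cloud module and the hadoop-aws jar, if any were bundled
  ls "$SPARK_HOME/jars" | grep -Ei 'hadoop-(cloud|aws)'

  # scan every bundled jar for the committer binding classes
  for j in "$SPARK_HOME"/jars/*.jar; do
    if unzip -l "$j" | grep -q 'org/apache/spark/internal/io/cloud/'; then
      echo "found in: $j"
    fi
  done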
You are going to need hadoop-3.1 on your classpath, with hadoop-aws and the
same aws-sdk it was built with (1.11.something). Mixing Hadoop JARs is
doomed. Using a different AWS SDK jar is a bit risky, though more recent
upgrades have all been fairly low stress.
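To make that version-matching point concrete, a sketch of pulling a consistent pair of artifacts at submit time is below. The coordinates are assumptions for illustration: the hadoop-aws version must match the hadoop-common already on the classpath, the aws-java-sdk-bundle version must be the one that hadoop-aws release declares as its dependency, and the job class and jar are placeholders.

  spark-submit \
    --packages org.apache.hadoop:hadoop-aws:3.1.3,com.amazonaws:aws-java-sdk-bundle:1.11.271 \
    --class com.example.MyJob \
    my-job.jar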
On Fri, 19 Jun 2020 at 05:39, murat migdisoglu wrote:
Hi Murat Migdisoglu,
Unfortunately you need the secret sauce to resolve this.
It is necessary to check out the Apache Spark source code and build it with the
right command line options. This is what I have been using:
dev/make-distribution.sh --name my-spark --tgz -Pyarn -Phadoop-3.2 -Pyarn
-
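For what it's worth, a fuller invocation might look like the sketch below. The -Phadoop-cloud profile is the assumption here: as far as I can tell it is what builds the spark-hadoop-cloud module carrying the org.apache.spark.internal.io.cloud classes, and the other profiles are just the ones already shown above.

  dev/make-distribution.sh --name my-spark --tgz \
    -Pyarn -Phadoop-3.2 -Phadoop-cloud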
Hi all,
I've upgraded my test cluster to Spark 3 and changed my committer to
"directory", and I still get this error. The documentation is somewhat
obscure on this.
Do I need to add a third-party jar to support the new committers?
java.lang.ClassNotFoundException:
org.apache.spark.internal.io.cloud.PathOut
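For context, the settings that select the S3A directory committer, and that fail with exactly this kind of ClassNotFoundException when the cloud-committer classes are not on the classpath, look roughly like the sketch below. The property names come from the S3A committer documentation; the full class names are an assumption based on the truncated stack trace above, and the job class and jar are placeholders.

  spark-submit \
    --conf spark.hadoop.fs.s3a.committer.name=directory \
    --conf spark.sql.sources.commitProtocolClass=org.apache.spark.internal.io.cloud.PathOutputCommitProtocol \
    --conf spark.sql.parquet.output.committer.class=org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter \
    --class com.example.MyJob \
    my-job.jar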