Thanks Andrew for reporting this. I just submitted the fix.
https://github.com/apache/spark/pull/24304
On Fri, Apr 5, 2019 at 3:21 PM Andrew Melo wrote:
> Hello,
>
> I'm not sure if this is the proper place to report it, but the 2.4.1
> version of the config docs apparently didn't render right
On Fri, Apr 5, 2019 at 9:41 AM Jungtaek Lim wrote:
>
> Thanks Andrew for reporting this. I just submitted the fix.
> https://github.com/apache/spark/pull/24304
Thanks!
>
> On Fri, Apr 5, 2019 at 3:21 PM Andrew Melo wrote:
>>
>> Hello,
>>
>> I'm not sure if this is the proper place to report it
Hi there! I'm trying to run Spark unit tests with the following profiles:
And 'core' module fails with the following test failing with
NoClassDefFoundError:
In the meantime building a distribution works fine when running:
Also, there are no problems with running tests using Hadoop 2.7 profile
Really sorry for the formatting. Here's the original message:
Hi there! I'm trying to run Spark unit tests with the following profiles:
./build/mvn test -Pmesos "-Phadoop-3.1" -Pnetlib-lgpl -Psparkr -Phive -Phive-thriftserver
And 'core' module fails with the following test failing with
NoClassDefFoundError:
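Since the thread notes the Hadoop 2.7 profile passes, a sketch of the equivalent invocation restricted to the failing module may help for comparison. This is an assumption-laden example, not from the thread itself: `-pl` is standard Maven project selection, and the profile names are taken from the Spark 2.x build of that era.

```shell
# Hedged sketch: same test run, limited to the 'core' module, using the
# hadoop-2.7 profile that the thread reports as working. Profile names
# are assumed from the Spark 2.x build; adjust to your checkout.
./build/mvn -pl core test -Phadoop-2.7 -Pmesos -Pnetlib-lgpl -Psparkr -Phive -Phive-thriftserver
```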
The hadoop-3 profile doesn't really work yet, not even on master.
That's being worked on still.
On Fri, Apr 5, 2019 at 10:53 AM akirillov wrote:
>
> Hi there! I'm trying to run Spark unit tests with the following profiles:
>
> And 'core' module fails with the following test failing with
> NoClassDefFoundError:
Marcelo, Sean, thanks for the clarification. So in order to support Hadoop
3+ the preferred way would be to use Hadoop-free builds and provide Hadoop
dependencies in the classpath, is that correct?
On Fri, Apr 5, 2019 at 10:57 AM Marcelo Vanzin wrote:
> The hadoop-3 profile doesn't really work yet, not even on master.
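The Hadoop-free-build approach asked about above is the one described in Spark's "Using Spark's 'Hadoop Free' Build" documentation. A minimal sketch, assuming a Spark 2.x checkout and a `hadoop` command on the PATH (exact profiles depend on the checkout):

```shell
# Build a distribution that omits Hadoop jars; -Phadoop-provided is the
# documented profile for this. Sketch only, not from the thread.
./dev/make-distribution.sh --name hadoop-provided -Phadoop-provided -Phive -Phive-thriftserver

# Then, in conf/spark-env.sh of the resulting distribution, expose the
# external Hadoop installation's classpath (the documented mechanism):
export SPARK_DIST_CLASSPATH=$(hadoop classpath)
```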
Yes, you can try it, though I doubt that will 100% work. Have a look
at the "hadoop 3" JIRAs and PRs still in progress on master.
On Fri, Apr 5, 2019 at 1:14 PM Anton Kirillov
wrote:
>
> Marcelo, Sean, thanks for the clarification. So in order to support Hadoop 3+
> the preferred way would be to use Hadoop-free builds and provide Hadoop
> dependencies in the classpath, is that correct?
You can always try. But Hadoop 3 is not yet supported by Spark.
On Fri, Apr 5, 2019 at 11:13 AM Anton Kirillov
wrote:
>
> Marcelo, Sean, thanks for the clarification. So in order to support Hadoop 3+
> the preferred way would be to use Hadoop-free builds and provide Hadoop
> dependencies in the classpath, is that correct?
Hadoop 3 isn't supported yet, not quite even in master. I think the
profile there exists for testing at the moment.
Others may know a way to make it work, but I don't think it would out of the box.
On Fri, Apr 5, 2019 at 12:53 PM akirillov wrote:
>
> Hi there! I'm trying to run Spark unit tests with the following profiles:
I just filed SPARK-27396 as the SPIP for this proposal. Please use that
JIRA for further discussions.
Thanks for all of the feedback,
Bobby
On Wed, Apr 3, 2019 at 7:15 PM Bobby Evans wrote:
> I am still working on the SPIP and should get it up in the next few days.
> I have the basic text mor