nvm, figured it out. I compiled my client jar against 2.0.2 while the Spark
deployed on my machines was 2.0.1. Communication problems between the dev
team and the ops team :)
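For anyone hitting the same thing, a minimal build.sbt sketch (versions taken from this thread; adjust to whatever the cluster actually runs) that pins the compile-time Spark version and marks it "provided", so the cluster's own jars are used at runtime instead of a bundled copy:

```scala
// build.sbt (sketch, not from the thread): compile against the exact Spark
// version deployed on the cluster, and use "provided" scope so the assembled
// jar does not ship its own, possibly different, Spark classes.
scalaVersion := "2.11.8"

libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.1" % "provided"
```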
On Fri, Jan 20, 2017 at 3:03 PM, kant kodali wrote:
Is this because of a versioning issue? Can't wait for the JDK 9 module
system. I am not sure if Spark plans to leverage it?
On Fri, Jan 20, 2017 at 1:30 PM, kant kodali wrote:
> I get the following exception. I am using Spark 2.0.1 and Scala 2.11.8.
>
> org.apache.spark.SparkException: Job aborted du
This is probably a versioning issue; are you sure your code is compiling
and running against the same versions?
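One generic way to confirm this kind of mismatch (a JVM-level sketch, not Spark-specific; `java.lang.String` stands in for whatever class the stack trace actually names) is to compare the serialVersionUID of the suspect class as seen by the driver's classpath and by the cluster's jars:

```java
import java.io.ObjectStreamClass;

// Sketch: print the serialVersionUID of a class as loaded by this JVM.
// Running the same lookup against both classpaths shows whether the two
// sides agree; a mismatch produces java.io.InvalidClassException.
public class UidCheck {
    static long uidOf(Class<?> cls) {
        return ObjectStreamClass.lookup(cls).getSerialVersionUID();
    }

    public static void main(String[] args) {
        System.out.println(String.class.getName() + " -> " + uidOf(String.class));
    }
}
```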
On Oct 14, 2015 2:19 PM, "Shreeharsha G Neelakantachar"
<shreeharsh...@in.ibm.com> wrote:
> Hi,
> I have Terasort being executed on spark1.4.1 with hadoop 2.7 for a
> datasize of
From: Yana Kadiyska [mailto:yana.kadiy...@gmail.com]
Sent: Monday, July 13, 2015 2:16 PM
To: Ellafi, Saif A.
Cc: user@spark.apache.org
Subject: Re: java.io.InvalidClassException
> t: Row): Validator = {
>   var check1: Boolean = if (input.getDouble(shortsale_in_pos) > 140.0) true else false
>   if (check1) this else Nomatch
> }
> }
>
> Saif
It's a bit hard to tell from the snippets of code, but it's likely related
to the fact that when you serialize instances, the enclosing class, if any,
also gets serialized, along with any other place the fields used in the
closure come from... e.g. check this discussion:
http://stackoverflow.com/ques
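The failure mode described above can be reproduced outside Spark entirely. A minimal sketch (class names are illustrative, not from the thread): a non-static inner class keeps a hidden reference to its enclosing instance, so serializing it drags the outer object along and fails when that outer object is not Serializable.

```java
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class EnclosingClassDemo {
    // Deliberately NOT Serializable.
    static class Outer {
        double threshold = 140.0;

        // Non-static inner class: every instance captures the enclosing Outer.
        class Check implements Serializable {
            boolean passes(double x) { return x > threshold; } // uses Outer's field
        }
    }

    // Returns null on success, or the exception thrown during serialization.
    static Throwable trySerialize(Object obj) {
        try (ObjectOutputStream oos = new ObjectOutputStream(new ByteArrayOutputStream())) {
            oos.writeObject(obj);
            return null;
        } catch (Throwable t) {
            return t;
        }
    }

    public static void main(String[] args) {
        Outer.Check check = new Outer().new Check();
        // Fails with NotSerializableException because Outer rides along.
        System.out.println(trySerialize(check));
    }
}
```

The usual fixes are the same ones that apply in Spark closures: make the inner logic a static nested class (or top-level class) so nothing from the enclosing instance is captured, or copy the needed field into a local variable first.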
SerializableMapWrapper was added in
https://issues.apache.org/jira/browse/SPARK-3926; do you mind opening a new
JIRA and linking it to that one?
On Mon, Dec 1, 2014 at 12:17 AM, lokeshkumar wrote:
The workaround was to wrap the map returned by the Spark libraries in a
HashMap and then broadcast that.
Could anyone please let me know if there is an open issue for this?
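The workaround described above can be sketched generically (plain JVM code; the unmodifiable wrapper here is only a stand-in for whatever wrapper type the library hands back): copy the returned map into a plain java.util.HashMap before broadcasting, so the wrapper class itself never has to be serialized.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class PlainMapDemo {
    // Copy any Map into a plain HashMap, shedding whatever wrapper class
    // the original came in.
    static <K, V> HashMap<K, V> toPlainHashMap(Map<K, V> m) {
        return new HashMap<>(m);
    }

    public static void main(String[] args) {
        Map<String, Integer> inner = new HashMap<>();
        inner.put("a", 1);
        // Stand-in for a wrapper type returned by a library call.
        Map<String, Integer> wrapped = Collections.unmodifiableMap(inner);

        HashMap<String, Integer> plain = toPlainHashMap(wrapped);
        System.out.println(plain.getClass().getName()); // java.util.HashMap
    }
}
```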