On Thu, Jun 26, 2014 at 1:44 PM, Robert James wrote:
> I downloaded the Spark distro for Hadoop 2, and installed it on my
> machine. But the code doesn't have a reference to that path - it uses
> sbt for dependencies. As far as I can tell, using sbt or maven or ivy
> will always result in a transitive dependency on Hadoop 1.
On 6/26/14, Sean Owen wrote:
> Yes it does. The idea is to override the dependency if needed. I thought
> you mentioned that you had built for Hadoop 2.
I'm very confused :-(
I downloaded the Spark distro for Hadoop 2, and installed it on my
machine. But the code doesn't have a reference to that path - it uses
sbt for dependencies. As far as I can tell, using sbt or maven or ivy
will always result in a transitive dependency on Hadoop 1.
Yes it does. The idea is to override the dependency if needed. I thought
you mentioned that you had built for Hadoop 2.
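For example, in build.sbt the Hadoop 2 client can be declared directly so
that it wins over the transitive hadoop-client 1.0.4 (a sketch; the 2.2.0
version below is illustrative and should match your cluster):

    libraryDependencies ++= Seq(
      "org.apache.spark" %% "spark-core" % "1.0.0",
      // a direct dependency overrides the transitive hadoop-client 1.0.4;
      // 2.2.0 is a placeholder for whatever Hadoop 2 version you run
      "org.apache.hadoop" % "hadoop-client" % "2.2.0"
    )

sbt's Ivy resolution picks the direct, newer revision, which keeps the
Hadoop 1 classes off the classpath.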
On Jun 26, 2014 11:07 AM, "Robert James" wrote:
> Yes. As far as I can tell, Spark seems to be including Hadoop 1 via
> its transitive dependency:
> http://mvnrepository.com/artifact/org.apache.spark/spark-core_2.10/1.0.0
> - shows a dependency on Hadoop 1.0.4, which I'm perplexed by.
Yes. As far as I can tell, Spark seems to be including Hadoop 1 via
its transitive dependency:
http://mvnrepository.com/artifact/org.apache.spark/spark-core_2.10/1.0.0
- shows a dependency on Hadoop 1.0.4, which I'm perplexed by.
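The resolved graph can also be checked from the build itself rather than
the published POM. A sketch using the sbt-dependency-graph plugin; its
coordinates and version below are quoted from memory, so verify them:

    // project/plugins.sbt
    addSbtPlugin("net.virtual-void" % "sbt-dependency-graph" % "0.7.4")

    // then, from the sbt shell:
    //   dependencyTree   (dependency-tree on older sbt versions)
    // prints the tree and shows which hadoop-client revision actually won

Maven users can get the same view with mvn dependency:tree.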
On 6/26/14, Sean Owen wrote:
> You seem to have the binary for Hadoop 2, since it was compiled
> expecting that TaskAttemptContext is an interface. So the error
> indicates that Spark is also seeing Hadoop 1 classes somewhere.
You seem to have the binary for Hadoop 2, since it was compiled
expecting that TaskAttemptContext is an interface. So the error
indicates that Spark is also seeing Hadoop 1 classes somewhere.
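One way to confirm is to ask the JVM directly which Hadoop it is loading.
A sketch, meant to run on the same classpath as the failing executor:

    // which jar did TaskAttemptContext come from, and which flavor is it?
    val ctx = classOf[org.apache.hadoop.mapreduce.TaskAttemptContext]
    println(ctx.getProtectionDomain.getCodeSource.getLocation) // jar location
    println(ctx.isInterface) // true on Hadoop 2; false on Hadoop 1, a class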
On Wed, Jun 25, 2014 at 4:41 PM, Robert James wrote:
> After upgrading to Spark 1.0.0, I get this error:
After upgrading to Spark 1.0.0, I get this error:
ERROR org.apache.spark.executor.ExecutorUncaughtExceptionHandler -
Uncaught exception in thread Thread[Executor task launch
worker-2,5,main]
java.lang.IncompatibleClassChangeError: Found interface
org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
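Background on the error: in Hadoop 1.x, TaskAttemptContext is a class,
while in Hadoop 2.x it became an interface, and bytecode compiled against
one flavor cannot link against the other. The split is visible with javap
(the jar names below assume the stock Maven artifacts):

    javap -classpath hadoop-core-1.0.4.jar org.apache.hadoop.mapreduce.TaskAttemptContext
      => public class TaskAttemptContext ...
    javap -classpath hadoop-mapreduce-client-core-2.2.0.jar org.apache.hadoop.mapreduce.TaskAttemptContext
      => public interface TaskAttemptContext ...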