Since upgrading to Spark 1.4, I'm getting a
scala.reflect.internal.MissingRequirementError when creating a DataFrame
from an RDD. The error references a case class in the application (the
RDD's type parameter), which has been verified to be present.
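For context, here is a minimal sketch of the pattern that triggers it
(the case class, values, and app name below are illustrative, not from
the actual application):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    // Illustrative stand-in for the application's case class
    case class Record(id: Long, name: String)

    object Repro {
      def main(args: Array[String]): Unit = {
        // Master/deploy settings come from spark-submit on EMR
        val sc = new SparkContext(new SparkConf().setAppName("repro"))
        val sqlContext = new SQLContext(sc)
        import sqlContext.implicits._

        val rdd = sc.parallelize(Seq(Record(1L, "a"), Record(2L, "b")))
        // Schema inference for Record uses Scala reflection; this is
        // where MissingRequirementError surfaces when the class isn't
        // visible to the reflecting classloader
        val df = rdd.toDF()
        df.show()
      }
    }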
Items of note:
1) This is running on AWS EMR (YARN). […]

[…] changed in Spark 1.3.0 (released recently). The relevant
configuration flag changed names to 'spark.executor.userClassPathFirst',
and it does work the way I expected (unlike in v1.2.0).
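For anyone who wants to try it, a minimal sketch of enabling the renamed
flag from application code (the app name is a placeholder; the same
settings can also be passed to spark-submit via --conf):

    // 'spark.executor.userClassPathFirst' replaces the pre-1.3 name
    // 'spark.files.userClassPathFirst'
    val conf = new SparkConf()
      .setAppName("my-app") // placeholder
      .set("spark.executor.userClassPathFirst", "true")
      // Driver-side counterpart, if the driver needs the same ordering
      .set("spark.driver.userClassPathFirst", "true")
    val sc = new SparkContext(conf)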
On Thu, Mar 12, 2015 at 2:50 PM, Adam Lewandowski <
adam.lewandow...@gmail.com> wrote:
> I […] with identical results.
> Has anyone else seen this issue, had any success with the
> "spark.files.userClassPathFirst" flag, or been able to use the AWS SDK?
> I was going to submit this as a Spark JIRA issue, but thought I would
> check here first.
>
> Thanks,
> Adam Lewandowski