[
https://issues.apache.org/jira/browse/SPARK-17579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sean Owen resolved SPARK-17579.
-------------------------------
Resolution: Not A Problem
This shows a different error, which is just a config problem: no master URL is
set when the SparkSession is created. I'm not sure why it would manifest in
cluster mode but not client mode without knowing how you're submitting the
application.
{code}
Caused by: org.apache.spark.SparkException: A master URL must be set in your configuration
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:371)
	at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2256)
	at org.apache.spark.sql.SparkSession$Builder$$anonfun$8.apply(SparkSession.scala:831)
	at org.apache.spark.sql.SparkSession$Builder$$anonfun$8.apply(SparkSession.scala:823)
	at scala.Option.getOrElse(Option.scala:121)
	at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:823)
	at bench.mllib.wjf.Env$.<init>(Test.scala:5)
	at bench.mllib.wjf.Env$.<clinit>(Test.scala)
{code}
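For what it's worth, the {{NoClassDefFoundError: Could not initialize class Main$}} in the reported trace is the JVM's second report of a failed static initializer: resolving the implicit {{Encoder[String]}} for {{Main extends A[String]}} runs inside {{Main$}}'s class initializer, which forces {{Env}}'s SparkSession creation on the executor JVM, where it fails as above. A Spark-free sketch of that mechanism (here {{Enc}}, {{Env.session}}, and {{Implicits}} are hypothetical stand-ins, not Spark API):

```scala
// Spark-free sketch of the failure mechanism. `Enc`, `Env.session`,
// and `Implicits` are hypothetical stand-ins, not Spark API.
trait Enc[T]                 // stand-in for org.apache.spark.sql.Encoder
abstract class A[T: Enc]     // same shape as the reporter's class A

object Env {
  // Stand-in for SparkSession.builder.getOrCreate() on a JVM with no
  // master URL configured (e.g. an executor): it throws on first use.
  val session: String =
    throw new IllegalStateException("A master URL must be set in your configuration")
}

object Implicits {
  // Touching Env here mimics `import Env.spark.implicits._`.
  implicit val stringEnc: Enc[String] = { Env.session; new Enc[String] {} }
}
import Implicits._

// Resolving the implicit Enc[String] runs inside Main$'s static
// initializer, so Env's failure aborts class initialization.
object Main extends A[String]

object Demo {
  def main(args: Array[String]): Unit = {
    // First access: the initializer failure surfaces as an Error.
    try { Main.hashCode(); println("ok") }
    catch { case e: ExceptionInInitializerError => println(e.getClass.getSimpleName) }
    // Second access: the JVM now reports the class as unusable, which
    // is the NoClassDefFoundError the executor log shows.
    try { Main.hashCode(); println("ok") }
    catch { case e: NoClassDefFoundError => println(e.getClass.getSimpleName) }
  }
}
```

So the two fixes line up with what was observed: either make sure a master URL actually reaches the executor-side config (normally by launching via {{spark-submit --master ...}} rather than constructing the session with no master), or keep the Encoder resolution out of {{Main}}'s initializer, which is why dropping the {{extends}} made it work.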
> Exception When the Main object extends Encoder in cluster mode but ok in
> local mode
> -----------------------------------------------------------------------------------
>
> Key: SPARK-17579
> URL: https://issues.apache.org/jira/browse/SPARK-17579
> Project: Spark
> Issue Type: Bug
> Components: Spark Core, SQL
> Affects Versions: 2.0.0
> Reporter: Jianfei Wang
>
> Here is the code below: I get an exception in cluster mode, but it works in
> local mode.
> Also, if I remove the extends from the Main object, it works in cluster mode
> too. Why is this?
> {code}
> import org.apache.spark.sql._
>
> object Env {
>   val spark = SparkSession.builder.getOrCreate()
> }
>
> import Env.spark.implicits._
>
> abstract class A[T : Encoder] {}
>
> object Main extends A[String] {
>   def func(str: String): String = str
>   def main(args: Array[String]): Unit = {
>     Env.spark.createDataset(Seq("a", "b", "c")).map(func).show()
>   }
> }
> {code}
> I got the exception below:
> {code}
> Caused by: java.lang.NoClassDefFoundError: Could not initialize class Main$
> 	at Main$$anonfun$main$1.apply(test.scala:14)
> 	at Main$$anonfun$main$1.apply(test.scala:14)
> 	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
> 	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
> 	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
> 	at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:231)
> 	at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:225)
> 	at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:803)
> 	at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:803)
> 	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
> 	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:86)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:277)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 	at java.lang.Thread.run(Thread.java:745)
> {code}
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]