Serialization issue when using Spark 3.1.2 with Hadoop YARN

2021-10-03 Thread davvy benny
My Spark job fails with this error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3) (davben-lubuntu executor 2): java.lang.ClassCastException: cannot assign instance of java.lang.invok
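
A ClassCastException of this shape on a YARN executor (the message usually continues with java.lang.invoke.SerializedLambda) often appears when the job is launched programmatically and the executors never receive the application's own jar, so lambdas serialized on the driver cannot be resolved on the executor side. One commonly suggested remedy, sketched below under that assumption, is to ship the packaged jar explicitly via SparkConf.setJars; the jar path and class name are illustrative, not taken from the thread.

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    // Sketch only: ship the application jar to the YARN executors so that
    // classes and lambdas serialized on the driver can be resolved there.
    // The jar path is a placeholder assumption.
    public class ShipJarExample {
        public static void main(String[] args) {
            SparkConf sparkConf = new SparkConf()
                    .setAppName("simpleTest2")
                    .setMaster("yarn")
                    .setJars(new String[] {"/path/to/simpleTest2.jar"});

            try (JavaSparkContext sc = new JavaSparkContext(sparkConf)) {
                // The lambda below is serialized to the executors; without the jar
                // on their classpath, deserialization can fail with a ClassCastException.
                long evens = sc.parallelize(java.util.Arrays.asList(1, 2, 3, 4))
                               .filter(x -> x % 2 == 0)
                               .count();
                System.out.println("evens = " + evens);
            }
        }
    }

The same effect can be achieved with the spark.jars configuration property, or by submitting through spark-submit, which ships the primary application jar automatically.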

Re: 21/09/27 23:34:03 WARN YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

2021-09-28 Thread davvy benny
.set("spark.executor.memory", "1g") > twice. Perhaps you need to driver instance. ? > > An example would bem but you can translate them to SparkConf > > --conf spark.executor.cores=1 \ > > --conf spark.executor.memory=1g \ > > > --conf

Re: 21/09/27 23:34:03 WARN YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

2021-09-28 Thread davvy benny
How can I check it? On 2021/09/28 03:29:45, Stelios Philippou wrote: > It might be possible that you do not have the resources on the cluster, so > your job will remain waiting for them as they cannot be provided. > > On Tue, 28 Sep 2021, 04:26 davvy benny wrote: > > >
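
A few ways to check, assuming a default single-node YARN setup (hostname and the 8088 port may differ): the ResourceManager web UI shows each NodeManager's registered memory and vcores versus what is already allocated, and the YARN CLI gives the same information from a shell.

    # ResourceManager web UI (per-node capacity vs. allocation):
    #   http://localhost:8088/cluster/nodes

    # List registered NodeManagers and their resources:
    yarn node -list -all

    # List applications; ACCEPTED usually means still waiting for containers:
    yarn application -list -appStates ACCEPTED,RUNNING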

Re: 21/09/27 23:34:03 WARN YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

2021-09-27 Thread davvy benny
How can I solve the problem? On 2021/09/27 23:05:41, Thejdeep G wrote: > Hi, > > That would usually mean that the application has not been allocated the > executor resources from the resource manager yet. > > On 2021/09/27 21:37:30, davvy benny wrote: > > Hi >

21/09/27 23:34:03 WARN YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

2021-09-27 Thread davvy benny
Hi, I am trying to run Spark programmatically from Eclipse, against a local Hadoop cluster, with these configurations: SparkConf sparkConf = new SparkConf().setAppName("simpleTest2").setMaster("yarn") .set("spark.executor.memory", "1g")
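
For context, a minimal sketch of what the complete programmatic yarn-client launch might look like, assuming HADOOP_CONF_DIR/YARN_CONF_DIR point the driver JVM at the cluster's *-site.xml files; the settings beyond those quoted above are illustrative assumptions, not the poster's actual code.

    import java.util.Arrays;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    // Sketch of a programmatic yarn-client launch matching the quoted settings.
    public class SimpleTest2 {
        public static void main(String[] args) {
            SparkConf sparkConf = new SparkConf()
                    .setAppName("simpleTest2")
                    .setMaster("yarn")
                    .set("spark.submit.deployMode", "client")  // driver runs in this JVM
                    .set("spark.executor.memory", "1g")
                    .set("spark.executor.cores", "1")
                    .set("spark.executor.instances", "1");     // ask YARN for one executor

            try (JavaSparkContext sc = new JavaSparkContext(sparkConf)) {
                long count = sc.parallelize(Arrays.asList(1, 2, 3, 4)).count();
                System.out.println("count = " + count);
            }
        }
    }

If YARN cannot satisfy even this small request, the "Initial job has not accepted any resources" warning in the thread subject will keep repeating, which points back to checking the node capacity as discussed above.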