My Spark job fails with this error:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in
stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID
3) (davben-lubuntu executor 2): java.lang.ClassCastException: cannot assign
instance of java.lang.invok
.set("spark.executor.memory", "1g")
> twice. Perhaps you need to driver instance. ?
>
> An example would bem but you can translate them to SparkConf
>
> --conf spark.executor.cores=1 \
>
> --conf spark.executor.memory=1g \
>
>
> --conf
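For reference, those flags map to programmatic SparkConf calls roughly as
below. This is only a sketch: the values are the examples from the thread,
and the spark.driver.memory key is an assumption here, following the
driver-instance suggestion above, not something from the original post.

    import org.apache.spark.SparkConf;

    // spark-submit flags translated to equivalent SparkConf settings
    SparkConf conf = new SparkConf()
            .set("spark.executor.cores", "1")
            .set("spark.executor.memory", "1g")
            .set("spark.driver.memory", "1g"); // assumed driver-side sizing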
How can I check it?
On 2021/09/28 03:29:45, Stelios Philippou wrote:
> It might be possible that you do not have the resources on the cluster, so
> your job will keep waiting for them, as they cannot be provided.
>
> On Tue, 28 Sep 2021, 04:26, davvy benny wrote:
>
> >
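One quick way to check, assuming a standard YARN setup: ask the
ResourceManager what the nodes can offer and what state the application is
in. These are stock Hadoop/YARN commands, not taken from the original
thread.

    # Per-node capacity and usage as seen by the ResourceManager
    yarn node -list -all
    # Applications stuck in the ACCEPTED state have not been granted
    # resources yet
    yarn application -list
    # The ResourceManager web UI (default http://<rm-host>:8088) shows
    # the same information
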
How can I solve the problem?
On 2021/09/27 23:05:41, Thejdeep G wrote:
> Hi,
>
> That would usually mean that the application has not been allocated the
> executor resources from the resource manager yet.
>
> On 2021/09/27 21:37:30, davvy benny wrote:
> > Hi
>
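You can also see this from inside the application: Spark's status tracker
lists the executors that have registered so far. A minimal sketch using the
public SparkContext.statusTracker() API, assuming an existing
JavaSparkContext named sc; if the list only ever shows the driver entry, no
executors have been allocated.

    import org.apache.spark.SparkExecutorInfo;

    // Print every executor known to the driver (the driver itself
    // appears in this list as well)
    for (SparkExecutorInfo e : sc.sc().statusTracker().getExecutorInfos()) {
        System.out.println(e.host() + ":" + e.port()
                + " runningTasks=" + e.numRunningTasks());
    }
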
Hi
I am trying to run Spark programmatically from Eclipse with these
configurations for a local Hadoop cluster:
SparkConf sparkConf = new SparkConf().setAppName("simpleTest2").setMaster("yarn")
        .set("spark.executor.memory", "1g")
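For completeness, a minimal sketch of a programmatic yarn-client setup like
the one described here. The class name, jar path, and the setJars call are
illustrative assumptions, not from the original post: shipping the
application jar explicitly is commonly needed when launching from an IDE
instead of spark-submit, and HADOOP_CONF_DIR/YARN_CONF_DIR must point at the
cluster configuration for setMaster("yarn") to resolve.

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class SimpleTest2 {
        public static void main(String[] args) {
            SparkConf sparkConf = new SparkConf()
                    .setAppName("simpleTest2")
                    .setMaster("yarn")
                    .set("spark.submit.deployMode", "client")
                    .set("spark.executor.memory", "1g")
                    // Hypothetical path: without the application jar on the
                    // executors, lambda deserialization commonly fails with
                    // a ClassCastException on java.lang.invoke types
                    .setJars(new String[] { "target/simpleTest2.jar" });

            JavaSparkContext sc = new JavaSparkContext(sparkConf);
            // ... job code ...
            sc.stop();
        }
    }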