It might be that you do not have enough resources on the cluster, so your
job will keep waiting for them since they cannot be provided.

On Tue, 28 Sep 2021, 04:26 davvy benny wrote:
> How can I solve the problem?
>
> On 2021/09/27 23:05:41, Thejdeep G wrote:
> > Hi,
> >
> > That would usually mean that the application has not been allocated
> > the executor resources from the resource manager yet.
> >
> > On 2021/09/27 21:37:30, davvy benny wrote:
> > > Hi
> > > I am trying to run Spark programmatically from Eclipse with these
> > > configurations for a local Hadoop cluster:
> > >
> > > SparkConf sparkConf = new
> > > SparkConf().setAppName("simpleTest2").setMaster("yarn")
> > >     .set("spark.executor.memory", "1g")
This isn't specific to Spark; just use any standard Java approach, for
example:
https://dzone.com/articles/how-to-capture-java-heap-dumps-7-options
You need the JDK installed to use jmap.
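
One standard in-process approach uses the JDK's HotSpotDiagnosticMXBean; a
minimal sketch, with an illustrative output path:

import java.lang.management.ManagementFactory;

import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDump {
    public static void main(String[] args) throws Exception {
        // Equivalent to `jmap -dump:live,format=b,file=/tmp/heap.hprof <pid>`
        // run against this JVM, but triggered from inside it.
        HotSpotDiagnosticMXBean diag =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        diag.dumpHeap("/tmp/heap.hprof", true); // true = live objects only
    }
}

The resulting .hprof file can then be opened in Eclipse MAT or VisualVM.
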
On Mon, Sep 27, 2021 at 1:41 PM Kiran Biswal wrote:
> Thanks Sean.
>
> When the executors had only 2 GB, they restarted every 2-3 hours with
> OOMKilled errors.
>
> When I increased executor memory to 12 GB and the number of cores to 12
> (2 executors, 6 cores per executor), the OOMKilled errors and restarts
> stopped, but the memory usage peaks at 14 GB after a few hours
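
A note on the numbers above: on YARN or Kubernetes the container limit is
the executor heap plus spark.executor.memoryOverhead, which defaults to the
larger of 384 MB and 10% of the heap, so a 12 GB heap already books about
13.2 GB before any further native growth. A sketch of setting the overhead
explicitly, assuming Spark 2.3+ where that key applies:

import org.apache.spark.SparkConf;

public class ExecutorSizing {
    public static SparkConf sized() {
        return new SparkConf()
                .set("spark.executor.memory", "12g")
                // Overhead is allocated on top of the heap; the default here
                // would be about 1.2g. Sizing it to the observed ~14 GB peak
                // keeps the container limit above real usage instead of
                // letting the runtime OOMKill the executor.
                .set("spark.executor.memoryOverhead", "2g");
    }
}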