The exit code 52 comes from org.apache.spark.util.SparkExitCode, where it is
defined as val OOM = 52, i.e. an OutOfMemoryError.
Refer to:
https://github.com/apache/spark/blob/d6dc12ef0146ae409834c78737c116050961f350/core/src/main/scala/org/apache/spark/util/SparkExitCode.scala
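For reference, the relevant definitions in that file look roughly like this
(paraphrased, not an exact copy of the source at that commit):

    object SparkExitCode {
      /** The default uncaught exception handler was reached. */
      val UNCAUGHT_EXCEPTION = 50

      /** The default uncaught exception handler was reached, and the
          uncaught exception was an OutOfMemoryError. */
      val OOM = 52
    }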
On 19 September 2016 at 14:57, Cyanny wrote:
My job is a 1 TB join with a 10 GB table on Spark 1.6.1,
run in YARN mode:
1. If I enable the shuffle service, the error is:
Job aborted due to stage failure: ShuffleMapStage 2 (writeToDirectory at
NativeMethodAccessorImpl.java:-2) has failed the maximum allowable number
of times: 4. Most recent failure reason:
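For reference, the shuffle service mentioned above is Spark's external
shuffle service. Enabling it on YARN typically involves the settings below
(standard configuration, not taken from the original post):

    # spark-defaults.conf
    spark.shuffle.service.enabled true

    # yarn-site.xml must also register the auxiliary service, e.g.:
    # yarn.nodemanager.aux-services = mapreduce_shuffle,spark_shuffle
    # yarn.nodemanager.aux-services.spark_shuffle.class =
    #   org.apache.spark.network.yarn.YarnShuffleService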
Can you list the spark-submit command line you used?
Thanks
On Tue, Sep 29, 2015 at 9:02 AM, Anup Sawant wrote:
> Hi all,
> Any idea why I am getting 'Executor heartbeat timed out'? I am fairly new
> to Spark, so I have little knowledge of its internals. The job was
> running for a day or so on 102 GB of data with 40 workers.
Try increasing executor memory (--conf spark.executor.memory=3g or
--executor-memory). Here is something I noted from your logs:
15/09/29 06:32:03 WARN MemoryStore: Failed to reserve initial memory
threshold of 1024.0 KB for computing block rdd_2_1813 in memory.
15/09/29 06:32:03 WARN MemorySt
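For example, a spark-submit invocation with the executor memory raised might
look like this (illustrative only; the application class and jar names are
placeholders):

    # Illustrative only: the application class and jar are placeholders.
    spark-submit \
      --executor-memory 3g \
      --class com.example.MyJob \
      my-job.jar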
Hi all,
Any idea why I am getting 'Executor heartbeat timed out'? I am fairly new
to Spark, so I have little knowledge of its internals. The job was
running for a day or so on 102 GB of data with 40 workers.
-Best,
Anup.
15/09/29 06:32:03 ERROR TaskSchedulerImpl: Lost executor driver on
localhost
> I am using foreachRDD in my code as -
>
> dstream.foreachRDD { rdd =>
>   rdd.foreach { record =>
>     // look up with cassandra table
>     // save updated rows to cassandra table.
>   }
> }
> This foreachRDD is causing executor lost failure. What is the behavior of
> this foreachRDD?
>
> Thanks,
> Padma Ch
>
Hello All,
I am using foreachRDD in my code as -
dstream.foreachRDD { rdd =>
  rdd.foreach { record =>
    // look up with cassandra table
    // save updated rows to cassandra table.
  }
}
This foreachRDD is causing executor lost failure. What is the behavior of
this foreachRDD?
Thanks,
Padma Ch
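One thing to check with this pattern: rdd.foreach runs its body once per
record on the executors, so any per-record connection or lookup setup is
repeated for every record, which can overwhelm executors. A common rewrite
batches the work per partition; a minimal sketch, where CassandraClient is a
hypothetical helper standing in for whatever client or connector is in use:

    dstream.foreachRDD { rdd =>
      rdd.foreachPartition { records =>
        // One session per partition instead of one per record
        // (CassandraClient is a hypothetical helper, not a Spark API).
        val session = CassandraClient.connect()
        try {
          records.foreach { record =>
            val row = session.lookup(record) // look up in the cassandra table
            session.save(row)                // save the updated row back
          }
        } finally {
          session.close()
        }
      }
    }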
> I have a spark streaming application which writes the processed results
> to cassandra. In local mode, the code seems to work fine. The moment I
> start running in distributed mode using YARN, I see executor lost failure.
> I increased executor memory to occupy the entire node's memory, which is
> around 12 GB, but still see the executor lost failure.
Hi All,
I have a spark streaming application which writes the processed results to
cassandra. In local mode, the code seems to work fine. The moment I start
running in distributed mode using YARN, I see executor lost failure. I
increased executor memory to occupy the entire node's memory, which is
around 12 GB, but still see the executor lost failure.
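One common cause when this happens on YARN but not in local mode: YARN kills
containers that exceed their memory allocation, and the JVM's off-heap
overhead must be budgeted on top of --executor-memory. A sketch of the
relevant Spark 1.x settings (values are illustrative, and the class and jar
names are placeholders):

    spark-submit \
      --master yarn \
      --executor-memory 8g \
      --conf spark.yarn.executor.memoryOverhead=2048 \
      --class com.example.StreamingApp \
      streaming-app.jar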
Yes... found the output on web UI of the slave.
Thanks :)
On Tue, Nov 11, 2014 at 2:48 AM, Ankur Dave wrote:
> At 2014-11-10 22:53:49 +0530, Ritesh Kumar Singh
> <riteshoneinamill...@gmail.com> wrote:
> > Tasks are now getting submitted, but many tasks don't happen.
> > Like, after opening the spark-shell, I load a text file from disk and try
> > printing its contents as:
At 2014-11-10 22:53:49 +0530, Ritesh Kumar Singh wrote:
> Tasks are now getting submitted, but many tasks don't happen.
> Like, after opening the spark-shell, I load a text file from disk and try
> printing its contents as:
>
>>sc.textFile("/path/to/file").foreach(println)
>
> It does not give me any output.
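For the archive: the behavior described here is expected. foreach(println)
executes on the executors, so the output goes to each executor's stdout
(visible in the web UI, as noted above), not to the driver's console. To see
the contents in the spark-shell itself, bring the data back to the driver
first, e.g.:

    // Prints on the executors; output appears in each executor's stdout.
    sc.textFile("/path/to/file").foreach(println)

    // Prints in the driver's shell: fetch a sample of lines back first.
    sc.textFile("/path/to/file").take(20).foreach(println)

    // Or collect everything (only safe for small files).
    sc.textFile("/path/to/file").collect().foreach(println)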
-- Forwarded message --
From: Ritesh Kumar Singh
Date: Mon, Nov 10, 2014 at 10:52 PM
Subject: Re: Executor Lost Failure
To: Akhil Das
Tasks are now getting submitted, but many tasks don't happen.
Like, after opening the spark-shell, I load a text file from disk and try
printing its contents as:
>sc.textFile("/path/to/file").foreach(println)
On Mon, Nov 10, 2014 at 10:52 PM, Ritesh Kumar Singh
<riteshoneinamill...@gmail.com> wrote:
> Tasks are now getting submitted, but many tasks don't happen.
> Like, after opening the spark-shell, I load a text file from disk and try
> printing its contents as:
>
> >sc.textFile("/path/to/file").foreach(println)
Try adding the following configurations as well; they might help.
spark.rdd.compress true
spark.storage.memoryFraction 1
spark.core.connection.ack.wait.timeout 600
spark.akka.frameSize 50
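The same settings can also be applied programmatically when building the
context; a minimal sketch (the values above are suggestions to try, not
verified tunings):

    import org.apache.spark.{SparkConf, SparkContext}

    // Same settings as above, applied via SparkConf instead of
    // spark-defaults.conf.
    val conf = new SparkConf()
      .setAppName("example")
      .set("spark.rdd.compress", "true")
      .set("spark.storage.memoryFraction", "1")
      .set("spark.core.connection.ack.wait.timeout", "600")
      .set("spark.akka.frameSize", "50")
    val sc = new SparkContext(conf)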
Thanks
Best Regards
On Mon, Nov 10, 2014 at 6:51 PM, Ritesh Kumar Singh
<riteshoneinamill...@gmail.com> wrote:
Hi,
I am trying to submit my application using spark-submit, with the following
spark-defaults.conf params:
spark.master spark://:7077
spark.eventLog.enabled true
spark.serializer org.apache.spark.serializer.KryoSerializer
spark.executor.extraJavaOptions
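The value of spark.executor.extraJavaOptions is cut off above. For
illustration only, a typical value would be JVM flags like the following
(not the poster's actual setting; note that heap size flags such as -Xmx are
not allowed here and must go through spark.executor.memory):

    # Illustrative value only, not taken from the original post.
    spark.executor.extraJavaOptions -XX:+UseConcMarkSweepGC -XX:+PrintGCDetails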