Can you post the complete stack trace?
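If the application has already finished and YARN log aggregation is enabled, one quick way to grab the full trace is to pull the aggregated logs (a sketch; the application id below is a placeholder for your own):

    yarn logs -applicationId application_1463500000000_0001 > app.log

The log lines just before the SIGTERM usually show why the container was stopped.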
On Tue, May 17, 2016 at 7:00 PM, wrote:
> Hi,
>
> I am getting the error below while running an application in yarn-cluster mode.
>
> *ERROR yarn.ApplicationMaster: RECEIVED SIGNAL 15: SIGTERM*
>
> Can anyone suggest why I am getting this error message?
>
> Thanks
Hi,
This is a good spot to start for Spark on YARN:
https://spark.apache.org/docs/1.5.0/running-on-yarn.html
You can toggle between pages to get the documentation specific to the version you are on.
-
Neelesh S. Salian
Cloudera
Hello,
Thank you for the question.
The status UNDEFINED means the application has not completed and has not yet
been assigned resources.
Once it gets its resource assignment it will progress to RUNNING, and then to
SUCCEEDED upon completion.
It isn't a problem that you should worry about.
You should make sure to tune your YARN configuration.
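If you want to watch the state yourself, the ResourceManager reports it directly (a small sketch; the application id is a placeholder):

    yarn application -status application_1463500000000_0001

The output shows the State (ACCEPTED, RUNNING, FINISHED) and the Final-State, which stays UNDEFINED until the application actually finishes.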
What does the web UI show? What do you see when you click on the "stderr" and
"stdout" links? These links should contain the stdout and stderr output for each
executor.
About your custom logging in the executor: are you sure you checked
"${spark.yarn.app.container.log.dir}/spark-app.log"?
The actual location of this file is wherever YARN resolves that container log
directory on each node.
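For reference, a minimal log4j.properties along the lines the Spark on YARN documentation suggests (the appender name and pattern here are just illustrative):

    log4j.rootCategory=INFO, file
    log4j.appender.file=org.apache.log4j.FileAppender
    log4j.appender.file.file=${spark.yarn.app.container.log.dir}/spark-app.log
    log4j.appender.file.layout=org.apache.log4j.PatternLayout
    log4j.appender.file.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

With this, spark-app.log ends up next to the container's stdout and stderr, so it is picked up by YARN log aggregation as well.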
Hi Ted & Nguyen,
@Ted, I was under the belief that the log4j.properties file would be taken
from the application classpath if no file path is specified.
Please correct me if I am wrong. I tried your approach as well, but I still
couldn't find the logs.
@nguyen, I am running it on a YARN cluster.
Please use the following syntax:
--conf
"spark.executor.extraJavaOptions=-Dlog4j.configuration=file:///local/file/log4j.properties"
FYI
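Put together into a full submit command it would look roughly like this (a sketch; the class name, jar, and exact local path are placeholders, and the file has to exist at that path on every executor node, or be shipped with --files and referenced by name instead):

    spark-submit \
      --master yarn-cluster \
      --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=file:///local/file/log4j.properties" \
      --class com.example.MyApp \
      myapp.jar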
On Fri, Apr 29, 2016 at 6:03 AM, dev loper wrote:
> Hi Spark Team,
>
> I have asked the same question on Stack Overflow, but no luck yet.
>
>
> http://stackov
These are the executors' logs, not the driver logs. To see these log files, you
have to go to the executor machines where the tasks are running. To see what you
print to stdout or stderr, you can either go to the executor machines
directly (the output is stored in "stdout" and "stderr" files somewhere in the
executor's work directory), or open the same files through the executor links in
the web UI.
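As a small illustration (a sketch in Scala; the app name and numbers are made up), anything printed inside an executor-side closure lands in that executor's stdout file, not on the driver console:

    import org.apache.spark.{SparkConf, SparkContext}

    object StdoutDemo {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("stdout-demo"))
        sc.parallelize(1 to 100, 4).foreachPartition { iter =>
          // Goes to the executor's "stdout" file, reachable via the stdout
          // link on the Executors page of the web UI.
          println(s"partition with ${iter.size} elements")
        }
        sc.stop()
      }
    }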
Hi Rachana,
Are you by any chance setting something like this in your code?
"sparkConf.setMaster("yarn-cluster");"
Setting the master to yarn-cluster programmatically on the SparkConf/SparkContext
is not supported.
I think you are hitting this bug:
https://issues.apache.org/jira/browse/SPARK-7504. This was fixed in
Spark 1.4.0.
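The usual workaround is to leave the master out of the code entirely and let spark-submit supply it; a minimal sketch (app and class names are placeholders):

    import org.apache.spark.{SparkConf, SparkContext}

    object YarnClusterApp {
      def main(args: Array[String]): Unit = {
        // No setMaster here; launch with:
        //   spark-submit --master yarn-cluster --class YarnClusterApp app.jar
        val sc = new SparkContext(new SparkConf().setAppName("yarn-cluster-app"))
        // ... job logic ...
        sc.stop()
      }
    }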
Thanks, Sandy. I was digging through the code in deploy.yarn.Client and
found that property right before I saw your reply. I'm on 1.2.x
right now, which doesn't have the property. I guess I need to upgrade sooner
rather than later.
On Thu, May 28, 2015 at 3:56 PM, Sandy Ryza wrote:
>
Hi Corey,
As of this PR https://github.com/apache/spark/pull/5297/files, this can be
controlled with spark.yarn.submit.waitAppCompletion.
-Sandy
On Thu, May 28, 2015 at 11:48 AM, Corey Nolet wrote:
> I am submitting jobs to my YARN cluster in yarn-cluster mode and I'm
> noticing the jvm t
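With that property the submission becomes fire-and-forget; roughly (a sketch with placeholder class and jar names, for Spark 1.4.0 or later):

    spark-submit \
      --master yarn-cluster \
      --conf spark.yarn.submit.waitAppCompletion=false \
      --class com.example.MyApp \
      myapp.jar

spark-submit then exits as soon as YARN accepts the application instead of polling until it completes.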