Hi,
Start with spark.executor.memory set to 2g. You may also
give spark.yarn.executor.memoryOverhead a try.
See https://spark.apache.org/docs/latest/configuration.html and
https://spark.apache.org/docs/latest/running-on-yarn.html for more in-depth
information.
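For example, in a Scala app something like this could be a starting point
(a sketch only; the 2g / 512 values below are placeholders to tune, not
confirmed numbers for your workload):

    import org.apache.spark.{SparkConf, SparkContext}

    // Placeholder values; tune for your workload and cluster.
    val conf = new SparkConf()
      .set("spark.executor.memory", "2g")                // executor JVM heap
      .set("spark.yarn.executor.memoryOverhead", "512")  // extra off-heap room, in MB
    val sc = new SparkContext(conf)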
Regards,
Jacek Laskowski
https://a
I am running Zeppelin on EMR with the default settings, and I am getting the
following error. Restarting the Zeppelin application fixes the problem.
Which default settings do I need to override to fix this error?
org.apache.spark.SparkException: Job aborted due to stage failure: Task 71
Not much information in the attachment.
There was a TimeoutException w.r.t. BlockManagerMaster.removeRdd().
Any chance of more logs?
Thanks
On Thu, Jun 2, 2016 at 2:07 AM, Vishnu Nair wrote:
> Hi Ted
>
> We use Hadoop 2.6 & Spark 1.3.1. I also attached the error file to this
> mail; please have a look at it.
Hi,
A few things for closer examination:
* Is the yarn master URL accepted in 1.3? I thought it was only supported in
later releases. Since you're seeing the issue, it seems it does work.
* I've never seen confs specified using a single string. Can you check in
the web UI that they're applied? (See the sketch after this list.)
* what about this in
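One way to confirm the confs were applied (besides the web UI) is to dump
them from the running SparkContext; a minimal sketch:

    // Print every conf the running context actually sees; compare against
    // what was passed on the command line / in the single conf string.
    sc.getConf.getAll.sorted.foreach { case (k, v) => println(s"$k=$v") }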
Hi Ted
We use Hadoop 2.6 & Spark 1.3.1. I also attached the error file to this
mail; please have a look at it.
Thanks
On Thu, Jun 2, 2016 at 11:51 AM, Ted Yu wrote:
> Can you show the error in a bit more detail?
>
> Which release of Hadoop / Spark are you using?
>
> Is CapacityScheduler being used?
Can you show the error in a bit more detail?
Which release of Hadoop / Spark are you using?
Is CapacityScheduler being used?
Thanks
On Thu, Jun 2, 2016 at 1:32 AM, Prabeesh K. wrote:
> Hi, I am using the command below to run a Spark job, and I get an error like
> "Container preempted by scheduler"
Hi, I am using the command below to run a Spark job, and I get an error like
"Container preempted by scheduler".
I am not sure if it's related to wrong usage of memory:
nohup ~/spark1.3/bin/spark-submit \
  --num-executors 50 \
  --master yarn \
  --deploy-mode cluster \
  --queue adhoc \
  --driver-memor
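For what it's worth, most of those flags have SparkConf equivalents; a
sketch with the values from the command above (the truncated --driver-memor
value is unknown, so it is omitted):

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .set("spark.executor.instances", "50")  // --num-executors 50
      .set("spark.yarn.queue", "adhoc")       // --queue adhoc
    // Note: driver memory must be known before the driver JVM starts, so in
    // cluster mode it still has to go on spark-submit as --driver-memory.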
Hi,
For more details: I'm using Mesos as described below, and I run a very
simple command in spark-shell:
scala> sc.textFile("/data/pickat/tsv/app/2014/07/31/*").
         map(_.split("\t")).   // assuming tab-delimited fields, per the "tsv" path
         groupBy(p => p(1)).
         saveAsTextFile("/user/1001079/pickat_test")
"hdfs" is an account name run mesos, "1001079" is that of running script.
Hi,
I've used HDFS 2.3.0-cdh5.0.1, Mesos 0.19.1, and a re-compiled Spark 1.0.2.
For security reasons, we run HDFS and Mesos as "hdfs", an account that is
not in a root group, and a non-root user submits Spark jobs on Mesos. With
no-switch_user, a simple job, which only reads data from hdfs