Re: spark job error

2018-01-30 Thread Jacek Laskowski
Hi, start with spark.executor.memory=2g. You may also give spark.yarn.executor.memoryOverhead a try. See https://spark.apache.org/docs/latest/configuration.html and https://spark.apache.org/docs/latest/running-on-yarn.html for more in-depth information. Regards, Jacek Laskowski https://a
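A minimal sketch of how these two settings might be passed on the command line (the values and jar name are illustrative, not from the thread; spark.yarn.executor.memoryOverhead is the pre-2.3 property name and takes megabytes):

```shell
# Hypothetical sketch: raise executor heap and YARN off-heap overhead.
spark-submit \
  --master yarn \
  --conf spark.executor.memory=2g \
  --conf spark.yarn.executor.memoryOverhead=512 \
  my_job.jar
```

The same properties can also be set once in spark-defaults.conf instead of per job.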

spark job error

2018-01-30 Thread shyla deshpande
I am running Zeppelin on EMR with the default settings and I am getting the following error. Restarting the Zeppelin application fixes the problem. What default settings do I need to override to fix this error? org.apache.spark.SparkException: Job aborted due to stage failure: Task 71
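The thread's reply points at executor memory settings; a sketch of what overriding them in EMR's Spark defaults could look like, assuming memory pressure is the cause (the values are placeholders, not a recommendation from the thread):

```shell
# /etc/spark/conf/spark-defaults.conf (illustrative values only)
# Raise executor heap and YARN overhead above the EMR defaults.
spark.executor.memory               4g
spark.yarn.executor.memoryOverhead  768
```

Zeppelin's Spark interpreter settings can override the same properties without editing cluster-wide files.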

Re: Container preempted by scheduler - Spark job error

2016-06-02 Thread Ted Yu
Not much information in the attachment. There was a TimeoutException w.r.t. BlockManagerMaster.removeRdd(). Any chance of more logs? Thanks On Thu, Jun 2, 2016 at 2:07 AM, Vishnu Nair wrote: > Hi Ted > > We use Hadoop 2.6 & Spark 1.3.1. I also attached the error file to this > mail, please hav

Re: Container preempted by scheduler - Spark job error

2016-06-02 Thread Jacek Laskowski
Hi, A few things for closer examination: * Is the yarn master URL accepted in 1.3? I thought it was only in later releases. Since you're seeing the issue, it seems it does work. * I've never seen confs specified using a single string. Can you check in the web UI that they're applied? * What about this in
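On the single-string point: spark-submit expects one --conf flag per property; several key=value pairs packed into one quoted string are not parsed as separate confs. A sketch of the usual form (the property values are illustrative):

```shell
# Each Spark property gets its own --conf key=value pair.
spark-submit \
  --master yarn \
  --conf spark.executor.memory=4g \
  --conf spark.yarn.executor.memoryOverhead=512 \
  app.jar
```

The Environment tab of the application's web UI shows which properties were actually applied.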

Fwd: Container preempted by scheduler - Spark job error

2016-06-02 Thread Prabeesh K.
Hi Ted, We use Hadoop 2.6 & Spark 1.3.1. I also attached the error file to this mail; please have a look at it. Thanks On Thu, Jun 2, 2016 at 11:51 AM, Ted Yu wrote: > Can you show the error in a bit more detail? > > Which release of hadoop / Spark are you using ? > > Is CapacityScheduler being

Re: Container preempted by scheduler - Spark job error

2016-06-02 Thread Ted Yu
Can you show the error in a bit more detail? Which releases of Hadoop / Spark are you using? Is the CapacityScheduler being used? Thanks On Thu, Jun 2, 2016 at 1:32 AM, Prabeesh K. wrote: > Hi I am using the below command to run a spark job and I get an error like > "Container preempted by schedu

Container preempted by scheduler - Spark job error

2016-06-02 Thread Prabeesh K.
Hi, I am using the command below to run a Spark job and I get an error like "Container preempted by scheduler". I am not sure if it's related to incorrect memory usage: nohup ~/spark1.3/bin/spark-submit \ --num-executors 50 \ --master yarn \ --deploy-mode cluster \ --queue adhoc \ --driver-memor
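The command above is cut off in the archive; a complete spark-submit of the same shape might look like the following (the memory values and jar name are placeholders, not the poster's actual ones):

```shell
# Hypothetical reconstruction of the command's shape; adjust values to your cluster.
nohup ~/spark1.3/bin/spark-submit \
  --num-executors 50 \
  --master yarn \
  --deploy-mode cluster \
  --queue adhoc \
  --driver-memory 4g \
  --executor-memory 4g \
  app.jar &
```

Note that preemption itself is a YARN scheduler decision (a higher-priority queue reclaiming capacity), so the queue's configuration matters as much as the job's memory flags.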

Re: Spark-job error on writing result into hadoop w/ switch_user=false

2014-08-21 Thread Jongyoul Lee
Hi, For more details, I'm using Mesos as below and run a very simple command in spark-shell: scala> sc.textFile("/data/pickat/tsv/app/2014/07/31/*").map(_.split).groupBy(p => p(1)).saveAsTextFile("/user/1001079/pickat_test") "hdfs" is the account name that runs Mesos, "1001079" is that of the user running the script.
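As quoted, `_.split` will not compile on its own, since String.split needs a delimiter argument (likely lost in truncation). A runnable sketch of the same job, assuming tab-separated input given the /tsv/ path (the delimiter is an assumption):

```shell
# Hypothetical sketch: the same job with an explicit split delimiter,
# saved to a script and fed to spark-shell.
cat > pickat_test.scala <<'EOF'
sc.textFile("/data/pickat/tsv/app/2014/07/31/*")
  .map(_.split("\t"))        // assumed delimiter; adjust to the real format
  .groupBy(p => p(1))        // group by the second field
  .saveAsTextFile("/user/1001079/pickat_test")
EOF
spark-shell -i pickat_test.scala
```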

Spark-job error on writing result into hadoop w/ switch_user=false

2014-08-20 Thread Jongyoul Lee
Hi, I've used HDFS 2.3.0-cdh5.0.1, Mesos 0.19.1, and a re-compiled Spark 1.0.2. For security reasons, we run HDFS and Mesos as "hdfs", which is an account name not in the root group, and a non-root user submits Spark jobs on Mesos. With no switch_user, a simple job, which only reads data from hdf
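The switch_user in the subject refers to the Mesos slave's boolean flag controlling whether tasks run as the submitting framework user or as the slave's own account. A sketch of disabling it (the ZooKeeper master URL is a placeholder):

```shell
# Hypothetical sketch: start the Mesos slave with user switching disabled,
# so tasks run as the slave's account (here "hdfs") instead of the submitter.
mesos-slave --master=zk://zk-host:2181/mesos --no-switch_user
```

With switching disabled, HDFS output paths must be writable by the slave's account, which is the usual source of permission errors like this one.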