Hi, I want to set the executor number to 16, but strangely the executor-cores
setting seems to affect the executor number on Spark on YARN. I don't know why,
or how to set the executor number.
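For reference, a minimal sketch of how the executor count is usually pinned on YARN (flag values, the jar path, and the yarn-cluster choice are illustrative assumptions, not from the original mail; with dynamic allocation enabled, --num-executors is only an initial value):

```shell
# Request exactly 16 executors on YARN (yarn-cluster mode shown as an example).
# --num-executors is honored only by the YARN backend; --executor-cores controls
# how many concurrent tasks each executor runs, not how many executors exist.
./bin/spark-submit \
  --class com.hequn.spark.SparkJoins \
  --master yarn-cluster \
  --num-executors 16 \
  --executor-cores 2 \
  --executor-memory 2g \
  /path/to/your-app.jar
```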
./bin/spark-submit --class com.hequn.spark.SparkJoins \
--master yarn-c
I tried centralized cache step by step following the official Apache Hadoop
website, but centralized cache doesn't seem to work.
See: http://stackoverflow.com/questions/22293358/centralized-cache-failed-in-hadoop-2-3
Can anyone succeed?
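For context, the basic steps from the Hadoop docs look roughly like this (a sketch; the pool and path names are made up, and the memlock prerequisite is the step that most often makes caching silently do nothing):

```shell
# Prerequisite: dfs.datanode.max.locked.memory must be set > 0 in hdfs-site.xml,
# and the DataNode user needs a matching "ulimit -l" (memlock) limit;
# otherwise directives are accepted but stay at 0 bytes cached.
hdfs cacheadmin -addPool testPool                       # create a cache pool
hdfs cacheadmin -addDirective -path /user/test/data -pool testPool
hdfs cacheadmin -listDirectives -stats                  # BYTES_CACHED should grow
```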
2014-05-15 5:30 GMT+08:00 William Kang :
> Hi,
> Any comments about the immutable feature of RDD? A full discussion can make
> it more clear. Any ideas?
> --
> From: hequn cheng
> Sent: 2014/3/25 10:40
> To: user@spark.apache.org
> Subject: Re: Re: RDD usage
>
> First question:
> If you save your modified RDD like this:
job running, is that right?
> ------
> From: hequn cheng
> Sent: 2014/3/25 9:35
> To: user@spark.apache.org
> Subject: Re: RDD usage
>
> points.foreach(p => p.y = another_value) will not return a new modified RDD:
> foreach returns Unit, and because RDDs are immutable the original is
> unchanged. Use points.map(...) to produce a new RDD with the modified values.
>
>
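To illustrate the point side by side (a sketch, not code from the thread; the Point class and values are made up, and local[2] is just so it runs without a cluster), foreach is a side effect returning Unit, while map is what yields a new RDD:

```scala
import org.apache.spark.{SparkConf, SparkContext}

case class Point(x: Double, var y: Double)

object RddImmutability {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("rdd-immutability").setMaster("local[2]"))
    val points = sc.parallelize(Seq(Point(1.0, 1.0), Point(2.0, 2.0)))

    // foreach runs a side effect on each element and returns Unit; mutations
    // happen on deserialized copies inside tasks, so `points` is unchanged.
    points.foreach(p => p.y = 0.0)

    // map builds a new RDD; the original RDD is untouched (RDDs are immutable).
    val updated = points.map(p => p.copy(y = 0.0))
    println(updated.collect().mkString(", "))

    sc.stop()
  }
}
```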
2014-03-24 18:13 GMT+08:00 Chieh-Yen :
> Dear all,
>
> I have a question about the usage of RDD.
> I implemented a case class called AppDataPoint; it looks like:
>
> case class AppDataPoint(input_y : Double, input_x : Array[Double
persist and unpersist.
unpersist: Mark the RDD as non-persistent, and remove all blocks for it from
memory and disk.
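A small sketch of that lifecycle (assuming an existing SparkContext `sc`, e.g. in spark-shell; the path and names are made up): an RDD's blocks live from the first action after persist until unpersist or eviction.

```scala
import org.apache.spark.storage.StorageLevel

// Assumes an existing SparkContext `sc`, e.g. inside spark-shell.
val lines = sc.textFile("/tmp/input.txt")
val words = lines.flatMap(_.split(" "))

words.persist(StorageLevel.MEMORY_AND_DISK) // only marks the RDD for caching
val total = words.count()                   // first action materializes the blocks
val distinct = words.distinct().count()     // second action reuses the cached blocks

words.unpersist() // mark as non-persistent; blocks are dropped from memory and disk
```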
2014-03-19 16:40 GMT+08:00 林武康 :
> Hi, can any one tell me about the lifecycle of an rdd? I search through
> the official website and still can't figure it out. Can I use an rdd in
When I increase my input data size, the executor fails and is lost.
See below:
14/03/11 20:44:18 INFO AppClient$ClientActor: Executor updated:
app-20140311204343-0008/8 is now FAILED (Command exited with code 134)
14/03/11 20:44:18 INFO SparkDeploySchedulerBackend: Executor
app-2014031120434
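Exit code 134 is 128 + 6, i.e. the executor JVM died on SIGABRT, which as input grows is most often memory pressure. A hedged starting point (the values below are placeholders to tune, not from the original thread):

```shell
# Give each executor more headroom as the input grows; 4g and 0.4 are
# example values to tune for the cluster, not recommendations.
./bin/spark-submit \
  --executor-memory 4g \
  --conf spark.storage.memoryFraction=0.4 \
  your-app.jar
```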
Have you sent spark-env.sh to the slave nodes?
2014-03-11 6:47 GMT+08:00 Linlin :
>
> Hi,
>
> I have a Java option (-Xss) specified in SPARK_JAVA_OPTS in
> spark-env.sh. I noticed that after stopping/restarting the Spark cluster, the
> master/worker daemons have the setting applied, but this se
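For reference, the setting under discussion looks roughly like this in conf/spark-env.sh (the -Xss value is an example; in this era of Spark the file had to be present on every slave node, since each daemon and executor sources its own local copy):

```shell
# conf/spark-env.sh -- must be distributed to every worker/slave node,
# because each daemon reads its local copy at startup.
export SPARK_JAVA_OPTS="-Xss4m"
```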