I think this should go into another PR. Can you create a JIRA for that?
Best,
--
Nan Zhu
http://codingcat.me
On Wednesday, March 11, 2015 at 8:50 PM, Du Li wrote:
> Is it possible to extend this PR further (or create another PR) to allow for
> per-node configuration of workers?
Is it possible to extend this PR further (or create another PR) to allow for
per-node configuration of workers?
There have been many discussions about heterogeneous Spark clusters. Currently
the configuration on the master overrides the settings on the workers. Many
Spark users need to run machines with different hardware configurations in the
same cluster.
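For example, the desired setup would be something like this in each machine's
conf/spark-env.sh (the per-node numbers here are hypothetical, just to
illustrate):

# on a big node: conf/spark-env.sh
export SPARK_WORKER_CORES=16    # this machine offers 16 cores
export SPARK_WORKER_MEMORY=64g  # and 64g of memory

# on a small node: conf/spark-env.sh
export SPARK_WORKER_CORES=4
export SPARK_WORKER_MEMORY=8g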
at least 1.4, I think
for now, using YARN or allowing multiple worker instances is just fine
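e.g. on YARN you can ask for multiple executors per node directly; a rough
sketch (the resource sizes, class name, and jar are placeholders):

spark-submit --master yarn \
  --num-executors 6 \
  --executor-cores 2 \
  --executor-memory 4g \
  --class com.example.MyApp myapp.jar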
Best,
--
Nan Zhu
http://codingcat.me
On Wednesday, March 11, 2015 at 8:42 PM, Du Li wrote:
> Is it being merged in the next release? It's indeed a critical patch!
>
> Du
Is it being merged in the next release? It's indeed a critical patch!
Du
On Wednesday, January 21, 2015 3:59 PM, Nan Zhu wrote:
…not sure when it will be reviewed…
but for now you can work around it by allowing multiple worker instances on a
single machine:
http://spark.apache.org/docs/latest/spark-standalone.html
search for SPARK_WORKER_INSTANCES
…not sure when it will be reviewed…
but for now you can work around it by allowing multiple worker instances on a
single machine:
http://spark.apache.org/docs/latest/spark-standalone.html
search for SPARK_WORKER_INSTANCES
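for example, in conf/spark-env.sh on each machine (a minimal sketch; the
numbers are made up, size them to your hardware):

# run 2 worker JVMs on this machine
export SPARK_WORKER_INSTANCES=2
# set the per-worker share explicitly, or each instance will try to use all cores
export SPARK_WORKER_CORES=4
export SPARK_WORKER_MEMORY=8g

then each application can get one executor per worker instance, i.e. up to 2
executors per machine here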
Best,
--
Nan Zhu
http://codingcat.me
Will SPARK-1706 be included in the next release?
On Wed, Jan 21, 2015 at 2:50 PM, Ted Yu wrote:
> Please see SPARK-1706
>
> On Wed, Jan 21, 2015 at 2:43 PM, Larry Liu wrote:
>
>> I tried to submit a job with --conf "spark.cores.max=6"
>> or --total-executor-cores 6 on a standalone cluster.
Please see SPARK-1706
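(That JIRA proposes letting one worker launch several executors for the same
application by capping the cores per executor. Once it lands, the submit should
look roughly like this; a sketch, with a placeholder master URL:

spark-submit --master spark://master-host:7077 \
  --total-executor-cores 6 \
  --executor-cores 2 \
  ...

i.e. 3 executors of 2 cores each, which can then be co-located on a worker.)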
On Wed, Jan 21, 2015 at 2:43 PM, Larry Liu wrote:
> I tried to submit a job with --conf "spark.cores.max=6"
> or --total-executor-cores 6 on a standalone cluster. But I don't see more
> than 1 executor on each worker. I am wondering how to use multiple
> executors when submitting jobs.
I tried to submit a job with --conf "spark.cores.max=6"
or --total-executor-cores 6 on a standalone cluster. But I don't see more
than 1 executor on each worker. I am wondering how to use multiple
executors when submitting jobs.
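For reference, the full command was roughly the following (master URL, class
name, and jar are placeholders):

spark-submit --master spark://master-host:7077 \
  --conf "spark.cores.max=6" \
  --class com.example.MyApp myapp.jar

# or, equivalently
spark-submit --master spark://master-host:7077 \
  --total-executor-cores 6 \
  --class com.example.MyApp myapp.jar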
Thanks
larry