I think this should go into a separate PR.

Can you create a JIRA for that?

Best,  

--  
Nan Zhu
http://codingcat.me


On Wednesday, March 11, 2015 at 8:50 PM, Du Li wrote:

> Is it possible to extend this PR further (or create another PR) to allow for 
> per-node configuration of workers?  
>  
> There have been many discussions about heterogeneous Spark clusters. Currently, 
> the configuration on the master overrides the configuration on the workers. Many 
> Spark users need to run machines with different CPU/memory capacities in the 
> same cluster.
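>  
> For illustration, the kind of per-node setup being requested (the numbers here 
> are hypothetical) would be a different conf/spark-env.sh on each machine, e.g.:  
>  
>     # on a 16-core node
>     export SPARK_WORKER_CORES=16
>     export SPARK_WORKER_MEMORY=48g
>  
>     # on an 8-core node
>     export SPARK_WORKER_CORES=8
>     export SPARK_WORKER_MEMORY=24g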
>  
> Du  
>  
>  
> On Wednesday, January 21, 2015 3:59 PM, Nan Zhu <zhunanmcg...@gmail.com> wrote:
>  
>  
> …not sure when it will be reviewed…
>  
> but for now you can work around it by running multiple worker instances on a 
> single machine  
>  
> http://spark.apache.org/docs/latest/spark-standalone.html
>  
> search for SPARK_WORKER_INSTANCES
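>  
> As an example, a minimal sketch of conf/spark-env.sh on one machine (the 
> numbers are only illustrative):  
>  
>     # run two worker daemons on this machine
>     export SPARK_WORKER_INSTANCES=2
>     # cores and memory assigned to EACH worker instance
>     export SPARK_WORKER_CORES=4
>     export SPARK_WORKER_MEMORY=8g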
>  
> Best,  
>  
> --  
> Nan Zhu
> http://codingcat.me
>  
> On Wednesday, January 21, 2015 at 6:50 PM, Larry Liu wrote:
> > Will SPARK-1706 be included in the next release?
> >  
> > On Wed, Jan 21, 2015 at 2:50 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> > > Please see SPARK-1706
> > >  
> > > On Wed, Jan 21, 2015 at 2:43 PM, Larry Liu <larryli...@gmail.com> wrote:
> > > > I tried to submit a job with --conf "spark.cores.max=6" or 
> > > > --total-executor-cores 6 on a standalone cluster, but I don't see more 
> > > > than one executor on each worker. I am wondering how to run multiple 
> > > > executors per worker when submitting jobs.
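> > > >  
> > > > For reference, this is the kind of invocation I am using (the master URL 
> > > > and jar name are placeholders):  
> > > >  
> > > >     spark-submit \
> > > >       --master spark://master-host:7077 \
> > > >       --total-executor-cores 6 \
> > > >       --conf "spark.cores.max=6" \
> > > >       my-app.jar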
> > > >  
> > > > Thanks
> > > > larry
