At least 1.4, I think.

For now, using YARN or allowing multiple worker instances works just fine.
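
If you go the YARN route, a minimal sketch of a submit command (the jar name and the numbers are placeholders, not recommendations):

    # YARN lets you request several executors directly;
    # --num-executors is honored on YARN only, not on standalone
    spark-submit --master yarn-client \
      --num-executors 6 \
      --executor-cores 1 \
      --executor-memory 2g \
      myapp.jar

On standalone, the worker-instances workaround quoted below is the way to get more than one executor per machine.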

Best,  

--  
Nan Zhu
http://codingcat.me


On Wednesday, March 11, 2015 at 8:42 PM, Du Li wrote:

> Is it being merged in the next release? It's indeed a critical patch!
>  
> Du  
>  
>  
> On Wednesday, January 21, 2015 3:59 PM, Nan Zhu <zhunanmcg...@gmail.com 
> (mailto:zhunanmcg...@gmail.com)> wrote:
>  
>  
> …not sure when it will be reviewed…
>  
> but for now you can work around it by allowing multiple worker instances on a 
> single machine  
>  
> http://spark.apache.org/docs/latest/spark-standalone.html
>  
> search for SPARK_WORKER_INSTANCES
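>  
> A minimal sketch of that workaround in conf/spark-env.sh on each worker 
> machine (the numbers are illustrative, not recommendations):
>  
>     # run two worker daemons per machine; since standalone gives an
>     # application at most one executor per worker, this allows up to
>     # two executors per machine per application
>     SPARK_WORKER_INSTANCES=2
>     # cap each worker so the instances don't oversubscribe the box
>     SPARK_WORKER_CORES=3
>     SPARK_WORKER_MEMORY=4g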
>  
> Best,  
>  
> --  
> Nan Zhu
> http://codingcat.me
>  
> On Wednesday, January 21, 2015 at 6:50 PM, Larry Liu wrote:
> > Will SPARK-1706 be included in the next release?
> >  
> > On Wed, Jan 21, 2015 at 2:50 PM, Ted Yu <yuzhih...@gmail.com 
> > (mailto:yuzhih...@gmail.com)> wrote:
> > > Please see SPARK-1706
> > >  
> > > On Wed, Jan 21, 2015 at 2:43 PM, Larry Liu <larryli...@gmail.com 
> > > (mailto:larryli...@gmail.com)> wrote:
> > > > I tried to submit a job with --conf "spark.cores.max=6" or 
> > > > --total-executor-cores 6 on a standalone cluster, but I don't see more 
> > > > than one executor on each worker. How do I use multiple executors when 
> > > > submitting jobs?
> > > >  
> > > > Thanks
> > > > larry
