Re: Spark with Yarn Client

2016-03-11 Thread Alexander Pivovarov
Check the doc - http://spark.apache.org/docs/latest/running-on-yarn.html. You can also start an EMR-4.2.0 or 4.3.0 cluster with the Spark application and see how it's configured. On Fri, Mar 11, 2016 at 7:50 PM, Divya Gehlot wrote: > Hi, > I am trying to understand the behaviour/configuration of Spark with the YARN > client …
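For reference, a minimal spark-submit sketch for the two YARN deploy modes that page describes (the class name, jar, and resource sizes are placeholders, not values from this thread; older Spark 1.x releases spell the modes --master yarn-client / yarn-cluster, as in the 2014 messages below):

    # client mode: the driver runs on the submitting machine, executors in YARN containers
    spark-submit --master yarn --deploy-mode client \
      --class com.example.MyApp \
      --num-executors 4 --executor-memory 2g \
      my-app.jar

    # cluster mode: the driver itself runs inside a YARN container on the cluster
    spark-submit --master yarn --deploy-mode cluster \
      --class com.example.MyApp \
      my-app.jar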

Re: Spark with YARN

2014-09-24 Thread Marcelo Vanzin
If you launched the job in yarn-cluster mode, the tracking URL is printed in the output of the launched process. That will lead you to the Spark UI once the job is running. If you're using CM, you can reach the same link by clicking the "Resource Manager UI" link on your YARN service, then find …
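As a sketch, the same tracking URL can also be pulled from the ResourceManager on the command line while the application is running (the state filter is a standard YARN CLI option):

    # list running YARN applications; the Tracking-URL column for your Spark app
    # points at the Spark UI, proxied through the ResourceManager
    yarn application -list -appStates RUNNING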

Re: Spark with YARN

2014-09-24 Thread Raghuveer Chanda
Yeah, I got the logs and it's reporting about the memory: 14/09/25 00:08:26 WARN YarnClusterScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory. Now I have shifted to a bigger cluster with more memory, but here I'm not abl…
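That warning usually means the container request is larger than what YARN can grant. A hedged sketch of sizing the request explicitly so it fits within the NodeManager limits (the numbers, class, and jar names are placeholders):

    spark-submit --master yarn-cluster \
      --num-executors 2 --executor-memory 1g --executor-cores 1 \
      --class com.example.MyApp my-app.jar
    # executor memory plus its overhead must stay below
    # yarn.scheduler.maximum-allocation-mb, or the request is never satisfied
    # and the job stays stuck at "has not accepted any resources"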

Re: Spark with YARN

2014-09-24 Thread Marcelo Vanzin
You need to use the command-line yarn application that I mentioned ("yarn logs"). You can't look at the logs through the UI after the app stops. On Wed, Sep 24, 2014 at 11:16 AM, Raghuveer Chanda wrote: > > Thanks for the reply .. This is the error in the logs obtained from the UI at > http://dml3:80…
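A minimal sketch of that command, using the application id implied by the container id quoted later in this thread (log aggregation must be enabled, and the application must have finished):

    yarn logs -applicationId application_1411578463780_0001
    # note the single-dash flag spelling used by the YARN CLI; the output is the
    # aggregated stdout/stderr of every container, including the driver in
    # yarn-cluster mode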

Re: Spark with YARN

2014-09-24 Thread Raghuveer Chanda
Thanks for the reply .. This is the error in the logs obtained from the UI at http://dml3:8042/node/containerlogs/container_1411578463780_0001_02_01/chanda. So now, how do I set the Log Server URL? "Failed while trying to construct the redirect url to the log server. Log Server url may not be confi…"
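That message means the NodeManager has no log-server URL to redirect to once the container exits. A minimal check, assuming the Cloudera-style config directory mentioned later in this thread (the JobHistory endpoint in the comment is a typical value, not one taken from this cluster):

    grep -A1 'yarn.log.server.url' /etc/hadoop/conf.cloudera.yarn/yarn-site.xml \
      || echo 'yarn.log.server.url is not set'
    # when configured, it usually points at the JobHistory server, e.g.
    # http://<history-host>:19888/jobhistory/logs, and the container log links
    # in the NodeManager UI redirect there after the application stops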

Re: Spark with YARN

2014-09-24 Thread Raghuveer Chanda
The screenshot executors.8080.png is of the Executors tab itself, and only the driver is listed, with no workers, even though I set the master to yarn-cluster. On Wed, Sep 24, 2014 at 11:18 PM, Matt Narrell wrote: > This just shows the driver. Click the Executors tab in the Spark UI > > mn > > On Sep 24, …

Re: Spark with YARN

2014-09-24 Thread Raghuveer Chanda
Thanks for the reply. I have a doubt as to which path to set for YARN_CONF_DIR. My /etc/hadoop folder has the following subfolders: conf, conf.cloudera.hdfs, conf.cloudera.mapreduce and conf.cloudera.yarn, and both the conf and conf.cloudera.yarn folders have a yarn-site.xml. As of now I set the variable as …
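A sketch for that CDH layout, assuming conf.cloudera.yarn is the client configuration the cluster actually uses (worth confirming its yarn-site.xml names the right ResourceManager):

    export YARN_CONF_DIR=/etc/hadoop/conf.cloudera.yarn
    export HADOOP_CONF_DIR="$YARN_CONF_DIR"   # Spark reads either variable
    ls "$YARN_CONF_DIR/yarn-site.xml"         # sanity check before submitting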

Re: Spark with YARN

2014-09-24 Thread Marcelo Vanzin
You'll need to look at the driver output to have a better idea of what's going on. You can use "yarn logs --applicationId blah" after your app is finished (e.g. by killing it) to look at it. My guess is that your cluster doesn't have enough resources available to service the container request you'…
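Before resubmitting, a quick sketch of checking what the cluster can actually offer (node ids come from the first command's output):

    yarn node -list
    yarn node -status <node-id>   # shows memory/vcore capacity and current usage
    # compare the reported capacity with the executor and driver memory the
    # spark-submit command is requesting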

Re: Spark with YARN

2014-09-24 Thread Greg Hill
Do you have YARN_CONF_DIR set in your environment to point Spark to where your YARN configs are? Greg From: Raghuveer Chanda <raghuveer.cha...@gmail.com> Date: Wednesday, September 24, 2014 12:25 PM To: "u...@spark.incubator.apache.org" …

Re: Spark with YARN

2014-09-24 Thread Matt Narrell
This just shows the driver. Click the Executors tab in the Spark UI. mn On Sep 24, 2014, at 11:25 AM, Raghuveer Chanda wrote: > Hi, > > I'm new to Spark and facing a problem with running a job in a cluster using YARN. > > Initially I ran jobs using the Spark master as --master spark://dml2:7077 and …