Re: Running Spark on Yarn-Client/Cluster mode

2016-04-12 Thread Jon Kjær Amundsen
Hi Ashesh, You might be experiencing problems with the virtual memory allocation. Try grepping the yarn-hadoop-nodemanager-*.log (found in $HADOOP_INSTALL/logs) for 'virtual memory limits'. If you see a message like - WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.Co
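The warning Jon refers to typically continues with the container being killed for running beyond virtual memory limits. A common workaround (an assumption on my part, not stated in this thread) is to relax the virtual-memory check in yarn-site.xml:

```xml
<!-- yarn-site.xml: possible workarounds for 'running beyond virtual memory limits'
     (assumption: not quoted from this thread) -->
<!-- Option 1: disable the virtual-memory check entirely -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
<!-- Option 2: raise the virtual-to-physical memory ratio (default 2.1) -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
</property>
```

Disabling the check is the blunter option; raising the ratio keeps the safeguard but gives JVM containers more virtual-memory headroom.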

Re: Running Spark on Yarn-Client/Cluster mode

2016-04-11 Thread ashesh_28
I have updated all the nodes in the cluster to 4 GB of RAM, but I still face the same error when trying to launch spark-shell in yarn-client mode. Any suggestions? -- View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Running-Spark-on-Yarn-Client-Cluster-mo

Re: Running Spark on Yarn-Client/Cluster mode

2016-04-11 Thread ashesh_28
I have modified my yarn-site.xml to include the following properties: yarn.nodemanager.resource.memory-mb 4096, yarn.scheduler.minimum-allocation-mb 256, yarn.scheduler.maximum-allocation-mb 2250 A
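Flattened onto one line by the archive, those settings correspond to yarn-site.xml entries like the following (values taken from the message; the XML layout itself is assumed):

```xml
<!-- yarn-site.xml: sketch reconstructed from the values quoted above -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>4096</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>256</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>2250</value>
</property>
```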

Re: Running Spark on Yarn-Client/Cluster mode

2016-04-08 Thread ashesh_28
Hi Dhiraj, Thanks for the clarification. Yes, I did check that both YARN daemons (NodeManager & ResourceManager) are running on their respective nodes, and I can access the HDFS directory structure from each node. I am using Hadoop version 2.7.2 and I have downloaded the pre-built version

Re: Running Spark on Yarn-Client/Cluster mode

2016-04-08 Thread ashesh_28
Some additional information on node memory and cores: ptfhadoop01v - 4GB, ntpcam01v - 1GB, ntpcam03v - 2GB. Each VM has only a single CPU core.

Re: Running Spark on Yarn-Client/Cluster mode

2016-04-08 Thread ashesh_28
Hi, just a quick update: after trying for a while, I rebooted all three machines in the cluster and formatted the NameNode and ZKFC. Then I started every daemon in the cluster. After all the daemons were up and running, I tried to issue the same command as earlier

Re: Running Spark on Yarn-Client/Cluster mode

2016-04-07 Thread ashesh_28
Hi, I am also attaching a screenshot of my ResourceManager UI, which shows the available cores and memory allocated for each node.

Re: Running Spark on Yarn-Client/Cluster mode

2016-04-07 Thread ashesh_28
Hi guys, thanks for your valuable inputs. I have tried a few alternatives as suggested, but they all lead to the same result: unable to start the Spark context. @Dhiraj Peechara I am able to start my Spark SC (SparkContext) in standalone mode by just issuing the *$spark-shell* command from the termina
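For comparison, the two launch modes differ only in the arguments passed to spark-shell. A sketch of the invocations (Spark 1.6-era syntax; the HADOOP_CONF_DIR path is an assumed typical install location, not taken from the thread):

```shell
# Standalone/local mode -- this is what works for the poster:
spark-shell

# YARN client mode -- the driver runs locally, executors run in YARN containers.
# Spark locates the cluster through the Hadoop configuration directory:
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop   # assumed install path
spark-shell --master yarn-client
```

This is an environment-dependent invocation sketch, not runnable without a configured cluster.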

Re: Running Spark on Yarn-Client/Cluster mode

2016-04-07 Thread JasmineGeorge
The logs are self-explanatory. It says "java.io.IOException: Incomplete HDFS URI, no host: hdfs:/user/hduser/share/lib/spark-assembly.jar". You need to specify the host in the above HDFS URL. It should look something like the following: hdfs://&lt;namenode-host&gt;:8020/user/hduser/share/lib/spark-assembly.jar -
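The distinction can be checked mechanically: an hdfs: URI with only a single slash has no authority (host) component, which is exactly what triggers the "Incomplete HDFS URI" error. A minimal sketch (`namenode-host` is a placeholder, not a host from this thread):

```shell
# Classify an HDFS URI by whether it carries an authority (host) component.
check_uri() {
  case "$1" in
    hdfs://*) echo "ok: host present" ;;
    hdfs:/*)  echo "error: Incomplete HDFS URI, no host" ;;
    *)        echo "not an hdfs URI" ;;
  esac
}

check_uri "hdfs:/user/hduser/share/lib/spark-assembly.jar"
# -> error: Incomplete HDFS URI, no host
check_uri "hdfs://namenode-host:8020/user/hduser/share/lib/spark-assembly.jar"
# -> ok: host present
```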