Installation Issues - Spark 1.6.0 With Hadoop 2.6 - Pre Built On Windows 7
Dear Experts,

Need help getting this resolved - what am I doing wrong? Any help is greatly appreciated.

Env -
Windows 7 - 64 bit OS
Spark 1.6.0 with Hadoop 2.6 - pre-built setup
JAVA_HOME - points to 1.7
SCALA_HOME - 2.11

I have an Admin user and a Standard user on Windows. All the setup and running of Spark is done using the Standard user account.

Spark is set up on the D drive - D:\Home\Prod_Inst\BigData\Spark\VER_1_6_0_W_H_2_6
HADOOP_HOME points to winutils.exe (64-bit version) on the D drive - D:\Home\Prod_Inst\BigData\Spark\MySparkSetup\winutils
Standard user account - w7-PC\Shaffu_Knowledge

Using the Standard user account - mkdir D:\tmp\hive
Using the Standard user account - winutils.exe chmod -R 777 D:\tmp
Using the Standard user account - winutils.exe ls D:\tmp and D:\tmp\hive

drwxrwxrwx 1 w7-PC\Shaffu_Knowledge w7-PC\None 0 Apr 12 2016 \tmp
drwxrwxrwx 1 w7-PC\Shaffu_Knowledge w7-PC\None 0 Apr 12 2016 \tmp\hive

Running spark-shell results in the following exception:

D:\Home\Prod_Inst\BigData\Spark\VER_1_6_0_W_H_2_6>./bin/spark-shell
'.' is not recognized as an internal or external command, operable program or batch file.

D:\Home\Prod_Inst\BigData\Spark\VER_1_6_0_W_H_2_6>.\bin\spark-shell
16/04/12 14:40:16 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.6.0
      /_/

Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_25)
Type in expressions to have them evaluated.
Type :help for more information.
Spark context available as sc.
16/04/12 14:40:22 WARN General: Plugin (Bundle) "org.datanucleus.store.rdbms" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/D:/Home/Prod_Inst/BigData/Spark/VER_1_6_0_W_H_2_6/lib/
16/04/12 14:40:22 WARN General: Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/D:/Home/Prod_Inst/BigData/Spark/VER_1_6_0_W_H_2_6/bin/../lib/datan
16/04/12 14:40:22 WARN General: Plugin (Bundle) "org.datanucleus.api.jdo" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/D:/Home/Prod_Inst/BigData/Spark/VER_1_6_0_W_H_2_6/bin/../l
16/04/12 14:40:22 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/04/12 14:40:23 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/04/12 14:40:38 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
16/04/12 14:40:38 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
16/04/12 14:40:39 WARN : Your hostname, w7-PC resolves to a loopback/non-reachable address: fe80:0:0:0:8d4f:1fa9:cf7d:23d0%17, but we couldn't find any external IP address!
java.lang.RuntimeException: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable.
Current permissions are: rw-rw-rw-
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
        at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:194)
        at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:238)
        at org.apache.spark.sql.hive.HiveContext.executionHive$lzycompute(HiveContext.scala:218)
        at org.apache.spark.sql.hive.HiveContext.executionHive(HiveContext.scala:208)
        at org.apache.spark.sql.hive.HiveContext.functionRegistry$lzycompute(HiveContext.scala:462)
        at org.apache.spark.sql.hive.HiveContext.functionRegistry(HiveContext.scala:461)
        at org.apache.spark.sql.UDFRegistration.<init>(UDFRegistration.scala:40)
        at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:330)
        at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:90)
        at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:101)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1028)
        at $iwC$$iwC.<init>(<console>:15)
        at $iwC.<init>(<console>:24)
        at <init>(<console>:26)
        at .<init>(<console>:30)
        at .<clinit>(<console>)
        at .<init>(<console>:7)
        at .<clinit>(<console>)
        at $print(<console>)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAc
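For reference, the workaround usually suggested for this error on Windows is to fix the permissions with a winutils.exe that matches the Hadoop build (here, a 64-bit build for Hadoop 2.6), run from the same drive that spark-shell is launched from, with HADOOP_HOME pointing at the directory containing bin\winutils.exe rather than at the exe itself. A minimal sketch, reusing the poster's paths (the exact folder layout is an assumption):

    REM HADOOP_HOME should be the folder that contains bin\winutils.exe
    set HADOOP_HOME=D:\Home\Prod_Inst\BigData\Spark\MySparkSetup\winutils

    REM Re-apply permissions to the Hive scratch dir, from the D: drive
    D:
    %HADOOP_HOME%\bin\winutils.exe chmod -R 777 D:\tmp\hive

    REM Verify: should now print drwxrwxrwx, not rw-rw-rw-
    %HADOOP_HOME%\bin\winutils.exe ls D:\tmp\hive

    REM Relaunch the shell (note .\ rather than ./ under cmd.exe)
    cd D:\Home\Prod_Inst\BigData\Spark\VER_1_6_0_W_H_2_6
    .\bin\spark-shell

If the permissions still report rw-rw-rw- after the chmod, the usual suspicion is a winutils.exe built for a different Hadoop version or bitness than the one the Spark distribution was built against.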
How to start HDFS on Spark Standalone
Hi,

I am a newbie to Spark. I wanted to know how to start HDFS and verify that it has started, for use with Spark standalone.

Env -
Windows 7 - 64 bit
Spark 1.4.1 with Hadoop 2.6

Using the Scala shell - spark-shell

--
Thanks,
Harry
Re: How to start HDFS on Spark Standalone
Deepak,

The following could be very dumb questions, so pardon me for the same.
1) When I download the binary for Spark with a version of Hadoop (Hadoop 2.6), does Hadoop not come in the zip or tar file?
2) If it does not come along, is there an Apache Hadoop for Windows? Is it in binary format, or will I have to build it?
3) Is there a basic tutorial for Hadoop on Windows covering the basic needs of Spark?

Thanks in advance!

On Mon, Apr 18, 2016 at 5:35 PM, Deepak Sharma wrote:

> Once you download hadoop and format the namenode, you can use
> start-dfs.sh to start hdfs.
> Then use 'jps' to see if datanode/namenode services are up and running.
>
> Thanks
> Deepak
>
> On Mon, Apr 18, 2016 at 5:18 PM, My List wrote:
>
>> Hi,
>>
>> I am a newbie to Spark. I wanted to know how to start HDFS and verify
>> that it has started, for use with Spark standalone.
>>
>> Env -
>> Windows 7 - 64 bit
>> Spark 1.4.1 with Hadoop 2.6
>>
>> Using the Scala shell - spark-shell
>>
>> --
>> Thanks,
>> Harry
>
>
> --
> Thanks
> Deepak
> www.bigdatabig.com
> www.keosha.net

--
Thanks,
Harmeet
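As a minimal sketch of Deepak's steps for a single-node Hadoop 2.6 install on Linux (assuming HADOOP_HOME points at the unpacked Hadoop directory and JAVA_HOME is configured in etc/hadoop/hadoop-env.sh):

    # One-time only: format the namenode
    $HADOOP_HOME/bin/hdfs namenode -format

    # Start the HDFS daemons (namenode, datanode, secondary namenode)
    $HADOOP_HOME/sbin/start-dfs.sh

    # Verify the daemon JVMs are running; jps should list
    # NameNode, DataNode and SecondaryNameNode among its output
    jps

    # Optional sanity check against the filesystem itself
    $HADOOP_HOME/bin/hdfs dfs -ls /

None of this ships inside the Spark download; it comes from a separate Apache Hadoop install, as the next reply explains.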
Re: How to start HDFS on Spark Standalone
Deepak,

Would you advise that I use Ubuntu or Red Hat? Windows support issues on Spark are galore.
Since I am starting afresh, what would you advise?

On Mon, Apr 18, 2016 at 5:45 PM, Deepak Sharma wrote:

> Binary for Spark means it's Spark built against Hadoop 2.6.
> It will not have any Hadoop executables.
> You'll have to set up Hadoop separately.
> I have not used the Windows version yet, but there are some.
>
> Thanks
> Deepak
>
> On Mon, Apr 18, 2016 at 5:43 PM, My List wrote:
>
>> Deepak,
>>
>> The following could be very dumb questions, so pardon me for the same.
>> 1) When I download the binary for Spark with a version of Hadoop
>> (Hadoop 2.6), does Hadoop not come in the zip or tar file?
>> 2) If it does not come along, is there an Apache Hadoop for Windows?
>> Is it in binary format, or will I have to build it?
>> 3) Is there a basic tutorial for Hadoop on Windows covering the basic
>> needs of Spark?
>>
>> Thanks in advance!
>>
>> On Mon, Apr 18, 2016 at 5:35 PM, Deepak Sharma wrote:
>>
>>> Once you download hadoop and format the namenode, you can use
>>> start-dfs.sh to start hdfs.
>>> Then use 'jps' to see if datanode/namenode services are up and running.
>>>
>>> Thanks
>>> Deepak
>>>
>>> On Mon, Apr 18, 2016 at 5:18 PM, My List wrote:
>>>
>>>> Hi,
>>>>
>>>> I am a newbie to Spark. I wanted to know how to start HDFS and
>>>> verify that it has started, for use with Spark standalone.
>>>>
>>>> Env -
>>>> Windows 7 - 64 bit
>>>> Spark 1.4.1 with Hadoop 2.6
>>>>
>>>> Using the Scala shell - spark-shell
>>>>
>>>> --
>>>> Thanks,
>>>> Harry
>>>
>>>
>>> --
>>> Thanks
>>> Deepak
>>> www.bigdatabig.com
>>> www.keosha.net
>>
>>
>> --
>> Thanks,
>> Harmeet
>
>
> --
> Thanks
> Deepak
> www.bigdatabig.com
> www.keosha.net

--
Thanks,
Harmeet
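Worth noting: HDFS is not required just to experiment with Spark standalone, since spark-shell can read local files directly via file:// URLs. A minimal sketch (the data file path here is made up for illustration):

    scala> // No HDFS daemons needed; this reads straight from the local disk
    scala> val lines = sc.textFile("file:///D:/data/sample.txt")
    scala> lines.count()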
Re: How to start HDFS on Spark Standalone
Deepak,

I love the Unix flavors and have programmed on them. I just have a Windows laptop and PC, hence I haven't moved to a Unix flavor. I was trying to run big data stuff on Windows and have run into so many issues that I could just throw the Windows laptop out.

Your view - Red Hat, Ubuntu or CentOS? Does Red Hat give a one-year licence on purchase, etc.?

Thanks

On Mon, Apr 18, 2016 at 5:52 PM, Deepak Sharma wrote:

> It works well with all flavors of Linux.
> It all depends on your experience with these flavors.
>
> Thanks
> Deepak
>
> On Mon, Apr 18, 2016 at 5:51 PM, My List wrote:
>
>> Deepak,
>>
>> Would you advise that I use Ubuntu or Red Hat? Windows support issues
>> on Spark are galore.
>> Since I am starting afresh, what would you advise?
>>
>> On Mon, Apr 18, 2016 at 5:45 PM, Deepak Sharma wrote:
>>
>>> Binary for Spark means it's Spark built against Hadoop 2.6.
>>> It will not have any Hadoop executables.
>>> You'll have to set up Hadoop separately.
>>> I have not used the Windows version yet, but there are some.
>>>
>>> Thanks
>>> Deepak
>>>
>>> On Mon, Apr 18, 2016 at 5:43 PM, My List wrote:
>>>
>>>> Deepak,
>>>>
>>>> The following could be very dumb questions, so pardon me for the same.
>>>> 1) When I download the binary for Spark with a version of Hadoop
>>>> (Hadoop 2.6), does Hadoop not come in the zip or tar file?
>>>> 2) If it does not come along, is there an Apache Hadoop for Windows?
>>>> Is it in binary format, or will I have to build it?
>>>> 3) Is there a basic tutorial for Hadoop on Windows covering the basic
>>>> needs of Spark?
>>>>
>>>> Thanks in advance!
>>>>
>>>> On Mon, Apr 18, 2016 at 5:35 PM, Deepak Sharma wrote:
>>>>
>>>>> Once you download hadoop and format the namenode, you can use
>>>>> start-dfs.sh to start hdfs.
>>>>> Then use 'jps' to see if datanode/namenode services are up and running.
>>>>>
>>>>> Thanks
>>>>> Deepak
>>>>>
>>>>> On Mon, Apr 18, 2016 at 5:18 PM, My List wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> I am a newbie to Spark. I wanted to know how to start HDFS and
>>>>>> verify that it has started, for use with Spark standalone.
>>>>>>
>>>>>> Env -
>>>>>> Windows 7 - 64 bit
>>>>>> Spark 1.4.1 with Hadoop 2.6
>>>>>>
>>>>>> Using the Scala shell - spark-shell
>>>>>>
>>>>>> --
>>>>>> Thanks,
>>>>>> Harry
>>>>>
>>>>>
>>>>> --
>>>>> Thanks
>>>>> Deepak
>>>>> www.bigdatabig.com
>>>>> www.keosha.net
>>>>
>>>>
>>>> --
>>>> Thanks,
>>>> Harmeet
>>>
>>>
>>> --
>>> Thanks
>>> Deepak
>>> www.bigdatabig.com
>>> www.keosha.net
>>
>>
>> --
>> Thanks,
>> Harmeet
>
>
> --
> Thanks
> Deepak
> www.bigdatabig.com
> www.keosha.net

--
Thanks,
Harmeet