Hi,

It looks like the interpreter process failed to launch, or the Spark context failed to be created. Can you find any other logs in the ./logs directory?
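In case it helps, here is a rough sketch of where to look. The paths and log-file pattern are assumptions based on a default Zeppelin layout, so adjust them to your install; the directory name after SPARK_HOME is a placeholder for wherever you unpacked the Spark tarball:

```shell
# Sketch only; paths below are assumptions, not your exact layout.
# SPARK_HOME should point at the Spark root directory, not its bin/ subdirectory:
export SPARK_HOME=/path/to/spark-1.5.2-bin-hadoop2.6   # hypothetical path

# From the Zeppelin install directory, list the log files and tail the
# Spark interpreter log, which usually carries the underlying launch error:
ls ./logs
tail -n 100 ./logs/*interpreter*spark*.log   # assumed filename pattern
```

Setting SPARK_HOME in conf/zeppelin-env.sh and restarting Zeppelin is the usual way to make it stick across runs.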
Thanks,
moon

On Thu, Dec 24, 2015 at 6:59 AM Hoc Phan <quang...@yahoo.com> wrote:
> Hi
>
> I am downloading this prebuilt binary from Downloads | Apache Spark
> <http://spark.apache.org/downloads.html>:
> spark-1.5.2-bin-hadoop2.6.tgz
> <http://d3kbcqa49mib13.cloudfront.net/spark-1.5.2-bin-hadoop2.6.tgz>
>
> I don't have Hadoop installed on my local machine, but pyspark works fine.
>
> I pointed SPARK_HOME to that folder ./bin, ran a simple %pyspark, and got
> this error:
>
> java.net.ConnectException: Connection refused
>     at java.net.PlainSocketImpl.socketConnect(Native Method)
>     at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
>     at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
>     at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
>     at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
>     at java.net.Socket.connect(Socket.java:579)
>     at org.apache.thrift.transport.TSocket.open(TSocket.java:182)
>     at org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:51)
>     at org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:37)
>     at org.apache.commons.pool2.BasePooledObjectFactory.makeObject(BasePooledObjectFactory.java:60)
>     at org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:861)
>     at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:435)
>     at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:363)
>     at org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.getClient(RemoteInterpreterProcess.java:139)
>     at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getFormType(RemoteInterpreter.java:266)
>     at org.apache.zeppelin.interpreter.LazyOpenInterpreter.getFormType(LazyOpenInterpreter.java:104)
>     at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:197)
>     at org.apache.zeppelin.scheduler.Job.run(Job.java:170)
>     at org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:304)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:744)
>
> Do I need to have Apache Hadoop on my local machine? Again, spark-shell and
> pyspark work fine.