Thanks for sharing the problem.

Based on your log file, it looks like your Spark master address is not
configured correctly.

Can you confirm that you have also set the 'master' property in the Spark
section of the Interpreter menu in the GUI?

If not, you can open the Spark Master UI in your web browser and look at
the first line, "Spark Master at spark://....". That value should go into
the 'master' property in the Spark section of the Interpreter menu.
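For example, if the Master UI's first line reads "Spark Master at
spark://192.168.58.10:7077" (using the address from your worker config
below purely as an illustration), the interpreter property would be:

```
master = spark://192.168.58.10:7077
```

The value must be the full spark:// URL, not just the hostname or IP.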

Hope this helps

Best,
moon

On Tue, Nov 24, 2015 at 3:07 AM Timur Shenkao <t...@timshenkao.su> wrote:

> Hi!
>
> A new error has appeared: TTransportException.
> I use CentOS 6.7 + Spark 1.5.2 Standalone + Cloudera Hadoop 5.4.8 on the
> same cluster. I can't use Mesos or Spark on YARN.
> I built Zeppelin 0.6.0 so:
> mvn clean package -DskipTests -Pspark-1.5 -Phadoop-2.6 -Pyarn -Ppyspark
> -Pbuild-distr
>
> I constantly get errors like
> ERROR [2015-11-23 18:14:33,404] ({pool-1-thread-4} Job.java[run]:183) -
> Job failed
> org.apache.zeppelin.interpreter.InterpreterException:
> org.apache.thrift.transport.TTransportException
>     at
> org.apache.zeppelin.interpreter.remote.RemoteInterpreter.interpret(RemoteInterpreter.java:237)
>
>
> or
>
> ERROR [2015-11-23 18:07:26,535] ({Thread-11}
> RemoteInterpreterEventPoller.java[run]:72) - Can't get
> RemoteInterpreterEvent
> org.apache.thrift.transport.TTransportException
>
> I changed several parameters in zeppelin-env.sh and in Spark configs.
> Whatever I do, these errors keep appearing. At the same time, when I use a
> local Zeppelin with Hadoop in pseudo-distributed mode + Spark Standalone
> (Master and workers on the same machine), everything works.
>
> What configuration (memory, network, CPU cores) is required for Zeppelin
> to work?
>
> I run H2O on this cluster, and it works.
> Spark Master config:
> SPARK_MASTER_WEBUI_PORT=18080
> HADOOP_CONF_DIR=/etc/hadoop/conf
> SPARK_HOME=/usr/spark
>
> Spark Worker config:
>    export HADOOP_CONF_DIR=/etc/hadoop/conf
>    export MASTER=spark://192.168.58.10:7077
>    export SPARK_HOME=/usr/spark
>
>    SPARK_WORKER_INSTANCES=1
>    SPARK_WORKER_CORES=4
>    SPARK_WORKER_MEMORY=32G
>
>
> I have attached the Spark configs, plus the Zeppelin configs & logs for
> local mode, and the Zeppelin configs & logs from when I defined the Spark
> Master's IP address explicitly.
> Thank you.
>
