Hello again,

It seems that not everything is ignored. I traced the processes that get 
started, and some settings clearly do come from the configuration files in 
HADOOP_CONF_DIR; however, yarn-site.xml seems to be ignored. The 
ResourceManager is configured correctly there (yarn.resourcemanager.hostname 
and the yarn.resourcemanager.*.address properties point to the correct FQDN 
and ports), but it always connects to localhost:18032 instead of using the 
configured values.
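
In case it helps, this is roughly how I am checking it (the path is the one 
from my zeppelin-env.sh below; <pid> is a placeholder, and the process names 
are just what I see in ps, so take them with a grain of salt):

  # Confirm the ResourceManager properties are really present in yarn-site.xml:
  grep -A1 'yarn.resourcemanager' /opt/hadoop/etc/hadoop/yarn-site.xml

  # Confirm the running Zeppelin interpreter process inherits HADOOP_CONF_DIR
  # (replace <pid> with the PID of the interpreter process, e.g. the
  # RemoteInterpreterServer JVM started by interpreter.sh):
  ps aux | grep -i zeppelin
  tr '\0' '\n' < /proc/<pid>/environ | grep HADOOP_CONF_DIR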

What is going wrong here?
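
One thing I am considering to narrow it down (just a sketch - <rm-fqdn> and 
<rm-port> are placeholders, and I am assuming Spark's spark.hadoop.* 
passthrough is honored by the Zeppelin Spark interpreter) is forcing the RM 
address the same way the other Spark properties are passed:

  export ZEPPELIN_JAVA_OPTS="-Dspark.dynamicAllocation.enabled=true \
    -Dspark.shuffle.service.enabled=true \
    -Dspark.hadoop.yarn.resourcemanager.hostname=<rm-fqdn> \
    -Dspark.hadoop.yarn.resourcemanager.address=<rm-fqdn>:<rm-port>"

If that makes it pick up the right ResourceManager, it would at least confirm 
that only the yarn-site.xml values are being dropped.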

> On 29.12.2015 at 20:18, Jens Rabe <rabe-j...@t-online.de> wrote:
> 
> Hello,
> 
> I am trying to set up Zeppelin to use Spark on YARN. Spark on YARN itself 
> works; I can use spark-submit and spark-shell. So I set up Zeppelin, and my 
> zeppelin-env.sh contains the following:
> 
> #!/bin/bash
> 
> export JAVA_HOME=/usr/lib/jvm/java-7-oracle
> # Spark master url, e.g. spark://master_addr:7077. Leave empty for local mode.
> export MASTER=yarn-client
> # Additional JVM options, for example:
> # export ZEPPELIN_JAVA_OPTS="-Dspark.executor.memory=8g -Dspark.cores.max=16"
> export ZEPPELIN_JAVA_OPTS="-Dspark.dynamicAllocation.enabled=true -Dspark.shuffle.service.enabled=true"
> export ZEPPELIN_PORT=10080
> export HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop
> 
> I double-checked that /opt/hadoop/etc/hadoop really contains the correct 
> configuration files, and it does. zeppelin-env.sh is executable, too. But 
> when I start Zeppelin and try to submit something, it tries to connect to a 
> YARN RM at 127.0.0.1. It seems that it ignores HADOOP_CONF_DIR.
> 
> Is this a bug or am I missing something?
> 
> - Jens
