Hi,

To access Hive tables with Spark SQL, I copied hive-site.xml to the
Zeppelin configuration folder. This works fine on a non-HA cluster, but
when I access Hive tables on an HA cluster I get an UnknownHostException.
Do you have any idea what is wrong? Many thanks.
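For context, I assume the part of hive-site.xml that matters here is the
metastore URI; mine looks roughly like this (the hostname below is a
placeholder, not my real metastore host):

<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://metastore-host:9083</value>
  </property>
</configuration>

Here is the full error from the Zeppelin log: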

2016-06-01 11:15:51,000 INFO  [dispatcher-event-loop-2] storage.BlockManagerInfo (Logging.scala:logInfo(58)) - Added broadcast_2_piece0 in memory on localhost:57979 (size: 42.3 KB, free: 511.5 MB)
2016-06-01 11:15:51,003 INFO  [pool-2-thread-3] spark.SparkContext (Logging.scala:logInfo(58)) - Created broadcast 2 from take at NativeMethodAccessorImpl.java:-2
2016-06-01 11:15:51,922 ERROR [pool-2-thread-3] scheduler.Job (Job.java:run(182)) - Job failed
org.apache.zeppelin.interpreter.InterpreterException: java.lang.reflect.InvocationTargetException
        at org.apache.zeppelin.spark.ZeppelinContext.showDF(ZeppelinContext.java:301)
        at org.apache.zeppelin.spark.SparkSqlInterpreter.interpret(SparkSqlInterpreter.java:144)
        at org.apache.zeppelin.interpreter.ClassloaderInterpreter.interpret(ClassloaderInterpreter.java:57)
        at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93)
        at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:300)
        at org.apache.zeppelin.scheduler.Job.run(Job.java:169)
        at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:134)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.zeppelin.spark.ZeppelinContext.showDF(ZeppelinContext.java:297)
        ... 13 more
Caused by: java.net.UnknownHostException: Invalid host name: local host is: (unknown); destination host is: "ha-cluster":8020; java.net.UnknownHostException; For more details see: http://wiki.apache.org/hadoop/UnknownHost
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:743)
        at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:402)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1511)
        at org.apache.hadoop.ipc.Client.call(Client.java:1438)
        at org.apache.hadoop.ipc.Client.call(Client.java:1399)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
        at com.sun.proxy.$Proxy29.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:752)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy30.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1988)
        at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1118)
        at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)



The destination host in the error is "ha-cluster":8020, i.e. my logical
nameservice name, so it looks like the HDFS client is treating the
nameservice ID as a literal hostname instead of resolving it through the
HA failover settings. Hadoop conf for the HA cluster:

core-site.xml

<configuration>
  <property>
     <name>master_hostname</name>
     <value>MasterHostName</value>
  </property>
  <property>
     <name>fs.defaultFS</name>
     <value>hdfs://ha-cluster</value>
  </property>

  <property>
     <name>hadoop.home</name>
     <value>/usr/lib/hadoop-current</value>
  </property>

  <property>
    <name>fs.trash.interval</name>
    <value>60</value>
  </property>
  <property>
    <name>fs.trash.checkpoint.interval</name>
    <value>30</value>
  </property>

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/mnt/disk1/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>

  <property>
    <name>ha.zookeeper.quorum</name>
    <value>header-1:2181,header-2:2181,header-3:2181</value>
  </property>

  <property>
    <name>hadoop.proxyuser.hadoop.groups</name>
    <value>*</value>
  </property>

  <property>
    <name>hadoop.proxyuser.hadoop.hosts</name>
    <value>*</value>
  </property>

  <property>
    <name>hadoop.proxyuser.hue.groups</name>
    <value>*</value>
    <description>User proxy groups for hue.</description>
  </property>

  <property>
    <name>hadoop.proxyuser.hue.hosts</name>
    <value>*</value>
    <description>User proxy hosts for hue.</description>
  </property>
</configuration>


hdfs-site.xml


<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>ha-cluster</value>
  </property>

  <property>
    <name>dfs.ha.namenodes.cluster</name>
    <value>nn1,nn2</value>
  </property>

  <property>
    <name>dfs.namenode.rpc-address.cluster.nn1</name>
    <value>header-1:8020</value>
  </property>

  <property>
    <name>dfs.namenode.rpc-address.cluster.nn2</name>
    <value>header-2:8020</value>
  </property>

  <property>
    <name>dfs.namenode.http-address.cluster.nn1</name>
    <value>header-1:50070</value>
  </property>

  <property>
    <name>dfs.namenode.http-address.cluster.nn2</name>
    <value>header-2:50070</value>
  </property>

  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://header-1:8485;header-2:8485;header-3:8485/cluster</value>
  </property>

  <property>
    <name>dfs.client.failover.proxy.provider.cluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>

  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>shell(/bin/true)</value>
  </property>

  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>

  <property>
    <name>ha.zookeeper.quorum</name>
    <value>header-1:2181,header-2:2181,header-3:2181</value>
  </property>

  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/mnt/disk1/hdfs/journal</value>
  </property>

  <property>
    <name>dfs.replication</name>
    <value>2</value>
    <description>Default block replication. The actual number of replications
    can be specified when the file is created. The default is used if
    replication is not specified at create time.</description>
  </property>

  <property>
     <name>dfs.permissions.superusergroup</name>
     <value>hadoop</value>
     <description>The name of the group of super-users.</description>
  </property>

  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
  </property>

  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///mnt/disk1/hdfs/name</value>
  </property>
  <property>
    <name>dfs.namenode.edits.dir</name>
    <value>file:///mnt/disk1/hdfs/edits</value>
  </property>
  <property>
    <name>dfs.namenode.resource.du.reserved</name>
    <value>1073741824</value>
  </property>

  <property>
    <name>dfs.namenode.handler.count</name>
    <value>10</value>
  </property>
  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.http.address</name>
    <value>0.0.0.0:50070</value>
  </property>
  <property>
    <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
    <value>false</value>
  </property>

<!-- DATANODE -->
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///mnt/disk1</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir.perm</name>
    <value>770</value>
  </property>
  <property>
    <name>dfs.datanode.du.reserved</name>
    <value>1073741824</value>
  </property>
  <!-- SECONDARYNAMENODE -->
  <property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>file:///mnt/disk1/hdfs/namesecondary</value>
  </property>

</configuration>
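
One thing I am not sure about: dfs.nameservices is "ha-cluster", but the
per-nameservice keys above use the suffix ".cluster"
(dfs.ha.namenodes.cluster, dfs.namenode.rpc-address.cluster.nn1,
dfs.client.failover.proxy.provider.cluster, ...). If the suffix has to
match the nameservice ID, I assume the client-side keys would instead
need to look like this (a sketch under that assumption, values unchanged):

<property>
  <name>dfs.ha.namenodes.ha-cluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ha-cluster.nn1</name>
  <value>header-1:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ha-cluster.nn2</name>
  <value>header-2:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.ha-cluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

Also, so far I only copied hive-site.xml into Zeppelin's conf folder; I
wonder whether the Spark interpreter additionally needs core-site.xml and
hdfs-site.xml on its classpath (for example via HADOOP_CONF_DIR in
zeppelin-env.sh) before it can resolve the logical nameservice
"ha-cluster" at all.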



yarn-site.xml

<configuration>

  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>

  <property>
    <name>yarn.resourcemanager.ha.automatic-failover.zk-base-path</name>
    <value>/yarn-leader-election</value>
  </property>

  <property>
    <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>

  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>ha-cluster</value>
  </property>

  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>

  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>header-1</value>
  </property>

  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>header-2</value>
  </property>

  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>header-1:2181,header-2:2181,header-3:2181</value>
  </property>

  <property>
    <name>yarn.web-proxy.address</name>
    <value>${master_hostname}:20888</value>
  </property>

  <property>
    <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
    <value>header-1:8025</value>
  </property>

  <property>
    <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
    <value>header-2:8025</value>
  </property>

  <property>
    <name>yarn.resourcemanager.address.rm1</name>
    <value>header-1:8032</value>
  </property>

  <property>
    <name>yarn.resourcemanager.address.rm2</name>
    <value>header-2:8032</value>
  </property>

  <property>
    <name>yarn.resourcemanager.scheduler.address.rm1</name>
    <value>header-1:8030</value>
  </property>

  <property>
    <name>yarn.resourcemanager.scheduler.address.rm2</name>
    <value>header-2:8030</value>
  </property>

  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>

  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>false</value>
  </property>

  <property>
    <description>How long to keep aggregation logs before deleting them.
    -1 disables. Be careful: set this too small and you will spam the
    name node.</description>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>86400</value>
  </property>

<!-- hdfs dir -->
  <property>
    <description>Where to aggregate logs to.</description>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/tmp/logs</value>
  </property>

  <property>
    <name>yarn.dispatcher.exit-on-error</name>
    <value>true</value>
  </property>

  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>file:///mnt/disk1/yarn</value>
    <final>true</final>
  </property>

  <property>
    <description>Where to store container logs.</description>
    <name>yarn.nodemanager.log-dirs</name>
    <value>file:///mnt/disk1/log/hadoop-yarn/containers</value>
  </property>

  <property>
    <description>Classpath for typical applications.</description>
    <name>yarn.application.classpath</name>
    <value>
      $HADOOP_CONF_DIR,
      $HADOOP_COMMON_HOME/share/hadoop/common/*,
      $HADOOP_COMMON_HOME/share/hadoop/common/lib/*,
      $HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
      $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,
      $HADOOP_YARN_HOME/share/hadoop/yarn/*,
      $HADOOP_YARN_HOME/share/hadoop/yarn/lib/*,
      /opt/apps/extra-jars/*
    </value>
  </property>

  ...
</configuration>
