Yesha Vora created ZEPPELIN-1147:
------------------------------------

             Summary: Spark Interpreter fails with "HiveException: org.apache.thrift.transport.TTransportException"
                 Key: ZEPPELIN-1147
                 URL: https://issues.apache.org/jira/browse/ZEPPELIN-1147
             Project: Zeppelin
          Issue Type: Bug
    Affects Versions: 0.6.0
            Reporter: Yesha Vora
Scenario:
* Create a new notebook
* Run the two paragraphs below

{code}
%sh
hdfs dfs -copyFromLocal /etc/hadoop/conf/core-site.xml /tmp{code}

{code}
%spark
val file = sc.textFile("/tmp/core-site.xml")
val counts = file.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
counts.saveAsTextFile("/tmp/wordcount1"){code}

The notebook is available at: http://xxxxx:9995/#/notebook/2BPXE3AYN

{code:title=output from zeppelin notebook}
org.apache.thrift.transport.TTransportException
    at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
    at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
    at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
    at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
    at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_delegation_token(ThriftHiveMetastore.java:3715)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_delegation_token(ThriftHiveMetastore.java:3701)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getDelegationToken(HiveMetaStoreClient.java:1796)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156)
    at com.sun.proxy.$Proxy29.getDelegationToken(Unknown Source)
    at org.apache.hadoop.hive.ql.metadata.Hive.getDelegationToken(Hive.java:3150)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.yarn.YarnSparkHadoopUtil$$anonfun$obtainTokenForHiveMetastoreInner$4.apply(YarnSparkHadoopUtil.scala:251)
    at org.apache.spark.deploy.yarn.YarnSparkHadoopUtil$$anonfun$obtainTokenForHiveMetastoreInner$4.apply(YarnSparkHadoopUtil.scala:249)
    at org.apache.spark.deploy.yarn.YarnSparkHadoopUtil$$anon$1.run(YarnSparkHadoopUtil.scala:340)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
    at org.apache.spark.deploy.yarn.YarnSparkHadoopUtil.doAsRealUser(YarnSparkHadoopUtil.scala:339)
    at org.apache.spark.deploy.yarn.YarnSparkHadoopUtil.obtainTokenForHiveMetastoreInner(YarnSparkHadoopUtil.scala:249)
    at org.apache.spark.deploy.yarn.YarnSparkHadoopUtil.obtainTokenForHiveMetastore(YarnSparkHadoopUtil.scala:204)
    at org.apache.spark.deploy.yarn.YarnSparkHadoopUtil.obtainTokenForHiveMetastore(YarnSparkHadoopUtil.scala:151)
    at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:348)
    at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:733)
    at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:143)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)
    at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:530)
    at org.apache.zeppelin.spark.SparkInterpreter.createSparkContext(SparkInterpreter.java:338)
    at org.apache.zeppelin.spark.SparkInterpreter.getSparkContext(SparkInterpreter.java:122)
    at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:513)
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:69)
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:341)
    at org.apache.zeppelin.scheduler.Job.run(Job.java:176)
    at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
{code}

The same Spark word count example works fine when run directly through spark-shell; it fails only via Zeppelin.
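For comparison, this is the same word count run directly from spark-shell on the same cluster, which succeeds. The yarn-client master flag is an assumption inferred from the YarnClientSchedulerBackend frame in the trace above, and the output path is changed to /tmp/wordcount2 so it does not collide with the notebook's output directory:

{code:title=same example via spark-shell (works)}
spark-shell --master yarn-client

scala> val file = sc.textFile("/tmp/core-site.xml")
scala> val counts = file.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
scala> counts.saveAsTextFile("/tmp/wordcount2")
{code}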
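The trace shows the failure happens before the SparkContext finishes constructing, while org.apache.spark.deploy.yarn.Client fetches a Hive metastore delegation token (YarnSparkHadoopUtil.obtainTokenForHiveMetastore). If the notebook does not need Hive, a possible workaround, sketched under the assumption that this is Spark 1.6.x (where the spark.yarn.security.tokens.hive.enabled flag controls that fetch) and that the Zeppelin Spark interpreter forwards spark.* properties to spark-submit, is to disable the token fetch:

{code:title=possible workaround (sketch, assumes Spark 1.6.x)}
# Option 1: in the Zeppelin Interpreter settings for spark, add the property
#   spark.yarn.security.tokens.hive.enabled = false

# Option 2: pass it through conf/zeppelin-env.sh
export SPARK_SUBMIT_OPTIONS="--conf spark.yarn.security.tokens.hive.enabled=false"
{code}

This only sidesteps the token fetch; it does not explain why the metastore connection works from spark-shell but not from Zeppelin (e.g. whether both pick up the same hive-site.xml).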