If I put all the jar files from my local Hive installation at the front of the Spark classpath, a different error was reported, as follows (a sketch of how the jars were prepended appears after the trace):


14/10/28 18:29:40 ERROR transport.TSaslTransport: SASL negotiation failure
javax.security.sasl.SaslException: PLAIN auth failed: null
        at org.apache.hadoop.security.SaslPlainServer.evaluateResponse(SaslPlainServer.java:108)
        at org.apache.thrift.transport.TSaslTransport$SaslParticipant.evaluateChallengeOrResponse(TSaslTransport.java:528)
        at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:272)
        at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
        at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:190)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:722)

14/10/28 18:29:40 ERROR server.TThreadPoolServer: Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: PLAIN auth failed: null
        at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:190)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:722)
Caused by: org.apache.thrift.transport.TTransportException: PLAIN auth failed: null
        at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:221)
        at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:305)
        at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
        at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
        ... 4 more
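For reference, the jars were prepended roughly along these lines. This is only a sketch: the Hive path is a placeholder, and how these settings order the classpath can depend on the Spark version.

    # conf/spark-defaults.conf (placeholder path, not the actual layout)
    spark.driver.extraClassPath    /opt/local-hive/lib/*
    spark.executor.extraClassPath  /opt/local-hive/lib/*

    # then restart the Thrift server, e.g.:
    sbin/start-thriftserver.sh --master spark://<spark-master-host>:7077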




From: Cheng Lian <lian.cs....@gmail.com>
Date: Tuesday, October 28, 2014 at 2:50 AM
To: Du Li <l...@yahoo-inc.com.invalid>
Cc: "user@spark.apache.org" <user@spark.apache.org>
Subject: Re: [SPARK SQL] kerberos error when creating database from beeline/ThriftServer2

Which versions of Spark and Hadoop are you using? Could you please provide the full stack trace of the exception?

On Tue, Oct 28, 2014 at 5:48 AM, Du Li <l...@yahoo-inc.com.invalid> wrote:
Hi,

I was trying to set up Spark SQL on a private cluster. I configured a hive-site.xml under spark/conf that uses a local metastore, with the warehouse directory and default FS name set to HDFS on one of my corporate clusters. Then I started the Spark master, a worker, and the Thrift server. However, when creating a database from beeline, I got the following error:

org.apache.hive.service.cli.HiveSQLException: org.apache.spark.sql.execution.QueryExecutionException: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Got exception: java.io.IOException Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "<spark-master-host>"; destination host is: "<HDFS-namenode:port>"; )
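For concreteness, the hive-site.xml described above was presumably along these lines. This is a sketch only: the JDBC URL, namenode address, and warehouse path are placeholders rather than the actual cluster values, and a local metastore here means hive.metastore.uris is simply left unset.

    <configuration>
      <!-- sketch: embedded/local metastore backed by Derby (placeholder) -->
      <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:derby:;databaseName=metastore_db;create=true</value>
      </property>
      <!-- warehouse and default FS pointed at the corporate HDFS (placeholders) -->
      <property>
        <name>hive.metastore.warehouse.dir</name>
        <value>hdfs://namenode.example.com:8020/user/hive/warehouse</value>
      </property>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://namenode.example.com:8020</value>
      </property>
    </configuration>

The database itself was then created from beeline with an ordinary CREATE DATABASE statement against the Thrift server.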

The error occurred when Spark was trying to create an HDFS directory under the warehouse in order to create the database. All processes (Spark master, worker, Thrift server, beeline) were run as a user with the right access permissions. My Spark classpath has /home/y/conf/hadoop at the front. I was able to read and write files under the same directory from the hadoop fs command line and also from spark-shell without any issue.
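The checks that did succeed as that same user were of this kind (hostnames and paths are again placeholders):

    # hadoop fs reads and writes under the same warehouse directory worked:
    hadoop fs -ls hdfs://namenode.example.com:8020/user/hive/warehouse
    hadoop fs -put /tmp/probe.txt hdfs://namenode.example.com:8020/user/hive/warehouse/probe.txt

    # as did reading the same path back from spark-shell:
    #   sc.textFile("hdfs://namenode.example.com:8020/user/hive/warehouse/probe.txt").count()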

Any hints on the right way to configure this would be appreciated.

Thanks,
Du
