Hi Lalit Sharma, I added the jar to the SPARK_CLASSPATH env variable, but the Spark Thrift Server cannot start.


errors:
###
16/05/31 02:29:12 ERROR SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: Found both spark.executor.extraClassPath and SPARK_CLASSPATH. Use only the former.
        at org.apache.spark.SparkConf$$anonfun$validateSettings$6$$anonfun$apply$8.apply(SparkConf.scala:473)
        at org.apache.spark.SparkConf$$anonfun$validateSettings$6$$anonfun$apply$8.apply(SparkConf.scala:471)
        at scala.collection.immutable.List.foreach(List.scala:318)
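The exception says Spark refuses to start because both SPARK_CLASSPATH and spark.executor.extraClassPath are set, and only the latter is supported. A minimal sketch of one way to fix it, assuming a standard Spark 1.5 layout and using the jar path mentioned later in this thread:

```shell
# Stop exporting SPARK_CLASSPATH (also remove it from conf/spark-env.sh),
# then pass the jar through the supported settings when starting the server.
unset SPARK_CLASSPATH

./sbin/start-thriftserver.sh \
  --jars /home/hadoop/dmp-udf-0.0.1-SNAPSHOT.jar \
  --conf spark.executor.extraClassPath=/home/hadoop/dmp-udf-0.0.1-SNAPSHOT.jar \
  --conf spark.driver.extraClassPath=/home/hadoop/dmp-udf-0.0.1-SNAPSHOT.jar
```

`--jars` ships the jar to executors, while the two `extraClassPath` settings prepend it to the driver and executor classpaths; using them together avoids the SPARK_CLASSPATH conflict.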


I found that the UDF fails not because the jar cannot be found: when I create a permanent function, it says the function already exists. And in the metastore, I found a record:
8       com.dmp.hive.udfs.utils.URLEncode       1464623031      1       urlencode       1               USER


I think it may be a privilege problem, but I cannot be sure where the problem is.
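Since the metastore already has a row for urlencode, one thing to try (a sketch only; the function and jar names are taken from earlier in this thread) is to inspect the stale registration in beeline, drop it, and recreate it:

```sql
-- See how the metastore resolves the existing function (names are case-insensitive).
DESCRIBE FUNCTION urlencode;

-- Drop the stale permanent function, then recreate it pointing at the jar on HDFS.
DROP FUNCTION IF EXISTS urlencode;
CREATE FUNCTION urlencode AS 'com.dmp.hive.udfs.utils.URLEncode'
  USING JAR 'hdfs:///warehouse/dmpv3.db/datafile/libjars/dmp-udf-0.0.1-SNAPSHOT.jar';
```

One caveat: Spark SQL 1.5 may not resolve permanent Hive functions the way the Hive CLI does, so after dropping the stale entry it is also worth retrying ADD JAR plus CREATE TEMPORARY FUNCTION inside the same beeline session.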
##########
logs:
org.apache.hive.service.cli.HiveSQLException: org.apache.spark.sql.AnalysisException: undefined function URLEncode; line 1 pos 17
        at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.runInternal(SparkExecuteStatementOperation.scala:259)
        at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:171)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:182)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)


------------------ Original Message ------------------
From: "lalit sharma" <lalitkishor...@gmail.com>
Date: Tuesday, May 31, 2016, 2:15 PM
To: <251922...@qq.com>
Cc: "user" <user@spark.apache.org>
Subject: Re: can not use udf in hivethriftserver2



Can you try adding the jar to the SPARK_CLASSPATH env variable?


On Mon, May 30, 2016 at 9:55 PM, <251922...@qq.com> wrote:
Hi all, I have a problem when using hiveserver2 and beeline.
In CLI mode the UDF works well, but with hiveserver2 and beeline the UDF does not work.
My Spark version is 1.5.1.
I tried two methods. First:
######
add jar /home/hadoop/dmp-udf-0.0.1-SNAPSHOT.jar;
create temporary function URLEncode as "com.dmp.hive.udfs.utils.URLEncode" ;


errors:
Error: org.apache.spark.sql.AnalysisException: undefined function URLEncode; line 1 pos 207 (state=,code=0)




Second:
create temporary function URLEncode as 'com.dmp.hive.udfs.utils.URLEncode' using jar 'hdfs:///warehouse/dmpv3.db/datafile/libjars/dmp-udf-0.0.1-SNAPSHOT.jar';


The error is the same:
Error: org.apache.spark.sql.AnalysisException: undefined function URLEncode; line 1 pos 207 (state=,code=0)


###


Can anyone give some suggestions? Or how can a UDF be used in hiveserver2/beeline mode?
