Hi Wojtek,

I haven't checked myself, but I suspect your hive-site.xml contains an
`<xi:include>` element. Does the problem still happen if you put all
parameters directly in hive-site.xml? If that resolves the issue, is there
a particular reason you need XInclude? The parser rejects XInclude whenever
a resource is loaded as restricted:
https://github.com/apache/hadoop/blob/57100bba1bfd6963294181a2521396dc30c295f7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java#L3288-L3291
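For illustration, a hive-site.xml that uses XInclude typically looks like
the sketch below (the included file name here is hypothetical):

  <?xml version="1.0"?>
  <configuration xmlns:xi="http://www.w3.org/2001/XInclude">
    <!-- Pulled in via XInclude; a restricted parser refuses to expand this -->
    <xi:include href="hive-overrides.xml"/>
  </configuration>

Copying the included properties inline avoids the restriction entirely:

  <configuration>
    <property>
      <name>hive.metastore.warehouse.dir</name>
      <value>/user/hive/warehouse</value>
    </property>
    <!-- ...the remaining properties from the included file... -->
  </configuration>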

Regards,
Okumin

On Fri, Sep 22, 2023 at 3:52 AM Wojtek Meler <wme...@wp.pl> wrote:

> I've noticed strange behaviour in Hive. When you run a query against a
> partitioned table like this:
>
> select * from mytable
> where log_date = date_add('2023-09-10',1)
> limit 3
>
> (mytable is partitioned by the log_date string column), Hive tries to
> evaluate date_add inside the metastore and throws an exception when it
> sees an XInclude in the configuration file.
>
> java.lang.RuntimeException: Error parsing resource file:/etc/hive/
> conf.dist/hive-site.xml: XInclude is not supported for restricted
> resources
>         at
> org.apache.hadoop.conf.Configuration$Parser.handleInclude(Configuration.java:3258)
> ~[hadoop-common-3.3.4.jar:?]
>         at
> org.apache.hadoop.conf.Configuration$Parser.handleStartElement(Configuration.java:3202)
> ~[hadoop-common-3.3.4.jar:?]
>         at
> org.apache.hadoop.conf.Configuration$Parser.parseNext(Configuration.java:3398)
> ~[hadoop-common-3.3.4.jar:?]
>         at
> org.apache.hadoop.conf.Configuration$Parser.parse(Configuration.java:3182)
> ~[hadoop-common-3.3.4.jar:?]
>         at
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3075)
> ~[hadoop-common-3.3.4.jar:?]
>         at
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:3041)
> ~[hadoop-common-3.3.4.jar:?]
>         at
> org.apache.hadoop.conf.Configuration.loadProps(Configuration.java:2914)
> ~[hadoop-common-3.3.4.jar:?]
>         at
> org.apache.hadoop.conf.Configuration.addResourceObject(Configuration.java:1034)
> ~[hadoop-common-3.3.4.jar:?]
>         at
> org.apache.hadoop.conf.Configuration.addResource(Configuration.java:939)
> ~[hadoop-common-3.3.4.jar:?]
>         at
> org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:6353)
> ~[hive-common-4.0.0-alpha-2.jar:4.0.0-alpha-2]
>         at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:6302)
> ~[hive-common-4.0.0-alpha-2.jar:4.0.0-alpha-2]
>         at
> org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator.getBestAvailableConf(ExprNodeGenericFuncEvaluator.java:145)
> ~[hive-exec-4.0.0-alpha-2.jar:4.0.0-alpha-2]
>         at
> org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator.initialize(ExprNodeGenericFuncEvaluator.java:181)
> ~[hive-exec-4.0.0-alpha-2.jar:4.0.0-alpha-2]
>         at
> org.apache.hadoop.hive.ql.optimizer.ppr.PartExprEvalUtils.prepareExpr(PartExprEvalUtils.java:118)
> ~[hive-exec-4.0.0-alpha-2.jar:4.0.0-alpha-2]
>         at
> org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prunePartitionNames(PartitionPruner.java:556)
> ~[hive-exec-4.0.0-alpha-2.jar:4.0.0-alpha-2]
>         at
> org.apache.hadoop.hive.ql.optimizer.ppr.PartitionExpressionForMetastore.filterPartitionsByExpr(PartitionExpressionForMetastore.java:96)
> ~[hive-exec-4.0.0-alpha-2.jar:4.0.0-alpha-2]
>         at
> org.apache.hadoop.hive.metastore.ObjectStore.getPartitionNamesPrunedByExprNoTxn(ObjectStore.java:4105)
> ~[hive-exec-4.0.0-alpha-2.jar:4.0.0-alpha-2]
>         at
> org.apache.hadoop.hive.metastore.ObjectStore.access$1700(ObjectStore.java:285)
> ~[hive-exec-4.0.0-alpha-2.jar:4.0.0-alpha-2]
>         at
> org.apache.hadoop.hive.metastore.ObjectStore$11.getJdoResult(ObjectStore.java:4066)
> ~[hive-exec-4.0.0-alpha-2.jar:4.0.0-alpha-2]
>         at
> org.apache.hadoop.hive.metastore.ObjectStore$11.getJdoResult(ObjectStore.java:4036)
> ~[hive-exec-4.0.0-alpha-2.jar:4.0.0-alpha-2]
>         at
> org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:4362)
> ~[hive-exec-4.0.0-alpha-2.jar:4.0.0-alpha-2]
>         at
> org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByExprInternal(ObjectStore.java:4072)
> ~[hive-exec-4.0.0-alpha-2.jar:4.0.0-alpha-2]
>         at
> org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByExpr(ObjectStore.java:4016)
> ~[hive-exec-4.0.0-alpha-2.jar:4.0.0-alpha-2]
>         at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method) ~[?:?]
>         at
> jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> ~[?:?]
>         at
> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> ~[?:?]
>         at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
>         at
> org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97)
> ~[hive-exec-4.0.0-alpha-2.jar:4.0.0-alpha-2]
>         at com.sun.proxy.$Proxy27.getPartitionsByExpr(Unknown Source)
> ~[?:?]
>         at
> org.apache.hadoop.hive.metastore.HMSHandler.get_partitions_spec_by_expr(HMSHandler.java:7366)
> ~[hive-exec-4.0.0-alpha-2.jar:4.0.0-alpha-2]
>         at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method) ~[?:?]
>         at
> jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> ~[?:?]
>         at
> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> ~[?:?]
>         at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
>         at
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:146)
> ~[hive-exec-4.0.0-alpha-2.jar:4.0.0-alpha-2]
>         at
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
> ~[hive-exec-4.0.0-alpha-2.jar:4.0.0-alpha-2]
>         at com.sun.proxy.$Proxy28.get_partitions_spec_by_expr(Unknown
> Source) ~[?:?]
>         at
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_partitions_spec_by_expr.getResult(ThriftHiveMetastore.java:21420)
> ~[hive-exec-4.0.0-alpha-2.jar:4.0.0-alpha-2]
>         at
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_partitions_spec_by_expr.getResult(ThriftHiveMetastore.java:21399)
> ~[hive-exec-4.0.0-alpha-2.jar:4.0.0-alpha-2]
>         at
> org.apache.thrift.ProcessFunction.process(ProcessFunction.java:38)
> ~[hive-exec-4.0.0-alpha-2.jar:4.0.0-alpha-2]
>         at
> org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:38)
> ~[hive-exec-4.0.0-alpha-2.jar:4.0.0-alpha-2]
>         at
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:646)
> ~[hive-exec-4.0.0-alpha-2.jar:4.0.0-alpha-2]
>         at
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:641)
> ~[hive-exec-4.0.0-alpha-2.jar:4.0.0-alpha-2]
>         at java.security.AccessController.doPrivileged(Native Method)
> ~[?:?]
>         at javax.security.auth.Subject.doAs(Subject.java:423) ~[?:?]
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
> ~[hadoop-common-3.3.4.jar:?]
>         at
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:641)
> ~[hive-exec-4.0.0-alpha-2.jar:4.0.0-alpha-2]
>         at
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:250)
> ~[hive-exec-4.0.0-alpha-2.jar:4.0.0-alpha-2]
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> ~[?:?]
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> ~[?:?]
>         at java.lang.Thread.run(Thread.java:829) ~[?:?]
>
>
> Any ideas how to deal with this? In my opinion, the metastore is not a
> good place to evaluate UDFs. I also have no idea why the config is loaded
> in restricted mode instead of being passed from the instance loaded at
> server startup.
>
> Regards,
> Wojtek
>
