[ https://issues.apache.org/jira/browse/HIVE-25417?focusedWorklogId=636277&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-636277 ]
ASF GitHub Bot logged work on HIVE-25417:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 10/Aug/21 06:33
            Start Date: 10/Aug/21 06:33
    Worklog Time Spent: 10m

Work Description: maheshk114 merged pull request #2556:
URL: https://github.com/apache/hive/pull/2556

Issue Time Tracking
-------------------

    Worklog Id: (was: 636277)
    Time Spent: 1h 20m  (was: 1h 10m)

> Null bit vector is not handled while getting the stats for Postgres backend
> ---------------------------------------------------------------------------
>
>                 Key: HIVE-25417
>                 URL: https://issues.apache.org/jira/browse/HIVE-25417
>             Project: Hive
>          Issue Type: Sub-task
>          Components: HiveServer2
>            Reporter: mahesh kumar behera
>            Assignee: mahesh kumar behera
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> While adding stats with a null bit vector, the special string "HL" is stored in its place, because Postgres does not support null values for byte columns. When the stats are read back, however, this sentinel is not converted back to null. As a result, deserialisation of the bit vector field fails whenever the existing stats are used for a merge.
>
{code:java}
The input stream is not a HyperLogLog stream.
7276-1 instead of 727676 or 7077
	at org.apache.hadoop.hive.common.ndv.hll.HyperLogLogUtils.checkMagicString(HyperLogLogUtils.java:349)
	at org.apache.hadoop.hive.common.ndv.hll.HyperLogLogUtils.deserializeHLL(HyperLogLogUtils.java:139)
	at org.apache.hadoop.hive.common.ndv.hll.HyperLogLogUtils.deserializeHLL(HyperLogLogUtils.java:213)
	at org.apache.hadoop.hive.common.ndv.hll.HyperLogLogUtils.deserializeHLL(HyperLogLogUtils.java:227)
	at org.apache.hadoop.hive.common.ndv.NumDistinctValueEstimatorFactory.getNumDistinctValueEstimator(NumDistinctValueEstimatorFactory.java:53)
	at org.apache.hadoop.hive.metastore.columnstats.cache.LongColumnStatsDataInspector.updateNdvEstimator(LongColumnStatsDataInspector.java:124)
	at org.apache.hadoop.hive.metastore.columnstats.cache.LongColumnStatsDataInspector.getNdvEstimator(LongColumnStatsDataInspector.java:107)
	at org.apache.hadoop.hive.metastore.columnstats.merge.LongColumnStatsMerger.merge(LongColumnStatsMerger.java:36)
	at org.apache.hadoop.hive.metastore.utils.MetaStoreUtils.mergeColStats(MetaStoreUtils.java:1174)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.updateTableColumnStatsWithMerge(HiveMetaStore.java:8934)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.set_aggr_stats_for(HiveMetaStore.java:8800)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:160)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:121)
	at com.sun.proxy.$Proxy35.set_aggr_stats_for(Unknown Source)
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$set_aggr_stats_for.getResult(ThriftHiveMetastore.java:20489)
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$set_aggr_stats_for.getResult(ThriftHiveMetastore.java:20473)
	at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
	at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
	at org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:643)
	at org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:638)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
	at org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:638)
	at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
{code}

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
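The read-side fix the issue describes can be sketched as follows. This is a minimal illustration only: the class and method names below are hypothetical, not the actual HIVE-25417 patch; it assumes, per the description, that the metastore writes the literal string "HL" in place of a null bit vector for Postgres backends, so the value read back must be mapped to null before it reaches the HyperLogLog deserializer (otherwise checkMagicString fails as in the trace above).

{code:java}
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Hypothetical helper for illustration; not the actual Hive code.
public class BitVectorSanitizer {
  // Sentinel the metastore stores in place of a null bit vector,
  // per the issue description ("HL").
  private static final byte[] NULL_SENTINEL = "HL".getBytes(StandardCharsets.UTF_8);

  // Convert the stored sentinel back to null before the bytes are handed
  // to the HyperLogLog deserializer; any other value passes through as-is.
  public static byte[] sanitize(byte[] stored) {
    if (stored == null || Arrays.equals(stored, NULL_SENTINEL)) {
      return null; // treat the sentinel exactly like a missing bit vector
    }
    return stored;
  }
}
{code}

With such a conversion in place, a merge that encounters a null bit vector would skip NDV-estimator deserialization instead of attempting to parse "HL" as a HyperLogLog stream.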