[ 
https://issues.apache.org/jira/browse/HIVE-25417?focusedWorklogId=634286&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-634286
 ]

ASF GitHub Bot logged work on HIVE-25417:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 05/Aug/21 11:34
            Start Date: 05/Aug/21 11:34
    Worklog Time Spent: 10m 
      Work Description: kgyrtkirk commented on a change in pull request #2556:
URL: https://github.com/apache/hive/pull/2556#discussion_r682520895



##########
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/StatObjectConverter.java
##########
@@ -525,6 +531,16 @@ public static MPartitionColumnStatistics convertToMPartitionColumnStatistics(
     return mColStats;
   }
 
+  private static byte[] getBitVector(byte[] bytes) {

Review comment:
       you could move this logic into `MTableColumnStatistics#getBitVector`
   

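A minimal sketch of what the reviewer suggests here: fold the "HL" sentinel decoding into the model class's getter, so `StatObjectConverter` needs no separate helper. The class shape below is assumed from the diff context, not the actual Hive source.

```java
// Hypothetical sketch: decode the "HL" placeholder inside the model getter.
public class MTableColumnStatistics {
  private byte[] bitVector;

  public byte[] getBitVector() {
    // "HL" (two bytes) is the placeholder persisted instead of null for
    // Postgres bytea columns (HIVE-17836); translate it back to null here.
    if (bitVector != null && bitVector.length == 2
        && bitVector[0] == 'H' && bitVector[1] == 'L') {
      return null;
    }
    return bitVector;
  }

  public void setBitVector(byte[] bitVector) {
    this.bitVector = bitVector;
  }
}
```

With the decoding centralized here, every read path (including the converter under review) gets the null translation for free.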
##########
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
##########
@@ -9538,7 +9538,7 @@ private void writeMPartitionColumnStatistics(Table table, Partition partition,
     if (oldStats != null) {
       StatObjectConverter.setFieldsIntoOldStats(mStatsObj, oldStats);
     } else {
-      if (sqlGenerator.getDbProduct().isPOSTGRES() && mStatsObj.getBitVector() == null) {

Review comment:
       you could also move this `setBitVector` / defaults logic into the `MTableColumnStatistics`

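The suggestion above, sketched out: move the Postgres null default from `ObjectStore.writeMPartitionColumnStatistics` into the model setter itself. This is an illustrative shape only; the `isPOSTGRES()` backend check from the real code is dropped here for brevity, and a faithful version would still need to know the backend.

```java
// Hypothetical sketch: apply the Postgres null-bit-vector default in the
// setter, so callers like ObjectStore need no special case.
public class MPartitionColumnStatistics {
  // Placeholder stored instead of null for Postgres bytea columns (HIVE-17836).
  private static final byte[] NULL_PLACEHOLDER = new byte[] {'H', 'L'};

  private byte[] bitVector;

  public void setBitVector(byte[] bitVector) {
    // Never let a raw null reach the persisted field.
    this.bitVector = (bitVector == null) ? NULL_PLACEHOLDER.clone() : bitVector;
  }

  public byte[] getBitVector() {
    // Decode the placeholder back to null on the way out.
    if (bitVector != null && bitVector.length == 2
        && bitVector[0] == 'H' && bitVector[1] == 'L') {
      return null;
    }
    return bitVector;
  }
}
```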
##########
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/model/MPartitionColumnStatistics.java
##########
@@ -281,11 +281,20 @@ public void setDecimalHighValue(String decimalHighValue) {
   }
 
   public byte[] getBitVector() {
+    // workaround for DN bug in persisting nulls in pg bytea column
+    // instead set empty bit vector with header.
+    // https://issues.apache.org/jira/browse/HIVE-17836
+    if (bitVector != null && bitVector.length == 2 && bitVector[0] == 'H' && bitVector[1] == 'L') {
+      return null;
+    }
     return bitVector;
   }
 
   public void setBitVector(byte[] bitVector) {
-    this.bitVector = bitVector;
+    // workaround for DN bug in persisting nulls in pg bytea column

Review comment:
       does the DN serialization happen through the getters or through the fields?
   If it reads the fields, what happens if we create an instance of this class
   and never call `setBitVector(null)`? Will that be okay?
   
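The reviewer's concern can be illustrated with made-up class names: an ORM that reads fields directly (as DataNucleus bytecode enhancement typically does) sees the raw field value, not what a getter-side workaround reports.

```java
import java.lang.reflect.Field;

// Illustration only: if setBitVector(null) is never called, the field really
// is null, so a field-reading persister would still try to write null to a
// Postgres bytea column regardless of any logic in the getter or setter.
public class FieldVsGetterDemo {
  static class Stats {
    private byte[] bitVector; // never set: stays null

    public byte[] getBitVector() {
      // getter-side workaround: decode the "HL" placeholder to null
      if (bitVector != null && bitVector.length == 2
          && bitVector[0] == 'H' && bitVector[1] == 'L') {
        return null;
      }
      return bitVector;
    }
  }

  public static void main(String[] args) throws Exception {
    Stats s = new Stats();
    Field f = Stats.class.getDeclaredField("bitVector");
    f.setAccessible(true);
    // The raw field is null even though no setter was ever bypassed on
    // purpose; this is exactly the case the setter-based workaround misses.
    System.out.println("field value: " + f.get(s));
  }
}
```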




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.



Issue Time Tracking
-------------------

    Worklog Id:     (was: 634286)
    Time Spent: 40m  (was: 0.5h)

> Null bit vector is not handled while getting the stats for Postgres backend
> ---------------------------------------------------------------------------
>
>                 Key: HIVE-25417
>                 URL: https://issues.apache.org/jira/browse/HIVE-25417
>             Project: Hive
>          Issue Type: Sub-task
>          Components: HiveServer2
>            Reporter: mahesh kumar behera
>            Assignee: mahesh kumar behera
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> While adding stats with a null bit vector, a special string "HL" is stored 
> because Postgres does not support null values for bytea columns. But while 
> getting the stats, the conversion back to null is not done. This causes a 
> failure during deserialisation of the bit vector field when the existing 
> stats are used for a merge.
>  
> {code:java}
> The input stream is not a HyperLogLog stream.  7276-1 instead of 727676 or 7077
>     at org.apache.hadoop.hive.common.ndv.hll.HyperLogLogUtils.checkMagicString(HyperLogLogUtils.java:349)
>     at org.apache.hadoop.hive.common.ndv.hll.HyperLogLogUtils.deserializeHLL(HyperLogLogUtils.java:139)
>     at org.apache.hadoop.hive.common.ndv.hll.HyperLogLogUtils.deserializeHLL(HyperLogLogUtils.java:213)
>     at org.apache.hadoop.hive.common.ndv.hll.HyperLogLogUtils.deserializeHLL(HyperLogLogUtils.java:227)
>     at org.apache.hadoop.hive.common.ndv.NumDistinctValueEstimatorFactory.getNumDistinctValueEstimator(NumDistinctValueEstimatorFactory.java:53)
>     at org.apache.hadoop.hive.metastore.columnstats.cache.LongColumnStatsDataInspector.updateNdvEstimator(LongColumnStatsDataInspector.java:124)
>     at org.apache.hadoop.hive.metastore.columnstats.cache.LongColumnStatsDataInspector.getNdvEstimator(LongColumnStatsDataInspector.java:107)
>     at org.apache.hadoop.hive.metastore.columnstats.merge.LongColumnStatsMerger.merge(LongColumnStatsMerger.java:36)
>     at org.apache.hadoop.hive.metastore.utils.MetaStoreUtils.mergeColStats(MetaStoreUtils.java:1174)
>     at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.updateTableColumnStatsWithMerge(HiveMetaStore.java:8934)
>     at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.set_aggr_stats_for(HiveMetaStore.java:8800)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:160)
>     at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:121)
>     at com.sun.proxy.$Proxy35.set_aggr_stats_for(Unknown Source)
>     at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$set_aggr_stats_for.getResult(ThriftHiveMetastore.java:20489)
>     at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$set_aggr_stats_for.getResult(ThriftHiveMetastore.java:20473)
>     at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
>     at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
>     at org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:643)
>     at org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:638)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
>     at org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:638)
>     at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:748)
> {code}
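The failure mode in that trace can be reproduced in miniature: the two-byte "HL" placeholder is not converted back to null on the read path, so it reaches the HyperLogLog deserializer, whose header check rejects it. The snippet below uses only `DataInputStream` to mimic a header read; it is not the actual `HyperLogLogUtils` API.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

public class SentinelRepro {
  public static void main(String[] args) {
    byte[] stored = new byte[] {'H', 'L'}; // the placeholder, not a real HLL stream
    try (DataInputStream in =
             new DataInputStream(new ByteArrayInputStream(stored))) {
      // A real HLL stream begins with a longer magic header; two bytes can
      // never satisfy a checkMagicString-style validation.
      byte[] magic = new byte[4];
      in.readFully(magic); // throws EOFException: stream too short
      System.out.println("unexpectedly read magic");
    } catch (IOException e) {
      System.out.println("rejected: " + e.getClass().getSimpleName());
    }
  }
}
```

This is why the fix converts "HL" back to null before the stats ever reach the estimator factory during a merge.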



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
