[ 
https://issues.apache.org/jira/browse/HIVE-27267?focusedWorklogId=860710&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-860710
 ]

ASF GitHub Bot logged work on HIVE-27267:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 05/May/23 09:06
            Start Date: 05/May/23 09:06
    Worklog Time Spent: 10m 
      Work Description: ngsg opened a new pull request, #4296:
URL: https://github.com/apache/hive/pull/4296

   
   ### What changes were proposed in this pull request?
   Gather the correct indices of the bucketing expressions from the big-table side ReduceSinkOperator's partition columns.
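
   Not part of the patch, but one hedged way to observe the effect is to inspect the optimized plan of the repro query from the JIRA description below and check which expressions the ReduceSink operators partition on:

   ```sql
   -- Sketch (assumes the repro tables target_table/source_table from the JIRA
   -- description below): with BucketMapJoin conversion enabled, the extended
   -- plan prints each ReduceSink's "Map-reduce partition columns", where the
   -- mismatch described in this PR shows up.
   set hive.convert.join.bucket.mapjoin.tez=true;
   explain extended
   select * from target_table
   inner join (select distinct date_col, 'pipeline' string_col, decimal_col
               from source_table
               where coalesce(decimal_col, '') = '50000000000000000005905545593') s
   on s.date_col = target_table.date_col
      and s.string_col = target_table.string_col
      and s.decimal_col = target_table.decimal_col;
   ```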
   
   ### Why are the changes needed?
   As HIVE-27267 reports, BucketMapJoin returns a wrong result.
   The current BucketMapJoin conversion algorithm sets an incorrect expression as the partition column, which leads to a mismatch between the bucket id computed by the MapJoinOperator and that of the rows coming from the small-table ReduceSink. As a consequence, BucketMapJoin returns only a subset of the correct result, because it does not have the complete small table for the corresponding bucket.
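
   For intuition: a row's bucket id is a hash of the bucketing column modulo the number of buckets (Murmur3 under `bucketing_version=2`). A rough sketch of that computation, using the `murmur_hash` UDF available in Hive 3+ (Hive's internal sign-bit masking may make the exact ids differ):

   ```sql
   -- Approximate bucket ids for the 7-bucket repro tables from the JIRA
   -- description below. If the conversion hashes a different expression on
   -- one side, the two sides disagree on the bucket id, and the MapJoin
   -- probes a hash table that never received the matching small-table rows.
   select decimal_col,
          pmod(murmur_hash(decimal_col), 7) as approx_bucket_id
   from source_table;
   ```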
   
   ### Does this PR introduce _any_ user-facing change?
   No
   
   ### How was this patch tested?
   I added a qfile test that compares BucketMapJoin and MapJoin.
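
   The qfile itself is not inlined in this message. A minimal sketch of the comparison idea, reusing the repro tables from the JIRA description below (the actual test name and layout in the patch may differ):

   ```sql
   -- Run the same join twice and compare the results: once converted to a
   -- BucketMapJoin, once as a plain MapJoin. Both runs must return the same
   -- two rows.
   set hive.convert.join.bucket.mapjoin.tez=true;
   select * from target_table
   inner join (select distinct date_col, 'pipeline' string_col, decimal_col
               from source_table
               where coalesce(decimal_col, '') = '50000000000000000005905545593') s
   on s.date_col = target_table.date_col
      and s.string_col = target_table.string_col
      and s.decimal_col = target_table.decimal_col;

   set hive.convert.join.bucket.mapjoin.tez=false;
   select * from target_table
   inner join (select distinct date_col, 'pipeline' string_col, decimal_col
               from source_table
               where coalesce(decimal_col, '') = '50000000000000000005905545593') s
   on s.date_col = target_table.date_col
      and s.string_col = target_table.string_col
      and s.decimal_col = target_table.decimal_col;
   ```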
   




Issue Time Tracking
-------------------

            Worklog Id:     (was: 860710)
    Remaining Estimate: 0h
            Time Spent: 10m

> Incorrect results when doing bucket map join on decimal bucketed column with 
> subquery
> -------------------------------------------------------------------------------------
>
>                 Key: HIVE-27267
>                 URL: https://issues.apache.org/jira/browse/HIVE-27267
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Sourabh Badhya
>            Assignee: Seonggon Namgung
>            Priority: Major
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> The following queries, when run on a Hive cluster, produce no results.
> Repro queries:
> {code:java}
> set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
> set hive.support.concurrency=true;
> set hive.convert.join.bucket.mapjoin.tez=true;
>
> drop table if exists test_external_source;
> create external table test_external_source (date_col date, string_col string, decimal_col decimal(38,0))
>   stored as orc tblproperties ('external.table.purge'='true');
> insert into table test_external_source values
>   ('2022-08-30', 'pipeline', '50000000000000000005905545593'),
>   ('2022-08-16', 'pipeline', '50000000000000000005905545593'),
>   ('2022-09-01', 'pipeline', '50000000000000000006008686831'),
>   ('2022-08-30', 'pipeline', '50000000000000000005992620837'),
>   ('2022-09-01', 'pipeline', '50000000000000000005992620837'),
>   ('2022-09-01', 'pipeline', '50000000000000000005992621067'),
>   ('2022-08-30', 'pipeline', '50000000000000000005992621067');
>
> drop table if exists test_external_target;
> create external table test_external_target (date_col date, string_col string, decimal_col decimal(38,0))
>   stored as orc tblproperties ('external.table.purge'='true');
> insert into table test_external_target values
>   ('2017-05-17', 'pipeline', '50000000000000000000441610525'),
>   ('2018-12-20', 'pipeline', '50000000000000000001048981030'),
>   ('2020-06-30', 'pipeline', '50000000000000000002332575516'),
>   ('2021-08-16', 'pipeline', '50000000000000000003897973989'),
>   ('2017-06-06', 'pipeline', '50000000000000000000449148729'),
>   ('2017-09-08', 'pipeline', '50000000000000000000525378314'),
>   ('2022-08-30', 'pipeline', '50000000000000000005905545593'),
>   ('2022-08-16', 'pipeline', '50000000000000000005905545593'),
>   ('2018-05-03', 'pipeline', '50000000000000000000750826355'),
>   ('2020-01-10', 'pipeline', '50000000000000000001816579677'),
>   ('2021-11-01', 'pipeline', '50000000000000000004269423714'),
>   ('2017-11-07', 'pipeline', '50000000000000000000585901787'),
>   ('2019-10-15', 'pipeline', '50000000000000000001598843430'),
>   ('2020-04-01', 'pipeline', '50000000000000000002035795461'),
>   ('2020-02-24', 'pipeline', '50000000000000000001932600185'),
>   ('2020-04-27', 'pipeline', '50000000000000000002108160849'),
>   ('2016-07-05', 'pipeline', '50000000000000000000054405114'),
>   ('2020-06-02', 'pipeline', '50000000000000000002234387967'),
>   ('2020-08-21', 'pipeline', '50000000000000000002529168758'),
>   ('2021-02-17', 'pipeline', '50000000000000000003158511687');
>
> drop table if exists target_table;
> drop table if exists source_table;
> create table target_table (date_col date, string_col string, decimal_col decimal(38,0))
>   clustered by (decimal_col) into 7 buckets stored as orc
>   tblproperties ('bucketing_version'='2', 'transactional'='true', 'transactional_properties'='default');
> create table source_table (date_col date, string_col string, decimal_col decimal(38,0))
>   clustered by (decimal_col) into 7 buckets stored as orc
>   tblproperties ('bucketing_version'='2', 'transactional'='true', 'transactional_properties'='default');
>
> insert into table target_table select * from test_external_target;
> insert into table source_table select * from test_external_source;
> {code}
> The query under investigation:
> {code:java}
> select * from target_table
> inner join (select distinct date_col, 'pipeline' string_col, decimal_col
>             from source_table
>             where coalesce(decimal_col, '') = '50000000000000000005905545593') s
> on s.date_col = target_table.date_col
>    and s.string_col = target_table.string_col
>    and s.decimal_col = target_table.decimal_col;
> {code}
> Expected result of the query (2 records):
> {code:java}
> +------------------------+--------------------------+--------------------------------+-------------+---------------+--------------------------------+
> | target_table.date_col  | target_table.string_col  |    target_table.decimal_col    | s.date_col  | s.string_col  |         s.decimal_col          |
> +------------------------+--------------------------+--------------------------------+-------------+---------------+--------------------------------+
> | 2022-08-16             | pipeline                 | 50000000000000000005905545593  | 2022-08-16  | pipeline      | 50000000000000000005905545593  |
> | 2022-08-30             | pipeline                 | 50000000000000000005905545593  | 2022-08-30  | pipeline      | 50000000000000000005905545593  |
> +------------------------+--------------------------+--------------------------------+-------------+---------------+--------------------------------+
> {code}
> Actual result of the query (no records):
> {code:java}
> +------------------------+--------------------------+---------------------------+-------------+---------------+----------------+
> | target_table.date_col  | target_table.string_col  | target_table.decimal_col  | s.date_col  | s.string_col  | s.decimal_col  |
> +------------------------+--------------------------+---------------------------+-------------+---------------+----------------+
> +------------------------+--------------------------+---------------------------+-------------+---------------+----------------+
> {code}
> The workaround that fetches the correct result here is to set the following config to false:
> {code:java}
> set hive.convert.join.bucket.mapjoin.tez=false;{code}
> Notes from the investigation:
> 1. The batch containing the 2 matching rows is forwarded correctly to the map join operator. However, during the join comparison, the hash table is empty.
> 2. Although each HashTableDummyOperator loads its hash table with records, the map join operator does not take into account all the hash tables from the various HashTableDummyOperator instances (multiple map tasks are initiated by the bucket map join); it uses only the hash table from one of the instances. In this case, the selected instance had an empty hash table, hence no records were matched in the join operator.
> 3. If the table is unbucketed or has a single bucket, the results are correct: only one map task is spawned, and it loads all the records into the hash table. The workaround (setting *hive.convert.join.bucket.mapjoin.tez* to {*}false{*}) has the same effect, since a single map task then loads the records into the hash table. A sketch of this check follows these notes.
> 4. HashTableDummyOperator is created in the optimizer and is associated with the plan, hence we suspect an issue in the optimizer code. Ideally, all hash tables from all instances of HashTableDummyOperator must be used by the map join operator.
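>
> A hedged sketch of the single-bucket check from note 3 (the `_1b` table names are made up for illustration; schema and data are the same as above):
> {code:java}
> -- Rebuild the bucketed tables with a single bucket. Per note 3, the join
> -- then returns the expected 2 records even with
> -- hive.convert.join.bucket.mapjoin.tez=true, because a single map task
> -- loads all the records into the hash table.
> create table target_table_1b (date_col date, string_col string, decimal_col decimal(38,0))
>   clustered by (decimal_col) into 1 buckets stored as orc
>   tblproperties ('bucketing_version'='2', 'transactional'='true', 'transactional_properties'='default');
> create table source_table_1b (date_col date, string_col string, decimal_col decimal(38,0))
>   clustered by (decimal_col) into 1 buckets stored as orc
>   tblproperties ('bucketing_version'='2', 'transactional'='true', 'transactional_properties'='default');
> insert into table target_table_1b select * from test_external_target;
> insert into table source_table_1b select * from test_external_source;
>
> set hive.convert.join.bucket.mapjoin.tez=true;
> select * from target_table_1b
> inner join (select distinct date_col, 'pipeline' string_col, decimal_col
>             from source_table_1b
>             where coalesce(decimal_col, '') = '50000000000000000005905545593') s
> on s.date_col = target_table_1b.date_col
>    and s.string_col = target_table_1b.string_col
>    and s.decimal_col = target_table_1b.decimal_col;
> {code}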



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
