[
https://issues.apache.org/jira/browse/IMPALA-13898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18007684#comment-18007684
]
ASF subversion and git services commented on IMPALA-13898:
----------------------------------------------------------
Commit 78a27c56fec29f5f27c24e5b5cd32b454f6dba07 in impala's branch
refs/heads/master from Joe McDonnell
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=78a27c56f ]
IMPALA-13898: Incorporate partition information into tuple cache keys
Currently, the tuple cache keys do not include partition
information in either the planner key or the fragment instance
key. However, partition information is important for correctness.
First, there are settings defined on the table and partition that
can affect the results. For example, when processing text files,
the separator, escape character, etc. are specified at the table
level. This affects the rows produced from a given file. There
are other such settings stored at the partition level (e.g.
the JSON binary format).
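To illustrate the first issue, the same file bytes can produce different rows depending on table-level format settings. A minimal sketch with hypothetical values (not Impala's actual text parser):

```python
# Hypothetical illustration: one raw text line yields different rows
# depending on the table's field separator, so a cache key based only
# on the file contents would conflate the two results.
raw_line = "1,2|3"

def parse_row(line, separator):
    """Split one text-file line into column values using the
    table-level separator (a simplified stand-in for a real parser)."""
    return line.split(separator)

row_with_comma = parse_row(raw_line, ",")  # ['1', '2|3']
row_with_pipe = parse_row(raw_line, "|")   # ['1,2', '3']
assert row_with_comma != row_with_pipe
```

The two parses disagree even though the file bytes are identical, which is why the format settings must feed into the key.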
Second, it is possible for two partitions to point at the same
filesystem location. For example,
scale_db.num_partitions_1234_blocks_per_partition_1
is a table that has all partitions pointing to the same
location. In that case, the cache can't tell the partitions
apart based on the files alone. This is an exotic configuration.
Incorporating an identifier of the partition (e.g. the partition
keys/values) allows the cache to tell the difference.
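The second issue can be sketched as a hash collision: a key covering only the files cannot distinguish two partitions sharing a location, while folding in the partition key/values separates them. A minimal, hypothetical sketch (not Impala's actual key format):

```python
import hashlib

def cache_key(file_paths, partition_values=None):
    """Build a hex digest over the scanned files, optionally folding in
    an identifier for the partition (its key/value pairs)."""
    h = hashlib.sha256()
    for path in sorted(file_paths):
        h.update(path.encode())
    if partition_values is not None:
        for k, v in sorted(partition_values.items()):
            h.update(f"{k}={v}".encode())
    return h.hexdigest()

shared_files = ["/warehouse/scale_db/data/file0"]
# Files alone: two partitions sharing the location collide on one key.
assert cache_key(shared_files) == cache_key(shared_files)
# With partition values included, j=1 and j=2 get distinct keys.
assert cache_key(shared_files, {"j": 1}) != cache_key(shared_files, {"j": 2})
```

The file path and partition names are placeholders; only the shape of the argument matters.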
To fix this, we incorporate partition information into the
key. At planning time, when incorporating the scan range information,
we also incorporate information about the associated partitions.
This moves the code to HdfsScanNode and changes it to iterate over
the partitions, hashing both the partition information and the scan
ranges. At runtime, the TupleCacheNode looks up the partition
associated with a scan node and hashes the additional information
on the HdfsPartitionDescriptor.
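The shape of the planner-side fix might be sketched as follows, iterating over partitions and hashing each partition's metadata alongside its scan ranges. All names and fields here are hypothetical simplifications of HdfsScanNode/HdfsPartitionDescriptor, not the real classes:

```python
import hashlib

def planner_key(partitions):
    """Hash partition information together with the scan ranges of each
    partition, mimicking the described per-partition iteration."""
    h = hashlib.sha256()
    for part in sorted(partitions, key=lambda p: p["values"]):
        # Partition identity and format settings (e.g. text separator).
        h.update(repr(part["values"]).encode())
        h.update(repr(part["format"]).encode())
        # The scan ranges (file, offset, length) in this partition.
        for rng in sorted(part["scan_ranges"]):
            h.update(repr(rng).encode())
    return h.hexdigest()

# Two partitions with identical files and format but different values
# (like j=1 vs j=2 over one shared location) now hash differently.
parts_a = [{"values": ("j=1",), "format": {"separator": ","},
            "scan_ranges": [("/data/file0", 0, 1024)]}]
parts_b = [{"values": ("j=2",), "format": {"separator": ","},
            "scan_ranges": [("/data/file0", 0, 1024)]}]
assert planner_key(parts_a) != planner_key(parts_b)
```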
This includes some test-only changes to make it possible to run the
TestBinaryType::test_json_binary_format test case with tuple caching.
ImpalaTestSuite::_get_table_location() (used by clone_table()) now
detects a fully-qualified table name and extracts the database from it.
It only uses the vector to calculate the database if the table is
not fully qualified. This allows a test to clone a table without
needing to manipulate its vector to match the right database. This
also changes _get_table_location() so that it does not switch into the
database. This required reworking test_scanners_fuzz.py to use absolute
paths for queries. It turns out that some tests in test_scanners_fuzz.py
were running in the wrong database and therefore against uncorrupted
tables. After this is corrected, some tests can crash Impala. This
xfails those tests until this can be fixed (tracked by IMPALA-14219).
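The fully-qualified-name detection described above could look roughly like this. This is a hypothetical simplification of ImpalaTestSuite._get_table_location()'s logic, not the actual code:

```python
def get_table_database(table_name, vector_database):
    """Return the database to resolve the table in: the prefix of a
    fully-qualified 'db.table' name, otherwise the database computed
    from the test vector."""
    if "." in table_name:
        db, _, _ = table_name.partition(".")
        return db
    return vector_database

# A fully qualified name carries its own database...
assert get_table_database(
    "scale_db.num_partitions_1234_blocks_per_partition_1",
    "functional") == "scale_db"
# ...while a bare table name falls back to the vector's database.
assert get_table_database("alltypes", "functional") == "functional"
```

This lets a test clone a table from another database without editing its test vector.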
Testing:
- Added a frontend test in TupleCacheTest for a table with
multiple partitions pointed at the same place.
- Added custom cluster tests covering both issues
Change-Id: I3a7109fcf8a30bf915bb566f7d642f8037793a8c
Reviewed-on: http://gerrit.cloudera.org:8080/23074
Reviewed-by: Yida Wu <[email protected]>
Reviewed-by: Michael Smith <[email protected]>
Tested-by: Joe McDonnell <[email protected]>
> Tuple cache produces incorrect result when querying
> scale_db.num_partitions_1234_blocks_per_partition_1
> -------------------------------------------------------------------------------------------------------
>
> Key: IMPALA-13898
> URL: https://issues.apache.org/jira/browse/IMPALA-13898
> Project: IMPALA
> Issue Type: Bug
> Components: Frontend
> Affects Versions: Impala 5.0.0
> Reporter: Joe McDonnell
> Assignee: Joe McDonnell
> Priority: Critical
>
> Tuple caching generates the same key for these two queries:
> {noformat}
> select * from scale_db.num_partitions_1234_blocks_per_partition_1 where j=1
> select * from scale_db.num_partitions_1234_blocks_per_partition_1 where j=1
> or j=2;{noformat}
> This is a scenario from catalog_service/test_large_num_partitions.py. It is a
> correctness issue.
> scale_db.num_partitions_1234_blocks_per_partition_1 is an exotic table where
> all of the partitions point to the same location / file. It also only has
> partition columns, so the contents of the file don't matter. This means that
> j=1 and j=2 both point to the same file. The partition information is not
> included in the key, so the two are indistinguishable. We'll need to expand
> what we put in the cache key to handle this scenario.
--
This message was sent by Atlassian Jira
(v8.20.10#820010)