[
https://issues.apache.org/jira/browse/IMPALA-14490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18046071#comment-18046071
]
ASF subversion and git services commented on IMPALA-14490:
----------------------------------------------------------
Commit d4644b0381979c7be8b2c069d95a76b35894cabd in impala's branch
refs/heads/master from stiga-huang
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=d4644b038 ]
IMPALA-13607, IMPALA-14490: Deflake
test_cache_valid_on_nontransactional_table_ddls
When catalogd runs with --start_hms_server=true, it serves all the HMS
endpoints so that any HMS-compatible client can use catalogd as a
metadata cache. For DDL/DML requests, catalogd simply delegates them to
the HMS APIs without reloading the related metadata in its cache. For
read requests like get_table_req, catalogd answers from its cache, which
could be stale.
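As an illustration (not part of the commit), this read path can be exercised
with a plain HMS Thrift client pointed at catalogd instead of HMS. The host,
port, and db/table names below are assumptions; in particular, 5899 is only
assumed to be the port catalogd's embedded HMS server listens on.
{code}
# Hedged sketch: an HMS-compatible client reading table metadata through
# catalogd. Host, port and db/table names are illustrative assumptions.
from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol

from impala_thrift_gen.hive_metastore import ThriftHiveMetastore
from impala_thrift_gen.hive_metastore.ttypes import GetTableRequest

socket = TSocket.TSocket("localhost", 5899)  # assumed catalogd HMS port
transport = TTransport.TBufferedTransport(socket)
client = ThriftHiveMetastore.Client(TBinaryProtocol.TBinaryProtocol(transport))
transport.open()
try:
    # get_table_req is answered from catalogd's metadata cache, which can be
    # stale if a DDL was only delegated to HMS and never reloaded.
    resp = client.get_table_req(
        GetTableRequest(dbName="some_db", tblName="some_tbl"))
    print(resp.table.tableName)
finally:
    transport.close()
{code}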
There is a flag, invalidate_hms_cache_on_ddls, that decides whether to
explicitly invalidate the table when catalogd delegates a DDL/DML on the
table to HMS. test_cache_valid_on_nontransactional_table_ddls verifies
that when invalidate_hms_cache_on_ddls=false, the cache is not updated
and therefore still holds the stale metadata.
However, invoking the HMS APIs generates HMS events. Even when
invalidate_hms_cache_on_ddls=false, catalogd can still update its cache
when processing the corresponding HMS events. The test fails when its
check runs after catalogd has applied the event (so the cache is already
up-to-date); if the check runs before that, the test passes.
This patch deflakes the test by explicitly disabling event processing,
as sketched below. It also updates the description of
invalidate_hms_cache_on_ddls to mention the impact of event processing.
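A minimal sketch of the resulting test setup (class and test names are
hypothetical, and disabling event processing is assumed to be done via
--hms_event_polling_interval_s=0; the patch's exact flags may differ):
{code}
# Hedged sketch of a custom-cluster test pinning catalogd's behavior:
# serve the HMS endpoints, keep the cache stale on DDLs, and disable HMS
# event processing so events cannot refresh the cache mid-test.
from tests.common.custom_cluster_test_suite import CustomClusterTestSuite


class TestStaleHmsCache(CustomClusterTestSuite):

  @CustomClusterTestSuite.with_args(
      catalogd_args="--start_hms_server=true "
                    "--invalidate_hms_cache_on_ddls=false "
                    "--hms_event_polling_interval_s=0")
  def test_cache_stays_stale_on_ddls(self):
    # With event polling disabled, a DDL delegated to HMS leaves the cached
    # table untouched, so a later get_table_req still returns stale metadata.
    pass
{code}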
Tests:
- Ran the test locally 100 times.
Change-Id: Ib1ffc11a793899a0dbdb009bf2ac311117f2318e
Reviewed-on: http://gerrit.cloudera.org:8080/23792
Reviewed-by: Impala Public Jenkins <[email protected]>
Tested-by: Impala Public Jenkins <[email protected]>
> test_cache_valid_on_nontransactional_table_ddls() fails with
> NoSuchObjectException
> ----------------------------------------------------------------------------------
>
> Key: IMPALA-14490
> URL: https://issues.apache.org/jira/browse/IMPALA-14490
> Project: IMPALA
> Issue Type: Bug
> Affects Versions: Impala 5.0.0
> Reporter: Laszlo Gaal
> Assignee: Quanlong Huang
> Priority: Blocker
> Attachments:
> catalogd.impala-ec2-redhat86-m6i-4xlarge-ondemand-0002.vpc.cloudera.com.jenkins.log.INFO.20251214-022818.2108185
>
>
> custom_cluster.test_metastore_service.TestMetastoreService.test_cache_valid_on_nontransactional_table_ddls()
> failed with
> {code}
> impala_thrift_gen.hive_metastore.ttypes.NoSuchObjectException:
> NoSuchObjectException(message='hive.test_cache_valid_on_nontransactional_table_ddls_dbhlrot.test_cache_valid_on_nontransactional_table_ddls_tblwqwpi_new
> table not found')
> {code}
> during the s3-arm-datacache test.
> Stack trace:{code}
> custom_cluster/test_metastore_service.py:466: in
> test_cache_valid_on_nontransactional_table_ddls
> self.__test_non_transactional_table_cache_helper(db_name, tbl_name, False)
> custom_cluster/test_metastore_service.py:679: in
> __test_non_transactional_table_cache_helper
> cur_get_table_response = catalog_hms_client.get_table_req(
> /data/jenkins/workspace/impala-asf-master-core-s3-arm-data-cache/repos/Impala/shell/impala_thrift_gen/hive_metastore/ThriftHiveMetastore.py:4373:
> in get_table_req
> return self.recv_get_table_req()
> /data/jenkins/workspace/impala-asf-master-core-s3-arm-data-cache/repos/Impala/shell/impala_thrift_gen/hive_metastore/ThriftHiveMetastore.py:4399:
> in recv_get_table_req
> raise result.o2
> E impala_thrift_gen.hive_metastore.ttypes.NoSuchObjectException:
> NoSuchObjectException(message='hive.test_cache_valid_on_nontransactional_table_ddls_dbhlrot.test_cache_valid_on_nontransactional_table_ddls_tblwqwpi_new
> table not found'){code}