[ https://issues.apache.org/jira/browse/IMPALA-14280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18012248#comment-18012248 ]

ASF subversion and git services commented on IMPALA-14280:
----------------------------------------------------------

Commit 9d6997b7c00512295401f815630e8f02876ecb74 in impala's branch 
refs/heads/master from stiga-huang
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=9d6997b7c ]

IMPALA-14280: (Addendum) Waits for updating active catalogd address

Some tests for catalogd HA failover have a lightweight verifier function
that finishes before the coordinator notices the catalogd HA failover,
e.g. when the verifier function runs a statement that doesn't trigger
any catalogd RPCs.

If the test finishes in such a state, the coordinator will still use the
stale active catalogd address during cleanup, i.e. when dropping
unique_database, and fail quickly since that catalogd is now passive.
Retrying the statement immediately usually won't help since the
coordinator hasn't updated the active catalogd address yet.

Note that we also retry the verifier function immediately when it fails
because the coordinator is talking to the stale catalogd address. This
works because the previous active catalogd is no longer running, so the
catalogd RPCs fail and get retried. The retry interval is 3s (configured
by catalog_client_rpc_retry_interval_ms) and we retry at least twice
(customized by catalog_client_connection_num_retries in the tests). This
window is usually enough for the coordinator to update the active
catalogd address, but depending on it is fragile.

This patch adds a wait before the verifier function to make sure the
coordinator has updated the active catalogd address. This also makes
sure the cleanup of unique_database won't fail due to a stale active
catalogd address.
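A minimal sketch of such a wait, assuming a hypothetical get_active_address callable that stands in for however the test reads the coordinator's current view of the active catalogd (the real test uses Impala's own metric/webpage helpers):

```python
import time


def wait_for_active_catalogd_address(get_active_address, expected_address,
                                     timeout_s=30.0, interval_s=0.5):
    """Poll until the coordinator reports `expected_address` as the active
    catalogd, or raise TimeoutError after `timeout_s`.

    `get_active_address` is a hypothetical zero-argument callable; it is
    a stand-in for reading the coordinator's active catalogd address.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_active_address() == expected_address:
            return
        time.sleep(interval_s)
    raise TimeoutError(
        "coordinator did not report %s as active within %ss"
        % (expected_address, timeout_s))
```

Running the verifier only after this wait returns removes the dependence on the implicit retry window described above.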

Tests:
 - Ran test_catalogd_ha.py

Change-Id: I45e4a20170fdcce8282f1762f81a290689777aed
Reviewed-on: http://gerrit.cloudera.org:8080/23252
Reviewed-by: Riza Suminto <[email protected]>
Reviewed-by: Wenzhe Zhou <[email protected]>
Tested-by: Quanlong Huang <[email protected]>


> TestCatalogdHA.test_warmed_up_metadata_failover_catchup fails with status 
> code assertion errors
> -----------------------------------------------------------------------------------------------
>
>                 Key: IMPALA-14280
>                 URL: https://issues.apache.org/jira/browse/IMPALA-14280
>             Project: IMPALA
>          Issue Type: Bug
>            Reporter: Surya Hebbar
>            Assignee: Quanlong Huang
>            Priority: Major
>             Fix For: Impala 5.0.0
>
>
> Error Message
> {code:java}
> assert 404 == 200  +  where 404 = <Response [404]>.status_code  +  and   200 
> = <lookup 'status_codes'>.ok  +    where <lookup 'status_codes'> = 
> requests.codes
> {code}
> Stacktrace
> {code:java}
> custom_cluster/test_catalogd_ha.py:643: in 
> test_warmed_up_metadata_failover_catchup
>     db, self._refresh_table, self._verify_refresh)
> custom_cluster/test_catalogd_ha.py:737: in _test_metadata_after_failover
>     (active_catalogd, standby_catalogd) = self.__get_catalogds()
> custom_cluster/test_catalogd_ha.py:110: in __get_catalogds
>     assert page.status_code == requests.codes.ok
> E   assert 404 == 200
> E    +  where 404 = <Response [404]>.status_code
> E    +  and   200 = <lookup 'status_codes'>.ok
> E    +    where <lookup 'status_codes'> = requests.codes
> {code}
> Standard Output
> {code:java}
> Redirecting stdout to 
> /data/jenkins/workspace/impala-cdw-master-staging-core-admissiond/repos/Impala/logs/custom_cluster_tests/catalogd.impala-ec2-rhel92-m6i-4xlarge-ondemand-1cc7.vpc.cloudera.com.jenkins.log.INFO.20250730-032306.3703284
> Redirecting stdout to 
> /data/jenkins/workspace/impala-cdw-master-staging-core-admissiond/repos/Impala/logs/custom_cluster_tests/catalogd.impala-ec2-rhel92-m6i-4xlarge-ondemand-1cc7.vpc.cloudera.com.jenkins.log.INFO.20250730-032337.3703557
> Standard Error
> -- 2025-07-30 03:22:52,492 INFO     MainThread: Starting cluster with 
> command: 
> /data/jenkins/workspace/impala-cdw-master-staging-core-admissiond/repos/Impala/bin/start-impala-cluster.py
>  '--state_store_args=--statestore_update_frequency_ms=50 
> --statestore_priority_update_frequency_ms=50 
> --statestore_heartbeat_frequency_ms=50' --cluster_size=3 --num_coordinators=3 
> --log_dir=/data/jenkins/workspace/impala-cdw-master-staging-core-admissiond/repos/Impala/logs/custom_cluster_tests
>  --log_level=1 '--impalad_args=--use_local_catalog=true ' 
> '--state_store_args=--use_subscriber_id_as_catalogd_priority=true ' 
> '--catalogd_args=--catalog_topic_mode=minimal 
> --catalogd_ha_reset_metadata_on_failover=false 
> --debug_actions=catalogd_event_processing_delay:SLEEP@1000 
> --enable_reload_events=true 
> --warmup_tables_config_file=file:///data/jenkins/workspace/impala-cdw-master-staging-core-admissiond/repos/Impala/testdata/data/warmup_test_config.txt
>  ' --enable_catalogd_ha --impalad_args=--default_query_options=
> 03:22:53 MainThread: Found 0 impalad/0 statestored/0 catalogd process(es)
> 03:22:53 MainThread: Starting State Store logging to 
> /data/jenkins/workspace/impala-cdw-master-staging-core-admissiond/repos/Impala/logs/custom_cluster_tests/statestored.INFO
> 03:22:53 MainThread: Starting Catalog Service logging to 
> /data/jenkins/workspace/impala-cdw-master-staging-core-admissiond/repos/Impala/logs/custom_cluster_tests/catalogd.INFO
> 03:22:53 MainThread: Starting Catalog Service logging to 
> /data/jenkins/workspace/impala-cdw-master-staging-core-admissiond/repos/Impala/logs/custom_cluster_tests/catalogd_node1.INFO
> 03:22:53 MainThread: Starting Impala Daemon logging to 
> /data/jenkins/workspace/impala-cdw-master-staging-core-admissiond/repos/Impala/logs/custom_cluster_tests/impalad.INFO
> 03:22:53 MainThread: Starting Impala Daemon logging to 
> /data/jenkins/workspace/impala-cdw-master-staging-core-admissiond/repos/Impala/logs/custom_cluster_tests/impalad_node1.INFO
> 03:22:53 MainThread: Starting Impala Daemon logging to 
> /data/jenkins/workspace/impala-cdw-master-staging-core-admissiond/repos/Impala/logs/custom_cluster_tests/impalad_node2.INFO
> 03:22:55 MainThread: Found 3 impalad/1 statestored/2 catalogd process(es)
> 03:22:55 MainThread: Waiting for Impalad webserver port 25000
> 03:22:55 MainThread: Waiting for Impalad webserver port 25000
> 03:22:56 MainThread: Waiting for Impalad webserver port 25000
> 03:22:56 MainThread: Waiting for Impalad webserver port 25000
> 03:22:56 MainThread: Waiting for Impalad webserver port 25001
> 03:22:56 MainThread: Waiting for Impalad webserver port 25002
> 03:22:58 MainThread: Waiting for coordinator client services - hs2 port: 
> 21050 hs2-http port: 28000 beeswax port: 21000
> 03:23:00 MainThread: Waiting for coordinator client services - hs2 port: 
> 21051 hs2-http port: 28001 beeswax port: 21001
> 03:23:02 MainThread: Waiting for coordinator client services - hs2 port: 
> 21052 hs2-http port: 28002 beeswax port: 21002
> 03:23:02 MainThread: Getting num_known_live_backends from 
> impala-ec2-rhel92-m6i-4xlarge-ondemand-1cc7.vpc.cloudera.com:25000
> 03:23:02 MainThread: num_known_live_backends has reached value: 3
> 03:23:02 MainThread: Getting num_known_live_backends from 
> impala-ec2-rhel92-m6i-4xlarge-ondemand-1cc7.vpc.cloudera.com:25001
> 03:23:02 MainThread: num_known_live_backends has reached value: 3
> 03:23:02 MainThread: Getting num_known_live_backends from 
> impala-ec2-rhel92-m6i-4xlarge-ondemand-1cc7.vpc.cloudera.com:25002
> 03:23:02 MainThread: num_known_live_backends has reached value: 3
> 03:23:02 MainThread: Total wait: 7.64s
> 03:23:02 MainThread: Impala Cluster Running with 3 nodes (3 coordinators, 3 
> executors).
> -- 2025-07-30 03:23:02,956 DEBUG    MainThread: Found 3 impalad/1 
> statestored/2 catalogd process(es)
> -- 2025-07-30 03:23:02,956 INFO     MainThread: Getting metric: 
> statestore.live-backends from 
> impala-ec2-rhel92-m6i-4xlarge-ondemand-1cc7.vpc.cloudera.com:25010
> -- 2025-07-30 03:23:02,959 INFO     MainThread: Metric 
> 'statestore.live-backends' has reached desired value: 5. total_wait: 0s
> -- 2025-07-30 03:23:02,959 DEBUG    MainThread: Getting 
> num_known_live_backends from 
> impala-ec2-rhel92-m6i-4xlarge-ondemand-1cc7.vpc.cloudera.com:25000
> -- 2025-07-30 03:23:02,960 INFO     MainThread: num_known_live_backends has 
> reached value: 3
> -- 2025-07-30 03:23:02,961 DEBUG    MainThread: Getting 
> num_known_live_backends from 
> impala-ec2-rhel92-m6i-4xlarge-ondemand-1cc7.vpc.cloudera.com:25001
> -- 2025-07-30 03:23:02,962 INFO     MainThread: num_known_live_backends has 
> reached value: 3
> -- 2025-07-30 03:23:02,962 DEBUG    MainThread: Getting 
> num_known_live_backends from 
> impala-ec2-rhel92-m6i-4xlarge-ondemand-1cc7.vpc.cloudera.com:25002
> -- 2025-07-30 03:23:02,964 INFO     MainThread: num_known_live_backends has 
> reached value: 3
> -- 2025-07-30 03:23:02,964 INFO     MainThread: beeswax: 
> set 
> client_identifier=custom_cluster/test_catalogd_ha.py::TestCatalogdHA::()::test_warmed_up_metadata_failover_catchup;
> -- 2025-07-30 03:23:02,964 INFO     MainThread: beeswax: connected to 
> impala-ec2-rhel92-m6i-4xlarge-ondemand-1cc7.vpc.cloudera.com:21000 with 
> beeswax
> -- 2025-07-30 03:23:02,964 INFO     MainThread: hs2: 
> set 
> client_identifier=custom_cluster/test_catalogd_ha.py::TestCatalogdHA::()::test_warmed_up_metadata_failover_catchup;
> -- 2025-07-30 03:23:02,964 INFO     MainThread: hs2: connected to 
> impala-ec2-rhel92-m6i-4xlarge-ondemand-1cc7.vpc.cloudera.com:21050 with 
> impyla hs2
> -- 2025-07-30 03:23:02,964 INFO     MainThread: hs2-http: 
> set 
> client_identifier=custom_cluster/test_catalogd_ha.py::TestCatalogdHA::()::test_warmed_up_metadata_failover_catchup;
> -- 2025-07-30 03:23:02,965 INFO     MainThread: hs2-http: connected to 
> impala-ec2-rhel92-m6i-4xlarge-ondemand-1cc7.vpc.cloudera.com:28000 with 
> impyla hs2-http
> -- 2025-07-30 03:23:02,965 INFO     MainThread: hs2-feng: 
> set 
> client_identifier=custom_cluster/test_catalogd_ha.py::TestCatalogdHA::()::test_warmed_up_metadata_failover_catchup;
> -- 2025-07-30 03:23:02,965 INFO     MainThread: hs2-feng: connected to 
> impala-ec2-rhel92-m6i-4xlarge-ondemand-1cc7.vpc.cloudera.com:11050 with 
> impyla hs2-feng
> -- 2025-07-30 03:23:02,967 INFO     MainThread: hs2: executing against Impala 
> at impala-ec2-rhel92-m6i-4xlarge-ondemand-1cc7.vpc.cloudera.com:21050. 
> session: 7b48bf0f176821be:dc50e9789a659d95 main_cursor: True user: None
> create database if not exists warmup_test_db;
> -- 2025-07-30 03:23:03,414 INFO     MainThread: 
> 7f4a5c54454939d0:4aba16f200000000: query started
> -- 2025-07-30 03:23:03,415 INFO     MainThread: 
> 7f4a5c54454939d0:4aba16f200000000: getting log for operation
> -- 2025-07-30 03:23:03,416 INFO     MainThread: 
> 7f4a5c54454939d0:4aba16f200000000: getting runtime profile operation
> -- 2025-07-30 03:23:03,416 INFO     MainThread: 
> 7f4a5c54454939d0:4aba16f200000000: closing query for operation
> -- 2025-07-30 03:23:03,447 INFO     MainThread: hs2: executing against Impala 
> at impala-ec2-rhel92-m6i-4xlarge-ondemand-1cc7.vpc.cloudera.com:21050. 
> session: 7b48bf0f176821be:dc50e9789a659d95 main_cursor: True user: None
> create table warmup_test_db.tbl like functional.alltypes stored as parquet 
> location '/test-warehouse/warmup_test_db.tbl';
> -- 2025-07-30 03:23:03,929 INFO     MainThread: 
> 5f41a122c82d88b6:64ea256f00000000: query started
> -- 2025-07-30 03:23:03,930 INFO     MainThread: 
> 5f41a122c82d88b6:64ea256f00000000: getting log for operation
> -- 2025-07-30 03:23:03,930 INFO     MainThread: 
> 5f41a122c82d88b6:64ea256f00000000: getting runtime profile operation
> -- 2025-07-30 03:23:03,930 INFO     MainThread: 
> 5f41a122c82d88b6:64ea256f00000000: closing query for operation
> -- 2025-07-30 03:23:03,958 INFO     MainThread: Found PID 3701982 for 
> /data/jenkins/workspace/impala-cdw-master-staging-core-admissiond/repos/Impala/be/build/latest/service/catalogd
>  -logbufsecs=5 -v=1 -max_log_files=0 -log_rotation_match_pid=true 
> -log_filename=catalogd 
> -log_dir=/data/jenkins/workspace/impala-cdw-master-staging-core-admissiond/repos/Impala/logs/custom_cluster_tests
>  -kudu_master_hosts localhost --catalog_topic_mode=minimal 
> --catalogd_ha_reset_metadata_on_failover=false 
> --debug_actions=catalogd_event_processing_delay:SLEEP@1000 
> --enable_reload_events=true 
> --warmup_tables_config_file=file:///data/jenkins/workspace/impala-cdw-master-staging-core-admissiond/repos/Impala/testdata/data/warmup_test_config.txt
>  -catalog_service_port=26000 -state_store_subscriber_port=23020 
> -webserver_port=25020 -enable_catalogd_ha=true
> -- 2025-07-30 03:23:03,985 INFO     MainThread: Killing <CatalogdProcess PID: 
> 3701982 
> (/data/jenkins/workspace/impala-cdw-master-staging-core-admissiond/repos/Impala/be/build/latest/service/catalogd
>  -logbufsecs=5 -v=1 -max_log_files=0 -log_rotation_match_pid=true 
> -log_filename=catalogd 
> -log_dir=/data/jenkins/workspace/impala-cdw-master-staging-core-admissiond/repos/Impala/logs/custom_cluster_tests
>  -kudu_master_hosts localhost --catalog_topic_mode=minimal 
> --catalogd_ha_reset_metadata_on_failover=false 
> --debug_actions=catalogd_event_processing_delay:SLEEP@1000 
> --enable_reload_events=true 
> --warmup_tables_config_file=file:///data/jenkins/workspace/impala-cdw-master-staging-core-admissiond/repos/Impala/testdata/data/warmup_test_config.txt
>  -catalog_service_port=26000 -state_store_subscriber_port=23020 
> -webserver_port=25020 -enable_catalogd_ha=true)> with signal 9
> -- 2025-07-30 03:23:04,024 INFO     MainThread: Getting metric: 
> catalog-server.active-status from 
> impala-ec2-rhel92-m6i-4xlarge-ondemand-1cc7.vpc.cloudera.com:25021
> -- 2025-07-30 03:23:04,028 INFO     MainThread: Waiting for metric value 
> 'catalog-server.active-status'=True. Current value: False. total_wait: 0s
> -- 2025-07-30 03:23:04,028 INFO     MainThread: Sleeping 1s before next retry.
> -- 2025-07-30 03:23:05,029 INFO     MainThread: Getting metric: 
> catalog-server.active-status from 
> impala-ec2-rhel92-m6i-4xlarge-ondemand-1cc7.vpc.cloudera.com:25021
> -- 2025-07-30 03:23:05,031 INFO     MainThread: Metric 
> 'catalog-server.active-status' has reached desired value: True. total_wait: 
> 1.0043129921s
> -- 2025-07-30 03:23:05,035 INFO     MainThread: hs2: executing against Impala 
> at impala-ec2-rhel92-m6i-4xlarge-ondemand-1cc7.vpc.cloudera.com:21050. 
> session: 7b48bf0f176821be:dc50e9789a659d95 main_cursor: True user: None
> describe warmup_test_db.tbl;
> -- 2025-07-30 03:23:06,746 INFO     MainThread: 
> 7e48f7c0c27463d4:727cf0f700000000: query started
> -- 2025-07-30 03:23:06,747 INFO     MainThread: 
> 7e48f7c0c27463d4:727cf0f700000000: getting log for operation
> -- 2025-07-30 03:23:06,747 INFO     MainThread: 
> 7e48f7c0c27463d4:727cf0f700000000: getting runtime profile operation
> -- 2025-07-30 03:23:06,747 INFO     MainThread: 
> 7e48f7c0c27463d4:727cf0f700000000: closing query for operation
> -- 2025-07-30 03:23:06,748 INFO     MainThread: Starting Catalogd process: 
> ['-logbufsecs=5', '-v=1', '-max_log_files=0', '-log_rotation_match_pid=true', 
> '-log_filename=catalogd', 
> '-log_dir=/data/jenkins/workspace/impala-cdw-master-staging-core-admissiond/repos/Impala/logs/custom_cluster_tests',
>  '-kudu_master_hosts', 'localhost', '--catalog_topic_mode=minimal', 
> '--catalogd_ha_reset_metadata_on_failover=false', 
> '--debug_actions=catalogd_event_processing_delay:SLEEP@1000', 
> '--enable_reload_events=true', 
> '--warmup_tables_config_file=file:///data/jenkins/workspace/impala-cdw-master-staging-core-admissiond/repos/Impala/testdata/data/warmup_test_config.txt',
>  '-catalog_service_port=26000', '-state_store_subscriber_port=23020', 
> '-webserver_port=25020', '-enable_catalogd_ha=true']
> -- 2025-07-30 03:23:06,750 INFO     MainThread: Getting metric: 
> statestore-subscriber.connected from 
> impala-ec2-rhel92-m6i-4xlarge-ondemand-1cc7.vpc.cloudera.com:25020
> -- 2025-07-30 03:23:06,751 INFO     MainThread: Debug webpage not yet 
> available: 
> HTTPConnectionPool(host='impala-ec2-rhel92-m6i-4xlarge-ondemand-1cc7.vpc.cloudera.com',
>  port=25020): Max retries exceeded with url: /jsonmetrics?json (Caused by 
> NewConnectionError('<urllib3.connection.HTTPConnection object at 
> 0x7f09c6d9c750>: Failed to establish a new connection: [Errno 111] Connection 
> refused',))
> Turning perftools heap leak checking off
> Redirecting stderr to 
> /data/jenkins/workspace/impala-cdw-master-staging-core-admissiond/repos/Impala/logs/custom_cluster_tests/catalogd.impala-ec2-rhel92-m6i-4xlarge-ondemand-1cc7.vpc.cloudera.com.jenkins.log.ERROR.20250730-032306.3703284
> -- 2025-07-30 03:23:07,754 INFO     MainThread: Waiting for metric value 
> 'statestore-subscriber.connected'=1. Current value: None. total_wait: 0s
> -- 2025-07-30 03:23:07,754 INFO     MainThread: Sleeping 1s before next retry.
> -- 2025-07-30 03:23:08,754 INFO     MainThread: Getting metric: 
> statestore-subscriber.connected from 
> impala-ec2-rhel92-m6i-4xlarge-ondemand-1cc7.vpc.cloudera.com:25020
> -- 2025-07-30 03:23:08,767 INFO     MainThread: Metric 
> 'statestore-subscriber.connected' has reached desired value: True. 
> total_wait: 2.00457000732s
> -- 2025-07-30 03:23:08,794 INFO     MainThread: hs2: executing against Impala 
> at impala-ec2-rhel92-m6i-4xlarge-ondemand-1cc7.vpc.cloudera.com:21050. 
> session: 7b48bf0f176821be:dc50e9789a659d95 main_cursor: True user: None
> alter table warmup_test_db.tbl add partition(year=2025, month=1);
> -- 2025-07-30 03:23:09,221 INFO     MainThread: 
> 6145ce6d36d997b7:48635e2400000000: query started
> -- 2025-07-30 03:23:09,222 INFO     MainThread: 
> 6145ce6d36d997b7:48635e2400000000: getting log for operation
> -- 2025-07-30 03:23:09,222 INFO     MainThread: 
> 6145ce6d36d997b7:48635e2400000000: getting runtime profile operation
> -- 2025-07-30 03:23:09,222 INFO     MainThread: 
> 6145ce6d36d997b7:48635e2400000000: closing query for operation
> -- 2025-07-30 03:23:09,249 INFO     MainThread: Found PID 3701992 for 
> /data/jenkins/workspace/impala-cdw-master-staging-core-admissiond/repos/Impala/be/build/latest/service/catalogd
>  -logbufsecs=5 -v=1 -max_log_files=0 -log_rotation_match_pid=true 
> -log_filename=catalogd_node1 
> -log_dir=/data/jenkins/workspace/impala-cdw-master-staging-core-admissiond/repos/Impala/logs/custom_cluster_tests
>  -kudu_master_hosts localhost --catalog_topic_mode=minimal 
> --catalogd_ha_reset_metadata_on_failover=false 
> --debug_actions=catalogd_event_processing_delay:SLEEP@1000 
> --enable_reload_events=true 
> --warmup_tables_config_file=file:///data/jenkins/workspace/impala-cdw-master-staging-core-admissiond/repos/Impala/testdata/data/warmup_test_config.txt
>  -catalog_service_port=26001 -state_store_subscriber_port=23021 
> -webserver_port=25021 -enable_catalogd_ha=true
> -- 2025-07-30 03:23:09,274 INFO     MainThread: Killing <CatalogdProcess PID: 
> 3701992 
> (/data/jenkins/workspace/impala-cdw-master-staging-core-admissiond/repos/Impala/be/build/latest/service/catalogd
>  -logbufsecs=5 -v=1 -max_log_files=0 -log_rotation_match_pid=true 
> -log_filename=catalogd_node1 
> -log_dir=/data/jenkins/workspace/impala-cdw-master-staging-core-admissiond/repos/Impala/logs/custom_cluster_tests
>  -kudu_master_hosts localhost --catalog_topic_mode=minimal 
> --catalogd_ha_reset_metadata_on_failover=false 
> --debug_actions=catalogd_event_processing_delay:SLEEP@1000 
> --enable_reload_events=true 
> --warmup_tables_config_file=file:///data/jenkins/workspace/impala-cdw-master-staging-core-admissiond/repos/Impala/testdata/data/warmup_test_config.txt
>  -catalog_service_port=26001 -state_store_subscriber_port=23021 
> -webserver_port=25021 -enable_catalogd_ha=true)> with signal 9
> -- 2025-07-30 03:23:09,313 INFO     MainThread: Getting metric: 
> catalog-server.active-status from 
> impala-ec2-rhel92-m6i-4xlarge-ondemand-1cc7.vpc.cloudera.com:25020
> -- 2025-07-30 03:23:09,316 INFO     MainThread: Waiting for metric value 
> 'catalog-server.active-status'=True. Current value: False. total_wait: 0s
> -- 2025-07-30 03:23:09,316 INFO     MainThread: Sleeping 1s before next retry.
> -- 2025-07-30 03:23:10,317 INFO     MainThread: Getting metric: 
> catalog-server.active-status from 
> impala-ec2-rhel92-m6i-4xlarge-ondemand-1cc7.vpc.cloudera.com:25020
> -- 2025-07-30 03:23:10,320 INFO     MainThread: Metric 
> 'catalog-server.active-status' has reached desired value: True. total_wait: 
> 1.00428390503s
> -- 2025-07-30 03:23:10,324 INFO     MainThread: hs2: executing against Impala 
> at impala-ec2-rhel92-m6i-4xlarge-ondemand-1cc7.vpc.cloudera.com:21050. 
> session: 7b48bf0f176821be:dc50e9789a659d95 main_cursor: True user: None
> show partitions warmup_test_db.tbl;
> -- 2025-07-30 03:23:37,360 INFO     MainThread: Retry for error Query 
> d74a2e4e80679e57:436eef8a00000000 failed:
> LocalCatalogException: Could not load table names for database 
> 'warmup_test_db' from HMS
> CAUSED BY: TException: org.apache.impala.common.InternalException: Couldn't 
> open transport for 
> impala-ec2-rhel92-m6i-4xlarge-ondemand-1cc7.vpc.cloudera.com:26001 (connect() 
> failed: Connection refused)
> CAUSED BY: InternalException: Couldn't open transport for 
> impala-ec2-rhel92-m6i-4xlarge-ondemand-1cc7.vpc.cloudera.com:26001 (connect() 
> failed: Connection refused)
> -- 2025-07-30 03:23:37,360 INFO     MainThread: hs2: executing against Impala 
> at impala-ec2-rhel92-m6i-4xlarge-ondemand-1cc7.vpc.cloudera.com:21050. 
> session: 7b48bf0f176821be:dc50e9789a659d95 main_cursor: True user: None
> show partitions warmup_test_db.tbl;
> -- 2025-07-30 03:23:37,457 INFO     MainThread: 
> 14407808e786c43f:2ee4e3f600000000: query started
> -- 2025-07-30 03:23:37,458 INFO     MainThread: 
> 14407808e786c43f:2ee4e3f600000000: getting log for operation
> -- 2025-07-30 03:23:37,458 INFO     MainThread: 
> 14407808e786c43f:2ee4e3f600000000: getting runtime profile operation
> -- 2025-07-30 03:23:37,458 INFO     MainThread: 
> 14407808e786c43f:2ee4e3f600000000: closing query for operation
> -- 2025-07-30 03:23:37,459 INFO     MainThread: partition result: 
> ['2025\t1\t-1\t0\t0B\tNOT CACHED\tNOT 
> CACHED\tPARQUET\tfalse\thdfs://localhost:20500/test-warehouse/warmup_test_db.tbl/year=2025/month=1\tNONE',
>  'Total\t\t-1\t0\t0B\t0B\t\t\t\t\t']
> -- 2025-07-30 03:23:37,459 INFO     MainThread: Starting Catalogd process: 
> ['-logbufsecs=5', '-v=1', '-max_log_files=0', '-log_rotation_match_pid=true', 
> '-log_filename=catalogd_node1', 
> '-log_dir=/data/jenkins/workspace/impala-cdw-master-staging-core-admissiond/repos/Impala/logs/custom_cluster_tests',
>  '-kudu_master_hosts', 'localhost', '--catalog_topic_mode=minimal', 
> '--catalogd_ha_reset_metadata_on_failover=false', 
> '--debug_actions=catalogd_event_processing_delay:SLEEP@1000', 
> '--enable_reload_events=true', 
> '--warmup_tables_config_file=file:///data/jenkins/workspace/impala-cdw-master-staging-core-admissiond/repos/Impala/testdata/data/warmup_test_config.txt',
>  '-catalog_service_port=26001', '-state_store_subscriber_port=23021', 
> '-webserver_port=25021', '-enable_catalogd_ha=true']
> -- 2025-07-30 03:23:37,461 INFO     MainThread: Getting metric: 
> statestore-subscriber.connected from 
> impala-ec2-rhel92-m6i-4xlarge-ondemand-1cc7.vpc.cloudera.com:25021
> -- 2025-07-30 03:23:37,462 INFO     MainThread: Debug webpage not yet 
> available: 
> HTTPConnectionPool(host='impala-ec2-rhel92-m6i-4xlarge-ondemand-1cc7.vpc.cloudera.com',
>  port=25021): Max retries exceeded with url: /jsonmetrics?json (Caused by 
> NewConnectionError('<urllib3.connection.HTTPConnection object at 
> 0x7f0939986950>: Failed to establish a new connection: [Errno 111] Connection 
> refused',))
> Turning perftools heap leak checking off
> Redirecting stderr to 
> /data/jenkins/workspace/impala-cdw-master-staging-core-admissiond/repos/Impala/logs/custom_cluster_tests/catalogd.impala-ec2-rhel92-m6i-4xlarge-ondemand-1cc7.vpc.cloudera.com.jenkins.log.ERROR.20250730-032337.3703557
> -- 2025-07-30 03:23:38,466 INFO     MainThread: Waiting for metric value 
> 'statestore-subscriber.connected'=1. Current value: None. total_wait: 0s
> -- 2025-07-30 03:23:38,466 INFO     MainThread: Sleeping 1s before next retry.
> -- 2025-07-30 03:23:39,467 INFO     MainThread: Getting metric: 
> statestore-subscriber.connected from 
> impala-ec2-rhel92-m6i-4xlarge-ondemand-1cc7.vpc.cloudera.com:25021
> -- 2025-07-30 03:23:39,477 INFO     MainThread: Metric 
> 'statestore-subscriber.connected' has reached desired value: True. 
> total_wait: 2.0062930584s
> -- 2025-07-30 03:23:39,497 INFO     MainThread: hs2: executing against Impala 
> at impala-ec2-rhel92-m6i-4xlarge-ondemand-1cc7.vpc.cloudera.com:21050. 
> session: 7b48bf0f176821be:dc50e9789a659d95 main_cursor: True user: None
> drop database if exists warmup_test_db cascade;
> -- 2025-07-30 03:23:39,708 INFO     MainThread: 
> 74436e446b5486d1:e2e5a77700000000: query started
> -- 2025-07-30 03:23:39,709 INFO     MainThread: 
> 74436e446b5486d1:e2e5a77700000000: getting log for operation
> -- 2025-07-30 03:23:39,709 INFO     MainThread: 
> 74436e446b5486d1:e2e5a77700000000: getting runtime profile operation
> -- 2025-07-30 03:23:39,709 INFO     MainThread: 
> 74436e446b5486d1:e2e5a77700000000: closing query for operation
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
