[
https://issues.apache.org/jira/browse/IMPALA-14292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Quanlong Huang updated IMPALA-14292:
------------------------------------
Attachment: IMPALA-14292-stacktrace.txt
> TestAdmissionController.test_timeout_reason_host_memory is flaky
> ----------------------------------------------------------------
>
> Key: IMPALA-14292
> URL: https://issues.apache.org/jira/browse/IMPALA-14292
> Project: IMPALA
> Issue Type: Bug
> Reporter: Quanlong Huang
> Assignee: Quanlong Huang
> Priority: Critical
> Attachments: IMPALA-14292-stacktrace.txt
>
>
> h3. Stacktrace
> {noformat}
> custom_cluster/test_admission_controller.py:1129: in test_timeout_reason_host_memory
> assert num_reasons >= 1, \
> E AssertionError: At least one query should have been timed out with topN query details: Query (id=4347afe4d0941306:ca48ea9000000000):
> E DEBUG MODE WARNING: Query profile created while running a DEBUG build of Impala. Use RELEASE builds to measure query performance.
> E Summary:
> E Session ID: 5348e8a574e296af:1c5a2c0497cc7699
> E Session Type: HIVESERVER2
> E HiveServer2 Protocol Version: V6
> E Start Time: 2025-08-05 02:50:25.109749000
> E End Time: 2025-08-05 02:50:25.601402000
> E Duration: 491.653ms (491653 us)
> E Query Type: QUERY
> E Query State: EXCEPTION
> E Impala Query State: ERROR
> E Query Status: Exec() rpc failed: Remote error: Service unavailable: ExecQueryFInstances request on impala.ControlService from 127.0.0.1:49400 dropped due to backpressure. The service queue contains 0 items out of a maximum of 2147483647; memory consumption is 0.
> ...
> E assert 0 >= 1{noformat}
> h3. Standard Error
> {noformat}
> -- 2025-08-05 02:50:10,671 INFO MainThread: Starting cluster with
> command:
> /data/jenkins/workspace/impala-cdw-master-staging-core-ubsan-arm/repos/Impala/bin/start-impala-cluster.py
> '--state_store_args=--statestore_update_frequency_ms=50
> --statestore_priority_update_frequency_ms=50
> --statestore_heartbeat_frequency_ms=50' --cluster_size=3 --num_coordinators=3
> --log_dir=/data/jenkins/workspace/impala-cdw-master-staging-core-ubsan-arm/repos/Impala/logs/custom_cluster_tests
> --log_level=1 '--impalad_args=-vmodule admission-controller=3
> -default_pool_max_requests 10 -default_pool_max_queued 10
> -default_pool_mem_limit 10485760 -mem_limit=2097152
> -queue_wait_timeout_ms=1000 -codegen_cache_capacity=0 '
> '--state_store_args=-statestore_heartbeat_frequency_ms=100
> -statestore_priority_update_frequency_ms=100 '
> --impalad_args=--default_query_options=
> 02:50:11 MainThread: Found 0 impalad/0 statestored/0 catalogd process(es)
> 02:50:11 MainThread: Starting State Store logging to
> /data/jenkins/workspace/impala-cdw-master-staging-core-ubsan-arm/repos/Impala/logs/custom_cluster_tests/statestored.INFO
> 02:50:11 MainThread: Starting Catalog Service logging to
> /data/jenkins/workspace/impala-cdw-master-staging-core-ubsan-arm/repos/Impala/logs/custom_cluster_tests/catalogd.INFO
> 02:50:11 MainThread: Starting Impala Daemon logging to
> /data/jenkins/workspace/impala-cdw-master-staging-core-ubsan-arm/repos/Impala/logs/custom_cluster_tests/impalad.INFO
> 02:50:11 MainThread: Starting Impala Daemon logging to
> /data/jenkins/workspace/impala-cdw-master-staging-core-ubsan-arm/repos/Impala/logs/custom_cluster_tests/impalad_node1.INFO
> 02:50:11 MainThread: Starting Impala Daemon logging to
> /data/jenkins/workspace/impala-cdw-master-staging-core-ubsan-arm/repos/Impala/logs/custom_cluster_tests/impalad_node2.INFO
> 02:50:13 MainThread: Found 3 impalad/1 statestored/1 catalogd process(es)
> 02:50:13 MainThread: Waiting for Impalad webserver port 25000
> 02:50:14 MainThread: Waiting for Impalad webserver port 25000
> 02:50:14 MainThread: Waiting for Impalad webserver port 25000
> 02:50:14 MainThread: Waiting for Impalad webserver port 25001
> 02:50:14 MainThread: Waiting for Impalad webserver port 25002
> 02:50:19 MainThread: Waiting for coordinator client services - hs2 port:
> 21050 hs2-http port: 28000 beeswax port: 21000
> 02:50:22 MainThread: Waiting for coordinator client services - hs2 port:
> 21051 hs2-http port: 28001 beeswax port: 21001
> 02:50:24 MainThread: Waiting for coordinator client services - hs2 port:
> 21052 hs2-http port: 28002 beeswax port: 21002
> 02:50:24 MainThread: Getting num_known_live_backends from
> impala-ec2-rhel92-m7g-4xlarge-ondemand-0f8c.vpc.cloudera.com:25000
> 02:50:24 MainThread: num_known_live_backends has reached value: 3
> 02:50:24 MainThread: Getting num_known_live_backends from
> impala-ec2-rhel92-m7g-4xlarge-ondemand-0f8c.vpc.cloudera.com:25001
> 02:50:24 MainThread: num_known_live_backends has reached value: 3
> 02:50:25 MainThread: Getting num_known_live_backends from
> impala-ec2-rhel92-m7g-4xlarge-ondemand-0f8c.vpc.cloudera.com:25002
> 02:50:25 MainThread: num_known_live_backends has reached value: 3
> 02:50:25 MainThread: Total wait: 11.48s
> 02:50:25 MainThread: Impala Cluster Running with 3 nodes (3 coordinators, 3
> executors).
> -- 2025-08-05 02:50:25,089 DEBUG MainThread: Found 3 impalad/1
> statestored/1 catalogd process(es)
> -- 2025-08-05 02:50:25,090 INFO MainThread: Getting metric:
> statestore.live-backends from
> impala-ec2-rhel92-m7g-4xlarge-ondemand-0f8c.vpc.cloudera.com:25010
> -- 2025-08-05 02:50:25,094 INFO MainThread: Metric
> 'statestore.live-backends' has reached desired value: 4. total_wait: 0s
> -- 2025-08-05 02:50:25,094 DEBUG MainThread: Getting
> num_known_live_backends from
> impala-ec2-rhel92-m7g-4xlarge-ondemand-0f8c.vpc.cloudera.com:25000
> -- 2025-08-05 02:50:25,096 INFO MainThread: num_known_live_backends has
> reached value: 3
> -- 2025-08-05 02:50:25,096 DEBUG MainThread: Getting
> num_known_live_backends from
> impala-ec2-rhel92-m7g-4xlarge-ondemand-0f8c.vpc.cloudera.com:25001
> -- 2025-08-05 02:50:25,099 INFO MainThread: num_known_live_backends has
> reached value: 3
> -- 2025-08-05 02:50:25,099 DEBUG MainThread: Getting
> num_known_live_backends from
> impala-ec2-rhel92-m7g-4xlarge-ondemand-0f8c.vpc.cloudera.com:25002
> -- 2025-08-05 02:50:25,101 INFO MainThread: num_known_live_backends has
> reached value: 3
> -- 2025-08-05 02:50:25,102 INFO MainThread: beeswax:
> set
> client_identifier=custom_cluster/test_admission_controller.py::TestAdmissionController::()::test_timeout_reason_host_memory;
> -- 2025-08-05 02:50:25,102 INFO MainThread: beeswax: connected to
> impala-ec2-rhel92-m7g-4xlarge-ondemand-0f8c.vpc.cloudera.com:21000 with
> beeswax
> -- 2025-08-05 02:50:25,102 INFO MainThread: hs2:
> set
> client_identifier=custom_cluster/test_admission_controller.py::TestAdmissionController::()::test_timeout_reason_host_memory;
> -- 2025-08-05 02:50:25,102 INFO MainThread: hs2: connected to
> impala-ec2-rhel92-m7g-4xlarge-ondemand-0f8c.vpc.cloudera.com:21050 with
> impyla hs2
> -- 2025-08-05 02:50:25,102 INFO MainThread: hs2-http:
> set
> client_identifier=custom_cluster/test_admission_controller.py::TestAdmissionController::()::test_timeout_reason_host_memory;
> -- 2025-08-05 02:50:25,102 INFO MainThread: hs2-http: connected to
> impala-ec2-rhel92-m7g-4xlarge-ondemand-0f8c.vpc.cloudera.com:28000 with
> impyla hs2-http
> -- 2025-08-05 02:50:25,103 INFO MainThread: hs2-feng:
> set
> client_identifier=custom_cluster/test_admission_controller.py::TestAdmissionController::()::test_timeout_reason_host_memory;
> -- 2025-08-05 02:50:25,103 INFO MainThread: hs2-feng: connected to
> impala-ec2-rhel92-m7g-4xlarge-ondemand-0f8c.vpc.cloudera.com:11050 with
> impyla hs2-feng
> -- 2025-08-05 02:50:25,103 INFO MainThread: beeswax:
> set
> client_identifier=custom_cluster/test_admission_controller.py::TestAdmissionController::()::test_timeout_reason_host_memory;
> -- 2025-08-05 02:50:25,103 INFO MainThread: hs2:
> set
> client_identifier=custom_cluster/test_admission_controller.py::TestAdmissionController::()::test_timeout_reason_host_memory;
> -- 2025-08-05 02:50:25,103 INFO MainThread: hs2-http:
> set
> client_identifier=custom_cluster/test_admission_controller.py::TestAdmissionController::()::test_timeout_reason_host_memory;
> -- 2025-08-05 02:50:25,105 INFO MainThread: hs2:
> set enable_trivial_query_for_admission=false;
> -- 2025-08-05 02:50:25,105 INFO MainThread: hs2:
> set
> client_identifier=custom_cluster/test_admission_controller.py::TestAdmissionController::()::test_timeout_reason_host_memory;
> -- 2025-08-05 02:50:25,105 INFO MainThread: hs2: connected to
> impala-ec2-rhel92-m7g-4xlarge-ondemand-0f8c.vpc.cloudera.com:21050 with
> impyla hs2
> -- 2025-08-05 02:50:25,105 INFO MainThread: hs2:
> set
> client_identifier=custom_cluster/test_admission_controller.py::TestAdmissionController::()::test_timeout_reason_host_memory;
> -- 2025-08-05 02:50:25,105 INFO MainThread: hs2: set_configuration:
> set spool_query_results=0;
> set mem_limit=2mb;
> -- 2025-08-05 02:50:25,108 INFO MainThread: hs2: executing against Impala
> at impala-ec2-rhel92-m7g-4xlarge-ondemand-0f8c.vpc.cloudera.com:21050.
> session: 5348e8a574e296af:1c5a2c0497cc7699 main_cursor: False user: None
> select sleep(1000);
> -- 2025-08-05 02:50:25,594 INFO MainThread: hs2: executing against Impala
> at impala-ec2-rhel92-m7g-4xlarge-ondemand-0f8c.vpc.cloudera.com:21050.
> session: f84214fc14d487b6:a1f5879e48352e99 main_cursor: False user: None
> select sleep(1000);
> -- 2025-08-05 02:50:25,601 INFO MainThread: hs2: executing against Impala
> at impala-ec2-rhel92-m7g-4xlarge-ondemand-0f8c.vpc.cloudera.com:21050.
> session: 6a45a64f7d50440d:12e486c45726f085 main_cursor: False user: None
> select sleep(1000);
> -- 2025-08-05 02:50:25,608 INFO MainThread: hs2: executing against Impala
> at impala-ec2-rhel92-m7g-4xlarge-ondemand-0f8c.vpc.cloudera.com:21050.
> session: d845c41e70e16edb:9c6baf4d619ca3b4 main_cursor: False user: None
> select sleep(1000);
> -- 2025-08-05 02:50:25,615 INFO MainThread: hs2: executing against Impala
> at impala-ec2-rhel92-m7g-4xlarge-ondemand-0f8c.vpc.cloudera.com:21050.
> session: 1e4e767b8689bb81:2bda2cfa395eb5bc main_cursor: False user: None
> select sleep(1000);
> -- 2025-08-05 02:50:25,620 INFO MainThread:
> 4347afe4d0941306:ca48ea9000000000: getting state
> -- 2025-08-05 02:50:25,621 INFO MainThread:
> 4347afe4d0941306:ca48ea9000000000: getting runtime profile operation
> -- 2025-08-05 02:50:25,622 INFO MainThread:
> e047c2d275250b53:99cb9ea500000000: getting state
> -- 2025-08-05 02:50:25,622 INFO MainThread:
> e047c2d275250b53:99cb9ea500000000: getting runtime profile operation
> -- 2025-08-05 02:50:25,623 INFO MainThread:
> 9145a55bb1d0548d:8fb8a4b000000000: getting state
> -- 2025-08-05 02:50:25,623 INFO MainThread:
> 9145a55bb1d0548d:8fb8a4b000000000: getting runtime profile operation
> -- 2025-08-05 02:50:25,624 INFO MainThread:
> bb49b16c00c971f0:7e65f60f00000000: getting state
> -- 2025-08-05 02:50:25,624 INFO MainThread:
> bb49b16c00c971f0:7e65f60f00000000: getting runtime profile operation
> -- 2025-08-05 02:50:25,624 INFO MainThread:
> 5c4eccf00e8c84fb:c37ca0b400000000: getting state
> -- 2025-08-05 02:50:25,624 INFO MainThread:
> 5c4eccf00e8c84fb:c37ca0b400000000: getting runtime profile operation
> -- 2025-08-05 02:50:25,625 INFO MainThread:
> 4347afe4d0941306:ca48ea9000000000: closing query for operation
> -- 2025-08-05 02:50:25,625 INFO MainThread:
> e047c2d275250b53:99cb9ea500000000: closing query for operation
> -- 2025-08-05 02:50:25,626 INFO MainThread:
> 9145a55bb1d0548d:8fb8a4b000000000: closing query for operation
> -- 2025-08-05 02:50:25,626 INFO MainThread:
> bb49b16c00c971f0:7e65f60f00000000: closing query for operation
> -- 2025-08-05 02:50:25,626 INFO MainThread:
> 5c4eccf00e8c84fb:c37ca0b400000000: closing query for operation
> -- 2025-08-05 02:50:25,627 INFO MainThread: hs2: closing 1 sync and 5
> async hs2 connections to:
> impala-ec2-rhel92-m7g-4xlarge-ondemand-0f8c.vpc.cloudera.com:21050{noformat}
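For anyone triaging this flake: the failing assertion in the stacktrace boils down to counting runtime profiles that contain the expected timeout reason. The sketch below is a hypothetical reconstruction of that pattern, not the actual code in custom_cluster/test_admission_controller.py (the helper name and sample profile strings are illustrative). It shows why the count stays at 0 when a query dies with the backpressure RPC error before admission control can time it out.

```python
# Hypothetical sketch of the assertion pattern from the stacktrace above.
# count_timeout_reasons() and the sample profile text are illustrative
# stand-ins; the real logic lives in
# custom_cluster/test_admission_controller.py.

def count_timeout_reasons(profiles, reason_substring):
    """Count runtime profiles whose text contains the timeout reason."""
    return sum(1 for profile in profiles if reason_substring in profile)

# In the failing run, the query failed with an RPC error before admission
# control could time it out, so no profile carries a timeout reason:
profiles = [
    "Query Status: Exec() rpc failed: Remote error: Service unavailable: "
    "ExecQueryFInstances request dropped due to backpressure.",
]
num_reasons = count_timeout_reasons(profiles, "Timed out")
print(num_reasons)  # 0, so `assert num_reasons >= 1` in the test fires
```

Under this reading, the flake is a race: the queries are expected to queue and then time out on host memory, but one run instead hit the RPC backpressure failure path, leaving zero timed-out queries to count.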
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]