Dev-iL opened a new pull request, #62960:
URL: https://github.com/apache/airflow/pull/62960

   ## Problem
   
   All 9 `TestOtelIntegration` tests in 
`airflow-core/tests/integration/otel/test_otel.py` fail
   with `Timeout >60.0s` on the `integration-core-redis` CI job. The timeout 
occurs at
   `scheduler_process.wait()` in the `finally` cleanup block.
   
   ### Root cause chain
   
   1. **Wrong CI marker.** `TestOtelIntegration` is marked 
`@pytest.mark.integration("redis")`
      but **not** `@pytest.mark.integration("otel")`. The 
`integration-core-redis` job sets
      `INTEGRATION_REDIS=true` but not `INTEGRATION_OTEL=true`, so the tests 
pass the marker
      check and run — in an environment that never starts the OTel collector 
container
      (`breeze-otel-collector`).
   
   2. **OTLP exporter targets a missing collector.** `setup_class` configures
      `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` and 
`dag_execution_for_testing_metrics` configures
      `OTEL_EXPORTER_OTLP_METRICS_ENDPOINT`, both pointing at 
`breeze-otel-collector:4318`.
      DNS resolution for this host fails since the container doesn't exist in 
the redis-only job.
   
   3. **Subprocess shutdown blocks on OTel flush.** When the test's `finally` 
block calls
      `scheduler_process.terminate()`, the scheduler receives SIGTERM, calls 
`sys.exit(0)`, and
      triggers Python's `atexit` handlers. The OTel metric flush handler
      (`flush_otel_metrics` -> `provider.force_flush()`) and the 
`TracerProvider` shutdown
      both attempt to export to the unreachable collector with retry + 
exponential backoff,
      blocking process exit.
   
   4. **Bare `process.wait()` hangs indefinitely.** The `finally` blocks use
      `process.wait()` with no timeout, blocking until the subprocess exits. 
Since the
      subprocess is stuck in the OTel flush, the 60s pytest execution timeout 
fires.
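
   The hang in step 4 can be reproduced in isolation: a bounded `wait(timeout=...)` raises instead of blocking. In this sketch, the sleeping child stands in for a scheduler stuck in the OTel flush at exit.

   ```python
   import subprocess
   import sys

   # Stand-in for a scheduler subprocess blocked in an OTel export retry loop.
   child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])

   try:
       # Bounded wait: raises TimeoutExpired after 1s instead of blocking.
       child.wait(timeout=1)
   except subprocess.TimeoutExpired:
       # A bare child.wait() here would block for the full 60 seconds,
       # which is how the tests tripped the 60s pytest execution timeout.
       print("still running")
   finally:
       child.kill()
       child.wait()
   ```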
   
   ## Fixes applied
   
   ### 1. Add `@pytest.mark.integration("otel")` marker (root cause fix)
   
   Added the `otel` integration marker to `TestOtelIntegration` alongside the 
existing `redis`
   marker. The pytest plugin checks each integration marker independently — both
   `INTEGRATION_REDIS=true` **and** `INTEGRATION_OTEL=true` must be set for the 
tests to run.
   This immediately prevents the tests from running in the redis-only CI job.
   
   The tests can still be run manually:
   ```
   breeze testing core-integration-tests --integration otel --integration redis
   ```
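
   The independent-marker check can be illustrated with a small sketch. Note that `required_integrations_enabled` is a hypothetical stand-in for the pytest plugin's logic, not Airflow's actual implementation:

   ```python
   import os

   def required_integrations_enabled(markers: list[str]) -> bool:
       """Each integration marker is checked independently: every
       INTEGRATION_<NAME>=true env var must be set for the test to run."""
       return all(
           os.environ.get(f"INTEGRATION_{m.upper()}") == "true" for m in markers
       )

   # redis-only CI job: INTEGRATION_REDIS=true, INTEGRATION_OTEL unset
   os.environ["INTEGRATION_REDIS"] = "true"
   os.environ.pop("INTEGRATION_OTEL", None)
   print(required_integrations_enabled(["redis", "otel"]))  # False -> skipped
   ```

   With both markers present, the redis-only job now skips the class instead of running it against a collector that was never started.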
   
   ### 2. Set `OTEL_EXPORTER_OTLP_TIMEOUT=1` (defense-in-depth)
   
   Set the OTLP export timeout to 1 second in `setup_class`. This env var is 
inherited by all
   spawned subprocesses (via `os.environ.copy()` in 
`start_worker_and_scheduler1`). It reduces
   the per-request OTLP timeout from the default 10s to 1s, so even if a 
collector becomes
   temporarily unavailable, the atexit flush completes quickly instead of 
blocking.
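
   A minimal sketch of the change, assuming the env var is set in `setup_class` and the subprocess environment is built from a copy (variable names here are illustrative):

   ```python
   import os

   # Cap each OTLP export request at 1 second (the exporter default is 10s),
   # so a failed flush retries briefly instead of stalling process exit.
   os.environ["OTEL_EXPORTER_OTLP_TIMEOUT"] = "1"

   # Spawned subprocesses inherit it via the copied environment:
   child_env = os.environ.copy()
   print(child_env["OTEL_EXPORTER_OTLP_TIMEOUT"])  # 1
   ```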
   
   ### 3. Replace bare `process.wait()` with bounded wait + kill (defensive 
coding)
   
   Introduced two helper functions:
   - `_terminate_and_wait(process, timeout=30)` — sends SIGTERM, waits up to 
30s, then
     escalates to SIGKILL if the process hasn't exited.
   - `_wait_or_kill(process, timeout=30)` — for processes already terminated 
earlier, waits
     with the same timeout + SIGKILL fallback.
   
   Applied to all `finally` block cleanup code across all 7 test methods (~20 
call sites).
   The `assert scheduler_process_1.wait() == 0` in-test assertion was 
intentionally left
   unchanged since it's a test assertion, not cleanup.
   
   This prevents any future subprocess hang from blocking test teardown, 
regardless of cause.
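
   The two helpers might look roughly like this (a sketch under the assumptions above; the exact signatures in the PR may differ):

   ```python
   import subprocess

   def _wait_or_kill(process: subprocess.Popen, timeout: int = 30) -> int:
       """Wait up to `timeout` seconds for an already-terminated process;
       escalate to SIGKILL if it still has not exited."""
       try:
           return process.wait(timeout=timeout)
       except subprocess.TimeoutExpired:
           process.kill()
           return process.wait()

   def _terminate_and_wait(process: subprocess.Popen, timeout: int = 30) -> int:
       """Send SIGTERM, then wait with the same bounded-wait + SIGKILL fallback."""
       process.terminate()
       return _wait_or_kill(process, timeout)
   ```

   With this in place, even a subprocess wedged in an atexit export loop is forcibly reaped within the timeout, well inside the 60s pytest budget.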
   
   
   
   ---
   
   ##### Was generative AI tooling used to co-author this PR?
   
   <!--
   If generative AI tooling has been used in the process of authoring this PR, 
please
   change below checkbox to `[X]` followed by the name of the tool, uncomment 
the "Generated-by".
   -->
   
   - [x] Yes (please specify the tool below)
   
   Generated-by: Claude Opus 4.6 following [the 
guidelines](https://github.com/apache/airflow/blob/main/contributing-docs/05_pull_requests.rst#gen-ai-assisted-contributions)
   
   ---
   
   * Read the **[Pull Request 
Guidelines](https://github.com/apache/airflow/blob/main/contributing-docs/05_pull_requests.rst#pull-request-guidelines)**
 for more information. Note: commit author/co-author name and email in commits 
become permanently public when merged.
   * For fundamental code changes, an Airflow Improvement Proposal 
([AIP](https://cwiki.apache.org/confluence/display/AIRFLOW/Airflow+Improvement+Proposals))
 is needed.
   * When adding dependency, check compliance with the [ASF 3rd Party License 
Policy](https://www.apache.org/legal/resolved.html#category-x).
   * For significant user-facing changes create newsfragment: 
`{pr_number}.significant.rst` or `{issue_number}.significant.rst`, in 
[airflow-core/newsfragments](https://github.com/apache/airflow/tree/main/airflow-core/newsfragments).
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]