merlimat opened a new pull request, #25352:
URL: https://github.com/apache/pulsar/pull/25352

   ## Flaky test failure
   
   ```
   ServerCnxTest.testCreateProducerTimeout:2093->getResponse:2884->getResponse:2908
   IO Failed to get response from socket within 10s
   ```
   
   ## Summary
   
   Fix a race condition in `ServerCnx` where producer/consumer lifecycle 
callbacks could execute out of order, causing the create/close/create producer 
sequence to fail intermittently.
   
   ### Root Cause
   
   The `producers` and `consumers` `ConcurrentLongHashMap` maps in `ServerCnx` are 
created with `concurrencyLevel=1`, on the assumption that all accesses happen on the 
same Netty IO thread (`ctx.executor()`). However, most async callbacks used the 
synchronous `CompletableFuture` variants (`thenAccept`, `thenCompose`, etc.) 
instead of the `*Async` variants pinned to `ctx.executor()`.
   
   The issue is how `CompletableFuture` chaining works **without an explicit 
executor**:
   
   - If the upstream future is **NOT yet completed** when `thenCompose(fn)` is 
called, `fn` is queued as a dependent and runs when the completing thread calls 
`complete()`. Multiple chained stages execute in registration order since the 
completing thread walks the chain sequentially.
   
   - If the upstream future is **ALREADY completed** when `thenCompose(fn)` is 
called, `fn` runs **IMMEDIATELY on the calling thread**, right there in the 
`thenCompose` call. It skips the queue entirely.
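   The two behaviors can be observed directly. This is a minimal stand-alone sketch (not code from this PR; class and method names are made up for illustration) showing where a plain `thenCompose` callback runs in each case:

   ```java
   import java.util.concurrent.CompletableFuture;

   public class InlineDemo {
       // Upstream already completed: the callback runs inline on the calling thread.
       static boolean runsInlineWhenCompleted() {
           CompletableFuture<Integer> done = CompletableFuture.completedFuture(1);
           Thread[] ranOn = new Thread[1];
           done.thenCompose(v -> {
               ranOn[0] = Thread.currentThread();
               return CompletableFuture.completedFuture(v + 1);
           }).join();
           return ranOn[0] == Thread.currentThread();
       }

       // Upstream still pending: the callback is queued as a dependent and runs
       // on whichever thread eventually calls complete().
       static boolean runsOnCompletingThread() {
           CompletableFuture<Integer> pending = new CompletableFuture<>();
           Thread[] ranOn = new Thread[1];
           pending.thenCompose(v -> {
               ranOn[0] = Thread.currentThread();
               return CompletableFuture.completedFuture(v + 1);
           });
           Thread completer = new Thread(() -> pending.complete(1));
           completer.start();
           try {
               completer.join();
           } catch (InterruptedException e) {
               Thread.currentThread().interrupt();
           }
           return ranOn[0] == completer;
       }

       public static void main(String[] args) {
           System.out.println(runsInlineWhenCompleted()); // true
           System.out.println(runsOnCompletingThread());  // true
       }
   }
   ```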
   
   This distinction causes the create/close/create producer race: both 
`createProducer1` and `createProducer2` call `getOrCreateTopic()` and chain 
`thenCompose(topic -> addProducer(...))`. When the topic future is not yet 
completed, both callbacks are queued as dependents and execute in order. But if 
the topic future is already completed when `createProducer2` chains on it, 
producer2's `addProducer()` runs immediately inline — potentially before 
producer1's cleanup from the close command has finished. Producer1 is still 
registered, so producer2's `addProducer()` fails with "already connected", 
and the client never receives a success response.
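   To make the queue-jumping concrete, here is a deterministic toy reproduction (not code from this PR). A manually drained `Deque` stands in for the IO event-loop queue, the way `EmbeddedChannel.runPendingTasks()` drains it in the test; all names are illustrative:

   ```java
   import java.util.ArrayDeque;
   import java.util.Deque;
   import java.util.concurrent.CompletableFuture;

   public class RaceSketch {
       // Returns the order in which the two callbacks actually ran.
       static String run() {
           // Stand-in for the Netty IO event-loop task queue.
           Deque<Runnable> ioQueue = new ArrayDeque<>();
           StringBuilder order = new StringBuilder();

           CompletableFuture<String> topicFuture =
                   CompletableFuture.completedFuture("topic");

           // The close handler's cleanup is already queued on the IO executor...
           ioQueue.add(() -> order.append("cleanup;"));

           // ...but producer2's plain thenCompose on the ALREADY-completed topic
           // future runs inline, right here, before the queue is drained.
           topicFuture.thenCompose(topic -> {
               order.append("addProducer2;");
               return CompletableFuture.completedFuture(topic);
           });

           // The event loop drains its queue.
           while (!ioQueue.isEmpty()) {
               ioQueue.poll().run();
           }
           return order.toString(); // "addProducer2;cleanup;" — jumped the queue
       }

       public static void main(String[] args) {
           System.out.println(run());
       }
   }
   ```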
   
   ### Fix
   
   Using `thenComposeAsync(fn, ctx.executor())` forces `fn` to always be 
**submitted to the executor's task queue**, regardless of whether the future is 
already completed. This guarantees FIFO ordering — all stages go through the 
same queue, so the close handler runs before producer2's creation, and 
producer1's cleanup completes before producer2 tries to register.
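   The same toy setup shows the effect of the fix (again a sketch, not PR code; the `Deque`-backed executor stands in for `ctx.executor()`): `thenComposeAsync` submits the callback to the queue even though the future is already complete, so it lands behind the cleanup task.

   ```java
   import java.util.ArrayDeque;
   import java.util.Deque;
   import java.util.concurrent.CompletableFuture;
   import java.util.concurrent.Executor;

   public class FifoSketch {
       static String run() {
           // Stand-in for the Netty IO event-loop queue and ctx.executor().
           Deque<Runnable> ioQueue = new ArrayDeque<>();
           Executor ioExecutor = ioQueue::add;
           StringBuilder order = new StringBuilder();

           CompletableFuture<String> topicFuture =
                   CompletableFuture.completedFuture("topic");

           // Cleanup queued first, as in the close-producer path.
           ioQueue.add(() -> order.append("cleanup;"));

           // thenComposeAsync always submits to the executor, so producer2's
           // callback is enqueued BEHIND the cleanup task even though the
           // topic future is already complete.
           topicFuture.thenComposeAsync(topic -> {
               order.append("addProducer2;");
               return CompletableFuture.completedFuture(topic);
           }, ioExecutor);

           // The event loop drains its queue in FIFO order.
           while (!ioQueue.isEmpty()) {
               ioQueue.poll().run();
           }
           return order.toString(); // "cleanup;addProducer2;"
       }

       public static void main(String[] args) {
           System.out.println(run());
       }
   }
   ```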
   
   ### Changes
   
   - `handleProducer` chain: `thenApplyAsync`, `thenComposeAsync`, 
`thenRunAsync`, `exceptionallyAsync` with `ctx.executor()`
   - `buildProducerAndAddTopic`: `thenAcceptAsync`, `thenRunAsync`
   - `handleSubscribe` chain: `thenApplyAsync`, `thenAcceptAsync`, 
`exceptionallyAsync`
   - `handleCloseProducer`: `thenAcceptAsync`
   - `safelyRemoveProducer/Consumer`: `whenCompleteAsync`
   - `ServerCnxTest`: add `channel.runPendingTasks()` to the `getResponse()` 
polling loop so that tasks scheduled on the `EmbeddedChannel` executor are processed
   
   ## Documentation
   
   - [x] `doc-not-needed`
   (Your PR doesn't need any doc update)
   
   ## Matching PR in forked repository
   
   _No response_
   
   ### Tip
   
   Add the labels `ready-to-test` and `area/test` to trigger the CI.

