dnadolny commented on code in PR #21065:
URL: https://github.com/apache/kafka/pull/21065#discussion_r2668954861
##########
clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java:
##########
@@ -1132,7 +1156,7 @@ void abortBatches(final RuntimeException reason) {
dq.remove(batch);
}
batch.abort(reason);
- deallocate(batch);
+ completeBatchAndDeallocate(batch);
Review Comment:
At the least it can happen in tests. I [added a
flag](https://github.com/dnadolny/kafka/commit/42693053658faf72b1dbda0b825aa3cd5385d719)
to ProducerBatch, set it in Sender.sendProduceRequest, and made this location in
RecordAccumulator throw an exception when the flag is set. That exception is hit in
`SenderTest.testCancelInFlightRequestAfterFatalError` as well as in
`SenderTest.testForceCloseWithProducerIdReset`.
If this is a real issue, it might be worth adding logic like that to run all the
time, to validate that deallocate is only called on batches that either were never
sent or are already complete.
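
For illustration only, a minimal standalone sketch of that kind of guard, not the actual Kafka classes or the code in the linked commit: the `sent`/`complete` flags and the `deallocate` stand-in below are invented names, and the real check would live in ProducerBatch, Sender.sendProduceRequest, and RecordAccumulator.deallocate.

```java
public class DeallocateGuardSketch {

    // Simplified stand-in for ProducerBatch with the proposed tracking flag.
    static class Batch {
        volatile boolean sent;     // would be set in Sender.sendProduceRequest
        volatile boolean complete; // would be set when the batch completes (response or abort)

        void markSent() { sent = true; }
        void markComplete() { complete = true; }
    }

    // Stand-in for RecordAccumulator.deallocate: enforce the invariant that a
    // batch is only deallocated if it was never sent or has already completed.
    static void deallocate(Batch batch) {
        if (batch.sent && !batch.complete) {
            throw new IllegalStateException("deallocate called on an in-flight batch");
        }
        // ... return the batch's buffer to the pool ...
    }

    public static void main(String[] args) {
        Batch neverSent = new Batch();
        deallocate(neverSent); // never sent: allowed

        Batch inFlight = new Batch();
        inFlight.markSent();
        try {
            deallocate(inFlight); // sent but not completed: the guard fires
        } catch (IllegalStateException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```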