[ https://issues.apache.org/jira/browse/KAFKA-17410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17878999#comment-17878999 ]
Lianet Magrans commented on KAFKA-17410:
----------------------------------------

Hey [~frankvicky], I would say it's expected and OK to see the interrupt surface from different places in the two consumers, basically because their implementations are very different in this area. The new consumer makes extensive use of futures and waits on them for results on the application thread, so whenever it hits one of those waits and the thread is interrupted, it propagates the exception right there. The classic consumer is fundamentally different: it does not have the same waits, and it is right after the client poll that it calls maybeThrowInterruptException [https://github.com/apache/kafka/blob/b0d0956b20735575b7d4d870c2838912f4006940/clients/src/main/java/org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java#L298]

I wonder if the flakiness here was due to a gap in addAndGet, with the future getting completed in between the call to add and the call to get? I ended up noticing and fixing that gap here [https://github.com/apache/kafka/blob/4c0d248b93bf295c5e6c9928b7870b04e4a699cd/clients/src/main/java/org/apache/kafka/clients/consumer/internals/events/ApplicationEventHandler.java#L114-L119], and seeing this issue now, the two seem related to me. What do you think? Maybe worth re-running this test with the changes in that PR, to see if it's still flaky? (A minimal sketch of that race follows at the end of this message, after the quoted issue description.) Hope it helps! Sorry I had missed this comment earlier.

> Flaky test testPollThrowsInterruptExceptionIfInterrupted for new consumer
> -------------------------------------------------------------------------
>
>                 Key: KAFKA-17410
>                 URL: https://issues.apache.org/jira/browse/KAFKA-17410
>             Project: Kafka
>          Issue Type: Bug
>          Components: clients, consumer
>            Reporter: Lianet Magrans
>            Assignee: TengYao Chi
>            Priority: Major
>              Labels: consumer-threading-refactor, flaky-test
>
> KafkaConsumerTest.testPollThrowsInterruptExceptionIfInterrupted is flaky for the new consumer (passing consistently for the classic consumer).
> Fails with:
> org.opentest4j.AssertionFailedError: Expected org.apache.kafka.common.errors.InterruptException to be thrown, but nothing was thrown.
> It's been flaky since it was recently enabled for the new consumer:
> https://ge.apache.org/scans/tests?search.names=Git%20branch&search.rootProjectNames=kafka&search.startTimeMax=1724385599999&search.startTimeMin=1720065600000&search.timeZoneId=America%2FToronto&search.values=trunk&tests.container=org.apache.kafka.clients.consumer.KafkaConsumerTest&tests.test=testPollThrowsInterruptExceptionIfInterrupted(GroupProtocol)%5B2%5D
> Note that a very similar test already exists in AsyncKafkaConsumerTest.testPollThrowsInterruptExceptionIfInterrupted, written specifically for the async consumer, and it passes consistently.
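As promised above, here is a minimal, self-contained sketch of the suspected race. It is not the actual ApplicationEventHandler code; the class and helper names (AddAndGetRaceSketch, addAndGetWithGap, addAndGetInterruptAware) are invented for illustration. The only behavior it relies on is that CompletableFuture.get() on an already-completed future returns immediately without checking the caller's interrupt status, so an explicit interrupt check around the wait is one way to close that gap and let poll() reliably throw InterruptException.

{code:java}
// Minimal sketch of the suspected race, assuming the flow described above.
// NOT the actual ApplicationEventHandler code; helper names are invented.
import java.util.concurrent.CompletableFuture;

import org.apache.kafka.common.errors.InterruptException;

public class AddAndGetRaceSketch {

    // The gap: add the event, then wait on its future. If the background
    // thread completes the future in between, get() on an already-completed
    // CompletableFuture returns immediately and never checks the caller's
    // interrupt flag, so no InterruptException surfaces.
    static <T> T addAndGetWithGap(CompletableFuture<T> future, Runnable add) {
        add.run();                     // hand the event over to the background thread
        try {
            return future.get();       // returns immediately if already complete
        } catch (InterruptedException e) {
            throw new InterruptException(e);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Closing the gap: check the interrupt flag explicitly, so an interrupt
    // delivered while the event was in flight still surfaces even when the
    // future is already complete by the time we would wait on it.
    static <T> T addAndGetInterruptAware(CompletableFuture<T> future, Runnable add) {
        add.run();
        if (Thread.interrupted())
            throw new InterruptException(new InterruptedException());
        try {
            return future.get();
        } catch (InterruptedException e) {
            throw new InterruptException(e);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        CompletableFuture<String> alreadyDone = CompletableFuture.completedFuture("result");
        Thread.currentThread().interrupt();    // simulate the test interrupting the app thread

        // The gapped variant returns normally even though the thread is interrupted:
        System.out.println(addAndGetWithGap(alreadyDone, () -> { }));

        // The interrupt-aware variant throws, which is what the test asserts:
        try {
            addAndGetInterruptAware(alreadyDone, () -> { });
        } catch (InterruptException expected) {
            System.out.println("InterruptException surfaced as expected");
        }
    }
}
{code}

Checking the flag up front, rather than relying only on the blocking get, makes the outcome deterministic regardless of how quickly the background thread completes the event, which is exactly the kind of timing the flaky runs seem sensitive to.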