[jira] [Created] (KAFKA-15809) Adding TS to broker's features list and updating broker's metadata schema to include TS enable status
Phuc Hong Tran created KAFKA-15809:
--------------------------------------

             Summary: Adding TS to broker's features list and updating broker's metadata schema to include TS enable status
                 Key: KAFKA-15809
                 URL: https://issues.apache.org/jira/browse/KAFKA-15809
             Project: Kafka
          Issue Type: Sub-task
            Reporter: Phuc Hong Tran
            Assignee: Phuc Hong Tran


Currently the controller has no visibility into each broker's TS (tiered storage) enable status. As mentioned in KAFKA-15341, we need to add metadata about brokers' TS enable status so that the controller can check those statuses before enabling TS per topic.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
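[Editor's note] Read as a design note, the idea is that the controller would consult each broker's registered metadata before allowing tiered storage on a topic. A minimal sketch of that gate check, under the assumption of a per-broker TS flag; the {{BrokerMetadata}} record and {{tieredStorageEnabled}} field are hypothetical and not part of Kafka's actual metadata schema:

{code:java}
import java.util.Collection;
import java.util.List;

// Hypothetical sketch of the controller-side check described above; the
// BrokerMetadata record and its tieredStorageEnabled field are illustrative,
// not actual Kafka controller APIs.
public final class TieredStorageGate {
    record BrokerMetadata(int brokerId, boolean tieredStorageEnabled) {}

    // TS can only be enabled for a topic if every registered broker reports
    // TS enabled, since any broker may end up hosting the topic's replicas.
    static boolean canEnableTieredStorage(Collection<BrokerMetadata> brokers) {
        return brokers.stream().allMatch(BrokerMetadata::tieredStorageEnabled);
    }

    public static void main(String[] args) {
        var brokers = List.of(new BrokerMetadata(0, true), new BrokerMetadata(1, false));
        System.out.println(canEnableTieredStorage(brokers)); // false: broker 1 lacks TS
    }
}
{code}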
[jira] [Resolved] (KAFKA-16011) Fix PlaintextConsumerTest.testMultiConsumerSessionTimeoutOnClose
[ https://issues.apache.org/jira/browse/KAFKA-16011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Phuc Hong Tran resolved KAFKA-16011.
------------------------------------
    Resolution: Fixed

> Fix PlaintextConsumerTest.testMultiConsumerSessionTimeoutOnClose
> ----------------------------------------------------------------
>
>                 Key: KAFKA-16011
>                 URL: https://issues.apache.org/jira/browse/KAFKA-16011
>             Project: Kafka
>          Issue Type: Test
>          Components: clients, consumer, unit tests
>    Affects Versions: 3.7.0
>            Reporter: Kirk True
>            Assignee: Phuc Hong Tran
>            Priority: Major
>              Labels: consumer-threading-refactor, kip-848
>             Fix For: 3.8.0
>
>
> The integration test {{PlaintextConsumerTest.testMultiConsumerSessionTimeoutOnClose}} is failing when using the {{AsyncKafkaConsumer}}.
> The error is:
> {code}
> org.opentest4j.AssertionFailedError: Did not get valid assignment for partitions HashSet(topic1-2, topic1-4, topic-1, topic-0, topic1-5, topic1-1, topic1-0, topic1-3). Instead, got ArrayBuffer(Set(topic1-0, topic-0, topic-1), Set(), Set(topic1-1, topic1-5))
>     at org.junit.jupiter.api.AssertionUtils.fail(AssertionUtils.java:38)
>     at org.junit.jupiter.api.Assertions.fail(Assertions.java:134)
>     at kafka.api.AbstractConsumerTest.validateGroupAssignment(AbstractConsumerTest.scala:286)
>     at kafka.api.PlaintextConsumerTest.runMultiConsumerSessionTimeoutTest(PlaintextConsumerTest.scala:1865)
>     at kafka.api.PlaintextConsumerTest.testMultiConsumerSessionTimeoutOnClose(PlaintextConsumerTest.scala:1277)
> {code}
> The logs include these lines:
> {code}
> [2023-12-13 15:33:33,180] ERROR [Consumer clientId=ConsumerTestConsumer, groupId=my-test] GroupHeartbeatRequest failed due to error: INVALID_REQUEST (org.apache.kafka.clients.consumer.internals.HeartbeatRequestManager:376)
> [2023-12-13 15:33:33,180] ERROR [Consumer clientId=ConsumerTestConsumer, groupId=my-test] Member JQ_e0S5FTzKnYyStB3aBrQ with epoch 0 transitioned to FATAL state (org.apache.kafka.clients.consumer.internals.MembershipManagerImpl:456)
> [2023-12-13 15:33:33,212] ERROR [daemon-consumer-assignment]: Error due to (kafka.api.AbstractConsumerTest$ConsumerAssignmentPoller:139)
> org.apache.kafka.common.errors.InvalidRequestException: RebalanceTimeoutMs must be provided in first request.
> [2023-12-13 15:33:39,196] WARN [Consumer clientId=ConsumerTestConsumer, groupId=my-test] consumer poll timeout has expired. This means the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time processing messages. You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records. (org.apache.kafka.clients.consumer.internals.HeartbeatRequestManager:188)
> [2023-12-13 15:33:39,200] WARN [Consumer clientId=ConsumerTestConsumer, groupId=my-test] consumer poll timeout has expired. This means the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time processing messages. You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records. (org.apache.kafka.clients.consumer.internals.HeartbeatRequestManager:188)
> {code}
> I don't know if that's related or not.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
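[Editor's note] The {{InvalidRequestException}} in the log above points at the heartbeat path: the broker rejects a first request (member epoch 0) that omits {{RebalanceTimeoutMs}}. A minimal sketch of the rule the log implies; the request type and field names here mirror the ConsumerGroupHeartbeat RPC but this is illustrative, not the actual {{HeartbeatRequestManager}} internals:

{code:java}
// Illustrative sketch of the constraint surfaced in the log above; not the
// real client code.
public final class HeartbeatSketch {
    record HeartbeatRequest(String groupId, String memberId, int memberEpoch,
                            Integer rebalanceTimeoutMs) {}

    static HeartbeatRequest build(String groupId, String memberId, int memberEpoch,
                                  int rebalanceTimeoutMs) {
        // On the first heartbeat (epoch 0) the broker requires the full field
        // set, including the rebalance timeout; later heartbeats may omit
        // fields that have not changed.
        Integer timeout = (memberEpoch == 0) ? rebalanceTimeoutMs : null;
        return new HeartbeatRequest(groupId, memberId, memberEpoch, timeout);
    }

    public static void main(String[] args) {
        System.out.println(build("my-test", "JQ_e0S5FTzKnYyStB3aBrQ", 0, 300_000));
    }
}
{code}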
[jira] [Resolved] (KAFKA-16034) AsyncKafkaConsumer will get Invalid Request error when trying to rejoin on fenced/unknown member Id
[ https://issues.apache.org/jira/browse/KAFKA-16034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Phuc Hong Tran resolved KAFKA-16034.
------------------------------------
    Resolution: Fixed

> AsyncKafkaConsumer will get Invalid Request error when trying to rejoin on fenced/unknown member Id
> ---------------------------------------------------------------------------------------------------
>
>                 Key: KAFKA-16034
>                 URL: https://issues.apache.org/jira/browse/KAFKA-16034
>             Project: Kafka
>          Issue Type: Sub-task
>            Reporter: Philip Nee
>            Assignee: Phuc Hong Tran
>            Priority: Major
>             Fix For: 3.8.0
>
>
> The consumer logs an invalid request error when joining from a fenced/unknown member id because we didn't reset the HeartbeatState, and so we don't send the needed fields (rebalanceTimeoutMs, for example) when rejoining.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
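[Editor's note] A sketch of the fix the description implies: when the member is fenced or its id is unknown, reset the heartbeat state so the next (epoch 0) request carries the full field set again. All names here are illustrative, not the actual {{MembershipManagerImpl}} code:

{code:java}
// Hypothetical sketch of the rejoin path described above.
public final class RejoinSketch {
    enum HeartbeatError { FENCED_MEMBER_EPOCH, UNKNOWN_MEMBER_ID, NONE }

    static final class HeartbeatState {
        boolean sentFullFields;
        // Forces the next heartbeat to resend every field (e.g. rebalanceTimeoutMs),
        // which the broker requires on a first (epoch 0) request.
        void reset() { sentFullFields = false; }
    }

    int memberEpoch = 5;
    final HeartbeatState heartbeatState = new HeartbeatState();

    void onHeartbeatError(HeartbeatError error) {
        if (error == HeartbeatError.FENCED_MEMBER_EPOCH
                || error == HeartbeatError.UNKNOWN_MEMBER_ID) {
            memberEpoch = 0;        // rejoin as a new member
            heartbeatState.reset(); // without this, the rejoin is rejected as INVALID_REQUEST
        }
    }
}
{code}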
[jira] [Resolved] (KAFKA-15538) Client support for java regex based subscription
[ https://issues.apache.org/jira/browse/KAFKA-15538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Phuc Hong Tran resolved KAFKA-15538.
------------------------------------
    Resolution: Fixed

> Client support for java regex based subscription
> ------------------------------------------------
>
>                 Key: KAFKA-15538
>                 URL: https://issues.apache.org/jira/browse/KAFKA-15538
>             Project: Kafka
>          Issue Type: Sub-task
>          Components: clients, consumer
>            Reporter: Lianet Magrans
>            Assignee: Phuc Hong Tran
>            Priority: Major
>              Labels: kip-848, kip-848-client-support
>             Fix For: 3.8.0
>
>
> When using subscribe with a java regex (Pattern), we need to resolve it on the client side to send the broker a list of topic names to subscribe to.
> Context:
> The new consumer group protocol uses [Google RE2/J|https://github.com/google/re2j] for regular expressions and introduces new methods in the consumer API to subscribe using a `SubscriptionPattern`. Subscribing with a java `Pattern` will still be supported for a while but will eventually be removed.
> * When subscribe with `SubscriptionPattern` is used, the client should just send the regex to the broker, and it will be resolved on the server side.
> * When subscribe with `Pattern` is used, the regex should be resolved on the client side.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
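[Editor's note] The client-side half of the ticket boils down to filtering the cluster's topic names through the java {{Pattern}} and sending the broker the resulting explicit list. A minimal sketch, assuming the topic list has already been fetched from metadata; the class and helper names are illustrative, not the actual consumer plumbing:

{code:java}
import java.util.List;
import java.util.Set;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

// Minimal sketch of client-side Pattern resolution as described above.
public final class PatternResolution {
    // Filter the cluster's topic names locally; the resulting explicit list
    // is what gets sent to the broker in the subscription.
    static Set<String> resolve(Pattern pattern, List<String> clusterTopics) {
        return clusterTopics.stream()
                .filter(topic -> pattern.matcher(topic).matches())
                .collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        System.out.println(resolve(Pattern.compile("topic1-.*"),
                List.of("topic1-0", "topic1-1", "other"))); // [topic1-0, topic1-1]
    }
}
{code}

With `SubscriptionPattern`, by contrast, the regex string itself goes to the broker and this resolution step happens server-side with RE2/J semantics.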
[jira] [Reopened] (KAFKA-15538) Client support for java regex based subscription
[ https://issues.apache.org/jira/browse/KAFKA-15538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Phuc Hong Tran reopened KAFKA-15538:
------------------------------------

> Client support for java regex based subscription
> ------------------------------------------------
>
>                 Key: KAFKA-15538
>                 URL: https://issues.apache.org/jira/browse/KAFKA-15538
>             Project: Kafka
>          Issue Type: Sub-task
>          Components: clients, consumer
>            Reporter: Lianet Magrans
>            Assignee: Phuc Hong Tran
>            Priority: Major
>              Labels: kip-848, kip-848-client-support
>             Fix For: 3.8.0
>
>
> When using subscribe with a java regex (Pattern), we need to resolve it on the client side to send the broker a list of topic names to subscribe to.
> Context:
> The new consumer group protocol uses [Google RE2/J|https://github.com/google/re2j] for regular expressions and introduces new methods in the consumer API to subscribe using a `SubscriptionPattern`. Subscribing with a java `Pattern` will still be supported for a while but will eventually be removed.
> * When subscribe with `SubscriptionPattern` is used, the client should just send the regex to the broker, and it will be resolved on the server side.
> * When subscribe with `Pattern` is used, the regex should be resolved on the client side.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Resolved] (KAFKA-15538) Client support for java regex based subscription
[ https://issues.apache.org/jira/browse/KAFKA-15538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Phuc Hong Tran resolved KAFKA-15538.
------------------------------------
    Resolution: Fixed

> Client support for java regex based subscription
> ------------------------------------------------
>
>                 Key: KAFKA-15538
>                 URL: https://issues.apache.org/jira/browse/KAFKA-15538
>             Project: Kafka
>          Issue Type: Sub-task
>          Components: clients, consumer
>            Reporter: Lianet Magrans
>            Assignee: Phuc Hong Tran
>            Priority: Major
>              Labels: kip-848, kip-848-client-support
>             Fix For: 3.8.0
>
>
> When using subscribe with a java regex (Pattern), we need to resolve it on the client side to send the broker a list of topic names to subscribe to.
> Context:
> The new consumer group protocol uses [Google RE2/J|https://github.com/google/re2j] for regular expressions and introduces new methods in the consumer API to subscribe using a `SubscriptionPattern`. Subscribing with a java `Pattern` will still be supported for a while but will eventually be removed.
> * When subscribe with `SubscriptionPattern` is used, the client should just send the regex to the broker, and it will be resolved on the server side.
> * When subscribe with `Pattern` is used, the regex should be resolved on the client side.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Resolved] (KAFKA-15809) Update broker's metadata schema to include TS enable status
[ https://issues.apache.org/jira/browse/KAFKA-15809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Phuc Hong Tran resolved KAFKA-15809.
------------------------------------
    Resolution: Won't Fix

> Update broker's metadata schema to include TS enable status
> -----------------------------------------------------------
>
>                 Key: KAFKA-15809
>                 URL: https://issues.apache.org/jira/browse/KAFKA-15809
>             Project: Kafka
>          Issue Type: Sub-task
>            Reporter: Phuc Hong Tran
>            Assignee: Phuc Hong Tran
>            Priority: Minor
>              Labels: KIP-405
>             Fix For: 3.8.0
>
>
> Currently the controller has no visibility into each broker's TS enable status. As mentioned in KAFKA-15341, we need to add metadata about brokers' TS enable status so that the controller can check those statuses before enabling TS per topic.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Resolved] (KAFKA-15558) Determine if Timer should be used elsewhere in PrototypeAsyncConsumer.updateFetchPositions()
[ https://issues.apache.org/jira/browse/KAFKA-15558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Phuc Hong Tran resolved KAFKA-15558.
------------------------------------
    Resolution: Fixed

> Determine if Timer should be used elsewhere in PrototypeAsyncConsumer.updateFetchPositions()
> --------------------------------------------------------------------------------------------
>
>                 Key: KAFKA-15558
>                 URL: https://issues.apache.org/jira/browse/KAFKA-15558
>             Project: Kafka
>          Issue Type: Task
>          Components: clients, consumer
>            Reporter: Kirk True
>            Assignee: Phuc Hong Tran
>            Priority: Major
>              Labels: consumer-threading-refactor, fetcher, timeout
>             Fix For: 3.8.0
>
>
> This is a follow-up ticket based on a question from [~junrao] when reviewing the [fetch request manager pull request|https://github.com/apache/kafka/pull/14406]:
> {quote}It still seems weird that we only use the timer for {{refreshCommittedOffsetsIfNeeded}}, but not for other cases where we don't have valid fetch positions. For example, if all partitions are in {{AWAIT_VALIDATION}} state, it seems that {{PrototypeAsyncConsumer.poll()}} will just go in a busy loop, which is not efficient.
> {quote}
> The goal here is to determine if we should also be propagating the Timer to the validate positions and reset positions operations.
> Note: we should also investigate whether the existing {{KafkaConsumer}} implementation should be fixed, too.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
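[Editor's note] The direction the ticket discusses is to thread one Timer through every step of {{updateFetchPositions()}} so that waiting on validation or reset cannot busy-loop past the caller's poll deadline. A sketch of that shape; the stub Timer and step methods are illustrative stand-ins, not the real consumer internals:

{code:java}
// Illustrative sketch: one deadline propagated through all three
// position-fixing steps.
public final class UpdateFetchPositionsSketch {
    static final class Timer {
        private final long deadlineMs;
        Timer(long timeoutMs) { this.deadlineMs = System.currentTimeMillis() + timeoutMs; }
        boolean expired() { return System.currentTimeMillis() >= deadlineMs; }
    }

    boolean updateFetchPositions(Timer timer) {
        refreshCommittedOffsetsIfNeeded(timer); // already timer-bounded today
        validatePositionsIfNeeded(timer);       // proposed: bound AWAIT_VALIDATION waits too
        resetPositionsIfNeeded(timer);          // proposed: bound resets too
        return !timer.expired();
    }

    // Each step waits at most until the shared timer expires instead of
    // spinning indefinitely.
    void refreshCommittedOffsetsIfNeeded(Timer timer) { /* bounded wait */ }
    void validatePositionsIfNeeded(Timer timer) { /* bounded wait */ }
    void resetPositionsIfNeeded(Timer timer) { /* bounded wait */ }
}
{code}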
[jira] [Resolved] (KAFKA-15160) Message bytes duplication in Kafka headers when compression is enabled
[ https://issues.apache.org/jira/browse/KAFKA-15160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Phuc Hong Tran resolved KAFKA-15160.
------------------------------------
    Resolution: Won't Fix

> Message bytes duplication in Kafka headers when compression is enabled
> ----------------------------------------------------------------------
>
>                 Key: KAFKA-15160
>                 URL: https://issues.apache.org/jira/browse/KAFKA-15160
>             Project: Kafka
>          Issue Type: Bug
>          Components: clients, compression, consumer
>    Affects Versions: 3.2.3, 3.3.2
>            Reporter: Vikash Mishra
>            Assignee: Phuc Hong Tran
>            Priority: Critical
>         Attachments: dump-compressed-data-.7z, java heap dump.png, wireshark-min.png
>
>
> I created a Spring Kafka consumer using @KafkaListener.
> When the consumed data is compressed (with any compression: snappy/gzip), a heap dump shows a {{byte[]}} occupying the same amount of memory as the message value. This behavior is seen only when compressed data is consumed, not with uncompressed data.
> Capturing the Kafka traffic with Wireshark shows the proper size of data coming from the Kafka server and no extra bytes in the headers, so this is definitely something in the Kafka client. Spring does nothing with compression; that functionality is handled entirely inside the Kafka client library.
> Screenshots of the heap dump and the Wireshark capture are attached.
> This seems like a critical issue, as the message size in memory nearly doubles, impacting consumer memory and performance. It looks as if the actual message value is being copied into the headers somewhere.
> *To Reproduce*
> # Produce compressed data on any topic.
> # Create a simple consumer consuming from the above-created topic.
> # Capture a heap dump.
> *Expected behavior*
> Headers should not show bytes consuming memory equivalent to the value.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
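[Editor's note] For step 1 of the reproduction, a minimal producer that sends compressed data might look like the following; the broker address and topic name are placeholders:

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// Step 1 of the reproduction: produce compressed data to a topic.
public final class CompressedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy"); // or "gzip"
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test-topic", "key", "a reasonably large value"));
        }
        // Steps 2-3: consume from "test-topic" with a plain consumer, capture a
        // heap dump while records are retained, and compare byte[] sizes.
    }
}
{code}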
[jira] [Created] (KAFKA-16618) Update the RPC for ConsumerGroupHeartbeatRequest and ConsumerGroupHeartbeatResponse
Phuc Hong Tran created KAFKA-16618:
--------------------------------------

             Summary: Update the RPC for ConsumerGroupHeartbeatRequest and ConsumerGroupHeartbeatResponse
                 Key: KAFKA-16618
                 URL: https://issues.apache.org/jira/browse/KAFKA-16618
             Project: Kafka
          Issue Type: Sub-task
          Components: clients
            Reporter: Phuc Hong Tran
            Assignee: Phuc Hong Tran
             Fix For: 4.0.0


--
This message was sent by Atlassian Jira
(v8.20.10#820010)