Re: [PR] KAFKA-10789: Streamlining Tests in ChangeLoggingKeyValueBytesStoreTest [kafka]
leaf-soba commented on PR #18816: URL: https://github.com/apache/kafka/pull/18816#issuecomment-2661797786 Hi @chia7712, in the CI process, there are three flaky tests that are unrelated to this PR. Could you let me know how to re-run the CI process to get them to pass? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (KAFKA-18332) Fix KafkaRaftClient complexity
[ https://issues.apache.org/jira/browse/KAFKA-18332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17927585#comment-17927585 ] Ryan Ye commented on KAFKA-18332: - Hi can I take this issue? > Fix KafkaRaftClient complexity > -- > > Key: KAFKA-18332 > URL: https://issues.apache.org/jira/browse/KAFKA-18332 > Project: Kafka > Issue Type: Improvement >Reporter: Alyssa Huang >Assignee: Jose Armando Garcia Sancio >Priority: Major > > Remove suppressions for ClassDataAbstractionCoupling and JavaNCSS > KafkaRaftClientTest also has suppressions for JavaNCSS (in suppressions.xml), > ClassDataAbstractionCoupling, and ClassFanOutComplexity -- This message was sent by Atlassian Jira (v8.20.10#820010)
[PR] KAFKA-18332: fix ClassDataAbstractionCoupling problem in KafkaRaftClientTest [kafka]
leaf-soba opened a new pull request, #18926: URL: https://github.com/apache/kafka/pull/18926 - extract a unit test class named `KafkaRaftClientClusterAuthTest` to reduce the number of imported classes -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-18461: Fix potential NPE in setDelta after map is erased [kafka]
leaf-soba commented on PR #18684: URL: https://github.com/apache/kafka/pull/18684#issuecomment-2661834192 Hi @ijuma, could you please take a look? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (KAFKA-18810) Flaky test: ClientIdQuotaTest.testQuotaOverrideDelete
Andrew Schofield created KAFKA-18810: Summary: Flaky test: ClientIdQuotaTest.testQuotaOverrideDelete Key: KAFKA-18810 URL: https://issues.apache.org/jira/browse/KAFKA-18810 Project: Kafka Issue Type: Bug Reporter: Andrew Schofield Failed in https://github.com/apache/kafka/actions/runs/13329397255/job/37230139655?pr=18864 and more than 20 failures in the past month. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[PR] Mark testQuotaOverrideDelete as flaky [kafka]
AndrewJSchofield opened a new pull request, #18925: URL: https://github.com/apache/kafka/pull/18925 ClientIdQuotaTest.testQuotaOverrideDelete is flaky. Marking this test as flaky. ### Committer Checklist (excluded from commit message) - [ ] Verify design and implementation - [ ] Verify test coverage and CI build status - [ ] Verify documentation (including upgrade notes) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-18757: Create full-function SimpleAssignor to match KIP-932 description [kafka]
apoorvmittal10 commented on code in PR #18864: URL: https://github.com/apache/kafka/pull/18864#discussion_r1957418602 ## group-coordinator/src/main/java/org/apache/kafka/coordinator/group/assignor/SimpleAssignor.java: ## @@ -72,36 +77,178 @@ private GroupAssignment assignHomogenous( if (subscribeTopicIds.isEmpty()) return new GroupAssignment(Collections.emptyMap()); -Map> targetPartitions = computeTargetPartitions( +// Subscribed topic partitions for the share group. +List targetPartitions = computeTargetPartitions( subscribeTopicIds, subscribedTopicDescriber); -return new GroupAssignment(groupSpec.memberIds().stream().collect(Collectors.toMap( -Function.identity(), memberId -> new MemberAssignmentImpl(targetPartitions; +// The current assignment from topic partition to members. +Map> currentAssignment = currentAssignment(groupSpec); +return newAssignmentHomogeneous(groupSpec, subscribeTopicIds, targetPartitions, currentAssignment); } private GroupAssignment assignHeterogeneous( GroupSpec groupSpec, SubscribedTopicDescriber subscribedTopicDescriber ) { -Map members = new HashMap<>(); +Map> memberToPartitionsSubscription = new HashMap<>(); for (String memberId : groupSpec.memberIds()) { MemberSubscription spec = groupSpec.memberSubscription(memberId); if (spec.subscribedTopicIds().isEmpty()) continue; -Map> targetPartitions = computeTargetPartitions( +// Subscribed topic partitions for the share group member. +List targetPartitions = computeTargetPartitions( spec.subscribedTopicIds(), subscribedTopicDescriber); +memberToPartitionsSubscription.put(memberId, targetPartitions); +} + +// The current assignment from topic partition to members. +Map> currentAssignment = currentAssignment(groupSpec); +return newAssignmentHeterogeneous(groupSpec, memberToPartitionsSubscription, currentAssignment); +} -members.put(memberId, new MemberAssignmentImpl(targetPartitions)); +// Get the current assignment for subscribed topic partitions to share group members. Review Comment: Please write the method comments as javadoc. ## group-coordinator/src/main/java/org/apache/kafka/coordinator/group/assignor/SimpleAssignor.java: ## @@ -111,12 +258,26 @@ private Map> computeTargetPartitions( ); } -Set partitions = new HashSet<>(); for (int i = 0; i < numPartitions; i++) { -partitions.add(i); +targetPartitions.add(new TargetPartition(topicId, i)); } -targetPartitions.put(topicId, partitions); }); return targetPartitions; } + +record TargetPartition(Uuid topicId, int partition) { Review Comment: Why we didn't use `org.apache.kafka.server.common.TopicIdPartition`, seems same? ## group-coordinator/src/main/java/org/apache/kafka/coordinator/group/assignor/SimpleAssignor.java: ## @@ -111,12 +292,41 @@ private Map> computeTargetPartitions( ); } -Set partitions = new HashSet<>(); for (int i = 0; i < numPartitions; i++) { -partitions.add(i); +targetPartitions.add(new TargetPartition(topicId, i)); } -targetPartitions.put(topicId, partitions); }); return targetPartitions; } + +static class TargetPartition { +Uuid topicId; +int partition; Review Comment: This can be a record class. ## group-coordinator/src/main/java/org/apache/kafka/coordinator/group/assignor/SimpleAssignor.java: ## @@ -72,36 +77,178 @@ private GroupAssignment assignHomogenous( if (subscribeTopicIds.isEmpty()) return new GroupAssignment(Collections.emptyMap()); Review Comment: Can we please move to Map.of now? 
## group-coordinator/src/main/java/org/apache/kafka/coordinator/group/assignor/SimpleAssignor.java: ## @@ -111,12 +258,26 @@ private Map> computeTargetPartitions( ); } -Set partitions = new HashSet<>(); for (int i = 0; i < numPartitions; i++) { -partitions.add(i); +targetPartitions.add(new TargetPartition(topicId, i)); } -targetPartitions.put(topicId, partitions); }); return targetPartitions; } + +record TargetPartition(Uuid topicId, int partition) { + +@Override +public boolean equals(Object o) { +if (this == o) return true; +if (o == null || getClass() != o.getClass()) return false; +TargetPartition that = (TargetPartitio
Re: [PR] KAFKA-18757: Create full-function SimpleAssignor to match KIP-932 description [kafka]
apoorvmittal10 commented on code in PR #18864: URL: https://github.com/apache/kafka/pull/18864#discussion_r1954315981 ## group-coordinator/src/main/java/org/apache/kafka/coordinator/group/assignor/SimpleAssignor.java: ## @@ -111,12 +292,41 @@ private Map> computeTargetPartitions( ); } -Set partitions = new HashSet<>(); for (int i = 0; i < numPartitions; i++) { -partitions.add(i); +targetPartitions.add(new TargetPartition(topicId, i)); } -targetPartitions.put(topicId, partitions); }); return targetPartitions; } + +static class TargetPartition { +Uuid topicId; +int partition; Review Comment: This can be a record class. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
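For readers following the review thread above, here is a minimal, self-contained sketch (not the PR's actual code) of what the reviewer's "record class" suggestion amounts to: a Java record generates the canonical constructor, accessors, `equals`, `hashCode` and `toString` automatically, so the hand-written boilerplate shown in the diff can be dropped. The class and `main` method below are purely illustrative.

```java
import org.apache.kafka.common.Uuid;

public class TargetPartitionExample {

    // As a record, TargetPartition gets a canonical constructor, topicId()/partition()
    // accessors, and value-based equals/hashCode/toString generated by the compiler.
    record TargetPartition(Uuid topicId, int partition) { }

    public static void main(String[] args) {
        Uuid topic = Uuid.randomUuid();
        TargetPartition a = new TargetPartition(topic, 0);
        TargetPartition b = new TargetPartition(topic, 0);

        // Structural equality comes for free, which is what the assignor's maps and sets rely on.
        System.out.println(a.equals(b)); // true
        System.out.println(a);           // TargetPartition[topicId=..., partition=0]
    }
}
```

The other review question, whether `org.apache.kafka.server.common.TopicIdPartition` could simply be reused, would remove even this small declaration.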
[jira] [Assigned] (KAFKA-18673) Provide means to gracefully update Producer transactional id mapping state in case of lasting inactivity
[ https://issues.apache.org/jira/browse/KAFKA-18673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Tran reassigned KAFKA-18673: - Assignee: Alex Tran > Provide means to gracefully update Producer transational id mapping state in > case of lasting inactivity > --- > > Key: KAFKA-18673 > URL: https://issues.apache.org/jira/browse/KAFKA-18673 > Project: Kafka > Issue Type: Improvement >Reporter: Danil Shkodin >Assignee: Alex Tran >Priority: Major > > Consider adding some method of preventing a transactional _Producer_ instance > from expiring, please. > Currently, for ones that run services 24/7 that write transactional messages > to Kafka very sparsely there are several options to keep the program highly > available. > The first being what Spring > [does|https://docs.spring.io/spring-kafka/reference/kafka/transactions.html#overview]: > rotating transactional producers at intervals lower than the expiration > timeout. > {code:java} > void fixTransactionalIdExpiration() { > try { > producer.close(timeout); > } catch (Exception error) { > logger.warn("...", error); > } > producer = null; > try { > producer = new KafkaProducer<>(settings); > } catch (Exception error) { > logger.warn("...", error); > // handle failure > return; > } > try { > producer.initTransactions(); > } catch (Exception error) { > logger.warn("...", error); > // close producer and clean up, handle failure > return; > } > }{code} > The other similar one is to also act periodically, but to just write an empty > record transactionally instead of reconnecting. > {code:java} > void fixTransactionalIdExpiration() { > try { > producer.beginTransaction(); > var topic = "project_prefix.__dummy_topic"; > var message = new ProducerRecord<>(topic, (String) null, (String) null); > producer.send(message); > producer.abortTransaction(); > // or producer.commitTransaction(); does not matter > } catch (Exception error) { > logger.warn("...", error); > // handle failure > } > }{code} > Personally, I do not like the necessity of introducing a service topic. This > inelegance overweights reconnection troubles for me. > Suprisingly, producing an empty transaction does not prevent expiration. > [Probably|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/producer/internals/TransactionManager.java#L840], > there is a guard in the transactional manager that would prevent actual > updates to the transactinal producer mappings if there is nothing to write. > {code:java} > void fixTransactionalIdExpiration() { > // does not work > // producer will still get fenced upon transactional.id.expiration.ms > try { > producer.beginTransaction(); > producer.commitTransaction(); > } catch (Exception error) {} > } {code} > Worth noting that client code may execute one of these periodic fixes > conditionally, only if there was no activity, meaning there were no > successful _send()_ or _sendOffsetsToTransaction()_ for, say, 6 - 24 hours. > The last and obvious one is to let it fail and react to the error. 
> {code:java} > void sendMessage(Message message) { > try { > producer.beginTransaction(); > producer.send(message.to()); > producer.commitTransaction(); > } catch (InvalidPidMappingException error) { > // reconnect, retry > } catch (Exception error) { > // handle failure > } > }{code} > Having a dedicated method that explicitly reflects the intent to refresh > producer _transactional.id_ would line up with the _Consumer_ polling > mechanism and manifest to new kafka-clients users that lasting transactional > _Producer_ inactivity should be addressed. > This issue search optimization: > InvalidPidMappingException: The producer attempted to use a producer id which > is not currently assigned to its transactional id > transactional.id.expiration.ms > -- This message was sent by Atlassian Jira (v8.20.10#820010)
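As a concrete illustration of the "conditional" variant mentioned in the ticket (running one of the periodic fixes only after a long stretch without successful sends), here is a minimal sketch. The class name, threshold, and wiring are assumptions made for illustration, not an API proposed by the ticket.

```java
import java.time.Duration;
import java.time.Instant;

/**
 * Illustrative sketch only: run one of the periodic fixes described above solely when
 * the producer has been idle longer than a chosen threshold. The threshold and names
 * here are assumptions, not part of the ticket.
 */
class TransactionalIdKeepAlive {
    private final Duration idleThreshold = Duration.ofHours(6);
    private volatile Instant lastActivity = Instant.now();

    // Call this after every successful send() / sendOffsetsToTransaction().
    void recordActivity() {
        lastActivity = Instant.now();
    }

    // Call this from a scheduled task; it only triggers the fix after lasting inactivity.
    void maybeFixExpiration(Runnable fixTransactionalIdExpiration) {
        if (Duration.between(lastActivity, Instant.now()).compareTo(idleThreshold) > 0) {
            fixTransactionalIdExpiration.run();
            recordActivity();
        }
    }
}
```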
[jira] [Assigned] (KAFKA-7952) Consider to switch to in-memory stores in test whenever possible
[ https://issues.apache.org/jira/browse/KAFKA-7952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Tran reassigned KAFKA-7952: Assignee: (was: Alex Tran) > Consider to switch to in-memory stores in test whenever possible > > > Key: KAFKA-7952 > URL: https://issues.apache.org/jira/browse/KAFKA-7952 > Project: Kafka > Issue Type: Improvement > Components: streams, unit tests >Reporter: Matthias J. Sax >Priority: Major > Labels: beginner, newbie > > We observed that tests can be very slow using default RocksDB stores (cf. > KAFKA-7933). > We should consider to switch to in-memory stores whenever possible to reduce > test runtime. -- This message was sent by Atlassian Jira (v8.20.10#820010)
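For context, switching a Streams test topology to an in-memory store is typically a one-line change at the point where the store is materialized. A minimal sketch, with placeholder topic and store names:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.Stores;

/**
 * Minimal sketch of the idea in KAFKA-7952: back a test topology's state store with the
 * in-memory supplier instead of the default RocksDB one. Topic and store names are placeholders.
 */
public class InMemoryStoreExample {
    public static StreamsBuilder buildTopology() {
        StreamsBuilder builder = new StreamsBuilder();
        builder.table(
            "input-topic",
            Materialized.<String, String>as(Stores.inMemoryKeyValueStore("test-store"))
                .withKeySerde(Serdes.String())
                .withValueSerde(Serdes.String()));
        return builder;
    }
}
```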
Re: [PR] KAFKA-18784: Fix ConsumerWithLegacyMessageFormatIntegrationTest [kafka]
chia7712 commented on code in PR #18889: URL: https://github.com/apache/kafka/pull/18889#discussion_r1957459755 ## core/src/test/scala/integration/kafka/api/ConsumerWithLegacyMessageFormatIntegrationTest.scala: ## @@ -18,16 +18,43 @@ package kafka.api import kafka.utils.TestInfoUtils import org.apache.kafka.common.TopicPartition +import org.apache.kafka.common.compress.Compression +import org.apache.kafka.common.record.{AbstractRecords, CompressionType, MemoryRecords, RecordBatch, RecordVersion, SimpleRecord, TimestampType} import org.junit.jupiter.api.Assertions.{assertEquals, assertNull, assertThrows} import org.junit.jupiter.params.ParameterizedTest import org.junit.jupiter.params.provider.MethodSource +import java.nio.ByteBuffer import java.util import java.util.{Collections, Optional} import scala.jdk.CollectionConverters._ class ConsumerWithLegacyMessageFormatIntegrationTest extends AbstractConsumerTest { + private def appendLegacyRecords(numRecords: Int, tp: TopicPartition, startingTimestamp: Long, brokerId: Int): Unit = { +val records = (0 until numRecords).map { i => + new SimpleRecord(startingTimestamp + i, s"key $i".getBytes, s"value $i".getBytes) +} +val buffer = ByteBuffer.allocate(AbstractRecords.estimateSizeInBytes(RecordBatch.MAGIC_VALUE_V1, CompressionType.NONE, records.asJava)) +val builder = MemoryRecords.builder(buffer, RecordBatch.MAGIC_VALUE_V1, Compression.of(CompressionType.NONE).build, Review Comment: Could you please add test for v0 also? in v0 we don't write the timestamp, so searching record by timestamp should return null. ## core/src/test/scala/integration/kafka/api/ConsumerWithLegacyMessageFormatIntegrationTest.scala: ## @@ -80,12 +114,12 @@ class ConsumerWithLegacyMessageFormatIntegrationTest extends AbstractConsumerTes val timestampTopic2P0 = timestampOffsets.get(new TopicPartition(topic2, 0)) assertEquals(40, timestampTopic2P0.offset) assertEquals(40, timestampTopic2P0.timestamp) -assertEquals(Optional.of(0), timestampTopic2P0.leaderEpoch) +assertEquals(Optional.empty, timestampTopic2P0.leaderEpoch) Review Comment: yes, we don't set the leader epoch for the old message format. ## core/src/test/scala/integration/kafka/api/ConsumerWithLegacyMessageFormatIntegrationTest.scala: ## @@ -51,8 +78,15 @@ class ConsumerWithLegacyMessageFormatIntegrationTest extends AbstractConsumerTes for (topic <- List(topic1, topic2, topic3)) { for (part <- 0 until numParts) { val tp = new TopicPartition(topic, part) -// In sendRecords(), each message will have key, value and timestamp equal to the sequence number. -sendRecords(producer, numRecords = 100, tp, startingTimestamp = 0) +// In sendRecords() and sendLegacyRecords(), each message will have key, value and timestamp equal to the sequence number. +if (topic == topic2) { + // append legacy records to topic2 + appendLegacyRecords(100, tp, 0, part) +} else { + println("sendRecords") Review Comment: please remove this line -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] Update streams compat table 3.8 [kafka]
github-actions[bot] commented on PR #17850: URL: https://github.com/apache/kafka/pull/17850#issuecomment-2661892400 This PR is being marked as stale since it has not had any activity in 90 days. If you would like to keep this PR alive, please leave a comment asking for a review. If the PR has merge conflicts, update it with the latest from the base branch. If you are having difficulty finding a reviewer, please reach out on the [mailing list](https://kafka.apache.org/contact). If this PR is no longer valid or desired, please feel free to close it. If no activity occurs in the next 30 days, it will be automatically closed. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-18767: Add client side config check for shareConsumer [kafka]
TaiJuWu commented on code in PR #18850: URL: https://github.com/apache/kafka/pull/18850#discussion_r1957292877 ## clients/src/main/java/org/apache/kafka/clients/consumer/GroupProtocol.java: ## @@ -23,7 +23,10 @@ public enum GroupProtocol { CLASSIC("CLASSIC"), /** Consumer group protocol */ -CONSUMER("CONSUMER"); +CONSUMER("CONSUMER"), + +/** Share group protocol */ +SHARE("SHARE"); Review Comment: c4ed6834d1b9f05c5e38247c5e0904d84150508e does not work since `GroupRebalanceConfig` has default value. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-18767: Add client side config check for shareConsumer [kafka]
TaiJuWu commented on code in PR #18850: URL: https://github.com/apache/kafka/pull/18850#discussion_r1957292877 ## clients/src/main/java/org/apache/kafka/clients/consumer/GroupProtocol.java: ## @@ -23,7 +23,10 @@ public enum GroupProtocol { CLASSIC("CLASSIC"), /** Consumer group protocol */ -CONSUMER("CONSUMER"); +CONSUMER("CONSUMER"), + +/** Share group protocol */ +SHARE("SHARE"); Review Comment: cf239aefa82053cb7a5f89e5e12335829a2b4e76 does not work since `GroupRebalanceConfig` has default value so we need to do all check within `ConsumerConfig`. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (KAFKA-15332) Eligible Leader Replicas
[ https://issues.apache.org/jira/browse/KAFKA-15332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17927535#comment-17927535 ] Stanislav Kozlovski commented on KAFKA-15332: - [~calvinliu] [~dajac] [~jhooper] can you confirm whether 4.0 is just shipping the ELR part of the KIP? My understanding is that the behavior without unclean recovery is basically a slight improvement over today's state, where we have and can use the ELR set together with the high watermark, but if the set is empty we're back to today's behavior, more or less > Eligible Leader Replicas > > > Key: KAFKA-15332 > URL: https://issues.apache.org/jira/browse/KAFKA-15332 > Project: Kafka > Issue Type: New Feature >Reporter: Calvin Liu >Assignee: Calvin Liu >Priority: Major > > A root ticket for KIP-966 > The KIP has been accepted > [https://cwiki.apache.org/confluence/display/KAFKA/KIP-966%3A+Eligible+Leader+Replicas] > The delivery is divided into 2 parts, ELR and unclean recovery. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (KAFKA-18735) the zk-related path `<default>` is still existent in quota system, such as `ConfigEntity#name` and metrics tags
[ https://issues.apache.org/jira/browse/KAFKA-18735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17927504#comment-17927504 ] Chia-Ping Tsai commented on KAFKA-18735: {quote} If there is some API that is setting ConfigEntity#name to <default> for default quotas, I think that's just straightforwardly a bug that should be fixed in 4.0. {quote} there is no such case. Using "name=<default>" can't create a default quota. We can add a test to ensure that scenario. Currently, the issue I have observed is the `ConfigEntity` used by the callback. It returns "<default>" as the name when it is the default quota. In the API docs, it says an empty string will be returned if it is the default quota. However, both <default> and empty string are weird to me, as the entity returned by Admin#describeClientQuotas uses `null` to represent the default quota. We can keep returning <default> to the callback to keep compatibility. Or we can add a "small" breaking change to the callback to align the value of the entity with "null" > the zk-related path `<default>` is still existent in quota system, such as > `ConfigEntity#name` and metrics tags > --- > > Key: KAFKA-18735 > URL: https://issues.apache.org/jira/browse/KAFKA-18735 > Project: Kafka > Issue Type: Improvement >Reporter: Chia-Ping Tsai >Assignee: Ming-Yen Chung >Priority: Minor > Fix For: 4.1.0, 4.0.1 > > > the `<default>` is a specific "flag" used by the zk path to represent the > "default" quota. We don't expose it on Admin as we use `null` to represent > the default name [0], and in KAFKA-16411 we assumed `<default>` should be > removed from 4.0 with the end of ZK mode [1] > However, the `<default>` is still used in two "public APIs" > 1. ConfigEntity#name returns <default> for default quotas. The docs say it > returns an empty string, but that is incorrect. see [2] > -"client-id" -> "<default>" and "user" -> "<default>" are used in metrics tags > [3]- > > [0] > https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/quota/ClientQuotaEntity.java#L44 > [1] > https://github.com/apache/kafka/blob/trunk/server-common/src/main/java/org/apache/kafka/server/config/ZooKeeperInternals.java#L24 > [2] > https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/ClientQuotaManager.scala#L79 > [3] > https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/ClientQuotaManager.scala#L479 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (KAFKA-18808) add test to ensure the name=<default> is not equal to default quota
[ https://issues.apache.org/jira/browse/KAFKA-18808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17927505#comment-17927505 ] Chia-Ping Tsai commented on KAFKA-18808: {code:java} Map<String, String> entries = new HashMap<>(); entries.put("client-id", "<default>"); try (var adminClient = Admin.create(Collections.singletonMap("bootstrap.servers", cluster.bootstrapServers()))) { adminClient.alterClientQuotas(Collections.singletonList(new ClientQuotaAlteration( new org.apache.kafka.common.quota.ClientQuotaEntity(entries), Collections.singleton(new ClientQuotaAlteration.Op("producer_byte_rate", 1D))))).all().get(); TimeUnit.SECONDS.sleep(2); Map<ClientQuotaEntity, Map<String, Double>> entityMapMap = adminClient.describeClientQuotas( ClientQuotaFilter.containsOnly(Collections.singletonList(ClientQuotaFilterComponent.ofDefaultEntity("client-id")))).entities().get(); Assertions.assertEquals(0, entityMapMap.size()); } {code} > add test to ensure the name=<default> is not equal to default quota > --- > > Key: KAFKA-18808 > URL: https://issues.apache.org/jira/browse/KAFKA-18808 > Project: Kafka > Issue Type: Sub-task >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai >Priority: Major > Fix For: 4.1.0 > > > see discussion in KAFKA-18735 - the test should include the following checks. > 1. Using name=<default> does not create default quota > 2. the returned entity should have name=<default> > 3. the filter `ClientQuotaFilterComponent.ofDefaultEntity` should return > nothing -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (KAFKA-18735) the zk-related path `<default>` is still used by quota callback
[ https://issues.apache.org/jira/browse/KAFKA-18735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated KAFKA-18735: --- Summary: the zk-related path `<default>` is still used by quota callback (was: the zk-related path `<default>` is still existent in quota system, such as `ConfigEntity#name` and metrics tags) > the zk-related path `<default>` is still used by quota callback > --- > > Key: KAFKA-18735 > URL: https://issues.apache.org/jira/browse/KAFKA-18735 > Project: Kafka > Issue Type: Improvement >Reporter: Chia-Ping Tsai >Assignee: Ming-Yen Chung >Priority: Minor > Fix For: 4.1.0, 4.0.1 > > > the `<default>` is a specific "flag" used by the zk path to represent the > "default" quota. We don't expose it on Admin as we use `null` to represent > the default name [0], and in KAFKA-16411 we assumed `<default>` should be > removed from 4.0 with the end of ZK mode [1] > However, the `<default>` is still used in two "public APIs" > 1. ConfigEntity#name returns <default> for default quotas. The docs say it > returns an empty string, but that is incorrect. see [2] > -"client-id" -> "<default>" and "user" -> "<default>" are used in metrics tags > [3]- > > [0] > https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/quota/ClientQuotaEntity.java#L44 > [1] > https://github.com/apache/kafka/blob/trunk/server-common/src/main/java/org/apache/kafka/server/config/ZooKeeperInternals.java#L24 > [2] > https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/ClientQuotaManager.scala#L79 > [3] > https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/ClientQuotaManager.scala#L479 -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] KAFKA-14484: Decouple UnifiedLog and RemoteLogManager [kafka]
kamalcph commented on code in PR #18460: URL: https://github.com/apache/kafka/pull/18460#discussion_r1957373391 ## storage/src/main/java/org/apache/kafka/storage/internals/log/AsyncOffsetReader.java: ## @@ -0,0 +1,47 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + *http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.kafka.storage.internals.log; + +import org.apache.kafka.common.TopicPartition; +import org.apache.kafka.common.record.FileRecords; +import org.apache.kafka.storage.internals.epoch.LeaderEpochFileCache; + +import java.util.Optional; +import java.util.function.Supplier; + +/** + * Interface used to decouple UnifiedLog and RemoteLogManager. + */ +public interface AsyncOffsetReader { + +/** + * Retrieve the offset for the specified timestamp. UnifiedLog may call this method when handling OffsetRequest > 0 Review Comment: > when handling OffsetRequest > 0 I couldn't understand, What does it mean? Is it referring to timestamp > 0? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-14484: Decouple UnifiedLog and RemoteLogManager [kafka]
kamalcph commented on code in PR #18460: URL: https://github.com/apache/kafka/pull/18460#discussion_r1957373391 ## storage/src/main/java/org/apache/kafka/storage/internals/log/AsyncOffsetReader.java: ## @@ -0,0 +1,47 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + *http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.kafka.storage.internals.log; + +import org.apache.kafka.common.TopicPartition; +import org.apache.kafka.common.record.FileRecords; +import org.apache.kafka.storage.internals.epoch.LeaderEpochFileCache; + +import java.util.Optional; +import java.util.function.Supplier; + +/** + * Interface used to decouple UnifiedLog and RemoteLogManager. + */ +public interface AsyncOffsetReader { + +/** + * Retrieve the offset for the specified timestamp. UnifiedLog may call this method when handling OffsetRequest > 0 Review Comment: > when handling OffsetRequest > 0 I couldn't understand, What does it mean? ## core/src/main/scala/kafka/log/UnifiedLog.scala: ## @@ -1235,7 +1234,7 @@ class UnifiedLog(@volatile var logStartOffset: Long, * All special timestamp offset results are returned immediately irrespective of the remote storage. * */ - def fetchOffsetByTimestamp(targetTimestamp: Long, remoteLogManager: Option[RemoteLogManager] = None): OffsetResultHolder = { + def fetchOffsetByTimestamp(targetTimestamp: Long, remoteLogManager: Optional[AsyncOffsetReader] = Optional.empty): OffsetResultHolder = { Review Comment: nit: Can we rename the variable `remoteLogManager` to `remoteOffsetReader`? As, the other methods in RemoteLogManager are no longer accessible. ## core/src/main/java/kafka/log/remote/RemoteLogManager.java: ## @@ -664,10 +664,10 @@ private Optional maybeLeaderEpoch(int leaderEpoch) { public AsyncOffsetReadFutureHolder asyncOffsetRead( Review Comment: Add `@Override` to the method. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-18641: AsyncKafkaConsumer could lose records with auto offset commit [kafka]
junrao commented on code in PR #18737: URL: https://github.com/apache/kafka/pull/18737#discussion_r1957375625 ## clients/src/main/java/org/apache/kafka/clients/consumer/internals/events/ApplicationEventProcessor.java: ## @@ -206,13 +206,25 @@ public void process(ApplicationEvent event) { } private void process(final PollEvent event) { +// Trigger a reconciliation that can safely commit offsets if needed to rebalance, +// as we're processing before any new fetching starts in the app thread + requestManagers.consumerMembershipManager.ifPresent(consumerMembershipManager -> +consumerMembershipManager.maybeReconcile(true)); if (requestManagers.commitRequestManager.isPresent()) { -requestManagers.commitRequestManager.ifPresent(m -> m.updateAutoCommitTimer(event.pollTimeMs())); +CommitRequestManager commitRequestManager = requestManagers.commitRequestManager.get(); +commitRequestManager.updateTimerAndMaybeCommit(event.pollTimeMs()); +// all commit request generation points have been passed, +// so it's safe to notify the app thread could proceed and start fetching +event.future().complete(null); Review Comment: Technically, the event hasn't been fully processed yet. So, it's a bit weird to complete the event midway. Perhaps we could introduce a separate future like `reconcileAndAutoCommit` and wait for that. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
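To make the suggestion concrete, here is a rough sketch (class and method names are hypothetical, not the actual consumer internals) of an event that exposes a second future for the reconcile/auto-commit stage, so the application thread can wait on that stage while the event's own completion is only signalled once processing has fully finished.

```java
import java.util.concurrent.CompletableFuture;

/**
 * Hypothetical sketch of the reviewer's suggestion: the poll event carries two futures,
 * so the background thread never has to complete the whole event midway.
 */
class PollEventSketch {
    // Completed once reconciliation and any auto-commit have been triggered.
    private final CompletableFuture<Void> reconcileAndAutoCommit = new CompletableFuture<>();
    // Completed only when the event has been fully processed.
    private final CompletableFuture<Void> processed = new CompletableFuture<>();

    CompletableFuture<Void> reconcileAndAutoCommit() {
        return reconcileAndAutoCommit;
    }

    CompletableFuture<Void> processed() {
        return processed;
    }

    void markReconcileAndAutoCommitDone() {
        reconcileAndAutoCommit.complete(null);
    }

    void markProcessed() {
        processed.complete(null);
    }
}
```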
Re: [PR] KAFKA-18806: Resource leak in MemoryRecordsBuilder in RecordsUtil [kafka]
xelathan commented on PR #18919: URL: https://github.com/apache/kafka/pull/18919#issuecomment-2661578406 Build finished with unrelated test failure in `ConsumerCoordinatorTest > testOutdatedCoordinatorAssignment ` Issue is tracked with [KAFKA-15900](https://issues.apache.org/jira/browse/KAFKA-15900) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-18757: Create full-function SimpleAssignor to match KIP-932 description [kafka]
AndrewJSchofield commented on code in PR #18864: URL: https://github.com/apache/kafka/pull/18864#discussion_r1957425369 ## group-coordinator/src/main/java/org/apache/kafka/coordinator/group/assignor/SimpleAssignor.java: ## @@ -72,36 +77,178 @@ private GroupAssignment assignHomogenous( if (subscribeTopicIds.isEmpty()) return new GroupAssignment(Collections.emptyMap()); -Map> targetPartitions = computeTargetPartitions( +// Subscribed topic partitions for the share group. +List targetPartitions = computeTargetPartitions( subscribeTopicIds, subscribedTopicDescriber); -return new GroupAssignment(groupSpec.memberIds().stream().collect(Collectors.toMap( -Function.identity(), memberId -> new MemberAssignmentImpl(targetPartitions; +// The current assignment from topic partition to members. +Map> currentAssignment = currentAssignment(groupSpec); +return newAssignmentHomogeneous(groupSpec, subscribeTopicIds, targetPartitions, currentAssignment); } private GroupAssignment assignHeterogeneous( GroupSpec groupSpec, SubscribedTopicDescriber subscribedTopicDescriber ) { -Map members = new HashMap<>(); +Map> memberToPartitionsSubscription = new HashMap<>(); for (String memberId : groupSpec.memberIds()) { MemberSubscription spec = groupSpec.memberSubscription(memberId); if (spec.subscribedTopicIds().isEmpty()) continue; -Map> targetPartitions = computeTargetPartitions( +// Subscribed topic partitions for the share group member. +List targetPartitions = computeTargetPartitions( spec.subscribedTopicIds(), subscribedTopicDescriber); +memberToPartitionsSubscription.put(memberId, targetPartitions); +} + +// The current assignment from topic partition to members. +Map> currentAssignment = currentAssignment(groupSpec); +return newAssignmentHeterogeneous(groupSpec, memberToPartitionsSubscription, currentAssignment); +} -members.put(memberId, new MemberAssignmentImpl(targetPartitions)); +// Get the current assignment for subscribed topic partitions to share group members. +private Map> currentAssignment(GroupSpec groupSpec) { +Map> assignment = new HashMap<>(); + +for (String member : groupSpec.memberIds()) { +Map> assignedTopicPartitions = groupSpec.memberAssignment(member).partitions(); +assignedTopicPartitions.forEach((topicId, partitions) -> partitions.forEach( +partition -> assignment.computeIfAbsent(new TargetPartition(topicId, partition), k -> new ArrayList<>()).add(member))); } +return assignment; +} + +private GroupAssignment newAssignmentHomogeneous( +GroupSpec groupSpec, +Set subscribeTopicIds, +List targetPartitions, +Map> currentAssignment) { + +Map> newAssignment = new HashMap<>(); +// Step 1: Hash member IDs to partitions. +memberHashAssignment(targetPartitions, groupSpec.memberIds(), newAssignment); + +// Step 2: Round-robin assignment for unassigned partitions which do not have members already assigned in the current assignment. +Set assignedPartitions = new HashSet<>(newAssignment.keySet()); +List unassignedPartitions = targetPartitions.stream() +.filter(targetPartition -> !assignedPartitions.contains(targetPartition)) +.filter(targetPartition -> !currentAssignment.containsKey(targetPartition)) +.collect(Collectors.toList()); + +roundRobinAssignment(groupSpec.memberIds(), unassignedPartitions, newAssignment); + +Map> finalAssignment = new HashMap<>(); + +// When combining current assignment, we need to only consider the topics in current assignment that are also being +// subscribed in the new assignment as well. 
+currentAssignment.forEach((targetPartition, members) -> { +if (subscribeTopicIds.contains(targetPartition.topicId)) +members.forEach(member -> { +if (groupSpec.memberIds().contains(member)) +finalAssignment.computeIfAbsent(member, k -> new HashSet<>()).add(targetPartition); +}); +}); +newAssignment.forEach((targetPartition, members) -> members.forEach(member -> +finalAssignment.computeIfAbsent(member, k -> new HashSet<>()).add(targetPartition))); + +return groupAssignment(finalAssignment, groupSpec.memberIds()); +} + +private GroupAssignment newAssignmentHeterogeneous( +GroupSpec groupSpec, +Map> memberToPartitionsSubscription, +Map> currentAssignment) { + +// Exhaustive set of all subscribed topic partitions. +Set targetPartition
Re: [PR] KAFKA-18757: Create full-function SimpleAssignor to match KIP-932 description [kafka]
apoorvmittal10 commented on code in PR #18864: URL: https://github.com/apache/kafka/pull/18864#discussion_r1957427989 ## group-coordinator/src/main/java/org/apache/kafka/coordinator/group/assignor/SimpleAssignor.java: ## @@ -72,36 +77,178 @@ private GroupAssignment assignHomogenous( if (subscribeTopicIds.isEmpty()) return new GroupAssignment(Collections.emptyMap()); -Map> targetPartitions = computeTargetPartitions( +// Subscribed topic partitions for the share group. +List targetPartitions = computeTargetPartitions( subscribeTopicIds, subscribedTopicDescriber); -return new GroupAssignment(groupSpec.memberIds().stream().collect(Collectors.toMap( -Function.identity(), memberId -> new MemberAssignmentImpl(targetPartitions; +// The current assignment from topic partition to members. +Map> currentAssignment = currentAssignment(groupSpec); +return newAssignmentHomogeneous(groupSpec, subscribeTopicIds, targetPartitions, currentAssignment); } private GroupAssignment assignHeterogeneous( GroupSpec groupSpec, SubscribedTopicDescriber subscribedTopicDescriber ) { -Map members = new HashMap<>(); +Map> memberToPartitionsSubscription = new HashMap<>(); for (String memberId : groupSpec.memberIds()) { MemberSubscription spec = groupSpec.memberSubscription(memberId); if (spec.subscribedTopicIds().isEmpty()) continue; -Map> targetPartitions = computeTargetPartitions( +// Subscribed topic partitions for the share group member. +List targetPartitions = computeTargetPartitions( spec.subscribedTopicIds(), subscribedTopicDescriber); +memberToPartitionsSubscription.put(memberId, targetPartitions); +} + +// The current assignment from topic partition to members. +Map> currentAssignment = currentAssignment(groupSpec); +return newAssignmentHeterogeneous(groupSpec, memberToPartitionsSubscription, currentAssignment); +} -members.put(memberId, new MemberAssignmentImpl(targetPartitions)); +// Get the current assignment for subscribed topic partitions to share group members. +private Map> currentAssignment(GroupSpec groupSpec) { +Map> assignment = new HashMap<>(); + +for (String member : groupSpec.memberIds()) { +Map> assignedTopicPartitions = groupSpec.memberAssignment(member).partitions(); +assignedTopicPartitions.forEach((topicId, partitions) -> partitions.forEach( +partition -> assignment.computeIfAbsent(new TargetPartition(topicId, partition), k -> new ArrayList<>()).add(member))); } +return assignment; +} + +private GroupAssignment newAssignmentHomogeneous( +GroupSpec groupSpec, +Set subscribeTopicIds, +List targetPartitions, +Map> currentAssignment) { + +Map> newAssignment = new HashMap<>(); +// Step 1: Hash member IDs to partitions. +memberHashAssignment(targetPartitions, groupSpec.memberIds(), newAssignment); + +// Step 2: Round-robin assignment for unassigned partitions which do not have members already assigned in the current assignment. +Set assignedPartitions = new HashSet<>(newAssignment.keySet()); +List unassignedPartitions = targetPartitions.stream() +.filter(targetPartition -> !assignedPartitions.contains(targetPartition)) +.filter(targetPartition -> !currentAssignment.containsKey(targetPartition)) +.collect(Collectors.toList()); + +roundRobinAssignment(groupSpec.memberIds(), unassignedPartitions, newAssignment); + +Map> finalAssignment = new HashMap<>(); + +// When combining current assignment, we need to only consider the topics in current assignment that are also being +// subscribed in the new assignment as well. 
+currentAssignment.forEach((targetPartition, members) -> { +if (subscribeTopicIds.contains(targetPartition.topicId)) +members.forEach(member -> { +if (groupSpec.memberIds().contains(member)) +finalAssignment.computeIfAbsent(member, k -> new HashSet<>()).add(targetPartition); +}); +}); +newAssignment.forEach((targetPartition, members) -> members.forEach(member -> +finalAssignment.computeIfAbsent(member, k -> new HashSet<>()).add(targetPartition))); + +return groupAssignment(finalAssignment, groupSpec.memberIds()); +} + +private GroupAssignment newAssignmentHeterogeneous( +GroupSpec groupSpec, +Map> memberToPartitionsSubscription, +Map> currentAssignment) { + +// Exhaustive set of all subscribed topic partitions. +Set targetPartitions
Re: [PR] KAFKA-18757: Create full-function SimpleAssignor to match KIP-932 description [kafka]
AndrewJSchofield commented on PR #18864: URL: https://github.com/apache/kafka/pull/18864#issuecomment-2661633762 Marked failed test as flaky in https://github.com/apache/kafka/pull/18925. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-18804: Remove slf4j warning when using tool script [kafka]
frankvicky commented on PR #18918: URL: https://github.com/apache/kafka/pull/18918#issuecomment-2661387467 Hi @m1a2st You should see the warning of slf4j, but it is different from this one. ``` > Task :core:genProtocolErrorDocs SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder". SLF4J: Defaulting to no-operation (NOP) logger implementation SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details. ``` The above warning is handled by [KAFKA-18752](https://issues.apache.org/jira/browse/KAFKA-18752) > And why don't we exclude the Log4j dependency in the core module? It is defined in `build.gradle` at line 1137. We could do that, but it is not necessary. The warning states that slf4j detects multiple bindings in projects, so removing one is enough. In this situation, it is easiest to exclude one in the tools module. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-18767: Add client side config check for shareConsumer [kafka]
TaiJuWu commented on code in PR #18850: URL: https://github.com/apache/kafka/pull/18850#discussion_r1957292877 ## clients/src/main/java/org/apache/kafka/clients/consumer/GroupProtocol.java: ## @@ -23,7 +23,10 @@ public enum GroupProtocol { CLASSIC("CLASSIC"), /** Consumer group protocol */ -CONSUMER("CONSUMER"); +CONSUMER("CONSUMER"), + +/** Share group protocol */ +SHARE("SHARE"); Review Comment: cf239aefa82053cb7a5f89e5e12335829a2b4e76 does not work since `GroupRebalanceConfig` has default value. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-18804: Remove slf4j warning when using tool script [kafka]
m1a2st commented on PR #18918: URL: https://github.com/apache/kafka/pull/18918#issuecomment-2661393738 > The above warning is handled by [KAFKA-18752](https://issues.apache.org/jira/browse/KAFKA-18752) I'm not referring to this warning. I tested the steps below, and no warning messages appeared. ``` ./gradlew clean ./gradlew releaseTarGz tar -zxvf $(find ./core/build/distributions/ -maxdepth 1 -type f \( -iname "kafka*tgz" ! -iname "*sit*" \)) -C /tmp/opt/kafka --strip-components=1 ./bin/kafka-storage.sh -h usage: kafka-storage [-h] {info,format,version-mapping,feature-dependencies,random-uuid} ... ``` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-18804: Remove slf4j warning when using tool script [kafka]
frankvicky commented on PR #18918: URL: https://github.com/apache/kafka/pull/18918#issuecomment-2661398712 At the source code root, you need to run at least `build` or `releaseTarGz` first. This will generate a `dependant-libs-2.13.15` directory. Currently, only `core` and `tools` use `releaseOnly`. The `copyDependantLibs` tasks for these two modules will copy the `releaseOnly` dependencies to `dependant-libs-2.13.15`. Therefore, when running tool scripts under the source code root, they will find two slf4j implementation jars. So this is indeed a dev-specific issue. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-18767: Add client side config check for shareConsumer [kafka]
TaiJuWu commented on PR #18850: URL: https://github.com/apache/kafka/pull/18850#issuecomment-2661404879 > Thanks for the PR. Just one overall comment about why `GroupProtocol.SHARE` was defined. The `GROUP_PROTOCOL_CONFIG` is used to choose between which group protocol is used for the regular `KafkaConsumer`. It's basically the consumer switch for KIP-848. I think that's different than `GroupProtocol.SHARE` which I don't think should exist. I think the current state is the most simple solution, WDYT? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-18767: Add client side config check for shareConsumer [kafka]
TaiJuWu commented on code in PR #18850: URL: https://github.com/apache/kafka/pull/18850#discussion_r1957242661 ## clients/src/main/java/org/apache/kafka/clients/consumer/GroupProtocol.java: ## @@ -23,7 +23,10 @@ public enum GroupProtocol { CLASSIC("CLASSIC"), /** Consumer group protocol */ -CONSUMER("CONSUMER"); +CONSUMER("CONSUMER"), + +/** Share group protocol */ +SHARE("SHARE"); Review Comment: Hi @AndrewJSchofield , I add config checker to `GroupRebalanceConfig`. Is this change ok? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-18767: Add client side config check for shareConsumer [kafka]
TaiJuWu commented on code in PR #18850: URL: https://github.com/apache/kafka/pull/18850#discussion_r1957292877 ## clients/src/main/java/org/apache/kafka/clients/consumer/GroupProtocol.java: ## @@ -23,7 +23,10 @@ public enum GroupProtocol { CLASSIC("CLASSIC"), /** Consumer group protocol */ -CONSUMER("CONSUMER"); +CONSUMER("CONSUMER"), + +/** Share group protocol */ +SHARE("SHARE"); Review Comment: This method does not work since `GroupRebalanceConfig` has default value. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (KAFKA-18805) Consumer Heartbeat closed change should be locked
[ https://issues.apache.org/jira/browse/KAFKA-18805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17927473#comment-17927473 ] 黃竣陽 commented on KAFKA-18805: - Hello [~showuon], if you won't work on this, may I take this? > Consumer Heartbeat closed change should be locked > - > > Key: KAFKA-18805 > URL: https://issues.apache.org/jira/browse/KAFKA-18805 > Project: Kafka > Issue Type: Bug >Reporter: Luke Chen >Priority: Major > Labels: newbie > > This `this.closed = true` is not wrapped inside the lock, which could cause > the change is not visible in another thread. > > https://github.com/apache/kafka/blob/e330f0bf2570a27811fa20a2f446b101a7a656f3/clients/src/main/java/org/apache/kafka/clients/consumer/internals/AbstractCoordinator.java#L1580 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (KAFKA-18806) Resource leak in MemoryRecordsBuilder in RecordsUtil
[ https://issues.apache.org/jira/browse/KAFKA-18806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17927474#comment-17927474 ] TengYao Chi commented on KAFKA-18806: - Hi [~showuon] May I take over this one ? > Resource leak in MemoryRecordsBuilder in RecordsUtil > > > Key: KAFKA-18806 > URL: https://issues.apache.org/jira/browse/KAFKA-18806 > Project: Kafka > Issue Type: Bug >Reporter: Luke Chen >Priority: Major > Labels: newbie > > We forgot to close the MemoryRecordsBuilder resource (which contains streams) > here: > https://github.com/apache/kafka/blob/e330f0bf2570a27811fa20a2f446b101a7a656f3/clients/src/main/java/org/apache/kafka/common/record/RecordsUtil.java#L92. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-18806) Resource leak in MemoryRecordsBuilder in RecordsUtil
Luke Chen created KAFKA-18806: - Summary: Resource leak in MemoryRecordsBuilder in RecordsUtil Key: KAFKA-18806 URL: https://issues.apache.org/jira/browse/KAFKA-18806 Project: Kafka Issue Type: Bug Reporter: Luke Chen We forgot to close the MemoryRecordsBuilder resource (which contains streams) here: https://github.com/apache/kafka/blob/e330f0bf2570a27811fa20a2f446b101a7a656f3/clients/src/main/java/org/apache/kafka/common/record/RecordsUtil.java#L92. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (KAFKA-18806) Resource leak in MemoryRecordsBuilder in RecordsUtil
[ https://issues.apache.org/jira/browse/KAFKA-18806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Tran reassigned KAFKA-18806: - Assignee: Alex Tran > Resource leak in MemoryRecordsBuilder in RecordsUtil > > > Key: KAFKA-18806 > URL: https://issues.apache.org/jira/browse/KAFKA-18806 > Project: Kafka > Issue Type: Bug >Reporter: Luke Chen >Assignee: Alex Tran >Priority: Major > Labels: newbie > > We forgot to close the MemoryRecordsBuilder resource (which contains streams) > here: > https://github.com/apache/kafka/blob/e330f0bf2570a27811fa20a2f446b101a7a656f3/clients/src/main/java/org/apache/kafka/common/record/RecordsUtil.java#L92. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] KAFKA-18804: Remove slf4j warning when using tool script [kafka]
Yunyung commented on PR #18918: URL: https://github.com/apache/kafka/pull/18918#issuecomment-2661333030 This should be backported to 4.0, as the same issue happens in 4.0. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (KAFKA-18805) Consumer Heartbeat closed change should be locked
[ https://issues.apache.org/jira/browse/KAFKA-18805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17927477#comment-17927477 ] Luke Chen commented on KAFKA-18805: --- [~m1a2st] , assigned to you. Thanks. > Consumer Heartbeat closed change should be locked > - > > Key: KAFKA-18805 > URL: https://issues.apache.org/jira/browse/KAFKA-18805 > Project: Kafka > Issue Type: Bug >Reporter: Luke Chen >Assignee: 黃竣陽 >Priority: Major > Labels: newbie > > This `this.closed = true` is not wrapped inside the lock, which could cause > the change is not visible in another thread. > > https://github.com/apache/kafka/blob/e330f0bf2570a27811fa20a2f446b101a7a656f3/clients/src/main/java/org/apache/kafka/clients/consumer/internals/AbstractCoordinator.java#L1580 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (KAFKA-18806) Resource leak in MemoryRecordsBuilder in RecordsUtil
[ https://issues.apache.org/jira/browse/KAFKA-18806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17927478#comment-17927478 ] Luke Chen commented on KAFKA-18806: --- [~frankvicky] , it looks like [~alectric] is picking is up. [~alectric] , are you going to work on it? > Resource leak in MemoryRecordsBuilder in RecordsUtil > > > Key: KAFKA-18806 > URL: https://issues.apache.org/jira/browse/KAFKA-18806 > Project: Kafka > Issue Type: Bug >Reporter: Luke Chen >Assignee: Alex Tran >Priority: Major > Labels: newbie > > We forgot to close the MemoryRecordsBuilder resource (which contains streams) > here: > https://github.com/apache/kafka/blob/e330f0bf2570a27811fa20a2f446b101a7a656f3/clients/src/main/java/org/apache/kafka/common/record/RecordsUtil.java#L92. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (KAFKA-18805) Consumer Heartbeat closed change should be locked
[ https://issues.apache.org/jira/browse/KAFKA-18805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luke Chen reassigned KAFKA-18805: - Assignee: 黃竣陽 > Consumer Heartbeat closed change should be locked > - > > Key: KAFKA-18805 > URL: https://issues.apache.org/jira/browse/KAFKA-18805 > Project: Kafka > Issue Type: Bug >Reporter: Luke Chen >Assignee: 黃竣陽 >Priority: Major > Labels: newbie > > This `this.closed = true` is not wrapped inside the lock, which could cause > the change is not visible in another thread. > > https://github.com/apache/kafka/blob/e330f0bf2570a27811fa20a2f446b101a7a656f3/clients/src/main/java/org/apache/kafka/clients/consumer/internals/AbstractCoordinator.java#L1580 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-18807) Group coordinator under-reports thread idle time
Sean Quah created KAFKA-18807: - Summary: Group coordinator under-reports thread idle time Key: KAFKA-18807 URL: https://issues.apache.org/jira/browse/KAFKA-18807 Project: Kafka Issue Type: Bug Components: group-coordinator Reporter: Sean Quah Assignee: Sean Quah When events arrive at the event processor with less than 1/group.coordinator.threads ms between them, the thread idle time is rounded down to 0 because of integer division. E.g., when the group coordinator has 8 threads, and it receives >1k offset commit requests per second evenly split across shards, the thread idle time metric will be under-reported. https://github.com/apache/kafka/blob/2353a7c5081b53402d5092afd255e2bdf7aa2d20/coordinator-common/src/main/java/org/apache/kafka/coordinator/common/runtime/MultiThreadedEventProcessor.java#L141 -- This message was sent by Atlassian Jira (v8.20.10#820010)
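For context, a minimal Java sketch of the rounding pitfall described above — hypothetical variable names, not the actual MultiThreadedEventProcessor code: dividing a millisecond-granularity idle gap by the thread count in integer arithmetic truncates sub-millisecond shares to zero, while computing at nanosecond precision (or as a double) preserves them.

```java
public class IdleTimeRoundingSketch {
    public static void main(String[] args) {
        int threads = 8;

        // Idle gap between two events, measured in whole milliseconds.
        long idleGapMs = 1;
        long perThreadIdleMs = idleGapMs / threads; // 1 / 8 == 0 -> idle time under-reported

        // Same gap kept at nanosecond precision and divided as a double.
        long idleGapNanos = 1_000_000;
        double perThreadIdleMsPrecise = idleGapNanos / (double) threads / 1_000_000.0; // 0.125 ms

        System.out.println("integer ms per thread: " + perThreadIdleMs);
        System.out.println("fractional ms per thread: " + perThreadIdleMsPrecise);
    }
}
```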
[PR] KAFKA-18806: Resource leak in MemoryRecordsBuilder in RecordsUtil [kafka]
xelathan opened a new pull request, #18919: URL: https://github.com/apache/kafka/pull/18919 MemoryRecordsBuilder was not closed in RecordsUtil; it is now closed implicitly with a try-with-resources statement. https://issues.apache.org/jira/browse/KAFKA-18806 ### Committer Checklist (excluded from commit message) - [ ] Verify design and implementation - [ ] Verify test coverage and CI build status - [ ] Verify documentation (including upgrade notes) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
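For context, a minimal hedged sketch of the leak pattern and the try-with-resources fix. `RecordsBuilderLike` is a hypothetical stand-in for a builder that owns streams; it is not the real `MemoryRecordsBuilder` API.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;

// Hypothetical stand-in for a builder that owns an output stream.
class RecordsBuilderLike implements AutoCloseable {
    private final ByteArrayOutputStream out = new ByteArrayOutputStream();

    void append(byte[] record) throws IOException {
        out.write(record);
    }

    byte[] build() {
        return out.toByteArray();
    }

    @Override
    public void close() throws IOException {
        out.close(); // release the underlying stream
    }
}

public class TryWithResourcesSketch {
    public static void main(String[] args) throws IOException {
        // The leak: if append() throws, an explicit close() further down is never reached.
        // The fix: try-with-resources closes the builder on every exit path.
        byte[] built;
        try (RecordsBuilderLike builder = new RecordsBuilderLike()) {
            builder.append(new byte[]{1, 2, 3});
            built = builder.build();
        }
        System.out.println("built " + built.length + " bytes");
    }
}
```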
[jira] [Commented] (KAFKA-18806) Resource leak in MemoryRecordsBuilder in RecordsUtil
[ https://issues.apache.org/jira/browse/KAFKA-18806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17927481#comment-17927481 ] Alex Tran commented on KAFKA-18806: --- [~showuon] , yes I am working on it. > Resource leak in MemoryRecordsBuilder in RecordsUtil > > > Key: KAFKA-18806 > URL: https://issues.apache.org/jira/browse/KAFKA-18806 > Project: Kafka > Issue Type: Bug >Reporter: Luke Chen >Assignee: Alex Tran >Priority: Major > Labels: newbie > > We forgot to close the MemoryRecordsBuilder resource (which contains streams) > here: > https://github.com/apache/kafka/blob/e330f0bf2570a27811fa20a2f446b101a7a656f3/clients/src/main/java/org/apache/kafka/common/record/RecordsUtil.java#L92. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (KAFKA-18806) Resource leak in MemoryRecordsBuilder in RecordsUtil
[ https://issues.apache.org/jira/browse/KAFKA-18806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17927484#comment-17927484 ] TengYao Chi commented on KAFKA-18806: - No worries, I will help review this one. > Resource leak in MemoryRecordsBuilder in RecordsUtil > > > Key: KAFKA-18806 > URL: https://issues.apache.org/jira/browse/KAFKA-18806 > Project: Kafka > Issue Type: Bug >Reporter: Luke Chen >Assignee: Alex Tran >Priority: Major > Labels: newbie > > We forgot to close the MemoryRecordsBuilder resource (which contains streams) > here: > https://github.com/apache/kafka/blob/e330f0bf2570a27811fa20a2f446b101a7a656f3/clients/src/main/java/org/apache/kafka/common/record/RecordsUtil.java#L92. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[PR] KAFKA-18805: Consumer Heartbeat closed change should be locked [kafka]
m1a2st opened a new pull request, #18920: URL: https://github.com/apache/kafka/pull/18920 Jira: https://issues.apache.org/jira/browse/KAFKA-18805 Using a synchronized block so the `closed` change is wrapped in the lock and visible to other threads. ### Committer Checklist (excluded from commit message) - [ ] Verify design and implementation - [ ] Verify test coverage and CI build status - [ ] Verify documentation (including upgrade notes) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
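As an illustration only (not the AbstractCoordinator code itself), here is a minimal sketch of the fix's idea: writing the shared flag while holding the same lock that readers take establishes a happens-before edge, so the change is guaranteed to be visible in other threads.

```java
public class ClosedFlagSketch {
    private final Object lock = new Object();
    private boolean closed = false;

    // Writer: update the flag inside the lock.
    public void close() {
        synchronized (lock) {
            closed = true;
        }
    }

    // Reader: check the flag under the same lock.
    public boolean isClosed() {
        synchronized (lock) {
            return closed;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ClosedFlagSketch sketch = new ClosedFlagSketch();
        Thread waiter = new Thread(() -> {
            while (!sketch.isClosed()) {
                Thread.onSpinWait();
            }
            System.out.println("observed close");
        });
        waiter.start();
        sketch.close();
        waiter.join();
    }
}
```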
[jira] [Resolved] (KAFKA-18805) Consumer Heartbeat closed change should be locked
[ https://issues.apache.org/jira/browse/KAFKA-18805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luke Chen resolved KAFKA-18805. --- Fix Version/s: 4.1.0 Resolution: Fixed > Consumer Heartbeat closed change should be locked > - > > Key: KAFKA-18805 > URL: https://issues.apache.org/jira/browse/KAFKA-18805 > Project: Kafka > Issue Type: Bug >Reporter: Luke Chen >Assignee: 黃竣陽 >Priority: Major > Labels: newbie > Fix For: 4.1.0 > > > This `this.closed = true` is not wrapped inside the lock, which could cause > the change to not be visible in another thread. > > https://github.com/apache/kafka/blob/e330f0bf2570a27811fa20a2f446b101a7a656f3/clients/src/main/java/org/apache/kafka/clients/consumer/internals/AbstractCoordinator.java#L1580 -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] KAFKA-16918: TestUtils#assertFutureThrows should use future.get with timeout [kafka]
chia7712 commented on code in PR #18891: URL: https://github.com/apache/kafka/pull/18891#discussion_r1957734129 ## clients/src/test/java/org/apache/kafka/test/TestUtils.java: ## @@ -580,21 +583,28 @@ public static Set generateRandomTopicPartitions(int numTopic, in } /** - * Assert that a future raises an expected exception cause type. Return the exception cause - * if the assertion succeeds; otherwise raise AssertionError. + * Assert that a future raises an expected exception cause type. + * This method will wait for the future to complete or timeout(15000 milliseconds) after a default period. * * @param future The future to await * @param exceptionCauseClass Class of the expected exception cause * @param Exception cause type parameter * @return The caught exception cause */ public static T assertFutureThrows(Future future, Class exceptionCauseClass) { -ExecutionException exception = assertThrows(ExecutionException.class, future::get); -Throwable cause = exception.getCause(); -assertEquals(exceptionCauseClass, cause.getClass(), -"Expected a " + exceptionCauseClass.getSimpleName() + " exception, but got " + -cause.getClass().getSimpleName()); -return exceptionCauseClass.cast(exception.getCause()); +try { +future.get(DEFAULT_MAX_WAIT_MS, TimeUnit.MILLISECONDS); +fail("Should throw expected exception " + exceptionCauseClass.getSimpleName() + " but nothing was thrown."); +} catch (InterruptedException | ExecutionException | CancellationException e) { +Throwable cause = e instanceof ExecutionException ? e.getCause() : e; +return assertInstanceOf( +exceptionCauseClass, cause, +"Expected " + exceptionCauseClass.getSimpleName() + ", but got " + cause +); +} catch (TimeoutException e) { +fail("Future is not completed within " + DEFAULT_MAX_WAIT_MS + " millisecond."); +} +throw new RuntimeException("Future should throw expected exception but unexpected error happened."); Review Comment: Could you please include the "unexpected error"? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
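For readers outside the Kafka test utilities, a self-contained hedged sketch of the same pattern the review discusses — a bounded wait on the future plus inspection of the failure cause. Names and the timeout value here are illustrative; this is not Kafka's TestUtils API.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class FutureFailureAssertSketch {

    // Waits up to timeoutMs for the future to fail and returns the cause if it matches the expected type.
    static <T extends Throwable> T assertFailsWith(Future<?> future, Class<T> expected, long timeoutMs) {
        try {
            Object value = future.get(timeoutMs, TimeUnit.MILLISECONDS);
            throw new AssertionError("Expected " + expected.getSimpleName() + " but future completed with " + value);
        } catch (ExecutionException e) {
            Throwable cause = e.getCause();
            if (expected.isInstance(cause)) {
                return expected.cast(cause);
            }
            throw new AssertionError("Expected " + expected.getSimpleName() + " but got " + cause, cause);
        } catch (TimeoutException e) {
            throw new AssertionError("Future did not complete within " + timeoutMs + " ms", e);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new AssertionError("Interrupted while waiting for the future", e);
        }
    }

    public static void main(String[] args) {
        CompletableFuture<String> failed = new CompletableFuture<>();
        failed.completeExceptionally(new IllegalStateException("boom"));
        IllegalStateException cause = assertFailsWith(failed, IllegalStateException.class, 1_000);
        System.out.println("caught: " + cause.getMessage());
    }
}
```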
[jira] [Assigned] (KAFKA-18810) Flaky test: ClientIdQuotaTest.testQuotaOverrideDelete
[ https://issues.apache.org/jira/browse/KAFKA-18810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai reassigned KAFKA-18810: -- Assignee: Chia-Ping Tsai > Flaky test: ClientIdQuotaTest.testQuotaOverrideDelete > - > > Key: KAFKA-18810 > URL: https://issues.apache.org/jira/browse/KAFKA-18810 > Project: Kafka > Issue Type: Bug >Reporter: Andrew Schofield >Assignee: Chia-Ping Tsai >Priority: Major > > Failed in > https://github.com/apache/kafka/actions/runs/13329397255/job/37230139655?pr=18864 > and more than 20 failures in the past month. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] Mark testQuotaOverrideDelete as flaky [kafka]
chia7712 merged PR #18925: URL: https://github.com/apache/kafka/pull/18925 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-18784: Fix ConsumerWithLegacyMessageFormatIntegrationTest [kafka]
chia7712 commented on code in PR #18889: URL: https://github.com/apache/kafka/pull/18889#discussion_r1957730744 ## core/src/test/scala/integration/kafka/api/ConsumerWithLegacyMessageFormatIntegrationTest.scala: ## @@ -18,16 +18,43 @@ package kafka.api import kafka.utils.TestInfoUtils import org.apache.kafka.common.TopicPartition +import org.apache.kafka.common.compress.Compression +import org.apache.kafka.common.record.{AbstractRecords, CompressionType, MemoryRecords, RecordBatch, RecordVersion, SimpleRecord, TimestampType} import org.junit.jupiter.api.Assertions.{assertEquals, assertNull, assertThrows} import org.junit.jupiter.params.ParameterizedTest import org.junit.jupiter.params.provider.MethodSource +import java.nio.ByteBuffer import java.util import java.util.{Collections, Optional} import scala.jdk.CollectionConverters._ class ConsumerWithLegacyMessageFormatIntegrationTest extends AbstractConsumerTest { + private def appendLegacyRecords(numRecords: Int, tp: TopicPartition, startingTimestamp: Long, brokerId: Int): Unit = { +val records = (0 until numRecords).map { i => + new SimpleRecord(startingTimestamp + i, s"key $i".getBytes, s"value $i".getBytes) +} +val buffer = ByteBuffer.allocate(AbstractRecords.estimateSizeInBytes(RecordBatch.MAGIC_VALUE_V0, CompressionType.NONE, records.asJava)) +val builder = MemoryRecords.builder(buffer, RecordBatch.MAGIC_VALUE_V0, Compression.of(CompressionType.NONE).build, Review Comment: Could you please keep tests for both v0 and v1? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-18712: Move Endpoint to server module [kafka]
chia7712 commented on PR #18803: URL: https://github.com/apache/kafka/pull/18803#issuecomment-2662303442 @TaiJuWu could you please rebase code? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-18805: Consumer Heartbeat closed change should be locked [kafka]
showuon merged PR #18920: URL: https://github.com/apache/kafka/pull/18920 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-18213: Convert CustomQuotaCallbackTest to KRaft [kafka]
chia7712 closed pull request #18126: KAFKA-18213: Convert CustomQuotaCallbackTest to KRaft URL: https://github.com/apache/kafka/pull/18126 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-18213: Convert CustomQuotaCallbackTest to KRaft [kafka]
chia7712 commented on PR #18126: URL: https://github.com/apache/kafka/pull/18126#issuecomment-2662245203 close as duplicate to #18196 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (KAFKA-18808) add test to ensure the name= is not equal to default quota
Chia-Ping Tsai created KAFKA-18808: -- Summary: add test to ensure the name= is not equal to default quota Key: KAFKA-18808 URL: https://issues.apache.org/jira/browse/KAFKA-18808 Project: Kafka Issue Type: Sub-task Reporter: Chia-Ping Tsai Assignee: Chia-Ping Tsai Fix For: 4.1.0 see discussion in KAFKA-18735 - the test should include the following checks. 1. Using name= does not create a default quota 2. the returned entity should have name= 3. the filter `ClientQuotaFilterComponent.ofDefaultEntity` should return nothing -- This message was sent by Atlassian Jira (v8.20.10#820010)
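A hedged sketch of how such checks could look against the Admin client. The empty-string entity name is only an assumption about what `name=` refers to here, and the bootstrap address is a placeholder; the real test harness will differ.

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.common.quota.ClientQuotaEntity;
import org.apache.kafka.common.quota.ClientQuotaFilter;
import org.apache.kafka.common.quota.ClientQuotaFilterComponent;

public class DefaultQuotaFilterSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        try (Admin admin = Admin.create(props)) {
            // Quotas registered under an explicit (possibly empty) client-id name...
            ClientQuotaFilter named = ClientQuotaFilter.contains(
                List.of(ClientQuotaFilterComponent.ofEntity(ClientQuotaEntity.CLIENT_ID, "")));
            // ...should not show up when filtering for the default quota entity.
            ClientQuotaFilter defaults = ClientQuotaFilter.contains(
                List.of(ClientQuotaFilterComponent.ofDefaultEntity(ClientQuotaEntity.CLIENT_ID)));

            Map<ClientQuotaEntity, Map<String, Double>> namedQuotas =
                admin.describeClientQuotas(named).entities().get();
            Map<ClientQuotaEntity, Map<String, Double>> defaultQuotas =
                admin.describeClientQuotas(defaults).entities().get();
            System.out.println("named: " + namedQuotas + ", defaults: " + defaultQuotas);
        }
    }
}
```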
Re: [PR] KAFKA-18803: The acls would appear at the wrong level of the metadata shell "tree" [kafka]
Yunyung commented on PR #18916: URL: https://github.com/apache/kafka/pull/18916#issuecomment-2661447763 > 1. Manually test this patch by creating some ACLs and showing the output from metadata shell Before: ``` $ ./bin/kafka-metadata-shell.sh --snapshot /tmp/kraft-combined-logs/__cluster_metadata-0/.log Loading... [2025-02-16 13:58:14,107] ERROR Ignoring control record with type KRAFT_VERSION at offset 1 (org.apache.kafka.metadata.util.SnapshotFileReader) [2025-02-16 13:58:14,110] ERROR Ignoring control record with type KRAFT_VOTERS at offset 2 (org.apache.kafka.metadata.util.SnapshotFileReader) Starting... [ Kafka Metadata Shell ] >> ls image/acls/ 5Mu_djJ5QZaw79Fc3URLfw >> ``` After: ``` $ ./bin/kafka-metadata-shell.sh --snapshot /tmp/kraft-combined-logs/__cluster_metadata-0/.log Loading... [2025-02-16 21:59:06,250] ERROR Ignoring control record with type KRAFT_VERSION at offset 1 (org.apache.kafka.metadata.util.SnapshotFileReader) [2025-02-16 21:59:06,251] ERROR Ignoring control record with type KRAFT_VOTERS at offset 2 (org.apache.kafka.metadata.util.SnapshotFileReader) Starting... [ Kafka Metadata Shell ] >> ls image/acls/ byId >> ls image/acls/byId/ cnTIY8C5R5-PIm0EKywItg >> ``` > 2. Add a unit test for the MetadataImageNode to ensure we see the "acls" node correctly Done. Thanks for the review, PTAL. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (KAFKA-18808) add test to ensure the name= is not equal to default quota
[ https://issues.apache.org/jira/browse/KAFKA-18808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17927508#comment-17927508 ] Nick Guo commented on KAFKA-18808: -- Hi [~chia7712] , may I take this issue? Thanks! :) > add test to ensure the name= is not equal to default quota > --- > > Key: KAFKA-18808 > URL: https://issues.apache.org/jira/browse/KAFKA-18808 > Project: Kafka > Issue Type: Sub-task >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai >Priority: Major > Fix For: 4.1.0 > > > see discussion in KAFKA-18735 - the test should include the following checks. > 1. Using name= does not create a default quota > 2. the returned entity should have name= > 3. the filter `ClientQuotaFilterComponent.ofDefaultEntity` should return > nothing -- This message was sent by Atlassian Jira (v8.20.10#820010)
[PR] KAFKA-18809: Set min in sync replicas for __share_group_state. [kafka]
smjn opened a new pull request, #18922: URL: https://github.com/apache/kafka/pull/18922 * The `share.coordinator.state.topic.min.isr` config defined in `ShareCoordinatorConfig` was not being used in the `AutoTopicCreationManager`. * The `AutoTopicCreationManager` calls `ShareCoordinatorService.shareGroupStateTopicConfigs` to get the configs for the topic to create. * The method `ShareCoordinatorService.shareGroupStateTopicConfigs` was not setting the supplied config value for `share.coordinator.state.topic.min.isr` to `min.insync.replicas`. * In this PR, we remedy the situation by setting the value. * A test has been added to `ShareCoordinatorServiceTest` so that this is not repeated for any configs. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
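A hedged sketch (not the actual ShareCoordinatorService code) of the general idea: the configured min-ISR value has to be forwarded into the internal topic's creation configs under `min.insync.replicas`. The cleanup-policy line is illustrative only.

```java
import java.util.Properties;

import org.apache.kafka.common.config.TopicConfig;

public class ShareStateTopicConfigSketch {

    // shareStateTopicMinIsr stands in for the value read from share.coordinator.state.topic.min.isr.
    static Properties shareGroupStateTopicConfigs(short shareStateTopicMinIsr) {
        Properties props = new Properties();
        props.put(TopicConfig.CLEANUP_POLICY_CONFIG, TopicConfig.CLEANUP_POLICY_COMPACT);
        // The missing piece reported in KAFKA-18809: forward the configured value to min.insync.replicas.
        props.put(TopicConfig.MIN_IN_SYNC_REPLICAS_CONFIG, Short.toString(shareStateTopicMinIsr));
        return props;
    }

    public static void main(String[] args) {
        System.out.println(shareGroupStateTopicConfigs((short) 2));
    }
}
```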
[jira] [Commented] (KAFKA-18809) share.coordinator.state.topic.min.isr is not applied
[ https://issues.apache.org/jira/browse/KAFKA-18809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17927537#comment-17927537 ] Sushant Mahajan commented on KAFKA-18809: - [~chia7712] Raised a fix https://github.com/apache/kafka/pull/18922 > share.coordinator.state.topic.min.isr is not applied > - > > Key: KAFKA-18809 > URL: https://issues.apache.org/jira/browse/KAFKA-18809 > Project: Kafka > Issue Type: Bug >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai >Priority: Blocker > Fix For: 4.1.0 > > > https://github.com/apache/kafka/blob/e330f0bf2570a27811fa20a2f446b101a7a656f3/share-coordinator/src/main/java/org/apache/kafka/coordinator/share/ShareCoordinatorService.java#L233 > in the `shareGroupStateTopicConfigs` we ignore the > `share.coordinator.state.topic.min.isr` -- This message was sent by Atlassian Jira (v8.20.10#820010)
[PR] MINOR: deflake EligibleLeaderReplicasIntegrationTest [kafka]
CalvinConfluent opened a new pull request, #18923: URL: https://github.com/apache/kafka/pull/18923 (no comment) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-18801 Remove ClusterGenerator and revise docs of ClusterTemplate [kafka]
JimmyWang6 commented on code in PR #18907: URL: https://github.com/apache/kafka/pull/18907#discussion_r1957392043 ## test-common/test-common-internal-api/src/main/java/org/apache/kafka/common/test/api/ClusterTemplate.java: ## @@ -30,9 +30,9 @@ /** * Used to indicate that a test should call the method given by {@link #value()} to generate a number of - * cluster configurations. The method specified by the value should accept a single argument of the type - * {@link ClusterGenerator}. Any return value from the method is ignored. A test invocation - * will be generated for each {@link ClusterConfig} provided to the ClusterGenerator instance. + * cluster configurations. The method specified by the value does not accept any arguments. Review Comment: Hi @chia7712, If the method is not defined as static, an exception is thrown as following: `Cannot invoke non-static method [private java.util.List org.apache.kafka.tools.consumer.group.DeleteConsumerGroupsTest.generator()] on a null target. org.junit.platform.commons.PreconditionViolationException: Cannot invoke non-static method [private java.util.List org.apache.kafka.tools.consumer.group.DeleteConsumerGroupsTest.generator()] on a null target. at org.apache.kafka.common.test.junit.ClusterTestExtensions.generateClusterConfiguration(ClusterTestExtensions.java:235) ` I haven't thought of a good way to prevent the method that returns List from being non-static yet. Any suggestions? I have added the sample code. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
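For reference, a hedged sketch of the shape the extension expects: the generator named in `@ClusterTemplate` must be static because JUnit invokes it reflectively before any test instance exists (hence the "null target" error above). The package names follow the quoted file path, but treat the exact `ClusterConfig.defaultBuilder()` usage as an assumption.

```java
import java.util.List;

import org.apache.kafka.common.test.api.ClusterConfig;
import org.apache.kafka.common.test.api.ClusterTemplate;

public class ClusterTemplateSketch {

    // Must be static: the extension calls it on a null target, as the error above shows.
    private static List<ClusterConfig> generator() {
        return List.of(ClusterConfig.defaultBuilder().build());
    }

    @ClusterTemplate("generator")
    public void testWithGeneratedClusters() {
        // The test is invoked once per ClusterConfig returned by generator().
    }
}
```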
[jira] [Commented] (KAFKA-15900) Flaky test: testOutdatedCoordinatorAssignment() – org.apache.kafka.clients.consumer.internals.EagerConsumerCoordinatorTest
[ https://issues.apache.org/jira/browse/KAFKA-15900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17927555#comment-17927555 ] Alex Tran commented on KAFKA-15900: --- I am also seeing it fail in unrelated [PR|https://github.com/apache/kafka/pull/18919] regarding cleaning up resources in `MemoryRecordsBuilder`. [https://github.com/apache/kafka/actions/runs/13353687223/job/37293878236?pr=18919] > Flaky test: testOutdatedCoordinatorAssignment() – > org.apache.kafka.clients.consumer.internals.EagerConsumerCoordinatorTest > -- > > Key: KAFKA-15900 > URL: https://issues.apache.org/jira/browse/KAFKA-15900 > Project: Kafka > Issue Type: Bug >Reporter: Apoorv Mittal >Assignee: TaiJuWu >Priority: Major > > Build run: > [https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-14620/15/tests/] > > {code:java} > org.opentest4j.AssertionFailedError: expected: <1> but was: > <2>Stacktraceorg.opentest4j.AssertionFailedError: expected: <1> but was: <2> > at > app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151) >at > app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132) >at > app//org.junit.jupiter.api.AssertEquals.failNotEqual(AssertEquals.java:197) > at > app//org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:150) > at > app//org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:145) > at app//org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:527) > at > app//org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest.testOutdatedCoordinatorAssignment(ConsumerCoordinatorTest.java:1067) > at > java.base@21.0.1/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103) > at java.base@21.0.1/java.lang.reflect.Method.invoke(Method.java:580)at > app//org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:728) > at > app//org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) >at > app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) > at > app//org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156) > at > app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:147) > at > app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:86) >at > app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103) > at > app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93) > at > app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) > at > app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) >at > app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) > at > app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-18809) share.coordinator.state.topic.min.isr is not applied
Chia-Ping Tsai created KAFKA-18809: -- Summary: share.coordinator.state.topic.min.isr is not applied Key: KAFKA-18809 URL: https://issues.apache.org/jira/browse/KAFKA-18809 Project: Kafka Issue Type: Bug Reporter: Chia-Ping Tsai Assignee: Chia-Ping Tsai Fix For: 4.1.0 https://github.com/apache/kafka/blob/e330f0bf2570a27811fa20a2f446b101a7a656f3/share-coordinator/src/main/java/org/apache/kafka/coordinator/share/ShareCoordinatorService.java#L233 in the `shareGroupStateTopicConfigs` we ignore the `share.coordinator.state.topic.min.isr` -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (KAFKA-18809) share.coordinator.state.topic.min.isr is not applied
[ https://issues.apache.org/jira/browse/KAFKA-18809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17927518#comment-17927518 ] Chia-Ping Tsai commented on KAFKA-18809: [~apoorvmittal10] [~schofielaj] [~sushmahajn] not sure whether you have other plans to apply the config. If it is just a bug, I will file a patch for it. > share.coordinator.state.topic.min.isr is not applied > - > > Key: KAFKA-18809 > URL: https://issues.apache.org/jira/browse/KAFKA-18809 > Project: Kafka > Issue Type: Bug >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai >Priority: Blocker > Fix For: 4.1.0 > > > https://github.com/apache/kafka/blob/e330f0bf2570a27811fa20a2f446b101a7a656f3/share-coordinator/src/main/java/org/apache/kafka/coordinator/share/ShareCoordinatorService.java#L233 > in the `shareGroupStateTopicConfigs` we ignore the > `share.coordinator.state.topic.min.isr` -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (KAFKA-18808) add test to ensure the name= is not equal to default quota
[ https://issues.apache.org/jira/browse/KAFKA-18808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai reassigned KAFKA-18808: -- Assignee: Nick Guo (was: Chia-Ping Tsai) > add test to ensure the name= is not equal to default quota > --- > > Key: KAFKA-18808 > URL: https://issues.apache.org/jira/browse/KAFKA-18808 > Project: Kafka > Issue Type: Sub-task >Reporter: Chia-Ping Tsai >Assignee: Nick Guo >Priority: Major > Fix For: 4.1.0 > > > see discussion in KAFKA-18735 - the test should include the following checks. > 1. Using name= does not create a default quota > 2. the returned entity should have name= > 3. the filter `ClientQuotaFilterComponent.ofDefaultEntity` should return > nothing -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] KAFKA-18801 Remove ClusterGenerator and revise docs of ClusterTemplate [kafka]
JimmyWang6 commented on code in PR #18907: URL: https://github.com/apache/kafka/pull/18907#discussion_r1957377060 ## test-common/test-common-internal-api/src/main/java/org/apache/kafka/common/test/api/ClusterTemplate.java: ## @@ -30,9 +30,9 @@ /** * Used to indicate that a test should call the method given by {@link #value()} to generate a number of - * cluster configurations. The method specified by the value should accept a single argument of the type - * {@link ClusterGenerator}. Any return value from the method is ignored. A test invocation - * will be generated for each {@link ClusterConfig} provided to the ClusterGenerator instance. + * cluster configurations. The method specified by the value does not accept any arguments. Review Comment: Thanks for @mumrah and @chia7712 's comments, I will do that according to your suggestions. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (KAFKA-15332) Eligible Leader Replicas
[ https://issues.apache.org/jira/browse/KAFKA-15332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17927543#comment-17927543 ] Calvin Liu commented on KAFKA-15332: [~stanislavkozlovski] 4.0 only carries part 1 which enhances data safety when no unclean leader election is needed. Typical use cases are network partitioning, performance-degraded leaders and clean shutdown replica. For part 2, it is planned to be implemented this year. > Eligible Leader Replicas > > > Key: KAFKA-15332 > URL: https://issues.apache.org/jira/browse/KAFKA-15332 > Project: Kafka > Issue Type: New Feature >Reporter: Calvin Liu >Assignee: Calvin Liu >Priority: Major > > A root ticket for the KIP-966 > The KIP has accepted > [https://cwiki.apache.org/confluence/display/KAFKA/KIP-966%3A+Eligible+Leader+Replicas] > The delivery is divided into 2 parts, ELR and unclean recovery. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[PR] [WIP] KAFKA-18562: standardize election/fetch timeout between Unattached and Followers [kafka]
frankvicky opened a new pull request, #18921: URL: https://github.com/apache/kafka/pull/18921 JIRA: KAFKA-18562 ### Committer Checklist (excluded from commit message) - [ ] Verify design and implementation - [ ] Verify test coverage and CI build status - [ ] Verify documentation (including upgrade notes) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (KAFKA-18803) The acls would appear at the wrong level of the metadata shell "tree"
[ https://issues.apache.org/jira/browse/KAFKA-18803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated KAFKA-18803: --- Fix Version/s: 3.9.1 3.8.2 > The acls would appear at the wrong level of the metadata shell "tree" > - > > Key: KAFKA-18803 > URL: https://issues.apache.org/jira/browse/KAFKA-18803 > Project: Kafka > Issue Type: Bug >Reporter: Chia-Ping Tsai >Assignee: Jhen-Yung Hsu >Priority: Blocker > Fix For: 4.0.0, 3.9.1, 3.8.2 > > > see https://github.com/apache/kafka/pull/18845#discussion_r1956525411 -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] KAFKA-18803: The acls would appear at the wrong level of the metadata shell "tree" [kafka]
chia7712 merged PR #18916: URL: https://github.com/apache/kafka/pull/18916 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (KAFKA-18803) The acls would appear at the wrong level of the metadata shell "tree"
[ https://issues.apache.org/jira/browse/KAFKA-18803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai resolved KAFKA-18803. Resolution: Fixed trunk: https://github.com/apache/kafka/commit/d0e516a87207b04f6266c59712775b5f01c1ed87 4.0: https://github.com/apache/kafka/commit/8797c903eef894e63fa15710cf446715eceb9198 3.9: https://github.com/apache/kafka/commit/0c288599c5f6c3d71e2cb6db76af64f0a88d5b9f 3.8: https://github.com/apache/kafka/commit/7d65d0f3fefc9dab7918d77f950408621fc5a667 > The acls would appear at the wrong level of the metadata shell "tree" > - > > Key: KAFKA-18803 > URL: https://issues.apache.org/jira/browse/KAFKA-18803 > Project: Kafka > Issue Type: Bug >Reporter: Chia-Ping Tsai >Assignee: Jhen-Yung Hsu >Priority: Blocker > Fix For: 4.0.0, 3.9.1, 3.8.2 > > > see https://github.com/apache/kafka/pull/18845#discussion_r1956525411 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (KAFKA-18809) share.coordinator.state.topic.min.isr is not applied
[ https://issues.apache.org/jira/browse/KAFKA-18809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai resolved KAFKA-18809. Resolution: Fixed I keep this fix in 4.1, as the share consumer is not in production in 4.0 > share.coordinator.state.topic.min.isr is not applied > - > > Key: KAFKA-18809 > URL: https://issues.apache.org/jira/browse/KAFKA-18809 > Project: Kafka > Issue Type: Bug >Reporter: Chia-Ping Tsai >Assignee: Sushant Mahajan >Priority: Blocker > Fix For: 4.1.0 > > > https://github.com/apache/kafka/blob/e330f0bf2570a27811fa20a2f446b101a7a656f3/share-coordinator/src/main/java/org/apache/kafka/coordinator/share/ShareCoordinatorService.java#L233 > in the `shareGroupStateTopicConfigs` we ignore the > `share.coordinator.state.topic.min.isr` -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (KAFKA-18809) share.coordinator.state.topic.min.isr is not applied
[ https://issues.apache.org/jira/browse/KAFKA-18809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated KAFKA-18809: --- Fix Version/s: 4.1.0 (was: 4.0.0) > share.coordinator.state.topic.min.isr is not applied > - > > Key: KAFKA-18809 > URL: https://issues.apache.org/jira/browse/KAFKA-18809 > Project: Kafka > Issue Type: Bug >Reporter: Chia-Ping Tsai >Assignee: Sushant Mahajan >Priority: Blocker > Fix For: 4.1.0 > > > https://github.com/apache/kafka/blob/e330f0bf2570a27811fa20a2f446b101a7a656f3/share-coordinator/src/main/java/org/apache/kafka/coordinator/share/ShareCoordinatorService.java#L233 > in the `shareGroupStateTopicConfigs` we ignore the > `share.coordinator.state.topic.min.isr` -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (KAFKA-17682) Refactor RaftClusterInvocationContext to eliminate test-common-api dependencies
[ https://issues.apache.org/jira/browse/KAFKA-17682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai resolved KAFKA-17682. Resolution: Duplicate resolved by https://github.com/apache/kafka/commit/8c0a0e07ced062419cf51c31307e5d168ad9efbc > Refactor RaftClusterInvocationContext to eliminate test-common-api > dependencies > --- > > Key: KAFKA-17682 > URL: https://issues.apache.org/jira/browse/KAFKA-17682 > Project: Kafka > Issue Type: Improvement >Reporter: Chia-Ping Tsai >Assignee: bboyleonp >Priority: Major > > see https://github.com/apache/kafka/pull/17318/files#r1784648528 -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] KAFKA-18809: Set min in sync replicas for __share_group_state. [kafka]
chia7712 merged PR #18922: URL: https://github.com/apache/kafka/pull/18922 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (KAFKA-18809) share.coordinator.state.topic.min.isr is not applied
[ https://issues.apache.org/jira/browse/KAFKA-18809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated KAFKA-18809: --- Fix Version/s: 4.0.0 (was: 4.1.0) > share.coordinator.state.topic.min.isr is not applied > - > > Key: KAFKA-18809 > URL: https://issues.apache.org/jira/browse/KAFKA-18809 > Project: Kafka > Issue Type: Bug >Reporter: Chia-Ping Tsai >Assignee: Sushant Mahajan >Priority: Blocker > Fix For: 4.0.0 > > > https://github.com/apache/kafka/blob/e330f0bf2570a27811fa20a2f446b101a7a656f3/share-coordinator/src/main/java/org/apache/kafka/coordinator/share/ShareCoordinatorService.java#L233 > in the `shareGroupStateTopicConfigs` we ignore the > `share.coordinator.state.topic.min.isr` -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] KAFKA-18803: The acls would appear at the wrong level of the metadata shell "tree" [kafka]
chia7712 commented on PR #18916: URL: https://github.com/apache/kafka/pull/18916#issuecomment-2661601777 cherry-pick to 4.0, 3.9 and 3.8 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Assigned] (KAFKA-18809) share.coordinator.state.topic.min.isr is not applied
[ https://issues.apache.org/jira/browse/KAFKA-18809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai reassigned KAFKA-18809: -- Assignee: Sushant Mahajan (was: Chia-Ping Tsai) > share.coordinator.state.topic.min.isr is not applied > - > > Key: KAFKA-18809 > URL: https://issues.apache.org/jira/browse/KAFKA-18809 > Project: Kafka > Issue Type: Bug >Reporter: Chia-Ping Tsai >Assignee: Sushant Mahajan >Priority: Blocker > Fix For: 4.1.0 > > > https://github.com/apache/kafka/blob/e330f0bf2570a27811fa20a2f446b101a7a656f3/share-coordinator/src/main/java/org/apache/kafka/coordinator/share/ShareCoordinatorService.java#L233 > in the `shareGroupStateTopicConfigs` we ignore the > `share.coordinator.state.topic.min.isr` -- This message was sent by Atlassian Jira (v8.20.10#820010)
[PR] KAFKA-18733: Updating share group record acks metric (2/N) [kafka]
apoorvmittal10 opened a new pull request, #18924: URL: https://github.com/apache/kafka/pull/18924 The PR corrects the record acks rate for number of records acknowledged. Also corrects the behaviour when multiple acks type exists in a single acknowledgement batch. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-18806: Resource leak in MemoryRecordsBuilder in RecordsUtil [kafka]
chia7712 commented on code in PR #18919: URL: https://github.com/apache/kafka/pull/18919#discussion_r1957408912 ## clients/src/main/java/org/apache/kafka/common/record/RecordsUtil.java: ## @@ -89,10 +89,11 @@ protected static ConvertedRecords downConvert(Iterable … https://github.com/apache/kafka/blob/d0e516a87207b04f6266c59712775b5f01c1ed87/clients/src/main/java/org/apache/kafka/common/record/RecordsUtil.java#L124 Additionally, `RecordsUtil` is useless in trunk and it will be removed by #18897 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] MINOR: Remove unused member in DynamicBrokerConfig [kafka]
chia7712 commented on PR #18915: URL: https://github.com/apache/kafka/pull/18915#issuecomment-2661609606 the failed test is traced by https://issues.apache.org/jira/browse/KAFKA-15474 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-18806: Resource leak in MemoryRecordsBuilder in RecordsUtil [kafka]
xelathan commented on code in PR #18919: URL: https://github.com/apache/kafka/pull/18919#discussion_r1957411153 ## clients/src/main/java/org/apache/kafka/common/record/RecordsUtil.java: ## @@ -89,10 +89,11 @@ protected static ConvertedRecords downConvert(Iterable … https://github.com/apache/kafka/pull/18897) then closing wouldn't really be necessary. I'll go ahead and close the PR. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-18806: Resource leak in MemoryRecordsBuilder in RecordsUtil [kafka]
xelathan closed pull request #18919: KAFKA-18806: Resource leak in MemoryRecordsBuilder in RecordsUtil URL: https://github.com/apache/kafka/pull/18919 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] MINOR: Remove unused member in DynamicBrokerConfig [kafka]
chia7712 merged PR #18915: URL: https://github.com/apache/kafka/pull/18915 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (KAFKA-18755) Align timeout in kafka-share-groups.sh with other group-related tools
[ https://issues.apache.org/jira/browse/KAFKA-18755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai resolved KAFKA-18755. Resolution: Fixed > Align timeout in kafka-share-groups.sh with other group-related tools > - > > Key: KAFKA-18755 > URL: https://issues.apache.org/jira/browse/KAFKA-18755 > Project: Kafka > Issue Type: Sub-task > Components: tools >Affects Versions: 4.1.0 >Reporter: Andrew Schofield >Assignee: Jimmy Wang >Priority: Major > Fix For: 4.1.0 > > > kafka-share-groups.sh uses a timeout of 5000ms for listing groups. > kafka-groups.sh uses 3ms. As a result, we have seen situations in testing > where kafka-share-groups.sh times out, when kafka-groups.sh does not. There > is no good reason for the difference so we should align them. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] KAFKA-18806: Resource leak in MemoryRecordsBuilder in RecordsUtil [kafka]
chia7712 commented on code in PR #18919: URL: https://github.com/apache/kafka/pull/18919#discussion_r1957411650 ## clients/src/main/java/org/apache/kafka/common/record/RecordsUtil.java: ## @@ -89,10 +89,11 @@ protected static ConvertedRecords downConvert(Iterable … > I'll go ahead and close the PR. Yes, and thanks for your contribution. Please feel free to ping me if you have other PRs that need review. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (KAFKA-18806) Resource leak in MemoryRecordsBuilder in RecordsUtil
[ https://issues.apache.org/jira/browse/KAFKA-18806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai resolved KAFKA-18806. Resolution: Won't Fix see https://github.com/apache/kafka/pull/18919#discussion_r1957408912 > Resource leak in MemoryRecordsBuilder in RecordsUtil > > > Key: KAFKA-18806 > URL: https://issues.apache.org/jira/browse/KAFKA-18806 > Project: Kafka > Issue Type: Bug >Reporter: Luke Chen >Assignee: Alex Tran >Priority: Major > Labels: newbie > > We forgot to close the MemoryRecordsBuilder resource (which contains streams) > here: > https://github.com/apache/kafka/blob/e330f0bf2570a27811fa20a2f446b101a7a656f3/clients/src/main/java/org/apache/kafka/common/record/RecordsUtil.java#L92. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] KAFKA-18755 Align timeout in kafka-share-groups.sh [kafka]
chia7712 merged PR #18908: URL: https://github.com/apache/kafka/pull/18908 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org