[PR] KAFKA-18935: Ensure brokers do not return null records in FetchResponse [kafka]
frankvicky opened a new pull request, #19167: URL: https://github.com/apache/kafka/pull/19167 JIRA: KAFKA-18935 This patch ensures the broker will not return null records in FetchResponse. For more details, please refer to the ticket. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-18944: Remove unused setters from ClusterConfig [kafka]
chia7712 merged PR #19166: URL: https://github.com/apache/kafka/pull/19166 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (KAFKA-18942) Add reviewers to PR body with committer-tools
[ https://issues.apache.org/jira/browse/KAFKA-18942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated KAFKA-18942: --- Fix Version/s: 4.1.0 > Add reviewers to PR body with committer-tools > - > > Key: KAFKA-18942 > URL: https://issues.apache.org/jira/browse/KAFKA-18942 > Project: Kafka > Issue Type: Sub-task > Components: build >Reporter: David Arthur >Assignee: Ming-Yen Chung >Priority: Major > Fix For: 4.1.0 > > > When we switch to the merge queue, we cannot alter the commit message > directly and instead must use the PR body for the eventual commit message. > > In order to include our "Reviewers" metadata in the commit, we must edit the > PR body after a review has happened and add the "Reviewers" manually. This is > rather annoying and we can do better. > > The committer-tools script "reviewers.py" can use the GitHub API (via "gh") > to read, modify, and update the PR body with the reviewers selected by this > tool. > > For example, > > {noformat} > $ ./committer-tools/reviewers.py > Utility to help generate 'Reviewers' string for Pull Requests. Use Ctrl+D or > Ctrl+C to exit > Name or email (case insensitive): chia > Possible matches (in order of most recent): > [1] Chia-Ping Tsai chia7...@gmail.com (1908) > [2] Chia-Ping Tsai chia7...@apache.org (13) > [3] Chia-Chuan Yu yujuan...@gmail.com (11) > [4] Chia Chuan Yu yujuan...@gmail.com (10) > Make a selection: 1 > Reviewers so far: [('Chia-Ping Tsai', 'chia7...@gmail.com', 1908)] > Name or email (case insensitive): ^C > Reviewers: Chia-Ping Tsai > Pull Request to update (Ctrl+D or Ctrl+C to skip): 19144 > Adding Reviewers to 19144... > {noformat} > > The script should be able to handle existing "Reviewers" string in the PR body -- This message was sent by Atlassian Jira (v8.20.10#820010)
[PR] KAFKA-18942: Add reviewers to PR body with committer-tools [kafka]
mingyen066 opened a new pull request, #19168: URL: https://github.com/apache/kafka/pull/19168 Enhance `reviewers.py` to append the reviewers message to the PR body. Add a confirmation prompt to prevent human errors. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-18031: Flaky PlaintextConsumerTest testCloseLeavesGroupOnInterrupt [kafka]
TaiJuWu commented on PR #19105: URL: https://github.com/apache/kafka/pull/19105#issuecomment-2708878915 > > If the request number exceeds maxInFlightRequestsPerConnection, the LeaveGroup request would not be sent in time when closing. > > why does it exceed the `maxInFlightRequestsPerConnection`? Does it happen in the test only? Based on the log, the following requests are sent: ``` [2025-01-14 11:01:57,082] DEBUG [Consumer clientId=ConsumerTestConsumer, groupId=my-test] kkk ClientRequest(expectResponse=true, callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@1af677f8, destination=-1, correlationId=0, clientId=ConsumerTestConsumer, createdTimeMs=1736852517076, requestBuilder=FindCoordinatorRequestData(key='my-test', keyType=0, coordinatorKeys=[])) (org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient:516) [2025-01-14 11:01:57,100] DEBUG [Consumer clientId=ConsumerTestConsumer, groupId=my-test] kkk ClientRequest(expectResponse=true, callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@28cb86b2, destination=2147483646, correlationId=5, clientId=ConsumerTestConsumer, createdTimeMs=1736852517096, requestBuilder=JoinGroupRequestData(groupId='my-test', sessionTimeoutMs=45000, rebalanceTimeoutMs=6000, memberId='', groupInstanceId=null, protocolType='consumer', protocols=[JoinGroupRequestProtocol(name='range', metadata=[0, 3, 0, 0, 0, 1, 0, 5, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0, -1, -1, -1, -1, -1, -1]), JoinGroupRequestProtocol(name='cooperative-sticky', metadata=[0, 3, 0, 0, 0, 1, 0, 5, 116, 111, 112, 105, 99, 0, 0, 0, 4, -1, -1, -1, -1, 0, 0, 0, 0, -1, -1, -1, -1, -1, -1])], reason='')) (org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient:516) [2025-01-14 11:01:57,109] DEBUG [Consumer clientId=ConsumerTestConsumer, groupId=my-test] kkk ClientRequest(expectResponse=true, callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@19705650, destination=2147483646, correlationId=7, clientId=ConsumerTestConsumer, createdTimeMs=1736852517109, requestBuilder=JoinGroupRequestData(groupId='my-test', sessionTimeoutMs=45000, rebalanceTimeoutMs=6000, memberId='ConsumerTestConsumer-2034ef08-671b-4849-a8f4-fffa51ab3d28', groupInstanceId=null, protocolType='consumer', protocols=[JoinGroupRequestProtocol(name='range', metadata=[0, 3, 0, 0, 0, 1, 0, 5, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0, -1, -1, -1, -1, -1, -1]), JoinGroupRequestProtocol(name='cooperative-sticky', metadata=[0, 3, 0, 0, 0, 1, 0, 5, 116, 111, 112, 105, 99, 0, 0, 0, 4, -1, -1, -1, -1, 0, 0, 0, 0, -1, -1, -1, -1, -1, -1])], reason='need to re-join with the given member-id: ConsumerTestConsumer-2034ef08-671b-4849 -a8f4-fffa51ab3d28')) (org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient:516) [2025-01-14 11:01:57,145] DEBUG [Consumer clientId=ConsumerTestConsumer, groupId=my-test] kkk ClientRequest(expectResponse=true, callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@1a76202b, destination=2147483646, correlationId=8, clientId=ConsumerTestConsumer, createdTimeMs=1736852517144, requestBuilder=SyncGroupRequestData(groupId='my-test', generationId=1, memberId='ConsumerTestConsumer-2034ef08-671b-4849-a8f4-fffa51ab3d28', groupInstanceId=null, protocolType='consumer', protocolName='range', 
assignments=[SyncGroupRequestAssignment(memberId='ConsumerTestConsumer-2034ef08-671b-4849-a8f4-fffa51ab3d28', assignment=[0, 3, 0, 0, 0, 1, 0, 5, 116, 111, 112, 105, 99, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 1, -1, -1, -1, -1])])) (org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient:516) [2025-01-14 11:01:57,168] DEBUG [Consumer clientId=ConsumerTestConsumer, groupId=my-test] kkk ClientRequest(expectResponse=true, callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@74b1838, destination=2147483646, correlationId=9, clientId=ConsumerTestConsumer, createdTimeMs=1736852517166, requestBuilder=OffsetFetchRequestData(groupId='', topics=[], groups=[OffsetFetchRequestGroup(groupId='my-test', memberId=null, memberEpoch=-1, topics=[OffsetFetchRequestTopics(name='topic', partitionIndexes=[0, 1])])], requireStable=true)) (org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient:516) ``` > Does it happen in the test only? If I didn't miss anything, I did not see any other integration test related to `consumer#close`. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-18942: Add reviewers to PR body with committer-tools [kafka]
mingyen066 commented on code in PR #19168: URL: https://github.com/apache/kafka/pull/19168#discussion_r1986334149 ## committer-tools/reviewers.py: ## @@ -35,6 +37,31 @@ def prompt_for_user(): return clean_input +def append_message_to_pr_body(pr_url, message): +try: +cmd_get_pr = ["gh", "pr", "view", pr_url, "--json", "title,body"] +result = subprocess.run(cmd_get_pr, capture_output=True, text=True, check=True) +current_pr_body = json.loads(result.stdout).get("body", {}) +pr_title = json.loads(result.stdout).get("title", {}) +print(f"The new PR body will be:\n{current_pr_body}{message}") +escaped_message = message.replace("<", "\\<").replace(">", "\\>") +updated_pr_body = f"{current_pr_body}{escaped_message}" +except subprocess.CalledProcessError as e: +print("Failed to retrieve PR description:", e.stderr) +return + +choice = input(f"Update the body of {pr_title}? (y/n): ").strip().lower() Review Comment: That's because I print the PR body to the console. The upper part is the PR body message and the lower part is the message output by the tool. After I removed the debug message, there is no duplication. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (KAFKA-18947) refactor MetadataShell
[ https://issues.apache.org/jira/browse/KAFKA-18947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17933639#comment-17933639 ] Chia-Ping Tsai commented on KAFKA-18947: {code:java} chia7712@chia7712-ubuntu:~/project/kafka$ ./bin/kafka-metadata-shell.sh Unexpected error: You must specify either a raft manager or a snapshot file reader. java.lang.RuntimeException: You must specify either a raft manager or a snapshot file reader. at org.apache.kafka.shell.MetadataShell.run(MetadataShell.java:198) at org.apache.kafka.shell.MetadataShell.main(MetadataShell.java:271) {code} the error message is unclear and out-of-date > refactor MetadataShell > -- > > Key: KAFKA-18947 > URL: https://issues.apache.org/jira/browse/KAFKA-18947 > Project: Kafka > Issue Type: Improvement >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai >Priority: Major > > # the error message is out-of-date. we only support using a snapshot file for > now > # remove unused fields - for example, raftManager > # enhance the docs of the snapshot file to help users input the correct file path -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] KAFKA-18031: Flaky PlaintextConsumerTest testCloseLeavesGroupOnInterrupt [kafka]
chia7712 commented on PR #19105: URL: https://github.com/apache/kafka/pull/19105#issuecomment-2708891288 @TaiJuWu thanks for sharing. Do you know why the consumer sends join-group and sync-group requests before receiving the response to FindCoordinatorRequestData? Also, is it flaky on the classic consumer only? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (KAFKA-18420) Find out the license which is in the license file but is not in distribution
[ https://issues.apache.org/jira/browse/KAFKA-18420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] kangning.li resolved KAFKA-18420. - Resolution: Duplicate > Find out the license which is in the license file but is not in distribution > > > Key: KAFKA-18420 > URL: https://issues.apache.org/jira/browse/KAFKA-18420 > Project: Kafka > Issue Type: Improvement >Reporter: kangning.li >Assignee: kangning.li >Priority: Major > > see discussion: > https://github.com/apache/kafka/pull/18299#discussion_r1904604076 -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] MINOR:correct user reference in quota configuration from 'userA' to 'user1' [kafka]
chia7712 merged PR #19140: URL: https://github.com/apache/kafka/pull/19140 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] MINOR: Clean up metadata module [kafka]
chia7712 commented on code in PR #19069: URL: https://github.com/apache/kafka/pull/19069#discussion_r1986355627 ## metadata/src/main/java/org/apache/kafka/metadata/authorizer/StandardAuthorizerData.java: ## @@ -437,14 +436,14 @@ private void checkSection( /** * The set of operations which imply DESCRIBE permission, when used in an ALLOW acl. */ -private static final Set IMPLIES_DESCRIBE = Collections.unmodifiableSet( -EnumSet.of(DESCRIBE, READ, WRITE, DELETE, ALTER)); +private static final Set IMPLIES_DESCRIBE = +EnumSet.of(DESCRIBE, READ, WRITE, DELETE, ALTER); /** * The set of operations which imply DESCRIBE_CONFIGS permission, when used in an ALLOW acl. */ -private static final Set IMPLIES_DESCRIBE_CONFIGS = Collections.unmodifiableSet( -EnumSet.of(DESCRIBE_CONFIGS, ALTER_CONFIGS)); +private static final Set IMPLIES_DESCRIBE_CONFIGS = +EnumSet.of(DESCRIBE_CONFIGS, ALTER_CONFIGS); Review Comment: the set returned by `EnumSet.of` is not immutable. Maybe we can use `Set.of` instead? ## metadata/src/test/java/org/apache/kafka/image/FakeSnapshotWriter.java: ## @@ -33,11 +32,7 @@ public class FakeSnapshotWriter implements SnapshotWriter private boolean closed = false; public List> batches() { -List> result = new ArrayList<>(); -for (List batch : batches) { -result.add(Collections.unmodifiableList(batch)); -} -return Collections.unmodifiableList(result); +return new ArrayList<>(batches); Review Comment: if we want to follow the original design, all lists in this method should be immutable. ```java return batches.stream().map(List::copyOf).toList(); ``` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
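To illustrate the distinction behind these two review comments — a standalone sketch, not code from the PR — `EnumSet.of` returns an ordinary mutable set, while `Set.of` and `List::copyOf` produce unmodifiable collections:

```java
import java.util.EnumSet;
import java.util.List;
import java.util.Set;

public class ImmutabilitySketch {
    enum Op { DESCRIBE, READ, WRITE }

    public static void main(String[] args) {
        // EnumSet.of returns a plain EnumSet, so a shared "constant" can still be mutated.
        Set<Op> mutable = EnumSet.of(Op.DESCRIBE, Op.READ);
        mutable.add(Op.WRITE); // succeeds silently

        // Set.of returns an unmodifiable set; mutation fails fast.
        Set<Op> immutable = Set.of(Op.DESCRIBE, Op.READ);
        try {
            immutable.add(Op.WRITE);
        } catch (UnsupportedOperationException e) {
            System.out.println("Set.of rejects mutation");
        }

        // Deep-copying nested lists into unmodifiable ones, the pattern suggested for batches():
        List<List<Integer>> batches = List.of(List.of(1, 2), List.of(3));
        List<List<Integer>> copy = batches.stream().map(List::copyOf).toList();
        System.out.println(copy);
    }
}
```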
Re: [PR] MINOR: Rewrite unchecked operations in Mock API [kafka]
chia7712 commented on PR #19071: URL: https://github.com/apache/kafka/pull/19071#issuecomment-2708926789 @clarkwtc #17767 introduces another similar issue. could you please fix it in this PR as well? ``` > Task :tools:compileTestJava Note: /home/chia7712/project/kafka/tools/src/test/java/org/apache/kafka/tools/ConfigCommandIntegrationTest.java uses unchecked or unsafe operations. Note: Recompile with -Xlint:unchecked for details. ``` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
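For context on what the compiler is flagging — a generic illustration, not the actual `ConfigCommandIntegrationTest` code — an unchecked operation typically comes from raw types, and parameterizing them removes the warning:

```java
import java.util.ArrayList;
import java.util.List;

public class UncheckedSketch {
    // Compiling with -Xlint:unchecked reports "unchecked call to add(E)" here,
    // because the raw List gives the compiler no element type to verify.
    static List populateRaw() {
        List values = new ArrayList();
        values.add("a");
        return values;
    }

    // Parameterizing the type makes the call checked and the warning disappears.
    static List<String> populateTyped() {
        List<String> values = new ArrayList<>();
        values.add("a");
        return values;
    }

    public static void main(String[] args) {
        System.out.println(populateRaw());
        System.out.println(populateTyped());
    }
}
```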
[jira] [Resolved] (KAFKA-18845) Fail test QuorumControllerTest#testUncleanShutdownBrokerElrEnabled
[ https://issues.apache.org/jira/browse/KAFKA-18845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] PoAn Yang resolved KAFKA-18845. --- Resolution: Fixed > Fail test QuorumControllerTest#testUncleanShutdownBrokerElrEnabled > -- > > Key: KAFKA-18845 > URL: https://issues.apache.org/jira/browse/KAFKA-18845 > Project: Kafka > Issue Type: Bug >Reporter: 黃竣陽 >Assignee: PoAn Yang >Priority: Major > Attachments: Screenshot 2025-03-09 at 11.46.53 PM.png > > > The test always fails when I use this command `I=0; while ./gradlew clean > :metadata:test --tests "QuorumControllerTest" --rerun --fail-fast; do (( > I=$I+1 )); echo "Completed run: $I"; sleep 1; done` on my local machine > {code:java} > java.util.concurrent.ExecutionException: > org.apache.kafka.common.errors.StaleBrokerEpochException: Expected broker > epoch 22, but got broker epoch 7 > at > java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:396) > at > java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2073) > at > org.apache.kafka.controller.QuorumControllerIntegrationTestUtils.sendBrokerHeartbeatToUnfenceBrokers(QuorumControllerIntegrationTestUtils.java:177) > at > org.apache.kafka.controller.QuorumControllerTest.testUncleanShutdownBrokerElrEnabled(QuorumControllerTest.java:498) > at java.base/java.lang.reflect.Method.invoke(Method.java:568) > at java.base/java.util.ArrayList.forEach(ArrayList.java:1511) > at java.base/java.util.ArrayList.forEach(ArrayList.java:1511) > Caused by: org.apache.kafka.common.errors.StaleBrokerEpochException: Expected > broker epoch 22, but got broker epoch 7 {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (KAFKA-18845) Fail test QuorumControllerTest#testUncleanShutdownBrokerElrEnabled
[ https://issues.apache.org/jira/browse/KAFKA-18845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] PoAn Yang updated KAFKA-18845: -- Attachment: Screenshot 2025-03-09 at 11.46.53 PM.png > Fail test QuorumControllerTest#testUncleanShutdownBrokerElrEnabled > -- > > Key: KAFKA-18845 > URL: https://issues.apache.org/jira/browse/KAFKA-18845 > Project: Kafka > Issue Type: Bug >Reporter: 黃竣陽 >Assignee: PoAn Yang >Priority: Major > Attachments: Screenshot 2025-03-09 at 11.46.53 PM.png > > > The test always fails when I use this command `I=0; while ./gradlew clean > :metadata:test --tests "QuorumControllerTest" --rerun --fail-fast; do (( > I=$I+1 )); echo "Completed run: $I"; sleep 1; done` on my local machine > {code:java} > java.util.concurrent.ExecutionException: > org.apache.kafka.common.errors.StaleBrokerEpochException: Expected broker > epoch 22, but got broker epoch 7 > at > java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:396) > at > java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2073) > at > org.apache.kafka.controller.QuorumControllerIntegrationTestUtils.sendBrokerHeartbeatToUnfenceBrokers(QuorumControllerIntegrationTestUtils.java:177) > at > org.apache.kafka.controller.QuorumControllerTest.testUncleanShutdownBrokerElrEnabled(QuorumControllerTest.java:498) > at java.base/java.lang.reflect.Method.invoke(Method.java:568) > at java.base/java.util.ArrayList.forEach(ArrayList.java:1511) > at java.base/java.util.ArrayList.forEach(ArrayList.java:1511) > Caused by: org.apache.kafka.common.errors.StaleBrokerEpochException: Expected > broker epoch 22, but got broker epoch 7 {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] KAFKA-10863: Convert ControRecordType schema to use auto-generated protocol [kafka]
chia7712 commented on code in PR #19110: URL: https://github.com/apache/kafka/pull/19110#discussion_r1986358115 ## clients/src/main/resources/common/message/ControlRecordType.json: ## @@ -0,0 +1,26 @@ +// Licensed to the Apache Software Foundation (ASF) under one or more +// contributor license agreements. See the NOTICE file distributed with +// this work for additional information regarding copyright ownership. +// The ASF licenses this file to You under the Apache License, Version 2.0 +// (the "License"); you may not use this file except in compliance with +// the License. You may obtain a copy of the License at +// +//http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +{ + "type": "data", + "name": "ControlRecordTypeSchema", Review Comment: how about `ControlRecordType`? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] MINOR KIP link change to use immutable link [kafka]
chia7712 merged PR #19153: URL: https://github.com/apache/kafka/pull/19153 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (KAFKA-18946) Move BrokerReconfigurable and DynamicProducerStateManagerConfig to server module
[ https://issues.apache.org/jira/browse/KAFKA-18946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated KAFKA-18946: --- Summary: Move BrokerReconfigurable and DynamicProducerStateManagerConfig to server module (was: Move DynamicLogConfig, BrokerReconfigurable, and DynamicProducerStateManagerConfig to server module) > Move BrokerReconfigurable and DynamicProducerStateManagerConfig to server > module > > > Key: KAFKA-18946 > URL: https://issues.apache.org/jira/browse/KAFKA-18946 > Project: Kafka > Issue Type: Sub-task >Reporter: Chia-Ping Tsai >Assignee: TengYao Chi >Priority: Major > > as title -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-18948) Move DynamicLogConfig to server module
TengYao Chi created KAFKA-18948: --- Summary: Move DynamicLogConfig to server module Key: KAFKA-18948 URL: https://issues.apache.org/jira/browse/KAFKA-18948 Project: Kafka Issue Type: Sub-task Reporter: TengYao Chi Assignee: TengYao Chi This needs to wait for `kafka.log.LogManager`. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] MINOR: Clean up metadata module [kafka]
sjhajharia commented on PR #19069: URL: https://github.com/apache/kafka/pull/19069#issuecomment-2708939212 Thanks @chia7712 for looking into the PR. I have addressed the comments. PTAL when possible! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-18003: add test to make sure `Admin#deleteRecords` can handle the corrupted records [kafka]
chia7712 commented on code in PR #17840: URL: https://github.com/apache/kafka/pull/17840#discussion_r1986365438 ## core/src/test/scala/integration/kafka/api/PlaintextAdminIntegrationTest.scala: ## @@ -1543,6 +1547,76 @@ class PlaintextAdminIntegrationTest extends BaseAdminIntegrationTest { assertNull(returnedOffsets.get(topicPartition)) } + @ParameterizedTest(name = TestInfoUtils.TestWithParameterizedQuorumAndGroupProtocolNames) + @MethodSource(Array("getTestQuorumAndGroupProtocolParametersClassicGroupProtocolOnly")) Review Comment: open https://issues.apache.org/jira/browse/KAFKA-18949 to trace it. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (KAFKA-18949) fix testDeleteRecordsAfterCorruptRecords for consumer protocol
[ https://issues.apache.org/jira/browse/KAFKA-18949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17933648#comment-17933648 ] PoAn Yang commented on KAFKA-18949: --- Hi [~chia7712], if you're not working on this, may I take it? Thanks. > fix testDeleteRecordsAfterCorruptRecords for consumer protocol > -- > > Key: KAFKA-18949 > URL: https://issues.apache.org/jira/browse/KAFKA-18949 > Project: Kafka > Issue Type: Improvement >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai >Priority: Major > > https://github.com/apache/kafka/pull/17840#discussion_r1845353488 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-18949) fix testDeleteRecordsAfterCorruptRecords for consumer protocol
Chia-Ping Tsai created KAFKA-18949: -- Summary: fix testDeleteRecordsAfterCorruptRecords for consumer protocol Key: KAFKA-18949 URL: https://issues.apache.org/jira/browse/KAFKA-18949 Project: Kafka Issue Type: Improvement Reporter: Chia-Ping Tsai Assignee: Chia-Ping Tsai https://github.com/apache/kafka/pull/17840#discussion_r1845353488 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (KAFKA-18949) fix testDeleteRecordsAfterCorruptRecords for consumer protocol
[ https://issues.apache.org/jira/browse/KAFKA-18949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai reassigned KAFKA-18949: -- Assignee: PoAn Yang (was: Chia-Ping Tsai) > fix testDeleteRecordsAfterCorruptRecords for consumer protocol > -- > > Key: KAFKA-18949 > URL: https://issues.apache.org/jira/browse/KAFKA-18949 > Project: Kafka > Issue Type: Improvement >Reporter: Chia-Ping Tsai >Assignee: PoAn Yang >Priority: Major > > https://github.com/apache/kafka/pull/17840#discussion_r1845353488 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (KAFKA-18845) Fail test QuorumControllerTest#testUncleanShutdownBrokerElrEnabled
[ https://issues.apache.org/jira/browse/KAFKA-18845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17933645#comment-17933645 ] PoAn Yang commented on KAFKA-18845: --- The QuorumControllerTest#testUncleanShutdownBrokerElrEnabled is not flaky after [https://github.com/apache/kafka/commit/88a23dab3ea6f76cc32066066149ce7843bf24ab]. Closing the issue. > Fail test QuorumControllerTest#testUncleanShutdownBrokerElrEnabled > -- > > Key: KAFKA-18845 > URL: https://issues.apache.org/jira/browse/KAFKA-18845 > Project: Kafka > Issue Type: Bug >Reporter: 黃竣陽 >Assignee: PoAn Yang >Priority: Major > Attachments: Screenshot 2025-03-09 at 11.46.53 PM.png > > > The test always fails when I use this command `I=0; while ./gradlew clean > :metadata:test --tests "QuorumControllerTest" --rerun --fail-fast; do (( > I=$I+1 )); echo "Completed run: $I"; sleep 1; done` on my local machine > {code:java} > java.util.concurrent.ExecutionException: > org.apache.kafka.common.errors.StaleBrokerEpochException: Expected broker > epoch 22, but got broker epoch 7 > at > java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:396) > at > java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2073) > at > org.apache.kafka.controller.QuorumControllerIntegrationTestUtils.sendBrokerHeartbeatToUnfenceBrokers(QuorumControllerIntegrationTestUtils.java:177) > at > org.apache.kafka.controller.QuorumControllerTest.testUncleanShutdownBrokerElrEnabled(QuorumControllerTest.java:498) > at java.base/java.lang.reflect.Method.invoke(Method.java:568) > at java.base/java.util.ArrayList.forEach(ArrayList.java:1511) > at java.base/java.util.ArrayList.forEach(ArrayList.java:1511) > Caused by: org.apache.kafka.common.errors.StaleBrokerEpochException: Expected > broker epoch 22, but got broker epoch 7 {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] KAFKA-18837: Validate controller.quorum.fetch.timeout.ms is a positive value [kafka]
mimaison commented on code in PR #18998: URL: https://github.com/apache/kafka/pull/18998#discussion_r1986366041 ## raft/src/test/java/org/apache/kafka/raft/QuorumConfigTest.java: ## @@ -0,0 +1,68 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + *http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.kafka.raft; + +import org.apache.kafka.common.config.AbstractConfig; +import org.apache.kafka.common.config.ConfigException; + +import org.junit.jupiter.api.Test; + +import java.util.HashMap; +import java.util.Map; + +import static org.junit.jupiter.api.Assertions.assertThrows; + +public class QuorumConfigTest { +@Test +public void testLegalConfig() { + verifyLegalConfig(Map.of(QuorumConfig.QUORUM_ELECTION_TIMEOUT_MS_CONFIG, "0")); +verifyLegalConfig(Map.of(QuorumConfig.QUORUM_FETCH_TIMEOUT_MS_CONFIG, "0")); + verifyLegalConfig(Map.of(QuorumConfig.QUORUM_ELECTION_BACKOFF_MAX_MS_CONFIG, "0")); +verifyLegalConfig(Map.of(QuorumConfig.QUORUM_LINGER_MS_CONFIG, "-1")); + verifyLegalConfig(Map.of(QuorumConfig.QUORUM_REQUEST_TIMEOUT_MS_CONFIG, "-1")); +verifyLegalConfig(Map.of(QuorumConfig.QUORUM_RETRY_BACKOFF_MS_CONFIG, "-1")); +} + +private void verifyLegalConfig(Map overrideConfig) { Review Comment: It's a bit strange to have a method called `verifyLegalConfig` but expect it to throw. ## raft/src/test/java/org/apache/kafka/raft/QuorumConfigTest.java: ## @@ -0,0 +1,68 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + *http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.kafka.raft; + +import org.apache.kafka.common.config.AbstractConfig; +import org.apache.kafka.common.config.ConfigException; + +import org.junit.jupiter.api.Test; + +import java.util.HashMap; +import java.util.Map; + +import static org.junit.jupiter.api.Assertions.assertThrows; + +public class QuorumConfigTest { +@Test +public void testLegalConfig() { + verifyLegalConfig(Map.of(QuorumConfig.QUORUM_ELECTION_TIMEOUT_MS_CONFIG, "0")); +verifyLegalConfig(Map.of(QuorumConfig.QUORUM_FETCH_TIMEOUT_MS_CONFIG, "0")); + verifyLegalConfig(Map.of(QuorumConfig.QUORUM_ELECTION_BACKOFF_MAX_MS_CONFIG, "0")); +verifyLegalConfig(Map.of(QuorumConfig.QUORUM_LINGER_MS_CONFIG, "-1")); + verifyLegalConfig(Map.of(QuorumConfig.QUORUM_REQUEST_TIMEOUT_MS_CONFIG, "-1")); +verifyLegalConfig(Map.of(QuorumConfig.QUORUM_RETRY_BACKOFF_MS_CONFIG, "-1")); +} + +private void verifyLegalConfig(Map overrideConfig) { +Map props = new HashMap<>(); +props.put(QuorumConfig.QUORUM_VOTERS_CONFIG, "1@localhost:9092"); +props.put(QuorumConfig.QUORUM_BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); +props.put(QuorumConfig.QUORUM_ELECTION_TIMEOUT_MS_CONFIG, "10"); +props.put(QuorumConfig.QUORUM_FETCH_TIMEOUT_MS_CONFIG, "10"); +props.put(QuorumConfig.QUORUM_ELECTION_BACKOFF_MAX_MS_CONFIG, "10"); +props.put(QuorumConfig.QUORUM_LINGER_MS_CONFIG, "10"); +props.put(QuorumConfig.QUORUM_REQUEST_TIMEOUT_MS_CONFIG, "10"); +props.put(QuorumConfig.QUORUM_RETRY_BACKOFF_MS_CONFIG, "10"); + +props.putAll(overrideConfig); + +assertThrows(ConfigException.class, () -> new QuorumConfig(new QuorumTestConfig(props))); Review Comment: Could we do something like: ``` assertThrows(ConfigException.class, () -> QuorumConfig.CONFIG_DEF.parse(props)); ``` instead of defining `QuorumTestConfig`? ## ra
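As background for the validation this PR adds — a minimal standalone sketch, not the actual `QuorumConfig` definition — Kafka's `ConfigDef` can enforce positive values with a `Range.atLeast(1)` validator, and `ConfigDef.parse` then throws `ConfigException` for out-of-range values (the default value and doc string below are placeholders):

```java
import java.util.Map;

import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.common.config.ConfigException;

public class QuorumTimeoutValidationSketch {
    private static final ConfigDef CONFIG_DEF = new ConfigDef()
        .define("controller.quorum.fetch.timeout.ms",
                ConfigDef.Type.INT,
                2000,                          // placeholder default
                ConfigDef.Range.atLeast(1),    // rejects zero and negative values
                ConfigDef.Importance.HIGH,
                "Placeholder documentation string.");

    public static void main(String[] args) {
        // Parsing a non-positive value fails before any config object is built.
        try {
            CONFIG_DEF.parse(Map.of("controller.quorum.fetch.timeout.ms", "0"));
        } catch (ConfigException e) {
            System.out.println("Rejected: " + e.getMessage());
        }
    }
}
```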
[jira] [Created] (KAFKA-18945) Enahnce the docs of Admin#describeCluster and Admin#describeConfigs
Chia-Ping Tsai created KAFKA-18945: -- Summary: Enahnce the docs of Admin#describeCluster and Admin#describeConfigs Key: KAFKA-18945 URL: https://issues.apache.org/jira/browse/KAFKA-18945 Project: Kafka Issue Type: Improvement Reporter: Chia-Ping Tsai Assignee: Chia-Ping Tsai in the [https://github.com/apache/kafka/pull/19100,] we add the difference of dynamic config between "broker" and "controller". However, the docs of public APIs does not include that. we should enhance it as well -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (KAFKA-18945) Enahnce the docs of Admin#describeCluster and Admin#describeConfigs for bootstrap-controller
[ https://issues.apache.org/jira/browse/KAFKA-18945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kuan Po Tseng updated KAFKA-18945: -- Description: in the [https://github.com/apache/kafka/pull/19100|https://github.com/apache/kafka/pull/19100], we add the difference of dynamic config between "broker" and "controller". However, the docs of public APIs does not include that. we should enhance it as well was: in the [https://github.com/apache/kafka/pull/19100,] we add the difference of dynamic config between "broker" and "controller". However, the docs of public APIs does not include that. we should enhance it as well > Enahnce the docs of Admin#describeCluster and Admin#describeConfigs for > bootstrap-controller > > > Key: KAFKA-18945 > URL: https://issues.apache.org/jira/browse/KAFKA-18945 > Project: Kafka > Issue Type: Improvement >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai >Priority: Major > > in the > [https://github.com/apache/kafka/pull/19100|https://github.com/apache/kafka/pull/19100], > we add the difference of dynamic config between "broker" and "controller". > However, the docs of public APIs does not include that. we should enhance it > as well -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (KAFKA-18945) Enahnce the docs of Admin#describeCluster and Admin#describeConfigs for bootstrap-controller
[ https://issues.apache.org/jira/browse/KAFKA-18945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17933623#comment-17933623 ] Chia-Ping Tsai commented on KAFKA-18945: # the cluster returned by describeCluster depends on the bootstrap # we can't describe controller config (or logger) when using bootstrap broker. # we can't describe broker config (or logger) when using bootstrap controller # DescribeClusterResult#controller is random server in using bootstrap broker # DescribeClusterResult#controller is voter leader in using bootstrap controller > Enahnce the docs of Admin#describeCluster and Admin#describeConfigs for > bootstrap-controller > > > Key: KAFKA-18945 > URL: https://issues.apache.org/jira/browse/KAFKA-18945 > Project: Kafka > Issue Type: Improvement >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai >Priority: Major > > in the > [https://github.com/apache/kafka/pull/19100|https://github.com/apache/kafka/pull/19100], > we add the difference of dynamic config between "broker" and "controller". > However, the docs of public APIs does not include that. we should enhance it > as well -- This message was sent by Atlassian Jira (v8.20.10#820010)
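A rough sketch of the two bootstrap modes being contrasted above (addresses are placeholders; the behaviour in the comments restates the points from this Jira comment rather than anything verified here):

{code:java}
import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.Node;

public class BootstrapModesSketch {
    public static void main(String[] args) throws Exception {
        // Bootstrap via brokers: describeCluster() returns the broker view and
        // DescribeClusterResult#controller is a random live broker; controller
        // configs/loggers cannot be described over this connection.
        Properties viaBrokers = new Properties();
        viaBrokers.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        // Bootstrap via controllers (KIP-919): describeCluster() returns the quorum
        // view and DescribeClusterResult#controller is the voter leader; broker
        // configs/loggers cannot be described over this connection.
        Properties viaControllers = new Properties();
        viaControllers.put(AdminClientConfig.BOOTSTRAP_CONTROLLERS_CONFIG, "localhost:9093");

        try (Admin brokers = Admin.create(viaBrokers); Admin controllers = Admin.create(viaControllers)) {
            Node randomBroker = brokers.describeCluster().controller().get();
            Node quorumLeader = controllers.describeCluster().controller().get();
            System.out.println(randomBroker + " / " + quorumLeader);
        }
    }
}
{code}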
[jira] [Assigned] (KAFKA-18945) Enahnce the docs of Admin#describeCluster and Admin#describeConfigs for bootstrap-controller
[ https://issues.apache.org/jira/browse/KAFKA-18945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai reassigned KAFKA-18945: -- Assignee: Kuan Po Tseng (was: Chia-Ping Tsai) > Enahnce the docs of Admin#describeCluster and Admin#describeConfigs for > bootstrap-controller > > > Key: KAFKA-18945 > URL: https://issues.apache.org/jira/browse/KAFKA-18945 > Project: Kafka > Issue Type: Improvement >Reporter: Chia-Ping Tsai >Assignee: Kuan Po Tseng >Priority: Major > > in the > [https://github.com/apache/kafka/pull/19100|https://github.com/apache/kafka/pull/19100], > we add the difference of dynamic config between "broker" and "controller". > However, the docs of public APIs does not include that. we should enhance it > as well -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (KAFKA-18945) Enahnce the docs of Admin#describeCluster and Admin#describeConfigs for bootstrap-controller
[ https://issues.apache.org/jira/browse/KAFKA-18945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated KAFKA-18945: --- Summary: Enahnce the docs of Admin#describeCluster and Admin#describeConfigs for bootstrap-controller (was: Enahnce the docs of Admin#describeCluster and Admin#describeConfigs) > Enahnce the docs of Admin#describeCluster and Admin#describeConfigs for > bootstrap-controller > > > Key: KAFKA-18945 > URL: https://issues.apache.org/jira/browse/KAFKA-18945 > Project: Kafka > Issue Type: Improvement >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai >Priority: Major > > in the [https://github.com/apache/kafka/pull/19100,] we add the difference of > dynamic config between "broker" and "controller". > However, the docs of public APIs does not include that. we should enhance it > as well -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] KAFKA-18602: Incorrect FinalizedVersionLevel reported for dynamic KRaft quorum [kafka]
junrao commented on code in PR #18685: URL: https://github.com/apache/kafka/pull/18685#discussion_r1983832557 ## core/src/main/scala/kafka/server/metadata/KRaftMetadataCache.scala: ## @@ -522,10 +522,14 @@ class KRaftMetadataCache( if (kraftVersionLevel > 0) { finalizedFeatures.put(KRaftVersion.FEATURE_NAME, kraftVersionLevel) } +var metadataVersion = MetadataVersion.MINIMUM_VERSION Review Comment: Hmm, is this because the KRaft client needs to serve ApiVersionRequest during bootstrap? MV won't be ready during bootstrap, but ApiVersionResponse needs to include finalized features that depend on MV being available. ## core/src/main/scala/kafka/server/metadata/KRaftMetadataCache.scala: ## @@ -522,10 +522,14 @@ class KRaftMetadataCache( if (kraftVersionLevel > 0) { finalizedFeatures.put(KRaftVersion.FEATURE_NAME, kraftVersionLevel) } +var metadataVersion = MetadataVersion.MINIMUM_VERSION +if (!image.features().metadataVersion().isEmpty) { + metadataVersion = image.features().metadataVersionOrThrow() +} new FinalizedFeatures( - image.features().metadataVersionOrThrow(), + metadataVersion, finalizedFeatures, - image.highestOffsetAndEpoch().offset) + image.highestOffsetAndEpoch().epoch()) Review Comment: Hmm, this doesn't seem correct. Within the same KRaft leader's life time (i.e. the same epoch), multiple rounds of feature updates could have happened. It would be incorrect to have the same FinalizedFeaturesEpoch after each round of update. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-18942: Add reviewers to PR body with committer-tools [kafka]
chia7712 commented on code in PR #19168: URL: https://github.com/apache/kafka/pull/19168#discussion_r1986329085 ## committer-tools/reviewers.py: ## @@ -35,6 +37,31 @@ def prompt_for_user(): return clean_input +def append_message_to_pr_body(pr_url, message): +try: +cmd_get_pr = ["gh", "pr", "view", pr_url, "--json", "title,body"] +result = subprocess.run(cmd_get_pr, capture_output=True, text=True, check=True) +current_pr_body = json.loads(result.stdout).get("body", {}) +pr_title = json.loads(result.stdout).get("title", {}) +print(f"The new PR body will be:\n{current_pr_body}{message}") Review Comment: please remove the debug message ## committer-tools/reviewers.py: ## @@ -35,6 +37,31 @@ def prompt_for_user(): return clean_input +def append_message_to_pr_body(pr_url, message): +try: +cmd_get_pr = ["gh", "pr", "view", pr_url, "--json", "title,body"] +result = subprocess.run(cmd_get_pr, capture_output=True, text=True, check=True) +current_pr_body = json.loads(result.stdout).get("body", {}) +pr_title = json.loads(result.stdout).get("title", {}) +print(f"The new PR body will be:\n{current_pr_body}{message}") +escaped_message = message.replace("<", "\\<").replace(">", "\\>") +updated_pr_body = f"{current_pr_body}{escaped_message}" +except subprocess.CalledProcessError as e: +print("Failed to retrieve PR description:", e.stderr) +return + +choice = input(f"Update the body of {pr_title}? (y/n): ").strip().lower() Review Comment: Also, there are duplicate messages ``` // here Update the body of KAFKA-18942: Add reviewers to PR body with committer-tools? (y/n): y https://github.com/apache/kafka/pull/19168 PR description updated successfully! ``` Reviewers: Chia-Ping Tsai \, TengYao Chi \ Reviewers: Chia-Ping Tsai // here Update the body of KAFKA-18942: Add reviewers to PR body with committer-tools? (y/n): y https://github.com/apache/kafka/pull/19168 PR description updated successfully! ``` ## committer-tools/reviewers.py: ## @@ -35,6 +37,31 @@ def prompt_for_user(): return clean_input +def append_message_to_pr_body(pr_url, message): +try: +cmd_get_pr = ["gh", "pr", "view", pr_url, "--json", "title,body"] +result = subprocess.run(cmd_get_pr, capture_output=True, text=True, check=True) +current_pr_body = json.loads(result.stdout).get("body", {}) +pr_title = json.loads(result.stdout).get("title", {}) +print(f"The new PR body will be:\n{current_pr_body}{message}") +escaped_message = message.replace("<", "\\<").replace(">", "\\>") +updated_pr_body = f"{current_pr_body}{escaped_message}" +except subprocess.CalledProcessError as e: +print("Failed to retrieve PR description:", e.stderr) +return + +choice = input(f"Update the body of {pr_title}? (y/n): ").strip().lower() Review Comment: Could you add `"` to `pr_title` - otherwise, it is a bit unreadable. ``` Update the body of KAFKA-18942: Add reviewers to PR body with committer-tools? (y/n): y ``` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (KAFKA-18945) Enahnce the docs of Admin#describeCluster and Admin#describeConfigs for bootstrap-controller
[ https://issues.apache.org/jira/browse/KAFKA-18945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17933621#comment-17933621 ] Kuan Po Tseng commented on KAFKA-18945: --- Hi [~chia7712], may I take over this issue if you are not working on it? Thank you! > Enahnce the docs of Admin#describeCluster and Admin#describeConfigs for > bootstrap-controller > > > Key: KAFKA-18945 > URL: https://issues.apache.org/jira/browse/KAFKA-18945 > Project: Kafka > Issue Type: Improvement >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai >Priority: Major > > in the [https://github.com/apache/kafka/pull/19100,] we add the difference of > dynamic config between "broker" and "controller". > However, the docs of public APIs does not include that. we should enhance it > as well -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] KAFKA-18909: Move DynamicThreadPool to server module [kafka]
chia7712 commented on PR #19081: URL: https://github.com/apache/kafka/pull/19081#issuecomment-2708761638 the failed test is traced by https://issues.apache.org/jira/browse/KAFKA-18606 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (KAFKA-18909) Move DynamicThreadPool to server module
[ https://issues.apache.org/jira/browse/KAFKA-18909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai resolved KAFKA-18909. Fix Version/s: 4.1.0 Resolution: Fixed > Move DynamicThreadPool to server module > --- > > Key: KAFKA-18909 > URL: https://issues.apache.org/jira/browse/KAFKA-18909 > Project: Kafka > Issue Type: Sub-task >Reporter: Chia-Ping Tsai >Assignee: Wei-Ting Chen >Priority: Major > Fix For: 4.1.0 > > > as title -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] KAFKA-18909: Move DynamicThreadPool to server module [kafka]
chia7712 merged PR #19081: URL: https://github.com/apache/kafka/pull/19081 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (KAFKA-18944) Remove unused setters from ClusterConfig
[ https://issues.apache.org/jira/browse/KAFKA-18944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai resolved KAFKA-18944. Fix Version/s: 4.1.0 Resolution: Fixed > Remove unused setters from ClusterConfig > > > Key: KAFKA-18944 > URL: https://issues.apache.org/jira/browse/KAFKA-18944 > Project: Kafka > Issue Type: Improvement >Reporter: Chia-Ping Tsai >Assignee: Wei-Ting Chen >Priority: Major > Fix For: 4.1.0 > > > saslServerProperties, saslClientProperties, adminClientProperties, > producerProperties, consumerProperties > > those setters are not used actually, so we should remove them to avoid > misleading developers. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] KAFKA-18031: Flaky PlaintextConsumerTest testCloseLeavesGroupOnInterrupt [kafka]
chia7712 commented on PR #19105: URL: https://github.com/apache/kafka/pull/19105#issuecomment-2708850790 > If the request number exceeds maxInFlightRequestsPerConnection, the LeaveGroup request would not be sent in time when closing. why does it exceed the `maxInFlightRequestsPerConnection`? Does it happen in the test only? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Assigned] (KAFKA-18946) Move DynamicLogConfig, BrokerReconfigurable, and DynamicProducerStateManagerConfig to server module
[ https://issues.apache.org/jira/browse/KAFKA-18946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] TengYao Chi reassigned KAFKA-18946: --- Assignee: TengYao Chi (was: Chia-Ping Tsai) > Move DynamicLogConfig, BrokerReconfigurable, and > DynamicProducerStateManagerConfig to server module > --- > > Key: KAFKA-18946 > URL: https://issues.apache.org/jira/browse/KAFKA-18946 > Project: Kafka > Issue Type: Sub-task >Reporter: Chia-Ping Tsai >Assignee: TengYao Chi >Priority: Major > > as title -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] KAFKA-18942: Add reviewers to PR body with committer-tools [kafka]
chia7712 commented on PR #19168: URL: https://github.com/apache/kafka/pull/19168#issuecomment-2708854016 @mingyen066 could you please take a look at the following error? ``` chia7712@chia7712-ubuntu:~/project/kafka$ ./committer-tools/reviewers.py /home/chia7712/project/kafka/./committer-tools/reviewers.py:46: SyntaxWarning: invalid escape sequence '\<' escaped_message = message.replace("<", "\<").replace(">", "\>") /home/chia7712/project/kafka/./committer-tools/reviewers.py:46: SyntaxWarning: invalid escape sequence '\>' escaped_message = message.replace("<", "\<").replace(">", "\>") Utility to help generate 'Reviewers' string for Pull Requests. Use Ctrl+D or Ctrl+C to exit ``` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-17856 Move ConfigCommandTest and ConfigCommandIntegrationTest to tool module [kafka]
chia7712 merged PR #17767: URL: https://github.com/apache/kafka/pull/17767 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (KAFKA-18946) Move DynamicLogConfig, BrokerReconfigurable, and DynamicProducerStateManagerConfig to server module
Chia-Ping Tsai created KAFKA-18946: -- Summary: Move DynamicLogConfig, BrokerReconfigurable, and DynamicProducerStateManagerConfig to server module Key: KAFKA-18946 URL: https://issues.apache.org/jira/browse/KAFKA-18946 Project: Kafka Issue Type: Sub-task Reporter: Chia-Ping Tsai Assignee: Chia-Ping Tsai as title -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] KAFKA-18942: Add reviewers to PR body with committer-tools [kafka]
mingyen066 commented on PR #19168: URL: https://github.com/apache/kafka/pull/19168#issuecomment-2708858624 @chia7712 Fixed by adding another escape (two backslashes) for Python. It's weird that one backslash works fine on macOS. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] [KAFKA-18941] Removes unneeded tests from upgrade_tests.py [kafka]
chia7712 commented on PR #19162: URL: https://github.com/apache/kafka/pull/19162#issuecomment-2708849438 @josefk31 in #18386 we made the e2e tests work for the "valid" upgrade path - single folder. Hence, could you please share more details about the description "upgrade test in question is not supported for AK 3.3.2"? Thanks! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Assigned] (KAFKA-18947) refactor MetadataShell
[ https://issues.apache.org/jira/browse/KAFKA-18947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ming-Yen Chung reassigned KAFKA-18947: -- Assignee: Ming-Yen Chung (was: Chia-Ping Tsai) > refactor MetadataShell > -- > > Key: KAFKA-18947 > URL: https://issues.apache.org/jira/browse/KAFKA-18947 > Project: Kafka > Issue Type: Improvement >Reporter: Chia-Ping Tsai >Assignee: Ming-Yen Chung >Priority: Major > > # the error message is out-of-date. we only support using a snapshot file for > now > # remove unused fields - for example, raftManager > # enhance the docs of the snapshot file to help users input the correct file path -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-18947) refactor MetadataShell
Chia-Ping Tsai created KAFKA-18947: -- Summary: refactor MetadataShell Key: KAFKA-18947 URL: https://issues.apache.org/jira/browse/KAFKA-18947 Project: Kafka Issue Type: Improvement Reporter: Chia-Ping Tsai Assignee: Chia-Ping Tsai # the error message is out-of-date. we only support using a snapshot file for now # remove unused fields - for example, raftManager # enhance the docs of the snapshot file to help users input the correct file path -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] KAFKA-18942: Add reviewers to PR body with committer-tools [kafka]
chia7712 commented on code in PR #19168: URL: https://github.com/apache/kafka/pull/19168#discussion_r1986315184 ## committer-tools/reviewers.py: ## @@ -87,9 +112,16 @@ def prompt_for_user(): continue if selected_reviewers: -out = "\n\nReviewers: " -out += ", ".join([f"{name} <{email}>" for name, email, _ in selected_reviewers]) -out += "\n" -print(out) +reviewer_message = f'\n\nReviewers: {", ".join([f"{name} <{email}>" for name, email, _ in selected_reviewers])}\n' Review Comment: it seems the "<>" disappeared in the PR description. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (KAFKA-18706) Move AclPublisher to metadata module
[ https://issues.apache.org/jira/browse/KAFKA-18706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai resolved KAFKA-18706. Fix Version/s: 4.1.0 Resolution: Fixed > Move AclPublisher to metadata module > > > Key: KAFKA-18706 > URL: https://issues.apache.org/jira/browse/KAFKA-18706 > Project: Kafka > Issue Type: Sub-task >Reporter: PoAn Yang >Assignee: PoAn Yang >Priority: Major > Fix For: 4.1.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] KAFKA-18706: Move AclPublisher to metadata module [kafka]
chia7712 merged PR #18802: URL: https://github.com/apache/kafka/pull/18802 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-18858: Refactor FeatureControlManager to avoid using uninitialized MV [kafka]
chia7712 commented on PR #19040: URL: https://github.com/apache/kafka/pull/19040#issuecomment-2708841989 @FrankYang0529 could you please fix the conflicts? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (KAFKA-17856) Move ConfigCommandTest and ConfigCommandIntegrationTest to tool module
[ https://issues.apache.org/jira/browse/KAFKA-17856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai resolved KAFKA-17856. Fix Version/s: 4.1.0 Resolution: Fixed > Move ConfigCommandTest and ConfigCommandIntegrationTest to tool module > -- > > Key: KAFKA-17856 > URL: https://issues.apache.org/jira/browse/KAFKA-17856 > Project: Kafka > Issue Type: Sub-task >Reporter: Chia-Ping Tsai >Assignee: 黃竣陽 >Priority: Minor > Fix For: 4.1.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] [WIP] KIP-891: Connect Multiversion Support (Updates to status and metrics) [kafka]
snehashisp commented on PR #17988: URL: https://github.com/apache/kafka/pull/17988#issuecomment-2708848557 Hi @gharris1727. Apologies for the delay in getting this PR updated. Please review when you get some time. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-18942: Add reviewers to PR body with committer-tools [kafka]
mingyen066 commented on code in PR #19168: URL: https://github.com/apache/kafka/pull/19168#discussion_r1986320862 ## committer-tools/reviewers.py: ## @@ -87,9 +112,16 @@ def prompt_for_user(): continue if selected_reviewers: -out = "\n\nReviewers: " -out += ", ".join([f"{name} <{email}>" for name, email, _ in selected_reviewers]) -out += "\n" -print(out) +reviewer_message = f'\n\nReviewers: {", ".join([f"{name} <{email}>" for name, email, _ in selected_reviewers])}\n' Review Comment: Fixed by adding a backslash before the angle brackets to escape them for Markdown. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] MINOR KIP link change to use immutable link [kafka]
chia7712 commented on PR #19153: URL: https://github.com/apache/kafka/pull/19153#issuecomment-2708849650 @m1a2st could you please sync code to run CI again? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] MINOR: Update upgrade steps [kafka]
dajac merged PR #19132: URL: https://github.com/apache/kafka/pull/19132 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (KAFKA-18913) Consider removing state-updater feature flag
[ https://issues.apache.org/jira/browse/KAFKA-18913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17933655#comment-17933655 ] Janindu Pathirana commented on KAFKA-18913: --- Hi [~mjsax] , Would you please be able to give me a more detailed overview on what should be done? Still new to streams so need more info as to what should be done and the purpose!:D > Consider removing state-updater feature flag > > > Key: KAFKA-18913 > URL: https://issues.apache.org/jira/browse/KAFKA-18913 > Project: Kafka > Issue Type: Task > Components: streams >Reporter: Matthias J. Sax >Assignee: Janindu Pathirana >Priority: Blocker > Fix For: 4.1.0 > > > We did enable the new StateUpdated thread with 3.8 release. > We should consider removing the internal feature flag, and drop the old code. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[PR] KAFKA-18947: Remove unused raftManager in metadataShell [kafka]
mingyen066 opened a new pull request, #19169: URL: https://github.com/apache/kafka/pull/19169 * Remove unused `raftManager` in `metadataShell` * Enhance the error message when no snapshot is provided. * Since `raftManager` is removed, make `snapshot` a required argument. Result when no snapshot is given: ``` $ ./bin/kafka-metadata-shell.sh usage: kafka-metadata-shell [-h] --snapshot SNAPSHOT [command [command ...]] kafka-metadata-shell: error: argument --snapshot/-s is required ``` ``` $ ./bin/kafka-metadata-shell.sh --help usage: kafka-metadata-shell [-h] --snapshot SNAPSHOT [command [command ...]] The Apache Kafka metadata shell positional arguments: command The command to run. optional arguments: -h, --help show this help message and exit --snapshot SNAPSHOT, -s SNAPSHOT The metadata snapshot file to read. ``` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
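As an illustration of the `argument --snapshot/-s is required` behaviour shown above, here is a hedged sketch using argparse4j; the class and variable names are illustrative, not the actual `MetadataShell` code:

```java
import net.sourceforge.argparse4j.ArgumentParsers;
import net.sourceforge.argparse4j.inf.ArgumentParser;
import net.sourceforge.argparse4j.inf.ArgumentParserException;
import net.sourceforge.argparse4j.inf.Namespace;

public class SnapshotArgSketch {
    public static void main(String[] args) {
        ArgumentParser parser = ArgumentParsers.newArgumentParser("kafka-metadata-shell")
            .description("The Apache Kafka metadata shell");
        parser.addArgument("--snapshot", "-s")
            .required(true) // with raftManager gone there is no fallback, so the flag is mandatory
            .help("The metadata snapshot file to read.");
        parser.addArgument("command")
            .nargs("*")
            .help("The command to run.");
        try {
            Namespace ns = parser.parseArgs(args);
            System.out.println("snapshot = " + ns.getString("snapshot"));
        } catch (ArgumentParserException e) {
            // prints the usage line plus "error: argument --snapshot/-s is required"
            parser.handleError(e);
        }
    }
}
```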
Re: [PR] KAFKA-18932: Removed usage of partition max bytes from share fetch requests on the broker [kafka]
apoorvmittal10 commented on code in PR #19148: URL: https://github.com/apache/kafka/pull/19148#discussion_r1986418726 ## core/src/main/java/kafka/server/share/SharePartitionManager.java: ## @@ -258,14 +257,14 @@ public CompletableFuture> fetchMessages( FetchParams fetchParams, int sessionEpoch, int batchSize, -LinkedHashMap partitionMaxBytes +ArrayList topicPartitions Review Comment: ```suggestion List topicIdPartitions ``` ## core/src/test/java/kafka/server/share/DelayedShareFetchTest.java: ## @@ -526,7 +518,7 @@ public void testForceCompleteTriggersDelayedActionsQueue() { TopicIdPartition tp0 = new TopicIdPartition(topicId, new TopicPartition("foo", 0)); TopicIdPartition tp1 = new TopicIdPartition(topicId, new TopicPartition("foo", 1)); TopicIdPartition tp2 = new TopicIdPartition(topicId, new TopicPartition("foo", 2)); -LinkedHashMap partitionMaxBytes1 = orderedMap(PARTITION_MAX_BYTES, tp0, tp1); +ArrayList topicIdPartitions1 = arrayList(tp0, tp1); Review Comment: ```suggestion List topicIdPartitions1 = List.of(tp0, tp1); ``` ## server/src/main/java/org/apache/kafka/server/share/fetch/PartitionRotateStrategy.java: ## @@ -48,7 +47,7 @@ public String toString() { * * @return the rotated topicIdPartitions */ -LinkedHashMap rotate(LinkedHashMap topicIdPartitions, PartitionRotateMetadata metadata); +ArrayList rotate(ArrayList topicIdPartitions, PartitionRotateMetadata metadata); Review Comment: ```suggestion List rotate(List topicIdPartitions, PartitionRotateMetadata metadata); ``` ## clients/src/main/java/org/apache/kafka/common/requests/ShareFetchRequest.java: ## @@ -151,7 +149,7 @@ public String toString() { } private final ShareFetchRequestData data; -private volatile LinkedHashMap shareFetchData = null; +private volatile ArrayList shareFetchData = null; Review Comment: ```suggestion private volatile List shareFetchData = null; ``` ## server/src/main/java/org/apache/kafka/server/share/fetch/PartitionRotateStrategy.java: ## @@ -64,8 +63,8 @@ static PartitionRotateStrategy type(StrategyType type) { * * @return the rotated topicIdPartitions */ -static LinkedHashMap rotateRoundRobin( -LinkedHashMap topicIdPartitions, +static ArrayList rotateRoundRobin( Review Comment: ```suggestion static List rotateRoundRobin( ``` ## server/src/main/java/org/apache/kafka/server/share/session/ShareSession.java: ## @@ -110,25 +109,20 @@ public synchronized LastUsedKey lastUsedKey() { return new LastUsedKey(key, lastUsedMs); } -// Visible for testing -public synchronized long creationMs() { -return creationMs; -} - // Update the cached partition data based on the request. -public synchronized Map> update(Map shareFetchData, List toForget) { +public synchronized Map> update( +List shareFetchData, +List toForget) { Review Comment: ```suggestion List toForget ) { ``` ## server/src/main/java/org/apache/kafka/server/share/fetch/PartitionRotateStrategy.java: ## @@ -80,20 +79,18 @@ static LinkedHashMap rotateRoundRobin( return topicIdPartitions; } -// TODO: Once the partition max bytes is removed then the partition will be a linked list and rotation -// will be a simple operation. Else consider using ImplicitLinkedHashCollection. 
-LinkedHashMap suffixPartitions = new LinkedHashMap<>(rotateAt); -LinkedHashMap rotatedPartitions = new LinkedHashMap<>(topicIdPartitions.size()); +ArrayList suffixPartitions = new ArrayList<>(rotateAt); +ArrayList rotatedPartitions = new ArrayList<>(topicIdPartitions.size()); int i = 0; -for (Map.Entry entry : topicIdPartitions.entrySet()) { +for (TopicIdPartition topicIdPartition : topicIdPartitions) { if (i < rotateAt) { -suffixPartitions.put(entry.getKey(), entry.getValue()); +suffixPartitions.add(topicIdPartition); } else { -rotatedPartitions.put(entry.getKey(), entry.getValue()); +rotatedPartitions.add(topicIdPartition); } i++; } -rotatedPartitions.putAll(suffixPartitions); +rotatedPartitions.addAll(suffixPartitions); Review Comment: Should we use `Collections.rotate` instead? ## core/src/main/java/kafka/server/share/SharePartitionManager.java: ## @@ -427,29 +426,20 @@ private CompletableFuture shareFetchData, +public ShareFetchContext newContext(String groupId, List shareFetchData, List toForget, ShareRequestMetadata reqMetad
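As a side note on the `Collections.rotate` suggestion in the review above: a minimal sketch, not code from the PR, using a generic element type in place of `TopicIdPartition`, of how a left rotation by `rotateAt` could replace the manual suffix/rotated-list loop:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class RotateSketch {

    // Move the first rotateAt elements to the tail, leaving the input list untouched.
    static <T> List<T> rotateRoundRobin(List<T> partitions, int rotateAt) {
        if (partitions.isEmpty() || rotateAt % partitions.size() == 0) {
            return partitions;
        }
        List<T> rotated = new ArrayList<>(partitions);
        Collections.rotate(rotated, -rotateAt); // negative distance rotates left
        return rotated;
    }

    public static void main(String[] args) {
        // prints [tp1, tp2, tp3, tp0]
        System.out.println(rotateRoundRobin(List.of("tp0", "tp1", "tp2", "tp3"), 1));
    }
}
```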
[jira] [Resolved] (KAFKA-18023) Enforcing Explicit Naming for Kafka Streams Internal Topics (KIP-1111)
[ https://issues.apache.org/jira/browse/KAFKA-18023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matthias J. Sax resolved KAFKA-18023. - Fix Version/s: 4.1.0 Resolution: Fixed > Enforcing Explicit Naming for Kafka Streams Internal Topics (KIP-) > -- > > Key: KAFKA-18023 > URL: https://issues.apache.org/jira/browse/KAFKA-18023 > Project: Kafka > Issue Type: Improvement > Components: kip, streams >Reporter: Sebastien Viale >Assignee: Sebastien Viale >Priority: Minor > Fix For: 4.1.0 > > > Jira to follow work on KIP: > https://cwiki.apache.org/confluence/display/KAFKA/KIP-%3A+Enforcing+Explicit+Naming+for+Kafka+Streams+Internal+Topics -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] KAFKA-18947: Remove unused raftManager in metadataShell [kafka]
Yunyung commented on code in PR #19169: URL: https://github.com/apache/kafka/pull/19169#discussion_r1986574459 ## shell/src/test/java/org/apache/kafka/shell/MetadataShellIntegrationTest.java: ## @@ -84,9 +84,9 @@ public void close() { @ValueSource(booleans = {false, true}) public void testLock(boolean canLock) throws Exception { try (IntegrationEnv env = new IntegrationEnv()) { -env.shell = new MetadataShell(null, +env.shell = new MetadataShell( new File(new File(env.tempDir, "__cluster_metadata-0"), "000122906351-000226.checkpoint").getAbsolutePath(), -env.faultHandler); Review Comment: The tab should NOT be removed here. ## shell/src/main/java/org/apache/kafka/shell/MetadataShell.java: ## @@ -292,4 +251,4 @@ void waitUntilCaughtUp() throws InterruptedException { Thread.sleep(10); } } -} +} Review Comment: Please add a space at the end of the file. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] MINOR: Fix minor issues in ops doc [kafka]
MahsaSeifikar closed pull request #19158: MINOR: Fix minor issues in ops doc URL: https://github.com/apache/kafka/pull/19158 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Assigned] (KAFKA-7302) Remove Java7 examples from Streams Docs
[ https://issues.apache.org/jira/browse/KAFKA-7302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matthias J. Sax reassigned KAFKA-7302: -- Assignee: (was: Vijay) > Remove Java7 examples from Streams Docs > --- > > Key: KAFKA-7302 > URL: https://issues.apache.org/jira/browse/KAFKA-7302 > Project: Kafka > Issue Type: Improvement > Components: documentation, streams >Affects Versions: 2.0.0 >Reporter: Matthias J. Sax >Priority: Minor > > In 2.0 release, Java7 support was dropped. We might consider removing Java7 > examples from the Streams docs. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] MINOR: Fix minor issues in ops doc [kafka]
MahsaSeifikar commented on PR #19158: URL: https://github.com/apache/kafka/pull/19158#issuecomment-2709166623 Thanks for letting me know! I'll go ahead and close this PR. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] MINOR: Rewrite unchecked operations in Mock API [kafka]
clarkwtc commented on PR #19071: URL: https://github.com/apache/kafka/pull/19071#issuecomment-2709276660 @chia7712 No problem. I fixed that similar issue and updated the information at the top of the conversation. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[PR] KAFKA-17808: InstanceAlreadyExistsException: kafka.admin.client:type=app-info,id=connector-dlq-adminclient- when add connector with tasks [kafka]
Yunyung opened a new pull request, #19171: URL: https://github.com/apache/kafka/pull/19171 ## Description Fix id typo for connector-dlq-adminclient. Please see Jira: https://issues.apache.org/jira/browse/KAFKA-17808 ## Verification ### Setup Steps: Modify default `config/connect-file-sink.properties` (enabling dlq): ``` name=local-file-sink connector.class=FileStreamSink -tasks.max=1 +tasks.max=2 file=test.sink.txt topics=connect-test +errors.deadletterqueue.topic.name=dlq-topic +errors.deadletterqueue.context.headers.enable=true +errors.deadletterqueue.topic.replication.factor=1 +errors.tolerance=all ``` Then, running: ``` bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-sink.properties ``` ### Before $cat logs/connect.log | grep -ri connector-dlq-adminclient- logs/connect.log: client.id = connector-dlq-adminclient- logs/connect.log: client.id = connector-dlq-adminclient- logs/connect.log:[2025-03-05 03:51:55,766] INFO [local-file-sink|task-1] The mbean of App info: [kafka.admin.client], id: [connector-dlq-adminclient-] already exists, so skipping a new mbean creation. (org.apache.kafka.common.utils.AppInfoParser:66) logs/connect.log:[2025-03-05 03:51:55,768] INFO App info kafka.admin.client for connector-dlq-adminclient- unregistered (org.apache.kafka.common.utils.AppInfoParser:89) logs/connect.log:[2025-03-05 03:51:55,770] INFO App info kafka.admin.client for connector-dlq-adminclient- unregistered (org.apache.kafka.common.utils.AppInfoParser:89) ### After ``` $cat logs/connect.log | grep -ri connector-dlq-adminclient- logs/connect.log: client.id = connector-dlq-adminclient-local-file-sink-0 logs/connect.log: client.id = connector-dlq-adminclient-local-file-sink-1 logs/connect.log:[2025-03-07 12:53:44,220] INFO App info kafka.admin.client for connector-dlq-adminclient-local-file-sink-1 unregistered (org.apache.kafka.common.utils.AppInfoParser:89) logs/connect.log:[2025-03-07 12:53:44,221] INFO App info kafka.admin.client for connector-dlq-adminclient-local-file-sink-0 unregistered (org.apache.kafka.common.utils.AppInfoParser:89) ``` ## Test Since the admin is not exposed, it is hard to write a unit test to check the ID. https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/errors/DeadLetterQueueReporter.java#L84-L89 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
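The essence of the fix described above is to give each task's DLQ admin client a distinct `client.id` instead of the shared bare prefix. A rough sketch of the idea (an illustrative helper with assumed names, not the actual `Worker`/`DeadLetterQueueReporter` code):

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class DlqAdminClientIdSketch {

    // Each task gets a unique client.id, e.g. "connector-dlq-adminclient-local-file-sink-0",
    // so the per-client app-info MBean name no longer collides across tasks.
    static Admin dlqAdminFor(String connectorName, int taskId, String bootstrapServers) {
        Map<String, Object> props = new HashMap<>();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(AdminClientConfig.CLIENT_ID_CONFIG,
            "connector-dlq-adminclient-" + connectorName + "-" + taskId);
        return Admin.create(props);
    }
}
```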
[jira] [Commented] (KAFKA-17808) InstanceAlreadyExistsException: kafka.admin.client:type=app-info,id=connector-dlq-adminclient- when add connector with tasks
[ https://issues.apache.org/jira/browse/KAFKA-17808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17933701#comment-17933701 ] Jhen-Yung Hsu commented on KAFKA-17808: --- Since the admin is not exposed, it is hard to write a unit test to check the ID. [https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/errors/DeadLetterQueueReporter.java#L84-L89] So, I clipped the log to verify the fix in the PR. PTAL. > InstanceAlreadyExistsException: > kafka.admin.client:type=app-info,id=connector-dlq-adminclient- when add > connector with tasks > > > Key: KAFKA-17808 > URL: https://issues.apache.org/jira/browse/KAFKA-17808 > Project: Kafka > Issue Type: Bug > Components: connect >Reporter: ignat233 >Assignee: Jhen-Yung Hsu >Priority: Major > Attachments: image-2024-10-16-13-00-36-667.png > > > Why do we always create an admin client with the same > "connector-dlq-adminclient-" value id? > [https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java#L1008] > For all other cases, a postfix is added. > !image-2024-10-16-13-00-36-667.png! > I get "Error registering AppInfo mbean > javax.management.InstanceAlreadyExistsException: > kafka.admin.client:type=app-info,id=connector-dlq-adminclient-." error for > all tasks. > It looks like the ConnectorTaskId should be added. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (KAFKA-7079) ValueTransformerWithKeySupplier is not mentioned in the documentation
[ https://issues.apache.org/jira/browse/KAFKA-7079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matthias J. Sax resolved KAFKA-7079. Resolution: Won't Fix > ValueTransformerWithKeySupplier is not mentioned in the documentation > - > > Key: KAFKA-7079 > URL: https://issues.apache.org/jira/browse/KAFKA-7079 > Project: Kafka > Issue Type: Bug > Components: streams >Affects Versions: 1.1.0 > Environment: Fedora 27 >Reporter: Hashan Gayasri Udugahapattuwa >Priority: Major > > ValueTransformer#transform does not pass the key > KStream#transformValues(ValueTransformerWithKeySupplier . method is not > documented. It might lead to people to use workarounds or fall back to using > Transformer. This is very likely if the user is using a wrapper API (i.e: for > Scala) as the user would be checking the documentation more than the > available API functions in code. > > > > > > *Original issue (as it might be useful as a business requirement)* > ValueTransformers' transform method doesn't pass the key to user-code. > Reporting this as a bug since it currently requires workarounds. > Context: > I'm currently in the process of converting two stateful "*aggregate*" DSL > operations to the Processor API since the state of those operations are > relatively large and takes 99% + of CPU time (when profiled) for serializing > and deserializing them via Kryo. > Since DSL aggregations use state stores of [Bytes, Array[Byte]]] even when > using the in-memory state store, it seems like the only way to reduce the > serialization/deserialization overhead is to convert heavy aggregates to > *transform*s. > In my case, *ValueTransformer* seems to be the option. However, since > ValueTransformers' _transform_ method only exposes the _value_, I'd either > have to pre-process and add the key to the value or use *Transformer* instead > (which is not my intent). > > As internal _*InternalValueTransformerWithKey*_ already has the readOnlyKey, > it seems like a good idea to pass the key to the transform method as well, > esp since in a stateful transformation, generally the state store has to be > queried by the key. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] MINOR: Rewrite unchecked operations in Mock API [kafka]
clarkwtc commented on code in PR #19071: URL: https://github.com/apache/kafka/pull/19071#discussion_r1986517381 ## streams/src/test/java/org/apache/kafka/streams/kstream/internals/KTableImplTest.java: ## @@ -587,19 +577,16 @@ public void shouldThrowNullPointerOnTransformValuesWithKeyWhenTransformerSupplie assertThrows(NullPointerException.class, () -> table.transformValues(null)); } -@SuppressWarnings("unchecked") @Test public void shouldThrowNullPointerOnTransformValuesWithKeyWhenMaterializedIsNull() { -final ValueTransformerWithKeySupplier valueTransformerSupplier = -mock(ValueTransformerWithKeySupplier.class); +final ValueTransformerWithKeySupplier valueTransformerSupplier = mock(); assertThrows(NullPointerException.class, () -> table.transformValues(valueTransformerSupplier, (Materialized>) null)); } -@SuppressWarnings("unchecked") + @Test public void shouldThrowNullPointerOnTransformValuesWithKeyWhenStoreNamesNull() { -final ValueTransformerWithKeySupplier valueTransformerSupplier = -mock(ValueTransformerWithKeySupplier.class); +final ValueTransformerWithKeySupplier valueTransformerSupplier = mock(); assertThrows(NullPointerException.class, () -> table.transformValues(valueTransformerSupplier, (String[]) null)); Review Comment: Thank you for your comments. I fixed it. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] MINOR: Rewrite unchecked operations in Mock API [kafka]
clarkwtc commented on code in PR #19071: URL: https://github.com/apache/kafka/pull/19071#discussion_r1986517609 ## streams/src/test/java/org/apache/kafka/streams/kstream/internals/KTableImplTest.java: ## @@ -587,19 +577,16 @@ public void shouldThrowNullPointerOnTransformValuesWithKeyWhenTransformerSupplie assertThrows(NullPointerException.class, () -> table.transformValues(null)); } -@SuppressWarnings("unchecked") @Test public void shouldThrowNullPointerOnTransformValuesWithKeyWhenMaterializedIsNull() { -final ValueTransformerWithKeySupplier valueTransformerSupplier = -mock(ValueTransformerWithKeySupplier.class); +final ValueTransformerWithKeySupplier valueTransformerSupplier = mock(); assertThrows(NullPointerException.class, () -> table.transformValues(valueTransformerSupplier, (Materialized>) null)); Review Comment: I fixed it. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] MINOR: Rewrite unchecked operations in Mock API [kafka]
clarkwtc commented on code in PR #19071: URL: https://github.com/apache/kafka/pull/19071#discussion_r1986517982 ## streams/src/test/java/org/apache/kafka/streams/kstream/internals/KTableImplTest.java: ## @@ -391,24 +386,20 @@ public void testStateStore() { public void shouldNotEnableSendingOldValuesIfNotMaterializedAlreadyAndNotForcedToMaterialize() { final StreamsBuilder builder = new StreamsBuilder(); -final KTableImpl table = -(KTableImpl) builder.table("topic1", consumed); +final var kTable = assertInstanceOf(KTableImpl.class, builder.table("topic1", consumed)); +kTable.enableSendingOldValues(false); -table.enableSendingOldValues(false); - -assertThat(table.sendingOldValueEnabled(), is(false)); +assertFalse(kTable.sendingOldValueEnabled()); } @Test public void shouldEnableSendingOldValuesIfNotMaterializedAlreadyButForcedToMaterialize() { final StreamsBuilder builder = new StreamsBuilder(); -final KTableImpl table = -(KTableImpl) builder.table("topic1", consumed); - -table.enableSendingOldValues(true); +final var kTable = assertInstanceOf(KTableImpl.class, builder.table("topic1", consumed)); +kTable.enableSendingOldValues(true); -assertThat(table.sendingOldValueEnabled(), is(true)); +assertTrue(kTable.sendingOldValueEnabled()); Review Comment: Sure, I don't need to rename this variable. I fixed it. ## streams/src/test/java/org/apache/kafka/streams/kstream/internals/KTableImplTest.java: ## @@ -391,24 +386,20 @@ public void testStateStore() { public void shouldNotEnableSendingOldValuesIfNotMaterializedAlreadyAndNotForcedToMaterialize() { final StreamsBuilder builder = new StreamsBuilder(); -final KTableImpl table = -(KTableImpl) builder.table("topic1", consumed); +final var kTable = assertInstanceOf(KTableImpl.class, builder.table("topic1", consumed)); +kTable.enableSendingOldValues(false); -table.enableSendingOldValues(false); - -assertThat(table.sendingOldValueEnabled(), is(false)); +assertFalse(kTable.sendingOldValueEnabled()); Review Comment: I fixed it. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
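For readers following the `mock(...)` changes in this thread: the pattern relies on Mockito's reified `mock()` overload (added in Mockito 4.10), which infers the generic type from the assignment or return target and so removes the need for the unchecked cast. A small standalone illustration, assuming a recent Mockito on the classpath and not taken from the PR:

```java
import static org.mockito.Mockito.mock;

import java.util.function.Supplier;

public class MockInferenceSketch {

    @SuppressWarnings("unchecked")
    static Supplier<String> oldStyle() {
        // mock(Class) yields a raw Supplier, so the generic cast is unchecked
        return (Supplier<String>) mock(Supplier.class);
    }

    static Supplier<String> newStyle() {
        // the reified overload infers Supplier<String> from the return type; no cast, no warning
        return mock();
    }
}
```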
Re: [PR] Fix infinite loop and standardize options in MetadataSchemaCheckerTool [kafka]
chia7712 commented on PR #19165: URL: https://github.com/apache/kafka/pull/19165#issuecomment-2709273480 @ahuang98 is there an existing Jira? If not, could you please add "MINOR:" to the title? Thanks! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-18932: Removed usage of partition max bytes from share fetch requests on the broker [kafka]
adixitconfluent commented on code in PR #19148: URL: https://github.com/apache/kafka/pull/19148#discussion_r1986588598 ## server/src/main/java/org/apache/kafka/server/share/session/ShareSession.java: ## @@ -110,25 +109,20 @@ public synchronized LastUsedKey lastUsedKey() { return new LastUsedKey(key, lastUsedMs); } -// Visible for testing -public synchronized long creationMs() { -return creationMs; -} Review Comment: yes -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] MINOR: improve upgrade to v4.0.0 doc [kafka]
showuon commented on PR #19172: URL: https://github.com/apache/kafka/pull/19172#issuecomment-2709433524 @dajac , I think we should merge this PR to v4.0.0, though it's not a blocker for the RC3. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] MINOR: Improve grammar and clarity in upgrade.html [kafka]
chia7712 merged PR #19141: URL: https://github.com/apache/kafka/pull/19141 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-18700: Migrate SnapshotPath and Entry in LogHistory to record classes [kafka]
chia7712 commented on PR #19062: URL: https://github.com/apache/kafka/pull/19062#issuecomment-2709553640 > For the classes without hashCode/equals, we are verifying that it's ok to change the implementation of hashCode and equals? Regarding `LogAppendInfo`, `LogFetchInfo`, and `SnapshotPath`: Currently, their `hashCode/equals` methods are not utilized in production code. A potential concern is `LogFetchInfo`'s mutable `Records` field, which is generally discouraged. @ijuma, what are your thoughts? Should we revert the LogFetchInfo changes? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
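As general background for the concern about a mutable field in a record class (a generic Java illustration, not Kafka's `LogFetchInfo`): a record's generated `equals`/`hashCode` track the current state of its components, which makes hash-based lookups fragile once a component changes.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class MutableRecordSketch {

    // equals/hashCode are derived from the list's contents
    record Batch(List<String> records) { }

    public static void main(String[] args) {
        List<String> backing = new ArrayList<>(List.of("a"));
        Batch batch = new Batch(backing);

        Set<Batch> seen = new HashSet<>();
        seen.add(batch);

        backing.add("b"); // mutate after the record was hashed into the set
        System.out.println(seen.contains(batch)); // false: the stored hash bucket no longer matches
    }
}
```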
Re: [PR] MINOR: improve upgrade to v4.0.0 doc [kafka]
chia7712 commented on code in PR #19172: URL: https://github.com/apache/kafka/pull/19172#discussion_r1986684323 ## docs/upgrade.html: ## @@ -31,6 +31,12 @@ Notable changes in 4 Upgrading to 4.0.0 from any version 3.3.x through 3.9.x +Note: Apache Kafka 4.0 only supports KRaft mode - ZooKeeper mode has been removed. As such, broker upgrades to 4.0.x (and higher) require KRaft mode and Review Comment: should we remove the duplicate section from `Notable changes in 4.0.0`?  -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-18933 Extract an interface from ClusterInstance [kafka]
ijuma commented on PR #18697: URL: https://github.com/apache/kafka/pull/18697#issuecomment-2709277315 @mumrah A bit hard to comment without understanding the goal. Is the goal to move client integration tests outside of `core`? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-18276 Migrate ProducerRebootstrapTest to new test infra [kafka]
ijuma commented on PR #19046: URL: https://github.com/apache/kafka/pull/19046#issuecomment-2709290586 We should probably create the clients integration test module before doing these conversions so that the Java classes end up in a Java only module. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-18933 Add client integration tests module [kafka]
ijuma commented on code in PR #19144: URL: https://github.com/apache/kafka/pull/19144#discussion_r1986524784 ## build.gradle: ## @@ -1531,15 +1531,15 @@ project(':test-common:test-common-runtime') { } dependencies { -implementation project(':test-common:test-common-internal-api') -implementation project(':clients') -implementation project(':core') -implementation project(':group-coordinator') -implementation project(':metadata') -implementation project(':raft') -implementation project(':server') -implementation project(':server-common') -implementation project(':storage') +api project(':core') Review Comment: Is that a good idea though? Generally speaking, we should only declare `api` if the public classes in this module accept/return apis from the relevant module. Also, any module that uses classes from a given module should declare an explicit dependency on that module (versus relying on transitive dependencies). -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-18379: Enforce resigned cannot transition to any other state in same epoch [kafka]
github-actions[bot] commented on PR #18789: URL: https://github.com/apache/kafka/pull/18789#issuecomment-2709316719 A label of 'needs-attention' was automatically added to this PR in order to raise the attention of the committers. Once this issue has been triaged, the `triage` label should be removed to prevent this automation from happening again. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] MINOR: Improve grammar and clarity in upgrade.html [kafka]
Yunyung commented on code in PR #19141: URL: https://github.com/apache/kafka/pull/19141#discussion_r1986550265 ## docs/upgrade.html: ## @@ -385,21 +385,21 @@ Notable changes in 4 See https://cwiki.apache.org/confluence/x/jA3OEg";>KIP-1074 for more details. -KIP-714 is now enabled for Kafka Streams via https://cwiki.apache.org/confluence/x/XA-OEg";>KIP-1076. +https://cwiki.apache.org/confluence/x/2xRRCg";>KIP-714 is now enabled for Kafka Streams via https://cwiki.apache.org/confluence/x/XA-OEg";>KIP-1076. This allows to not only collect the metric of the internally used clients of a Kafka Streams application via a broker-side plugin, but also to collect the metrics of the Kafka Streams runtime itself. -The default value of 'num.recovery.threads.per.data.dir' has been changed from 1 to 2. The impact of this is faster +The default value of num.recovery.threads.per.data.dir has been changed from 1 to 2. The impact of this is faster recovery post unclean shutdown at the expense of extra IO cycles. -See https://cwiki.apache.org/confluence/x/FAqpEQ";>KIP-1030 +See https://cwiki.apache.org/confluence/x/FAqpEQ";>KIP-1030 -The default value of 'message.timestamp.after.max.ms' has been changed from Long.Max to 1 hour. The impact of this messages with a +The default value of message.timestamp.after.max.ms has been changed from Long.Max to 1 hour. The impact of this messages with a timestamp of more than 1 hour in the future will be rejected when message.timestamp.type=CreateTime is set. -See https://cwiki.apache.org/confluence/x/FAqpEQ";>KIP-1030 +See https://cwiki.apache.org/confluence/x/FAqpEQ";>KIP-1030 -Introduced in KIP-890, the TransactionAbortableException enhances error handling within transactional operations by clearly indicating scenarios where transactions should be aborted due to errors. It is important for applications to properly manage both TimeoutException and TransactionAbortableException when working with transaction producers. +Introduced in https://cwiki.apache.org/confluence/display/KAFKA/KIP-890%3A+Transactions+Server-Side+Defense";>KIP-890, the TransactionAbortableException enhances error handling within transactional operations by clearly indicating scenarios where transactions should be aborted due to errors. It is important for applications to properly manage both TimeoutException and TransactionAbortableException when working with transaction producers. Review Comment: ditto. Please change the link on line 70 to https://cwiki.apache.org/confluence/x/HhD1D -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] MINOR: improve upgrade to v4.0.0 doc [kafka]
showuon commented on code in PR #19172: URL: https://github.com/apache/kafka/pull/19172#discussion_r1986687663 ## docs/upgrade.html: ## @@ -31,6 +31,12 @@ Notable changes in 4 Upgrading to 4.0.0 from any version 3.3.x through 3.9.x +Note: Apache Kafka 4.0 only supports KRaft mode - ZooKeeper mode has been removed. As such, broker upgrades to 4.0.x (and higher) require KRaft mode and Review Comment: I don't know, to be honest, because that's also a notable change in v4.0.0... Maybe we can hear the 3rd opinion from the community. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] MINOR Improve PR linter output [kafka]
chia7712 commented on code in PR #19159: URL: https://github.com/apache/kafka/pull/19159#discussion_r1986686393 ## .github/workflows/README.md: ## @@ -122,6 +122,15 @@ structure of the PR Note that the pr-reviewed.yml workflow uses the `ci-approved` mechanism described above. +The following checks are performed on our PRs: +* Title is not too short or too long +* Title starts with "KAFKA-", "MINOR-", or "HOTFIX-" Review Comment: "MINOR", or "HOTFIX" ## .github/scripts/pr-format.py: ## @@ -105,50 +112,55 @@ def parse_trailers(title, body) -> Dict: body = gh_json["body"] reviews = gh_json["reviews"] -warnings = [] -errors = [] - -# Check title -if title.endswith("..."): -errors.append("Title appears truncated") - -if len(title) < 15: -errors.append("Title is too short") +checks = [] # (bool (0=ok, 1=error), message) -if len(title) > 120: -errors.append("Title is too long") +def check(positive_assertion, ok_msg, err_msg): +if positive_assertion: +checks.append((0, f"{ok} {ok_msg}")) +else: +checks.append((1, f"{err} {err_msg}")) -if not title.startswith("KAFKA-") and not title.startswith("MINOR") and not title.startswith("HOTFIX"): -errors.append("Title is missing KAFKA-X or MINOR/HOTFIX prefix") +# Check title +check(not title.endswith("..."), "Title is not truncated", "Title appears truncated") +check(len(title) >= 15, "Title is not too short", "Title is too short") +check(len(title) < 120, "Title is not too long", "Title is too long") Review Comment: While we could debate this point indefinitely and then never talking to each other, establishing a title length limit is a constructive approach :) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] MINOR: improve upgrade to v4.0.0 doc [kafka]
chia7712 commented on code in PR #19172: URL: https://github.com/apache/kafka/pull/19172#discussion_r1986702996 ## docs/upgrade.html: ## @@ -31,6 +31,12 @@ Notable changes in 4 Upgrading to 4.0.0 from any version 3.3.x through 3.9.x +Note: Apache Kafka 4.0 only supports KRaft mode - ZooKeeper mode has been removed. As such, broker upgrades to 4.0.x (and higher) require KRaft mode and Review Comment: Okay, I just find it odd that there are two identical descriptions. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (KAFKA-18951) Validate client code and documentation in examples/ directory
Kirk True created KAFKA-18951: - Summary: Validate client code and documentation in examples/ directory Key: KAFKA-18951 URL: https://issues.apache.org/jira/browse/KAFKA-18951 Project: Kafka Issue Type: Improvement Components: clients, consumer, producer Affects Versions: 4.0.0 Reporter: Kirk True Assignee: Kirk True The examples/ directory contains example client code that might be a bit neglected. This task is to check for and fix any obvious errors in documentation or out-of-date code. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (KAFKA-18951) Validate client code and documentation in examples/ directory
[ https://issues.apache.org/jira/browse/KAFKA-18951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirk True updated KAFKA-18951: -- Description: The {{examples/}} directory contains example client code that might be a bit neglected. This task is to check for and fix any obvious errors in documentation or out-of-date code. (was: The examples/ directory contains example client code that might be a bit neglected. This task is to check for and fix any obvious errors in documentation or out-of-date code.) > Validate client code and documentation in examples/ directory > - > > Key: KAFKA-18951 > URL: https://issues.apache.org/jira/browse/KAFKA-18951 > Project: Kafka > Issue Type: Improvement > Components: clients, consumer, producer >Affects Versions: 4.0.0 >Reporter: Kirk True >Assignee: Kirk True >Priority: Minor > > The {{examples/}} directory contains example client code that might be a bit > neglected. This task is to check for and fix any obvious errors in > documentation or out-of-date code. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] KAFKA-10863: Convert ControRecordType schema to use auto-generated protocol [kafka]
dengziming commented on code in PR #19110: URL: https://github.com/apache/kafka/pull/19110#discussion_r1986514556 ## clients/src/main/resources/common/message/ControlRecordType.json: ## @@ -0,0 +1,26 @@ +// Licensed to the Apache Software Foundation (ASF) under one or more +// contributor license agreements. See the NOTICE file distributed with +// this work for additional information regarding copyright ownership. +// The ASF licenses this file to You under the Apache License, Version 2.0 +// (the "License"); you may not use this file except in compliance with +// the License. You may obtain a copy of the License at +// +//http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +{ + "type": "data", + "name": "ControlRecordTypeSchema", Review Comment: I initially used the name `ControlRecordType`, but since we already have a `enum ControlRecordType` we can't import it in this `enum ControlRecordType` and should use full name. And I'm still working on this since there is something wrong with it. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (KAFKA-7302) Remove Java7 examples from Streams Docs
[ https://issues.apache.org/jira/browse/KAFKA-7302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matthias J. Sax resolved KAFKA-7302. Fix Version/s: 4.0.0 Resolution: Fixed I believe we did do this cleanup during 4.0.0 release. > Remove Java7 examples from Streams Docs > --- > > Key: KAFKA-7302 > URL: https://issues.apache.org/jira/browse/KAFKA-7302 > Project: Kafka > Issue Type: Improvement > Components: documentation, streams >Affects Versions: 2.0.0 >Reporter: Matthias J. Sax >Priority: Minor > Fix For: 4.0.0 > > > In 2.0 release, Java7 support was dropped. We might consider removing Java7 > examples from the Streams docs. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] MINOR: Improve grammar and clarity in upgrade.html [kafka]
chia7712 commented on PR #19141: URL: https://github.com/apache/kafka/pull/19141#issuecomment-2708923054 @MahsaSeifikar could you please fix the conflicts? thanks! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (KAFKA-8871) Allow timestamp manipulation in ValueTransformerWithKey
[ https://issues.apache.org/jira/browse/KAFKA-8871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matthias J. Sax resolved KAFKA-8871. Resolution: Won't Fix With the new type-safe PAPI (v2) `ValueTransformerWithKey` is going to be deprecated, so this ticket does not apply any longer. > Allow timestamp manipulation in ValueTransformerWithKey > --- > > Key: KAFKA-8871 > URL: https://issues.apache.org/jira/browse/KAFKA-8871 > Project: Kafka > Issue Type: Improvement > Components: streams >Reporter: Levani Kokhreidze >Priority: Minor > Labels: needs-kip > > h3. Motivation > When using `KStream#transform` in Kafka Streams DSL to manipulate the > timestamp, `KStreamImpl#transform` implementation marks *repartitionRequired* > as *true,* which isn't necessarily okay when one may just want to manipulate > with timestamp without affecting the key. It would be great if DSL user could > manipulate the timestamp in `ValueTransformerWithKey`. -- This message was sent by Atlassian Jira (v8.20.10#820010)