[jira] [Created] (KAFKA-18432) Remove AutoTopicCreationManager

2025-01-07 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-18432:
--

 Summary: Remove AutoTopicCreationManager
 Key: KAFKA-18432
 URL: https://issues.apache.org/jira/browse/KAFKA-18432
 Project: Kafka
  Issue Type: Sub-task
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


as title



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] KAFKA-10790: Add deadlock detection to producer#flush [kafka]

2025-01-07 Thread via GitHub


frankvicky commented on PR #17946:
URL: https://github.com/apache/kafka/pull/17946#issuecomment-2575260003

   Hi @AndrewJSchofield @chia7712 
   Since the KIP has been accepted, I think we could merge this one. 
   WDYT?
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-17544: Fix for loading big files while performing load tests [kafka]

2025-01-07 Thread via GitHub


m1a2st commented on code in PR #18391:
URL: https://github.com/apache/kafka/pull/18391#discussion_r1905417391


##
tools/src/main/java/org/apache/kafka/tools/ProducerPerformance.java:
##
@@ -194,9 +195,16 @@ static List readPayloadFile(String payloadFilePath, String payloadDelimiter)
         throw new IllegalArgumentException("File does not exist or empty file provided.");
     }

-    String[] payloadList = Files.readString(path).split(payloadDelimiter);
+    List<String> payloadList = new ArrayList<>();
+    try (Scanner payLoadScanner = new Scanner(path, StandardCharsets.UTF_8)) {
+        // setting the delimiter while parsing the file avoids loading the entire data in memory before splitting
+        payLoadScanner.useDelimiter(payloadDelimiter);
+        while (payLoadScanner.hasNext()) {
+            payloadList.add(payLoadScanner.next());

Review Comment:
   I still have a question about this line: I think `payLoadScanner.next()` will have the same problem when a single string object is very big. WDYT?
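   For context, a minimal standalone sketch of the Scanner-with-delimiter pattern under discussion (class name and sample data are illustrative): it streams tokens lazily, but `next()` still materializes each token as one `String`, which is the concern raised above.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class ScannerDelimiterSketch {
    public static void main(String[] args) {
        byte[] data = "payload-1,payload-2,payload-3".getBytes(StandardCharsets.UTF_8);
        try (Scanner scanner = new Scanner(new ByteArrayInputStream(data), StandardCharsets.UTF_8)) {
            // split lazily on the delimiter instead of reading the whole input up front
            scanner.useDelimiter(",");
            while (scanner.hasNext()) {
                // next() still builds the full token in memory, so one huge
                // payload between delimiters is still loaded as a single String
                System.out.println(scanner.next());
            }
        }
    }
}
```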






Re: [PR] KAFKA-18368 Remove TestUtils#MockZkConnect and remove zkConnect from TestUtils#createBrokerConfig [kafka]

2025-01-07 Thread via GitHub


chia7712 merged PR #18352:
URL: https://github.com/apache/kafka/pull/18352





Re: [PR] MINOR: Remove RaftManager.maybeDeleteMetadataLogDir and AutoTopicCreationManagerTest.scala [kafka]

2025-01-07 Thread via GitHub


chia7712 merged PR #17365:
URL: https://github.com/apache/kafka/pull/17365





Re: [PR] KAFKA-15057: Use new interface from zstd-jni [kafka]

2025-01-07 Thread via GitHub


mimaison commented on PR #13814:
URL: https://github.com/apache/kafka/pull/13814#issuecomment-2575469133

   This seemed like a potentially significant performance improvement. Any specific reason you're abandoning this change? If it's due to lack of time, we may want to see if anybody else is interested in looking into this.





[PR] KAFKA-18415: Fix for event queue metric and flaky test [kafka]

2025-01-07 Thread via GitHub


lianetm opened a new pull request, #18416:
URL: https://github.com/apache/kafka/pull/18416

   Fix for a flaky unit test. The test used to fail when asserting the queue size metric after adding an event, because of a race condition (the background thread could have set the metric back to 0 when processing all events).
   
   Fix the logic to ensure we always record the incremented queue size. Also fix the test to just verify that the update happened, instead of checking the metric's end value; a sketch of that idea is shown below.
   
   The test passes consistently with this fix (locally).
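   A hedged sketch of verifying that the metric update happened rather than asserting the racy end value (the recorder interface and method names are illustrative, not the actual consumer internals):

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueSizeMetricSketch {
    // hypothetical recorder standing in for the consumer metrics object
    interface QueueMetrics {
        void recordQueueSize(int size);
    }

    public static void main(String[] args) {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        QueueMetrics metrics = mock(QueueMetrics.class);

        // production-side pattern: record the size observed at enqueue time
        queue.add("event");
        metrics.recordQueueSize(queue.size());

        // test-side pattern: assert the recording happened with the incremented
        // size, instead of asserting the gauge's end value (a background thread
        // may have drained the queue back to 0 by then)
        verify(metrics).recordQueueSize(1);
    }
}
```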





Re: [PR] KAFKA-17607: Add CI step to verify LICENSE-binary [kafka]

2025-01-07 Thread via GitHub


chia7712 commented on code in PR #18299:
URL: https://github.com/apache/kafka/pull/18299#discussion_r1905584608


##
build.gradle:
##
@@ -1326,6 +1326,42 @@ project(':core') {
     from(project(':tools:tools-api').jar) { into("libs/") }
     from(project(':tools:tools-api').configurations.runtimeClasspath) { into("libs/") }
     duplicatesStrategy 'exclude'
+
+    doLast {
+      def prefixPath = "$rootDir/core/build/distributions/"
+      def bathPath = prefixPath + "kafka_${versions.baseScala}-${project.properties['version']}"
+      def tgzPath = bathPath + '.tgz'
+      exec {
+        commandLine 'tar', 'xzf', tgzPath, '-C', prefixPath
+      }
+
+      def licenseFile = file(bathPath + '/LICENSE')
+      def libsDir = file(bathPath + '/libs')
+      def result = true
+      libsDir.eachFile { file ->
+        if (file.name.endsWith('.jar')
+            && !file.name.startsWith('kafka')
+            && !file.name.startsWith('connect')
+            && !file.name.startsWith('trogdor')) {
+
+          def isLicensePresent = false
+          licenseFile.eachLine { line ->
+            if (line.contains(file.name - '.jar')) {
+              isLicensePresent = true
+            }
+          }
+          if (!isLicensePresent) {
+            println "warning: ${file.name} is missing in the license file"
+            result = false
+          }
+        }
+      }
+
+      if (result)
+        println "LIBS LICENSE check succeed !"
+      else
+        println "LIBS LICENSE check failed !"

Review Comment:
   should we mark the task as failed, to ensure developers have to fix it? A minimal sketch of one way follows.
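   For instance (a sketch only, assuming the check runs inside the `doLast` block above; `GradleException` is Gradle's standard way to fail a task):

```groovy
// sketch: fail the task instead of just printing, so the build goes red
if (!result) {
    throw new GradleException("LIBS LICENSE check failed: some bundled jars are missing from the LICENSE file")
}
```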






Re: [PR] KAFKA-17921: Support SASL_PLAINTEXT protocol with java.security.auth.login.config [kafka]

2025-01-07 Thread via GitHub


chia7712 commented on code in PR #17671:
URL: https://github.com/apache/kafka/pull/17671#discussion_r1905589260


##
test-common/src/main/java/org/apache/kafka/common/test/KafkaClusterTestKit.java:
##
@@ -176,6 +179,15 @@ private KafkaConfig createNodeConfig(TestKitNode node) throws IOException {
     // reduce log cleaner offset map memory usage
     props.putIfAbsent(CleanerConfig.LOG_CLEANER_DEDUPE_BUFFER_SIZE_PROP, "2097152");

+    if (brokerSecurityProtocol.equals(SecurityProtocol.SASL_PLAINTEXT.name)) {

Review Comment:
   it seems we set security for both the controller and the broker based on `brokerSecurityProtocol`, right? If so, that is weird to me, as we totally ignore `controllerSecurityProtocol` ...






[jira] [Created] (KAFKA-18434) enrich the authorization error message of connecting to controller

2025-01-07 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-18434:
--

 Summary: enrich the authorization error message of connecting to 
controller
 Key: KAFKA-18434
 URL: https://issues.apache.org/jira/browse/KAFKA-18434
 Project: Kafka
  Issue Type: Improvement
Reporter: Chia-Ping Tsai
Assignee: PoAn Yang


DESCRIBE_CLUSTER requests need ALTER permission when connecting to the controller. 
However, the error message does not include the reason. There is a good 
explanation in the original PR 
(https://github.com/apache/kafka/pull/14306#discussion_r1312367762) and we 
should add it to the exception message.





Re: [PR] KAFKA-15599: Move SegmentPosition/TimingWheelExpirationService to raf… [kafka]

2025-01-07 Thread via GitHub


mimaison commented on code in PR #18094:
URL: https://github.com/apache/kafka/pull/18094#discussion_r1905622701


##
raft/src/main/java/org/apache/kafka/raft/TimingWheelExpirationService.java:
##
@@ -0,0 +1,79 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.raft;
+
+import org.apache.kafka.common.errors.TimeoutException;
+import org.apache.kafka.server.util.ShutdownableThread;
+import org.apache.kafka.server.util.timer.Timer;
+import org.apache.kafka.server.util.timer.TimerTask;
+
+import java.util.concurrent.CompletableFuture;
+
+public class TimingWheelExpirationService implements ExpirationService {
+
+    private static final long WORK_TIMEOUT_MS = 200L;
+
+    private final ExpiredOperationReaper expirationReaper = new ExpiredOperationReaper();
+    private final Timer timer;
+
+    public TimingWheelExpirationService(Timer timer) {
+        this.timer = timer;
+        expirationReaper.start();
+    }
+
+    @Override
+    public <T> CompletableFuture<T> failAfter(long timeoutMs) {
+        TimerTaskCompletableFuture<T> task = new TimerTaskCompletableFuture<>(timeoutMs);

Review Comment:
   Thanks @divijvaidya for the details, I had missed this change. It's unclear 
if this would have an impact but let's just convert the existing code for now. 
I'll revert this bit.






[jira] [Resolved] (KAFKA-18380) Remove KafkaApisTest `raftSupport` and change test to Kraft mode

2025-01-07 Thread Jira


 [ 
https://issues.apache.org/jira/browse/KAFKA-18380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

黃竣陽 resolved KAFKA-18380.
-
Resolution: Duplicate

> Remove KafkaApisTest `raftSupport` and change test to Kraft mode
> 
>
> Key: KAFKA-18380
> URL: https://issues.apache.org/jira/browse/KAFKA-18380
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: 黃竣陽
>Assignee: 黃竣陽
>Priority: Major
>
> https://github.com/apache/kafka/pull/18352#pullrequestreview-2525842712





[jira] [Commented] (KAFKA-18380) Remove KafkaApisTest `raftSupport` and change test to Kraft mode

2025-01-07 Thread Jira


[ 
https://issues.apache.org/jira/browse/KAFKA-18380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17910662#comment-17910662
 ] 

黃竣陽 commented on KAFKA-18380:
-

This Jira will be merged into https://issues.apache.org/jira/browse/KAFKA-18399, 
so I will close it.

> Remove KafkaApisTest `raftSupport` and change test to Kraft mode
> 
>
> Key: KAFKA-18380
> URL: https://issues.apache.org/jira/browse/KAFKA-18380
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: 黃竣陽
>Assignee: 黃竣陽
>Priority: Major
>
> https://github.com/apache/kafka/pull/18352#pullrequestreview-2525842712





Re: [PR] KAFKA-15369: Implement KIP-919: Allow AC to Talk Directly with Controllers [kafka]

2025-01-07 Thread via GitHub


chia7712 commented on code in PR #14306:
URL: https://github.com/apache/kafka/pull/14306#discussion_r1905622310


##
core/src/main/scala/kafka/server/ControllerApis.scala:
##
@@ -1005,4 +1023,20 @@ class ControllerApis(val requestChannel: RequestChannel,
      }
    }
  }
+
+  def handleDescribeCluster(request: RequestChannel.Request): CompletableFuture[Unit] = {
+    // Unlike on the broker, DESCRIBE_CLUSTER on the controller requires a high level of
+    // permissions (ALTER on CLUSTER).

Review Comment:
   Thanks for this excellent explanation, @cmccabe! I've created JIRA issue 
https://issues.apache.org/jira/browse/KAFKA-18434 to incorporate this 
explanation into the error message. This will help users understand the 
permission distinctions when using controllers as bootstrap servers.
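   As a hedged illustration of that distinction (a sketch using the Admin client's KIP-919 `bootstrap.controllers` support; the address is a placeholder): the same `describeCluster()` call that needs only DESCRIBE against brokers fails against a controller unless the principal has ALTER on CLUSTER.

```java
import java.util.Properties;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class DescribeClusterViaController {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        Properties props = new Properties();
        // point the Admin client at the controller quorum (KIP-919); address is a placeholder
        props.put(AdminClientConfig.BOOTSTRAP_CONTROLLERS_CONFIG, "localhost:9093");
        try (Admin admin = Admin.create(props)) {
            // on a controller endpoint this requires ALTER on CLUSTER, unlike
            // brokers where DESCRIBE suffices; without it the future fails
            // with an authorization error
            System.out.println(admin.describeCluster().clusterId().get());
        }
    }
}
```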






[PR] MINOR: Remove Sanitizer document with zookeeper description [kafka]

2025-01-07 Thread via GitHub


m1a2st opened a new pull request, #18420:
URL: https://github.com/apache/kafka/pull/18420

   As we deprecate ZooKeeper in Kafka 4.0, we should clean up the documentation 
that still mentions ZooKeeper.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[jira] [Created] (KAFKA-18435) Remove zookeeper dependencies in build.gradle

2025-01-07 Thread Jira
黃竣陽 created KAFKA-18435:
---

 Summary: Remove zookeeper dependencies in build.gradle
 Key: KAFKA-18435
 URL: https://issues.apache.org/jira/browse/KAFKA-18435
 Project: Kafka
  Issue Type: Improvement
Reporter: 黃竣陽
Assignee: 黃竣陽


After we clean up all the ZooKeeper classes, we can remove the ZooKeeper 
dependencies from the Kafka project.





[jira] [Updated] (KAFKA-18435) Remove zookeeper dependencies in build.gradle

2025-01-07 Thread Jira


 [ 
https://issues.apache.org/jira/browse/KAFKA-18435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

黃竣陽 updated KAFKA-18435:

Parent: KAFKA-17611
Issue Type: Sub-task  (was: Improvement)

> Remove zookeeper dependencies in build.gradle
> -
>
> Key: KAFKA-18435
> URL: https://issues.apache.org/jira/browse/KAFKA-18435
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: 黃竣陽
>Assignee: 黃竣陽
>Priority: Major
>
> After we clean up all the ZooKeeper classes, we can remove the ZooKeeper 
> dependencies from the Kafka project.





Re: [PR] KAFKA-13093: Log compaction should write new segments with record version v2 (KIP-724) [kafka]

2025-01-07 Thread via GitHub


junrao commented on code in PR #18321:
URL: https://github.com/apache/kafka/pull/18321#discussion_r1905814075


##
core/src/main/scala/kafka/log/UnifiedLog.scala:
##
@@ -509,8 +509,8 @@ class UnifiedLog(@volatile var logStartOffset: Long,
   }

   private def initializeLeaderEpochCache(): Unit = lock synchronized {
-    leaderEpochCache = UnifiedLog.maybeCreateLeaderEpochCache(
-      dir, topicPartition, logDirFailureChannel, logIdent, leaderEpochCache, scheduler)
+    leaderEpochCache = Some(UnifiedLog.createLeaderEpochCache(

Review Comment:
   We set `shouldReinitialize` to false when the replica is no longer needed, so we can just set `leaderEpochCache` to null in that case.






Re: [PR] KAFKA-18415: Fix for event queue metric and flaky test [kafka]

2025-01-07 Thread via GitHub


lianetm commented on PR #18416:
URL: https://github.com/apache/kafka/pull/18416#issuecomment-2575853893

   This doesn't affect the unit test in `BackgroundEventHandlerTest` in the 
same way, because the app thread does not automatically remove from the 
background event queue (it requires explicit calls to consumer.poll or 
unsubscribe). So the `BackgroundEventHandlerTest` can add events to its 
background queue and safely check it in isolation. Makes sense?
   
   Just to double check, that background unit test hasn't been flaky 
https://ge.apache.org/scans/tests?search.names=Git%20branch&search.relativeStartTime=P28D&search.rootProjectNames=kafka&search.timeZoneId=America%2FToronto&search.values=trunk&tests.container=org.apache.kafka.clients.consumer.internals.BackgroundEventHandlerTest&tests.test=testRecordBackgroundEventQueueSize()





[jira] [Updated] (KAFKA-18216) High water mark or last stable offset aren't always updated after a fetch request is completed

2025-01-07 Thread Lianet Magrans (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-18216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lianet Magrans updated KAFKA-18216:
---
Labels: consumer-threading-refactor  (was: )

> High water mark or last stable offset aren't always updated after a fetch 
> request is completed
> --
>
> Key: KAFKA-18216
> URL: https://issues.apache.org/jira/browse/KAFKA-18216
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, consumer
>Reporter: Philip Nee
>Assignee: TengYao Chi
>Priority: Minor
>  Labels: consumer-threading-refactor
>
> We've noticed that AsyncKafkaConsumer doesn't always update the high water 
> mark/LSO after handling a successful fetch response. The consumer lag metric 
> is calculated as HWM/LSO minus the current fetched position, so we suspect 
> this could have a subtle effect on how consumer lag is recorded, which might 
> have a slight impact on the accuracy of client metrics reporting.
> The consumer records consumer lag when reading the fetched records.
> The consumer updates the HWM/LSO when the background thread completes the 
> fetch request.
> In the original implementation, the fetcher consistently updated the HWM/LSO 
> after handling the completed fetch request.
> In the new implementation, due to the async threading model, we can't 
> guarantee the sequence of these events.
> This defect affects neither performance nor correctness and is therefore 
> marked as "Minor".
>  
> This can be easily reproduced using the java-produce-consumer-demo.sh 
> example. Ensure to produce enough records (I use 2 records, less is 
> fine as well). Custom logging is required.





[jira] [Commented] (KAFKA-18398) Log a warning if actual topic configs are inconsistent with the required topic configs

2025-01-07 Thread Lucas Brutschy (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-18398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17910552#comment-17910552
 ] 

Lucas Brutschy commented on KAFKA-18398:


Yes, we could make this slightly less annoying by distinguishing the invalid 
configs from non-default configs. Maybe it would still be good to output an 
INFO log for the non-default ones, as people may want to know about it. It 
also seems to be a "good" log message: it's perfectly clear and would only be 
emitted once.

So it seems to me the ticket is valid? This came out of a thread where the wrong 
`cleanup.policy` for repartition topics caused problems. So we could restrict 
this ticket to logging `cleanup.policy` at WARN and other configs at INFO, and 
then add more configs on demand? A sketch of that policy follows.
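A minimal sketch of the proposed WARN/INFO split (class and method names are illustrative, not the actual Streams internals):

```java
import java.util.Map;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TopicConfigChecker {
    private static final Logger log = LoggerFactory.getLogger(TopicConfigChecker.class);

    static void check(String topic, Map<String, String> actual, Map<String, String> required) {
        required.forEach((key, expected) -> {
            String value = actual.get(key);
            if (expected.equals(value)) {
                return; // consistent with the required config
            }
            if ("cleanup.policy".equals(key)) {
                // a wrong cleanup.policy is known to break repartition topics
                log.warn("Topic {} has {}={} but {} is required", topic, key, value, expected);
            } else {
                // merely non-default: users may still want to know, once
                log.info("Topic {} has non-default {}={} (required: {})", topic, key, value, expected);
            }
        });
    }

    public static void main(String[] args) {
        check("my-app-repartition",
              Map.of("cleanup.policy", "compact", "retention.ms", "123"),
              Map.of("cleanup.policy", "delete", "retention.ms", "-1"));
    }
}
```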

> Log a warning if actual topic configs are inconsistent with the required 
> topic configs
> --
>
> Key: KAFKA-18398
> URL: https://issues.apache.org/jira/browse/KAFKA-18398
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Lucas Brutschy
>Assignee: Alieh Saeedi
>Priority: Major
>






Re: [PR] MINOR: Fix typo in CommitRequestManager [kafka]

2025-01-07 Thread via GitHub


dajac merged PR #18407:
URL: https://github.com/apache/kafka/pull/18407





[jira] [Updated] (KAFKA-18427) Remove KafkaZkClient and ZooKeeperClient

2025-01-07 Thread Jira


 [ 
https://issues.apache.org/jira/browse/KAFKA-18427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

黃竣陽 updated KAFKA-18427:

Summary: Remove KafkaZkClient and ZooKeeperClient  (was: Remove 
KafkaZkClient)

> Remove KafkaZkClient and ZooKeeperClient
> 
>
> Key: KAFKA-18427
> URL: https://issues.apache.org/jira/browse/KAFKA-18427
> Project: Kafka
>  Issue Type: Improvement
>Reporter: 黃竣陽
>Assignee: 黃竣陽
>Priority: Major
>
> We could remove it since Kafka no longer relies on Zookeeper.





[jira] [Created] (KAFKA-18427) Remove KafkaZkClient

2025-01-07 Thread Jira
黃竣陽 created KAFKA-18427:
---

 Summary: Remove KafkaZkClient
 Key: KAFKA-18427
 URL: https://issues.apache.org/jira/browse/KAFKA-18427
 Project: Kafka
  Issue Type: Improvement
Reporter: 黃竣陽
Assignee: 黃竣陽


We could remove it since Kafka no longer relies on Zookeeper.





[jira] [Updated] (KAFKA-18429) Remove ZkFinalizedFeatureCache

2025-01-07 Thread Jira


 [ 
https://issues.apache.org/jira/browse/KAFKA-18429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

黃竣陽 updated KAFKA-18429:

Parent: KAFKA-17611
Issue Type: Sub-task  (was: Bug)

> Remove ZkFinalizedFeatureCache
> --
>
> Key: KAFKA-18429
> URL: https://issues.apache.org/jira/browse/KAFKA-18429
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: 黃竣陽
>Assignee: 黃竣陽
>Priority: Major
>
> We could remove it since Kafka no longer relies on Zookeeper.





[jira] [Created] (KAFKA-18429) Remove ZkFinalizedFeatureCache

2025-01-07 Thread Jira
黃竣陽 created KAFKA-18429:
---

 Summary: Remove ZkFinalizedFeatureCache
 Key: KAFKA-18429
 URL: https://issues.apache.org/jira/browse/KAFKA-18429
 Project: Kafka
  Issue Type: Bug
Reporter: 黃竣陽
Assignee: 黃竣陽


We could remove it since Kafka no longer relies on Zookeeper.





[jira] [Updated] (KAFKA-18430) Remove ZkNodeChangeNotificationListener

2025-01-07 Thread Jira


 [ 
https://issues.apache.org/jira/browse/KAFKA-18430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

黃竣陽 updated KAFKA-18430:

Parent: KAFKA-17611
Issue Type: Sub-task  (was: Improvement)

> Remove ZkNodeChangeNotificationListener
> ---
>
> Key: KAFKA-18430
> URL: https://issues.apache.org/jira/browse/KAFKA-18430
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: 黃竣陽
>Assignee: 黃竣陽
>Priority: Major
>
> We could remove it since Kafka no longer relies on Zookeeper.





[jira] [Updated] (KAFKA-18427) Remove KafkaZkClient and ZooKeeperClient

2025-01-07 Thread Jira


 [ 
https://issues.apache.org/jira/browse/KAFKA-18427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

黃竣陽 updated KAFKA-18427:

Parent: KAFKA-17611
Issue Type: Sub-task  (was: Improvement)

> Remove KafkaZkClient and ZooKeeperClient
> 
>
> Key: KAFKA-18427
> URL: https://issues.apache.org/jira/browse/KAFKA-18427
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: 黃竣陽
>Assignee: 黃竣陽
>Priority: Major
>
> We could remove it since Kafka no longer relies on Zookeeper.





[jira] [Created] (KAFKA-18428) Measure share consumers performance

2025-01-07 Thread Abhinav Dixit (Jira)
Abhinav Dixit created KAFKA-18428:
-

 Summary: Measure share consumers performance
 Key: KAFKA-18428
 URL: https://issues.apache.org/jira/browse/KAFKA-18428
 Project: Kafka
  Issue Type: Sub-task
Reporter: Abhinav Dixit
Assignee: Abhinav Dixit


Create a script to evaluate share consumers performance





[PR] MINOR: Replace ImplicitConversions with CollectionConverters [kafka]

2025-01-07 Thread via GitHub


frankvicky opened a new pull request, #18412:
URL: https://github.com/apache/kafka/pull/18412

   `ImplicitConversions` has been deprecated since Scala 2.13; we should replace it with `CollectionConverters`.
   Reference: https://www.scala-lang.org/api/2.13.9/scala/collection/convert/ImplicitConversions$.html
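   For reference, a minimal sketch of the replacement pattern (values are illustrative):

```scala
// Scala 2.13+: explicit .asScala / .asJava conversions instead of the
// deprecated ImplicitConversions implicits
import scala.jdk.CollectionConverters._

object ConvertersSketch extends App {
  val javaList: java.util.List[String] = java.util.List.of("a", "b")
  val scalaSeq: Seq[String] = javaList.asScala.toSeq // explicit, no implicit magic
  val backToJava: java.util.List[String] = scalaSeq.asJava
  println(s"$scalaSeq / $backToJava")
}
```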
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[jira] [Created] (KAFKA-18430) Remove ZkNodeChangeNotificationListener

2025-01-07 Thread Jira
黃竣陽 created KAFKA-18430:
---

 Summary: Remove ZkNodeChangeNotificationListener
 Key: KAFKA-18430
 URL: https://issues.apache.org/jira/browse/KAFKA-18430
 Project: Kafka
  Issue Type: Improvement
Reporter: 黃竣陽
Assignee: 黃竣陽


We could remove it since Kafka no longer relies on Zookeeper.





[jira] [Commented] (KAFKA-18413) Remove AdminZkClient

2025-01-07 Thread Jira


[ 
https://issues.apache.org/jira/browse/KAFKA-18413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17910597#comment-17910597
 ] 

黃竣陽 commented on KAFKA-18413:
-

We should also remove `TopicAlreadyMarkedForDeletionException`.

> Remove AdminZkClient
> 
>
> Key: KAFKA-18413
> URL: https://issues.apache.org/jira/browse/KAFKA-18413
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: TengYao Chi
>Assignee: TengYao Chi
>Priority: Major
> Fix For: 4.0.0
>
>






[jira] [Commented] (KAFKA-18297) Fix flaky PlaintextAdminIntegrationTest.testConsumerGroups

2025-01-07 Thread David Jacot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-18297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17910601#comment-17910601
 ] 

David Jacot commented on KAFKA-18297:
-

I have a PR that fixes one case: https://github.com/apache/kafka/pull/18409. I 
think that this is the most common one according to 
https://ge.apache.org/scans/tests?search.rootProjectNames=kafka&search.timeZoneId=Europe%2FZurich&tests.container=kafka.api.PlaintextAdminIntegrationTest&tests.test=testConsumerGroups(String%2C%20String)%5B2%5D.

> Fix flaky PlaintextAdminIntegrationTest.testConsumerGroups
> --
>
> Key: KAFKA-18297
> URL: https://issues.apache.org/jira/browse/KAFKA-18297
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Blocker
>  Labels: kip-848-client-support
> Fix For: 4.0.0
>
>






Re: [PR] KAFKA-18384 Remove ZkAlterPartitionManager [kafka]

2025-01-07 Thread via GitHub


m1a2st commented on PR #18364:
URL: https://github.com/apache/kafka/pull/18364#issuecomment-2575152665

   Thanks for the review @mimaison, addressed all comments :)





[PR] KAFKA-18411 Remove ZkProducerIdManager [kafka]

2025-01-07 Thread via GitHub


m1a2st opened a new pull request, #18413:
URL: https://github.com/apache/kafka/pull/18413

   Jira: https://issues.apache.org/jira/browse/KAFKA-18411
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[jira] [Updated] (KAFKA-18413) Remove AdminZkClient and TopicAlreadyMarkedForDeletionException

2025-01-07 Thread TengYao Chi (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-18413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TengYao Chi updated KAFKA-18413:

Summary: Remove AdminZkClient and TopicAlreadyMarkedForDeletionException  
(was: Remove AdminZkClient)

> Remove AdminZkClient and TopicAlreadyMarkedForDeletionException
> ---
>
> Key: KAFKA-18413
> URL: https://issues.apache.org/jira/browse/KAFKA-18413
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: TengYao Chi
>Assignee: TengYao Chi
>Priority: Major
> Fix For: 4.0.0
>
>






[jira] [Commented] (KAFKA-18413) Remove AdminZkClient

2025-01-07 Thread TengYao Chi (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-18413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17910606#comment-17910606
 ] 

TengYao Chi commented on KAFKA-18413:
-

You're right, I will add it to this ticket.

> Remove AdminZkClient
> 
>
> Key: KAFKA-18413
> URL: https://issues.apache.org/jira/browse/KAFKA-18413
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: TengYao Chi
>Assignee: TengYao Chi
>Priority: Major
> Fix For: 4.0.0
>
>






Re: [PR] Bump the default Consumer group command timeout default to 30 sec [kafka]

2025-01-07 Thread via GitHub


omkreddy merged PR #16406:
URL: https://github.com/apache/kafka/pull/16406





[jira] [Updated] (KAFKA-18360) Remove config in ZkConfigs, ZKConfig, and ZKClientConfig

2025-01-07 Thread PoAn Yang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-18360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PoAn Yang updated KAFKA-18360:
--
Summary: Remove config in ZkConfigs, ZKConfig, and ZKClientConfig  (was: 
Remove config in ZkConfigs)

> Remove config in ZkConfigs, ZKConfig, and ZKClientConfig
> 
>
> Key: KAFKA-18360
> URL: https://issues.apache.org/jira/browse/KAFKA-18360
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: PoAn Yang
>Assignee: PoAn Yang
>Priority: Major
>
> Remove all config in 
> [https://github.com/apache/kafka/blob/trunk/server/src/main/java/org/apache/kafka/server/config/ZkConfigs.java]
>  





Re: [PR] MINOR: Remove RaftManager.maybeDeleteMetadataLogDir [kafka]

2025-01-07 Thread via GitHub


chia7712 commented on PR #17365:
URL: https://github.com/apache/kafka/pull/17365#issuecomment-2575265039

   oh, sorry: the PR I just merged causes conflicts on 
`AutoTopicCreationManagerTest.scala`. I will fix the conflicts and then merge 
this PR, since `AutoTopicCreationManagerTest.scala` is not necessary anymore.





Re: [PR] KAFKA-10790: Add deadlock detection to producer#flush [kafka]

2025-01-07 Thread via GitHub


AndrewJSchofield commented on PR #17946:
URL: https://github.com/apache/kafka/pull/17946#issuecomment-2575281857

   @frankvicky It is unfortunate this behaviour change didn't make the 4.0 
deadline. Personally, because the old behaviour was liable to cause deadlocks, 
I'm good with making such a change in 4.1. @chia7712 What do you think?





[jira] [Assigned] (KAFKA-18432) Remove AutoTopicCreationManager

2025-01-07 Thread Logan Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-18432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Logan Zhu reassigned KAFKA-18432:
-

Assignee: Logan Zhu  (was: Chia-Ping Tsai)

> Remove AutoTopicCreationManager
> ---
>
> Key: KAFKA-18432
> URL: https://issues.apache.org/jira/browse/KAFKA-18432
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Logan Zhu
>Priority: Major
>
> as title





Re: [PR] KAFKA-10790: Add deadlock detection to producer#flush [kafka]

2025-01-07 Thread via GitHub


AndrewJSchofield commented on code in PR #17946:
URL: https://github.com/apache/kafka/pull/17946#discussion_r1905447643


##
docs/upgrade.html:
##
@@ -202,6 +202,8 @@ Notable changes in 4
 
     The deprecated sendOffsetsToTransaction(Map<TopicPartition, OffsetAndMetadata>, String) method has been removed from the Producer API.
 
+    The flush method now detects potential deadlocks and prohibits its use inside a callback. This change prevents unintended blocking behavior, which was a known risk in earlier versions.

Review Comment:
   This text now needs to be in an "upgrading to 4.1" section.






Re: [PR] MINOR: Improve PlaintextAdminIntegrationTest#testConsumerGroups [kafka]

2025-01-07 Thread via GitHub


dajac merged PR #18409:
URL: https://github.com/apache/kafka/pull/18409





[jira] [Resolved] (KAFKA-18412) Remove EmbeddedZookeeper

2025-01-07 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-18412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-18412.

Resolution: Fixed

> Remove EmbeddedZookeeper
> 
>
> Key: KAFKA-18412
> URL: https://issues.apache.org/jira/browse/KAFKA-18412
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: TengYao Chi
>Assignee: TengYao Chi
>Priority: Major
> Fix For: 4.0.0
>
>






Re: [PR] KAFKA-18412: Remove EmbeddedZookeeper [kafka]

2025-01-07 Thread via GitHub


mimaison commented on PR #18399:
URL: https://github.com/apache/kafka/pull/18399#issuecomment-2575720963

   Applied to 4.0: 
https://github.com/apache/kafka/commit/8b6f8cf60dabc25192aeff68d46b82e7a4d36ad8





[PR] KAFKA-18418: Use CDL to block the thread termination to avoid flaky tests [kafka]

2025-01-07 Thread via GitHub


aoli-al opened a new pull request, #18418:
URL: https://github.com/apache/kafka/pull/18418

   
   This PR fixes [KAFKA-18418](https://issues.apache.org/jira/browse/KAFKA-18418). 
KafkaStreamsTest uses `Thread.sleep` to prevent threads from terminating, which 
introduces flaky tests if the sleep duration is not long enough. This patch 
fixes the issue by replacing the `Thread.sleep` with a `CountDownLatch` that is 
released after assertions are validated; a sketch of the pattern is shown below.
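   A minimal, self-contained sketch of the latch-instead-of-sleep pattern (hypothetical demo class, not the actual test code):

```java
import java.util.concurrent.CountDownLatch;

public class LatchInsteadOfSleepDemo {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch keepRunning = new CountDownLatch(1);
        Thread worker = new Thread(() -> {
            try {
                // deterministic: the thread stays alive until the test releases it,
                // instead of hoping a Thread.sleep outlasts the assertions
                keepRunning.await();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();

        // ... assertions that require the worker to still be alive ...
        if (!worker.isAlive()) throw new AssertionError("worker terminated too early");

        keepRunning.countDown(); // assertions done, let the thread finish
        worker.join();
    }
}
```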
   
   
   I tested the proposed fix against the patch: 
https://github.com/aoli-al/kafka/commit/aced4f1430139c315809cceca26558416140883d
 and verified that all tests have passed. I also tested the new code using 
[Fray](https://github.com/cmu-pasta/fray) (the tool that found the bug), and 
Fray did not report any bug using the POS strategy after 10 minutes.  
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





Re: [PR] KAFKA-10790: Add deadlock detection to producer#flush [kafka]

2025-01-07 Thread via GitHub


AndrewJSchofield merged PR #17946:
URL: https://github.com/apache/kafka/pull/17946





[jira] [Resolved] (KAFKA-10790) Detect/Prevent Deadlock on Producer Network Thread

2025-01-07 Thread Andrew Schofield (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Schofield resolved KAFKA-10790.
--
Resolution: Fixed

> Detect/Prevent Deadlock on Producer Network Thread
> --
>
> Key: KAFKA-10790
> URL: https://issues.apache.org/jira/browse/KAFKA-10790
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Gary Russell
>Assignee: TengYao Chi
>Priority: Major
> Fix For: 4.1.0
>
>
> I realize this is contrived, but I stumbled across the problem while testing 
> some library code with 2.7.0 RC3 (although the issue is not limited to 2.7).
> For example, calling flush() in the producer callback deadlocks the network 
> thread (and any attempt to close the producer thereafter).
> {code:java}
> producer.send(new ProducerRecord<>("foo", "bar"), (rm, ex) -> {
>   producer.flush();
> });
> Thread.sleep(1000);
> producer.close();
> {code}
> It took some time to figure out why the close was blocked.
> There is existing logic in close() to avoid it blocking if called from the 
> callback; perhaps similar logic could be added to flush() (and any other 
> methods that might block), even if it means throwing an exception to make it 
> clear that you can't call flush() from the callback. 
> These stack traces are with the 2.6.0 client.
> {noformat}
> "main" #1 prio=5 os_prio=31 cpu=1333.10ms elapsed=13.05s 
> tid=0x7ff259012800 nid=0x2803 in Object.wait()  [0x7fda5000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(java.base@14.0.2/Native Method)
>   - waiting on <0x000700d0> (a 
> org.apache.kafka.common.utils.KafkaThread)
>   at java.lang.Thread.join(java.base@14.0.2/Thread.java:1297)
>   - locked <0x000700d0> (a 
> org.apache.kafka.common.utils.KafkaThread)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:1205)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:1182)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:1158)
>   at com.example.demo.Rk1Application.lambda$2(Rk1Application.java:55)
> "kafka-producer-network-thread | producer-1" #24 daemon prio=5 os_prio=31 
> cpu=225.80ms elapsed=11.64s tid=0x7ff256963000 nid=0x7103 waiting on 
> condition  [0x700011d04000]
>java.lang.Thread.State: WAITING (parking)
>   at jdk.internal.misc.Unsafe.park(java.base@14.0.2/Native Method)
>   - parking to wait for  <0x0007020b27e0> (a 
> java.util.concurrent.CountDownLatch$Sync)
>   at 
> java.util.concurrent.locks.LockSupport.park(java.base@14.0.2/LockSupport.java:211)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(java.base@14.0.2/AbstractQueuedSynchronizer.java:714)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(java.base@14.0.2/AbstractQueuedSynchronizer.java:1046)
>   at 
> java.util.concurrent.CountDownLatch.await(java.base@14.0.2/CountDownLatch.java:232)
>   at 
> org.apache.kafka.clients.producer.internals.ProduceRequestResult.await(ProduceRequestResult.java:76)
>   at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.awaitFlushCompletion(RecordAccumulator.java:712)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.flush(KafkaProducer.java:)
>   at com.example.demo.Rk1Application.lambda$3(Rk1Application.java:52)
>   at 
> com.example.demo.Rk1Application$$Lambda$528/0x000800e28840.onCompletion(Unknown
>  Source)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer$InterceptorCallback.onCompletion(KafkaProducer.java:1363)
>   at 
> org.apache.kafka.clients.producer.internals.ProducerBatch.completeFutureAndFireCallbacks(ProducerBatch.java:228)
>   at 
> org.apache.kafka.clients.producer.internals.ProducerBatch.done(ProducerBatch.java:197)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.completeBatch(Sender.java:653)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.completeBatch(Sender.java:634)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.handleProduceResponse(Sender.java:554)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.lambda$sendProduceRequest$0(Sender.java:743)
>   at 
> org.apache.kafka.clients.producer.internals.Sender$$Lambda$642/0x000800ea2040.onComplete(Unknown
>  Source)
>   at 
> org.apache.kafka.clients.ClientResponse.onComplete(ClientResponse.java:109)
>   at 
> org.apache.kafka.clients.NetworkClient.completeResponses(NetworkClient.java:566)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:558)
>   
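For context on the fix merged above, a hedged sketch of the detection idea (the class and field names are illustrative, not the actual KafkaProducer internals):

```java
import org.apache.kafka.common.KafkaException;

public class FlushGuardSketch {
    private final Thread ioThread;

    public FlushGuardSketch(Thread ioThread) {
        this.ioThread = ioThread;
    }

    public void flush() {
        // If flush() runs on the producer's own network (sender) thread,
        // e.g. from a send callback, waiting for outstanding batches would
        // deadlock that thread, so fail fast with an exception instead.
        if (Thread.currentThread() == ioThread) {
            throw new KafkaException("flush() called from the producer network thread; this would deadlock");
        }
        // ... otherwise block until all buffered records are sent ...
    }

    public static void main(String[] args) {
        // demo: pretend the current thread is the network thread
        new FlushGuardSketch(Thread.currentThread()).flush(); // throws KafkaException
    }
}
```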

Re: [PR] KAFKA-18399 Remove ZooKeeper from KafkaApis (part 1) [kafka]

2025-01-07 Thread via GitHub


m1a2st commented on code in PR #18417:
URL: https://github.com/apache/kafka/pull/18417#discussion_r1905629931


##
core/src/test/scala/unit/kafka/server/KafkaApisTest.scala:
##
@@ -9705,77 +9514,6 @@ class KafkaApisTest extends Logging {
     assertEquals(expectedError, leaderAndIsrResponse.error())
   }
 
-  @Test
-  def testStopReplicaRequestWithCurrentBrokerEpoch(): Unit = {
-    val currentBrokerEpoch = 1239875L
-    testStopReplicaRequest(currentBrokerEpoch, currentBrokerEpoch, Errors.NONE)
-  }
-
-  @Test
-  def testStopReplicaRequestWithNewerBrokerEpochIsValid(): Unit = {
-    val currentBrokerEpoch = 1239875L
-    testStopReplicaRequest(currentBrokerEpoch, currentBrokerEpoch + 1, Errors.NONE)
-  }
-
-  @Test
-  def testStopReplicaRequestWithStaleBrokerEpochIsRejected(): Unit = {
-    val currentBrokerEpoch = 1239875L
-    testStopReplicaRequest(currentBrokerEpoch, currentBrokerEpoch - 1, Errors.STALE_BROKER_EPOCH)
-  }

Review Comment:
   These tests exercise the `STOP_REPLICA` API, which is only used with ZooKeeper, so we should remove them.



##
core/src/test/scala/unit/kafka/server/KafkaApisTest.scala:
##
@@ -3035,98 +3002,6 @@ class KafkaApisTest extends Logging {
     assertEquals(2, markersResponse.errorsByProducerId.size())
   }
 
-  @Test
-  def shouldResignCoordinatorsIfStopReplicaReceivedWithDeleteFlagAndLeaderEpoch(): Unit = {
-    shouldResignCoordinatorsIfStopReplicaReceivedWithDeleteFlag(
-      LeaderAndIsr.INITIAL_LEADER_EPOCH + 2, deletePartition = true)
-  }
-
-  @Test
-  def shouldResignCoordinatorsIfStopReplicaReceivedWithDeleteFlagAndDeleteSentinel(): Unit = {
-    shouldResignCoordinatorsIfStopReplicaReceivedWithDeleteFlag(
-      LeaderAndIsr.EPOCH_DURING_DELETE, deletePartition = true)
-  }
-
-  @Test
-  def shouldResignCoordinatorsIfStopReplicaReceivedWithDeleteFlagAndNoEpochSentinel(): Unit = {
-    shouldResignCoordinatorsIfStopReplicaReceivedWithDeleteFlag(
-      LeaderAndIsr.NO_EPOCH, deletePartition = true)
-  }
-
-  @Test
-  def shouldNotResignCoordinatorsIfStopReplicaReceivedWithoutDeleteFlag(): Unit = {
-    shouldResignCoordinatorsIfStopReplicaReceivedWithDeleteFlag(
-      LeaderAndIsr.INITIAL_LEADER_EPOCH + 2, deletePartition = false)
-  }
-
-  def shouldResignCoordinatorsIfStopReplicaReceivedWithDeleteFlag(leaderEpoch: Int,

Review Comment:
   These tests exercise the `STOP_REPLICA` API, which is only used with ZooKeeper, so we should remove them.



##
core/src/test/scala/unit/kafka/server/KafkaApisTest.scala:
##
@@ -9590,12 +9405,6 @@ class KafkaApisTest extends Logging {
     assertEquals(records.sizeInBytes(), brokerTopicStats.allTopicsStats.replicationBytesOutRate.get.count())
   }
 
-  @Test
-  def testUpdateMetadataRequestWithCurrentBrokerEpoch(): Unit = {
-    val currentBrokerEpoch = 1239875L
-    testUpdateMetadataRequest(currentBrokerEpoch, currentBrokerEpoch, Errors.NONE)
-  }

Review Comment:
   This test exercises the `UPDATE_METADATA` API, which is only used with ZooKeeper, so we should remove it.



##
core/src/test/scala/unit/kafka/server/KafkaApisTest.scala:
##
@@ -4279,66 +4154,6 @@ class KafkaApisTest extends Logging {
     assertEquals(Set(0), response.brokers.asScala.map(_.id).toSet)
   }
 
-
-  /**
-   * Metadata request to fetch all topics should not result in the followings:
-   * 1) Auto topic creation
-   * 2) UNKNOWN_TOPIC_OR_PARTITION
-   *
-   * This case is testing the case that a topic is being deleted from MetadataCache right after
-   * authorization but before checking in MetadataCache.
-   */
-  @Test
-  def testGetAllTopicMetadataShouldNotCreateTopicOrReturnUnknownTopicPartition(): Unit = {
-    // Setup: authorizer authorizes 2 topics, but one got deleted in metadata cache
-    metadataCache = mock(classOf[ZkMetadataCache])
-    when(metadataCache.getAliveBrokerNodes(any())).thenReturn(List(new Node(brokerId, "localhost", 0)))
-    when(metadataCache.getControllerId).thenReturn(None)
-
-    // 2 topics returned for authorization in during handle
-    val topicsReturnedFromMetadataCacheForAuthorization = Set("remaining-topic", "later-deleted-topic")
-    when(metadataCache.getAllTopics()).thenReturn(topicsReturnedFromMetadataCacheForAuthorization)
-    // 1 topic is deleted from metadata right at the time between authorization and the next getTopicMetadata() call
-    when(metadataCache.getTopicMetadata(
-      ArgumentMatchers.eq(topicsReturnedFromMetadataCacheForAuthorization),
-      any[ListenerName],
-      anyBoolean,
-      anyBoolean
-    )).thenReturn(Seq(
-      new MetadataResponseTopic()
-        .setErrorCode(Errors.NONE.code)
-        .setName("remaining-topic")
-        .setIsInternal(false)
-    ))
-
-    var createTopicIsCalled: Boolean = false
-    // Specific mock on zkClient for this use case
-    // Expect it's never called to do auto topic creation
-    when(zkClient.setOrCreateEntityConfigs(
-      ArgumentMatchers.eq(ConfigType.TOPIC),
-

[PR] KAFKA-18399 Remove ZooKeeper from KafkaApis (part 1) [kafka]

2025-01-07 Thread via GitHub


m1a2st opened a new pull request, #18417:
URL: https://github.com/apache/kafka/pull/18417

   Jira: https://issues.apache.org/jira/browse/KAFKA-18399
   
   I will check whether these tests are valid in KRaft; if a test relies on 
ZooKeeper, I will remove it.
   - `shouldNotResignCoordinatorsIfStopReplicaReceivedWithoutDeleteFlag`
   - `shouldResignCoordinatorsIfStopReplicaReceivedWithDeleteFlagAndLeaderEpoch`
   - 
`shouldResignCoordinatorsIfStopReplicaReceivedWithDeleteFlagAndDeleteSentinel`
   - 
`shouldResignCoordinatorsIfStopReplicaReceivedWithDeleteFlagAndNoEpochSentinel`
   - `testStopReplicaRequestWithNewerBrokerEpochIsValid`
   - `testLeaderAndIsrRequestWithCurrentBrokerEpoch`
   - `testGetAllTopicMetadataShouldNotCreateTopicOrReturnUnknownTopicPartition`
   - `testCreatePartitionsAuthorization`
   - `testUpdateMetadataRequestWithCurrentBrokerEpoch`
   
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





Re: [PR] KAFKA-17921: Support SASL_PLAINTEXT protocol with java.security.auth.login.config [kafka]

2025-01-07 Thread via GitHub


FrankYang0529 commented on code in PR #17671:
URL: https://github.com/apache/kafka/pull/17671#discussion_r1905618223


##
test-common/test-common-api/src/test/java/org/apache/kafka/common/test/api/ClusterTestExtensionsTest.java:
##
@@ -331,4 +344,111 @@ public void testControllerListenerName(ClusterInstance cluster) throws Execution
     assertEquals(1, admin.describeMetadataQuorum().quorumInfo().get().nodes().size());
 }
 }
+
+@ClusterTest(
+    types = {Type.KRAFT, Type.CO_KRAFT},
+    brokerSecurityProtocol = SecurityProtocol.SASL_PLAINTEXT,
+    controllerSecurityProtocol = SecurityProtocol.SASL_PLAINTEXT,

Review Comment:
   Yes, we already support it in this PR. I updated 
`KafkaClusterTestKit.Builder#createNodeConfig` to respect the 
`controllerSecurityProtocol` setting.






[PR] KAFKA-18428: Measure share consumers performance [kafka]

2025-01-07 Thread via GitHub


adixitconfluent opened a new pull request, #18415:
URL: https://github.com/apache/kafka/pull/18415

   ### About
   Added code to measure the performance of share consumers in a share group. Added the following files:
   
   1. `ShareConsumerPerformance.java` - Code which measures the performance of share consumer/consumers.
   2. `ShareConsumerPerformanceTest.java` - Contains unit tests for individual functionalities in `ShareConsumerPerformance.java`.
   3. `kafka-share-consumer-perf-test.sh` - CLI utility to run `ShareConsumerPerformance.java`.





Re: [PR] KAFKA-15057: Use new interface from zstd-jni [kafka]

2025-01-07 Thread via GitHub


divijvaidya closed pull request #13814: KAFKA-15057: Use new interface from 
zstd-jni
URL: https://github.com/apache/kafka/pull/13814





Re: [PR] KAFKA-10790: Add deadlock detection to producer#flush [kafka]

2025-01-07 Thread via GitHub


chia7712 commented on code in PR #17946:
URL: https://github.com/apache/kafka/pull/17946#discussion_r1905457391


##
docs/upgrade.html:
##
@@ -202,6 +202,8 @@ Notable changes in 4
 
     The deprecated sendOffsetsToTransaction(Map<TopicPartition, OffsetAndMetadata>, String) method has been removed from the Producer API.
 
+    The flush method now detects potential deadlocks and prohibits its use inside a callback. This change prevents unintended blocking behavior, which was a known risk in earlier versions.

Review Comment:
   I agree with @AndrewJSchofield that this PR will not be backported to the 
4.0 release. Therefore, this change should not be included in the 
"upgrade_4_0_0" chapter. @frankvicky, could you please create a new chapter 
titled "upgrade_4_1_0" and move this documentation to the "upgrade_4_1_0" 
chapter?






Re: [PR] Minor : Improve Exception log [kafka]

2025-01-07 Thread via GitHub


omkreddy merged PR #12394:
URL: https://github.com/apache/kafka/pull/12394





[jira] [Commented] (KAFKA-18418) Flaky test in KafkaStreamsTest::shouldThrowOnCleanupWhileShuttingDownStreamClosedWithCloseOptionLeaveGroupFalse

2025-01-07 Thread Ao Li (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-18418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17910687#comment-17910687
 ] 

Ao Li commented on KAFKA-18418:
---

[~mjsax] Thanks for your comments! I've submitted a new PR based on your 
review. 

This time, I also tested the new patch using 
[Fray|https://github.com/cmu-pasta/fray], and it did not report any issue after 
10 minutes. (p.s. if you are interested, Fray is a concurrency testing 
framework we built to test concurrent programs under different thread 
interleavings deterministically. You may have noticed that I submitted many bug 
reports related to concurrency issues, and they were all found by Fray!)

> Flaky test in 
> KafkaStreamsTest::shouldThrowOnCleanupWhileShuttingDownStreamClosedWithCloseOptionLeaveGroupFalse
> ---
>
> Key: KAFKA-18418
> URL: https://issues.apache.org/jira/browse/KAFKA-18418
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Reporter: Ao Li
>Assignee: Ao Li
>Priority: Major
>
> KafkaStreams does not synchronize with CloseThread after shutdown thread 
> starts at line 
> https://github.com/apache/kafka/blob/c1163549081561cade03bbc6a29bfe6caad332a2/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java#L1571
> So it is possible for the shutdown helper to update the state of the 
> KafkaStreams 
> (https://github.com/apache/kafka/blob/c1163549081561cade03bbc6a29bfe6caad332a2/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java#L1530)
>  before `waitOnState` is called 
> (https://github.com/apache/kafka/blob/c1163549081561cade03bbc6a29bfe6caad332a2/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java#L1577).
>   
> If this happens, 
> `KafkaStreamsTest::shouldThrowOnCleanupWhileShuttingDownStreamClosedWithCloseOptionLeaveGroupFalse`
>  will fail. 
> Trace:
> ```
> Gradle Test Run :streams:test > Gradle Test Executor 7 > KafkaStreamsTest > 
> shouldThrowOnCleanupWhileShuttingDownStreamClosedWithCloseOptionLeaveGroupFalse()
>  FAILED
> java.lang.AssertionError:
> Expected: 
>  but: was 
> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:6)
> at 
> org.apache.kafka.streams.KafkaStreamsTest.shouldThrowOnCleanupWhileShuttingDownStreamClosedWithCloseOptionLeaveGroupFalse(KafkaStreamsTest.java:986)
> ```
> Please check code https://github.com/aoli-al/kafka/tree/KAFKA-159, and run 
> `./gradlew :streams:test --tests 
> "org.apache.kafka.streams.KafkaStreamsTest.shouldThrowOnCleanupWhileShuttingDownStreamClosedWithCloseOptionLeaveGroupFalse"`
>  to reproduce the failure.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] KAFKA-18034: CommitRequestManager should fail pending requests on fatal coordinator errors [kafka]

2025-01-07 Thread via GitHub


m1a2st commented on code in PR #18050:
URL: https://github.com/apache/kafka/pull/18050#discussion_r1905777105


##
clients/src/main/java/org/apache/kafka/clients/consumer/internals/CommitRequestManager.java:
##
@@ -196,6 +198,14 @@ public NetworkClientDelegate.PollResult poll(final long 
currentTimeMs) {
 return new NetworkClientDelegate.PollResult(timeUntilNextPoll, 
requests);
 }
 
+private void maybeFailPendingRequestsOnCoordinatorFatalError() {
+Optional fatalError = 
coordinatorRequestManager.fatalError();
+if (fatalError.isPresent()) {
+pendingRequests.unsentOffsetCommits.forEach(request -> 
request.future.completeExceptionally(fatalError.get()));
+pendingRequests.unsentOffsetFetches.forEach(request -> 
request.future.completeExceptionally(fatalError.get()));

Review Comment:
   I addressed the clearAll() method after @lianetm’s comments. It makes sense 
to clear all pendingRequests once we’ve completed all unsent offset commits 
and fetches.
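
   A minimal sketch of the resulting shape (not the exact PR code; `clearAll()` 
is the helper mentioned above and is assumed to simply empty both queues):

```
private void maybeFailPendingRequestsOnCoordinatorFatalError() {
    coordinatorRequestManager.fatalError().ifPresent(fatalError -> {
        // fail every buffered commit/fetch so callers are unblocked with the real cause
        pendingRequests.unsentOffsetCommits.forEach(request -> request.future.completeExceptionally(fatalError));
        pendingRequests.unsentOffsetFetches.forEach(request -> request.future.completeExceptionally(fatalError));
        // then drop them so they are never sent towards the failed coordinator
        pendingRequests.clearAll();
    });
}
```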



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: Improve PlaintextAdminIntegrationTest#testConsumerGroups [kafka]

2025-01-07 Thread via GitHub


dajac commented on PR #18409:
URL: https://github.com/apache/kafka/pull/18409#issuecomment-2575439307

   Merged to trunk and to 4.0.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [KAFKA-8830] KIP-512: make Record Headers available in onAcknowledgement [kafka]

2025-01-07 Thread via GitHub


chia7712 commented on code in PR #17099:
URL: https://github.com/apache/kafka/pull/17099#discussion_r1905466781


##
clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java:
##
@@ -1546,6 +1547,7 @@ private AppendCallbacks(Callback userCallback, 
ProducerInterceptors interc
 // whole lifetime of the batch.
 // We don't want to have an NPE here, because the interceptors 
would not be notified (see .doSend).
 topic = record != null ? record.topic() : null;
+headers = record != null ? record.headers() : new RecordHeaders();

Review Comment:
   To maintain header immutability, especially given your efforts in this area, 
it might be prudent to pass immutable headers when record is null.
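
   A minimal sketch of that idea, assuming `RecordHeaders.setReadOnly()` (which 
the producer already uses before invoking interceptors) is acceptable here:

```
// when there is no record, fall back to an empty, read-only Headers instance
if (record != null) {
    headers = record.headers();
} else {
    RecordHeaders empty = new RecordHeaders();
    empty.setReadOnly();  // later add()/remove() calls will throw IllegalStateException
    headers = empty;
}
```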



##
clients/src/main/java/org/apache/kafka/clients/producer/ProducerInterceptor.java:
##
@@ -81,12 +82,37 @@ public interface ProducerInterceptor extends 
Configurable, AutoCloseable {
  * @param metadata The metadata for the record that was sent (i.e. the 
partition and offset).
  * If an error occurred, metadata will contain only valid 
topic and maybe
  * partition. If partition is not given in ProducerRecord 
and an error occurs
- * before partition gets assigned, then partition will be 
set to RecordMetadata.NO_PARTITION.
+ * before partition gets assigned, then partition will be 
set to {@link RecordMetadata#UNKNOWN_PARTITION}.
  * The metadata may be null if the client passed null 
record to
  * {@link 
org.apache.kafka.clients.producer.KafkaProducer#send(ProducerRecord)}.
  * @param exception The exception thrown during processing of this record. 
Null if no error occurred.
  */
-void onAcknowledgement(RecordMetadata metadata, Exception exception);
+default void onAcknowledgement(RecordMetadata metadata, Exception 
exception) {}
+
+/**
+ * This method is called when the record sent to the server has been 
acknowledged, or when sending the record fails before
+ * it gets sent to the server.
+ * 
+ * This method is generally called just before the user callback is 
called, and in additional cases when KafkaProducer.send()
+ * throws an exception.
+ * 
+ * Any exception thrown by this method will be ignored by the caller.
+ * 
+ * This method will generally execute in the background I/O thread, so the 
implementation should be reasonably fast.
+ * Otherwise, sending of messages from other threads could be delayed.
+ *
+ * @param metadata The metadata for the record that was sent (i.e. the 
partition and offset).
+ * If an error occurred, metadata will contain only valid 
topic and maybe
+ * partition. If partition is not given in ProducerRecord 
and an error occurs
+ * before partition gets assigned, then partition will be 
set to {@link RecordMetadata#UNKNOWN_PARTITION}.
+ * The metadata may be null if the client passed null 
record to
+ * {@link 
org.apache.kafka.clients.producer.KafkaProducer#send(ProducerRecord)}.
+ * @param exception The exception thrown during processing of this record. 
Null if no error occurred.
+ * @param headers The headers for the record that was sent.

Review Comment:
   Should we remind users that they should not change the headers?
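
   One possible wording for that reminder (illustrative only, not agreed text):

```
 * @param headers The headers for the record that was sent. The headers are shared
 *                with the producer internals, so implementations should treat them
 *                as read-only and must not add, remove or modify entries.
```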



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-15599: Move SegmentPosition/TimingWheelExpirationService to raf… [kafka]

2025-01-07 Thread via GitHub


divijvaidya commented on code in PR #18094:
URL: https://github.com/apache/kafka/pull/18094#discussion_r1905471308


##
raft/src/main/java/org/apache/kafka/raft/TimingWheelExpirationService.java:
##
@@ -0,0 +1,79 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.raft;
+
+import org.apache.kafka.common.errors.TimeoutException;
+import org.apache.kafka.server.util.ShutdownableThread;
+import org.apache.kafka.server.util.timer.Timer;
+import org.apache.kafka.server.util.timer.TimerTask;
+
+import java.util.concurrent.CompletableFuture;
+
+public class TimingWheelExpirationService implements ExpirationService {
+
+private static final long WORK_TIMEOUT_MS = 200L;
+
+private final ExpiredOperationReaper expirationReaper = new 
ExpiredOperationReaper();
+private final Timer timer;
+
+public TimingWheelExpirationService(Timer timer) {
+this.timer = timer;
+expirationReaper.start();
+}
+
+@Override
+public <T> CompletableFuture<T> failAfter(long timeoutMs) {
+TimerTaskCompletableFuture<T> task = new 
TimerTaskCompletableFuture<>(timeoutMs);

Review Comment:
   The concern is that we are adding one more responsibility to 
`ForkJoinPool` with this refactor, which could lead to thread contention and 
maybe starve other tasks using `ForkJoinPool`. In general, in a refactor CR, I 
would prefer to not have any change in behaviour at all and here, changing 
responsibility from one thread (`SystemTimer`) to another is a change which 
could have unintended side effects.
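
   For context, a sketch of the pattern under discussion (class and field names 
follow the snippet above). The subtlety is that `completeExceptionally()` runs 
synchronous dependents (`thenApply` etc.) on the expiring thread, while `*Async` 
dependents registered without an explicit executor run on the common 
`ForkJoinPool`, which is the behaviour change raised here:

```
private static class TimerTaskCompletableFuture<T> extends TimerTask {
    private final CompletableFuture<T> future = new CompletableFuture<>();

    TimerTaskCompletableFuture(long delayMs) {
        super(delayMs);
    }

    @Override
    public void run() {
        // runs on the timer's expiration thread once the delay elapses
        future.completeExceptionally(new TimeoutException(
            "Future failed to be completed before timeout of " + delayMs + " ms was reached"));
    }
}
```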



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: improvement MetadataShell to use System.out.println [kafka]

2025-01-07 Thread via GitHub


m1a2st closed pull request #18117: MINOR: improvement MetadataShell to use 
System.out.println
URL: https://github.com/apache/kafka/pull/18117


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] KAFKA-18397: Added null check before sending background event from ShareConsumeRequestManager. [kafka]

2025-01-07 Thread via GitHub


ShivsundarR opened a new pull request, #18419:
URL: https://github.com/apache/kafka/pull/18419

   *What*
   Added a null check for acknowledgements before sending background event from 
ShareConsumeRequestManager. 
   Added a unit test which verifies the change.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-17544: Fix for loading big files while performing load tests [kafka]

2025-01-07 Thread via GitHub


manoj-mathivanan commented on code in PR #18391:
URL: https://github.com/apache/kafka/pull/18391#discussion_r1905788910


##
tools/src/main/java/org/apache/kafka/tools/ProducerPerformance.java:
##
@@ -194,9 +195,16 @@ static List<String> readPayloadFile(String 
payloadFilePath, String payloadDelimi
 throw new IllegalArgumentException("File does not exist or 
empty file provided.");
 }
 
-String[] payloadList = 
Files.readString(path).split(payloadDelimiter);
+List<String> payloadList = new ArrayList<>();
+try (Scanner payLoadScanner = new Scanner(path, 
StandardCharsets.UTF_8)) {
+//setting the delimiter while parsing the file, avoids loading 
entire data in memory before split
+payLoadScanner.useDelimiter(payloadDelimiter);
+while (payLoadScanner.hasNext()) {
+payloadList.add(payLoadScanner.next());

Review Comment:
   @m1a2st The `payLoadScanner.next()` starts parsing from its current position 
and keeps searching for the delimiter until the end of the file, as seen here: 
https://github.com/openjdk/jdk/blob/f1d85ab3e61f923b4e120cf30e16109e04505b53/src/java.base/share/classes/java/util/Scanner.java#L1011
   So if the delimiter does not exist, we will have the same problem, which 
would also occur in the previous approach where we load the entire string.
   If the delimiter exists, the substrings are fetched one at a time and the 
position is advanced; the entire String is never held in memory. 
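
   A self-contained sketch of that streaming behaviour (the file name and 
delimiter below are hypothetical): Scanner materialises one payload at a time 
instead of building a single String for the whole file before split():

```
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;

public class PayloadFileDemo {
    public static void main(String[] args) throws Exception {
        Path path = Path.of("payloads.txt");   // hypothetical payload file
        List<String> payloads = new ArrayList<>();
        try (Scanner scanner = new Scanner(path, StandardCharsets.UTF_8)) {
            scanner.useDelimiter("\n");        // delimiter is configurable in the real tool
            while (scanner.hasNext()) {
                payloads.add(scanner.next());  // only one payload is materialised per iteration
            }
        }
        System.out.printf("read %d payloads%n", payloads.size());
    }
}
```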



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-15599: Move SegmentPosition/TimingWheelExpirationService to raf… [kafka]

2025-01-07 Thread via GitHub


mimaison commented on code in PR #18094:
URL: https://github.com/apache/kafka/pull/18094#discussion_r1905794800


##
raft/src/main/java/org/apache/kafka/raft/TimingWheelExpirationService.java:
##
@@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.raft;
+
+import org.apache.kafka.server.util.ShutdownableThread;
+import org.apache.kafka.server.util.timer.Timer;
+
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.TimeUnit;
+
+public class TimingWheelExpirationService implements ExpirationService {
+
+private static final long WORK_TIMEOUT_MS = 200L;
+
+private final ExpiredOperationReaper expirationReaper = new 
ExpiredOperationReaper();

Review Comment:
   Yes good idea, done



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-15599: Move SegmentPosition/TimingWheelExpirationService to raf… [kafka]

2025-01-07 Thread via GitHub


mimaison commented on code in PR #18094:
URL: https://github.com/apache/kafka/pull/18094#discussion_r1905795147


##
raft/src/main/java/org/apache/kafka/raft/TimingWheelExpirationService.java:
##
@@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.raft;
+
+import org.apache.kafka.server.util.ShutdownableThread;
+import org.apache.kafka.server.util.timer.Timer;
+
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.TimeUnit;
+
+public class TimingWheelExpirationService implements ExpirationService {

Review Comment:
   I reverted this change.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18384 Remove ZkAlterPartitionManager [kafka]

2025-01-07 Thread via GitHub


mimaison merged PR #18364:
URL: https://github.com/apache/kafka/pull/18364


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (KAFKA-18368) Remove TestUtils#MockZkConnect and remove zkConnect from TestUtils#createBrokerConfig

2025-01-07 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-18368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-18368.

Fix Version/s: 4.0.0
   Resolution: Fixed

trunk: 
https://github.com/apache/kafka/commit/d874aa42f3d54f1474f2e617c792e48ed01f6a77

4.0: 
https://github.com/apache/kafka/commit/3061e9ca14ffaade56785bc1f49ecbbe9689e9de

> Remove TestUtils#MockZkConnect and remove zkConnect from 
> TestUtils#createBrokerConfig
> -
>
> Key: KAFKA-18368
> URL: https://issues.apache.org/jira/browse/KAFKA-18368
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: 黃竣陽
>Priority: Major
> Fix For: 4.0.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-18297) Fix flaky PlaintextAdminIntegrationTest.testConsumerGroups

2025-01-07 Thread Mingdao Yang (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-18297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17910634#comment-17910634
 ] 

Mingdao Yang commented on KAFKA-18297:
--

[~dajac] The GitHub URL has an extra period (.), so it would not take us to 
[https://github.com/apache/kafka/pull/18409] 

> Fix flaky PlaintextAdminIntegrationTest.testConsumerGroups
> --
>
> Key: KAFKA-18297
> URL: https://issues.apache.org/jira/browse/KAFKA-18297
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Blocker
>  Labels: kip-848-client-support
> Fix For: 4.0.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-18431) Remove KafkaController

2025-01-07 Thread PoAn Yang (Jira)
PoAn Yang created KAFKA-18431:
-

 Summary: Remove KafkaController
 Key: KAFKA-18431
 URL: https://issues.apache.org/jira/browse/KAFKA-18431
 Project: Kafka
  Issue Type: Sub-task
Reporter: PoAn Yang
Assignee: PoAn Yang


[https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/controller/KafkaController.scala]

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] MINOR: Remove RaftManager.maybeDeleteMetadataLogDir [kafka]

2025-01-07 Thread via GitHub


mimaison commented on PR #17365:
URL: https://github.com/apache/kafka/pull/17365#issuecomment-2575187788

   The test failure is https://issues.apache.org/jira/browse/KAFKA-18297, so 
it's not related to this PR.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (KAFKA-18433) Adjust ShareFetchRequest RPC to optimise fetching and acknowledgement

2025-01-07 Thread Andrew Schofield (Jira)
Andrew Schofield created KAFKA-18433:


 Summary: Adjust ShareFetchRequest RPC to optimise fetching and 
acknowledgement
 Key: KAFKA-18433
 URL: https://issues.apache.org/jira/browse/KAFKA-18433
 Project: Kafka
  Issue Type: Sub-task
Reporter: Andrew Schofield
Assignee: Andrew Schofield
 Fix For: 4.1.0


Change the RPC request to better align the consumer and broker processing for 
fetching and acknowledgement. Specifically:
* Remove PartitionMaxBytes because, for share fetching, the broker rather than 
the consumer is responsible for deciding which partitions are fetched
* Add BatchSize which gives the optimal number of records for batches of 
acquired records and acknowledgements.

There will likely be a future round of optimisation to follow.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] MINOR: Delete unused local var `oldConfig` in `LocalLog.java` [kafka]

2025-01-07 Thread via GitHub


divijvaidya merged PR #18410:
URL: https://github.com/apache/kafka/pull/18410


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-10790: Add deadlock detection to producer#flush [kafka]

2025-01-07 Thread via GitHub


frankvicky commented on code in PR #17946:
URL: https://github.com/apache/kafka/pull/17946#discussion_r1905476456


##
docs/upgrade.html:
##
@@ -202,6 +202,8 @@ Notable changes in 4
 
 The deprecated 
sendOffsetsToTransaction(Map, 
String) method has been removed from the Producer API.
 
+The flush method now detects 
potential deadlocks and prohibits its use inside a callback. This change 
prevents unintended blocking behavior, which was a known risk in earlier 
versions.

Review Comment:
   I have applied the suggestion. Here is the preview:
   ![Screenshot from 2025-01-07 
21-43-32](https://github.com/user-attachments/assets/015a16b1-8268-4965-8aab-e6adb0c842ce)
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-17921: Support SASL_PLAINTEXT protocol with java.security.auth.login.config [kafka]

2025-01-07 Thread via GitHub


chia7712 commented on code in PR #17671:
URL: https://github.com/apache/kafka/pull/17671#discussion_r1905506205


##
test-common/src/main/java/org/apache/kafka/common/test/KafkaClusterTestKit.java:
##
@@ -602,6 +637,13 @@ public void close() throws Exception {
 waitForAllFutures(futureEntries);
 futureEntries.clear();
 Utils.delete(baseDirectory);
+jaasFile.ifPresent(f -> {
+try {

Review Comment:
   To enhance readability, we should explore alternatives to nested try-catch 
blocks, even if it involves using a more traditional coding style. For example: 
`if (jaasFile.isPresent()) Utils.delete(jaasFile.get());`



##
test-common/src/main/java/org/apache/kafka/common/test/JaasUtils.java:
##
@@ -0,0 +1,63 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.common.test;
+
+import java.io.File;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.io.OutputStreamWriter;
+import java.nio.charset.StandardCharsets;
+import java.util.List;
+import java.util.Map;
+import java.util.stream.Collectors;
+
+import javax.security.auth.login.Configuration;
+
+public class JaasUtils {
+public record JaasSection(String contextName, List modules) {
+@Override
+public String toString() {
+return String.format(
+"%s {%n  %s%n};%n",
+contextName,
+
modules.stream().map(Object::toString).collect(Collectors.joining("\n  "))
+);
+}
+}
+
+public static final String KAFKA_SERVER_CONTEXT_NAME = "KafkaServer";
+
+public static final String KAFKA_PLAIN_USER1 = "plain-user1";
+public static final String KAFKA_PLAIN_USER1_PASSWORD = 
"plain-user1-secret";
+public static final String KAFKA_PLAIN_ADMIN = "plain-admin";
+public static final String KAFKA_PLAIN_ADMIN_PASSWORD = 
"plain-admin-secret";
+
+public static File writeJaasContextsToFile(Map<String, JaasSection> 
jaasSections) throws IOException {

Review Comment:
   It seems we don't use the `key`, so maybe we can change the collection from 
Map to Set?
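
   A minimal sketch of the suggested signature change (hypothetical; using 
`File.createTempFile` for brevity):

```
public static File writeJaasContextsToFile(Set<JaasSection> jaasSections) throws IOException {
    File jaasFile = File.createTempFile("kafka-jaas", ".conf");
    try (OutputStreamWriter writer =
             new OutputStreamWriter(new FileOutputStream(jaasFile), StandardCharsets.UTF_8)) {
        for (JaasSection section : jaasSections) {
            writer.write(section.toString());  // each section renders itself via toString() above
        }
    }
    return jaasFile;
}
```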



##
test-common/test-common-api/src/test/java/org/apache/kafka/common/test/api/ClusterTestExtensionsTest.java:
##
@@ -331,4 +344,111 @@ public void testControllerListenerName(ClusterInstance 
cluster) throws Execution
 assertEquals(1, 
admin.describeMetadataQuorum().quorumInfo().get().nodes().size());
 }
 }
+
+@ClusterTest(
+types = {Type.KRAFT, Type.CO_KRAFT},
+brokerSecurityProtocol = SecurityProtocol.SASL_PLAINTEXT,
+controllerSecurityProtocol = SecurityProtocol.SASL_PLAINTEXT,

Review Comment:
   Do we support `SASL_PLAINTEXT` on the controller? If not, could you please 
remove this config and open a jira for it?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] Revert Multiversioning Changes from 4.0 release. [kafka]

2025-01-07 Thread via GitHub


chia7712 commented on PR #18411:
URL: https://github.com/apache/kafka/pull/18411#issuecomment-2575386897

   @snehashisp could you please add the jira number to the title?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18412: Remove EmbeddedZookeeper [kafka]

2025-01-07 Thread via GitHub


mimaison merged PR #18399:
URL: https://github.com/apache/kafka/pull/18399


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (KAFKA-18384) Remove ZkAlterPartitionManager

2025-01-07 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-18384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-18384.

Fix Version/s: 4.0.0
   Resolution: Fixed

> Remove ZkAlterPartitionManager 
> ---
>
> Key: KAFKA-18384
> URL: https://issues.apache.org/jira/browse/KAFKA-18384
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: 黃竣陽
>Assignee: 黃竣陽
>Priority: Major
> Fix For: 4.0.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-18436) Revert KIP-891 from 4.0 release

2025-01-07 Thread Greg Harris (Jira)
Greg Harris created KAFKA-18436:
---

 Summary: Revert KIP-891 from 4.0 release
 Key: KAFKA-18436
 URL: https://issues.apache.org/jira/browse/KAFKA-18436
 Project: Kafka
  Issue Type: Task
  Components: connect
Affects Versions: 4.0.0
Reporter: Greg Harris
Assignee: Snehashis Pal






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-18376) High CPU load when AsyncKafkaConsumer uses a small max poll value

2025-01-07 Thread Lianet Magrans (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-18376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lianet Magrans updated KAFKA-18376:
---
Labels: consumer-threading-refactor  (was: )

> High CPU load when AsyncKafkaConsumer uses a small max poll value
> -
>
> Key: KAFKA-18376
> URL: https://issues.apache.org/jira/browse/KAFKA-18376
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Reporter: Philip Nee
>Assignee: Philip Nee
>Priority: Major
>  Labels: consumer-threading-refactor
>
> We stress tested the AsyncConsumer using maxPoll = 5 and observed abnormally 
> high CPU usage.  Under normal usage (with defaults), the consumers on average 
> use around 10% of the CPU with 20mb/s byte rate, which is aligned with the 
> ClassicKafkaConsumer.  As we tested the consumer with a small max poll value, 
> we observed the CPU usage spikes to > 50% while the classic consumer stays at 
> around 10%.
>  
> _note: percentage of CPU usage may depend on the running pod hardware._
>  
> The profiling results show two major contributors to the CPU cycles:
>  # AsyncKafkaConsumer.updateFetchPosition (addAndGet & new 
> CheckAndUpdatePositionEvent())
>  # AbstractFetch.fetchablePartitions from the fetchrequestmanager
>  
> for AsyncKafkaConsumer.updateFetchPosition - It seems like
>  * UUID generation can become quite expensive. This is particularly 
> noticeable when creating a large number of events
>  * ConsumerUtils.getResult, which uses future.get() also consumes quite a bit 
> of CPU cycles
> for fetchablePartitions, FetchBuffer.bufferedPartitions, which uses Java 
> ConcurrentLinkedQueue.forEach, also consumes quite a bit of CPU.
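
A rough, self-contained sketch (not a rigorous benchmark) of the first point: 
UUID.randomUUID() draws from SecureRandom, so generating one id per event is far 
more expensive than, say, an AtomicLong counter:

```
import java.util.UUID;
import java.util.concurrent.atomic.AtomicLong;

public class IdCostDemo {
    public static void main(String[] args) {
        final int n = 1_000_000;

        long t0 = System.nanoTime();
        for (int i = 0; i < n; i++) {
            UUID.randomUUID();                 // goes through SecureRandom on every call
        }
        long uuidMs = (System.nanoTime() - t0) / 1_000_000;

        AtomicLong counter = new AtomicLong();
        t0 = System.nanoTime();
        for (int i = 0; i < n; i++) {
            counter.incrementAndGet();         // a cheap alternative id source
        }
        long counterMs = (System.nanoTime() - t0) / 1_000_000;

        System.out.printf("uuid: %d ms, counter: %d ms%n", uuidMs, counterMs);
    }
}
```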



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] KAFKA-18384 Remove ZkAlterPartitionManager [kafka]

2025-01-07 Thread via GitHub


mimaison commented on PR #18364:
URL: https://github.com/apache/kafka/pull/18364#issuecomment-2575863864

   Applied to 4.0: 
https://github.com/apache/kafka/commit/5bcbf247dfa85d4bdb552a59dcc00d0b7841c77b


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-13093: Log compaction should write new segments with record version v2 (KIP-724) [kafka]

2025-01-07 Thread via GitHub


junrao commented on code in PR #18321:
URL: https://github.com/apache/kafka/pull/18321#discussion_r1905814075


##
core/src/main/scala/kafka/log/UnifiedLog.scala:
##
@@ -509,8 +509,8 @@ class UnifiedLog(@volatile var logStartOffset: Long,
   }
 
   private def initializeLeaderEpochCache(): Unit = lock synchronized {
-leaderEpochCache = UnifiedLog.maybeCreateLeaderEpochCache(
-  dir, topicPartition, logDirFailureChannel, logIdent, leaderEpochCache, 
scheduler)
+leaderEpochCache = Some(UnifiedLog.createLeaderEpochCache(

Review Comment:
   We set `shouldReinitialize` to false when the replica is no longer needed. 
So, we can just set `leaderEpochCache` to null in that case.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (KAFKA-18217) Slow HWM/LSO update might have subtle effect on the consumer lag reporting

2025-01-07 Thread Lianet Magrans (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-18217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lianet Magrans updated KAFKA-18217:
---
Labels: consumer-threading-refactor  (was: )

> Slow HWM/LSO update might have subtle effect on the consumer lag reporting
> --
>
> Key: KAFKA-18217
> URL: https://issues.apache.org/jira/browse/KAFKA-18217
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, consumer
>Reporter: Philip Nee
>Priority: Minor
>  Labels: consumer-threading-refactor
>
> We've discovered the consumer lag metrics appear spiky for the 
> AsyncKafkaConsumer.  We examined how HWM/LSO is updated and measured the 
> cadence between the two consumers using the local examples. TL;DR - Consumer 
> Lag metrics can sometimes be off due to KAFKA-18216 and slowness of HWM/LSO 
> update.
>  
> Context: Fetcher performs multiple consumer lag measurements between two 
> HWM/LSO updates.  The closer together the HWM/LSO updates, the better the lag 
> measurement is, because
> lag = HWM/LSO - fetch position
> The elementary statistics show the behavioral differences between the 2 consumer 
> implementations.  The data will vary based on the platform running these 
> tests, so this is just for the reader's reference. (These are the outputs of 
> my custom script).  Both are measured by producing and consuming 200 million 
> records.
>  
> AsyncKafkaConsumer
> Updating 7179 HWM/LSO
> Average HWM/LSO increment: 3589.99
> Standard deviation of increment: 2381.07
> Average number of 'recording lag' count: 7.69
> Standard deviation of 'recording lag' count: 4.66
>  
> ClassicKafkaConsumer
> Updating 58418 HWM/LSO
> Average HWM/LSO increment 1223.02
> Standard deviation of increment: 532.52
> Average 'recording lag' count: 2.95
> Standard deviation of 'recording lag' count: 1.10



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] KAFKA-13093: Log compaction should write new segments with record version v2 (KIP-724) [kafka]

2025-01-07 Thread via GitHub


junrao commented on code in PR #18321:
URL: https://github.com/apache/kafka/pull/18321#discussion_r1905822853


##
core/src/main/scala/kafka/log/UnifiedLog.scala:
##
@@ -757,7 +769,8 @@ class UnifiedLog(@volatile var logStartOffset: Long,
  leaderEpoch: Int,
  requestLocal: Option[RequestLocal],
  verificationGuard: VerificationGuard,
- ignoreRecordSize: Boolean): LogAppendInfo = {
+ ignoreRecordSize: Boolean,
+ toMagic: Byte = RecordBatch.CURRENT_MAGIC_VALUE): 
LogAppendInfo = {

Review Comment:
   There is some validation logic for v0/v1 records in `LogValidator`, which is 
potentially useful for ingesting records of old format. So, we can keep this as 
it is for now.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18436: Revert Multiversioning Changes from 4.0 release. [kafka]

2025-01-07 Thread via GitHub


gharris1727 merged PR #18411:
URL: https://github.com/apache/kafka/pull/18411


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (KAFKA-18120) KIP-891: Support for multiple versions of connect plugins.

2025-01-07 Thread Greg Harris (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-18120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Harris updated KAFKA-18120:

Affects Version/s: 4.1.0
   (was: 4.0.0)

> KIP-891: Support for multiple versions of connect plugins.
> --
>
> Key: KAFKA-18120
> URL: https://issues.apache.org/jira/browse/KAFKA-18120
> Project: Kafka
>  Issue Type: New Feature
>  Components: connect
>Affects Versions: 4.1.0
>Reporter: Snehashis Pal
>Assignee: Snehashis Pal
>Priority: Major
>
> Jira to track implementation of KIP-891 [KIP-891: Running multiple versions 
> of Connector plugins - Apache Kafka - Apache Software 
> Foundation|https://cwiki.apache.org/confluence/display/KAFKA/KIP-891%3A+Running+multiple+versions+of+Connector+plugins]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-18215) Accept plugin.version configurations for connector and converters

2025-01-07 Thread Greg Harris (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-18215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Harris updated KAFKA-18215:

Fix Version/s: (was: 4.0.0)

> Accept plugin.version configurations for connector and converters
> -
>
> Key: KAFKA-18215
> URL: https://issues.apache.org/jira/browse/KAFKA-18215
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Greg Harris
>Assignee: Snehashis Pal
>Priority: Major
> Fix For: 4.1.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-17675) Add tests to RaftEventSimulationTest

2025-01-07 Thread Alyssa Huang (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-17675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17910741#comment-17910741
 ] 

Alyssa Huang commented on KAFKA-17675:
--

Hey Mingdao, sorry for the late response. To make sure we have coverage for 
4.0, I'm adding a sanity-check test covering the ping-pong scenario in the 
KIP. We will still need a more generalized invariant to be added to the 
simulation test, which can be done after the 4.0 code freeze.

> Add tests to RaftEventSimulationTest
> 
>
> Key: KAFKA-17675
> URL: https://issues.apache.org/jira/browse/KAFKA-17675
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Alyssa Huang
>Assignee: Alyssa Huang
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-18215) Accept plugin.version configurations for connector and converters

2025-01-07 Thread Greg Harris (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-18215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17910735#comment-17910735
 ] 

Greg Harris commented on KAFKA-18215:
-

Reverted from 4.0 in KAFKA-18436

> Accept plugin.version configurations for connector and converters
> -
>
> Key: KAFKA-18215
> URL: https://issues.apache.org/jira/browse/KAFKA-18215
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Greg Harris
>Assignee: Snehashis Pal
>Priority: Major
> Fix For: 4.1.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-18437) Correct version of ShareUpdateValue record from v1 to v0

2025-01-07 Thread Andrew Schofield (Jira)
Andrew Schofield created KAFKA-18437:


 Summary: Correct version of ShareUpdateValue record from v1 to v0
 Key: KAFKA-18437
 URL: https://issues.apache.org/jira/browse/KAFKA-18437
 Project: Kafka
  Issue Type: Sub-task
Affects Versions: 4.0.0
Reporter: Andrew Schofield
Assignee: Andrew Schofield
 Fix For: 4.0.0


SHARE_UPDATE_RECORD_VALUE_VERSION should be 0. In AK 4.1, this is being fixed 
as part of a larger refactor to how all coordinator records are generated and 
managed, but in AK 4.0, a smaller fix is required.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-18439) Add invariant to Raft simulation test for prevote

2025-01-07 Thread Peter Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-18439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Lee reassigned KAFKA-18439:
-

Assignee: Peter Lee

> Add invariant to Raft simulation test for prevote
> -
>
> Key: KAFKA-18439
> URL: https://issues.apache.org/jira/browse/KAFKA-18439
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Alyssa Huang
>Assignee: Peter Lee
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] KAFKA-15599: Move SegmentPosition/TimingWheelExpirationService to raf… [kafka]

2025-01-07 Thread via GitHub


divijvaidya commented on code in PR #18094:
URL: https://github.com/apache/kafka/pull/18094#discussion_r1905940530


##
raft/src/main/java/org/apache/kafka/raft/SegmentPosition.java:
##
@@ -14,10 +14,12 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package kafka.raft
+package org.apache.kafka.raft;
 
-import org.apache.kafka.raft.OffsetMetadata
+public record SegmentPosition(long baseOffset, int relativePosition) 
implements OffsetMetadata {

Review Comment:
   IMO both SegmentPosition and OffsetMetadata should be in the storage layer. 
Ideally, layers above UnifiedLog should not know that there is a concept of a 
segment which is used to physically store data.
   
   Happy to punt this to a different PR for later so that we can at least start 
with conversion of scala to java here.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (KAFKA-15057) Use new interface ZstdBufferDecompressingStreamNoFinalizer from zstd-jni

2025-01-07 Thread Divij Vaidya (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Divij Vaidya updated KAFKA-15057:
-
Fix Version/s: (was: 4.0.0)

> Use new interface ZstdBufferDecompressingStreamNoFinalizer from zstd-jni
> 
>
> Key: KAFKA-15057
> URL: https://issues.apache.org/jira/browse/KAFKA-15057
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Affects Versions: 3.6.0
>Reporter: Divij Vaidya
>Assignee: Divij Vaidya
>Priority: Major
> Attachments: zstd-upgrade.xlsx
>
>
> h1. Background
> In Kafka's code, every batch of records is stored in an in-memory byte buffer. 
> For compressed workloads, this buffer contains data in compressed form. Before 
> writing it to the log, Kafka performs some validations such as ensuring that 
> offsets are monotonically increasing etc. To perform this validation, Kafka 
> needs to uncompress the data stored in byte buffer.
> For zstd compressed batches, Kafka uses ZstdInputStreamNoFinalizer interface 
> provided by the downstream zstd-jni library to perform decompression. 
> ZstdInputStreamNoFinalizer takes an InputStream as input and provides an 
> InputStream as output. Since Kafka stores the entire batch in a ByteBuffer, Kafka 
> wraps the ByteBuffer into an InputStream to satisfy the input contract for 
> ZstdInputStreamNoFinalizer.
> h1. Problem
> ZstdInputStreamNoFinalizer is not a good fit for our use case because we 
> already have the entire compressed data stored in a buffer. We don't have a 
> need for an interface which takes InputStream as an input. Our requirement is 
> for an interface which takes a ByteBuffer as an input and provides a stream 
> of uncompressed data as output. Prior to zstd-jni 1.5.5, no such interface 
> existed. Hence, we were forced to use ZstdInputStreamNoFinalizer.
> Usage of ZstdInputStreamNoFinalizer has the following problems:
> 1. When decompression of a batch is complete, we try to read another byte to 
> check if the actual batch size is equal to the declared batch size. This is done 
> at RecordIterator#next(). This extra call to read another byte leads to a JNI 
> call in the existing interface.
> 2. Since this interface requires input as an InputStream, we take the 
> ByteBuffer containing the compressed batch and convert it into an InputStream. 
> This interface internally uses an intermediate buffer to read data from this 
> InputStream in chunks. The chunk size is determined by the underlying zstd 
> library and hence we will allocate a new buffer with every batch. This leads 
> to the following transformation: ByteBuffer (compressed batch) -> InputStream 
> (compressed batch) -> data copy to intermediate ByteBuffer (chunk of 
> compressed batch) -> send chunk to zstd library for decompression -> refill 
> the intermediate buffer by copying the data to intermediate ByteBuffer (next 
> chunk of compressed batch)
> h1. Solution
> I have extended an interface in the downstream library zstd-jni to suit the 
> use case of Kafka. The new interface is called 
> ZstdBufferDecompressingStreamNoFinalizer. It provides an interface which 
> takes a ByteBuffer containing compressed data as input and provides an 
> InputStream as output. It solves the above problems as follows:
> 1. When we read the final decompressed frame, this interface sets a flag to 
> mark that all uncompressed data has been consumed. When RecordIterator#next() 
> tries to determine if the stream has ended, we simply read the flag and 
> hence, do not have to make a JNI call.
> 2. It does not require any buffer allocation for input. It takes the input 
> buffer and passes it across the JNI boundary without any intermediate 
> copying. Hence, we don't perform any buffer allocation.
> h1. References
> h2. Changes in downstream zstd-jni
> Add new interface - 
> [https://github.com/luben/zstd-jni/commit/d65490e8b8aadc4b59545755e55f7dd368fe8aa5]
> Bug fixes in new interface - 
> [https://github.com/luben/zstd-jni/commit/8bf8066438785ce55b62fc7e6816faafe1e3b39e]
>  
> [https://github.com/luben/zstd-jni/commit/100c434dfcec17a865ca2c2b844afe1046ce1b10]
> [https://github.com/luben/zstd-jni/commit/355b8511a2967d097a619047a579930cac2ccd9d]
>  
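
A minimal usage sketch, assuming the zstd-jni 1.5.5+ API described above (class 
and method names taken from the linked commits; error handling elided):

```
import com.github.luben.zstd.ZstdBufferDecompressingStreamNoFinalizer;

import java.io.IOException;
import java.nio.ByteBuffer;

public class ZstdBufferDemo {
    // decompresses 'compressedBatch' into 'out' without copying the input into an
    // intermediate buffer: the source ByteBuffer crosses the JNI boundary directly
    public static void decompress(ByteBuffer compressedBatch, ByteBuffer out) throws IOException {
        try (ZstdBufferDecompressingStreamNoFinalizer stream =
                 new ZstdBufferDecompressingStreamNoFinalizer(compressedBatch)) {
            while (stream.hasRemaining() && out.hasRemaining()) {
                stream.read(out);  // fills 'out' with uncompressed bytes
            }
        }
    }
}
```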



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15057) Use new interface ZstdBufferDecompressingStreamNoFinalizer from zstd-jni

2025-01-07 Thread Divij Vaidya (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Divij Vaidya updated KAFKA-15057:
-
Labels: performance  (was: )

> Use new interface ZstdBufferDecompressingStreamNoFinalizer from zstd-jni
> 
>
> Key: KAFKA-15057
> URL: https://issues.apache.org/jira/browse/KAFKA-15057
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Affects Versions: 3.6.0
>Reporter: Divij Vaidya
>Assignee: Divij Vaidya
>Priority: Major
>  Labels: performance
> Attachments: zstd-upgrade.xlsx
>
>
> h1. Background
> In Kafka's code, every batch of records is stored in an in-memory byte buffer. 
> For compressed workloads, this buffer contains data in compressed form. Before 
> writing it to the log, Kafka performs some validations such as ensuring that 
> offsets are monotonically increasing etc. To perform this validation, Kafka 
> needs to uncompress the data stored in byte buffer.
> For zstd compressed batches, Kafka uses ZstdInputStreamNoFinalizer interface 
> provided by the downstream zstd-jni library to perform decompression. 
> ZstdInputStreamNoFinalizer takes an InputStream as input and provides an 
> InputStream as output. Since Kafka stores the entire batch in a ByteBuffer, Kafka 
> wraps the ByteBuffer into an InputStream to satisfy the input contract for 
> ZstdInputStreamNoFinalizer.
> h1. Problem
> ZstdInputStreamNoFinalizer is not a good fit for our use case because we 
> already have the entire compressed data stored in a buffer. We don't have a 
> need for an interface which takes InputStream as an input. Our requirement is 
> for an interface which takes a ByteBuffer as an input and provides a stream 
> of uncompressed data as output. Prior to zstd-jni 1.5.5, no such interface 
> existed. Hence, we were forced to use ZstdInputStreamNoFinalizer.
> Usage of ZstdInputStreamNoFinalizer has the following problems:
> 1. When decompression of a batch is complete, we try to read another byte to 
> check if the actual batch size is equal to the declared batch size. This is done 
> at RecordIterator#next(). This extra call to read another byte leads to a JNI 
> call in the existing interface.
> 2. Since this interface requires input as an InputStream, we take the 
> ByteBuffer containing the compressed batch and convert it into an InputStream. 
> This interface internally uses an intermediate buffer to read data from this 
> InputStream in chunks. The chunk size is determined by the underlying zstd 
> library and hence we will allocate a new buffer with every batch. This leads 
> to the following transformation: ByteBuffer (compressed batch) -> InputStream 
> (compressed batch) -> data copy to intermediate ByteBuffer (chunk of 
> compressed batch) -> send chunk to zstd library for decompression -> refill 
> the intermediate buffer by copying the data to intermediate ByteBuffer (next 
> chunk of compressed batch)
> h1. Solution
> I have extended an interface in the downstream library zstd-jni to suit the 
> use case of Kafka. The new interface is called 
> ZstdBufferDecompressingStreamNoFinalizer. It provides an interface which 
> takes a ByteBuffer containing compressed data as input and provides an 
> InputStream as output. It solves the above problems as follows:
> 1. When we read the final decompressed frame, this interface sets a flag to 
> mark that all uncompressed data has been consumed. When RecordIterator#next() 
> tries to determine if the stream has ended, we simply read the flag and 
> hence, do not have to make a JNI call.
> 2. It does not require any buffer allocation for input. It takes the input 
> buffer and passes it across the JNI boundary without any intermediate 
> copying. Hence, we don't perform any buffer allocation.
> h1. References
> h2. Changes in downstream zstd-jni
> Add new interface - 
> [https://github.com/luben/zstd-jni/commit/d65490e8b8aadc4b59545755e55f7dd368fe8aa5]
> Bug fixes in new interface - 
> [https://github.com/luben/zstd-jni/commit/8bf8066438785ce55b62fc7e6816faafe1e3b39e]
>  
> [https://github.com/luben/zstd-jni/commit/100c434dfcec17a865ca2c2b844afe1046ce1b10]
> [https://github.com/luben/zstd-jni/commit/355b8511a2967d097a619047a579930cac2ccd9d]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] KAFKA-17921: Support SASL_PLAINTEXT protocol with java.security.auth.login.config [kafka]

2025-01-07 Thread via GitHub


chia7712 commented on PR #17671:
URL: https://github.com/apache/kafka/pull/17671#issuecomment-2576076093

   The failed tests are unrelated, and they pass on my local machine.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (KAFKA-18436) Revert KIP-891 from 4.0 release

2025-01-07 Thread Greg Harris (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-18436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Harris resolved KAFKA-18436.
-
Resolution: Fixed

> Revert KIP-891 from 4.0 release
> ---
>
> Key: KAFKA-18436
> URL: https://issues.apache.org/jira/browse/KAFKA-18436
> Project: Kafka
>  Issue Type: Task
>  Components: connect
>Affects Versions: 4.0.0
>Reporter: Greg Harris
>Assignee: Snehashis Pal
>Priority: Major
> Fix For: 4.0.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-18436) Revert KIP-891 from 4.0 release

2025-01-07 Thread Greg Harris (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-18436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Harris updated KAFKA-18436:

Fix Version/s: 4.0.0

> Revert KIP-891 from 4.0 release
> ---
>
> Key: KAFKA-18436
> URL: https://issues.apache.org/jira/browse/KAFKA-18436
> Project: Kafka
>  Issue Type: Task
>  Components: connect
>Affects Versions: 4.0.0
>Reporter: Greg Harris
>Assignee: Snehashis Pal
>Priority: Major
> Fix For: 4.0.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-18401) Transaction version 2 does not support commit transaction without records

2025-01-07 Thread Justine Olshan (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-18401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17910745#comment-17910745
 ] 

Justine Olshan commented on KAFKA-18401:


> where the EndTxnRequest is not sent if no partitions or offsets have been 
> successfully added to the transaction. If this behavior is expected, we 
> should document it and let user know this change.
We cannot do this because we don't know for certain that the transaction 
actually added the partitions on the coordinator side, due to the two-hop 
process. We should always be able to send an EndTransaction abort regardless of 
whether partitions were added or not (in the case of failures).

I guess under this constraint we don't allow an EndTxn commit because we can't 
say for certain whether a partition has been added. I'm trying to understand 
the use case for committing a transaction with no content. 

> Transaction version 2 does not support commit transaction without records
> -
>
> Key: KAFKA-18401
> URL: https://issues.apache.org/jira/browse/KAFKA-18401
> Project: Kafka
>  Issue Type: Bug
>Reporter: Kuan Po Tseng
>Assignee: Kuan Po Tseng
>Priority: Major
>
> This issue was observed when implementing 
> https://issues.apache.org/jira/browse/KAFKA-18206.
> In short, transaction version 2 does not support committing a transaction 
> without sending any records, while transaction versions 0 and 1 do support 
> this kind of scenario.
> Committing transactions without sending any records is fine when using 
> transaction versions 0 or 1 because the producer won't send an EndTxnRequest 
> to the broker [0]. However, with transaction version 2, the producer still 
> sends an EndTxnRequest to the broker while the txn state on the transaction 
> coordinator is still EMPTY, resulting in an error from the broker.
> This issue can be reproduced with the test in below. I'm unsure if this 
> behavior is expected. If it's not, one potential fix could be to follow the 
> approach used in TV_0 and TV_1, where the EndTxnRequest is not sent if no 
> partitions or offsets have been successfully added to the transaction. If 
> this behavior is expected, we should document it and let user know this 
> change.
> {code:java}
> @ClusterTests({
>     @ClusterTest(brokers = 3, features = {
>         @ClusterFeature(feature = Feature.TRANSACTION_VERSION, version = 0)}),
>     @ClusterTest(brokers = 3, features = {
>         @ClusterFeature(feature = Feature.TRANSACTION_VERSION, version = 1)}),
>     @ClusterTest(brokers = 3, features = {
>         @ClusterFeature(feature = Feature.TRANSACTION_VERSION, version = 2)})
> })
> public void testProducerEndTransaction2(ClusterInstance cluster) throws InterruptedException {
>     Map<String, Object> properties = new HashMap<>();
>     properties.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "foobar");
>     properties.put(ProducerConfig.CLIENT_ID_CONFIG, "test");
>     properties.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
>     try (Producer<byte[], byte[]> producer1 = cluster.producer(properties)) {
>         producer1.initTransactions();
>         producer1.beginTransaction();
>         producer1.commitTransaction(); // In TV_2, we'll get InvalidTxnStateException
>     }
> }
> {code}
> Another test case, which is essentially the same as the previous one, starts 
> with a transaction that includes records, and then proceeds to start the next 
> transaction. When using transaction version 2, we encounter an error, but 
> this time it's a different error from the one seen in the previous case.
> {code:java}
> @ClusterTests({
>     @ClusterTest(brokers = 3, features = {
>         @ClusterFeature(feature = Feature.TRANSACTION_VERSION, version = 0)}),
>     @ClusterTest(brokers = 3, features = {
>         @ClusterFeature(feature = Feature.TRANSACTION_VERSION, version = 1)}),
>     @ClusterTest(brokers = 3, features = {
>         @ClusterFeature(feature = Feature.TRANSACTION_VERSION, version = 2)})
> })
> public void testProducerEndTransaction(ClusterInstance cluster) {
>     Map<String, Object> properties = new HashMap<>();
>     properties.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "foobar");
>     properties.put(ProducerConfig.CLIENT_ID_CONFIG, "test");
>     properties.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
>     try (Producer<byte[], byte[]> producer1 = cluster.producer(properties)) {
>         producer1.initTransactions();
>         producer1.beginTransaction();
>         producer1.send(new ProducerRecord<>("test", "key".getBytes(), "value".getBytes()));
>         producer1.commitTransaction();
>         producer1.beginTransaction();
>         produce

[jira] [Updated] (KAFKA-16164) Pre-Vote 4.0 changes

2025-01-07 Thread Alyssa Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alyssa Huang updated KAFKA-16164:
-
Summary: Pre-Vote 4.0 changes  (was: Pre-Vote)

> Pre-Vote 4.0 changes
> 
>
> Key: KAFKA-16164
> URL: https://issues.apache.org/jira/browse/KAFKA-16164
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Alyssa Huang
>Assignee: Alyssa Huang
>Priority: Major
>
> Implementing pre-vote as described in 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-996%3A+Pre-Vote



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] KAFKA-18399 Remove ZooKeeper from KafkaApis (part 2) [kafka]

2025-01-07 Thread via GitHub


TaiJuWu opened a new pull request, #18422:
URL: https://github.com/apache/kafka/pull/18422

   KRaft mode does not support the following RPCs, so this PR removes their 
handlers from `KafkaApis` (a rough sketch of the result follows the list):
   
   - case ApiKeys.LEADER_AND_ISR => handleLeaderAndIsrRequest(request)
   - case ApiKeys.STOP_REPLICA => handleStopReplicaRequest(request)
   - case ApiKeys.UPDATE_METADATA => handleUpdateMetadataRequest(request, requestLocal)
   - case ApiKeys.CONTROLLED_SHUTDOWN => handleControlledShutdownRequest(request)
   - case ApiKeys.ENVELOPE => handleEnvelope(request, requestLocal)
   - case ApiKeys.ALLOCATE_PRODUCER_IDS => handleAllocateProducerIdsRequest(request)
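   
   For illustration only, a rough sketch (not the actual patch) of what the 
dispatch in `KafkaApis.handle` looks like once the ZK-only cases are dropped; 
the real method handles many more api keys and error paths:
   
   ```scala
   // Simplified sketch: removed cases now fall through to the default branch.
   request.header.apiKey match {
     case ApiKeys.PRODUCE => handleProduceRequest(request, requestLocal)
     case ApiKeys.FETCH => handleFetchRequest(request)
     // LEADER_AND_ISR, STOP_REPLICA, UPDATE_METADATA, CONTROLLED_SHUTDOWN,
     // ENVELOPE and ALLOCATE_PRODUCER_IDS no longer appear here.
     case _ => throw new IllegalStateException(
       s"No handler for request api key ${request.header.apiKey}")
   }
   ```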
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


