Re: [PR] KAFKA-18936: Fix share fetch when records are larger than max bytes [kafka]

2025-03-12 Thread via GitHub


AndrewJSchofield merged PR #19145:
URL: https://github.com/apache/kafka/pull/19145


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18422 Adjust Kafka client upgrade path section [kafka]

2025-03-12 Thread via GitHub


mingdaoy commented on code in PR #19119:
URL: https://github.com/apache/kafka/pull/19119#discussion_r1990997212


##
docs/upgrade.html:
##
@@ -19,6 +19,10 @@
 
 

Re: [PR] KAFKA-18932: Removed usage of partition max bytes from share fetch requests [kafka]

2025-03-12 Thread via GitHub


adixitconfluent commented on code in PR #19148:
URL: https://github.com/apache/kafka/pull/19148#discussion_r1990559413


##
server/src/main/java/org/apache/kafka/server/share/fetch/PartitionRotateStrategy.java:
##
@@ -80,20 +81,8 @@ static LinkedHashMap<TopicIdPartition, Integer> rotateRoundRobin(
            return topicIdPartitions;
        }

-        // TODO: Once the partition max bytes is removed then the partition will be a linked list and rotation
-        //  will be a simple operation. Else consider using ImplicitLinkedHashCollection.
-        LinkedHashMap<TopicIdPartition, Integer> suffixPartitions = new LinkedHashMap<>(rotateAt);
-        LinkedHashMap<TopicIdPartition, Integer> rotatedPartitions = new LinkedHashMap<>(topicIdPartitions.size());
-        int i = 0;
-        for (Map.Entry<TopicIdPartition, Integer> entry : topicIdPartitions.entrySet()) {
-            if (i < rotateAt) {
-                suffixPartitions.put(entry.getKey(), entry.getValue());
-            } else {
-                rotatedPartitions.put(entry.getKey(), entry.getValue());
-            }
-            i++;
-        }
-        rotatedPartitions.putAll(suffixPartitions);
+        List<TopicIdPartition> rotatedPartitions = new ArrayList<>(topicIdPartitions);
+        Collections.rotate(rotatedPartitions, -1 * rotateAt);

Review Comment:
   I have added the comment `We want the elements from the end of the list to move left by the distance provided i.e. if the original list is [1,2,3], and we want to rotate it by 1, we want the output as [2,3,1] and not [3,1,2]. Hence, we need negation of distance here` in the code. Regarding why I multiplied `rotateAt` by `-1` instead of writing `-rotateAt`: it's because `-1 * rotateAt` looks like cleaner code to me.
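   (Aside: the rotation semantics being discussed can be verified with plain JDK collections; a minimal, self-contained sketch, no Kafka types assumed:)

   ```java
   import java.util.ArrayList;
   import java.util.Collections;
   import java.util.List;

   public class RotateDemo {
       public static void main(String[] args) {
           List<Integer> partitions = new ArrayList<>(List.of(1, 2, 3));

           // A negative distance moves elements toward the front:
           // [1, 2, 3] rotated by -1 becomes [2, 3, 1].
           Collections.rotate(partitions, -1 * 1);
           System.out.println(partitions); // [2, 3, 1]

           // A positive distance would move elements toward the back instead:
           // [1, 2, 3] rotated by +1 becomes [3, 1, 2].
       }
   }
   ```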






Re: [PR] KAFKA-18932: Removed usage of partition max bytes from share fetch requests [kafka]

2025-03-12 Thread via GitHub


apoorvmittal10 commented on code in PR #19148:
URL: https://github.com/apache/kafka/pull/19148#discussion_r1991041594


##
server/src/main/java/org/apache/kafka/server/share/ErroneousAndValidPartitionData.java:
##
@@ -20,47 +20,48 @@
 import org.apache.kafka.common.TopicIdPartition;
 import org.apache.kafka.common.message.ShareFetchResponseData;
 import org.apache.kafka.common.protocol.Errors;
-import org.apache.kafka.common.requests.ShareFetchRequest;
 import org.apache.kafka.common.requests.ShareFetchResponse;

+import java.util.ArrayList;
 import java.util.HashMap;
+import java.util.List;
 import java.util.Map;

 /**
  * Helper class to return the erroneous partitions and valid partition data
  */
 public class ErroneousAndValidPartitionData {
     private final Map<TopicIdPartition, ShareFetchResponseData.PartitionData> erroneous;
-    private final Map<TopicIdPartition, ShareFetchRequest.SharePartitionData> validTopicIdPartitions;
+    private final List<TopicIdPartition> validTopicIdPartitions;

     public ErroneousAndValidPartitionData(Map<TopicIdPartition, ShareFetchResponseData.PartitionData> erroneous,
-                                          Map<TopicIdPartition, ShareFetchRequest.SharePartitionData> validTopicIdPartitions) {
+                                          List<TopicIdPartition> validTopicIdPartitions) {
         this.erroneous = erroneous;
         this.validTopicIdPartitions = validTopicIdPartitions;
     }

-    public ErroneousAndValidPartitionData(Map<TopicIdPartition, ShareFetchRequest.SharePartitionData> shareFetchData) {
+    public ErroneousAndValidPartitionData(List<TopicIdPartition> shareFetchData) {
         erroneous = new HashMap<>();
-        validTopicIdPartitions = new HashMap<>();
-        shareFetchData.forEach((topicIdPartition, sharePartitionData) -> {
+        validTopicIdPartitions = new ArrayList<>();
+        shareFetchData.forEach(topicIdPartition -> {
             if (topicIdPartition.topic() == null) {
                 erroneous.put(topicIdPartition, ShareFetchResponse.partitionResponse(topicIdPartition, Errors.UNKNOWN_TOPIC_ID));

Review Comment:
   Why? The problem is that the topic name doesn't exist, but we reply that the topic ID is unknown. Isn't that incorrect?






Re: [PR] KAFKA-18932: Removed usage of partition max bytes from share fetch requests [kafka]

2025-03-12 Thread via GitHub


apoorvmittal10 commented on code in PR #19148:
URL: https://github.com/apache/kafka/pull/19148#discussion_r1991048323


##
server/src/main/java/org/apache/kafka/server/share/fetch/PartitionRotateStrategy.java:
##
@@ -80,20 +81,11 @@ static LinkedHashMap<TopicIdPartition, Integer> rotateRoundRobin(
            return topicIdPartitions;
        }

-        // TODO: Once the partition max bytes is removed then the partition will be a linked list and rotation
-        //  will be a simple operation. Else consider using ImplicitLinkedHashCollection.
-        LinkedHashMap<TopicIdPartition, Integer> suffixPartitions = new LinkedHashMap<>(rotateAt);
-        LinkedHashMap<TopicIdPartition, Integer> rotatedPartitions = new LinkedHashMap<>(topicIdPartitions.size());
-        int i = 0;
-        for (Map.Entry<TopicIdPartition, Integer> entry : topicIdPartitions.entrySet()) {
-            if (i < rotateAt) {
-                suffixPartitions.put(entry.getKey(), entry.getValue());
-            } else {
-                rotatedPartitions.put(entry.getKey(), entry.getValue());
-            }
-            i++;
-        }
-        rotatedPartitions.putAll(suffixPartitions);
+        // We don't want to modify the original list, hence created a copy.
+        List<TopicIdPartition> rotatedPartitions = new ArrayList<>(topicIdPartitions);
+        // We want the elements from the end of the list to move left by the distance provided i.e. if the original list is [1,2,3],
+        // and we want to rotate it by 1, we want the output as [2,3,1] and not [3,1,2]. Hence, we need negation of distance here.

Review Comment:
   ```suggestion
           // Elements from the list should move left by the distance provided i.e. if the original list is [1,2,3],
           // and rotation is by 1, then output should be [2,3,1] and not [3,1,2]. Hence, negate the distance here.
   ```



##
server/src/test/java/org/apache/kafka/server/share/fetch/PartitionRotateStrategyTest.java:
##
@@ -35,64 +36,64 @@ public class PartitionRotateStrategyTest {
     @Test
     public void testRoundRobinStrategy() {
         PartitionRotateStrategy strategy = PartitionRotateStrategy.type(StrategyType.ROUND_ROBIN);
-        LinkedHashMap<TopicIdPartition, Integer> partitions = createPartitions(3);
+        List<TopicIdPartition> partitions = createPartitions(3);

-        LinkedHashMap<TopicIdPartition, Integer> result = strategy.rotate(partitions, new PartitionRotateMetadata(1));
+        List<TopicIdPartition> result = strategy.rotate(partitions, new PartitionRotateMetadata(1));
         assertEquals(3, result.size());
-        validateRotatedMapEquals(partitions, result, 1);
+        validateRotatedListEquals(partitions, result, 1);

         // Session epoch is greater than the number of partitions.
         result = strategy.rotate(partitions, new PartitionRotateMetadata(5));
         assertEquals(3, result.size());
-        validateRotatedMapEquals(partitions, result, 2);
+        validateRotatedListEquals(partitions, result, 2);

         // Session epoch is at Integer.MAX_VALUE.
         result = strategy.rotate(partitions, new PartitionRotateMetadata(Integer.MAX_VALUE));
         assertEquals(3, result.size());
-        validateRotatedMapEquals(partitions, result, 1);
+        validateRotatedListEquals(partitions, result, 1);

         // No rotation at same size as epoch.
         result = strategy.rotate(partitions, new PartitionRotateMetadata(3));
         assertEquals(3, result.size());
-        validateRotatedMapEquals(partitions, result, 0);
+        validateRotatedListEquals(partitions, result, 0);
     }

     @Test
     public void testRoundRobinStrategyWithSpecialSessionEpochs() {
         PartitionRotateStrategy strategy = PartitionRotateStrategy.type(StrategyType.ROUND_ROBIN);

-        LinkedHashMap<TopicIdPartition, Integer> partitions = createPartitions(3);
-        LinkedHashMap<TopicIdPartition, Integer> result = strategy.rotate(
+        List<TopicIdPartition> partitions = createPartitions(3);
+        List<TopicIdPartition> result = strategy.rotate(
             partitions,
             new PartitionRotateMetadata(ShareRequestMetadata.INITIAL_EPOCH));
         assertEquals(3, result.size());
-        validateRotatedMapEquals(partitions, result, 0);
+        validateRotatedListEquals(partitions, result, 0);

         result = strategy.rotate(
             partitions,
             new PartitionRotateMetadata(ShareRequestMetadata.FINAL_EPOCH));
         assertEquals(3, result.size());
-        validateRotatedMapEquals(partitions, result, 0);
+        validateRotatedListEquals(partitions, result, 0);
     }

     @Test
     public void testRoundRobinStrategyWithEmptyPartitions() {
         PartitionRotateStrategy strategy = PartitionRotateStrategy.type(StrategyType.ROUND_ROBIN);
         // Empty partitions.
-        LinkedHashMap<TopicIdPartition, Integer> result = strategy.rotate(new LinkedHashMap<>(), new PartitionRotateMetadata(5));
+        List<TopicIdPartition> result = strategy.rotate(new ArrayList<>(), new PartitionRotateMetadata(5));
         // The result should be empty.
         assertTrue(result.isEmpty());
     }

     /**
-     * Create an ordered map of TopicIdPartition to partition max bytes.
+

Re: [PR] KAFKA-18932: Removed usage of partition max bytes from share fetch requests [kafka]

2025-03-12 Thread via GitHub


adixitconfluent commented on PR #19148:
URL: https://github.com/apache/kafka/pull/19148#issuecomment-2717068996

   @apoorvmittal10 , I have addressed your comments. Please review when you 
can. Also, I had to do a force push to fix the branch history because it got 
corrupted by a merge commit performed by IntelliJ. That's why you see so many 
labels in the PR. If possible, can you remove the extra labels `streams`, 
`producer`, `tools`, `connect`, `kraft`, `mirror-maker-2`, `storage`, 
`tiered-storage`, `build` and `generator`? 
   Thanks!





Re: [PR] KAFKA-18422 Adjust Kafka client upgrade path section [kafka]

2025-03-12 Thread via GitHub


dajac commented on code in PR #19119:
URL: https://github.com/apache/kafka/pull/19119#discussion_r1990907805


##
docs/upgrade.html:
##
@@ -19,6 +19,10 @@
 
 

[jira] [Created] (KAFKA-18964) Allow to set weights for controller nodes for leader election

2025-03-12 Thread Luke Chen (Jira)
Luke Chen created KAFKA-18964:
-

 Summary: Allow to set weights for controller nodes for leader 
election
 Key: KAFKA-18964
 URL: https://issues.apache.org/jira/browse/KAFKA-18964
 Project: Kafka
  Issue Type: Improvement
Reporter: Luke Chen


In a stretch cluster environment, the nodes are located in different data 
centers for disaster recovery, so the controller nodes in the backup cluster 
should serve as followers. Only when a disaster happens should the controller 
nodes in the backup cluster be elected as leader. In the current design, the 
candidate node is chosen randomly. We can consider applying a "weight" to the 
controller nodes to achieve the situation mentioned above.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] KAFKA-18422 Adjust Kafka client upgrade path section [kafka]

2025-03-12 Thread via GitHub


mingdaoy commented on code in PR #19119:
URL: https://github.com/apache/kafka/pull/19119#discussion_r1991148865


##
docs/upgrade.html:
##
@@ -29,7 +31,26 @@ Notable changes in 4
 
 
 
-Upgrading to 4.0.0 from any version 3.3.x through 3.9.x
+
+Upgrading to 4.0.0
+
+Upgrading Clients to 4.0.0
+
+For a rolling upgrade:
+
+
+Upgrade the clients one at a time: shut down the client, update the code, and restart it.
+For the Kafka client upgrade path, note that many deprecated APIs were removed in Kafka 4.0. Additionally, upgrading directly to 4.x from certain versions is not feasible.
+For more information, please refer to <a href="https://cwiki.apache.org/confluence/x/y4kgF">KIP-1124</a>.

Review Comment:
   
![image](https://github.com/user-attachments/assets/11687c39-b3eb-4086-b4cf-70093a6f97c9)
   






Re: [PR] KAFKA-18422 Adjust Kafka client upgrade path section [kafka]

2025-03-12 Thread via GitHub


ijuma commented on code in PR #19119:
URL: https://github.com/apache/kafka/pull/19119#discussion_r1991162921


##
docs/upgrade.html:
##
@@ -29,7 +31,26 @@ Notable changes in 4
 
 
 
-Upgrading to 4.0.0 from any version 3.3.x through 3.9.x
+
+Upgrading to 4.0.0
+
+Upgrading Clients to 4.0.0
+
+For a rolling upgrade:
+
+
+Upgrade the clients one at a time: shut down the client, update the code, and restart it.
+For the Kafka client upgrade path, note that many deprecated APIs were removed in Kafka 4.0. Additionally, upgrading directly to 4.x from certain versions is not feasible.
+For more information, please refer to <a href="https://cwiki.apache.org/confluence/x/y4kgF">KIP-1124</a>.

Review Comment:
   We don't need to say what's not feasible, we should just say what's 
supported.






Re: [PR] KAFKA-18422 Adjust Kafka client upgrade path section [kafka]

2025-03-12 Thread via GitHub


mingdaoy commented on code in PR #19119:
URL: https://github.com/apache/kafka/pull/19119#discussion_r1991184428


##
docs/upgrade.html:
##
@@ -29,7 +31,26 @@ Notable changes in 4
 
 
 
-Upgrading to 4.0.0 from any version 3.3.x through 3.9.x
+
+Upgrading to 4.0.0
+
+Upgrading Clients to 4.0.0
+
+For a rolling upgrade:
+
+
+Upgrade the clients one at a time: shut down the client, update the code, and restart it.
+For the Kafka client upgrade path, note that many deprecated APIs were removed in Kafka 4.0. Additionally, upgrading directly to 4.x from certain versions is not feasible.
+For more information, please refer to <a href="https://cwiki.apache.org/confluence/x/y4kgF">KIP-1124</a>.

Review Comment:
   
![image](https://github.com/user-attachments/assets/7a2b2585-54c0-4c06-a943-ca516a33ee52)
   






Re: [PR] KAFKA-18932: Removed usage of partition max bytes from share fetch requests [kafka]

2025-03-12 Thread via GitHub


adixitconfluent commented on code in PR #19148:
URL: https://github.com/apache/kafka/pull/19148#discussion_r1991101107


##
server/src/main/java/org/apache/kafka/server/share/ErroneousAndValidPartitionData.java:
##
@@ -20,47 +20,48 @@
 import org.apache.kafka.common.TopicIdPartition;
 import org.apache.kafka.common.message.ShareFetchResponseData;
 import org.apache.kafka.common.protocol.Errors;
-import org.apache.kafka.common.requests.ShareFetchRequest;
 import org.apache.kafka.common.requests.ShareFetchResponse;

+import java.util.ArrayList;
 import java.util.HashMap;
+import java.util.List;
 import java.util.Map;

 /**
  * Helper class to return the erroneous partitions and valid partition data
  */
 public class ErroneousAndValidPartitionData {
     private final Map<TopicIdPartition, ShareFetchResponseData.PartitionData> erroneous;
-    private final Map<TopicIdPartition, ShareFetchRequest.SharePartitionData> validTopicIdPartitions;
+    private final List<TopicIdPartition> validTopicIdPartitions;

     public ErroneousAndValidPartitionData(Map<TopicIdPartition, ShareFetchResponseData.PartitionData> erroneous,
-                                          Map<TopicIdPartition, ShareFetchRequest.SharePartitionData> validTopicIdPartitions) {
+                                          List<TopicIdPartition> validTopicIdPartitions) {
         this.erroneous = erroneous;
         this.validTopicIdPartitions = validTopicIdPartitions;
     }

-    public ErroneousAndValidPartitionData(Map<TopicIdPartition, ShareFetchRequest.SharePartitionData> shareFetchData) {
+    public ErroneousAndValidPartitionData(List<TopicIdPartition> shareFetchData) {
         erroneous = new HashMap<>();
-        validTopicIdPartitions = new HashMap<>();
-        shareFetchData.forEach((topicIdPartition, sharePartitionData) -> {
+        validTopicIdPartitions = new ArrayList<>();
+        shareFetchData.forEach(topicIdPartition -> {
             if (topicIdPartition.topic() == null) {
                 erroneous.put(topicIdPartition, ShareFetchResponse.partitionResponse(topicIdPartition, Errors.UNKNOWN_TOPIC_ID));

Review Comment:
   If we are unable to resolve a topic name using its topic ID, we return this error. IIRC, there is a `MetadataCache` object through which we get the `topicIdsToNames` map, and we check there whether the topic ID exists.
   You can find similar code in the handling of the regular fetch request [here](https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/KafkaApis.scala#L557) and [here](https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/KafkaApis.scala#L574).
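   (For readers following the thread, a rough, hypothetical sketch of the lookup pattern described above; plain JDK types only, with `topicIdsToNames` standing in for the map obtained from the metadata cache:)

   ```java
   import java.util.List;
   import java.util.Map;

   public class TopicIdResolutionDemo {
       // Hypothetical: resolve each requested topic ID against the cache's
       // id-to-name map; IDs that cannot be resolved are treated as erroneous
       // (the UNKNOWN_TOPIC_ID case discussed above).
       static void classify(List<String> requestedTopicIds, Map<String, String> topicIdsToNames) {
           for (String topicId : requestedTopicIds) {
               String topicName = topicIdsToNames.get(topicId);
               if (topicName == null) {
                   System.out.println(topicId + " -> erroneous (UNKNOWN_TOPIC_ID)");
               } else {
                   System.out.println(topicId + " -> valid (" + topicName + ")");
               }
           }
       }

       public static void main(String[] args) {
           classify(List.of("id-1", "id-2"), Map.of("id-1", "topic-a"));
       }
   }
   ```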






Re: [PR] KAFKA-18422 Adjust Kafka client upgrade path section [kafka]

2025-03-12 Thread via GitHub


mingdaoy commented on code in PR #19119:
URL: https://github.com/apache/kafka/pull/19119#discussion_r1991109201


##
docs/upgrade.html:
##
@@ -29,7 +31,26 @@ Notable changes in 4
 
 
 
-Upgrading to 4.0.0 from any version 3.3.x through 3.9.x
+
+Upgrading to 4.0.0
+
+Upgrading Clients to 4.0.0
+
+For a rolling upgrade:
+
+
+Upgrade the clients one at a time: shut down the client, update the code, and restart it.
+For the Kafka client upgrade path, note that many deprecated APIs were removed in Kafka 4.0. Additionally, upgrading directly to 4.x from certain versions is not feasible.
+For more information, please refer to <a href="https://cwiki.apache.org/confluence/x/y4kgF">KIP-1124</a>.

Review Comment:
   I changed it to the following:
   
   > For the Kafka client upgrade path, note that many deprecated APIs were removed in Kafka 4.0. Additionally, upgrading directly **from certain versions, such as 2.0 clients (or older)**, to 4.x is not feasible. **2.1 is the oldest supported version compatible with 4.0.**






Re: [PR] KAFKA-18422 Adjust Kafka client upgrade path section [kafka]

2025-03-12 Thread via GitHub


mingdaoy commented on code in PR #19119:
URL: https://github.com/apache/kafka/pull/19119#discussion_r1991166491


##
docs/upgrade.html:
##
@@ -52,9 +73,6 @@ Upgrading to 4.0.0 from any vers
 Every <a href="https://github.com/apache/kafka/blob/trunk/server-common/src/main/java/org/apache/kafka/server/common/MetadataVersion.java">MetadataVersion</a>
 has a boolean parameter that indicates if there are metadata changes (i.e. IBP_4_0_IV1(23, "4.0", "IV1", true) means this version has metadata changes).
 Given your current and target versions, a downgrade is only possible if there are no metadata changes in the versions between.
-For the Kafka client upgrade path, note that many deprecated APIs were removed in Kafka 4.0. Additionally, upgrading directly to 4.x from certain versions is not feasible.
-For more information, please refer to <a href="https://cwiki.apache.org/confluence/x/y4kgF">KIP-1124</a>.
-
 
 
 Notable changes in 4.0.0

Review Comment:
   
![image](https://github.com/user-attachments/assets/6d0acb11-6c34-4f88-9e3e-d3d4aa8b60d3)
   






Re: [PR] KAFKA-14484: Move UnifiedLog to storage module [kafka]

2025-03-12 Thread via GitHub


mimaison commented on code in PR #19030:
URL: https://github.com/apache/kafka/pull/19030#discussion_r1991245995


##
storage/src/main/java/org/apache/kafka/storage/internals/log/UnifiedLog.java:
##
@@ -55,6 +113,2298 @@ public class UnifiedLog {
     public static final String STRAY_DIR_SUFFIX = LogFileUtils.STRAY_DIR_SUFFIX;
     public static final long UNKNOWN_OFFSET = LocalLog.UNKNOWN_OFFSET;

+    // For compatibility, metrics are defined to be under `Log` class
+    private final KafkaMetricsGroup metricsGroup = new KafkaMetricsGroup("kafka.log", "Log");
+
+    /* A lock that guards all modifications to the log */
+    private final Object lock = new Object();
+    private final Map<String, Map<String, String>> metricNames = new HashMap<>();
+
+    // localLog The LocalLog instance containing non-empty log segments recovered from disk
+    private final LocalLog localLog;
+    private final BrokerTopicStats brokerTopicStats;
+    private final ProducerStateManager producerStateManager;
+    private final boolean remoteStorageSystemEnable;
+    private final ScheduledFuture<?> producerExpireCheck;
+    private final int producerIdExpirationCheckIntervalMs;
+    private final String logIdent;
+    private final Logger logger;
+    private final Logger futureTimestampLogger;
+    private final LogValidator.MetricsRecorder validatorMetricsRecorder;
+
+    /* The earliest offset which is part of an incomplete transaction. This is used to compute the
+     * last stable offset (LSO) in ReplicaManager. Note that it is possible that the "true" first unstable offset
+     * gets removed from the log (through record or segment deletion). In this case, the first unstable offset
+     * will point to the log start offset, which may actually be either part of a completed transaction or not
+     * part of a transaction at all. However, since we only use the LSO for the purpose of restricting the
+     * read_committed consumer to fetching decided data (i.e. committed, aborted, or non-transactional), this
+     * temporary abuse seems justifiable and saves us from scanning the log after deletion to find the first offsets
+     * of each ongoing transaction in order to compute a new first unstable offset. It is possible, however,
+     * that this could result in disagreement between replicas depending on when they began replicating the log.
+     * In the worst case, the LSO could be seen by a consumer to go backwards.
+     */
+    private volatile Optional<LogOffsetMetadata> firstUnstableOffsetMetadata = Optional.empty();
+    private volatile Optional<PartitionMetadataFile> partitionMetadataFile = Optional.empty();
+    // This is the offset(inclusive) until which segments are copied to the remote storage.
+    private volatile long highestOffsetInRemoteStorage = -1L;
+
+    /* Keep track of the current high watermark in order to ensure that segments containing offsets at or above it are
+     * not eligible for deletion. This means that the active segment is only eligible for deletion if the high watermark
+     * equals the log end offset (which may never happen for a partition under consistent load). This is needed to
+     * prevent the log start offset (which is exposed in fetch responses) from getting ahead of the high watermark.
+     */
+    private volatile LogOffsetMetadata highWatermarkMetadata;
+    private volatile long localLogStartOffset;
+    private volatile long logStartOffset;
+    private volatile LeaderEpochFileCache leaderEpochCache;
+    private volatile Optional<Uuid> topicId;
+    private volatile LogOffsetsListener logOffsetsListener;
+
+    /**
+     * A log which presents a unified view of local and tiered log segments.
+     *
+     * The log consists of tiered and local segments with the tiered portion of the log being optional. There could be an
+     * overlap between the tiered and local segments. The active segment is always guaranteed to be local. If tiered segments
+     * are present, they always appear at the beginning of the log, followed by an optional region of overlap, followed by the local
+     * segments including the active segment.
+     *
+     * NOTE: this class handles state and behavior specific to tiered segments as well as any behavior combining both tiered
+     * and local segments. The state and behavior specific to local segments are handled by the encapsulated LocalLog instance.
+     *
+     * @param logStartOffset The earliest offset allowed to be exposed to kafka client.
+     *                       The logStartOffset can be updated by :
+     *                       - user's DeleteRecordsRequest
+     *                       - broker's log retention
+     *                       - broker's log truncation
+     *                       - broker's log recovery
+     *                       The logStartOffset is used to decide the following:
+     *                       - Log deletion. LogSegment whose nextOffset <= log's logStartOffset can be deleted.
+     *

[jira] [Resolved] (KAFKA-18936) Share fetch stuck when records are larger than fetch max bytes

2025-03-12 Thread Apoorv Mittal (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-18936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apoorv Mittal resolved KAFKA-18936.
---
Resolution: Fixed

> Share fetch stuck when records are larger than fetch max bytes
> --
>
> Key: KAFKA-18936
> URL: https://issues.apache.org/jira/browse/KAFKA-18936
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Apoorv Mittal
>Assignee: Apoorv Mittal
>Priority: Major
> Fix For: 4.1.0
>
>






Re: [PR] MINOR: call the serialize method including headers from the MockProducer [kafka]

2025-03-12 Thread via GitHub


gklijs commented on PR #11144:
URL: https://github.com/apache/kafka/pull/11144#issuecomment-2718126895

   @divijvaidya not sure if something needs to be done anymore besides merging?





[PR] KAFKA-18760: Deprecate Optional and return String from public EndPoint#listenerName (wip) [kafka]

2025-03-12 Thread via GitHub


FrankYang0529 opened a new pull request, #19191:
URL: https://github.com/apache/kafka/pull/19191

   
   
   Delete this text and replace it with a detailed description of your change. 
The 
   PR title and body will become the squashed commit message.
   
   If you would like to tag individuals, add some commentary, upload images, or
   include other supplemental information that should not be part of the 
eventual
   commit message, please use a separate comment.
   
   If applicable, please include a summary of the testing strategy (including 
   rationale) for the proposed change. Unit and/or integration tests are 
expected
   for any behavior change and system tests should be considered for larger
   changes.
   





Re: [PR] KAFKA-18736: Do not send fields if not needed [kafka]

2025-03-12 Thread via GitHub


cadonna commented on code in PR #19181:
URL: https://github.com/apache/kafka/pull/19181#discussion_r1991658719


##
clients/src/main/java/org/apache/kafka/clients/consumer/internals/StreamsGroupHeartbeatRequestManager.java:
##
@@ -82,29 +97,41 @@ public StreamsGroupHeartbeatRequestData buildRequestData() {
     data.setMemberId(membershipManager.memberId());
     data.setMemberEpoch(membershipManager.memberEpoch());
     membershipManager.groupInstanceId().ifPresent(data::setInstanceId);
-    StreamsGroupHeartbeatRequestData.Topology topology = new StreamsGroupHeartbeatRequestData.Topology();
-    topology.setSubtopologies(getTopologyFromStreams(streamsRebalanceData.subtopologies()));
-    topology.setEpoch(streamsRebalanceData.topologyEpoch());
-    data.setRebalanceTimeoutMs(rebalanceTimeoutMs);
-    data.setTopology(topology);
-    data.setProcessId(streamsRebalanceData.processId().toString());
-    streamsRebalanceData.endpoint().ifPresent(userEndpoint -> {
-        data.setUserEndpoint(new StreamsGroupHeartbeatRequestData.Endpoint()
-            .setHost(userEndpoint.host())
-            .setPort(userEndpoint.port())
-        );
-    });
-    data.setClientTags(streamsRebalanceData.clientTags().entrySet().stream()
-        .map(entry -> new StreamsGroupHeartbeatRequestData.KeyValue()
-            .setKey(entry.getKey())
-            .setValue(entry.getValue())
-        )
-        .collect(Collectors.toList()));
+
+    boolean joining = membershipManager.state() == MemberState.JOINING;
+
+    if (joining) {
+        StreamsGroupHeartbeatRequestData.Topology topology = new StreamsGroupHeartbeatRequestData.Topology();
+        topology.setSubtopologies(getTopologyFromStreams(streamsRebalanceData.subtopologies()));
+        topology.setEpoch(streamsRebalanceData.topologyEpoch());
+        data.setTopology(topology);
+        data.setRebalanceTimeoutMs(rebalanceTimeoutMs);
+        data.setProcessId(streamsRebalanceData.processId().toString());

Review Comment:
   Type `uuid` cannot be nullable, but we need field `processId` to be 
nullable, because if it did not change, we do not want to send it again.






Re: [PR] KAFKA-18899: Improve handling of timeouts for commitAsync() in ShareConsumer. [kafka]

2025-03-12 Thread via GitHub


ShivsundarR commented on code in PR #19192:
URL: https://github.com/apache/kafka/pull/19192#discussion_r1991728528


##
clients/src/main/java/org/apache/kafka/clients/consumer/internals/ShareConsumeRequestManager.java:
##
@@ -1004,8 +1014,10 @@ private void updateLeaderInfoMap(ShareAcknowledgeResponseData.PartitionData part
     }

     private TopicIdPartition lookupTopicId(Uuid topicId, int partitionIndex) {
-        String topicName = metadata.topicNames().getOrDefault(topicId,
-            topicNamesMap.remove(new IdAndPartition(topicId, partitionIndex)));
+        String topicName = metadata.topicNames().get(topicId);
+        if (topicName == null) {
+            topicName = topicNamesMap.remove(new IdAndPartition(topicId, partitionIndex));
+        }

Review Comment:
   I was observing a weird behaviour where even if `metadata.topicNames()` had a key-value pair, calling
   ```
   metadata.topicNames().getOrDefault(topicId,
       topicNamesMap.remove(new IdAndPartition(topicId, partitionIndex)))
   ```
   would still result in the execution of `topicNamesMap.remove(new IdAndPartition(topicId, partitionIndex))`.
   If we had topic names in the metadata and still removed the topicId from the `topicNamesMap`, then when the subscription changed, we would not see the topic name, as every time we would pop the topic name from the map during `handleShareFetch` itself.
   I have temporarily changed the usage here to two explicit checks, but it is still not clear why the `getOrDefault` did not work.
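   (For reference: this is standard Java argument evaluation rather than a `Map` quirk. The default-value expression passed to `getOrDefault` is evaluated eagerly, before the lookup happens, so the `remove` side effect always runs. A minimal sketch with plain JDK maps and hypothetical names:)

   ```java
   import java.util.HashMap;
   import java.util.Map;

   public class GetOrDefaultDemo {
       public static void main(String[] args) {
           Map<String, String> topicNames = new HashMap<>(Map.of("id-1", "topic-a"));
           Map<String, String> fallback = new HashMap<>(Map.of("id-1", "topic-a"));

           // The second argument is evaluated before getOrDefault runs,
           // so fallback.remove("id-1") executes even though the key is
           // present in topicNames.
           String name = topicNames.getOrDefault("id-1", fallback.remove("id-1"));

           System.out.println(name);               // topic-a (found in topicNames)
           System.out.println(fallback.isEmpty()); // true: the fallback entry is gone
       }
   }
   ```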






Re: [PR] KAFKA-17715 remove force_use_zk_connection from e2e [kafka]

2025-03-12 Thread via GitHub


mimaison commented on PR #17465:
URL: https://github.com/apache/kafka/pull/17465#issuecomment-2718236601

   @mingdaoy Are you still working on this?





Re: [PR] KAFKA-18276 Migrate ProducerRebootstrapTest to new test infra [kafka]

2025-03-12 Thread via GitHub


clarkwtc commented on PR #19046:
URL: https://github.com/apache/kafka/pull/19046#issuecomment-2718333704

   @chia7712 
   Thank you for your note.
   I have also updated this PR to fix the related issues from 
https://github.com/apache/kafka/pull/19094#discussion_r1986208718





[jira] [Commented] (KAFKA-18964) Allow to set weights for controller nodes for leader election

2025-03-12 Thread TengYao Chi (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-18964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17934496#comment-17934496
 ] 

TengYao Chi commented on KAFKA-18964:
-

Hi [~showuon] 

I'd like to take over this one.

Do you think this needs a KIP?

> Allow to set weights for controller nodes for leader election
> -
>
> Key: KAFKA-18964
> URL: https://issues.apache.org/jira/browse/KAFKA-18964
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Luke Chen
>Assignee: TengYao Chi
>Priority: Major
>
> In a stretch cluster environment, the nodes are located in different data 
> centers for disaster recovery, so the controller nodes in the backup cluster 
> should serve as followers. Only when a disaster happens should the controller 
> nodes in the backup cluster be elected as leader. In the current design, the 
> candidate node is chosen randomly. We can consider applying a "weight" to the 
> controller nodes to achieve the situation mentioned above.





Re: [PR] KAFKA-18837: Ensure controller quorum timeouts and backoffs are at least 0 [kafka]

2025-03-12 Thread via GitHub


mimaison merged PR #18998:
URL: https://github.com/apache/kafka/pull/18998





[jira] [Resolved] (KAFKA-18942) Add reviewers to PR body with committer-tools

2025-03-12 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-18942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-18942.

Resolution: Fixed

> Add reviewers to PR body with committer-tools
> -
>
> Key: KAFKA-18942
> URL: https://issues.apache.org/jira/browse/KAFKA-18942
> Project: Kafka
>  Issue Type: Sub-task
>  Components: build
>Reporter: David Arthur
>Assignee: Ming-Yen Chung
>Priority: Major
> Fix For: 4.1.0
>
>
> When we switch to the merge queue, we cannot alter the commit message 
> directly and instead must use the PR body for the eventual commit message.
>  
> In order to include our "Reviewers" metadata in the commit, we must edit the 
> PR body after a review has happened and add the "Reviewers" manually. This is 
> rather annoying and we can do better.
>  
> The committer-tools script "reviewers.py" can use the GitHub API (via "gh") 
> to read, modify, and update the PR body with the reviewers selected by this 
> tool.
>  
> For example, 
>  
> {noformat}
> $ ./committer-tools/reviewers.py
> Utility to help generate 'Reviewers' string for Pull Requests. Use Ctrl+D or 
> Ctrl+C to exit
> Name or email (case insensitive): chia
> Possible matches (in order of most recent):
> [1] Chia-Ping Tsai chia7...@gmail.com (1908)
> [2] Chia-Ping Tsai chia7...@apache.org (13)
> [3] Chia-Chuan Yu yujuan...@gmail.com (11)
> [4] Chia Chuan Yu yujuan...@gmail.com (10)
> Make a selection: 1
> Reviewers so far: [('Chia-Ping Tsai', 'chia7...@gmail.com', 1908)]
> Name or email (case insensitive): ^C
> Reviewers: Chia-Ping Tsai 
> Pull Request to update (Ctrl+D or Ctrl+C to skip): 19144
> Adding Reviewers to 19144...
> {noformat}
>  
> The script should be able to handle existing "Reviewers" string in the PR body





Re: [PR] KAFKA-18915: Rewrite AdminClientRebootstrapTest to cover the current scenario [kafka]

2025-03-12 Thread via GitHub


Yunyung commented on code in PR #19187:
URL: https://github.com/apache/kafka/pull/19187#discussion_r1991792486


##
core/src/test/java/kafka/test/api/AdminClientRebootstrapTest.java:
##
@@ -20,91 +20,85 @@
 import org.apache.kafka.clients.admin.NewTopic;
 import org.apache.kafka.common.config.TopicConfig;
 import org.apache.kafka.common.test.ClusterInstance;
-import org.apache.kafka.common.test.api.ClusterConfig;
-import org.apache.kafka.common.test.api.ClusterTemplate;
+import org.apache.kafka.common.test.api.ClusterConfigProperty;
+import org.apache.kafka.common.test.api.ClusterTest;
 import org.apache.kafka.common.test.api.Type;
 import org.apache.kafka.coordinator.group.GroupCoordinatorConfig;
 import org.apache.kafka.test.TestUtils;

-import java.util.HashMap;
+import java.time.Duration;
 import java.util.List;
 import java.util.Map;
-import java.util.Set;
 import java.util.concurrent.TimeUnit;
-import java.util.stream.Stream;
+import java.util.concurrent.TimeoutException;
+
+import static org.junit.jupiter.api.Assertions.assertThrows;

 public class AdminClientRebootstrapTest {
-    private static final int BROKER_COUNT = 2;
-
-    private static List<ClusterConfig> generator() {
-        // Enable unclean leader election for the test topic
-        Map<String, String> serverProperties = Map.of(
-            TopicConfig.UNCLEAN_LEADER_ELECTION_ENABLE_CONFIG, "true",
-            GroupCoordinatorConfig.OFFSETS_TOPIC_REPLICATION_FACTOR_CONFIG, String.valueOf(BROKER_COUNT)
-        );
-
-        return Stream.of(false, true)
-            .map(AdminClientRebootstrapTest::getRebootstrapConfig)
-            .map(rebootstrapProperties -> AdminClientRebootstrapTest.buildConfig(serverProperties, rebootstrapProperties))
-            .toList();
-    }
+    private static final String TOPIC = "topic";
+    private static final int PARTITIONS = 2;

-    private static Map<String, String> getRebootstrapConfig(boolean useRebootstrapTriggerMs) {
-        Map<String, String> properties = new HashMap<>();
-        if (useRebootstrapTriggerMs) {
-            properties.put(CommonClientConfigs.METADATA_RECOVERY_REBOOTSTRAP_TRIGGER_MS_CONFIG, "5000");
-        } else {
-            properties.put(CommonClientConfigs.METADATA_RECOVERY_REBOOTSTRAP_TRIGGER_MS_CONFIG, "360");
-            properties.put(CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MS_CONFIG, "5000");
-            properties.put(CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MAX_MS_CONFIG, "5000");
-            properties.put(CommonClientConfigs.RECONNECT_BACKOFF_MS_CONFIG, "1000");
-            properties.put(CommonClientConfigs.RECONNECT_BACKOFF_MAX_MS_CONFIG, "1000");
+    @ClusterTest(
+        brokers = 2,
+        types = {Type.KRAFT},
+        serverProperties = {
+            @ClusterConfigProperty(key = TopicConfig.UNCLEAN_LEADER_ELECTION_ENABLE_CONFIG, value = "true"),
+            @ClusterConfigProperty(key = GroupCoordinatorConfig.OFFSETS_TOPIC_REPLICATION_FACTOR_CONFIG, value = "2")
         }
-        properties.put(CommonClientConfigs.METADATA_RECOVERY_STRATEGY_CONFIG, "rebootstrap");
-        return properties;
-    }
-
-    private static ClusterConfig buildConfig(Map<String, String> serverProperties, Map<String, String> rebootstrapProperties) {
-        return ClusterConfig.defaultBuilder()
-            .setTypes(Set.of(Type.KRAFT))
-            .setBrokers(BROKER_COUNT)
-            .setServerProperties(serverProperties).build();
-    }
-
-    @ClusterTemplate(value = "generator")
+    )
     public void testRebootstrap(ClusterInstance clusterInstance) throws InterruptedException {
-        var topic = "topic";
+        var broker0 = 0;
+        var broker1 = 1;
         var timeout = 5;
-        try (var admin = clusterInstance.admin()) {
-            admin.createTopics(List.of(new NewTopic(topic, BROKER_COUNT, (short) 2)));

-            var server0 = clusterInstance.brokers().get(0);
-            var server1 = clusterInstance.brokers().get(1);
+        clusterInstance.shutdownBroker(broker0);

-            server1.shutdown();
-            server1.awaitShutdown();
+        try (var admin = clusterInstance.admin()) {
+            admin.createTopics(List.of(new NewTopic(TOPIC, PARTITIONS, (short) 2)));

-            // Only the server 0 is available for the admin client during the bootstrap.
-            TestUtils.waitForCondition(() -> admin.listTopics().names().get(timeout, TimeUnit.MINUTES).contains(topic),
+            // Only the broker 1 is available for the admin client during the bootstrap.
+            TestUtils.waitForCondition(() -> admin.listTopics().names().get(timeout, TimeUnit.MINUTES).contains(TOPIC),
                 "timed out waiting for topics");

-            server0.shutdown();
-            server0.awaitShutdown();
-            server1.startup();
+            clusterInstance.shutdownBroker(broker1);
+            clusterInstance.startBroker(broker0);

-            // The server 0, originally cached during the bootstrap, is offline.
-            //

Re: [PR] KAFKA-17715 remove force_use_zk_connection from e2e [kafka]

2025-03-12 Thread via GitHub


chia7712 commented on PR #17465:
URL: https://github.com/apache/kafka/pull/17465#issuecomment-2718305049

   @mimaison this ticket will be handled by @mingyen066 





Re: [PR] KAFKA-18613: Unit tests for usage of incorrect RPCs [kafka]

2025-03-12 Thread via GitHub


lucasbru commented on PR #18383:
URL: https://github.com/apache/kafka/pull/18383#issuecomment-2718091230

   I retargeted this to trunk (I thought I had already). The tests weren't 
changed.





Re: [PR] KAFKA-18561: Remove withKip853Rpc and replace it with withRaftProtocol [kafka]

2025-03-12 Thread via GitHub


frankvicky commented on code in PR #18600:
URL: https://github.com/apache/kafka/pull/18600#discussion_r1990879080


##
raft/src/test/java/org/apache/kafka/raft/KafkaRaftClientTest.java:
##
@@ -92,25 +97,25 @@ public void testNodeDirectoryId() {
     }

     @ParameterizedTest
-    @ValueSource(booleans = { true, false })
-    public void testInitializeSingleMemberQuorum(boolean withKip853Rpc) throws IOException {
+    @EnumSource(value = RaftProtocol.class, names = {"KIP_595_PROTOCOL", "KIP_853_PROTOCOL"})

Review Comment:
   Please ignore the above comment.
   I tried enabling all raft protocols for `KafkaRaftClientTest`, and all tests passed.
   I will include this change in the next commit.






Re: [PR] KAFKA-18736: Do not send fields if not needed [kafka]

2025-03-12 Thread via GitHub


cadonna commented on code in PR #19181:
URL: https://github.com/apache/kafka/pull/19181#discussion_r1991647238


##
clients/src/main/java/org/apache/kafka/clients/consumer/internals/StreamsGroupHeartbeatRequestManager.java:
##
@@ -61,9 +61,23 @@ public class StreamsGroupHeartbeatRequestManager implements RequestManager {

     static class HeartbeatState {

+        // Fields of StreamsGroupHeartbeatRequest sent in the most recent request
+        static class LastSentFields {

Review Comment:
   In the future, it will contain more fields like `TaskOffsets` and `TaskEndOffsets` from KIP-1071. Plus, the class `LastSentFields` also documents what these fields are about.
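   (For readers following along, the idea is to remember what was last sent so that unchanged fields can be omitted from subsequent heartbeats. A rough, hypothetical sketch of that pattern, not the actual implementation:)

   ```java
   public class LastSentFieldsDemo {
       // Hypothetical sketch: remember the last-sent value so an unchanged
       // field can be omitted (null) from the next heartbeat request.
       static class LastSentFields {
           private String processId;

           // Returns the value to put in the request, or null to omit the field.
           String processIdToSend(String current) {
               if (current.equals(processId)) {
                   return null;       // unchanged since the last request: omit
               }
               processId = current;   // remember what we are sending now
               return current;
           }
       }

       public static void main(String[] args) {
           LastSentFields sent = new LastSentFields();
           System.out.println(sent.processIdToSend("process-1")); // process-1 (first send)
           System.out.println(sent.processIdToSend("process-1")); // null (unchanged, omitted)
           System.out.println(sent.processIdToSend("process-2")); // process-2 (changed)
       }
   }
   ```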






[jira] [Resolved] (KAFKA-18832) ShareFetch behaviour seems incorrect when MaxBytes is less than record size

2025-03-12 Thread Apoorv Mittal (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-18832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apoorv Mittal resolved KAFKA-18832.
---
Resolution: Fixed

> ShareFetch behaviour seems incorrect when MaxBytes is less than record size
> ---
>
> Key: KAFKA-18832
> URL: https://issues.apache.org/jira/browse/KAFKA-18832
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 4.1.0
>Reporter: Andrew Schofield
>Assignee: Apoorv Mittal
>Priority: Major
> Fix For: 4.1.0
>
>
> Experimenting with the console-share-consumer, I set the fetch.max.bytes 
> consumer property to a low value, 1. Then I started producing records 
> larger than this. No records were returned, essentially blocking delivery of 
> the topic.
> The limit should be interpreted as a soft limit, just as it is for regular 
> consumers.





[jira] [Commented] (KAFKA-18832) ShareFetch behaviour seems incorrect when MaxBytes is less than record size

2025-03-12 Thread Apoorv Mittal (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-18832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17934553#comment-17934553
 ] 

Apoorv Mittal commented on KAFKA-18832:
---

Duplicate of: https://issues.apache.org/jira/browse/KAFKA-18936

> ShareFetch behaviour seems incorrect when MaxBytes is less than record size
> ---
>
> Key: KAFKA-18832
> URL: https://issues.apache.org/jira/browse/KAFKA-18832
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 4.1.0
>Reporter: Andrew Schofield
>Assignee: Apoorv Mittal
>Priority: Major
> Fix For: 4.1.0
>
>
> Experimenting with the console-share-consumer, I set the fetch.max.bytes 
> consumer property to a low value, 1. Then I started producing records 
> larger than this. No records were returned, essentially blocking delivery of 
> the topic.
> The limit should be interpreted as a soft limit, just as it is for regular 
> consumers.





Re: [PR] KAFKA-18651: Add Streams-specific broker configurations [kafka]

2025-03-12 Thread via GitHub


lucasbru commented on PR #19176:
URL: https://github.com/apache/kafka/pull/19176#issuecomment-2718102555

   @aliehsaeedii there seem to be test failures





Re: [PR] KAFKA-18736: Do not send fields if not needed [kafka]

2025-03-12 Thread via GitHub


cadonna commented on code in PR #19181:
URL: https://github.com/apache/kafka/pull/19181#discussion_r1991641888


##
clients/src/main/java/org/apache/kafka/clients/consumer/internals/StreamsGroupHeartbeatRequestManager.java:
##
@@ -119,9 +146,9 @@ private static List convertTaskIdColle
     .map(entry -> {
         StreamsGroupHeartbeatRequestData.TaskIds ids = new StreamsGroupHeartbeatRequestData.TaskIds();
         ids.setSubtopologyId(entry.getKey());
-        ids.setPartitions(entry.getValue());
+        ids.setPartitions(entry.getValue().stream().sorted().collect(Collectors.toList()));

Review Comment:
   Ah, good catch!
   I believe I added them to quickly fix the test failures, but then forgot to re-iterate over the change.
   I removed the sorting and added a custom assert for `TaskIds` to the tests.






Re: [PR] KAFKA-18736: Do not send fields if not needed [kafka]

2025-03-12 Thread via GitHub


cadonna commented on code in PR #19181:
URL: https://github.com/apache/kafka/pull/19181#discussion_r1991660967


##
clients/src/test/java/org/apache/kafka/clients/consumer/internals/StreamsGroupHeartbeatRequestManagerTest.java:
##
@@ -476,20 +492,25 @@ public void testNotSendingLeaveHeartbeatIfPollTimerExpiredAndMemberIsLeaving() {
     }

     @Test
-    public void testSendingFullHeartbeatRequest() {
+    public void testSendingLeaveHeartbeatRequestWhenPollTimerExpired() {
         try (
             final MockedConstruction<HeartbeatRequestState> heartbeatRequestStateMockedConstruction = mockConstruction(
                 HeartbeatRequestState.class,
                 (mock, context) -> {
                     when(mock.canSendRequest(time.milliseconds())).thenReturn(true);
-                })
+                });
+            final MockedConstruction<Timer> pollTimerMockedConstruction = mockConstruction(
+                Timer.class,
+                (mock, context) -> {
+                    when(mock.isExpired()).thenReturn(true);
+                });

Review Comment:
   Done






[PR] KAFKA-18899: Improve handling of timeouts for commitAsync() in ShareConsumer. [kafka]

2025-03-12 Thread via GitHub


ShivsundarR opened a new pull request, #19192:
URL: https://github.com/apache/kafka/pull/19192

   *What*
   
   - Previously, the ShareConsumer.commitAsync() method retried sending ShareAcknowledge requests indefinitely. Now it will instead use the defaultApiTimeout config to expire the request so that it does not retry forever.
   
   - PR also fixes a bug in processing `commitSync()` requests, where we need an additional check if the node is free.
   
   - Added unit tests to verify the above changes.
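   (As context for the timeout mechanics, a minimal, hypothetical sketch of deadline-based expiry; the names and the constant are illustrative, not the actual ShareConsumeRequestManager code:)
   
   ```java
   public class CommitDeadlineDemo {
       // Hypothetical: a default.api.timeout.ms value used to derive a deadline.
       static final long DEFAULT_API_TIMEOUT_MS = 60_000;
   
       // Retry only while the deadline derived from the first attempt has not passed.
       static boolean shouldRetry(long firstAttemptMs, long nowMs) {
           return nowMs < firstAttemptMs + DEFAULT_API_TIMEOUT_MS;
       }
   
       public static void main(String[] args) {
           long firstAttempt = System.currentTimeMillis();
           System.out.println(shouldRetry(firstAttempt, firstAttempt + 1_000));  // true: keep retrying
           System.out.println(shouldRetry(firstAttempt, firstAttempt + 61_000)); // false: expire the commit
       }
   }
   ```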





[jira] [Updated] (KAFKA-18965) Improve release validation for kafka-clients

2025-03-12 Thread David Arthur (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-18965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Arthur updated KAFKA-18965:
-
Fix Version/s: 4.1.0

> Improve release validation for kafka-clients
> 
>
> Key: KAFKA-18965
> URL: https://issues.apache.org/jira/browse/KAFKA-18965
> Project: Kafka
>  Issue Type: Improvement
>  Components: build, clients, release
>Reporter: David Arthur
>Priority: Major
> Fix For: 4.1.0
>
>
> It would be nice if we could improve (and automate!) some of the release 
> validation for kafka-clients.
>  
> We can create sample projects which consume kafka-clients in different ways 
> (Gradle, Maven, SBT, etc). This will let us validate several things
>  
>  * Downstream projects can be built with new JAR
>  * Downstream projects can be run with new JAR
>  * Our shaded dependencies are working properly (and not conflicting with the 
> consuming project's dependencies)
>  * Licenses are properly included in the JAR
>  
>  





[jira] [Updated] (KAFKA-18965) Improve release validation for kafka-clients

2025-03-12 Thread David Arthur (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-18965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Arthur updated KAFKA-18965:
-
Component/s: build
 clients
 release

> Improve release validation for kafka-clients
> 
>
> Key: KAFKA-18965
> URL: https://issues.apache.org/jira/browse/KAFKA-18965
> Project: Kafka
>  Issue Type: Improvement
>  Components: build, clients, release
>Reporter: David Arthur
>Priority: Major
>
> It would be nice if we could improve (and automate!) some of the release 
> validation for kafka-clients.
>  
> We can create sample projects which consume kafka-clients in different ways 
> (Gradle, Maven, SBT, etc). This will let us validate several things
>  
>  * Downstream projects can be built with new JAR
>  * Downstream projects can be run with new JAR
>  * Our shaded dependencies are working properly (and not conflicting with the 
> consuming project's dependencies)
>  * Licenses are properly included in the JAR
>  
>  





[jira] [Created] (KAFKA-18965) Improve release validation for kafka-clients

2025-03-12 Thread David Arthur (Jira)
David Arthur created KAFKA-18965:


 Summary: Improve release validation for kafka-clients
 Key: KAFKA-18965
 URL: https://issues.apache.org/jira/browse/KAFKA-18965
 Project: Kafka
  Issue Type: Improvement
Reporter: David Arthur


It would be nice if we could improve (and automate!) some of the release 
validation for kafka-clients.

 

We can create sample projects which consume kafka-clients in different ways 
(Gradle, Maven, SBT, etc). This will let us validate several things

 
 * Downstream projects can be built with new JAR
 * Downstream projects can be run with new JAR
 * Our shaded dependencies are working properly (and not conflicting with the 
consuming project's dependencies)
 * Licenses are properly included in the JAR

 

 





[jira] [Resolved] (KAFKA-15844) Broker doesn't re-register after losing ZK session

2025-03-12 Thread Jira


 [ 
https://issues.apache.org/jira/browse/KAFKA-15844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

José Armando García Sancio resolved KAFKA-15844.

Resolution: Won't Fix

Marking it as won't fix since Kafka doesn't use ZK anymore.

> Broker doesn't re-register after losing ZK session
> --
>
> Key: KAFKA-15844
> URL: https://issues.apache.org/jira/browse/KAFKA-15844
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.1.2
>Reporter: José Armando García Sancio
>Priority: Major
>  Labels: zookeeper
>
> We experienced a case where a Kafka broker lost connection to the ZK cluster 
> and was not able to recreate the registration znode. Only after the broker 
> was restarted did the registration znode get created.
> The interesting observation is that the "ACL authorizer" ZK client identified 
> the session lost and recreated the ZK client but the "Kafka server" ZK client 
> never received an SessionExpiredException exception.
> Here is an example session where this happened. The controller sees the 
> broker go offline:
> {code:java}
> INFO [Controller id=32] Newly added brokers: , deleted brokers: 37, bounced 
> brokers: , all live brokers: ...{code}
> "ACL authorizer" ZK session is lost and recreated in broker 37:
> {code:java}
> [Broker=37] WARN Client session timed out, have not heard from server in 
> 3026ms for sessionid 0x504b9c08b5e0025
> ...
> INFO [ZooKeeperClient ACL authorizer] Session expired.
> ...
> INFO [ZooKeeperClient ACL authorizer] Initializing a new session to ...
> ...
> [Broker=37] INFO Session establishment complete on server ..., sessionid = 
> 0x604dd0ad7180045, negotiated timeout = 18000{code}
> Unfortunately, we never see similar logs for the "Kafka server":
> {code:java}
> WARN Client session timed out, have not heard from server in 14227ms for 
> sessionid 0x304beeed4930026 (org.apache.zookeeper.ClientCnxn)
> ...
> INFO Client session timed out, have not heard from server in 14227ms for 
> sessionid 0x304beeed4930026, closing socket connection and attempting 
> reconnect (org.apache.zookeeper.ClientCnxn)
> ...
> WARN Client session timed out, have not heard from server in 4548ms for 
> sessionid 0x304beeed4930026 (org.apache.zookeeper.ClientCnxn)
> ...
> INFO Client session timed out, have not heard from server in 4548ms for 
> sessionid 0x304beeed4930026, closing socket connection and attempting 
> reconnect (org.apache.zookeeper.ClientCnxn){code}
> Maybe we are running into this issue from the ZOOKEEPER-1159 discussion:
> {quote}As I understand it, the problem here may be that a disconnected client 
> cannot discover that its session has expired. Only the server can declare a 
> session expired which on the client side leads to the 
> SessionExpiredException, but only when the client is connected.
> If this assumption is correct, I'm not sure how best to address it.
> {quote}
>  
> Restarting broker 37 resolved the issue.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] KAFKA-18915: Rewrite AdminClientRebootstrapTest to cover the current scenario [kafka]

2025-03-12 Thread via GitHub


Yunyung commented on code in PR #19187:
URL: https://github.com/apache/kafka/pull/19187#discussion_r1991771351


##
core/src/test/java/kafka/test/api/AdminClientRebootstrapTest.java:
##
@@ -20,91 +20,85 @@
 import org.apache.kafka.clients.admin.NewTopic;
 import org.apache.kafka.common.config.TopicConfig;
 import org.apache.kafka.common.test.ClusterInstance;
-import org.apache.kafka.common.test.api.ClusterConfig;
-import org.apache.kafka.common.test.api.ClusterTemplate;
+import org.apache.kafka.common.test.api.ClusterConfigProperty;
+import org.apache.kafka.common.test.api.ClusterTest;
 import org.apache.kafka.common.test.api.Type;
 import org.apache.kafka.coordinator.group.GroupCoordinatorConfig;
 import org.apache.kafka.test.TestUtils;
 
-import java.util.HashMap;
+import java.time.Duration;
 import java.util.List;
 import java.util.Map;
-import java.util.Set;
 import java.util.concurrent.TimeUnit;
-import java.util.stream.Stream;
+import java.util.concurrent.TimeoutException;
+
+import static org.junit.jupiter.api.Assertions.assertThrows;
 
 public class AdminClientRebootstrapTest {
-private static final int BROKER_COUNT = 2;
-
-private static List generator() {
-// Enable unclean leader election for the test topic
-Map serverProperties = Map.of(
-TopicConfig.UNCLEAN_LEADER_ELECTION_ENABLE_CONFIG, "true",
-GroupCoordinatorConfig.OFFSETS_TOPIC_REPLICATION_FACTOR_CONFIG, 
String.valueOf(BROKER_COUNT)
-);
-
-return Stream.of(false, true)
-.map(AdminClientRebootstrapTest::getRebootstrapConfig)
-.map(rebootstrapProperties -> 
AdminClientRebootstrapTest.buildConfig(serverProperties, rebootstrapProperties))
-.toList();
-}
+private static final String TOPIC = "topic";
+private static final int PARTITIONS = 2;
 
-private static Map getRebootstrapConfig(boolean 
useRebootstrapTriggerMs) {
-Map properties = new HashMap<>();
-if (useRebootstrapTriggerMs) {
-
properties.put(CommonClientConfigs.METADATA_RECOVERY_REBOOTSTRAP_TRIGGER_MS_CONFIG,
 "5000");
-} else {
-
properties.put(CommonClientConfigs.METADATA_RECOVERY_REBOOTSTRAP_TRIGGER_MS_CONFIG,
 "360");
-
properties.put(CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MS_CONFIG, 
"5000");
-
properties.put(CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MAX_MS_CONFIG,
 "5000");
-properties.put(CommonClientConfigs.RECONNECT_BACKOFF_MS_CONFIG, 
"1000");
-
properties.put(CommonClientConfigs.RECONNECT_BACKOFF_MAX_MS_CONFIG, "1000");
+@ClusterTest(
+brokers = 2,
+types = {Type.KRAFT},
+serverProperties = {
+@ClusterConfigProperty(key = 
TopicConfig.UNCLEAN_LEADER_ELECTION_ENABLE_CONFIG, value = "true"),
+@ClusterConfigProperty(key = 
GroupCoordinatorConfig.OFFSETS_TOPIC_REPLICATION_FACTOR_CONFIG, value = "2")
 }
-properties.put(CommonClientConfigs.METADATA_RECOVERY_STRATEGY_CONFIG, 
"rebootstrap");
-return properties;
-}
-
-private static ClusterConfig buildConfig(Map 
serverProperties, Map rebootstrapProperties) {
-return ClusterConfig.defaultBuilder()
-.setTypes(Set.of(Type.KRAFT))
-.setBrokers(BROKER_COUNT)
-.setServerProperties(serverProperties).build();
-}
-
-@ClusterTemplate(value = "generator")
+)
 public void testRebootstrap(ClusterInstance clusterInstance) throws 
InterruptedException {
-var topic = "topic";
+var broker0 = 0;
+var broker1 = 1;
 var timeout = 5;
-try (var admin = clusterInstance.admin()) {
-admin.createTopics(List.of(new NewTopic(topic, BROKER_COUNT, 
(short) 2)));
 
-var server0 = clusterInstance.brokers().get(0);
-var server1 = clusterInstance.brokers().get(1);
+clusterInstance.shutdownBroker(broker0);
 
-server1.shutdown();
-server1.awaitShutdown();
+try (var admin = clusterInstance.admin()) {
+admin.createTopics(List.of(new NewTopic(TOPIC, PARTITIONS, (short) 
2)));
 
-// Only the server 0 is available for the admin client during the 
bootstrap.
-TestUtils.waitForCondition(() -> 
admin.listTopics().names().get(timeout, TimeUnit.MINUTES).contains(topic),
+// Only the broker 1 is available for the admin client during the 
bootstrap.
+TestUtils.waitForCondition(() -> 
admin.listTopics().names().get(timeout, TimeUnit.MINUTES).contains(TOPIC),
 "timed out waiting for topics");
 
-server0.shutdown();
-server0.awaitShutdown();
-server1.startup();
+clusterInstance.shutdownBroker(broker1);
+clusterInstance.startBroker(broker0);
 
-// The server 0, originally cached during the bootstrap, is 
offline.
-//

Re: [PR] KAFKA-17715 remove force_use_zk_connection from e2e [kafka]

2025-03-12 Thread via GitHub


chia7712 closed pull request #17465: KAFKA-17715 remove force_use_zk_connection 
from e2e
URL: https://github.com/apache/kafka/pull/17465


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18915: Rewrite AdminClientRebootstrapTest to cover the current scenario [kafka]

2025-03-12 Thread via GitHub


chia7712 commented on PR #19187:
URL: https://github.com/apache/kafka/pull/19187#issuecomment-2718310300

   @clarkwtc could you please move this test to the clients-integration-tests 
module?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18422 Adjust Kafka client upgrade path section [kafka]

2025-03-12 Thread via GitHub


mingdaoy commented on code in PR #19119:
URL: https://github.com/apache/kafka/pull/19119#discussion_r1991109201


##
docs/upgrade.html:
##
@@ -29,7 +31,26 @@ Notable changes in 4
 
 
 
-Upgrading to 4.0.0 from any 
version 3.3.x through 3.9.x
+
+Upgrading to 4.0.0
+
+Upgrading 
Clients to 4.0.0
+
+For a rolling upgrade:
+
+
+Upgrade the clients one at a time: shut down the client, update the 
code, and restart it.
+For the Kafka client upgrade path, note that many deprecated APIs were 
removed in Kafka 4.0. Additionally, upgrading directly to 4.x from certain 
versions is not feasible.
+For more information, please refer to <a href="https://cwiki.apache.org/confluence/x/y4kgF">KIP-1124</a>.

Review Comment:
   I changed it to the following,
   
   > For the Kafka client upgrade path, note that many deprecated APIs were 
removed in Kafka 4.0. Additionally, upgrading directly from certain versions, 
such as **2.0 clients (or older)**, to 4.x is not feasible.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (KAFKA-18964) Allow to set weights for controller nodes for leader election

2025-03-12 Thread TengYao Chi (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-18964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TengYao Chi updated KAFKA-18964:

Labels: needs-kip  (was: )

> Allow to set weights for controller nodes for leader election
> -
>
> Key: KAFKA-18964
> URL: https://issues.apache.org/jira/browse/KAFKA-18964
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Luke Chen
>Assignee: TengYao Chi
>Priority: Major
>  Labels: needs-kip
>
> In a stretch cluster environment, the nodes are located in different data 
> centers for disaster recovery, so the controller nodes in the backup cluster 
> should serve as followers. Only when a disaster happens should the controller 
> nodes in the backup cluster be elected as leader. In the current design, the 
> candidate node is randomly chosen. We could apply a "weight" to the 
> controller nodes to achieve the behavior described above.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] KAFKA-18422 Adjust Kafka client upgrade path section [kafka]

2025-03-12 Thread via GitHub


dajac commented on code in PR #19119:
URL: https://github.com/apache/kafka/pull/19119#discussion_r1991454625


##
docs/upgrade.html:
##
@@ -29,7 +31,26 @@ Notable changes in 4
 
 
 
-Upgrading to 4.0.0 from any 
version 3.3.x through 3.9.x
+
+Upgrading to 4.0.0
+
+Upgrading 
Clients to 4.0.0
+
+For a rolling upgrade:
+
+
+Upgrade the clients one at a time: shut down the client, update the 
code, and restart it.
+For the Kafka client upgrade path, note that many deprecated APIs were 
removed in Kafka 4.0. Additionally, upgrading directly to 4.x from certain 
versions is not feasible.
+For more information, please refer to <a href="https://cwiki.apache.org/confluence/x/y4kgF">KIP-1124</a>.

Review Comment:
   @mingdaoy Do you have time for updating it? We can merge it afterwards.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18422 Adjust Kafka client upgrade path section [kafka]

2025-03-12 Thread via GitHub


dajac commented on code in PR #19119:
URL: https://github.com/apache/kafka/pull/19119#discussion_r1991430908


##
docs/upgrade.html:
##
@@ -29,7 +31,26 @@ Notable changes in 4
 
 
 
-Upgrading to 4.0.0 from any 
version 3.3.x through 3.9.x
+
+Upgrading to 4.0.0
+
+Upgrading 
Clients to 4.0.0
+
+For a rolling upgrade:
+
+
+Upgrade the clients one at a time: shut down the client, update the 
code, and restart it.
+For the Kafka client upgrade path, note that many deprecated APIs were 
removed in Kafka 4.0. Additionally, upgrading directly to 4.x from certain 
versions is not feasible.
+For more information, please refer to <a href="https://cwiki.apache.org/confluence/x/y4kgF">KIP-1124</a>.

Review Comment:
   I would use the following:
   
   > 2. Clients (including Streams and Connect) must be on version 2.1 or 
higher before upgrading to 4.0. Many deprecated APIs were removed in Kafka 4.0. 
For more information about the compatibility, please refer to compatibility 
matrix (link it) or KIP-1124 for details.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18422 Adjust Kafka client upgrade path section [kafka]

2025-03-12 Thread via GitHub


dajac commented on code in PR #19119:
URL: https://github.com/apache/kafka/pull/19119#discussion_r1991554768


##
docs/upgrade.html:
##
@@ -29,7 +31,26 @@ Notable changes in 4
 
 
 
-Upgrading to 4.0.0 from any 
version 3.3.x through 3.9.x
+
+Upgrading to 4.0.0
+
+Upgrading 
Clients to 4.0.0
+
+For a rolling upgrade:
+
+
+Upgrade the clients one at a time: shut down the client, update the 
code, and restart it.
+For the Kafka client upgrade path, note that many deprecated APIs were 
removed in Kafka 4.0. Additionally, upgrading directly to 4.x from certain 
versions is not feasible.
+For more information, please refer to <a href="https://cwiki.apache.org/confluence/x/y4kgF">KIP-1124</a>.

Review Comment:
   @mingdaoy I hope you don't mind but I push a small commit to tweak the text.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18422 Adjust Kafka client upgrade path section [kafka]

2025-03-12 Thread via GitHub


dajac merged PR #19119:
URL: https://github.com/apache/kafka/pull/19119


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (KAFKA-18819) StreamsGroupHeartbeat API and StreamsGroupDescribe API must check topic describe

2025-03-12 Thread Lucas Brutschy (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-18819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lucas Brutschy reassigned KAFKA-18819:
--

Assignee: Lan Ding  (was: Lucas Brutschy)

> StreamsGroupHeartbeat API and StreamsGroupDescribe API must check topic 
> describe
> 
>
> Key: KAFKA-18819
> URL: https://issues.apache.org/jira/browse/KAFKA-18819
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Lucas Brutschy
>Assignee: Lan Ding
>Priority: Major
>
> StreamsGroupHeartbeat API and StreamsGroupDescribe API must check topic 
> describe to ensure that we don't leak topic information to clients without 
> the required permissions. The simplest approach seems to filter out 
> unauthorised topics from the responses of those APIs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] KAFKA-18422 Adjust Kafka client upgrade path section [kafka]

2025-03-12 Thread via GitHub


dajac commented on PR #19119:
URL: https://github.com/apache/kafka/pull/19119#issuecomment-2717976372

   Merged to trunk and cherry-picked to 4.0 (manually resolved the conflicts).


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18422 Adjust Kafka client upgrade path section [kafka]

2025-03-12 Thread via GitHub


chia7712 commented on code in PR #19119:
URL: https://github.com/apache/kafka/pull/19119#discussion_r1991562442


##
docs/upgrade.html:
##
@@ -29,7 +31,26 @@ Notable changes in 4
 
 
 
-Upgrading to 4.0.0 from any 
version 3.3.x through 3.9.x
+
+Upgrading to 4.0.0
+
+Upgrading 
Clients to 4.0.0
+
+For a rolling upgrade:
+
+
+Upgrade the clients one at a time: shut down the client, update the 
code, and restart it.
+For the Kafka client upgrade path, note that many deprecated APIs were 
removed in Kafka 4.0. Additionally, upgrading directly to 4.x from certain 
versions is not feasible.
+For more information, please refer to <a href="https://cwiki.apache.org/confluence/x/y4kgF">KIP-1124</a>.

Review Comment:
   @dajac you were one second faster than me. Thank you for the fix.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (KAFKA-18286) Add support for streams groups to kafka-group.sh

2025-03-12 Thread Lucas Brutschy (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-18286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17934539#comment-17934539
 ] 

Lucas Brutschy commented on KAFKA-18286:


We'll need KAFKA-18613 to be able to write a useful integration test

> Add support for streams groups to kafka-group.sh
> 
>
> Key: KAFKA-18286
> URL: https://issues.apache.org/jira/browse/KAFKA-18286
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Lucas Brutschy
>Assignee: Lucas Brutschy
>Priority: Major
>
> Add support for streams groups in kafka-group.sh.
> This is already present on the kip1071 feature branch and needs to be added 
> to trunk.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-18422) add Kafka client upgrade path

2025-03-12 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-18422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-18422.
-
Resolution: Fixed

> add Kafka client upgrade path
> -
>
> Key: KAFKA-18422
> URL: https://issues.apache.org/jira/browse/KAFKA-18422
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Kuan Po Tseng
>Priority: Blocker
>  Labels: need-kip
> Fix For: 4.0.0
>
>
> https://github.com/apache/kafka/pull/18193#issuecomment-2572283545



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] KAFKA-18613: Improve test coverage for missing topics [kafka]

2025-03-12 Thread via GitHub


lucasbru commented on PR #19189:
URL: https://github.com/apache/kafka/pull/19189#issuecomment-2717992037

   @cadonna Could you have a look? I'm adding three tests as a separate PR.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18613: Improve test coverage for missing topics [kafka]

2025-03-12 Thread via GitHub


lucasbru commented on code in PR #19189:
URL: https://github.com/apache/kafka/pull/19189#discussion_r1991576556


##
group-coordinator/src/main/java/org/apache/kafka/coordinator/group/streams/topics/InternalTopicManager.java:
##
@@ -180,17 +178,13 @@ private static void enforceCopartitioning(final 
StreamsTopology topology,
 x.repartitionSourceTopics().stream().filter(y -> 
y.partitions() == 0)
 
).map(StreamsGroupTopologyValue.TopicInfo::name).collect(Collectors.toSet());
 
-if (fixedRepartitionTopics.isEmpty() && 
flexibleRepartitionTopics.isEmpty()) {
-log.info("Skipping the repartition topic validation since there 
are no repartition topics.");

Review Comment:
   Skipping here is actually not correct - we need to enforce copartitioning 
also for source topics.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (KAFKA-18606) Flaky test DeleteSegmentsByRetentionTimeTest#executeTieredStorageTest

2025-03-12 Thread David Arthur (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-18606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17934546#comment-17934546
 ] 

David Arthur commented on KAFKA-18606:
--

This test is still very flaky 
https://develocity.apache.org/scans/tests?search.names=Git%20Repository&search.rootProjectNames=kafka&search.tags=github&search.tasks=test&search.timeZoneId=America%2FNew_York&search.values=https:%2F%2Fgithub.com%2Fapache%2Fkafka&tests.container=org.apache.kafka.tiered.storage.integration.DeleteSegmentsByRetentionTimeTest

> Flaky test DeleteSegmentsByRetentionTimeTest#executeTieredStorageTest
> -
>
> Key: KAFKA-18606
> URL: https://issues.apache.org/jira/browse/KAFKA-18606
> Project: Kafka
>  Issue Type: Test
>Reporter: 黃竣陽
>Assignee: 黃竣陽
>Priority: Major
>
> [https://ge.apache.org/scans/tests?search.rootProjectNames=kafka&search.tasks=test&search.timeZoneId=Asia%2FTaipei&tests.container=org.apache.kafka.tiered.storage.integration.DeleteSegmentsByRetentionTimeTest&tests.test=executeTieredStorageTest(String%2C%20String)%5B2%5D]
> [Develocity 
> results|https://develocity.apache.org/scans/tests?search.relativeStartTime=P28D&search.rootProjectNames=kafka&search.timeZoneId=America%2FLos_Angeles&tests.container=org.apache.kafka.tiered.storage.integration.DeleteSegmentsByRetentionTimeTest]
>  for the last 28 days show this test is 8% flaky.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-18606) Flaky test DeleteSegmentsByRetentionTimeTest#executeTieredStorageTest

2025-03-12 Thread David Arthur (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-18606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17934546#comment-17934546
 ] 

David Arthur edited comment on KAFKA-18606 at 3/12/25 2:05 PM:
---

This test class is still very flaky 
[https://develocity.apache.org/scans/tests?search.names=Git%20Repository&search.rootProjectNames=kafka&search.tags=github&search.tasks=test&search.timeZoneId=America%2FNew_York&search.values=https:%2F%2Fgithub.com%2Fapache%2Fkafka&tests.container=org.apache.kafka.tiered.storage.integration.DeleteSegmentsByRetentionTimeTest]


was (Author: davidarthur):
This test is still very flaky 
https://develocity.apache.org/scans/tests?search.names=Git%20Repository&search.rootProjectNames=kafka&search.tags=github&search.tasks=test&search.timeZoneId=America%2FNew_York&search.values=https:%2F%2Fgithub.com%2Fapache%2Fkafka&tests.container=org.apache.kafka.tiered.storage.integration.DeleteSegmentsByRetentionTimeTest

> Flaky test DeleteSegmentsByRetentionTimeTest#executeTieredStorageTest
> -
>
> Key: KAFKA-18606
> URL: https://issues.apache.org/jira/browse/KAFKA-18606
> Project: Kafka
>  Issue Type: Test
>Reporter: 黃竣陽
>Assignee: 黃竣陽
>Priority: Major
>
> [https://ge.apache.org/scans/tests?search.rootProjectNames=kafka&search.tasks=test&search.timeZoneId=Asia%2FTaipei&tests.container=org.apache.kafka.tiered.storage.integration.DeleteSegmentsByRetentionTimeTest&tests.test=executeTieredStorageTest(String%2C%20String)%5B2%5D]
> [Develocity 
> results|https://develocity.apache.org/scans/tests?search.relativeStartTime=P28D&search.rootProjectNames=kafka&search.timeZoneId=America%2FLos_Angeles&tests.container=org.apache.kafka.tiered.storage.integration.DeleteSegmentsByRetentionTimeTest]
>  for the last 28 days show this test is 8% flaky.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Reopened] (KAFKA-18845) Fail test QuorumControllerTest#testUncleanShutdownBrokerElrEnabled

2025-03-12 Thread David Arthur (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-18845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Arthur reopened KAFKA-18845:
--

> Fail test QuorumControllerTest#testUncleanShutdownBrokerElrEnabled
> --
>
> Key: KAFKA-18845
> URL: https://issues.apache.org/jira/browse/KAFKA-18845
> Project: Kafka
>  Issue Type: Bug
>Reporter: 黃竣陽
>Assignee: PoAn Yang
>Priority: Major
> Attachments: Screenshot 2025-03-09 at 11.46.53 PM.png
>
>
> The test always fails when I use this command `I=0; while ./gradlew clean 
> :metadata:test --tests "QuorumControllerTest" --rerun --fail-fast; do (( 
> I=$I+1 )); echo "Completed run: $I"; sleep 1; done` on my local machine 
> {code:java}
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.StaleBrokerEpochException: Expected broker 
> epoch 22, but got broker epoch 7
> at 
> java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:396)
> at 
> java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2073)
> at 
> org.apache.kafka.controller.QuorumControllerIntegrationTestUtils.sendBrokerHeartbeatToUnfenceBrokers(QuorumControllerIntegrationTestUtils.java:177)
> at 
> org.apache.kafka.controller.QuorumControllerTest.testUncleanShutdownBrokerElrEnabled(QuorumControllerTest.java:498)
> at java.base/java.lang.reflect.Method.invoke(Method.java:568)
> at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
> at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
> Caused by: org.apache.kafka.common.errors.StaleBrokerEpochException: Expected 
> broker epoch 22, but got broker epoch 7 {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-18845) Fail test QuorumControllerTest#testUncleanShutdownBrokerElrEnabled

2025-03-12 Thread David Arthur (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-18845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17934548#comment-17934548
 ] 

David Arthur commented on KAFKA-18845:
--

There was another failure on trunk 
https://github.com/apache/kafka/actions/runs/13792418104/job/38581952591

> Fail test QuorumControllerTest#testUncleanShutdownBrokerElrEnabled
> --
>
> Key: KAFKA-18845
> URL: https://issues.apache.org/jira/browse/KAFKA-18845
> Project: Kafka
>  Issue Type: Bug
>Reporter: 黃竣陽
>Assignee: PoAn Yang
>Priority: Major
> Attachments: Screenshot 2025-03-09 at 11.46.53 PM.png
>
>
> The test always fails when I use this command `I=0; while ./gradlew clean 
> :metadata:test --tests "QuorumControllerTest" --rerun --fail-fast; do (( 
> I=$I+1 )); echo "Completed run: $I"; sleep 1; done` on my local machine 
> {code:java}
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.StaleBrokerEpochException: Expected broker 
> epoch 22, but got broker epoch 7
> at 
> java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:396)
> at 
> java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2073)
> at 
> org.apache.kafka.controller.QuorumControllerIntegrationTestUtils.sendBrokerHeartbeatToUnfenceBrokers(QuorumControllerIntegrationTestUtils.java:177)
> at 
> org.apache.kafka.controller.QuorumControllerTest.testUncleanShutdownBrokerElrEnabled(QuorumControllerTest.java:498)
> at java.base/java.lang.reflect.Method.invoke(Method.java:568)
> at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
> at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
> Caused by: org.apache.kafka.common.errors.StaleBrokerEpochException: Expected 
> broker epoch 22, but got broker epoch 7 {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] MINOR: Remove unused ConfigCommandOptions#forceOpt [kafka]

2025-03-12 Thread via GitHub


chia7712 commented on PR #19170:
URL: https://github.com/apache/kafka/pull/19170#issuecomment-2718377684

   the failed test is traced by 
https://issues.apache.org/jira/browse/KAFKA-16024


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18736: Do not send fields if not needed [kafka]

2025-03-12 Thread via GitHub


cadonna commented on code in PR #19181:
URL: https://github.com/apache/kafka/pull/19181#discussion_r1991820732


##
clients/src/main/java/org/apache/kafka/clients/consumer/internals/StreamsGroupHeartbeatRequestManager.java:
##
@@ -61,9 +61,23 @@ public class StreamsGroupHeartbeatRequestManager implements 
RequestManager {
 
 static class HeartbeatState {
 
+// Fields of StreamsGroupHeartbeatRequest sent in the most recent 
request
+static class LastSentFields {
+
+private StreamsRebalanceData.Assignment assignment = null;

Review Comment:
   Done



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18736: Do not send fields if not needed [kafka]

2025-03-12 Thread via GitHub


cadonna commented on code in PR #19181:
URL: https://github.com/apache/kafka/pull/19181#discussion_r1991827905


##
clients/src/main/java/org/apache/kafka/clients/consumer/internals/StreamsGroupHeartbeatRequestManager.java:
##
@@ -82,29 +97,41 @@ public StreamsGroupHeartbeatRequestData buildRequestData() {
 data.setMemberId(membershipManager.memberId());
 data.setMemberEpoch(membershipManager.memberEpoch());
 membershipManager.groupInstanceId().ifPresent(data::setInstanceId);
-StreamsGroupHeartbeatRequestData.Topology topology = new 
StreamsGroupHeartbeatRequestData.Topology();
-
topology.setSubtopologies(getTopologyFromStreams(streamsRebalanceData.subtopologies()));
-topology.setEpoch(streamsRebalanceData.topologyEpoch());
-data.setRebalanceTimeoutMs(rebalanceTimeoutMs);
-data.setTopology(topology);
-data.setProcessId(streamsRebalanceData.processId().toString());
-streamsRebalanceData.endpoint().ifPresent(userEndpoint -> {
-data.setUserEndpoint(new 
StreamsGroupHeartbeatRequestData.Endpoint()
-.setHost(userEndpoint.host())
-.setPort(userEndpoint.port())
-);
-});
-
data.setClientTags(streamsRebalanceData.clientTags().entrySet().stream()
-.map(entry -> new StreamsGroupHeartbeatRequestData.KeyValue()
-.setKey(entry.getKey())
-.setValue(entry.getValue())
-)
-.collect(Collectors.toList()));
+
+boolean joining = membershipManager.state() == MemberState.JOINING;
+
+if (joining) {
+StreamsGroupHeartbeatRequestData.Topology topology = new 
StreamsGroupHeartbeatRequestData.Topology();
+
topology.setSubtopologies(getTopologyFromStreams(streamsRebalanceData.subtopologies()));
+topology.setEpoch(streamsRebalanceData.topologyEpoch());
+data.setTopology(topology);
+data.setRebalanceTimeoutMs(rebalanceTimeoutMs);
+data.setProcessId(streamsRebalanceData.processId().toString());
+streamsRebalanceData.endpoint().ifPresent(userEndpoint -> {
+data.setUserEndpoint(new 
StreamsGroupHeartbeatRequestData.Endpoint()
+.setHost(userEndpoint.host())
+.setPort(userEndpoint.port())
+);
+});
+
data.setClientTags(streamsRebalanceData.clientTags().entrySet().stream()
+.map(entry -> new 
StreamsGroupHeartbeatRequestData.KeyValue()
+.setKey(entry.getKey())
+.setValue(entry.getValue())
+)
+.collect(Collectors.toList()));
+data.setActiveTasks(convertTaskIdCollection(Set.of()));

Review Comment:
   The protocol specifies that the assignment needs to consist of empty sets 
when joining. The broker-side heartbeat handler distinguishes between an empty 
assignment and an assignment that was not sent in the heartbeat; a short 
illustration follows. 
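   
   A minimal standalone illustration of that null-vs-empty distinction 
   (hypothetical helper, not the actual request schema):
   
   ```java
   import java.util.List;
   
   public class AssignmentFieldSemantics {
       // null means "field not sent"; an empty list means "empty assignment".
       static String interpret(List<String> activeTasks) {
           if (activeTasks == null) {
               return "omitted: broker keeps the previously reported assignment";
           }
           if (activeTasks.isEmpty()) {
               return "explicitly empty: member owns no tasks (e.g. while joining)";
           }
           return "member reports " + activeTasks.size() + " owned tasks";
       }
   
       public static void main(String[] args) {
           System.out.println(interpret(null));
           System.out.println(interpret(List.of()));
           System.out.println(interpret(List.of("0_0", "0_1")));
       }
   }
   ```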



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18843: Fix MirrorMaker2 workerId is not unique, but use the sam… [kafka]

2025-03-12 Thread via GitHub


viktorsomogyi commented on code in PR #18994:
URL: https://github.com/apache/kafka/pull/18994#discussion_r1991829652


##
checkstyle/import-control.xml:
##
@@ -567,6 +567,7 @@
   
   
   
+  

Review Comment:
   @k0b3rIT sorry, I was doing a final pass when I noticed this. Do you think 
you could avoid using the UriBuilder class below, and thus avoid adding this 
exception? A sketch of the alternative follows.
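   
   For illustration, something along these lines would avoid the jakarta.ws.rs 
   dependency entirely (hypothetical names, just a sketch):
   
   ```java
   import java.net.URI;
   
   public class WorkerUrlExample {
       // Plain URI construction instead of UriBuilder, so no new
       // import-control exception is needed.
       static URI workerUri(String host, int port) {
           return URI.create("http://" + host + ":" + port);
       }
   
       public static void main(String[] args) {
           System.out.println(workerUri("worker-1.example.com", 8083));
       }
   }
   ```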



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: Remove unused ConfigCommandOptions#forceOpt [kafka]

2025-03-12 Thread via GitHub


chia7712 merged PR #19170:
URL: https://github.com/apache/kafka/pull/19170


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-10731: add support for SSL hot reload [kafka]

2025-03-12 Thread via GitHub


mimaison commented on PR #17987:
URL: https://github.com/apache/kafka/pull/17987#issuecomment-2718390145

   Thanks for opening a KIP. I commented on the thread you started on the dev 
list.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (KAFKA-18904) Listing of configs for dynamically created resources is mysterious

2025-03-12 Thread PoAn Yang (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-18904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17934589#comment-17934589
 ] 

PoAn Yang commented on KAFKA-18904:
---

KIP-1142: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-1142%3A+Allow+to+list+non-existent+group+which+has+dynamic+config

Discussion thread: 
https://lists.apache.org/thread/c3q584qtrodys2mkzdg0qxrktzbcjzmp
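
For context, describing a named group's configs already works via the Admin 
API; what is missing is a way to list which group resources have configs 
defined. A minimal sketch of the working describe path (assuming 
ConfigResource.Type.GROUP, which was added for group configs; names are 
placeholders):

{code:java}
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class DescribeGroupConfigExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            ConfigResource group = new ConfigResource(ConfigResource.Type.GROUP, "G1");
            // Works for a named group, but there is no RPC to list the group
            // resources for which configs are defined.
            Map<ConfigResource, Config> configs =
                admin.describeConfigs(List.of(group)).all().get();
            configs.forEach((r, c) -> System.out.println(r + " -> " + c));
        }
    }
}
{code}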


> Listing of configs for dynamically created resources is mysterious
> --
>
> Key: KAFKA-18904
> URL: https://issues.apache.org/jira/browse/KAFKA-18904
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Andrew Schofield
>Assignee: PoAn Yang
>Priority: Major
>  Labels: needs-kip
>
> The `kafka-configs.sh` tool can be used to set configurations on dynamically 
> created resources such as groups and client metrics. However, the way that 
> listing of the configs works is unhelpful.
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --group G1 
> --add-config consumer.heartbeat.interval.ms=1
> * This defines the config consumer.heartbeat.interval.ms
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe 
> --entity-type groups
> * This only describes the configs of groups that actually exist, i.e. groups 
> that have actually started being used.
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe 
> --entity-type groups --entity-name G1
> * This actually displays the configs for G1.
> The problem is that using `--describe` with no entity name, the tool lists 
> the resources (the groups) not the configs. As a result, if you define 
> configs in preparation for the use of groups in the future, you need to 
> remember what you created. You cannot list the groups for which configs are 
> defined, only the groups which actually exist from the point of view of the 
> group coordinator.
> Client metrics are a bit better because there is at least an RPC for listing 
> the client metrics resources.
> There is a second class of related problem.
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe 
> --entity-type groups --entity-name DOESNOTEXIST
> * This does not return an error for a non-existent resource.
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe 
> --entity-type client-metrics --entity-name DOESNOTEXIST
> * This does not return an error for a non-existent resource.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] KAFKA-15931: Reopen TransactionIndex if channel is closed (#15241) [kafka]

2025-03-12 Thread via GitHub


junrao commented on PR #19150:
URL: https://github.com/apache/kafka/pull/19150#issuecomment-2719366230

   @jeqo : Thanks for the PR. We probably want to hold off on this PR under 
https://github.com/apache/kafka/pull/15241#discussion_r1988994745 is resolved.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (KAFKA-18973) Review MetadataSchemaCheckerToolTest.testVerifyEvolutionGit requiring git project

2025-03-12 Thread PoAn Yang (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-18973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17935028#comment-17935028
 ] 

PoAn Yang commented on KAFKA-18973:
---

Hi [~lianetm], if you're not working on this, may I take it? Thanks.

> Review MetadataSchemaCheckerToolTest.testVerifyEvolutionGit requiring git 
> project
> -
>
> Key: KAFKA-18973
> URL: https://issues.apache.org/jira/browse/KAFKA-18973
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 4.0.0
>Reporter: Lianet Magrans
>Priority: Major
> Fix For: 4.1.0
>
>
> While testing a release candidate we noticed that, because the test 
> testVerifyEvolutionGit requires a git directory, running tests on a release 
> source folder after download no longer works (as it did in previous 
> versions). As of 4.0 it fails with: {_}java.lang.RuntimeException: Invalid 
> directory, need to be within a Git repository{_}.
> This started failing on 4.0, which is the first release to include this test.
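
One possible direction (illustration only, hypothetical guard): skip the test 
with a JUnit assumption when no git work tree is present, so runs from a 
source release are skipped rather than failed:

{code:java}
import static org.junit.jupiter.api.Assumptions.assumeTrue;

import java.nio.file.Files;
import java.nio.file.Path;

import org.junit.jupiter.api.Test;

class GitGuardExample {
    // Walk up from the working directory looking for a .git marker.
    private static boolean insideGitRepository() {
        for (Path p = Path.of("").toAbsolutePath(); p != null; p = p.getParent()) {
            if (Files.exists(p.resolve(".git"))) {
                return true;
            }
        }
        return false;
    }

    @Test
    void testVerifyEvolutionGitGuarded() {
        assumeTrue(insideGitRepository(), "skipped: not inside a Git repository");
        // ... the actual schema evolution check would run here ...
    }
}
{code}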



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-18973) Review MetadataSchemaCheckerToolTest.testVerifyEvolutionGit requiring git project

2025-03-12 Thread Lianet Magrans (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-18973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17935037#comment-17935037
 ] 

Lianet Magrans commented on KAFKA-18973:


Sure, thanks!

> Review MetadataSchemaCheckerToolTest.testVerifyEvolutionGit requiring git 
> project
> -
>
> Key: KAFKA-18973
> URL: https://issues.apache.org/jira/browse/KAFKA-18973
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 4.0.0
>Reporter: Lianet Magrans
>Priority: Major
> Fix For: 4.1.0
>
>
> While testing a release candidate we noticed that, because the test 
> testVerifyEvolutionGit requires a git directory, running tests on a release 
> source folder after download no longer works (as it did in previous 
> versions). As of 4.0 it fails with: {_}java.lang.RuntimeException: Invalid 
> directory, need to be within a Git repository{_}.
> This started failing on 4.0, which is the first release to include this test.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] KAFKA-18074: Add kafka client compatibility matrix [kafka]

2025-03-12 Thread via GitHub


dajac commented on PR #18091:
URL: https://github.com/apache/kafka/pull/18091#issuecomment-2713111637

   @m1a2st Would you have time to address the remaining comments? This is a 
blocker for the 4.0 release, so we need to merge it asap. Thanks for your 
support!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18915: Rewrite AdminClientRebootstrapTest to cover the current scenario [kafka]

2025-03-12 Thread via GitHub


clarkwtc commented on PR #19187:
URL: https://github.com/apache/kafka/pull/19187#issuecomment-2719603347

   @chia7712 
   Sorry, I missed it.
   I've fixed it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (KAFKA-18974) Uneven distribution of topic partitions across consumers while using Cooperative Sticky Assignor

2025-03-12 Thread Gangadharan (Jira)
Gangadharan created KAFKA-18974:
---

 Summary: Uneven distribution of topic partitions across consumers 
while using Cooperative Sticky Assignor
 Key: KAFKA-18974
 URL: https://issues.apache.org/jira/browse/KAFKA-18974
 Project: Kafka
  Issue Type: Bug
  Components: clients, consumer
Affects Versions: 3.8.1
Reporter: Gangadharan


I came across a scenario where the spread of each topic's partitions across 
consumer threads is uneven. The topic with high TPS (e.g. 85% of the traffic) 
had more partitions than the topics with low TPS (e.g. 15% of the traffic). 
The consumer threads had subscribed to both sets of topics. Some of the 
consumer threads were assigned more partitions of the low-TPS topics; as a 
result, the pods whose consumer threads held more partitions of the high-TPS 
topic had to work harder, resulting in higher lag. If we choose round robin 
instead, the distribution is even between threads and across pods, but then we 
are limited by the stop-the-world condition (see the assignor configuration 
sketch below).
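
For reference, a minimal sketch of the two configurations being compared; the 
broker address, group id, and topic names are placeholders:

{code:java}
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.CooperativeStickyAssignor;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class AssignorConfigExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "test-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        // Cooperative, incremental rebalances, but the per-topic spread can
        // end up uneven, as described above:
        props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                CooperativeStickyAssignor.class.getName());
        // Even per-topic spread, but eager "stop the world" rebalances:
        // props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
        //         org.apache.kafka.clients.consumer.RoundRobinAssignor.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("test_topic_1", "test_topicone_1"));
        }
    }
}
{code}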

An issue was already raised and fixed in this context, but it doesn't fix the 
whole problem. I suspect this is because, during a rebalance, only the 
partitions that are supposed to be moved away from existing consumers are 
sorted and redistributed; there is no logic to also check whether the retained 
partitions should be moved to ensure an even spread across consumers. 

[KAFKA-16277] CooperativeStickyAssignor does not spread topics evenly among 
consumer group - ASF Jira

 

Below is a sample test:

2 pods with 6 consumer threads in each. Two topics with 18 partitions each 
(test_topic_1 with higher inflow compared to test_topicone_1). As we can see, 
test_topic_1 is concentrated in pod1; as a result, lag starts to build with 
the cooperative sticky strategy. With round robin, however, the topic is 
distributed between pods.

Note: the sample test uses the same partition count for both topics for ease 
of understanding. Irrespective of the partition count of the topics, the 
behavior seems to be the same.
 

Cooperative Sticky:

Pod1

c--> consumer 1912486590767 [test_topic_1-1, test_topic_1-3, 
{*}test_topicone_1{*}-1]
c--> consumer 1922696734819 [test_topic_1-11, test_topic_1-6, 
{*}test_topicone_1{*}-6]
c--> consumer 1941340051228 [test_topic_1-12, test_topic_1-7, 
{*}test_topicone_1{*}-7]
c--> consumer 1940955938996 [test_topic_1-0, test_topic_1-8, 
{*}test_topicone_1{*}-0]
c--> consumer 1941837822481 [test_topic_1-2, test_topic_1-9, 
{*}test_topicone_1{*}-2] 
c--> consumer 1942719746188 [test_topic_1-10, test_topic_1-4, 
{*}test_topicone_1{*}-4] 

 
Pod2

c--> consumer 1941486742305 [test_topic_1-13, {*}test_topicone_1{*}-13, 
{*}test_topicone_1{*}-5] 
c--> consumer 1941837974018 [test_topic_1-14, {*}test_topicone_1{*}-14, 
{*}test_topicone_1{*}-8] 
c--> consumer 1942719897724 [test_topic_1-15, {*}test_topicone_1{*}-15, 
{*}test_topicone_1{*}-9]
c--> consumer 1942696886353 [test_topic_1-16, {*}test_topicone_1{*}-10, 
{*}test_topicone_1{*}-16]
c--> consumer 1941340202762 [test_topic_1-17, {*}test_topicone_1{*}-11, 
{*}test_topicone_1{*}-17]
c--> consumer 1940956090534 [test_topic_1-5, {*}test_topicone_1{*}-12, 
{*}test_topicone_1{*}-3]

-

Round Robin:

Pod1

c--> consumer 1941408797822 [test_topic_1-0, test_topic_1-12, 
{*}test_topicone_1{*}-6]
c--> consumer 1941456423553 [test_topic_1-9, {*}test_topicone_1{*}-15, 
{*}test_topicone_1{*}-3]
c--> consumer 1942070859325 [test_topic_1-14, test_topic_1-2, 
{*}test_topicone_1{*}-8]
c--> consumer 1941385036886 [test_topic_1-16, test_topic_1-4, 
{*}test_topicone_1{*}-10]
c--> consumer 1941105638483 [test_topic_1-6, {*}test_topicone_1{*}-0, 
{*}test_topicone_1{*}-12] 
c--> consumer 1941885698382 [test_topic_1-10, {*}test_topicone_1{*}-16, 
{*}test_topicone_1{*}-4]

Pod2

c--> consumer 1941456538287 [test_topic_1-8, {*}test_topicone_1{*}-14, 
{*}test_topicone_1{*}-2]
c--> consumer 1942070974058 [test_topic_1-15, test_topic_1-3, 
{*}test_topicone_1{*}-9]
c--> consumer 1941885813119 [test_topic_1-11, {*}test_topicone_1{*}-19, 
{*}test_topicone_1{*}-5]
c--> consumer 1941408912555 [test_topic_1-1, test_topic_1-13, 
{*}test_topicone_1{*}-7]
c--> consumer 1941385151618 [test_topic_1-17, test_topic_1-5, 
{*}test_topicone_1{*}-11]
c--> consumer 1941105753216 [test_topic_1-7, {*}test_topicone_1{*}-1, 
{*}test_topicone_1{*}-13]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] add JDK 11 check and test [kafka]

2025-03-12 Thread via GitHub


Rancho-7 opened a new pull request, #19196:
URL: https://github.com/apache/kafka/pull/19196

   Delete this text and replace it with a detailed description of your change. 
The 
   PR title and body will become the squashed commit message.
   
   If you would like to tag individuals, add some commentary, upload images, or
   include other supplemental information that should not be part of the 
eventual
   commit message, please use a separate comment.
   
   If applicable, please include a summary of the testing strategy (including 
   rationale) for the proposed change. Unit and/or integration tests are 
expected
   for any behavior change and system tests should be considered for larger
   changes.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (KAFKA-16277) CooperativeStickyAssignor does not spread topics evenly among consumer group

2025-03-12 Thread Cameron Redpath (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17935045#comment-17935045
 ] 

Cameron Redpath commented on KAFKA-16277:
-

Given both topics had 12 partitions, even distribution should be possible with 
1/2/3/4/6/12 consumers, so yes, with 7/8/9/10/11 consumers even distribution 
is impossible. That wasn't really an issue when there were 7/8/9/10/11 
consumers, as the difference is smaller, but when you scale back to fewer 
consumers the difference is more noticeable, since the load ends up relatively 
very imbalanced between topics.

 

Per Sophie's comment, "emphasis ... was put on "stickiness" and 
partition-number balance, with good data parallelism ie topic-level balance 
being best-effort at most", so I think it's not a priority of this assignor; 
maybe not a bug, but it could be improved with some effort. Our use case is 
not too affected by the "stop the world" condition, as you put it, so we 
decided to simply go back to what was working for us.

 

> CooperativeStickyAssignor does not spread topics evenly among consumer group
> 
>
> Key: KAFKA-16277
> URL: https://issues.apache.org/jira/browse/KAFKA-16277
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Reporter: Cameron Redpath
>Assignee: Cameron Redpath
>Priority: Major
> Fix For: 3.8.0, 3.7.1
>
> Attachments: image-2024-02-19-13-00-28-306.png
>
>
> Consider the following scenario:
> `topic-1`: 12 partitions
> `topic-2`: 12 partitions
>  
> Of note, `topic-1` gets approximately 10 times more messages through it than 
> `topic-2`. 
>  
> Both of these topics are consumed by a single application, single consumer 
> group, which scales under load. Each member of the consumer group subscribes 
> to both topics. The `partition.assignment.strategy` being used is 
> `org.apache.kafka.clients.consumer.CooperativeStickyAssignor`. The 
> application may start with one consumer. It consumes all partitions from both 
> topics.
>  
> The problem begins when the application scales up to two consumers. What is 
> seen is that all partitions from `topic-1` go to one consumer, and all 
> partitions from `topic-2` go to the other consumer. In the case with one 
> topic receiving more messages than the other, this results in a very 
> imbalanced group where one consumer is receiving 10x the traffic of the other 
> due to partition assignment.
>  
> This is the issue being seen in our cluster at the moment. See this graph of 
> the number of messages being processed by each consumer as the group scales 
> from one to four consumers:
> !image-2024-02-19-13-00-28-306.png|width=537,height=612!
> Things to note from this graphic:
>  * With two consumers, the partitions for a topic all go to a single consumer 
> each
>  * With three consumers, the partitions for a topic are split between two 
> consumers each
>  * With four consumers, the partitions for a topic are split between three 
> consumers each
>  * The total number of messages being processed by each consumer in the group 
> is very imbalanced throughout the entire period
>  
> With regard to the number of _partitions_ being assigned to each consumer, 
> the group is balanced. However, the assignment appears to be biased so that 
> partitions from the same topic go to the same consumer. In our scenario, this 
> leads to very undesirable partition assignment.
>  
> I question if the behaviour of the assignor should be revised, so that each 
> topic has its partitions maximally spread across all available members of the 
> consumer group. In the above scenario, this would result in much more even 
> distribution of load. The behaviour would then be:
>  * With two consumers, 6 partitions from each topic go to each consumer
>  * With three consumers, 4 partitions from each topic go to each consumer
>  * With four consumers, 3 partitions from each topic go to each consumer
>  
> Of note, we only saw this behaviour after migrating to the 
> `CooperativeStickyAssignor`. It was not an issue with the default partition 
> assignment strategy.
>  
> It is possible this may be intended behaviour. In which case, what is the 
> preferred workaround for our scenario? Our current workaround if we decide to 
> go ahead with the update to `CooperativeStickyAssignor` may be to limit our 
> consumers so they only subscribe to one topic, and have two consumer threads 
> per instance of the application.  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-18964) Allow to set weights for controller nodes for leader election

2025-03-12 Thread Luke Chen (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-18964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17935047#comment-17935047
 ] 

Luke Chen commented on KAFKA-18964:
---

[~frankvicky], forgot to say: you are welcome to propose your idea or share 
your thoughts!

> Allow to set weights for controller nodes for leader election
> -
>
> Key: KAFKA-18964
> URL: https://issues.apache.org/jira/browse/KAFKA-18964
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Luke Chen
>Assignee: TengYao Chi
>Priority: Major
>  Labels: needs-kip
>
> In a stretch cluster environment, the nodes are located in different data 
> centers for disaster recovery, so the controller nodes in the backup cluster 
> should serve as followers. Only when a disaster happens should the controller 
> nodes in the backup cluster be elected as leader. In the current design, the 
> candidate node is randomly chosen. We could apply a "weight" to the 
> controller nodes to achieve the behavior described above.
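
For illustration, a weighted choice among candidates could look like the 
following minimal sketch (hypothetical names; this is not the actual raft 
election code):

{code:java}
import java.util.List;
import java.util.Random;

public class WeightedCandidatePick {
    record Voter(int id, double weight) { }

    // Pick a voter id with probability proportional to its weight.
    static int pick(List<Voter> voters, Random rng) {
        double total = voters.stream().mapToDouble(Voter::weight).sum();
        double r = rng.nextDouble() * total;
        for (Voter v : voters) {
            r -= v.weight();
            if (r <= 0) {
                return v.id();
            }
        }
        return voters.get(voters.size() - 1).id(); // guard against rounding
    }

    public static void main(String[] args) {
        // Primary data center voters get a high weight; the backup data
        // center voter gets a near-zero weight, so it is rarely chosen
        // while the primary voters are still available.
        List<Voter> voters = List.of(new Voter(1, 10), new Voter(2, 10), new Voter(3, 0.1));
        System.out.println("elected: " + pick(voters, new Random()));
    }
}
{code}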



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] KAFKA-18379: Enforce resigned cannot transition to any other state in same epoch [kafka]

2025-03-12 Thread via GitHub


github-actions[bot] commented on PR #18789:
URL: https://github.com/apache/kafka/pull/18789#issuecomment-2719723238

   A label of 'needs-attention' was automatically added to this PR in order to 
raise the
   attention of the committers. Once this issue has been triaged, the `triage` 
label
   should be removed to prevent this automation from happening again.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (KAFKA-16277) CooperativeStickyAssignor does not spread topics evenly among consumer group

2025-03-12 Thread Gangadharan (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17935048#comment-17935048
 ] 

Gangadharan commented on KAFKA-16277:
-

thanks for the quick feedback Cameron.

> CooperativeStickyAssignor does not spread topics evenly among consumer group
> 
>
> Key: KAFKA-16277
> URL: https://issues.apache.org/jira/browse/KAFKA-16277
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Reporter: Cameron Redpath
>Assignee: Cameron Redpath
>Priority: Major
> Fix For: 3.8.0, 3.7.1
>
> Attachments: image-2024-02-19-13-00-28-306.png
>
>
> Consider the following scenario:
> `topic-1`: 12 partitions
> `topic-2`: 12 partitions
>  
> Of note, `topic-1` gets approximately 10 times more messages through it than 
> `topic-2`. 
>  
> Both of these topics are consumed by a single application, single consumer 
> group, which scales under load. Each member of the consumer group subscribes 
> to both topics. The `partition.assignment.strategy` being used is 
> `org.apache.kafka.clients.consumer.CooperativeStickyAssignor`. The 
> application may start with one consumer. It consumes all partitions from both 
> topics.
>  
> The problem begins when the application scales up to two consumers. What is 
> seen is that all partitions from `topic-1` go to one consumer, and all 
> partitions from `topic-2` go to the other consumer. In the case with one 
> topic receiving more messages than the other, this results in a very 
> imbalanced group where one consumer is receiving 10x the traffic of the other 
> due to partition assignment.
>  
> This is the issue being seen in our cluster at the moment. See this graph of 
> the number of messages being processed by each consumer as the group scales 
> from one to four consumers:
> !image-2024-02-19-13-00-28-306.png|width=537,height=612!
> Things to note from this graphic:
>  * With two consumers, the partitions for a topic all go to a single consumer 
> each
>  * With three consumers, the partitions for a topic are split between two 
> consumers each
>  * With four consumers, the partitions for a topic are split between three 
> consumers each
>  * The total number of messages being processed by each consumer in the group 
> is very imbalanced throughout the entire period
>  
> With regard to the number of _partitions_ being assigned to each consumer, 
> the group is balanced. However, the assignment appears to be biased so that 
> partitions from the same topic go to the same consumer. In our scenario, this 
> leads to very undesirable partition assignment.
>  
> I question if the behaviour of the assignor should be revised, so that each 
> topic has its partitions maximally spread across all available members of the 
> consumer group. In the above scenario, this would result in much more even 
> distribution of load. The behaviour would then be:
>  * With two consumers, 6 partitions from each topic go to each consumer
>  * With three consumers, 4 partitions from each topic go to each consumer
>  * With four consumers, 3 partitions from each topic go to each consumer
>  
> Of note, we only saw this behaviour after migrating to the 
> `CooperativeStickyAssignor`. It was not an issue with the default partition 
> assignment strategy.
>  
> It is possible this may be intended behaviour. In which case, what is the 
> preferred workaround for our scenario? Our current workaround if we decide to 
> go ahead with the update to `CooperativeStickyAssignor` may be to limit our 
> consumers so they only subscribe to one topic, and have two consumer threads 
> per instance of the application.  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] KAFKA-17171: Add test cases for `STATIC_BROKER_CONFIG`in kraft mode [kafka]

2025-03-12 Thread via GitHub


github-actions[bot] commented on PR #18463:
URL: https://github.com/apache/kafka/pull/18463#issuecomment-2719723304

   A label of 'needs-attention' was automatically added to this PR in order to 
raise the
   attention of the committers. Once this issue has been triaged, the `triage` 
label
   should be removed to prevent this automation from happening again.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [WIP] KAFKA-18279: add JDK 11 check and test for clients and streams module [kafka]

2025-03-12 Thread via GitHub


Rancho-7 commented on PR #19196:
URL: https://github.com/apache/kafka/pull/19196#issuecomment-2719723657

   I found that our current test flow doesn't work with JDK 11 for the 
`clients` and `streams` modules. So I created a new test flow to handle this.
   
   Here's what the output looks like:
   https://github.com/user-attachments/assets/a75ef2ac-6576-48fb-81e2-32599fc20ff8
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18142 Switch to `com.gradleup.shadow` [kafka]

2025-03-12 Thread via GitHub


mumrah merged PR #18018:
URL: https://github.com/apache/kafka/pull/18018


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18031: Flaky PlaintextConsumerTest testCloseLeavesGroupOnInterrupt [kafka]

2025-03-12 Thread via GitHub


github-actions[bot] commented on PR #19105:
URL: https://github.com/apache/kafka/pull/19105#issuecomment-2719723038

   A label of 'needs-attention' was automatically added to this PR in order to 
raise the
   attention of the committers. Once this issue has been triaged, the `triage` 
label
   should be removed to prevent this automation from happening again.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [WIP] KAFKA-18279: add JDK 11 check and test for clients and streams module [kafka]

2025-03-12 Thread via GitHub


Rancho-7 commented on PR #19196:
URL: https://github.com/apache/kafka/pull/19196#issuecomment-2719734796

   I am getting this error while testing, due to the `-Werror` flag in 
`build.gradle`, and I am working on fixing it.
   
   ```
   > Task :streams:compileJava
   /home/runner/work/kafka/kafka/streams/src/main/java/org/apache/kafka/streams/Topology.java:318: warning: [overloads] addSource(AutoOffsetReset,String,Deserializer,Deserializer,String...) in Topology is potentially ambiguous with addSource(AutoOffsetReset,String,TimestampExtractor,Deserializer,Deserializer,String...) in Topology
   public synchronized  Topology addSource(final AutoOffsetReset offsetReset,
   ^
     where K#1,V#1,K#2,V#2 are type-variables:
       K#1 extends Object declared in method addSource(AutoOffsetReset,String,Deserializer,Deserializer,String...)
       V#1 extends Object declared in method addSource(AutoOffsetReset,String,Deserializer,Deserializer,String...)
       K#2 extends Object declared in method addSource(AutoOffsetReset,String,TimestampExtractor,Deserializer,Deserializer,String...)
       V#2 extends Object declared in method addSource(AutoOffsetReset,String,TimestampExtractor,Deserializer,Deserializer,String...)
   /home/runner/work/kafka/kafka/streams/src/main/java/org/apache/kafka/streams/Topology.java:330: warning: [overloads] addSource(AutoOffsetReset,String,Deserializer,Deserializer,String...) in Topology is potentially ambiguous with addSource(AutoOffsetReset,String,TimestampExtractor,Deserializer,Deserializer,String...) in Topology
   public synchronized  Topology addSource(final org.apache.kafka.streams.AutoOffsetReset offsetReset,
   ^
     where K#1,V#1,K#2,V#2 are type-variables:
       K#1 extends Object declared in method addSource(AutoOffsetReset,String,Deserializer,Deserializer,String...)
       V#1 extends Object declared in method addSource(AutoOffsetReset,String,Deserializer,Deserializer,String...)
       K#2 extends Object declared in method addSource(AutoOffsetReset,String,TimestampExtractor,Deserializer,Deserializer,String...)
       V#2 extends Object declared in method addSource(AutoOffsetReset,String,TimestampExtractor,Deserializer,Deserializer,String...)
   error: warnings found and -Werror specified
   1 error
   2 warnings

   > Task :streams:compileJava FAILED
   ```
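
   One possible way out (my assumption, not necessarily the fix this PR will 
adopt) is to relax just the `overloads` lint for the affected module, so 
`-Werror` keeps failing the build on every other warning:

   ```
   // build.gradle sketch: suppress only the [overloads] javac lint for
   // :streams while leaving -Werror in force for all remaining warnings.
   project(':streams') {
     tasks.withType(JavaCompile).configureEach {
       options.compilerArgs += ['-Xlint:-overloads']
     }
   }
   ```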


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] create separate merge_group workflow [kafka-merge-queue-sandbox]

2025-03-12 Thread via GitHub


mumrah opened a new pull request, #62:
URL: https://github.com/apache/kafka-merge-queue-sandbox/pull/62

   (no comment)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18606: Flaky test DeleteSegmentsByRetentionTimeTest#executeTieredStorageTest [kafka]

2025-03-12 Thread via GitHub


junrao commented on code in PR #18861:
URL: https://github.com/apache/kafka/pull/18861#discussion_r1992116002


##
storage/src/test/java/org/apache/kafka/tiered/storage/integration/BaseDeleteSegmentsTest.java:
##
@@ -55,7 +56,7 @@ protected void writeTestSpecifications(TieredStorageTestBuilder builder) {
 .expectSegmentToBeOffloaded(broker0, topicA, p0, 2, new KeyValueSpec("k2", "v2"))
 .expectEarliestLocalOffsetInLogDirectory(topicA, p0, 3L)
 .produceWithTimestamp(topicA, p0, new KeyValueSpec("k0", "v0"), new KeyValueSpec("k1", "v1"),
-new KeyValueSpec("k2", "v2"), new KeyValueSpec("k3", "v3", System.currentTimeMillis()))
+new KeyValueSpec("k2", "v2"), new KeyValueSpec("k3", "v3", System.currentTimeMillis() + TimeUnit.HOURS.toMillis(1)))

Review Comment:
   Also, the error message in ConsumerAction is not very clear. It would be 
useful to include the expected count, the actual count and the OperationType.
   
   `Number of FETCH_OFFSET_INDEX requests from broker 0 to the tier storage 
does not match the expected value for topic-partition topicA-0 ==> expected: 
 but was: `
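
   For illustration, a sketch of a message carrying all three pieces of 
information (the variable names here are assumptions, not the actual 
ConsumerAction fields):

   ```java
   // Hypothetical sketch: include the operation type plus the expected and
   // actual request counts in the assertion message.
   String message = String.format(
       "Number of %s requests from broker %d to the tier storage does not match " +
       "the expected value for topic-partition %s ==> expected: <%d> but was: <%d>",
       operationType, brokerId, topicPartition, expectedCount, actualCount);
   assertEquals(expectedCount, actualCount, message);
   ```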



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (KAFKA-18955) MetadataSchemaCheckerTool bug fixes

2025-03-12 Thread Alyssa Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-18955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alyssa Huang reassigned KAFKA-18955:


Assignee: Alyssa Huang

> MetadataSchemaCheckerTool bug fixes
> ---
>
> Key: KAFKA-18955
> URL: https://issues.apache.org/jira/browse/KAFKA-18955
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.9.0
>Reporter: Alyssa Huang
>Assignee: Alyssa Huang
>Priority: Major
>
> Infinite loop and mis-assignment of a variable in the schema checker logic; 
> all "path"-related arguments should consistently take a fully qualified path.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] MINOR: Clean up metadata module [kafka]

2025-03-12 Thread via GitHub


sjhajharia commented on code in PR #19069:
URL: https://github.com/apache/kafka/pull/19069#discussion_r1992084076


##
metadata/src/test/java/org/apache/kafka/metadata/ControllerRegistrationTest.java:
##
@@ -46,18 +44,18 @@ static  Map doubleMap(K k1, V v1, K k2, V v2) {
 HashMap map = new HashMap<>();
 map.put(k1, v1);
 map.put(k2, v2);
-return Collections.unmodifiableMap(map);
+return map;

Review Comment:
   Thanks @chia7712 
   Addressed the same.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: Cleanups in CoreUtils [kafka]

2025-03-12 Thread via GitHub


mimaison merged PR #19175:
URL: https://github.com/apache/kafka/pull/19175


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (KAFKA-18970) [kafka-clients] Gradle module metadata publication should be enabled

2025-03-12 Thread Jira
Dejan Stojadinović created KAFKA-18970:
--

 Summary: [kafka-clients] Gradle module metadata publication should 
be enabled
 Key: KAFKA-18970
 URL: https://issues.apache.org/jira/browse/KAFKA-18970
 Project: Kafka
  Issue Type: Task
  Components: build, clients
Reporter: Dejan Stojadinović


(!) _*Prologue:*_ see this GitHub PR: 
[https://github.com/apache/kafka/pull/18018#discussion_r1968540211]

(i) *_More details:_* while changing the shadow plugin (KAFKA-18142) we were 
forced to disable Gradle module metadata publication for the _*clients*_ 
submodule (i.e. for _*kafka-clients*_ artifacts):
 * 
[https://docs.gradle.org/8.10.2/userguide/publishing_gradle_module_metadata.html#sub:disabling-gmm-publication]
 * 
[https://github.com/apache/kafka/commit/e3080684c08c1832b172f9ec3183a2c238306ff2#diff-49a96e7eea8a94af862798a45174e6ac43eb4f8b4bd40759b5da63ba31ec3ef7R1826]

(on) _*Action point:*_ remove this switch in order to restore the default 
behavior (i.e. to enable Gradle module metadata publication for the _*clients*_ 
submodule). See the sketch below.
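
_*For context:*_ the switch in question looks roughly like this (a sketch based 
on the Gradle documentation linked above; the exact build.gradle lines may 
differ):

{code}
// Disables Gradle Module Metadata publication for the matching tasks;
// removing this block restores the default (GMM is published).
tasks.withType(GenerateModuleMetadata) {
    enabled = false
}
{code}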



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16538) Support UpdateFeatures for kraft.version so we can go from static quorums to dynamic

2025-03-12 Thread Jira


[ 
https://issues.apache.org/jira/browse/KAFKA-16538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17934595#comment-17934595
 ] 

José Armando García Sancio commented on KAFKA-16538:


[~zheguang] Kafka 3.9.0 supports dynamically changing your controller cluster if 
the new cluster is created using 3.9.0. What 3.9.0 doesn't support is upgrading 
an existing cluster to 3.9.0 and having it support dynamically changing your 
controller cluster.

If a user needs dynamic controller-cluster membership (KIP-853) to migrate from 
ZK to KRaft, the recommendation is to first upgrade the ZK Kafka cluster to 
3.9.0, deploy a new Kafka controller cluster that supports dynamic membership, 
and finally perform the ZK to KRaft migration.

Hope that helps.
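
For reference, a minimal sketch of finalizing the feature on such a cluster 
with the kafka-features.sh tool (bootstrap address illustrative; exact flags 
and output may differ by version):

{code}
# Inspect the finalized feature levels, then upgrade kraft.version.
bin/kafka-features.sh --bootstrap-server localhost:9092 describe
bin/kafka-features.sh --bootstrap-server localhost:9092 upgrade --feature kraft.version=1
{code}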

> Support UpdateFeatures for kraft.version so we can go from static quorums to 
> dynamic
> 
>
> Key: KAFKA-16538
> URL: https://issues.apache.org/jira/browse/KAFKA-16538
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: José Armando García Sancio
>Assignee: José Armando García Sancio
>Priority: Major
>
> Should:
>  # Route request to cluster metadata kraft client.
>  # KRaft leader should check the supported version of all voters and observers
>  ## voter information comes from VoterSet
>  ## observer information is push down to kraft by the metadata controller
>  # Persist both the kraft.version and voter set in one control batch
> We need to allow for the kraft.version to succeed while the metadata 
> controller changes may fail. This is needed because there will be two batches 
> for this updates. One control record batch which includes kraft.version and 
> voter set, and one metadata batch which includes the feature records.
>  
> This change should also improve the handling of UpdateVoter to allow the 
> request when the kraft.version is 0.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] see what happens if a pr enqueued event fails [kafka-merge-queue-sandbox]

2025-03-12 Thread via GitHub


mumrah merged PR #61:
URL: https://github.com/apache/kafka-merge-queue-sandbox/pull/61


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18142 Switch to `com.gradleup.shadow` [kafka]

2025-03-12 Thread via GitHub


dejan2609 commented on PR #18018:
URL: https://github.com/apache/kafka/pull/18018#issuecomment-2718833869

   @mumrah the related JIRA ticket has been created here: 
   https://issues.apache.org/jira/browse/KAFKA-18970 **_[kafka-clients] Gradle 
module metadata publication should be enabled_**


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] create separate merge_group workflow [kafka-merge-queue-sandbox]

2025-03-12 Thread via GitHub


mumrah merged PR #62:
URL: https://github.com/apache/kafka-merge-queue-sandbox/pull/62


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (KAFKA-18142) Switch Kafka's Gradle build shadow plugin to `com.gradleup.shadow` (and upgrade version)

2025-03-12 Thread Jira


 [ 
https://issues.apache.org/jira/browse/KAFKA-18142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dejan Stojadinović resolved KAFKA-18142.

Resolution: Resolved

GitHub PR [https://github.com/apache/kafka/pull/18018] has been merged (hence 
this Jira ticket is resolved).

> Switch Kafka's Gradle build shadow plugin to `com.gradleup.shadow` (and 
> upgrade version)
> 
>
> Key: KAFKA-18142
> URL: https://issues.apache.org/jira/browse/KAFKA-18142
> Project: Kafka
>  Issue Type: Task
>  Components: build
>Reporter: Dejan Stojadinović
>Assignee: Dejan Stojadinović
>Priority: Major
>  Labels: gradle, shadow
>
> {panel:title=Prologue|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1|bgColor=#CE}
> * [https://github.com/apache/kafka/pull/16295] *KAFKA-16803: Update 
> ShadowJavaPlugin*
> * [https://github.com/apache/kafka/pull/17218] *Revert "KAFKA-16803: Change 
> fork, update ShadowJavaPlugin to 8.1.7 (#16295)"*
> * [https://github.com/apache/kafka/pull/16489] *KAFKA-17052: Downgrade 
> ShadowJarPlugin to 8.1.3*
> {panel}
> *{color:blue}Action point:{color}* 
> * switch shadow plugin from *_io.github.goooler.shadow_* to 
> *_com.gradleup.shadow_*
> * upgrade version from _*8.1.3*_ to _*8.3.5*_ (release notes: 
> https://gradleup.com/shadow/changes/#v8-3-5-2024-11-03)
> *{color:red}Rationale:{color}* both previous one and currently used shadow 
> plugins are now deprecated (in favor of *_com.gradleup.shadow_*):
>  - *_com.github.johnrengelman.shadow_* maintenance was transferred to 
> *_com.gradleup.shadow_*: 
> [https://github.com/GradleUp/shadow/tree/8.3.5?tab=readme-ov-file#gradle-shadow]
>  - *_io.github.goooler.shadow_*: changes are ported to 
> *_com.gradleup.shadow_*: 
> [https://github.com/Goooler/shadow?tab=readme-ov-file#gradle-shadow]
> (!) {color:green}*Pitfall (to keep in mind):*{color} 
> [https://github.com/apache/kafka/pull/15532] *KAFKA-16359: Corrected manifest 
> file for kafka-clients*
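
In plugins-DSL terms the switch boils down to (a sketch; the actual change also 
had to disable Gradle module metadata publication for kafka-clients, see 
KAFKA-18970):

{code}
plugins {
    // before (deprecated fork): id 'io.github.goooler.shadow' version '8.1.3'
    id 'com.gradleup.shadow' version '8.3.5'
}
{code}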



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] KAFKA-18927: Remove LATEST_0_11, LATEST_1_0, LATEST_1_1, LATEST_2_0 [kafka]

2025-03-12 Thread via GitHub


mjsax commented on PR #19134:
URL: https://github.com/apache/kafka/pull/19134#issuecomment-2718868615

   
http://ducktape-open-source-results.confluent.io.s3-website-us-west-2.amazonaws.com/confluent-open-source-kafka-branch-builder-system-test-results/?prefix=trunk/2025-03-12--001.75a62180-48c6-4d05-bc92-4c2f53205d7b--1741790546--Parkerhiphop--kakfa-18927--4bc1bdd3a8


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (KAFKA-18971) Update AK system tests for AK 4.0

2025-03-12 Thread Alieh Saeedi (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-18971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alieh Saeedi reassigned KAFKA-18971:


Assignee: Alieh Saeedi

> Update AK system tests for AK 4.0
> -
>
> Key: KAFKA-18971
> URL: https://issues.apache.org/jira/browse/KAFKA-18971
> Project: Kafka
>  Issue Type: Task
>  Components: streams
>Reporter: Alieh Saeedi
>Assignee: Alieh Saeedi
>Priority: Major
>
> Update AK system tests and add new “upgrade_from” version to {{StreamsConfig}}
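
A sketch of the application-side step this implies (the exact constant/value 
for upgrades from 3.9 is an assumption here):

{code:java}
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class UpgradeFromExample {
    // First rolling bounce onto 4.0: set upgrade.from, then remove it and
    // bounce again once every instance runs the new version.
    public static Properties firstBounceConfig() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");            // illustrative
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative
        props.put(StreamsConfig.UPGRADE_FROM_CONFIG, "3.9");
        return props;
    }
}
{code}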



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-18971) Update AK system tests for AK 4.0

2025-03-12 Thread Alieh Saeedi (Jira)
Alieh Saeedi created KAFKA-18971:


 Summary: Update AK system tests for AK 4.0
 Key: KAFKA-18971
 URL: https://issues.apache.org/jira/browse/KAFKA-18971
 Project: Kafka
  Issue Type: Task
  Components: streams
Reporter: Alieh Saeedi






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-18971) Update AK system tests for AK 4.0

2025-03-12 Thread Alieh Saeedi (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-18971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alieh Saeedi updated KAFKA-18971:
-
Description: Update AK system tests and add new “upgrade_from” version to 
{{StreamsConfig}}

> Update AK system tests for AK 4.0
> -
>
> Key: KAFKA-18971
> URL: https://issues.apache.org/jira/browse/KAFKA-18971
> Project: Kafka
>  Issue Type: Task
>  Components: streams
>Reporter: Alieh Saeedi
>Priority: Major
>
> Update AK system tests and add new “upgrade_from” version to {{StreamsConfig}}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [MINOR] Cleanup Server Common Module [kafka]

2025-03-12 Thread via GitHub


github-actions[bot] commented on PR #19085:
URL: https://github.com/apache/kafka/pull/19085#issuecomment-2716296530

   A label of 'needs-attention' was automatically added to this PR in order to 
raise the
   attention of the committers. Once this issue has been triaged, the `triage` 
label
   should be removed to prevent this automation from happening again.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18276 Migrate ProducerRebootstrapTest to new test infra [kafka]

2025-03-12 Thread via GitHub


clarkwtc commented on PR #19046:
URL: https://github.com/apache/kafka/pull/19046#issuecomment-2718481150

   @chia7712 @ijuma 
   I have already moved this test to clients-integration-tests.
   I have also added `` in 
`import-control-clients-integration-tests.xml`


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


