Re: [PR] KAFKA-18804: Remove slf4j warning when using tool script [kafka]

2025-03-05 Thread via GitHub


chia7712 merged PR #18918:
URL: https://github.com/apache/kafka/pull/18918


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-17607: Add CI step to verify LICENSE-binary [kafka]

2025-03-05 Thread via GitHub


xijiu commented on PR #18299:
URL: https://github.com/apache/kafka/pull/18299#issuecomment-2700736205

   @chia7712 Thanks for the CR; I have fixed the issues. One small point: when 
I executed the python script locally, it prompted that I needed to provide 
parameters. Please take a look.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] KAFKA-18811: Included admin.client.config in VerifiableShareConsumer.java [kafka]

2025-03-05 Thread via GitHub


chirag-wadhwa5 opened a new pull request, #19108:
URL: https://github.com/apache/kafka/pull/19108

   This PR adds a new flag to VerifiableShareConsumer.java, 
`admin.client.config`, which accepts a properties file for admin-client-related 
configs (see the example below).
   
   Reference: [KAFKA-18811](https://issues.apache.org/jira/browse/KAFKA-18811)
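
   A hypothetical invocation (the flag name comes from this PR and the class 
path matches the file it touches; the remaining arguments and the properties 
file contents are illustrative assumptions):

   # admin.properties (illustrative admin client overrides)
   request.timeout.ms=30000
   retries=5

   $ bin/kafka-run-class.sh org.apache.kafka.tools.VerifiableShareConsumer \
       --bootstrap-server localhost:9092 \
       --topic test-topic \
       --group-id share-group-1 \
       --admin.client.config admin.properties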


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-14484: Move UnifiedLog to storage module [kafka]

2025-03-05 Thread via GitHub


mimaison commented on code in PR #19030:
URL: https://github.com/apache/kafka/pull/19030#discussion_r1981276951


##
storage/src/main/java/org/apache/kafka/storage/internals/log/UnifiedLog.java:
##
@@ -55,6 +113,2289 @@ public class UnifiedLog {
 public static final String STRAY_DIR_SUFFIX = LogFileUtils.STRAY_DIR_SUFFIX;
 public static final long UNKNOWN_OFFSET = LocalLog.UNKNOWN_OFFSET;
 
+// For compatibility, metrics are defined to be under `Log` class
+private final KafkaMetricsGroup metricsGroup = new KafkaMetricsGroup("kafka.log", "Log");
+
+/* A lock that guards all modifications to the log */
+private final Object lock = new Object();
+private final Map<String, Map<String, String>> metricNames = new HashMap<>();
+
+// localLog The LocalLog instance containing non-empty log segments recovered from disk
+private final LocalLog localLog;
+private final BrokerTopicStats brokerTopicStats;
+private final ProducerStateManager producerStateManager;
+private final boolean remoteStorageSystemEnable;
+private final ScheduledFuture<?> producerExpireCheck;
+private final int producerIdExpirationCheckIntervalMs;
+private final String logIdent;
+private final Logger logger;
+private final LogValidator.MetricsRecorder validatorMetricsRecorder;
+
+/* The earliest offset which is part of an incomplete transaction. This is used to compute the
+ * last stable offset (LSO) in ReplicaManager. Note that it is possible that the "true" first unstable offset
+ * gets removed from the log (through record or segment deletion). In this case, the first unstable offset
+ * will point to the log start offset, which may actually be either part of a completed transaction or not
+ * part of a transaction at all. However, since we only use the LSO for the purpose of restricting the
+ * read_committed consumer to fetching decided data (i.e. committed, aborted, or non-transactional), this
+ * temporary abuse seems justifiable and saves us from scanning the log after deletion to find the first offsets
+ * of each ongoing transaction in order to compute a new first unstable offset. It is possible, however,
+ * that this could result in disagreement between replicas depending on when they began replicating the log.
+ * In the worst case, the LSO could be seen by a consumer to go backwards.
+ */
+private volatile Optional<LogOffsetMetadata> firstUnstableOffsetMetadata = Optional.empty();
+private volatile Optional<PartitionMetadataFile> partitionMetadataFile = Optional.empty();
+// This is the offset (inclusive) until which segments are copied to the remote storage.
+private volatile long highestOffsetInRemoteStorage = -1L;
+
+/* Keep track of the current high watermark in order to ensure that segments containing offsets at or above it are
+ * not eligible for deletion. This means that the active segment is only eligible for deletion if the high watermark
+ * equals the log end offset (which may never happen for a partition under consistent load). This is needed to
+ * prevent the log start offset (which is exposed in fetch responses) from getting ahead of the high watermark.
+ */
+private volatile LogOffsetMetadata highWatermarkMetadata;
+private volatile long localLogStartOffset;
+private volatile long logStartOffset;
+private volatile LeaderEpochFileCache leaderEpochCache;
+private volatile Optional<Uuid> topicId;
+private volatile LogOffsetsListener logOffsetsListener;
+
+/**
+ * A log which presents a unified view of local and tiered log segments.
+ *
+ * The log consists of tiered and local segments with the tiered portion of the log being optional. There could be an
+ * overlap between the tiered and local segments. The active segment is always guaranteed to be local. If tiered segments
+ * are present, they always appear at the beginning of the log, followed by an optional region of overlap, followed by the local
+ * segments including the active segment.
+ *
+ * NOTE: this class handles state and behavior specific to tiered segments as well as any behavior combining both tiered
+ * and local segments. The state and behavior specific to local segments are handled by the encapsulated LocalLog instance.
+ *
+ * @param logStartOffset The earliest offset allowed to be exposed to kafka client.
+ *   The logStartOffset can be updated by:
+ *   - user's DeleteRecordsRequest
+ *   - broker's log retention
+ *   - broker's log truncation
+ *   - broker's log recovery
+ *   The logStartOffset is used to decide the following:
+ *   - Log deletion. LogSegment whose nextOffset <= log's logStartOffset can be deleted.
+ * It may trigger log rolling if the active segment is deleted.

[jira] [Assigned] (KAFKA-18924) Running the storage module tests produces a storage/storage.log file

2025-03-05 Thread Jira


 [ 
https://issues.apache.org/jira/browse/KAFKA-18924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

黃竣陽 reassigned KAFKA-18924:
---

Assignee: 黃竣陽

> Running the storage module tests produces a storage/storage.log file
> 
>
> Key: KAFKA-18924
> URL: https://issues.apache.org/jira/browse/KAFKA-18924
> Project: Kafka
>  Issue Type: Task
>Reporter: Mickael Maison
>Assignee: 黃竣陽
>Priority: Major
>
> Running tests should not produce untracked files.
> Either we delete it when tests complete, or not produce it by default, or add 
> it to gitignore.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] MINOR: Refactor GroupCoordinatorConfig [kafka]

2025-03-05 Thread via GitHub


dajac merged PR #19092:
URL: https://github.com/apache/kafka/pull/19092


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-10864: Convert end txn marker schema to use auto-generated protocol [kafka]

2025-03-05 Thread via GitHub


chia7712 merged PR #9766:
URL: https://github.com/apache/kafka/pull/9766


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (KAFKA-18924) Running the storage module tests produces a storage/storage.log file

2025-03-05 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-18924:
--

 Summary: Running the storage module tests produces a 
storage/storage.log file
 Key: KAFKA-18924
 URL: https://issues.apache.org/jira/browse/KAFKA-18924
 Project: Kafka
  Issue Type: Task
Reporter: Mickael Maison


Running tests should not produce untracked files.

Either we delete it when tests complete, or not produce it by default, or add 
it to gitignore.
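
If the gitignore route is taken, a single entry would suffice (the path comes 
from the issue summary):

    # .gitignore entry for the file produced by the storage module tests
    storage/storage.log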



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] KAFKA-18811: Included admin.client.config in VerifiableShareConsumer.java [kafka]

2025-03-05 Thread via GitHub


apoorvmittal10 commented on code in PR #19108:
URL: https://github.com/apache/kafka/pull/19108#discussion_r1981287886


##
tools/src/main/java/org/apache/kafka/tools/VerifiableShareConsumer.java:
##
@@ -562,6 +557,13 @@ private static ArgumentParser argParser() {
 .metavar("CONFIG_FILE")
 .help("Consumer config properties file (config options shared with 
command line parameters will be overridden).");
 
+parser.addArgument("--admin.client.config")

Review Comment:
   Just an observation: the file arguments are mostly `-` separated, with the 
exception of `consumer.config` and `admin.client.config`. Why?



##
tools/src/main/java/org/apache/kafka/tools/VerifiableShareConsumer.java:
##
@@ -596,18 +599,29 @@ public static VerifiableShareConsumer createFromArgs(ArgumentParser parser, Stri
 StringDeserializer deserializer = new StringDeserializer();
 KafkaShareConsumer<String, String> consumer = new KafkaShareConsumer<>(consumerProps, deserializer, deserializer);
 
+Properties adminClientProps = new Properties();
+if (adminClientConfigFile != null) {
+    try {
+        adminClientProps.putAll(Utils.loadProps(adminClientConfigFile));
+    } catch (IOException e) {
+        throw new ArgumentParserException(e.getMessage(), parser);
+    }
+}
+
+adminClientProps.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, brokerHostandPort);

Review Comment:
   This means the bootstrap server, which is generally also included in the 
client config, will be ignored when set in the admin client property file; the 
`consumer.config`'s bootstrap server will be used instead. What admin 
properties do you need such that an explicit admin.client.config is required to 
be passed?
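
   A minimal sketch of the precedence in question (illustrative values, not 
code from this PR): a later `Properties.put` overwrites whatever was loaded 
from the file, so the file's `bootstrap.servers` entry never takes effect.

   import java.util.Properties;

   public class PrecedenceDemo {
       public static void main(String[] args) {
           Properties props = new Properties();
           // Stand-in for Utils.loadProps(adminClientConfigFile)
           props.put("bootstrap.servers", "from-file:9092");
           // The explicit put afterwards wins, so the file's value is ignored
           props.put("bootstrap.servers", "from-command-line:9092");
           System.out.println(props.getProperty("bootstrap.servers"));
           // prints: from-command-line:9092
       }
   }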



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HOTFIX: Do not use highest version when version is valid [kafka]

2025-03-05 Thread via GitHub


dengziming commented on PR #19109:
URL: https://github.com/apache/kafka/pull/19109#issuecomment-2700803255

   Hello @chia7712, sorry for my negligence. I accidentally set the highest 
version in all cases; it is not impactful currently, but it is a time bomb for 
the future.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (KAFKA-17808) InstanceAlreadyExistsException: kafka.admin.client:type=app-info,id=connector-dlq-adminclient- when add connector with tasks

2025-03-05 Thread Jhen-Yung Hsu (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-17808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17932554#comment-17932554
 ] 

Jhen-Yung Hsu commented on KAFKA-17808:
---

[~ignat233] Could you provide the steps to reproduce the error `Error 
registering AppInfo mbean
javax.management.InstanceAlreadyExistsException: 
kafka.admin.client:type=app-info,id=connector-dlq-adminclient-`?

This will help me verify that the fix and test case work and cover our target.


I was only able to see this message in the log with the following 
modifications:
{code:java}
cat logs/connect.log | grep -ri connector-dlq-adminclient-
logs/connect.log:   client.id = connector-dlq-adminclient-
logs/connect.log:   client.id = connector-dlq-adminclient-
logs/connect.log:[2025-03-05 03:51:55,766] INFO [local-file-sink|task-1] The 
mbean of App info: [kafka.admin.client], id: [connector-dlq-adminclient-] 
already exists, so skipping a new mbean creation. 
(org.apache.kafka.common.utils.AppInfoParser:66)
logs/connect.log:[2025-03-05 03:51:55,768] INFO App info kafka.admin.client for 
connector-dlq-adminclient- unregistered 
(org.apache.kafka.common.utils.AppInfoParser:89)
logs/connect.log:[2025-03-05 03:51:55,770] INFO App info kafka.admin.client for 
connector-dlq-adminclient- unregistered 
(org.apache.kafka.common.utils.AppInfoParser:89) {code}
 

 
config/connect-file-sink.properties (enabling the DLQ):
 
{code:java}
name=local-file-sink
connector.class=FileStreamSink
-tasks.max=1
+tasks.max=2
file=test.sink.txt
topics=connect-test

+errors.deadletterqueue.topic.name=dlq-topic
+errors.deadletterqueue.context.headers.enable=true
+errors.deadletterqueue.topic.replication.factor=1
+errors.tolerance=all {code}
 


Then, running:

{code:java}
bin/connect-standalone.sh config/connect-standalone.properties 
config/connect-file-sink.properties{code}



 

> InstanceAlreadyExistsException: 
> kafka.admin.client:type=app-info,id=connector-dlq-adminclient- when add 
> connector with tasks
> 
>
> Key: KAFKA-17808
> URL: https://issues.apache.org/jira/browse/KAFKA-17808
> Project: Kafka
>  Issue Type: Bug
>  Components: connect
>Reporter: ignat233
>Assignee: Jhen-Yung Hsu
>Priority: Major
> Attachments: image-2024-10-16-13-00-36-667.png
>
>
> Why do we always create an admin client with the same 
> "connector-dlq-adminclient-" value id?
> [https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java#L1008]
> For all other cases, a postfix is added.
> !image-2024-10-16-13-00-36-667.png! 
>  I get "Error registering AppInfo mbean
> javax.management.InstanceAlreadyExistsException: 
> kafka.admin.client:type=app-info,id=connector-dlq-adminclient-." error for 
> all tasks.
> It looks like the ConnectorTaskId should be added.
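
For illustration, a minimal sketch of the fix the issue hints at (hypothetical; 
the real change in Worker.java may differ): include the ConnectorTaskId in the 
DLQ admin client's client.id so each task registers a unique app-info mbean.

{code:java}
import java.util.HashMap;
import java.util.Map;

public class DlqClientIds {
    // Hypothetical helper: derive a unique client.id per task instead of the
    // fixed "connector-dlq-adminclient-" value used for every DLQ admin client.
    static String dlqAdminClientId(String connectorName, int taskId) {
        return "connector-dlq-adminclient-" + connectorName + "-" + taskId;
    }

    public static void main(String[] args) {
        Map<String, String> adminProps = new HashMap<>();
        adminProps.put("client.id", dlqAdminClientId("local-file-sink", 1));
        // prints {client.id=connector-dlq-adminclient-local-file-sink-1}
        System.out.println(adminProps);
    }
}
{code}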



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [MINOR] Cleanup Storage Module [kafka]

2025-03-05 Thread via GitHub


sjhajharia commented on PR #19072:
URL: https://github.com/apache/kafka/pull/19072#issuecomment-2700255546

   Thanks @m1a2st 
   It seems I missed it because it looked different. I have fixed it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] KAFKA-18031: Flaky PlaintextConsumerTest testCloseLeavesGroupOnInterrupt [kafka]

2025-03-05 Thread via GitHub


TaiJuWu opened a new pull request, #19105:
URL: https://github.com/apache/kafka/pull/19105

   Jira: https://issues.apache.org/jira/browse/KAFKA-18031
   
   This test exposes an issue: if the number of in-flight requests exceeds 
`maxInFlightRequestsPerConnection`, the `LeaveGroup` request is not sent in 
time.
   
   I made the following changes:
   1. Allow the `LeaveGroup` request to be sent even if `InFlightRequest` has 
reached the maximum.
   2. Move `MaybeThrowInterrupt` to the end of `ConsumerNetwork.poll`.
   3. For the test, honor the close timeout instead of the default value; this 
makes the test more stable (see the sketch below).
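
   A minimal illustration of point 3 (a plain KafkaConsumer is assumed here; 
the actual test setup differs): pass an explicit timeout to `close()` instead 
of relying on the default.

   import java.time.Duration;
   import java.util.Properties;
   import org.apache.kafka.clients.consumer.KafkaConsumer;

   public class CloseWithTimeout {
       public static void main(String[] args) {
           Properties props = new Properties();
           props.put("bootstrap.servers", "localhost:9092");
           props.put("group.id", "test-group");
           props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
           props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
           KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
           // An explicit budget gives the LeaveGroup request time to be sent,
           // instead of relying on the default close timeout.
           consumer.close(Duration.ofSeconds(30));
       }
   }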


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18766:Docs: Make usage of allow.everyone.if.no.acl.found config clearer [kafka]

2025-03-05 Thread via GitHub


AndrewJSchofield commented on code in PR #19077:
URL: https://github.com/apache/kafka/pull/19077#discussion_r1981043279


##
docs/security.html:
##
@@ -1248,11 +1248,15 @@ <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-11+-+Authorization+Interface">KIP-11</a>
 and
 resource patterns in <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-290%3A+Support+for+Prefixed+ACLs">KIP-290</a>.
-In order to add, remove, or list ACLs, you can use the Kafka ACL CLI 
kafka-acls.sh. By default, if no ResourcePatterns match a specific 
Resource R,
-then R has no associated ACLs, and therefore no one other than super users 
is allowed to access R.
-If you want to change that behavior, you can include the following in 
server.properties.
+In order to add, remove, or list ACLs, you can use the Kafka ACL CLI 
kafka-acls.sh. 
+Default Behavior Without ACLs:
+If a resource (R) does not have any ACLs defined—that is, if no ACL 
matches the resource—Kafka will restrict access to that resource. In this 
situation, only super users are allowed to access it.
+
+Changing the Default Behavior:
+If you prefer that resources without any ACLs be accessible by all 
users (instead of just super users), you can change the default behavior. To do 
this, add the following line to your server.properties file:
 allow.everyone.if.no.acl.found=true
-One can also add super users in server.properties like the following (note 
that the delimiter is semicolon since SSL user names may contain comma). 
Default PrincipalType string "User" is case sensitive.
+With this setting enabled, if a resource does not have any ACLs defined, 
Kafka will allow access to everyone. If a resource has one or more ACLs 
defined, those ACL rules will be enforced as usual, regardless of the setting.

Review Comment:
   nit: The use of `<pre>` and `<code>` around the preformatted blocks is not 
entirely consistent.



##
docs/security.html:
##
@@ -1248,11 +1248,15 @@ <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-11+-+Authorization+Interface">KIP-11</a>
 and
 resource patterns in <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-290%3A+Support+for+Prefixed+ACLs">KIP-290</a>.
-In order to add, remove, or list ACLs, you can use the Kafka ACL CLI 
kafka-acls.sh. By default, if no ResourcePatterns match a specific 
Resource R,
-then R has no associated ACLs, and therefore no one other than super users 
is allowed to access R.
-If you want to change that behavior, you can include the following in 
server.properties.
+In order to add, remove, or list ACLs, you can use the Kafka ACL CLI 
kafka-acls.sh. 
+Default Behavior Without ACLs:
+If a resource (R) does not have any ACLs defined—that is, if no ACL 
matches the resource—Kafka will restrict access to that resource. In this 
situation, only super users are allowed to access it.

Review Comment:
   nit: `defined-that` and `resource-Kafka` look like hyphenated words. You 
perhaps want `—` instead of `-`, so `resource — Kafka`. Personally, 
I would rewrite like this:
   
   `If a resource (R) does not have any ACLs defined, meaning that no ACL 
matches the resource, Kafka will restrict`...
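
   For context, here is what ACL management with `kafka-acls.sh` looks like, as 
the passage describes (adapted from the standard Kafka docs; broker address, 
principal, and topic are illustrative):

   $ bin/kafka-acls.sh --bootstrap-server localhost:9092 \
       --add --allow-principal User:Alice \
       --operation Read --topic test-topic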



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18142 switch Kafka's Gradle build shadow plugin to `com.gradleup.shadow` (and upgrade plugin version) [kafka]

2025-03-05 Thread via GitHub


apoorvmittal10 commented on PR #18018:
URL: https://github.com/apache/kafka/pull/18018#issuecomment-2700506846

   @mumrah Can you please review it as well and help merge the changes.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-14484: Move UnifiedLog to storage module [kafka]

2025-03-05 Thread via GitHub


mimaison commented on code in PR #19030:
URL: https://github.com/apache/kafka/pull/19030#discussion_r1981156216



Re: [PR] KAFKA-18031: Flaky PlaintextConsumerTest testCloseLeavesGroupOnInterrupt [kafka]

2025-03-05 Thread via GitHub


TaiJuWu commented on PR #19105:
URL: https://github.com/apache/kafka/pull/19105#issuecomment-2700559090

   @lianetm Please take a look when you are available, thanks.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-14484: Move UnifiedLog to storage module [kafka]

2025-03-05 Thread via GitHub


mimaison commented on code in PR #19030:
URL: https://github.com/apache/kafka/pull/19030#discussion_r1981166972



[PR] MINOR: KAFKA-18876 4.0 docs follow-up [kafka]

2025-03-05 Thread via GitHub


mingdaoy opened a new pull request, #19107:
URL: https://github.com/apache/kafka/pull/19107

   https://github.com/apache/kafka/pull/19103#issuecomment-2700303106
   
   Fixed the snapshot file format:
   
![image](https://github.com/user-attachments/assets/7acf4c81-b684-4b1d-9479-1c03096b05a3)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: KAFKA-18876 4.0 docs follow-up (2) [kafka]

2025-03-05 Thread via GitHub


chia7712 commented on code in PR #19107:
URL: https://github.com/apache/kafka/pull/19107#discussion_r1981171909


##
docs/ops.html:
##
@@ -3965,7 +3965,7 @@ The kafka-metadata-shell.sh tool can be used to interactively inspect the 
state of the cluster metadata partition:
 
-  $ bin/kafka-metadata-shell.sh --snapshot 
metadata_log_dir/__cluster_metadata-0/.checkpoint
+  $ bin/kafka-metadata-shell.sh --snapshot 
metadata_log_dir/__cluster_metadata-0/7228-01.checkpoint
 >> ls /

Review Comment:
   Maybe we can remind users that the -00.checkpoint file has no 
cluster metadata, so users should not read it with the metadata shell



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (KAFKA-15371) MetadataShell is stuck when bootstrapping

2025-03-05 Thread Deng Ziming (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17932594#comment-17932594
 ] 

Deng Ziming commented on KAFKA-15371:
-

[~chia7712] As far as I can see, the .log file has the same format as the 
.checkpoint file, right? So this problem is still relevant; it's worth 
checking.

> MetadataShell is stuck when bootstrapping
> -
>
> Key: KAFKA-15371
> URL: https://issues.apache.org/jira/browse/KAFKA-15371
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.5.1
>Reporter: Deng Ziming
>Priority: Major
> Attachments: image-2023-08-17-10-35-01-039.png, 
> image-2023-08-17-10-35-36-067.png, image-2023-10-31-09-04-53-966.png, 
> image-2023-10-31-09-11-53-118.png, image-2023-10-31-09-12-19-051.png, 
> image-2023-10-31-09-15-34-821.png
>
>
> I downloaded the 3.5.1 package and started it up, then used the metadata 
> shell to inspect the data
>  
> {code:java}
> // shell
> bin/kafka-metadata-shell.sh --snapshot 
> /tmp/kraft-combined-logs/__cluster_metadata-0/.log  {code}
> Then the process will get stuck at loading.
>  
>  
> !image-2023-08-17-10-35-36-067.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] KAFKA-14121: AlterPartitionReassignments API should allow callers to specify the option of preserving the replication factor [kafka]

2025-03-05 Thread via GitHub


clolov merged PR #18983:
URL: https://github.com/apache/kafka/pull/18983


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-14484: Move UnifiedLog to storage module [kafka]

2025-03-05 Thread via GitHub


mimaison commented on code in PR #19030:
URL: https://github.com/apache/kafka/pull/19030#discussion_r1981152506



Re: [PR] MINOR: zk2kraft: add more information about kafka-configs.sh [kafka]

2025-03-05 Thread via GitHub


chia7712 commented on code in PR #19100:
URL: https://github.com/apache/kafka/pull/19100#discussion_r1981121079


##
docs/zk2kraft.html:
##
@@ -153,6 +153,67 @@ Configurations
 
 
 
+Dynamic Log Levels
+
+
+
+The dynamic log levels feature allows you to change the log4j 
settings of a running broker or controller process without restarting it. The 
command-line syntax for setting dynamic log levels on brokers has not changed 
in KRaft mode. Here is an example of setting the log level on a broker:
+
+./bin/kafka-configs.sh --bootstrap-server localhost:9092 \
+--entity-type broker-loggers \
+--entity-name 1 \
+--alter \
+--add-config org.apache.kafka.raft.KafkaNetworkChannel=TRACE
+
+
+
+
+
When setting dynamic log levels on the controllers, the 
--bootstrap-controller flag must be used. Here is an example of 
setting the log level on a controller:
+
+./bin/kafka-configs.sh --bootstrap-controller localhost:9093 \
+--entity-type broker-loggers \
+--entity-name 1 \
+--alter \
+--add-config org.apache.kafka.raft.KafkaNetworkChannel=TRACE
+
+Note that the entity-type must be specified as 
broker-loggers, even though we are changing a controller's log 
level rather than a broker's log level.
+
+
+
+
When changing the log level of a combined node, which has both 
broker and controller roles, either --bootstrap-server or 
--bootstrap-controller may be used. Combined nodes have only a single set of 
log levels; there are no separate log levels for the broker and 
controller parts of the process.
+
+
+
+Dynamic Controller Configurations
+
+
+
Some Kafka configurations can be changed dynamically, without 
restarting the process. The command-line syntax for setting dynamic 
configurations on brokers has not changed in KRaft mode. Here is an example of 
setting the number of I/O threads on a broker:
+
+./bin/kafka-configs.sh --bootstrap-server localhost:9092 \
+--entity-type brokers \
+--entity-name 1 \
+--alter \
+--add-config num.io.threads=5
+
+
+
+
+
+Controllers will apply all applicable cluster-level dynamic 
configurations. For example, the following command-line will change the 
max.connections setting on all of the brokers and all of the 
controllers in the cluster:
+
+./bin/kafka-configs.sh --bootstrap-server localhost:9092 \
+--entity-type brokers \
+--entity-default \
+--alter \
+--add-config max.connections=1
+
+It is not currently possible to apply a dynamic configuration 
on only a single controller.

Review Comment:
   Pardon me, is there any reason to disallow this behavior? It seems we can 
support this behavior by slightly modifying `KafkaAdminClient`[0] and 
`QuorumController`[1]
   
   [0] 
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/admin/KafkaAdminClient.java#L2896
   [1] 
https://github.com/apache/kafka/blob/trunk/metadata/src/main/java/org/apache/kafka/controller/QuorumController.java#L487



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-14121: AlterPartitionReassignments API should allow callers to specify the option of preserving the replication factor [kafka]

2025-03-05 Thread via GitHub


clolov commented on PR #18983:
URL: https://github.com/apache/kafka/pull/18983#issuecomment-2700629939

   That's a good question. I will merge as-is and reach out to people to hear 
their opinions.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: KAFKA-18876 4.0 docs follow-up (2) [kafka]

2025-03-05 Thread via GitHub


frankvicky commented on code in PR #19107:
URL: https://github.com/apache/kafka/pull/19107#discussion_r1981192398


##
docs/ops.html:
##
@@ -3965,7 +3965,7 @@ The kafka-metadata-shell.sh tool can be used to interactively inspect the 
state of the cluster metadata partition:
 
-  $ bin/kafka-metadata-shell.sh --snapshot 
metadata_log_dir/__cluster_metadata-0/.checkpoint
+  $ bin/kafka-metadata-shell.sh --snapshot 
metadata_log_dir/__cluster_metadata-0/7228-01.checkpoint
 >> ls /

Review Comment:
   How about a placeholder?
   Using `<>` to imply this is a placeholder.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18817: ShareGroupHeartbeat and ShareGroupDescribe API must check topic describe [kafka]

2025-03-05 Thread via GitHub


clolov merged PR #19083:
URL: https://github.com/apache/kafka/pull/19083


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-14121: AlterPartitionReassignments API should allow callers to specify the option of preserving the replication factor [kafka]

2025-03-05 Thread via GitHub


clolov commented on PR #18983:
URL: https://github.com/apache/kafka/pull/18983#issuecomment-2700635940

   I will also let you resolve the JIRA appropriately and move the KIP to the 
correct section + update the Kafka version in which it will be going out!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: KAFKA-18876 4.0 docs follow-up (2) [kafka]

2025-03-05 Thread via GitHub


Yunyung commented on code in PR #19107:
URL: https://github.com/apache/kafka/pull/19107#discussion_r1981209854


##
docs/ops.html:
##
@@ -3965,7 +3965,7 @@ The kafka-metadata-shell.sh tool can be used to interactively inspect the 
state of the cluster metadata partition:
 
-  $ bin/kafka-metadata-shell.sh --snapshot 
metadata_log_dir/__cluster_metadata-0/.checkpoint
+  $ bin/kafka-metadata-shell.sh --snapshot 
metadata_log_dir/__cluster_metadata-0/7228-01.checkpoint
 >> ls /

Review Comment:
   As chia said, .checkpoint files like 7228-01 have 
metadata, but -00.checkpoint does not.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-14484: Move UnifiedLog to storage module [kafka]

2025-03-05 Thread via GitHub


mimaison commented on code in PR #19030:
URL: https://github.com/apache/kafka/pull/19030#discussion_r1981217329



Re: [PR] KAFKA-14484: Move UnifiedLog to storage module [kafka]

2025-03-05 Thread via GitHub


mimaison commented on code in PR #19030:
URL: https://github.com/apache/kafka/pull/19030#discussion_r1981225722



Re: [PR] KAFKA-14484: Move UnifiedLog to storage module [kafka]

2025-03-05 Thread via GitHub


mimaison commented on code in PR #19030:
URL: https://github.com/apache/kafka/pull/19030#discussion_r1981230458



[jira] [Updated] (KAFKA-16936) Upgrade slf4j to 2.0.9 and integrate "-Dslf4j.provider" to kafka script

2025-03-05 Thread TengYao Chi (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TengYao Chi updated KAFKA-16936:

Summary: Upgrade slf4j to 2.0.9 and integrate "-Dslf4j.provider" to kafka 
script  (was: Upgrade slf4k to 2.0.9 and integrate "-Dslf4j.provider" to kafka 
script)

> Upgrade slf4j to 2.0.9 and integrate "-Dslf4j.provider" to kafka script
> ---
>
> Key: KAFKA-16936
> URL: https://issues.apache.org/jira/browse/KAFKA-16936
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Chia-Ping Tsai
>Assignee: TengYao Chi
>Priority: Major
>  Labels: need-kip
>
> origin discussion: 
> [https://github.com/apache/kafka/pull/16260#issuecomment-2159632052]
> The specific provider class can be defined by `slf4j.provider`[0]. Hence, we 
> can add the slf4j backends we care about to dependencies. With that, our 
> distributions will have different slf4j backends and it is safe as we will 
> define slf4j.provider in our script. Also, those slf4j backends will be 
> collected to "dependant-libs", and hence we can run a Kafka instance from 
> source code with a specific provider too.
> In short, the following tasks are included in this jira
> 1. upgrade slf4j from 1.7.36 to 2.0.9+
> 2. add a new system variable to script to define -Dslf4j.provider easily. By 
> default we use org.slf4j.reload4j.Reload4jServiceProvider
> 3. add other slf4j backend dependencies (optional)
> This change needs KIP since slf4j requires the version match between the 
> provider and slf4j-api.jar. Hence, users may encounter compatibility issue if 
> they have added other providers jar into kafka classpath.
>  [0] https://www.slf4j.org/manual.html
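For illustration, a minimal sketch of the same selection done programmatically (assumes slf4j-api 2.x and slf4j-reload4j on the classpath; the provider class is the default named above, and the property must be set before the first LoggerFactory call):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ProviderSelectionDemo {
    public static void main(String[] args) {
        // SLF4J 2.x reads this system property during initialization; the
        // script change in this jira passes the same property via
        // -Dslf4j.provider on the JVM command line instead.
        System.setProperty("slf4j.provider", "org.slf4j.reload4j.Reload4jServiceProvider");
        Logger log = LoggerFactory.getLogger(ProviderSelectionDemo.class);
        log.info("Logging through the explicitly selected provider");
    }
}
```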



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] KAFKA-18886: add behavior change of CreateTopicPolicy and AlterConfigPolicy to zk2kraft [kafka]

2025-03-05 Thread via GitHub


chia7712 merged PR #19087:
URL: https://github.com/apache/kafka/pull/19087


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (KAFKA-18804) Remove slf4j warning when using tool script.

2025-03-05 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-18804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-18804.

Fix Version/s: 4.1.0
   Resolution: Fixed

> Remove slf4j warning when using tool script.
> 
>
> Key: KAFKA-18804
> URL: https://issues.apache.org/jira/browse/KAFKA-18804
> Project: Kafka
>  Issue Type: Improvement
>Reporter: TengYao Chi
>Assignee: Jhen-Yung Hsu
>Priority: Minor
> Fix For: 4.1.0
>
>
> Currently, when running tools scripts, we encounter multiple SLF4J binding 
> warnings:
> ```
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/home/yungh/kafka/kafka_fork/kafka_test/core/build/dependant-libs-2.13.15/log4j-slf4j-impl-2.24.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/home/yungh/kafka/kafka_fork/kafka_test/tools/build/dependant-libs-2.13.15/log4j-slf4j-impl-2.24.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> ```
>  
> This issue occurs because we have included multiple SLF4J implementation 
> libraries in the project. We can resolve this by modifying the 
> `copyDependantLibs` task in gradle.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] MINOR: transformValues test improvement [kafka]

2025-03-05 Thread via GitHub


lucasbru opened a new pull request, #19106:
URL: https://github.com/apache/kafka/pull/19106

   Follow-up to 295760d3eb68178eac96ee4211fa75dfdbddfa95. Also check the set of 
state stores as suggested in reviews.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (KAFKA-12903) Replace producer state entry with auto-generated protocol

2025-03-05 Thread Deng Ziming (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deng Ziming resolved KAFKA-12903.
-
Resolution: Duplicate

Already fixed in KAFKA-17056

> Replace producer state entry with auto-generated protocol
> -
>
> Key: KAFKA-12903
> URL: https://issues.apache.org/jira/browse/KAFKA-12903
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Deng Ziming
>Assignee: Deng Ziming
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-18910) Remove kafka.utils.json

2025-03-05 Thread 黃竣陽 (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-18910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

黃竣陽 updated KAFKA-18910:

Summary: Remove kafka.utils.json   (was: Romve kafka.utils.json )

> Remove kafka.utils.json 
> 
>
> Key: KAFKA-18910
> URL: https://issues.apache.org/jira/browse/KAFKA-18910
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: 黃竣陽
>Priority: Major
>
> those json utils are used by tools module only



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] MINOR: transformValues test improvement [kafka]

2025-03-05 Thread via GitHub


cadonna commented on code in PR #19106:
URL: https://github.com/apache/kafka/pull/19106#discussion_r1981351538


##
streams/src/test/java/org/apache/kafka/streams/kstream/internals/KTableTransformValuesTest.java:
##
@@ -410,6 +410,8 @@ public void 
shouldCalculateCorrectOldValuesIfMaterializedEvenIfStateful() {
 
final KeyValueStore<String, Integer> keyValueStore = driver.getKeyValueStore(QUERYABLE_NAME);
 assertThat(keyValueStore.get("A"), is(3));
+assertThat(driver.getAllStateStores().keySet(),
+equalTo(Set.of("queryable-store", 
"KTABLE-AGGREGATE-STATE-STORE-05")));

Review Comment:
   Why not using `QUERYABLE_NAME` instead of `"queryable-store"`?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18811: Included admin.client.config in VerifiableShareConsumer.java [kafka]

2025-03-05 Thread via GitHub


chirag-wadhwa5 commented on code in PR #19108:
URL: https://github.com/apache/kafka/pull/19108#discussion_r1981366175


##
tools/src/main/java/org/apache/kafka/tools/VerifiableShareConsumer.java:
##
@@ -596,18 +599,29 @@ public static VerifiableShareConsumer 
createFromArgs(ArgumentParser parser, Stri
 StringDeserializer deserializer = new StringDeserializer();
KafkaShareConsumer<String, String> consumer = new KafkaShareConsumer<>(consumerProps, deserializer, deserializer);
 
+Properties adminClientProps = new Properties();
+if (adminClientConfigFile != null) {
+try {
+
adminClientProps.putAll(Utils.loadProps(adminClientConfigFile));
+} catch (IOException e) {
+throw new ArgumentParserException(e.getMessage(), parser);
+}
+}
+
+adminClientProps.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, 
brokerHostandPort);

Review Comment:
   Thanks for the review. Actually, --bootstrap-server itself is a separate flag that is used to populate the bootstrap server for both the consumer and the admin client. Also, I believe the configs present in AdminClientConfig are all specific to an admin client, so the new admin.client.config will include admin-client-specific configs only. Regarding what needs to be passed, cloud clusters require some extra configs for authenticating any kind of client (be it consumer, producer or admin), which includes the API key and secret.
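   A minimal sketch of the pattern under discussion (variable names mirror the PR; the idea that the properties file carries security.protocol/sasl.* settings for cloud clusters is an assumption, not taken from the PR):

```java
import java.util.Properties;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.common.utils.Utils;

public class AdminFromConfigFile {
    public static void main(String[] args) throws Exception {
        String adminClientConfigFile = args.length > 0 ? args[0] : null;
        String brokerHostAndPort = "localhost:9092";

        Properties adminClientProps = new Properties();
        if (adminClientConfigFile != null) {
            // e.g. security.protocol, sasl.mechanism, sasl.jaas.config (API key/secret)
            adminClientProps.putAll(Utils.loadProps(adminClientConfigFile));
        }
        // The shared --bootstrap-server value overrides whatever the file contains.
        adminClientProps.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, brokerHostAndPort);

        try (Admin admin = Admin.create(adminClientProps)) {
            System.out.println("Cluster id: " + admin.describeCluster().clusterId().get());
        }
    }
}
```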



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18811: Included admin.client.config in VerifiableShareConsumer.java [kafka]

2025-03-05 Thread via GitHub


chirag-wadhwa5 commented on code in PR #19108:
URL: https://github.com/apache/kafka/pull/19108#discussion_r1981368859


##
tools/src/main/java/org/apache/kafka/tools/VerifiableShareConsumer.java:
##
@@ -562,6 +557,13 @@ private static ArgumentParser argParser() {
 .metavar("CONFIG_FILE")
 .help("Consumer config properties file (config options shared with 
command line parameters will be overridden).");
 
+parser.addArgument("--admin.client.config")

Review Comment:
   Thanks for the review. Actually there's no reason for that. This could be 
changed. Will push that change right away.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18736: Decide when a heartbeat should be sent [kafka]

2025-03-05 Thread via GitHub


cadonna commented on PR #19121:
URL: https://github.com/apache/kafka/pull/19121#issuecomment-2701535257

   Call for review: @aliehsaeedii 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR Add a required status check for trunk [kafka]

2025-03-05 Thread via GitHub


mumrah commented on code in PR #19122:
URL: https://github.com/apache/kafka/pull/19122#discussion_r1981866135


##
.asf.yaml:
##
@@ -45,3 +45,19 @@ github:
 squash_commit_message: PR_TITLE_AND_DESC
 merge: false
 rebase: false
+
+  protected_branches:
+trunk:
+  required_status_checks:
+strict: false
+contexts:
+  - build / CI checks completed
+  required_pull_request_reviews:
+required_approving_review_count: 1
+
+# Disable force push on release branches

Review Comment:
   Correct. Otherwise, we would need to backport a lot of build changes. 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR Add a required status check for trunk [kafka]

2025-03-05 Thread via GitHub


jolshan commented on code in PR #19122:
URL: https://github.com/apache/kafka/pull/19122#discussion_r1981864951


##
.asf.yaml:
##
@@ -45,3 +45,19 @@ github:
 squash_commit_message: PR_TITLE_AND_DESC
 merge: false
 rebase: false
+
+  protected_branches:
+trunk:
+  required_status_checks:
+strict: false
+contexts:
+  - build / CI checks completed
+  required_pull_request_reviews:
+required_approving_review_count: 1
+
+# Disable force push on release branches

Review Comment:
   So we will not require this check on non-trunk release branches?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR Add a required status check for trunk [kafka]

2025-03-05 Thread via GitHub


mumrah commented on PR #19122:
URL: https://github.com/apache/kafka/pull/19122#issuecomment-2701616024

   Here's how Pulsar has their repo configured: 
https://github.com/apache/pulsar/blob/master/.asf.yaml


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18766:Docs: Make usage of allow.everyone.if.no.acl.found config clearer [kafka]

2025-03-05 Thread via GitHub


AndrewJSchofield commented on code in PR #19077:
URL: https://github.com/apache/kafka/pull/19077#discussion_r1981867953


##
docs/security.html:
##
@@ -1248,11 +1248,16 @@ https://cwiki.apache.org/confluence/display/KAFKA/KIP-11+-+Authorization+Interface";>KIP-11
 and
 resource patterns in https://cwiki.apache.org/confluence/display/KAFKA/KIP-290%3A+Support+for+Prefixed+ACLs";>KIP-290.
-In order to add, remove, or list ACLs, you can use the Kafka ACL CLI 
kafka-acls.sh. By default, if no ResourcePatterns match a specific 
Resource R,
-then R has no associated ACLs, and therefore no one other than super users 
is allowed to access R.
-If you want to change that behavior, you can include the following in 
server.properties.
+In order to add, remove, or list ACLs, you can use the Kafka ACL CLI 
kafka-acls.sh. 
+Behavior Without ACLs: 
+If a resource (R) does not have any ACLs defined, meaning that no ACL 
matches the resource, Kafka will restrict access to that resource. In this 
situation, only super users are allowed to access it.

Review Comment:
   nit: Indentation is slightly off.



##
docs/security.html:
##
@@ -2366,4 +2371,4 @@ 
 security.inter.broker.protocol=SSL
 
 
-
+

Review Comment:
   This doesn't seem like a necessary change.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: zk2kraft: add more information about kafka-configs.sh [kafka]

2025-03-05 Thread via GitHub


cmccabe commented on code in PR #19100:
URL: https://github.com/apache/kafka/pull/19100#discussion_r1981929991


##
docs/zk2kraft.html:
##
@@ -153,6 +153,67 @@ Configurations
 
 
 
+Dynamic Log Levels
+
+
+
+The dynamic log levels feature allows you to change the log4j 
settings of a running broker or controller process without restarting it. The 
command-line syntax for setting dynamic log levels on brokers has not changed 
in KRaft mode. Here is an example of setting the log level on a broker:
+
+./bin/kafka-configs.sh --bootstrap-server localhost:9092 \
+--entity-type broker-loggers \
+--entity-name 1 \
+--alter \
+--add-config org.apache.kafka.raft.KafkaNetworkChannel=TRACE
+
+
+
+
+
+When setting dynamic log levels on the controllers, the 
--bootstrap-controller flag must be used. Here is an example of 
setting the log level on a controller:
+
+./bin/kafka-configs.sh --bootstrap-controller localhost:9093 \
+--entity-type broker-loggers \
+--entity-name 1 \
+--alter \
+--add-config org.apache.kafka.raft.KafkaNetworkChannel=TRACE
+
+Note that the entity-type must be specified as 
broker-loggers, even though we are changing a controller's log 
level rather than a broker's log level.
+
+
+
+
+When changing the log level of a combined node, which has both 
broker and controller roles, either --bootstrap-server or 
--bootstrap-controller may be used. Combined nodes have only a single set of 
log levels; there are not different log levels for the broker and 
controller parts of the process.
+
+
+
+Dynamic Controller Configurations
+
+
+
+Some Kafka configurations can be changed dynamically, without 
restarting the process. The command-line syntax for setting dynamic configurations 
on brokers has not changed in KRaft mode. Here is an example of setting the 
number of IO threads on a broker:
+

Review Comment:
   Initially KRaft mode did this differently. But now the broker receiving the 
request does the validation, just like in ZK mode.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18646: Null records in fetch response breaks librdkafka [kafka]

2025-03-05 Thread via GitHub


junrao commented on code in PR #18726:
URL: https://github.com/apache/kafka/pull/18726#discussion_r1981925892


##
clients/src/main/resources/common/message/FetchResponse.json:
##
@@ -106,7 +106,7 @@
 ]},
 { "name": "PreferredReadReplica", "type": "int32", "versions": "11+", 
"default": "-1", "ignorable": false, "entityType": "brokerId",
   "about": "The preferred read replica for the consumer to use on its 
next fetch request."},
-{ "name": "Records", "type": "records", "versions": "0+", 
"nullableVersions": "0+", "about": "The record data."}

Review Comment:
   cc @dajac since this is a potential 4.0.0 blocker.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: zk2kraft: add more information about kafka-configs.sh [kafka]

2025-03-05 Thread via GitHub


cmccabe commented on code in PR #19100:
URL: https://github.com/apache/kafka/pull/19100#discussion_r1981933910


##
docs/zk2kraft.html:
##
@@ -153,6 +153,67 @@ Configurations
 
 
 
+Dynamic Log Levels
+
+
+
+The dynamic log levels feature allows you to change the log4j 
settings of a running broker or controller process without restarting it. The 
command-line syntax for setting dynamic log levels on brokers has not changed 
in KRaft mode. Here is an example of setting the log level on a broker:
+
+./bin/kafka-configs.sh --bootstrap-server localhost:9092 \
+--entity-type broker-loggers \
+--entity-name 1 \
+--alter \
+--add-config org.apache.kafka.raft.KafkaNetworkChannel=TRACE
+
+
+
+
+
+When setting dynamic log levels on the controllers, the 
--bootstrap-controller flag must be used. Here is an example of 
setting the log level on a controller:
+
+./bin/kafka-configs.sh --bootstrap-controller localhost:9093 \
+--entity-type broker-loggers \
+--entity-name 1 \
+--alter \
+--add-config org.apache.kafka.raft.KafkaNetworkChannel=TRACE
+
+Note that the entity-type must be specified as 
broker-loggers, even though we are changing a controller's log 
level rather than a broker's log level.
+
+
+
+
+When changing the log level of a combined node, which has both 
broker and controller roles, either --bootstrap-server or 
--bootstrap-controller may be used. Combined nodes have only a single set of 
log levels; there are not different log levels for the broker and 
controller parts of the process.
+
+
+
+Dynamic Controller Configurations
+
+
+
+Some Kafka configurations can be changed dynamically, without 
restarting the process. The command-line syntax for setting dynamic configurations 
on brokers has not changed in KRaft mode. Here is an example of setting the 
number of IO threads on a broker:
+
+./bin/kafka-configs.sh --bootstrap-server localhost:9092 \
+--entity-type brokers \
+--entity-name 1 \
+--alter \
+--add-config num.io.threads=5
+
+
+
+
+
+Controllers will apply all applicable cluster-level dynamic 
configurations. For example, the following command-line will change the 
max.connections setting on all of the brokers and all of the 
controllers in the cluster:
+
+./bin/kafka-configs.sh --bootstrap-server localhost:9092 \
+--entity-type brokers \
+--entity-default \
+--alter \
+--add-config max.connections=1
+
+It is not currently possible to apply a dynamic configuration 
on only a single controller.

Review Comment:
   It depends on if we wanted to do validation on the controller node whose 
configuration we were changing. If we did, then we would have to have the 
client initially contact that node in particular, and then pass the message to 
the active controller.
   
   If we don't care about pre-application validation then we could just send to 
the active controller like now.
   
   I don't think anyone has worked on this. It's not a feature in high demand 
since restarting controllers is so quick, that it's easier just to change the 
config file. But if you're interested you could look at implementing this.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18602: Incorrect FinalizedVersionLevel reported for dynamic KRaft quorum (wip) [kafka]

2025-03-05 Thread via GitHub


jsancio commented on PR #18685:
URL: https://github.com/apache/kafka/pull/18685#issuecomment-2701811371

   @FrankYang0529 is this still a WIP? I would like to get this fixed and 
included in the 4.0.0 release. cc @dajac @junrao 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (KAFKA-18920) kafka-feature doesn't report the correct kraft.version

2025-03-05 Thread Mahsa Seifikar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-18920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17932713#comment-17932713
 ] 

Mahsa Seifikar commented on KAFKA-18920:


There is a similar Jira ticket 
https://issues.apache.org/jira/browse/KAFKA-18602 for this issue and there is 
an in-progress PR [https://github.com/apache/kafka/pull/18685] 

> kafka-feature doesn't report the correct kraft.version
> --
>
> Key: KAFKA-18920
> URL: https://issues.apache.org/jira/browse/KAFKA-18920
> Project: Kafka
>  Issue Type: Bug
>  Components: kraft, tools
>Affects Versions: 4.0.0, 3.9.0
>Reporter: José Armando García Sancio
>Assignee: José Armando García Sancio
>Priority: Major
> Fix For: 3.9.1, 4.0.1
>
>
> After formatting a controller in standalone mode and starting the controller, the 
> finalized kraft.version is reported as 0.
> {code:java}
> bin/kafka-features.sh --bootstrap-controller localhost:9093 describe
> Feature: kraft.version  SupportedMinVersion: 0  SupportedMaxVersion: 1  
> FinalizedVersionLevel: 0Epoch: 428
> Feature: metadata.version   SupportedMinVersion: 3.0-IV1
> SupportedMaxVersion: 3.9-IV0FinalizedVersionLevel: 3.9-IV0  Epoch: 428 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] KAFKA-18919 Clarify that KafkaPrincipalBuilder classes must also implement KafkaPrincipalSerde [kafka]

2025-03-05 Thread via GitHub


chia7712 merged PR #19104:
URL: https://github.com/apache/kafka/pull/19104


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] KAFKA-18613: Add StreamsGroupHeartbeat handler in the group coordinator [kafka]

2025-03-05 Thread via GitHub


lucasbru opened a new pull request, #19114:
URL: https://github.com/apache/kafka/pull/19114

   Basic streams group heartbeat handling. The main part is the unit tests 
that make sure that we behave, for the most part, like a consumer group.
   
   - No support for static membership
   - No support for configurations (using constants instead)
   - No support for regular expressions
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: KAFKA-18876 4.0 docs follow-up (2) [kafka]

2025-03-05 Thread via GitHub


mingdaoy commented on code in PR #19107:
URL: https://github.com/apache/kafka/pull/19107#discussion_r1981577358


##
docs/ops.html:
##
@@ -3965,7 +3965,7 @@ The kafka-metadata-shell.sh tool can be used to interactively inspect the 
state of the cluster metadata partition:
 
-  $ bin/kafka-metadata-shell.sh --snapshot 
metadata_log_dir/__cluster_metadata-0/.checkpoint
+  $ bin/kafka-metadata-shell.sh --snapshot 
metadata_log_dir/__cluster_metadata-0/7228-01.checkpoint
 >> ls /

Review Comment:
   updated:
   
![image](https://github.com/user-attachments/assets/16a78f75-e8d1-46ad-b82d-740d76122dd0)
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18422: add Kafka client upgrade path [kafka]

2025-03-05 Thread via GitHub


chia7712 commented on code in PR #19097:
URL: https://github.com/apache/kafka/pull/19097#discussion_r1981591563


##
docs/upgrade.html:
##
@@ -46,6 +46,9 @@ Upgrading to 
4.0.0 from any vers
 Every https://github.com/apache/kafka/blob/trunk/server-common/src/main/java/org/apache/kafka/server/common/MetadataVersion.java";>MetadataVersion
 has a boolean parameter that indicates if there are metadata changes 
(i.e. IBP_4_0_IV1(23, "4.0", "IV1", true) means this version has 
metadata changes).
 Given your current and target versions, a downgrade is only possible 
if there are no metadata changes in the versions between.
+For the Kafka client upgrade path, note that many deprecated APIs were 
removed in Kafka 4.0. Additionally, upgrading directly to 4.x from certain 
versions is not feasible.

Review Comment:
   > perhaps make the current title more generic and creating a section 
specifically for brokers under it.
   
   "1.5 Upgrading From Previous Versions" is generic enough I think. we can 
just move it out of "Upgrading to 4.0.0  ..." and create a individual title " 
Kafka Client upgrade path for 4.x". for example:
   
   
   ## 1.5 Upgrading From Previous Versions
   
   ### Kafka Client upgrade path for 4.x
   
   For the Kafka client upgrade path, note that many deprecated APIs were 
removed in Kafka 4.0. Additionally, upgrading directly to 4.x from certain 
versions is not feasible. For more information, please refer to 
[KIP-1124](https://cwiki.apache.org/confluence/x/y4kgF).
   
   ### Upgrading to 4.1.0 from any version 3.3.x through 4.0.x
   ...
   
   @ijuma WDYT?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18766:Docs: Make usage of allow.everyone.if.no.acl.found config clearer [kafka]

2025-03-05 Thread via GitHub


Iamoshione commented on PR #19077:
URL: https://github.com/apache/kafka/pull/19077#issuecomment-2701402395

   ![Screenshot from 2025-03-05 
11-07-40](https://github.com/user-attachments/assets/9bfb4c59-2226-48a0-a05a-115e1b2b9dde)
   Hey @AndrewJSchofield, thank you for acknowledging my PR. I have reviewed 
your requested changes and made the necessary updates accordingly. 
Additionally, I added an underline tag to maintain consistency throughout the 
document. Let me know if you need any further changes. Thanks!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (KAFKA-18922) RequestContext#principalSerde should not be optional as it is now always present

2025-03-05 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-18922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated KAFKA-18922:
---
Fix Version/s: 4.1.0

> RequestContext#principalSerde should not be optional as it is now always 
> present
> 
>
> Key: KAFKA-18922
> URL: https://issues.apache.org/jira/browse/KAFKA-18922
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Szu-Yung Wang
>Priority: Major
> Fix For: 4.1.0
>
>
> as title



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] MINOR: KAFKA-18422 Adjust Kafka client upgrade path section [kafka]

2025-03-05 Thread via GitHub


ijuma commented on PR #19119:
URL: https://github.com/apache/kafka/pull/19119#issuecomment-2701424539

   I have some thoughts on this, will share them later today.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (KAFKA-18922) RequestContext#principalSerde should not be optional as it is now always present

2025-03-05 Thread Chia-Ping Tsai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-18922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17932684#comment-17932684
 ] 

Chia-Ping Tsai commented on KAFKA-18922:


According to discussion in KAFKA-18919, this jira should include another task: 
"`KafkaPrincipalBuilder` extend `KafkaPrincipalSerde`"

> RequestContext#principalSerde should not be optional as it is now always 
> present
> 
>
> Key: KAFKA-18922
> URL: https://issues.apache.org/jira/browse/KAFKA-18922
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Szu-Yung Wang
>Priority: Major
> Fix For: 4.1.0
>
>
> as title



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] MINOR: Improve the example of kafka-metadata-shell.sh [kafka]

2025-03-05 Thread via GitHub


m1a2st commented on code in PR #19107:
URL: https://github.com/apache/kafka/pull/19107#discussion_r1981739755


##
docs/ops.html:
##
@@ -3983,7 +3983,7 @@ 
-
+  Note: -00.checkpoint does not contain cluster 
metadata. Use a valid snapshot file when examining metadata with the 
kafka-metadata-shell.sh tool.

Review Comment:
   Please add a code block for `kafka-metadata-shell.sh`.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (KAFKA-18919) Clarify that KafkaPrincipalBuilder classes must also implement KafkaPrincipalSerde

2025-03-05 Thread Ismael Juma (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-18919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17932685#comment-17932685
 ] 

Ismael Juma edited comment on KAFKA-18919 at 3/5/25 4:19 PM:
-

Perhaps. I think it would be a good idea to have a KIP where we can make sure 
we are not missing any important details. If we agree that the compatibility 
risk is indeed acceptable for a minor/feature release, we can perhaps do it in 
4.1.


was (Author: ijuma):
Perhaps. I think it would be a good idea to have a KIP where we can make sure 
we are not missing any important details. If we agree that the compatibility 
risk is indeed acceptable for a feature release, we can perhaps do it in 4.1.

> Clarify that KafkaPrincipalBuilder classes must also implement 
> KafkaPrincipalSerde
> --
>
> Key: KAFKA-18919
> URL: https://issues.apache.org/jira/browse/KAFKA-18919
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Szu-Yung Wang
>Priority: Major
> Fix For: 4.0.0
>
>
> In KRaft, custom KafkaPrincipalBuilder implementations must also implement 
> KafkaPrincipalSerde, otherwise brokers are not able to forward requests to 
> the controller.
> So we need to update our docs.
> -We should also be able to do some cleanup in the code. For example 
> RequestContext uses Optional<KafkaPrincipalSerde>, we should be able to 
> remove Optional since it's now always present.- moved to KAFKA-18922



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-18919) Clarify that KafkaPrincipalBuilder classes must also implement KafkaPrincipalSerde

2025-03-05 Thread Ismael Juma (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-18919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17932685#comment-17932685
 ] 

Ismael Juma commented on KAFKA-18919:
-

Perhaps. I think it would be a good idea to have a KIP where we can make sure 
we are not missing any important details. If we agree that the compatibility 
risk is indeed acceptable for a feature release, we can perhaps do it in 4.1.

> Clarify that KafkaPrincipalBuilder classes must also implement 
> KafkaPrincipalSerde
> --
>
> Key: KAFKA-18919
> URL: https://issues.apache.org/jira/browse/KAFKA-18919
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Szu-Yung Wang
>Priority: Major
> Fix For: 4.0.0
>
>
> In KRaft, custom KafkaPrincipalBuilder implementations must also implement 
> KafkaPrincipalSerde, otherwise brokers are not able to forward requests to 
> the controller.
> So we need to update our docs.
> -We should also be able to do some cleanup in the code. For example 
> RequestContext uses Optional<KafkaPrincipalSerde>, we should be able to 
> remove Optional since it's now always present.- moved to KAFKA-18922



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] KAFKA-18900: Experimental share consumer acknowledge mode config [kafka]

2025-03-05 Thread via GitHub


mumrah commented on PR #19113:
URL: https://github.com/apache/kafka/pull/19113#issuecomment-2701366305

   @AndrewJSchofield there was a broken commit on trunk, that's why things are 
failing


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (KAFKA-18919) Clarify that KafkaPrincipalBuilder classes must also implement KafkaPrincipalSerde

2025-03-05 Thread Chia-Ping Tsai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-18919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17932683#comment-17932683
 ] 

Chia-Ping Tsai commented on KAFKA-18919:


[~mimaison][~ijuma] Thank you for your response. Could we address this (extend 
`KafkaPrincipalSerde`) in KAFKA-18922 (4.1)? 

> Clarify that KafkaPrincipalBuilder classes must also implement 
> KafkaPrincipalSerde
> --
>
> Key: KAFKA-18919
> URL: https://issues.apache.org/jira/browse/KAFKA-18919
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Szu-Yung Wang
>Priority: Major
> Fix For: 4.0.0
>
>
> In KRaft, custom KafkaPrincipalBuilder implementations must also implement 
> KafkaPrincipalSerde, otherwise brokers are not able to forward requests to 
> the controller.
> So we need to update our docs.
> -We should also be able to do some cleanup in the code. For example 
> RequestContext uses Optional<KafkaPrincipalSerde>, we should be able to 
> remove Optional since it's now always present.- moved to KAFKA-18922



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-18926) `KafkaPrincipalBuilder` should extend `KafkaPrincipalSerde`

2025-03-05 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-18926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated KAFKA-18926:
---
Labels: need-kip  (was: )

> `KafkaPrincipalBuilder` should extend `KafkaPrincipalSerde`
> ---
>
> Key: KAFKA-18926
> URL: https://issues.apache.org/jira/browse/KAFKA-18926
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
>  Labels: need-kip
>
> from the discussion of KAFKA-18919
> In KRaft, custom KafkaPrincipalBuilder instances must implement 
> KafkaPrincipalSerde to support the forward mechanism. Therefore, 
> KafkaPrincipalBuilder should extend KafkaPrincipalSerde.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-18926) `KafkaPrincipalBuilder` should extend `KafkaPrincipalSerde`

2025-03-05 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-18926:
--

 Summary: `KafkaPrincipalBuilder` should extend 
`KafkaPrincipalSerde`
 Key: KAFKA-18926
 URL: https://issues.apache.org/jira/browse/KAFKA-18926
 Project: Kafka
  Issue Type: Improvement
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


from the discussion of KAFKA-18919

In KRaft, custom KafkaPrincipalBuilder instances must implement 
KafkaPrincipalSerde to support the forward mechanism. Therefore, 
KafkaPrincipalBuilder should extend KafkaPrincipalSerde.
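
A sketch of the proposed shape, assuming the existing method signatures in org.apache.kafka.common.security.auth stay as they are (the KIP may end up differing):

```java
import org.apache.kafka.common.security.auth.AuthenticationContext;
import org.apache.kafka.common.security.auth.KafkaPrincipal;
import org.apache.kafka.common.security.auth.KafkaPrincipalSerde;

// Today the two interfaces are separate; merging them guarantees that every
// custom builder can also (de)serialize principals for KRaft request forwarding.
public interface KafkaPrincipalBuilder extends KafkaPrincipalSerde {
    KafkaPrincipal build(AuthenticationContext context);
    // Inherited from KafkaPrincipalSerde:
    //   byte[] serialize(KafkaPrincipal principal) throws SerializationException;
    //   KafkaPrincipal deserialize(byte[] bytes) throws SerializationException;
}
```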



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-18919) Clarify that KafkaPrincipalBuilder classes must also implement KafkaPrincipalSerde

2025-03-05 Thread Chia-Ping Tsai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-18919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17932686#comment-17932686
 ] 

Chia-Ping Tsai commented on KAFKA-18919:


{quote}
Perhaps. I think it would be a good idea to have a KIP where we can make sure 
we are not missing any important details. If we agree that the compatibility 
risk is indeed acceptable for a minor/feature release, we can perhaps do it in 
4.1.
{quote}

open KAFKA-18926 to address it.

> Clarify that KafkaPrincipalBuilder classes must also implement 
> KafkaPrincipalSerde
> --
>
> Key: KAFKA-18919
> URL: https://issues.apache.org/jira/browse/KAFKA-18919
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Szu-Yung Wang
>Priority: Major
> Fix For: 4.0.0
>
>
> In KRaft, custom KafkaPrincipalBuilder implementations must also implement 
> KafkaPrincipalSerde, otherwise brokers are not able to forward requests to 
> the controller.
> So we need to update our docs.
> -We should also be able to do some cleanup in the code. For example 
> RequestContext uses Optional<KafkaPrincipalSerde>, we should be able to 
> remove Optional since it's now always present.- moved to KAFKA-18922



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-18922) RequestContext#principalSerde should not be optional as it is now always present

2025-03-05 Thread Chia-Ping Tsai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-18922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17932687#comment-17932687
 ] 

Chia-Ping Tsai commented on KAFKA-18922:


Please ignore above comment as we have KAFKA-18926 to address it (and KIP)

> RequestContext#principalSerde should not be optional as it is now always 
> present
> 
>
> Key: KAFKA-18922
> URL: https://issues.apache.org/jira/browse/KAFKA-18922
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Szu-Yung Wang
>Priority: Major
> Fix For: 4.1.0
>
>
> as title



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] MINOR: Delete DeleteGroupsResult class. [kafka]

2025-03-05 Thread via GitHub


junrao commented on code in PR #19057:
URL: https://github.com/apache/kafka/pull/19057#discussion_r1981900730


##
clients/src/main/java/org/apache/kafka/clients/admin/DeleteShareGroupsResult.java:
##
@@ -20,13 +20,33 @@
 import org.apache.kafka.common.KafkaFuture;
 
 import java.util.Collection;
+import java.util.HashMap;
 import java.util.Map;
 
 /**
 * The result of the {@link Admin#deleteShareGroups(Collection<String>, DeleteShareGroupsOptions)} call.
  */
-public class DeleteShareGroupsResult extends DeleteGroupsResult {
+public class DeleteShareGroupsResult {

Review Comment:
   Do we need to preserve the Evolving tag?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: Clean up metadata module [kafka]

2025-03-05 Thread via GitHub


mumrah commented on PR #19069:
URL: https://github.com/apache/kafka/pull/19069#issuecomment-2701677430

   @sjhajharia just a quick note -- if you force push your branch after someone 
has reviewed your PR, it basically destroys the history of the review and 
inline conversations. It's best to merge trunk into your branch after reviews 
have started.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: Delete DeleteGroupsResult class. [kafka]

2025-03-05 Thread via GitHub


AndrewJSchofield commented on code in PR #19057:
URL: https://github.com/apache/kafka/pull/19057#discussion_r1981904870


##
clients/src/main/java/org/apache/kafka/clients/admin/DeleteShareGroupsResult.java:
##
@@ -20,13 +20,33 @@
 import org.apache.kafka.common.KafkaFuture;
 
 import java.util.Collection;
+import java.util.HashMap;
 import java.util.Map;
 
 /**
 * The result of the {@link Admin#deleteShareGroups(Collection<String>, DeleteShareGroupsOptions)} call.
  */
-public class DeleteShareGroupsResult extends DeleteGroupsResult {
+public class DeleteShareGroupsResult {

Review Comment:
   That would be consistent with the rest of the KIP-932 interfaces, although 
not that critical in this case.
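
   For reference, keeping the tag would look roughly like this (the annotation is the real org.apache.kafka.common.annotation.InterfaceStability marker; the class body is elided):

```java
import org.apache.kafka.common.annotation.InterfaceStability;

@InterfaceStability.Evolving
public class DeleteShareGroupsResult {
    // ... body unchanged from the PR ...
}
```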



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18461: Fix potential NPE in setDelta after map is erased [kafka]

2025-03-05 Thread via GitHub


mumrah commented on code in PR #18684:
URL: https://github.com/apache/kafka/pull/18684#discussion_r1981899587


##
server-common/src/main/java/org/apache/kafka/timeline/Snapshot.java:
##
@@ -47,6 +48,7 @@  T getDelta(Revertable owner) {
 }
 
 void setDelta(Revertable owner, Delta delta) {
+Objects.requireNonNull(map, "Snapshot cannot be accessed after erase 
is called.");

Review Comment:
   We should add this same check to other methods where `map` is accessed. 
   
   For example:
   ```diff
 void handleRevert() {
+    Objects.requireNonNull(map, "Snapshot cannot be accessed after erase is called.");
     for (Map.Entry<Revertable, Delta> entry : map.entrySet()) {
         entry.getKey().executeRevert(epoch, entry.getValue());
     }
   ```
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18461: Fix potential NPE in setDelta after map is erased [kafka]

2025-03-05 Thread via GitHub


mumrah commented on PR #18684:
URL: https://github.com/apache/kafka/pull/18684#issuecomment-270121

   Can you please merge trunk into this PR and let the CI run again?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR Add a required status check for trunk [kafka]

2025-03-05 Thread via GitHub


mumrah commented on code in PR #19122:
URL: https://github.com/apache/kafka/pull/19122#discussion_r1981908712


##
.asf.yaml:
##
@@ -45,3 +45,19 @@ github:
 squash_commit_message: PR_TITLE_AND_DESC
 merge: false
 rebase: false
+
+  protected_branches:
+trunk:
+  required_status_checks:
+strict: false
+contexts:
+  - build / CI checks completed

Review Comment:
   BTW this name comes from the Checks API.
   
   ```
   gh api \
 -H "Accept: application/vnd.github+json" \
 -H "X-GitHub-Api-Version: 2022-11-28" \
 
/repos/apache/kafka/commits/d64df5162bf08b96c1483b143d6ca77a48c35aaf/check-runs
   ```
   
   returns a list of check runs. We want the `name` of the run to use in 
.asf.yaml.
   
   ```
   ...
 {
 "id": 38237327961,
 "name": "build / Compile and Check (Merge Ref)", <-- This
 "node_id": "CR_kwDOACG9q88I5x9SWQ",
 "head_sha": "d64df5162bf08b96c1483b143d6ca77a48c35aaf",
 "external_id": "5f14810a-731d-5037-cdc7-5b433ffc2634",
 "url": 
"https://api.github.com/repos/apache/kafka/check-runs/38237327961";,
 "html_url": 
"https://github.com/apache/kafka/actions/runs/13676254784/job/38237327961";,
 "details_url": 
"https://github.com/apache/kafka/actions/runs/13676254784/job/38237327961";,
 "status": "completed",
 "conclusion": "success",
 "started_at": "2025-03-05T12:42:54Z",
 "completed_at": "2025-03-05T12:48:32Z",
 "output": {
   "title": null,
   "summary": null,
   "text": null,
   "annotations_count": 0,
   "annotations_url": 
"https://api.github.com/repos/apache/kafka/check-runs/38237327961/annotations";
 },
   ...
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: Delete DeleteGroupsResult class. [kafka]

2025-03-05 Thread via GitHub


smjn commented on code in PR #19057:
URL: https://github.com/apache/kafka/pull/19057#discussion_r1981910462


##
clients/src/main/java/org/apache/kafka/clients/admin/DeleteShareGroupsResult.java:
##
@@ -20,13 +20,33 @@
 import org.apache.kafka.common.KafkaFuture;
 
 import java.util.Collection;
+import java.util.HashMap;
 import java.util.Map;
 
 /**
  * The result of the {@link Admin#deleteShareGroups(Collection , 
DeleteShareGroupsOptions)} call.
  */
-public class DeleteShareGroupsResult extends DeleteGroupsResult {
+public class DeleteShareGroupsResult {

Review Comment:
   @chia7712 had suggested removing it, saying the interface is mature.
   
   https://github.com/apache/kafka/pull/19057#issuecomment-2698014961



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] MINOR Add a required status check for trunk [kafka]

2025-03-05 Thread via GitHub


mumrah opened a new pull request, #19122:
URL: https://github.com/apache/kafka/pull/19122

   Add a single job that runs after the whole CI pipeline and make it a 
required check before merging a PR.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18766:Docs: Make usage of allow.everyone.if.no.acl.found config clearer [kafka]

2025-03-05 Thread via GitHub


Iamoshione closed pull request #19077: KAFKA-18766:Docs: Make usage of 
allow.everyone.if.no.acl.found config clearer
URL: https://github.com/apache/kafka/pull/19077


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-17014: ScramFormatter should not use String for password [kafka]

2025-03-05 Thread via GitHub


szetszwo commented on code in PR #19082:
URL: https://github.com/apache/kafka/pull/19082#discussion_r1981829797


##
clients/src/main/java/org/apache/kafka/common/security/scram/internals/ScramSaslClient.java:
##
@@ -190,7 +190,7 @@ private void setState(State state) {
 
 private ClientFinalMessage handleServerFirstMessage(char[] password) 
throws SaslException {
 try {
-byte[] passwordBytes = ScramFormatter.normalize(new 
String(password));
+byte[] passwordBytes = ScramFormatter.normalize(new 
String(password).toCharArray());

Review Comment:
   Why not simply pass the password as below?
   ```java
   byte[] passwordBytes = ScramFormatter.normalize(password);
   ```



##
metadata/src/main/java/org/apache/kafka/metadata/storage/ScramParser.java:
##
@@ -173,7 +173,9 @@ byte[] saltedPassword(byte[] salt, int iterations) throws 
Exception {
 return configuredSaltedPassword.get();
 }
 return new ScramFormatter(mechanism).saltedPassword(
-configuredPasswordString.get(),
+configuredPasswordString

Review Comment:
   Similarly, it would be better if we changed `configuredPasswordString` so 
that it does not use String.



##
core/src/main/scala/kafka/server/DelegationTokenManager.scala:
##
@@ -106,7 +106,7 @@ class DelegationTokenManager(val config: KafkaConfig,
 val scramCredentialMap = mutable.Map[String, ScramCredential]()
 
 def scramCredential(mechanism: ScramMechanism): ScramCredential = {
-  new ScramFormatter(mechanism).generateCredential(hmacString, 
mechanism.minIterations)
+  new ScramFormatter(mechanism).generateCredential(hmacString.toCharArray, 
mechanism.minIterations)

Review Comment:
   It would be better if we changed `hmacString` so that it does not use String.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (KAFKA-17014) ScramFormatter should not use String for password.

2025-03-05 Thread Tsz-wo Sze (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-17014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17932697#comment-17932697
 ] 

Tsz-wo Sze commented on KAFKA-17014:


[~mingdaoy], I just have commented on your pr.  Thanks!

> ScramFormatter should not use String for password.
> --
>
> Key: KAFKA-17014
> URL: https://issues.apache.org/jira/browse/KAFKA-17014
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Tsz-wo Sze
>Assignee: Mingdao Yang
>Priority: Major
>
> Since String is immutable, there are no easy ways to erase a String password 
> after use.  It is a security concern so we should not use String for 
> passwords.  See also  
> https://stackoverflow.com/questions/8881291/why-is-char-preferred-over-string-for-passwords
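
A minimal sketch of the char[] pattern this issue argues for (standard JDK 
APIs only; the actual salting/hashing step is elided):

```java
import java.io.Console;
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class ErasablePassword {
    public static void main(String[] args) {
        Console console = System.console();
        if (console == null) return;               // no interactive terminal
        char[] password = console.readPassword("Password: ");
        try {
            // Encode to bytes for salting/hashing without ever creating a String.
            ByteBuffer encoded = StandardCharsets.UTF_8.encode(CharBuffer.wrap(password));
            byte[] bytes = Arrays.copyOfRange(encoded.array(), encoded.position(), encoded.limit());
            // ... pass bytes to the salt/hash step here ...
            Arrays.fill(bytes, (byte) 0);          // erase the encoded copy
            Arrays.fill(encoded.array(), (byte) 0);
        } finally {
            Arrays.fill(password, '\0');           // erase the secret; impossible with a String
        }
    }
}
```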



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] KAFKA-18811: Included admin.client.config in VerifiableShareConsumer.java [kafka]

2025-03-05 Thread via GitHub


AndrewJSchofield commented on code in PR #19108:
URL: https://github.com/apache/kafka/pull/19108#discussion_r1981863839


##
tools/src/main/java/org/apache/kafka/tools/VerifiableShareConsumer.java:
##
@@ -596,18 +599,29 @@ public static VerifiableShareConsumer 
createFromArgs(ArgumentParser parser, Stri
 StringDeserializer deserializer = new StringDeserializer();
 KafkaShareConsumer<String, String> consumer = new 
KafkaShareConsumer<>(consumerProps, deserializer, deserializer);
 
+Properties adminClientProps = new Properties();
+if (adminClientConfigFile != null) {
+try {
+
adminClientProps.putAll(Utils.loadProps(adminClientConfigFile));
+} catch (IOException e) {
+throw new ArgumentParserException(e.getMessage(), parser);
+}
+}
+
+adminClientProps.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, 
brokerHostandPort);

Review Comment:
   I think the point is that both consumer and admin client need to be 
configurable. Probably the same configuration values in both cases. Using the 
same config for both would work as far as I know.
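
   A sketch of the single-file approach under discussion, for concreteness 
(the `clientConfigFile` variable is hypothetical; the other identifiers reuse 
the createFromArgs diff above):
   ```java
   // Hypothetical: one shared properties file feeding both clients.
   Properties common = new Properties();
   if (clientConfigFile != null) {
       try {
           common.putAll(Utils.loadProps(clientConfigFile));
       } catch (IOException e) {
           throw new ArgumentParserException(e.getMessage(), parser);
       }
   }

   Properties consumerProps = new Properties();
   consumerProps.putAll(common);
   consumerProps.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, brokerHostandPort);
   KafkaShareConsumer<String, String> consumer =
           new KafkaShareConsumer<>(consumerProps, deserializer, deserializer);

   Properties adminClientProps = new Properties();
   adminClientProps.putAll(common);
   adminClientProps.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, brokerHostandPort);
   Admin admin = Admin.create(adminClientProps);
   ```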



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18637: Fix max connections per ip and override reconfigurations [kafka]

2025-03-05 Thread via GitHub


azhar2407 commented on code in PR #19099:
URL: https://github.com/apache/kafka/pull/19099#discussion_r1981949803


##
core/src/test/scala/integration/kafka/server/DynamicBrokerReconfigurationTest.scala:
##
@@ -482,6 +482,30 @@ class DynamicBrokerReconfigurationTest extends 
QuorumTestHarness with SaslSetup
 waitForAuthenticationFailure(producerBuilder.keyStoreProps(sslProperties1))
   }
 
+  @ParameterizedTest(name = 
TestInfoUtils.TestWithParameterizedQuorumAndGroupProtocolNames)
+  @MethodSource(Array("getTestQuorumAndGroupProtocolParametersAll"))
+  def testSocketServerConfigTest(quorum: String, groupProtocol: String): Unit 
= {
+val updatedMaxConnections = "20"
+val ConnectionsIpsOverride = "1.2.3.4:1234,1.2.4.5:2345"

Review Comment:
   Fixed



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: zk2kraft: add more information about kafka-configs.sh [kafka]

2025-03-05 Thread via GitHub


ijuma commented on code in PR #19100:
URL: https://github.com/apache/kafka/pull/19100#discussion_r1981944751


##
docs/zk2kraft.html:
##
@@ -153,6 +153,67 @@ Configurations
 
 
 
+Dynamic Log Levels
+
+
+
+The dynamic log levels feature allows you to change the log4j 
settings of a running broker or controller process without restarting it. The 
command-line syntax for setting dynamic log levels on brokers has not changed 
in KRaft mode. Here is an example of setting the log level on a broker:
+
+./bin/kafka-configs.sh --bootstrap-server localhost:9092 \
+--entity-type broker-loggers \
+--entity-name 1 \
+--alter \
+--add-config org.apache.kafka.raft.KafkaNetworkChannel=TRACE
+
+
+
+
+
+When setting dynamic log levels on the controllers, the 
--bootstrap-controller flag must be used. Here is an example of 
setting the log level on a controller:
+
+./bin/kafka-configs.sh --bootstrap-controller localhost:9093 \
+--entity-type broker-loggers \
+--entity-name 1 \
+--alter \
+--add-config org.apache.kafka.raft.KafkaNetworkChannel=TRACE
+
+Note that the entity-type must be specified as 
broker-loggers, even though we are changing a controller's log 
level rather than a broker's log level.
+
+
+
+
+When changing the log level of a combined node, which has both 
broker and controller roles, either --bootstrap-server or 
--bootstrap-controller may be used. Combined nodes have only a single set of 
log levels; there are no separate log levels for the broker and 
controller parts of the process.
+
+
+
+Dynamic Controller Configurations
+
+
+
+Some Kafka configurations can be changed dynamically, without 
restarting the process. The command-line syntax for setting dynamic configurations 
on brokers has not changed in KRaft mode. Here is an example of setting the 
number of IO threads on a broker:
+

Review Comment:
   Thanks for the clarification.
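
   For readers of this digest: the IO-threads example is truncated in the 
quoted diff above. It follows the same kafka-configs.sh pattern as the log 
level examples, roughly like this (the thread count is illustrative):
   ```
   ./bin/kafka-configs.sh --bootstrap-server localhost:9092 \
       --entity-type brokers \
       --entity-name 1 \
       --alter \
       --add-config num.io.threads=8
   ```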



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (KAFKA-18477) remove usage of OffsetForLeaderEpochRequest in AbstractFetcherThread

2025-03-05 Thread Jun Rao (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-18477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17932712#comment-17932712
 ] 

Jun Rao commented on KAFKA-18477:
-

[~chia7712] : Good point. If the follower doesn't have the latest epoch, 
currently we fall back to truncating based on HWM. So we can still keep that 
part of the logic to support v0/v1 records. However, there is no need to ever 
issue OffsetForLeaderEpochRequests.
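
In pseudocode, the fallback described above looks roughly like this (names are 
illustrative, not Kafka's actual fetcher code):

```java
// Illustrative pseudocode only.
OptionalInt latestEpoch = log.latestEpoch();
long truncationOffset;
if (latestEpoch.isEmpty()) {
    // v0/v1 message formats carry no leader epoch: fall back to the HWM.
    truncationOffset = log.highWatermark();
} else {
    // Epoch-bearing logs can divergence-check locally; with base MV >= 3.0
    // there is no need to send OffsetsForLeaderEpoch to the leader.
    truncationOffset = endOffsetForEpoch(latestEpoch.getAsInt());
}
log.truncateTo(truncationOffset);
```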

> remove usage of OffsetForLeaderEpochRequest in AbstractFetcherThread
> 
>
> Key: KAFKA-18477
> URL: https://issues.apache.org/jira/browse/KAFKA-18477
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 4.0.0
>Reporter: Jun Rao
>Assignee: 黃竣陽
>Priority: Major
> Fix For: 4.1.0
>
>
> This is because of the base MV in 4.0 is 3.0.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] KAFKA-18766:Docs: Make usage of allow.everyone.if.no.acl.found config clearer [kafka]

2025-03-05 Thread via GitHub


Iamoshione commented on PR #19077:
URL: https://github.com/apache/kafka/pull/19077#issuecomment-2701858209

   @AndrewJSchofield Hello there, I have adjusted the indentation and removed 
the unnecessary div tag at the end of the page.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (KAFKA-18422) add Kafka client upgrade path

2025-03-05 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-18422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot updated KAFKA-18422:

Fix Version/s: 4.0.0

> add Kafka client upgrade path
> -
>
> Key: KAFKA-18422
> URL: https://issues.apache.org/jira/browse/KAFKA-18422
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Kuan Po Tseng
>Priority: Blocker
>  Labels: need-kip
> Fix For: 4.0.0
>
>
> https://github.com/apache/kafka/pull/18193#issuecomment-2572283545



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-18074) Add kafka client compatibility matrix

2025-03-05 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-18074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot updated KAFKA-18074:

Fix Version/s: 4.0.0

> Add kafka client compatibility matrix
> -
>
> Key: KAFKA-18074
> URL: https://issues.apache.org/jira/browse/KAFKA-18074
> Project: Kafka
>  Issue Type: Task
>Reporter: Chia-Ping Tsai
>Assignee: 黃竣陽
>Priority: Blocker
> Fix For: 4.0.0
>
>
> in 4.0 we have many major breaking changes - JDK upgrade and protocol cleanup 
> - that may confuse users in rolling upgrade and setup env. Hence, we should 
> add a matrix for all our client modules - client, streams, and connect
> the matrix consists of following item.
> 1. supported JDKs
> 2. supported broker versions



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] KAFKA-18919 Clarify that KafkaPrincipalBuilder classes must also implement KafkaPrincipalSerde [kafka]

2025-03-05 Thread via GitHub


chia7712 commented on PR #19104:
URL: https://github.com/apache/kafka/pull/19104#issuecomment-2700934771

   cherry-pick to 4.0


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18811: Included admin.client.config in VerifiableShareConsumer.java [kafka]

2025-03-05 Thread via GitHub


apoorvmittal10 commented on code in PR #19108:
URL: https://github.com/apache/kafka/pull/19108#discussion_r1981377611


##
tools/src/main/java/org/apache/kafka/tools/VerifiableShareConsumer.java:
##
@@ -596,18 +599,29 @@ public static VerifiableShareConsumer 
createFromArgs(ArgumentParser parser, Stri
 StringDeserializer deserializer = new StringDeserializer();
 KafkaShareConsumer<String, String> consumer = new 
KafkaShareConsumer<>(consumerProps, deserializer, deserializer);
 
+Properties adminClientProps = new Properties();
+if (adminClientConfigFile != null) {
+try {
+
adminClientProps.putAll(Utils.loadProps(adminClientConfigFile));
+} catch (IOException e) {
+throw new ArgumentParserException(e.getMessage(), parser);
+}
+}
+
+adminClientProps.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, 
brokerHostandPort);

Review Comment:
   It means SASL configs. Shouldn't they be the same as those passed in 
`consumer.config`? If so, should we rename `consumer.config` to `client.config` 
and use a single config file? Thoughts? We can get @AndrewJSchofield's opinion 
as well, but I am leaning towards a single config file instead of two.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (KAFKA-18882) Remove BaseKey, TxnKey, and UnknownKey

2025-03-05 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-18882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-18882.

Fix Version/s: 4.1.0
   Resolution: Fixed

> Remove BaseKey, TxnKey, and UnknownKey
> --
>
> Key: KAFKA-18882
> URL: https://issues.apache.org/jira/browse/KAFKA-18882
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: xuanzhang gong
>Priority: Minor
> Fix For: 4.1.0
>
>
> as title



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] KAFKA-18919 Clarify that KafkaPrincipalBuilder classes must also implement KafkaPrincipalSerde [kafka]

2025-03-05 Thread via GitHub


chia7712 commented on PR #19104:
URL: https://github.com/apache/kafka/pull/19104#issuecomment-2700919321

   ![Screenshot 2025-03-05 
21-22-32](https://github.com/user-attachments/assets/8933593a-b41e-47b5-a95d-e8a7c5887dc8)
   
   Checked the web site. LGTM.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: Fix typos in multiple files [kafka]

2025-03-05 Thread via GitHub


AndrewJSchofield commented on PR #19102:
URL: https://github.com/apache/kafka/pull/19102#issuecomment-2701103358

   Reviewers: Andrew Schofield 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: Fix typos in multiple files [kafka]

2025-03-05 Thread via GitHub


AndrewJSchofield merged PR #19102:
URL: https://github.com/apache/kafka/pull/19102


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18700: Migrate SnapshotPath and Entry in LogHistory to record classes [kafka]

2025-03-05 Thread via GitHub


mingyen066 commented on code in PR #19062:
URL: https://github.com/apache/kafka/pull/19062#discussion_r1981696171


##
raft/src/main/java/org/apache/kafka/raft/internals/LogHistory.java:
##
@@ -76,42 +75,5 @@ public interface LogHistory {
  */
 void clear();
 
-final class Entry<T> {
-private final long offset;
-private final T value;
-
-public Entry(long offset, T value) {
-this.offset = offset;
-this.value = value;
-}
-
-public long offset() {
-return offset;
-}
-
-public T value() {
-return value;
-}
-
-@Override
-public boolean equals(Object o) {
-if (this == o) return true;
-if (o == null || getClass() != o.getClass()) return false;
-
-Entry<?> that = (Entry<?>) o;
-
-if (offset != that.offset) return false;
-return Objects.equals(value, that.value);
-}
-
-@Override
-public int hashCode() {
-return Objects.hash(offset, value);
-}
-
-@Override
-public String toString() {
-return String.format("Entry(offset=%d, value=%s)", offset, value);
-}
-}
+record Entry<T>(long offset, T value) { }

Review Comment:
   Yes, we could. But I think we might lose some readability, since something 
like `votersEntry.get().offset()` would be refactored to 
`votersEntry.get().getKey()`.
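
   A compilable sketch of the trade-off (class name and values are illustrative):
   ```java
   import java.util.Map;

   public class EntryReadability {
       record Entry<T>(long offset, T value) { }

       public static void main(String[] args) {
           Entry<String> e = new Entry<>(42L, "voters");
           long fromRecord = e.offset();              // reads as domain language

           Map.Entry<Long, String> me = Map.entry(42L, "voters");
           long fromMapEntry = me.getKey();           // generic accessor, meaning lost

           System.out.println(fromRecord + " " + fromMapEntry);
       }
   }
   ```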



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (KAFKA-18919) Clarify that KafkaPrincipalBuilder classes must also implement KafkaPrincipalSerde

2025-03-05 Thread Mickael Maison (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-18919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17932678#comment-17932678
 ] 

Mickael Maison commented on KAFKA-18919:


Any existing KafkaPrincipalBuilder implementation that is used with KRaft must 
already also implement KafkaPrincipalSerde. So this shouldn't break any code.

> Clarify that KafkaPrincipalBuilder classes must also implement 
> KafkaPrincipalSerde
> --
>
> Key: KAFKA-18919
> URL: https://issues.apache.org/jira/browse/KAFKA-18919
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Szu-Yung Wang
>Priority: Major
> Fix For: 4.0.0
>
>
> In KRaft, custom KafkaPrincipalBuilder implementations must also implement 
> KafkaPrincipalSerde, otherwise brokers are not able to forward requests to 
> the controller.
> So we need to update our docs.
> -We should also be able to do some cleanup in the code. For example 
> RequestContext uses Optional<KafkaPrincipalSerde>, we should be able to 
> remove Optional since it's now always present.- moved to KAFKA-18922
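
A sketch of the documented requirement (interface and method shapes follow 
Kafka's public security API; the "type:name" encoding is an illustrative 
assumption, not a recommended format):

```java
import java.nio.charset.StandardCharsets;

import org.apache.kafka.common.errors.SerializationException;
import org.apache.kafka.common.security.auth.AuthenticationContext;
import org.apache.kafka.common.security.auth.KafkaPrincipal;
import org.apache.kafka.common.security.auth.KafkaPrincipalBuilder;
import org.apache.kafka.common.security.auth.KafkaPrincipalSerde;

// A custom builder that also implements the serde, as KRaft requires.
public class SimplePrincipalBuilder implements KafkaPrincipalBuilder, KafkaPrincipalSerde {

    @Override
    public KafkaPrincipal build(AuthenticationContext context) {
        // Trivial mapping for illustration; real builders inspect the context.
        return KafkaPrincipal.ANONYMOUS;
    }

    @Override
    public byte[] serialize(KafkaPrincipal principal) throws SerializationException {
        // Naive "type:name" encoding, enough for brokers to forward requests.
        return (principal.getPrincipalType() + ":" + principal.getName())
                .getBytes(StandardCharsets.UTF_8);
    }

    @Override
    public KafkaPrincipal deserialize(byte[] bytes) throws SerializationException {
        String[] parts = new String(bytes, StandardCharsets.UTF_8).split(":", 2);
        if (parts.length != 2)
            throw new SerializationException("Malformed principal bytes");
        return new KafkaPrincipal(parts[0], parts[1]);
    }
}
```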



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] MINOR: Fix java docs [kafka]

2025-03-05 Thread via GitHub


lucasbru commented on PR #19118:
URL: https://github.com/apache/kafka/pull/19118#issuecomment-2701355948

   Reverted on trunk 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-18422: add Kafka client upgrade path [kafka]

2025-03-05 Thread via GitHub


mingdaoy commented on code in PR #19097:
URL: https://github.com/apache/kafka/pull/19097#discussion_r1981698752


##
docs/upgrade.html:
##
@@ -46,6 +46,9 @@ Upgrading to 
4.0.0 from any vers
Every <a href="https://github.com/apache/kafka/blob/trunk/server-common/src/main/java/org/apache/kafka/server/common/MetadataVersion.java">MetadataVersion</a>
 has a boolean parameter that indicates if there are metadata changes 
(e.g. IBP_4_0_IV1(23, "4.0", "IV1", true) means this version has 
metadata changes).
 Given your current and target versions, a downgrade is only possible 
if there are no metadata changes in the versions between.
+For the Kafka client upgrade path, note that many deprecated APIs were 
removed in Kafka 4.0. Additionally, upgrading directly to 4.x from certain 
versions is not feasible.

Review Comment:
   Adjusted here https://github.com/apache/kafka/pull/19119#issue-2897715740



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] HOTFIX client java doc compile error [kafka]

2025-03-05 Thread via GitHub


lucasbru commented on PR #19115:
URL: https://github.com/apache/kafka/pull/19115#issuecomment-2701354757

   Hey - the change was reverted on trunk instead, so we can close this. Again, 
sorry for the disturbance. It seems CI didn't run at all for that PR.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


