[jira] [Created] (KAFKA-17795) Fix the file path of suppressions.xml to run build on windows

2024-10-15 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-17795:
--

 Summary: Fix the file path of suppressions.xml to run build on 
windows
 Key: KAFKA-17795
 URL: https://issues.apache.org/jira/browse/KAFKA-17795
 Project: Kafka
  Issue Type: Bug
Reporter: Chia-Ping Tsai
Assignee: Cheng-Yan Wang


https://github.com/apache/kafka/commit/ad08ec600fa2b250884e456f18b47c77bf4071dc#diff-1caaa9743f4aa4a5d156f2b3cc72b86079f5c17b944d5dfdda2d0d39e7f14816R368

the path "kafka/security/JaasTestUtils.java" does not work on windwos, so we 
should rewrite it to "(JaasTestUtils).java"
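For illustration only (not part of the ticket), a minimal JDK regex sketch showing why a pattern that hard-codes '/' fails against a Windows-style path while matching on the file name alone works on both platforms:

{code:java}
import java.util.regex.Pattern;

public class SuppressionPathCheck {
    public static void main(String[] args) {
        String unixPath = "core/src/test/java/kafka/security/JaasTestUtils.java";
        String windowsPath = "core\\src\\test\\java\\kafka\\security\\JaasTestUtils.java";

        // Hard-coding '/' as the separator only matches the Unix-style path.
        Pattern slashOnly = Pattern.compile("kafka/security/JaasTestUtils\\.java");
        System.out.println(slashOnly.matcher(unixPath).find());    // true
        System.out.println(slashOnly.matcher(windowsPath).find()); // false

        // Matching on the file name alone works regardless of the separator.
        Pattern nameOnly = Pattern.compile("(JaasTestUtils)\\.java");
        System.out.println(nameOnly.matcher(unixPath).find());     // true
        System.out.println(nameOnly.matcher(windowsPath).find());  // true
    }
}
{code}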



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17797) Use DelayedFuturePurgatory for RemoteListOffsetsPurgatory

2024-10-15 Thread Kamal Chandraprakash (Jira)
Kamal Chandraprakash created KAFKA-17797:


 Summary: Use DelayedFuturePurgatory for RemoteListOffsetsPurgatory
 Key: KAFKA-17797
 URL: https://issues.apache.org/jira/browse/KAFKA-17797
 Project: Kafka
  Issue Type: Task
Reporter: Kamal Chandraprakash


We are using the DelayedOperationPurgatory for remote-list-offsets. Each 
request is being tracked by multiple watch-keys: 
[listOffsetsRequestKeys|https://sourcegraph.com/github.com/apache/kafka/-/blob/core/src/main/scala/kafka/server/ReplicaManager.scala?L1561].
 



The watch-key is based on a topicPartition and the purgatory is checked on that 
key every time a remote listOffset task completes. However, the completion of 
such a task has no impact on other pending listOffset requests on the same 
partition.

If we chain all the futures together, the only reason we still need the 
purgatory is the expiration logic after the timeout. Perhaps using the 
DelayedFuturePurgatory pattern would be more intuitive.
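For illustration only (this is plain JDK code, not Kafka's DelayedFuturePurgatory API), a minimal sketch of the future-chaining idea: each per-partition result is an ordinary future, and a single timeout on the combined future plays the role the purgatory plays today:

{code:java}
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class ChainedFuturesSketch {
    public static void main(String[] args) {
        // One future per remote list-offsets sub-task (hypothetical stand-ins).
        CompletableFuture<Long> partition0 = new CompletableFuture<>();
        CompletableFuture<Long> partition1 = new CompletableFuture<>();
        List<CompletableFuture<Long>> tasks = List.of(partition0, partition1);

        // Chain the sub-task futures; the only timer needed is a single
        // expiration on the combined future.
        CompletableFuture<Void> all = CompletableFuture
                .allOf(tasks.toArray(new CompletableFuture[0]))
                .orTimeout(30, TimeUnit.SECONDS);

        all.whenComplete((ignored, error) -> {
            if (error != null) {
                System.out.println("list-offsets request expired or failed: " + error);
            } else {
                tasks.forEach(t -> System.out.println("completed with offset " + t.join()));
            }
        });

        partition0.complete(42L);
        partition1.complete(17L);
    }
}
{code}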

We could have a customized DelayedFuturePurgatory that also adds a 
delayed-operation key per partition, so that pending operations are checked for 
completion when the replica is no longer the leader.
 
See: https://github.com/apache/kafka/pull/16602#discussion_r1792600283



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17801) RemoteLogManager may compute inaccurate upperBoundOffset for aborted txns

2024-10-15 Thread Jun Rao (Jira)
Jun Rao created KAFKA-17801:
---

 Summary: RemoteLogManager may compute inaccurate upperBoundOffset 
for aborted txns
 Key: KAFKA-17801
 URL: https://issues.apache.org/jira/browse/KAFKA-17801
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 3.6.0
Reporter: Jun Rao


In RemoteLogManager.read, we compute startPos as follows:
{code:java}
startPos = lookupPositionForOffset(remoteLogSegmentMetadata, offset);{code}
This is the position returned by the offset index. The actual position of the 
first batch being read is determined by the call below, but startPos is not 
updated accordingly.
{code:java}
firstBatch = findFirstBatch(remoteLogInputStream, offset);{code}
We then use the inaccurate startPos to create fetchDataInfo.
{code:java}
FetchDataInfo fetchDataInfo = new FetchDataInfo(
new LogOffsetMetadata(offset, remoteLogSegmentMetadata.startOffset(), startPos),
MemoryRecords.readableRecords(buffer));{code}
In addAbortedTransactions(), we use startPos to find the upperBoundOffset to 
retrieve the aborted txns.
{code:java}
long upperBoundOffset = offsetIndex.fetchUpperBoundOffset(startOffsetPosition, 
fetchSize)
.map(position -> position.offset).orElse(segmentMetadata.endOffset() + 1);{code}
The inaccurate startPos can lead to inaccurate upperBoundOffset, which leads to 
inaccurate aborted txns returned to the consumer.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-12493) The controller should handle the consistency between the controllerContext and the partition replicas assignment on zookeeper

2024-10-15 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-12493.

Resolution: Won't Fix

We're removing ZooKeeper support, closing

> The controller should handle the consistency between the controllerContext 
> and the partition replicas assignment on zookeeper
> -
>
> Key: KAFKA-12493
> URL: https://issues.apache.org/jira/browse/KAFKA-12493
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 2.0.0, 2.1.0, 2.2.0, 2.3.0, 2.4.0, 2.5.0, 2.6.0, 2.7.0
>Reporter: Wenbing Shen
>Assignee: Wenbing Shen
>Priority: Major
>
> This question can be linked to this email: 
> [https://lists.apache.org/thread.html/redf5748ec787a9c65fc48597e3d2256ffdd729de14afb873c63e6c5b%40%3Cusers.kafka.apache.org%3E]
>  
> This is a 100% reproducible problem.
> Problem description:
> In our customer's production environment, code owned by another department 
> redistributed the existing partitions of a topic and wrote the new assignment 
> into ZooKeeper. When processing the resulting partition-modification event, 
> the controller only considered the newly added partitions: it registered the 
> new partitions and replicas in the partition and replica state machines and 
> issued LeaderAndISR and other control requests for them.
> However, the controller did not verify whether the existing partition replica 
> assignment in the controllerContext still matched the assignment stored on the 
> znode in ZooKeeper. This appears harmless at first, but when the brokers later 
> have to be restarted for some reason (configuration updates, upgrades, and so 
> on), the affected topics become abnormal: the controller cannot complete the 
> election of new leaders, and the original leaders cannot correctly recognize 
> the replica assignment currently stored in ZooKeeper. In our customer's 
> environment the real-time business was interrupted and some data was lost.
> This problem can be reproduced reliably as follows:
> Adding partitions or modifying the replicas of an existing topic through the 
> following code causes the original partition replicas to be reallocated and 
> written to ZooKeeper. The controller does not process this event correctly; 
> after restarting the brokers hosting the topic, the topic can no longer be 
> produced to or consumed from.
>  
> {code:java}
> public void updateKafkaTopic(KafkaTopicVO kafkaTopicVO) {
>     ZkUtils zkUtils = ZkUtils.apply(ZK_LIST, SESSION_TIMEOUT, CONNECTION_TIMEOUT,
>             JaasUtils.isZkSecurityEnabled());
>     try {
>         if (kafkaTopicVO.getPartitionNum() >= 0 && kafkaTopicVO.getReplicationNum() >= 0) {
>             // Get the original broker metadata
>             Seq<BrokerMetadata> brokerMetadata = AdminUtils.getBrokerMetadatas(zkUtils,
>                     RackAwareMode.Enforced$.MODULE$,
>                     Option.apply(null));
>             // Generate a new partition replica allocation plan
>             scala.collection.Map<Object, Seq<Object>> replicaAssign =
>                     AdminUtils.assignReplicasToBrokers(brokerMetadata,
>                             kafkaTopicVO.getPartitionNum(),   // number of partitions
>                             kafkaTopicVO.getReplicationNum(), // number of replicas per partition
>                             -1,
>                             -1);
>             // Write the modified partition replica assignment to ZooKeeper
>             AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK(zkUtils,
>                     kafkaTopicVO.getTopicNameList().get(0),
>                     replicaAssign,
>                     null,
>                     true);
>         }
>     } catch (Exception e) {
>         System.out.println("Adjust partition abnormal");
>         System.exit(0);
>     } finally {
>         zkUtils.close();
>     }
> }
> {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-17769) Fix flaky PlaintextConsumerSubscriptionTest.testSubscribeInvalidTopicCanUnsubscribe

2024-10-15 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-17769.

Resolution: Fixed

> Fix flaky 
> PlaintextConsumerSubscriptionTest.testSubscribeInvalidTopicCanUnsubscribe
> ---
>
> Key: KAFKA-17769
> URL: https://issues.apache.org/jira/browse/KAFKA-17769
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Reporter: Yu-Lin Chen
>Assignee: Yu-Lin Chen
>Priority: Major
>  Labels: flaky-test, integration-test, kip-848-client-support
> Fix For: 4.0.0
>
>
> 4 flaky out of 110 trunk builds in past 2 weeks. ([Report 
> Link|https://ge.apache.org/scans/tests?search.rootProjectNames=kafka&search.startTimeMax=1728584869905&search.startTimeMin=172615680&search.tags=trunk&search.timeZoneId=Asia%2FTaipei&tests.container=kafka.api.PlaintextConsumerSubscriptionTest&tests.test=testSubscribeInvalidTopicCanUnsubscribe(String%2C%20String)%5B3%5D])
> This issue can be reproduced locally within 50 loops.
>  
> ([Oct 4 2024 at 10:35:49 
> CST|https://ge.apache.org/s/o4ir4xtitsu52/tests/task/:core:test/details/kafka.api.PlaintextConsumerSubscriptionTest/testSubscribeInvalidTopicCanUnsubscribe(String%2C%20String)%5B3%5D?top-execution=1]):
> {code:java}
> org.apache.kafka.common.KafkaException: Failed to close kafka consumer
> at org.apache.kafka.clients.consumer.internals.AsyncKafkaConsumer.close(AsyncKafkaConsumer.java:1249)
> at org.apache.kafka.clients.consumer.internals.AsyncKafkaConsumer.close(AsyncKafkaConsumer.java:1204)
> at org.apache.kafka.clients.consumer.KafkaConsumer.close(KafkaConsumer.java:1718)
> at kafka.api.IntegrationTestHarness.$anonfun$tearDown$3(IntegrationTestHarness.scala:249)
> at kafka.api.IntegrationTestHarness.$anonfun$tearDown$3$adapted(IntegrationTestHarness.scala:249)
> at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:619)
> at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:617)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:935)
> at kafka.api.IntegrationTestHarness.tearDown(IntegrationTestHarness.scala:249)
> at java.lang.reflect.Method.invoke(Method.java:566)
> at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
> at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177)
> at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
> at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
> at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
> at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
> at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
> at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
> at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658)
> at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:274)
> at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
> at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
> at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
> at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
> at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
> at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
> at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
> at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
> at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:497)
> at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:274)
> at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
> at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
> at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
> at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655)
> at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
> at java.util.stream.AbstractPipeline.wr

[jira] [Created] (KAFKA-17798) Add forbidden-apis linting to gradle build

2024-10-15 Thread Greg Harris (Jira)
Greg Harris created KAFKA-17798:
---

 Summary: Add forbidden-apis linting to gradle build
 Key: KAFKA-17798
 URL: https://issues.apache.org/jira/browse/KAFKA-17798
 Project: Kafka
  Issue Type: Improvement
  Components: build
Reporter: Greg Harris


We should use this project: [https://github.com/policeman-tools/forbidden-apis] 
to perform bytecode-level checks for undesirable API usages within the Kafka 
project. To start on this, we should add it to our build and CI with either no 
forbidden APIs or only a minimal set.

We can track enabling certain checks separately.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17799) Forbid use of the default system locale

2024-10-15 Thread Greg Harris (Jira)
Greg Harris created KAFKA-17799:
---

 Summary: Forbid use of the default system locale
 Key: KAFKA-17799
 URL: https://issues.apache.org/jira/browse/KAFKA-17799
 Project: Kafka
  Issue Type: Improvement
Reporter: Greg Harris


Using the system default locale sometimes has negative effects, especially when 
its use is not intended. We should use forbidden-apis to catch these usages 
statically.
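For illustration only (not from the ticket), a minimal JDK example of the kind of default-locale dependence such a check would flag: toUpperCase() without an explicit locale gives different results under, for example, the Turkish locale.

{code:java}
import java.util.Locale;

public class LocalePitfall {
    public static void main(String[] args) {
        // Simulate running on a host whose default locale is Turkish.
        Locale.setDefault(Locale.forLanguageTag("tr-TR"));

        // Locale-sensitive: under tr_TR, 'i' upper-cases to the dotted capital 'İ'.
        System.out.println("timestamp".toUpperCase());            // TİMESTAMP
        // Locale-insensitive: an explicit Locale.ROOT is stable everywhere.
        System.out.println("timestamp".toUpperCase(Locale.ROOT)); // TIMESTAMP
    }
}
{code}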



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17800) Forbid the use of System.exit and Runtime exit

2024-10-15 Thread Greg Harris (Jira)
Greg Harris created KAFKA-17800:
---

 Summary: Forbid the use of System.exit and Runtime exit
 Key: KAFKA-17800
 URL: https://issues.apache.org/jira/browse/KAFKA-17800
 Project: Kafka
  Issue Type: Improvement
Reporter: Greg Harris


Currently the Exit class enforces its usage through checkstyle, which is a 
source-code-level check. We should replace this with a bytecode-level check via 
forbidden-apis.
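For illustration, such a rule could be expressed in forbidden-apis' plain-text signatures format; the entries below are a sketch only and assume the existing org.apache.kafka.common.utils.Exit wrapper remains the sanctioned replacement:

{code}
@defaultMessage Use org.apache.kafka.common.utils.Exit instead of terminating the JVM directly
java.lang.System#exit(int)
java.lang.Runtime#exit(int)
java.lang.Runtime#halt(int)
{code}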



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-17367) Add share coordinator implementation

2024-10-15 Thread Sushant Mahajan (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushant Mahajan resolved KAFKA-17367.
-
Resolution: Fixed

> Add share coordinator implementation
> 
>
> Key: KAFKA-17367
> URL: https://issues.apache.org/jira/browse/KAFKA-17367
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Sushant Mahajan
>Assignee: Sushant Mahajan
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-17633) Implement formatter for share group state topic records

2024-10-15 Thread Sushant Mahajan (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushant Mahajan resolved KAFKA-17633.
-
Resolution: Fixed

> Implement formatter for share group state topic records
> ---
>
> Key: KAFKA-17633
> URL: https://issues.apache.org/jira/browse/KAFKA-17633
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Sushant Mahajan
>Assignee: Sushant Mahajan
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17796) Add support to persist higher leaderEpoch in read state call in share coordinator.

2024-10-15 Thread Sushant Mahajan (Jira)
Sushant Mahajan created KAFKA-17796:
---

 Summary: Add support to persist higher leaderEpoch in read state call 
in share coordinator.
 Key: KAFKA-17796
 URL: https://issues.apache.org/jira/browse/KAFKA-17796
 Project: Kafka
  Issue Type: Sub-task
Reporter: Sushant Mahajan
Assignee: Sushant Mahajan






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17807) Update jetty dependency

2024-10-15 Thread Vishal (Jira)
Vishal created KAFKA-17807:
--

 Summary: Update jetty dependency
 Key: KAFKA-17807
 URL: https://issues.apache.org/jira/browse/KAFKA-17807
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.9.0
Reporter: Vishal
Assignee: Colin McCabe
 Fix For: 4.0.0, 3.9.0, 3.8.1






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17806) Revisit all this-escape

2024-10-15 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-17806:
--

 Summary: Revisit all this-escape 
 Key: KAFKA-17806
 URL: https://issues.apache.org/jira/browse/KAFKA-17806
 Project: Kafka
  Issue Type: Bug
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


Classes with subclasses should be refactored. Some of these classes can be 
marked as final to resolve the warnings, since this-escape only becomes an issue 
when methods can be overridden. Additionally, we need to review the tests to 
ensure that adding final declarations doesn't break any mock-based testing.
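For illustration only (not from the ticket), a minimal example of the pattern the JDK this-escape lint flags: a constructor that calls an overridable method lets a subclass observe a partially constructed object, and marking the method (or the class) final removes the warning.

{code:java}
class Config {
    private final String value;

    Config(String value) {
        this.value = value;
        validate(); // 'this' escapes: a subclass override runs before the subclass is initialized
    }

    // Marking this method (or the whole class) final removes the this-escape warning.
    void validate() {
        System.out.println("validating " + value);
    }
}

class StrictConfig extends Config {
    private final String mode;

    StrictConfig(String value) {
        super(value);         // the overridden validate() runs here, before 'mode' is assigned
        this.mode = "strict";
    }

    @Override
    void validate() {
        System.out.println("mode=" + mode); // observes mode=null during construction
    }
}

public class ThisEscapeDemo {
    public static void main(String[] args) {
        new StrictConfig("x"); // prints "mode=null"
    }
}
{code}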



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.8 #102

2024-10-15 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.9 #90

2024-10-15 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-17805) Deprecate named topologies

2024-10-15 Thread A. Sophie Blee-Goldman (Jira)
A. Sophie Blee-Goldman created KAFKA-17805:
--

 Summary: Deprecate named topologies
 Key: KAFKA-17805
 URL: https://issues.apache.org/jira/browse/KAFKA-17805
 Project: Kafka
  Issue Type: Task
  Components: streams
Reporter: A. Sophie Blee-Goldman
Assignee: A. Sophie Blee-Goldman
 Fix For: 4.0.0


We plan to eventually phase out the experimental "named topologies" feature, 
since new features and functionality in Streams will not be compatible with 
named topologies and continuing to support them would result in increasing tech 
debt over time.

However, it is as-yet unknown how many users have deployed named topologies in 
production. We know of at least one case but hope to hear from any others who 
may be concerned about this deprecation, so that we can work on an alternative 
or even revert the deprecation if need be.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.7 #204

2024-10-15 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-17804) optimize ReplicaManager.completeDelayedOperationsWhenNotPartitionLeader

2024-10-15 Thread Jun Rao (Jira)
Jun Rao created KAFKA-17804:
---

 Summary: optimize 
ReplicaManager.completeDelayedOperationsWhenNotPartitionLeader
 Key: KAFKA-17804
 URL: https://issues.apache.org/jira/browse/KAFKA-17804
 Project: Kafka
  Issue Type: Improvement
  Components: core
Reporter: Jun Rao


Currently, ReplicaManager.completeDelayedOperationsWhenNotPartitionLeader is 
called when (1) a replica is removed from the broker and (2) a replica becomes 
a follower, and it checks multiple purgatories for completion. However, not all 
purgatories need to be checked in both situations. For example, the fetch 
purgatory doesn't need to be checked in case (2), since we now support fetch 
from follower.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.8 #101

2024-10-15 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-14190) Corruption of Topic IDs with pre-2.8.0 ZK admin clients

2024-10-15 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-14190.

Resolution: Won't Fix

We're now removing ZooKeeper support, so closing

> Corruption of Topic IDs with pre-2.8.0 ZK admin clients
> ---
>
> Key: KAFKA-14190
> URL: https://issues.apache.org/jira/browse/KAFKA-14190
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, core, zkclient
>Affects Versions: 2.8.0, 3.1.0, 2.8.1, 3.0.0, 3.0.1, 3.2.0, 3.1.1, 3.2.1
>Reporter: Alexandre Dupriez
>Assignee: Divij Vaidya
>Priority: Major
>
> h3. Scope
> The problem reported below has been verified to occur in Zookeeper mode. It 
> has not been attempted with Kraft controllers, although it is unlikely to be 
> reproduced in Kraft mode given the nature of the issue and clients involved.
> h3. Problem Description
> The ID of a topic is lost when an AdminClient of version < 2.8.0 is used to 
> increase the number of partitions of that topic for a cluster with version >= 
> 2.8.0. This results in the controller re-creating the topic IDs upon restart, 
> eventually conflicting with the topic ID of broker’s {{partition.metadata}} 
> files in the partition directories of the impacted topic, leading to an 
> availability loss of the partitions which do not accept leadership / 
> follower-ship when the topic ID indicated by a {{LeaderAndIsr}} request 
> differs from their own locally cached ID.
> One mitigation post-corruption is to substitute the stale topic ID in the 
> {{partition.metadata}} files with the new topic ID referenced by the 
> controller, or alternatively, delete the {{partition.metadata}} file 
> altogether. This requires a restart of the brokers which are assigned the 
> partitions of the impacted topic.
> h3. Steps to reproduce
> 1. Set up and launch a two-node Kafka cluster in ZooKeeper mode.
> 2. Create a topic e.g. via {{kafka-topics.sh}}
> {noformat}
> ./bin/kafka-topics.sh --bootstrap-server :9092 --create --topic myTopic 
> --partitions 2 --replication-factor 2{noformat}
> 3. Capture the topic ID using a 2.8.0+ client.
> {noformat}
> ./kafka/bin/kafka-topics.sh --bootstrap-server :9092 --topic myTopic 
> --describe
> Topic: myTopic TopicId: jKTRaM_TSNqocJeQI2aYOQ PartitionCount: 2 
> ReplicationFactor: 2 Configs: segment.bytes=1073741824
> Topic: myTopic Partition: 0 Leader: 0 Replicas: 1,0 Isr: 0,1
> Topic: myTopic Partition: 1 Leader: 0 Replicas: 0,1 Isr: 0,1{noformat}
> 4. Restart one of the brokers. This will make each broker create the 
> {{partition.metadata}} files in the partition directories, since it will 
> already have loaded the {{Log}} instance in memory.
> 5. Using a *pre-2.8.0* client library, run the following command.
> {noformat}
> ./kafka/bin/kafka-topics.sh --zookeeper :2181 --alter --topic myTopic 
> --partitions 3{noformat}
> 6. Using a 2.8.0+ client library, describe the topic via Zookeeper and notice 
> the absence of topic ID from the output, where it is otherwise expected.
> {noformat}
> ./kafka/bin/kafka-topics.sh --zookeeper :2181 --describe --topic myTopic
> Topic: myTopic PartitionCount: 3 ReplicationFactor: 2 Configs: 
> Topic: myTopic Partition: 0 Leader: 1 Replicas: 1,0 Isr: 0,1
> Topic: myTopic Partition: 1 Leader: 0 Replicas: 0,1 Isr: 0,1
> Topic: myTopic Partition: 2 Leader: 1 Replicas: 1,0 Isr: 1,0{noformat}
> 7. Using a 2.8.0+ client library, describe the topic via a broker endpoint 
> and notice the topic ID changed.
> {noformat}
> ./kafka/bin/kafka-topics.sh --bootstrap-server :9093 --describe --topic myTopic
> Topic: myTopic TopicId: nI-JQtPwQwGiylMfm8k13w PartitionCount: 3 
> ReplicationFactor: 2 Configs: segment.bytes=1073741824
> Topic: myTopic Partition: 0 Leader: 1 Replicas: 1,0 Isr: 1,0
> Topic: myTopic Partition: 1 Leader: 1 Replicas: 0,1 Isr: 1,0
> Topic: myTopic Partition: 2 Leader: 1 Replicas: 1,0 Isr: 1,0{noformat}
> 8. Restart the controller.
> 9. Check the state-change.log file on the controller broker. The following 
> type of logs will appear.
> {noformat}
> [2022-08-25 17:44:05,308] ERROR [Broker id=0] Topic Id in memory: 
> jKTRaM_TSNqocJeQI2aYOQ does not match the topic Id for partition myTopic-0 
> provided in the request: nI-JQtPwQwGiylMfm8k13w. 
> (state.change.logger){noformat}
> 10. Restart the other broker.
> 11. Describe the topic via the broker endpoint or Zookeeper with a 2.8.0+ 
> client library
> {noformat}
> ./kafka/bin/kafka-topics.sh --zookeeper :2181 --describe --topic myTopic
> Topic: myTopic TopicId: nI-JQtPwQwGiylMfm8k13w PartitionCount: 3 
> ReplicationFactor: 2 Configs: 
> Topic: myTopic Partition: 0 Leader: 0 Replicas: 1,0 Isr: 0
> Topic: myTopic Partition: 1 Leader: 0 Replicas: 0,1 Isr: 0
> Topic: myTopic Partition: 2 Leader: 0 Replicas: 1,0 Isr: 0{noformat}
> Notice the abno

Re: [VOTE] KIP-1054: Support external schemas in JSONConverter

2024-10-15 Thread Chris Egerton
Hi Priyanka,

Sorry for the delay! +1 (binding), thanks for the KIP.

Cheers,

Chris

On Wed, Sep 4, 2024, 01:35 Priyanka K U 
wrote:

> Hi Everyone,
>
> Please go through the proposal, and we request everyone to support it with
> your votes:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1054%3A+Support+external+schemas+in+JSONConverter
> . If you need any further clarifications please post your queries in the
> discussion thread :
> https://lists.apache.org/thread/rwxkh1fnbxh5whobsyrt4gystyl9yhc5
>
> Thank you,
>
> Priyanka
>
>
> From: Priyanka K U 
> Date: Friday, 28 June 2024 at 3:02 PM
> To: dev@kafka.apache.org 
> Subject: [EXTERNAL] [VOTE] KIP-1054: Support external schemas in
> JSONConverter
> Hello Everyone,
>
> I'd like to start a vote on KIP-1054, which aims to Support external
> schemas in JSONConverter to Kafka Connect:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1054%3A+Support+external+schemas+in+JSONConverter
>
> Discussion thread -
> https://lists.apache.org/thread/rwxkh1fnbxh5whobsyrt4gystyl9yhc5
>
> Thank you,
>
> Priyanka
>
>
>


[jira] [Created] (KAFKA-17802) Update bouncy-castle from 1.75 to 1.78

2024-10-15 Thread Kartik Goyal (Jira)
Kartik Goyal created KAFKA-17802:


 Summary: Update bouncy-castle from 1.75 to 1.78
 Key: KAFKA-17802
 URL: https://issues.apache.org/jira/browse/KAFKA-17802
 Project: Kafka
  Issue Type: Improvement
  Components: core
Reporter: Kartik Goyal
Assignee: Kartik Goyal


org.bouncycastle:bcprov-jdk18on version 1.75 contains vulnerabilities that can 
be remediated by bumping to version 1.78.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17803) Reconcile Differences in MockLog and KafkaMetadataLog `read` Implementation

2024-10-15 Thread Kevin Wu (Jira)
Kevin Wu created KAFKA-17803:


 Summary: Reconcile Differences in MockLog and KafkaMetadataLog 
`read` Implementation
 Key: KAFKA-17803
 URL: https://issues.apache.org/jira/browse/KAFKA-17803
 Project: Kafka
  Issue Type: Improvement
Reporter: Kevin Wu


Calling MockLog or KafkaMetadataLog's read method for a given startOffset 
returns a LogOffsetMetadata object that contains an offset field. In the case 
of MockLog, this offset field is the base offset of the record batch which 
contains startOffset.

However, in KafkaMetadataLog, this offset field is set to the given 
startOffset. If the given startOffset is in the middle of a batch, the returned 
LogOffsetMetadata will have an offset that does not match the file position of 
the returned batch. This makes the javadoc for LogSegment#read inaccurate in 
this case since startOffset is not a lower bound (the base offset of the batch 
containing startOffset is the lower bound). 

The discussed approach was to change MockLog to behave the same way as 
KafkaMetadataLog, since this would be safer than changing the semantics of the 
read call in UnifiedLog.

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-17788) During ZK migration, always include control.plane.listener.name in advertisedBrokerListeners

2024-10-15 Thread Colin McCabe (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin McCabe resolved KAFKA-17788.
--
Fix Version/s: 3.9.0
 Reviewer: Luke Chen
   Resolution: Fixed

> During ZK migration, always include control.plane.listener.name in 
> advertisedBrokerListeners
> 
>
> Key: KAFKA-17788
> URL: https://issues.apache.org/jira/browse/KAFKA-17788
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.9.0
>Reporter: Jakub Scholz
>Assignee: Colin McCabe
>Priority: Blocker
> Fix For: 3.9.0
>
> Attachments: logs-pod-cluster-9833cba6-kafka-1-container-kafka.log
>
>
> When testing migration with Kafka 3.9.0-RC2, the brokers fail to start when 
> they are first rolled to start the migration, with the following error:
> {code}
> 2024-10-11 21:37:04,060 ERROR Exiting Kafka due to fatal exception 
> (kafka.Kafka$) [main]
> java.lang.IllegalArgumentException: requirement failed: 
> control.plane.listener.name must be a listener name defined in 
> advertised.listeners. The valid options based on currently configured 
> listeners are REPLICATION-9091,PLAIN-9092,TLS-9093
>   at scala.Predef$.require(Predef.scala:337)
>   at kafka.server.KafkaConfig.validateValues(KafkaConfig.scala:1019)
>   at kafka.server.KafkaConfig.(KafkaConfig.scala:843)
>   at kafka.server.KafkaConfig.(KafkaConfig.scala:185)
>   at kafka.Kafka$.buildServer(Kafka.scala:71)
>   at kafka.Kafka$.main(Kafka.scala:90)
>   at kafka.Kafka.main(Kafka.scala)
> {code}
> This is despite our configuration having {{control.plane.listener.name}} 
> properly configured:
> {code}
> listener.security.protocol.map=CONTROLPLANE-9090:SSL,REPLICATION-9091:SSL,PLAIN-9092:SASL_PLAINTEXT,TLS-9093:SSL
> listeners=CONTROLPLANE-9090://0.0.0.0:9090,REPLICATION-9091://0.0.0.0:9091,PLAIN-9092://0.0.0.0:9092,TLS-9093://0.0.0.0:9093
> advertised.listeners=CONTROLPLANE-9090://cluster-9833cba6-kafka-1.cluster-9833cba6-kafka-brokers.test-suite-namespace.svc:9090,REPLICATION-9091://cluster-9833cba6-kafka-1.cluster-9833cba6-kafka-brokers.test-suite-namespace.svc:9091,PLAIN-9092://cluster-9833cba6-kafka-1.cluster-9833cba6-kafka-brokers.test-suite-namespace.svc:9092,TLS-9093://cluster-9833cba6-kafka-1.cluster-9833cba6-kafka-brokers.test-suite-namespace.svc:9093
> inter.broker.listener.name=REPLICATION-9091
> control.plane.listener.name=CONTROLPLANE-9090
> {code}
> It looks like 3.9.0-RC2 filters out the control plane listener (maybe because 
> it is used by the KRaft controllers as well?) and runs into this error. This 
> worked fine in 3.8.0, so this seems like a regression in 3.9.0 that should be 
> fixed.
> The full log from the broker node is attached. It includes the full 
> configuration of the broker as well.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-17790) Document that control.plane.listener should be removed before ZK migration is finished

2024-10-15 Thread Colin McCabe (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin McCabe resolved KAFKA-17790.
--
Fix Version/s: 3.9.0
   Resolution: Fixed

> Document that control.plane.listener should be removed before ZK migration is 
> finished
> --
>
> Key: KAFKA-17790
> URL: https://issues.apache.org/jira/browse/KAFKA-17790
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.7.2, 3.8.1, 3.9.1
>Reporter: Colin McCabe
>Assignee: Colin McCabe
>Priority: Major
> Fix For: 3.9.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)