[jira] [Resolved] (KAFKA-14880) TransactionMetadata with producer epoch -1 should be expirable

2023-04-06 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-14880.
-
Fix Version/s: 3.2.4
   3.1.3
   3.0.3
   3.4.1
   3.3.3
   Resolution: Fixed

> TransactionMetadata with producer epoch -1 should be expirable 
> ---
>
> Key: KAFKA-14880
> URL: https://issues.apache.org/jira/browse/KAFKA-14880
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.0.0, 3.2.0, 3.3.0, 3.4.0
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
> Fix For: 3.2.4, 3.1.3, 3.0.3, 3.4.1, 3.3.3
>
>
> We have seen the following error in logs:
> {noformat}
> "Mar 22, 2019 @ 21:57:56.655",Error,"kafka-0-0","transaction-log-manager-0","Uncaught exception in scheduled task 'transactionalId-expiration'","java.lang.IllegalArgumentException: Illegal new producer epoch -1
> {noformat}
> Investigations showed that it is actually possible for a transaction metadata 
> object to still have -1 as its producer epoch when it transitions to Dead.
> When a transaction metadata object is created for the first time (in 
> handleInitProducerId), it has -1 as its producer epoch. Then a producer epoch 
> is assigned and the transaction coordinator tries to persist the change. If 
> the write fails, for instance because the partition is under min ISR, the 
> transaction metadata keeps -1 as its epoch forever, or until the init 
> producer id is retried.
> This means that it is possible for transaction metadata to remain with -1 as 
> its producer epoch until it gets expired. At the moment, this is not allowed 
> because we enforce a producer epoch greater than or equal to 0 in 
> prepareTransitionTo.
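
For reference, a minimal, self-contained Java sketch of the guard described
above and one plausible shape of the fix (tolerating epoch -1 only for the
transition to Dead). The real implementation is Scala, in
kafka.coordinator.transaction.TransactionMetadata; names are simplified here.

public class TransactionMetadataSketch {
    enum State { EMPTY, ONGOING, DEAD }

    static final short NO_PRODUCER_EPOCH = -1;

    private State state = State.EMPTY;
    private short producerEpoch = NO_PRODUCER_EPOCH;

    void prepareTransitionTo(State newState, short newEpoch) {
        // Before the fix: every transition required newEpoch >= 0, so metadata
        // stuck at epoch -1 (after a failed initial write) could never be
        // expired, i.e. never transition to DEAD.
        // Plausible fix: tolerate epoch -1 when the target state is DEAD.
        if (newEpoch < 0 && newState != State.DEAD) {
            throw new IllegalArgumentException("Illegal new producer epoch " + newEpoch);
        }
        this.state = newState;
        this.producerEpoch = newEpoch;
    }
}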



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Kafka (apache or confluent) - Subject of Work (SOW)

2023-04-06 Thread Kafka Life
Hi experts, any pointers or guidance on this?

On Wed, Apr 5, 2023 at 8:35 PM Kafka Life  wrote:

> Respected Kafka experts/managers
>
> Does anyone have a Statement of Work (SOW) covering activities related to Kafka
> cluster management for Apache or Confluent Kafka? Something to assess and propose
> to an enterprise for Kafka cluster management. Request you to kindly share
> any such documentation, please.
>


Failing tests in Kafka stream is under investigation now

2023-04-06 Thread Luke Chen
Hi all,

Just want to let you know that our current CI test results will contain
many (more than 600 in build 1738) failed tests. I'm already investigating
them and have identified some errors; a PR is raised and the CI tests are
running now. Hopefully, this can bring the healthy CI back soon.

Stay tuned.

Luke


Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.3 #168

2023-04-06 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 490415 lines...]
[2023-04-06T10:03:20.960Z] > Task :connect:api:testJar
[2023-04-06T10:03:20.960Z] > Task :connect:api:testSrcJar
[2023-04-06T10:03:20.960Z] > Task 
:connect:api:publishMavenJavaPublicationToMavenLocal
[2023-04-06T10:03:20.960Z] > Task :connect:api:publishToMavenLocal
[2023-04-06T10:03:21.927Z] 
[2023-04-06T10:03:21.927Z] > Task :streams:javadoc
[2023-04-06T10:03:21.927Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:890:
 warning - Tag @link: reference not found: DefaultPartitioner
[2023-04-06T10:03:21.927Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:919:
 warning - Tag @link: reference not found: DefaultPartitioner
[2023-04-06T10:03:21.927Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:939:
 warning - Tag @link: reference not found: DefaultPartitioner
[2023-04-06T10:03:21.927Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:854:
 warning - Tag @link: reference not found: DefaultPartitioner
[2023-04-06T10:03:21.927Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:890:
 warning - Tag @link: reference not found: DefaultPartitioner
[2023-04-06T10:03:21.927Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:919:
 warning - Tag @link: reference not found: DefaultPartitioner
[2023-04-06T10:03:21.927Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:939:
 warning - Tag @link: reference not found: DefaultPartitioner
[2023-04-06T10:03:21.927Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/Produced.java:84:
 warning - Tag @link: reference not found: DefaultPartitioner
[2023-04-06T10:03:21.927Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/Produced.java:136:
 warning - Tag @link: reference not found: DefaultPartitioner
[2023-04-06T10:03:21.927Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/Produced.java:147:
 warning - Tag @link: reference not found: DefaultPartitioner
[2023-04-06T10:03:21.927Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/Repartitioned.java:101:
 warning - Tag @link: reference not found: DefaultPartitioner
[2023-04-06T10:03:21.927Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/Repartitioned.java:167:
 warning - Tag @link: reference not found: DefaultPartitioner
[2023-04-06T10:03:21.927Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/TopologyConfig.java:58:
 warning - Tag @link: missing '#': "org.apache.kafka.streams.StreamsBuilder()"
[2023-04-06T10:03:21.927Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/TopologyConfig.java:58:
 warning - Tag @link: can't find org.apache.kafka.streams.StreamsBuilder() in 
org.apache.kafka.streams.TopologyConfig
[2023-04-06T10:03:21.927Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/TopologyDescription.java:38:
 warning - Tag @link: reference not found: ProcessorContext#forward(Object, 
Object) forwards
[2023-04-06T10:03:21.927Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/query/Position.java:44:
 warning - Tag @link: can't find query(Query,
[2023-04-06T10:03:21.927Z]  PositionBound, boolean) in 
org.apache.kafka.streams.processor.StateStore
[2023-04-06T10:03:21.927Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:44:
 warning - Tag @link: can't find query(Query, PositionBound, boolean) in 
org.apache.kafka.streams.processor.StateStore
[2023-04-06T10:03:21.927Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:36:
 warning - Tag @link: can't find query(Query, PositionBound, boolean) in 
org.apache.kafka.streams.processor.StateStore
[2023-04-06T10:03:21.927Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:57:
 warning - Tag @link: can't find query(Query, PositionBound, boolean) in 
org.apache.kafka.streams
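
The repeated DefaultPartitioner warnings above are Javadoc {@link} resolution
failures: the tag names a class that javadoc cannot resolve from the documented
file. A hedged illustration of the pattern and one possible fix (assuming the
producer's internal DefaultPartitioner is the intended target; the project may
instead choose plain text, since internals should not be linked from public
javadoc):

/**
 * Unresolved: DefaultPartitioner is neither imported in this file nor part of
 * the documented module's public API, so javadoc reports "reference not found".
 * {@link DefaultPartitioner}
 *
 * Resolvable: fully qualify the reference (or replace the tag with the plain
 * text "default partitioner").
 * {@link org.apache.kafka.clients.producer.internals.DefaultPartitioner}
 */
public class LinkTagExample { }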

Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.2 #102

2023-04-06 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 457214 lines...]
[2023-04-06T10:05:25.544Z] > Task 
:connect:api:generateMetadataFileForMavenJavaPublication
[2023-04-06T10:05:25.544Z] > Task :connect:json:copyDependantLibs UP-TO-DATE
[2023-04-06T10:05:25.544Z] > Task :connect:json:jar UP-TO-DATE
[2023-04-06T10:05:25.544Z] > Task 
:connect:json:generateMetadataFileForMavenJavaPublication
[2023-04-06T10:05:25.544Z] > Task :connect:api:javadocJar
[2023-04-06T10:05:25.544Z] > Task 
:connect:json:publishMavenJavaPublicationToMavenLocal
[2023-04-06T10:05:25.544Z] > Task :connect:json:publishToMavenLocal
[2023-04-06T10:05:25.544Z] > Task :connect:api:compileTestJava UP-TO-DATE
[2023-04-06T10:05:25.544Z] > Task :connect:api:testClasses UP-TO-DATE
[2023-04-06T10:05:25.544Z] > Task :connect:api:testJar
[2023-04-06T10:05:26.475Z] > Task :connect:api:testSrcJar
[2023-04-06T10:05:26.475Z] > Task 
:connect:api:publishMavenJavaPublicationToMavenLocal
[2023-04-06T10:05:26.475Z] > Task :connect:api:publishToMavenLocal
[2023-04-06T10:05:29.079Z] 
[2023-04-06T10:05:29.079Z] > Task :streams:javadoc
[2023-04-06T10:05:29.079Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.2/streams/src/main/java/org/apache/kafka/streams/TopologyConfig.java:58:
 warning - Tag @link: missing '#': "org.apache.kafka.streams.StreamsBuilder()"
[2023-04-06T10:05:29.079Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.2/streams/src/main/java/org/apache/kafka/streams/TopologyConfig.java:58:
 warning - Tag @link: can't find org.apache.kafka.streams.StreamsBuilder() in 
org.apache.kafka.streams.TopologyConfig
[2023-04-06T10:05:30.006Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.2/streams/src/main/java/org/apache/kafka/streams/query/Position.java:44:
 warning - Tag @link: can't find query(Query,
[2023-04-06T10:05:30.006Z]  PositionBound, boolean) in 
org.apache.kafka.streams.processor.StateStore
[2023-04-06T10:05:30.006Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.2/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:44:
 warning - Tag @link: can't find query(Query, PositionBound, boolean) in 
org.apache.kafka.streams.processor.StateStore
[2023-04-06T10:05:30.006Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.2/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:36:
 warning - Tag @link: can't find query(Query, PositionBound, boolean) in 
org.apache.kafka.streams.processor.StateStore
[2023-04-06T10:05:30.006Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.2/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:57:
 warning - Tag @link: can't find query(Query, PositionBound, boolean) in 
org.apache.kafka.streams.processor.StateStore
[2023-04-06T10:05:30.006Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.2/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:74:
 warning - Tag @link: can't find query(Query, PositionBound, boolean) in 
org.apache.kafka.streams.processor.StateStore
[2023-04-06T10:05:30.006Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.2/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:110:
 warning - Tag @link: reference not found: this#getResult()
[2023-04-06T10:05:30.006Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.2/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:117:
 warning - Tag @link: reference not found: this#getFailureReason()
[2023-04-06T10:05:30.006Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.2/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:117:
 warning - Tag @link: reference not found: this#getFailureMessage()
[2023-04-06T10:05:30.006Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.2/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:155:
 warning - Tag @link: reference not found: this#isSuccess()
[2023-04-06T10:05:30.006Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.2/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:155:
 warning - Tag @link: reference not found: this#isFailure()
[2023-04-06T10:05:30.935Z] 12 warnings
[2023-04-06T10:05:30.935Z] 
[2023-04-06T10:05:30.935Z] > Task :streams:javadocJar
[2023-04-06T10:05:32.691Z] 
[2023-04-06T10:05:32.691Z] > Task :clients:javadoc
[2023-04-06T10:05:32.691Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.2/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/secured/OAuthBearerLoginCallbackHandler.java:147:
 warning - Tag @link: reference not found: 
[2023-04-06T10:05:33.722Z] 1 warning
[2023-04-06T10:05:35.476Z] 
[2023-04-06T10:05:35.476Z] > Task :clients:javadocJar
[2023-04-06T10:05:37.230Z] 
[2023-04-06T10:05:37.230Z] > Task :clients:srcJar
[2023-04-06T10:05:37.230Z] Execution optimizations have been disabled for task 
':clients:srcJar' to ensure correctness due to the following reasons

Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #1740

2023-04-06 Thread Apache Jenkins Server
See 




Jenkins build is unstable: Kafka » Kafka Branch Builder » 3.4 #106

2023-04-06 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.0 #218

2023-04-06 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 286210 lines...]
[2023-04-06T10:27:37.588Z] > Task :connect:mirror-client:compileJava
[2023-04-06T10:27:40.110Z] > Task :connect:transforms:compileJava
[2023-04-06T10:27:56.121Z] > Task :streams:examples:compileJava
[2023-04-06T10:27:56.121Z] > Task :streams:streams-scala:compileJava NO-SOURCE
[2023-04-06T10:27:58.511Z] > Task :streams:test-utils:compileJava
[2023-04-06T10:27:59.488Z] > Task :connect:runtime:compileJava
[2023-04-06T10:28:02.781Z] > Task :connect:mirror:compileJava
[2023-04-06T10:28:02.781Z] > Task :log4j-appender:classes
[2023-04-06T10:28:02.781Z] > Task :server-common:classes
[2023-04-06T10:28:02.781Z] > Task :storage:classes
[2023-04-06T10:28:02.781Z] > Task :streams:classes
[2023-04-06T10:28:02.781Z] > Task :tools:classes
[2023-04-06T10:28:02.781Z] > Task :trogdor:classes
[2023-04-06T10:28:02.781Z] > Task :connect:api:classes
[2023-04-06T10:28:02.781Z] > Task :connect:basic-auth-extension:classes
[2023-04-06T10:28:02.781Z] > Task :connect:file:classes
[2023-04-06T10:28:02.781Z] > Task :connect:json:classes
[2023-04-06T10:28:02.781Z] > Task :connect:mirror:classes
[2023-04-06T10:28:02.781Z] > Task :connect:runtime:classes
[2023-04-06T10:28:02.781Z] > Task :connect:mirror-client:classes
[2023-04-06T10:28:02.781Z] > Task :connect:transforms:classes
[2023-04-06T10:28:02.781Z] > Task :storage:api:classes
[2023-04-06T10:28:02.781Z] > Task :streams:examples:classes
[2023-04-06T10:28:02.781Z] > Task :streams:test-utils:classes
[2023-04-06T10:28:02.781Z] > Task :spotlessScalaCheck
[2023-04-06T10:28:03.763Z] > Task :log4j-appender:checkstyleMain
[2023-04-06T10:28:30.218Z] > Task :streams:streams-scala:compileScala
[2023-04-06T10:28:33.594Z] > Task :metadata:checkstyleMain
[2023-04-06T10:28:59.383Z] > Task :raft:checkstyleMain
[2023-04-06T10:28:59.383Z] > Task :server-common:checkstyleMain
[2023-04-06T10:29:05.787Z] > Task :storage:checkstyleMain
[2023-04-06T10:29:38.191Z] > Task :streams:checkstyleMain
[2023-04-06T10:29:42.210Z] > Task :clients:testClasses
[2023-04-06T10:29:47.161Z] > Task :streams:streams-scala:compileScala FAILED
[2023-04-06T10:29:50.107Z] > Task :log4j-appender:compileTestJava
[2023-04-06T10:29:50.107Z] > Task :server-common:compileTestJava
[2023-04-06T10:29:53.931Z] > Task :storage:compileTestJava
[2023-04-06T10:29:57.204Z] > Task :tools:compileTestJava
[2023-04-06T10:29:58.367Z] > Task :raft:compileTestJava
[2023-04-06T10:29:58.367Z] > Task :raft:testClasses
[2023-04-06T10:30:00.431Z] > Task :trogdor:compileTestJava
[2023-04-06T10:30:06.849Z] > Task :connect:api:compileTestJava
[2023-04-06T10:30:09.400Z] > Task :connect:basic-auth-extension:compileTestJava
[2023-04-06T10:30:11.945Z] > Task :connect:file:compileTestJava
[2023-04-06T10:30:14.504Z] > Task :connect:mirror-client:compileTestJava
[2023-04-06T10:30:18.109Z] > Task :connect:json:compileTestJava
[2023-04-06T10:30:18.109Z] > Task :storage:api:compileTestJava
[2023-04-06T10:30:18.944Z] > Task :connect:transforms:compileTestJava
[2023-04-06T10:30:19.775Z] > Task :streams:examples:compileTestJava
[2023-04-06T10:30:23.646Z] > Task :metadata:compileTestJava
[2023-04-06T10:30:23.647Z] > Task :metadata:testClasses
[2023-04-06T10:30:28.667Z] > Task :streams:test-utils:compileTestJava
[2023-04-06T10:30:28.667Z] > Task :tools:checkstyleMain
[2023-04-06T10:30:31.731Z] 
[2023-04-06T10:30:31.731Z] > Task :core:compileScala
[2023-04-06T10:30:31.731Z] Scala Compiler interface compilation took 1 mins 
56.208 secs
[2023-04-06T10:30:35.287Z] 
[2023-04-06T10:30:35.287Z] > Task :connect:api:checkstyleMain
[2023-04-06T10:30:36.230Z] > Task :connect:basic-auth-extension:checkstyleMain
[2023-04-06T10:30:37.119Z] > Task :connect:file:checkstyleMain
[2023-04-06T10:30:40.693Z] > Task :connect:json:checkstyleMain
[2023-04-06T10:30:45.767Z] > Task :trogdor:checkstyleMain
[2023-04-06T10:30:45.767Z] > Task :connect:mirror:checkstyleMain
[2023-04-06T10:30:46.604Z] > Task :connect:mirror-client:checkstyleMain
[2023-04-06T10:30:49.215Z] > Task :clients:checkstyleMain
[2023-04-06T10:30:52.759Z] > Task :connect:transforms:checkstyleMain
[2023-04-06T10:30:54.579Z] > Task :storage:api:checkstyleMain
[2023-04-06T10:30:57.234Z] > Task :streams:examples:checkstyleMain
[2023-04-06T10:31:00.932Z] > Task :streams:test-utils:checkstyleMain
[2023-04-06T10:31:00.932Z] > Task :log4j-appender:testClasses
[2023-04-06T10:31:04.390Z] > Task :log4j-appender:checkstyleTest
[2023-04-06T10:31:15.299Z] > Task :connect:runtime:checkstyleMain
[2023-04-06T10:31:31.856Z] > Task :metadata:checkstyleTest
[2023-04-06T10:31:31.856Z] > Task :server-common:testClasses
[2023-04-06T10:31:31.856Z] > Task :server-common:checkstyleTest
[2023-04-06T10:31:31.856Z] > Task :storage:testClasses
[2023-04-06T10:31:34.411Z] > Task :raft:checkstyleTest
[2023-04-06T10:31:34.411Z] > Task :tools:testClasses
[2023-04-06T10:31:37.134Z] > Task :tools:

Re: Failing tests in Kafka stream is under investigation now

2023-04-06 Thread Chia-Ping Tsai
hi Luke

Thanks for your time and effort. It would be nice to see the green CI again :)

—
Chia-Ping

> Luke Chen  wrote on 2023-04-06 at 6:13 PM:
> 
> Hi all,
> 
> Just want to let you know that our current CI test results will contain
> many (more than 600 in build 1738) failed tests. I'm already investigating
> them and have identified some errors; a PR is raised and the CI tests are
> running now. Hopefully, this can bring the healthy CI back soon.
> 
> Stay tuned.
> 
> Luke


Re: [DISCUSS] KIP-895: Dynamically refresh partition count of __consumer_offsets

2023-04-06 Thread David Jacot
Hi Divij,

I think that the motivation is clear; however, the ideal solution is not, at
least not for me. I would like to ensure that we solve the real problem
instead of making it worse. In our experience, is this issue usually due to
a mistake or a willingness to increase the number of consumers? In my mind,
in the current state, one should never change the number of partitions
because it results in losing group metadata. Preventing it would not be a
bad idea.
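
For context, a group is mapped to a __consumer_offsets partition by hashing its
group id modulo the partition count, which is why changing the count silently
remaps existing groups to different coordinator partitions. A simplified Java
sketch of the mapping (the real logic is GroupMetadataManager.partitionFor):

public class GroupPartitioning {
    // Masked abs (like Kafka's Utils.abs) avoids Math.abs(Integer.MIN_VALUE).
    static int partitionFor(String groupId, int offsetsTopicPartitionCount) {
        return (groupId.hashCode() & 0x7fffffff) % offsetsTopicPartitionCount;
    }

    public static void main(String[] args) {
        // The same group can land on different partitions once the partition
        // count changes, so its coordinator and stored metadata/offsets move.
        System.out.println(partitionFor("my-group", 50));
        System.out.println(partitionFor("my-group", 60)); // likely different
    }
}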

I agree that the ideal solution would be to change how we assign groups to
__consumer_offsets partitions. I have had this idea of making groups a
first-class resource in Kafka in the back of my mind for a while. This idea would
be to store group ids and their current partition in the controller and to
let the controller decide where a group should go when it is created. This
could be done via a plugin as well. If we have this, then adding new
__consumer_offsets partitions is no longer an issue. The controller would
start by filling the empty partitions when new groups are created. This
would have a few other advantages. For instance, it would allow us to put
quotas on the number of groups. It also has a few challenges. For instance,
how should a group be created - implicitly as today or explicitly? There is
also the question about the deletion. At the moment, groups are cleaned up
automatically after the grace period. Would we keep this?

I think that we should also consider the transaction coordinator in this
discussion because it suffers from the very same limitation. Ideally, we
should have a solution for both of them. Have you looked at how it handles
an increase of the number of partitions?

As a side note, we are in the middle of rewriting the group coordinator. I
think that big changes should only be made when we are done with that.

Best,
David

On Wed, Apr 5, 2023 at 10:08 PM Divij Vaidya 
wrote:

> Thank you for your comments and participation in the discussion, David,
> Justine and Alex.
>
> You are right! The KIP is missing a lot of details about the motivation. I
> apologize for the confusion I created with my earlier statement about
> reducing the downtime in this thread. I will request Christo to update it.
>
> Meanwhile, as a summary, the KIP does not attempt to solve the problem of
> losing consumer offsets after a partition increase. Instead, the objective of
> the KIP is to reduce the time to recovery for reads to start after such an
> event has occurred. Prior to this KIP, the impact of the change manifests when
> one of the brokers is restarted, and the consumer groups remain in an
> error/undefined state until all brokers have finished restarting.
> During a rolling restart, this places the time to recovery in proportion
> to the number of brokers in the cluster. After this KIP is implemented,
> we would not wait for a broker restart to pick up the new partitions;
> instead, all brokers will be notified about the change in the number of
> partitions immediately. This would reduce the duration during which consumer
> groups are in an erroring/undefined state from the length of the rolling
> restart to the time it takes to process LeaderAndIsr (LISR) requests across
> the cluster. Hence, a (small) win!
>
> I hope this explanation throws some more light into the context.
>
> Why do users change __consumer_offsets?
> 1. They change it accidentally, OR
> 2. They increase it to scale with the increase in the number of consumers.
> This is because (correct me if I am wrong) with an increase in the number
> of consumers, we can hit the limits on single-partition throughput while
> reading/writing to __consumer_offsets. This is a genuine use case, and
> the downside of losing existing metadata/offsets is acceptable to them.
>
> How do we ideally fix it?
> An ideal solution would allow us to increase the number of partitions for
> __consumer_offsets without losing existing metadata. We either need to make
> partition assignment for a consumer "sticky" such that existing consumers
> are not re-assigned to new partitions OR we need to transfer data as per
> new partitions in __consumer_offsets. Both these approaches are long term
> fixes and require a separate discussion.
>
> What can we do in the short term?
> In the short term either we can block users from changing the number of
> partitions (which might not be possible due to use case #2 above) OR we can
> at least improve (not fix but just improve!) the current situation by
> reducing the time to recovery using this KIP.
>
> Let's circle back on this discussion as soon as KIP is updated with more
> details.
>
> --
> Divij Vaidya
>
>
>
> On Tue, Apr 4, 2023 at 8:00 PM Alexandre Dupriez <
> alexandre.dupr...@gmail.com> wrote:
>
> > Hi Christo,
> >
> > Thanks for the KIP. Apologies for the delayed review.
> >
> > At a high level, I am not sure if the KIP really solves the problem it
> > intends to.
> >
> > More specifically, the KIP mentions that once a broker is restarted
> > and the group coordinator becomes aware of the new partition count of

[jira] [Created] (KAFKA-14881) Update UserScramCredentialRecord for SCRAM ZK to KRaft migration

2023-04-06 Thread Proven Provenzano (Jira)
Proven Provenzano created KAFKA-14881:
-

 Summary: Update UserScramCredentialRecord for SCRAM ZK to KRaft 
migration
 Key: KAFKA-14881
 URL: https://issues.apache.org/jira/browse/KAFKA-14881
 Project: Kafka
  Issue Type: Improvement
  Components: kraft
Affects Versions: 3.5.0
Reporter: Proven Provenzano
Assignee: Proven Provenzano
 Fix For: 3.5.0


I want to support ZK to KRaft migration.

ZK stores a storedKey and a serverKey for each SCRAM credential, not the 
saltedPassword.

The storedKey and serverKey are cryptographic hashes derived from the 
saltedPassword, and it is not possible to extract the saltedPassword from them.

The serverKey and storedKey are enough for SCRAM authentication, so the 
saltedPassword is not needed.

I will update the UserScramCredentialRecord to store the serverKey and storedKey 
instead of the saltedPassword, and I will specify that SCRAM is only supported 
with a bumped IBP version (IBP_3_5) so that there are no compatibility issues.
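
For reference, the keys mentioned above are defined by RFC 5802: the
saltedPassword is derived with PBKDF2, the clientKey/serverKey with HMAC, and
the storedKey with a one-way hash, so the saltedPassword cannot be recovered
from what ZK stores. A minimal standalone sketch of the derivation
(SCRAM-SHA-256 shown; the salt, iteration count, and password are demo values):

import javax.crypto.Mac;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class ScramKeys {
    public static void main(String[] args) throws Exception {
        byte[] salt = "demo-salt".getBytes(StandardCharsets.UTF_8);
        int iterations = 4096;

        // SaltedPassword := Hi(password, salt, i), i.e. PBKDF2-HMAC-SHA-256.
        SecretKeyFactory pbkdf2 = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        byte[] saltedPassword = pbkdf2.generateSecret(
                new PBEKeySpec("demo-password".toCharArray(), salt, iterations, 256)).getEncoded();

        Mac hmac = Mac.getInstance("HmacSHA256");
        hmac.init(new SecretKeySpec(saltedPassword, "HmacSHA256"));
        byte[] clientKey = hmac.doFinal("Client Key".getBytes(StandardCharsets.UTF_8));
        byte[] serverKey = hmac.doFinal("Server Key".getBytes(StandardCharsets.UTF_8));

        // StoredKey := H(ClientKey); one-way, so saltedPassword is not recoverable.
        byte[] storedKey = MessageDigest.getInstance("SHA-256").digest(clientKey);

        System.out.println("storedKey=" + Base64.getEncoder().encodeToString(storedKey));
        System.out.println("serverKey=" + Base64.getEncoder().encodeToString(serverKey));
    }
}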



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] KIP-902: Upgrade Zookeeper to 3.8.1

2023-04-06 Thread Ismael Juma
I'm +1 for this change, but we should do it early in the release cycle.
Perhaps 3.6.0 is the right release target. That should buy users enough time
to migrate to KRaft mode.

Ismael

On Mon, Mar 20, 2023 at 10:12 AM Divij Vaidya 
wrote:

> Hey Colin
>
> Thank you for your feedback. In addition to what Christo mentioned above, I
> have tried to provide answers to your questions below. Also, for some
> context, we have had some conversation about the upgrade in the comments of
> this PR <
> https://github.com/apache/kafka/pull/12620#issuecomment-1409015865>
> .
>
> #1 We shouldn't drop support for rolling upgrades
>
> We are not dropping support for rolling upgrades. Christo's answer above
> hopefully resolves that concern.
>
> #2 Unless there is a security issue, we shouldn't upgrade Zk since Kafka
> 4.0 is going to remove the component
>
> First, if a zero-day exploit/vulnerability is discovered, the ZooKeeper
> project will not backport a fix to ZooKeeper 3.6.4 since it has been declared
> end of life. At that stage, we will either have to backport the fix to
> ZooKeeper 3.6.4 ourselves OR we will have to ask our users to upgrade to
> ZooKeeper 3.8.x at very short notice. Both options are highly undesirable in
> my opinion.
>
> Second, even without a vulnerability, many compliance programs red-flag the
> usage of end-of-life software. Users of Kafka may be in violation of
> compliance even if they are using the latest version of Kafka (3.5) due to
> the ZooKeeper dependency.
>
> Third, the community hasn't decided on a date for the 4.0 release. Looking at
> the body of work required to migrate to 4.0, I would say (again, please
> correct me here if you think otherwise) it's at least 12 months down the
> line. I think that is a long time to have users of Kafka facing compliance
> violations and at risk of security exploits.
>
> #3 Major Zk upgrade is risky and may produce bugs
>
> Christo and I are happy to perform any de-risking activities that you
> would recommend to us, in addition to what we have added in the KIP. I
> think it is worth the investment for the community, given that the ZooKeeper
> removal is still far down the line.
>
> --
> Divij Vaidya
>
>
>
> On Wed, Mar 15, 2023 at 12:59 PM Christo Lolov 
> wrote:
>
> > Hello Colin,
> >
> > Thank you for taking the time to review the proposal!
> >
> > I have attached a compatibility matrix to aid the explanation below - if
> > the mailing system rejects it I will find another way to share it.
> >
> > For the avoidance of doubt, I am not proposing to drop support for
> rolling
> > upgrade from old Kafka versions to new ones. What I am saying is that
> > additional care will need to be taken when upgrading to the latest Kafka
> > versions depending on the version of the accompanying Zookeeper cluster.
> > This additional care means one might have to upgrade to a Kafka version
> > which falls in the intersection of the two sets in the accompanying
> diagram
> > before upgrading the accompanying Zookeeper cluster.
> >
> > As a concrete example let's say you want to upgrade to Kafka 3.5 from
> > Kafka 2.3 and Zookeeper 3.4. You will have to:
> > 1. Carry out a rolling upgrade of your Kafka cluster to a version between
> > 2.4 and 3.4.
> > 2. Carry out a rolling upgrade of your Zookeeper cluster to 3.8.1 (with a
> > possible stop at 3.4.6 due to
> >
> https://zookeeper.apache.org/doc/r3.8.1/zookeeperReconfig.html#ch_reconfig_upgrade
> > ).
> > 3. Carry out a rolling upgrade of your Kafka cluster from 3.4 to 3.5.
> >
> > It is true that Zookeeper is to be deprecated in Kafka 4.0, but as far as
> > I looked there is no concrete release date for that version yet. Until
> this
> > is the case and unless we carry out a Zookeeper version upgrade we leave
> > users to run on an end-of-life version with unpatched CVEs addressed in
> > later versions. Some users have compliance requirements to only run on
> > stable versions of a software and its dependencies and not keeping the
> > dependencies up to date renders them unable to use Kafka.
> >
> > Please, let me know your thoughts on the matter!
> >
> > Best,
> > Christo
> >
> > On Thu, 9 Mar 2023 at 21:56, Colin McCabe  wrote:
> >
> >> Hi,
> >>
> >> I'm struggling a bit with this KIP, because dropping support for rolling
> >> upgrades from old Kafka versions doesn't seem like something we should
> do
> >> in a minor release. But on the other hand, the next Kafka release won't
> >> have ZK at all. Maybe we should punt on this until and unless there is a
> >> security issue that requires some action from us.
> >>
> >> I would also add, that a major ZK version bump is pretty risky. Last
> time
> >> we did this we hit several bugs. I remember we hit one where there was
> an
> >> incompatible change with regard to formatting (sorry, I can't seem to
> find
> >> the JIRA right now).
> >>
> >> Sorry, but for now I have to vote -1 until I can understand this better
> >>
> >> best,
> >> Colin
> >>
> >>
> >> On Thu, Feb 23, 2023, at 06:48, Divij Vaidya 

Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #1741

2023-04-06 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 504261 lines...]
[2023-04-06T14:07:51.579Z] at 
org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:291)
[2023-04-06T14:07:51.579Z] at 
org.apache.kafka.streams.integration.StoreQueryIntegrationTest.shouldQueryAllStalePartitionStores(StoreQueryIntegrationTest.java:286)
[2023-04-06T14:07:51.579Z] 
[2023-04-06T14:07:51.579Z] Caused by:
[2023-04-06T14:07:51.579Z] java.lang.IllegalStateException: 
KafkaStreams is not running. State is ERROR.
[2023-04-06T14:07:51.579Z] at 
org.apache.kafka.streams.KafkaStreams.validateIsRunningOrRebalancing(KafkaStreams.java:381)
[2023-04-06T14:07:51.579Z] at 
org.apache.kafka.streams.KafkaStreams.store(KafkaStreams.java:1701)
[2023-04-06T14:07:51.579Z] at 
org.apache.kafka.streams.integration.utils.IntegrationTestUtils.getStore(IntegrationTestUtils.java:1404)
[2023-04-06T14:07:51.579Z] ... 9 more
[2023-04-06T14:07:51.579Z] 
[2023-04-06T14:07:51.579Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 177 > StoreQueryIntegrationTest > 
shouldQuerySpecificStalePartitionStoresMultiStreamThreads() STARTED
[2023-04-06T14:07:55.429Z] 
org.apache.kafka.streams.integration.StoreQueryIntegrationTest.shouldQuerySpecificStalePartitionStoresMultiStreamThreads()
 failed, log available in 
/home/jenkins/workspace/Kafka_kafka_trunk/streams/build/reports/testOutput/org.apache.kafka.streams.integration.StoreQueryIntegrationTest.shouldQuerySpecificStalePartitionStoresMultiStreamThreads().test.stdout
[2023-04-06T14:07:55.429Z] 
[2023-04-06T14:07:55.429Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 175 > StoreQueryIntegrationTest > 
shouldQuerySpecificStalePartitionStoresMultiStreamThreads() FAILED
[2023-04-06T14:07:55.429Z] java.lang.AssertionError: 
java.lang.IllegalStateException: KafkaStreams is not running. State is ERROR.
[2023-04-06T14:07:55.429Z] at 
org.apache.kafka.streams.integration.utils.IntegrationTestUtils.getStore(IntegrationTestUtils.java:1411)
[2023-04-06T14:07:55.429Z] at 
org.apache.kafka.streams.integration.utils.IntegrationTestUtils.getStore(IntegrationTestUtils.java:1395)
[2023-04-06T14:07:55.429Z] at 
org.apache.kafka.streams.integration.StoreQueryIntegrationTest.lambda$shouldQuerySpecificStalePartitionStoresMultiStreamThreads$12(StoreQueryIntegrationTest.java:398)
[2023-04-06T14:07:55.429Z] at 
org.apache.kafka.test.TestUtils.lambda$waitForCondition$4(TestUtils.java:337)
[2023-04-06T14:07:55.429Z] at 
org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:385)
[2023-04-06T14:07:55.429Z] at 
org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:334)
[2023-04-06T14:07:55.429Z] at 
org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:318)
[2023-04-06T14:07:55.429Z] at 
org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:291)
[2023-04-06T14:07:55.429Z] at 
org.apache.kafka.streams.integration.StoreQueryIntegrationTest.shouldQuerySpecificStalePartitionStoresMultiStreamThreads(StoreQueryIntegrationTest.java:397)
[2023-04-06T14:07:55.429Z] 
[2023-04-06T14:07:55.429Z] Caused by:
[2023-04-06T14:07:55.429Z] java.lang.IllegalStateException: 
KafkaStreams is not running. State is ERROR.
[2023-04-06T14:07:55.429Z] at 
org.apache.kafka.streams.KafkaStreams.validateIsRunningOrRebalancing(KafkaStreams.java:381)
[2023-04-06T14:07:55.429Z] at 
org.apache.kafka.streams.KafkaStreams.store(KafkaStreams.java:1701)
[2023-04-06T14:07:55.429Z] at 
org.apache.kafka.streams.integration.utils.IntegrationTestUtils.getStore(IntegrationTestUtils.java:1404)
[2023-04-06T14:07:55.429Z] ... 8 more
[2023-04-06T14:07:55.429Z] 
[2023-04-06T14:07:55.429Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 175 > StoreQueryIntegrationTest > 
shouldQuerySpecificStalePartitionStores() STARTED
[2023-04-06T14:08:52.605Z] 
org.apache.kafka.streams.integration.StoreQueryIntegrationTest.shouldQuerySpecificStalePartitionStores()
 failed, log available in 
/home/jenkins/workspace/Kafka_kafka_trunk/streams/build/reports/testOutput/org.apache.kafka.streams.integration.StoreQueryIntegrationTest.shouldQuerySpecificStalePartitionStores().test.stdout
[2023-04-06T14:08:52.605Z] 
[2023-04-06T14:08:52.605Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 175 > StoreQueryIntegrationTest > 
shouldQuerySpecificStalePartitionStores() FAILED
[2023-04-06T14:08:52.605Z] java.lang.AssertionError: 
java.lang.IllegalStateException: KafkaStreams is not running. State is ERROR.
[2023-04-06T14:08:52.605Z] at 
org.apache.kafka.streams.integration.utils.IntegrationTestUtils.getStore(IntegrationTestUtils.java:1411)
[2023-04-06T14:08:52

[DISCUSS] KIP-917: Additional custom metadata for remote log segment

2023-04-06 Thread Ivan Yurchenko
Hello!

I would like to start the discussion thread on KIP-917: Additional custom
metadata for remote log segment [1].
This KIP is fairly small and proposes to add a new field to the remote
segment metadata.

Thank you!

Best,
Ivan

[1]
https://cwiki.apache.org/confluence/display/KAFKA/KIP-917%3A+Additional+custom+metadata+for+remote+log+segment


Re: [DISCUSS] KIP-895: Dynamically refresh partition count of __consumer_offsets

2023-04-06 Thread hzh0425
I think it's a good idea as we may want to store segments in different buckets



hzhka...@163.com




 Original message 
From: Divij Vaidya
Date: 2023-04-04 23:56
To: dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-895: Dynamically refresh partition count of
__consumer_offsets
FYI, a user faced this problem and reached out to us in the mailing list
[1]. Implementation of this KIP could have reduced the downtime for these
customers.

Christo, would you like to create a JIRA and associate with the KIP so that
we can continue to collect cases in the JIRA where users have faced this
problem?

[1] https://lists.apache.org/thread/zoowjshvdpkh5p0p7vqjd9fq8xvkr1nd

--
Divij Vaidya



On Wed, Jan 18, 2023 at 9:52 AM Christo Lolov 
wrote:

> Greetings,
>
> I am bumping the below DISCUSSion thread for KIP-895. The KIP presents a
> situation where consumer groups are in an undefined state until a rolling
> restart of a cluster is performed. While I have demonstrated the behaviour
> using a ZooKeeper-based cluster, I believe the same problem can be shown in
> a KRaft cluster. Please let me know your opinions on the problem and the
> presented solution.
>
> Best,
> Christo
>
> On Thursday, 29 December 2022 at 14:19:27 GMT, Christo
> >  wrote:
> >
> >
> > Hello!
> > I would like to start this discussion thread on KIP-895: Dynamically
> > refresh partition count of __consumer_offsets.
> > The KIP proposes to alter brokers so that they refresh the partition
> count
> > of __consumer_offsets used to determine group coordinators without
> > requiring a rolling restart of the cluster.
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-895%3A+Dynamically+refresh+partition+count+of+__consumer_offsets
> >
> > Let me know your thoughts on the matter!
> > Best, Christo
> >
>


Re: [DISCUSS] KIP-917: Additional custom metadata for remote log segment

2023-04-06 Thread hzh0425
I think it's a good idea as we may want to store remote segments in different 
buckets



hzhka...@163.com




 Original message 
From: Ivan Yurchenko
Date: 2023-04-06 22:37
To: dev@kafka.apache.org
Subject: [DISCUSS] KIP-917: Additional custom metadata for remote log segment
Hello!

I would like to start the discussion thread on KIP-917: Additional custom
metadata for remote log segment [1]
This KIP is fairly small and proposes to add a new field to the remote
segment metadata.

Thank you!

Best,
Ivan

[1]
https://cwiki.apache.org/confluence/display/KAFKA/KIP-917%3A+Additional+custom+metadata+for+remote+log+segment


[jira] [Resolved] (KAFKA-14376) Add ConfigProvider to make use of environment variables

2023-04-06 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-14376.

Fix Version/s: 3.5.0
   Resolution: Fixed

> Add ConfigProvider to make use of environment variables
> ---
>
> Key: KAFKA-14376
> URL: https://issues.apache.org/jira/browse/KAFKA-14376
> Project: Kafka
>  Issue Type: Improvement
>  Components: config
>Reporter: Roman Schmitz
>Assignee: Roman Schmitz
>Priority: Minor
>  Labels: needs-kip
> Fix For: 3.5.0
>
>
> So far it is not possible to inject additional configurations stored in 
> environment variables. This topic came up in several projects and would be a 
> useful Kafka config feature, similar to the file/directory providers, e.g.:
> {noformat}
> config.providers=env
> config.providers.env.class=org.apache.kafka.common.config.provider.EnvVarConfigProvider
> ssl.key.password=${env:<...>:KEY_PASSPHRASE}
> {noformat}
>
> Link to KIP: 
> [KIP-887|https://cwiki.apache.org/confluence/display/KAFKA/KIP-887%3A+Add+ConfigProvider+to+make+use+of+environment+variables]
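
For reference, a minimal sketch of exercising the provider directly (assuming
the class and package named above; the path argument is unused by the env
provider, and the broker/client normally performs this resolution itself when
it sees ${env:...} placeholders):

import org.apache.kafka.common.config.ConfigData;
import org.apache.kafka.common.config.provider.EnvVarConfigProvider;

import java.util.Map;
import java.util.Set;

public class EnvProviderDemo {
    public static void main(String[] args) throws Exception {
        try (EnvVarConfigProvider provider = new EnvVarConfigProvider()) {
            provider.configure(Map.of());
            // Resolve a single environment variable; PATH is just a demo key.
            ConfigData data = provider.get("", Set.of("PATH"));
            System.out.println(data.data().get("PATH"));
        }
    }
}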



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Failing tests in Kafka stream is under investigation now

2023-04-06 Thread Guozhang Wang
Hi Luke,

Thanks for investigating it. I think my PR broke some tests somehow ---
I'm still looking into why the local runs did not capture them. My apologies.

Guozhang

On Thu, Apr 6, 2023 at 4:57 AM Chia-Ping Tsai  wrote:
>
> hi Luke
>
> Thanks for your time and effort. It would be nice to see the green CI again :)
>
> —
> Chia-Ping
>
> > Luke Chen  wrote on 2023-04-06 at 6:13 PM:
> >
> > Hi all,
> >
> > Just want to let you know that our current CI test results will contain
> > many (more than 600 in build 1738) failed tests. I'm already investigating
> > them and have identified some errors; a PR is raised and the CI tests are
> > running now. Hopefully, this can bring the healthy CI back soon.
> >
> > Stay tuned.
> >
> > Luke


[jira] [Created] (KAFKA-14882) Uncoordinated states about topic in ZooKeeper nodes and Kafka brokers cause TopicExistException at client

2023-04-06 Thread Haoze Wu (Jira)
Haoze Wu created KAFKA-14882:


 Summary: Uncoordinated states about topic in ZooKeeper nodes and 
Kafka brokers cause TopicExistException at client
 Key: KAFKA-14882
 URL: https://issues.apache.org/jira/browse/KAFKA-14882
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 2.8.0
Reporter: Haoze Wu


We have been testing Kafka 2.8.0. We found some scenarios where 
TopicExistsException happens, and we feel the design of the topic create process 
in Kafka may sometimes confuse users.

When a client sends a topic create request to a Kafka broker, the following 
steps happen:
 # The AdminManager checks the topic path in the ZK nodes and throws 
TopicExistsException if the topic exists (Kafka sends a request to ZooKeeper)
 # The AdminManager adds the topic path in the ZK nodes (Kafka sends a request 
to ZooKeeper)
 # The controller's ZooKeeper request watcher detects it and enqueues the 
corresponding event (the ZooKeeper watcher sends a message to Kafka)
 # The event is dequeued and executed (the Kafka controller broker sends a 
LeaderAndIsrRequest to the Kafka brokers, possibly including itself)
 # The broker handles the request; a client retry starts again from step #1

A symptom we observed is that when step #4 is delayed (stuck for some reason), 
the client may retry (send the topic create request again), which triggers 
TopicExistsException in step #1. However, the topic create request should occur 
as a kind of "transaction": it should have some atomicity and also be robust 
under concurrent topic creation.

After some inspection, we found that it is not easy for us to implement such a 
feature given the current implementation. But we do have the complaint that the 
user client gets TopicExistsException when the topic does not actually exist or 
is not ready.

We suggest that maybe we can at least have some utility which helps users 
mitigate this issue. For example, provide a tool which helps users clean the 
ZooKeeper data and ensure the consistency of the topic metadata.

We are waiting for feedback from the community. We can provide some concrete 
cases, reproduction scripts, and analysis of the workload if needed.
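
As one possible client-side mitigation in the meantime, a retried create can
treat TopicExistsException as success, since the earlier request may simply
still be propagating. A hedged sketch using the standard Admin client (topic
name, replication settings, and broker address are placeholders):

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.errors.TopicExistsException;

import java.util.List;
import java.util.Properties;
import java.util.concurrent.ExecutionException;

public class IdempotentCreate {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            try {
                admin.createTopics(List.of(new NewTopic("my-topic", 3, (short) 3)))
                     .all().get();
            } catch (ExecutionException e) {
                if (!(e.getCause() instanceof TopicExistsException)) {
                    throw e; // any other failure is real
                }
                // The topic already exists, possibly from our own earlier
                // attempt that was still propagating: treat as success.
            }
        }
    }
}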



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #1742

2023-04-06 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 370018 lines...]
[2023-04-06T20:39:12.485Z] 
[2023-04-06T20:39:12.485Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 177 > AdjustStreamThreadCountTest > 
testRebalanceHappensBeforeStreamThreadGetDown() STARTED
[2023-04-06T20:39:16.964Z] 
[2023-04-06T20:39:16.965Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 177 > AdjustStreamThreadCountTest > 
testRebalanceHappensBeforeStreamThreadGetDown() PASSED
[2023-04-06T20:39:16.965Z] 
[2023-04-06T20:39:16.965Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 177 > AdjustStreamThreadCountTest > 
shouldRemoveStreamThreadWithStaticMembership() STARTED
[2023-04-06T20:39:23.553Z] 
[2023-04-06T20:39:23.553Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 177 > AdjustStreamThreadCountTest > 
shouldRemoveStreamThreadWithStaticMembership() PASSED
[2023-04-06T20:39:23.553Z] 
[2023-04-06T20:39:23.553Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 177 > AdjustStreamThreadCountTest > shouldRemoveStreamThread() 
STARTED
[2023-04-06T20:39:27.832Z] 
[2023-04-06T20:39:27.832Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 177 > AdjustStreamThreadCountTest > shouldRemoveStreamThread() 
PASSED
[2023-04-06T20:39:27.832Z] 
[2023-04-06T20:39:27.832Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 177 > AdjustStreamThreadCountTest > 
shouldResizeCacheAfterThreadRemovalTimesOut() STARTED
[2023-04-06T20:39:29.881Z] 
[2023-04-06T20:39:29.881Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 177 > AdjustStreamThreadCountTest > 
shouldResizeCacheAfterThreadRemovalTimesOut() PASSED
[2023-04-06T20:39:34.079Z] 
[2023-04-06T20:39:34.079Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 177 > FineGrainedAutoResetIntegrationTest > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithGlobalAutoOffsetResetLatest()
 STARTED
[2023-04-06T20:39:36.131Z] 
[2023-04-06T20:39:36.131Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 177 > FineGrainedAutoResetIntegrationTest > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithGlobalAutoOffsetResetLatest()
 PASSED
[2023-04-06T20:39:36.131Z] 
[2023-04-06T20:39:36.131Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 177 > FineGrainedAutoResetIntegrationTest > 
shouldThrowExceptionOverlappingPattern() STARTED
[2023-04-06T20:39:36.131Z] 
[2023-04-06T20:39:36.131Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 177 > FineGrainedAutoResetIntegrationTest > 
shouldThrowExceptionOverlappingPattern() PASSED
[2023-04-06T20:39:36.131Z] 
[2023-04-06T20:39:36.131Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 177 > FineGrainedAutoResetIntegrationTest > 
shouldThrowExceptionOverlappingTopic() STARTED
[2023-04-06T20:39:36.131Z] 
[2023-04-06T20:39:36.131Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 177 > FineGrainedAutoResetIntegrationTest > 
shouldThrowExceptionOverlappingTopic() PASSED
[2023-04-06T20:39:36.131Z] 
[2023-04-06T20:39:36.131Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 177 > FineGrainedAutoResetIntegrationTest > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithInvalidCommittedOffsets() STARTED
[2023-04-06T20:40:21.797Z] 
[2023-04-06T20:40:21.797Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 177 > FineGrainedAutoResetIntegrationTest > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithInvalidCommittedOffsets() PASSED
[2023-04-06T20:40:21.797Z] 
[2023-04-06T20:40:21.797Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 177 > FineGrainedAutoResetIntegrationTest > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithDefaultGlobalAutoOffsetResetEarliest()
 STARTED
[2023-04-06T20:41:12.674Z] 
[2023-04-06T20:41:12.674Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 177 > FineGrainedAutoResetIntegrationTest > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithDefaultGlobalAutoOffsetResetEarliest()
 PASSED
[2023-04-06T20:41:12.674Z] 
[2023-04-06T20:41:12.674Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 177 > FineGrainedAutoResetIntegrationTest > 
shouldThrowStreamsExceptionNoResetSpecified() STARTED
[2023-04-06T20:41:12.674Z] 
[2023-04-06T20:41:12.675Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 177 > FineGrainedAutoResetIntegrationTest > 
shouldThrowStreamsExceptionNoResetSpecified() PASSED
[2023-04-06T20:41:12.675Z] 
[2023-04-06T20:41:12.675Z] Gradle Test Run :streams:integrationTest > Gradle 
Test Executor 177 > GlobalKTableIntegrationTest > 
shouldGetToRunningWithOnlyGlobalTopology() STARTED
[2023-04-06T20:41:12.675Z] 
[2023-04-06T20:41:12.675Z] Gradle Test Run :streams