Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #2305

2023-10-19 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-15645) Move ReplicationQuotasTestRig to tools

2023-10-19 Thread Nikolay Izhikov (Jira)
Nikolay Izhikov created KAFKA-15645:
---

 Summary: Move ReplicationQuotasTestRig to tools
 Key: KAFKA-15645
 URL: https://issues.apache.org/jira/browse/KAFKA-15645
 Project: Kafka
  Issue Type: Task
Reporter: Nikolay Izhikov
Assignee: Nikolay Izhikov


The ReplicationQuotasTestRig class is used for measuring performance.
It contains dependencies on the `ReassignPartitionCommand` API.

To move all commands to the tools module, ReplicationQuotasTestRig must also be moved to tools.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15646) Update ReassignPartitionsIntegrationTest once JBOD available

2023-10-19 Thread Nikolay Izhikov (Jira)
Nikolay Izhikov created KAFKA-15646:
---

 Summary: Update ReassignPartitionsIntegrationTest once JBOD 
available
 Key: KAFKA-15646
 URL: https://issues.apache.org/jira/browse/KAFKA-15646
 Project: Kafka
  Issue Type: Task
Reporter: Nikolay Izhikov
 Fix For: 3.7.0


Update ReassignPartitionsIntegrationTest once JBOD is available.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15609) Corrupted index uploaded to remote tier

2023-10-19 Thread Divij Vaidya (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Divij Vaidya resolved KAFKA-15609.
--
Resolution: Cannot Reproduce

> Corrupted index uploaded to remote tier
> ---
>
> Key: KAFKA-15609
> URL: https://issues.apache.org/jira/browse/KAFKA-15609
> Project: Kafka
>  Issue Type: Bug
>  Components: Tiered-Storage
>Affects Versions: 3.6.0
>Reporter: Divij Vaidya
>Priority: Minor
>
> While testing Tiered Storage, we have observed corrupt indexes in the remote 
> tier. One such situation is covered at 
> https://issues.apache.org/jira/browse/KAFKA-15401. This Jira presents another 
> possible case of corruption.
> Potential cause of index corruption:
> We want to ensure that the file we pass to the RSM plugin contains all the 
> data present in the MemoryByteBuffer, i.e. we should have flushed the 
> MemoryByteBuffer to the file using force(). In Kafka, when we close a 
> segment, indexes are flushed asynchronously [1]. It is therefore possible 
> that when we pass the file to the RSM, the file doesn't yet contain the 
> flushed data, so we may end up uploading indexes which haven't been flushed. 
> Ideally, the contract should enforce that we force-flush the content of the 
> MemoryByteBuffer before we hand the file to the RSM. This will ensure that 
> indexes are not corrupted or incomplete.
> [1] 
> [https://github.com/apache/kafka/blob/4150595b0a2e0f45f2827cebc60bcb6f6558745d/core/src/main/scala/kafka/log/UnifiedLog.scala#L1613]
>  
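A minimal sketch of the force-before-upload contract described above, assuming 
the index data lives in a java.nio.MappedByteBuffer; the class, method and 
parameter names below are illustrative only, not Kafka internals:

{code}
import java.io.File;
import java.nio.MappedByteBuffer;

public final class IndexFlushSketch {
    // Sketch of the contract suggested in the ticket: force the memory-mapped
    // index content to disk before the backing file is handed to the RSM plugin.
    public static File flushBeforeUpload(MappedByteBuffer mappedIndex, File indexFile) {
        mappedIndex.force(); // flush any dirty pages of the mapped region to the file
        return indexFile;    // the file now reflects the buffer and can be uploaded
    }
}
{code}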



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2306

2023-10-19 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.6 #94

2023-10-19 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 502898 lines...]
[INFO] 
[INFO] --< org.apache.maven:standalone-pom >---
[INFO] Building Maven Stub Project (No POM) 1
[INFO] [ pom ]-
[INFO] 
[INFO] >>> archetype:3.2.1:generate (default-cli) > generate-sources @ 
standalone-pom >>>
[INFO] 
[INFO] <<< archetype:3.2.1:generate (default-cli) < generate-sources @ 
standalone-pom <<<
[INFO] 
[INFO] 
[INFO] --- archetype:3.2.1:generate (default-cli) @ standalone-pom ---
[INFO] Generating project in Interactive mode
[WARNING] Archetype not found in any catalog. Falling back to central 
repository.
[WARNING] Add a repository with id 'archetype' in your settings.xml if 
archetype's repository is elsewhere.
[INFO] Using property: groupId = streams.examples
[INFO] Using property: artifactId = streams.examples
[INFO] Using property: version = 0.1
[INFO] Using property: package = myapps
Confirm properties configuration:
groupId: streams.examples
artifactId: streams.examples
version: 0.1
package: myapps
 Y: : [INFO] 

[INFO] Using following parameters for creating project from Archetype: 
streams-quickstart-java:3.6.1-SNAPSHOT
[INFO] 

[INFO] Parameter: groupId, Value: streams.examples
[INFO] Parameter: artifactId, Value: streams.examples
[INFO] Parameter: version, Value: 0.1
[INFO] Parameter: package, Value: myapps
[INFO] Parameter: packageInPathFormat, Value: myapps
[INFO] Parameter: package, Value: myapps
[INFO] Parameter: version, Value: 0.1
[INFO] Parameter: groupId, Value: streams.examples
[INFO] Parameter: artifactId, Value: streams.examples
[INFO] Project created from Archetype in dir: 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.6/streams/quickstart/test-streams-archetype/streams.examples
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time:  3.013 s
[INFO] Finished at: 2023-10-19T07:18:10Z
[INFO] 
[Pipeline] dir
Running in 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.6/streams/quickstart/test-streams-archetype/streams.examples
[Pipeline] {
[Pipeline] sh
+ mvn compile
[INFO] Scanning for projects...
[INFO] 
[INFO] -< streams.examples:streams.examples >--
[INFO] Building Kafka Streams Quickstart :: Java 0.1
[INFO]   from pom.xml
[INFO] [ jar ]-
[INFO] 
[INFO] --- resources:3.3.1:resources (default-resources) @ streams.examples ---
[INFO] Copying 1 resource from src/main/resources to target/classes
[INFO] 
[INFO] --- compiler:3.1:compile (default-compile) @ streams.examples ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 3 source files to 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.6/streams/quickstart/test-streams-archetype/streams.examples/target/classes
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time:  2.103 s
[INFO] Finished at: 2023-10-19T07:18:17Z
[INFO] 
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // timestamps
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }

Gradle Test Run :streams:test > Gradle Test Executor 95 > 
SmokeTestDriverIntegrationTest > shouldWorkWithRebalance(boolean) > [2] false 
PASSED
streams-5: SMOKE-TEST-CLIENT-CLOSED
streams-1: SMOKE-TEST-CLIENT-CLOSED
streams-0: SMOKE-TEST-CLIENT-EXCEPTION: Got an uncaught exception
streams-2: SMOKE-TEST-CLIENT-EXCEPTION: Got an uncaught exception
streams-3: SMOKE-TEST-CLIENT-CLOSED
streams-6: SMOKE-TEST-CLIENT-CLOSED
streams-1: SMOKE-TEST-CLIENT-EXCEPTION: Got an uncaught exception
streams-0: SMOKE-TEST-CLIENT-CLOSED
streams-4: SMOKE-TEST-CLIENT-CLOSED
streams-2: SMOKE-TEST-CLIENT-CLOSED

7131 tests completed, 2 failed, 1 skipped
There were failing tests. See the report at: 
file:///home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.6@2/streams/build/reports/tests/test/index.html

Deprecated Gradle features were used in this build, making it incompatible with 
Gradle 9.0.

You can use '--warning-mode all' to show the individual deprecation warnings 
and determine if they come from your own scripts or plugins.

Re: Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #2305

2023-10-19 Thread Shyam P
how to unsubscribe this ?

On Thu, Oct 19, 2023 at 1:30 PM Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See <
> https://ci-builds.apache.org/job/Kafka/job/kafka/job/trunk/2305/display/redirect
> >
>
>


Re: [DISCUSS] KIP-892: Transactional Semantics for StateStores

2023-10-19 Thread Lucas Brutschy
Hi Nick,

what I meant was, why don't you leave the behavior of Kafka Streams in
this case as is (wipe the state, abort the transaction), since the
contribution of the KIP is to allow transactional state stores, not to
eliminate all cases of state wiping in Kafka Streams. But either way,
that's something that could be discussed in the PR, not the KIP.

Cheers,
Lucas
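For reference, the retry-until-deadline approach discussed further down this 
thread could look roughly like the sketch below. The deadline heuristic (half of 
max.poll.interval.ms) and the choice to rethrow when time runs out are 
assumptions for illustration, not part of KIP-892:

{code}
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.common.errors.TimeoutException;

public final class CommitRetrySketch {
    // Retry commitTransaction() until we risk running out of max.poll.interval.ms.
    public static void commitWithRetries(Producer<?, ?> producer, long maxPollIntervalMs) {
        long deadline = System.currentTimeMillis() + maxPollIntervalMs / 2; // headroom is an assumption
        while (true) {
            try {
                producer.commitTransaction(); // safe to retry after a timeout
                return;
            } catch (TimeoutException e) {
                if (System.currentTimeMillis() >= deadline) {
                    throw e; // out of time: surface the error to the caller
                }
                // otherwise loop and retry the commit
            }
        }
    }
}
{code}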

On Wed, Oct 18, 2023 at 3:58 PM Nick Telford  wrote:
>
> Hi Lucas,
>
> TaskCorruptedException is how Streams signals that the Task state needs to
> be wiped, so we can't retain that exception without also wiping state on
> timeouts.
>
> Regards,
> Nick
>
> On Wed, 18 Oct 2023 at 14:48, Lucas Brutschy 
> wrote:
>
> > Hi Nick,
> >
> > I think indeed the better behavior would be to retry commitTransaction
> > until we risk running out of time to meet `max.poll.interval.ms`.
> >
> > However, if it's handled as a `TaskCorruptedException` at the moment,
> > I would do the same in this KIP, and leave exception handling
> > improvements to future work. This KIP is already improving the
> > situation a lot by not wiping the state store.
> >
> > Cheers,
> > Lucas
> >
> > On Tue, Oct 17, 2023 at 3:51 PM Nick Telford 
> > wrote:
> > >
> > > Hi Lucas,
> > >
> > > Yeah, this is pretty much the direction I'm thinking of going in now. You
> > > make an interesting point about committing on-error under
> > > ALOS/READ_COMMITTED, although I haven't had a chance to think through the
> > > implications yet.
> > >
> > > Something that I ran into earlier this week is an issue with the new
> > > handling of TimeoutException. Without TX stores, TimeoutException under
> > EOS
> > > throws a TaskCorruptedException, which wipes the stores. However, with TX
> > > stores, TimeoutException is now just bubbled up and dealt with as it is
> > > under ALOS. The problem arises when the Producer#commitTransaction call
> > > times out: Streams attempts to ignore the error and continue producing,
> > > which causes the next call to Producer#send to throw
> > > "IllegalStateException: Cannot attempt operation `send` because the
> > > previous call to `commitTransaction` timed out and must be retried".
> > >
> > > I'm not sure what we should do here: retrying the commitTransaction seems
> > > logical, but what if it times out again? Where do we draw the line and
> > > shutdown the instance?
> > >
> > > Regards,
> > > Nick
> > >
> > > On Mon, 16 Oct 2023 at 13:19, Lucas Brutschy  > .invalid>
> > > wrote:
> > >
> > > > Hi all,
> > > >
> > > > I think I liked your suggestion of allowing EOS with READ_UNCOMMITTED,
> > > > but keep wiping the state on error, and I'd vote for this solution
> > > > when introducing `default.state.isolation.level`. This way, we'd have
> > > > the most low-risk roll-out of this feature (no behavior change without
> > > > reconfiguration), with the possibility of switching to the most sane /
> > > > battle-tested default settings in 4.0. Essentially, we'd have a
> > > > feature flag but call it `default.state.isolation.level` and don't
> > > > have to deprecate it later.
> > > >
> > > > So the possible configurations would then be this:
> > > >
> > > > 1. ALOS/READ_UNCOMMITTED (default) = processing uses direct-to-DB, IQ
> > > > reads from DB.
> > > > 2. ALOS/READ_COMMITTED = processing uses WriteBatch, IQ reads from
> > > > WriteBatch/DB. Flush on error (see note below).
> > > > 3. EOS/READ_UNCOMMITTED (default) = processing uses direct-to-DB, IQ
> > > > reads from DB. Wipe state on error.
> > > > 4. EOS/READ_COMMITTED = processing uses WriteBatch, IQ reads from
> > > > WriteBatch/DB.
> > > >
> > > > I believe the feature is important enough that we will see good
> > > > adoption even without changing the default. In 4.0, once we have seen
> > > > this being adopted and battle-tested, we make READ_COMMITTED the
> > > > default for EOS, or even READ_COMMITTED always the default, depending
> > > > on our experiences. And we could add a clever implementation of
> > > > READ_UNCOMMITTED with WriteBatches later.
> > > >
> > > > The only smell here is that `default.state.isolation.level` wouldn't
> > > > be purely an IQ setting, but it would also (slightly) change the
> > > > behavior of the processing, but that seems unavoidable as long as we
> > > > haven't solved READ_UNCOMMITTED IQ with WriteBatches.
> > > >
> > > > Minor: As for Bruno's point 4, I think if we are concerned about this
> > > > behavior (we don't necessarily have to be, because it doesn't violate
> > > > ALOS guarantees as far as I can see), we could make
> > > > ALOS/READ_COMMITTED more similar to ALOS/READ_UNCOMMITTED by flushing
> > > > the WriteBatch on error (obviously, only if we have a chance to do
> > > > that).
> > > >
> > > > Cheers,
> > > > Lucas
> > > >
> > > > On Mon, Oct 16, 2023 at 12:19 PM Nick Telford 
> > > > wrote:
> > > > >
> > > > > Hi Guozhang,
> > > > >
> > > > > The KIP as it stands introduces a new configuration,
> > > > > default.state.isolation.level, which is independent of processing.mode.

Re: [DISCUSS] KIP-892: Transactional Semantics for StateStores

2023-10-19 Thread Bruno Cadonna

Hi Nick,

What you and Lucas wrote about the different configurations of ALOS/EOS 
and READ_COMMITTED/READ_UNCOMMITTED makes sense to me. My earlier 
concerns about changelogs diverging from the content of the local state 
stores turned out not to apply. So I think we can move on with those 
configurations.


Regarding the TaskCorruptedException and wiping out the state stores 
under EOS, couldn't we abort the transaction on the state store and 
close the task dirty? If the Kafka transaction was indeed committed, the 
store would restore the missing part from the changelog topic. If the 
Kafka transaction was not committed, the changelog topic and state store are 
in sync.


In any case, IMO those are implementation details that we do not need to 
discuss and solve in the KIP discussion. We can solve them on the PR. 
The important thing is that the processing guarantees hold.


Best,
Bruno

On 10/18/23 3:56 PM, Nick Telford wrote:

Hi Lucas,

TaskCorruptedException is how Streams signals that the Task state needs to
be wiped, so we can't retain that exception without also wiping state on
timeouts.

Regards,
Nick

On Wed, 18 Oct 2023 at 14:48, Lucas Brutschy 
wrote:


Hi Nick,

I think indeed the better behavior would be to retry commitTransaction
until we risk running out of time to meet `max.poll.interval.ms`.

However, if it's handled as a `TaskCorruptedException` at the moment,
I would do the same in this KIP, and leave exception handling
improvements to future work. This KIP is already improving the
situation a lot by not wiping the state store.

Cheers,
Lucas

On Tue, Oct 17, 2023 at 3:51 PM Nick Telford 
wrote:


Hi Lucas,

Yeah, this is pretty much the direction I'm thinking of going in now. You
make an interesting point about committing on-error under
ALOS/READ_COMMITTED, although I haven't had a chance to think through the
implications yet.

Something that I ran into earlier this week is an issue with the new
handling of TimeoutException. Without TX stores, TimeoutException under EOS
throws a TaskCorruptedException, which wipes the stores. However, with TX
stores, TimeoutException is now just bubbled up and dealt with as it is
under ALOS. The problem arises when the Producer#commitTransaction call
times out: Streams attempts to ignore the error and continue producing,
which causes the next call to Producer#send to throw
"IllegalStateException: Cannot attempt operation `send` because the
previous call to `commitTransaction` timed out and must be retried".

I'm not sure what we should do here: retrying the commitTransaction seems
logical, but what if it times out again? Where do we draw the line and
shutdown the instance?

Regards,
Nick

On Mon, 16 Oct 2023 at 13:19, Lucas Brutschy 
.invalid>

wrote:


Hi all,

I think I liked your suggestion of allowing EOS with READ_UNCOMMITTED,
but keep wiping the state on error, and I'd vote for this solution
when introducing `default.state.isolation.level`. This way, we'd have
the most low-risk roll-out of this feature (no behavior change without
reconfiguration), with the possibility of switching to the most sane /
battle-tested default settings in 4.0. Essentially, we'd have a
feature flag but call it `default.state.isolation.level` and don't
have to deprecate it later.

So the possible configurations would then be this:

1. ALOS/READ_UNCOMMITTED (default) = processing uses direct-to-DB, IQ
reads from DB.
2. ALOS/READ_COMMITTED = processing uses WriteBatch, IQ reads from
WriteBatch/DB. Flush on error (see note below).
3. EOS/READ_UNCOMMITTED (default) = processing uses direct-to-DB, IQ
reads from DB. Wipe state on error.
4. EOS/READ_COMMITTED = processing uses WriteBatch, IQ reads from
WriteBatch/DB.

I believe the feature is important enough that we will see good
adoption even without changing the default. In 4.0, once we have seen
this being adopted and battle-tested, we make READ_COMMITTED the
default for EOS, or even READ_COMMITTED always the default, depending
on our experiences. And we could add a clever implementation of
READ_UNCOMMITTED with WriteBatches later.

The only smell here is that `default.state.isolation.level` wouldn't
be purely an IQ setting, but it would also (slightly) change the
behavior of the processing, but that seems unavoidable as long as we
haven't solved READ_UNCOMMITTED IQ with WriteBatches.

Minor: As for Bruno's point 4, I think if we are concerned about this
behavior (we don't necessarily have to be, because it doesn't violate
ALOS guarantees as far as I can see), we could make
ALOS/READ_COMMITTED more similar to ALOS/READ_UNCOMMITTED by flushing
the WriteBatch on error (obviously, only if we have a chance to do
that).

Cheers,
Lucas
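As a rough illustration of the configuration matrix enumerated above, a Streams 
application opting into case 4 (EOS + READ_COMMITTED) might be configured as in 
the sketch below. Note that default.state.isolation.level is only the name 
proposed by KIP-892 and does not exist in released Kafka Streams; the other keys 
are standard StreamsConfig settings, and the application id and bootstrap 
servers are placeholders:

{code}
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public final class Kip892ConfigSketch {
    public static Properties buildConfig() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "kip-892-example");   // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);
        // Proposed by KIP-892 (case 4 above): processing uses a WriteBatch,
        // IQ reads from WriteBatch/DB, and state is not wiped on error.
        props.put("default.state.isolation.level", "READ_COMMITTED");
        return props;
    }
}
{code}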

On Mon, Oct 16, 2023 at 12:19 PM Nick Telford 
wrote:


Hi Guozhang,

The KIP as it stands introduces a new configuration,
default.state.isolation.level, which is independent of processing.mode.

It's intended that this new configuration be used to configure a global IQ
isolation level in the s

Re: [DISCUSS] KIP-981: Manage Connect topics with custom implementation of Admin

2023-10-19 Thread Omnia Ibrahim
Hi, any thoughts on this kip?

Thanks

On Tue, Sep 19, 2023 at 6:04 PM Omnia Ibrahim 
wrote:

> Hi everyone,
> I want to start the discussion of the KIP-981 to extend Connect to use
> org.apache.kafka.clients.admin.ForwardingAdminClient instead of
> KafkaAdminClient 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-981%3A+Manage+Connect+topics+with+custom+implementation+of+Admin
>
>
> Thanks for your time and feedback
> Omnia
>


[jira] [Created] (KAFKA-15647) Fix the different behavior in error handling between the old and new group coordinator

2023-10-19 Thread Dongnuo Lyu (Jira)
Dongnuo Lyu created KAFKA-15647:
---

 Summary: Fix the different behavior in error handling between the 
old and new group coordinator
 Key: KAFKA-15647
 URL: https://issues.apache.org/jira/browse/KAFKA-15647
 Project: Kafka
  Issue Type: Sub-task
Reporter: Dongnuo Lyu
Assignee: Dongnuo Lyu






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15454) Add support for OffsetCommit version 9 in admin client

2023-10-19 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-15454.
-
Fix Version/s: 3.7.0
   Resolution: Fixed

> Add support for OffsetCommit version 9 in admin client
> --
>
> Key: KAFKA-15454
> URL: https://issues.apache.org/jira/browse/KAFKA-15454
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: David Jacot
>Assignee: Sagar Rao
>Priority: Minor
>  Labels: kip-848, kip-848-client-support, kip-848-preview
> Fix For: 3.7.0
>
>
> We need to handle the new error codes as specified here:
> [https://github.com/apache/kafka/blob/trunk/clients/src/main/resources/common/message/OffsetCommitResponse.json#L46|https://github.com/apache/kafka/blob/trunk/clients/src/main/resources/common/message/OffsetCommitRequest.json#L35]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-975 Docker Image for Apache Kafka

2023-10-19 Thread Krishna Agarwal
Hi Viktor,

I've noticed there are two types of custom jar configurations:

   1. Type 1: In this case, only the class name is required (e.g.
   authorizer.class.name). This can be configured by the
   following steps:
  - Mount the jar in the container.
  - Configure the CLASSPATH environment variable (used by
  kafka-run-class.sh) by providing the mounted path to it. This can
  be passed as an environment variable to the docker container.
   2. Type 2: Here, in addition to the class name, the classpath can also be
   configured (e.g. remote.log.metadata.manager.class.name and
   remote.log.metadata.manager.class.path). This can be configured by the
   following steps:
  - Mount the jar in the container.
  - Configure the respective class.path property.

Regards,
Krishna
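For the environment-variable injection mentioned in the quoted reply below 
(point 2), a minimal parsing sketch could look like the following. The 
KAFKA_CFG_ prefix is only an assumption borrowed from the bitnami example 
elsewhere in this digest, not something the KIP has settled on:

{code}
import java.util.Map;
import java.util.Properties;

public final class EnvConfigSketch {
    // Turn prefixed environment variables into broker properties, e.g.
    // KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP -> listener.security.protocol.map
    public static Properties fromEnv(Map<String, String> env, String prefix) {
        Properties props = new Properties();
        for (Map.Entry<String, String> e : env.entrySet()) {
            if (e.getKey().startsWith(prefix)) {
                String key = e.getKey().substring(prefix.length())
                        .toLowerCase().replace('_', '.');
                props.setProperty(key, e.getValue());
            }
        }
        return props;
    }
}
{code}

For example, fromEnv(System.getenv(), "KAFKA_CFG_") would yield a Properties 
object that could then be written out as server.properties.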

On Mon, Sep 25, 2023 at 11:41 PM Krishna Agarwal <
krishna0608agar...@gmail.com> wrote:

> Hi Viktor,
> Thanks for the questions.
>
>1. While the docker image outlined in KIP-975 is designed for
>production environments, it is equally suitable for development and testing
>purposes. We will furnish the docker image, allowing users the flexibility
>to employ it according to their specific needs.
>2. The configs will be injected into the docker container through
>environment variables. These environment variables will have a prefix
>allowing for efficient parsing to extract the relevant properties. (Will add
>this implementation in the KIP as well once we converge on this.)
>3. Regarding this question, I'll conduct a test on my end after
>gaining a better understanding, and then provide you with a response.
>
> Regards,
> Krishna
>
>
> On Tue, Sep 19, 2023 at 3:42 PM Viktor Somogyi-Vass
>  wrote:
>
>> Hi Ismael,
>>
>> I'm not trying to advocate against the docker image, I just pointed out
>> that the current scoping of the KIP may be a bit too generic and thought
>> that KIP-974 and KIP-975 were aiming for mostly the same thing and can be
>> discussed under one umbrella. Apologies if this was rooted in a
>> misunderstanding.
>>
>> Kirshna,
>>
>> I think we need to refine the KIP a bit more. I think there are some
>> interfaces that we need to include in the KIP as Kafka has plugins in
>> certain cases where users are expected to provide implementation and I
>> think it's worth discussing this in the KIP as they're kind of interfaces
>> for users. Here are my questions in order:
>> 1. In what environments do you want the image to be used? As I understand
>> it would replace the current testing image and serve as a basis for
>> development, but would it aim at production use cases too (docker-compose,
>> Kubernetes, etc.)?
>> 2. How do you plan to forward configs to the broker? Do we expect a
>> populated server.properties file placed in a certain location or should
>> the
>> docker image create this file based on some input (like env vars)?
>> 3. Certain parts can be pluggable, like metric reporters or remote log
>> implementations that were just introduced by KIP-405. These manifest in
>> jar
>> files that must be put on the classpath of Kafka while certain classnames
>> have to be configured. How do you plan to implement this, how do we
>> allow users to configure such things?
>>
>> Thanks,
>> Viktor
>>
>>
>>
>>
>> On Thu, Sep 14, 2023 at 4:59 PM Kenneth Eversole
>>  wrote:
>>
>> > Hello,
>> >
>> > I think this would be a wonderful improvement to the ecosystem. While
>> > Viktor is correct that most Docker pipelines eventually lead to a
>> > kubernetes deployment, that should not stop us from creating an
>> > Official Docker Image. Creating a Docker image would allow us to ensure
>> a
>> > level of quality and support for people who want to deploy Kafka as a
>> > container on baremetal machines, it could allow us to create
>> > a sandbox/developer environment for new contributors and developers to
>> test
>> > and have a single agreed upon environment that kafka works in for future
>> > KIPs and would most likely spawn more contributions from people wanting
>> to
>> > optimize kafka for k8s.
>> >
>> >
>> > I am 100% for this and will gladly help if approved.
>> >
>> > Kenneth
>> >
>> > On Thu, Sep 14, 2023 at 5:47 AM Ismael Juma  wrote:
>> >
>> > > Hi Viktor,
>> > >
>> > > I disagree. Docker is a very popular deployment tool and it's not only
>> > used
>> > > with Kubernetes.
>> > >
>> > > Ismael
>> > >
>> > > On Thu, Sep 14, 2023, 1:14 AM Viktor Somogyi-Vass
>> > >  wrote:
>> > >
>> > > > Hi Krishna,
>> > > >
>> > > > I think you should merge this KIP and KIP-974
>> >  as there
>> are
>> > overlaps as
>> > > > Federico pointed out on KIP-974
>> > . I think
>> you
>> > should keep that one as it
>> > > > has well defined goals (improve tests) while I feel this one is too
>

[DISCUSS] KIP-992 Proposal to introduce IQv2 Query Types: TimestampedKeyQuery and TimestampedRangeQuery

2023-10-19 Thread Hanyu (Peter) Zheng
https://cwiki.apache.org/confluence/display/KAFKA/KIP-992%3A+Proposal+to+introduce+IQv2+Query+Types%3A+TimestampedKeyQuery+and+TimestampedRangeQuery

-- 

Hanyu (Peter) Zheng he/him/his
Software Engineer Intern
+1 (213) 431-7193



Re: [DISCUSS] KIP-992 Proposal to introduce IQv2 Query Types: TimestampedKeyQuery and TimestampedRangeQuery

2023-10-19 Thread Hanyu (Peter) Zheng
Hello everyone,

I would like to start the discussion for KIP-992: Proposal to introduce
IQv2 Query Types: TimestampedKeyQuery and TimestampedRangeQuery

The KIP can be found here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-992%3A+Proposal+to+introduce+IQv2+Query+Types%3A+TimestampedKeyQuery+and+TimestampedRangeQuery

Any suggestions are more than welcome.

Many thanks,
Hanyu

On Thu, Oct 19, 2023 at 8:17 AM Hanyu (Peter) Zheng 
wrote:

>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-992%3A+Proposal+to+introduce+IQv2+Query+Types%3A+TimestampedKeyQuery+and+TimestampedRangeQuery
>
> --
>
> Hanyu (Peter) Zheng he/him/his
> Software Engineer Intern
> +1 (213) 431-7193
> 
>


-- 

Hanyu (Peter) Zheng he/him/his
Software Engineer Intern
+1 (213) 431-7193



Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2307

2023-10-19 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 104875 lines...]
> Task :connect:json:jar UP-TO-DATE
> Task :connect:api:compileTestJava UP-TO-DATE
> Task :connect:json:generateMetadataFileForMavenJavaPublication
> Task :connect:api:testClasses UP-TO-DATE
> Task :connect:json:testClasses UP-TO-DATE
> Task :connect:json:testJar
> Task :connect:json:testSrcJar
> Task :connect:api:testJar
> Task :connect:api:testSrcJar
> Task :connect:api:publishMavenJavaPublicationToMavenLocal
> Task :connect:json:publishMavenJavaPublicationToMavenLocal
> Task :connect:json:publishToMavenLocal
> Task :connect:api:publishToMavenLocal
> Task :clients:generateMetadataFileForMavenJavaPublication
> Task :storage:storage-api:compileTestJava
> Task :storage:storage-api:testClasses
> Task :server-common:compileTestJava
> Task :server-common:testClasses
> Task :raft:compileTestJava
> Task :raft:testClasses
> Task :group-coordinator:compileTestJava
> Task :group-coordinator:testClasses

> Task :clients:javadoc
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk/clients/src/main/java/org/apache/kafka/clients/admin/ScramMechanism.java:32:
 warning - Tag @see: missing final '>': "https://cwiki.apache.org/confluence/display/KAFKA/KIP-554%3A+Add+Broker-side+SCRAM+Config+API";>KIP-554:
 Add Broker-side SCRAM Config API

 This code is duplicated in 
org.apache.kafka.common.security.scram.internals.ScramMechanism.
 The type field in both files must match and must not change. The type field
 is used both for passing ScramCredentialUpsertion and for the internal
 UserScramCredentialRecord. Do not change the type field."

> Task :metadata:compileTestJava
> Task :metadata:testClasses

> Task :clients:javadoc
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/secured/package-info.java:21:
 warning - Tag @link: reference not found: 
org.apache.kafka.common.security.oauthbearer
2 warnings

> Task :clients:javadocJar
> Task :clients:srcJar
> Task :clients:testJar
> Task :clients:testSrcJar
> Task :clients:publishMavenJavaPublicationToMavenLocal
> Task :clients:publishToMavenLocal
> Task :core:compileScala
> Task :core:classes
> Task :core:compileTestJava NO-SOURCE
> Task :core:compileTestScala
> Task :core:testClasses
> Task :streams:compileTestJava
> Task :streams:testClasses
> Task :streams:testJar
> Task :streams:testSrcJar
> Task :streams:publishMavenJavaPublicationToMavenLocal
> Task :streams:publishToMavenLocal

Deprecated Gradle features were used in this build, making it incompatible with 
Gradle 9.0.

You can use '--warning-mode all' to show the individual deprecation warnings 
and determine if they come from your own scripts or plugins.

For more on this, please refer to 
https://docs.gradle.org/8.3/userguide/command_line_interface.html#sec:command_line_warnings
 in the Gradle documentation.

BUILD SUCCESSFUL in 3m 6s
94 actionable tasks: 41 executed, 53 up-to-date

Publishing build scan...
https://ge.apache.org/s/3ix72zgrzd2lq

[Pipeline] sh
+ grep ^version= gradle.properties
+ cut -d= -f 2
[Pipeline] dir
Running in 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk/streams/quickstart
[Pipeline] {
[Pipeline] sh
+ mvn clean install -Dgpg.skip
[INFO] Scanning for projects...
[INFO] 
[INFO] Reactor Build Order:
[INFO] 
[INFO] Kafka Streams :: Quickstart[pom]
[INFO] streams-quickstart-java[maven-archetype]
[INFO] 
[INFO] < org.apache.kafka:streams-quickstart >-
[INFO] Building Kafka Streams :: Quickstart 3.7.0-SNAPSHOT[1/2]
[INFO]   from pom.xml
[INFO] [ pom ]-
[INFO] 
[INFO] --- clean:3.0.0:clean (default-clean) @ streams-quickstart ---
[INFO] 
[INFO] --- remote-resources:1.5:process (process-resource-bundles) @ 
streams-quickstart ---
[INFO] 
[INFO] --- site:3.5.1:attach-descriptor (attach-descriptor) @ 
streams-quickstart ---
[INFO] 
[INFO] --- gpg:1.6:sign (sign-artifacts) @ streams-quickstart ---
[INFO] 
[INFO] --- install:2.5.2:install (default-install) @ streams-quickstart ---
[INFO] Installing 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk/streams/quickstart/pom.xml
 to 
/home/jenkins/.m2/repository/org/apache/kafka/streams-quickstart/3.7.0-SNAPSHOT/streams-quickstart-3.7.0-SNAPSHOT.pom
[INFO] 
[INFO] --< org.apache.kafka:streams-quickstart-java >--
[INFO] Building streams-quickstart-java 3.7.0-SNAPSHOT[2/2]
[INFO]   from java/pom.xml
[INFO] --[ maven-archetype ]---
[INFO] 
[INFO] --- clean:3.0.0:clean (default-clean) @ streams-quickstart-java ---
[INFO] 
[INFO] --- remote-resources:1.5:process (p

Re: Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #2305

2023-10-19 Thread Matthias J. Sax

You would need to unsubscribe from the dev list.

I would recommend setting up a filter with your email provider if you don't 
want these, and redirecting them directly to trash.



-Matthias

On 10/19/23 4:49 AM, Shyam P wrote:

how to unsubscribe this ?

On Thu, Oct 19, 2023 at 1:30 PM Apache Jenkins Server <
jenk...@builds.apache.org> wrote:


See <
https://ci-builds.apache.org/job/Kafka/job/kafka/job/trunk/2305/display/redirect









[jira] [Resolved] (KAFKA-15582) Clean shutdown detection, broker side

2023-10-19 Thread Jun Rao (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-15582.
-
Fix Version/s: 3.7.0
   Resolution: Fixed

merged the PR to trunk

> Clean shutdown detection, broker side
> -
>
> Key: KAFKA-15582
> URL: https://issues.apache.org/jira/browse/KAFKA-15582
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Calvin Liu
>Assignee: Calvin Liu
>Priority: Major
> Fix For: 3.7.0
>
>
> The clean shutdown file can now include the broker epoch before shutdown. 
> During the broker start process, the broker should extract the broker epochs 
> from the clean shutdown files. If successful, send the broker epoch through 
> the broker registration.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15648) QuorumControllerTest#testBootstrapZkMigrationRecord is flaky

2023-10-19 Thread David Arthur (Jira)
David Arthur created KAFKA-15648:


 Summary: QuorumControllerTest#testBootstrapZkMigrationRecord is 
flaky
 Key: KAFKA-15648
 URL: https://issues.apache.org/jira/browse/KAFKA-15648
 Project: Kafka
  Issue Type: Bug
  Components: controller, unit tests
Reporter: David Arthur


Noticed that this test failed on Jenkins with 

{code}
org.apache.kafka.server.fault.FaultHandlerException: fatalFaultHandler: 
exception while completing controller activation: Should not have ZK migrations 
enabled on a cluster running metadata.version 3.0-IV1
at 
app//org.apache.kafka.controller.ActivationRecordsGenerator.recordsForNonEmptyLog(ActivationRecordsGenerator.java:154)
at 
app//org.apache.kafka.controller.ActivationRecordsGenerator.generate(ActivationRecordsGenerator.java:229)
at 
app//org.apache.kafka.controller.QuorumController$CompleteActivationEvent.generateRecordsAndResult(QuorumController.java:1237)
at 
app//org.apache.kafka.controller.QuorumController$ControllerWriteEvent.run(QuorumController.java:784)
at 
app//org.apache.kafka.queue.KafkaEventQueue$EventContext.run(KafkaEventQueue.java:127)
at 
app//org.apache.kafka.queue.KafkaEventQueue$EventHandler.handleEvents(KafkaEventQueue.java:210)
at 
app//org.apache.kafka.queue.KafkaEventQueue$EventHandler.run(KafkaEventQueue.java:181)
at java.base@11.0.16.1/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.RuntimeException: Should not have ZK migrations enabled on 
a cluster running metadata.version 3.0-IV1
... 8 more
{code}

When trying to reproduce this failure locally, I ran into a separate flaky 
failure

{code}
[2023-10-19 13:42:09,442] INFO Elected new leader: 
LeaderAndEpoch(leaderId=OptionalInt[0], epoch=1). 
(org.apache.kafka.metalog.LocalLogManager$SharedLogData:300)
[2023-10-19 13:42:09,442] DEBUG 
append(batch=LeaderChangeBatch(newLeader=LeaderAndEpoch(leaderId=OptionalInt[0],
 epoch=1)), nextEndOffset=0) 
(org.apache.kafka.metalog.LocalLogManager$SharedLogData:276)
[2023-10-19 13:42:09,442] DEBUG [LocalLogManager 0] Node 0: running log check. 
(org.apache.kafka.metalog.LocalLogManager:536)
[2023-10-19 13:42:09,442] DEBUG [LocalLogManager 0] initialized local log 
manager for node 0 (org.apache.kafka.metalog.LocalLogManager:685)
[2023-10-19 13:42:09,442] DEBUG [QuorumController id=0] Creating in-memory 
snapshot -1 (org.apache.kafka.timeline.SnapshotRegistry:203)
[2023-10-19 13:42:09,442] INFO [QuorumController id=0] Creating new 
QuorumController with clusterId K8TDRiYZQuepVQHPgwP91A. ZK migration mode is 
enabled. (org.apache.kafka.controller.QuorumController:1912)
[2023-10-19 13:42:09,442] INFO [LocalLogManager 0] Node 0: registered 
MetaLogListener 1238203422 (org.apache.kafka.metalog.LocalLogManager:703)
[2023-10-19 13:42:09,443] DEBUG [LocalLogManager 0] Node 0: running log check. 
(org.apache.kafka.metalog.LocalLogManager:536)
[2023-10-19 13:42:09,443] DEBUG [LocalLogManager 0] Node 0: Executing 
handleLeaderChange LeaderAndEpoch(leaderId=OptionalInt[0], epoch=1) 
(org.apache.kafka.metalog.LocalLogManager:578)
[2023-10-19 13:42:09,443] DEBUG [QuorumController id=0] Executing 
handleLeaderChange[1]. (org.apache.kafka.controller.QuorumController:577)
[2023-10-19 13:42:09,443] INFO [QuorumController id=0] In the new epoch 1, the 
leader is (none). (org.apache.kafka.controller.QuorumController:1179)
[2023-10-19 13:42:09,443] DEBUG [QuorumController id=0] Processed 
handleLeaderChange[1] in 25 us 
(org.apache.kafka.controller.QuorumController:510)
[2023-10-19 13:42:09,443] DEBUG [QuorumController id=0] Executing 
handleLeaderChange[1]. (org.apache.kafka.controller.QuorumController:577)
[2023-10-19 13:42:09,443] INFO [QuorumController id=0] Becoming the active 
controller at epoch 1, next write offset 1. 
(org.apache.kafka.controller.QuorumController:1175)
[2023-10-19 13:42:09,443] DEBUG [QuorumController id=0] Processed 
handleLeaderChange[1] in 34 us 
(org.apache.kafka.controller.QuorumController:510)
[2023-10-19 13:42:09,443] WARN [QuorumController id=0] Performing controller 
activation. The metadata log appears to be empty. Appending 1 bootstrap 
record(s) at metadata.version 3.4-IV0 from bootstrap source 'test'. Putting the 
controller into pre-migration mode. No metadata updates will be allowed until 
the ZK metadata has been migrated. 
(org.apache.kafka.controller.QuorumController:108)
[2023-10-19 13:42:09,443] INFO [QuorumController id=0] Replayed a 
FeatureLevelRecord setting metadata version to 3.4-IV0 
(org.apache.kafka.controller.FeatureControlManager:400)
[2023-10-19 13:42:09,443] INFO [QuorumController id=0] Replayed a 
ZkMigrationStateRecord changing the migration state from NONE to PRE_MIGRATION. 
(org.apache.kafka.controller.FeatureControlManager:421)
[2023-10-19 13:42:09,443] DEBUG append(batch=LocalRecordBatch(leaderEpoch=1, 
appendTimesta

[jira] [Resolved] (KAFKA-15581) Introduce ELR

2023-10-19 Thread Calvin Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Calvin Liu resolved KAFKA-15581.

  Reviewer: David Arthur
Resolution: Fixed

> Introduce ELR
> -
>
> Key: KAFKA-15581
> URL: https://issues.apache.org/jira/browse/KAFKA-15581
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Calvin Liu
>Assignee: Calvin Liu
>Priority: Major
>
> Introduce the PartitionRecord, PartitionChangeRecord and the basic ELR 
> handling in the controller



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2308

2023-10-19 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 314804 lines...]
Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultStateUpdaterTest > 
shouldRestoreActiveStatefulTasksAndUpdateStandbyTasks() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultStateUpdaterTest > 
shouldRestoreActiveStatefulTasksAndUpdateStandbyTasks() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultStateUpdaterTest > shouldPauseStandbyTask() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultStateUpdaterTest > shouldPauseStandbyTask() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultStateUpdaterTest > shouldThrowIfStatefulTaskNotInStateRestoring() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultStateUpdaterTest > shouldThrowIfStatefulTaskNotInStateRestoring() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskExecutorTest > shouldSetUncaughtStreamsException() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskExecutorTest > shouldSetUncaughtStreamsException() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskExecutorTest > shouldClearTaskTimeoutOnProcessed() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskExecutorTest > shouldClearTaskTimeoutOnProcessed() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskExecutorTest > shouldUnassignTaskWhenRequired() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskExecutorTest > shouldUnassignTaskWhenRequired() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskExecutorTest > shouldClearTaskReleaseFutureOnShutdown() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskExecutorTest > shouldClearTaskReleaseFutureOnShutdown() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskExecutorTest > shouldProcessTasks() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskExecutorTest > shouldProcessTasks() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskExecutorTest > shouldPunctuateStreamTime() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskExecutorTest > shouldPunctuateStreamTime() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskExecutorTest > shouldShutdownTaskExecutor() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskExecutorTest > shouldShutdownTaskExecutor() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskExecutorTest > shouldAwaitProcessableTasksIfNoneAssignable() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskExecutorTest > shouldAwaitProcessableTasksIfNoneAssignable() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskExecutorTest > 
shouldRespectPunctuationDisabledByTaskExecutionMetadata() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskExecutorTest > 
shouldRespectPunctuationDisabledByTaskExecutionMetadata() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskExecutorTest > shouldSetTaskTimeoutOnTimeoutException() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskExecutorTest > shouldSetTaskTimeoutOnTimeoutException() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskExecutorTest > shouldPunctuateSystemTime() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskExecutorTest > shouldPunctuateSystemTime() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskExecutorTest > shouldUnassignTaskWhenNotProgressing() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskExecutorTest > shouldUnassignTaskWhenNotProgressing() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskExecutorTest > shouldNotFlushOnException() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskExecutorTest > shouldNotFlushOnException() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskExecutorTest > 
shouldRespectProcessingDisabledByTaskExecutionMetadata() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskExecutorTest > 
shouldRespectProcessingDisabledByTaskExecutionMetadata() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldLockAnEmptySetOfTasks() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldLockAnEmptySetOfTasks() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldAssignT

[jira] [Created] (KAFKA-15649) Handle directory failure timeout

2023-10-19 Thread Igor Soarez (Jira)
Igor Soarez created KAFKA-15649:
---

 Summary: Handle directory failure timeout 
 Key: KAFKA-15649
 URL: https://issues.apache.org/jira/browse/KAFKA-15649
 Project: Kafka
  Issue Type: Sub-task
Reporter: Igor Soarez


If a broker with an offline log directory continues to fail to notify the 
controller of either:
 * the fact that the directory is offline; or
 * of any replica assignment into a failed directory

then the controller will not check if a leadership change is required, and this 
may lead to partitions remaining indefinitely offline.

KIP-858 proposes that the broker should shut down after a configurable timeout 
to force a leadership change. Alternatively, the broker could also request to 
be fenced, as long as there's a path for it to later become unfenced.

While this unavailability is possible in theory, in practice it's hard to 
imagine a scenario where a broker continues to appear healthy to the 
controller, yet fails to send this information. So it's not clear if this is a 
real problem. 

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15650) Data-loss on leader shutdown right after partition creation?

2023-10-19 Thread Igor Soarez (Jira)
Igor Soarez created KAFKA-15650:
---

 Summary: Data-loss on leader shutdown right after partition 
creation?
 Key: KAFKA-15650
 URL: https://issues.apache.org/jira/browse/KAFKA-15650
 Project: Kafka
  Issue Type: Sub-task
Reporter: Igor Soarez


As per KIP-858, when a replica is created, the broker selects a log directory 
to host the replica and queues the propagation of the directory assignment to 
the controller. The replica becomes immediately active, it isn't blocked until 
the controller confirms the metadata change. If the replica is the leader 
replica it can immediately start accepting writes. 

Consider the following scenario:
 # A partition is created in some selected log directory, and some produce 
traffic is accepted
 # Before the broker is able to notify the controller of the directory 
assignment, the broker shuts down
 # Upon coming back online, the broker has an offline directory, the same 
directory which was chosen to host the replica
 # The broker assumes leadership for the replica, but cannot find it in any 
available directory and has no way of knowing it was already created because 
the directory assignment is still missing
 # The replica is created and the previously produced records are lost

Step 4. may seem unlikely due to ISR membership gating leadership, but even 
assuming acks=all and replicas>1, if all other replicas are also offline the 
broker may still gain leadership. Perhaps KIP-966 is relevant here.

We may need to delay new replica activation until the assignment is propagated 
successfully.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15651) Investigate auto commit guarantees during Consumer.assign()

2023-10-19 Thread Kirk True (Jira)
Kirk True created KAFKA-15651:
-

 Summary: Investigate auto commit guarantees during 
Consumer.assign()
 Key: KAFKA-15651
 URL: https://issues.apache.org/jira/browse/KAFKA-15651
 Project: Kafka
  Issue Type: Sub-task
  Components: clients, consumer
Reporter: Kirk True
Assignee: Kirk True


In the assign() method implementation, both KafkaConsumer and 
PrototypeAsyncConsumer commit offsets asynchronously. Is this intentional? 
[~junrao] asks in a [recent PR 
review|https://github.com/apache/kafka/pull/14406/files/193af8230d0c61853d764cbbe29bca2fc6361af9#r1349023459]:
{quote}Do we guarantee that the new owner of the unsubscribed partitions could 
pick up the latest committed offset?
{quote}
Let's confirm whether the asynchronous approach is acceptable and correct. If 
it is, great, let's enhance the documentation to briefly explain why. If it is 
not, let's correct the behavior if it's within the API semantic expectations.
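As a point of comparison, an application that needs the stronger guarantee today 
can commit synchronously before changing its manual assignment. The following is 
an application-side sketch, not a description of KafkaConsumer internals:

{code}
import java.time.Duration;
import java.util.Collection;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public final class AssignCommitSketch {
    // Committing synchronously before assign() guarantees that whoever picks up
    // the previously assigned partitions sees the latest committed offsets.
    public static void reassign(KafkaConsumer<?, ?> consumer,
                                Collection<TopicPartition> newAssignment) {
        consumer.commitSync(Duration.ofSeconds(10)); // blocks until offsets are committed
        consumer.assign(newAssignment);              // switch to the new partition set
    }
}
{code}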



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15652) Investigate resetting offsets in SubscriptionState.resetInitializingPositions()

2023-10-19 Thread Kirk True (Jira)
Kirk True created KAFKA-15652:
-

 Summary: Investigate resetting offsets in 
SubscriptionState.resetInitializingPositions()
 Key: KAFKA-15652
 URL: https://issues.apache.org/jira/browse/KAFKA-15652
 Project: Kafka
  Issue Type: Sub-task
  Components: clients, consumer
Reporter: Kirk True
Assignee: Kirk True


In the {{assign()}} method implementation, both {{KafkaConsumer}} and 
{{PrototypeAsyncConsumer}} commit offsets asynchronously. Is this intentional? 
[~junrao] asks in a [recent PR 
review|https://github.com/apache/kafka/pull/14406/files/193af8230d0c61853d764cbbe29bca2fc6361af9#r1349023459]:
{quote}Do we guarantee that the new owner of the unsubscribed partitions could 
pick up the latest committed offset?
{quote}
Let's confirm whether the asynchronous approach is acceptable and correct. If 
it is, great, let's enhance the documentation to briefly explain why. If it is 
not, let's correct the behavior if it's within the API semantic expectations.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #2309

2023-10-19 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-15653) NPE in ChunkedByteStream.

2023-10-19 Thread Travis Bischel (Jira)
Travis Bischel created KAFKA-15653:
--

 Summary: NPE in ChunkedByteStream.
 Key: KAFKA-15653
 URL: https://issues.apache.org/jira/browse/KAFKA-15653
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 3.6.0
 Environment: Docker container on a Linux laptop, using the latest 
release.
Reporter: Travis Bischel


When looping franz-go integration tests, I received an UNKNOWN_SERVER_ERROR 
from producing. The broker logs for the failing request:

 
{noformat}
[2023-10-19 22:29:58,160] ERROR [ReplicaManager broker=2] Error processing 
append operation on partition 
2fa8995d8002fbfe68a96d783f26aa2c5efc15368bf44ed8f2ab7e24b41b9879-24 
(kafka.server.ReplicaManager)
java.lang.NullPointerException
at 
org.apache.kafka.common.utils.ChunkedBytesStream.<init>(ChunkedBytesStream.java:89)
at 
org.apache.kafka.common.record.CompressionType$3.wrapForInput(CompressionType.java:105)
at 
org.apache.kafka.common.record.DefaultRecordBatch.recordInputStream(DefaultRecordBatch.java:273)
at 
org.apache.kafka.common.record.DefaultRecordBatch.compressedIterator(DefaultRecordBatch.java:277)
at 
org.apache.kafka.common.record.DefaultRecordBatch.skipKeyValueIterator(DefaultRecordBatch.java:352)
at 
org.apache.kafka.storage.internals.log.LogValidator.validateMessagesAndAssignOffsetsCompressed(LogValidator.java:358)
at 
org.apache.kafka.storage.internals.log.LogValidator.validateMessagesAndAssignOffsets(LogValidator.java:165)
at kafka.log.UnifiedLog.append(UnifiedLog.scala:805)
at kafka.log.UnifiedLog.appendAsLeader(UnifiedLog.scala:719)
at 
kafka.cluster.Partition.$anonfun$appendRecordsToLeader$1(Partition.scala:1313)
at kafka.cluster.Partition.appendRecordsToLeader(Partition.scala:1301)
at 
kafka.server.ReplicaManager.$anonfun$appendToLocalLog$6(ReplicaManager.scala:1210)
at 
scala.collection.StrictOptimizedMapOps.map(StrictOptimizedMapOps.scala:28)
at 
scala.collection.StrictOptimizedMapOps.map$(StrictOptimizedMapOps.scala:27)
at scala.collection.mutable.HashMap.map(HashMap.scala:35)
at 
kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:1198)
at kafka.server.ReplicaManager.appendEntries$1(ReplicaManager.scala:754)
at 
kafka.server.ReplicaManager.$anonfun$appendRecords$18(ReplicaManager.scala:874)
at 
kafka.server.ReplicaManager.$anonfun$appendRecords$18$adapted(ReplicaManager.scala:874)
at 
kafka.server.KafkaRequestHandler$.$anonfun$wrap$3(KafkaRequestHandler.scala:73)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:130)
at java.base/java.lang.Thread.run(Unknown Source)

{noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[VOTE] KIP-982: Enhance Custom KafkaPrincipalBuilder to Access SslPrincipalMapper and KerberosShortNamer

2023-10-19 Thread Raghu B
Hi everyone,

I would like to start a vote on KIP-982, which proposed enhancements to the
Custom KafkaPrincipalBuilder to allow access to SslPrincipalMapper and
KerberosShortNamer.

This KIP

aims to improve the flexibility and usability of custom
KafkaPrincipalBuilder implementations by enabling support for Mapping Rules
and enhancing the overall security configuration of Kafka brokers.

Thank you for your participation!

Sincerely,
Raghu


[jira] [Created] (KAFKA-15654) Address Transactions Errors

2023-10-19 Thread Justine Olshan (Jira)
Justine Olshan created KAFKA-15654:
--

 Summary: Address Transactions Errors 
 Key: KAFKA-15654
 URL: https://issues.apache.org/jira/browse/KAFKA-15654
 Project: Kafka
  Issue Type: Sub-task
Reporter: Justine Olshan
Assignee: Justine Olshan


In addition to the work in KIP-691, I propose we handle and clean up 
transactional error handling. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15655) Consider making transactional apis more compatible with topic IDs

2023-10-19 Thread Justine Olshan (Jira)
Justine Olshan created KAFKA-15655:
--

 Summary: Consider making transactional apis more compatible with 
topic IDs
 Key: KAFKA-15655
 URL: https://issues.apache.org/jira/browse/KAFKA-15655
 Project: Kafka
  Issue Type: Sub-task
Reporter: Justine Olshan
Assignee: Justine Olshan


Some ideas include adding topic ID to AddPartitions and other topic partition 
specific APIs.

Adding topic ID as a tagged field in the transactional state logs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15656) Frequent INVALID_RECORD on Kafka 3.6

2023-10-19 Thread Travis Bischel (Jira)
Travis Bischel created KAFKA-15656:
--

 Summary: Frequent INVALID_RECORD on Kafka 3.6
 Key: KAFKA-15656
 URL: https://issues.apache.org/jira/browse/KAFKA-15656
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 3.6.0
Reporter: Travis Bischel
 Attachments: invalid_record.log

Using this docker-compose.yml:
{noformat}
version: "3.7"
services:
  kafka:
    image: bitnami/kafka:latest
    network_mode: host
    environment:
      KAFKA_ENABLE_KRAFT: yes
      KAFKA_CFG_PROCESS_ROLES: controller,broker
      KAFKA_CFG_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_CFG_LISTENERS: PLAINTEXT://:9092,CONTROLLER://:9093
      KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP: 
CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
      KAFKA_CFG_CONTROLLER_QUORUM_VOTERS: 1@127.0.0.1:9093
      # Set this to "PLAINTEXT://127.0.0.1:9092" if you want to run this 
container on localhost via Docker
      KAFKA_CFG_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_CFG_NODE_ID: 1
      ALLOW_PLAINTEXT_LISTENER: yes
      KAFKA_KRAFT_CLUSTER_ID: XkpGZQ27R3eTl3OdTm2LYA # 16 byte base64-encoded 
UUID{noformat}
And running franz-go integration tests with KGO_TEST_RF=1, I consistently 
receive INVALID_RECORD errors.

 

Looking at the container logs, I see these problematic log lines:
{noformat}
2023-10-19 23:33:47,942] ERROR [ReplicaManager broker=1] Error processing 
append operation on partition 
0cf2f3faaafd3f906ea848b684b04833ca162bcd19ecae2cab36767a54f248c7-0 
(kafka.server.ReplicaManager) 
org.apache.kafka.common.InvalidRecordException: Invalid negative header key 
size -25
[2023-10-19 23:33:47,942] ERROR [ReplicaManager broker=1] Error processing 
append operation on partition 
0cf2f3faaafd3f906ea848b684b04833ca162bcd19ecae2cab36767a54f248c7-6 
(kafka.server.ReplicaManager) 
org.apache.kafka.common.InvalidRecordException: Reached end of input stream 
before skipping all bytes. Remaining bytes:94
[2023-10-19 23:33:47,942] ERROR [ReplicaManager broker=1] Error processing 
append operation on partition 
0cf2f3faaafd3f906ea848b684b04833ca162bcd19ecae2cab36767a54f248c7-1 
(kafka.server.ReplicaManager) 
org.apache.kafka.common.InvalidRecordException: Found invalid number of record 
headers -26
[2023-10-19 23:33:47,948] ERROR [ReplicaManager broker=1] Error processing 
append operation on partition 
0cf2f3faaafd3f906ea848b684b04833ca162bcd19ecae2cab36767a54f248c7-6 
(kafka.server.ReplicaManager) 
org.apache.kafka.common.InvalidRecordException: Found invalid number of record 
headers -27
[2023-10-19 23:33:47,950] ERROR [ReplicaManager broker=1] Error processing 
append operation on partition 
0cf2f3faaafd3f906ea848b684b04833ca162bcd19ecae2cab36767a54f248c7-22 
(kafka.server.ReplicaManager)
org.apache.kafka.common.InvalidRecordException: Invalid negative header key 
size -25
[2023-10-19 23:33:47,947] ERROR [ReplicaManager broker=1] Error processing 
append operation on partition 
c63b6e30987317fad18815effb8d432b6df677d2ab56cf6da517bb93fa49b74b-25 
(kafka.server.ReplicaManager)
org.apache.kafka.common.InvalidRecordException: Found invalid number of record 
headers -50
[2023-10-19 23:33:47,959] ERROR [ReplicaManager broker=1] Error processing 
append operation on partition 
c63b6e30987317fad18815effb8d432b6df677d2ab56cf6da517bb93fa49b74b-25 
(kafka.server.ReplicaManager) 
 {noformat}
 

I modified franz-go with a diff to print the request that was written to the 
wire once this error occurs. Attached is a v9 produce request. I deserialized 
it locally and am not seeing the corrupt data that Kafka is printing. It's 
possible there is a bug in the client, but again, these tests have never 
received this error pre-Kafka 3.6. It _looks like_ there is either corruption 
when processing the incoming data, or there is some problematic race condition 
in the broker - I'm not sure which.
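
For anyone trying to pin down where these messages come from: they correspond to 
the varint-encoded header block at the end of each record in the v2 record 
format. Below is a simplified Java sketch of that parsing - not the actual Kafka 
implementation, and it assumes kafka-clients is on the classpath for 
InvalidRecordException - just to show how a single shifted or corrupted byte 
turns into a negative header count or header key size:
{noformat}
// Simplified sketch of record header-block parsing (varint counts and lengths),
// illustrating where errors like "Found invalid number of record headers N" and
// "Invalid negative header key size N" originate. NOT the actual broker code;
// the record fields before the header block are omitted.
import java.nio.ByteBuffer;
import org.apache.kafka.common.InvalidRecordException; // from kafka-clients

public class RecordHeaderParseSketch {

    // Zigzag-encoded varint, as used by the Kafka record format (simplified,
    // with no guard against malformed over-long encodings).
    static int readVarint(ByteBuffer buf) {
        int raw = 0;
        int shift = 0;
        int b;
        do {
            b = buf.get() & 0xff;
            raw |= (b & 0x7f) << shift;
            shift += 7;
        } while ((b & 0x80) != 0);
        return (raw >>> 1) ^ -(raw & 1); // zigzag decode
    }

    // Parse only the headers section of a record body.
    static void parseHeaders(ByteBuffer record) {
        int numHeaders = readVarint(record);
        if (numHeaders < 0)
            throw new InvalidRecordException("Found invalid number of record headers " + numHeaders);
        for (int i = 0; i < numHeaders; i++) {
            int keySize = readVarint(record);
            if (keySize < 0)
                throw new InvalidRecordException("Invalid negative header key size " + keySize);
            record.position(record.position() + keySize); // skip key bytes
            int valueSize = readVarint(record);
            if (valueSize >= 0)
                record.position(record.position() + valueSize); // skip value bytes (negative means null)
        }
    }
}
{noformat}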



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15657) Unexpected errors when producing transactionally in 3.6

2023-10-19 Thread Travis Bischel (Jira)
Travis Bischel created KAFKA-15657:
--

 Summary: Unexpected errors when producing transactionally in 3.6
 Key: KAFKA-15657
 URL: https://issues.apache.org/jira/browse/KAFKA-15657
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 3.6.0
Reporter: Travis Bischel


In loop-testing the franz-go client, I am frequently receiving INVALID_RECORD 
(for which I created a separate issue), as well as INVALID_TXN_STATE and 
UNKNOWN_SERVER_ERROR.

INVALID_TXN_STATE is being returned even though the partitions have been added 
to the transaction (AddPartitionsToTxn). Nothing about the code has changed 
between 3.5 and 3.6, and I have loop-integration-tested this code against 3.5 
thousands of times. 3.6 is newly - and always - returning INVALID_TXN_STATE. If 
I change the code to retry on INVALID_TXN_STATE, I quickly (and always) receive 
UNKNOWN_SERVER_ERROR. Looking at the broker logs, the broker indicates that 
sequence numbers are out of order - but (a) I am repeating requests that were 
in order (so perhaps something on the broker got into a bad state, or perhaps 
this is a consequence of me ignoring INVALID_TXN_STATE), and (b) I am not 
receiving OUT_OF_ORDER_SEQUENCE_NUMBER, I am receiving UNKNOWN_SERVER_ERROR.

I think the main problem is the client unexpectedly receiving 
INVALID_TXN_STATE; a second problem is that OUT_OF_ORDER_SEQUENCE_NUMBER is, 
for some reason, being mapped to UNKNOWN_SERVER_ERROR on the return path.
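
For reference, the flow the tests exercise is the standard transactional 
produce loop. A minimal Java sketch of the equivalent flow (the real tests use 
the franz-go client; the topic and transactional id below are placeholders, not 
the names from the test suite):
{noformat}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class TxnLoopSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "loop-test-txn"); // placeholder id
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();
            for (int i = 0; i < 1000; i++) {
                producer.beginTransaction();
                // The client adds the partition to the transaction (AddPartitionsToTxn)
                // before the broker accepts these writes; an INVALID_TXN_STATE response
                // here means the broker disagrees that the partition was added.
                producer.send(new ProducerRecord<>("repro-topic", "key-" + i, "value-" + i));
                producer.commitTransaction();
            }
        }
    }
}
{noformat}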



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Re: Re: Re: [DISCUSS] KIP-972: Add the metric of the current running version of kafka

2023-10-19 Thread hudeqi
Hi Mickael, Sophie, Doğuşcan. Thanks for your replies.
I like the idea of adding the version and commitId as tags on the existing 
metric named "startTimeMs". I will update the KIP doc and start the vote 
process for this KIP.
It seems that there is no existing metric named "uptime", Doğuşcan.
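
For illustration only, a rough sketch of what the tagged metric could look like,
using the client-side org.apache.kafka.common.metrics API (the broker's
app-info/startTimeMs metrics are registered through different plumbing, and the
version/commit-id values below are placeholders):

import java.util.Map;
import org.apache.kafka.common.MetricName;
import org.apache.kafka.common.metrics.Gauge;
import org.apache.kafka.common.metrics.Metrics;

public class VersionTagSketch {
    public static void main(String[] args) {
        long startTimeMs = System.currentTimeMillis();
        try (Metrics metrics = new Metrics()) {
            // The value stays numeric (Prometheus-friendly); the non-numeric
            // version/commit information rides along as tags.
            Map<String, String> tags = Map.of(
                    "version", "3.6.0",        // placeholder: whatever the build reports
                    "commit-id", "abc123def"); // placeholder commit id
            MetricName name = metrics.metricName(
                    "start-time-ms", "app-info",
                    "Process start time, tagged with version and commit id", tags);
            metrics.addMetric(name, (Gauge<Long>) (config, now) -> startTimeMs);
        }
    }
}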

best,
hudeqi

"Doğuşcan Namal" 写道:
> Hello, do we have a metric showing the uptime? We could tag that metric
> with version information as well.
> 
> I like the idea of adding the version as a tag as well. However, I am not
> inclined to tag each metric with a KafkaVersion information. We could
> discuss which metrics could be tagged but let's keep that out of scope from
> this discussion.
> 
> On Wed, 11 Oct 2023 at 07:37, Sophie Blee-Goldman 
> wrote:
> 
> > Just to chime in here since I recently went through a similar thing, I
> > support adding the version
> > as a tag instead of introducing an entirely new metric for this. In fact I
> > just implemented exactly this
> > in a project that uses Kafka, for these reasons:
> >
> > 1. Adding the version as a tag means that all metrics which are already
> > collected will benefit, and lets you easily tell
> > at a glance which version a specific client metric corresponds to. This is
> > incredibly useful when looking at a dashboard
> > covering multiple instances from different sources. For example, imagine a
> > graph that plots the performance (eg bytes
> > consumed rate) of many individual consumers and which shows several of them
> > maxing out much lower than the rest.
> > If the metric is tagged with the version already, you can easily check if
> > the slow consumers are all using a specific version
> > and may be displaying a performance regression. If the version info has to
> > be plotted separately as its own metric, this is
> > much more of a hassle to check.
> > 2. Additional metrics can be expensive, but additional tags are almost
> > always free (at least, that is my understanding)
> > 3. As you guys already discussed, many systems (like Prometheus) require
> > numeric values, and it's pretty much impossible
> > to come up with a readable scheme for all the relevant versioning info --
> > even if we removed the dots we're left with a rather
> > unreadable representation of the version and of course will need to solve
> > the "-SNAPSHOT" issue somehow. But beyond that,
> > in addition to the raw version we also wanted to emit the specific commit
> > id, which really needs to be a string.
> >
> > I'm pretty sure Kafka client metrics also include the commit id in addition
> > to the version. If we add the version to the tags,
> > we should consider adding the commit id as well. This is incredibly useful
> > for intermediate/SNAPSHOT versions, which
> > don't uniquely identify the specific code that is running.
> >
> > I would personally love to see a KIP start tagging the existing metrics
> > with the version info, and it sounds like this would also
> > solve your problem in a very natural way
> >
> > On Tue, Oct 10, 2023 at 5:42 AM Mickael Maison 
> > wrote:
> >
> > > Hi Hudeqi,
> > >
> > > Rather than creating a gauge with a dummy value, could we add the
> > > version (and commitId) as tags to an existing metric.
> > > For example, the alongside the existing Version and CommitId metrics
> > > we have StartTimeMs. Maybe we can have a StartTimeMs metrics with the
> > > version and commitId) as tags on it? The existing metric already has
> > > the brokerid (id) as tag. WDYT?
> > >
> > > Thanks,
> > > Mickael
> > >
> > > On Thu, Aug 31, 2023 at 4:59 AM hudeqi <16120...@bjtu.edu.cn> wrote:
> > > >
> > > > Thank you for your answer, Mickael.
> > > > If we set the value of the gauge to a constant value of 1, and add a tag
> > > > whose key is "version" and whose value is the version string, does this
> > > > solve the problem? We can get the version by tag in Prometheus.
> > > >
> > > > best,
> > > > hudeqi
> > > >
> > > > "Mickael Maison" 写道:
> > > > > Hi,
> > > > >
> > > > > Prometheus only support numeric values for metrics. This means it's
> > > > > not able to handle the kafka.server:type=app-info metric since Kafka
> > > > > versions are not valid numbers (3.5.0).
> > > > > As a workaround we could create a metric with the version without the
> > > > > dots, for example with value 350 for Kafka 3.5.0.
> > > > >
> > > > > Also in between releases Kafka uses the -SNAPSHOT suffix (for example
> > > > > trunk is currently 3.7.0-SNAPSHOT) so we should also consider a way to
> > > > > handle those.
> > > > >
> > > > > Thanks,
> > > > > Mickael
> > > > >
> > > > > On Wed, Aug 30, 2023 at 2:51 PM hudeqi <16120...@bjtu.edu.cn> wrote:
> > > > > >
> > > > > > Hi Kamal, thanks for the reminder, but I have a question: it seems
> > > > > > that I can't get this metric through "jmx_prometheus", although I
> > > > > > observed this metric through other tools.
> > > > > >
> > > > > > best,
> > > > > > hudeqi
> > > > > >

Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2310

2023-10-19 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 316476 lines...]
Gradle Test Run :core:test > Gradle Test Executor 89 > ZkMigrationClientTest > 
testEmptyWrite() PASSED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZkMigrationClientTest > 
testReadMigrateAndWriteProducerId() STARTED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZkMigrationClientTest > 
testReadMigrateAndWriteProducerId() PASSED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZkMigrationClientTest > 
testExistingKRaftControllerClaim() STARTED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZkMigrationClientTest > 
testExistingKRaftControllerClaim() PASSED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZkMigrationClientTest > 
testMigrateTopicConfigs() STARTED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZkMigrationClientTest > 
testMigrateTopicConfigs() PASSED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZkMigrationClientTest > 
testNonIncreasingKRaftEpoch() STARTED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZkMigrationClientTest > 
testNonIncreasingKRaftEpoch() PASSED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZkMigrationClientTest > 
testMigrateEmptyZk() STARTED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZkMigrationClientTest > 
testMigrateEmptyZk() PASSED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZkMigrationClientTest > 
testTopicAndBrokerConfigsMigrationWithSnapshots() STARTED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZkMigrationClientTest > 
testTopicAndBrokerConfigsMigrationWithSnapshots() PASSED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZkMigrationClientTest > 
testClaimAndReleaseExistingController() STARTED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZkMigrationClientTest > 
testClaimAndReleaseExistingController() PASSED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZkMigrationClientTest > 
testClaimAbsentController() STARTED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZkMigrationClientTest > 
testClaimAbsentController() PASSED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZkMigrationClientTest > 
testIdempotentCreateTopics() STARTED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZkMigrationClientTest > 
testIdempotentCreateTopics() PASSED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZkMigrationClientTest > 
testCreateNewTopic() STARTED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZkMigrationClientTest > 
testCreateNewTopic() PASSED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZkMigrationClientTest > 
testUpdateExistingTopicWithNewAndChangedPartitions() STARTED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZkMigrationClientTest > 
testUpdateExistingTopicWithNewAndChangedPartitions() PASSED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZooKeeperClientTest > 
testZNodeChangeHandlerForDataChange() STARTED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZooKeeperClientTest > 
testZNodeChangeHandlerForDataChange() PASSED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZooKeeperClientTest > 
testZooKeeperSessionStateMetric() STARTED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZooKeeperClientTest > 
testZooKeeperSessionStateMetric() PASSED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZooKeeperClientTest > 
testExceptionInBeforeInitializingSession() STARTED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZooKeeperClientTest > 
testExceptionInBeforeInitializingSession() PASSED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZooKeeperClientTest > 
testGetChildrenExistingZNode() STARTED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZooKeeperClientTest > 
testGetChildrenExistingZNode() PASSED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZooKeeperClientTest > 
testConnection() STARTED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZooKeeperClientTest > 
testConnection() PASSED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZooKeeperClientTest > 
testZNodeChangeHandlerForCreation() STARTED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZooKeeperClientTest > 
testZNodeChangeHandlerForCreation() PASSED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZooKeeperClientTest > 
testGetAclExistingZNode() STARTED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZooKeeperClientTest > 
testGetAclExistingZNode() PASSED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZooKeeperClientTest > 
testSessionExpiryDuringClose() STARTED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZooKeeperClientTest > 
testSessionExpiryDuringClose() PASSED

Gradle Test Run :core:test > Gradle Test Executor 89 > ZooKeeperClientTest > 
testReinitializeAfterAuthFailure() STARTED

Gradle Test Run :

[jira] [Created] (KAFKA-15658) Zookeeper 3.6.3 jar | CVE-2023-44981

2023-10-19 Thread masood (Jira)
masood created KAFKA-15658:
--

 Summary: Zookeeper 3.6.3 jar | CVE-2023-44981 
 Key: KAFKA-15658
 URL: https://issues.apache.org/jira/browse/KAFKA-15658
 Project: Kafka
  Issue Type: Bug
Reporter: masood


The [CVE-2023-44981|https://www.mend.io/vulnerability-database/CVE-2023-44981]  
vulnerability has been reported in the zookeeper.jar. 

It's worth noting that the latest version of Kafka has a dependency on version 
3.8.2 of Zookeeper, which is also impacted by this vulnerability. 

[https://mvnrepository.com/artifact/org.apache.zookeeper/zookeeper/3.8.2]

Could you please verify its impact on Kafka?
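
For what it's worth, a quick way to check which ZooKeeper jar a broker or client 
is actually running with (a rough sketch; it reads the jar manifest, so it may 
print null for repackaged builds - in that case check the jar name under libs/ 
instead):
{noformat}
// Rough check of the ZooKeeper version on the classpath; requires the
// zookeeper jar to be present. Reads Implementation-Version from the manifest.
public class ZkVersionCheck {
    public static void main(String[] args) {
        Package zkPackage = org.apache.zookeeper.ZooKeeper.class.getPackage();
        String version = (zkPackage == null) ? null : zkPackage.getImplementationVersion();
        System.out.println("ZooKeeper implementation version: " + version);
    }
}
{noformat}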



--
This message was sent by Atlassian Jira
(v8.20.10#820010)