[jira] [Created] (KAFKA-15286) Migrate ApiVersion related code to kraft

2023-08-01 Thread Deng Ziming (Jira)
Deng Ziming created KAFKA-15286:
---

 Summary: Migrate ApiVersion related code to kraft
 Key: KAFKA-15286
 URL: https://issues.apache.org/jira/browse/KAFKA-15286
 Project: Kafka
  Issue Type: Task
Reporter: Deng Ziming
Assignee: Deng Ziming


In many places involving ApiVersion, we only support zk; we should move this 
forward to kraft.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15287) Change NodeApiVersions.create() to contain both apis of zk and kraft broker

2023-08-01 Thread Deng Ziming (Jira)
Deng Ziming created KAFKA-15287:
---

 Summary: Change NodeApiVersions.create() to contain both apis of 
zk and kraft broker 
 Key: KAFKA-15287
 URL: https://issues.apache.org/jira/browse/KAFKA-15287
 Project: Kafka
  Issue Type: Sub-task
Reporter: Deng Ziming


We are using ApiKeys.zkBrokerApis() when calling NodeApiVersions.create(), which 
means we only support zk broker apis.
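A toy sketch of the proposed direction (illustrative only; the API name sets below are stand-ins, not Kafka's actual collections): instead of advertising only the zk broker apis, build the default set from both broker modes.

```python
# Hypothetical illustration of the KAFKA-15287 idea: advertise APIs from
# both broker modes instead of only the ZooKeeper-mode set.
ZK_BROKER_APIS = {"Produce", "Fetch", "Metadata", "LeaderAndIsr"}       # stand-in subset
KRAFT_BROKER_APIS = {"Produce", "Fetch", "Metadata", "DescribeQuorum"}  # stand-in subset

def default_supported_apis() -> set[str]:
    # Union of both modes, mirroring a NodeApiVersions.create() that is
    # not limited to zk broker apis.
    return ZK_BROKER_APIS | KRAFT_BROKER_APIS

print(sorted(default_supported_apis()))
# ['DescribeQuorum', 'Fetch', 'LeaderAndIsr', 'Metadata', 'Produce']
```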



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15288) Change BrokerApiVersionsCommandTest to support kraft mode

2023-08-01 Thread Deng Ziming (Jira)
Deng Ziming created KAFKA-15288:
---

 Summary: Change BrokerApiVersionsCommandTest to support kraft mode
 Key: KAFKA-15288
 URL: https://issues.apache.org/jira/browse/KAFKA-15288
 Project: Kafka
  Issue Type: Sub-task
Reporter: Deng Ziming


Currently 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15289) Use zkBrokerApis.clientApis instead of ApiKeys.zkBrokerApis in most cases

2023-08-01 Thread Deng Ziming (Jira)
Deng Ziming created KAFKA-15289:
---

 Summary: Use zkBrokerApis.clientApis instead of 
ApiKeys.zkBrokerApis in most cases
 Key: KAFKA-15289
 URL: https://issues.apache.org/jira/browse/KAFKA-15289
 Project: Kafka
  Issue Type: Sub-task
Reporter: Deng Ziming


In most test cases, we are calling `zkBrokerApis`; we should ensure kraft 
broker apis are also supported, so use `clientApis` wherever possible.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[QUESTION] What is the difference between sequence and offset for a Record?

2023-08-01 Thread tison
Hi,

I'm writing a Kafka API Rust codec library[1] to understand how Kafka
models its concepts and how the core business logic works.

While implementing the codec for Records[2], I came across a pair of fields,
"sequence" and "offset". Both of them are calculated as
baseSequence/baseOffset + offset delta. I'm a bit confused about how to deal
with them properly - what's the difference between these two concepts
logically?

Also, to understand how the core business logic works, I wrote a simple
server based on my codec library, and observed that the server may need to
update the offset for produced records. How does Kafka set the correct offset
for each produced record? And how does Kafka maintain the calculation of
offset and sequence during these modifications?

I'd appreciate it if anyone can answer the question or give some insights :D

Best,
tison.

[1] https://github.com/tisonkun/kafka-api
[2] https://kafka.apache.org/documentation/#messageformat
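For context on the encoding itself: in the batch format, both values are stored as a batch-level base plus the same per-record delta, so a decoder reconstructs them like this (minimal sketch; field names follow the documented record batch format):

```python
from dataclasses import dataclass

@dataclass
class RecordBatch:
    base_offset: int    # assigned by the broker's log when the batch is appended
    base_sequence: int  # assigned by the producer; -1 when idempotence is disabled

@dataclass
class Record:
    offset_delta: int   # the same delta feeds both reconstructions

def absolute_offset(batch: RecordBatch, record: Record) -> int:
    return batch.base_offset + record.offset_delta

def absolute_sequence(batch: RecordBatch, record: Record) -> int:
    return batch.base_sequence + record.offset_delta

batch = RecordBatch(base_offset=100, base_sequence=5)
record = Record(offset_delta=2)
print(absolute_offset(batch, record), absolute_sequence(batch, record))  # 102 7
```

The deltas are identical, but the two bases come from different actors, which is where the concepts diverge (see the replies below in the thread).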


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2057

2023-08-01 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-949: Add flag to enable the usage of topic separator in MM2 DefaultReplicationPolicy

2023-08-01 Thread Omnia Ibrahim
Thanks for the binding vote, Greg. We now need one more binding vote to
get this KIP accepted.

On Tue, Jul 25, 2023 at 8:10 PM Greg Harris 
wrote:

> Hey Omnia,
>
> Thanks for the KIP!
>
> I think that MM2 is responsible for providing an upgrade path for
> users, even if it isn't backwards-compatible by default due to a
> mistake.
> The non-configuration-based strategies I could think of aren't viable
> due to the danger of inferring the incorrect topic name, and inherent
> complexity which makes them hard to backport.
> I also support the decision to backport this to 3.1 - 3.5, so that MM2
> users can upgrade in minor version increments after those patch
> releases go out.
>
> I'm +1 (binding).
>
> Thanks,
> Greg
>
> On Mon, Jul 24, 2023 at 7:21 AM Omnia Ibrahim 
> wrote:
> >
> > Hi Chris, I updated the KIP to address your feedback. Thanks for the
> vote.
> >
> > On Mon, Jul 24, 2023 at 1:30 PM Chris Egerton 
> > wrote:
> >
> > > Hi Omnia,
> > >
> > > I think there's a few clarifications that should still be made on the
> KIP,
> > > but assuming these are agreeable, I'm +1 (binding)
> > >
> > > - In the description for the
> > > replication.policy.internal.topic.separator.enabled property (in the
> > > "Public Interfaces" section), we should specify that it affects only
> the
> > > checkpoints and offset syncs topic
> > > - We can remove the code snippet from the "Proposed Changes" section
> (right
> > > now it's a little buggy; there's two different implementations for the
> same
> > > "internalSuffix" method, and there are references to an
> "internalSeparator"
> > > method but no implementation for it); since we don't usually require
> > > specific code changes in KIPs, I think as long as we can describe the
> > > changes we're proposing in the "Public Interfaces" section, that
> should be
> > > enough for this KIP
> > >
> > > Cheers,
> > >
> > > Chris
> > >
> > > On Mon, Jul 24, 2023 at 2:04 AM Federico Valeri 
> > > wrote:
> > >
> > > > +1 (non binding)
> > > >
> > > > Thanks
> > > > Fede
> > > >
> > > >
> > > > On Sun, Jul 23, 2023 at 6:30 PM Omnia Ibrahim <
> o.g.h.ibra...@gmail.com>
> > > > wrote:
> > > > >
> > > > > Hi everyone,
> > > > > I would like to open a vote for KIP-949. The proposal is here
> > > > >
> > > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-949%3A+Add+flag+to+enable+the+usage+of+topic+separator+in+MM2+DefaultReplicationPolicy
> > > > > .
> > > > > <
> > > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-949%3A+Add+flag+to+enable+the+usage+of+topic+separator+in+MM2+DefaultReplicationPolicy
> > > > >
> > > > >
> > > > > Thanks
> > > > > Omnia
> > > >
> > >
>


Re: [VOTE] KIP-959 Add BooleanConverter to Kafka Connect

2023-08-01 Thread Hector Geraldino (BLOOMBERG/ 919 3RD A)
Hi,

Still missing one binding vote for this (very small) KIP to pass :)

From: dev@kafka.apache.org At: 07/28/23 09:37:45 UTC-4:00 To: dev@kafka.apache.org
Subject: Re: [VOTE] KIP-959 Add BooleanConverter to Kafka Connect

Hi everyone,

Thanks to everyone who has reviewed and voted for this KIP. 

So far it has received 3 non-binding votes (Andrew Schofield, Yash Mayya, Kamal 
Chandraprakash) and 2 binding votes (Chris Egerton, Greg Harris) - still shy of 
one binding vote to pass.

Can we get help from a committer to push it through?

Thank you!
Hector

Sent from Bloomberg Professional for iPhone

- Original Message -
From: Greg Harris 
To: dev@kafka.apache.org
At: 07/26/23 12:23:20 UTC-04:00


Hey Hector,

Thanks for the straightforward and clear KIP!
+1 (binding)

Thanks,
Greg

On Wed, Jul 26, 2023 at 5:16 AM Chris Egerton  wrote:
>
> +1 (binding)
>
> Thanks Hector!
>
> On Wed, Jul 26, 2023 at 3:18 AM Kamal Chandraprakash <
> kamal.chandraprak...@gmail.com> wrote:
>
> > +1 (non-binding). Thanks for the KIP!
> >
> > On Tue, Jul 25, 2023 at 11:12 PM Yash Mayya  wrote:
> >
> > > Hi Hector,
> > >
> > > Thanks for the KIP!
> > >
> > > +1 (non-binding)
> > >
> > > Thanks,
> > > Yash
> > >
> > > On Tue, Jul 25, 2023 at 11:01 PM Andrew Schofield <
> > > andrew_schofield_j...@outlook.com> wrote:
> > >
> > > > Thanks for the KIP. As you say, not that controversial.
> > > >
> > > > +1 (non-binding)
> > > >
> > > > Thanks,
> > > > Andrew
> > > >
> > > > > On 25 Jul 2023, at 18:22, Hector Geraldino (BLOOMBERG/ 919 3RD A) <
> > > > hgerald...@bloomberg.net> wrote:
> > > > >
> > > > > Hi everyone,
> > > > >
> > > > > The changes proposed by KIP-959 (Add BooleanConverter to Kafka
> > Connect)
> > > > have a limited scope and shouldn't be controversial. I'm opening a
> > voting
> > > > thread with the hope that it can be included in the next upcoming 3.6
> > > > release.
> > > > >
> > > > > Here are some links:
> > > > >
> > > > > KIP:
> > > >
> > >
> > 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-959%3A+Add+BooleanConverter+to+Kafka+Connect
> > > > > JIRA: https://issues.apache.org/jira/browse/KAFKA-15248
> > > > > Discussion thread:
> > > > https://lists.apache.org/thread/15c2t0kl9bozmzjxmkl5n57kv4l4o1dt
> > > > > Pull Request: https://github.com/apache/kafka/pull/14093
> > > > >
> > > > > Thanks!
> > > >
> > > >
> > > >
> > >
> >




[jira] [Created] (KAFKA-15290) Add support to onboard existing topics to tiered storage

2023-08-01 Thread Kamal Chandraprakash (Jira)
Kamal Chandraprakash created KAFKA-15290:


 Summary: Add support to onboard existing topics to tiered storage
 Key: KAFKA-15290
 URL: https://issues.apache.org/jira/browse/KAFKA-15290
 Project: Kafka
  Issue Type: Task
Reporter: Kamal Chandraprakash
Assignee: Kamal Chandraprakash


This task is about adding support to enable tiered storage for existing topics 
in the cluster.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2058

2023-08-01 Thread Apache Jenkins Server
See 




Re: [QUESTION] What is the difference between sequence and offset for a Record?

2023-08-01 Thread Matthias J. Sax

The _offset_ is the position of the record in the partition.

The _sequence number_ is a unique ID that allows the broker to de-duplicate 
messages. It requires the producer to implement the idempotency protocol 
(part of Kafka transactions); thus, sequence numbers are optional, and as 
long as you don't want to support idempotent writes, you don't need to 
worry about them. (If you want to dig into the details, check out KIP-98, 
the original KIP about Kafka TX.)


HTH,
  -Matthias

On 8/1/23 2:19 AM, tison wrote:

Hi,

I'm wringing a Kafka API Rust codec library[1] to understand how Kafka
models its concepts and how the core business logic works.

During implementing the codec for Records[2], I saw a twins of fields
"sequence" and "offset". Both of them are calculated by
baseOffset/baseSequence + offset delta. Then I'm a bit confused how to deal
with them properly - what's the difference between these two concepts
logically?

Also, to understand how the core business logic works, I write a simple
server based on my codec library, and observe that the server may need to
update offset for records produced. How does Kafka set the correct offset
for each produced records? And how does Kafka maintain the calculation for
offset and sequence during these modifications?

I'll appreciate if anyone can answer the question or give some insights :D

Best,
tison.

[1] https://github.com/tisonkun/kafka-api
[2] https://kafka.apache.org/documentation/#messageformat



Re: [VOTE] KIP-759: Unneeded repartition canceling

2023-08-01 Thread Walker Carlson
+1 (binding)

On Mon, Jul 31, 2023 at 10:43 PM Matthias J. Sax  wrote:

> +1 (binding)
>
> On 7/11/23 11:16 AM, Shay Lin wrote:
> > Hi all,
> >
> > I'd like to call a vote on KIP-759: Unneeded repartition canceling
> > The KIP has been under discussion for quite some time(two years). This
> is a
> > valuable optimization for advanced users. I hope we can push this toward
> > the finish line this time.
> >
> > Link to the KIP:
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-759%3A+Unneeded+repartition+canceling
> >
> > Best,
> > Shay
> >
>


Re: [QUESTION] What is the difference between sequence and offset for a Record?

2023-08-01 Thread Justine Olshan
For what it's worth -- the sequence number is not calculated as
"baseOffset/baseSequence + offset delta", but rather increases monotonically
for a given epoch; if the epoch is bumped, we reset back to zero.
This means the offset and sequence may match, but they do not strictly
need to be the same. The sequence number also always comes from the
client and is included in the produce records sent to the Kafka broker.

As for offsets, there is some code in the log layer that maintains the log
end offset and assigns offsets to the records. The produce handling on the
leader should typically assign the offset.
I believe you can find that code here:
https://github.com/apache/kafka/blob/b9a45546a7918799b6fb3c0fe63b56f47d8fcba9/core/src/main/scala/kafka/log/UnifiedLog.scala#L766
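A toy model of the two mechanisms described above (illustrative only, not Kafka's actual code): the producer numbers records with a per-epoch sequence, while the broker's log assigns offsets from its log end offset.

```python
class ToyProducer:
    """Sequence numbers are per producer epoch; an epoch bump resets them to 0."""
    def __init__(self):
        self.epoch = 0
        self.next_sequence = 0

    def bump_epoch(self):
        self.epoch += 1
        self.next_sequence = 0

    def send(self, value):
        seq = self.next_sequence
        self.next_sequence += 1
        return {"value": value, "epoch": self.epoch, "sequence": seq}

class ToyLog:
    """Offsets are assigned by the broker from the log end offset, independent of sequence."""
    def __init__(self):
        self.records = []

    def append(self, record):
        record["offset"] = len(self.records)  # current log end offset
        self.records.append(record)
        return record["offset"]

producer, log = ToyProducer(), ToyLog()
for v in "abc":
    log.append(producer.send(v))
producer.bump_epoch()              # sequence resets; offsets keep growing
r = producer.send("d")
log.append(r)
print(r["sequence"], r["offset"])  # 0 3
```

The divergence after the epoch bump shows why the two numbers can coincide early in a partition's life but are maintained by different parties for different purposes.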

Justine

On Tue, Aug 1, 2023 at 11:38 AM Matthias J. Sax  wrote:

> The _offset_ is the position of the record in the partition.
>
> The _sequence number_ is a unique ID that allows broker to de-duplicate
> messages. It requires the producer to implement the idempotency protocol
> (part of Kafka transactions); thus, sequence numbers are optional and as
> long as you don't want to support idempotent writes, you don't need to
> worry about them. (If you want to dig into details, checkout KIP-98 that
> is the original KIP about Kafka TX).
>
> HTH,
>-Matthias
>
> On 8/1/23 2:19 AM, tison wrote:
> > Hi,
> >
> > I'm wringing a Kafka API Rust codec library[1] to understand how Kafka
> > models its concepts and how the core business logic works.
> >
> > During implementing the codec for Records[2], I saw a twins of fields
> > "sequence" and "offset". Both of them are calculated by
> > baseOffset/baseSequence + offset delta. Then I'm a bit confused how to
> deal
> > with them properly - what's the difference between these two concepts
> > logically?
> >
> > Also, to understand how the core business logic works, I write a simple
> > server based on my codec library, and observe that the server may need to
> > update offset for records produced. How does Kafka set the correct offset
> > for each produced records? And how does Kafka maintain the calculation
> for
> > offset and sequence during these modifications?
> >
> > I'll appreciate if anyone can answer the question or give some insights
> :D
> >
> > Best,
> > tison.
> >
> > [1] https://github.com/tisonkun/kafka-api
> > [2] https://kafka.apache.org/documentation/#messageformat
> >
>


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2059

2023-08-01 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 393964 lines...]
Gradle Test Run :streams:integrationTest > Gradle Test Executor 181 > 
TableTableJoinIntegrationTest > [caching enabled = false] > 
testInnerLeft[caching enabled = false] PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 181 > 
TableTableJoinIntegrationTest > [caching enabled = false] > 
testOuterInner[caching enabled = false] STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 181 > 
TableTableJoinIntegrationTest > [caching enabled = false] > 
testOuterInner[caching enabled = false] PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 181 > 
TableTableJoinIntegrationTest > [caching enabled = false] > 
testOuterOuter[caching enabled = false] STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 181 > 
TableTableJoinIntegrationTest > [caching enabled = false] > 
testOuterOuter[caching enabled = false] PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 181 > 
TableTableJoinIntegrationTest > [caching enabled = false] > 
testInnerWithRightVersionedOnly[caching enabled = false] STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 181 > 
TableTableJoinIntegrationTest > [caching enabled = false] > 
testInnerWithRightVersionedOnly[caching enabled = false] PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 181 > 
TableTableJoinIntegrationTest > [caching enabled = false] > 
testLeftWithLeftVersionedOnly[caching enabled = false] STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 181 > 
TableTableJoinIntegrationTest > [caching enabled = false] > 
testLeftWithLeftVersionedOnly[caching enabled = false] PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 181 > 
TableTableJoinIntegrationTest > [caching enabled = false] > 
testInnerWithLeftVersionedOnly[caching enabled = false] STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 181 > 
TableTableJoinIntegrationTest > [caching enabled = false] > 
testInnerWithLeftVersionedOnly[caching enabled = false] PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 181 > 
TaskAssignorIntegrationTest > shouldProperlyConfigureTheAssignor STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 181 > 
TaskAssignorIntegrationTest > shouldProperlyConfigureTheAssignor PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 181 > 
TaskMetadataIntegrationTest > shouldReportCorrectEndOffsetInformation STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 181 > 
TaskMetadataIntegrationTest > shouldReportCorrectEndOffsetInformation PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 181 > 
TaskMetadataIntegrationTest > shouldReportCorrectCommittedOffsetInformation 
STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 181 > 
TaskMetadataIntegrationTest > shouldReportCorrectCommittedOffsetInformation 
PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 181 > 
EmitOnChangeIntegrationTest > shouldEmitSameRecordAfterFailover() STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 181 > 
EmitOnChangeIntegrationTest > shouldEmitSameRecordAfterFailover() PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 181 > 
HighAvailabilityTaskAssignorIntegrationTest > 
shouldScaleOutWithWarmupTasksAndPersistentStores(TestInfo) STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 181 > 
HighAvailabilityTaskAssignorIntegrationTest > 
shouldScaleOutWithWarmupTasksAndPersistentStores(TestInfo) PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 181 > 
HighAvailabilityTaskAssignorIntegrationTest > 
shouldScaleOutWithWarmupTasksAndInMemoryStores(TestInfo) STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 181 > 
HighAvailabilityTaskAssignorIntegrationTest > 
shouldScaleOutWithWarmupTasksAndInMemoryStores(TestInfo) PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 181 > 
KStreamAggregationDedupIntegrationTest > shouldReduce(TestInfo) STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 181 > 
KStreamAggregationDedupIntegrationTest > shouldReduce(TestInfo) PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 181 > 
KStreamAggregationDedupIntegrationTest > shouldGroupByKey(TestInfo) STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 181 > 
KStreamAggregationDedupIntegrationTest > shouldGroupByKey(TestInfo) PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 181 > 
KStreamAggregationDedupIntegrationTest > shouldReduceWindowed(TestInfo) STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 181 > 
KStreamAgg

[jira] [Created] (KAFKA-15291) Implement Versioned interfaces in common Connect plugins

2023-08-01 Thread Greg Harris (Jira)
Greg Harris created KAFKA-15291:
---

 Summary: Implement Versioned interfaces in common Connect plugins
 Key: KAFKA-15291
 URL: https://issues.apache.org/jira/browse/KAFKA-15291
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Greg Harris
 Fix For: 3.6.0


In KAFKA-14863, we changed the plugin scanning logic to allow plugins to opt in 
to the Versioned interface individually, whereas previously it was limited to 
Connector plugins.

To take advantage of this change, we should have all of the plugins built in 
the Kafka repository opt in, and provide the environment's Kafka version from 
AppInfoParser.getVersion().

See the FileStreamSinkConnector as an example of the version() method 
implementation.

All subclasses of Converter, HeaderConverter, Transformation, Predicate, and 
ConnectorClientConfigOverridePolicy should implement Versioned. The interfaces 
themselves will _not_ extend Versioned, as that would be a 
backwards-incompatible change.
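A minimal sketch of the opt-in pattern described above (in Python for brevity; Connect's real interfaces are Java, and the class names here are simplified stand-ins):

```python
class Versioned:
    """Analogue of Connect's Versioned interface: plugins opt in individually."""
    def version(self) -> str:
        raise NotImplementedError

APP_VERSION = "3.6.0"  # stand-in for AppInfoParser.getVersion()

class Converter:                      # the base interface does NOT extend Versioned,
    def from_connect_data(self, v):   # keeping the change backwards-compatible
        return v

class BooleanConverter(Converter, Versioned):  # each concrete plugin opts in
    def version(self) -> str:
        return APP_VERSION

plugin = BooleanConverter()
print(plugin.version())  # 3.6.0
```

Keeping Versioned off the base interface means third-party plugins that predate the change still load; only the in-repo implementations gain a version() method.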



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15292) Test IdentityReplicationIntegrationTest#testReplicateSourceDefault() is flaky

2023-08-01 Thread Kirk True (Jira)
Kirk True created KAFKA-15292:
-

 Summary: Test 
IdentityReplicationIntegrationTest#testReplicateSourceDefault() is flaky
 Key: KAFKA-15292
 URL: https://issues.apache.org/jira/browse/KAFKA-15292
 Project: Kafka
  Issue Type: Test
  Components: mirrormaker
Reporter: Kirk True


java.lang.RuntimeException: Could not stop worker
    at org.apache.kafka.connect.util.clusters.EmbeddedConnectCluster.stopWorker(EmbeddedConnectCluster.java:230)
    at java.base/java.lang.Iterable.forEach(Iterable.java:75)
    at org.apache.kafka.connect.util.clusters.EmbeddedConnectCluster.stop(EmbeddedConnectCluster.java:163)
    at org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationBaseTest.shutdownClusters(MirrorConnectorsIntegrationBaseTest.java:267)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:568)
    at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:727)
    at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
    at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
    at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
    at org.junit.jupiter.engine.extension.TimeoutExtension.interceptLifecycleMethod(TimeoutExtension.java:128)
    at org.junit.jupiter.engine.extension.TimeoutExtension.interceptAfterEachMethod(TimeoutExtension.java:110)
    at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
    at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
    at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
    at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
    at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
    at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
    at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
    at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
    at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeMethodInExtensionContext(ClassBasedTestDescriptor.java:520)
    at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$synthesizeAfterEachMethodAdapter$24(ClassBasedTestDescriptor.java:510)
    at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeAfterEachMethods$10(TestMethodTestDescriptor.java:243)
    at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeAllAfterMethodsOrCallbacks$13(TestMethodTestDescriptor.java:276)
    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
    at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeAllAfterMethodsOrCallbacks$14(TestMethodTestDescriptor.java:276)
    at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
    at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeAllAfterMethodsOrCallbacks(TestMethodTestDescriptor.java:275)
    at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeAfterEachMethods(TestMethodTestDescriptor.java:241)
    at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:142)
    at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:68)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
    at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
    at org.junit

Re: Debugging Jenkins test failures

2023-08-01 Thread Kirk True
Hi Divij,

Thanks for the pointer to Gradle Enterprise! That’s exactly what I was looking 
for.

Did we track builds before July 12? I see only tiny blips of failures on the 
90-day view.

Thanks,
Kirk

> On Jul 26, 2023, at 2:08 AM, Divij Vaidya  wrote:
> 
> Hi Kirk
> 
> I have been using this new tool to analyze the trends of test
> failures: 
> https://ge.apache.org/scans/tests?search.relativeStartTime=P28D&search.rootProjectNames=kafka&search.timeZoneId=Europe/Berlin
> and general build failures:
> https://ge.apache.org/scans/failures?search.relativeStartTime=P28D&search.rootProjectNames=kafka&search.timeZoneId=Europe/Berlin
> 
> About the classes of build failure, if we look at the last 28 days, I
> do not observe an increasing trend. The top causes of failure are:
> (link [2])
> 1. Failures due to checkstyle (193 builds)
> 2. Timeout waiting to lock cache. It is currently in-use by another
> Gradle instance.
> 3. Compilation failures (116 builds)
> 4. "Gradle Test Executor" finished with a non-zero exit value. Process
> 'Gradle Test Executor 180' finished with non-zero exit value 1
> 
> #4 is caused by a test failure that causes a crash of the Gradle
> process. To debug this, I usually go to complete test output and try
> to figure out which was the last test that 'Gradle Test Executor 180'
> was running. As an example, consider
> https://ge.apache.org/s/luizhogirob4e. We observe that this fails for
> PR-14094. Now, we need to see the complete system out. To find that, I
> will go to Kafka PR builder at
> https://ci-builds.apache.org/job/Kafka/job/kafka-pr/view/change-requests/
> and find the build page for PR-14094. That page is
> https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-14094/.
> Next, find last failed build at
> https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-14094/lastFailedBuild/
> , observe that we have a failure for "Gradle Test Executor 177", click
> on view as plain text (it takes a long time to load), find what the
> GradleTest Executor was doing. In this case, it failed with the
> following error. I strongly believe that it is due to
> https://github.com/apache/kafka/pull/13572 but unfortunately, this was
> reverted and never fixed after that. Perhaps you might want to re
> 
> Gradle Test Run :core:integrationTest > Gradle Test Executor 177 >
> ProducerFailureHandlingTest > testTooLargeRecordWithAckZero() STARTED
> 
>> Task :clients:integrationTest FAILED
> org.gradle.internal.remote.internal.ConnectException: Could not
> connect to server [bd7b0504-7491-43f8-a716-513adb302c92 port:43321,
> addresses:[/127.0.0.1]]. Tried addresses: [/127.0.0.1].
> at 
> org.gradle.internal.remote.internal.inet.TcpOutgoingConnector.connect(TcpOutgoingConnector.java:67)
> at 
> org.gradle.internal.remote.internal.hub.MessageHubBackedClient.getConnection(MessageHubBackedClient.java:36)
> at 
> org.gradle.process.internal.worker.child.SystemApplicationClassLoaderWorker.call(SystemApplicationClassLoaderWorker.java:103)
> at 
> org.gradle.process.internal.worker.child.SystemApplicationClassLoaderWorker.call(SystemApplicationClassLoaderWorker.java:65)
> at 
> worker.org.gradle.process.internal.worker.GradleWorkerMain.run(GradleWorkerMain.java:69)
> at 
> worker.org.gradle.process.internal.worker.GradleWorkerMain.main(GradleWorkerMain.java:74)
> Caused by: java.net.ConnectException: Connection refused
> at java.base/sun.nio.ch.Net.pollConnect(Native Method)
> at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672)
> at 
> java.base/sun.nio.ch.SocketChannelImpl.finishTimedConnect(SocketChannelImpl.java:1141)
> at 
> java.base/sun.nio.ch.SocketChannelImpl.blockingConnect(SocketChannelImpl.java:1183)
> at java.base/sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:98)
> at 
> org.gradle.internal.remote.internal.inet.TcpOutgoingConnector.tryConnect(TcpOutgoingConnector.java:81)
> at 
> org.gradle.internal.remote.internal.inet.TcpOutgoingConnector.connect(TcpOutgoingConnector.java:54)
> ... 5 more
> 
> 
> 
> 
> About the classes of test failure problems, if we look at the last 28
> days, the following tests are the biggest culprits. If we fix just
> these two, our CI would be in a much better shape. (link [1])
> 1. https://issues.apache.org/jira/browse/KAFKA-15197 (this test passes
> only 53% of the time)
> 2. https://issues.apache.org/jira/browse/KAFKA-15052 (this test passes
> only 49% of the time)
> 
> 
> [1] 
> https://ge.apache.org/scans/tests?search.relativeStartTime=P28D&search.rootProjectNames=kafka&search.timeZoneId=Europe/Berlin
> [2] 
> https://ge.apache.org/scans/failures?search.relativeStartTime=P28D&search.rootProjectNames=kafka&search.timeZoneId=Europe/Berlin
> 
> 
> --
> Divij Vaidya
> 
> On Tue, Jul 25, 2023 at 8:09 PM Kirk True  wrote:
>> 
>> Hi all!
>> 
>> I’ve noticed that we’re back in the state where it’s tough to get a clean PR 
>> Jenkins test run. Spot checking the top ~10 pull request runs show this 
>> doesn’t appear to be an issue with just my PRs :P
>> 
>> I 

Re: [VOTE] KIP-759: Unneeded repartition canceling

2023-08-01 Thread Bill Bejeck
I caught up on the discussion thread and the KIP LGTM.

+1(binding)

On Tue, Aug 1, 2023 at 3:07 PM Walker Carlson 
wrote:

> +1 (binding)
>
> On Mon, Jul 31, 2023 at 10:43 PM Matthias J. Sax  wrote:
>
> > +1 (binding)
> >
> > On 7/11/23 11:16 AM, Shay Lin wrote:
> > > Hi all,
> > >
> > > I'd like to call a vote on KIP-759: Unneeded repartition canceling
> > > The KIP has been under discussion for quite some time(two years). This
> > is a
> > > valuable optimization for advanced users. I hope we can push this
> toward
> > > the finish line this time.
> > >
> > > Link to the KIP:
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-759%3A+Unneeded+repartition+canceling
> > >
> > > Best,
> > > Shay
> > >
> >
>


Flaky tests need attention (clients, Connect, Mirror Maker, Streams, etc.)

2023-08-01 Thread Kirk True
Hi!

According to the Gradle Enterprise statistics on our recent Kafka builds, over 
90% have flaky tests [1].

We also have 106 open Jiras with the “flaky-test” label across several 
functional areas of the project [2].

Can I ask that those familiar with those different functional areas take a look 
at the list of flaky tests and triage them?

Thanks,
Kirk

[1] 
https://ge.apache.org/scans/tests?search.relativeStartTime=P28D&search.rootProjectNames=kafka
[2] 
https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Open%2C%20%22In%20Progress%22%2C%20Reopened%2C%20%22Patch%20Available%22)%20AND%20labels%20%3D%20flaky-test

Re: Flaky tests need attention (clients, Connect, Mirror Maker, Streams, etc.)

2023-08-01 Thread Justine Olshan
Is it right that the first one on the list
(org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationExactlyOnceTest)
takes 20 minutes?! That's quite a test.
I wonder if the length corresponds to whether it passes, but we should fix
it and maybe move it out of our PR builds.

I was also wondering if we could distinguish PR builds from trunk builds.
That might give us a better signal since PR builds could be before tests
are fixed. Not sure which one is being reported here.

Thanks for sharing though! This is a useful tool that we've needed for a
while.

Justine

On Tue, Aug 1, 2023 at 4:38 PM Kirk True  wrote:

> Hi!
>
> According to the Gradle Enterprise statistics on our recent Kafka builds,
> over 90% have flaky tests [1].
>
> We also have 106 open Jiras with the “flaky-test” label across several
> functional areas of the project [2].
>
> Can I ask that those familiar with those different functional areas take a
> look at the list of flaky tests and triage them?
>
> Thanks,
> Kirk
>
> [1]
> https://ge.apache.org/scans/tests?search.relativeStartTime=P28D&search.rootProjectNames=kafka
> [2]
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Open%2C%20%22In%20Progress%22%2C%20Reopened%2C%20%22Patch%20Available%22)%20AND%20labels%20%3D%20flaky-test


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2060

2023-08-01 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 394084 lines...]

Gradle Test Run :streams:integrationTest > Gradle Test Executor 182 (all listed tests STARTED and PASSED):

TableTableJoinIntegrationTest [caching enabled = false]:
testInnerInner, testInnerOuter, testInnerLeft, testOuterInner, testOuterOuter,
testInnerWithRightVersionedOnly, testLeftWithLeftVersionedOnly,
testInnerWithLeftVersionedOnly

TaskAssignorIntegrationTest: shouldProperlyConfigureTheAssignor

TaskMetadataIntegrationTest: shouldReportCorrectEndOffsetInformation,
shouldReportCorrectCommittedOffsetInformation

EmitOnChangeIntegrationTest: shouldEmitSameRecordAfterFailover()

HighAvailabilityTaskAssignorIntegrationTest:
shouldScaleOutWithWarmupTasksAndPersistentStores(TestInfo) PASSED,
shouldScaleOutWithWarmupTasksAndInMemoryStores(TestInfo) STARTED
[log truncated]

[VOTE] KIP-899: Allow producer and consumer clients to rebootstrap

2023-08-01 Thread Ivan Yurchenko
Hello,

The discussion [1] for KIP-899 [2] has been open for quite some time. I'd
like to put the KIP up for a vote.

Best,
Ivan

[1] https://lists.apache.org/thread/m0ncbmfxs5m87sszby2jbmtjx2bdpcdl
[2]
https://cwiki.apache.org/confluence/display/KAFKA/KIP-899%3A+Allow+producer+and+consumer+clients+to+rebootstrap
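For readers skimming the archive: KIP-899 proposes letting producer and consumer clients re-resolve bootstrap.servers when all currently known brokers become unreachable. A hedged sketch of what enabling it on a client might look like; the property name metadata.recovery.strategy and value rebootstrap are taken from the KIP proposal and could change before acceptance, so check the accepted KIP text before relying on them:

```java
import java.util.Properties;

public class RebootstrapConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092,broker2:9092");
        props.put("group.id", "example-group");
        // Proposed by KIP-899: when none of the known brokers respond,
        // fall back to re-resolving and re-contacting bootstrap.servers
        // instead of retrying the stale broker list forever.
        props.put("metadata.recovery.strategy", "rebootstrap");
        // These props would be passed to a KafkaConsumer/KafkaProducer
        // constructor in real code; printed here to keep the sketch
        // self-contained.
        System.out.println(props.getProperty("metadata.recovery.strategy"));
    }
}
```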


[GitHub] [kafka-site] netstratum-labs opened a new pull request, #534: Netstratum info updated

2023-08-01 Thread via GitHub


netstratum-labs opened a new pull request, #534:
URL: https://github.com/apache/kafka-site/pull/534

   Added Netstratum information and how we use Kafka in our products.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org