[jira] [Created] (KAFKA-15442) add document to introduce tiered storage feature and the usage

2023-09-07 Thread Luke Chen (Jira)
Luke Chen created KAFKA-15442:
-

 Summary: add document to introduce tiered storage feature and the 
usage
 Key: KAFKA-15442
 URL: https://issues.apache.org/jira/browse/KAFKA-15442
 Project: Kafka
  Issue Type: Sub-task
Reporter: Luke Chen
 Fix For: 3.6.0


Add a section to the documentation introducing the tiered storage feature and
explaining how to enable and use it.
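
As a rough sketch of what such a section could cover (config names taken from
the tiered storage work targeted for 3.6.0; the storage-manager plugin class
below is a placeholder, since a real deployment supplies its own):

    # broker config (server.properties): enable the tiered storage machinery
    remote.log.storage.system.enable=true
    # plugin classes are deployment-specific; the first is a placeholder
    remote.log.storage.manager.class.name=com.example.MyRemoteStorageManager
    remote.log.metadata.manager.class.name=org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager

    # per-topic configs: opt a topic in and split local vs. total retention
    remote.storage.enable=true
    local.retention.ms=3600000     # keep roughly one hour on local disk
    retention.ms=604800000         # seven days total (local + remote)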



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Requesting permission to contribute

2023-09-07 Thread Nikhil Ramakrishnan
Hi,
I'm requesting permission to contribute to the Apache Kafka project, as
described on the Wiki:


https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals

My Wiki ID: nikrmk
My JIRA ID: nikramakrishnan

Thanks in advance,
Nikhil


Re: Requesting permission to contribute

2023-09-07 Thread Josep Prat
Hi Nikhil,

Thanks for your interest in contributing to Apache Kafka. Your accounts are
all set now.

Let me know if you have any questions.

Best,

On Thu, Sep 7, 2023 at 4:59 PM Nikhil Ramakrishnan <
ramakrishnan.nik...@gmail.com> wrote:

> Hi,
> I'm requesting permission to contribute to the Apache Kafka project, as
> described on the WIki:
>
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
>
> My Wiki ID: nikrmk
> My JIRA ID: nikramakrishnan
>
> Thanks in advance,
> Nikhil
>


-- 
*Josep Prat*
Open Source Engineering Director, *Aiven*
josep.p...@aiven.io   |   +491715557497
aiven.io
*Aiven Deutschland GmbH*
Alexanderufer 3-7, 10117 Berlin
Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen
Amtsgericht Charlottenburg, HRB 209739 B


Re: KIP-976: Cluster-wide dynamic log adjustment for Kafka Connect

2023-09-07 Thread Chris Egerton
Hi all,

Thanks again for the reviews!


Sagar:

> The updated definition of last_modified looks good to me. As a
continuation
to point number 2, could we also mention that this could be used to get
insights into the propagation of the cluster wide log level updates. It is
implicit but probably better to add it I feel.

Sure, done. Added to the end of the "Config topic records" section: "There
may be some delay between when a REST request with scope=cluster is
received and when all workers have read the corresponding record from the
config topic. The last modified timestamp (details above) can serve as a
rudimentary tool to provide insight into the propagation of a cluster-wide
log level adjustment."
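
(For illustration, assuming the endpoint shape proposed in the KIP, a
cluster-wide adjustment would be issued against any worker along these lines;
the host and logging namespace here are arbitrary:

    curl -X PUT -H "Content-Type: application/json" \
      -d '{"level": "DEBUG"}' \
      'http://localhost:8083/admin/loggers/org.apache.kafka.connect.runtime.WorkerSourceTask?scope=cluster'

Omitting the scope parameter would keep today's worker-local behavior.)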

> Yeah I would lean on the side of calling it out explicitly. Since the
behaviour is similar to the current dynamically set log levels (i.e
resetting to the log4j config files levels) so we can call it out stating
that similarity just for completeness sake. It would be useful info for
new/medium level users reading the KIP considering worker restarts is not
uncommon.

Alright, did this too. Added near the end of the "Config topic records"
section: "Restarting a worker will cause it to discard all cluster-wide
dynamic log level adjustments, and revert to the levels specified in its
Log4j configuration. This mirrors the current behavior with per-worker
dynamic log level adjustments."

> I had a nit level suggestion but not sure if it makes sense but would
still
call it out. The entire namespace name in the config records key (along
with logger-cluster prefix) seems to be a bit too verbose. My first thought
was to not have the prefix org.apache.kafka.connect in the keys considering
it is the root namespace. But since logging can be enabled at a root level,
can we just use initials like (o.a.k.c) which is also a standard practice.
Let me know what you think?

Considering these records aren't meant to be user-visible, there doesn't
seem to be a pressing need to reduce their key sizes (though I'll happily
admit that to human eyes, the format is a bit ugly). IMO the increase in
implementation complexity isn't quite worth it, especially considering
there are plenty of logging namespaces that won't begin with
"org.apache.kafka.connect" (likely including all third-party connector
code), like Yash mentions. Is there a motivation for this suggestion that
I'm missing?
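
(For reference, a config topic record key under the format being discussed
would look something like the following; the namespace is an arbitrary
example:

    logger-cluster-org.apache.kafka.connect.runtime.distributed.DistributedHerder
)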

> Lastly, I was also thinking if we could introduce a new parameter which
takes a subset of worker ids and enables logging for them in one go. But
this is already achievable by invoking scope=worker endpoint n times to
reflect on n workers so maybe not a necessary change. But this could be
useful on a large cluster. Do you think this is worth listing in the Future
Work section? It's not important so can be ignored as well.

Hmmm... I think I'd rather leave this out for now because I'm just not
certain enough it'd be useful. The one advantage I can think of is
targeting specific workers that are behind a load balancer, but being able
to identify those workers would be a challenge in that scenario anyways.
Besides that, are there any cases that couldn't be addressed more
holistically by targeting based on connector/connector type, like Yash asks?


Ashwin:

Glad we're on the same page RE request forwarding and integration vs.
system tests! Let me know if anything else comes up that you'd like to
discuss.


Yash:

Glad that it makes sense to keep these changes ephemeral. I'm not quite
confident enough to put persistent updates in the "Future work" section but
have a sneaking suspicion that this isn't the last we'll see of that
request. Time will tell...


Thanks again, all!

Cheers,

Chris

On Wed, Sep 6, 2023 at 8:36 AM Yash Mayya  wrote:

> Hi Chris,
>
> Thanks for the clarification on the last modified timestamp tracking here
> and on the KIP, things look good to me now.
>
> On the persistence front, I hadn't considered the interplay between levels
> set by the log level REST APIs and those set by the log4j configuration
> files, and the user confusion that could arise from it. I think your
> argument for keeping the changes made by these log level admin endpoints
> ephemeral makes sense, thanks!
>
> Hi Sagar,
>
> > The entire namespace name in the config
> > records key (along with logger-cluster prefix)
> > seems to be a bit too verbose. My first
> > thought was to not have the prefix
> > org.apache.kafka.connect in the keys
> > considering it is the root namespace. But
> > since logging can be enabled at a root level,
> > can we just use initials like (o.a.k.c) which is
> > also a standard practice.
>
> We allow modifying the log levels for any namespace - i.e. even packages
> and classes outside of Kafka Connect itself (think connector classes,
> dependencies etc.). I'm not sure I follow how we'd avoid naming clashes
> without using the fully qualified name?
>
> > I was also thinking if we could introduce a
> >  new parameter which takes a subset of
> >

Re: Unable to start the Kafka with Kraft in Windows 11

2023-09-07 Thread José Armando García Sancio
Thanks for bringing this to my attention. I agree that it should be a blocker.

On Wed, Sep 6, 2023 at 9:41 AM Greg Harris  wrote:
>
> Hi Ziming,
>
> Thanks for finding that! I've mentioned that in the 3.6.0 release
> thread as a potential blocker since this appears to have a pretty
> substantial impact.
>
> Hey Sumanshu,
>
> Thank you so much for bringing this issue to our attention! It appears
> that your issue is caused by a bug in Kafka, so you shouldn't feel
> obligated to answer my questions from earlier.
> We'll see about trying to get a fix for this issue in the upcoming
> release. I apologize that the released versions of KRaft don't work on
> windows, and are preventing you from evaluating it. You will need to
> use Zookeeper clusters, or run custom builds of Kafka until the fix is
> released.
>
> Thanks,
> Greg
>
> On Tue, Sep 5, 2023 at 7:44 PM ziming deng  wrote:
> >
> > It seems this is related to KAFKA-14273, there is already a pr for this 
> > problem, but it’s not merged.
> >  https://github.com/apache/kafka/pull/12763
> >
> > --
> > Ziming
> >
> > > On Sep 6, 2023, at 07:25, Greg Harris  
> > > wrote:
> > >
> > > Hey Sumanshu,
> > >
> > > Thanks for trying out Kraft! I hope that you can get it working :)
> > >
> > > I am not familiar with Kraft or Windows, but the error appears to
> > > mention that the file is already in use by another process so maybe we
> > > can start there.
> > >
> > > 1. Have you verified that no other Kafka processes are running, such
> > > as in the background or in another terminal?
> > > 2. Are you setting up multiple Kafka brokers on the same machine in your 
> > > test?
> > > 3. Do you see the error if you restart your machine before starting Kafka?
> > > 4. Do you see the error if you delete the log directory and format it
> > > again before starting Kafka?
> > > 5. Have you made any changes to the `server.properties`, such as
> > > changing the log directories? (I see that the default is
> > > `/tmp/kraft-combined-logs`, I don't know if that is a valid path for
> > > Windows).
> > >
> > > Thanks,
> > > Greg
> > >
> > > On Mon, Sep 4, 2023 at 2:21 PM Sumanshu Nankana
> > >  wrote:
> > >>
> > >> Hi Team,
> > >>
> > >> I am following the steps mentioned here 
> > >> https://kafka.apache.org/quickstart to Install the Kafka.
> > >>
> > >> Windows 11
> > >> Kafka Version 
> > >> https://www.apache.org/dyn/closer.cgi?path=/kafka/3.5.0/kafka_2.13-3.5.0.tgz
> > >> 64 Bit Operating System
> > >>
> > >>
> > >> Step1: Generate the Cluster UUID
> > >>
> > >> $KAFKA_CLUSTER_ID=.\bin\windows\kafka-storage.bat random-uuid
> > >>
> > >> Step2: Format Log Directories
> > >>
> > >> .\bin\windows\kafka-storage.bat format -t $KAFKA_CLUSTER_ID -c 
> > >> .\config\kraft\server.properties
> > >>
> > >> Step3: Start the Kafka Server
> > >>
> > >> .\bin\windows\kafka-server-start.bat .\config\kraft\server.properties
> > >>
> > >> I am getting the error. Logs are attached
> > >>
> > >> Could you please help me to sort this error.
> > >>
> > >> Kindly let me know, if you need any more information.
> > >>
> > >> -
> > >> Best
> > >> Sumanshu Nankana
> > >>
> > >>



-- 
-José


Re: Apache Kafka 3.6.0 release

2023-09-07 Thread José Armando García Sancio
Hi Satish,

On Wed, Sep 6, 2023 at 4:58 PM Satish Duggana  wrote:
>
> Hi Greg,
> It seems https://issues.apache.org/jira/browse/KAFKA-14273 has been
> there in 3.5.x too.

I also agree that it should be a blocker for 3.6.0. It should have
been a blocker for those previous releases. I didn't fix it because,
unfortunately, I wasn't aware of the issue and jira.
I'll create a PR with a fix in case the original author doesn't respond in time.

Satish, do you agree?

Thanks!
-- 
-José


[GitHub] [kafka-site] C0urante merged pull request #539: KAFKA-14876: Add stopped state to Kafka Connect Administration docs section

2023-09-07 Thread via GitHub


C0urante merged PR #539:
URL: https://github.com/apache/kafka-site/pull/539


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2181

2023-09-07 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 410780 lines...]
Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > 
shouldNotAssignTasksForPunctuationIfPunctuationDisabled() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldAddTask() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldAddTask() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldNotAssignAnyLockedTask() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldNotAssignAnyLockedTask() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldRemoveTask() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldRemoveTask() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldNotRemoveAssignedTask() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldNotRemoveAssignedTask() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldAssignTaskThatCanBeProcessed() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldAssignTaskThatCanBeProcessed() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldNotRemoveUnlockedTask() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldNotRemoveUnlockedTask() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldReturnAndClearExceptionsOnDrainExceptions() 
STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldReturnAndClearExceptionsOnDrainExceptions() 
PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldUnassignTask() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldUnassignTask() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > 
shouldNotAssignTasksForProcessingIfProcessingDisabled() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > 
shouldNotAssignTasksForProcessingIfProcessingDisabled() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldNotSetUncaughtExceptionsForUnassignedTasks() 
STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldNotSetUncaughtExceptionsForUnassignedTasks() 
PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldNotAssignLockedTask() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldNotAssignLockedTask() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldUnassignLockingTask() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldUnassignLockingTask() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldAssignTasksThatCanBeStreamTimePunctuated() 
STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldAssignTasksThatCanBeStreamTimePunctuated() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
RocksDBTimeOrderedKeyValueBytesStoreTest > shouldCreateEmptyWriteBatches() 
STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
RocksDBTimeOrderedKeyValueBytesStoreTest > shouldCreateEmptyWriteBatches() 
PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
RocksDBTimeOrderedKeyValueBytesStoreTest > shouldCreateWriteBatches() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
RocksDBTimeOrderedKeyValueBytesStoreTest > shouldCreateWriteBatches() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
RocksDBMetricsRecorderTest > 
shouldThrowIfDbToAddWasAlreadyAddedForOtherSegment() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
RocksDBMetricsRecorderTest > 
shouldThrowIfDbToAddWasAlreadyAddedForOtherSegment() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
RocksDBMetricsRecorderTest > 
shouldAddItselfToRecordingTriggerWhenFirstValueProvidersAreAddedAfterLastValueProvidersWereRemoved()
 STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
RocksDBMetricsRecorderTest > 
shouldAddItselfToRecordingTriggerWhenFirstValueProvidersAreAddedAfterLastValueProvidersWereRemoved()
 PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
RocksDBMetricsRecorderTest > shouldThrowIfValuePr

Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.6 #27

2023-09-07 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-966: Eligible Leader Replicas

2023-09-07 Thread Jun Rao
Hi, Calvin,

Thanks for the KIP. A few comments below.

10. "The High Watermark forwarding still requires a quorum within the ISR."
Should it say that it requires the full ISR?

11. "remove the duplicate member in both ISR and ELR from ELR." Hmm, not
sure that I follow since the KIP says that ELR doesn't overlap with ISR.

12. T1: In this case, min ISR is 3. Why would HWM advance to 2 with only 2
members in ISR?

13. The KIP says "Note that, if maximal ISR > ISR, the message should be
replicated to the maximal ISR before covering the message under HWM. The
proposal does not change this behavior." and "Currently, we would advance
HWM because it replicated to 2 brokers (the ones in Maximal ISR), but in
the new protocol we wait until the controller updates ISR=[0,2] to avoid
advancing HWM beyond what ELR=[1] has." They seem a bit inconsistent since
one says no change and the other describes a change.

14. "If there are no ELR members. If the
unclean.recovery.strategy=balanced, the controller will do the unclean
recovery. Otherwise, unclean.recovery.strategy=Manual, the controller will
not attempt to elect a leader. Waiting for the user operations." What
happens with unclean.recovery.strategy=Proactive?

15. "In Balance mode, all the LastKnownELR members have replied." In
Proactive, we wait for all replicas within a fixed amount of time. Balance
should do the same since it's designed to preserve more data, right?

16. "The URM will query all the replicas including the fenced replicas."
Why include the fenced replicas? Could a fenced replica be elected as the
leader?

17. Once unclean.recovery.strategy is enabled, new metadata records could
be written to the metadata log. At that point, is the broker downgradable?
It would be useful to document that.

18. Since LastKnownELR can have more than 1 member, should it be
LastKnownELRs?

19. BrokerRegistration.BrokerEpoch: "The broker's assigned epoch or the
epoch before a clean shutdown." How do we tell whether the value is for the
current or the previous epoch? Does it matter?

20. DescribeTopicRequest: Who issues that request? Who can serve that
request? Is it only the controller or any broker?

21. DesiredLeaders: Does the ordering matter?

22. GetReplicaLogInfo only uses topicId while DescribeTopicRequest uses
both topicId and name. Should they be consistent?

23. --election-type: The description mentions unclean, but that option
doesn't exist. Also, could we describe what DESIGNATION means?

24. kafka-leader-election.sh has minimalReplicas, but ElectLeadersRequest
doesn't seem to have a corresponding field?

25. kafka.replication.paused_partitions_count: paused doesn't seem to match
the meaning of the metric. Should this be leaderless_partitions_count?

26. kafka.replication.unclean_recovery_partitions_count: When is it set?
Does it ever get unset?

27. "min.insync.replicas now applies to the replication of all kinds of
messages." Not sure that I follow. Could you explain a bit more?

28. "But later, the current leader will put the follower into the pending
ISR" : It would be useful to clarify this is after the network partitioning
is gone.

29. "last known leader" behavior. Our current behavior is to preserve the
last known ISR, right?

30. For all new requests, it would be useful to document the corresponding
ACL.

Jun

On Wed, Sep 6, 2023 at 11:21 AM Calvin Liu 
wrote:

> Hi Jack
> Thanks for the comment.
> I have updated the reassignment part. Now the reassignment can only be
> completed/canceled if the final ISR size is larger than min ISR.
> Thanks to your efforts of the TLA+! It has been a great help to the KIP!
>
> On Wed, Sep 6, 2023 at 6:32 AM Jack Vanlightly 
> wrote:
>
> > Hi Calvin,
> >
> > Regarding partition reassignment, I have two comments.
> >
> > I notice the KIP says "The AlterPartitionReassignments will not change
> the
> > ELR" however, when a reassignment completes (or reverts) any replicas
> > removed from the replica set would be removed from the ELR. Sounds
> obvious
> > but I figured we should be explicit about that.
> >
> > Reassignment should also respect min.insync.replicas because currently a
> > reassignment can complete as long as the ISR is not empty and all added
> > replicas are members. However, my TLA+ specification, which now includes
> > reassignment, finds single broker failures that can cause committed data
> > loss - despite the added protection of the ELR and min.insync.replicas=2.
> > These scenarios are limited to shrinking the size of the replica set. If
> we
> > modify the PartitionChangeBuilder to add the completion condition that
> the
> > target ISR >= min.insync.replicas, then that closes this last
> > single-broker-failure data loss case.
> >
> > With the above modification, the TLA+ specification of the ELR part of
> the
> > design is standing up to all safety and liveness checks. The only thing
> > that is not modeled is the unclean recovery though I may leave that as
> the
> > specification is already very 

Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.6 #28

2023-09-07 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-15416) Flaky test TopicAdminTest::retryEndOffsetsShouldRetryWhenTopicNotFound

2023-09-07 Thread Chris Egerton (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Egerton resolved KAFKA-15416.
---
Fix Version/s: 3.7.0
   Resolution: Fixed

> Flaky test TopicAdminTest::retryEndOffsetsShouldRetryWhenTopicNotFound
> --
>
> Key: KAFKA-15416
> URL: https://issues.apache.org/jira/browse/KAFKA-15416
> Project: Kafka
>  Issue Type: Test
>  Components: KafkaConnect
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Minor
> Fix For: 3.7.0
>
>
> This test fails frequently when I run unit tests locally, but I've never seen 
> it fail during a CI build.
> Failure message:
> {quote}    org.apache.kafka.connect.errors.ConnectException: Failed to list 
> offsets for topic partitions.
>         at 
> app//org.apache.kafka.connect.util.TopicAdmin.retryEndOffsets(TopicAdmin.java:777)
>         at 
> app//org.apache.kafka.connect.util.TopicAdminTest.retryEndOffsetsShouldRetryWhenTopicNotFound(TopicAdminTest.java:570)
>  
>         Caused by:
>         org.apache.kafka.connect.errors.ConnectException: Fail to list 
> offsets for topic partitions after 1 attempts.  Reason: Timed out while 
> waiting to get end offsets for topic 'myTopic' on brokers at 
> \{retry.backoff.ms=0}
>             at 
> app//org.apache.kafka.connect.util.RetryUtil.retryUntilTimeout(RetryUtil.java:106)
>             at 
> app//org.apache.kafka.connect.util.RetryUtil.retryUntilTimeout(RetryUtil.java:56)
>             at 
> app//org.apache.kafka.connect.util.TopicAdmin.retryEndOffsets(TopicAdmin.java:768)
>             ... 1 more
>  
>             Caused by:
>             org.apache.kafka.common.errors.TimeoutException: Timed out while 
> waiting to get end offsets for topic 'myTopic' on brokers at 
> \{retry.backoff.ms=0}
>  
>                 Caused by:
>                 java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TimeoutException: Timed out waiting to send 
> the call. Call: listOffsets(api=METADATA)
>                     at 
> java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
>                     at 
> java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
>                     at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
>                     at 
> org.apache.kafka.connect.util.TopicAdmin.endOffsets(TopicAdmin.java:716)
>                     at 
> org.apache.kafka.connect.util.TopicAdmin.lambda$retryEndOffsets$7(TopicAdmin.java:769)
>                     at 
> org.apache.kafka.connect.util.RetryUtil.retryUntilTimeout(RetryUtil.java:87)
>                     at 
> org.apache.kafka.connect.util.RetryUtil.retryUntilTimeout(RetryUtil.java:56)
>                     at 
> org.apache.kafka.connect.util.TopicAdmin.retryEndOffsets(TopicAdmin.java:768)
>                     at 
> org.apache.kafka.connect.util.TopicAdminTest.retryEndOffsetsShouldRetryWhenTopicNotFound(TopicAdminTest.java:570)
>  
>                     Caused by:
>                     org.apache.kafka.common.errors.TimeoutException: Timed 
> out waiting to send the call. Call: listOffsets(api=METADATA)
> {quote}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14273) Kafka doesn't start with KRaft on Windows

2023-09-07 Thread Jira


 [ 
https://issues.apache.org/jira/browse/KAFKA-14273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

José Armando García Sancio resolved KAFKA-14273.

Resolution: Fixed

> Kafka doesn't start with KRaft on Windows
> -
>
> Key: KAFKA-14273
> URL: https://issues.apache.org/jira/browse/KAFKA-14273
> Project: Kafka
>  Issue Type: Bug
>  Components: kraft
>Affects Versions: 3.3.1
>Reporter: Kedar Joshi
>Assignee: José Armando García Sancio
>Priority: Major
> Fix For: 3.6.0
>
>
> {{Basic setup doesn't work on Windows 10.}}
> *{{Steps}}*
>  * {{Initialize cluster with -}}
> {code:sh}
>     bin\windows\kafka-storage.bat random-uuid
>     bin\windows\kafka-storage.bat format -t %cluster_id% -c 
> .\config\kraft\server.properties{code}
>  
>  * Start Kafka with -
> {code:sh}
>    bin\windows\kafka-server-start.bat .\config\kraft\server.properties{code}
>  
> *Stacktrace*
> Kafka fails to start with following exception -
> {code:java}
> D:\LocationGuru\Servers\Kafka-3.3>bin\windows\kafka-server-start.bat 
> .\config\kraft\server.properties
> [2022-10-03 23:14:20,089] INFO Registered kafka:type=kafka.Log4jController 
> MBean (kafka.utils.Log4jControllerRegistration$)
> [2022-10-03 23:14:20,375] INFO Setting -D 
> jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated 
> TLS renegotiation (org.apache.zookeeper.common.X509Util)
> [2022-10-03 23:14:20,594] INFO [LogLoader partition=__cluster_metadata-0, 
> dir=D:\LocationGuru\Servers\Kafka-3.3\tmp\kraft-combined-logs] Loading 
> producer state till offset 0 with message format version 2 
> (kafka.log.UnifiedLog$)
> [2022-10-03 23:14:20,594] INFO [LogLoader partition=__cluster_metadata-0, 
> dir=D:\LocationGuru\Servers\Kafka-3.3\tmp\kraft-combined-logs] Reloading from 
> producer snapshot and rebuilding producer state from offset 0 
> (kafka.log.UnifiedLog$)
> [2022-10-03 23:14:20,594] INFO [LogLoader partition=__cluster_metadata-0, 
> dir=D:\LocationGuru\Servers\Kafka-3.3\tmp\kraft-combined-logs] Producer state 
> recovery took 0ms for snapshot load and 0ms for segment recovery from offset 
> 0 (kafka.log.UnifiedLog$)
> [2022-10-03 23:14:20,640] INFO Initialized snapshots with IDs SortedSet() 
> from 
> D:\LocationGuru\Servers\Kafka-3.3\tmp\kraft-combined-logs__cluster_metadata-0 
> (kafka.raft.KafkaMetadataLog$)
> [2022-10-03 23:14:20,734] INFO [raft-expiration-reaper]: Starting 
> (kafka.raft.TimingWheelExpirationService$ExpiredOperationReaper)
> [2022-10-03 23:14:20,900] ERROR Exiting Kafka due to fatal exception 
> (kafka.Kafka$)
> java.io.UncheckedIOException: Error while writing the Quorum status from the 
> file 
> D:\LocationGuru\Servers\Kafka-3.3\tmp\kraft-combined-logs__cluster_metadata-0\quorum-state
>         at 
> org.apache.kafka.raft.FileBasedStateStore.writeElectionStateToFile(FileBasedStateStore.java:155)
>         at 
> org.apache.kafka.raft.FileBasedStateStore.writeElectionState(FileBasedStateStore.java:128)
>         at 
> org.apache.kafka.raft.QuorumState.transitionTo(QuorumState.java:477)
>         at org.apache.kafka.raft.QuorumState.initialize(QuorumState.java:212)
>         at 
> org.apache.kafka.raft.KafkaRaftClient.initialize(KafkaRaftClient.java:369)
>         at kafka.raft.KafkaRaftManager.buildRaftClient(RaftManager.scala:200)
>         at kafka.raft.KafkaRaftManager.(RaftManager.scala:127)
>         at kafka.server.KafkaRaftServer.(KafkaRaftServer.scala:83)
>         at kafka.Kafka$.buildServer(Kafka.scala:79)
>         at kafka.Kafka$.main(Kafka.scala:87)
>         at kafka.Kafka.main(Kafka.scala)
> Caused by: java.nio.file.FileSystemException: 
> D:\LocationGuru\Servers\Kafka-3.3\tmp\kraft-combined-logs_cluster_metadata-0\quorum-state.tmp
>  -> 
> D:\LocationGuru\Servers\Kafka-3.3\tmp\kraft-combined-logs_cluster_metadata-0\quorum-state:
>  The process cannot access the file because it is being used by another 
> process
>         at 
> java.base/sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:92)
>         at 
> java.base/sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:103)
>         at java.base/sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:403)
>         at 
> java.base/sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:293)
>         at java.base/java.nio.file.Files.move(Files.java:1430)
>         at 
> org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:935)
>         at 
> org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:918)
>         at 
> org.apache.kafka.raft.FileBasedStateStore.writeElectionStateToFile(FileBasedStateStore.java:152)
>         ... 10 more
>         Suppressed: java.nio.file.FileSystemException: 
> D:\LocationGuru\Servers\Kafka-3.3\tmp\kraft-combined-l

Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.6 #29

2023-09-07 Thread Apache Jenkins Server
See 




Re: Apache Kafka 3.6.0 release

2023-09-07 Thread Satish Duggana
Hi Jose,
Thanks for looking into this issue and resolving it with a quick fix.

~Satish.

On Thu, 7 Sept 2023 at 21:40, José Armando García Sancio
 wrote:
>
> Hi Satish,
>
> On Wed, Sep 6, 2023 at 4:58 PM Satish Duggana  
> wrote:
> >
> > Hi Greg,
> > It seems https://issues.apache.org/jira/browse/KAFKA-14273 has been
> > there in 3.5.x too.
>
> I also agree that it should be a blocker for 3.6.0. It should have
> been a blocker for those previous releases. I didn't fix it because,
> unfortunately, I wasn't aware of the issue and jira.
> I'll create a PR with a fix in case the original author doesn't respond in 
> time.
>
> Satish, do you agree?
>
> Thanks!
> --
> -José


[jira] [Created] (KAFKA-15443) Upgrade RocksDB dependency

2023-09-07 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-15443:
---

 Summary: Upgrade RocksDB dependency
 Key: KAFKA-15443
 URL: https://issues.apache.org/jira/browse/KAFKA-15443
 Project: Kafka
  Issue Type: Task
  Components: streams
Reporter: Matthias J. Sax


Kafka Streams currently depends on RocksDB 7.9.2.

However, the latest version of RocksDB is already 8.5.3. We should check the
RocksDB release notes to see what benefits upgrading to the latest version
would bring (and file corresponding tickets to exploit improvements in newer
releases as applicable).
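
For reference, RocksDB ships as the org.rocksdb:rocksdbjni artifact, so the
bump itself is a one-line dependency change along these lines (illustrative
Gradle snippet; Kafka pins the version through its own build scripts):

    dependencies {
        // upgrade from 7.9.2 to the latest release noted above
        implementation "org.rocksdb:rocksdbjni:8.5.3"
    }

Most of the effort is in validating compatibility, since RocksDB major
version bumps have previously required API adjustments in Kafka Streams.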



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.6 #30

2023-09-07 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-966: Eligible Leader Replicas

2023-09-07 Thread Artem Livshits
Hi Calvin,

Thanks for the KIP.  The new ELR protocol looks good to me.  I have some
questions about unclean recovery, specifically in "balanced" mode:

1. The KIP mentions that the controller would trigger unclean recovery when
the leader is fenced, but my understanding is that when a leader is fenced,
it would get into ELR.  Would it be more precise to say that an unclean
leader election is triggered when the last member of ELR gets unfenced and
registers with unclean shutdown?
2. For balanced mode, we need replies from at least LastKnownELR, in which
case, does it make sense to start unclean recovery if some of the
LastKnownELR are fenced?
3. "The URM takes the partition info to initiate an unclear recovery task
..." the parameters are topic-partition and replica ids -- what are those?
Would those be just the whole replica assignment or just LastKnownELR?

-Artem

On Thu, Aug 10, 2023 at 3:47 PM Calvin Liu 
wrote:

> Hi everyone,
> I'd like to discuss a series of enhancement to the replication protocol.
>
> A partition replica can experience local data loss in unclean shutdown
> scenarios where unflushed data in the OS page cache is lost - such as an
> availability zone power outage or a server error. The Kafka replication
> protocol is designed to handle these situations by removing such replicas
> from the ISR and only re-adding them once they have caught up and therefore
> recovered any lost data. This prevents replicas that lost an arbitrary log
> suffix, which included committed data, from being elected leader.
> However, there is a "last replica standing" state which when combined with
> a data loss unclean shutdown event can turn a local data loss scenario into
> a global data loss scenario, i.e., committed data can be removed from all
> replicas. When the last replica in the ISR experiences an unclean shutdown
> and loses committed data, it will be reelected leader after starting up
> again, causing rejoining followers to truncate their logs and thereby
> removing the last copies of the committed records which the leader lost
> initially.
>
> The new KIP will maximize the protection and provides MinISR-1 tolerance to
> data loss unclean shutdown events.
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-966%3A+Eligible+Leader+Replicas
>


Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #2183

2023-09-07 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-939: Support Participation in 2PC

2023-09-07 Thread Artem Livshits
Hi Alex,

Thank you for your questions.

> the purpose of having broker-level transaction.two.phase.commit.enable

The thinking is that 2PC is a bit of an advanced construct so enabling 2PC
in a Kafka cluster should be an explicit decision.  If it is set to 'false'
InitProducerId (and initTransactions) would
return TRANSACTIONAL_ID_AUTHORIZATION_FAILED.

> WDYT about adding an AdminClient method that returns the state of
transaction.two.phase.commit.enable

I wonder if the client could just try to use 2PC and then handle the error
(e.g. if it needs to fall back to ordinary transactions).  This way it
could uniformly handle cases when Kafka cluster doesn't support 2PC
completely and cases when 2PC is restricted to certain users.  We could
also expose this config in describeConfigs, if the fallback approach
doesn't work for some scenarios.
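
To make the fallback concrete, a minimal sketch is below. It assumes the
config name and error mapping proposed in this KIP, which are not in any
released client:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.common.errors.TransactionalIdAuthorizationException;
    import org.apache.kafka.common.serialization.StringSerializer;

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("transactional.id", "my-app-instance-1");
    // Proposed by KIP-939; rejected by brokers where 2PC is disabled or
    // the principal lacks the new ACL.
    props.put("transaction.two.phase.commit.enable", "true");

    KafkaProducer<String, String> producer =
        new KafkaProducer<>(props, new StringSerializer(), new StringSerializer());
    try {
        producer.initTransactions();
    } catch (TransactionalIdAuthorizationException e) {
        // 2PC not available for this cluster/principal: fall back to
        // ordinary transactions with a producer that doesn't request 2PC.
    }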

-Artem


On Tue, Sep 5, 2023 at 12:45 PM Alexander Sorokoumov
 wrote:

> Hi Artem,
>
> Thanks for publishing this KIP!
>
> Can you please clarify the purpose of having broker-level
> transaction.two.phase.commit.enable config in addition to the new ACL? If
> the brokers are configured with transaction.two.phase.commit.enable=false,
> at what point will a client configured with
> transaction.two.phase.commit.enable=true fail? Will it happen at
> KafkaProducer#initTransactions?
>
> WDYT about adding an AdminClient method that returns the state of
> transaction.two.phase.commit.enable? This way, clients would know in advance
> if 2PC is enabled on the brokers.
>
> Best,
> Alex
>
> On Fri, Aug 25, 2023 at 9:40 AM Roger Hoover 
> wrote:
>
> > Other than supporting multiplexing transactional streams on a single
> > producer, I don't see how to improve it.
> >
> > On Thu, Aug 24, 2023 at 12:12 PM Artem Livshits
> >  wrote:
> >
> > > Hi Roger,
> > >
> > > Thank you for summarizing the cons.  I agree and I'm curious what would
> > be
> > > the alternatives to solve these problems better and if they can be
> > > incorporated into this proposal (or built independently in addition to
> or
> > > on top of this proposal).  E.g. one potential extension we discussed
> > > earlier in the thread could be multiplexing logical transactional
> > "streams"
> > > with a single producer.
> > >
> > > -Artem
> > >
> > > On Wed, Aug 23, 2023 at 4:50 PM Roger Hoover 
> > > wrote:
> > >
> > > > Thanks.  I like that you're moving Kafka toward supporting this
> > > dual-write
> > > > pattern.  Each use case needs to consider the tradeoffs.  You already
> > > > summarized the pros very well in the KIP.  I would summarize the cons
> > > > as follows:
> > > >
> > > > - you sacrifice availability - each write requires both DB and Kafka
> to
> > > be
> > > > available so I think your overall application availability is 1 -
> p(DB
> > is
> > > > unavailable)*p(Kafka is unavailable).
> > > > - latency will be higher and throughput lower - each write requires
> > both
> > > > writes to DB and Kafka while holding an exclusive lock in DB.
> > > > - you need to create a producer per unit of concurrency in your app
> > which
> > > > has some overhead in the app and Kafka side (number of connections,
> > poor
> > > > batching).  I assume the producers would need to be configured for
> low
> > > > latency (linger.ms=0)
> > > > - there's some complexity in managing stable transactional ids for
> each
> > > > producer/concurrency unit in your application.  With k8s deployment,
> > you
> > > > may need to switch to something like a StatefulSet that gives each
> pod
> > a
> > > > stable identity across restarts.  On top of that pod identity which
> you
> > > can
> > > > use as a prefix, you then assign unique transactional ids to each
> > > > concurrency unit (thread/goroutine).
> > > >
> > > > On Wed, Aug 23, 2023 at 12:53 PM Artem Livshits
> > > >  wrote:
> > > >
> > > > > Hi Roger,
> > > > >
> > > > > Thank you for the feedback.  You make a very good point that we
> also
> > > > > discussed internally.  Adding support for multiple concurrent
> > > > > transactions in one producer could be valuable but it seems to be a
> > > > fairly
> > > > > large and independent change that would deserve a separate KIP.  If
> > > such
> > > > > support is added we could modify 2PC functionality to incorporate
> > that.
> > > > >
> > > > > > Maybe not too bad but a bit of pain to manage these ids inside
> each
> > > > > process and across all application processes.
> > > > >
> > > > > I'm not sure if supporting multiple transactions in one producer
> > would
> > > > make
> > > > > id management simpler: we'd need to store a piece of data per
> > > > transaction,
> > > > > so whether it's N producers with a single transaction or N
> > transactions
> > > > > with a single producer, it's still roughly the same amount of data
> to
> > > > > manage.  In fact, managing transactional ids (current proposal)
> might
> > > be
> > > > > easier, because the id is controlled by the application and it
> knows
> > > how
> > > > to
> > > > > comp

Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.5 #71

2023-09-07 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 282502 lines...]
Gradle Test Run :streams:integrationTest > Gradle Test Executor 177 > 
VersionedKeyValueStoreIntegrationTest > shouldSetChangelogTopicProperties PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 177 > 
VersionedKeyValueStoreIntegrationTest > shouldRestore STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 177 > 
VersionedKeyValueStoreIntegrationTest > shouldRestore PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 177 > 
VersionedKeyValueStoreIntegrationTest > shouldPutGetAndDelete STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 177 > 
VersionedKeyValueStoreIntegrationTest > shouldPutGetAndDelete PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 177 > 
VersionedKeyValueStoreIntegrationTest > 
shouldManualUpgradeFromNonVersionedTimestampedToVersioned STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 177 > 
VersionedKeyValueStoreIntegrationTest > 
shouldManualUpgradeFromNonVersionedTimestampedToVersioned PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 177 > 
HandlingSourceTopicDeletionIntegrationTest > 
shouldThrowErrorAfterSourceTopicDeleted STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 177 > 
HandlingSourceTopicDeletionIntegrationTest > 
shouldThrowErrorAfterSourceTopicDeleted PASSED
OpenJDK 64-Bit Server VM warning: Sharing is only supported for boot loader 
classes because bootstrap classpath has been appended

Gradle Test Run :streams:integrationTest > Gradle Test Executor 177 > 
StreamsAssignmentScaleTest > testHighAvailabilityTaskAssignorLargeNumConsumers 
STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 177 > 
StreamsAssignmentScaleTest > testHighAvailabilityTaskAssignorLargeNumConsumers 
PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 177 > 
StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorLargePartitionCount STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 177 > 
StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorLargePartitionCount PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 177 > 
StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyThreadsPerClient STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 177 > 
StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyThreadsPerClient PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 177 > 
StreamsAssignmentScaleTest > testStickyTaskAssignorManyStandbys STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 177 > 
StreamsAssignmentScaleTest > testStickyTaskAssignorManyStandbys PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 177 > 
StreamsAssignmentScaleTest > testStickyTaskAssignorManyThreadsPerClient STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 177 > 
StreamsAssignmentScaleTest > testStickyTaskAssignorManyThreadsPerClient PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 177 > 
StreamsAssignmentScaleTest > testFallbackPriorTaskAssignorManyThreadsPerClient 
STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 177 > 
StreamsAssignmentScaleTest > testFallbackPriorTaskAssignorManyThreadsPerClient 
PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 177 > 
StreamsAssignmentScaleTest > testFallbackPriorTaskAssignorLargePartitionCount 
STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 177 > 
StreamsAssignmentScaleTest > testFallbackPriorTaskAssignorLargePartitionCount 
PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 177 > 
StreamsAssignmentScaleTest > testStickyTaskAssignorLargePartitionCount STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 177 > 
StreamsAssignmentScaleTest > testStickyTaskAssignorLargePartitionCount PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 177 > 
StreamsAssignmentScaleTest > testFallbackPriorTaskAssignorManyStandbys STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 177 > 
StreamsAssignmentScaleTest > testFallbackPriorTaskAssignorManyStandbys PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 177 > 
StreamsAssignmentScaleTest > testHighAvailabilityTaskAssignorManyStandbys 
STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 177 > 
StreamsAssignmentScaleTest > testHighAvailabilityTaskAssignorManyStandbys PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 177 > 
StreamsAssignmentScaleTest > testFallbackPriorTaskAssignorLargeNumConsumers 
STARTED

Gradle Test Run :streams:integrationTest > Grad

Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.3 #190

2023-09-07 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 419853 lines...]
> Task :connect:api:compileTestJava UP-TO-DATE
> Task :connect:api:testClasses UP-TO-DATE
> Task :connect:api:testJar
> Task :connect:api:testSrcJar
> Task :connect:api:publishMavenJavaPublicationToMavenLocal
> Task :connect:api:publishToMavenLocal

> Task :streams:javadoc
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:890:
 warning - Tag @link: reference not found: DefaultPartitioner
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:919:
 warning - Tag @link: reference not found: DefaultPartitioner
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:939:
 warning - Tag @link: reference not found: DefaultPartitioner
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:854:
 warning - Tag @link: reference not found: DefaultPartitioner
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:890:
 warning - Tag @link: reference not found: DefaultPartitioner
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:919:
 warning - Tag @link: reference not found: DefaultPartitioner
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:939:
 warning - Tag @link: reference not found: DefaultPartitioner
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/Produced.java:84:
 warning - Tag @link: reference not found: DefaultPartitioner
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/Produced.java:136:
 warning - Tag @link: reference not found: DefaultPartitioner
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/Produced.java:147:
 warning - Tag @link: reference not found: DefaultPartitioner
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/Repartitioned.java:101:
 warning - Tag @link: reference not found: DefaultPartitioner
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/Repartitioned.java:167:
 warning - Tag @link: reference not found: DefaultPartitioner
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/TopologyConfig.java:58:
 warning - Tag @link: missing '#': "org.apache.kafka.streams.StreamsBuilder()"
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/TopologyConfig.java:58:
 warning - Tag @link: can't find org.apache.kafka.streams.StreamsBuilder() in 
org.apache.kafka.streams.TopologyConfig
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/TopologyDescription.java:38:
 warning - Tag @link: reference not found: ProcessorContext#forward(Object, 
Object) forwards
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/query/Position.java:44:
 warning - Tag @link: can't find query(Query,
 PositionBound, boolean) in org.apache.kafka.streams.processor.StateStore
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:44:
 warning - Tag @link: can't find query(Query, PositionBound, boolean) in 
org.apache.kafka.streams.processor.StateStore
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:36:
 warning - Tag @link: can't find query(Query, PositionBound, boolean) in 
org.apache.kafka.streams.processor.StateStore
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:57:
 warning - Tag @link: can't find query(Query, PositionBound, boolean) in 
org.apache.kafka.streams.processor.StateStore
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:74:
 warning - Tag @link: can't find query(Query, PositionBound, boolean) in 
org.apache.kafka.streams.processor.StateStore
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:110:
 warning - Tag @link: reference not found: this#getResult()
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:117:
 warning - Tag @link: reference not found: th

[jira] [Created] (KAFKA-15444) KIP-974: Docker Image for GraalVM based Native Kafka Broker

2023-09-07 Thread Krishna Agarwal (Jira)
Krishna Agarwal created KAFKA-15444:
---

 Summary: KIP-974: Docker Image for GraalVM based Native Kafka 
Broker
 Key: KAFKA-15444
 URL: https://issues.apache.org/jira/browse/KAFKA-15444
 Project: Kafka
  Issue Type: New Feature
Reporter: Krishna Agarwal
Assignee: Krishna Agarwal






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15445) KIP-975: Docker Image for Apache Kafka

2023-09-07 Thread Krishna Agarwal (Jira)
Krishna Agarwal created KAFKA-15445:
---

 Summary: KIP-975: Docker Image for Apache Kafka
 Key: KAFKA-15445
 URL: https://issues.apache.org/jira/browse/KAFKA-15445
 Project: Kafka
  Issue Type: New Feature
Reporter: Krishna Agarwal
Assignee: Krishna Agarwal






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.4 #164

2023-09-07 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 526774 lines...]
> Task :streams:javadoc
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:890:
 warning - Tag @link: reference not found: DefaultPartitioner
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:919:
 warning - Tag @link: reference not found: DefaultPartitioner
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:939:
 warning - Tag @link: reference not found: DefaultPartitioner
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:854:
 warning - Tag @link: reference not found: DefaultPartitioner
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:890:
 warning - Tag @link: reference not found: DefaultPartitioner
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:919:
 warning - Tag @link: reference not found: DefaultPartitioner
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:939:
 warning - Tag @link: reference not found: DefaultPartitioner
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/Produced.java:84:
 warning - Tag @link: reference not found: DefaultPartitioner
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/Produced.java:136:
 warning - Tag @link: reference not found: DefaultPartitioner
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/Produced.java:147:
 warning - Tag @link: reference not found: DefaultPartitioner
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/Repartitioned.java:101:
 warning - Tag @link: reference not found: DefaultPartitioner
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/Repartitioned.java:167:
 warning - Tag @link: reference not found: DefaultPartitioner
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/TopologyConfig.java:62:
 warning - Tag @link: missing '#': "org.apache.kafka.streams.StreamsBuilder()"
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/TopologyConfig.java:62:
 warning - Tag @link: can't find org.apache.kafka.streams.StreamsBuilder() in 
org.apache.kafka.streams.TopologyConfig
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/TopologyDescription.java:38:
 warning - Tag @link: reference not found: ProcessorContext#forward(Object, 
Object) forwards
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/query/Position.java:44:
 warning - Tag @link: can't find query(Query,
 PositionBound, boolean) in org.apache.kafka.streams.processor.StateStore
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:109:
 warning - Tag @link: reference not found: this#getResult()
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:116:
 warning - Tag @link: reference not found: this#getFailureReason()
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:116:
 warning - Tag @link: reference not found: this#getFailureMessage()
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:154:
 warning - Tag @link: reference not found: this#isSuccess()
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:154:
 warning - Tag @link: reference not found: this#isFailure()
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:890:
 warning - Tag @link: reference not found: DefaultPartitioner
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:919:
 warning - Tag @link: reference not found: DefaultPartitioner
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:939:
 warning - Tag @link: reference not found: DefaultPartitioner
25 warnings

> Task :streams:javadocJar

> Task :clients:javadoc
/home/jenkins/workspace/Kafka_kafka_3.4/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/secured/package-info.java:21:
 warning - Tag @link: reference not found: 
org.apache.kafka.common.security.oauthbearer
/home/jenkins/workspace/Kafka_kafka_3.4/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/secured/package-

Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.6 #31

2023-09-07 Thread Apache Jenkins Server
See