Re: [DISCUSS] KIP-1005: Add EarliestLocalOffset to GetOffsetShell

2024-01-11 Thread Christo Lolov
Thank you Divij!

I have updated the KIP to explicitly state that the broker will have a
different behaviour when a timestamp of -5 is requested as part of
ListOffsets.

Best,
Christo

On Tue, 2 Jan 2024 at 11:10, Divij Vaidya  wrote:

> Thanks for the KIP Christo.
>
> The shell command that you mentioned calls ListOffsets API internally.
> Hence, I believe that we would be making a public interface change (and a
> version bump) to ListOffsetsAPI as well to include -5? If yes, can you
> please add that information to the change in public interfaces in the KIP.
>
> --
> Divij Vaidya
>
>
>
> On Tue, Nov 21, 2023 at 2:19 PM Christo Lolov 
> wrote:
>
> > Heya!
> >
> > Thanks a lot for this. I have updated the KIP to include exposing the
> > tiered-offset as well. Let me know whether the Public Interfaces section
> > needs more explanations regarding the changes needed to the OffsetSpec or
> > others.
> >
> > Best,
> > Christo
> >
> > On Tue, 21 Nov 2023 at 04:20, Satish Duggana 
> > wrote:
> >
> > > Thanks Christo for starting the discussion on the KIP.
> > >
> > > As mentioned in KAFKA-15857[1], the goal is to add new entries for
> > > local-log-start-offset and tierd-offset in OffsetSpec. This will be
> > > used in AdminClient APIs and also to be added as part of
> > > GetOffsetShell. This was also raised by Kamal in the earlier email.
> > >
> > > OffsetSpec related changes for these entries also need to be mentioned
> > > as part of the PublicInterfaces section because these are exposed to
> > > users as public APIs through Admin#listOffsets() APIs[2, 3].
> > >
> > > Please update the KIP with the above details.
> > >
> > > 1. https://issues.apache.org/jira/browse/KAFKA-15857
> > > 2.
> > >
> >
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/admin/Admin.java#L1238
> > > 3.
> > >
> >
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/admin/Admin.java#L1226
> > >
> > > ~Satish.
> > >
> > > On Mon, 20 Nov 2023 at 18:35, Kamal Chandraprakash
> > >  wrote:
> > > >
> > > > Hi Christo,
> > > >
> > > > Thanks for the KIP!
> > > >
> > > > Similar to the earliest-local-log offset, can we also expose the
> > > > highest-copied-remote-offset via
> > > > GetOffsetShell tool? This will be useful during the debugging
> session.
> > > >
> > > >
> > > > On Mon, Nov 20, 2023 at 5:38 PM Christo Lolov <
> christolo...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hello all!
> > > > >
> > > > > I would like to start a discussion for
> > > > >
> > > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1005%3A+Add+EarliestLocalOffset+to+GetOffsetShell
> > > > > .
> > > > >
> > > > > A new offset called local log start offset was introduced as part
> of
> > > > > KIP-405: Kafka Tiered Storage. KIP-1005 aims to expose this offset
> by
> > > > > changing the AdminClient and in particular the GetOffsetShell tool.
> > > > >
> > > > > I am looking forward to your suggestions for improvement!
> > > > >
> > > > > Best,
> > > > > Christo
> > > > >
> > >
> >
>


Re: [VOTE] KIP-995: Allow users to specify initial offsets while creating connectors

2024-01-11 Thread Mickael Maison
Hi Ashwin,

+1 (binding), thanks for the KIP

Mickael

On Tue, Jan 9, 2024 at 4:54 PM Chris Egerton  wrote:
>
> Thanks for the KIP! +1 (binding)
>
> On Mon, Jan 8, 2024 at 9:35 AM Ashwin  wrote:
>
> > Hi All,
> >
> > I would like to start  a vote on KIP-995.
> >
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-995%3A+Allow+users+to+specify+initial+offsets+while+creating+connectors
> >
> > Discussion thread -
> > https://lists.apache.org/thread/msorbr63scglf4484yq764v7klsj7c4j
> >
> > Thanks!
> >
> > Ashwin
> >


Re: [DISCUSS] KIP-1005: Add EarliestLocalOffset to GetOffsetShell

2024-01-11 Thread Divij Vaidya
Thank you for making the change Christo. It looks good to me.

--
Divij Vaidya



On Thu, Jan 11, 2024 at 11:19 AM Christo Lolov 
wrote:

> Thank you Divij!
>
> I have updated the KIP to explicitly state that the broker will have a
> different behaviour when a timestamp of -5 is requested as part of
> ListOffsets.
>
> Best,
> Christo
>
> On Tue, 2 Jan 2024 at 11:10, Divij Vaidya  wrote:
>
> > Thanks for the KIP Christo.
> >
> > The shell command that you mentioned calls ListOffsets API internally.
> > Hence, I believe that we would be making a public interface change (and a
> > version bump) to ListOffsetsAPI as well to include -5? If yes, can you
> > please add that information to the change in public interfaces in the
> KIP.
> >
> > --
> > Divij Vaidya
> >
> >
> >
> > On Tue, Nov 21, 2023 at 2:19 PM Christo Lolov 
> > wrote:
> >
> > > Heya!
> > >
> > > Thanks a lot for this. I have updated the KIP to include exposing the
> > > tiered-offset as well. Let me know whether the Public Interfaces
> section
> > > needs more explanations regarding the changes needed to the OffsetSpec
> or
> > > others.
> > >
> > > Best,
> > > Christo
> > >
> > > On Tue, 21 Nov 2023 at 04:20, Satish Duggana  >
> > > wrote:
> > >
> > > > Thanks Christo for starting the discussion on the KIP.
> > > >
> > > > As mentioned in KAFKA-15857[1], the goal is to add new entries for
> > > > local-log-start-offset and tierd-offset in OffsetSpec. This will be
> > > > used in AdminClient APIs and also to be added as part of
> > > > GetOffsetShell. This was also raised by Kamal in the earlier email.
> > > >
> > > > OffsetSpec related changes for these entries also need to be
> mentioned
> > > > as part of the PublicInterfaces section because these are exposed to
> > > > users as public APIs through Admin#listOffsets() APIs[2, 3].
> > > >
> > > > Please update the KIP with the above details.
> > > >
> > > > 1. https://issues.apache.org/jira/browse/KAFKA-15857
> > > > 2.
> > > >
> > >
> >
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/admin/Admin.java#L1238
> > > > 3.
> > > >
> > >
> >
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/admin/Admin.java#L1226
> > > >
> > > > ~Satish.
> > > >
> > > > On Mon, 20 Nov 2023 at 18:35, Kamal Chandraprakash
> > > >  wrote:
> > > > >
> > > > > Hi Christo,
> > > > >
> > > > > Thanks for the KIP!
> > > > >
> > > > > Similar to the earliest-local-log offset, can we also expose the
> > > > > highest-copied-remote-offset via
> > > > > GetOffsetShell tool? This will be useful during the debugging
> > session.
> > > > >
> > > > >
> > > > > On Mon, Nov 20, 2023 at 5:38 PM Christo Lolov <
> > christolo...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hello all!
> > > > > >
> > > > > > I would like to start a discussion for
> > > > > >
> > > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1005%3A+Add+EarliestLocalOffset+to+GetOffsetShell
> > > > > > .
> > > > > >
> > > > > > A new offset called local log start offset was introduced as part
> > of
> > > > > > KIP-405: Kafka Tiered Storage. KIP-1005 aims to expose this
> offset
> > by
> > > > > > changing the AdminClient and in particular the GetOffsetShell
> tool.
> > > > > >
> > > > > > I am looking forward to your suggestions for improvement!
> > > > > >
> > > > > > Best,
> > > > > > Christo
> > > > > >
> > > >
> > >
> >
>


Re: [PR] 3.7: Add blog post for Kafka 3.7 [kafka-site]

2024-01-11 Thread via GitHub


mimaison commented on code in PR #578:
URL: https://github.com/apache/kafka-site/pull/578#discussion_r1448678795


##
blog.html:
##
@@ -22,6 +22,119 @@
 
 
 Blog
+
+
+
+Apache 
Kafka 3.7.0 Release Announcement
+
+TODO: January 2024 - Stanislav Kozlovski (https://twitter.com/0xeed";>@BdKozlovski)

Review Comment:
   You also want to fix the twitter link to point to your profile 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] 3.7: Add blog post for Kafka 3.7 [kafka-site]

2024-01-11 Thread via GitHub


mimaison commented on code in PR #578:
URL: https://github.com/apache/kafka-site/pull/578#discussion_r1448691656


##
blog.html:
##
@@ -22,6 +22,119 @@
 
 
 Blog
+
+
+
+Apache 
Kafka 3.7.0 Release Announcement
+
+TODO: January 2024 - Stanislav Kozlovski (https://twitter.com/0xeed";>@BdKozlovski)
+We are proud to announce the release of Apache Kafka 3.7.0. 
This release contains many new features and improvements. This blog post will 
highlight some of the more prominent features. For a full list of changes, be 
sure to check the https://downloads.apache.org/kafka/3.7.0/RELEASE_NOTES.html";>release 
notes.
+See the https://kafka.apache.org/36/documentation.html#upgrade_3_7_0";>Upgrading 
to 3.7.0 from any version 0.8.x through 3.6.x section in the documentation 
for the list of notable changes and detailed upgrade steps.
+
+In the last release, 3.6,
+https://kafka.apache.org/documentation/#kraft_zk_migration";>the ability 
to migrate Kafka clusters from a ZooKeeper metadata system
+to a KRaft metadata system was ready for usage in 
production environments with one caveat - JBOD was not yet available for KRaft 
clusters.
+This release, 3.7, we are shipping support for JBOD in 
KRaft. (See https://cwiki.apache.org/confluence/display/KAFKA/KIP-858%3A+Handle+JBOD+broker+disk+failure+in+KRaft";>KIP-858
 for details)
+
+
+Note: ZooKeeper is marked as deprecated since the 3.5.0 
release. ZooKeeper is planned to be removed in Apache Kafka 4.0. For more 
information, please see the documentation for https://kafka.apache.org/documentation/#zk_depr";>ZooKeeper 
Deprecation
+
+
+Kafka Broker, Controller, Producer, Consumer and Admin 
Client
+
+https://cwiki.apache.org/confluence/display/KAFKA/KIP-858%3A+Handle+JBOD+broker+disk+failure+in+KRaft";>KIP-858:
+Handle JBOD broker disk failure in KRaft: Adds 
JBOD support in KRaft-based clusters.
+
+https://cwiki.apache.org/confluence/display/KAFKA/KIP-714%3A+Client+metrics+and+observability";>KIP-714:
+Client metrics and observability: Introduces a 
standardized interface to receive client metrics on the broker for better 
monitoring.
+
+https://cwiki.apache.org/confluence/display/KAFKA/KIP-1000%3A+List+Client+Metrics+Configuration+Resources";>KIP-1000:
+List Client Metrics Configuration Resources: 
Introduces a new AdminAPI to list out the client’s metric resources 
that are available in the cluster.
+
+https://cwiki.apache.org/confluence/display/KAFKA/KIP-848%3A+The+Next+Generation+of+the+Consumer+Rebalance+Protocol";>(Early
 Access) KIP-848:
+The Next Generation of the Consumer Rebalance 
Protocol: The new simplified Consumer Rebalance Protocol moves 
complexity away from the consumer and into the Group Coordinator broker and 
completely revamps the protocol to be incremental in nature.
+
+https://cwiki.apache.org/confluence/display/KAFKA/KIP-951%3A+Leader+discovery+optimisations+for+the+client";>KIP-951:
+Leader discovery optimisations for the client: 
 Optimizes the time it takes for a client to discover the new leader of 
a partition, leading to reduced end-to-end latency of produce/fetch requests in 
the presence of leadership changes (broker restarts, partition reassignments, 
etc.)
+
+https://cwiki.apache.org/confluence/display/KAFKA/KIP-975%3A+Docker+Image+for+Apache+Kafka";>KIP-975:
+Docker Image for Apache Kafka: Introduces an 
official Apache Kafka Docker image, enabling quicker testing and deployment, as 
well as onboarding of developers.
+
+https://cwiki.apache.org/confluence/display/KAFKA/KIP-580%3A+Exponential+Backoff+for+Kafka+Clients";>KIP-580:
+Exponential Backoff for Kafka Clients: Changes 
the client’s retry backoff time used for retrying failed requests from a static 
one to an exponentially-increasing one. This should help reduce slow metadata 
convergence after broker failure due to overload.
+
+https://cwiki.apache.org/confluence/display/KAFKA/KIP-963%3A+Additional+metrics+in+Tiered+Storage";>KIP-963:
+Additional metrics in Tiered Storage: 
Introduces new metrics for Tiered Storage, allow you to better monitor 
the performance of the Early Access feature
+
+htt

Re: KIP-991: Allow DropHeaders SMT to drop headers by wildcard/regexp

2024-01-11 Thread Roman Schmitz
Hi Mickael,
Hi all,

Thanks for the feedback!
I have adapted the KIP description - actually much shorter and just
reflecting the general functionality and interface/configuration changes.

Kindly let me know if you have any comments, questions, or suggestions for
this KIP!

Thanks,
Roman

Am Fr., 5. Jan. 2024 um 17:36 Uhr schrieb Mickael Maison <
mickael.mai...@gmail.com>:

> Hi Roman,
>
> Thanks for the KIP! This would be a useful improvement.
>
> Ideally you want to make a concerte proposal in the KIP instead of
> listing a series of options. Currently the KIP seems to list two
> alternatives.
>
> Also a KIP focuses on the API changes rather than on the pure
> implementation. It seems you're proposing adding a configuration to
> the DropHeaders SMT. It would be good to describe that new
> configuration. For example see KIP-911 which also added a
> configuration.
>
> Thanks,
> Mickael
>
> On Mon, Oct 16, 2023 at 9:50 AM Roman Schmitz 
> wrote:
> >
> > Hi Andrew,
> >
> > Ok, thanks for the feedback! I added a few more details and code examples
> > to explain the proposed changes.
> >
> > Thanks,
> > Roman
> >
> > Am So., 15. Okt. 2023 um 22:12 Uhr schrieb Andrew Schofield <
> > andrew_schofield_j...@outlook.com>:
> >
> > > Hi Roman,
> > > Thanks for the KIP. I think it’s an interesting idea, but I think the
> KIP
> > > document needs some
> > > more details added before it’s ready for review. For example, here’s a
> KIP
> > > in the same
> > > area which was delivered in an earlier version of Kafka. I think this
> is a
> > > good KIP to copy
> > > for a suitable level of detail and description (
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-585%3A+Filter+and+Conditional+SMTs
> > > ).
> > >
> > > Hope this helps.
> > >
> > > Thanks,
> > > Andrew
> > >
> > > > On 15 Oct 2023, at 21:02, Roman Schmitz 
> wrote:
> > > >
> > > > Hi all,
> > > >
> > > > While working with different customers I came across the case several
> > > times
> > > > that we'd like to not only explicitly remove headers by name but by
> > > pattern
> > > > / regexp. Here is a KIP for this feature!
> > > >
> > > > Please let me know if you have any comments, questions, or
> suggestions!
> > > >
> > > > https://cwiki.apache.org/confluence/x/oYtEE
> > > >
> > > > Thanks,
> > > > Roman
> > >
> > >
>


Re: Apache Kafka 3.7.0 Release

2024-01-11 Thread Luke Chen
Hi all,

There is a bug KAFKA-16101
 reporting that "Kafka
cluster will be unavailable during KRaft migration rollback".
The impact for this issue is that if brokers try to rollback to ZK mode
during KRaft migration process, there will be a period of time the cluster
is unavailable.
Since ZK migrating to KRaft feature is a production ready feature, I think
this should be addressed soon.
Do you think this is a blocker for v3.7.0?

Thanks.
Luke

On Thu, Jan 11, 2024 at 6:11 AM Stanislav Kozlovski
 wrote:

> Thanks Colin,
>
> With that, I believe we are out of blockers. I was traveling today and
> couldn't build an RC - expect one to be published tomorrow (barring any
> problems).
>
> In the meanwhile - here is a PR for the 3.7 blog post -
> https://github.com/apache/kafka-site/pull/578
>
> Best,
> Stan
>
> On Wed, Jan 10, 2024 at 12:06 AM Colin McCabe  wrote:
>
> > KAFKA-16094 has been fixed and backported to 3.7.
> >
> > Colin
> >
> >
> > On Mon, Jan 8, 2024, at 14:52, Colin McCabe wrote:
> > > On an unrelated note, I found a blocker bug related to upgrades from
> > > 3.6 (and earlier) to 3.7.
> > >
> > > The JIRA is here:
> > >   https://issues.apache.org/jira/browse/KAFKA-16094
> > >
> > > Fix here:
> > >   https://github.com/apache/kafka/pull/15153
> > >
> > > best,
> > > Colin
> > >
> > >
> > > On Mon, Jan 8, 2024, at 14:47, Colin McCabe wrote:
> > >> Hi Ismael,
> > >>
> > >> I wasn't aware of that. If we are required to publish all modules,
> then
> > >> this is working as intended.
> > >>
> > >> I am a bit curious if we've discussed why we need to publish the
> server
> > >> modules to Sonatype. Is there a discussion about the pros and cons of
> > >> this somewhere?
> > >>
> > >> regards,
> > >> Colin
> > >>
> > >> On Mon, Jan 8, 2024, at 14:09, Ismael Juma wrote:
> > >>> All modules are published to Sonatype - that's a requirement. You may
> > be
> > >>> missing the fact that `core` is published as `kafka_2.13` and
> > `kafka_2.12`.
> > >>>
> > >>> Ismael
> > >>>
> > >>> On Tue, Jan 9, 2024 at 12:00 AM Colin McCabe 
> > wrote:
> > >>>
> >  Hi Ismael,
> > 
> >  It seems like both the metadata gradle module and the server-common
> > module
> >  are getting published to Sonatype as separate artifacts, unless I'm
> >  misunderstanding something. Example:
> > 
> >  https://central.sonatype.com/search?q=kafka-server-common
> > 
> >  I don't see kafka-core getting published, but maybe other private
> >  server-side gradle modules are getting published.
> > 
> >  This seems bad. Is there a reason to publish modules that are only
> > used by
> >  the server on Sonatype?
> > 
> >  best,
> >  Colin
> > 
> > 
> >  On Mon, Jan 8, 2024, at 12:50, Ismael Juma wrote:
> >  > Hi Colin,
> >  >
> >  > I think you may have misunderstood what they mean by gradle
> > metadata -
> >  it's
> >  > not the Kafka metadata module.
> >  >
> >  > Ismael
> >  >
> >  > On Mon, Jan 8, 2024 at 9:45 PM Colin McCabe 
> > wrote:
> >  >
> >  >> Oops, hit send too soon. I see that #15127 was already merged. So
> > we
> >  >> should no longer be publishing :metadata as part of the clients
> >  artifacts,
> >  >> right?
> >  >>
> >  >> thanks,
> >  >> Colin
> >  >>
> >  >>
> >  >> On Mon, Jan 8, 2024, at 11:42, Colin McCabe wrote:
> >  >> > Hi Apporv,
> >  >> >
> >  >> > Please remove the metadata module from any artifacts published
> > for
> >  >> > clients. It is only used by the server.
> >  >> >
> >  >> > best,
> >  >> > Colin
> >  >> >
> >  >> >
> >  >> > On Sun, Jan 7, 2024, at 03:04, Apoorv Mittal wrote:
> >  >> >> Hi Colin,
> >  >> >> Thanks for the response. The only reason for asking the
> > question of
> >  >> >> publishing the metadata is because that's present in previous
> > client
> >  >> >> releases. For more context, the description of PR
> >  >> >>  holds the
> details
> > and
> >  >> waiting
> >  >> >> for the confirmation there prior to the merge.
> >  >> >>
> >  >> >> Regards,
> >  >> >> Apoorv Mittal
> >  >> >> +44 7721681581
> >  >> >>
> >  >> >>
> >  >> >> On Fri, Jan 5, 2024 at 10:22 PM Colin McCabe <
> > cmcc...@apache.org>
> >  >> wrote:
> >  >> >>
> >  >> >>> metadata is an internal gradle module. It is not used by
> > clients.
> >  So I
> >  >> >>> don't see why you would want to publish it (unless I'm
> >  misunderstanding
> >  >> >>> something).
> >  >> >>>
> >  >> >>> best,
> >  >> >>> Colin
> >  >> >>>
> >  >> >>>
> >  >> >>> On Fri, Jan 5, 2024, at 10:05, Stanislav Kozlovski wrote:
> >  >> >>> > Thanks for reporting the blockers, folks. Good job finding.
> >  >> >>> >
> >  >> >>> > I have one ask - can an

Re: [VOTE] KIP-1005: Expose EarliestLocalOffset and TieredOffset

2024-01-11 Thread Divij Vaidya
+1 (binding)

Divij Vaidya



On Tue, Dec 26, 2023 at 7:05 AM Kamal Chandraprakash <
kamal.chandraprak...@gmail.com> wrote:

> +1 (non-binding). Thanks for the KIP!
>
> --
> Kamal
>
> On Thu, Dec 21, 2023 at 2:23 PM Christo Lolov 
> wrote:
>
> > Heya all!
> >
> > KIP-1005 (
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1005%3A+Expose+EarliestLocalOffset+and+TieredOffset
> > )
> > has been open for around a month with no further comments - I would like
> to
> > start a voting round on it!
> >
> > Best,
> > Christo
> >
>


Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.7 #58

2024-01-11 Thread Apache Jenkins Server
See 




requesting permissions to contribute to Apache Kafka

2024-01-11 Thread Szymon Scharmach
Hi,

based on:
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals

I'd like to request permission to contribute to Apache Kafka.
wiki ID: theszym
jira ID: theszym



Szymon Scharmach


Re: requesting permissions to contribute to Apache Kafka

2024-01-11 Thread Mickael Maison
Hi,

I've granted you permissions in both Jira and the wiki.

Thanks,
Mickael

On Thu, Jan 11, 2024 at 2:40 PM Szymon Scharmach
 wrote:
>
> Hi,
>
> based on:
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
> 
> I'd like to request permission to contribute to Apache Kafka.
> wiki ID: theszym
> jira ID: theszym
>
>
>
> Szymon Scharmach


Re: Kafka 3.0 support Java 8

2024-01-11 Thread Devinder Saggu
Hi,

So, it means version 3.x will be supported maximum until this year 2024.

Thanks

On Wed, Jan 10, 2024 at 3:59 PM Josep Prat 
wrote:

> Hi,
> We attempt to support the last 3 non-patch versions. This would mean we
> would try to Backport security vulnerabilities to a 3.x (probably 3.8) for
> 6 to 9 months after the last release.
>
> Best,
>
> ---
> Josep Prat
> Open Source Engineering Director, aivenjosep.p...@aiven.io   |
> +491715557497 | aiven.io
> Aiven Deutschland GmbH
> Alexanderufer 3-7, 10117 Berlin
> Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen
> Amtsgericht Charlottenburg, HRB 209739 B
>
> On Wed, Jan 10, 2024, 21:43 Devinder Saggu 
> wrote:
>
> > Thanks.
> >
> > And how long Kafka 3.x will be supported.
> >
> > Thanks
> >
> > On Wed, Jan 10, 2024 at 3:40 PM Divij Vaidya 
> > wrote:
> >
> > > All versions in the 3.x series of Kafka will support Java 8.
> > >
> > > Starting Kafka 4.0, we will drop support for Java 8. Clients will
> support
> > > >= JDK 11 and other packages will support >= JDK 17. More details about
> > > Java in Kafka 4.0 can be found here:
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=284789510
> > >
> > > Does this answer your question?
> > >
> > > --
> > > Divij Vaidya
> > >
> > >
> > >
> > > On Wed, Jan 10, 2024 at 9:37 PM Devinder Saggu <
> > saggusinghsu...@gmail.com>
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > I wonder how long Kafka 3.0 can support Java 8.
> > > >
> > > > Thanks  & Regards,
> > > >
> > > > *Devinder Singh*
> > > > P *Please consider the environment before printing this email*
> > > >
> > >
> >
> >
> > --
> > Thanks  & Regards,
> >
> > *Devinder Singh*
> > P *Please consider the environment before printing this email*
> >
>


-- 
Thanks  & Regards,

*Devinder Singh*
P *Please consider the environment before printing this email*


Re: [DISCUSS] KIP-1014: Managing Unstable Metadata Versions in Apache Kafka

2024-01-11 Thread Proven Provenzano
Hi Federico,

Thank you for the suggestions. I've added them to the KIP.

--Proven

On Wed, Jan 10, 2024 at 4:16 AM Federico Valeri 
wrote:

> Hi folks,
>
> > If you use an unstable MV, you probably won't be able to upgrade your
> software. Because whenever something changes, you'll probably get
> serialization exceptions being thrown inside the controller. Fatal ones.
>
> Thanks for this clarification. I think this concrete risk should be
> highlighted in the KIP and in the "unstable.metadata.versions.enable"
> documentation.
>
> In the test plan, should we also have one system test checking that
> "features with a stable MV will never have that MV changed"?
>
> On Wed, Jan 10, 2024 at 8:16 AM Colin McCabe  wrote:
> >
> > On Tue, Jan 9, 2024, at 18:56, Proven Provenzano wrote:
> > > Hi folks,
> > >
> > > Thank you for the questions.
> > >
> > > Let me clarify about reorder first. The reorder of unstable metadata
> > > versions should be infrequent.
> >
> > Why does it need to be infrequent? We should be able to reorder unstable
> metadata versions as often as we like. There are no guarantees about
> unstable MVs.
> >
> > > The time you reorder is when a feature that
> > > requires a higher metadata version to enable becomes "production
> ready" and
> > > the features with unstable metadata versions less than the new stable
> one
> > > are moved to metadata versions greater than the new stable feature.
> When we
> > > reorder, we are always allocating a new MV and we are never reusing an
> > > existing MV even if it was also unstable. This way a developer
> upgrading
> > > their environment with a specific unstable MV might see existing
> > > functionality stop working but they won't see new MV dependent
> > > functionality magically appear. The feature set for a given unstable MV
> > > version can only decrease with reordering.
> >
> > If you use an unstable MV, you probably won't be able to upgrade your
> software. Because whenever something changes, you'll probably get
> serialization exceptions being thrown inside the controller. Fatal ones.
> >
> > Given that this is true, there's no reason to have special rules about
> what we can and can't do with unstable MVs. We can do anything.
> >
> > >
> > > How do we define "production ready" and when should we bump
> > > LATEST_PRODUCTION? I would like to define it to be the point where the
> > > feature is code complete with tests and the KIP for it is approved.
> However
> > > even with this definition if the feature later develops a major issue
> it
> > > could still block future features until the issue is fixed which is
> what we
> > > are trying to avoid here. We could be much more formal about this and
> let
> > > the release manager for a release define what is stable for a given
> release
> > > and then do the bump just after the branch is created on the branch.
> When
> > > an RC candidate is accepted, the bump would be backported. I would
> like to
> > > hear other ideas here.
> > >
> >
> > Yeah, it's an interesting question. Overall, I think developers should
> define when a feature is production ready.
> >
> > The question to ask is, "are you ready to take this feature to
> production in your workplace?" I think most developers do have a sense of
> this. Obviously bugs and mistakes can happen, but I think this standard
> would avoid most of the issues that we're trying to avoid by having
> unstable MVs in the first place.
> >
> > ELR is a good example. Nobody would have said that it was production
> ready in 3.7 ... hence it belonged (and still belongs) in an unstable MV,
> until that changes (hopefully soon :) )
> >
> > best,
> > Colin
> >
> > > --Proven
> > >
> > > On Tue, Jan 9, 2024 at 3:26 PM Colin McCabe 
> wrote:
> > >
> > >> Hi Justine,
> > >>
> > >> Yes, this is an important point to clarify. Proven can comment more,
> but
> > >> my understanding is that we can do anything to unstable metadata
> versions.
> > >> Reorder them, delete them, change them in any other way. There are no
> > >> stability guarantees. If the current text is unclear let's add more
> > >> examples of what we can do (which is anything) :)
> > >>
> > >> best,
> > >> Colin
> > >>
> > >>
> > >> On Mon, Jan 8, 2024, at 14:18, Justine Olshan wrote:
> > >> > Hey Colin,
> > >> >
> > >> > I had some offline discussions with Proven previously and it seems
> like
> > >> he
> > >> > said something different so I'm glad I brought it up here.
> > >> >
> > >> > Let's clarify if we are ok with reordering unstable metadata
> versions :)
> > >> >
> > >> > Justine
> > >> >
> > >> > On Mon, Jan 8, 2024 at 1:56 PM Colin McCabe 
> wrote:
> > >> >
> > >> >> On Mon, Jan 8, 2024, at 13:19, Justine Olshan wrote:
> > >> >> > Hey all,
> > >> >> >
> > >> >> > I was wondering how often we plan to update LATEST_PRODUCTION
> metadata
> > >> >> > version. Is this something we should do as soon as the feature is
> > >> >> complete
> > >> >> > or something we do when we are releasing kafka. When is 

Logging in Kafka

2024-01-11 Thread Clayton Wohl
If Kafka 4.0 is Java 11+, why would you use SLF4J instead of the Java
9+ logging facade System.Logger?


Re: [PROPOSAL] Add commercial support page on website

2024-01-11 Thread fpapon

Hi,

About the vendors list and neutrality, what is the policy of the 
"Powered by" page?


https://kafka.apache.org/powered-by

We can see company with logo, some are talking about their product 
(Agoora), some are offering services (Instaclustr, Aiven), and we can 
also see some that just put their logo and a link to their website 
without any explanation (GoldmanSachs).


So as I understand and after reading the text in the footer of this 
page, every company can add themselves by providing a PR right?


"Want to appear on this page?
Submit a pull request or send a quick description of your organization 
and usage to the mailing list and we'll add you."


In this case, I'm ok to say that the commercial support section in the 
"Get support" is no need as we can use this page.


regards,

François


On 10/01/2024 19:03, Kenneth Eversole wrote:

I agree with Divji here and to be more pointed. I worry that if we go down
the path of adding vendors to a list it comes off as supporting their
product, not to mention could be a huge security risk for novice users. I
would rather this be a callout to other purely open source tooling, such as
cruise control.

Divji brings up good question
1.  What value does additional of this page bring to the users of Apache
Kafka?

I think the community would be a better service to have a more synchronous
line of communication such as Slack/Discord and we call that out here. It
would be more inline with other major open source projects.

---
Kenneth Eversole

On Wed, Jan 10, 2024 at 10:30 AM Divij Vaidya 
wrote:


I don't see a need for this. What additional information does this provide
over what can be found via a quick google search?

My primary concern is that we are getting in the business of listing
vendors in the project site which brings it's own complications without
adding much additional value for users. In the spirit of being vendor
neutral, I would try to avoid this as much as possible.

So, my question to you is:
1. What value does additional of this page bring to the users of Apache
Kafka?
2. When a new PR is submitted to add a vendor, what criteria do we have to
decide whether to add them or not? If we keep a blanket criteria of
accepting all PRs, then we may end up in a situation where the llink
redirects to a phishing page or nefarious website. Hence, we might have to
at least perform some basic due diligence which adds overhead to the
resources of the community.

--
Divij Vaidya



On Wed, Jan 10, 2024 at 5:00 PM fpapon  wrote:


Hi,

After starting a first thread on this topic (
https://lists.apache.org/thread/kkox33rhtjcdr5zztq3lzj7c5s7k9wsr), I
would like to propose a PR:

https://github.com/apache/kafka-site/pull/577

The purpose of this proposal is to help users to find support for sla,
training, consulting...whatever that is not provide by the community as,
like we can already see in many ASF projects, no commercial support is
provided by the foundation. I think it could help with the adoption and

the

growth of the project because the users
need commercial support for production issues.

If the community is agree about this idea and want to move forward, I

just

add one company in the PR but everybody can add some by providing a new

PR

to complete the list. If people want me to add other you can reply to

this

thread because it will be better to have several company at the first
publication of the page.

Just provide the company-name and a short description of the service

offer

around Apache Kafka. The information must be factual and informational in
nature and not be a marketing statement.

regards,

François




--
--
François



Re: KIP-991: Allow DropHeaders SMT to drop headers by wildcard/regexp

2024-01-11 Thread Mickael Maison
Hi Roman,

Thanks for the updates, this looks much better.

Just a couple of small comments:
- The type of the field is listed as "boolean". I think it should be
string (or list)
- Should the field be named "headers.patterns" instead of
"headers.pattern" since it accepts a list of patterns?

Thanks,
Mickael

On Thu, Jan 11, 2024 at 12:56 PM Roman Schmitz  wrote:
>
> Hi Mickael,
> Hi all,
>
> Thanks for the feedback!
> I have adapted the KIP description - actually much shorter and just
> reflecting the general functionality and interface/configuration changes.
>
> Kindly let me know if you have any comments, questions, or suggestions for
> this KIP!
>
> Thanks,
> Roman
>
> Am Fr., 5. Jan. 2024 um 17:36 Uhr schrieb Mickael Maison <
> mickael.mai...@gmail.com>:
>
> > Hi Roman,
> >
> > Thanks for the KIP! This would be a useful improvement.
> >
> > Ideally you want to make a concerte proposal in the KIP instead of
> > listing a series of options. Currently the KIP seems to list two
> > alternatives.
> >
> > Also a KIP focuses on the API changes rather than on the pure
> > implementation. It seems you're proposing adding a configuration to
> > the DropHeaders SMT. It would be good to describe that new
> > configuration. For example see KIP-911 which also added a
> > configuration.
> >
> > Thanks,
> > Mickael
> >
> > On Mon, Oct 16, 2023 at 9:50 AM Roman Schmitz 
> > wrote:
> > >
> > > Hi Andrew,
> > >
> > > Ok, thanks for the feedback! I added a few more details and code examples
> > > to explain the proposed changes.
> > >
> > > Thanks,
> > > Roman
> > >
> > > Am So., 15. Okt. 2023 um 22:12 Uhr schrieb Andrew Schofield <
> > > andrew_schofield_j...@outlook.com>:
> > >
> > > > Hi Roman,
> > > > Thanks for the KIP. I think it’s an interesting idea, but I think the
> > KIP
> > > > document needs some
> > > > more details added before it’s ready for review. For example, here’s a
> > KIP
> > > > in the same
> > > > area which was delivered in an earlier version of Kafka. I think this
> > is a
> > > > good KIP to copy
> > > > for a suitable level of detail and description (
> > > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-585%3A+Filter+and+Conditional+SMTs
> > > > ).
> > > >
> > > > Hope this helps.
> > > >
> > > > Thanks,
> > > > Andrew
> > > >
> > > > > On 15 Oct 2023, at 21:02, Roman Schmitz 
> > wrote:
> > > > >
> > > > > Hi all,
> > > > >
> > > > > While working with different customers I came across the case several
> > > > times
> > > > > that we'd like to not only explicitly remove headers by name but by
> > > > pattern
> > > > > / regexp. Here is a KIP for this feature!
> > > > >
> > > > > Please let me know if you have any comments, questions, or
> > suggestions!
> > > > >
> > > > > https://cwiki.apache.org/confluence/x/oYtEE
> > > > >
> > > > > Thanks,
> > > > > Roman
> > > >
> > > >
> >


Re: [VOTE] KIP-994: Minor Enhancements to ListTransactions and DescribeTransactions APIs

2024-01-11 Thread Jason Gustafson
HI Raman,

Thanks for the KIP! +1 from me.

One small thing: we will probably have to overload the constructor for
TransactionDescription in order to add the new update time field to avoid
breaking the API. We might consider whether we need the overload to be
public or not.

Best,
Jason

On Tue, Jan 9, 2024 at 10:41 AM Justine Olshan 
wrote:

> Thanks Raman.
>
> +1 (binding) from me as well.
>
> Justine
>
> On Tue, Jan 9, 2024 at 10:12 AM Jun Rao  wrote:
>
> > Hi, Raman,
> >
> > Thanks for the KIP. +1 from me.
> >
> > Jun
> >
> > On Tue, Dec 26, 2023 at 11:32 AM Raman Verma  >
> > wrote:
> >
> > > I would like to start a Vote on KIP-994
> > >
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-994%3A+Minor+Enhancements+to+ListTransactions+and+DescribeTransactions+APIs
> > >
> >
>


Re: [DISCUSS] KIP-1014: Managing Unstable Metadata Versions in Apache Kafka

2024-01-11 Thread Proven Provenzano
We have two approaches here for how we update unstable metadata versions.

   1. The update will only increase MVs of unstable features to a value
   greater than the new stable feature. The idea is that a specific unstable
   MV may support some set of features and in the future that set is always a
   strict subset of the current set. The issue is that moving a feature to
   make way for a stable feature with a higher MV will leave holes.
   2. We are free to reorder the MV for any unstable feature. This removes
   the hole issue, but does make the unstable MVs more muddled. There isn't
   the same binary state for a MV where a feature is available or there is a
   hole.


We also have two ends of the spectrum as to when we update the stable MV.

   1. We update at release points which reduces the amount of churn of the
   unstable MVs and makes a stronger correlation between accepted features and
   stable MVs for a release but means less testing on trunk as a stable MV.
   2. We update when the developers of a feature think it is done. This
   leads to features being available for more testing in trunk but forces the
   next release to include it as stable.


I'd like more feedback from others on these two dimensions.
--Proven



On Wed, Jan 10, 2024 at 12:16 PM Justine Olshan
 wrote:

> Hmm it seems like Colin and Proven are disagreeing with whether we can swap
> unstable metadata versions.
>
> >  When we reorder, we are always allocating a new MV and we are never
> reusing an existing MV even if it was also unstable.
>
> > Given that this is true, there's no reason to have special rules about
> what we can and can't do with unstable MVs. We can do anything
>
> I don't have a strong preference either way, but I think we should agree on
> one approach.
> The benefit of reordering and reusing is that we can release features that
> are ready earlier and we have more flexibility. With the approach where we
> always create a new MV, I am concerned with having many "empty" MVs. This
> would encourage waiting until the release before we decide an incomplete
> feature is not ready and moving its MV into the future. (The
> abandoning comment I made earlier -- that is consistent with Proven's
> approach)
>
> I think the only potential issue with reordering is that it could be a bit
> confusing and *potentially *prone to errors. Note I say potentially because
> I think it depends on folks' understanding with this new unstable metadata
> version concept. I echo Federico's comments about making sure the risks are
> highlighted.
>
> Thanks,
>
> Justine
>
> On Wed, Jan 10, 2024 at 1:16 AM Federico Valeri 
> wrote:
>
> > Hi folks,
> >
> > > If you use an unstable MV, you probably won't be able to upgrade your
> > software. Because whenever something changes, you'll probably get
> > serialization exceptions being thrown inside the controller. Fatal ones.
> >
> > Thanks for this clarification. I think this concrete risk should be
> > highlighted in the KIP and in the "unstable.metadata.versions.enable"
> > documentation.
> >
> > In the test plan, should we also have one system test checking that
> > "features with a stable MV will never have that MV changed"?
> >
> > On Wed, Jan 10, 2024 at 8:16 AM Colin McCabe  wrote:
> > >
> > > On Tue, Jan 9, 2024, at 18:56, Proven Provenzano wrote:
> > > > Hi folks,
> > > >
> > > > Thank you for the questions.
> > > >
> > > > Let me clarify about reorder first. The reorder of unstable metadata
> > > > versions should be infrequent.
> > >
> > > Why does it need to be infrequent? We should be able to reorder
> unstable
> > metadata versions as often as we like. There are no guarantees about
> > unstable MVs.
> > >
> > > > The time you reorder is when a feature that
> > > > requires a higher metadata version to enable becomes "production
> > ready" and
> > > > the features with unstable metadata versions less than the new stable
> > one
> > > > are moved to metadata versions greater than the new stable feature.
> > When we
> > > > reorder, we are always allocating a new MV and we are never reusing
> an
> > > > existing MV even if it was also unstable. This way a developer
> > upgrading
> > > > their environment with a specific unstable MV might see existing
> > > > functionality stop working but they won't see new MV dependent
> > > > functionality magically appear. The feature set for a given unstable
> MV
> > > > version can only decrease with reordering.
> > >
> > > If you use an unstable MV, you probably won't be able to upgrade your
> > software. Because whenever something changes, you'll probably get
> > serialization exceptions being thrown inside the controller. Fatal ones.
> > >
> > > Given that this is true, there's no reason to have special rules about
> > what we can and can't do with unstable MVs. We can do anything.
> > >
> > > >
> > > > How do we define "production ready" and when should we bump
> > > > LATEST_PRODUCTION? I would like to define it to be the point where

Re: [PR] 3.7: Add blog post for Kafka 3.7 [kafka-site]

2024-01-11 Thread via GitHub


stanislavkozlovski commented on code in PR #578:
URL: https://github.com/apache/kafka-site/pull/578#discussion_r1449250008


##
blog.html:
##
@@ -22,6 +22,119 @@
 
 
 Blog
+
+
+
+Apache 
Kafka 3.7.0 Release Announcement
+
+TODO: January 2024 - Stanislav Kozlovski (https://twitter.com/0xeed";>@BdKozlovski)
+We are proud to announce the release of Apache Kafka 3.7.0. 
This release contains many new features and improvements. This blog post will 
highlight some of the more prominent features. For a full list of changes, be 
sure to check the https://downloads.apache.org/kafka/3.7.0/RELEASE_NOTES.html";>release 
notes.
+See the https://kafka.apache.org/36/documentation.html#upgrade_3_7_0";>Upgrading 
to 3.7.0 from any version 0.8.x through 3.6.x section in the documentation 
for the list of notable changes and detailed upgrade steps.
+
+In the last release, 3.6,
+https://kafka.apache.org/documentation/#kraft_zk_migration";>the ability 
to migrate Kafka clusters from a ZooKeeper metadata system
+to a KRaft metadata system was ready for usage in 
production environments with one caveat - JBOD was not yet available for KRaft 
clusters.
+This release, 3.7, we are shipping support for JBOD in 
KRaft. (See https://cwiki.apache.org/confluence/display/KAFKA/KIP-858%3A+Handle+JBOD+broker+disk+failure+in+KRaft";>KIP-858
 for details)
+
+
+Note: ZooKeeper is marked as deprecated since the 3.5.0 
release. ZooKeeper is planned to be removed in Apache Kafka 4.0. For more 
information, please see the documentation for https://kafka.apache.org/documentation/#zk_depr";>ZooKeeper 
Deprecation
+
+
+Kafka Broker, Controller, Producer, Consumer and Admin 
Client
+
+https://cwiki.apache.org/confluence/display/KAFKA/KIP-858%3A+Handle+JBOD+broker+disk+failure+in+KRaft";>KIP-858:
+Handle JBOD broker disk failure in KRaft: Adds 
JBOD support in KRaft-based clusters.
+
+https://cwiki.apache.org/confluence/display/KAFKA/KIP-714%3A+Client+metrics+and+observability";>KIP-714:
+Client metrics and observability: Introduces a 
standardized interface to receive client metrics on the broker for better 
monitoring.
+
+https://cwiki.apache.org/confluence/display/KAFKA/KIP-1000%3A+List+Client+Metrics+Configuration+Resources";>KIP-1000:
+List Client Metrics Configuration Resources: 
Introduces a new AdminAPI to list out the client’s metric resources 
that are available in the cluster.
+
+https://cwiki.apache.org/confluence/display/KAFKA/KIP-848%3A+The+Next+Generation+of+the+Consumer+Rebalance+Protocol";>(Early
 Access) KIP-848:
+The Next Generation of the Consumer Rebalance 
Protocol: The new simplified Consumer Rebalance Protocol moves 
complexity away from the consumer and into the Group Coordinator broker and 
completely revamps the protocol to be incremental in nature.
+
+https://cwiki.apache.org/confluence/display/KAFKA/KIP-951%3A+Leader+discovery+optimisations+for+the+client";>KIP-951:
+Leader discovery optimisations for the client: 
 Optimizes the time it takes for a client to discover the new leader of 
a partition, leading to reduced end-to-end latency of produce/fetch requests in 
the presence of leadership changes (broker restarts, partition reassignments, 
etc.)
+
+https://cwiki.apache.org/confluence/display/KAFKA/KIP-975%3A+Docker+Image+for+Apache+Kafka";>KIP-975:
+Docker Image for Apache Kafka: Introduces an 
official Apache Kafka Docker image, enabling quicker testing and deployment, as 
well as onboarding of developers.
+
+https://cwiki.apache.org/confluence/display/KAFKA/KIP-580%3A+Exponential+Backoff+for+Kafka+Clients";>KIP-580:
+Exponential Backoff for Kafka Clients: Changes 
the client’s retry backoff time used for retrying failed requests from a static 
one to an exponentially-increasing one. This should help reduce slow metadata 
convergence after broker failure due to overload.
+
+https://cwiki.apache.org/confluence/display/KAFKA/KIP-963%3A+Additional+metrics+in+Tiered+Storage";>KIP-963:
+Additional metrics in Tiered Storage: 
Introduces new metrics for Tiered Storage, allow you to better monitor 
the performance of the Early Access feature
+
+ 

[VOTE] 3.7.0 RC2

2024-01-11 Thread Stanislav Kozlovski
Hello Kafka users, developers, and client-developers,

This is the first candidate for release of Apache Kafka 3.7.0.

Note it's named "RC2" because I had a few "failed" RCs that I had
cut/uploaded but ultimately had to scrap prior to announcing due to new
blockers arriving before I could even announce them.

Further - I haven't yet been able to set up the system tests successfully.
And the integration/unit tests do have a few failures that I have to spend
time triaging. I would appreciate any help in case anyone notices any tests
failing that they're subject matters experts in. Expect me to follow up in
a day or two with more detailed analysis.

Major changes include:
- Early Access to KIP-848 - the next generation of the consumer rebalance
protocol
- KIP-858: Adding JBOD support to KRaft
- KIP-714: Observability into Client metrics via a standardized interface

Check more information in the WIP blog post:
https://github.com/apache/kafka-site/pull/578

Release notes for the 3.7.0 release:
https://home.apache.org/~stanislavkozlovski/kafka-3.7.0-rc2/RELEASE_NOTES.html

*** Please download, test and vote by Thursday, January 18, 9am PT ***

Usually these deadlines tend to be 2-3 days, but due to this being the
first RC and the tests not having ran yet, I am giving it a bit more time.

Kafka's KEYS file containing PGP keys we use to sign the release:
https://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
https://home.apache.org/~stanislavkozlovski/kafka-3.7.0-rc2/

* Docker release artifact to be voted upon:
apache/kafka:3.7.0-rc2

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/org/apache/kafka/

* Javadoc:
https://home.apache.org/~stanislavkozlovski/kafka-3.7.0-rc2/javadoc/

* Tag to be voted upon (off 3.7 branch) is the 3.7.0 tag:
https://github.com/apache/kafka/releases/tag/3.7.0-rc2

* Documentation:
https://kafka.apache.org/37/documentation.html

* Protocol:
https://kafka.apache.org/37/protocol.html

* Successful Jenkins builds for the 3.7 branch:
Unit/integration tests:
https://ci-builds.apache.org/job/Kafka/job/kafka/job/3.7/58/
There are failing tests here. I have to follow up with triaging some of the
failures and figuring out if they're actual problems or simply flakes.

System tests: https://jenkins.confluent.io/job/system-test-kafka/job/3.7/

No successful system test runs yet. I am working on getting the job to run.

* Successful Docker Image Github Actions Pipeline for 3.7 branch:
Attached are the scan_report and report_jvm output files from the Docker
Build run:
https://github.com/apache/kafka/actions/runs/7486094960/job/20375761673

And the final docker image build job - Docker Build Test Pipeline:
https://github.com/apache/kafka/actions/runs/7486178277

The image is apache/kafka:3.7.0-rc2 -
https://hub.docker.com/layers/apache/kafka/3.7.0-rc2/images/sha256-5b4707c08170d39549fbb6e2a3dbb83936a50f987c0c097f23cb26b4c210c226?context=explore

/**

Thanks,
Stanislav Kozlovski

kafka/test:test (alpine 3.18.5)
===
Total: 0 (HIGH: 0, CRITICAL: 0)



Re: [PROPOSAL] Add commercial support page on website

2024-01-11 Thread Justine Olshan
I think there is a difference between the "Powered by" page and a page for
vendors to advertise their products and services.

The idea is that the companies on that page are "powered by" Kafka. They
serve as examples of happy users of Kafka.
I don't think it is meant only as a place just for those companies to
advertise.

I'm a little confused by

> In this case, I'm ok to say that the commercial support section in the
"Get support" is no need as we can use this page.

If you plan to submit for this page, please include a description on how
your company uses Kafka.

I'm happy to hear other folks' opinions on this page as well.

Thanks,
Justine



On Thu, Jan 11, 2024 at 8:57 AM fpapon  wrote:

> Hi,
>
> About the vendors list and neutrality, what is the policy of the
> "Powered by" page?
>
> https://kafka.apache.org/powered-by
>
> We can see company with logo, some are talking about their product
> (Agoora), some are offering services (Instaclustr, Aiven), and we can
> also see some that just put their logo and a link to their website
> without any explanation (GoldmanSachs).
>
> So as I understand and after reading the text in the footer of this
> page, every company can add themselves by providing a PR right?
>
> "Want to appear on this page?
> Submit a pull request or send a quick description of your organization
> and usage to the mailing list and we'll add you."
>
> In this case, I'm ok to say that the commercial support section in the
> "Get support" is no need as we can use this page.
>
> regards,
>
> François
>
>
> On 10/01/2024 19:03, Kenneth Eversole wrote:
> > I agree with Divji here and to be more pointed. I worry that if we go
> down
> > the path of adding vendors to a list it comes off as supporting their
> > product, not to mention could be a huge security risk for novice users. I
> > would rather this be a callout to other purely open source tooling, such
> as
> > cruise control.
> >
> > Divji brings up good question
> > 1.  What value does additional of this page bring to the users of Apache
> > Kafka?
> >
> > I think the community would be a better service to have a more
> synchronous
> > line of communication such as Slack/Discord and we call that out here. It
> > would be more inline with other major open source projects.
> >
> > ---
> > Kenneth Eversole
> >
> > On Wed, Jan 10, 2024 at 10:30 AM Divij Vaidya 
> > wrote:
> >
> >> I don't see a need for this. What additional information does this
> provide
> >> over what can be found via a quick google search?
> >>
> >> My primary concern is that we are getting in the business of listing
> >> vendors in the project site which brings it's own complications without
> >> adding much additional value for users. In the spirit of being vendor
> >> neutral, I would try to avoid this as much as possible.
> >>
> >> So, my question to you is:
> >> 1. What value does additional of this page bring to the users of Apache
> >> Kafka?
> >> 2. When a new PR is submitted to add a vendor, what criteria do we have
> to
> >> decide whether to add them or not? If we keep a blanket criteria of
> >> accepting all PRs, then we may end up in a situation where the llink
> >> redirects to a phishing page or nefarious website. Hence, we might have
> to
> >> at least perform some basic due diligence which adds overhead to the
> >> resources of the community.
> >>
> >> --
> >> Divij Vaidya
> >>
> >>
> >>
> >> On Wed, Jan 10, 2024 at 5:00 PM fpapon  wrote:
> >>
> >>> Hi,
> >>>
> >>> After starting a first thread on this topic (
> >>> https://lists.apache.org/thread/kkox33rhtjcdr5zztq3lzj7c5s7k9wsr), I
> >>> would like to propose a PR:
> >>>
> >>> https://github.com/apache/kafka-site/pull/577
> >>>
> >>> The purpose of this proposal is to help users to find support for sla,
> >>> training, consulting...whatever that is not provide by the community
> as,
> >>> like we can already see in many ASF projects, no commercial support is
> >>> provided by the foundation. I think it could help with the adoption and
> >> the
> >>> growth of the project because the users
> >>> need commercial support for production issues.
> >>>
> >>> If the community is agree about this idea and want to move forward, I
> >> just
> >>> add one company in the PR but everybody can add some by providing a new
> >> PR
> >>> to complete the list. If people want me to add other you can reply to
> >> this
> >>> thread because it will be better to have several company at the first
> >>> publication of the page.
> >>>
> >>> Just provide the company-name and a short description of the service
> >> offer
> >>> around Apache Kafka. The information must be factual and informational
> in
> >>> nature and not be a marketing statement.
> >>>
> >>> regards,
> >>>
> >>> François
> >>>
> >>>
> >>>
> --
> --
> François
>
>


Re: [VOTE] KIP-1004: Enforce tasks.max property in Kafka Connect

2024-01-11 Thread Chris Egerton
Hi all,

The vote for KIP-1004 passes with the following +1 votes and no +0 or -1
votes:

- Hector Geraldino
- Mickael Maison (binding)
- Greg Harris (binding)
- Yash Mayya (binding)
- Federico Valeri

With regards to the open discussion about whether to remove the deprecated
tasks.max.enforce property in 4.0.0 or later, I've tweaked the KIP to
clearly state that it may take place in 4.0.0 but may also be delayed. A
deprecated property does not require a KIP for removal, so we have some
wiggle room should the discussion continue, especially if people feel
strongly that we should push to remove it in time for 4.0.0.

Thanks all for your votes and discussion!

Cheers,

Chris

On Fri, Jan 5, 2024 at 3:45 PM Greg Harris 
wrote:

> Hey Chris,
>
> Thanks for keeping KIP-987 in-mind.
>
> The current design of KIP-987 doesn't take tasks.max.enforce into
> account, but I think it may be possible to only allow the protocol
> upgrade when tasks.max.enforce is true if we were to try to enforce
> it. It may also be reasonable to just have a warning about it appended
> to the documentation string for tasks.max.enforce.
> I am fine with either keeping or removing it in 4.0, leaning towards
> keeping it, for the same reasons you listed above.
>
> Thanks!
> Greg
>
> On Fri, Jan 5, 2024 at 9:40 AM Chris Egerton 
> wrote:
> >
> > Hi Yash,
> >
> > Thanks for raising the possibility of a more aggressive removal schedule
> > for the tasks.max.enforce property now that it seems a 3.8.x branch is
> > likely--I was wondering if someone would bring that up!
> >
> > I think I'd prefer to err on the side of caution and give users more time
> > to adjust, since some may skip 3.8.x and upgrade to 4.0.x, 4.1.x, etc.
> > directly instead. It seems like the maintenance cost will be fairly low,
> > and with the option to programmatically require it to be set to true in
> > order to work with other features we may want to develop in the future,
> it
> > shouldn't block any progress in the meantime. Thoughts? I'd also be
> curious
> > what Greg Harris thinks about this, given that it seems relevant to
> KIP-987
> > [1].
> >
> > [1] -
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-987%3A+Connect+Static+Assignments
> >
> > Cheers,
> >
> > Chris
> >
> > On Thu, Jan 4, 2024 at 4:45 AM Federico Valeri 
> wrote:
> >
> > > Thanks! This will finally reconcile Javadoc and implementation.
> > >
> > > +1 (non binding)
> > >
> > > On Thu, Jan 4, 2024 at 6:49 AM Yash Mayya 
> wrote:
> > > >
> > > > Hi Chris,
> > > >
> > > > +1 (binding), thanks for the KIP.
> > > >
> > > > Based on discussion in other threads, it looks like the community is
> > > > aligned with having a 3.8 release before the 4.0 release so we
> should be
> > > > able to remove the 'tasks.max.enforce' connector property in 4.0
> (we'd
> > > > discussed potentially having to live with this property until 5.0 in
> this
> > > > KIP's discussion thread). Once we have confirmation of a 3.8 release,
> > > will
> > > > this KIP be updated to reflect the exact AK versions where the
> deprecated
> > > > property will be introduced and removed?
> > > >
> > > > Thanks,
> > > > Yash
> > > >
> > > > On Wed, Jan 3, 2024 at 11:37 PM Greg Harris
>  > > >
> > > > wrote:
> > > >
> > > > > Hey Chris,
> > > > >
> > > > > Thanks for the KIP! I think the aggressive default and deprecation
> > > > > schedule is the right choice for this change.
> > > > >
> > > > > +1 (binding)
> > > > >
> > > > > On Wed, Jan 3, 2024 at 9:01 AM Mickael Maison <
> > > mickael.mai...@gmail.com>
> > > > > wrote:
> > > > > >
> > > > > > Hi Chris,
> > > > > >
> > > > > > +1 (binding), thanks for the KIP
> > > > > >
> > > > > > Mickael
> > > > > >
> > > > > > On Tue, Jan 2, 2024 at 8:55 PM Hector Geraldino (BLOOMBERG/ 919
> 3RD
> > > A)
> > > > > >  wrote:
> > > > > > >
> > > > > > > +1 (non-binding)
> > > > > > >
> > > > > > > Thanks Chris!
> > > > > > >
> > > > > > > From: dev@kafka.apache.org At: 01/02/24 11:49:18 UTC-5:00To:
> > > > > dev@kafka.apache.org
> > > > > > > Subject: Re: [VOTE] KIP-1004: Enforce tasks.max property in
> Kafka
> > > > > Connect
> > > > > > >
> > > > > > > Hi all,
> > > > > > >
> > > > > > > Happy New Year! Wanted to give this a bump now that the
> holidays
> > > are
> > > > > over
> > > > > > > for a lot of us. Looking forward to people's thoughts!
> > > > > > >
> > > > > > > Cheers,
> > > > > > >
> > > > > > > Chris
> > > > > > >
> > > > > > > On Mon, Dec 4, 2023 at 10:36 AM Chris Egerton  >
> > > wrote:
> > > > > > >
> > > > > > > > Hi all,
> > > > > > > >
> > > > > > > > I'd like to call for a vote on KIP-1004, which adds
> enforcement
> > > for
> > > > > the
> > > > > > > > tasks.max connector property in Kafka Connect.
> > > > > > > >
> > > > > > > > The KIP:
> > > > > > > >
> > > > > > >
> > > > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1004%3A+Enforce+tasks.max+
> > > > > > > property+in+Kafka+Connect
> > > > > > > >
> > > > > > > > The discussion thread:
> > > >

Re: [PROPOSAL] Add commercial support page on website

2024-01-11 Thread fpapon

Hi Justine,

I'm not sure to see the difference between "happy users" and vendors 
that advertise their products in some of the company list in the 
"powered by" page.


Btw, my initial purpose of my proposal was to help user to find support 
for production stuff rather than searching in google.


I don't think this is a bad thing because this is something that already 
exist in many ASF projects like:


https://hop.apache.org/community/commercial/
https://struts.apache.org/commercial-support.html
https://directory.apache.org/commercial-support.html
https://tomee.apache.org/commercial-support.html
https://plc4x.apache.org/users/commercial-support.html
https://camel.apache.org/community/support/
https://openmeetings.apache.org/commercial-support.html
https://guacamole.apache.org/support/
https://cwiki.apache.org/confluence/display/HADOOP2/Distributions+and+Commercial+Support
https://activemq.apache.org/supporthttps://karaf.apache.org/community.html
https://netbeans.apache.org/front/main/help/commercial-support/
https://royale.apache.org/royale-commercial-support/

https://karaf.apache.org/community.html

As I understand for now, the channel for users to find production 
support is:


- The mailing list (u...@kafka.apache.org / dev@kafka.apache.org)

- The official #kafka  ASF Slack channel (may be we can add it on the 
website because I didn't find it in the website => 
https://kafka.apache.org/contact)


- Search in google for commercial support only

I can update my PR to mention only the 3 points above for the "get 
support" page if people think that having a support page make sense.


regards,

François

On 11/01/2024 19:34, Justine Olshan wrote:

I think there is a difference between the "Powered by" page and a page for
vendors to advertise their products and services.

The idea is that the companies on that page are "powered by" Kafka. They
serve as examples of happy users of Kafka.
I don't think it is meant only as a place just for those companies to
advertise.

I'm a little confused by


In this case, I'm ok to say that the commercial support section in the

"Get support" is no need as we can use this page.

If you plan to submit for this page, please include a description on how
your company uses Kafka.

I'm happy to hear other folks' opinions on this page as well.

Thanks,
Justine



On Thu, Jan 11, 2024 at 8:57 AM fpapon  wrote:


Hi,

About the vendors list and neutrality, what is the policy of the
"Powered by" page?

https://kafka.apache.org/powered-by

We can see company with logo, some are talking about their product
(Agoora), some are offering services (Instaclustr, Aiven), and we can
also see some that just put their logo and a link to their website
without any explanation (GoldmanSachs).

So as I understand and after reading the text in the footer of this
page, every company can add themselves by providing a PR right?

"Want to appear on this page?
Submit a pull request or send a quick description of your organization
and usage to the mailing list and we'll add you."

In this case, I'm ok to say that the commercial support section in the
"Get support" is no need as we can use this page.

regards,

François


On 10/01/2024 19:03, Kenneth Eversole wrote:

I agree with Divji here and to be more pointed. I worry that if we go

down

the path of adding vendors to a list it comes off as supporting their
product, not to mention could be a huge security risk for novice users. I
would rather this be a callout to other purely open source tooling, such

as

cruise control.

Divji brings up good question
1.  What value does additional of this page bring to the users of Apache
Kafka?

I think the community would be a better service to have a more

synchronous

line of communication such as Slack/Discord and we call that out here. It
would be more inline with other major open source projects.

---
Kenneth Eversole

On Wed, Jan 10, 2024 at 10:30 AM Divij Vaidya 
wrote:


I don't see a need for this. What additional information does this

provide

over what can be found via a quick google search?

My primary concern is that we are getting in the business of listing
vendors in the project site which brings it's own complications without
adding much additional value for users. In the spirit of being vendor
neutral, I would try to avoid this as much as possible.

So, my question to you is:
1. What value does additional of this page bring to the users of Apache
Kafka?
2. When a new PR is submitted to add a vendor, what criteria do we have

to

decide whether to add them or not? If we keep a blanket criteria of
accepting all PRs, then we may end up in a situation where the llink
redirects to a phishing page or nefarious website. Hence, we might have

to

at least perform some basic due diligence which adds overhead to the
resources of the community.

--
Divij Vaidya



On Wed, Jan 10, 2024 at 5:00 PM fpapon  wrote:


Hi,

After starting a first thread on this topic (
https://lists.apache.org/thread/kkox33rhtj

Re: [PROPOSAL] Add commercial support page on website

2024-01-11 Thread Chris Egerton
Hi François,

Is it an official policy of the ASF that projects provide a listing of
commercial support options for themselves? I understand that other projects
have chosen to provide one, but this doesn't necessarily imply that all
projects should do the same, and I can't say I find this point very
convincing as a rebuttal to some of the good-faith concerns raised by the
PMC and members of the community so far. However, if there's an official
ASF stance on this topic, then I acknowledge that Apache Kafka should align
with it.

Best,

Chris


On Thu, Jan 11, 2024, 14:50 fpapon  wrote:

> Hi Justine,
>
> I'm not sure to see the difference between "happy users" and vendors
> that advertise their products in some of the company list in the
> "powered by" page.
>
> Btw, my initial purpose of my proposal was to help user to find support
> for production stuff rather than searching in google.
>
> I don't think this is a bad thing because this is something that already
> exist in many ASF projects like:
>
> https://hop.apache.org/community/commercial/
> https://struts.apache.org/commercial-support.html
> https://directory.apache.org/commercial-support.html
> https://tomee.apache.org/commercial-support.html
> https://plc4x.apache.org/users/commercial-support.html
> https://camel.apache.org/community/support/
> https://openmeetings.apache.org/commercial-support.html
> https://guacamole.apache.org/support/
>
> https://cwiki.apache.org/confluence/display/HADOOP2/Distributions+and+Commercial+Support
> https://activemq.apache.org/supporthttps://karaf.apache.org/community.html
> https://netbeans.apache.org/front/main/help/commercial-support/
> https://royale.apache.org/royale-commercial-support/
>
> https://karaf.apache.org/community.html
>
> As I understand for now, the channel for users to find production
> support is:
>
> - The mailing list (u...@kafka.apache.org / dev@kafka.apache.org)
>
> - The official #kafka  ASF Slack channel (may be we can add it on the
> website because I didn't find it in the website =>
> https://kafka.apache.org/contact)
>
> - Search in google for commercial support only
>
> I can update my PR to mention only the 3 points above for the "get
> support" page if people think that having a support page make sense.
>
> regards,
>
> François
>
> On 11/01/2024 19:34, Justine Olshan wrote:
> > I think there is a difference between the "Powered by" page and a page
> for
> > vendors to advertise their products and services.
> >
> > The idea is that the companies on that page are "powered by" Kafka. They
> > serve as examples of happy users of Kafka.
> > I don't think it is meant only as a place just for those companies to
> > advertise.
> >
> > I'm a little confused by
> >
> >> In this case, I'm ok to say that the commercial support section in the
> > "Get support" is no need as we can use this page.
> >
> > If you plan to submit for this page, please include a description on how
> > your company uses Kafka.
> >
> > I'm happy to hear other folks' opinions on this page as well.
> >
> > Thanks,
> > Justine
> >
> >
> >
> > On Thu, Jan 11, 2024 at 8:57 AM fpapon  wrote:
> >
> >> Hi,
> >>
> >> About the vendors list and neutrality, what is the policy of the
> >> "Powered by" page?
> >>
> >> https://kafka.apache.org/powered-by
> >>
> >> We can see company with logo, some are talking about their product
> >> (Agoora), some are offering services (Instaclustr, Aiven), and we can
> >> also see some that just put their logo and a link to their website
> >> without any explanation (GoldmanSachs).
> >>
> >> So as I understand and after reading the text in the footer of this
> >> page, every company can add themselves by providing a PR right?
> >>
> >> "Want to appear on this page?
> >> Submit a pull request or send a quick description of your organization
> >> and usage to the mailing list and we'll add you."
> >>
> >> In this case, I'm ok to say that the commercial support section in the
> >> "Get support" is no need as we can use this page.
> >>
> >> regards,
> >>
> >> François
> >>
> >>
> >> On 10/01/2024 19:03, Kenneth Eversole wrote:
> >>> I agree with Divji here and to be more pointed. I worry that if we go
> >> down
> >>> the path of adding vendors to a list it comes off as supporting their
> >>> product, not to mention could be a huge security risk for novice
> users. I
> >>> would rather this be a callout to other purely open source tooling,
> such
> >> as
> >>> cruise control.
> >>>
> >>> Divji brings up good question
> >>> 1.  What value does additional of this page bring to the users of
> Apache
> >>> Kafka?
> >>>
> >>> I think the community would be a better service to have a more
> >> synchronous
> >>> line of communication such as Slack/Discord and we call that out here.
> It
> >>> would be more inline with other major open source projects.
> >>>
> >>> ---
> >>> Kenneth Eversole
> >>>
> >>> On Wed, Jan 10, 2024 at 10:30 AM Divij Vaidya  >
> >>> wrote:
> >>>
>  I don't see a need for this. What a

Re: [PROPOSAL] Add commercial support page on website

2024-01-11 Thread Justine Olshan
Hey François,

My point was that the companies on that page use kafka as part of their
business. If you use Kafka as part of your business feel free to submit a
PR to be added.

I second Chris's point that other projects are not enough to require Kafka
having such a support page.

Justine

On Thu, Jan 11, 2024 at 11:57 AM Chris Egerton 
wrote:

> Hi François,
>
> Is it an official policy of the ASF that projects provide a listing of
> commercial support options for themselves? I understand that other projects
> have chosen to provide one, but this doesn't necessarily imply that all
> projects should do the same, and I can't say I find this point very
> convincing as a rebuttal to some of the good-faith concerns raised by the
> PMC and members of the community so far. However, if there's an official
> ASF stance on this topic, then I acknowledge that Apache Kafka should align
> with it.
>
> Best,
>
> Chris
>
>
> On Thu, Jan 11, 2024, 14:50 fpapon  wrote:
>
> > Hi Justine,
> >
> > I'm not sure to see the difference between "happy users" and vendors
> > that advertise their products in some of the company list in the
> > "powered by" page.
> >
> > Btw, my initial purpose of my proposal was to help user to find support
> > for production stuff rather than searching in google.
> >
> > I don't think this is a bad thing because this is something that already
> > exist in many ASF projects like:
> >
> > https://hop.apache.org/community/commercial/
> > https://struts.apache.org/commercial-support.html
> > https://directory.apache.org/commercial-support.html
> > https://tomee.apache.org/commercial-support.html
> > https://plc4x.apache.org/users/commercial-support.html
> > https://camel.apache.org/community/support/
> > https://openmeetings.apache.org/commercial-support.html
> > https://guacamole.apache.org/support/
> >
> >
> https://cwiki.apache.org/confluence/display/HADOOP2/Distributions+and+Commercial+Support
> >
> https://activemq.apache.org/supporthttps://karaf.apache.org/community.html
> > https://netbeans.apache.org/front/main/help/commercial-support/
> > https://royale.apache.org/royale-commercial-support/
> >
> > https://karaf.apache.org/community.html
> >
> > As I understand for now, the channel for users to find production
> > support is:
> >
> > - The mailing list (u...@kafka.apache.org / dev@kafka.apache.org)
> >
> > - The official #kafka  ASF Slack channel (may be we can add it on the
> > website because I didn't find it in the website =>
> > https://kafka.apache.org/contact)
> >
> > - Search in google for commercial support only
> >
> > I can update my PR to mention only the 3 points above for the "get
> > support" page if people think that having a support page make sense.
> >
> > regards,
> >
> > François
> >
> > On 11/01/2024 19:34, Justine Olshan wrote:
> > > I think there is a difference between the "Powered by" page and a page
> > for
> > > vendors to advertise their products and services.
> > >
> > > The idea is that the companies on that page are "powered by" Kafka.
> They
> > > serve as examples of happy users of Kafka.
> > > I don't think it is meant only as a place just for those companies to
> > > advertise.
> > >
> > > I'm a little confused by
> > >
> > >> In this case, I'm ok to say that the commercial support section in the
> > > "Get support" is no need as we can use this page.
> > >
> > > If you plan to submit for this page, please include a description on
> how
> > > your company uses Kafka.
> > >
> > > I'm happy to hear other folks' opinions on this page as well.
> > >
> > > Thanks,
> > > Justine
> > >
> > >
> > >
> > > On Thu, Jan 11, 2024 at 8:57 AM fpapon  wrote:
> > >
> > >> Hi,
> > >>
> > >> About the vendors list and neutrality, what is the policy of the
> > >> "Powered by" page?
> > >>
> > >> https://kafka.apache.org/powered-by
> > >>
> > >> We can see company with logo, some are talking about their product
> > >> (Agoora), some are offering services (Instaclustr, Aiven), and we can
> > >> also see some that just put their logo and a link to their website
> > >> without any explanation (GoldmanSachs).
> > >>
> > >> So as I understand and after reading the text in the footer of this
> > >> page, every company can add themselves by providing a PR right?
> > >>
> > >> "Want to appear on this page?
> > >> Submit a pull request or send a quick description of your organization
> > >> and usage to the mailing list and we'll add you."
> > >>
> > >> In this case, I'm ok to say that the commercial support section in the
> > >> "Get support" is no need as we can use this page.
> > >>
> > >> regards,
> > >>
> > >> François
> > >>
> > >>
> > >> On 10/01/2024 19:03, Kenneth Eversole wrote:
> > >>> I agree with Divji here and to be more pointed. I worry that if we go
> > >> down
> > >>> the path of adding vendors to a list it comes off as supporting their
> > >>> product, not to mention could be a huge security risk for novice
> > users. I
> > >>> would rather this be a callout to other pur

Re: [PROPOSAL] Add commercial support page on website

2024-01-11 Thread fpapon

Hi Chris,

I never said that the Apache Kafka community "has to" provide this kind 
of page and it's not an official policy of the ASF.


I just listed other projects to show that this is something that already 
exist so this is potentially something that could be good for the 
community of an ASF project.


My proposal is just to help the project to growth and to help users to 
find production support because this is not the purpose of the ASF.


If the PMC and members of the community are not agree and think this is 
a bad thing for the project, I'm ok with that and I will close my PR.


regards,

François

On 11/01/2024 20:56, Chris Egerton wrote:

Hi François,

Is it an official policy of the ASF that projects provide a listing of
commercial support options for themselves? I understand that other projects
have chosen to provide one, but this doesn't necessarily imply that all
projects should do the same, and I can't say I find this point very
convincing as a rebuttal to some of the good-faith concerns raised by the
PMC and members of the community so far. However, if there's an official
ASF stance on this topic, then I acknowledge that Apache Kafka should align
with it.

Best,

Chris


On Thu, Jan 11, 2024, 14:50 fpapon  wrote:


Hi Justine,

I'm not sure to see the difference between "happy users" and vendors
that advertise their products in some of the company list in the
"powered by" page.

Btw, my initial purpose of my proposal was to help user to find support
for production stuff rather than searching in google.

I don't think this is a bad thing because this is something that already
exist in many ASF projects like:

https://hop.apache.org/community/commercial/
https://struts.apache.org/commercial-support.html
https://directory.apache.org/commercial-support.html
https://tomee.apache.org/commercial-support.html
https://plc4x.apache.org/users/commercial-support.html
https://camel.apache.org/community/support/
https://openmeetings.apache.org/commercial-support.html
https://guacamole.apache.org/support/

https://cwiki.apache.org/confluence/display/HADOOP2/Distributions+and+Commercial+Support
https://activemq.apache.org/supporthttps://karaf.apache.org/community.html
https://netbeans.apache.org/front/main/help/commercial-support/
https://royale.apache.org/royale-commercial-support/

https://karaf.apache.org/community.html

As I understand for now, the channel for users to find production
support is:

- The mailing list (u...@kafka.apache.org / dev@kafka.apache.org)

- The official #kafka  ASF Slack channel (may be we can add it on the
website because I didn't find it in the website =>
https://kafka.apache.org/contact)

- Search in google for commercial support only

I can update my PR to mention only the 3 points above for the "get
support" page if people think that having a support page make sense.

regards,

François

On 11/01/2024 19:34, Justine Olshan wrote:

I think there is a difference between the "Powered by" page and a page

for

vendors to advertise their products and services.

The idea is that the companies on that page are "powered by" Kafka. They
serve as examples of happy users of Kafka.
I don't think it is meant only as a place just for those companies to
advertise.

I'm a little confused by


In this case, I'm ok to say that the commercial support section in the

"Get support" is no need as we can use this page.

If you plan to submit for this page, please include a description on how
your company uses Kafka.

I'm happy to hear other folks' opinions on this page as well.

Thanks,
Justine



On Thu, Jan 11, 2024 at 8:57 AM fpapon  wrote:


Hi,

About the vendors list and neutrality, what is the policy of the
"Powered by" page?

https://kafka.apache.org/powered-by

We can see company with logo, some are talking about their product
(Agoora), some are offering services (Instaclustr, Aiven), and we can
also see some that just put their logo and a link to their website
without any explanation (GoldmanSachs).

So as I understand and after reading the text in the footer of this
page, every company can add themselves by providing a PR right?

"Want to appear on this page?
Submit a pull request or send a quick description of your organization
and usage to the mailing list and we'll add you."

In this case, I'm ok to say that the commercial support section in the
"Get support" is no need as we can use this page.

regards,

François


On 10/01/2024 19:03, Kenneth Eversole wrote:

I agree with Divji here and to be more pointed. I worry that if we go

down

the path of adding vendors to a list it comes off as supporting their
product, not to mention could be a huge security risk for novice

users. I

would rather this be a callout to other purely open source tooling,

such

as

cruise control.

Divji brings up good question
1.  What value does additional of this page bring to the users of

Apache

Kafka?

I think the community would be a better service to have a more

synchronous

line of communica

Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #2562

2024-01-11 Thread Apache Jenkins Server
See 




Re: [PROPOSAL] Add commercial support page on website

2024-01-11 Thread Francois Papon

Hi Justine,

You're right, Kafka is a part of my business (training, consulting, 
architecture design, sla...) and most of the time, users/customers said 
that it was hard for them to find a commercial support (in France for my 
case) after searching on the Kafka website (Google didn't help them).


As an ASF member and PMC of several ASF projects, I know that this kind 
of page exist so this is why I made this proposal for the Kafka project 
because I really think that it can help users.


As you suggest, I can submit a PR to be added on the "powered by" page.

Thanks,

François

On 11/01/2024 21:00, Justine Olshan wrote:

Hey François,

My point was that the companies on that page use kafka as part of their
business. If you use Kafka as part of your business feel free to submit a
PR to be added.

I second Chris's point that other projects are not enough to require Kafka
having such a support page.

Justine

On Thu, Jan 11, 2024 at 11:57 AM Chris Egerton 
wrote:


Hi François,

Is it an official policy of the ASF that projects provide a listing of
commercial support options for themselves? I understand that other projects
have chosen to provide one, but this doesn't necessarily imply that all
projects should do the same, and I can't say I find this point very
convincing as a rebuttal to some of the good-faith concerns raised by the
PMC and members of the community so far. However, if there's an official
ASF stance on this topic, then I acknowledge that Apache Kafka should align
with it.

Best,

Chris


On Thu, Jan 11, 2024, 14:50 fpapon  wrote:


Hi Justine,

I'm not sure to see the difference between "happy users" and vendors
that advertise their products in some of the company list in the
"powered by" page.

Btw, my initial purpose of my proposal was to help user to find support
for production stuff rather than searching in google.

I don't think this is a bad thing because this is something that already
exist in many ASF projects like:

https://hop.apache.org/community/commercial/
https://struts.apache.org/commercial-support.html
https://directory.apache.org/commercial-support.html
https://tomee.apache.org/commercial-support.html
https://plc4x.apache.org/users/commercial-support.html
https://camel.apache.org/community/support/
https://openmeetings.apache.org/commercial-support.html
https://guacamole.apache.org/support/



https://cwiki.apache.org/confluence/display/HADOOP2/Distributions+and+Commercial+Support
https://activemq.apache.org/supporthttps://karaf.apache.org/community.html

https://netbeans.apache.org/front/main/help/commercial-support/
https://royale.apache.org/royale-commercial-support/

https://karaf.apache.org/community.html

As I understand for now, the channel for users to find production
support is:

- The mailing list (u...@kafka.apache.org / dev@kafka.apache.org)

- The official #kafka  ASF Slack channel (may be we can add it on the
website because I didn't find it in the website =>
https://kafka.apache.org/contact)

- Search in google for commercial support only

I can update my PR to mention only the 3 points above for the "get
support" page if people think that having a support page make sense.

regards,

François

On 11/01/2024 19:34, Justine Olshan wrote:

I think there is a difference between the "Powered by" page and a page

for

vendors to advertise their products and services.

The idea is that the companies on that page are "powered by" Kafka.

They

serve as examples of happy users of Kafka.
I don't think it is meant only as a place just for those companies to
advertise.

I'm a little confused by


In this case, I'm ok to say that the commercial support section in the

"Get support" is no need as we can use this page.

If you plan to submit for this page, please include a description on

how

your company uses Kafka.

I'm happy to hear other folks' opinions on this page as well.

Thanks,
Justine



On Thu, Jan 11, 2024 at 8:57 AM fpapon  wrote:


Hi,

About the vendors list and neutrality, what is the policy of the
"Powered by" page?

https://kafka.apache.org/powered-by

We can see company with logo, some are talking about their product
(Agoora), some are offering services (Instaclustr, Aiven), and we can
also see some that just put their logo and a link to their website
without any explanation (GoldmanSachs).

So as I understand and after reading the text in the footer of this
page, every company can add themselves by providing a PR right?

"Want to appear on this page?
Submit a pull request or send a quick description of your organization
and usage to the mailing list and we'll add you."

In this case, I'm ok to say that the commercial support section in the
"Get support" is no need as we can use this page.

regards,

François


On 10/01/2024 19:03, Kenneth Eversole wrote:

I agree with Divji here and to be more pointed. I worry that if we go

down

the path of adding vendors to a list it comes off as supporting their
product, not to mention could be a huge security ris

[jira] [Created] (KAFKA-16115) AsyncKafkaConsumer: Add missing heartbeat metrics

2024-01-11 Thread Philip Nee (Jira)
Philip Nee created KAFKA-16115:
--

 Summary: AsyncKafkaConsumer: Add missing heartbeat metrics
 Key: KAFKA-16115
 URL: https://issues.apache.org/jira/browse/KAFKA-16115
 Project: Kafka
  Issue Type: Improvement
  Components: consumer, metrics
Reporter: Philip Nee
Assignee: Philip Nee


The following metrics are missing:
|[heartbeat-rate|https://docs.confluent.io/platform/current/kafka/monitoring.html#heartbeat-rate]|
|[heartbeat-response-time-max|https://docs.confluent.io/platform/current/kafka/monitoring.html#heartbeat-response-time-max]|
|[heartbeat-total|https://docs.confluent.io/platform/current/kafka/monitoring.html#heartbeat-total]|
|[last-heartbeat-seconds-ago|https://docs.confluent.io/platform/current/kafka/monitoring.html#last-heartbeat-seconds-ago]|
|[last-rebalance-seconds-ago|https://docs.confluent.io/platform/current/kafka/monitoring.html#last-rebalance-seconds-ago]|



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16116) AsyncKafkaConsumer: Add missing rebalance metrics

2024-01-11 Thread Philip Nee (Jira)
Philip Nee created KAFKA-16116:
--

 Summary: AsyncKafkaConsumer: Add missing rebalance metrics
 Key: KAFKA-16116
 URL: https://issues.apache.org/jira/browse/KAFKA-16116
 Project: Kafka
  Issue Type: Improvement
Reporter: Philip Nee
Assignee: Philip Nee


The following metrics are missing:
|[rebalance-latency-avg|https://docs.confluent.io/platform/current/kafka/monitoring.html#rebalance-latency-avg]|
|[rebalance-latency-max|https://docs.confluent.io/platform/current/kafka/monitoring.html#rebalance-latency-max]|
|[rebalance-latency-total|https://docs.confluent.io/platform/current/kafka/monitoring.html#rebalance-latency-total]|
|[rebalance-rate-per-hour|https://docs.confluent.io/platform/current/kafka/monitoring.html#rebalance-rate-per-hour]|
|[rebalance-total|https://docs.confluent.io/platform/current/kafka/monitoring.html#rebalance-total]|
|[failed-rebalance-rate-per-hour|https://docs.confluent.io/platform/current/kafka/monitoring.html#failed-rebalance-rate-per-hour]|
|[failed-rebalance-total|https://docs.confluent.io/platform/current/kafka/monitoring.html#failed-rebalance-total]|



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-1014: Managing Unstable Metadata Versions in Apache Kafka

2024-01-11 Thread Artem Livshits
Hi Proven,

I'd say that we should do 2 & 2.  The idea is that for small features that
can be done and stabilized within a short period of time (with one or very
few commits) that's exactly what happens -- people interested in testing
in-progress feature could take unstable code from a patch (or private
branch / fork) with the expectation that that private code could create a
state that will not be compatible with anything (or may be completely
broken for that matter -- in the end of the day it's a functionality that
may not be fully tested or even fully implemented); and once the feature is
stable it goes to trunk it is fully committed there, if the bugs are found
they'd get fixed "forward".  The 2 & 2 option pretty much extends this to
large features -- if a feature is above stable MV, then going above it is
like getting some in-progress code for early testing with the expectation
that something may not fully work or leave system in upgradable state;
promoting a feature into a state MV would come with the expectation that
the feature gets fully committed and any bugs will be fixed "forward".

-Artem

On Thu, Jan 11, 2024 at 10:16 AM Proven Provenzano
 wrote:

> We have two approaches here for how we update unstable metadata versions.
>
>1. The update will only increase MVs of unstable features to a value
>greater than the new stable feature. The idea is that a specific
> unstable
>MV may support some set of features and in the future that set is
> always a
>strict subset of the current set. The issue is that moving a feature to
>make way for a stable feature with a higher MV will leave holes.
>2. We are free to reorder the MV for any unstable feature. This removes
>the hole issue, but does make the unstable MVs more muddled. There isn't
>the same binary state for a MV where a feature is available or there is
> a
>hole.
>
>
> We also have two ends of the spectrum as to when we update the stable MV.
>
>1. We update at release points which reduces the amount of churn of the
>unstable MVs and makes a stronger correlation between accepted features
> and
>stable MVs for a release but means less testing on trunk as a stable MV.
>2. We update when the developers of a feature think it is done. This
>leads to features being available for more testing in trunk but forces
> the
>next release to include it as stable.
>
>
> I'd like more feedback from others on these two dimensions.
> --Proven
>
>
>
> On Wed, Jan 10, 2024 at 12:16 PM Justine Olshan
>  wrote:
>
> > Hmm it seems like Colin and Proven are disagreeing with whether we can
> swap
> > unstable metadata versions.
> >
> > >  When we reorder, we are always allocating a new MV and we are never
> > reusing an existing MV even if it was also unstable.
> >
> > > Given that this is true, there's no reason to have special rules about
> > what we can and can't do with unstable MVs. We can do anything
> >
> > I don't have a strong preference either way, but I think we should agree
> on
> > one approach.
> > The benefit of reordering and reusing is that we can release features
> that
> > are ready earlier and we have more flexibility. With the approach where
> we
> > always create a new MV, I am concerned with having many "empty" MVs. This
> > would encourage waiting until the release before we decide an incomplete
> > feature is not ready and moving its MV into the future. (The
> > abandoning comment I made earlier -- that is consistent with Proven's
> > approach)
> >
> > I think the only potential issue with reordering is that it could be a
> bit
> > confusing and *potentially *prone to errors. Note I say potentially
> because
> > I think it depends on folks' understanding with this new unstable
> metadata
> > version concept. I echo Federico's comments about making sure the risks
> are
> > highlighted.
> >
> > Thanks,
> >
> > Justine
> >
> > On Wed, Jan 10, 2024 at 1:16 AM Federico Valeri 
> > wrote:
> >
> > > Hi folks,
> > >
> > > > If you use an unstable MV, you probably won't be able to upgrade your
> > > software. Because whenever something changes, you'll probably get
> > > serialization exceptions being thrown inside the controller. Fatal
> ones.
> > >
> > > Thanks for this clarification. I think this concrete risk should be
> > > highlighted in the KIP and in the "unstable.metadata.versions.enable"
> > > documentation.
> > >
> > > In the test plan, should we also have one system test checking that
> > > "features with a stable MV will never have that MV changed"?
> > >
> > > On Wed, Jan 10, 2024 at 8:16 AM Colin McCabe 
> wrote:
> > > >
> > > > On Tue, Jan 9, 2024, at 18:56, Proven Provenzano wrote:
> > > > > Hi folks,
> > > > >
> > > > > Thank you for the questions.
> > > > >
> > > > > Let me clarify about reorder first. The reorder of unstable
> metadata
> > > > > versions should be infrequent.
> > > >
> > > > Why does it need to be infrequent? We should be able to reorder
> > unstable
> > > metadat

Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.7 #59

2024-01-11 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-16117) Add Integration test for checking if the correct assignor is chosen

2024-01-11 Thread Ritika Reddy (Jira)
Ritika Reddy created KAFKA-16117:


 Summary: Add Integration test for checking if the correct assignor 
is chosen
 Key: KAFKA-16117
 URL: https://issues.apache.org/jira/browse/KAFKA-16117
 Project: Kafka
  Issue Type: Sub-task
Reporter: Ritika Reddy


h4.  We are trying to test this section of the KIP-848
h4. Assignor Selection

The group coordinator has to determine which assignment strategy must be used 
for the group. The group's members may not have exactly the same assignors at 
any given point in time - e.g. they may migrate from an assignor to another one 
for instance. The group coordinator will chose the assignor as follow:
 * A client side assignor is used if possible. This means that a client side 
assignor must be supported by all the members. If multiple are, it will respect 
the precedence defined by the members when they advertise their supported 
client side assignors.
 * A server side assignor is used otherwise. If multiple server side assignors 
are specified in the group, the group coordinator uses the most common one. If 
a member does not provide an assignor, the group coordinator will default to 
the first one in {{{}group.consumer.assignors{}}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] KIP-1005: Expose EarliestLocalOffset and TieredOffset

2024-01-11 Thread Satish Duggana
+1 (binding)

Thanks,
Satish.

On Thu, 11 Jan 2024 at 17:52, Divij Vaidya  wrote:
>
> +1 (binding)
>
> Divij Vaidya
>
>
>
> On Tue, Dec 26, 2023 at 7:05 AM Kamal Chandraprakash <
> kamal.chandraprak...@gmail.com> wrote:
>
> > +1 (non-binding). Thanks for the KIP!
> >
> > --
> > Kamal
> >
> > On Thu, Dec 21, 2023 at 2:23 PM Christo Lolov 
> > wrote:
> >
> > > Heya all!
> > >
> > > KIP-1005 (
> > >
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-1005%3A+Expose+EarliestLocalOffset+and+TieredOffset
> > > )
> > > has been open for around a month with no further comments - I would like
> > to
> > > start a voting round on it!
> > >
> > > Best,
> > > Christo
> > >
> >


Re: [VOTE] KIP-1005: Expose EarliestLocalOffset and TieredOffset

2024-01-11 Thread Boudjelda Mohamed Said
+1 (binding)


On Fri, Jan 12, 2024 at 1:21 AM Satish Duggana 
wrote:

> +1 (binding)
>
> Thanks,
> Satish.
>
> On Thu, 11 Jan 2024 at 17:52, Divij Vaidya 
> wrote:
> >
> > +1 (binding)
> >
> > Divij Vaidya
> >
> >
> >
> > On Tue, Dec 26, 2023 at 7:05 AM Kamal Chandraprakash <
> > kamal.chandraprak...@gmail.com> wrote:
> >
> > > +1 (non-binding). Thanks for the KIP!
> > >
> > > --
> > > Kamal
> > >
> > > On Thu, Dec 21, 2023 at 2:23 PM Christo Lolov 
> > > wrote:
> > >
> > > > Heya all!
> > > >
> > > > KIP-1005 (
> > > >
> > > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1005%3A+Expose+EarliestLocalOffset+and+TieredOffset
> > > > )
> > > > has been open for around a month with no further comments - I would
> like
> > > to
> > > > start a voting round on it!
> > > >
> > > > Best,
> > > > Christo
> > > >
> > >
>


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2563

2024-01-11 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-15760) org.apache.kafka.trogdor.coordinator.CoordinatorTest.testTaskRequestWithOldStartMsGetsUpdated is flaky

2024-01-11 Thread Greg Harris (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Harris resolved KAFKA-15760.
-
Fix Version/s: 3.8.0
 Assignee: David Mao
   Resolution: Fixed

> org.apache.kafka.trogdor.coordinator.CoordinatorTest.testTaskRequestWithOldStartMsGetsUpdated
>  is flaky
> --
>
> Key: KAFKA-15760
> URL: https://issues.apache.org/jira/browse/KAFKA-15760
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Reporter: Calvin Liu
>Assignee: David Mao
>Priority: Major
>  Labels: flaky-test
> Fix For: 3.8.0
>
>
> Build / JDK 17 and Scala 2.13 / testTaskRequestWithOldStartMsGetsUpdated() – 
> org.apache.kafka.trogdor.coordinator.CoordinatorTest
> {code:java}
> java.util.concurrent.TimeoutException: 
> testTaskRequestWithOldStartMsGetsUpdated() timed out after 12 
> milliseconds at 
> org.junit.jupiter.engine.extension.TimeoutExceptionFactory.create(TimeoutExceptionFactory.java:29)
>at 
> org.junit.jupiter.engine.extension.SameThreadTimeoutInvocation.proceed(SameThreadTimeoutInvocation.java:58)
>   at 
> org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
>  at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:147)
>at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:86)
> at 
> org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
>  at 
> org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
>   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
>  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
> at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
>  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
>  at 
> org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
>at 
> org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
>at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:218)
> at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
>  {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-1005: Add EarliestLocalOffset to GetOffsetShell

2024-01-11 Thread Luke Chen
Hi Christo,

Thanks for the KIP!
One question:

What will the offset return if tiered storage is disabled?
For "-4 or earliest-local", it should be the same as "-2 or earliest",
right?
For "-5 or latest-tiered", it will be...0?

I think the result should be written in the KIP (or script help text)
explicitly.

Thanks.
Luke

On Thu, Jan 11, 2024 at 6:54 PM Divij Vaidya 
wrote:

> Thank you for making the change Christo. It looks good to me.
>
> --
> Divij Vaidya
>
>
>
> On Thu, Jan 11, 2024 at 11:19 AM Christo Lolov 
> wrote:
>
> > Thank you Divij!
> >
> > I have updated the KIP to explicitly state that the broker will have a
> > different behaviour when a timestamp of -5 is requested as part of
> > ListOffsets.
> >
> > Best,
> > Christo
> >
> > On Tue, 2 Jan 2024 at 11:10, Divij Vaidya 
> wrote:
> >
> > > Thanks for the KIP Christo.
> > >
> > > The shell command that you mentioned calls ListOffsets API internally.
> > > Hence, I believe that we would be making a public interface change
> (and a
> > > version bump) to ListOffsetsAPI as well to include -5? If yes, can you
> > > please add that information to the change in public interfaces in the
> > KIP.
> > >
> > > --
> > > Divij Vaidya
> > >
> > >
> > >
> > > On Tue, Nov 21, 2023 at 2:19 PM Christo Lolov 
> > > wrote:
> > >
> > > > Heya!
> > > >
> > > > Thanks a lot for this. I have updated the KIP to include exposing the
> > > > tiered-offset as well. Let me know whether the Public Interfaces
> > section
> > > > needs more explanations regarding the changes needed to the
> OffsetSpec
> > or
> > > > others.
> > > >
> > > > Best,
> > > > Christo
> > > >
> > > > On Tue, 21 Nov 2023 at 04:20, Satish Duggana <
> satish.dugg...@gmail.com
> > >
> > > > wrote:
> > > >
> > > > > Thanks Christo for starting the discussion on the KIP.
> > > > >
> > > > > As mentioned in KAFKA-15857[1], the goal is to add new entries for
> > > > > local-log-start-offset and tierd-offset in OffsetSpec. This will be
> > > > > used in AdminClient APIs and also to be added as part of
> > > > > GetOffsetShell. This was also raised by Kamal in the earlier email.
> > > > >
> > > > > OffsetSpec related changes for these entries also need to be
> > mentioned
> > > > > as part of the PublicInterfaces section because these are exposed
> to
> > > > > users as public APIs through Admin#listOffsets() APIs[2, 3].
> > > > >
> > > > > Please update the KIP with the above details.
> > > > >
> > > > > 1. https://issues.apache.org/jira/browse/KAFKA-15857
> > > > > 2.
> > > > >
> > > >
> > >
> >
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/admin/Admin.java#L1238
> > > > > 3.
> > > > >
> > > >
> > >
> >
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/admin/Admin.java#L1226
> > > > >
> > > > > ~Satish.
> > > > >
> > > > > On Mon, 20 Nov 2023 at 18:35, Kamal Chandraprakash
> > > > >  wrote:
> > > > > >
> > > > > > Hi Christo,
> > > > > >
> > > > > > Thanks for the KIP!
> > > > > >
> > > > > > Similar to the earliest-local-log offset, can we also expose the
> > > > > > highest-copied-remote-offset via
> > > > > > GetOffsetShell tool? This will be useful during the debugging
> > > session.
> > > > > >
> > > > > >
> > > > > > On Mon, Nov 20, 2023 at 5:38 PM Christo Lolov <
> > > christolo...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > Hello all!
> > > > > > >
> > > > > > > I would like to start a discussion for
> > > > > > >
> > > > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1005%3A+Add+EarliestLocalOffset+to+GetOffsetShell
> > > > > > > .
> > > > > > >
> > > > > > > A new offset called local log start offset was introduced as
> part
> > > of
> > > > > > > KIP-405: Kafka Tiered Storage. KIP-1005 aims to expose this
> > offset
> > > by
> > > > > > > changing the AdminClient and in particular the GetOffsetShell
> > tool.
> > > > > > >
> > > > > > > I am looking forward to your suggestions for improvement!
> > > > > > >
> > > > > > > Best,
> > > > > > > Christo
> > > > > > >
> > > > >
> > > >
> > >
> >
>


Re: DISCUSS KIP-1011: Use incrementalAlterConfigs when updating broker configs by kafka-configs.sh

2024-01-11 Thread ziming deng
Thank you for your clarification, Chris,

I have spent some time to review KIP-894 and I think it's automatic way is 
better and bring no side effect, and I will also adopt this way here.
As you mentioned, the changes in semantics is minor, the most important reason 
for this change is fixing bug brought by sensitive configs.


>  We
> don't appear to support appending/subtracting from list properties via the
> CLI for any other entity type right now,
You are right about this, I tried and found that we can’t subtract or append 
configs, I will change the KIP to "making way for appending/subtracting list 
properties"

--
Best,
Ziming

> On Jan 6, 2024, at 01:34, Chris Egerton  wrote:
> 
> Hi all,
> 
> Can we clarify any changes in the user-facing semantics for the CLI tool
> that would come about as a result of this KIP? I think the debate over the
> necessity of an opt-in flag, or waiting for 4.0.0, ultimately boils down to
> this.
> 
> My understanding is that the only changes in semantics are fairly minor
> (semantic versioning pun intended):
> 
> - Existing sensitive broker properties no longer have to be explicitly
> specified on the command line if they're not being changed
> - A small race condition is fixed where the broker config is updated by a
> separate operation in between when the CLI reads the existing broker config
> and writes the new broker config
> - Usage of a new broker API that has been supported since version 2.3.0,
> but which does not require any new ACLs and does not act any differently
> apart from the two small changes noted above
> 
> If this is correct, then I'm inclined to agree with Ismael's suggestion of
> starting with incrementalAlterConfigs, and falling back on alterConfigs if
> the former is not supported by the broker, and do not believe it's
> necessary to wait for 4.0.0 or provide opt-in or opt-out flags to release
> this change. This would also be similar to changes we made to MirrorMaker 2
> in KIP-894 [1], where the default behavior for syncing topic configs is now
> to start with incrementalAlterConfigs and fall back on alterConfigs if it's
> not supported.
> 
> If there are other, more significant changes to the user-facing semantics
> for the CLI, then these should be called out here and in the KIP, and we
> might consider a more cautious approach.
> 
> [1] -
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-894%3A+Use+incrementalAlterConfigs+API+for+syncing+topic+configurations
> 
> 
> Also, regarding this part of the KIP:
> 
>> incrementalAlterConfigs is more convenient especially for updating
> configs of list data type, such as "leader.replication.throttled.replicas"
> 
> While this is true for the Java admin client and the corresponding broker
> APIs, it doesn't appear to be relevant to the kafka-configs.sh CLI tool. We
> don't appear to support appending/subtracting from list properties via the
> CLI for any other entity type right now, and there's nothing in the KIP
> that leads me to believe we'd be adding it for broker configs.
> 
> Cheers,
> 
> Chris
> 
> On Thu, Jan 4, 2024 at 10:12 PM ziming deng  >
> wrote:
> 
>> Hi Ismael,
>> I added this automatically approach to “Rejected alternatives” concerning
>> that we need to unify the semantics between alterConfigs and
>> incrementalAlterConfigs, so I choose to give this privilege to users.
>> 
>> After reviewing these code and doing some tests I found that they
>> following the similar approach, I think the simplest way is to let the
>> client choose the best method heuristically.
>> 
>> Thank you for pointing out this, I will change the KIP later.
>> 
>> Best,
>> Ziming
>> 
>>> On Jan 4, 2024, at 17:28, Ismael Juma  wrote:
>>> 
>>> Hi Ziming,
>>> 
>>> Why is the flag required at all? Can we use incremental and fallback
>> automatically if it's not supported by the broker? At this point, the vast
>> majority of clusters should support it.
>>> 
>>> Ismael
>>> 
>>> On Mon, Dec 18, 2023 at 7:58 PM ziming deng > > wrote:
 
 Hello, I want to start a discussion on KIP-1011, to make the broker
>> config change path unified with that of user/topic/client-metrics and avoid
>> some bugs.
 
 Here is the link:
 
 KIP-1011: Use incrementalAlterConfigs when updating broker configs by
>> kafka-configs.sh - Apache Kafka - Apache Software Foundation
 cwiki.apache.org 
 
 <
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1011%3A+Use+incrementalAlterConfigs+when+updating+broker+configs+by+kafka-configs.sh>KIP-1011:
>> Use incrementalAlterConfigs when updating broker configs by
>> kafka-configs.sh - Apache Kafka - Apache Software Foundation <
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1011%3A+Use+incrementalAlterConfigs+when+updating+broker+configs+by+kafka-configs.sh
>>> 
 cwiki.apache.org  <
>> https://cwiki.apache.org/confluence/display/K

Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2564

2024-01-11 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 462326 lines...]
Gradle Test Run :core:test > Gradle Test Executor 92 > ZooKeeperClientTest > 
testPipelinedGetData() PASSED

Gradle Test Run :core:test > Gradle Test Executor 92 > ZooKeeperClientTest > 
testZNodeChildChangeHandlerForChildChange() STARTED

Gradle Test Run :core:test > Gradle Test Executor 92 > ZooKeeperClientTest > 
testZNodeChildChangeHandlerForChildChange() PASSED

Gradle Test Run :core:test > Gradle Test Executor 92 > ZooKeeperClientTest > 
testGetChildrenExistingZNodeWithChildren() STARTED

Gradle Test Run :core:test > Gradle Test Executor 94 > 
AllocateProducerIdsRequestTest > testAllocateProducersIdSentToNonController() > 
testAllocateProducersIdSentToNonController [1] Type=Raft-Isolated, 
MetadataVersion=3.8-IV0, Security=PLAINTEXT PASSED

Gradle Test Run :core:test > Gradle Test Executor 92 > ZooKeeperClientTest > 
testGetChildrenExistingZNodeWithChildren() PASSED

Gradle Test Run :core:test > Gradle Test Executor 92 > ZooKeeperClientTest > 
testSetDataExistingZNode() STARTED

Gradle Test Run :core:test > Gradle Test Executor 92 > ZooKeeperClientTest > 
testSetDataExistingZNode() PASSED

Gradle Test Run :core:test > Gradle Test Executor 92 > ZooKeeperClientTest > 
testZNodeChildChangeHandlerForChildChangeNotTriggered() STARTED

Gradle Test Run :core:test > Gradle Test Executor 92 > ZooKeeperClientTest > 
testZNodeChildChangeHandlerForChildChangeNotTriggered() PASSED

Gradle Test Run :core:test > Gradle Test Executor 92 > ZooKeeperClientTest > 
testMixedPipeline() STARTED

Gradle Test Run :core:test > Gradle Test Executor 92 > ZooKeeperClientTest > 
testMixedPipeline() PASSED

Gradle Test Run :core:test > Gradle Test Executor 92 > ZooKeeperClientTest > 
testGetDataExistingZNode() STARTED

Gradle Test Run :core:test > Gradle Test Executor 92 > ZooKeeperClientTest > 
testGetDataExistingZNode() PASSED

Gradle Test Run :core:test > Gradle Test Executor 92 > ZooKeeperClientTest > 
testDeleteExistingZNode() STARTED

Gradle Test Run :core:test > Gradle Test Executor 92 > ZooKeeperClientTest > 
testDeleteExistingZNode() PASSED

Gradle Test Run :core:test > Gradle Test Executor 92 > ZooKeeperClientTest > 
testSessionExpiry() STARTED

Gradle Test Run :core:test > Gradle Test Executor 92 > ZooKeeperClientTest > 
testSessionExpiry() PASSED

Gradle Test Run :core:test > Gradle Test Executor 92 > ZooKeeperClientTest > 
testSetDataNonExistentZNode() STARTED

Gradle Test Run :core:test > Gradle Test Executor 92 > ZooKeeperClientTest > 
testSetDataNonExistentZNode() PASSED

Gradle Test Run :core:test > Gradle Test Executor 92 > ZooKeeperClientTest > 
testConnectionViaNettyClient() STARTED

Gradle Test Run :core:test > Gradle Test Executor 92 > ZooKeeperClientTest > 
testConnectionViaNettyClient() PASSED

Gradle Test Run :core:test > Gradle Test Executor 92 > ZooKeeperClientTest > 
testDeleteNonExistentZNode() STARTED

Gradle Test Run :core:test > Gradle Test Executor 92 > ZooKeeperClientTest > 
testDeleteNonExistentZNode() PASSED

Gradle Test Run :core:test > Gradle Test Executor 92 > ZooKeeperClientTest > 
testExistsExistingZNode() STARTED

Gradle Test Run :core:test > Gradle Test Executor 92 > ZooKeeperClientTest > 
testExistsExistingZNode() PASSED

Gradle Test Run :core:test > Gradle Test Executor 92 > ZooKeeperClientTest > 
testZooKeeperStateChangeRateMetrics() STARTED

Gradle Test Run :core:test > Gradle Test Executor 92 > ZooKeeperClientTest > 
testZooKeeperStateChangeRateMetrics() PASSED

Gradle Test Run :core:test > Gradle Test Executor 92 > ZooKeeperClientTest > 
testZNodeChangeHandlerForDeletion() STARTED

Gradle Test Run :core:test > Gradle Test Executor 92 > ZooKeeperClientTest > 
testZNodeChangeHandlerForDeletion() PASSED

Gradle Test Run :core:test > Gradle Test Executor 92 > ZooKeeperClientTest > 
testGetAclNonExistentZNode() STARTED

Gradle Test Run :core:test > Gradle Test Executor 92 > ZooKeeperClientTest > 
testGetAclNonExistentZNode() PASSED

Gradle Test Run :core:test > Gradle Test Executor 92 > ZooKeeperClientTest > 
testStateChangeHandlerForAuthFailure() STARTED

Gradle Test Run :core:test > Gradle Test Executor 92 > ZooKeeperClientTest > 
testStateChangeHandlerForAuthFailure() PASSED

Gradle Test Run :core:test > Gradle Test Executor 92 > 
AllocateProducerIdsRequestTest > testAllocateProducersIdSentToController() > 
testAllocateProducersIdSentToController [1] Type=Raft-Isolated, 
MetadataVersion=3.8-IV0, Security=PLAINTEXT STARTED

Deprecated Gradle features were used in this build, making it incompatible with 
Gradle 9.0.

You can use '--warning-mode all' to show the individual deprecation warnings 
and determine if they come from your own scripts or plugins.

For more on this, please refer to 
https://docs.gradle.org/8.5/userguide/command_line_interface.html#sec:command_line_warnings
 in the Gradle documentation.

BUILD