Contributor permissions request

2023-12-07 Thread Simranjit Singh
Hi Team,

I'm interested in contributing to Kafka. I work at Apple Inc., where our
team maintains and uses Kafka heavily; we are effectively the transport
team for the entire Apple org, and we eat, live, and breathe Kafka on a daily
basis :)
We are solving some interesting problems and issues internally, so it would be
helpful if we could bring those to open-source Kafka as well.

Here are my account details, let me know if anything else is needed.

Jira Account: ssikka
GitHub Account: ssikka100
Wiki Account: Ssikka100

Best,
Simranjit S. Sikka


Re: Contributor permissions request

2023-12-07 Thread sekhon
Hello,

Bumping this thread. Could someone please grant the contributor access?

Thanks,

Navdeep

> On Nov 16, 2023, at 4:56 PM, sek...@apple.com wrote:
> 
> Requesting contributor permissions.
> 
> JIRA username: navdeep
> GitHub username: navdeepsekhon
> Wiki username: navdeep
> 
> 
> Thanks,
> 
> Navdeep



Re: Contributor permissions request

2023-12-07 Thread Mickael Maison
Hi Navdeep,

I granted you permissions and replied to your thread on November 20.
See https://lists.apache.org/thread/9sff5sbhthq849rsq8xo425t73k2xy55
Let us know if it's not working.

Thanks,
Mickael

On Thu, Dec 7, 2023 at 9:31 AM  wrote:
> [...]


Re: Contributor permissions request

2023-12-07 Thread Mickael Maison
Hi,

I've granted you permissions on Jira and on the wiki.

Thanks,
Mickael

On Thu, Dec 7, 2023 at 9:30 AM Simranjit Singh  wrote:
> [...]


[jira] [Created] (KAFKA-15985) Mirrormaker 2 offset sync is incomplete

2023-12-07 Thread Philipp Dallig (Jira)
Philipp Dallig created KAFKA-15985:
--

 Summary: Mirrormaker 2 offset sync is incomplete
 Key: KAFKA-15985
 URL: https://issues.apache.org/jira/browse/KAFKA-15985
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.5.1
Reporter: Philipp Dallig


We are currently trying to migrate between two Kafka clusters using MirrorMaker 2.

new Kafka cluster version: 7.5.2-ccs
old Kafka cluster version: kafka_2.13-2.8.0

The MirrorMaker 2 process runs on the new cluster (target cluster).

My main problem: the lag in the target cluster is not the same as in the source
cluster.

target cluster
{code}
GROUP        TOPIC                     PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID  HOST  CLIENT-ID
test-sync-5  kafka-replication-test-5  0          3637            3668            31   -            -     -
{code}

source cluster
{code}
GROUP        TOPIC                     PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID  HOST  CLIENT-ID
test-sync-5  kafka-replication-test-5  0          3668            3668            0    -            -     -
{code}
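MirrorMaker 2 translates consumer offsets via periodic offset syncs, so the target group's committed offset can only be as fresh as the last sync. The following toy model (not the actual MirrorMaker code; names are illustrative) shows how translation to the latest sync at or below the committed offset leaves residual lag on the target, matching the symptom above:

```java
import java.util.Map;
import java.util.TreeMap;

// Toy model of checkpoint/offset-sync-based translation: the translated
// offset is the downstream offset of the latest sync at or below the
// upstream committed offset, so the target can trail the source.
public class OffsetSyncSketch {
    // upstreamOffset -> downstreamOffset pairs emitted as offset syncs
    private final TreeMap<Long, Long> syncs = new TreeMap<>();

    void recordSync(long upstream, long downstream) {
        syncs.put(upstream, downstream);
    }

    // Returns -1 if no sync at or below the committed offset exists yet.
    long translate(long upstreamCommitted) {
        Map.Entry<Long, Long> e = syncs.floorEntry(upstreamCommitted);
        return e == null ? -1L : e.getValue();
    }

    public static void main(String[] args) {
        OffsetSyncSketch s = new OffsetSyncSketch();
        s.recordSync(3600L, 3600L); // last sync emitted at offset 3600
        // Source group committed 3668, but translation only reaches 3600,
        // so the target group shows lag until the next sync arrives.
        System.out.println(s.translate(3668L)); // 3600
        System.out.println(s.translate(3500L)); // -1 (no sync available)
    }
}
```

This is only a sketch of the mechanism; the real implementation lives in Kafka's connect-mirror modules.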



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] MINOR: Update docs for 3.6.1 [kafka-site]

2023-12-07 Thread via GitHub


mimaison merged PR #568:
URL: https://github.com/apache/kafka-site/pull/568


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: Add 3.6.1 to downloads page [kafka-site]

2023-12-07 Thread via GitHub


mimaison merged PR #570:
URL: https://github.com/apache/kafka-site/pull/570





[ANNOUNCE] Apache Kafka 3.6.1

2023-12-07 Thread Mickael Maison
The Apache Kafka community is pleased to announce the release for
Apache Kafka 3.6.1

This is a bug fix release and it includes fixes and improvements from 30 JIRAs.

All of the changes in this release can be found in the release notes:
https://www.apache.org/dist/kafka/3.6.1/RELEASE_NOTES.html

You can download the source and binary release (Scala 2.12 and Scala 2.13) from:
https://kafka.apache.org/downloads#3.6.1

---

Apache Kafka is a distributed streaming platform with four core APIs:

** The Producer API allows an application to publish a stream of records to
one or more Kafka topics.

** The Consumer API allows an application to subscribe to one or more
topics and process the stream of records produced to them.

** The Streams API allows an application to act as a stream processor,
consuming an input stream from one or more topics and producing an
output stream to one or more output topics, effectively transforming the
input streams to output streams.

** The Connector API allows building and running reusable producers or
consumers that connect Kafka topics to existing applications or data
systems. For example, a connector to a relational database might
capture every change to a table.


With these APIs, Kafka can be used for two broad classes of application:

** Building real-time streaming data pipelines that reliably get data
between systems or applications.

** Building real-time streaming applications that transform or react
to the streams of data.


Apache Kafka is in use at large and small companies worldwide, including
Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
Target, The New York Times, Uber, Yelp, and Zalando, among others.

A big thank you to the following 39 contributors to this release!
(Please report any unintended omission.)

Anna Sophie Blee-Goldman, Arpit Goyal, atu-sharm, Bill Bejeck, Chris
Egerton, Colin P. McCabe, David Arthur, David Jacot, Divij Vaidya,
Federico Valeri, Greg Harris, Guozhang Wang, Hao Li, hudeqi,
iit2009060, Ismael Juma, Jorge Esteban Quilcate Otoya, Josep Prat,
Jotaniya Jeel, Justine Olshan, Kamal Chandraprakash, kumarpritam863,
Levani Kokhreidze, Lucas Brutschy, Luke Chen, Manikumar Reddy,
Matthias J. Sax, Mayank Shekhar Narula, Mickael Maison, Nick Telford,
Philip Nee, Qichao Chu, Rajini Sivaram, Robert Wagner, Sagar Rao,
Satish Duggana, Walker Carlson, Xiaobing Fang, Yash Mayya

We welcome your help and feedback. For more information on how to
report problems, and to get involved, visit the project website at
https://kafka.apache.org/

Thank you!

Regards,
Mickael


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2457

2023-12-07 Thread Apache Jenkins Server
See 




Re: KIP-993: Allow restricting files accessed by File and Directory ConfigProviders

2023-12-07 Thread Gantigmaa Selenge
Thank you Mickael.

I'm going to leave the discussion thread open for a couple more days and if
there are no further comments, I would like to start the vote for this KIP.

Thanks.
Regards,
Tina

On Wed, Dec 6, 2023 at 10:06 AM Mickael Maison 
wrote:

> Hi,
>
> I'm not aware of any other mechanisms to explore the filesystem. If
> you have ideas, please reach out to the security list.
>
> Thanks,
> Mickael
>
> On Tue, Dec 5, 2023 at 1:05 PM Gantigmaa Selenge 
> wrote:
> >
> > Hi everyone,
> >
> >
> > Apologies for the very delayed response. Thank you both for the feedback.
> >
> >
> > > For clarity it might make sense to mention this feature will be useful
> >
> > when using a ConfigProvider with Kafka Connect as providers are set in
> >
> > the runtime and can then be used by connectors. This feature has no
> >
> > use when using a ConfigProvider in server.properties or in clients.
> >
> >
> > I have updated the KIP to address this suggestion. Please let me know if
> > it's not clear enough.
> >
> >
> > > When trying to use a path not allowed, you propose returning an error.
> >
> > With Connect does that mean the connector will be failed? The
> >
> > EnvVarConfigProvider returns empty string in case a user tries to
> >
> > access an environment variable not allowed. I wonder if we should
> >
> > follow the same pattern so the behavior is "consistent" across all
> >
> > built-in providers.
> >
> >
> > I agree with this, it makes sense to have consistent behaviour across all
> > the providers. I made this update.
> >
> >
> > > 1. In the past Connect removed the FileStream connectors in order to
> >
> > prevent a REST API attacker from accessing the filesystem. Is this the
> >
> > only remaining attack vector for reading the file system? Meaning, if
> >
> > this feature is configured and all custom plugins are audited for
> >
> > filesystem accesses, would someone with access to the REST API be
> >
> > unable to access arbitrary files on disk?
> >
> >
> > Once this feature is configured, it will stop someone from accessing the
> > file system via config providers.
> >
> > However, I’m not sure whether there are other ways users can access file
> > systems via REST API.
> >
> >
> > Mickael, perhaps you have some thoughts on this?
> >
> >
> > > 2. Could you explain how this feature would prevent a path traversal
> >
> > attack, and how we will verify that such attacks are not feasible?
> >
> >
> > The intention is to generate File objects based on the String value
> > provided for allowed.paths and the String path passed to the get()
> function.
> >
> > This would allow validation of path inclusion within the specified
> allowed
> > paths using their corresponding Path objects, rather than doing String
> > comparisons.
> >
> > This hopefully will mitigate the risk of path traversal. The
> implementation
> > should include unit tests to verify this.
> >
> >
> > > 3. This applies a single "allowed paths" to a whole worker, but I've
> >
> > seen situations where preventing one connector from accessing
> >
> > another's secrets may also be desirable. Is there any way to extend
> >
> > this feature now or in the future to make that possible?
> >
> >
> > One approach could be creating multiple providers, each assigned a unique
> > name and specific allowed.paths configuration. Users would then be
> assigned
> > a provider name, granting them appropriate access on the file system to
> > load variables for their connectors. However, during provider
> > configuration, administrators would have to anticipate and specify the
> > files and directories users may require access to.
> >
> >
> > Regards,
> >
> > Tina
> >
> > On Wed, Nov 8, 2023 at 7:49 PM Greg Harris  >
> > wrote:
> >
> > > Hey Tina,
> > >
> > > Thanks for the KIP! Unrestricted file system access over a REST API is
> > > an unfortunate anti-pattern, so I'm glad that you're trying to change
> > > it. I had a few questions, mostly from the Connect perspective.
> > >
> > > 1. In the past Connect removed the FileStream connectors in order to
> > > prevent a REST API attacker from accessing the filesystem. Is this the
> > > only remaining attack vector for reading the file system? Meaning, if
> > > this feature is configured and all custom plugins are audited for
> > > filesystem accesses, would someone with access to the REST API be
> > > unable to access arbitrary files on disk?
> > > 2. Could you explain how this feature would prevent a path traversal
> > > attack, and how we will verify that such attacks are not feasible?
> > > 3. This applies a single "allowed paths" to a whole worker, but I've
> > > seen situations where preventing one connector from accessing
> > > another's secrets may also be desirable. Is there any way to extend
> > > this feature now or in the future to make that possible?
> > >
> > > Thanks!
> > > Greg
> > >
> > > On Tue, Nov 7, 2023 at 7:06 AM Mickael Maison <
> mickael.mai...@gmail.com>
> > > wrote:
> > > >
> > > > Hi Tina,
> > > >
>

Re: KIP-993: Allow restricting files accessed by File and Directory ConfigProviders

2023-12-07 Thread Chris Egerton
Hi Tina,

Thanks for the KIP! Looks good overall. A few minor thoughts:

1. We can remove the "This page is meant as a template for writing a KIP"
section from the beginning.

2. The type of the allowed.paths property is string in the KIP, but the
description mentions it'll contain multiple comma-separated paths.
Shouldn't it be described as a list? Or are we calling it a string in order
to allow for escape syntax for directories that may contain the delimiter
character (e.g., ',')?

3. I'm guessing the answer is yes but I want to make sure--will users be
allowed to specify files in the allowed.paths property?

4. Again, guessing the answer is yes but to make sure--if a directory is
specified in the allowed.paths property, will all files (nested or
otherwise) be accessible by the config provider? E.g., if I set
allowed.paths to "/", then everything on the entire file system would be
accessible, instead of just the files directly inside the root directory.
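As a concrete illustration of the containment semantics under discussion (a hypothetical sketch with made-up names, not the KIP's actual implementation): with normalized-Path checks, a directory in allowed.paths admits all nested files, ".." traversal outside it is rejected, and setting it to "/" admits everything.

```java
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of an allowed.paths containment check using
// java.nio.file.Path rather than String comparison.
public class AllowedPaths {
    private final List<Path> allowed = new ArrayList<>();

    AllowedPaths(String allowedPathsConfig) {
        // allowed.paths as a comma-separated string of paths
        for (String s : allowedPathsConfig.split(",")) {
            allowed.add(Paths.get(s.trim()).toAbsolutePath().normalize());
        }
    }

    boolean isAllowed(String requested) {
        // normalize() collapses ".." segments, mitigating path traversal;
        // startsWith() is element-wise, so nested files under an allowed
        // directory are accepted.
        Path p = Paths.get(requested).toAbsolutePath().normalize();
        for (Path a : allowed) {
            if (p.startsWith(a)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        AllowedPaths ap = new AllowedPaths("/etc/kafka/secrets");
        System.out.println(ap.isAllowed("/etc/kafka/secrets/nested/keystore.jks")); // true
        System.out.println(ap.isAllowed("/etc/kafka/secrets/../kafka.properties")); // false
        System.out.println(new AllowedPaths("/").isAllowed("/any/file/at/all"));    // true
    }
}
```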

Cheers,

Chris

On Thu, Dec 7, 2023 at 9:33 AM Gantigmaa Selenge 
wrote:

> [...]

[jira] [Created] (KAFKA-15986) New consumer group protocol integration test failures

2023-12-07 Thread Andrew Schofield (Jira)
Andrew Schofield created KAFKA-15986:


 Summary: New consumer group protocol integration test failures
 Key: KAFKA-15986
 URL: https://issues.apache.org/jira/browse/KAFKA-15986
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 3.7.0
Reporter: Andrew Schofield
Assignee: Andrew Schofield
 Fix For: 3.7.0


A recent change in `AsyncKafkaConsumer.updateFetchPositions` has made fetching 
fail without returning records in some situations.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15987) Refactor ReplicaManager code

2023-12-07 Thread Justine Olshan (Jira)
Justine Olshan created KAFKA-15987:
--

 Summary: Refactor ReplicaManager code
 Key: KAFKA-15987
 URL: https://issues.apache.org/jira/browse/KAFKA-15987
 Project: Kafka
  Issue Type: Sub-task
Reporter: Justine Olshan


I started to do this in KAFKA-15784, but the diff was deemed too large and 
confusing. I just wanted to file a follow-up ticket to reference in code 
for the areas that will be refactored.

 

I hope to tackle it immediately after.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15858) Broker stays fenced until all assignments are correct

2023-12-07 Thread Omnia Ibrahim (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omnia Ibrahim resolved KAFKA-15858.
---
Resolution: Won't Fix

`BrokerHeartbeatManager.calculateNextBrokerState` already keeps the broker 
fenced (even if the broker asked to be unfenced) if it has not caught up with 
the metadata.
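The behavior described in the resolution can be pictured with a minimal sketch (illustrative only; the real logic lives in BrokerHeartbeatManager, and the names below are made up):

```java
// Toy state transition: a broker that asks to be unfenced stays fenced
// until it has also caught up with cluster metadata.
public class FencingSketch {
    enum BrokerState { FENCED, UNFENCED }

    static BrokerState nextState(boolean wantsUnfenced, boolean caughtUpWithMetadata) {
        return (wantsUnfenced && caughtUpWithMetadata)
                ? BrokerState.UNFENCED
                : BrokerState.FENCED;
    }

    public static void main(String[] args) {
        System.out.println(nextState(true, false)); // FENCED (not caught up yet)
        System.out.println(nextState(true, true));  // UNFENCED
    }
}
```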

> Broker stays fenced until all assignments are correct
> -
>
> Key: KAFKA-15858
> URL: https://issues.apache.org/jira/browse/KAFKA-15858
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Igor Soarez
>Assignee: Omnia Ibrahim
>Priority: Major
>
> Until the broker has caught up with metadata AND corrected any 
> incorrect directory assignments, it should continue to want to stay fenced.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2458

2023-12-07 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-15988) Kafka Connect OffsetsApiIntegrationTest takes too long

2023-12-07 Thread Chris Egerton (Jira)
Chris Egerton created KAFKA-15988:
-

 Summary: Kafka Connect OffsetsApiIntegrationTest takes too long
 Key: KAFKA-15988
 URL: https://issues.apache.org/jira/browse/KAFKA-15988
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Chris Egerton
Assignee: Chris Egerton


The [OffsetsApiIntegrationTest 
suite|https://github.com/apache/kafka/blob/c515bf51f820f26ff6be6b0fde03b47b69a10b00/connect/runtime/src/test/java/org/apache/kafka/connect/integration/OffsetsApiIntegrationTest.java]
 currently contains 27 test cases. Each test case begins by creating embedded 
Kafka and Kafka Connect clusters, which is fairly resource-intensive and 
time-consuming.

If possible, we should reuse those embedded clusters across test cases.
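One common pattern for this, sketched below with made-up class names (not Connect's actual test utilities), is to start the expensive fixture lazily once and share it across test cases, e.g. from a @BeforeAll hook:

```java
// Sketch: share one expensive "cluster" across many test cases instead of
// starting a fresh one per test. EmbeddedCluster here is a stand-in that
// just counts how many times it was started.
public class SharedClusterSketch {
    static class EmbeddedCluster {
        static int startups = 0;
        EmbeddedCluster() { startups++; } // expensive in real life
    }

    private static EmbeddedCluster shared;

    static synchronized EmbeddedCluster cluster() {
        if (shared == null) {
            shared = new EmbeddedCluster(); // started once, e.g. in @BeforeAll
        }
        return shared;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 27; i++) { // 27 test cases, as in the ticket
            cluster();                 // each test reuses the same cluster
        }
        System.out.println(EmbeddedCluster.startups); // 1
    }
}
```

The trade-off is that tests must not leave state behind (topics, connectors, offsets) that could leak into later cases.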



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] KIP-992 Proposal to introduce IQv2 Query Types: TimestampedKeyQuery and TimestampedRangeQuery

2023-12-07 Thread Hanyu (Peter) Zheng
Following KIP-968, we added ResultOrder to this KIP and added support for
withAscendingKeys() on TimestampedRangeQuery.
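As a toy illustration of the ResultOrder semantics (not the actual Streams IQv2 API; names mirror the KIP but the code is a sketch):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Toy model: order the keys of a range result per the requested ResultOrder.
// ANY leaves whatever order the underlying store produced.
public class ResultOrderSketch {
    enum ResultOrder { ANY, ASCENDING, DESCENDING }

    static List<Integer> ordered(List<Integer> keys, ResultOrder order) {
        List<Integer> out = new ArrayList<>(keys);
        switch (order) {
            case ASCENDING:
                Collections.sort(out);
                break;
            case DESCENDING:
                out.sort(Collections.reverseOrder());
                break;
            default:
                break; // ANY: keep store order
        }
        return out;
    }

    public static void main(String[] args) {
        List<Integer> raw = Arrays.asList(3, 1, 2);
        System.out.println(ordered(raw, ResultOrder.ASCENDING));  // [1, 2, 3]
        System.out.println(ordered(raw, ResultOrder.DESCENDING)); // [3, 2, 1]
    }
}
```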

Sincerely,
Hanyu

On Wed, Nov 8, 2023 at 11:04 AM Hanyu (Peter) Zheng 
wrote:

> Hi all,
>
> This voting thread has been open for over 72 hours and has received enough
> votes. Therefore, the vote will be closed now.
>
> +3 binding votes
> +1 (non-binding)
>
> KIP-992 has PASSED.
>
>
> Thanks all for your votes
> Hanyu
>
> On Fri, Nov 3, 2023 at 5:10 PM Matthias J. Sax  wrote:
>
>> Thanks for the KIP.
>>
>> +1 (binding)
>>
>>
>> -Matthias
>>
>> On 11/3/23 6:08 AM, Lucas Brutschy wrote:
>> > Hi Hanyu,
>> >
>> > Thanks for the KIP!
>> > +1 (binding)
>> >
>> > Cheers
>> > Lucas
>> >
>> > On Thu, Nov 2, 2023 at 10:19 PM Hao Li 
>> wrote:
>> >>
>> >> Hi Hanyu,
>> >>
>> >> Thanks for the KIP!
>> >> +1 (non-binding)
>> >>
>> >> Hao
>> >>
>> >> On Thu, Nov 2, 2023 at 1:29 PM Bill Bejeck  wrote:
>> >>
>> >>> Hi Hanyu,
>> >>>
>> >>> Thanks for the KIP this LGTM.
>> >>> +1 (binding)
>> >>>
>> >>> Thanks,
>> >>> Bill
>> >>>
>> >>>
>> >>>
>> >>> On Wed, Nov 1, 2023 at 1:07 PM Hanyu (Peter) Zheng
>> >>>  wrote:
>> >>>
>>  Hello everyone,
>> 
>>  I would like to start a vote for KIP-992: Proposal to introduce IQv2
>> >>> Query
>>  Types: TimestampedKeyQuery and TimestampedRangeQuery.
>> 
>>  Sincerely,
>>  Hanyu
>> 
>>  On Wed, Nov 1, 2023 at 10:00 AM Hanyu (Peter) Zheng <
>> pzh...@confluent.io
>> 
>>  wrote:
>> 
>> >
>> >
>> 
>> >>>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-992%3A+Proposal+to+introduce+IQv2+Query+Types%3A+TimestampedKeyQuery+and+TimestampedRangeQuery
-- 
Hanyu (Peter) Zheng he/him/his
Software Engineer Intern, Confluent



Re: [VOTE] KIP-985 Add reverseRange and reverseAll query over kv-store in IQv2

2023-12-07 Thread Hanyu (Peter) Zheng
Following KIP-968, we added ResultOrder to this KIP and added support for
withAscendingKeys() on RangeQuery.

Sincerely,
Hanyu

On Tue, Oct 17, 2023 at 11:25 AM Guozhang Wang 
wrote:

> Seems my previous msg was sent to the wrong recipient, just resending..
>
> On Fri, Oct 13, 2023 at 7:06 PM Guozhang Wang
>  wrote:
> >
> > Thanks Hanyu. I made a pass on the KIP and read through the DISCUSS
> > thread. Do not have any comments. +1
> >
> > On Fri, Oct 13, 2023 at 9:29 AM Hanyu (Peter) Zheng
> >  wrote:
> > >
> > > Hello everyone,
> > >
> > > I would like to start a vote for KIP-985 that Add reverseRange and
> > > reverseAll query over kv-store in IQv2.
> > >
> > > Sincerely,
> > > Hanyu
> > >
> > > On Fri, Oct 13, 2023 at 9:15 AM Hanyu (Peter) Zheng <
> pzh...@confluent.io>
> > > wrote:
> > >
> > > >
> > > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-985:+Add+reverseRange+and+reverseAll+query+over+kv-store+in+IQv2
-- 
Hanyu (Peter) Zheng he/him/his
Software Engineer Intern, Confluent



Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2459

2023-12-07 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 241975 lines...]
> Task :storage:storage-api:compileTestJava
> Task :storage:storage-api:testClasses
> Task :server:compileTestJava
> Task :server:testClasses
> Task :server-common:compileTestJava
> Task :server-common:testClasses
> Task :raft:compileTestJava
> Task :raft:testClasses
> Task :core:compileScala
> Task :group-coordinator:compileTestJava
> Task :group-coordinator:testClasses

> Task :clients:javadoc
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk/clients/src/main/java/org/apache/kafka/clients/admin/ScramMechanism.java:32:
 warning - Tag @see: missing final '>': "https://cwiki.apache.org/confluence/display/KAFKA/KIP-554%3A+Add+Broker-side+SCRAM+Config+API";>KIP-554:
 Add Broker-side SCRAM Config API

 This code is duplicated in 
org.apache.kafka.common.security.scram.internals.ScramMechanism.
 The type field in both files must match and must not change. The type field
 is used both for passing ScramCredentialUpsertion and for the internal
 UserScramCredentialRecord. Do not change the type field."

> Task :streams:generateMetadataFileForMavenJavaPublication

> Task :clients:javadoc
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/secured/package-info.java:21:
 warning - Tag @link: reference not found: 
org.apache.kafka.common.security.oauthbearer
2 warnings

> Task :clients:javadocJar
> Task :metadata:compileTestJava
> Task :metadata:testClasses
> Task :clients:srcJar
> Task :clients:testJar
> Task :clients:testSrcJar
> Task :clients:publishMavenJavaPublicationToMavenLocal
> Task :clients:publishToMavenLocal
> Task :connect:api:generateMetadataFileForMavenJavaPublication
> Task :connect:api:compileTestJava UP-TO-DATE
> Task :connect:api:testClasses UP-TO-DATE
> Task :connect:api:testJar
> Task :connect:api:testSrcJar
> Task :connect:api:publishMavenJavaPublicationToMavenLocal
> Task :connect:api:publishToMavenLocal
> Task :streams:javadoc
> Task :streams:javadocJar
> Task :streams:srcJar
> Task :streams:processTestResources UP-TO-DATE
> Task :core:classes
> Task :core:compileTestJava NO-SOURCE
> Task :core:compileTestScala
> Task :core:testClasses
> Task :streams:compileTestJava
> Task :streams:testClasses
> Task :streams:testJar
> Task :streams:testSrcJar
> Task :streams:publishMavenJavaPublicationToMavenLocal
> Task :streams:publishToMavenLocal

Deprecated Gradle features were used in this build, making it incompatible with 
Gradle 9.0.

You can use '--warning-mode all' to show the individual deprecation warnings 
and determine if they come from your own scripts or plugins.

For more on this, please refer to 
https://docs.gradle.org/8.5/userguide/command_line_interface.html#sec:command_line_warnings
 in the Gradle documentation.

BUILD SUCCESSFUL in 5m 3s
95 actionable tasks: 41 executed, 54 up-to-date

Publishing build scan...
https://ge.apache.org/s/yjhnwm7z63xqu

[Pipeline] sh
+ grep '^version=' gradle.properties
+ cut -d= -f 2
[Pipeline] dir
Running in 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk/streams/quickstart
[Pipeline] {
[Pipeline] sh
+ mvn clean install -Dgpg.skip
[INFO] Scanning for projects...
[INFO] 
[INFO] Reactor Build Order:
[INFO] 
[INFO] Kafka Streams :: Quickstart[pom]
[INFO] streams-quickstart-java[maven-archetype]
[INFO] 
[INFO] < org.apache.kafka:streams-quickstart >-
[INFO] Building Kafka Streams :: Quickstart 3.7.0-SNAPSHOT[1/2]
[INFO]   from pom.xml
[INFO] [ pom ]-
[INFO] 
[INFO] --- clean:3.0.0:clean (default-clean) @ streams-quickstart ---
[INFO] 
[INFO] --- remote-resources:1.5:process (process-resource-bundles) @ 
streams-quickstart ---
[INFO] 
[INFO] --- site:3.5.1:attach-descriptor (attach-descriptor) @ 
streams-quickstart ---
[INFO] 
[INFO] --- gpg:1.6:sign (sign-artifacts) @ streams-quickstart ---
[INFO] 
[INFO] --- install:2.5.2:install (default-install) @ streams-quickstart ---
[INFO] Installing 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk/streams/quickstart/pom.xml
 to 
/home/jenkins/.m2/repository/org/apache/kafka/streams-quickstart/3.7.0-SNAPSHOT/streams-quickstart-3.7.0-SNAPSHOT.pom
[INFO] 
[INFO] --< org.apache.kafka:streams-quickstart-java >--
[INFO] Building streams-quickstart-java 3.7.0-SNAPSHOT[2/2]
[INFO]   from java/pom.xml
[INFO] --[ maven-archetype ]---
[INFO] 
[INFO] --- clean:3.0.0:clean (default-clean) @ streams-quickstart-java ---
[INFO] 
[INFO] --- remote-resources:1.5:process (process-resource-bundles) @ 
streams-quickstart-java ---
[INFO] 

Re: [DISCUSS] KIP-939: Support Participation in 2PC

2023-12-07 Thread Justine Olshan
Hey Artem,

Thanks for the updates. I think what you say makes sense. I just updated my
KIP, so I want to reconcile some of the changes we made, especially with
respect to the TransactionLogValue.

Firstly, I believe tagged fields require a default value so that if they
are not filled, we return the default (and know that they were empty). For
my KIP, I proposed the default for producer ID tagged fields should be -1.
I was wondering if we could update the KIP to include the default values
for producer ID and epoch.
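
The default-value semantics described above can be sketched roughly as follows. This is an illustration of how tagged-field defaults behave, not the actual Kafka schema machinery; the field names and the `-1` default mirror the proposal in the email, but the reader function is hypothetical:

```python
# Sketch of tagged-field defaulting: if a tagged field was never written,
# the reader must return a well-known default (-1 for producer IDs) so that
# "absent" is distinguishable from a real value. Field names are illustrative.

DEFAULTS = {"PrevProducerId": -1, "NextProducerId": -1, "ProducerEpoch": -1}

def read_tagged_field(tagged_fields: dict, name: str):
    """Return the tagged field if present, otherwise its declared default."""
    return tagged_fields.get(name, DEFAULTS[name])

# A record written before the new fields existed carries no tags:
old_record = {}
assert read_tagged_field(old_record, "PrevProducerId") == -1  # reads as "empty"

# A record written by a new broker carries real values:
new_record = {"PrevProducerId": 1000, "NextProducerId": 1001}
assert read_tagged_field(new_record, "PrevProducerId") == 1000
```

This is why a default matters for downgrades: an old record simply lacks the tag, and the default makes that case unambiguous.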

Next, I noticed we decided to rename the fields. I guess that the field
"NextProducerId" in my KIP correlates to "ProducerId" in this KIP. Is that
correct? So we would have "TransactionProducerId" for the non-tagged field
and have "ProducerId" (NextProducerId) and "PrevProducerId" as tagged
fields the final version after KIP-890 and KIP-936 are implemented. Is this
correct? I think the tags will need updating, but that is trivial.

The final question I had was with respect to storing the new epoch. In
KIP-890 part 2 (epoch bumps) I think we concluded that we don't need to
store the epoch since we can interpret the previous epoch based on the
producer ID. But here we could call InitProducerId multiple times and
we only want the producer with the correct epoch to be able to commit the
transaction. Is that the correct reasoning for why we need the epoch here
but not in the Prepare/Commit state?

Thanks,
Justine

On Wed, Nov 22, 2023 at 9:48 AM Artem Livshits
 wrote:

> Hi Justine,
>
> After thinking a bit about supporting atomic dual writes for Kafka + NoSQL
> database, I came to a conclusion that we do need to bump the epoch even
> with InitProducerId(keepPreparedTxn=true).  As I described in my previous
> email, we wouldn't need to bump the epoch to protect from zombies so that
> reasoning is still true.  But we cannot protect from split-brain scenarios
> when two or more instances of a producer with the same transactional id try
> to produce at the same time.  The dual-write example for SQL databases (
> https://github.com/apache/kafka/pull/14231/files) doesn't have a
> split-brain problem because execution is protected by the update lock on
> the transaction state record; however NoSQL databases may not have this
> protection (I'll write an example for NoSQL database dual-write soon).
>
> In a nutshell, here is an example of a split-brain scenario:
>
>1. (instance1) InitProducerId(keepPreparedTxn=true), got epoch=42
>2. (instance2) InitProducerId(keepPreparedTxn=true), got epoch=42
>3. (instance1) CommitTxn, epoch bumped to 43
>4. (instance2) CommitTxn, this is considered a retry, so it got epoch 43
>as well
>5. (instance1) Produce messageA w/sequence 1
>6. (instance2) Produce messageB w/sequence 1, this is considered a
>duplicate
>7. (instance2) Produce messageC w/sequence 2
>8. (instance1) Produce messageD w/sequence 2, this is considered a
>duplicate
>
> Now if either of those commit the transaction, it would have a mix of
> messages from the two instances (messageA and messageC).  With the proper
> epoch bump, instance1 would get fenced at step 3.
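
The fencing behavior in the timeline above can be modeled with a toy coordinator. This is a sketch of the fencing rule only, under the assumption that each `InitProducerId` bumps the epoch and any produce with a stale epoch is rejected; it is not Kafka code:

```python
# Toy model of the split-brain fix: with an epoch bump on every
# InitProducerId(keepPreparedTxn=true), the second instance to initialize
# gets a higher epoch, and the first instance is fenced on its next call.

class Coordinator:
    def __init__(self):
        self.epoch = 42

    def init_producer_id(self, bump: bool) -> int:
        if bump:
            self.epoch += 1
        return self.epoch

    def produce(self, epoch: int):
        if epoch < self.epoch:
            raise RuntimeError("fenced: stale epoch %d < %d" % (epoch, self.epoch))

coord = Coordinator()
e1 = coord.init_producer_id(bump=True)  # instance1 -> epoch 43
e2 = coord.init_producer_id(bump=True)  # instance2 -> epoch 44
coord.produce(e2)                       # the live instance succeeds
try:
    coord.produce(e1)                   # instance1 is fenced, as desired
except RuntimeError as err:
    print(err)                          # fenced: stale epoch 43 < 44
```

Without the bump, both instances would hold epoch 42 and the produce calls would interleave exactly as in steps 5-8 above.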
>
> In order to update epoch in InitProducerId(keepPreparedTxn=true) we need to
> preserve the ongoing transaction's epoch (and producerId, if the epoch
> overflows), because we'd need to make a correct decision when we compare
> the PreparedTxnState that we read from the database with the (producerId,
> epoch) of the ongoing transaction.
>
> I've updated the KIP with the following:
>
>- Ongoing transaction now has 2 (producerId, epoch) pairs -- one pair
>describes the ongoing transaction, the other pair describes expected
> epoch
>for operations on this transactional id
>- InitProducerIdResponse now returns 2 (producerId, epoch) pairs
>- TransactionalLogValue now has 2 (producerId, epoch) pairs, the new
>values added as tagged fields, so it's easy to downgrade
>- Added a note about downgrade in the Compatibility section
>- Added a rejected alternative
>
> -Artem
>
> On Fri, Oct 6, 2023 at 5:16 PM Artem Livshits 
> wrote:
>
> > Hi Justine,
> >
> > Thank you for the questions.  Currently (pre-KIP-939) we always bump the
> > epoch on InitProducerId and abort an ongoing transaction (if any).  I
> > expect this behavior will continue with KIP-890 as well.
> >
> > With KIP-939 we need to support the case when the ongoing transaction
> > needs to be preserved when keepPreparedTxn=true.  Bumping epoch without
> > aborting or committing a transaction is tricky because epoch is a short
> > value and it's easy to overflow.  Currently, the overflow case is handled
> > by aborting the ongoing transaction, which would send out transaction
> > markers with epoch=Short.MAX_VALUE to the partition leaders, which would
> > fence off any messages with the producer id that started the transaction
> > (they would have epoch that is less than Short.MAX_VALUE).  Then it is
> safe
> > to allocate a n
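
The overflow handling described in the paragraph above can be sketched as follows. The `32767` limit comes from epoch being a Java `short`; the function name and the exact rollover rule are illustrative assumptions, not the coordinator's real code path:

```python
# Sketch of epoch-overflow handling: epoch is a 16-bit value, so when a bump
# would reach Short.MAX_VALUE, the coordinator writes markers at MAX_VALUE
# (fencing every lower epoch for that producer id) and then allocates a
# fresh producer id with the epoch reset to 0.

SHORT_MAX = 32767

def bump_epoch(producer_id: int, epoch: int, next_id: int):
    """Return the (producer_id, epoch) pair to use after a bump."""
    if epoch + 1 >= SHORT_MAX:
        # Markers at SHORT_MAX fence all in-flight messages (epoch < SHORT_MAX);
        # a new producer id is then safe to use from epoch 0.
        return next_id, 0
    return producer_id, epoch + 1

assert bump_epoch(7, 100, next_id=8) == (7, 101)            # normal bump keeps the id
assert bump_epoch(7, SHORT_MAX - 1, next_id=8) == (8, 0)    # overflow rolls the id
```

The hard part KIP-939 addresses is doing this rollover without aborting the prepared transaction, which is why the ongoing transaction's (producerId, epoch) pair must be preserved separately.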

Re: [VOTE] KIP-896: Remove old client protocol API versions in Kafka 4.0

2023-12-07 Thread Jason Gustafson
Hey Ismael,

I'm considering if we can do something in this KIP for the SASL baggage
we've accumulated. Prior to the existence of the `SaslHandshake` API, we
supported the raw SASL protocol. The main gap was that it did not support
negotiation of the SASL method. This was fixed in
https://cwiki.apache.org/confluence/display/KAFKA/KIP-43:+Kafka+SASL+enhancements
where we added the `SaslHandshake` and `SaslAuthenticate`. This has been
supported in the broker since 0.10.0 and, as far as I can tell, all major
clients mentioned in the KIP support the `SaslHandshake` API. However, we
still support fallback logic on the broker, effectively assuming GSSAPI if
the initial request is not a Kafka request. Can we require SASL negotiation
through `SaslHandshake` and drop support for this fallback logic?
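
The fallback Jason describes implies the broker must sniff whether the first bytes on a connection are a Kafka request frame or a bare SASL token. The sketch below shows the general idea only; the constants and heuristic are illustrative assumptions, not the real broker implementation (a GSSAPI initial token starts with the ASN.1 application tag 0x60, while a Kafka frame starts with a 4-byte size followed by an int16 api_key):

```python
import struct

# Illustrative sniffing logic: a Kafka request frame begins with a 4-byte
# big-endian size and an int16 api_key; a raw GSSAPI token does not parse
# plausibly that way. Dropping the fallback would remove the need for this.

SASL_HANDSHAKE_API_KEY = 17  # actual api_key of SaslHandshake

def looks_like_kafka_request(first_bytes: bytes) -> bool:
    if len(first_bytes) < 6:
        return False
    size, api_key = struct.unpack(">ih", first_bytes[:6])
    # Plausible frame: positive, bounded size and a small known api_key range.
    return 0 < size < 10 * 1024 * 1024 and 0 <= api_key < 100

kafka_frame = struct.pack(">ihh", 32, SASL_HANDSHAKE_API_KEY, 1)
gssapi_token = b"\x60\x82\x01\x00\x06\x09"  # ASN.1-tagged blob, not a frame

assert looks_like_kafka_request(kafka_frame)
assert not looks_like_kafka_request(gssapi_token)
```

Requiring `SaslHandshake` would make the first bytes on an authenticating connection always a Kafka frame, so this kind of guesswork disappears.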

I also looked at `SaslAuthenticate`, which was added in
https://cwiki.apache.org/confluence/display/KAFKA/KIP-152+-+Improve+diagnostics+for+SASL+authentication+failures.
Once method negotiation is complete using `SaslHandshake`, then we still
support direct authentication using the SASL protocol (i.e. without the
wrapped `SaslAuthenticate`).  It would be nice to drop this as well, but it
looks like kafka-python may not implement it.

Thanks,
Jason



On Fri, Nov 24, 2023 at 12:07 PM Ismael Juma  wrote:

> Hi all,
>
> I also vote +1.
>
> The vote passes with 4 binding +1s:
>
> 1. Colin McCabe
> 2. Jun Rao
> 3. Jose Sancio
> 4. Ismael Juma
>
> Thanks,
> Ismael
>
> On Tue, Nov 21, 2023 at 12:06 PM Ismael Juma  wrote:
>
> > Hi all,
> >
> > I would like to start a vote on KIP-896. Please take a look and let us
> > know what you think.
> >
> > Even though most of the changes in this KIP will be done for Apache Kafka
> > 4.0, I would like to introduce a new metric and new request log attribute
> > in Apache 3.7 to help users identify usage of deprecated protocol api
> > versions.
> >
> > Link:
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-896%3A+Remove+old+client+protocol+API+versions+in+Kafka+4.0
> >
> > Thanks,
> > Ismael
> >
>


Re: [VOTE] KIP-896: Remove old client protocol API versions in Kafka 4.0

2023-12-07 Thread Jason Gustafson
Minor correction: only `SaslHandshake` was introduced in KIP-43.
`SaslAuthenticate` came later in KIP-152.

On Thu, Dec 7, 2023 at 3:18 PM Jason Gustafson  wrote:

> Hey Ismael,
>
> I'm considering if we can do something in this KIP for the SASL baggage
> we've accumulated. Prior to the existence of the `SaslHandshake` API, we
> supported the raw SASL protocol. The main gap was that it did not support
> negotiation of the SASL method. This was fixed in
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-43:+Kafka+SASL+enhancements
> where we added the `SaslHandshake` and `SaslAuthenticate`. This has been
> supported in the broker since 0.10.0 and, as far as I can tell, all major
> clients mentioned in the KIP support the `SaslHandshake` API. However, we
> still support fallback logic on the broker, effectively assuming GSSAPI if
> the initial request is not a Kafka request. Can we require SASL negotiation
> through `SaslHandshake` and drop support for this fallback logic?
>
> I also looked at `SaslAuthenticate`, which was added in
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-152+-+Improve+diagnostics+for+SASL+authentication+failures.
> Once method negotiation is complete using `SaslHandshake`, then we still
> support direct authentication using the SASL protocol (i.e. without the
> wrapped `SaslAuthenticate`).  It would be nice to drop this as well, but it
> looks like kafka-python may not implement it.
>
> Thanks,
> Jason
>
>
>
> On Fri, Nov 24, 2023 at 12:07 PM Ismael Juma  wrote:
>
>> Hi all,
>>
>> I also vote +1.
>>
>> The vote passes with 4 binding +1s:
>>
>> 1. Colin McCabe
>> 2. Jun Rao
>> 3. Jose Sancio
>> 4. Ismael Juma
>>
>> Thanks,
>> Ismael
>>
>> On Tue, Nov 21, 2023 at 12:06 PM Ismael Juma  wrote:
>>
>> > Hi all,
>> >
>> > I would like to start a vote on KIP-896. Please take a look and let us
>> > know what you think.
>> >
>> > Even though most of the changes in this KIP will be done for Apache
>> Kafka
>> > 4.0, I would like to introduce a new metric and new request log
>> attribute
>> > in Apache 3.7 to help users identify usage of deprecated protocol api
>> > versions.
>> >
>> > Link:
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-896%3A+Remove+old+client+protocol+API+versions+in+Kafka+4.0
>> >
>> > Thanks,
>> > Ismael
>> >
>>
>


[VOTE] KIP-996: Pre-Vote

2023-12-07 Thread Alyssa Huang
Hey folks,

I would like to start a vote on Pre-vote 😉 Thank you Jose, Jason, Luke,
and Jun for your comments on the discussion thread!

Here's the link to the proposal -
https://cwiki.apache.org/confluence/display/KAFKA/KIP-996%3A+Pre-Vote

Here's the link to the discussion -
https://lists.apache.org/thread/pqj9f1r3rk83oqtxxtg6y5h7m7cf56r2

Best,
Alyssa


Re: [DISCUSS] KIP-996: Pre-Vote

2023-12-07 Thread José Armando García Sancio
Hi Alyssa,

Thanks for the answers and the updates to the KIP. I took a look at
the latest version and it looks good to me.

-- 
-José


Re: [VOTE] KIP-996: Pre-Vote

2023-12-07 Thread Jason Gustafson
+1 Thanks for the KIP! Nice to see progress with the raft protocol.

On Thu, Dec 7, 2023 at 5:10 PM Alyssa Huang 
wrote:

> Hey folks,
>
> I would like to start a vote on Pre-vote 😉 Thank you Jose, Jason, Luke,
> and Jun for your comments on the discussion thread!
>
> Here's the link to the proposal -
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-996%3A+Pre-Vote
> 
> Here's the link to the discussion -
> https://lists.apache.org/thread/pqj9f1r3rk83oqtxxtg6y5h7m7cf56r2
>
> Best,
> Alyssa
>


Re: [VOTE] 3.5.2 RC1

2023-12-07 Thread Tom Bentley
Hi,

I have validated signatures, checked the Java docs, built from source and
run tests. I had a few unit test failures, but I note that others saw them
pass and the CI was green too, so I think this is a problem with my system
rather than the release.

+1 (binding).

Thanks!

On Wed, 6 Dec 2023 at 12:17, Justine Olshan 
wrote:

> Hey all,
>
> I've built from source, ran unit tests, and ran a produce bench test on a
> running server.
> I've also scanned the various release components. Given the test results
> and the validations, +1 (binding) from me.
>
> Thanks,
> Justine
>
> On Tue, Dec 5, 2023 at 3:59 AM Luke Chen  wrote:
>
> > Hi all,
> >
> > Thanks for helping validate the RC1 build.
> > I've got 1 binding, and 3 non-binding votes.
> > Please help validate it when available.
> >
> > Update for the system test results:
> >
> >
> https://drive.google.com/file/d/1gLt5hTFCVnpoKZ_I5KmUvnowVGtzfip_/view?usp=sharing
> >
> > The result failed at 2 groups of tests:
> > 1. quota_test test suite failed with "ValueError: max() arg is an empty
> > sequence".
> > This is a known issue and these tests can be passed after re-run.
> > 2. zookeeper_migration_test failed with
> >   2.1. "Kafka server didn't finish startup in 60 seconds" : This is
> because
> > we added a constraint to ZK migrating to KRaft that we don't support JBOD
> > in use. These system tests are fixed in this PR in trunk:
> >
> >
> https://github.com/apache/kafka/pull/14654/files#diff-17b8c06d37fe43a3bd6ba5b89e08ff8f988ad5f4e5f7eda87844d51f7e5a5b96R61
> >   2.2. "Zookeeper node failed to start": This is because the ZK is
> pointing
> > to 3.4.0 version, which should be 3.4.1. These system tests are fixed in
> > this PR in trunk:
> >
> >
> https://github.com/apache/kafka/pull/14208/files#diff-17b8c06d37fe43a3bd6ba5b89e08ff8f988ad5f4e5f7eda87844d51f7e5a5b96R143
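
The "ValueError: max() arg is an empty sequence" failure in the quota tests above is Python's standard error when `max()` is called on an empty iterable with no fallback. A minimal illustration of the error and the generic guard (the actual tests were simply re-run, per the email; this is not the test fix):

```python
# max() raises ValueError on an empty iterable unless given a `default`.
# A flaky metrics collection returning an empty list triggers exactly the
# error quoted in the system-test results.

samples = []

try:
    max(samples)
except ValueError as err:
    print(err)  # max() arg is an empty sequence

# Guarded form: returns the fallback instead of raising.
assert max(samples, default=0) == 0
assert max([3, 1, 2], default=0) == 3
```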
> >
> > I've confirmed that after applying this patch, the system tests pass now.
> > The PR to backport this fix to 3.5 branch is opened:
> > https://github.com/apache/kafka/pull/14927
> > But that doesn't block the 3.5.2 release because they are test-only problems.
> >
> > Thank you.
> > Luke
> >
> > On Mon, Nov 27, 2023 at 2:13 AM Mickael Maison
> > wrote:
> >
> > > Hi Luke,
> > >
> > > I ran the following checks:
> > > - Verified signatures and checksums
> > > - Ran the KRaft and ZooKeeper quickstarts with the 2.13 binaries
> > > - Built sources and ran unit/integration tests with Java 17
> > >
> > > +1 (binding)
> > >
> > > Thanks,
> > > Mickael
> > >
> > >
> > > On Fri, Nov 24, 2023 at 10:41 AM Jakub Scholz  wrote:
> > > >
> > > > +1 non-binding. I used the staged Scala 2.13 binaries and the staged
> > > Maven
> > > > repo to run my tests and all seems to work fine.
> > > >
> > > > Thanks & Regards
> > > > Jakub
> > > >
> > > > On Tue, Nov 21, 2023 at 11:09 AM Luke Chen 
> wrote:
> > > >
> > > > > Hello Kafka users, developers and client-developers,
> > > > >
> > > > > This is the first candidate for release of Apache Kafka 3.5.2.
> > > > >
> > > > > This is a bugfix release with several fixes since the release of
> > 3.5.1,
> > > > > including dependency version bumps for CVEs.
> > > > >
> > > > > Release notes for the 3.5.2 release:
> > > > >
> https://home.apache.org/~showuon/kafka-3.5.2-rc1/RELEASE_NOTES.html
> > > > >
> > > > > *** Please download, test and vote by Nov. 28.
> > > > >
> > > > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > > > https://kafka.apache.org/KEYS
> > > > >
> > > > > * Release artifacts to be voted upon (source and binary):
> > > > > https://home.apache.org/~showuon/kafka-3.5.2-rc1/
> > > > >
> > > > > * Maven artifacts to be voted upon:
> > > > >
> > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> > > > >
> > > > > * Javadoc:
> > > > > https://home.apache.org/~showuon/kafka-3.5.2-rc1/javadoc/
> > > > >
> > > > > * Tag to be voted upon (off 3.5 branch) is the 3.5.2 tag:
> > > > > https://github.com/apache/kafka/releases/tag/3.5.2-rc1
> > > > >
> > > > > * Documentation:
> > > > > https://kafka.apache.org/35/documentation.html
> > > > >
> > > > > * Protocol:
> > > > > https://kafka.apache.org/35/protocol.html
> > > > >
> > > > > * Successful Jenkins builds for the 3.5 branch:
> > > > > Unit/integration tests:
> > > > > https://ci-builds.apache.org/job/Kafka/job/kafka/job/3.5/98/
> > > > > There are some flaky tests, including the testSingleIP test
> > > > > failure. It failed because of some infra change and we fixed it
> > > > > recently.
> > > > >
> > > > > System tests: running, will update the results later.
> > > > >
> > > > >
> > > > >
> > > > > Thank you.
> > > > > Luke
> > > > >
> > >
> >
>