[REVIEW REQUEST] ReplicationQuotasTestRig rewritten in java

2023-11-03 Thread Николай Ижиков
Hello.

I have prepared a PR that implements `ReplicationQuotasTestRig` in Java -
https://github.com/apache/kafka/pull/14588
Please review.

The PR contains screenshots and reports from both the Scala version and the Java version [1],
so it seems pretty straightforward.

The PR that rewrites `ReassignPartitionCommand` in Java has successfully landed in
trunk.
But we have one postponed dependency that must still be rewritten in Java -
`ReplicationQuotasTestRig`, which is an application in tests
that runs locally and produces charts for the reassignment process.

[1] https://github.com/apache/kafka/pull/14588



Re: [VOTE] KIP-998: Give ProducerConfig(props, doLog) constructor protected access

2023-11-03 Thread Bruno Cadonna

Thanks for the KIP!

+1 (binding)

Best,
Bruno

On 11/3/23 7:55 AM, Chris Egerton wrote:

+1 (binding)

FWIW, I agree that this change should require a KIP. Gating upgrades of
visibility from private or package-private to protected may be cumbersome,
but it comes with the expectation that downgrades in visibility for those
same classes/methods will also be gated behind a KIP, which is IMO clearly
warranted considering the backwards compatibility implications of taking,
e.g., a protected constructor and turning it private.

Cheers,

Chris

On Fri, Nov 3, 2023, 00:55 Sophie Blee-Goldman 
wrote:


Hey all,

This is a trivial one-liner change that it was determined should go through
a KIP during the PR review process (see this thread
 for
context). Since the change itself was already reviewed and approved I'm
skipping the discussion thread and bringing it to a vote right away, but of
course I'm open to feedback and can create a discussion thread if there is
need for it.

The change itself is simply adding the `protected` modifier to the
ProducerConfig constructor that allows for silencing the config logging.
This just brings the ProducerConfig in alignment with the other client
configs, all of which already had this constructor as protected.

KIP:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-998%3A+Give+ProducerConfig%28props%2C+doLog%29+constructor+protected+access
PR: https://github.com/apache/kafka/pull/14681

Thanks!
Sophie





Re: [DISCUSS] KIP-969: Support range interactive queries for versioned state stores

2023-11-03 Thread Bruno Cadonna

Hi Alieh,

I like the examples!


1.
Some terms like `asOf` in the descriptions still need to be replaced in 
the KIP.



2.
In your last e-mail you state:

"How can a user retrieve the latest value? We have the same issue with 
KIP-968 as well."


Why do we have the same issue in KIP-968?
If users need to retrieve the latest value for a specific key, they 
should use KIP-960.



3.
Regarding querying the latest version (or an asOf version) of records in 
a given key range: that is exactly why I proposed to split the query 
class. One class would return the latest or the asOf version (i.e., a 
single version) of records in a key range, and the other class would 
return all versions in a given time range (i.e., multiple versions) of 
the records in a given key range. Splitting into two classes avoids 
specifying a time range together with latest, or a time range together 
with asOf, on a given key range.


Alternatively, you could keep one class and specify that the 
last call wins, as you specified for fromTime() and toTime(). For 
example, for


MultiVersionedRangeQuery.withLowerKeyBound(k1).fromTime(t1).toTime(t2).latest()

latest() wins. However, how would you interpret

MultiVersionedRangeQuery.withLowerKeyBound(k1).fromTime(t1).latest().toTime(t2)

Is it [t1, t2] or [-INF, t2]?
(I would say the latter, but somebody else might say differently.)

The two class solution seems cleaner to me since we do not need to 
consider those edge cases.

You could propose both classes in this KIP.
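To make the split concrete, here is a minimal sketch of what the two classes could look like. These are illustrative stand-ins under my own naming, not the actual KIP-968/969 API:

```java
import java.util.Optional;

// Illustrative stand-ins, not the actual KIP-968/969 API.
public class VersionedQueriesSketch {

    // Single version per key: "latest" by default, or "as of" a timestamp.
    // There is no fromTime()/toTime() here, so latest/asOf can never be
    // combined with a time range.
    static final class VersionedRangeQuery<K> {
        final K lowerBound, upperBound;
        final Optional<Long> asOfTimestamp; // empty means "latest"

        private VersionedRangeQuery(K lower, K upper, Optional<Long> asOf) {
            lowerBound = lower; upperBound = upper; asOfTimestamp = asOf;
        }
        static <K> VersionedRangeQuery<K> withKeyRange(K lower, K upper) {
            return new VersionedRangeQuery<>(lower, upper, Optional.empty());
        }
        VersionedRangeQuery<K> asOf(long ts) {
            return new VersionedRangeQuery<>(lowerBound, upperBound, Optional.of(ts));
        }
    }

    // All versions within [fromTime, toTime] per key; no latest()/asOf()
    // methods exist here, so the ambiguous orderings discussed above cannot
    // be expressed at all.
    static final class MultiVersionedRangeQuery<K> {
        final K lowerBound, upperBound;
        final long fromTime, toTime;

        MultiVersionedRangeQuery(K lower, K upper, long from, long to) {
            lowerBound = lower; upperBound = upper; fromTime = from; toTime = to;
        }
    }

    public static void main(String[] args) {
        VersionedRangeQuery<String> latest = VersionedRangeQuery.withKeyRange("a", "z");
        System.out.println(latest.asOfTimestamp.isEmpty() ? "latest" : "asOf");
    }
}
```

The point of the sketch is that the single-version class simply has no time-range methods, so "latest plus a time range" is a compile error rather than an edge case to document.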


4.
Why do we need orderByKey() and orderByTimestamp()?
Aren't withAscendingKeys(), withDescendingKeys(), 
withAscendingTimestamps(), and withDescendingTimestamps() sufficient?



5.
In example 2, why is

key,value: 2,20, timestamp: 2023-01-10T10:00:00.00Z, valid till: 
2023-01-25T10:00:00.00Z


not part of the result?
It is valid from 2023-01-10T10:00:00.00Z to 2023-01-25T10:00:00.00Z 
which overlaps with the time range [2023-01-17T10:00:00.00Z, 
2023-01-30T10:00:00.00Z] of the query.


(BTW, in the second example, you forgot to add "key" to the output.)


Best,
Bruno


On 10/25/23 4:01 PM, Alieh Saeedi wrote:

Thanks, Matthias and Bruno.
Here is a list of updates:

1. I changed the variable and method names as I did for KIP-968, as
follows:
   - "fromTimestamp" -> fromTime
   - "asOfTimestamp" -> toTime
   - "from(instant)" -> fromTime(instant)
   - "asOf(instant)" -> toTime(instant)
2. As Bruno suggested for KIP-968, I added `orderByKey()`,
`withAscendingKeys()`, and `withAscendingTimestamps()` methods for user
readability.
3. I updated the "Example" section as well.

Some points:

1. Even though the KIP suggests adding the `get(K lowerKeyBound, K
upperKeyBound, long fromTime, long toTime)` method to the interface, I
added this method to the `RocksDBVersionedStore` class for now.
2. Matthias, you mentioned a very important point. How can a user
retrieve the latest value? We have the same issue with KIP-968 as well.
Asking a user to call `toTime(MAX)` violates the API design rules, as you
mentioned. So I think we must have a `latest()` method for both KIP-968 and
KIP-969. What do you think about that?


Cheers,
Alieh

On Thu, Oct 12, 2023 at 6:33 AM Matthias J. Sax  wrote:


Thanks for the update.



To retrieve the latest value(s), the user must call just the asOf method
with the MAX value (asOf(MAX)). The same applies to KIP-968. Do you think
it is clumsy, Matthias?



Well, in KIP-968, calling `asOf` and passing in a timestamp is optional,
and the default is "latest", right? So while `asOf(MAX)` does the same
thing, practically users would never call `asOf` for a "latest" query.

In this KIP, we enforce that users give us a key range (we have the 4
static entry point methods to define a query for this), and we default
to "no bounds" for the time range.

The existing `RangeQuery` allows querying a range of keys for existing
stores. It seems to be a common pattern to query a key range on latest.
In the current proposal, users would need to do:

MultiVersionedRangeQuery.withKeyRange(startKey, endKey).asOf(MAX);

I would like to hear from others whether we think that's a good user
experience. If we agree to accept this, I think we should explain how to
do it in the JavaDocs (and also the regular docs) -- otherwise, I can
already anticipate user questions on all question-asking channels about
how to do a "normal key range query". IMHO, the problem is not that the
code itself is too clumsy, but that it's totally not obvious to users how
to express it without actually explaining it to them. It basically
violates the API design rule "make it easy to use / simple things should
be easy".

Btw: We could also re-use `RangeQuery` and add an implementation to
`VersionedStateStore` to just accept this query type, with "key range
over latest" semantics. -- The issue is, of course, that users need to
know that the query would return `ValueAndTimestamp` and not plain


Re: [VOTE] KIP-998: Give ProducerConfig(props, doLog) constructor protected access

2023-11-03 Thread Ismael Juma
Hi Sophie,

I was trying to understand the goal of the change and it's not totally
clear to me. If the goal is to allow third party applications to customize
the logging behavior, why is the method protected instead of public?

Ismael

On Thu, Nov 2, 2023 at 9:55 PM Sophie Blee-Goldman 
wrote:

> Hey all,
>
> This is a trivial one-liner change that it was determined should go through
> a KIP during the PR review process (see this thread
>  for
> context). Since the change itself was already reviewed and approved I'm
> skipping the discussion thread and bringing it to a vote right away, but of
> course I'm open to feedback and can create a discussion thread if there is
> need for it.
>
> The change itself is simply adding the `protected` modifier to the
> ProducerConfig constructor that allows for silencing the config logging.
> This just brings the ProducerConfig in alignment with the other client
> configs, all of which already had this constructor as protected.
>
> KIP:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-998%3A+Give+ProducerConfig%28props%2C+doLog%29+constructor+protected+access
> PR: https://github.com/apache/kafka/pull/14681
>
> Thanks!
> Sophie
>


Re: [VOTE] KIP-992 Proposal to introduce IQv2 Query Types: TimestampedKeyQuery and TimestampedRangeQuery

2023-11-03 Thread Lucas Brutschy
Hi Hanyu,

Thanks for the KIP!
+1 (binding)

Cheers
Lucas

On Thu, Nov 2, 2023 at 10:19 PM Hao Li  wrote:
>
> Hi Hanyu,
>
> Thanks for the KIP!
> +1 (non-binding)
>
> Hao
>
> On Thu, Nov 2, 2023 at 1:29 PM Bill Bejeck  wrote:
>
> > Hi Hanyu,
> >
> > Thanks for the KIP this LGTM.
> > +1 (binding)
> >
> > Thanks,
> > Bill
> >
> >
> >
> > On Wed, Nov 1, 2023 at 1:07 PM Hanyu (Peter) Zheng
> >  wrote:
> >
> > > Hello everyone,
> > >
> > > I would like to start a vote for KIP-992: Proposal to introduce IQv2
> > Query
> > > Types: TimestampedKeyQuery and TimestampedRangeQuery.
> > >
> > > Sincerely,
> > > Hanyu
> > >
> > > On Wed, Nov 1, 2023 at 10:00 AM Hanyu (Peter) Zheng  > >
> > > wrote:
> > >
> > > >
> > > >
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-992%3A+Proposal+to+introduce+IQv2+Query+Types%3A+TimestampedKeyQuery+and+TimestampedRangeQuery
> > > >
> > > > --
> > > > Hanyu (Peter) Zheng he/him/his
> > > > Software Engineer Intern
> > > > +1 (213) 431-7193
> > > >
> > >
> > >
> > >
> >


[jira] [Created] (KAFKA-15783) Unable to set batchSize in log4j2 Kafka appender

2023-11-03 Thread Diego Pettisani (Jira)
Diego Pettisani created KAFKA-15783:
---

 Summary: Unable to set batchSize in log4j2 Kafka appender
 Key: KAFKA-15783
 URL: https://issues.apache.org/jira/browse/KAFKA-15783
 Project: Kafka
  Issue Type: Bug
  Components: logging
Affects Versions: 3.6.0
Reporter: Diego Pettisani


When I try to configure the batchSize of the Kafka log4j2 appender the 
application logs the following error:

{noformat}
ERROR StatusConsoleListener Kafka contains an invalid element or attribute 
"batchSize"
{noformat}

This is an example of configuration that fails:

{code:xml}
<!-- Reconstructed example (the original XML was stripped in transit);
     batchSize on the Kafka element is the attribute being rejected. -->
<Configuration>
  <Appenders>
    <Kafka name="Kafka" topic="logs" batchSize="10">
      <PatternLayout pattern="%m"/>
      <Property name="bootstrap.servers">localhost:9092</Property>
    </Kafka>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Kafka"/>
    </Root>
  </Loggers>
</Configuration>
{code}

Please note that other parameters like {{syncSend}} work fine.

Could it be that log4j expects this field:

https://github.com/apache/kafka/blob/3.6.0/log4j-appender/src/main/java/org/apache/kafka/log4jappender/KafkaLog4jAppender.java#L83C11-L83C11

to be a String in order to work?




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] KIP-978: Allow dynamic reloading of certificates with different DN / SANs

2023-11-03 Thread Viktor Somogyi-Vass
+1 (binding)

Thank you for working on this, feel free to add me to the code review too.

On Fri, Oct 27, 2023 at 2:54 PM Mickael Maison 
wrote:

> Hi,
>
> +1 (binding)
> Thanks for the KIP
>
> Mickael
>
> On Wed, Oct 25, 2023 at 5:34 PM Federico Valeri 
> wrote:
> >
> > Hi Jakub, thanks for this KIP.
> >
> > +1 (non binding)
> >
> > Thanks
> > Fede
> >
> > On Wed, Oct 25, 2023 at 4:45 PM Manikumar 
> wrote:
> > >
> > > Hi,
> > >
> > > Thanks for the KIP.
> > >
> > > +1 (binding)
> > >
> > >
> > > Thanks.
> > >
> > > On Wed, Oct 25, 2023 at 1:37 AM Jakub Scholz  wrote:
> > >
> > > > Hi all,
> > > >
> > > > I would like to start a vote for the KIP-978: Allow dynamic
> reloading of
> > > > certificates with different DN / SANs
> > > > <
> > > >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=263429128
> > > > >
> > > > .
> > > >
> > > > Thanks & Regards
> > > > Jakub
> > > >
>


Re: [DISCUSS] KIP-954: expand default DSL store configuration to custom types

2023-11-03 Thread Almog Gavra
Good question :) I have a PR for it already here:
https://github.com/apache/kafka/pull/14659. The concept is to wrap the
suppliers with an interface that allows for delayed creation of the
StoreBuilder instead of creating the StoreBuilder from the suppliers right
away. Happy to get on a quick call to outline the implementation strategy
if you'd like, but hopefully you have no objections to the goal!
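The "delayed creation" idea can be sketched in plain Java. This is only an illustration of the wrapping concept under made-up names (`DslStoreSuppliers`, `DeferredStoreBuilder`), not the actual code in the linked PR:

```java
import java.util.function.Function;

// Hypothetical sketch: instead of building a StoreBuilder from suppliers
// eagerly, the topology holds a factory that is invoked only once the
// effective DslStoreSuppliers is known. Names are illustrative stand-ins.
public class DeferredStoreSketch {

    interface DslStoreSuppliers { String kvStoreSupplierName(); }

    // A StoreBuilder stand-in that just records which supplier was used.
    record StoreBuilder(String supplierName) {}

    // Wraps the supplier-to-builder wiring so it can be resolved late,
    // e.g. after the final config is passed to KafkaStreams#new(...).
    static final class DeferredStoreBuilder {
        private final Function<DslStoreSuppliers, StoreBuilder> factory;

        DeferredStoreBuilder(Function<DslStoreSuppliers, StoreBuilder> factory) {
            this.factory = factory;
        }

        StoreBuilder materialize(DslStoreSuppliers suppliers) {
            return factory.apply(suppliers);
        }
    }

    public static void main(String[] args) {
        DeferredStoreBuilder deferred =
            new DeferredStoreBuilder(s -> new StoreBuilder(s.kvStoreSupplierName()));
        // The supplier choice is applied late, not at StreamsBuilder.build() time.
        StoreBuilder built = deferred.materialize(() -> "RocksDB");
        System.out.println(built.supplierName()); // prints "RocksDB"
    }
}
```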

On Thu, Nov 2, 2023 at 8:44 PM Matthias J. Sax  wrote:

> Almog,
>
> can you explain how you intend to implement this change? It's not clear
> to me how we could do this.
>
> When we call `StreamsBuilder.build()` it will give us an already fully
> wired `Topology`, including all store suppliers needed. I don't see a
> clean way we could change the store supplier after the fact.
>
>
> -Matthias
>
> On 11/2/23 5:11 PM, Almog Gavra wrote:
> > Hello everyone - I updated the KIP to also include the following
> > modification:
> >
> > Both the new dsl.store.suppliers.class  and the old default.dsl.store
> will
> > now respect the configurations when passed in via
> KafkaStreams#new(Topology,
> > StreamsConfig)  (and other related constructors) instead of only being
> > respected when passed in to the initial StoreBuilder#new(TopologyConfig)
> > (though it will be respected if passed in via the original path as well).
> >
> > I was honestly a bit shocked this wasn't the case with the original KIP
> > that introduced default.dsl.store!
> >
> > On Fri, Jul 28, 2023 at 4:55 PM Almog Gavra 
> wrote:
> >
> >> OK! I think I got everything, but I'll give the KIP another read with
> >> fresh eyes later. Just a reminder that the voting is open, so go out and
> >> exercise your civic duty! ;)
> >>
> >> - Almog
> >>
> >> On Fri, Jul 28, 2023 at 10:38 AM Almog Gavra 
> >> wrote:
> >>
> >>> Thanks Guozhang & Sophie:
> >>>
> >>> A2. Will clarify in the KIP
> >>> A3. Will change back to the deprecated version!
> >>> A5. Seems like I'm outnumbered... DslStoreSuppliers it is.
> >>>
> >>> Will update the KIP today.
> >>>
> >>> - Almog
> >>>
> >>> On Thu, Jul 27, 2023 at 12:42 PM Guozhang Wang <
> >>> guozhang.wang...@gmail.com> wrote:
> >>>
>  Yes, that sounds right to me. Thanks Sophie.
> 
>  On Thu, Jul 27, 2023 at 12:35 PM Sophie Blee-Goldman
>   wrote:
> >
> > A2: Guozhang, just to close the book on the ListValue store thing, I
>  fully
> > agree it seems like overreach
> > to expose/force this on users, especially if it's fully internal
>  today. But
> > just to make sure we're on the same
> > page here, you're still ok with this KIP fixing the API gap that
> exists
> > today, in which these stores cannot be
> > customized by the user at all?
> >
> > In other words, after this KIP, the new behavior for the ListValue
>  store in
> > a stream join will be:
> >
> > S1: First, check if the user passed in a `DSLStoreSuppliers` (or
>  whatever
> > the name will be) to the
> > StreamJoined config object, and use that to obtain the
> > KVStoreSupplier for this ListValue store
> >
> > S2: If none was provided, check if the user has set a default
> > DSLStoreSuppliers via the new config,
> > and use that to get the KVStoreSupplier if so
> >
> > S3: If neither is set, fall back to the original logic as it is
> today,
> > which is to pass in a KVStoreSupplier
> > that is hard-coded to be either RocksDB or InMemory, based on
>  what
> > is returned for the #persistent
> > API by the StreamJoined's WindowStoreSupplier
> >
> > Does that sound right? We can clarify this further in the KIP if need
>  be
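The S1-S3 fallback chain described above can be sketched in plain Java. `DslStoreSuppliers` and the parameter names are stand-ins for whatever the KIP ends up calling them, not Kafka's real API:

```java
import java.util.Optional;

// Plain-Java sketch of the S1-S3 resolution order for the ListValue store's
// KV supplier; the types here are illustrative stand-ins.
public class StoreResolutionSketch {

    interface DslStoreSuppliers {
        String kvStoreSupplierName();
    }

    // S1: per-operator override from StreamJoined wins.
    // S2: otherwise, the application-wide default from the new config.
    // S3: otherwise, the legacy hard-coded choice based on persistence.
    static String resolveKvStoreSupplier(Optional<DslStoreSuppliers> fromStreamJoined,
                                         Optional<DslStoreSuppliers> fromConfig,
                                         boolean persistent) {
        if (fromStreamJoined.isPresent()) {
            return fromStreamJoined.get().kvStoreSupplierName();
        }
        if (fromConfig.isPresent()) {
            return fromConfig.get().kvStoreSupplierName();
        }
        return persistent ? "RocksDB" : "InMemory";
    }

    public static void main(String[] args) {
        // No override and no config default: falls through to S3.
        System.out.println(resolveKvStoreSupplier(
            Optional.empty(), Optional.empty(), true)); // prints "RocksDB"
    }
}
```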
> >
> > On Thu, Jul 27, 2023 at 10:48 AM Guozhang Wang <
>  guozhang.wang...@gmail.com>
> > wrote:
> >
> >> Hi all,
> >>
> >> Like Almog's secretary as well! Just following up on that index:
> >>
> >> A2: I'm also happy without introducing versioned KV in this KIP as I
> >> would envision it to be introduced as new params into the
> >> KeyValuePluginParams in the future. And just to clarify on Sophie's
> >> previous comment, I think ListStore should not be exposed in this
> API
> >> until we see it as a common usage and hence would want to (again, we
> >> need to think very carefully since it would potentially ask all
> >> implementers to adopt) move it from the internal category to the
> >> public interface category. As for now, I think only having kv /
>  window
> >> / session as public store types is fine.
> >>
> >> A3: Seems I was not making myself very clear at the beginning :P The
> >> major thing is that I'd actually like to avoid having two configs
> >> co-exist for the same function, since it will be a confusing learning
> >> curve for users, and hence what I was proposing is to just have the
> >> newly introduced interface but not introducing a new config, an

Re: Java 1.8 and TLSv1.3

2023-11-03 Thread Edoardo Comar
Andreas,
do you mean that even if you configure your Java client running on Java 8 with
ssl.enabled.protocols=TLSv1.3
you can't connect to a Kafka broker using TLSv1.3?


On Sat, 28 Oct 2023 at 01:03, Ismael Juma  wrote:
>
> Hi Andreas,
>
> The TLS code has run into changes in behavior across different Java
> versions, so we only wanted to allow TLS 1.3 in the versions we tested
> against. TLS 1.3 landed in Java 8 a while after we made the relevant
> changes for Java 11 and newer. That said, Java 8 support is deprecated and
> will be removed in Apache Kafka 4.0 (in a few months). So, it's not clear
> we want to invest in making more features available for that version.
>
> Thanks,
> Ismael
>
> On Thu, Oct 26, 2023 at 4:49 AM Andreas Martens1 
> wrote:
>
> > Hello good people of Kafka,
> >
> > I was recently informed that TLS 1.3 doesn’t work for connecting our
> > product to Kafka, and after some digging realised it was true, no matter
> > how hard I type “TLSv1.3” it doesn’t work, weirdly with an error about no
> > applicable Ciphers.
> >
> > So after a bunch more digging I realised that the problem lies in the
> > Kafka client classes, in Kafka clients’ SslConfigs.java there is this code:
> > static {
> >   if (Java.IS_JAVA11_COMPATIBLE) {
> > DEFAULT_SSL_PROTOCOL = "TLSv1.3";
> > DEFAULT_SSL_ENABLED_PROTOCOLS = "TLSv1.2,TLSv1.3";
> >   } else {
> > DEFAULT_SSL_PROTOCOL = "TLSv1.2";
> > DEFAULT_SSL_ENABLED_PROTOCOLS = "TLSv1.2";
> >   }
> > }
> >
> > My initial thoughts were that these just set the defaults, I should still
> > be able to set TLSv1.3 in my properties, but no. If I change the above
> > block to:
> > static {
> >   DEFAULT_SSL_PROTOCOL = "TLSv1.3";
> >   DEFAULT_SSL_ENABLED_PROTOCOLS = "TLSv1.2,TLSv1.3";
> > }
> > it works just fine. I suspect (but haven’t yet had the time to verify)
> > that there’s something that gets the list of supported Ciphers from the
> > default, and applies those Ciphers to the selected end protocol, and since
> > there’s no overlap I’m outta luck.
> >
> > Now of course life is never simple, so I can’t just make the above change
> > and send you fine people a PR, since there’s probably still some older JREs
> > out there that don’t support TLSv1.3.
> >
> > As a fine person once told me (actually, a Java support person) “don’t
> > test the symptom, test the cause”. In this case, we shouldn’t be testing
> > whether we’re working with a Java 11 JVM, we should test whether our
> > current JVM has a TLSv1.3 Context instance. E.g.:
> >   try{
> > SSLContext context = SSLContext.getInstance("TLSv1.3");
> > DEFAULT_SSL_PROTOCOL = "TLSv1.3";
> >   }
> >   catch(java.security.NoSuchAlgorithmException e){
> > DEFAULT_SSL_PROTOCOL = "TLSv1.2";
> >   }
> > But the test in SslConfigs.java is done in *static* init, and as we all
> > know, performing try-catch within a static is a Big No-No.
> >
> > So, before I go digging further in the code and start looking to modify
> > the places where the defaults are consumed, does anyone have a better idea?
> > My initial thought was to raise a ticket and run away, but I’m trying to be
> > a good citizen!
> >
> > I should probably mention, this code was introduced in
> > https://issues.apache.org/jira/browse/KAFKA-7251 and
> > https://issues.apache.org/jira/browse/KAFKA-9320 and
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-573%3A+Enable+TLSv1.3+by+default
> > .
> >
> > Thanks,
> > Andreas Martens
> > --
> > Senior Engineer
> > IBM App Connect Enterprise
> >
> > Unless otherwise stated above:
> >
> > IBM United Kingdom Limited
> > Registered in England and Wales with number 741598
> > Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU
> >
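The capability-probe approach suggested above can be made runnable while keeping the try/catch out of the static initializer itself, by delegating to a helper method. This is a sketch with illustrative class and method names, not the actual `SslConfigs` code:

```java
import javax.net.ssl.SSLContext;
import java.security.NoSuchAlgorithmException;

public class TlsDefaults {
    // Resolved once at class load, but the probe lives in a plain method,
    // so the static initializer stays a single trivial assignment.
    static final String DEFAULT_SSL_PROTOCOL = probeDefaultProtocol();

    // Ask the runtime whether it can build a TLSv1.3 context, instead of
    // inferring support from the Java version ("test the cause, not the
    // symptom").
    static String probeDefaultProtocol() {
        try {
            SSLContext.getInstance("TLSv1.3");
            return "TLSv1.3";
        } catch (NoSuchAlgorithmException e) {
            return "TLSv1.2";
        }
    }

    public static void main(String[] args) {
        System.out.println(DEFAULT_SSL_PROTOCOL);
    }
}
```

On any JRE that ships a TLSv1.3 provider (including late Java 8 builds) this resolves to "TLSv1.3"; on older runtimes it falls back to "TLSv1.2".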


[jira] [Created] (KAFKA-15784) Ensure atomicity of in memory update and write when transactionally committing offsets

2023-11-03 Thread Justine Olshan (Jira)
Justine Olshan created KAFKA-15784:
--

 Summary: Ensure atomicity of in memory update and write when 
transactionally committing offsets
 Key: KAFKA-15784
 URL: https://issues.apache.org/jira/browse/KAFKA-15784
 Project: Kafka
  Issue Type: Sub-task
Affects Versions: 3.7.0
Reporter: Justine Olshan
Assignee: Justine Olshan


[https://github.com/apache/kafka/pull/14370] (KAFKA-15449) removed the locking 
around validating, updating state, and writing transactional offset commits 
to the log. (The verification causes us to release the lock.)

This was discovered in the discussion of 
[https://github.com/apache/kafka/pull/14629] (KAFKA-15653).

Since KAFKA-15653 is needed for 3.5.1, it makes sense to address the locking 
issue separately with this ticket. 





Re: [DISCUSS] Apache Kafka 3.5.2 release

2023-11-03 Thread Matthias J. Sax

Hey,

Sorry for the late reply. We finished our testing, and we think we are good to go.

Thanks for giving us the opportunity to get the RocksDB version bump in. 
Let's ship it!



-Matthias

On 11/2/23 4:37 PM, Luke Chen wrote:

Hi Matthias,

Is there any update on the test status for the RocksDB version bump?
Could I create a 3.5.2 RC build next week?

Thanks.
Luke

On Sat, Oct 21, 2023 at 1:01 PM Luke Chen  wrote:


Hi Matthias,

I agree it's indeed a blocker for 3.5.2 to address CVE in RocksDB.
Please let me know when the test is completed.

Thank you.
Luke

On Sat, Oct 21, 2023 at 2:12 AM Matthias J. Sax  wrote:


Thanks for the info Luke.

We did backport all but one PR in the meantime. The missing PR is a
RocksDB version bump. We want to consider it for 3.5.2 because it
addresses a CVE.

Cf https://github.com/apache/kafka/pull/14216

However, RocksDB version bumps are a little bit more tricky, and we
would like to test this properly on the 3.5 branch, which would take at least
one week; we could do the cherry-pick on Monday and start testing.

Please let us know if such a delay for 3.5.2 is acceptable or not.

Thanks.

-Matthias


On 10/20/23 5:44 AM, Luke Chen wrote:

Hi Ryan,

OK, I've backported it to the 3.5 branch.
It'll be included in v3.5.2.

Thanks.
Luke

On Fri, Oct 20, 2023 at 7:43 AM Ryan Leslie (BLP/ NEW YORK (REMOT) <
rles...@bloomberg.net> wrote:


Hi Luke,

Hope you are well. Can you please include
https://issues.apache.org/jira/browse/KAFKA-15106 in 3.5.2?

Thanks,

Ryan

From: dev@kafka.apache.org At: 10/17/23 05:05:24 UTC-4:00
To: dev@kafka.apache.org
Subject: Re: [DISCUSS] Apache Kafka 3.5.2 release

Thanks Luke for volunteering for 3.5.2 release.

On Tue, 17 Oct 2023 at 11:58, Josep Prat 
wrote:


Hi Luke,

Thanks for taking this one!

Best,

On Tue, Oct 17, 2023 at 8:12 AM Luke Chen  wrote:


Hi all,

I'd like to volunteer as release manager for the Apache Kafka 3.5.2,

to

have an important bug/vulnerability fix release for 3.5.1.

If there are no objections, I'll start building a release plan in

thewiki

in the next couple of weeks.

Thanks,
Luke




--
*Josep Prat*
Open Source Engineering Director, *Aiven*
josep.p...@aiven.io | +491715557497
aiven.io

*Aiven Deutschland GmbH*
Alexanderufer 3-7, 10117 Berlin
Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen
Amtsgericht Charlottenburg, HRB 209739 B




Re: [VOTE] KIP-998: Give ProducerConfig(props, doLog) constructor protected access

2023-11-03 Thread Matthias J. Sax

+1 (binding)


About "why not public" question:

I think we need to distinguish between "end users" who create a producer 
instance, and "external parties" that might implement their own 
`Producer` (or wrap/extend `KafkaProducer`).


In the end, I would not expect an "end user" to actually call `new 
ProducerConfig` to begin with. If one creates a `KafkaProducer` they 
pass the config via a `Map` or `Properties`, and the producer creates 
the `ProducerConfig` internally only. -- Thus, there is no need to make it 
`public`. (To this end, I don't actually understand why there are public 
`ProducerConfig` constructors to begin with -- sounds like a leaky 
abstraction to me.)


On the other hand, if a "third party" implements the `Producer` interface 
to ship their own producer implementation, they might want to create a 
`ProducerConfig` internally, so for them it's different. But still, they 
don't need public access because they can extend `ProducerConfig` for 
this case. -- To me, this falls into the category "simple things 
should be easy, and hard things should be possible".



-Matthias


On 11/3/23 6:06 AM, Ismael Juma wrote:

Hi Sophie,

I was trying to understand the goal of the change and it's not totally
clear to me. If the goal is to allow third party applications to customize
the logging behavior, why is the method protected instead of public?

Ismael

On Thu, Nov 2, 2023 at 9:55 PM Sophie Blee-Goldman 
wrote:


Hey all,

This is a trivial one-liner change that it was determined should go through
a KIP during the PR review process (see this thread
 for
context). Since the change itself was already reviewed and approved I'm
skipping the discussion thread and bringing it to a vote right away, but of
course I'm open to feedback and can create a discussion thread if there is
need for it.

The change itself is simply adding the `protected` modifier to the
ProducerConfig constructor that allows for silencing the config logging.
This just brings the ProducerConfig in alignment with the other client
configs, all of which already had this constructor as protected.

KIP:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-998%3A+Give+ProducerConfig%28props%2C+doLog%29+constructor+protected+access
PR: https://github.com/apache/kafka/pull/14681

Thanks!
Sophie





Re: [DISCUSS] KIP-954: expand default DSL store configuration to custom types

2023-11-03 Thread Matthias J. Sax

Thanks. Will take a look into the PR.

I don't have any objection to the goal; contrary! It's very annoying 
what we have right now, and if we can improve it, I am totally in favor 
of it.



-Matthias

On 11/3/23 8:47 AM, Almog Gavra wrote:

Good question :) I have a PR for it already here:
https://github.com/apache/kafka/pull/14659. The concept is to wrap the
suppliers with an interface that allows for delayed creation of the
StoreBuilder instead of creating the StoreBuilder from the suppliers right
away. Happy to get on a quick call to outline the implementation strategy
if you'd like, but hopefully you have no objections to the goal!

On Thu, Nov 2, 2023 at 8:44 PM Matthias J. Sax  wrote:


Almog,

> can you explain how you intend to implement this change? It's not clear
> to me how we could do this.
>
> When we call `StreamsBuilder.build()` it will give us an already fully
> wired `Topology`, including all store suppliers needed. I don't see a
> clean way we could change the store supplier after the fact.


-Matthias

On 11/2/23 5:11 PM, Almog Gavra wrote:

Hello everyone - I updated the KIP to also include the following
modification:

Both the new dsl.store.suppliers.class  and the old default.dsl.store

will

now respect the configurations when passed in via

KafkaStreams#new(Topology,

StreamsConfig)  (and other related constructors) instead of only being
respected when passed in to the initial StoreBuilder#new(TopologyConfig)
(though it will be respected if passed in via the original path as well).

I was honestly a bit shocked this wasn't the case with the original KIP
that introduced default.dsl.store!

On Fri, Jul 28, 2023 at 4:55 PM Almog Gavra 

wrote:



OK! I think I got everything, but I'll give the KIP another read with
fresh eyes later. Just a reminder that the voting is open, so go out and
exercise your civic duty! ;)

- Almog

On Fri, Jul 28, 2023 at 10:38 AM Almog Gavra 
wrote:


Thanks Guozhang & Sophie:

A2. Will clarify in the KIP
A3. Will change back to the deprecated version!
A5. Seems like I'm outnumbered... DslStoreSuppliers it is.

Will update the KIP today.

- Almog

On Thu, Jul 27, 2023 at 12:42 PM Guozhang Wang <
guozhang.wang...@gmail.com> wrote:


Yes, that sounds right to me. Thanks Sophie.

On Thu, Jul 27, 2023 at 12:35 PM Sophie Blee-Goldman
 wrote:


A2: Guozhang, just to close the book on the ListValue store thing, I

fully

agree it seems like overreach
to expose/force this on users, especially if it's fully internal

today. But

just to make sure we're on the same
page here, you're still ok with this KIP fixing the API gap that

exists

today, in which these stores cannot be
customized by the user at all?

In other words, after this KIP, the new behavior for the ListValue

store in

a stream join will be:

S1: First, check if the user passed in a `DSLStoreSuppliers` (or

whatever

the name will be) to the
 StreamJoined config object, and use that to obtain the
KVStoreSupplier for this ListValue store

S2: If none was provided, check if the user has set a default
DSLStoreSuppliers via the new config,
 and use that to get the KVStoreSupplier if so

S3: If neither is set, fall back to the original logic as it is

today,

which is to pass in a KVStoreSupplier
 that is hard-coded to be either RocksDB or InMemory, based on

what

is returned for the #persistent
 API by the StreamJoined's WindowStoreSupplier

Does that sound right? We can clarify this further in the KIP if need

be


On Thu, Jul 27, 2023 at 10:48 AM Guozhang Wang <

guozhang.wang...@gmail.com>

wrote:


Hi all,

Like Almog's secretary as well! Just following up on that index:

A2: I'm also happy without introducing versioned KV in this KIP as I
would envision it to be introduced as new params into the
KeyValuePluginParams in the future. And just to clarify on Sophie's
previous comment, I think ListStore should not be exposed in this

API

until we see it as a common usage and hence would want to (again, we
need to think very carefully since it would potentially ask all
implementers to adopt) move it from the internal category to the
public interface category. As for now, I think only having kv /

window

/ session as public store types is fine.

A3: Seems I was not making myself very clear at the beginning :P The
major thing that I'd actually like to avoid having two configs
co-exist for the same function since it will be a confusing learning
curve for users, and hence what I was proposing is to just have the
newly introduced interface but not introducing a new config, and I
realized now that it is actually more aligned with the CUSTOM idea
where the ordering would be looking at config first, and then the
interface. I blushed as I read Almog likes it.. After thinking about
it twice, I'm now a bit leaning towards just deprecating the old
config with the new API+config as well.

A5: Among the names we have been discussed so far:

DslStorePlugin
StoreTypeSpec
StoreImplSpec
DslStoreS

Re: [VOTE] KIP-998: Give ProducerConfig(props, doLog) constructor protected access

2023-11-03 Thread Ismael Juma
It seems wrong to require inheritance for this and we already have a public
constructor. I would make both of them public.

Ismael

On Fri, Nov 3, 2023 at 10:47 AM Matthias J. Sax  wrote:

> +1 (binding)
>
>
> About "why not public" question:
>
> I think we need to distinguish between "end users" who create a producer
> instance, and "external parties" that might implement their own
> `Producer` (or wrap/extend `KafkaProducer`).
>
> In the end, I would not expect an "end user" to actually call `new
> ProducerConfig` to begin with. If one creates a `KafkaProducer` they
> pass the config via a `Map` or `Properties`, and the producer creates
> `ProducerConfig` internally only. -- Thus, there is no need to make it
> `public`. (To this end, I don't actually understand why there is public
> `ProducerConfig` constructors to begin with -- sounds like a leaky
> abstraction to me.)
>
> On the other hand, if a "third party" implements `Producer` interface to
> ship their own producer implementation, they might want to create
> `ProducerConfig` internally, so for them it's different, but still, they
> don't need public access because they can extend `ProducerConfig`, too,
> for this case. -- To me, this falls into the category "simple things
> should be easy, and hard things should be possible".
>
>
> -Matthias
>
>
> On 11/3/23 6:06 AM, Ismael Juma wrote:
> > Hi Sophie,
> >
> > I was trying to understand the goal of the change and it's not totally
> > clear to me. If the goal is to allow third party applications to
> customize
> > the logging behavior, why is the method protected instead of public?
> >
> > Ismael
> >
> > On Thu, Nov 2, 2023 at 9:55 PM Sophie Blee-Goldman <
> sop...@responsive.dev>
> > wrote:
> >
> >> Hey all,
> >>
> >> This is a trivial one-liner change that it was determined should go
> through
> >> a KIP during the PR review process (see this thread
> >>  for
> >> context). Since the change itself was already reviewed and approved I'm
> >> skipping the discussion thread and bringing it to a vote right away,
> but of
> >> course I'm open to feedback and can create a discussion thread if there
> is
> >> need for it.
> >>
> >> The change itself is simply adding the `protected` modifier to the
> >> ProducerConfig constructor that allows for silencing the config logging.
> >> This just brings the ProducerConfig in alignment with the other client
> >> configs, all of which already had this constructor as protected.
> >>
> >> KIP:
> >>
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-998%3A+Give+ProducerConfig%28props%2C+doLog%29+constructor+protected+access
> >> PR: https://github.com/apache/kafka/pull/14681
> >>
> >> Thanks!
> >> Sophie
> >>
> >
>
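Matthias's inheritance argument above can be illustrated with stand-in classes. These are simplified sketches, not the actual `ProducerConfig`/`AbstractConfig` implementation; only the access-modifier pattern matches the KIP:

```java
import java.util.Map;

// Simplified stand-in for ProducerConfig -- illustrative only.
class FakeProducerConfig {
    private final Map<String, Object> props;

    // Public constructor: always logs, like ProducerConfig(Map).
    public FakeProducerConfig(Map<String, Object> props) {
        this(props, true);
    }

    // Protected constructor: only subclasses can silence the logging.
    protected FakeProducerConfig(Map<String, Object> props, boolean doLog) {
        this.props = props;
        if (doLog) {
            System.out.println("configs: " + props);
        }
    }

    public Object get(String key) {
        return props.get(key);
    }
}

// A third-party producer implementation can opt out of config logging
// via inheritance, without the two-arg constructor being public.
class QuietProducerConfig extends FakeProducerConfig {
    QuietProducerConfig(Map<String, Object> props) {
        super(props, false); // accessible here because it is protected
    }
}

public class ProtectedCtorDemo {
    public static void main(String[] args) {
        FakeProducerConfig quiet = new QuietProducerConfig(Map.of("acks", "all"));
        System.out.println(quiet.get("acks")); // all
    }
}
```

This shows the trade-off under discussion: with `protected`, "end users" see only the logging constructor, while implementers reach the quiet one through a subclass; making both `public` removes the subclassing step.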


Request for Removal From the Mail Thread

2023-11-03 Thread Dasun Nirmitha
Hello Apache Kafka Mail Thread,
Thanks for having me in your mail thread for a really long time. But since
I've changed my fields I believe I won't be needing an active involvement
with this platform anymore. So, this is my sincere request: kindly remove
me from your mailing list.
Thanks and regards
Dasun


Re: Request for Removal From the Mail Thread

2023-11-03 Thread Divij Vaidya
Hey Dasun

Subscribe/unsubscribe to the mailing list is self service. You can find the
details at https://kafka.apache.org/contact

--
Divij Vaidya



On Fri, Nov 3, 2023 at 7:37 PM Dasun Nirmitha 
wrote:

> Hello Apache Kafka Mail Thread,
> Thanks for having me in your mail thread for a really long time. But since
> I've changed my fields I believe I won't be needing an active involvement
> with this platform anymore. So, this is my sincere request: kindly remove
> me from your mailing list.
> Thanks and regards
> Dasun
>


[jira] [Created] (KAFKA-15785) Investigate new test case failures

2023-11-03 Thread Apoorv Mittal (Jira)
Apoorv Mittal created KAFKA-15785:
-

 Summary: Investigate new test case failures
 Key: KAFKA-15785
 URL: https://issues.apache.org/jira/browse/KAFKA-15785
 Project: Kafka
  Issue Type: Sub-task
Reporter: Apoorv Mittal


PR - [https://github.com/apache/kafka/pull/14621] has 7 new test case failures
which are not related to the PR itself. This Jira tracks these failures to
investigate whether the current changes somehow impact the tests.

CI: 
https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-14621/12/tests/

Failed tests
Build / JDK 17 and Scala 2.13 / 
testTransactionAfterTransactionIdExpiresButProducerIdRemains(String).quorum=kraft
 – kafka.api.ProducerIdExpirationTest
8s
Build / JDK 8 and Scala 2.12 / testBumpTransactionalEpoch(String).quorum=kraft 
– kafka.api.TransactionsTest
1m 20s
Build / JDK 11 and Scala 2.13 / testReplicateSourceDefault() – 
org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationBaseTest
2m 15s
Build / JDK 11 and Scala 2.13 / testReplicateSourceDefault() – 
org.apache.kafka.connect.mirror.integration.MirrorConnectorsWithCustomForwardingAdminIntegrationTest
1m 51s
Build / JDK 11 and Scala 2.13 / 
testDeleteCmdNonExistingGroup(String).quorum=kraft – 
kafka.admin.DeleteConsumerGroupsTest
11s
Build / JDK 11 and Scala 2.13 / testTimeouts() – 
org.apache.kafka.controller.QuorumControllerTest
<1s
Build / JDK 11 and Scala 2.13 / 
testHighWaterMarkAfterPartitionReassignment(String).quorum=kraft – 
org.apache.kafka.tools.reassign.ReassignPartitionsIntegrationTest



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15787) Investigate new test case failure - testReplicateSourceDefault

2023-11-03 Thread Apoorv Mittal (Jira)
Apoorv Mittal created KAFKA-15787:
-

 Summary: Investigate new test case failure - 
testReplicateSourceDefault
 Key: KAFKA-15787
 URL: https://issues.apache.org/jira/browse/KAFKA-15787
 Project: Kafka
  Issue Type: Sub-task
Reporter: Apoorv Mittal


PR - [https://github.com/apache/kafka/pull/14621] has 7 new test case failures
which are not related to the PR itself. This Jira tracks these failures to
investigate whether the current changes somehow impact the tests.

CI: 
[https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-14621/12/tests/]



[jira] [Created] (KAFKA-15786) Investigate new test case failure - testBumpTransactionalEpoch

2023-11-03 Thread Apoorv Mittal (Jira)
Apoorv Mittal created KAFKA-15786:
-

 Summary: Investigate new test case failure - 
testBumpTransactionalEpoch
 Key: KAFKA-15786
 URL: https://issues.apache.org/jira/browse/KAFKA-15786
 Project: Kafka
  Issue Type: Sub-task
Reporter: Apoorv Mittal


PR - [https://github.com/apache/kafka/pull/14621] has 7 new test case failures
which are not related to the PR itself. This Jira tracks these failures to
investigate whether the current changes somehow impact the tests.

CI: 
[https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-14621/12/tests/]



[jira] [Created] (KAFKA-15789) Investigate new test case failure - testTimeouts - QuorumControllerTest

2023-11-03 Thread Apoorv Mittal (Jira)
Apoorv Mittal created KAFKA-15789:
-

 Summary: Investigate new test case failure - testTimeouts - 
QuorumControllerTest
 Key: KAFKA-15789
 URL: https://issues.apache.org/jira/browse/KAFKA-15789
 Project: Kafka
  Issue Type: Sub-task
Reporter: Apoorv Mittal


PR - [https://github.com/apache/kafka/pull/14621] has 7 new test case failures
which are not related to the PR itself. This Jira tracks these failures to
investigate whether the current changes somehow impact the tests.

CI: 
[https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-14621/12/tests/]



[jira] [Created] (KAFKA-15790) Investigate new test case failure - testHighWaterMarkAfterPartitionReassignment

2023-11-03 Thread Apoorv Mittal (Jira)
Apoorv Mittal created KAFKA-15790:
-

 Summary: Investigate new test case failure - 
testHighWaterMarkAfterPartitionReassignment
 Key: KAFKA-15790
 URL: https://issues.apache.org/jira/browse/KAFKA-15790
 Project: Kafka
  Issue Type: Sub-task
Reporter: Apoorv Mittal


PR - [https://github.com/apache/kafka/pull/14621] has 7 new test case failures
which are not related to the PR itself. This Jira tracks these failures to
investigate whether the current changes somehow impact the tests.

CI: 
[https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-14621/12/tests/]



[jira] [Created] (KAFKA-15788) Investigate new test case failure - testDeleteCmdNonExistingGroup

2023-11-03 Thread Apoorv Mittal (Jira)
Apoorv Mittal created KAFKA-15788:
-

 Summary: Investigate new test case failure - 
testDeleteCmdNonExistingGroup
 Key: KAFKA-15788
 URL: https://issues.apache.org/jira/browse/KAFKA-15788
 Project: Kafka
  Issue Type: Sub-task
Reporter: Apoorv Mittal


PR - [https://github.com/apache/kafka/pull/14621] has 7 new test case failures
which are not related to the PR itself. This Jira tracks these failures to
investigate whether the current changes somehow impact the tests.

CI: 
[https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-14621/12/tests/]



[jira] [Created] (KAFKA-15791) Investigate new test case failure - MirrorConnectorsWithCustomForwardingAdminIntegrationTest - MirrorConnectorsIntegrationBaseTest

2023-11-03 Thread Apoorv Mittal (Jira)
Apoorv Mittal created KAFKA-15791:
-

 Summary: Investigate new test case failure - 
MirrorConnectorsWithCustomForwardingAdminIntegrationTest - 
MirrorConnectorsIntegrationBaseTest
 Key: KAFKA-15791
 URL: https://issues.apache.org/jira/browse/KAFKA-15791
 Project: Kafka
  Issue Type: Sub-task
Reporter: Apoorv Mittal


PR - [https://github.com/apache/kafka/pull/14621] has 7 new test case failures
which are not related to the PR itself. This Jira tracks these failures to
investigate whether the current changes somehow impact the tests.

CI: 
[https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-14621/12/tests/]



Re: [VOTE] KIP-998: Give ProducerConfig(props, doLog) constructor protected access

2023-11-03 Thread Sophie Blee-Goldman
I am fine with making them public. Of course in that case we should also
change the corresponding constructors in ConsumerConfig, AdminConfig, and
StreamsConfig from protected to public as well, to be consistent. But
Matthias seems to feel that these should never be instantiated by a user
and that the correct course of action would be to move in the opposite
direction.

I don't personally feel strongly either way -- honestly I had thought it
was an abuse of internal APIs to extend the other Config classes in order
to access the protected constructor and disable logging. So I would be
happy to officially pull it into the public API with all-public
constructors, because I do feel it is valid/useful to be able to
instantiate these objects. We do so in order to access config values in a
way that accounts for any overrides on top of the default, for example when
multiple overrides are in play (eg user overrides on top of framework
overrides on top of Kafka Streams overrides on top of
Producer/Consumer/Admin client defaults). Using them is also (slightly)
more type-safe than going through a Properties or config Map<>

Any objections to expanding the KIP to the ConsumerConfig, AdminConfig, and
StreamsConfig constructors and making them public as well? From Matthias or
otherwise?
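The layered-override resolution Sophie mentions can be sketched in a few lines. This is an illustrative reduction of the idea to plain maps (the class name and the config keys are stand-ins, not the real Kafka config machinery, which also applies parsing and validation):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of "overrides on top of overrides": the resolved
// value comes from the last layer that sets a key.
public class LayeredConfigDemo {
    @SafeVarargs
    public static Map<String, Object> resolve(Map<String, Object>... layers) {
        Map<String, Object> merged = new HashMap<>();
        for (Map<String, Object> layer : layers) {
            merged.putAll(layer); // later layers win
        }
        return merged;
    }

    public static void main(String[] args) {
        Map<String, Object> clientDefaults   = Map.of("linger.ms", 0, "acks", "all");
        Map<String, Object> streamsOverrides = Map.of("linger.ms", 100);
        Map<String, Object> userOverrides    = Map.of("linger.ms", 5);

        Map<String, Object> resolved =
            resolve(clientDefaults, streamsOverrides, userOverrides);
        System.out.println(resolved.get("linger.ms")); // 5
        System.out.println(resolved.get("acks"));      // all
    }
}
```

Constructing a real config object performs this layering plus type coercion, which is why reading values through it is safer than poking at the raw `Properties`/`Map` directly.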

On Fri, Nov 3, 2023 at 11:09 AM Ismael Juma  wrote:

> It seems wrong to require inheritance for this and we already have a public
> constructor. I would make both of them public.
>
> Ismael
>
> On Fri, Nov 3, 2023 at 10:47 AM Matthias J. Sax  wrote:
>
> > +1 (binding)
> >
> >
> > About "why not public" question:
> >
> > I think we need to distinguish between "end users" who create a producer
> > instance, and "external parties" that might implement their own
> > `Producer` (or wrap/extend `KafkaProducer`).
> >
> > In the end, I would not expect an "end user" to actually call `new
> > ProducerConfig` to begin with. If one creates a `KafkaProducer` they
> > pass the config via a `Map` or `Properties`, and the producer creates
> > `ProducerConfig` internally only. -- Thus, there is no need to make it
> > `public`. (To this end, I don't actually understand why there is public
> > `ProducerConfig` constructors to begin with -- sounds like a leaky
> > abstraction to me.)
> >
> > On the other hand, if a "third party" implements `Producer` interface to
> > ship their own producer implementation, they might want to create
> > `ProducerConfig` internally, so for them it's different, but still, they
> > don't need public access because they can extend `ProducerConfig`, too
> > for this case). -- To me, this falls into the category "simple thing
> > should be easy, and hard things should be possible).
> >
> >
> > -Matthias
> >
> >
> > On 11/3/23 6:06 AM, Ismael Juma wrote:
> > > Hi Sophie,
> > >
> > > I was trying to understand the goal of the change and it's not totally
> > > clear to me. If the goal is to allow third party applications to
> > customize
> > > the logging behavior, why is the method protected instead of public?
> > >
> > > Ismael
> > >
> > > On Thu, Nov 2, 2023 at 9:55 PM Sophie Blee-Goldman <
> > sop...@responsive.dev>
> > > wrote:
> > >
> > >> Hey all,
> > >>
> > >> This is a trivial one-liner change that it was determined should go
> > through
> > >> a KIP during the PR review process (see this thread
> > >> 
> for
> > >> context). Since the change itself was already reviewed and approved
> I'm
> > >> skipping the discussion thread and bringing it to a vote right away,
> > but of
> > >> course I'm open to feedback and can create a discussion thread if
> there
> > is
> > >> need for it.
> > >>
> > >> The change itself is simply adding the `protected` modifier to the
> > >> ProducerConfig constructor that allows for silencing the config
> logging.
> > >> This just brings the ProducerConfig in alignment with the other client
> > >> configs, all of which already had this constructor as protected.
> > >>
> > >> KIP:
> > >>
> > >>
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-998%3A+Give+ProducerConfig%28props%2C+doLog%29+constructor+protected+access
> > >> PR: https://github.com/apache/kafka/pull/14681
> > >>
> > >> Thanks!
> > >> Sophie
> > >>
> > >
> >
>


Re: [VOTE] KIP-998: Give ProducerConfig(props, doLog) constructor protected access

2023-11-03 Thread Chris Egerton
No objections, I'm +1 either way.

On Fri, Nov 3, 2023, 18:50 Sophie Blee-Goldman 
wrote:

> I am fine with making them public. Of course in that case we should also
> change the corresponding constructors in ConsumerConfig, AdminConfig, and
> StreamsConfig from protected to public as well, to be consistent. But
> Matthias seems to feel that these should never be instantiated by a user
> and that the correct course of action would be to move in the opposite
> direction.
>
> I don't personally feel strongly either way -- honestly I had thought it
> was an abuse of internal APIs to extend the other Config classes in order
> to access the protected constructor and disable logging. So I would be
> happy to officially pull it into the public API with all-public
> constructors, because I do feel it is valid/useful to be able to
> instantiate these objects. We do so in order to access config values in a
> way that accounts for any overrides on top of the default, for example when
> multiple overrides are in play (eg user overrides on top of framework
> overrides on top of Kafka Streams overrides on top of
> Producer/Consumer/Admin client defaults). Using them is also (slightly)
> more type-safe than going through a Properties or config Map<>
>
> Any objections to expanding the KIP to the ConsumerConfig, AdminConfig, and
> StreamsConfig constructors and making them public as well? From Matthias or
> otherwise?
>
> On Fri, Nov 3, 2023 at 11:09 AM Ismael Juma  wrote:
>
> > It seems wrong to require inheritance for this and we already have a
> public
> > constructor. I would make both of them public.
> >
> > Ismael
> >
> > On Fri, Nov 3, 2023 at 10:47 AM Matthias J. Sax 
> wrote:
> >
> > > +1 (binding)
> > >
> > >
> > > About "why not public" question:
> > >
> > > I think we need to distinguish between "end users" who create a
> producer
> > > instance, and "external parties" that might implement their own
> > > `Producer` (or wrap/extend `KafkaProducer`).
> > >
> > > In the end, I would not expect an "end user" to actually call `new
> > > ProducerConfig` to begin with. If one creates a `KafkaProducer` they
> > > pass the config via a `Map` or `Properties`, and the producer creates
> > > `ProducerConfig` internally only. -- Thus, there is no need to make it
> > > `public`. (To this end, I don't actually understand why there is public
> > > `ProducerConfig` constructors to begin with -- sounds like a leaky
> > > abstraction to me.)
> > >
> > > On the other hand, if a "third party" implements `Producer` interface
> to
> > > ship their own producer implementation, they might want to create
> > > `ProducerConfig` internally, so for them it's different, but still,
> they
> > > don't need public access because they can extend `ProducerConfig`, too
> > > for this case). -- To me, this falls into the category "simple thing
> > > should be easy, and hard things should be possible).
> > >
> > >
> > > -Matthias
> > >
> > >
> > > On 11/3/23 6:06 AM, Ismael Juma wrote:
> > > > Hi Sophie,
> > > >
> > > > I was trying to understand the goal of the change and it's not
> totally
> > > > clear to me. If the goal is to allow third party applications to
> > > customize
> > > > the logging behavior, why is the method protected instead of public?
> > > >
> > > > Ismael
> > > >
> > > > On Thu, Nov 2, 2023 at 9:55 PM Sophie Blee-Goldman <
> > > sop...@responsive.dev>
> > > > wrote:
> > > >
> > > >> Hey all,
> > > >>
> > > >> This is a trivial one-liner change that it was determined should go
> > > through
> > > >> a KIP during the PR review process (see this thread
> > > >> 
> > for
> > > >> context). Since the change itself was already reviewed and approved
> > I'm
> > > >> skipping the discussion thread and bringing it to a vote right away,
> > > but of
> > > >> course I'm open to feedback and can create a discussion thread if
> > there
> > > is
> > > >> need for it.
> > > >>
> > > >> The change itself is simply adding the `protected` modifier to the
> > > >> ProducerConfig constructor that allows for silencing the config
> > logging.
> > > >> This just brings the ProducerConfig in alignment with the other
> client
> > > >> configs, all of which already had this constructor as protected.
> > > >>
> > > >> KIP:
> > > >>
> > > >>
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-998%3A+Give+ProducerConfig%28props%2C+doLog%29+constructor+protected+access
> > > >> PR: https://github.com/apache/kafka/pull/14681
> > > >>
> > > >> Thanks!
> > > >> Sophie
> > > >>
> > > >
> > >
> >
>


Re: [VOTE] KIP-998: Give ProducerConfig(props, doLog) constructor protected access

2023-11-03 Thread Matthias J. Sax
Sophie reads my mind well, but I also won't object if the majority of
people think it's desirable to have it public (it does not really hurt
to have them public).


I just personally think we should optimize for "end users", and they
should not need it -- and thus, keeping the API surface area as small as
possible seems desirable (and we don't generate JavaDocs for protected
methods...). Maybe it's less of an issue for clients, but given my
experience with Kafka Streams and its large API, I prefer to guide users
by avoiding "leaky" abstractions.


-Matthias






Re: [VOTE] KIP-992 Proposal to introduce IQv2 Query Types: TimestampedKeyQuery and TimestampedRangeQuery

2023-11-03 Thread Matthias J. Sax

Thanks for the KIP.

+1 (binding)


-Matthias

On 11/3/23 6:08 AM, Lucas Brutschy wrote:

Hi Hanyu,

Thanks for the KIP!
+1 (binding)

Cheers
Lucas

On Thu, Nov 2, 2023 at 10:19 PM Hao Li  wrote:


Hi Hanyu,

Thanks for the KIP!
+1 (non-binding)

Hao

On Thu, Nov 2, 2023 at 1:29 PM Bill Bejeck  wrote:


Hi Hanyu,

Thanks for the KIP this LGTM.
+1 (binding)

Thanks,
Bill



On Wed, Nov 1, 2023 at 1:07 PM Hanyu (Peter) Zheng
 wrote:


Hello everyone,

I would like to start a vote for KIP-992: Proposal to introduce IQv2

Query

Types: TimestampedKeyQuery and TimestampedRangeQuery.

Sincerely,
Hanyu

On Wed, Nov 1, 2023 at 10:00 AM Hanyu (Peter) Zheng wrote:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-992%3A+Proposal+to+introduce+IQv2+Query+Types%3A+TimestampedKeyQuery+and+TimestampedRangeQuery


--
Hanyu (Peter) Zheng he/him/his
Software Engineer Intern


Re: [DISCUSS] KIP-969: Support range interactive queries for versioned state stores

2023-11-03 Thread Matthias J. Sax

Great discussion. Seems we are making good progress.

I see advantages and disadvantages in splitting out a "single-ts 
key-range" query type. I guess, the main question might be, what 
use-cases do we anticipate and how common we believe they are?


We should also take KIP-992 (adding `TimestampedRangeQuery`) into account.

(1) The most common use case seems to be a key-range over latest. For 
this one, we could use `TimestampedRangeQuery` -- it would return a 
`ValueAndTimestamp` instead of a `VersionedRecord`, but there won't 
be any information loss, because "toTimestamp" would be `null` anyway.



(2) The open question is, how common is a key-range query at a point in 
the past? For this case, using 
`MultiVersionedRangeQuery.withKeyRange().from(myTs).to(myTs)` seems 
actually not to be a bad UX, and also does not really need much 
explanation (compared to "latest", which requires passing in MAX_INT).



If we add a new query type, we avoid both issues (are they actually 
issues?) and add some nice syntactic sugar to the API. The question is 
whether it's worth the effort and the expanded API surface area.


To summarize:

Add new query type:


// queries latest; returns VersionedRecords
VersionedRangeQuery.withKeyRange(...);

VersionedRangeQuery.withKeyRange(...).asOf(ts);


vs

No new query type:


// queries latest; returns ValueAndTimestamps
TimestampedRangeQuery.withRange(...); 


MultiVersionedRangeQuery.withKeyRange(...).from(myTs).to(myTs)




I guess, bottom line, I would be ok with either one and I am actually 
not even sure which one I prefer personally. Just wanted to lay out the 
tradeoffs I see. Not sure if there are other considerations that would 
tip the scale in either direction?




-Matthias

On 11/3/23 3:43 AM, Bruno Cadonna wrote:

Hi Alieh,

I like the examples!


1.
Some terms like `asOf` in the descriptions still need to be replaced in 
the KIP.



2.
In your last e-mail you state:

"How can a user retrieve the latest value? We have the same issue with 
kip-968 as well."


Why do we have the same issue in KIP-968?
If users need to retrieve the latest value for a specific key, they 
should use KIP-960.



3.
Regarding querying the latest version (or an asOf version) of records in 
a given key range, that is exactly why I proposed to split the query 
class. One class would return the latest or the asOf version (i.e., a 
single version) of the records in a key range, and the other class would 
return all versions in a given time range (i.e., multiple versions) of 
the records in a given key range. Splitting into two classes avoids 
specifying a time range together with latest or asOf on a given key 
range.


Alternatively, you could keep one class and specify that the last call 
wins, as you specified for fromTime() and toTime(). For example, for


MultiVersionedRangeQuery.withLowerKeyBound(k1).fromTime(t1).toTime(t2).latest()

latest() wins. However, how would you interpret

MultiVersionedRangeQuery.withLowerKeyBound(k1).fromTime(t1).latest().toTime(t2)

Is it [t1, t2] or [-INF, t2]?
(I would say the latter, but somebody else would say differently)

The two-class solution seems cleaner to me since we do not need to 
consider those edge cases.

You could propose both classes in this KIP.
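To make the two-class split concrete, here is a minimal self-contained sketch. All class and method names are illustrative assumptions modeled on the discussion, not the actual KIP-968/969 API: the single-version class only offers latest/asOf, and the multi-version class only offers a time range, so the ambiguous combinations above cannot be expressed at all.

```java
import java.time.Instant;
import java.util.Optional;

// Hypothetical sketch of Bruno's proposal: one query type returns a
// single version per key in the range (latest by default, or asOf),
// the other returns all versions within a [fromTime, toTime] window.
public class QuerySplitSketch {

    // Single-version variant: no time range can be combined with
    // latest/asOf, because there simply are no range setters.
    public static final class VersionedRangeQuery {
        final Optional<Instant> asOf; // empty means "latest"

        private VersionedRangeQuery(Optional<Instant> asOf) {
            this.asOf = asOf;
        }

        public static VersionedRangeQuery withKeyRange() {
            return new VersionedRangeQuery(Optional.empty()); // latest by default
        }

        public VersionedRangeQuery asOf(Instant ts) {
            return new VersionedRangeQuery(Optional.of(ts));
        }

        public String describe() {
            return asOf.map(ts -> "single version as of " + ts)
                       .orElse("latest version");
        }
    }

    // Multi-version variant: no latest()/asOf() methods, only bounds.
    public static final class MultiVersionedRangeQuery {
        final Instant fromTime;
        final Instant toTime;

        private MultiVersionedRangeQuery(Instant fromTime, Instant toTime) {
            this.fromTime = fromTime;
            this.toTime = toTime;
        }

        public static MultiVersionedRangeQuery withKeyRange() {
            return new MultiVersionedRangeQuery(Instant.MIN, Instant.MAX);
        }

        public MultiVersionedRangeQuery fromTime(Instant ts) {
            return new MultiVersionedRangeQuery(ts, toTime);
        }

        public MultiVersionedRangeQuery toTime(Instant ts) {
            return new MultiVersionedRangeQuery(fromTime, ts);
        }

        public String describe() {
            return "all versions in [" + fromTime + ", " + toTime + "]";
        }
    }

    public static void main(String[] args) {
        System.out.println(VersionedRangeQuery.withKeyRange().describe());
        System.out.println(MultiVersionedRangeQuery.withKeyRange()
                .fromTime(Instant.parse("2023-01-17T10:00:00Z"))
                .toTime(Instant.parse("2023-01-30T10:00:00Z"))
                .describe());
    }
}
```

Because the temporal options live on different types, the compiler rules out mixes like `fromTime(t1).latest().toTime(t2)` entirely.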


4.
Why do we need orderByKey() and orderByTimestamp()?
Aren't withAscendingKeys(), withDescendingKeys(), 
withAscendingTimestamps(), and withDescendingTimestamps() sufficient?



5.
In example 2, why is

key,value: 2,20, timestamp: 2023-01-10T10:00:00.00Z, valid till: 
2023-01-25T10:00:00.00Z


not part of the result?
It is valid from 2023-01-10T10:00:00.00Z to 2023-01-25T10:00:00.00Z 
which overlaps with the time range [2023-01-17T10:00:00.00Z, 
2023-01-30T10:00:00.00Z] of the query.
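The overlap reasoning applied here can be sketched with standard interval-intersection logic (the timestamps are taken from the example above; `overlaps` is an illustrative helper, not KIP code):

```java
import java.time.Instant;

// A record version belongs in the result if its validity interval
// intersects the query's time range:
//   validFrom <= queryTo  &&  validTo >= queryFrom
public class OverlapCheck {

    static boolean overlaps(Instant validFrom, Instant validTo,
                            Instant queryFrom, Instant queryTo) {
        return !validFrom.isAfter(queryTo) && !validTo.isBefore(queryFrom);
    }

    public static void main(String[] args) {
        // Record from example 2: valid [2023-01-10, 2023-01-25)
        Instant validFrom = Instant.parse("2023-01-10T10:00:00.00Z");
        Instant validTo   = Instant.parse("2023-01-25T10:00:00.00Z");
        // Query time range: [2023-01-17, 2023-01-30]
        Instant queryFrom = Instant.parse("2023-01-17T10:00:00.00Z");
        Instant queryTo   = Instant.parse("2023-01-30T10:00:00.00Z");

        // The intervals intersect, so the record should be in the result.
        System.out.println(overlaps(validFrom, validTo, queryFrom, queryTo)); // prints "true"
    }
}
```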


(BTW, in the second example, you forgot to add "key" to the output.)


Best,
Bruno


On 10/25/23 4:01 PM, Alieh Saeedi wrote:

Thanks, Matthias and Bruno.
Here is a list of updates:

    1. I changed the variable and method names as I did for KIP-968, as
       follows:
       - "fromTimestamp" -> fromTime
       - "asOfTimestamp" -> toTime
       - "from(instant)" -> fromTime(instant)
       - "asOf(instant)" -> toTime(instant)
    2. As Bruno suggested for KIP-968, I added `orderByKey()`,
       `withAscendingKeys()`, and `withAscendingTimestamps()` methods
       for user readability.
    3. I updated the "Example" section as well.

Some points:

    1. Even though the KIP suggests adding the `get(K lowerKeyBound, K
       upperKeyBound, long fromTime, long toTime)` method to the
       interface, I added this method to the `RocksDBVersionedStore`
       class for now.
    2. Matthias, you mentioned a very important point. How can a user
       retrieve the latest value? We have the same issue with KIP-968
       as well. Asking a user to call `toTime(MAX)` violates the API
       design rules, as you mentioned. So I think we must have a
       `latest()` method for both KIP-968 and KIP-969. What do you
       think about that?
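One way an explicit `latest()` could behave is sketched below. This is purely illustrative (none of these names or semantics are final KIP API), assuming a mutable last-call-wins builder in which `latest()` replaces the time range and a later `fromTime()`/`toTime()` call switches back to range mode:

```java
import java.time.Instant;

// Hypothetical builder sketch: latest() is an explicit mode instead of
// forcing callers to pass a MAX sentinel to toTime(). Last call wins.
public class LatestSketch {
    private Instant fromTime = Instant.MIN;
    private Instant toTime = Instant.MAX;
    private boolean latestOnly = false;

    public LatestSketch fromTime(Instant ts) {
        this.fromTime = ts;
        this.latestOnly = false; // leave "latest" mode
        return this;
    }

    public LatestSketch toTime(Instant ts) {
        this.toTime = ts;
        this.latestOnly = false; // leave "latest" mode
        return this;
    }

    public LatestSketch latest() {
        this.latestOnly = true; // overrides any earlier time bounds
        return this;
    }

    public String describe() {
        return latestOnly
                ? "latest version only"
                : "versions in [" + fromTime + ", " + toTime + "]";
    }

    public static void main(String[] args) {
        // No MAX sentinel needed to ask for the latest version:
        System.out.println(new LatestSketch()
                .fromTime(Instant.parse("2023-01-01T00:00:00Z"))
                .latest()
                .describe()); // prints "latest version only"
    }
}
```

Under these (assumed) semantics, `fromTime(t1).latest().toTime(t2)` would mean [-INF, t2], matching one of the two readings Bruno raised; the two-class split avoids having to pick at all.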



Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #2357

2023-11-03 Thread Apache Jenkins Server
See