Re: [DISCUSS] KIP-1052: Enable warmup in producer performance test

2024-07-03 Thread Federico Valeri
Hi,

On Tue, Jul 2, 2024 at 10:38 PM Welch, Matt  wrote:
>
> Hi Federico,
>
> Thanks for your response.  I have a few questions.
>
> > You mean, no existing public interfaces right? Anyway, this content should 
> > be in the "Compatibility, Deprecation, and Migration Plan"
> section.
> Just a question here: Should the contents of the "Public Interfaces" section 
> after that first sentence be moved to "Compatibility, Deprecation, and 
> Migration Plan"? I guess I wasn't thinking of the tools as an interface.
>

No, I think it's fine, maybe we should just say "no existing argument
is affected" or something like that. According to the KIP main page,
command line tools and arguments are considered public interfaces.

> > I would also say that, if we specify warmup-records, we also want separate 
> > stats. With this change it would be really straightforward IMO, and we 
> > wouldn't need the additional separated-warmup option.
> > Wdyt?
> I no longer think we really need a separated-warmup option and I would prefer 
> to leave the separated-warmup option out for simplicity. My preference is to 
> have as much data as available presented, so the user has maximum information 
> to make decisions about warmups and performance. That would imply having 
> *three* printed summary lines for "whole test", "warmup only", and "steady 
> state only".  Each of these would have a slightly different message like 
> "records sent", "warmup records sent", and "steady state records sent" to 
> enable the user to differentiate between them. I haven't modified the KIP to 
> reflect this yet because there seems to be some motivation for having this as 
> an option. What is the preference of the community here? Would having all 
> three summary lines printed at end of test be confusing, informative, or 
> other?
>

I like the idea of three printed summaries. There is no ambiguity with that.
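To make the three-summary idea concrete, here is a rough Python sketch (not the actual ProducerPerformance code; the function names, latency values, and message wording are all illustrative) that splits per-record latencies at the warmup boundary and emits one differently-worded line per slice:

```python
# Rough sketch of the proposed three-summary output. Everything here
# (function names, message wording) is illustrative, not the actual
# ProducerPerformance code.

def summarize(label, latencies):
    n = len(latencies)
    avg = sum(latencies) / n if n else 0.0
    return f"{n} {label}, avg latency {avg:.2f} ms"

def report(latencies, warmup_records):
    # One line for the whole run, one for warmup, one for steady state,
    # each worded differently so users can tell them apart.
    return [
        summarize("records sent", latencies),
        summarize("warmup records sent", latencies[:warmup_records]),
        summarize("steady state records sent", latencies[warmup_records:]),
    ]

for line in report([50, 40, 30, 10, 10, 10], warmup_records=3):
    print(line)
```

Since each slice is just a view over the same record stream, printing all three lines costs nothing extra beyond keeping the warmup boundary index.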

> > Have you considered having a sort of autopilot for computing the warmup 
> > size based on the tool's output information (window's p99)?
> > Once p99 is stable enough, the tool could start the steady-state phase 
> > printing out the computed warmup size. In case we decide this is tricky or 
> > undesired behavior, we can list the autopilot mode in the rejected 
> > alternatives, along with motivations.
> I like the idea of a producer autopilot, but it's sufficiently complex that 
> it needs its own KIP. I've added a description of the autopilot feature to 
> the Rejected Alternatives.
>

Fine. It's good to record them, as people often find inspiration for
improvements or new features simply by reading others' KIPs.
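For the record, the rejected autopilot idea could be sketched roughly like this (illustrative Python only; the window size, stability rule, and tolerance are invented knobs, not behavior of the actual tool): compute p99 per window and treat everything before the p99 stabilizes as warmup.

```python
# Sketch of the rejected "autopilot" idea: watch the per-window p99
# latency and treat everything before it stabilizes as warmup. The
# window size, stability rule, and tolerance are invented knobs for
# illustration; this is not behavior of the actual perf tool.

def p99(window):
    s = sorted(window)
    return s[min(len(s) - 1, int(0.99 * len(s)))]

def warmup_size(latencies, window=100, stable_windows=3, tolerance=0.05):
    """Records to treat as warmup, or None if p99 never stabilized."""
    p99s = []
    for start in range(0, len(latencies) - window + 1, window):
        p99s.append(p99(latencies[start:start + window]))
        if len(p99s) >= stable_windows:
            recent = p99s[-stable_windows:]
            lo, hi = min(recent), max(recent)
            if lo > 0 and (hi - lo) / lo <= tolerance:
                # Stable: everything before the stable run is warmup.
                return (len(p99s) - stable_windows) * window
    return None
```

With a run that settles after, say, 200 records, the tool could report the computed warmup size instead of requiring the user to guess a `--warmup-records` value.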

> > Just a nit, but I think we miss the --payload-file option in all snippets.
> Nit or not, I really appreciate the thorough review!  I've updated the 
> command lines to contain the payload-file option.
>
> Thanks,
> Matt
>
> -Original Message-
> From: Federico Valeri 
> Sent: Monday, July 1, 2024 1:04 AM
> To: dev@kafka.apache.org
> Subject: Re: [DISCUSS] KIP-1052: Enable warmup in producer performance test
>
> Hi Matt, thanks for the updates. Snippets are really useful.
>
> > No public interfaces are affected.
>
> You mean, no existing public interfaces right? Anyway, this content should be 
> in the "Compatibility, Deprecation, and Migration Plan"
> section.
>
> > The first option, --warmup-records, will be added to the producer 
> > performance test to request that the initial records sent in a test be 
> > gathered into a separate Stats object from the steady-state records to 
> > follow.
>
> I would also say that, if we specify warmup-records, we also want separate 
> stats. With this change it would be really straightforward IMO, and we 
> wouldn't need the additional separated-warmup option.
> Wdyt?
>
> > Although the producer performance test output should provide
> > sufficient information to set a warmup
>
> Have you considered having a sort of autopilot for computing the warmup size 
> based on the tool's output information (window's p99)?
> Once p99 is stable enough, the tool could start the steady-state phase 
> printing out the computed warmup size. In case we decide this is tricky or 
> undesired behavior, we can list the autopilot mode in the rejected 
> alternatives, along with motivations.
>
> > bin/kafka-producer-perf-test.sh --num-records 100 --throughput
> > 5
>
> Just a nit, but I think we miss the --payload-file option in all snippets.
>
> On Sat, Jun 29, 2024 at 1:19 AM Welch, Matt  wrote:
> >
> > Hi Luke and Federico,
> >
> > Thank you for your responses.  Your questions seemed to be along similar 
> > lines so I've combined the responses.
> > Please let me know if you need more clarification.
> >
> > 1. I've updated the KIP to describe both new command line options, now 
> > '--warmup-records' and '--separated-warmup'.  After reading Federico's 
> > email, I realized the parameter '--combined-summary' didn't make sense in 
> > its intended use and the revised parameter name 'separate

[jira] [Resolved] (KAFKA-17047) Refactor Consumer group and shared classes with Share to modern package

2024-07-03 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-17047.
-
Fix Version/s: 3.9.0
   Resolution: Fixed

> Refactor Consumer group and shared classes with Share to modern package
> ---
>
> Key: KAFKA-17047
> URL: https://issues.apache.org/jira/browse/KAFKA-17047
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Apoorv Mittal
>Assignee: Apoorv Mittal
>Priority: Major
> Fix For: 3.9.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17069) Remote copy throttle metrics

2024-07-03 Thread Abhijeet Kumar (Jira)
Abhijeet Kumar created KAFKA-17069:
--

 Summary: Remote copy throttle metrics 
 Key: KAFKA-17069
 URL: https://issues.apache.org/jira/browse/KAFKA-17069
 Project: Kafka
  Issue Type: Sub-task
Reporter: Abhijeet Kumar






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17070) perf: consider to use ByteBufferOutputstream to append records

2024-07-03 Thread Luke Chen (Jira)
Luke Chen created KAFKA-17070:
-

 Summary: perf: consider to use ByteBufferOutputstream to append 
records
 Key: KAFKA-17070
 URL: https://issues.apache.org/jira/browse/KAFKA-17070
 Project: Kafka
  Issue Type: Improvement
Reporter: Luke Chen


Consider using ByteBufferOutputStream to append records, instead of a 
DataOutputStream. We should add a JMH test to confirm this indeed improves 
performance before merging it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-17026) Implement updateCacheAndOffsets functionality on LSO movement

2024-07-03 Thread Abhinav Dixit (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinav Dixit resolved KAFKA-17026.
---
Fix Version/s: 4.0.0
   3.9.0
   Resolution: Fixed

> Implement updateCacheAndOffsets functionality on LSO movement
> -
>
> Key: KAFKA-17026
> URL: https://issues.apache.org/jira/browse/KAFKA-17026
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Abhinav Dixit
>Assignee: Abhinav Dixit
>Priority: Major
> Fix For: 4.0.0, 3.9.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17071) SharePartition - Add more unit tests

2024-07-03 Thread Abhinav Dixit (Jira)
Abhinav Dixit created KAFKA-17071:
-

 Summary: SharePartition - Add more unit tests
 Key: KAFKA-17071
 URL: https://issues.apache.org/jira/browse/KAFKA-17071
 Project: Kafka
  Issue Type: Sub-task
Reporter: Abhinav Dixit






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.8 #61

2024-07-03 Thread Apache Jenkins Server
See 




Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #3071

2024-07-03 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-1042: Support for wildcard when creating new acls

2024-07-03 Thread Claude Warren, Jr
I think that if we put in a trie-based system we should be able to halve
the normal search times and still be able to locate wildcard matches very
quickly.  Users should be warned that "head wildcard" matches are slow and
to use them sparingly.  I am going to see if I can work out how to do
wildcard matches within the trie.

But in all cases I can show that the trie is faster than the current
implementation.
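As an illustration of why a trie helps here, a toy sketch (Python, purely illustrative; not the AclAuthorizer code, and the LITERAL/PREFIXED naming is borrowed from the ACL pattern types) that resolves both literal and prefix patterns in a single walk over the resource name:

```python
# Toy sketch of trie-based ACL lookup, illustrative only. LITERAL
# entries must match the whole resource name; PREFIXED entries match
# any name they are a prefix of. One walk collects all matches.

class AclTrie:
    def __init__(self):
        self.children = {}
        self.literal = None    # ACL whose LITERAL pattern ends exactly here
        self.prefixed = None   # ACL whose PREFIXED pattern ends here

    def add(self, name, acl, prefixed=False):
        node = self
        for ch in name:
            node = node.children.setdefault(ch, AclTrie())
        if prefixed:
            node.prefixed = acl
        else:
            node.literal = acl

    def match(self, name):
        # Single descent: collect every PREFIXED ACL passed on the way,
        # plus a LITERAL ACL only if the full name is consumed.
        acls, node = [], self
        for ch in name:
            if node.prefixed is not None:
                acls.append(node.prefixed)
            node = node.children.get(ch)
            if node is None:
                return acls
        if node.prefixed is not None:
            acls.append(node.prefixed)
        if node.literal is not None:
            acls.append(node.literal)
        return acls
```

The lookup cost is bounded by the resource-name length rather than the number of ACL bindings, which is where the speedup over a linear scan would come from; head wildcards would still force a broader traversal.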

Claude



On Wed, Jun 19, 2024 at 7:53 PM Muralidhar Basani
 wrote:

> There are some test results mentioned in the Test Plan section of the Kip,
> but we need to do more testing with various patterns and permission types.
> As mentioned in the discuss thread, the trie implementation could
> potentially surpass the current speed of ACL match.
>
> However, we can only accurately assess the results after updating the
> actual classes and analysing them with AuthorizerBenchmark.
>
> Thanks,
>
> Murali
>
> On Mon, 17 Jun 2024 at 20:39, Colin McCabe  wrote:
>
> > My concern is that the extra complexity may actually slow us down. In
> > general people already complain about the speed of ACL matches, and
> adding
> > another "degree of freedom" seems likely to make things worse.
> >
> > It would be useful to understand how much faster or slower the code is
> > with the proposed changes, versus without them.
> >
> > best,
> > Colin
> >
> >
> > On Mon, Jun 17, 2024, at 01:26, Muralidhar Basani wrote:
> > > Hi all,
> > >
> > > I would like to call a vote on KIP-1042 which extends creation of acls
> > with
> > > MATCH pattern type.
> > >
> > > KIP -
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1042%3A+Support+for+wildcard+when+creating+new+acls
> > >
> > > Discussion thread -
> > > https://lists.apache.org/thread/xx3lcg60kp4v34x0j9p6xobby8l4cfq2
> > >
> > > Thanks,
> > > Murali
> >
>


[jira] [Created] (KAFKA-17072) Document broker decommissioning process with KRaft

2024-07-03 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-17072:
--

 Summary: Document broker decommissioning process with KRaft
 Key: KAFKA-17072
 URL: https://issues.apache.org/jira/browse/KAFKA-17072
 Project: Kafka
  Issue Type: Improvement
  Components: docs
Reporter: Mickael Maison


When decommissioning a broker in KRaft mode, the broker also has to be 
explicitly unregistered. This is not mentioned anywhere in the documentation.

A broker that is not unregistered stays eligible for new partition assignments 
and will prevent bumping the metadata version if the remaining brokers are 
upgraded.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] Adding TEST file [kafka-merge-queue-sandbox]

2024-07-03 Thread via GitHub


mumrah opened a new pull request, #2:
URL: https://github.com/apache/kafka-merge-queue-sandbox/pull/2

   (no comment)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] Adding TEST file [kafka-merge-queue-sandbox]

2024-07-03 Thread via GitHub


mumrah closed pull request #2: Adding TEST file
URL: https://github.com/apache/kafka-merge-queue-sandbox/pull/2


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] Adding test file [kafka-merge-queue-sandbox]

2024-07-03 Thread via GitHub


mumrah opened a new pull request, #3:
URL: https://github.com/apache/kafka-merge-queue-sandbox/pull/3

   (no comment)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] Adding feature A [kafka-merge-queue-sandbox]

2024-07-03 Thread via GitHub


mumrah closed pull request #1: Adding feature A
URL: https://github.com/apache/kafka-merge-queue-sandbox/pull/1


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] Adding test file [kafka-merge-queue-sandbox]

2024-07-03 Thread via GitHub


mumrah closed pull request #3: Adding test file
URL: https://github.com/apache/kafka-merge-queue-sandbox/pull/3


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] Adding test file [kafka-merge-queue-sandbox]

2024-07-03 Thread via GitHub


mumrah opened a new pull request, #4:
URL: https://github.com/apache/kafka-merge-queue-sandbox/pull/4

   (no comment)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [VOTE] KIP-752: Support --bootstrap-server in ReplicaVerificationTool

2024-07-03 Thread Dongjin Lee
Hi Tsai,

Sorry for being late. How about this way?

1. Add a mention of the deprecation plan to the original KIP.
2. You cast +1 to this voting thread.
3. Add a new KIP to remove this tool with the 4.0 release.

Since this KIP already has two binding +1s and a PR, this way would be slightly
swifter. What do you think?

Thanks,
Dongjin

On Mon, Jun 3, 2024 at 4:15 AM Chia-Ping Tsai  wrote:

> `replica_verification_test.py` is unstable in my Jenkins, and then I
> noticed this thread.
>
> Maybe Kafka 4 is a good time to remove this tool, but does it need a
> KIP? If so, I'd like to file a KIP for it.
>
> Best,
> Chia-Ping
>
> On 2021/06/10 05:01:43 Ismael Juma wrote:
> > KAFKA-12600 was a general change, not related to this tool specifically.
> I
> > am not convinced this tool is actually useful, I haven't seen anyone
> using
> > it in years.
> >
> > Ismael
> >
> > On Wed, Jun 9, 2021 at 9:51 PM Dongjin Lee  wrote:
> >
> > > Hi Ismael,
> > >
> > > Before I submit this KIP, I reviewed some history. When KIP-499
> > > <
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-499+-+Unify+connection+name+flag+for+command+line+tool
> > > >
> > > tried to resolve the inconsistencies between the command line tools,
> two
> > > tools were omitted, probably by mistake.
> > >
> > > - KAFKA-12878: Support --bootstrap-server
> kafka-streams-application-reset
> > > 
> > > - KAFKA-12899: Support --bootstrap-server in ReplicaVerificationTool
> > >  (this one)
> > >
> > > And it seems like this tool is still working. The last update was
> > > KAFKA-12600  by
> you,
> > > which will also be included in this 3.0.0 release. It is why I
> determined
> > > that this tool is worth updating.
> > >
> > > Thanks,
> > > Dongjin
> > >
> > > On Thu, Jun 10, 2021 at 1:26 PM Ismael Juma  wrote:
> > >
> > > > Hi Dongjin,
> > > >
> > > > Does this tool still work? I recall that there were some doubts
> about it
> > > > and that's why it wasn't updated previously.
> > > >
> > > > Ismael
> > > >
> > > > On Sat, Jun 5, 2021 at 2:38 PM Dongjin Lee 
> wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > I'd like to call for a vote on KIP-752: Support --bootstrap-server
> in
> > > > > ReplicaVerificationTool:
> > > > >
> > > > >
> > > > >
> > > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-752%3A+Support+--bootstrap-server+in+ReplicaVerificationTool
> > > > >
> > > > > Best,
> > > > > Dongjin
> > > > >
> > > > > --
> > > > > *Dongjin Lee*
> > > > >
> > > > > *A hitchhiker in the mathematical world.*
> > > > >
> > > > >
> > > > >
> > > > > *github:  github.com/dongjinleekr
> > > > > keybase:
> > > > https://keybase.io/dongjinleekr
> > > > > linkedin:
> > > > kr.linkedin.com/in/dongjinleekr
> > > > > speakerdeck:
> > > > > speakerdeck.com/dongjin
> > > > > *
> > > > >
> > > >
> > >
> > >
> > > --
> > > *Dongjin Lee*
> > >
> > > *A hitchhiker in the mathematical world.*
> > >
> > >
> > >
> > > *github:  github.com/dongjinleekr
> > > keybase:
> https://keybase.io/dongjinleekr
> > > linkedin:
> kr.linkedin.com/in/dongjinleekr
> > > speakerdeck:
> > > speakerdeck.com/dongjin
> > > *
> > >
> >
>


-- 
*Dongjin Lee*

*A hitchhiker in the mathematical world.*



*github:  github.com/dongjinleekr
keybase: https://keybase.io/dongjinleekr
linkedin: kr.linkedin.com/in/dongjinleekr
speakerdeck: speakerdeck.com/dongjin
*


Re: [PR] Adding test file [kafka-merge-queue-sandbox]

2024-07-03 Thread via GitHub


mumrah closed pull request #4: Adding test file
URL: https://github.com/apache/kafka-merge-queue-sandbox/pull/4


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] test 2: trying again [kafka-merge-queue-sandbox]

2024-07-03 Thread via GitHub


mumrah opened a new pull request, #5:
URL: https://github.com/apache/kafka-merge-queue-sandbox/pull/5

   (no comment)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [VOTE] KIP-752: Support --bootstrap-server in ReplicaVerificationTool

2024-07-03 Thread Chia-Ping Tsai
Hi Dongjin,

It will be removed in 4.0 if we are able to deprecate it in 3.9. Hence, it 
seems to me that enhancing it is a bit odd, since the feature would be active 
for only one release …

> 
> Dongjin Lee  於 2024年7月3日 晚上10:04 寫道:
> 
> Hi Tsai,
> 
> Sorry for being late. How about this way?
> 
> 1. Add a mention of the deprecation plan to the original KIP.
> 2. You cast +1 to this voting thread.
> 3. Add a new KIP to remove this tool with the 4.0 release.
> 
> Since this KIP already has two binding +1s and a PR, this way would be
> slightly swifter. What do you think?
> 
> Thanks,
> Dongjin
> 


[jira] [Resolved] (KAFKA-16991) Flaky Test org.apache.kafka.streams.integration.PurgeRepartitionTopicIntegrationTest.shouldRestoreState

2024-07-03 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-16991.
-
Resolution: Fixed

> Flaky Test 
> org.apache.kafka.streams.integration.PurgeRepartitionTopicIntegrationTest.shouldRestoreState
> ---
>
> Key: KAFKA-16991
> URL: https://issues.apache.org/jira/browse/KAFKA-16991
> Project: Kafka
>  Issue Type: Test
>  Components: streams, unit tests
>Reporter: Matthias J. Sax
>Assignee: Bill Bejeck
>Priority: Major
> Fix For: 3.9.0
>
> Attachments: 
> 5owo5xbyzjnao-org.apache.kafka.streams.integration.PurgeRepartitionTopicIntegrationTest-shouldRestoreState()-1-output.txt
>
>
> We see this test running into timeouts more frequently recently.
> {code:java}
> org.opentest4j.AssertionFailedError: Condition not met within timeout 6. 
> Repartition topic 
> restore-test-KSTREAM-AGGREGATE-STATE-STORE-02-repartition not purged 
> data after 6 ms. ==> expected:  but was: 
>   at org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
>   ...
>   at org.apache.kafka.test.TestUtils.lambda$waitForCondition$3(TestUtils.java:396)
>   at org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:444)
>   at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:393)
>   at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:377)
>   at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:367)
>   at org.apache.kafka.streams.integration.PurgeRepartitionTopicIntegrationTest.shouldRestoreState(PurgeRepartitionTopicIntegrationTest.java:220)
> {code}
> There was no ERROR or WARN log...



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] KIP-752: Support --bootstrap-server in ReplicaVerificationTool

2024-07-03 Thread Ismael Juma
I think we should just do a KIP to remove it in 4.0 with deprecation in 3.9.

Ismael

On Wed, Jul 3, 2024 at 7:38 AM Chia-Ping Tsai  wrote:

> hi Dongjin
>
> It will be removed in 4.0 if we are able to deprecate it in 3.9. Hence, it
> seems to me enhancing it is a bit weird since the feature is active only
> for one release …
>
> >
> > Dongjin Lee  於 2024年7月3日 晚上10:04 寫道:
> >
> > Hi Tsai,
> >
> > Sorry for being late. How about this way?
> >
> > 1. Amend mention on the deprecation plan to the original KIP.
> > 2. You cast +1 to this voting thread.
> > 3. Add a new KIP to remove this tool with the 4.0 release.
> >
> > Since this KIP already has +2 bindings with PR, this way would be
> slightly
> > more swift. How do you think?
> >
> > Thanks,
> > Dongjin
> >
> >> On Mon, Jun 3, 2024 at 4:15 AM Chia-Ping Tsai 
> wrote:
> >>
> >> `replica_verification_test.py` is unstable in my jenkins, and then I
> >> notice this thread.
> >>
> >> Maybe kafka 4 is a good timing to remove this tool, but does it need a
> >> KIP? If so, I'd like to file a KIP for it.
> >>
> >> Best,
> >> Chia-Ping
> >>
> >>> On 2021/06/10 05:01:43 Ismael Juma wrote:
> >>> KAFKA-12600 was a general change, not related to this tool
> specifically.
> >> I
> >>> am not convinced this tool is actually useful, I haven't seen anyone
> >> using
> >>> it in years.
> >>>
> >>> Ismael
> >>>
>  On Wed, Jun 9, 2021 at 9:51 PM Dongjin Lee 
> wrote:
> >>>
>  Hi Ismael,
> 
>  Before I submit this KIP, I reviewed some history. When KIP-499
>  <
> 
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-499+-+Unify+connection+name+flag+for+command+line+tool
> >
>  tried to resolve the inconsistencies between the command line tools,
> >> two
>  tools were omitted, probably by mistake.
> 
>  - KAFKA-12878: Support --bootstrap-server
> >> kafka-streams-application-reset
>  
>  - KAFKA-12899: Support --bootstrap-server in ReplicaVerificationTool
>   (this one)
> 
>  And it seems like this tool is still working. The last update was
>  KAFKA-12600  by
> >> you,
>  which will also be included in this 3.0.0 release. It is why I
> >> determined
>  that this tool is worth updating.
> 
>  Thanks,
>  Dongjin
> 
>  On Thu, Jun 10, 2021 at 1:26 PM Ismael Juma 
> wrote:
> 
> > Hi Dongjin,
> >
> > Does this tool still work? I recall that there were some doubts
> >> about it
> > and that's why it wasn't updated previously.
> >
> > Ismael
> >
> > On Sat, Jun 5, 2021 at 2:38 PM Dongjin Lee 
> >> wrote:
> >
> >> Hi all,
> >>
> >> I'd like to call for a vote on KIP-752: Support --bootstrap-server
> >> in
> >> ReplicaVerificationTool:
> >>
> >>
> >>
> >
> 
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-752%3A+Support+--bootstrap-server+in+ReplicaVerificationTool
> >>
> >> Best,
> >> Dongjin
> >>
> >> --
> >> *Dongjin Lee*
> >>
> >> *A hitchhiker in the mathematical world.*
> >>
> >>
> >>
> >> *github:  github.com/dongjinleekr
> >> keybase:
> > https://keybase.io/dongjinleekr
> >> linkedin:
> > kr.linkedin.com/in/dongjinleekr
> >> speakerdeck:
> >> speakerdeck.com/dongjin
> >> *
> >>
> >
> 
> 
>  --
>  *Dongjin Lee*
> 
>  *A hitchhiker in the mathematical world.*
> 
> 
> 
>  *github:  github.com/dongjinleekr
>  keybase:
> >> https://keybase.io/dongjinleekr
>  linkedin:
> >> kr.linkedin.com/in/dongjinleekr
>  speakerdeck:
>  speakerdeck.com/dongjin
>  *
> 
> >>>
> >>
> >
> >
> > --
> > *Dongjin Lee*
> >
> > *A hitchhiker in the mathematical world.*
> >
> >
> >
> > *github:  github.com/dongjinleekr
> > keybase:
> https://keybase.io/dongjinleekr
> > linkedin:
> kr.linkedin.com/in/dongjinleekr
> > speakerdeck:
> speakerdeck.com/dongjin
> > *
>


Re: [VOTE] KIP-752: Support --bootstrap-server in ReplicaVerificationTool

2024-07-03 Thread Chia-Ping Tsai
Agree with Juma.

> Ismael Juma  於 2024年7月3日 晚上10:41 寫道:
> 
> I think we should just do a KIP to remove it in 4.0 with deprecation in 3.9.
> 
> Ismael
> 


Re: [DISCUSS] KIP-1059: Enable the Producer flush() method to clear the latest send() error

2024-07-03 Thread Chris Egerton
Hi Alieh,

I don't love defining the changes for this KIP in terms of a catch clause
in the KafkaProducer class, for two reasons. First, the set of errors that
are handled by that clause may shift over time as the code base is
modified, and second, it would be fairly opaque to users who want to
understand whether an error would be affected by using this API or not.

It also seems strange that we'd handle some types of
RecordTooLargeException (i.e., ones reported client-side) with this API,
but not others (i.e., ones reported by a broker).

I think this kind of API would be most powerful, most intuitive to users,
and easiest to document if we expanded the scope to all record-send-related
errors, except anything indicating issues with exactly-once semantics. That
would include records that are too large (when caught both client- and
server-side), records that can't be sent due to authorization failures,
records sent to nonexistent topics/topic partitions, and keyless records
sent to compacted topics. It would not include
ProducerFencedException, InvalidProducerEpochException,
UnsupportedVersionException,
and possibly others.

@Justine -- do you think it would be possible to develop either a better
definition for the kinds of "excluded" errors that should not be covered by
this API, or, barring that, a comprehensive list of exact error types? And
do you think this would be acceptable in terms of risk and complexity?

Cheers,

Chris
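A rough sketch of the scoping heuristic Chris describes: record-level errors become ignorable, while exactly-once/transactional errors stay fatal. This is purely illustrative (the exception names mirror `org.apache.kafka.common.errors`, but the Set-based lookup is an assumption, not the proposed implementation):

```java
import java.util.Set;

public class SendErrorClassifier {
    // Exactly-once / transactional-state errors stay fatal under the
    // proposed scope; these names mirror org.apache.kafka.common.errors.
    private static final Set<String> ALWAYS_FATAL = Set.of(
            "ProducerFencedException",
            "InvalidProducerEpochException",
            "UnsupportedVersionException");

    // Record-level failures (record too large, authorization failure,
    // unknown topic/partition, null key on a compacted topic) would be
    // ignorable under the expanded scope discussed in this thread.
    static boolean isIgnorable(String exceptionClassName) {
        return !ALWAYS_FATAL.contains(exceptionClassName);
    }

    public static void main(String[] args) {
        System.out.println(isIgnorable("RecordTooLargeException")); // true
        System.out.println(isIgnorable("ProducerFencedException")); // false
    }
}
```

The point of a deny-list like this is that new record-level errors added later are covered by default, which matches the "define what is excluded" framing above.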

On Tue, Jul 2, 2024 at 5:05 PM Alieh Saeedi 
wrote:

> Hey Justine,
>
> About the consequences: the consequences will be like when we did not have
> the fix made in `KAFKA-9279`: silent loss of data! Obviously, when the user
> intentionally chooses to ignore errors, it would not be silent any more.
> Right?
> Of course, considering all types of `ApiException`s would be too broad. But
> are the exceptions caught in `catch(ApiException e)` of the `doSend()`
> method also too broad?
>
> -Alieh
>
> On Tue, Jul 2, 2024 at 9:45 PM Justine Olshan  >
> wrote:
>
> > Hey Alieh,
> >
> > If we want to allow any error to be ignored we should probably run
> through
> > all the errors to make sure they make sense.
> > I just want to feel confident that we aren't just making a decision
> without
> > considering the consequences carefully.
> >
> > Justine
> >
> > On Tue, Jul 2, 2024 at 12:30 PM Alieh Saeedi
>  > >
> > wrote:
> >
> > > Hey Justine,
> > >
> > > yes we talked about `RecordTooLargeException` as an example, but did we
> > > ever limit ourselves to only this specific exception? I think neither
> in
> > > the KIP nor in the PR.  As Chris mentioned, this KIP is going to undo
> > what
> > > we have done in `KAFKA-9279` in case 1) the user is in a transaction
> and
> > 2)
> > > he decides to ignore the errors in which the record was not even added
> to
> > > the batch. Yes, and we suggested some methods for undoing or, in fact,
> > > moving back the transaction from the error state in `flush` or in
> > > `commitTnx` and we finally came to the idea of not even doing the
> changes
> > > (better than undoing) in `send`.
> > >
> > > Bests,
> > > Alieh
> > >
> > > On Tue, Jul 2, 2024 at 8:03 PM Justine Olshan
> >  > > >
> > > wrote:
> > >
> > > > Hey folks,
> > > >
> > > > I understand where you are coming from by asking for specific use
> > cases.
> > > My
> > > > understanding based on previous conversations was that there were a
> few
> > > > different errors that have been seen.
> > > > One example I heard some information about was when the record was
> too
> > > > large and it fails the batch. Besides that, I'm not really sure if
> > there
> > > > are cases in mind, though it is fair to ask on those and bring them
> up.
> > > >
> > > > > Does a record qualify as a poison pill if it targets a topic that
> > > > doesn't exist? Or if it targets a topic that the producer principal
> > lacks
> > > > ACLs for? What if it fails broker-side validation (e.g., has a null
> key
> > > for
> > > > a compacted topic)?
> > > >
> > > > I think there was some parallel work with addressing the
> > > > UnknownTopicOrPartitionError in another way. As for the other checks,
> > > acls,
> > > > validation etc. I am not aware of that being in Alieh's scope, but we
> > > > should be clear about exactly what we are doing.
> > > >
> > > > All errors that fall into ApiException seems too broad to me.
> > > >
> > > > Justine
> > > >
> > > > On Tue, Jul 2, 2024 at 10:51 AM Alieh Saeedi
> > >  > > > >
> > > > wrote:
> > > >
> > > > > Hey Chris,
> > > > > thanks for sharing your concerns.
> > > > >
> > > > > 1) About the language of KIP (or maybe later in Javadocs): Is that
> > > > alright
> > > > > if I write all errors that fall into the `ApiException` category
> > thrown
> > > > > (actually returned) by Producer?
> > > > > 2) About future expansion: do you have any better suggestions for
> > enum
> > > > > names? Do you think `IGNORE_API_EXCEPTIONS` or something like that
> is
> > a
> > > > > "better/more accurate" one

Re: [DISCUSS] KIP-1059: Enable the Producer flush() method to clear the latest send() error

2024-07-03 Thread Justine Olshan
Hey Chris,

I think what you say makes sense. I agree that defining the behavior based
on code that can possibly change is not a good idea, and I was trying to
get a clearer definition from the KIP's author :)

I think it can always be hard to ensure that only specific errors are
handled unless they are explicitly enumerated in code, since the code can
change and can be modified by folks who are not aware of this KIP or this
conversation.
I personally don't have the bandwidth to do this definition/enumeration of
errors, so hopefully Alieh can expand upon this.

Justine


Re: [DISCUSS] KIP-1059: Enable the Producer flush() method to clear the latest send() error

2024-07-03 Thread Chris Egerton
Hi Justine,

I agree that enumerating a list of errors that should be covered by the KIP
is difficult; I was thinking it might be easier if we list the errors that
should _not_ be covered by the KIP, and only if we can't define a
reasonable heuristic that would cover them without having to explicitly
list them. Could it be enough to say "all irrecoverable transactional
errors will still be fatal", or even just "all transactional errors (as
opposed to errors related to this specific record) will still be fatal"?

Cheers,

Chris
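As a toy model of the semantics under discussion (all names and behaviors here are illustrative assumptions, not the KIP's final API): send() records the error instead of poisoning the transaction, flush() clears it when the user has opted in, and commit succeeds for the records that were actually accepted:

```java
import java.util.ArrayList;
import java.util.List;

public class FlushClearsErrorSketch {
    static class ToyProducer {
        final List<String> batch = new ArrayList<>();
        RuntimeException lastSendError;

        void send(String record) {
            if (record.length() > 10) {  // stand-in for RecordTooLargeException
                lastSendError = new RuntimeException("record too large: " + record);
                return;                  // record is never added to the batch
            }
            batch.add(record);
        }

        // The behavior this KIP discusses: flush() clears the send() error.
        void flush() { lastSendError = null; }

        List<String> commitTransaction() {
            if (lastSendError != null) throw lastSendError;
            return batch;
        }
    }

    public static void main(String[] args) {
        ToyProducer p = new ToyProducer();
        p.send("ok");
        p.send("way-too-large-record");  // fails, but does not poison the batch
        p.flush();                       // user explicitly discards the error
        System.out.println(p.commitTransaction()); // [ok]
    }
}
```

Without the flush() call, commitTransaction() here rethrows the recorded error, which mirrors the post-KAFKA-9279 behavior the thread refers to.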


Re: [DISCUSS] KIP-1006: Remove SecurityManager Support

2024-07-03 Thread Frédérik Rouleau
Hi all,

When is this KIP intended to be implemented? As KIP-1013 is deprecating
Java 11 in AK 3.7 and removes its support in AK 4.0, maybe the KIP needs an
update.

Regards,

On Mon, Jul 1, 2024 at 10:39 PM Greg Harris 
wrote:

> Hi Mickael,
>
> Thanks for the pointer to that JDK ticket, I did not realize that the
> legacy APIs were going to be degraded instead of removed.
>
> I have updated the KIP to accommodate for this change in the JDK
> implementation. In addition to detecting the removal of the method/classes,
> it will also fall back to the new implementations when encountering an
> UnsupportedOperationException.
> Since this will be a blocker for supporting JDK 23, I'll open a vote thread
> for this next week if I don't get any more comments here.
>
> Thanks,
> Greg
>
> On Wed, Apr 10, 2024 at 10:42 AM Mickael Maison 
> wrote:
>
> > Hi,
> >
> > It looks like some of the SecurityManager APIs are starting to be
> > removed in JDK 23, see
> > - https://bugs.openjdk.org/browse/JDK-8296244
> > - https://github.com/quarkusio/quarkus/issues/39634
> >
> > JDK 23 is currently planned for September 2024.
> > Considering the timelines and that we only drop support for Java
> > versions in major Kafka releases, I think the proposed approach of
> > detecting the APIs to use makes sense.
> >
> > Thanks,
> > Mickael
> >
> > On Tue, Nov 21, 2023 at 8:38 AM Greg Harris
> >  wrote:
> > >
> > > Hey Ashwin,
> > >
> > > Thanks for your question!
> > >
> > > I believe we have only removed support for two Java versions:
> > > 7:
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-118%3A+Drop+Support+for+Java+7
> > > in 2.0
> > > 8:
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=181308223
> > > in 4.0
> > >
> > > In both cases, we changed the gradle sourceCompatibility and
> > > targetCompatibility at the same time, which I believe changes the
> > > "-target" option in javac.
> > >
> > > We have no plans currently for dropping support for 11 or 17, but I
> > > presume they would work in much the same way.
> > >
> > > Hope this helps!
> > > Greg
> > >
> > > On Mon, Nov 20, 2023 at 11:19 PM Ashwin 
> > wrote:
> > > >
> > > > Hi Greg,
> > > >
> > > > Thanks for writing this KIP.
> > > > I agree with you that handling this now will help us react to the
> > > > deprecation of SecurityManager, whenever it happens.
> > > >
> > > > I had a question regarding how we deprecate JDKs supported by Apache
> > Kafka.
> > > > When we drop support for JDK 17, will we set the “-target” option of
> > Javac
> > > > such that the resulting JARs will not load in JVMs which are lesser
> > than or
> > > > equal to that version ?
> > > >
> > > > Thanks,
> > > > Ashwin
> > > >
> > > >
> > > > On Tue, Nov 21, 2023 at 6:18 AM Greg Harris
> > 
> > > > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > I'd like to invite you all to discuss removing SecurityManager
> > support
> > > > > from Kafka. This affects the client and server SASL mechanism,
> Tiered
> > > > > Storage, and Connect classloading.
> > > > >
> > > > > Find the KIP here:
> > > > >
> > > > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1006%3A+Remove+SecurityManager+Support
> > > > >
> > > > > > I think this is a "code hygiene" effort that doesn't need to be
> dealt
> > > > > with urgently, but it would prevent a lot of headache later when
> Java
> > > > > does decide to remove support.
> > > > >
> > > > > If you are currently using the SecurityManager with Kafka, I'd
> really
> > > > > appreciate hearing how you're using it, and how you're planning
> > around
> > > > > its removal.
> > > > >
> > > > > Thanks!
> > > > > Greg Harris
> > > > >
> >
>


Re: [VOTE] KIP-752: Support --bootstrap-server in ReplicaVerificationTool

2024-07-03 Thread Chia-Ping Tsai
see https://issues.apache.org/jira/browse/KAFKA-17073 for deprecation.

On 2024/07/03 14:57:51 Chia-Ping Tsai wrote:
> Agree with Juma 
> 
> > Ismael Juma  於 2024年7月3日 晚上10:41 寫道:
> > 
> > I think we should just do a KIP to remove it in 4.0 with deprecation in 
> > 3.9.
> > 
> > Ismael
> > 
> >> On Wed, Jul 3, 2024 at 7:38 AM Chia-Ping Tsai  wrote:
> >> 
> >> hi Dongjin
> >> 
> >> It will be removed in 4.0 if we are able to deprecate it in 3.9. Hence, it
> >> seems to me enhancing it is a bit weird since the feature is active only
> >> for one release …
> >> 
> >>> 
>  Dongjin Lee  於 2024年7月3日 晚上10:04 寫道:
> >>> 
> >>> Hi Tsai,
> >>> 
> >>> Sorry for being late. How about this way?
> >>> 
> >>> 1. Amend mention on the deprecation plan to the original KIP.
> >>> 2. You cast +1 to this voting thread.
> >>> 3. Add a new KIP to remove this tool with the 4.0 release.
> >>> 
> >>> Since this KIP already has +2 bindings with PR, this way would be
> >> slightly
> >>> more swift. What do you think?
> >>> 
> >>> Thanks,
> >>> Dongjin
> >>> 
>  On Mon, Jun 3, 2024 at 4:15 AM Chia-Ping Tsai 
> >> wrote:
>  
>  `replica_verification_test.py` is unstable in my Jenkins, and that is when I
>  noticed this thread.
>  
>  Maybe kafka 4 is a good timing to remove this tool, but does it need a
>  KIP? If so, I'd like to file a KIP for it.
>  
>  Best,
>  Chia-Ping
>  
> > On 2021/06/10 05:01:43 Ismael Juma wrote:
> > KAFKA-12600 was a general change, not related to this tool
> >> specifically.
>  I
> > am not convinced this tool is actually useful, I haven't seen anyone
>  using
> > it in years.
> > 
> > Ismael
> > 
> >> On Wed, Jun 9, 2021 at 9:51 PM Dongjin Lee 
> >> wrote:
> > 
> >> Hi Ismael,
> >> 
> >> Before I submit this KIP, I reviewed some history. When KIP-499
> >> <
> >> 
>  
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-499+-+Unify+connection+name+flag+for+command+line+tool
> >>> 
> >> tried to resolve the inconsistencies between the command line tools,
>  two
> >> tools were omitted, probably by mistake.
> >> 
> >> - KAFKA-12878: Support --bootstrap-server
>  kafka-streams-application-reset
> >> 
> >> - KAFKA-12899: Support --bootstrap-server in ReplicaVerificationTool
> >>  (this one)
> >> 
> >> And it seems like this tool is still working. The last update was
> >> KAFKA-12600  by
>  you,
> >> which will also be included in this 3.0.0 release. It is why I
>  determined
> >> that this tool is worth updating.
> >> 
> >> Thanks,
> >> Dongjin
> >> 
> >> On Thu, Jun 10, 2021 at 1:26 PM Ismael Juma 
> >> wrote:
> >> 
> >>> Hi Dongjin,
> >>> 
> >>> Does this tool still work? I recall that there were some doubts
>  about it
> >>> and that's why it wasn't updated previously.
> >>> 
> >>> Ismael
> >>> 
> >>> On Sat, Jun 5, 2021 at 2:38 PM Dongjin Lee 
>  wrote:
> >>> 
>  Hi all,
>  
>  I'd like to call for a vote on KIP-752: Support --bootstrap-server
>  in
>  ReplicaVerificationTool:
>  
>  
>  
> >>> 
> >> 
>  
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-752%3A+Support+--bootstrap-server+in+ReplicaVerificationTool
>  
>  Best,
>  Dongjin
>  
>  --
>  *Dongjin Lee*
>  
>  *A hitchhiker in the mathematical world.*
>  
>  
>  
>  *github:  github.com/dongjinleekr
>  keybase:
> >>> https://keybase.io/dongjinleekr
>  linkedin:
> >>> kr.linkedin.com/in/dongjinleekr
>  speakerdeck:
>  speakerdeck.com/dongjin
>  *
>  

[jira] [Created] (KAFKA-17073) Deprecate ReplicaVerificationTool in 3.9

2024-07-03 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-17073:
--

 Summary: Deprecate ReplicaVerificationTool in 3.9
 Key: KAFKA-17073
 URL: https://issues.apache.org/jira/browse/KAFKA-17073
 Project: Kafka
  Issue Type: Improvement
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


see discussion https://lists.apache.org/thread/6zz7xwps8lq2lxfo5bhyl4cggh64c5py

In short, the tool is useless, so this is a good time to deprecate it in 3.9.
That enables us to remove it in 4.0.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17074) Remove ReplicaVerificationTool

2024-07-03 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-17074:
--

 Summary: Remove ReplicaVerificationTool
 Key: KAFKA-17074
 URL: https://issues.apache.org/jira/browse/KAFKA-17074
 Project: Kafka
  Issue Type: Improvement
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai
 Fix For: 4.0.0


this is follow-up of KAFKA-17073





Re: [VOTE] KIP-1022 Formatting and Updating Features

2024-07-03 Thread David Jacot
Hi Jun, Colin,

Thanks for your replies.

If the FeatureCommand relies on version 0 too, my suggestion does not work.
Omitting the features for old clients, as suggested by Colin, seems fine to
me. In practice, administrators will usually use a version of
FeatureCommand matching the cluster version, so the impact is not too bad,
given that the first features will be introduced from 3.9 onward.

Best,
David
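For illustration, the workaround discussed in this thread (translate a requested finalized level of 0 to 1 on the server side, and omit level-1 features from what old clients see, special-casing metadata.version) might be sketched as follows. This is an assumption-laden sketch, not Kafka code:

```java
public class FeatureLevelCompat {
    // Server-side translation: a request to set the finalized level to 0
    // ("delete / turn off") is treated as setting it to 1.
    static short normalizeRequestedLevel(short requested) {
        return requested == 0 ? (short) 1 : requested;
    }

    // When building the response for a pre-3.9 client, hide features whose
    // finalized level is 1 ("off"), so the old client never misreads them
    // as newly enabled features. metadata.version is special-cased because
    // its level 1 never meant "off".
    static boolean advertiseToOldClients(String feature, short finalizedLevel) {
        if (feature.equals("metadata.version")) return true;
        return finalizedLevel > 1;
    }

    public static void main(String[] args) {
        System.out.println(normalizeRequestedLevel((short) 0));                // 1
        System.out.println(advertiseToOldClients("group.version", (short) 1)); // false
    }
}
```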

On Tue, Jul 2, 2024 at 2:15 AM Colin McCabe  wrote:

> Hi David,
>
> In the ApiVersionsResponse, we really don't have an easy way of mapping
> finalizedVersion = 1 to "off" in older releases such as 3.7.0. For example,
> if a 3.9.0 broker advertises that it has finalized group.version = 1, that
> will be treated by 3.7.0 as a brand new feature, not as "KIP-848 is off."
> However, I suppose we could work around this by not setting a
> finalizedVersion at all for group.version (or any other feature) if its
> finalized level was 1. We could also work around the "deletion = set to 0"
> issue on the server side. The server can translate requests to set the
> finalized level to 0, into requests to set it to 1.
>
> So maybe this solution is worth considering, although it's unfortunate to
> lose 0. I suppose we'd have to special-case metadata.version being set to
> 1, since that was NOT equivalent to it being "off".
>
> best,
> Colin
>
>
> On Mon, Jul 1, 2024, at 10:11, Jun Rao wrote:
> > Hi, David,
> >
> > Yes, that's another option. It probably has its own challenges. For
> > example, the FeatureCommand tool currently treats disabling a feature as
> > setting the version to 0. It would be useful to get Jose's opinion on
> this
> > since he introduced version 0 in the kraft.version feature.
> >
> > Thanks,
> >
> > Jun
> >
> > On Sun, Jun 30, 2024 at 11:48 PM David Jacot  >
> > wrote:
> >
> >> Hi Jun, Colin,
> >>
> >> Have we considered sticking with the range going from version 1 to N
> where
> >> version 1 would be the equivalent of "disabled"? In the group.version
> case,
> >> we could introduce group.version=1 that does basically nothing and
> >> group.version=2 that enables the new protocol. I suppose that we could
> do
> >> the same for the other features. I agree that it is less elegant but it
> >> would avoid all the backward compatibility issues.
> >>
> >> Best,
> >> David
> >>
> >> On Fri, Jun 28, 2024 at 6:02 PM Jun Rao 
> wrote:
> >>
> >> > Hi, Colin,
> >> >
> >> > Yes, #3 is the scenario that I was thinking about.
> >> >
> >> > In either approach, there will be some information missing in the old
> >> > client. It seems that we should just pick the one that's less wrong.
> In
> >> the
> >> > more common case when a feature is finalized on the server,
> presenting a
> >> > supported feature with a range of 1-1 seems less wrong than omitting
> it
> >> in
> >> > the output of "kafka-features describe".
> >> >
> >> > Thanks,
> >> >
> >> > Jun
> >> >
> >> > On Thu, Jun 27, 2024 at 9:52 PM Colin McCabe 
> wrote:
> >> >
> >> > > Hi Jun,
> >> > >
> >> > > This is a fair question. I think there's a few different scenarios
> to
> >> > > consider:
> >> > >
> >> > > 1. mixed server software versions in a single cluster
> >> > >
> >> > > 2. new client software + old server software
> >> > >
> >> > > 3. old client software + new server software
> >> > >
> >> > > In scenario #1 and #2, we have old (pre-3.9) server software in the
> >> mix.
> >> > > This old software won't support features like group.version and
> >> > > kraft.version. As we know, there are no features supported in 3.8
> and
> >> > older
> >> > > except metadata.version itself. So the fact that we leave out some
> >> stuff
> >> > > from the ApiVersionResponse isn't terribly significant. We weren't
> >> going
> >> > to
> >> > > be able to enable those post-3.8 features anyway, since enabling a
> >> > feature
> >> > > requires ALL server nodes to support it.
> >> > >
> >> > > Scenario #3 is more interesting. With new server software, features
> >> like
> >> > > group.version and kraft.version may be enabled. But due to the
> >> > KAFKA-17011
> >> > > bug, we cannot accurately communicate the supported feature range
> back
> >> to
> >> > > the old client.
> >> > >
> >> > > What is the impact of this? It depends on what the client is. Today,
> >> the
> >> > > only client that cares about feature versions is admin client, which
> >> can
> >> > > surface them through the Admin.describeFeatures API. So if we omit
> the
> >> > > supported feature range, admin client won't report it. If we fudge
> it by
> >> > > reporting it as 1-1 instead of 0-1, admin client will report the
> fudged
> >> > > version.
> >> > >
> >> > > In theory, there could be other clients looking at the supported
> >> feature
> >> > > ranges later, but I guess those will be post-3.8, if they ever
> exist,
> >> and
> >> > > so not subject to this problem.
> >> > >
> >> > > AdminClient returns a separate map for "supported features" and
> >> > "finalized
> >> > > features." So leaving out the s

Re: [DISCUSS] KIP-1006: Remove SecurityManager Support

2024-07-03 Thread Greg Harris
Hi Frédérik,

Thanks for your response! This KIP is intended to be implemented
immediately after voting, so it will appear in Kafka 3.10 or 4.0 at the
earliest.
I'm currently working on a PR and in my opinion there is very little risk
of the change slipping from a release during the implementation stage; it's
all up to the vote.

I just re-read the KIP and believe it is still up-to-date given the current
Java version deprecation schedule. KIP-1013 only applies to the broker and
tools, while other modules such as clients and connect must still support
Java 11.
There are call-sites for these functions in the clients library and in
connect, so the KIP mentions those versions explicitly. And it doesn't
actually make a significant difference in the implementation complexity;
The new APIs are not added until Java 18, so even the brokers cannot
directly rely on the new APIs.

Thanks,
Greg
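A minimal sketch of the reflective detect-and-fall-back approach described above. The helper name and probing strategy are illustrative assumptions, not the actual Kafka implementation:

```java
import java.lang.reflect.InvocationTargetException;

public class SecurityManagerCompat {
    // Probe whether a static zero-arg method exists and can be invoked on
    // this JDK. NoSuchMethodException covers the API being removed
    // entirely; InvocationTargetException covers a degraded JDK that
    // throws (e.g. UnsupportedOperationException) when the legacy API is
    // called.
    static boolean methodUsable(Class<?> cls, String name) {
        try {
            cls.getMethod(name).invoke(null);
            return true;
        } catch (InvocationTargetException e) {
            return false; // legacy API present but degraded: use the fallback
        } catch (ReflectiveOperationException e) {
            return false; // legacy API removed: use the fallback
        }
    }

    public static void main(String[] args) {
        // A library would probe once at startup and cache the answer.
        System.out.println("legacy API usable: "
                + methodUsable(System.class, "getSecurityManager"));
    }
}
```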

On Wed, Jul 3, 2024 at 9:39 AM Frédérik Rouleau
 wrote:

> Hi all,
>
> When this KIP is intended to be implemented? As KIP-1013 is deprecating
> Java 11 in AK 3.7 and removes its support in AK 4.0, maybe the KIP needs an
> update.
>
> Regards,
>
> On Mon, Jul 1, 2024 at 10:39 PM Greg Harris 
> wrote:
>
> > Hi Mickael,
> >
> > Thanks for the pointer to that JDK ticket, I did not realize that the
> > legacy APIs were going to be degraded instead of removed.
> >
> > I have updated the KIP to accommodate for this change in the JDK
> > implementation. In addition to detecting the removal of the
> method/classes,
> > it will also fall back to the new implementations when encountering an
> > UnsupportedOperationException.
> > Since this will be a blocker for supporting JDK 23, I'll open a vote
> thread
> > for this next week if I don't get any more comments here.
> >
> > Thanks,
> > Greg
> >
> > On Wed, Apr 10, 2024 at 10:42 AM Mickael Maison <
> mickael.mai...@gmail.com>
> > wrote:
> >
> > > Hi,
> > >
> > > It looks like some of the SecurityManager APIs are starting to be
> > > removed in JDK 23, see
> > > - https://bugs.openjdk.org/browse/JDK-8296244
> > > - https://github.com/quarkusio/quarkus/issues/39634
> > >
> > > JDK 23 is currently planned for September 2024.
> > > Considering the timelines and that we only drop support for Java
> > > versions in major Kafka releases, I think the proposed approach of
> > > detecting the APIs to use makes sense.
> > >
> > > Thanks,
> > > Mickael
> > >
> > > On Tue, Nov 21, 2023 at 8:38 AM Greg Harris
> > >  wrote:
> > > >
> > > > Hey Ashwin,
> > > >
> > > > Thanks for your question!
> > > >
> > > > I believe we have only removed support for two Java versions:
> > > > 7:
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-118%3A+Drop+Support+for+Java+7
> > > > in 2.0
> > > > 8:
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=181308223
> > > > in 4.0
> > > >
> > > > In both cases, we changed the gradle sourceCompatibility and
> > > > targetCompatibility at the same time, which I believe changes the
> > > > "-target" option in javac.
> > > >
> > > > We have no plans currently for dropping support for 11 or 17, but I
> > > > presume they would work in much the same way.
> > > >
> > > > Hope this helps!
> > > > Greg
> > > >
> > > > On Mon, Nov 20, 2023 at 11:19 PM Ashwin  >
> > > wrote:
> > > > >
> > > > > Hi Greg,
> > > > >
> > > > > Thanks for writing this KIP.
> > > > > I agree with you that handling this now will help us react to the
> > > > > deprecation of SecurityManager, whenever it happens.
> > > > >
> > > > > I had a question regarding how we deprecate JDKs supported by
> Apache
> > > Kafka.
> > > > > When we drop support for JDK 17, will we set the “-target” option
> of
> > > Javac
> > > > > such that the resulting JARs will not load in JVMs which are lesser
> > > than or
> > > > > equal to that version ?
> > > > >
> > > > > Thanks,
> > > > > Ashwin
> > > > >
> > > > >
> > > > > On Tue, Nov 21, 2023 at 6:18 AM Greg Harris
> > > 
> > > > > wrote:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > I'd like to invite you all to discuss removing SecurityManager
> > > support
> > > > > > from Kafka. This affects the client and server SASL mechanism,
> > Tiered
> > > > > > Storage, and Connect classloading.
> > > > > >
> > > > > > Find the KIP here:
> > > > > >
> > > > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1006%3A+Remove+SecurityManager+Support
> > > > > >
> > > > > > I think this is a "code hygiene" effort that doesn't need to be
> > dealt
> > > > > > with urgently, but it would prevent a lot of headache later when
> > Java
> > > > > > does decide to remove support.
> > > > > >
> > > > > > If you are currently using the SecurityManager with Kafka, I'd
> > really
> > > > > > appreciate hearing how you're using it, and how you're planning
> > > around
> > > > > > its removal.
> > > > > >
> > > > > > Thanks!
> > > > > > Greg Harris
> > > > > >
> > >
> >
>


Re: [DISCUSS] KIP-1006: Remove SecurityManager Support

2024-07-03 Thread Ismael Juma
Hi Greg,

Thanks for the KIP. I'm not totally clear on why we need a KIP. Can we not
use the SecurityManager when it's available and fall back when it's not? If
so, then it would mean that whether SecurityManager is used or not depends
on the JDK and its configuration.

Ismael

On Mon, Nov 20, 2023 at 4:48 PM Greg Harris 
wrote:

> Hi all,
>
> I'd like to invite you all to discuss removing SecurityManager support
> from Kafka. This affects the client and server SASL mechanism, Tiered
> Storage, and Connect classloading.
>
> Find the KIP here:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1006%3A+Remove+SecurityManager+Support
>
> I think this is a "code hygiene" effort that doesn't need to be dealt
> with urgently, but it would prevent a lot of headache later when Java
> does decide to remove support.
>
> If you are currently using the SecurityManager with Kafka, I'd really
> appreciate hearing how you're using it, and how you're planning around
> its removal.
>
> Thanks!
> Greg Harris
>


Re: [DISCUSS] KIP-1006: Remove SecurityManager Support

2024-07-03 Thread Greg Harris
Hi Ismael,

Thanks for the question.

> Can we not
> use the SecurityManager when it's available and fall back when it's not?

This is the strategy the KIP is proposing in the interim before we drop
support for the SecurityManager. The KIP should be stating this idea, just
more verbosely.

> I'm not totally clear on why we need a KIP.

Implementing the above strategy is IMHO tech debt, and I wanted to plan for
eventually paying off that tech debt before incurring it.
I think the only way to eliminate it is going to be removing our support
for SecurityManager entirely.
Since there may be Kafka users using the SecurityManager, this would
represent a removal of functionality/breaking change for them, and
therefore warrants a KIP.

Please let me know if you have more questions,
Greg
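The detection-and-fallback strategy described in the KIP could look roughly like the sketch below. The class and method names are hypothetical, not Kafka's actual shim; it only shows the reflective detection idea, done once at startup:

```java
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

public class SecurityManagerShim {
    // Detect whether the legacy System.getSecurityManager API is still usable.
    // The result can be cached once at startup to pick a code path.
    static boolean legacySecurityManagerAvailable() {
        try {
            Method getter = System.class.getMethod("getSecurityManager");
            getter.invoke(null); // a removed or disabled API surfaces here
            return true;
        } catch (NoSuchMethodException e) {
            return false; // API removed from this JDK
        } catch (InvocationTargetException e) {
            return false; // e.g. UnsupportedOperationException on a future JDK
        } catch (IllegalAccessException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        if (legacySecurityManagerAvailable()) {
            System.out.println("using legacy SecurityManager code path");
        } else {
            System.out.println("using fallback code path");
        }
    }
}
```

Because the lookup is reflective, the same binary runs on JDKs where the API has been removed entirely, which is exactly the tech debt being discussed: the shim has to live until SecurityManager support is dropped.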

On Wed, Jul 3, 2024 at 10:14 AM Ismael Juma  wrote:

> Hi Greg,
>
> Thanks for the KIP. I'm not totally clear on why we need a KIP. Can we not
> use the SecurityManager when it's available and fall back when it's not? If
> so, then it would mean that whether SecurityManager is used or not depends
> on the JDK and its configuration.
>
> Ismael
>
> On Mon, Nov 20, 2023 at 4:48 PM Greg Harris 
> wrote:
>
> > Hi all,
> >
> > I'd like to invite you all to discuss removing SecurityManager support
> > from Kafka. This affects the client and server SASL mechanism, Tiered
> > Storage, and Connect classloading.
> >
> > Find the KIP here:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1006%3A+Remove+SecurityManager+Support
> >
> > I think this is a "code higiene" effort that doesn't need to be dealt
> > with urgently, but it would prevent a lot of headache later when Java
> > does decide to remove support.
> >
> > If you are currently using the SecurityManager with Kafka, I'd really
> > appreciate hearing how you're using it, and how you're planning around
> > its removal.
> >
> > Thanks!
> > Greg Harris
> >
>


Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.8 #62

2024-07-03 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-1022 Formatting and Updating Features

2024-07-03 Thread Jun Rao
Hi, David,

Thanks for the reply. In the common case, there is no difference between
omitting just v0 of the feature and omitting the feature completely. It's
just when an old client is used, there is some difference. To me,
omitting just v0 of the feature seems slightly better for the old client.

Jun
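The two options being weighed here can be contrasted with a small sketch. The helper below is purely illustrative (it is not the actual ApiVersionsResponse code): when a feature's supported range starts at version 0 and an old client cannot interpret v0, the server can either advertise the range with the minimum clamped to 1, or omit the feature entirely:

```java
import java.util.Optional;

public class FeatureRangeAdvertiser {
    // Hypothetical helper: given a supported range [minVersion, maxVersion],
    // decide what to advertise to a pre-3.9 client that does not understand v0.
    static Optional<int[]> advertisedRange(int minVersion, int maxVersion, boolean omitWholeFeature) {
        if (minVersion >= 1) {
            return Optional.of(new int[] {minVersion, maxVersion}); // nothing to hide
        }
        if (maxVersion < 1 || omitWholeFeature) {
            return Optional.empty(); // feature not advertised at all
        }
        return Optional.of(new int[] {1, maxVersion}); // omit just v0
    }

    public static void main(String[] args) {
        // A feature supported as 0-1: omitting just v0 leaves the range 1-1.
        int[] clamped = advertisedRange(0, 1, false).get();
        System.out.println(clamped[0] + "-" + clamped[1]); // prints "1-1"
        // Alternatively, omit the feature completely.
        System.out.println(advertisedRange(0, 1, true).isPresent()); // prints "false"
    }
}
```

In the common case both choices look the same to a new client; the difference only shows up in what an old client can see, which is Jun's point above.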

On Wed, Jul 3, 2024 at 9:45 AM David Jacot 
wrote:

> Hi Jun, Colin,
>
> Thanks for your replies.
>
> If the FeatureCommand relies on version 0 too, my suggestion does not work.
> Omitting the features for old clients as suggested by Colin seems fine for
> me. In practice, administrators will usually use a version of
> FeatureCommand matching the cluster version, so the impact is not too bad,
> given that the first features will be introduced starting with 3.9.
>
> Best,
> David
>
> On Tue, Jul 2, 2024 at 2:15 AM Colin McCabe  wrote:
>
> > Hi David,
> >
> > In the ApiVersionsResponse, we really don't have an easy way of mapping
> > finalizedVersion = 1 to "off" in older releases such as 3.7.0. For
> example,
> > if a 3.9.0 broker advertises that it has finalized group.version = 1,
> that
> > will be treated by 3.7.0 as a brand new feature, not as "KIP-848 is off."
> > However, I suppose we could work around this by not setting a
> > finalizedVersion at all for group.version (or any other feature) if its
> > finalized level was 1. We could also work around the "deletion = set to
> 0"
> > issue on the server side. The server can translate requests to set the
> > finalized level to 0, into requests to set it to 1.
> >
> > So maybe this solution is worth considering, although it's unfortunate to
> > lose 0. I suppose we'd have to special case metadata.version being set to
> > 1, since that was NOT equivalent to it being "off".
> >
> > best,
> > Colin
> >
> >
> > On Mon, Jul 1, 2024, at 10:11, Jun Rao wrote:
> > > Hi, David,
> > >
> > > Yes, that's another option. It probably has its own challenges. For
> > > example, the FeatureCommand tool currently treats disabling a feature
> as
> > > setting the version to 0. It would be useful to get Jose's opinion on
> > this
> > > since he introduced version 0 in the kraft.version feature.
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Sun, Jun 30, 2024 at 11:48 PM David Jacot
>  > >
> > > wrote:
> > >
> > >> Hi Jun, Colin,
> > >>
> > >> Have we considered sticking with the range going from version 1 to N
> > where
> > >> version 1 would be the equivalent of "disabled"? In the group.version
> > case,
> > >> we could introduce group.version=1 that does basically nothing and
> > >> group.version=2 that enables the new protocol. I suppose that we could
> > do
> > >> the same for the other features. I agree that it is less elegant but
> it
> > >> would avoid all the backward compatibility issues.
> > >>
> > >> Best,
> > >> David
> > >>
> > >> On Fri, Jun 28, 2024 at 6:02 PM Jun Rao 
> > wrote:
> > >>
> > >> > Hi, Colin,
> > >> >
> > >> > Yes, #3 is the scenario that I was thinking about.
> > >> >
> > >> > In either approach, there will be some information missing in the
> old
> > >> > client. It seems that we should just pick the one that's less wrong.
> > In
> > >> the
> > >> > more common case when a feature is finalized on the server,
> > presenting a
> > >> > supported feature with a range of 1-1 seems less wrong than omitting
> > it
> > >> in
> > >> > the output of "kafka-features describe".
> > >> >
> > >> > Thanks,
> > >> >
> > >> > Jun
> > >> >
> > >> > On Thu, Jun 27, 2024 at 9:52 PM Colin McCabe 
> > wrote:
> > >> >
> > >> > > Hi Jun,
> > >> > >
> > >> > > This is a fair question. I think there's a few different scenarios
> > to
> > >> > > consider:
> > >> > >
> > >> > > 1. mixed server software versions in a single cluster
> > >> > >
> > >> > > 2. new client software + old server software
> > >> > >
> > >> > > 3. old client software + new server software
> > >> > >
> > >> > > In scenario #1 and #2, we have old (pre-3.9) server software in
> the
> > >> mix.
> > >> > > This old software won't support features like group.version and
> > >> > > kraft.version. As we know, there are no features supported in 3.8
> > and
> > >> > older
> > >> > > except metadata.version itself. So the fact that we leave out some
> > >> stuff
> > >> > > from the ApiVersionResponse isn't terribly significant. We weren't
> > >> going
> > >> > to
> > >> > > be able to enable those post-3.8 features anyway, since enabling a
> > >> > feature
> > >> > > requires ALL server nodes to support it.
> > >> > >
> > >> > > Scenario #3 is more interesting. With new server software,
> features
> > >> like
> > >> > > group.version and kraft.version may be enabled. But due to the
> > >> > KAFKA-17011
> > >> > > bug, we cannot accurately communicate the supported feature range
> > back
> > >> to
> > >> > > the old client.
> > >> > >
> > >> > > What is the impact of this? It depends on what the client is.
> Today,
> > >> the
> > >> > > only client that cares about feature versions is admin client,
> which
> > >> 

Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #3072

2024-07-03 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-10816) Connect REST API should have a resource that can be used as a readiness probe

2024-07-03 Thread Chris Egerton (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Egerton resolved KAFKA-10816.
---
Fix Version/s: 3.9.0
   Resolution: Done

> Connect REST API should have a resource that can be used as a readiness probe
> -
>
> Key: KAFKA-10816
> URL: https://issues.apache.org/jira/browse/KAFKA-10816
> Project: Kafka
>  Issue Type: Improvement
>  Components: connect
>Reporter: Randall Hauch
>Assignee: Chris Egerton
>Priority: Major
> Fix For: 3.9.0
>
>
> There are a few ways to accurately detect whether a Connect worker is 
> *completely* ready to process all REST requests:
> # Wait for {{Herder started}} in the Connect worker logs
> # Use the REST API to issue a request that will be completed only after the 
> herder has started, such as {{GET /connectors/{name}/}} or {{GET 
> /connectors/{name}/status}}.
> Other techniques can be used to detect other startup states, though none of 
> these will guarantee that the worker has indeed completely started up and can 
> process all REST requests:
> * {{GET /}} can be used to know when the REST server has started, but this 
> may be before the worker has started completely and successfully.
> * {{GET /connectors}} can be used to know when the REST server has started, 
> but this may be before the worker has started completely and successfully. 
> And, for the distributed Connect worker, this may actually return an older 
> list of connectors if the worker hasn't yet completely read through the 
> internal config topic. It's also possible that this request returns even if 
> the worker is having trouble reading from the internal config topic.
> * {{GET /connector-plugins}} can be used to know when the REST server has 
> started, but this may be before the worker has started completely and 
> successfully.
> The Connect REST API should have an endpoint that more obviously and more 
> simply can be used as a readiness probe. This could be a new resource (e.g., 
> {{GET /status}}), though this would only work on newer Connect runtimes, and 
> existing tooling, installations, and examples would have to be modified to 
> take advantage of this feature (if it exists). 
> Alternatively, we could make sure that the existing resources (e.g., {{GET 
> /}} or {{GET /connectors}}) wait for the herder to start completely; this 
> wouldn't require a KIP, and it would not require clients to use different 
> techniques for newer and older Connect runtimes. (Whether or not we back-port 
> this is another question altogether, since it's debatable whether the 
> behavior of the existing REST resources is truly a bug.)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17075) Use health check endpoint to verify Connect worker readiness in system tests

2024-07-03 Thread Chris Egerton (Jira)
Chris Egerton created KAFKA-17075:
-

 Summary: Use health check endpoint to verify Connect worker 
readiness in system tests
 Key: KAFKA-17075
 URL: https://issues.apache.org/jira/browse/KAFKA-17075
 Project: Kafka
  Issue Type: Improvement
  Components: connect
Affects Versions: 3.9.0
Reporter: Chris Egerton
Assignee: Chris Egerton


We introduced a health check endpoint for Kafka Connect as part of work on 
KAFKA-10816. We should start to use that endpoint to verify worker readiness in 
our system tests, instead of scanning worker logs for specific messages or 
hitting other, less-reliable REST endpoints.
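A readiness check along these lines could be sketched as below. The base URL, port, and endpoint path are illustrative assumptions (the per-connector status endpoint discussed in KAFKA-10816), not a prescription of the final health check endpoint:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class ConnectReadinessProbe {
    // Treat the worker as ready only when a request that requires a started
    // herder (here, a connector status lookup) returns 200.
    public static boolean isReady(String baseUrl, String connector) {
        try {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest req = HttpRequest.newBuilder(
                            URI.create(baseUrl + "/connectors/" + connector + "/status"))
                    .timeout(Duration.ofSeconds(5))
                    .GET()
                    .build();
            HttpResponse<String> resp = client.send(req, HttpResponse.BodyHandlers.ofString());
            return resp.statusCode() == 200;
        } catch (Exception e) {
            return false; // connection refused, timeout, etc. all count as "not ready"
        }
    }

    public static void main(String[] args) {
        // Assumes a Connect worker on the default REST port; prints false otherwise.
        System.out.println(isReady("http://localhost:8083", "my-connector"));
    }
}
```

A dedicated health check endpoint makes this simpler still, since the probe no longer depends on a specific connector existing.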





[jira] [Created] (KAFKA-17076) logEndOffset could be lost due to log cleaning

2024-07-03 Thread Jun Rao (Jira)
Jun Rao created KAFKA-17076:
---

 Summary: logEndOffset could be lost due to log cleaning
 Key: KAFKA-17076
 URL: https://issues.apache.org/jira/browse/KAFKA-17076
 Project: Kafka
  Issue Type: Bug
  Components: core
Reporter: Jun Rao


It's possible for the log cleaner to remove all records in the suffix of the 
log. If the partition is then reassigned, the new replica won't be able to see 
the true logEndOffset since there is no record batch associated with it. If 
this replica becomes the leader, it will assign an already used offset to a 
newly produced record, which is incorrect.

 

It's relatively rare to trigger this issue since the active segment is never 
cleaned and typically is not empty. However, the following is one possibility.
 # records with offset 100-110 are produced and fully replicated to all ISR. 
All those records are delete records for certain keys.
 # record with offset 111 is produced. It forces the roll of a new segment in 
broker b1 and is added to the log. The record is not committed and is later 
truncated from the log, leaving an empty active segment in this log. b1 at some 
point becomes the leader.
 # log cleaner kicks in and removes records 100-110.
 # The partition is reassigned to another broker b2. b2 replicates all records 
from b1 up to offset 100 and marks its logEndOffset at 100. Since there is no 
record to replicate after offset 100 in b1, b2's logEndOffset stays at 100 and 
b2 can join the ISR.
 # b2 becomes the leader and assigns offset 100 to a new record.
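The five steps above can be modeled with a small sketch. This is an illustration of the offset arithmetic only, not the broker's actual replication code:

```java
import java.util.ArrayList;
import java.util.List;

public class LogCleaningLeoLoss {
    // Models step 4: a follower's logEndOffset is derived only from the
    // batches it can fetch, so an emptied log hides the true end offset.
    static int followerLogEndOffset(int fetchStartOffset, List<Integer> leaderBatches) {
        if (leaderBatches.isEmpty()) {
            return fetchStartOffset; // nothing to fetch: LEO stays at the start offset
        }
        return leaderBatches.get(leaderBatches.size() - 1) + 1;
    }

    public static void main(String[] args) {
        List<Integer> b1Log = new ArrayList<>();
        for (int o = 100; o <= 110; o++) b1Log.add(o); // committed tombstones 100-110
        int b1TrueLogEndOffset = 111;                  // next offset b1 would assign

        b1Log.clear(); // log cleaner removes 100-110; the active segment is empty

        // b2 replicates from offset 100 but sees no batches, so its LEO is 100.
        int b2LogEndOffset = followerLogEndOffset(100, b1Log);

        // If b2 becomes leader, it reuses an offset that was already committed.
        System.out.println("true LEO=" + b1TrueLogEndOffset
                + ", b2 LEO=" + b2LogEndOffset
                + ", offset reused=" + (b2LogEndOffset < b1TrueLogEndOffset));
    }
}
```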





[PR] Test 3 [kafka-merge-queue-sandbox]

2024-07-03 Thread via GitHub


mumrah opened a new pull request, #6:
URL: https://github.com/apache/kafka-merge-queue-sandbox/pull/6

   (no comment)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] Test 4 [kafka-merge-queue-sandbox]

2024-07-03 Thread via GitHub


mumrah opened a new pull request, #7:
URL: https://github.com/apache/kafka-merge-queue-sandbox/pull/7

   (no comment)





Re: [PR] test 2: trying again [kafka-merge-queue-sandbox]

2024-07-03 Thread via GitHub


mumrah closed pull request #5: test 2: trying again
URL: https://github.com/apache/kafka-merge-queue-sandbox/pull/5





[jira] [Created] (KAFKA-17077) the node.id is inconsistent with broker.id when "broker.id.generation.enable=true"

2024-07-03 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-17077:
--

 Summary: the node.id is inconsistent with broker.id when 
"broker.id.generation.enable=true"
 Key: KAFKA-17077
 URL: https://issues.apache.org/jira/browse/KAFKA-17077
 Project: Kafka
  Issue Type: Bug
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


We change the broker id of `KafkaConfig` directly when 
`broker.id.generation.enable=true` [0]. However, the update is NOT synced to 
node.id of `KafkaConfig`. This results in the following issues:

1. We can see many "-1" entries in the log, for example:
{code:sh}
[2024-07-03 19:23:08,453] INFO [ExpirationReaper--1-AlterAcls]: Starting 
(kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
{code}

2. `KafkaRaftManager` will use the uninitialized node.id to create 
`KafkaRaftClient` during migration [1], and the error subsequently occurs

[0] 
https://github.com/apache/kafka/blob/27220d146c5d043da4adc3d636036bd6e7b112d2/core/src/main/scala/kafka/server/KafkaServer.scala#L261
[1] 
https://github.com/apache/kafka/blob/27220d146c5d043da4adc3d636036bd6e7b112d2/core/src/main/scala/kafka/raft/RaftManager.scala#L230





[jira] [Created] (KAFKA-17078) Add SecurityManager reflective shim

2024-07-03 Thread Greg Harris (Jira)
Greg Harris created KAFKA-17078:
---

 Summary: Add SecurityManager reflective shim
 Key: KAFKA-17078
 URL: https://issues.apache.org/jira/browse/KAFKA-17078
 Project: Kafka
  Issue Type: Task
  Components: clients, connect, Tiered-Storage
Reporter: Greg Harris
Assignee: Greg Harris


Add a shim class to allow for detection and usage of legacy and modern methods 
before and after the SecurityManager removal.





Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #3073

2024-07-03 Thread Apache Jenkins Server
See 




[jira] [Reopened] (KAFKA-10370) WorkerSinkTask: IllegalStateException caused by consumer.seek(tp, offsets) when (tp, offsets) are supplied by WorkerSinkTaskContext

2024-07-03 Thread Greg Harris (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Harris reopened KAFKA-10370:
-
  Assignee: (was: Ning Zhang)

> WorkerSinkTask: IllegalStateException caused by consumer.seek(tp, offsets) 
> when (tp, offsets) are supplied by WorkerSinkTaskContext
> --
>
> Key: KAFKA-10370
> URL: https://issues.apache.org/jira/browse/KAFKA-10370
> Project: Kafka
>  Issue Type: New Feature
>  Components: connect
>Affects Versions: 2.5.0
>Reporter: Ning Zhang
>Priority: Major
>
> In 
> [WorkerSinkTask.java|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java],
>  when we want the consumer to consume from certain offsets, rather than from 
> the last committed offset, 
> [WorkerSinkTaskContext|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTaskContext.java#L63-L66]
>  provided a way to supply the offsets from external (e.g. implementation of 
> SinkTask) to rewind the consumer. 
> In the [poll() 
> method|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java#L312],
>  it first call 
> [rewind()|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java#L615-L633]
>  to (1) read the offsets from WorkerSinkTaskContext, if the offsets are not 
> empty, (2) consumer.seek(tp, offset) to rewind the consumer.
> As a part of [WorkerSinkTask 
> initialization|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java#L290-L307],
>  when the [SinkTask 
> starts|https://github.com/apache/kafka/blob/trunk/connect/api/src/main/java/org/apache/kafka/connect/sink/SinkTask.java#L83-L88],
>  we can supply the specific offsets by +"context.offset(supplied_offsets);+" 
> in start() method, so that when the consumer does the first poll, it should 
> rewind to the specific offsets in rewind() method. However in practice, we 
> saw the following IllegalStateException when running consumer.seek(tp, 
> offsets);
> {code:java}
> [2020-08-07 23:53:55,752] INFO WorkerSinkTask{id=MirrorSinkConnector-0} 
> Rewind test-1 to offset 3 
> (org.apache.kafka.connect.runtime.WorkerSinkTask:648)
> [2020-08-07 23:53:55,752] INFO [Consumer 
> clientId=connector-consumer-MirrorSinkConnector-0, 
> groupId=connect-MirrorSinkConnector] Seeking to offset 3 for partition test-1 
> (org.apache.kafka.clients.consumer.KafkaConsumer:1592)
> [2020-08-07 23:53:55,752] ERROR WorkerSinkTask{id=MirrorSinkConnector-0} Task 
> threw an uncaught and unrecoverable exception 
> (org.apache.kafka.connect.runtime.WorkerTask:187)
> java.lang.IllegalStateException: No current assignment for partition test-1
> at 
> org.apache.kafka.clients.consumer.internals.SubscriptionState.assignedState(SubscriptionState.java:368)
> at 
> org.apache.kafka.clients.consumer.internals.SubscriptionState.seekUnvalidated(SubscriptionState.java:385)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.seek(KafkaConsumer.java:1597)
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.rewind(WorkerSinkTask.java:649)
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:334)
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:229)
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:198)
> at 
> org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:185)
> at 
> org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:235)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> [2020-08-07 23:53:55,752] ERROR WorkerSinkTask{id=MirrorSinkConnector-0} Task 
> is being killed and will not recover until manually restarted 
> (org.apache.kafka.connect.runtime.WorkerTask:188)
> {code}
> As suggested in 
> https://stackoverflow.com/questions/41008610/kafkaconsumer-0-10-java-api-error-message-no-current-assignment-for-partition/41010594,
>  the resolution (that has been initially verified) proposed in the attached 
> PR is to use *consumer.assign* with *consumer.seek* , instead of 
> *consumer.subscribe*, to handle the

[jira] [Resolved] (KAFKA-17069) Remote copy throttle metrics

2024-07-03 Thread Luke Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luke Chen resolved KAFKA-17069.
---
Fix Version/s: 3.9.0
   Resolution: Fixed

> Remote copy throttle metrics 
> -
>
> Key: KAFKA-17069
> URL: https://issues.apache.org/jira/browse/KAFKA-17069
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Abhijeet Kumar
>Priority: Major
> Fix For: 3.9.0
>
>






Re: [VOTE] KIP-752: Support --bootstrap-server in ReplicaVerificationTool

2024-07-03 Thread Dongjin Lee
Okay, Ismael's opinion seems reasonable, so I will follow it.

Then, could you please assign me the deprecation issue & KIP? Since I
opened this KIP in the past, I hope to wrap it up. If you are okay with it,
I will close this issue & KIP and file the deprecation KIP instead.

Thanks,
Dongjin

On Thu, Jul 4, 2024 at 1:42 AM Chia-Ping Tsai  wrote:

> see https://issues.apache.org/jira/browse/KAFKA-17073 for deprecation.
>
> On 2024/07/03 14:57:51 Chia-Ping Tsai wrote:
> > Agree to Juma
> >
> > > On Jul 3, 2024, at 10:41 PM, Ismael Juma wrote:
> > >
> > > I think we should just do a KIP to remove it in 4.0 with deprecation
> in 3.9.
> > >
> > > Ismael
> > >
> > >> On Wed, Jul 3, 2024 at 7:38 AM Chia-Ping Tsai 
> wrote:
> > >>
> > >> hi Dongjin
> > >>
> > >> It will be removed in 4.0 if we are able to deprecate it in 3.9.
> Hence, it
> > >> seems to me enhancing it is a bit weird since the feature is active
> only
> > >> for one release …
> > >>
> > >>>
> >  On Jul 3, 2024, at 10:04 PM, Dongjin Lee wrote:
> > >>>
> > >>> Hi Tsai,
> > >>>
> > >>> Sorry for being late. How about this way?
> > >>>
> > >>> 1. Amend mention on the deprecation plan to the original KIP.
> > >>> 2. You cast +1 to this voting thread.
> > >>> 3. Add a new KIP to remove this tool with the 4.0 release.
> > >>>
> > >>> Since this KIP already has +2 binding votes with a PR, this way would be
> > >>> slightly swifter. What do you think?
> > >>>
> > >>> Thanks,
> > >>> Dongjin
> > >>>
> >  On Mon, Jun 3, 2024 at 4:15 AM Chia-Ping Tsai 
> > >> wrote:
> > 
> >  `replica_verification_test.py` is unstable in my jenkins, and then I
> >  notice this thread.
> > 
> >  Maybe Kafka 4 is a good time to remove this tool, but does it
> need a
> >  KIP? If so, I'd like to file a KIP for it.
> > 
> >  Best,
> >  Chia-Ping
> > 
> > > On 2021/06/10 05:01:43 Ismael Juma wrote:
> > > KAFKA-12600 was a general change, not related to this tool
> > >> specifically.
> >  I
> > > am not convinced this tool is actually useful, I haven't seen
> anyone
> >  using
> > > it in years.
> > >
> > > Ismael
> > >
> > >> On Wed, Jun 9, 2021 at 9:51 PM Dongjin Lee 
> > >> wrote:
> > >
> > >> Hi Ismael,
> > >>
> > >> Before I submit this KIP, I reviewed some history. When KIP-499
> > >> <
> > >>
> > 
> > >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-499+-+Unify+connection+name+flag+for+command+line+tool
> > >>>
> > >> tried to resolve the inconsistencies between the command line
> tools,
> >  two
> > >> tools were omitted, probably by mistake.
> > >>
> > >> - KAFKA-12878: Support --bootstrap-server
> >  kafka-streams-application-reset
> > >> 
> > >> - KAFKA-12899: Support --bootstrap-server in
> ReplicaVerificationTool
> > >>  (this one)
> > >>
> > >> And it seems like this tool is still working. The last update was
> > >> KAFKA-12600 
> by
> >  you,
> > >> which will also be included in this 3.0.0 release. It is why I
> >  determined
> > >> that this tool is worth updating.
> > >>
> > >> Thanks,
> > >> Dongjin
> > >>
> > >> On Thu, Jun 10, 2021 at 1:26 PM Ismael Juma 
> > >> wrote:
> > >>
> > >>> Hi Dongjin,
> > >>>
> > >>> Does this tool still work? I recall that there were some doubts
> >  about it
> > >>> and that's why it wasn't updated previously.
> > >>>
> > >>> Ismael
> > >>>
> > >>> On Sat, Jun 5, 2021 at 2:38 PM Dongjin Lee 
> >  wrote:
> > >>>
> >  Hi all,
> > 
> >  I'd like to call for a vote on KIP-752: Support
> --bootstrap-server
> >  in
> >  ReplicaVerificationTool:
> > 
> > 
> > 
> > >>>
> > >>
> > 
> > >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-752%3A+Support+--bootstrap-server+in+ReplicaVerificationTool
> > 
> >  Best,
> >  Dongjin
> > 
> >  --
> >  *Dongjin Lee*
> > 
> >  *A hitchhiker in the mathematical world.*
> > 
> > 
> > 
> >  *github:  github.com/dongjinleekr
> >  keybase:
> > >>> https://keybase.io/dongjinleekr
> >  linkedin:
> > >>> kr.linkedin.com/in/dongjinleekr
> >  speakerdeck:
> >  speakerdeck.com/dongjin
> >  *
> > 
> > >>>
> > >>
> > >>
> > >> --
> > >> *Dongjin Lee*
> > >>
> > >> *A hitchhiker in the mathematical world.*
> > >>
> > >>
> > >>
> > >> *github:  github.com/dongjinleekr
>

[jira] [Created] (KAFKA-17079) scoverage plugin not found in maven repo, version 1.9.3

2024-07-03 Thread kaushik srinivas (Jira)
kaushik srinivas created KAFKA-17079:


 Summary: scoverage plugin not found in maven repo, version 1.9.3
 Key: KAFKA-17079
 URL: https://issues.apache.org/jira/browse/KAFKA-17079
 Project: Kafka
  Issue Type: Improvement
Reporter: kaushik srinivas


Team, 

While creating coverage reports for the Kafka core module, the issue below is 
seen. The branch used is 3.6.

* Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task 
':core:compileScoverageScala'.
        at 
org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:38)
        at 
org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:77)
        at 
org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:55)
        at 
org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:52)
        at 
org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
        at 
org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:199)
        at 
org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
        at 
org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
        at 
org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157)
        at 
org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
        at 
org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
        at 
org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:73)
        at 
org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter.execute(EventFiringTaskExecuter.java:52)
        at 
org.gradle.execution.plan.LocalTaskNodeExecutor.execute(LocalTaskNodeExecutor.java:42)
        at 
org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:337)
        at 
org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:324)
        at 
org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:317)
        at 
org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:303)
        at 
org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.execute(DefaultPlanExecutor.java:463)
        at 
org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.run(DefaultPlanExecutor.java:380)
        at 
org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
        at 
org.gradle.internal.concurrent.AbstractManagedExecutor$1.run(AbstractManagedExecutor.java:47)
Caused by: 
org.gradle.api.internal.artifacts.ivyservice.DefaultLenientConfiguration$ArtifactResolveException:
 Could not resolve all files for configuration ':core:scoverage'.
        at 
org.gradle.api.internal.artifacts.configurations.DefaultConfiguration.mapFailure(DefaultConfiguration.java:1769)
        at 
org.gradle.api.internal.artifacts.configurations.DefaultConfiguration.access$3400(DefaultConfiguration.java:176)
        at 
org.gradle.api.internal.artifacts.configurations.DefaultConfiguration$DefaultResolutionHost.mapFailure(DefaultConfiguration.java:2496)
        at 
org.gradle.api.internal.artifacts.configurations.ResolutionHost.rethrowFailure(ResolutionHost.java:30)
        at 
org.gradle.api.internal.artifacts.configurations.ResolutionBackedFileCollection.visitContents(ResolutionBackedFileCollection.java:74)
        at 
org.gradle.api.internal.file.AbstractFileCollection.visitStructure(AbstractFileCollection.java:366)
        at 
org.gradle.api.internal.artifacts.configurations.DefaultConfiguration.visitContents(DefaultConfiguration.java:574)
        at 
org.gradle.api.internal.file.AbstractFileCollection.visitStructure(AbstractFileCollection.java:366)
        at 
org.gradle.api.internal.file.CompositeFileCollection.lambda$visitContents$0(CompositeFileCollection.java:133)
        at 
org.gradle.api.internal.file.UnionFileCollection.visitChildren(UnionFileCollection.java:81)
        at 
org.gradle.api.internal.file.CompositeFileCollection.visitContents(CompositeFileCollection.java:133)
        at 
org.gradle.api.internal.file.AbstractFileCollection.visitStructure(AbstractFileCollection.java:366)
        at 
org.gradle.api.internal.file.CompositeFileCollection.lambda$visitCo