Re: [DISCUSS] Please review the 3.1.0 blog post

2021-12-16 Thread David Jacot
Hey folks,

I have updated the blog post based on offline feedback that I have received. The
changes were minor.

Cheers,
David

On Wed, Dec 15, 2021 at 3:00 PM Igor Soarez  wrote:
>
> Hi David,
>
> Apart from the obviously identified TODO items I couldn't find any issues. It
> looks great!
>
> --
> Igor
>
> On Wed, Dec 15, 2021, at 8:37 AM, David Jacot wrote:
> > Thanks, Mickael. I will fix this.
> >
> > Best,
> > David
> >
> > On Mon, Dec 13, 2021 at 3:55 PM Mickael Maison  
> > wrote:
> >>
> >> Hi David,
> >>
> >> It looks good, I just noticed a typo:
> >> "KIP-733" should be "KIP-773"
> >>
> >> Thanks
> >>
> >> On Sun, Dec 12, 2021 at 4:05 PM David Jacot  
> >> wrote:
> >> >
> >> > Hi Luke,
> >> >
> >> > Thanks for your feedback. I have found and have fixed the issue. It was
> >> > actually
> >> > due to the formatting of the title of the AK 3.0 blog post.
> >> >
> >> > Best,
> >> > David
> >> >
> >> > On Sat, Dec 11, 2021 at 9:44 AM Luke Chen  wrote:
> >> >
> >> > > Oh, sorry! I have a typo in your name!
> >> > > Sorry, David! >.<
> >> > >
> >> > > Luke
> >> > >
> >> > > On Sat, Dec 11, 2021 at 4:42 PM Luke Chen  wrote:
> >> > >
> >> > >> Hi Davie,
> >> > >>
> >> > >> Thanks for drafting the release announcement post.
> >> > >> I've checked the content, and looks good to me.
> >> > >> But I think the header section: "What's New in Apache..." is not
> >> > >> formatted properly.
> >> > >> I checked the previous blog post, and it should be a hyperlink just 
> >> > >> like
> >> > >> the "Main" kind of font.
> >> > >>
> >> > >> [image: image.png]
> >> > >>
> >> > >> Thank you.
> >> > >> Luke
> >> > >>
> >> > >>
> >> > >> On Sat, Dec 11, 2021 at 5:51 AM David Jacot 
> >> > >> 
> >> > >> wrote:
> >> > >>
> >> > >>> I have put the wrong link in my previous email. Here is the public 
> >> > >>> one:
> >> > >>>
> >> > >>>
> >> > >>> https://blogs.apache.org/preview/kafka/?previewEntry=what-s-new-in-apache7
> >> > >>>
> >> > >>> Best,
> >> > >>> David
> >> > >>>
> >> > >>> On Fri, Dec 10, 2021 at 10:35 PM David Jacot 
> >> > >>> wrote:
> >> > >>> >
> >> > >>> > Hello all,
> >> > >>> >
> >> > >>> > I have prepared a draft of the release announcement post for the
> >> > >>> > Apache Kafka 3.1.0 release:
> >> > >>> >
> >> > >>> >
> >> > >>> https://blogs.apache.org/roller-ui/authoring/preview/kafka/?previewEntry=what-s-new-in-apache7
> >> > >>> >
> >> > >>> > I would greatly appreciate your reviews if you have a moment.
> >> > >>> >
> >> > >>> > Thanks,
> >> > >>> > David
> >> > >>>
> >> > >>


Re: [VOTE] KIP-784: Add top-level error code field to DescribeLogDirsResponse

2021-12-16 Thread David Jacot
+1 (binding). Thanks for the KIP!
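For context, the change amounts to adding one top-level field next to the existing per-log-dir results. A sketch of the amended response schema in Kafka's RPC JSON format (abbreviated; the version number and wording here are illustrative, not copied from the KIP):

```json
{
  "name": "DescribeLogDirsResponse",
  "type": "response",
  "fields": [
    { "name": "ThrottleTimeMs", "type": "int32", "versions": "0+" },
    { "name": "ErrorCode", "type": "int16", "versions": "3+",
      "about": "The top-level error code, consistent with other responses." },
    { "name": "Results", "type": "[]DescribeLogDirsResult", "versions": "0+" }
  ]
}
```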

On Mon, Dec 13, 2021 at 11:14 AM Mickael Maison
 wrote:
>
> Bumping this thread another time.
>
> This is a very minor change to make DescribeLogDirsResponse consistent
> with the other APIs.
> Let me know if you have any feedback.
>
> Thanks,
> Mickael
>
> On Mon, Nov 22, 2021 at 10:29 AM Tom Bentley  wrote:
> >
> > Hi Mickael,
> >
> > It's pretty low value, but I think consistency is a useful trait, and it's
> > easily achievable here.
> >
> > +1 (binding).
> >
> > Kind regards,
> >
> > Tom
> >
> >
> > On Thu, Nov 18, 2021 at 2:56 PM Mickael Maison 
> > wrote:
> >
> > > Bumping this thread.
> > >
> > > Let me know if you have any feedback.
> > >
> > > Thanks,
> > > Mickael
> > >
> > > On Wed, Oct 27, 2021 at 3:25 PM Luke Chen  wrote:
> > > >
> > > > Hi Mickael,
> > > > Thanks for the KIP.
> > > > It's good to keep it consistent with others, to have a top-level error
> > > > field.
> > > >
> > > > + 1 (non-binding)
> > > >
> > > > Thank you.
> > > > Luke
> > > >
> > > > On Wed, Oct 27, 2021 at 9:01 PM Mickael Maison wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > I'd like to start the vote on this minor KIP.
> > > > >
> > > > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-784%3A+Add+top-level+error+code+field+to+DescribeLogDirsResponse
> > > > >
> > > > > Please take a look, vote or let me know if you have any feedback.
> > > > >
> > > > > Thanks
> > > > >
> > >
> > >


[jira] [Created] (KAFKA-13551) kafka-log4j-appender-2.1.1.jar Is cve-2021-44228 involved?

2021-12-16 Thread xiansheng fu (Jira)
xiansheng fu created KAFKA-13551:


 Summary:  kafka-log4j-appender-2.1.1.jar Is cve-2021-44228 
involved? 
 Key: KAFKA-13551
 URL: https://issues.apache.org/jira/browse/KAFKA-13551
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Reporter: xiansheng fu






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


Re: [DISCUSS] KIP-591: Add Kafka Streams config to set default state store

2021-12-16 Thread Luke Chen
Hi Guozhang,

I've updated the KIP to use `enum`, instead of store implementation, and
some content accordingly.
Please let me know if there's other comments.


Thank you.
Luke
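To make the enum option concrete, here is a minimal, self-contained sketch of how an enum-valued default store config could be resolved, with RocksDB as the fallback for backward compatibility (the config name `default.dsl.store` and the constant names are assumptions for illustration, not the KIP's final API):

```java
// Sketch of the enum approach under discussion: users pick a built-in
// store type by name, and RocksDB remains the default when unset.
public class DefaultStoreConfigDemo {
    enum StoreType { ROCKS_DB, IN_MEMORY }

    static StoreType parseStoreType(String config) {
        // Default to RocksDB when unset, keeping backward compatibility.
        if (config == null || config.isEmpty()) {
            return StoreType.ROCKS_DB;
        }
        switch (config.toLowerCase()) {
            case "rocks_db":  return StoreType.ROCKS_DB;
            case "in_memory": return StoreType.IN_MEMORY;
            default:
                throw new IllegalArgumentException("Unknown store type: " + config);
        }
    }

    public static void main(String[] args) {
        System.out.println(parseStoreType(null));        // prints "ROCKS_DB"
        System.out.println(parseStoreType("in_memory")); // prints "IN_MEMORY"
    }
}
```

Note the trade-off discussed below: an enum like this can only name built-in types, which is exactly why it cannot cover custom store implementations.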

On Sun, Dec 12, 2021 at 3:55 PM Luke Chen  wrote:

> Hi Guozhang,
>
> Thanks for your comments.
> I agree that in the KIP, there's a trade-off regarding the API complexity.
> With the store impl, we can support default custom stores, but introduce
> more complexity for users, while with the enum types, users can configure
> default built-in store types easily, but it can't work for custom stores.
>
> For me, I'm OK to narrow down the scope and introduce the default built-in
> enum store types first.
> And if there's further request, we can consider a better way to support
> default store impl.
>
> I'll update the KIP next week, unless there are other opinions from other
> members.
>
> Thank you.
> Luke
>
> On Fri, Dec 10, 2021 at 6:33 AM Guozhang Wang  wrote:
>
>> Thanks Luke for the updated KIP.
>>
>> One major change in the new proposal is to replace the original enum store
>> type with a new interface. In the enum approach, our internal
>> implementations would be something like:
>>
>> "
>> Stores#keyValueBytesStoreSupplier(storeImplTypes, storeName, ...)
>> Stores#windowBytesStoreSupplier(storeImplTypes, storeName, ...)
>> Stores#sessionBytesStoreSupplier(storeImplTypes, storeName, ...)
>> "
>>
>> And inside the impl classes we could directly do:
>>
>> "
>> if ((supplier = materialized.storeSupplier) == null) {
>> supplier =
>> Stores.windowBytesStoreSupplier(materialized.storeImplType())
>> }
>> "
>>
>> While I understand the benefit of having an interface such that user
>> customized stores could be used as the default store types as well,
>> there's
>> a trade-off I feel regarding the API complexity. Since with this approach,
>> our API complexity granularity is in the order of "number of impl types" *
>> "number of store types". This means that whenever we add new store types
>> in
>> the future, this API needs to be augmented and customized impl needs to be
>> updated to support the new store types, in addition, not all custom impl
>> types may support all store types, but with this interface they are forced
>> to either support all or explicitly throw un-supported exceptions.
>>
>> The way I see a default impl type is that, they would be safe to use for
>> any store types, and since store types are evolved by the library itself,
>> the default impls would better be controlled by the library as well.
>> Custom
>> impl classes can choose to replace some of the stores explicitly, but may
>> not be a best fit as the default impl classes --- if there are in the
>> future, one way we can consider is to make Stores class extensible along
>> with the enum so that advanced users can add more default impls, assuming
>> such scenarios are not very common.
>>
>> So I'm personally still a bit leaning towards the enum approach with a
>> narrower scope, for its simplicity as an API and also its low maintenance
>> cost in the future. Let me know what you think.
>>
>>
>> Guozhang
>>
>>
>> On Wed, Dec 1, 2021 at 6:48 PM Luke Chen  wrote:
>>
>> > Hi devs,
>> >
>> > I'd like to propose a KIP to allow users to set default store
>> > implementation class (built-in RocksDB/InMemory, or custom one), and
>> > default to RocksDB state store, to keep backward compatibility.
>> >
>> > Detailed description can be found here:
>> >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-591%3A+Add+Kafka+Streams+config+to+set+default+state+store
>> >
>> > Any feedback and comments are welcome.
>> >
>> > Thank you.
>> > Luke
>> >
>>
>>
>> --
>> -- Guozhang
>>
>


Re: [DISCUSS] KIP-810: Allow producing records with null values in Kafka Console Producer

2021-12-16 Thread Luke Chen
Hi Mickael,

Thanks for the KIP!
This will be a helpful feature for debugging, for sure!

I have one question:
Will we have some safety net for a collision between `key.separator` and the
newly introduced `null.marker`?
That is, what if a user sets the same or overlapping `key.separator` and
`null.marker`, how would we handle it?
Ex: key.separator="-", null.marker="--".
Maybe it's a corner case, but I think it'd be better if we handled it gracefully.

Thank you.
Luke
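The collision Luke raises can be shown with a tiny sketch. The parsing logic below is illustrative, not the console producer's real code; only the property values `-` and `--` come from the example above:

```java
// Illustrative parser: split "key<sep>value" at the first separator and
// treat a value equal to the null marker as a tombstone (null value).
public class NullMarkerDemo {
    static String[] parse(String line, String sep, String nullMarker) {
        int i = line.indexOf(sep);
        String key = line.substring(0, i);
        String value = line.substring(i + sep.length());
        if (value.equals(nullMarker)) {
            value = null; // tombstone
        }
        return new String[] { key, value };
    }

    public static void main(String[] args) {
        // Unambiguous line: key "a", value "b".
        String[] r1 = parse("a-b", "-", "--");
        System.out.println(r1[0] + " / " + r1[1]); // prints "a / b"

        // Ambiguous line "a---": this parser yields key "a" and a null
        // value (the remainder "--" matches the marker), but the user may
        // have meant key "a-" with the literal value "-".
        String[] r2 = parse("a---", "-", "--");
        System.out.println(r2[0] + " / " + r2[1]); // prints "a / null"
    }
}
```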



On Wed, Dec 15, 2021 at 11:08 PM Chris Egerton 
wrote:

> Hi Mickael,
>
> Thanks for the KIP. Given how important tombstone records are it's hard to
> believe that the console producer doesn't already support them!
>
> I wanted to clarify the intended behavior and how it will play with the
> parse.key and the newly-introduced (as of KIP-798 [1]) parse.headers
> properties. Is the intention that the null.marker should match the entire
> line read by the console producer, or that it can match individual portions
> of a line that correspond to the record's key, value, header key, or header
> value? I imagine so but think it may be worth calling out (and possibly
> illustrating with an example or two) in the KIP.
>
> [1] -
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-798%3A+Add+possibility+to+write+kafka+headers+in+Kafka+Console+Producer
>
> Cheers,
>
> Chris
>
> On Wed, Dec 15, 2021 at 6:08 AM Mickael Maison 
> wrote:
>
> > Hi all,
> >
> > I opened a KIP to add the option to produce records with a null value
> > using the Console Producer:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-810%3A+Allow+producing+records+with+null+values+in+Kafka+Console+Producer
> >
> > Let me know if you have any feedback.
> >
> > Thanks,
> > Mickael
> >
>


Re: [VOTE] KIP-806: Add session and window query over KV-store in IQv2

2021-12-16 Thread Bruno Cadonna

Hi Patrick,

Thank you for the KIP!

+1 (binding)

Best,
Bruno

On 16.12.21 07:22, Luke Chen wrote:

Hi Patrick, John,

Thanks for your explanation and update.
It looks better now.

+1 (non-binding) from me.

just one minor comment:
10. In the example section, we are still using the variable names `lower`
and `upper` as before. We should update them, too. (follow the decision for
point 7 above)

Thank you.
Luke

On Thu, Dec 16, 2021 at 3:34 AM Guozhang Wang  wrote:


Thanks Patrick, John.

I read through the updated KIP and it's much clearer on the scope now,
appreciate that! I'm +1 overall, modulo a few minor comments:

7. The parameter names of `WindowKeyQuery#withKeyAndWindowStartRange`, i.e.
`startTime` and `endTime` seem not updated; also for the parameter names of
`WindowRangeQuery#withWindowStartRange`, the `earliest / latest` seems
inconsistent with the existing API parameter names, how about naming both
of them `fromWindowStartTime` and `toWindowStartTime`?

8.  `getStartTime / getEndTime` are not updated as well: should they be
`getEarliestWindowStartTime` / `getLatestWindowStartTime` or `getFrom...
getTo...` if you agree with 7) above?

9. "One reason we cannot use WindowKeyQuery to support
SessionStore.fetch(key)": not sure I understand this part, for the new APIs
we do not need to be following the old API's return types since they are
separated, right? So if we think it's actually better for the new API
mimicking `sessionStore.fetch(key)` to return a `WindowStoreIterator`,
why can't we just do that?


Guozhang

On Wed, Dec 15, 2021 at 8:49 AM John Roesler  wrote:


FYI, I filed this follow-on task to explore a more general
pattern for these queries:
https://issues.apache.org/jira/browse/KAFKA-13548

We can unblock the current scope for these queries but still
plan to revisit the API before the first release of IQv2.

Thanks!
-John

On Wed, 2021-12-15 at 10:34 -0600, John Roesler wrote:

Thanks for the update, Patrick!

Tl;dr: I'm +1 (binding)

I just reviewed the KIP again (I hope you don't mind, I
fixed a couple of missed renames in the text and examples).

One of the design goals of IQv2 is to make proposing and evolving
these queries much less onerous. Unlike adding new methods
to all StateStore interfaces and implementations, if we want
to make (eg) the WindowRangeQuery more flexible in the
future, we can easily do so by just adding some builder
methods to set the bounds independently, or by adding new
static methods to provide different parameterizations.
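The evolution pattern John describes — adding static factories or builder-style methods without breaking existing callers — can be sketched independently of Kafka (all names below are invented for illustration, not the IQv2 API):

```java
import java.time.Instant;
import java.util.Optional;

// Invented query type illustrating additive API evolution: new
// parameterizations arrive as new static factories, so code compiled
// against the v1 surface keeps working unchanged.
public class RangeQueryDemo {
    private final Optional<Instant> from;
    private final Optional<Instant> to;

    private RangeQueryDemo(Optional<Instant> from, Optional<Instant> to) {
        this.from = from;
        this.to = to;
    }

    // v1: the only factory shipped initially.
    static RangeQueryDemo withWindowStartRange(Instant from, Instant to) {
        return new RangeQueryDemo(Optional.of(from), Optional.of(to));
    }

    // v2: a new parameterization added later; existing callers are untouched.
    static RangeQueryDemo withNoBounds() {
        return new RangeQueryDemo(Optional.empty(), Optional.empty());
    }

    boolean isUnbounded() {
        return !from.isPresent() && !to.isPresent();
    }

    public static void main(String[] args) {
        System.out.println(withNoBounds().isUnbounded()); // prints "true"
        System.out.println(
            withWindowStartRange(Instant.EPOCH, Instant.now()).isUnbounded()); // prints "false"
    }
}
```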

Therefore, even though this is not the ultimate expression
of what we think the range queries should be, it's a
perfectly fine starting point. The most pressing concerns to
me were the cases where Luke and Guozhang pointed out that
some parts of the proposal or interfaces were ambiguous,
which looks like it's fixed now.

My preference would be to go ahead and accept this proposed
scope so that we have at least some basic key-value, window,
and session store queries in the code base. Until we have
those MVP queries in place, we can't really start to address
higher-level follow-up items like:
https://issues.apache.org/jira/browse/KAFKA-13541 (to refine
how the serdes interplay with the queries) or
https://issues.apache.org/jira/browse/KAFKA-13541 (to
consider alternative ways to pin down the generic type
parameters better).

Anyway, for the current version of the KIP, my vote is +1
(binding).

Thanks again!
-John

On Wed, 2021-12-15 at 12:53 +0100, Patrick Stuedi wrote:

Thanks everyone for sharing comments and suggestions, this is
very helpful!

I have updated the KIP based on the suggestions, please let me know
what you think. Here are the main changes:
- Removed the constructor in WindowRangeQuery (Guozhang)
- Clearly specified how each instantiation of the different query
types maps to specific operations in WindowStore and SessionStore
(Guozhang)
- Renamed the window start/end time parameters and getter methods
(Luke)
- Added some comments on the use of WindowRangeQuery.fromKey
(Guozhang, John)

The KIP currently takes a minimalist approach to move things forward.
There are a few follow-ups I can think of from this KIP:
* Add additional parameter combinations to WindowRangeQuery, e.g.,
support for key ranges with and without window ranges.
* Remove WindowRangeQuery.fromKey and either use
WindowKeyQuery.withKeyAndWindowBounds or add a new call like
WindowKeyQuery.withKeyAndNoWindowBounds(): either way, that would then
raise the question of which operation in WindowStore this query should
map to, given that WindowKeyQuery is templated against
WindowStoreIterator and the current use of WindowRangeQuery.fromKey is
to call SessionStore.fetch, which returns a KeyValueIterator.
* Combine WindowKeyQuery and WindowRangeQuery, e.g., by exposing the
template type
* Create a "Builder" interface for the query types to avoid blowing up
the static methods.

Since I think any of those tasks will require some more discussion,
and in

Re: [DISCUSS] Please review the 3.1.0 blog post

2021-12-16 Thread Tom Bentley
Hi David,

A couple of minor nits:

For KIP-783: "This field is set for any exception that originates from, or
tied to, a specific task." should be "This field is set for any exception
that originates from, or *is* tied to, a specific task."

For KIP-690: "...so MM2 should have flexibility to let you override the
name of internal topics to follow the one you create." could be worded a
little better, I think "... to *use* the one*s* you create."

Apart from that LGTM.

Thanks!

Tom

On Thu, Dec 16, 2021 at 8:42 AM David Jacot 
wrote:

> Hey folks,
>
> I have updated the blog post based on offline feedback that I have
> received. The
> changes were minor.
>
> Cheers,
> David
>
> On Wed, Dec 15, 2021 at 3:00 PM Igor Soarez  wrote:
> >
> > Hi David,
> >
> > Apart from the obviously identified TODO items I couldn't find any
> > issues. It looks great!
> >
> > --
> > Igor
> >
> > On Wed, Dec 15, 2021, at 8:37 AM, David Jacot wrote:
> > > Thanks, Mickael. I will fix this.
> > >
> > > Best,
> > > David
> > >
> > > On Mon, Dec 13, 2021 at 3:55 PM Mickael Maison <
> mickael.mai...@gmail.com> wrote:
> > >>
> > >> Hi David,
> > >>
> > >> It looks good, I just noticed a typo:
> > >> "KIP-733" should be "KIP-773"
> > >>
> > >> Thanks
> > >>
> > >> On Sun, Dec 12, 2021 at 4:05 PM David Jacot
>  wrote:
> > >> >
> > >> > Hi Luke,
> > >> >
> > >> > Thanks for your feedback. I have found and have fixed the issue. It
> was
> > >> > actually
> > >> > due to the formatting of the title of the AK 3.0 blog post.
> > >> >
> > >> > Best,
> > >> > David
> > >> >
> > >> > On Sat, Dec 11, 2021 at 9:44 AM Luke Chen 
> wrote:
> > >> >
> > >> > > Oh, sorry! I have a typo in your name!
> > >> > > Sorry, David! >.<
> > >> > >
> > >> > > Luke
> > >> > >
> > >> > > On Sat, Dec 11, 2021 at 4:42 PM Luke Chen 
> wrote:
> > >> > >
> > >> > >> Hi Davie,
> > >> > >>
> > >> > >> Thanks for drafting the release announcement post.
> > >> > >> I've checked the content, and looks good to me.
> > >> > >> But I think the header section: "What's New in Apache..." is not
> > >> > >> formatted properly.
> > >> > >> I checked the previous blog post, and it should be a hyperlink
> just like
> > >> > >> the "Main" kind of font.
> > >> > >>
> > >> > >> [image: image.png]
> > >> > >>
> > >> > >> Thank you.
> > >> > >> Luke
> > >> > >>
> > >> > >>
> > >> > >> On Sat, Dec 11, 2021 at 5:51 AM David Jacot
> 
> > >> > >> wrote:
> > >> > >>
> > >> > >>> I have put the wrong link in my previous email. Here is the
> public one:
> > >> > >>>
> > >> > >>>
> > >> > >>>
> https://blogs.apache.org/preview/kafka/?previewEntry=what-s-new-in-apache7
> > >> > >>>
> > >> > >>> Best,
> > >> > >>> David
> > >> > >>>
> > >> > >>> On Fri, Dec 10, 2021 at 10:35 PM David Jacot <
> dja...@confluent.io>
> > >> > >>> wrote:
> > >> > >>> >
> > >> > >>> > Hello all,
> > >> > >>> >
> > >> > >>> > I have prepared a draft of the release announcement post for
> the
> > >> > >>> > Apache Kafka 3.1.0 release:
> > >> > >>> >
> > >> > >>> >
> > >> > >>>
> https://blogs.apache.org/roller-ui/authoring/preview/kafka/?previewEntry=what-s-new-in-apache7
> > >> > >>> >
> > >> > >>> > I would greatly appreciate your reviews if you have a moment.
> > >> > >>> >
> > >> > >>> > Thanks,
> > >> > >>> > David
> > >> > >>>
> > >> > >>
>
>


[jira] [Resolved] (KAFKA-13488) Producer fails to recover if topic gets deleted (and gets auto-created)

2021-12-16 Thread Prateek Agarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prateek Agarwal resolved KAFKA-13488.
-
  Reviewer: David Jacot
Resolution: Fixed

Fixed in https://github.com/apache/kafka/pull/11552

> Producer fails to recover if topic gets deleted (and gets auto-created)
> ---
>
> Key: KAFKA-13488
> URL: https://issues.apache.org/jira/browse/KAFKA-13488
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 2.2.2, 2.3.1, 2.4.1, 2.5.1, 2.6.3, 2.7.2, 2.8.1
>Reporter: Prateek Agarwal
>Assignee: Prateek Agarwal
>Priority: Blocker
> Fix For: 3.1.0
>
> Attachments: KAFKA-13488.patch
>
>
> Producer currently fails to produce messages to a topic if the topic is 
> deleted and gets auto-created OR is created manually during the lifetime of 
> the producer (and certain other conditions are met - leaderEpochs of deleted 
> topic > 0).
>  
> To reproduce, these are the steps which can be carried out:
> 0) A cluster with 2 brokers 0 and 1 with auto.topic.create=true.
> 1) Create a topic T with 2 partitions P0-> (0,1), P1-> (0,1)
> 2) Reassign the partitions such that P0-> (1,0), P1-> (1,0).
> 3) Create a producer P and send a few messages which land on all the TPs of 
> topic T.
> 4) Delete the topic T.
> 5) Immediately, send a new message from producer P; this message will 
> fail to send and eventually time out.
> A test-case which fails with the above steps is added at the end as well as a 
> patch file.
>  
> This happens after leaderEpoch (KIP-320) was introduced in the 
> MetadataResponse KAFKA-7738. There is a solution attempted to fix this issue 
> in KAFKA-12257, but the solution has a bug due to which the above use-case 
> still fails.
>  
> *Issue in the solution of KAFKA-12257:*
> {code:java}
> // org.apache.kafka.clients.Metadata.handleMetadataResponse():
>...
>         Map<String, Uuid> topicIds = new HashMap<>();
>         Map<String, Uuid> oldTopicIds = cache.topicIds();
>         for (MetadataResponse.TopicMetadata metadata : 
> metadataResponse.topicMetadata()) {
>             String topicName = metadata.topic();
>             Uuid topicId = metadata.topicId();
>             topics.add(topicName);
>             // We can only reason about topic ID changes when both IDs are 
> valid, so keep oldId null unless the new metadata contains a topic ID
>             Uuid oldTopicId = null;
>             if (!Uuid.ZERO_UUID.equals(topicId)) {
>                 topicIds.put(topicName, topicId);
>                 oldTopicId = oldTopicIds.get(topicName);
>             } else {
>                  topicId = null;
>             }
> ...
> } {code}
> With every new call to {{{}handleMetadataResponse{}}}(), {{cache.topicIds()}} 
> gets created afresh. When a topic is deleted and created immediately soon 
> afterwards (because of auto.create being true), producer's call to 
> {{MetadataRequest}} for the deleted topic T will result in a 
> {{UNKNOWN_TOPIC_OR_PARTITION}} or {{LEADER_NOT_AVAILABLE}} error 
> {{MetadataResponse}}, depending on the point of topic re-creation at which 
> metadata is requested. In the case of errors, the TopicId returned in the 
> response is {{Uuid.ZERO_UUID}}. As seen in the above logic, if the topicId 
> received is ZERO, the method removes the earlier topicId entry from the cache.
> Now, when a non-Error Metadata Response does come back for the newly created 
> topic T, it will have a non-ZERO topicId now but the leaderEpoch for the 
> partitions will mostly be ZERO. This situation will lead to rejection of the 
> new MetadataResponse if the older LeaderEpoch was >0 (for more details, refer 
> to KAFKA-12257). Because of the rejection of the metadata, producer will 
> never get to know the new Leader of the TPs of the newly created topic.
>  
> *1. Solution / Fix (Preferred)*:
> The client's metadata should keep remembering the old topicId as long as:
> 1) the response for the TP has errors
> 2) the topicId entry was already present in the cache earlier
> 3) the retain time has not expired
> {code:java}
> --- a/clients/src/main/java/org/apache/kafka/clients/Metadata.java
> +++ b/clients/src/main/java/org/apache/kafka/clients/Metadata.java
> @@ -336,6 +336,10 @@ public class Metadata implements Closeable {
>  topicIds.put(topicName, topicId);
>  oldTopicId = oldTopicIds.get(topicName);
>  } else {
> +// Retain the old topicId for comparison with newer TopicId 
> created later. This is only needed till retainMs
> +if (metadata.error() != Errors.NONE && 
> oldTopicIds.get(topicName) != null && retainTopic(topicName, false, nowMs))
> +topicIds.put(topicName, oldTopicIds.get(topicName));
> +   

[jira] [Resolved] (KAFKA-13492) IQ Parity: queries for key/value store range and scan

2021-12-16 Thread John Roesler (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Roesler resolved KAFKA-13492.
--
Resolution: Fixed

> IQ Parity: queries for key/value store range and scan
> -
>
> Key: KAFKA-13492
> URL: https://issues.apache.org/jira/browse/KAFKA-13492
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: John Roesler
>Assignee: Vicky Papavasileiou
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


Re: [VOTE] KIP-792: Add "generation" field into consumer protocol

2021-12-16 Thread Colin McCabe
Thanks for the explanation, Luke. That makes sense.

best,
Colin
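Luke's compatibility argument quoted below — a field appended at the end of the payload is invisible to an old-format reader, which simply stops after the fields it knows — can be illustrated with a minimal ByteBuffer sketch. This is not Kafka's actual ConsumerProtocolSubscription serialization code, just the general principle:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// An old reader that only knows the original fields stops reading before
// the appended field, so the trailing bytes are simply ignored.
public class TrailingFieldDemo {
    // New writer: the original string field, then an appended int field.
    static ByteBuffer writeNew(String topic, int generation) {
        byte[] t = topic.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(2 + t.length + 4);
        buf.putShort((short) t.length);
        buf.put(t);
        buf.putInt(generation); // appended at the end of the payload
        buf.flip();
        return buf;
    }

    // Old reader: parses only the fields it knows about.
    static String readOld(ByteBuffer buf) {
        short len = buf.getShort();
        byte[] t = new byte[len];
        buf.get(t);
        return new String(t, StandardCharsets.UTF_8);
        // buf.remaining() bytes (the appended field) are left untouched.
    }

    public static void main(String[] args) {
        ByteBuffer buf = writeNew("topic-a", 7);
        System.out.println(readOld(buf));    // prints "topic-a"
        System.out.println(buf.remaining()); // prints "4" (ignored bytes)
    }
}
```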

On Thu, Dec 9, 2021, at 13:31, Guozhang Wang wrote:
> Thanks Luke, in that case I'm +1 on this KIP.
>
> On Thu, Dec 9, 2021 at 1:46 AM Luke Chen  wrote:
>
>> Hi Guozhang,
>>
>> Thanks for your comment.
>>
>> > we need to make sure the old-versioned leader would be able to ignore the
>> > new field during an upgrade e.g. without crashing.
>>
>> Yes, I understand. I'll be careful to make sure it won't crash the old
>> versioned leader.
>> But basically, it won't, because we appended the new field at the end of
>> the ConsumerProtocolSubscription, which means that, when deserializing the
>> Subscription metadata, the old-versioned leader will just read the head
>> part of the data.
>>
>> Thanks for the reminder!
>>
>> Luke
>>
>> On Thu, Dec 9, 2021 at 4:00 AM Guozhang Wang  wrote:
>>
>> > Hi Luke,
>> >
>> > Thanks for the KIP.
>> >
>> > One thing I'd like to double check is that, since the
>> > ConsumerProtocolSubscription is not auto generated from the json file, we
>> > need to make sure the old-versioned leader would be able to ignore the
>> new
>> > field during an upgrade e.g. without crashing. Other than that, the KIP
>> > lgtm.
>> >
>> > Guozhang
>> >
>> > On Tue, Dec 7, 2021 at 6:16 PM Luke Chen  wrote:
>> >
>> > > Hi Colin,
>> > >
>> > > I'm not quite sure if I understand your thoughts correctly.
>> > > If I was wrong, please let me know.
>> > >
>> > > Also, I'm not quite sure how I could lock this feature to a new IBP
>> > > version.
>> > > I saw "KIP-584: Versioning scheme for features" is still under
>> > development.
>> > > Not sure if I need to lock the IBP version; how should I do it?
>> > >
>> > > Thank you.
>> > > Luke
>> > >
>> > > On Tue, Dec 7, 2021 at 9:41 PM Luke Chen  wrote:
>> > >
>> > > > Hi Colin,
>> > > >
>> > > > Thanks for your comments. I've updated the KIP to mention that the KIP
>> > > > won't affect current broker-side behavior.
>> > > >
>> > > > > One scenario that we need to consider is what happens during a rolling
>> > > > > upgrade. If the coordinator moves back and forth between brokers with
>> > > > > different IBPs, it seems that the same epoch numbers could be reused
>> > > > > for a group, if things are done in the obvious manner (old IBP = don't
>> > > > > read or write epoch, new IBP = do).
>> > > >
>> > > > I think this KIP doesn't care about the group epoch number at all. The
>> > > > subscription metadata is passed from each member to the group
>> > > > coordinator, and then the group coordinator passes all of them back to
>> > > > the consumer lead. So even if the epoch number is reused in a group, it
>> > > > should be fine. On the other hand, the group coordinator will have no
>> > > > idea whether the join group request sent from a consumer contains the
>> > > > new subscription "generation" field or not, because the group
>> > > > coordinator won't deserialize the metadata.
>> > > >
>> > > > I've also added them to the KIP.
>> > > >
>> > > > Thank you.
>> > > > Luke
>> > > >
>> > > > On Mon, Dec 6, 2021 at 10:39 AM Colin McCabe 
>> > wrote:
>> > > >
>> > > >> Hi Luke,
>> > > >>
>> > > >> Thanks for the explanation.
>> > > >>
>> > > >> I don't see any description of how the broker decides to use the new
>> > > >> version of ConsumerProtocolSubscription or not. This probably needs
>> to
>> > > be
>> > > >> locked to a new IBP version.
>> > > >>
>> > > >> One scenario that we need to consider is what happens during a rolling
>> > > >> upgrade. If the coordinator moves back and forth between brokers with
>> > > >> different IBPs, it seems that the same epoch numbers could be reused
>> > > >> for a group, if things are done in the obvious manner (old IBP = don't
>> > > >> read or write epoch, new IBP = do).
>> > > >>
>> > > >> best,
>> > > >> Colin
>> > > >>
>> > > >>
>> > > >> On Fri, Dec 3, 2021, at 18:46, Luke Chen wrote:
>> > > >> > Hi Colin,
>> > > >> > Thanks for your comment.
>> > > >> >
>> > > >> >> How are we going to avoid the situation where the broker restarts,
>> > > >> >> and the same generation number is reused?
>> > > >> >
>> > > >> > Actually, this KIP doesn't have anything to do with the brokers. The
>> > > >> > "generation" field I added is in the subscription metadata, which
>> > > >> > will not be deserialized by brokers. The metadata is only
>> > > >> > deserialized by the consumer lead. And for the consumer lead, the
>> > > >> > only thing the lead cares about is the highest generation of the
>> > > >> > ownedPartitions among all the consumers. With the highest generation
>> > > >> > of the ownedPartitions, the consumer lead can distribute the
>> > > >> > partitions as sticky as possible, and most importantly, without
>> > > >> > errors.
>> > > >> >
>> > > >> > That is, after this KIP, if the broker restarts, and the same
>> > > >> > generation number is re

Re: [Proposal] It is hoped that Apache APISIX and Apache Kafka will carry out diversified community cooperation

2021-12-16 Thread Colin McCabe
Hi Yeliang Wang,

Have you considered submitting a talk to Kafka Summit discussing how Apache 
APISIX uses Kafka? That might be interesting.

I'm not sure what you mean by "fully integrating" the projects. Can you 
elaborate on what integration you see happening in the future?

best,
Colin

On Sun, Dec 12, 2021, at 23:13, yeliang wang wrote:
> Hi, community,
>
> My name is Yeliang Wang, and I am Apache APISIX Committer.
>
> Long ago, Apache APISIX implemented the kafka-logger plugin (
> https://apisix.apache.org/docs/apisix/plugins/kafka-logger/ ), which can be
> used as a Kafka client driver for the Lua Nginx module, and many users are
> using this function.
>
> Here, I would like to invite the Apache Kafka community to cooperate with
> the following communities:
>
>1. Hold a wonderful technical meetup together
>2. Collaborative output technology blog to share with more people
>3. Carry out publicity activities on the official website and media
>channels together
>4. *Jointly develop Kafka-upstream function support to fully integrate
>APISIX and Kafka*
>
>
> In addition, the Apache APISIX community has carried out community cooperation
> with many Apache projects (such as Apache RocketMQ, Apache Pulsar, Apache
> SkyWalking...), and accumulated rich experience in this process.
>
> I believe that doing so can not only meet the diversified needs of users,
> but also enrich the surrounding ecosystems of Apache Kafka and Apache APISIX.
>
> Wait for more discussion……
>
> Thanks,
> Github: wang-yeliang, Twitter: @WYeliang


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #583

2021-12-16 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.1 #34

2021-12-16 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 500891 lines...]
[2021-12-16T17:29:45.407Z] > Task :connect:json:testSrcJar
[2021-12-16T17:29:45.407Z] > Task :metadata:compileTestJava UP-TO-DATE
[2021-12-16T17:29:45.407Z] > Task :metadata:testClasses UP-TO-DATE
[2021-12-16T17:29:45.407Z] > Task :core:compileScala UP-TO-DATE
[2021-12-16T17:29:45.407Z] > Task :core:classes UP-TO-DATE
[2021-12-16T17:29:45.407Z] > Task :core:compileTestJava NO-SOURCE
[2021-12-16T17:29:45.407Z] > Task 
:clients:generateMetadataFileForMavenJavaPublication
[2021-12-16T17:29:45.407Z] > Task 
:clients:generatePomFileForMavenJavaPublication
[2021-12-16T17:29:45.407Z] 
[2021-12-16T17:29:45.407Z] > Task :streams:processMessages
[2021-12-16T17:29:45.407Z] Execution optimizations have been disabled for task 
':streams:processMessages' to ensure correctness due to the following reasons:
[2021-12-16T17:29:45.407Z]   - Gradle detected a problem with the following 
location: 
'/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.1/streams/src/generated/java/org/apache/kafka/streams/internals/generated'.
 Reason: Task ':streams:srcJar' uses this output of task 
':streams:processMessages' without declaring an explicit or implicit 
dependency. This can lead to incorrect results being produced, depending on 
what order the tasks are executed. Please refer to 
https://docs.gradle.org/7.2/userguide/validation_problems.html#implicit_dependency
 for more details about this problem.
[2021-12-16T17:29:45.407Z] MessageGenerator: processed 1 Kafka message JSON 
files(s).
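The warning above also points at the fix: declare the missing task dependency explicitly. In a Gradle build script that might look like the following sketch (Groovy DSL; the task names come from the log, but how Kafka's build actually wires this is an assumption):

```groovy
// Sketch: make :streams:srcJar explicitly depend on :streams:processMessages,
// so srcJar always sees the generated sources and Gradle no longer has to
// disable execution optimizations for correctness.
project(':streams') {
    tasks.named('srcJar') {
        dependsOn tasks.named('processMessages')
    }
}
```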
[2021-12-16T17:29:45.407Z] 
[2021-12-16T17:29:45.407Z] > Task :core:compileTestScala UP-TO-DATE
[2021-12-16T17:29:45.407Z] > Task :core:testClasses UP-TO-DATE
[2021-12-16T17:29:45.407Z] > Task :streams:compileJava UP-TO-DATE
[2021-12-16T17:29:45.407Z] > Task :streams:classes UP-TO-DATE
[2021-12-16T17:29:45.407Z] > Task :streams:copyDependantLibs UP-TO-DATE
[2021-12-16T17:29:46.673Z] > Task :streams:test-utils:compileJava UP-TO-DATE
[2021-12-16T17:29:46.673Z] > Task :streams:jar UP-TO-DATE
[2021-12-16T17:29:46.673Z] > Task 
:streams:generateMetadataFileForMavenJavaPublication
[2021-12-16T17:29:49.701Z] > Task :connect:api:javadoc
[2021-12-16T17:29:49.701Z] > Task :connect:api:copyDependantLibs UP-TO-DATE
[2021-12-16T17:29:49.701Z] > Task :connect:api:jar UP-TO-DATE
[2021-12-16T17:29:49.701Z] > Task 
:connect:api:generateMetadataFileForMavenJavaPublication
[2021-12-16T17:29:49.701Z] > Task :connect:json:copyDependantLibs UP-TO-DATE
[2021-12-16T17:29:49.701Z] > Task :connect:json:jar UP-TO-DATE
[2021-12-16T17:29:49.701Z] > Task 
:connect:json:generateMetadataFileForMavenJavaPublication
[2021-12-16T17:29:49.701Z] > Task :connect:api:javadocJar
[2021-12-16T17:29:49.701Z] > Task :connect:api:compileTestJava UP-TO-DATE
[2021-12-16T17:29:49.701Z] > Task :connect:api:testClasses UP-TO-DATE
[2021-12-16T17:29:49.701Z] > Task 
:connect:json:publishMavenJavaPublicationToMavenLocal
[2021-12-16T17:29:49.701Z] > Task :connect:json:publishToMavenLocal
[2021-12-16T17:29:49.701Z] > Task :connect:api:testJar
[2021-12-16T17:29:49.701Z] > Task :connect:api:testSrcJar
[2021-12-16T17:29:50.740Z] > Task 
:connect:api:publishMavenJavaPublicationToMavenLocal
[2021-12-16T17:29:50.740Z] > Task :connect:api:publishToMavenLocal
[2021-12-16T17:29:53.879Z] > Task :streams:javadoc
[2021-12-16T17:29:54.918Z] > Task :streams:javadocJar
[2021-12-16T17:29:54.918Z] > Task :streams:compileTestJava UP-TO-DATE
[2021-12-16T17:29:54.918Z] > Task :streams:testClasses UP-TO-DATE
[2021-12-16T17:29:55.957Z] > Task :streams:testJar
[2021-12-16T17:29:55.957Z] > Task :streams:testSrcJar
[2021-12-16T17:29:55.957Z] > Task 
:streams:publishMavenJavaPublicationToMavenLocal
[2021-12-16T17:29:55.957Z] > Task :streams:publishToMavenLocal
[2021-12-16T17:29:57.170Z] > Task :clients:javadoc
[2021-12-16T17:29:58.311Z] > Task :clients:javadocJar
[2021-12-16T17:29:59.256Z] 
[2021-12-16T17:29:59.256Z] > Task :clients:srcJar
[2021-12-16T17:29:59.256Z] Execution optimizations have been disabled for task 
':clients:srcJar' to ensure correctness due to the following reasons:
[2021-12-16T17:29:59.256Z]   - Gradle detected a problem with the following 
location: 
'/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.1/clients/src/generated/java'.
 Reason: Task ':clients:srcJar' uses this output of task 
':clients:processMessages' without declaring an explicit or implicit 
dependency. This can lead to incorrect results being produced, depending on 
what order the tasks are executed. Please refer to 
https://docs.gradle.org/7.2/userguide/validation_problems.html#implicit_dependency
 for more details about this problem.
[2021-12-16T17:30:00.296Z] 
[2021-12-16T17:30:00.296Z] > Task :clients:testJar
[2021-12-16T17:30:01.335Z] > Task :clients:testSrcJar
[2021-12-16T17:30:01.335Z] > Task 
:clients:publishMavenJavaPublicationToMavenLocal
[2021-12-16T17:30:01.335

Re: [VOTE] KIP-778 KRaft upgrades

2021-12-16 Thread Colin McCabe
Thanks for the KIP, David! Great work.

+1 (binding).

Should the "./kafka-features.sh downgrade" command also have a --release flag, 
to match upgrade?

Also, it seems like upgrade should have a --latest flag that upgrades 
everything to the latest installed version?
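Concretely, the command shapes under discussion might look like this (hypothetical syntax: --release on downgrade and --latest are the suggestions above, and the rest of the invocation shape is assumed from the KIP):

```
# Existing shape from the KIP: upgrade features to the levels of a release.
./bin/kafka-features.sh upgrade --release 3.1

# Suggested symmetry: let downgrade take --release as well.
./bin/kafka-features.sh downgrade --release 3.0

# Suggested convenience: upgrade every feature to the latest installed level.
./bin/kafka-features.sh upgrade --latest
```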

best,
Colin


On Fri, Dec 10, 2021, at 12:49, David Arthur wrote:
> Hey everyone, I'd like to start a vote for KIP-778 which adds support for
> KRaft to KRaft upgrades.
>
> Notably in this KIP is the first use case of KIP-584 feature flags. As
> such, there are some addendums to KIP-584 included.
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-778%3A+KRaft+Upgrades
>
> Thanks!
> David


Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.0 #162

2021-12-16 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » Kafka Branch Builder » 2.8 #92

2021-12-16 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 271437 lines...]
[2021-12-16T17:39:14.500Z] 
[2021-12-16T17:39:14.500Z] SimpleAclAuthorizerTest > 
testAddAclsOnWildcardResource() PASSED
[2021-12-16T17:39:14.500Z] 
[2021-12-16T17:39:14.500Z] SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventWhenInterBrokerProtocolAtLeastKafkaV2() STARTED
[2021-12-16T17:39:15.531Z] 
[2021-12-16T17:39:15.531Z] SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventWhenInterBrokerProtocolAtLeastKafkaV2() PASSED
[2021-12-16T17:39:15.531Z] 
[2021-12-16T17:39:15.531Z] SimpleAclAuthorizerTest > testAclManagementAPIs() 
STARTED
[2021-12-16T17:39:15.531Z] 
[2021-12-16T17:39:15.531Z] SimpleAclAuthorizerTest > testAclManagementAPIs() 
PASSED
[2021-12-16T17:39:15.531Z] 
[2021-12-16T17:39:15.531Z] SimpleAclAuthorizerTest > testWildCardAcls() STARTED
[2021-12-16T17:39:16.560Z] 
[2021-12-16T17:39:16.560Z] SimpleAclAuthorizerTest > testWildCardAcls() PASSED
[2021-12-16T17:39:16.560Z] 
[2021-12-16T17:39:16.560Z] SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2() STARTED
[2021-12-16T17:39:16.560Z] 
[2021-12-16T17:39:16.560Z] SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2() PASSED
[2021-12-16T17:39:16.560Z] 
[2021-12-16T17:39:16.560Z] SimpleAclAuthorizerTest > testTopicAcl() STARTED
[2021-12-16T17:39:17.590Z] 
[2021-12-16T17:39:17.590Z] SimpleAclAuthorizerTest > testTopicAcl() PASSED
[2021-12-16T17:39:17.590Z] 
[2021-12-16T17:39:17.590Z] SimpleAclAuthorizerTest > testSuperUserHasAccess() 
STARTED
[2021-12-16T17:39:17.590Z] 
[2021-12-16T17:39:17.590Z] SimpleAclAuthorizerTest > testSuperUserHasAccess() 
PASSED
[2021-12-16T17:39:17.590Z] 
[2021-12-16T17:39:17.590Z] SimpleAclAuthorizerTest > 
testDeleteAclOnPrefixedResource() STARTED
[2021-12-16T17:39:18.621Z] 
[2021-12-16T17:39:18.621Z] SimpleAclAuthorizerTest > 
testDeleteAclOnPrefixedResource() PASSED
[2021-12-16T17:39:18.621Z] 
[2021-12-16T17:39:18.621Z] SimpleAclAuthorizerTest > testDenyTakesPrecedence() 
STARTED
[2021-12-16T17:39:18.621Z] 
[2021-12-16T17:39:18.621Z] SimpleAclAuthorizerTest > testDenyTakesPrecedence() 
PASSED
[2021-12-16T17:39:18.621Z] 
[2021-12-16T17:39:18.621Z] SimpleAclAuthorizerTest > 
testSingleCharacterResourceAcls() STARTED
[2021-12-16T17:39:19.651Z] 
[2021-12-16T17:39:19.651Z] SimpleAclAuthorizerTest > 
testSingleCharacterResourceAcls() PASSED
[2021-12-16T17:39:19.651Z] 
[2021-12-16T17:39:19.651Z] SimpleAclAuthorizerTest > testNoAclFoundOverride() 
STARTED
[2021-12-16T17:39:19.651Z] 
[2021-12-16T17:39:19.651Z] SimpleAclAuthorizerTest > testNoAclFoundOverride() 
PASSED
[2021-12-16T17:39:19.651Z] 
[2021-12-16T17:39:19.651Z] SimpleAclAuthorizerTest > 
testEmptyAclThrowsException() STARTED
[2021-12-16T17:39:20.682Z] 
[2021-12-16T17:39:20.682Z] SimpleAclAuthorizerTest > 
testEmptyAclThrowsException() PASSED
[2021-12-16T17:39:20.682Z] 
[2021-12-16T17:39:20.682Z] SimpleAclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess() STARTED
[2021-12-16T17:39:20.682Z] 
[2021-12-16T17:39:20.682Z] SimpleAclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess() PASSED
[2021-12-16T17:39:20.682Z] 
[2021-12-16T17:39:20.682Z] SimpleAclAuthorizerTest > 
testAllowAccessWithCustomPrincipal() STARTED
[2021-12-16T17:39:21.713Z] 
[2021-12-16T17:39:21.713Z] SimpleAclAuthorizerTest > 
testAllowAccessWithCustomPrincipal() PASSED
[2021-12-16T17:39:21.713Z] 
[2021-12-16T17:39:21.713Z] SimpleAclAuthorizerTest > 
testDeleteAclOnWildcardResource() STARTED
[2021-12-16T17:39:21.713Z] 
[2021-12-16T17:39:21.713Z] SimpleAclAuthorizerTest > 
testDeleteAclOnWildcardResource() PASSED
[2021-12-16T17:39:21.713Z] 
[2021-12-16T17:39:21.713Z] SimpleAclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions()
 STARTED
[2021-12-16T17:39:22.744Z] 
[2021-12-16T17:39:22.744Z] SimpleAclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions()
 PASSED
[2021-12-16T17:39:22.744Z] 
[2021-12-16T17:39:22.744Z] SimpleAclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow() STARTED
[2021-12-16T17:39:22.744Z] 
[2021-12-16T17:39:22.744Z] SimpleAclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow() PASSED
[2021-12-16T17:39:22.744Z] 
[2021-12-16T17:39:22.744Z] SimpleAclAuthorizerTest > 
testAccessAllowedIfAllowAclExistsOnPrefixedResource() STARTED
[2021-12-16T17:39:23.775Z] 
[2021-12-16T17:39:23.775Z] SimpleAclAuthorizerTest > 
testAccessAllowedIfAllowAclExistsOnPrefixedResource() PASSED
[2021-12-16T17:39:23.775Z] 
[2021-12-16T17:39:23.775Z] SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls() STARTED
[2021-12-16T17:39:24.805Z] 
[2021-12-16T17:39:24.805Z] SimpleAclAuthorizerTest 

[jira] [Resolved] (KAFKA-13551) kafka-log4j-appender-2.1.1.jar Is cve-2021-44228 involved?

2021-12-16 Thread Jun Rao (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-13551.
-
Resolution: Information Provided

The impact is described in [https://kafka.apache.org/cve-list]

>  kafka-log4j-appender-2.1.1.jar Is cve-2021-44228 involved? 
> 
>
> Key: KAFKA-13551
> URL: https://issues.apache.org/jira/browse/KAFKA-13551
> Project: Kafka
>  Issue Type: Task
>  Components: consumer
>Reporter: xiansheng fu
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


Re: [VOTE] KIP-778 KRaft upgrades

2021-12-16 Thread Jun Rao
Hi, David,

Thanks for the KIP. It seems that we removed MinVersionLevel from ZK
and FeatureLevelRecord in the latest change. Should we
remove MinVersionLevel from ApiVersionsResponse too? Should we bump up the
version in FeatureLevelRecord and ApiVersionsRequest?

Jun

On Thu, Dec 16, 2021 at 10:14 AM Colin McCabe  wrote:

> Thanks for the KIP, David! Great work.
>
> +1 (binding).
>
> Should the "./kafka-features.sh downgrade" command also have a --release
> flag, to match upgrade?
>
> Also, it seems like upgrade should have a --latest flag that upgrades
> everything to the latest installed version?
>
> best,
> Colin
>
>
> On Fri, Dec 10, 2021, at 12:49, David Arthur wrote:
> > Hey everyone, I'd like to start a vote for KIP-778 which adds support for
> > KRaft to KRaft upgrades.
> >
> > Notably in this KIP is the first use case of KIP-584 feature flags. As
> > such, there are some addendums to KIP-584 included.
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-778%3A+KRaft+Upgrades
> >
> > Thanks!
> > David
>


Re: Confluent Schema Registry Compatibility config

2021-12-16 Thread Mayuresh Gharat
Hi Folks,

I was reading docs on Confluent Schema Registry about Compatibility :
https://docs.confluent.io/platform/current/schema-registry/avro.html#compatibility-types

I was confused by "BACKWARDS" vs "BACKWARDS_TRANSITIVE".

Suppose we have three schemas X, X-1, and X-2, and the schema registry is
configured with compatibility = "BACKWARDS". When schema X-1 was registered, it
must have been compared against X-2; when schema X is registered, it is
compared against X-1. So, by transitivity, schema X should also be compatible
with X-2.

So I am wondering what is the difference between "BACKWARDS" vs
"BACKWARDS_TRANSITIVE"? Any example would be really helpful.

--
-Regards,
Mayuresh R. Gharat
(862) 250-7125


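To make the distinction concrete: backward compatibility is not transitive, so checking only adjacent versions (BACKWARDS) and checking against all earlier versions (BACKWARDS_TRANSITIVE) can disagree. Here is a minimal sketch with a toy compatibility rule (an illustration only, not the actual Schema Registry implementation):

```python
# Toy Avro-style schema: field name -> (type, default); default None = required.
# "Backward compatible" here means: a reader using the new schema can still
# read data written with the old schema.
def backward_compatible(reader, writer):
    for name, (ftype, default) in reader.items():
        if name in writer:
            if writer[name][0] != ftype:   # field type changed -> incompatible
                return False
        elif default is None:              # new required field -> incompatible
            return False
    return True

v1 = {"f": ("int", None)}
v2 = {}                       # "f" removed: a v2 reader ignores it in v1 data
v3 = {"f": ("string", "")}    # "f" re-added with a different type and a default

assert backward_compatible(v2, v1)      # check run when v2 is registered
assert backward_compatible(v3, v2)      # the only check BACKWARDS runs for v3
assert not backward_compatible(v3, v1)  # extra BACKWARDS_TRANSITIVE check fails
```

Under BACKWARDS both adjacent checks pass and all three versions are accepted; under BACKWARDS_TRANSITIVE the v3-vs-v1 check also runs and registering v3 is rejected. This is why the transitivity argument does not hold in general.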


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #584

2021-12-16 Thread Apache Jenkins Server
See 




Jenkins build is unstable: Kafka » Kafka Branch Builder » 2.8 #93

2021-12-16 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-13552) Unable to dynamically change broker log levels on KRaft

2021-12-16 Thread Ron Dagostino (Jira)
Ron Dagostino created KAFKA-13552:
-

 Summary: Unable to dynamically change broker log levels on KRaft
 Key: KAFKA-13552
 URL: https://issues.apache.org/jira/browse/KAFKA-13552
 Project: Kafka
  Issue Type: Bug
  Components: kraft
Affects Versions: 3.0.0, 3.1.0
Reporter: Ron Dagostino


It is currently not possible to dynamically change the log level in KRaft.  For 
example:

kafka-configs.sh --bootstrap-server  --alter --add-config 
"kafka.server.ReplicaManager=DEBUG" --entity-type broker-loggers --entity-name 0

Results in:

org.apache.kafka.common.errors.InvalidRequestException: Unexpected resource 
type BROKER_LOGGER.

The code to process this request is in ZkAdminManager.alterLogLevelConfigs().  
This needs to be moved out of there, and the functionality has to be processed 
locally on the broker instead of being forwarded to the KRaft controller.

It is also an open question as to how we can dynamically alter log levels for a 
remote KRaft controller.  Connecting directly to it is one possible solution, 
but that may not be desirable since generally connecting directly to the 
controller is not necessary.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


Question about Kafka concurrency (并发问题请教)

2021-12-16 Thread 酒虫
Hello,
Could you share some performance-testing materials or other references on the
concurrency of Kafka clusters, for example the maximum concurrency a single
machine can handle?

Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #585

2021-12-16 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 213802 lines...]
[2021-12-17T05:35:46.059Z] Execution optimizations have been disabled for task 
':streams:processMessages' to ensure correctness due to the following reasons:
[2021-12-17T05:35:46.059Z]   - Gradle detected a problem with the following 
location: 
'/home/jenkins/workspace/Kafka_kafka_trunk/streams/src/generated/java/org/apache/kafka/streams/internals/generated'.
 Reason: Task ':streams:srcJar' uses this output of task 
':streams:processMessages' without declaring an explicit or implicit 
dependency. This can lead to incorrect results being produced, depending on 
what order the tasks are executed. Please refer to 
https://docs.gradle.org/7.2/userguide/validation_problems.html#implicit_dependency
 for more details about this problem.
[2021-12-17T05:35:46.059Z] MessageGenerator: processed 1 Kafka message JSON 
files(s).
[2021-12-17T05:35:46.059Z] 
[2021-12-17T05:35:46.059Z] > Task 
:clients:generatePomFileForMavenJavaPublication
[2021-12-17T05:35:46.059Z] > Task :core:compileScala UP-TO-DATE
[2021-12-17T05:35:46.059Z] > Task :core:classes UP-TO-DATE
[2021-12-17T05:35:46.059Z] > Task :core:compileTestJava NO-SOURCE
[2021-12-17T05:35:46.059Z] > Task :streams:compileJava UP-TO-DATE
[2021-12-17T05:35:46.059Z] > Task :streams:classes UP-TO-DATE
[2021-12-17T05:35:46.059Z] > Task :streams:copyDependantLibs UP-TO-DATE
[2021-12-17T05:35:46.059Z] > Task :streams:test-utils:compileJava UP-TO-DATE
[2021-12-17T05:35:46.059Z] > Task :streams:jar UP-TO-DATE
[2021-12-17T05:35:46.059Z] > Task 
:streams:generateMetadataFileForMavenJavaPublication
[2021-12-17T05:35:46.059Z] > Task :core:compileTestScala UP-TO-DATE
[2021-12-17T05:35:46.059Z] > Task :core:testClasses UP-TO-DATE
[2021-12-17T05:35:49.669Z] > Task :connect:api:javadoc
[2021-12-17T05:35:49.669Z] > Task :connect:api:copyDependantLibs UP-TO-DATE
[2021-12-17T05:35:49.669Z] > Task :connect:api:jar UP-TO-DATE
[2021-12-17T05:35:49.669Z] > Task 
:connect:api:generateMetadataFileForMavenJavaPublication
[2021-12-17T05:35:49.669Z] > Task :connect:json:copyDependantLibs UP-TO-DATE
[2021-12-17T05:35:49.669Z] > Task :connect:json:jar UP-TO-DATE
[2021-12-17T05:35:49.669Z] > Task 
:connect:json:generateMetadataFileForMavenJavaPublication
[2021-12-17T05:35:49.669Z] > Task 
:connect:json:publishMavenJavaPublicationToMavenLocal
[2021-12-17T05:35:49.669Z] > Task :connect:json:publishToMavenLocal
[2021-12-17T05:35:49.669Z] > Task :connect:api:javadocJar
[2021-12-17T05:35:49.669Z] > Task :connect:api:compileTestJava UP-TO-DATE
[2021-12-17T05:35:49.669Z] > Task :connect:api:testClasses UP-TO-DATE
[2021-12-17T05:35:49.669Z] > Task :connect:api:testJar
[2021-12-17T05:35:49.669Z] > Task :connect:api:testSrcJar
[2021-12-17T05:35:49.669Z] > Task 
:connect:api:publishMavenJavaPublicationToMavenLocal
[2021-12-17T05:35:49.669Z] > Task :connect:api:publishToMavenLocal
[2021-12-17T05:35:52.325Z] 
[2021-12-17T05:35:52.325Z] > Task :streams:javadoc
[2021-12-17T05:35:52.325Z] 
/home/jenkins/workspace/Kafka_kafka_trunk/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:124:
 warning - Tag @link: reference not found: this#getResult()
[2021-12-17T05:35:52.325Z] 
/home/jenkins/workspace/Kafka_kafka_trunk/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:133:
 warning - Tag @link: reference not found: this#getFailureReason()
[2021-12-17T05:35:52.325Z] 
/home/jenkins/workspace/Kafka_kafka_trunk/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:133:
 warning - Tag @link: reference not found: this#getFailureMessage()
[2021-12-17T05:35:52.325Z] 
/home/jenkins/workspace/Kafka_kafka_trunk/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:191:
 warning - Tag @link: reference not found: this#isSuccess()
[2021-12-17T05:35:52.325Z] 
/home/jenkins/workspace/Kafka_kafka_trunk/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:191:
 warning - Tag @link: reference not found: this#isFailure()
[2021-12-17T05:35:52.325Z] 5 warnings
[2021-12-17T05:35:52.325Z] 
[2021-12-17T05:35:52.325Z] > Task :streams:javadocJar
[2021-12-17T05:35:52.325Z] > Task :streams:compileTestJava UP-TO-DATE
[2021-12-17T05:35:52.325Z] > Task :streams:testClasses UP-TO-DATE
[2021-12-17T05:35:53.271Z] > Task :streams:testJar
[2021-12-17T05:35:53.271Z] > Task :streams:testSrcJar
[2021-12-17T05:35:53.271Z] > Task 
:streams:publishMavenJavaPublicationToMavenLocal
[2021-12-17T05:35:53.271Z] > Task :streams:publishToMavenLocal
[2021-12-17T05:35:54.217Z] > Task :clients:javadoc
[2021-12-17T05:35:55.162Z] > Task :clients:javadocJar
[2021-12-17T05:35:55.162Z] 
[2021-12-17T05:35:55.162Z] > Task :clients:srcJar
[2021-12-17T05:35:55.162Z] Execution optimizations have been disabled for task 
':clients:srcJar' to ensure correctness due to the following reasons:
[2021-12-17T05:35:55.162Z]   - Gradle 

Re: Question about Kafka concurrency (并发问题请教)

2021-12-16 Thread Guozhang Wang
Hello,

One blog post I can think of would be this:
https://www.confluent.io/blog/kafka-fastest-messaging-system/

Here's one Chinese translation version that I found:
https://www.sohu.com/a/417379110_355140

Hope it helps,
Guozhang

On Thu, Dec 16, 2021 at 9:27 PM 酒虫  wrote:

> Hello,
>  Could you share some performance-testing materials or other references on
> the concurrency of Kafka clusters, for example the maximum concurrency a
> single machine can handle?



-- 
-- Guozhang


Re: Confluent Schema Registry Compatibility config

2021-12-16 Thread Bruno Cadonna

Hi Mayuresh,

since this is a Confluent-specific question and does not relate to 
Apache Kafka, I think it is better to ask your question in Confluent's 
community forum: https://forum.confluent.io/


There is even a Schema Registry category.

Best,
Bruno

On 16.12.21 21:14, Mayuresh Gharat wrote:

Hi Folks,

I was reading docs on Confluent Schema Registry about Compatibility :
https://docs.confluent.io/platform/current/schema-registry/avro.html#compatibility-types

I was confused by "BACKWARDS" vs "BACKWARDS_TRANSITIVE".

Suppose we have three schemas X, X-1, and X-2, and the schema registry is
configured with compatibility = "BACKWARDS". When schema X-1 was registered, it
must have been compared against X-2; when schema X is registered, it is
compared against X-1. So, by transitivity, schema X should also be compatible
with X-2.

So I am wondering what is the difference between "BACKWARDS" vs
"BACKWARDS_TRANSITIVE"? Any example would be really helpful.

--
-Regards,
Mayuresh R. Gharat
(862) 250-7125