Re: [DISCUSS] KIP-811 Add separate delete.interval.ms to Kafka Streams

2021-12-22 Thread Bruno Cadonna

Hi Nick,

Thanks for the updates!

The motivation section and the description of the new config are much 
clearer and more informative now!


You missed one "delete.interval.ms" in the last paragraph in section 
"Proposed Changes".


I am afraid I again need to comment on point 7. IMO, it does not make 
sense to be able to tune repartition.purge.interval.ms and 
commit.interval.ms separately when the purge can only happen during a 
commit. For example, if I set commit.interval.ms to 30000 ms and 
repartition.purge.interval.ms to 35000 ms, the records will be purged at 
every second commit, i.e., every 60000 ms. What benefit do users have to 
set repartition.purge.interval.ms separately from commit.interval.ms? 
The rate of purging will never be 1 / 35000; the rate will be 
1 / (2 * commit.interval.ms).
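Since purges can only fire at commit time, the effective purge interval rounds up to the next multiple of commit.interval.ms. This can be sketched as a standalone calculation (illustrative only, not Streams internals):

```java
public class PurgeIntervalDemo {

    /**
     * Purges only happen during commits, so the effective purge interval
     * is the configured purge interval rounded up to the next multiple
     * of the commit interval.
     */
    static long effectivePurgeIntervalMs(long commitIntervalMs, long purgeIntervalMs) {
        long commitsPerPurge = (purgeIntervalMs + commitIntervalMs - 1) / commitIntervalMs;
        return Math.max(1, commitsPerPurge) * commitIntervalMs;
    }

    public static void main(String[] args) {
        // commit every 30000 ms, purge interval 35000 ms -> purge every 60000 ms
        System.out.println(effectivePurgeIntervalMs(30_000, 35_000));
    }
}
```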


Additionally, I have a new point.
8. If user code has access to the processor context (e.g. in the 
processor API), a commit can also be requested on demand by user code. 
The KIP should clarify if purges might also happen during requested 
commits or if purges only happen during automatic commits.


Best,
Bruno

On 21.12.21 20:40, Nick Telford wrote:

Hi everyone,

Thanks for your feedback. I've made the suggested changes to the KIP (1, 2,
3 and 5).

For the new name, I've chosen repartition.purge.interval.ms, as I felt it
struck a good balance between being self-descriptive and concise. Please
let me know if you'd prefer something else.

On point 6: My PR has basic validation to ensure the value is positive, but
I don't think it's necessary to have dynamic validation to ensure it's not
less than commit.interval.ms. The reason is that it will be implicitly
limited to that value anyway, and won't break anything. But I can add it if
you'd prefer it.

On point 7: I worry that defaulting it to follow the value of
commit.interval.ms may confuse users, who will likely expect the default to
not be affected by changes to other configuration options. I can see the
appeal of retaining the existing behaviour (following the commit interval)
by default, but I believe that the majority of users who customize
commit.interval.ms do not desire a different frequency of repartition
record purging as well.

As for multiples of commit interval: I think the argument against that is
that an interval is more intuitive when given as a time, rather than as a
multiple of some other operation. Users configuring this should not need to
break out a calculator to work out how frequently the records are going to
be purged!

I've also updated the PR with the relevant changes.

BTW, for some reason I didn't receive Sophie's email. I'll periodically
check the thread on the archive to ensure I don't miss any more of your
messages!

Regards,

Nick

On Tue, 21 Dec 2021 at 12:34, Luke Chen  wrote:


Thanks, Bruno.

I agree my point 4 is more like a PR discussion, not a KIP discussion.
@Nick , please update my point 4 in PR directly.

Thank you.
Luke




On Tue, Dec 21, 2021 at 7:24 PM Bruno Cadonna  wrote:


Hi Nick,

Thank you for the KIP!

I agree on points 1, 2, and 3. I am not sure about point 4. I agree that
we should update the docs for commit.interval.ms but I am not sure if
this needs to be mentioned in the KIP. That seems to me more of a PR
discussion. Also on point 2, I agree that we need to add a doc string
but the content should be exemplary, not binding. What I want to say is
that we do not need a KIP to change docs.

Here are my points:

5. Could you specify in the motivation that the KIP is about deleting
records from repartition topics? Maybe with a short description of why
and when records are deleted from the repartition topics. For us it
might be clear, but IMO we should try to write KIPs so that someone who
is relatively new to Kafka Streams can understand the KIP without
needing to know too much background.

6. Does the config need to be validated? For example, does
delete.interval.ms need to be greater than or equal to
commit.interval.ms?


7. Should the default value for non-EOS be 30s or the same value as
commit.interval.ms? I am just thinking about the case where a user
explicitly changes commit.interval.ms but not delete.interval.ms (or
whatever name you come up for it). Once delete.interval.ms is set
explicitly it is decoupled from commit.interval.ms. Similar could be
done for the EOS case.
Alternatively, we could also define delete.interval.ms to take an
integral number without a unit that specifies after how many commit
intervals the records in repartition topics should be deleted. This
would make sense since delete.interval.ms is tightly bound to
commit.interval.ms. Additionally, it would make the semantics of the
config simpler. The name of the config should definitely change if we go
down this way.

Best,
Bruno



On 21.12.21 11:14, Luke Chen wrote:

Hi Nick,

Thanks for the KIP!

In addition to Sophie's comments, I have one more to this KIP:
3. I think you should mention the behavior change *explicitly* in
"Compatibility" secti

Re: [DISCUSS] KIP-802: Validation Support for Kafka Connect SMT Options

2021-12-22 Thread Tom Bentley
Hi Gunnar,

Thanks for the KIP, especially the careful reasoning about compatibility. I
think this would be a useful improvement. I have a few observations, which
are all about how we effectively communicate the contract to implementers:

1. I think it would be good for the Javadoc to give a bit more of a hint
about what the validate(Map) method is supposed to do: At least call
ConfigDef.validate(Map) with the provided configs (for implementers that
can be achieved via super.validate()), and optionally apply extra
validation for constraints that ConfigDef (and ConfigDef.Validator) cannot
check. I think typically that would be where there's a dependency between
two config parameters, e.g. if 'foo' is present then 'bar' must be too, or
'baz' and 'qux' cannot have the same value.
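The kind of cross-parameter check described here could look like the following standalone sketch (the config names 'foo' and 'bar' are illustrative, and the ConfigValue class is a minimal stand-in for Connect's, not the real API):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CrossFieldValidationDemo {

    /** Minimal stand-in for Connect's per-key validation result. */
    static final class ConfigValue {
        final String name;
        final List<String> errorMessages = new ArrayList<>();
        ConfigValue(String name) { this.name = name; }
    }

    /**
     * Cross-parameter validation that ConfigDef alone cannot express:
     * if 'foo' is present, 'bar' must be present too.
     */
    static Map<String, ConfigValue> validate(Map<String, String> configs) {
        Map<String, ConfigValue> result = new HashMap<>();
        result.put("foo", new ConfigValue("foo"));
        result.put("bar", new ConfigValue("bar"));
        if (configs.containsKey("foo") && !configs.containsKey("bar")) {
            result.get("bar").errorMessages
                  .add("'bar' must be set when 'foo' is set");
        }
        return result;
    }
}
```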
2. Can the Javadoc give a bit more detail about the return value of these
new methods? I'm not sure that the implementer of a Transformation would
necessarily know how the Config returned from validate(Map) might be
"updated", or that updating ConfigValue's errorMessages is the right way to
report config-specific errors. The KIP should be clear on how we expect
implementers to report errors due to dependencies between multiple config
parameters (must they be tied to a config parameter, or should the method
throw, for example?). I think this is a bit awkward, actually, since the
ConfigInfo structure used for the JSON REST response doesn't seem to have a
nice way to represent errors which are not associated with a config
parameter.
3. It might also be worth calling out that the expectation is that a
successful return from the new validate() method should imply that
configure(Map) will succeed (to do otherwise undermines the value of the
validate endpoint). This makes me wonder about implementers, who might
defensively program their configure(Map) method to implement the same
checks. Therefore the contract should make clear that the Connect runtime
guarantees that validate(Map) will be called before configure(Map).

I don't really like the idea of implementing more-or-less the same default
multiple times. Since these Transformation, Predicate etc will have a
common contract wrt validate() and configure(), I wondered whether there
was benefit in a common interface which Transformation etc could extend.
It's a bit tricky because Connector and Converter are not Configurable.
This was the best I could manage:

```
interface ConfigValidatable {

    /**
     * Validate the given configuration values against the given
     * configuration definitions. This method will be called prior to the
     * invocation of any initializer method, such as
     * {@link Connector#initialize(ConnectorContext)} or
     * {@link Configurable#configure(Map)}, and should report any errors in
     * the given configuration values using the errorMessages of the
     * ConfigValues in the returned Config. If the Config returned by this
     * method has no errors, then the initializer method should not throw
     * due to bad configuration.
     *
     * @param configDef the configuration definition, which may be null.
     * @param configs the provided configuration values.
     * @return the updated configuration information given the current
     * configuration values
     *
     * @since 3.2
     */
    default Config validate(ConfigDef configDef, Map<String, String> configs) {
        List<ConfigValue> configValues = configDef.validate(configs);
        return new Config(configValues);
    }
}
```

Note that the configDef is passed in, leaving it to the runtime to call
`thing.config()` to get the ConfigDef instance and validate whether it is
allowed to be null or not. The subinterfaces could override validate() to
define what the "initializer method" is in their case, and to indicate
whether configDef can actually be null.
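A runtime honoring the validate-before-configure contract from point 3 might look like this standalone sketch (the types here are simplified stand-ins, not real Connect APIs):

```java
import java.util.List;
import java.util.Map;

public class PreflightDemo {

    /** Simplified stand-in for a component that validates before configuring. */
    interface ConfigValidatable {
        /** Per-key error messages; all-empty means configure() must not throw. */
        Map<String, List<String>> validate(Map<String, String> configs);
        void configure(Map<String, String> configs);
    }

    /**
     * The contract discussed in point 3: validate(Map) is always called
     * before configure(Map), and configure() only runs when validation
     * reported no errors.
     */
    static boolean preflight(ConfigValidatable component, Map<String, String> configs) {
        boolean ok = component.validate(configs).values().stream()
                              .allMatch(List::isEmpty);
        if (ok) {
            component.configure(configs);
        }
        return ok;
    }
}
```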

To be honest, I'm not really sure this is better, but I thought I'd suggest
it to see what others thought.

Kind regards,

Tom

On Tue, Dec 21, 2021 at 6:46 PM Chris Egerton 
wrote:

> Hi Gunnar,
>
> Thanks, this looks great. I'm ready to cast a non-binding +1 on the vote
> thread when it comes.
>
> One small non-blocking nit: I like that you call out that the new
> validation steps will take place when a connector gets registered or
> updated. IMO this is important enough to be included in the "Public
> Interfaces" section as that type of preflight check is arguably more
> important than the PUT /connector-plugins/{name}/config/validate endpoint,
> when considering that use of the validation endpoint is strictly opt-in,
> but preflight checks for new connector configs are unavoidable (without
> resorting to devious hacks like publishing directly to the config topic).
> But this is really minor, I'm happy to +1 the KIP as-is.
>
> Cheers,
>
> Chris
>
> On Tue, Dec 21, 2021 at 8:43 AM Gunnar Morling
>  wrote:
>
> > Hey Chris,
> >
> > Thanks a lot for reviewing this KIP and your comments! Some more answers
> > inline.
> >
> > Am Di., 7. Dez. 2021 um 23:49 Uhr schrieb Chris Egerton
> > :
> >
> > > Hi Gunnar,
> > >
> > > Thanks 

Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.1 #43

2021-12-22 Thread Apache Jenkins Server
See 




Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #593

2021-12-22 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.1 #44

2021-12-22 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 501602 lines...]
[2021-12-22T12:46:08.411Z] > Task :raft:testClasses UP-TO-DATE
[2021-12-22T12:46:08.411Z] > Task :connect:json:testJar
[2021-12-22T12:46:08.411Z] > Task :connect:json:testSrcJar
[2021-12-22T12:46:08.411Z] > Task :metadata:compileTestJava UP-TO-DATE
[2021-12-22T12:46:08.411Z] > Task :metadata:testClasses UP-TO-DATE
[2021-12-22T12:46:08.411Z] > Task :core:compileScala UP-TO-DATE
[2021-12-22T12:46:08.411Z] > Task :core:classes UP-TO-DATE
[2021-12-22T12:46:08.411Z] > Task :core:compileTestJava NO-SOURCE
[2021-12-22T12:46:08.411Z] > Task 
:clients:generateMetadataFileForMavenJavaPublication
[2021-12-22T12:46:08.411Z] > Task 
:clients:generatePomFileForMavenJavaPublication
[2021-12-22T12:46:09.338Z] 
[2021-12-22T12:46:09.338Z] > Task :streams:processMessages
[2021-12-22T12:46:09.338Z] Execution optimizations have been disabled for task 
':streams:processMessages' to ensure correctness due to the following reasons:
[2021-12-22T12:46:09.338Z]   - Gradle detected a problem with the following 
location: 
'/home/jenkins/workspace/Kafka_kafka_3.1/streams/src/generated/java/org/apache/kafka/streams/internals/generated'.
 Reason: Task ':streams:srcJar' uses this output of task 
':streams:processMessages' without declaring an explicit or implicit 
dependency. This can lead to incorrect results being produced, depending on 
what order the tasks are executed. Please refer to 
https://docs.gradle.org/7.2/userguide/validation_problems.html#implicit_dependency
 for more details about this problem.
[2021-12-22T12:46:09.338Z] MessageGenerator: processed 1 Kafka message JSON 
files(s).
[2021-12-22T12:46:09.338Z] 
[2021-12-22T12:46:09.338Z] > Task :streams:compileJava UP-TO-DATE
[2021-12-22T12:46:09.338Z] > Task :streams:classes UP-TO-DATE
[2021-12-22T12:46:09.338Z] > Task :streams:copyDependantLibs UP-TO-DATE
[2021-12-22T12:46:09.338Z] > Task :core:compileTestScala UP-TO-DATE
[2021-12-22T12:46:09.338Z] > Task :core:testClasses UP-TO-DATE
[2021-12-22T12:46:09.338Z] > Task :streams:test-utils:compileJava UP-TO-DATE
[2021-12-22T12:46:09.338Z] > Task :streams:jar UP-TO-DATE
[2021-12-22T12:46:09.338Z] > Task 
:streams:generateMetadataFileForMavenJavaPublication
[2021-12-22T12:46:12.884Z] > Task :connect:api:javadoc
[2021-12-22T12:46:12.884Z] > Task :connect:api:copyDependantLibs UP-TO-DATE
[2021-12-22T12:46:12.884Z] > Task :connect:api:jar UP-TO-DATE
[2021-12-22T12:46:12.884Z] > Task 
:connect:api:generateMetadataFileForMavenJavaPublication
[2021-12-22T12:46:12.884Z] > Task :connect:json:copyDependantLibs UP-TO-DATE
[2021-12-22T12:46:12.884Z] > Task :connect:json:jar UP-TO-DATE
[2021-12-22T12:46:12.884Z] > Task 
:connect:json:generateMetadataFileForMavenJavaPublication
[2021-12-22T12:46:12.884Z] > Task :connect:api:javadocJar
[2021-12-22T12:46:12.884Z] > Task :connect:api:compileTestJava UP-TO-DATE
[2021-12-22T12:46:12.884Z] > Task :connect:api:testClasses UP-TO-DATE
[2021-12-22T12:46:12.884Z] > Task 
:connect:json:publishMavenJavaPublicationToMavenLocal
[2021-12-22T12:46:12.884Z] > Task :connect:json:publishToMavenLocal
[2021-12-22T12:46:12.884Z] > Task :connect:api:testJar
[2021-12-22T12:46:12.884Z] > Task :connect:api:testSrcJar
[2021-12-22T12:46:12.884Z] > Task 
:connect:api:publishMavenJavaPublicationToMavenLocal
[2021-12-22T12:46:12.884Z] > Task :connect:api:publishToMavenLocal
[2021-12-22T12:46:15.358Z] > Task :streams:javadoc
[2021-12-22T12:46:15.358Z] > Task :streams:javadocJar
[2021-12-22T12:46:16.284Z] > Task :streams:compileTestJava UP-TO-DATE
[2021-12-22T12:46:16.284Z] > Task :streams:testClasses UP-TO-DATE
[2021-12-22T12:46:16.284Z] > Task :streams:testJar
[2021-12-22T12:46:16.284Z] > Task :streams:testSrcJar
[2021-12-22T12:46:17.210Z] > Task 
:streams:publishMavenJavaPublicationToMavenLocal
[2021-12-22T12:46:17.210Z] > Task :streams:publishToMavenLocal
[2021-12-22T12:46:17.210Z] > Task :clients:javadoc
[2021-12-22T12:46:17.210Z] > Task :clients:javadocJar
[2021-12-22T12:46:18.137Z] 
[2021-12-22T12:46:18.137Z] > Task :clients:srcJar
[2021-12-22T12:46:18.137Z] Execution optimizations have been disabled for task 
':clients:srcJar' to ensure correctness due to the following reasons:
[2021-12-22T12:46:18.137Z]   - Gradle detected a problem with the following 
location: '/home/jenkins/workspace/Kafka_kafka_3.1/clients/src/generated/java'. 
Reason: Task ':clients:srcJar' uses this output of task 
':clients:processMessages' without declaring an explicit or implicit 
dependency. This can lead to incorrect results being produced, depending on 
what order the tasks are executed. Please refer to 
https://docs.gradle.org/7.2/userguide/validation_problems.html#implicit_dependency
 for more details about this problem.
[2021-12-22T12:46:19.063Z] 
[2021-12-22T12:46:19.063Z] > Task :clients:testJar
[2021-12-22T12:46:19.063Z] > Task :clients:testSrcJar
[2021-12-22T12:46:

Re: [DISCUSS] KIP-591: Add Kafka Streams config to set default state store

2021-12-22 Thread Luke Chen
Hi Guozhang,

Thanks for the comments.
And I agree it's better to rename it to `default.dsl.store.impl.type` for
differentiation.
I've updated the KIP.

Thank you.
Luke
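For illustration, setting the proposed config might look like the following (the key "default.dsl.store.impl.type" and the value name follow this thread's discussion and are not a released Streams API; plain string keys are used to keep the sketch self-contained):

```java
import java.util.Properties;

public class DefaultStoreTypeConfigDemo {

    /**
     * Sketch of configuring the default DSL store implementation type as
     * proposed in KIP-591. The config name is the one under discussion in
     * this thread; ROCKS_DB would remain the default for compatibility.
     */
    static Properties streamsProps() {
        Properties props = new Properties();
        props.put("application.id", "my-app");
        props.put("bootstrap.servers", "localhost:9092");
        props.put("default.dsl.store.impl.type", "IN_MEMORY");
        return props;
    }
}
```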


On Wed, Dec 22, 2021 at 3:12 AM Guozhang Wang  wrote:

> Thanks Luke, I do not have any major comments on the wiki any more. BTW
> thanks for making the "public StreamsBuilder(final TopologyConfig
> topologyConfigs)" API public now, I think it will benefit a lot of future
> works!
>
> I think with the new API, we can deprecate the `build(props)` function in
> StreamsBuilder now, and will file a separate JIRA for it.
>
> Just a few nits:
>
> 1) There seems to be a typo at the end: "ROCK_DB"
> 2) Sometimes people refer to "store type" as kv-store, window-store etc;
> maybe we can differentiate them a bit by calling the new API names
> `StoreImplType`,
> `default.dsl.store.impl.type` and `The default store implementation type
> used by DSL operators`.
>
> On Thu, Dec 16, 2021 at 2:29 AM Luke Chen  wrote:
>
> > Hi Guozhang,
> >
> > I've updated the KIP to use `enum`, instead of store implementation, and
> > some content accordingly.
> > Please let me know if there's other comments.
> >
> >
> > Thank you.
> > Luke
> >
> > On Sun, Dec 12, 2021 at 3:55 PM Luke Chen  wrote:
> >
> > > Hi Guozhang,
> > >
> > > Thanks for your comments.
> > > I agree that in the KIP, there's a trade-off regarding the API
> > complexity.
> > > With the store impl, we can support default custom stores, but
> introduce
> > > more complexity for users, while with the enum types, users can
> configure
> > > default built-in store types easily, but it can't work for custom
> stores.
> > >
> > > For me, I'm OK to narrow down the scope and introduce the default
> > built-in
> > > enum store types first.
> > > And if there's further request, we can consider a better way to support
> > > default store impl.
> > >
> > > I'll update the KIP next week, unless there are other opinions from
> other
> > > members.
> > >
> > > Thank you.
> > > Luke
> > >
> > > On Fri, Dec 10, 2021 at 6:33 AM Guozhang Wang 
> > wrote:
> > >
> > >> Thanks Luke for the updated KIP.
> > >>
> > >> One major change in the new proposal is to replace the original enum
> > >> store type with a new interface. In the enum approach our internal
> > >> implementations would be something like:
> > >>
> > >> "
> > >> Stores#keyValueBytesStoreSupplier(storeImplTypes, storeName, ...)
> > >> Stores#windowBytesStoreSupplier(storeImplTypes, storeName, ...)
> > >> Stores#sessionBytesStoreSupplier(storeImplTypes, storeName, ...)
> > >> "
> > >>
> > >> And inside the impl classes we could directly do:
> > >>
> > >> "
> > >> if ((supplier = materialized.storeSupplier) == null) {
> > >> supplier =
> > >> Stores.windowBytesStoreSupplier(materialized.storeImplType())
> > >> }
> > >> "
> > >>
> > >> While I understand the benefit of having an interface such that user
> > >> customized stores could be used as the default store types as well,
> > >> there's
> > >> a trade-off I feel regarding the API complexity. Since with this
> > approach,
> > >> our API complexity granularity is in the order of "number of impl
> > types" *
> > >> "number of store types". This means that whenever we add new store
> types
> > >> in
> > >> the future, this API needs to be augmented and customized impl needs
> to
> > be
> > >> updated to support the new store types, in addition, not all custom
> impl
> > >> types may support all store types, but with this interface they are
> > forced
> > >> to either support all or explicitly throw un-supported exceptions.
> > >>
> > >> The way I see a default impl type is that, they would be safe to use
> for
> > >> any store types, and since store types are evolved by the library
> > itself,
> > >> the default impls would better be controlled by the library as well.
> > >> Custom
> > >> impl classes can choose to replace some of the stores explicitly, but
> > may
> > >> not be a best fit as the default impl classes --- if there are in the
> > >> future, one way we can consider is to make Stores class extensible
> along
> > >> with the enum so that advanced users can add more default impls,
> > assuming
> > >> such scenarios are not very common.
> > >>
> > >> So I'm personally still leaning a bit towards the enum approach with
> a
> > >> narrower scope, for its simplicity as an API and also its low
> > maintenance
> > >> cost in the future. Let me know what you think.
> > >>
> > >>
> > >> Guozhang
> > >>
> > >>
> > >> On Wed, Dec 1, 2021 at 6:48 PM Luke Chen  wrote:
> > >>
> > >> > Hi devs,
> > >> >
> > >> > I'd like to propose a KIP to allow users to set default store
> > >> > implementation class (built-in RocksDB/InMemory, or custom one), and
> > >> > default to RocksDB state store, to keep backward compatibility.
> > >> >
> > >> > Detailed description can be found here:
> > >> >
> > >> >
> > >>
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-591%3A+Add+Kafka+Str

Precision of kafka quotas

2021-12-22 Thread Thouqueer Ahmed
Hi,
  We are trying to set up a utilization plot for our multi-tenant 
Kafka cluster.

Setup is Kafka: 2.6, Scala 2.12, Ubuntu 18

We noticed 2 issues:


  1.  Throttling happens even when quota violation doesn't occur
As per JMX metrics:
Producer byte-rate is 148KB/s,
Consumer byte-rate & kafka.server(type=FetchByteRate) both show 99KB/s whereas 
quota is set at 165K (split equally across 3 brokers => 55K)

Everything else checks out fine. There is no other quota coming into the picture. 
All brokers have an equal number of partitions. Per the official docs, the graph 
should show a spike that goes beyond 55K followed by a throttle window, but I 
don't see any peak go beyond 55K.



  2.  JMX metric kafka.server(type=FetchByteRate) is wrong in some cases when 
the load is low.
Only 1 or 2 brokers show proper value even though there is no partition skew 
and data are being distributed equally.

Am I missing something?
Any help would be much appreciated.

Thanks,
Ahmed


[jira] [Created] (KAFKA-13563) Consumer failure after rolling Broker upgrade

2021-12-22 Thread Luke Chen (Jira)
Luke Chen created KAFKA-13563:
-

 Summary: Consumer failure after rolling Broker upgrade
 Key: KAFKA-13563
 URL: https://issues.apache.org/jira/browse/KAFKA-13563
 Project: Kafka
  Issue Type: Bug
  Components: clients
Reporter: Luke Chen
Assignee: Luke Chen


This failure occurred again during this month's rolling OS security updates to 
the Brokers (no change to Broker version).  I have also been able to reproduce 
it locally with the following process:
 
1. Start a 3 Broker cluster with a Topic having Replicas=3.
2. Start a Client with Producer and Consumer communicating over the Topic.
3. Stop the Broker that is acting as the Group Coordinator.
4. Observe successful Rediscovery of new Group Coordinator.
5. Restart the stopped Broker.
6. Stop the Broker that became the new Group Coordinator at step 4.
7. Observe "Rediscovery will be attempted" message but no "Discovered group 
coordinator" message.
 
In short, Group Coordinator Rediscovery only works for the first Broker 
failover not any subsequent failover.
 
I conducted tests using 2.7.1 servers.  The issue occurs with 2.7.1 and 2.7.2 
Clients.  The issue does not occur with 2.5.1 and 2.7.0 Clients.  This makes me 
suspect that https://issues.apache.org/jira/browse/KAFKA-10793 introduced this 
issue.
 
 

 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.1 #45

2021-12-22 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 499680 lines...]
[2021-12-22T14:51:11.900Z] TransactionsBounceTest > testWithGroupId() PASSED
[2021-12-22T14:51:11.900Z] 
[2021-12-22T14:51:11.900Z] UserClientIdQuotaTest > 
testProducerConsumerOverrideLowerQuota() STARTED
[2021-12-22T14:51:18.882Z] 
[2021-12-22T14:51:18.882Z] UserClientIdQuotaTest > 
testProducerConsumerOverrideLowerQuota() PASSED
[2021-12-22T14:51:18.882Z] 
[2021-12-22T14:51:18.882Z] UserClientIdQuotaTest > 
testProducerConsumerOverrideUnthrottled() STARTED
[2021-12-22T14:51:19.798Z] 
[2021-12-22T14:51:19.798Z] ResetConsumerGroupOffsetTest > 
testResetOffsetsAllTopicsAllGroups() PASSED
[2021-12-22T14:51:19.798Z] 
[2021-12-22T14:51:19.798Z] ResetConsumerGroupOffsetTest > 
testResetOffsetsToEarliestOnTopicsAndPartitions() STARTED
[2021-12-22T14:51:28.202Z] 
[2021-12-22T14:51:28.202Z] UserClientIdQuotaTest > 
testProducerConsumerOverrideUnthrottled() PASSED
[2021-12-22T14:51:28.202Z] 
[2021-12-22T14:51:28.202Z] UserClientIdQuotaTest > 
testThrottledProducerConsumer() STARTED
[2021-12-22T14:51:31.717Z] 
[2021-12-22T14:51:31.717Z] ResetConsumerGroupOffsetTest > 
testResetOffsetsToEarliestOnTopicsAndPartitions() PASSED
[2021-12-22T14:51:31.717Z] 
[2021-12-22T14:51:31.717Z] ResetConsumerGroupOffsetTest > 
testResetOffsetsByDurationFallbackToLatestWhenNoRecords() STARTED
[2021-12-22T14:51:35.287Z] 
[2021-12-22T14:51:35.287Z] ResetConsumerGroupOffsetTest > 
testResetOffsetsByDurationFallbackToLatestWhenNoRecords() PASSED
[2021-12-22T14:51:35.287Z] 
[2021-12-22T14:51:35.287Z] ResetConsumerGroupOffsetTest > 
testResetOffsetsToEarliestOnTopics() STARTED
[2021-12-22T14:51:49.143Z] 
[2021-12-22T14:51:49.143Z] ResetConsumerGroupOffsetTest > 
testResetOffsetsToEarliestOnTopics() PASSED
[2021-12-22T14:51:49.143Z] 
[2021-12-22T14:51:49.143Z] ZkAuthorizationTest > testIsZkSecurityEnabled() 
STARTED
[2021-12-22T14:51:49.143Z] 
[2021-12-22T14:51:49.143Z] ZkAuthorizationTest > testIsZkSecurityEnabled() 
PASSED
[2021-12-22T14:51:49.143Z] 
[2021-12-22T14:51:49.143Z] ZkAuthorizationTest > testKafkaZkClient() STARTED
[2021-12-22T14:51:49.143Z] 
[2021-12-22T14:51:49.143Z] ZkAuthorizationTest > testKafkaZkClient() PASSED
[2021-12-22T14:51:49.143Z] 
[2021-12-22T14:51:49.143Z] ZkAuthorizationTest > testZkAntiMigration() STARTED
[2021-12-22T14:51:49.143Z] 
[2021-12-22T14:51:49.143Z] ZkAuthorizationTest > testZkAntiMigration() PASSED
[2021-12-22T14:51:49.143Z] 
[2021-12-22T14:51:49.144Z] ZkAuthorizationTest > testConsumerOffsetPathAcls() 
STARTED
[2021-12-22T14:51:49.144Z] 
[2021-12-22T14:51:49.144Z] ZkAuthorizationTest > testConsumerOffsetPathAcls() 
PASSED
[2021-12-22T14:51:49.144Z] 
[2021-12-22T14:51:49.144Z] ZkAuthorizationTest > testZkMigration() STARTED
[2021-12-22T14:51:49.832Z] 
[2021-12-22T14:51:49.832Z] UserClientIdQuotaTest > 
testThrottledProducerConsumer() PASSED
[2021-12-22T14:51:49.832Z] 
[2021-12-22T14:51:49.832Z] UserClientIdQuotaTest > testQuotaOverrideDelete() 
STARTED
[2021-12-22T14:51:50.077Z] 
[2021-12-22T14:51:50.077Z] ZkAuthorizationTest > testZkMigration() PASSED
[2021-12-22T14:51:50.077Z] 
[2021-12-22T14:51:50.077Z] ZkAuthorizationTest > testChroot() STARTED
[2021-12-22T14:51:50.077Z] 
[2021-12-22T14:51:50.077Z] ZkAuthorizationTest > testChroot() PASSED
[2021-12-22T14:51:50.077Z] 
[2021-12-22T14:51:50.077Z] ZkAuthorizationTest > testDelete() STARTED
[2021-12-22T14:51:51.011Z] 
[2021-12-22T14:51:51.011Z] ZkAuthorizationTest > testDelete() PASSED
[2021-12-22T14:51:51.011Z] 
[2021-12-22T14:51:51.011Z] ZkAuthorizationTest > testDeleteRecursive() STARTED
[2021-12-22T14:51:51.011Z] 
[2021-12-22T14:51:51.011Z] ZkAuthorizationTest > testDeleteRecursive() PASSED
[2021-12-22T14:51:51.011Z] 
[2021-12-22T14:51:51.011Z] AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeWithAllHostAce() STARTED
[2021-12-22T14:51:51.944Z] 
[2021-12-22T14:51:51.944Z] AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeWithAllHostAce() PASSED
[2021-12-22T14:51:51.944Z] 
[2021-12-22T14:51:51.944Z] AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeIsolationUnrelatedDenyWontDominateAllow() STARTED
[2021-12-22T14:51:51.944Z] 
[2021-12-22T14:51:51.944Z] AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeIsolationUnrelatedDenyWontDominateAllow() PASSED
[2021-12-22T14:51:51.944Z] 
[2021-12-22T14:51:51.944Z] AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeWildcardResourceDenyDominate() STARTED
[2021-12-22T14:51:52.878Z] 
[2021-12-22T14:51:52.878Z] AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeWildcardResourceDenyDominate() PASSED
[2021-12-22T14:51:52.878Z] 
[2021-12-22T14:51:52.878Z] AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeWithAllOperationAce() STARTED
[2021-12-22T14:51:52.878Z] 
[2021-12-22T14:51:52.878Z] AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeWithAllOperationAce() PASSED
[

Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.1 #46

2021-12-22 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 501643 lines...]
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // timestamps
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[2021-12-22T19:44:36.159Z] 
[2021-12-22T19:44:36.159Z] AlterUserScramCredentialsRequestTest > 
testAlterTooManyIterations() PASSED
[2021-12-22T19:44:36.159Z] 
[2021-12-22T19:44:36.159Z] AlterUserScramCredentialsRequestTest > 
testAlterNothing() STARTED
[2021-12-22T19:44:39.756Z] 
[2021-12-22T19:44:39.756Z] AlterUserScramCredentialsRequestTest > 
testAlterNothing() PASSED
[2021-12-22T19:44:39.756Z] 
[2021-12-22T19:44:39.756Z] AlterUserScramCredentialsRequestTest > 
testAlterNotController() STARTED
[2021-12-22T19:44:43.352Z] 
[2021-12-22T19:44:43.352Z] AlterUserScramCredentialsRequestTest > 
testAlterNotController() PASSED
[2021-12-22T19:44:43.352Z] 
[2021-12-22T19:44:43.352Z] ServerStartupTest > testBrokerStateRunningAfterZK() 
STARTED
[2021-12-22T19:44:45.994Z] 
[2021-12-22T19:44:45.994Z] ServerStartupTest > testBrokerStateRunningAfterZK() 
PASSED
[2021-12-22T19:44:45.994Z] 
[2021-12-22T19:44:45.994Z] ServerStartupTest > testBrokerCreatesZKChroot() 
STARTED
[2021-12-22T19:44:47.756Z] 
[2021-12-22T19:44:47.756Z] ServerStartupTest > testBrokerCreatesZKChroot() 
PASSED
[2021-12-22T19:44:47.756Z] 
[2021-12-22T19:44:47.756Z] ServerStartupTest > 
testConflictBrokerStartupWithSamePort() STARTED
[2021-12-22T19:44:51.523Z] 
[2021-12-22T19:44:51.523Z] ServerStartupTest > 
testConflictBrokerStartupWithSamePort() PASSED
[2021-12-22T19:44:51.523Z] 
[2021-12-22T19:44:51.523Z] ServerStartupTest > testConflictBrokerRegistration() 
STARTED
[2021-12-22T19:44:55.289Z] 
[2021-12-22T19:44:55.289Z] ServerStartupTest > testConflictBrokerRegistration() 
PASSED
[2021-12-22T19:44:55.289Z] 
[2021-12-22T19:44:55.289Z] ServerStartupTest > testBrokerSelfAware() STARTED
[2021-12-22T19:44:57.053Z] 
[2021-12-22T19:44:57.053Z] ServerStartupTest > testBrokerSelfAware() PASSED
[2021-12-22T19:44:57.053Z] 
[2021-12-22T19:44:57.053Z] ServerGenerateBrokerIdTest > 
testBrokerMetadataOnIdCollision() STARTED
[2021-12-22T19:45:01.788Z] 
[2021-12-22T19:45:01.788Z] ServerGenerateBrokerIdTest > 
testBrokerMetadataOnIdCollision() PASSED
[2021-12-22T19:45:01.788Z] 
[2021-12-22T19:45:01.788Z] ServerGenerateBrokerIdTest > 
testAutoGenerateBrokerId() STARTED
[2021-12-22T19:45:06.445Z] 
[2021-12-22T19:45:06.445Z] ServerGenerateBrokerIdTest > 
testAutoGenerateBrokerId() PASSED
[2021-12-22T19:45:06.445Z] 
[2021-12-22T19:45:06.445Z] ServerGenerateBrokerIdTest > 
testMultipleLogDirsMetaProps() STARTED
[2021-12-22T19:45:10.220Z] 
[2021-12-22T19:45:10.220Z] ServerGenerateBrokerIdTest > 
testMultipleLogDirsMetaProps() PASSED
[2021-12-22T19:45:10.220Z] 
[2021-12-22T19:45:10.220Z] ServerGenerateBrokerIdTest > 
testDisableGeneratedBrokerId() STARTED
[2021-12-22T19:45:12.159Z] 
[2021-12-22T19:45:12.159Z] ServerGenerateBrokerIdTest > 
testDisableGeneratedBrokerId() PASSED
[2021-12-22T19:45:12.159Z] 
[2021-12-22T19:45:12.159Z] ServerGenerateBrokerIdTest > 
testUserConfigAndGeneratedBrokerId() STARTED
[2021-12-22T19:45:19.221Z] 
[2021-12-22T19:45:19.221Z] ServerGenerateBrokerIdTest > 
testUserConfigAndGeneratedBrokerId() PASSED
[2021-12-22T19:45:19.221Z] 
[2021-12-22T19:45:19.221Z] ServerGenerateBrokerIdTest > 
testConsistentBrokerIdFromUserConfigAndMetaProps() STARTED
[2021-12-22T19:45:22.986Z] 
[2021-12-22T19:45:22.986Z] ServerGenerateBrokerIdTest > 
testConsistentBrokerIdFromUserConfigAndMetaProps() PASSED
[2021-12-22T19:45:22.986Z] 
[2021-12-22T19:45:22.986Z] MultipleListenersWithDefaultJaasContextTest > 
testProduceConsume() STARTED
[2021-12-22T19:45:58.650Z] 
[2021-12-22T19:45:58.650Z] MultipleListenersWithDefaultJaasContextTest > 
testProduceConsume() PASSED
[2021-12-22T19:45:58.650Z] 
[2021-12-22T19:45:58.650Z] ZooKeeperClientTest > 
testZNodeChangeHandlerForDataChange() STARTED
[2021-12-22T19:45:58.650Z] 
[2021-12-22T19:45:58.650Z] ZooKeeperClientTest > 
testZNodeChangeHandlerForDataChange() PASSED
[2021-12-22T19:45:58.650Z] 
[2021-12-22T19:45:58.650Z] ZooKeeperClientTest > 
testZooKeeperSessionStateMetric() STARTED
[2021-12-22T19:45:58.650Z] 
[2021-12-22T19:45:58.650Z] ZooKeeperClientTest > 
testZooKeeperSessionStateMetric() PASSED
[2021-12-22T19:45:58.650Z] 
[2021-12-22T19:45:58.650Z] ZooKeeperClientTest > 
testExceptionInBeforeInitializingSession() STARTED
[2021-12-22T19:45:58.650Z] 
[2021-12-22T19:45:58.650Z] ZooKeeperClientTest > 
testExceptionInBeforeInitializingSession() PASSED
[2021-12-22T19:45:58.650Z] 
[2021-12-22T19:45:58.650Z] ZooKeeperClientTest > testGetChildrenExistingZNode() 
STARTED
[2021-12-22T19:45:58.650Z] 
[2021-12-22T19:45:58.650Z] ZooKeeperClientTest > testGetChildrenExistingZNode() 
PASSED
[2021-12-22T19:45:58.650Z] 
[2021-12-22T19:45:58.650Z] ZooKeeperClientTest > tes

Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.0 #163

2021-12-22 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 416209 lines...]
[2021-12-22T21:37:34.006Z] [INFO] --- maven-clean-plugin:3.0.0:clean 
(default-clean) @ streams-quickstart ---
[2021-12-22T21:37:34.006Z] [INFO] 
[2021-12-22T21:37:34.006Z] [INFO] --- maven-remote-resources-plugin:1.5:process 
(process-resource-bundles) @ streams-quickstart ---
[2021-12-22T21:37:34.942Z] [INFO] 
[2021-12-22T21:37:34.942Z] [INFO] --- maven-site-plugin:3.5.1:attach-descriptor 
(attach-descriptor) @ streams-quickstart ---
[2021-12-22T21:37:35.879Z] [INFO] 
[2021-12-22T21:37:35.879Z] [INFO] --- maven-gpg-plugin:1.6:sign 
(sign-artifacts) @ streams-quickstart ---
[2021-12-22T21:37:35.879Z] [INFO] 
[2021-12-22T21:37:35.879Z] [INFO] --- maven-install-plugin:2.5.2:install 
(default-install) @ streams-quickstart ---
[2021-12-22T21:37:35.879Z] [INFO] Installing 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.0/streams/quickstart/pom.xml
 to 
/home/jenkins/.m2/repository/org/apache/kafka/streams-quickstart/3.0.1-SNAPSHOT/streams-quickstart-3.0.1-SNAPSHOT.pom
[2021-12-22T21:37:35.879Z] [INFO] 
[2021-12-22T21:37:35.879Z] [INFO] --< 
org.apache.kafka:streams-quickstart-java >--
[2021-12-22T21:37:35.879Z] [INFO] Building streams-quickstart-java 
3.0.1-SNAPSHOT[2/2]
[2021-12-22T21:37:35.879Z] [INFO] --[ maven-archetype 
]---
[2021-12-22T21:37:35.879Z] [INFO] 
[2021-12-22T21:37:35.879Z] [INFO] --- maven-clean-plugin:3.0.0:clean 
(default-clean) @ streams-quickstart-java ---
[2021-12-22T21:37:35.879Z] [INFO] 
[2021-12-22T21:37:35.879Z] [INFO] --- maven-remote-resources-plugin:1.5:process 
(process-resource-bundles) @ streams-quickstart-java ---
[2021-12-22T21:37:35.879Z] [INFO] 
[2021-12-22T21:37:35.879Z] [INFO] --- maven-resources-plugin:2.7:resources 
(default-resources) @ streams-quickstart-java ---
[2021-12-22T21:37:35.879Z] [INFO] Using 'UTF-8' encoding to copy filtered 
resources.
[2021-12-22T21:37:35.879Z] [INFO] Copying 6 resources
[2021-12-22T21:37:35.879Z] [INFO] Copying 3 resources
[2021-12-22T21:37:35.879Z] [INFO] 
[2021-12-22T21:37:35.879Z] [INFO] --- maven-resources-plugin:2.7:testResources 
(default-testResources) @ streams-quickstart-java ---
[2021-12-22T21:37:35.879Z] [INFO] Using 'UTF-8' encoding to copy filtered 
resources.
[2021-12-22T21:37:35.879Z] [INFO] Copying 2 resources
[2021-12-22T21:37:35.879Z] [INFO] Copying 3 resources
[2021-12-22T21:37:35.879Z] [INFO] 
[2021-12-22T21:37:35.879Z] [INFO] --- maven-archetype-plugin:2.2:jar 
(default-jar) @ streams-quickstart-java ---
[2021-12-22T21:37:36.394Z] [INFO] Building archetype jar: 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.0/streams/quickstart/java/target/streams-quickstart-java-3.0.1-SNAPSHOT
[2021-12-22T21:37:36.394Z] [INFO] 
[2021-12-22T21:37:36.394Z] [INFO] --- maven-site-plugin:3.5.1:attach-descriptor 
(attach-descriptor) @ streams-quickstart-java ---
[2021-12-22T21:37:36.394Z] [INFO] 
[2021-12-22T21:37:36.394Z] [INFO] --- 
maven-archetype-plugin:2.2:integration-test (default-integration-test) @ 
streams-quickstart-java ---
[2021-12-22T21:37:36.394Z] [INFO] 
[2021-12-22T21:37:36.394Z] [INFO] --- maven-gpg-plugin:1.6:sign 
(sign-artifacts) @ streams-quickstart-java ---
[2021-12-22T21:37:36.394Z] [INFO] 
[2021-12-22T21:37:36.394Z] [INFO] --- maven-install-plugin:2.5.2:install 
(default-install) @ streams-quickstart-java ---
[2021-12-22T21:37:36.394Z] [INFO] Installing 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.0/streams/quickstart/java/target/streams-quickstart-java-3.0.1-SNAPSHOT.jar
 to 
/home/jenkins/.m2/repository/org/apache/kafka/streams-quickstart-java/3.0.1-SNAPSHOT/streams-quickstart-java-3.0.1-SNAPSHOT.jar
[2021-12-22T21:37:36.394Z] [INFO] Installing 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.0/streams/quickstart/java/pom.xml
 to 
/home/jenkins/.m2/repository/org/apache/kafka/streams-quickstart-java/3.0.1-SNAPSHOT/streams-quickstart-java-3.0.1-SNAPSHOT.pom
[2021-12-22T21:37:36.394Z] [INFO] 
[2021-12-22T21:37:36.394Z] [INFO] --- 
maven-archetype-plugin:2.2:update-local-catalog (default-update-local-catalog) 
@ streams-quickstart-java ---
[2021-12-22T21:37:36.394Z] [INFO] 

[2021-12-22T21:37:36.394Z] [INFO] Reactor Summary for Kafka Streams :: 
Quickstart 3.0.1-SNAPSHOT:
[2021-12-22T21:37:36.394Z] [INFO] 
[2021-12-22T21:37:36.394Z] [INFO] Kafka Streams :: Quickstart 
 SUCCESS [  1.600 s]
[2021-12-22T21:37:36.394Z] [INFO] streams-quickstart-java 
 SUCCESS [  0.689 s]
[2021-12-22T21:37:36.394Z] [INFO] 

[2021-12-22T21:37:36.394Z] [INFO] BUILD SUCCESS
[2021-12-22T21:37:36.394Z] [INFO] 

Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.1 #47

2021-12-22 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 498371 lines...]
[2021-12-22T21:55:40.592Z] 
[2021-12-22T21:55:40.592Z] ControllerIntegrationTest > 
testTopicIdCreatedOnUpgradeMultiBrokerScenario() PASSED
[2021-12-22T21:55:40.592Z] 
[2021-12-22T21:55:40.592Z] ControllerIntegrationTest > 
testPreemptionWithCallbacks() STARTED
[2021-12-22T21:55:45.663Z] 
[2021-12-22T21:55:45.663Z] ControllerIntegrationTest > 
testPreemptionWithCallbacks() PASSED
[2021-12-22T21:55:45.663Z] 
[2021-12-22T21:55:45.663Z] ControllerIntegrationTest > 
testControllerDetectsBouncedBrokers() STARTED
[2021-12-22T21:55:49.972Z] 
[2021-12-22T21:55:49.972Z] ControllerIntegrationTest > 
testControllerDetectsBouncedBrokers() PASSED
[2021-12-22T21:55:49.972Z] 
[2021-12-22T21:55:49.972Z] ControllerIntegrationTest > testControlledShutdown() 
STARTED
[2021-12-22T21:55:53.081Z] 
[2021-12-22T21:55:53.081Z] ControllerIntegrationTest > testControlledShutdown() 
PASSED
[2021-12-22T21:55:53.081Z] 
[2021-12-22T21:55:53.081Z] ControllerIntegrationTest > 
testPreemptionOnControllerShutdown() STARTED
[2021-12-22T21:55:56.180Z] 
[2021-12-22T21:55:56.180Z] ControllerIntegrationTest > 
testPreemptionOnControllerShutdown() PASSED
[2021-12-22T21:55:56.180Z] 
[2021-12-22T21:55:56.180Z] ControllerIntegrationTest > 
testPartitionReassignmentWithOfflineReplicaHaltingProgress() STARTED
[2021-12-22T21:56:00.479Z] 
[2021-12-22T21:56:00.479Z] ControllerIntegrationTest > 
testPartitionReassignmentWithOfflineReplicaHaltingProgress() PASSED
[2021-12-22T21:56:00.479Z] 
[2021-12-22T21:56:00.479Z] ControllerIntegrationTest > 
testNoTopicIdPersistsThroughControllerReelection() STARTED
[2021-12-22T21:56:03.645Z] 
[2021-12-22T21:56:03.645Z] ControllerIntegrationTest > 
testNoTopicIdPersistsThroughControllerReelection() PASSED
[2021-12-22T21:56:03.645Z] 
[2021-12-22T21:56:03.645Z] ControllerIntegrationTest > 
testControllerEpochPersistsWhenAllBrokersDown() STARTED
[2021-12-22T21:56:05.749Z] 
[2021-12-22T21:56:05.749Z] ControllerIntegrationTest > 
testControllerEpochPersistsWhenAllBrokersDown() PASSED
[2021-12-22T21:56:05.749Z] 
[2021-12-22T21:56:05.749Z] ControllerIntegrationTest > testTopicIdsAreAdded() 
STARTED
[2021-12-22T21:56:08.987Z] 
[2021-12-22T21:56:08.987Z] ControllerIntegrationTest > testTopicIdsAreAdded() 
PASSED
[2021-12-22T21:56:08.987Z] 
[2021-12-22T21:56:08.987Z] ControllerIntegrationTest > 
testTopicCreationWithOfflineReplica() STARTED
[2021-12-22T21:56:12.428Z] 
[2021-12-22T21:56:12.428Z] ControllerIntegrationTest > 
testTopicCreationWithOfflineReplica() PASSED
[2021-12-22T21:56:12.428Z] 
[2021-12-22T21:56:12.428Z] ControllerIntegrationTest > 
testPartitionReassignmentResumesAfterReplicaComesOnline() STARTED
[2021-12-22T21:56:18.007Z] 
[2021-12-22T21:56:18.007Z] ControllerIntegrationTest > 
testPartitionReassignmentResumesAfterReplicaComesOnline() PASSED
[2021-12-22T21:56:18.007Z] 
[2021-12-22T21:56:18.007Z] ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionDisabled() STARTED
[2021-12-22T21:56:22.265Z] 
[2021-12-22T21:56:22.265Z] ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionDisabled() PASSED
[2021-12-22T21:56:22.265Z] 
[2021-12-22T21:56:22.265Z] ControllerIntegrationTest > 
testTopicIdMigrationAndHandlingWithOlderVersion() STARTED
[2021-12-22T21:56:25.276Z] 
[2021-12-22T21:56:25.276Z] ControllerIntegrationTest > 
testTopicIdMigrationAndHandlingWithOlderVersion() PASSED
[2021-12-22T21:56:25.276Z] 
[2021-12-22T21:56:25.276Z] ControllerIntegrationTest > 
testTopicPartitionExpansionWithOfflineReplica() STARTED
[2021-12-22T21:56:29.162Z] 
[2021-12-22T21:56:29.163Z] ControllerIntegrationTest > 
testTopicPartitionExpansionWithOfflineReplica() PASSED
[2021-12-22T21:56:29.163Z] 
[2021-12-22T21:56:29.163Z] ControllerIntegrationTest > 
testPartitionReassignmentToBrokerWithOfflineLogDir() STARTED
[2021-12-22T21:56:32.703Z] 
[2021-12-22T21:56:32.704Z] ControllerIntegrationTest > 
testPartitionReassignmentToBrokerWithOfflineLogDir() PASSED
[2021-12-22T21:56:32.704Z] 
[2021-12-22T21:56:32.704Z] ControllerIntegrationTest > 
testPreferredReplicaLeaderElectionWithOfflinePreferredReplica() STARTED
[2021-12-22T21:56:36.821Z] 
[2021-12-22T21:56:36.821Z] ControllerIntegrationTest > 
testPreferredReplicaLeaderElectionWithOfflinePreferredReplica() PASSED
[2021-12-22T21:56:36.821Z] 
[2021-12-22T21:56:36.821Z] ControllerIntegrationTest > 
testMetadataPropagationOnControlPlane() STARTED
[2021-12-22T21:56:39.054Z] 
[2021-12-22T21:56:39.054Z] ControllerIntegrationTest > 
testMetadataPropagationOnControlPlane() PASSED
[2021-12-22T21:56:39.054Z] 
[2021-12-22T21:56:39.054Z] ControllerIntegrationTest > 
testControllerFeatureZNodeSetupWhenFeatureVersioningIsEnabledWithNonExistingFeatureZNode()
 STARTED
[2021-12-22T21:56:41.175Z] 
[2021-12-22T21:56:41.176Z] ControllerIntegrationTest > 
testControllerFeatureZN

Jenkins build is still unstable: Kafka » Kafka Branch Builder » 2.8 #94

2021-12-22 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-719: Add Log4J2 Appender

2021-12-22 Thread Haruki Okada
Hi, Dongjin,

Sorry for interrupting the discussion.
And thank you for your hard work about KIP-653, KIP-719.

I understand that KIP-653 is already accepted, so log4j2 is the choice of
the Kafka community; still, I'm now feeling that logback is a better choice
here.

Reasons:

- even after "log4shell", several vulnerabilities have been found in log4j2,
so new versions keep being released and users have to update at a high pace
  * actually, a CVE was also reported for logback (CVE-2021-42550), but it
requires the attacker to have edit permission on the config file, so it's
much less threatening
- log4j 1.x and logback are made by the same developer (ceki), so logback,
rather than log4j2, is effectively the successor of log4j1
- in the Hadoop project, it seems a similar suggestion was made by a PMC member
  * https://issues.apache.org/jira/browse/HADOOP-12956


What do you think about adopting logback instead?


Thanks,

2021年12月21日(火) 18:02 Dongjin Lee :

> Hi Mickael,
>
> > In the meantime, you may want to bump the VOTE thread too.
>
> Sure, I just reset the voting thread with a brief context.
>
> Thanks,
> Dongjin
>
> On Tue, Dec 21, 2021 at 2:13 AM Mickael Maison 
> wrote:
>
> > Thanks Dongjin!
> >
> > I'll take a look soon.
> > In the meantime, you may want to bump the VOTE thread too.
> >
> > Best,
> > Mickael
> >
> >
> > On Sat, Dec 18, 2021 at 10:00 AM Dongjin Lee  wrote:
> > >
> > > Hi Mickael,
> > >
> > > Finally, I did it! As you can see at the PR
> > > , KIP-719 now uses
> log4j2's
> > > Kafka appender, and log4j-appender is not used by the other modules
> > > anymore. You can see how it will work with KIP-653 at this preview
> > > ,
> > based
> > > on Apache Kafka 3.0.0. The proposal document
> > > <
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-719%3A+Deprecate+Log4J+Appender
> > >
> > > is also updated accordingly, with its title.
> > >
> > > There is a minor issue on log4j2
> > > , but it seems like
> > it
> > > will be resolved soon.
> > >
> > > Best,
> > > Dongjin
> > >
> > > On Wed, Dec 15, 2021 at 9:28 PM Dongjin Lee 
> wrote:
> > >
> > > > Hi Mickael,
> > > >
> > > > > Can we do step 3 without breaking any compatibility? If so then
> that
> > > > sounds like a good idea.
> > > >
> > > > As far as I know, the answer is yes; I am now updating my PR, so I
> will
> > > > notify you as soon as I complete the work.
> > > >
> > > > Best,
> > > > Dongjin
> > > >
> > > > On Wed, Dec 15, 2021 at 2:00 AM Mickael Maison <
> > mickael.mai...@gmail.com>
> > > > wrote:
> > > >
> > > >> Hi Dongjin,
> > > >>
> > > >> Sorry for the late reply. Can we do step 3 without breaking any
> > > >> compatibility? If so then that sounds like a good idea.
> > > >>
> > > >> Thanks,
> > > >> Mickael
> > > >>
> > > >>
> > > >>
> > > >> On Tue, Nov 23, 2021 at 2:08 PM Dongjin Lee 
> > wrote:
> > > >> >
> > > >> > Hi Mickael,
> > > >> >
> > > >> > I also thought over the issue thoroughly and would like to
> propose a
> > > >> minor
> > > >> > change to your proposal:
> > > >> >
> > > >> > 1. Deprecate log4j-appender now
> > > >> > 2. Document how to migrate into logging-log4j2
> > > >> > 3. (Changed) Replace the log4j-appender (in turn log4j 1.x)
> > > >> dependencies in
> > > >> > tools, trogdor, and shell and upgrade to log4j2 in 3.x, removing
> > log4j
> > > >> 1.x
> > > >> > dependencies.
> > > >> > 4. (Changed) Remove log4j-appender in Kafka 4.0
> > > >> >
> > > >> > What we need to do for the log4j2 upgrade is just removing the
> log4j
> > > >> > dependencies only, for they can cause a classpath error. And
> > actually,
> > > >> we
> > > >> > can do it without discontinuing publishing the log4j-appender
> > artifact.
> > > >> So,
> > > >> > I suggest separating the upgrade to log4j2 and removing the
> > > >> log4j-appender
> > > >> > module.
> > > >> >
> > > >> > How do you think? If you agree, I will update the KIP and the PR
> > > >> > accordingly ASAP.
> > > >> >
> > > >> > Thanks,
> > > >> > Dongjin
> > > >> >
> > > >> > On Mon, Nov 15, 2021 at 8:06 PM Mickael Maison <
> > > >> mickael.mai...@gmail.com>
> > > >> > wrote:
> > > >> >
> > > >> > > Hi Dongjin,
> > > >> > >
> > > >> > > Thanks for the clarifications.
> > > >> > >
> > > >> > > I wonder if a simpler course of action could be:
> > > >> > > - Deprecate log4j-appender now
> > > >> > > - Document how to use logging-log4j2
> > > >> > > - Remove log4j-appender and all the log4j dependencies in Kafka
> > 4.0
> > > >> > >
> > > >> > > This delays KIP-653 till Kafka 4.0 but (so far) Kafka is not
> > directly
> > > >> > > affected by the log4j CVEs. At least this gives us a clear and
> > simple
> > > >> > > roadmap to follow.
> > > >> > >
> > > >> > > What do you think?
> > > >> > >
> > > >> > > On Tue, Nov 9, 2021 at 12:12 PM Dongjin Lee  >
> > > >> wrote:
> > > >> > > >
> > > >> > > > Hi Mickael,
> > > >> > > >
> > > >> > > > I

Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.1 #48

2021-12-22 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 500453 lines...]
[2021-12-23T01:16:44.765Z] ControllerIntegrationTest > 
testPreemptionWithCallbacks() STARTED
[2021-12-23T01:16:50.585Z] 
[2021-12-23T01:16:50.585Z] ControllerIntegrationTest > 
testPreemptionWithCallbacks() PASSED
[2021-12-23T01:16:50.585Z] 
[2021-12-23T01:16:50.585Z] ControllerIntegrationTest > 
testControllerDetectsBouncedBrokers() STARTED
[2021-12-23T01:16:55.258Z] 
[2021-12-23T01:16:55.258Z] ControllerIntegrationTest > 
testControllerDetectsBouncedBrokers() PASSED
[2021-12-23T01:16:55.258Z] 
[2021-12-23T01:16:55.258Z] ControllerIntegrationTest > testControlledShutdown() 
STARTED
[2021-12-23T01:16:59.929Z] 
[2021-12-23T01:16:59.929Z] ControllerIntegrationTest > testControlledShutdown() 
PASSED
[2021-12-23T01:16:59.929Z] 
[2021-12-23T01:16:59.929Z] ControllerIntegrationTest > 
testPreemptionOnControllerShutdown() STARTED
[2021-12-23T01:17:01.708Z] 
[2021-12-23T01:17:01.708Z] ControllerIntegrationTest > 
testPreemptionOnControllerShutdown() PASSED
[2021-12-23T01:17:01.708Z] 
[2021-12-23T01:17:01.708Z] ControllerIntegrationTest > 
testPartitionReassignmentWithOfflineReplicaHaltingProgress() STARTED
[2021-12-23T01:17:06.549Z] 
[2021-12-23T01:17:06.549Z] ControllerIntegrationTest > 
testPartitionReassignmentWithOfflineReplicaHaltingProgress() PASSED
[2021-12-23T01:17:06.549Z] 
[2021-12-23T01:17:06.549Z] ControllerIntegrationTest > 
testNoTopicIdPersistsThroughControllerReelection() STARTED
[2021-12-23T01:17:09.216Z] 
[2021-12-23T01:17:09.216Z] ControllerIntegrationTest > 
testNoTopicIdPersistsThroughControllerReelection() PASSED
[2021-12-23T01:17:09.216Z] 
[2021-12-23T01:17:09.216Z] ControllerIntegrationTest > 
testControllerEpochPersistsWhenAllBrokersDown() STARTED
[2021-12-23T01:17:11.882Z] 
[2021-12-23T01:17:11.882Z] ControllerIntegrationTest > 
testControllerEpochPersistsWhenAllBrokersDown() PASSED
[2021-12-23T01:17:11.882Z] 
[2021-12-23T01:17:11.882Z] ControllerIntegrationTest > testTopicIdsAreAdded() 
STARTED
[2021-12-23T01:17:13.660Z] 
[2021-12-23T01:17:13.660Z] ControllerIntegrationTest > testTopicIdsAreAdded() 
PASSED
[2021-12-23T01:17:13.660Z] 
[2021-12-23T01:17:13.660Z] ControllerIntegrationTest > 
testTopicCreationWithOfflineReplica() STARTED
[2021-12-23T01:17:19.494Z] 
[2021-12-23T01:17:19.494Z] ControllerIntegrationTest > 
testTopicCreationWithOfflineReplica() PASSED
[2021-12-23T01:17:19.494Z] 
[2021-12-23T01:17:19.494Z] ControllerIntegrationTest > 
testPartitionReassignmentResumesAfterReplicaComesOnline() STARTED
[2021-12-23T01:17:24.164Z] 
[2021-12-23T01:17:24.164Z] ControllerIntegrationTest > 
testPartitionReassignmentResumesAfterReplicaComesOnline() PASSED
[2021-12-23T01:17:24.164Z] 
[2021-12-23T01:17:24.164Z] ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionDisabled() STARTED
[2021-12-23T01:17:28.837Z] 
[2021-12-23T01:17:28.837Z] ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionDisabled() PASSED
[2021-12-23T01:17:28.837Z] 
[2021-12-23T01:17:28.837Z] ControllerIntegrationTest > 
testTopicIdMigrationAndHandlingWithOlderVersion() STARTED
[2021-12-23T01:17:30.614Z] 
[2021-12-23T01:17:30.614Z] ControllerIntegrationTest > 
testTopicIdMigrationAndHandlingWithOlderVersion() PASSED
[2021-12-23T01:17:30.614Z] 
[2021-12-23T01:17:30.614Z] ControllerIntegrationTest > 
testTopicPartitionExpansionWithOfflineReplica() STARTED
[2021-12-23T01:17:35.285Z] 
[2021-12-23T01:17:35.285Z] ControllerIntegrationTest > 
testTopicPartitionExpansionWithOfflineReplica() PASSED
[2021-12-23T01:17:35.285Z] 
[2021-12-23T01:17:35.285Z] ControllerIntegrationTest > 
testPartitionReassignmentToBrokerWithOfflineLogDir() STARTED
[2021-12-23T01:17:37.951Z] 
[2021-12-23T01:17:37.951Z] ControllerIntegrationTest > 
testPartitionReassignmentToBrokerWithOfflineLogDir() PASSED
[2021-12-23T01:17:37.951Z] 
[2021-12-23T01:17:37.951Z] ControllerIntegrationTest > 
testPreferredReplicaLeaderElectionWithOfflinePreferredReplica() STARTED
[2021-12-23T01:17:42.620Z] 
[2021-12-23T01:17:42.620Z] ControllerIntegrationTest > 
testPreferredReplicaLeaderElectionWithOfflinePreferredReplica() PASSED
[2021-12-23T01:17:42.620Z] 
[2021-12-23T01:17:42.620Z] ControllerIntegrationTest > 
testMetadataPropagationOnControlPlane() STARTED
[2021-12-23T01:17:44.398Z] 
[2021-12-23T01:17:44.398Z] ControllerIntegrationTest > 
testMetadataPropagationOnControlPlane() PASSED
[2021-12-23T01:17:44.398Z] 
[2021-12-23T01:17:44.398Z] ControllerIntegrationTest > 
testControllerFeatureZNodeSetupWhenFeatureVersioningIsEnabledWithNonExistingFeatureZNode()
 STARTED
[2021-12-23T01:17:47.064Z] 
[2021-12-23T01:17:47.064Z] ControllerIntegrationTest > 
testControllerFeatureZNodeSetupWhenFeatureVersioningIsEnabledWithNonExistingFeatureZNode()
 PASSED
[2021-12-23T01:17:47.064Z] 
[2021-12-23T01:17:47.064Z] ControllerIntegrationTest > testAlter

Re: [DISCUSS] KIP-719: Add Log4J2 Appender

2021-12-22 Thread Dongjin Lee
Hi Haruki,


Thanks for organizing the issue.


If the community prefers logback, I will gladly change the dependency and
update the PR. However, it has the following issues:


1. The log4j2 vulnerabilities seem mostly fixed, and KIP-653 + KIP-719 are
not released yet. So, using log4j2 (whose recent update pace has been so high)
will not affect users.


2. To switch to logback, the following features should be reworked:


  a. Dynamic logger level configuration (core, connect)

  b. Logging tests (streams)

  c. Kafka Appender (tools)


a and b are the most challenging ones, since there is little documentation
on how to do this, so it requires analyzing the implementation itself
(which is what I actually did for log4j2). As for c, logback does not provide
a Kafka appender, so we would have to provide an equivalent.
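
For context on point c: log4j2 ships a Kafka appender out of the box, which logback does not. A minimal sketch of a log4j2.xml using it might look like the following (the topic name and bootstrap server are placeholders, not values from this thread):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
  <Appenders>
    <!-- Built-in log4j2 Kafka appender; logback has no direct equivalent -->
    <Kafka name="KafkaAppender" topic="app-logs">
      <PatternLayout pattern="%date %level %logger{36} %message"/>
      <Property name="bootstrap.servers">localhost:9092</Property>
    </Kafka>
  </Appenders>
  <Loggers>
    <!-- Keep the Kafka client itself off the Kafka appender
         to avoid recursive logging -->
    <Logger name="org.apache.kafka" level="warn"/>
    <Root level="info">
      <AppenderRef ref="KafkaAppender"/>
    </Root>
  </Loggers>
</Configuration>
```

With logback, an equivalent appender class would have to be written and maintained by the project itself.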


That is why I prefer to use log4j2. What do you think?


Thanks,

Dongjin


On Thu, Dec 23, 2021 at 9:01 AM Haruki Okada  wrote:

> Hi, Dongjin,
>
> Sorry for interrupting the discussion.
> And thank you for your hard work about KIP-653, KIP-719.
>
> I understand that KIP-653 is already accepted, so log4j2 is the choice of
> the Kafka community; still, I'm now feeling that logback is a better choice
> here.
>
> Reasons:
>
> - even after "log4shell", several vulnerabilities have been found in log4j2,
> so new versions keep being released and users have to update at a high pace
> * actually, a CVE was also reported for logback (CVE-2021-42550), but it
> requires the attacker to have edit permission on the config file, so it's
> much less threatening
> - log4j 1.x and logback are made by the same developer (ceki), so logback,
> rather than log4j2, is effectively the successor of log4j1
> - in the Hadoop project, it seems a similar suggestion was made by a PMC member
> * https://issues.apache.org/jira/browse/HADOOP-12956
>
>
> What do you think about adopting logback instead?
>
>
> Thanks,
>
> 2021年12月21日(火) 18:02 Dongjin Lee :
>
> > Hi Mickael,
> >
> > > In the meantime, you may want to bump the VOTE thread too.
> >
> > Sure, I just reset the voting thread with a brief context.
> >
> > Thanks,
> > Dongjin
> >
> > On Tue, Dec 21, 2021 at 2:13 AM Mickael Maison  >
> > wrote:
> >
> > > Thanks Dongjin!
> > >
> > > I'll take a look soon.
> > > In the meantime, you may want to bump the VOTE thread too.
> > >
> > > Best,
> > > Mickael
> > >
> > >
> > > On Sat, Dec 18, 2021 at 10:00 AM Dongjin Lee 
> wrote:
> > > >
> > > > Hi Mickael,
> > > >
> > > > Finally, I did it! As you can see at the PR
> > > > , KIP-719 now uses
> > log4j2's
> > > > Kafka appender, and log4j-appender is not used by the other modules
> > > > anymore. You can see how it will work with KIP-653 at this preview
> > > > ,
> > > based
> > > > on Apache Kafka 3.0.0. The proposal document
> > > > <
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-719%3A+Deprecate+Log4J+Appender
> > > >
> > > > is also updated accordingly, with its title.
> > > >
> > > > There is a minor issue on log4j2
> > > > , but it seems
> like
> > > it
> > > > will be resolved soon.
> > > >
> > > > Best,
> > > > Dongjin
> > > >
> > > > On Wed, Dec 15, 2021 at 9:28 PM Dongjin Lee 
> > wrote:
> > > >
> > > > > Hi Mickael,
> > > > >
> > > > > > Can we do step 3 without breaking any compatibility? If so then
> > that
> > > > > sounds like a good idea.
> > > > >
> > > > > As far as I know, the answer is yes; I am now updating my PR, so I
> > will
> > > > > notify you as soon as I complete the work.
> > > > >
> > > > > Best,
> > > > > Dongjin
> > > > >
> > > > > On Wed, Dec 15, 2021 at 2:00 AM Mickael Maison <
> > > mickael.mai...@gmail.com>
> > > > > wrote:
> > > > >
> > > > >> Hi Dongjin,
> > > > >>
> > > > >> Sorry for the late reply. Can we do step 3 without breaking any
> > > > >> compatibility? If so then that sounds like a good idea.
> > > > >>
> > > > >> Thanks,
> > > > >> Mickael
> > > > >>
> > > > >>
> > > > >>
> > > > >> On Tue, Nov 23, 2021 at 2:08 PM Dongjin Lee 
> > > wrote:
> > > > >> >
> > > > >> > Hi Mickael,
> > > > >> >
> > > > >> > I also thought over the issue thoroughly and would like to
> > propose a
> > > > >> minor
> > > > >> > change to your proposal:
> > > > >> >
> > > > >> > 1. Deprecate log4j-appender now
> > > > >> > 2. Document how to migrate into logging-log4j2
> > > > >> > 3. (Changed) Replace the log4j-appender (in turn log4j 1.x)
> > > > >> dependencies in
> > > > >> > tools, trogdor, and shell and upgrade to log4j2 in 3.x, removing
> > > log4j
> > > > >> 1.x
> > > > >> > dependencies.
> > > > >> > 4. (Changed) Remove log4j-appender in Kafka 4.0
> > > > >> >
> > > > >> > What we need to do for the log4j2 upgrade is just removing the
> > log4j
> > > > >> > dependencies only, for they can cause a classpath error. And
> > > actually,
> > > > >> we
> > > > >> > can do it without discontinuing publishing the log4j-appender
> > 

[jira] [Created] (KAFKA-13564) Kafka keeps printing NOT_LEADER_OR_FOLLOWER in the log file after one broker dropped, and the producer cannot work.

2021-12-22 Thread ZhenChun Pan (Jira)
ZhenChun Pan created KAFKA-13564:


 Summary: Kafka keeps printing NOT_LEADER_OR_FOLLOWER in the log file after 
one broker dropped, and the producer cannot work.
 Key: KAFKA-13564
 URL: https://issues.apache.org/jira/browse/KAFKA-13564
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: ZhenChun Pan


The machine hosting broker0 went down, and some partitions changed their leader 
to broker1. We can find messages like the one below in state-change.log:

[2021-12-11 15:34:14,868] TRACE [Broker id=0] Cached leader info 
UpdateMetadataPartitionState(topicName='ceae-1002-flink-characteristic-instance-data',
 partitionIndex=0, controllerEpoch=3, leader=1, leaderEpoch=8, isr=[1], 
zkVersion*#*#*, offlineReplicas=[]) for partition 
ceae-1002-flink-characteristic-instance-data-0 in response to UpdateMetadata 
request sent by controller 2 epoch 3 with correlation id 0 (state.change.logger)

But we found that server.log kept printing logs like the one below:

[2021-12-11 15:34:30,272] INFO [ReplicaFetcher replicaId=0, leaderId=1, 
fetcherId=6] Retrying leaderEpoch request for partition 
ceae-1002-flink-characteristic-instance-data-0 as the leader reported an error: 
NOT_LEADER_OR_FOLLOWER (kafka.server.ReplicaFetcherThread)

The producer also could not work, and messages like the one below kept being printed:

[2021-12-11 16:00:00,703] INFO [ReplicaFetcher replicaId=0, leaderId=1, 
fetcherId=4] Retrying leaderEpoch request for partition 
ceae-1002-flink-characteristic-instance-data-0 as the leader reported an error: 
NOT_LEADER_OR_FOLLOWER (kafka.server.ReplicaFetcherThread)

We resumed broker0, but that did not help. So we restarted all brokers of the 
Kafka cluster, which fixed the problem.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)