Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #234

2021-06-18 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-12835) Topic IDs can mismatch on brokers (after interbroker protocol version update)

2021-06-18 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-12835.
-
Fix Version/s: 3.0.0
 Reviewer: David Jacot
   Resolution: Fixed

> Topic IDs can mismatch on brokers (after interbroker protocol version update)
> -
>
> Key: KAFKA-12835
> URL: https://issues.apache.org/jira/browse/KAFKA-12835
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.8.0
>Reporter: Ivan Yurchenko
>Assignee: Justine Olshan
>Priority: Major
> Fix For: 3.0.0
>
>
> We had a Kafka cluster running version 2.8 with the interbroker protocol set to 
> 2.7. It had a number of topics and everything was fine.
> Then we decided to update the interbroker protocol to 2.8 by the following 
> procedure:
> 1. Run new brokers with the interbroker protocol set to 2.8.
> 2. Move the data from the old brokers to the new ones (normal partition 
> reassignment API).
> 3. Decommission the old brokers.
> At stage 2 we hit a problem: the old brokers started failing on 
> {{LeaderAndIsrRequest}} handling with
> {code:java}
> ERROR [Broker id=<...>] Topic Id in memory: <...> does not match the topic Id 
> for partition <...> provided in the request: <...>. (state.change.logger)
> {code}
> for multiple topics. Topics were not recreated.
> We checked {{partition.metadata}} files and IDs there were indeed different 
> from the values in ZooKeeper. It was fixed by deleting the metadata files 
> (and letting them be recreated).
>  
> The logs, unfortunately, didn't show anything that might point to the cause 
> of the issue (or it happened longer ago than we store the logs).
> We also tried to reproduce this, but without success.
> If the community can point out what to check or beware of in the future, that 
> would be great. We'll be happy to provide additional information if needed. 
> Thank you!
> Sorry that this ticket might not be very actionable. We hope to at least 
> raise awareness of this issue.
>  
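
A quick way to spot the mismatch described above is to compare the topic_id recorded in each partition directory's partition.metadata file against the ID the cluster expects. Below is a minimal sketch, assuming the plain-text "version:"/"topic_id:" file layout introduced in 2.8; how you obtain the expected ID (e.g. from ZooKeeper) is left to the caller.

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class PartitionMetadataCheck {

    // Reads the topic_id line from a partition directory's partition.metadata file.
    static String readTopicId(Path partitionDir) throws IOException {
        for (String line : Files.readAllLines(partitionDir.resolve("partition.metadata"))) {
            if (line.startsWith("topic_id:")) {
                return line.substring("topic_id:".length()).trim();
            }
        }
        return null;
    }

    public static void main(String[] args) throws IOException {
        Path partitionDir = Path.of(args[0]);   // e.g. /var/kafka-logs/my-topic-0
        String expectedTopicId = args[1];       // the ID the cluster expects (e.g. from ZooKeeper)
        String onDiskTopicId = readTopicId(partitionDir);
        if (expectedTopicId.equals(onDiskTopicId)) {
            System.out.println("OK: " + onDiskTopicId);
        } else {
            System.out.println("MISMATCH: on disk " + onDiskTopicId + ", expected " + expectedTopicId);
        }
    }
}
{code}

As in the report above, deleting the stale partition.metadata files and letting them be recreated was the workaround.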



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12816) Add tier storage configs.

2021-06-18 Thread Jun Rao (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-12816.
-
Fix Version/s: 3.0.0
   Resolution: Fixed

merged the PR to trunk

> Add tier storage configs. 
> --
>
> Key: KAFKA-12816
> URL: https://issues.apache.org/jira/browse/KAFKA-12816
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Satish Duggana
>Assignee: Satish Duggana
>Priority: Major
> Fix For: 3.0.0
>
>
> Add all the tier storage related configurations including remote log manager, 
> remote storage manager, and remote log metadata manager. 
> These configs are described in the KIP-405 
> [here|https://cwiki.apache.org/confluence/display/KAFKA/KIP-405%3A+Kafka+Tiered+Storage#KIP405:KafkaTieredStorage-Configs.1].
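
For illustration only, a broker-side sketch of wiring these settings up might look like the following. The config keys are taken from the KIP-405 page linked above; treat the exact key names, and especially the plugin class names, as assumptions to verify against the KIP and your Kafka version.

{code:java}
import java.util.Properties;

public class TieredStorageConfigSketch {
    public static void main(String[] args) {
        Properties brokerProps = new Properties();
        // Turn on the tiered storage subsystem (remote log manager).
        brokerProps.put("remote.log.storage.system.enable", "true");
        // Pluggable RemoteStorageManager implementation -- class name is hypothetical.
        brokerProps.put("remote.log.storage.manager.class.name",
                "com.example.tiered.MyRemoteStorageManager");
        // Pluggable RemoteLogMetadataManager implementation -- class name is hypothetical.
        brokerProps.put("remote.log.metadata.manager.class.name",
                "com.example.tiered.MyRemoteLogMetadataManager");
        brokerProps.forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
{code}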



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #236

2021-06-18 Thread Apache Jenkins Server
See 




Contributor List

2021-06-18 Thread Kai Herrera
Hello,

I would like to be added to the contributor list.

Username: Kai1015

Many thanks! 
Kai


Re: Contributor List

2021-06-18 Thread Boyang Chen
Added you to the Jira contributor list, thanks for your interest. Let me know
your wiki user id so that I can add you there as well.

Boyang

On Fri, Jun 18, 2021 at 10:11 AM Kai Herrera  wrote:

> Hello,
>
> I would like to be added to the contributor list.
>
> Username: Kai1015
>
> Many thanks!
> Kai
>


Re: [DISCUSS] KIP-714: Client metrics and observability

2021-06-18 Thread Colin McCabe
On Thu, Jun 17, 2021, at 12:13, Ryanne Dolan wrote:
> Colin,
> 
> > lack of support for collecting client metrics
> 
> ...but kafka is not a metrics collector. There are lots of things kafka
> doesn't support. Should it also collect clients' logs for the same reasons?
> What other side channels should it proxy through brokers?
> 

Hi Ryanne,

Kafka already is a metrics collector. 

Take a look at KIP-511: "Collect and Expose Client's Name and Version in the 
Brokers," which aggregates metrics from various clients and re-exposes them as 
broker metrics. Or KIP-607: "Add Metrics to Kafka Streams to Report Properties 
of RocksDB," which aggregates metrics from the local RocksDB instances and 
re-exposes them. Or KIP-608: "Expose Kafka Metrics in Authorizer". Or lots of 
other KIPs.

This has been the direction we've been moving for a while. It's a direction 
motivated by our experiences in the field with users, who find it cumbersome to 
set up dedicated infra to monitor individual Kafka clients. Magnus, especially, 
has a huge amount of experience here.

>
> > He mentioned the fact that configuring client metrics usually involves
> > setting up a separate metrics collection infrastructure.
> 
> This is not changed with the KIP. It's just a matter of who owns that
> infra, which I don't think should matter to Apache Kafka.
> 

Magnus and I have explained a few times why it does matter. Within most 
organizations there are usually several teams using clients, and those teams are 
separate from the team that maintains the Kafka cluster. The Kafka team has 
the Kafka experts, which makes it the best place to centralize collecting and 
analyzing Kafka metrics.

In a sense the whole concept of cloud computing is "just a matter of who owns 
infra." It is quite important to users.

> We already have MetricsReporter. I still don't see specific motivation
> beyond the "opt-out" part?
> 
> I think we need exceptional motivation for such a proposal.
> 

 As I've said earlier, if you are happy with the current metrics setup, then 
you can continue using it -- nothing in this KIP means you have to change what 
you're doing.
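
For reference, the existing client-side path mentioned above is the pluggable MetricsReporter interface in org.apache.kafka.common.metrics. A minimal, purely illustrative reporter that just logs metric updates might look like this (error handling omitted; the reconfiguration methods have defaults in recent client versions):

{code:java}
import java.util.List;
import java.util.Map;
import org.apache.kafka.common.metrics.KafkaMetric;
import org.apache.kafka.common.metrics.MetricsReporter;

// Logs metric registrations and updates; enabled on a client via the
// "metric.reporters" config.
public class LoggingMetricsReporter implements MetricsReporter {

    @Override
    public void configure(Map<String, ?> configs) {
        // no configuration needed for this sketch
    }

    @Override
    public void init(List<KafkaMetric> metrics) {
        metrics.forEach(m -> System.out.println("registered: " + m.metricName()));
    }

    @Override
    public void metricChange(KafkaMetric metric) {
        // called when a metric is added or updated
        System.out.println("updated: " + metric.metricName() + " = " + metric.metricValue());
    }

    @Override
    public void metricRemoval(KafkaMetric metric) {
        System.out.println("removed: " + metric.metricName());
    }

    @Override
    public void close() {
        // nothing to clean up
    }
}
{code}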

best,
Colin


> On Thu, Jun 17, 2021, 1:43 PM Colin McCabe  wrote:
> 
> > Hi Ryanne,
> >
> > These are not "arguments for observability in general" but descriptions of
> > specific issues that come up due to Kafka's lack of support for collecting
> > client metrics. He mentioned the fact that configuring client metrics
> > usually involves setting up a separate metrics collection infrastructure.
> > Even if this is easy and straightforward to do (which is not the case for
> > most organizations), it still requires reconfiguring and restarting the
> > application, which is disruptive. Correlating client metrics with server
> > metrics is also often hard. These issues are all mitigated by centralizing
> > metrics collection on the broker.
> >
> > best,
> > Colin
> >
> >
> > On Wed, Jun 16, 2021, at 19:03, Ryanne Dolan wrote:
> > > Magnus, I think these are arguments for observability in general, but not
> > > why Kafka should sit between a client and a metrics collector.
> > >
> > > Ryanne
> > >
> > > On Wed, Jun 16, 2021, 10:27 AM Magnus Edenhill  wrote:
> > >
> > > > Hi Ryanne,
> > > >
> > > > this proposal stems from a need to improve troubleshooting Kafka issues.
> > > >
> > > > As it currently stands, when an application team is experiencing Kafka
> > > > service degradation, or the Kafka operator is seeing misbehaving clients,
> > > > there are plenty of steps that need to be taken before any client-side
> > > > metrics can be observed at all, if at all:
> > > >  - Is the application even collecting client metrics? If not it needs to be
> > > >reconfigured or implemented, and restarted; a restart may have business
> > > >impact, and may also temporarily remedy the problem without giving any
> > > >further insight into what was wrong.
> > > >  - Are the desired metrics collected? Where are they stored? For how long?
> > > >Is there enough correlating information to map it to cluster-side metrics
> > > >and events? Does the application on-call know how to find the collected
> > > >metrics?
> > > >  - Export and send these metrics to whoever knows how to interpret them. In
> > > >what format? Are all relevant metadata fields provided?
> > > >
> > > > The KIP aims to solve all these obstacles by giving the Kafka operator the
> > > > tools to collect this information.
> > > >
> > > > Regards,
> > > > Magnus
> > > >
> > > >
> > > > On Tue, Jun 15, 2021 at 02:37, Ryanne Dolan <ryannedo...@gmail.com> wrote:
> > > >
> > > > > Magnus, I think such a substantial change requires more motivation than is
> > > > > currently provided. As I read it, the motivation boils down to this: you
> > > > > want your clients to phone-home unless they opt-out. As stated in the KIP,
> > > > > "there are plenty of e

Jenkins build is still unstable: Kafka » Kafka Branch Builder » 2.8 #34

2021-06-18 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » kafka-2.6-jdk8 #123

2021-06-18 Thread Apache Jenkins Server
See 


Changes:

[Randall Hauch] KAFKA-12252 and KAFKA-12262: Fix session key rotation when 
leadership changes (#10014)


--
[...truncated 3.18 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

Build failed in Jenkins: Kafka » kafka-2.7-jdk8 #159

2021-06-18 Thread Apache Jenkins Server
See 


Changes:

[Randall Hauch] KAFKA-12252 and KAFKA-12262: Fix session key rotation when 
leadership changes (#10014)


--
[...truncated 3.45 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kaf

[jira] [Resolved] (KAFKA-12837) Process entire batch in broker metadata listener

2021-06-18 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-12837.
-
Fix Version/s: 3.0.0
 Assignee: Jose Armando Garcia Sancio
   Resolution: Fixed

> Process entire batch in broker metadata listener
> 
>
> Key: KAFKA-12837
> URL: https://issues.apache.org/jira/browse/KAFKA-12837
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jose Armando Garcia Sancio
>Assignee: Jose Armando Garcia Sancio
>Priority: Major
>  Labels: kip-500
> Fix For: 3.0.0
>
>
> The current {{BrokerMetadataListener}} processes one batch at a time even 
> though it is possible for the {{BatchReader}} to contain more than one 
> batch. This is functionally correct, but it would require less coordination 
> between the {{RaftIOThread}} and the broker metadata listener thread if the 
> broker were changed to process all of the batches included in the 
> {{BatchReader}} sent through {{handleCommit}}.
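
To make the proposed change concrete, here is a simplified sketch of draining every batch per handleCommit call. The interfaces below are stand-ins defined inline, not the real Kafka raft types; they only illustrate the idea of processing the whole reader in one call.

{code:java}
import java.util.Iterator;
import java.util.List;

// Simplified stand-ins for the raft listener types; the real Kafka interfaces
// differ -- this only illustrates draining every batch per handleCommit call.
interface Batch<T> {
    List<T> records();
    long lastOffset();
}

interface BatchReader<T> extends Iterator<Batch<T>>, AutoCloseable {
    @Override
    void close();
}

class MetadataListenerSketch<T> {
    private long highestProcessedOffset = -1L;

    // Instead of handling a single batch and returning, drain the whole reader so
    // less hand-off is needed between the Raft IO thread and the listener thread.
    public void handleCommit(BatchReader<T> reader) {
        try (reader) {
            while (reader.hasNext()) {
                Batch<T> batch = reader.next();
                batch.records().forEach(this::apply);
                highestProcessedOffset = batch.lastOffset();
            }
        }
    }

    private void apply(T record) {
        // apply the metadata record to the broker's in-memory state (omitted)
    }
}
{code}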



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-714: Client metrics and observability

2021-06-18 Thread Travis Bischel
Hi Colin (and Magnus),

Thanks for the replies!

I think my biggest concern is the cardinality bits. I'm sympathetic to the 
aspect of this making it easier for Kafka brokers to understand *every* 
aspect of the Kafka ecosystem. I am not sure this will 100% solve the need 
there, though: if a client is unable to connect to a broker, visibility 
disappears immediately, no?

I do still think that the difficulty of monitoring within an organization 
results from issues within the organization itself: orgs should have proper 
processes in place such that anything talking to Kafka has the org's plug-in 
monitoring libraries. Kafka operators can define those libraries, such that all 
clients in the org have the libraries the operators require. This satisfies the 
same goals this KIP aims to provide, albeit with the added org cost of having to 
define and roll out what gets plugged in rather than getting it out of the box.

If Kafka operators themselves can choose which metrics they want, so that the 
broker can tell the client "only send these metrics", then my biggest concern is 
removed.

I do still think that hooks can be a cleaner abstraction for this same goal, and 
then pre-provided libraries (say, "this library provides X, Y, Z and sends to 
Prometheus from your client") could exist that more exactly satisfy what this 
KIP aims to provide. This would avoid the kitchen-sink vs. 
not-comprehensive-enough issue I brought up previously, and it would also avoid 
requiring KIPs for any supported metrics.
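
To make the "org-provided plug-in monitoring library" idea concrete: in practice it usually amounts to shipping a MetricsReporter implementation and mandating it in client configs. A minimal sketch follows; the reporter class name is purely hypothetical.

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerWithOrgMetrics {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // The org-wide monitoring hook: a MetricsReporter that ships client metrics
        // wherever the org wants them -- the class name here is hypothetical.
        props.put(ProducerConfig.METRIC_REPORTER_CLASSES_CONFIG,
                "com.example.monitoring.OrgMetricsReporter");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test-topic", "key", "value"));
        }
    }
}
{code}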

On 2021/06/16 22:27:55, "Colin McCabe"  wrote: 
> On Sun, Jun 13, 2021, at 21:51, Travis Bischel wrote:
> > Hi! I have a few thoughts on this KIP. First, I'd like to thank you for 
> > the writeup,
> > clearly a lot of thought has gone into it and it is very thorough. 
> > However, I'm not
> > convinced it's the right approach from a fundamental level.
> > 
> > Fundamentally, this KIP seems like somewhat of a solution to an 
> > organizational
> > problem. Metrics are organizational concerns, not Kafka operator concerns.
> 
> Hi Travis,
> 
> Metrics are certainly Kafka operator concerns. It is very important for 
> cluster operators to know things like how many clients there are, what the 
> clients are doing, and so forth. This information is needed to administer 
> Kafka. Therefore it certainly falls in the domain of the Kafka operations 
> team (and the Kafka development team).
> 
> We have added many metrics in the past to make it easier to monitor clients. 
> I think this is just another step in that direction.
> 
> > Clients should make it easy to plug in metrics (this is the approach I take 
> > in
> > my own client), and organizations should have processes such that all 
> > clients
> > gather and ship metrics how that organization desires.
> >
> > If an organization is set up correctly, there is no reason for metrics to be
> > forwarded through Kafka. This feels like a solution to an organization not
> > properly setting up how processes ship metrics, and in some ways, it's an
> > overbroad solution, and in other ways, it doesn't cover the entire problem.
> 
> I think the reason was explained pretty clearly: many admins find it 
> difficult to set up monitoring for every client in the organization. In 
> general the team which maintains a Kafka cluster is often separate from the 
> teams that use the cluster. Therefore rolling out monitoring for clients can 
> be very difficult to coordinate.
> 
> No metrics will ever cover every possible use-case, but the set proposed here 
> does seem useful.
> 
> > 
> > From the perspective of Kafka operators, it is easy to see that this KIP is
> > nice in that it just dictates what clients should support for metrics and 
> > that
> > the metrics should ship through Kafka. But, from the perspective of an
> > observability team, this workflow is basically hijacking the standard flow 
> > that
> > organizations may have. I would rather have applications collect metrics and
> > ship them the same way every other application does. I'd rather not have to
> > configure additional plugins within Kafka to take metrics and forward them.
> 
> This change doesn't remove any functionality. If you don't want to use 
> KIP-714 metrics collection, you can simply turn it off and continue 
> collecting metrics the way you always have.
> 
> > 
> > More importantly, this KIP prescribes cardinality problems, requires that to
> > officially support the KIP a client must support all relevant metrics within
> > the KIP, and requires that a client cannot support other metrics unless 
> > those
> > other metrics also go through a KIP process. It is difficult to imagine all 
> > of
> > these metrics being relevant to every organization, and there is no way for 
> > an
> > organization to filter what is relevant within the client. Instead, the
> > filtering is pushed downwards, meaning more network IO and more CPU costs to
> > filter what is irrelevant and aggregate what needs to be aggregated, and 
> > more
> > tim

[GitHub] [kafka-site] jlprat opened a new pull request #359: MINOR: Backport of docs fixes to 28

2021-06-18 Thread GitBox


jlprat opened a new pull request #359:
URL: https://github.com/apache/kafka-site/pull/359


   This PR backports https://github.com/apache/kafka/pull/10766/ to folder
   28 as indicated in the same PR.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka-site] jlprat commented on pull request #359: MINOR: Backport of docs fixes to 28

2021-06-18 Thread GitBox


jlprat commented on pull request #359:
URL: https://github.com/apache/kafka-site/pull/359#issuecomment-863870545


   Hi @bbejeck ready for review


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka-site] bbejeck merged pull request #359: MINOR: Backport of docs fixes to 28

2021-06-18 Thread GitBox


bbejeck merged pull request #359:
URL: https://github.com/apache/kafka-site/pull/359


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka-site] jlprat commented on pull request #359: MINOR: Backport of docs fixes to 28

2021-06-18 Thread GitBox


jlprat commented on pull request #359:
URL: https://github.com/apache/kafka-site/pull/359#issuecomment-864286010


   Thanks for the review @bbejeck 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




Build failed in Jenkins: Kafka » kafka-2.7-jdk8 #160

2021-06-18 Thread Apache Jenkins Server
See 


Changes:

[Randall Hauch] MINOR: Use MessageDigest equals when comparing signature 
(#10898)


--
[...truncated 3.45 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullR

[jira] [Resolved] (KAFKA-12870) RecordAccumulator stuck in a flushing state

2021-06-18 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-12870.
-
Resolution: Fixed

> RecordAccumulator stuck in a flushing state
> ---
>
> Key: KAFKA-12870
> URL: https://issues.apache.org/jira/browse/KAFKA-12870
> Project: Kafka
>  Issue Type: Bug
>  Components: producer , streams
>Affects Versions: 2.5.1, 2.8.0, 2.7.1, 2.6.2
>Reporter: Niclas Lockner
>Assignee: Jason Gustafson
>Priority: Major
> Fix For: 3.0.0, 2.8.1
>
> Attachments: RecordAccumulator.log, full.log
>
>
> After a Kafka Stream with exactly once enabled has performed its first 
> commit, the RecordAccumulator within the stream's internal producer gets 
> stuck in a state where all subsequent ProducerBatches that get allocated are 
> immediately flushed instead of being held in memory until they expire, 
> regardless of the stream's linger or batch size config.
> This is reproduced in the example code found at 
> [https://github.com/niclaslockner/kafka-12870] which can be run with 
> ./gradlew run --args=
> The example has a producer that sends 1 record/sec to one topic, and a Kafka 
> stream with EOS enabled that forwards the records from that topic to another 
> topic with the configuration linger = 5 sec, commit interval = 10 sec.
>  
> The expected behavior when running the example is that the stream's 
> ProducerBatches will expire (or get flushed because of the commit) every 5th 
> second, and that the stream's producer will send a ProduceRequest every 5th 
> second with an expired ProducerBatch that contains 5 records.
> The actual behavior is that the ProducerBatch is made immediately available 
> for the Sender, and the Sender sends one ProduceRequest for each record.
>  
> The example code contains a copy of the RecordAccumulator class (copied from 
> kafka-clients 2.8.0) with some additional logging added to
>  * RecordAccumulator#ready(Cluster, long)
>  * RecordAccumulator#beginFlush()
>  * RecordAccumulator#awaitFlushCompletion()
> These log entries show (see the attached RecordsAccumulator.log)
>  * that the batches are considered sendable because a flush is in progress
>  * that Sender.maybeSendAndPollTransactionalRequest() calls 
> RecordAccumulator's beginFlush() without also calling awaitFlushCompletion(), 
> and that this makes RecordAccumulator's flushesInProgress jump between 1-2 
> instead of the expected 0-1.
>  
> This issue is not reproducible in version 2.3.1 or 2.4.1.
>  
>  
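
To illustrate the counter imbalance described above, here is a stripped-down model of the flush bookkeeping. This is not the real RecordAccumulator code; it only mirrors the beginFlush()/awaitFlushCompletion() counting that the report talks about.

{code:java}
import java.util.concurrent.atomic.AtomicInteger;

// Minimal model of the flush bookkeeping described in the report: batches are
// treated as immediately sendable whenever flushesInProgress > 0.
public class FlushStateSketch {
    private final AtomicInteger flushesInProgress = new AtomicInteger(0);

    void beginFlush() {
        flushesInProgress.incrementAndGet();
    }

    void awaitFlushCompletion() {
        // ... wait for in-flight batches to complete ...
        flushesInProgress.decrementAndGet();
    }

    boolean flushInProgress() {
        return flushesInProgress.get() > 0;
    }

    public static void main(String[] args) {
        FlushStateSketch acc = new FlushStateSketch();

        // Balanced usage: the counter returns to 0 and lingering/batching resumes.
        acc.beginFlush();
        acc.awaitFlushCompletion();
        System.out.println("balanced, flush in progress? " + acc.flushInProgress());   // false

        // Pattern from the report: beginFlush() is called again without a matching
        // awaitFlushCompletion(), so the counter oscillates between 1 and 2 and
        // never reaches 0 -- every new batch is made immediately sendable.
        acc.beginFlush();
        acc.beginFlush();
        acc.awaitFlushCompletion();
        System.out.println("unbalanced, flush in progress? " + acc.flushInProgress()); // true
    }
}
{code}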



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » Kafka Branch Builder » 2.8 #35

2021-06-18 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 463423 lines...]
[2021-06-19T00:38:17.015Z] [INFO] 
[2021-06-19T00:38:17.015Z] [INFO] --- maven-resources-plugin:2.7:resources 
(default-resources) @ streams-quickstart-java ---
[2021-06-19T00:38:17.015Z] [INFO] Using 'UTF-8' encoding to copy filtered 
resources.
[2021-06-19T00:38:17.015Z] [INFO] Copying 6 resources
[2021-06-19T00:38:17.015Z] [INFO] Copying 3 resources
[2021-06-19T00:38:17.015Z] [INFO] 
[2021-06-19T00:38:17.015Z] [INFO] --- maven-resources-plugin:2.7:testResources 
(default-testResources) @ streams-quickstart-java ---
[2021-06-19T00:38:17.015Z] [INFO] Using 'UTF-8' encoding to copy filtered 
resources.
[2021-06-19T00:38:17.015Z] [INFO] Copying 2 resources
[2021-06-19T00:38:17.015Z] [INFO] Copying 3 resources
[2021-06-19T00:38:17.015Z] [INFO] 
[2021-06-19T00:38:17.015Z] [INFO] --- maven-archetype-plugin:2.2:jar 
(default-jar) @ streams-quickstart-java ---
[2021-06-19T00:38:17.539Z] [INFO] Building archetype jar: 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_2.8/streams/quickstart/java/target/streams-quickstart-java-2.8.1-SNAPSHOT
[2021-06-19T00:38:17.539Z] [INFO] 
[2021-06-19T00:38:17.539Z] [INFO] --- maven-site-plugin:3.5.1:attach-descriptor 
(attach-descriptor) @ streams-quickstart-java ---
[2021-06-19T00:38:17.539Z] [INFO] 
[2021-06-19T00:38:17.539Z] [INFO] --- 
maven-archetype-plugin:2.2:integration-test (default-integration-test) @ 
streams-quickstart-java ---
[2021-06-19T00:38:17.539Z] [INFO] 
[2021-06-19T00:38:17.539Z] [INFO] --- maven-gpg-plugin:1.6:sign 
(sign-artifacts) @ streams-quickstart-java ---
[2021-06-19T00:38:17.539Z] [INFO] 
[2021-06-19T00:38:17.539Z] [INFO] --- maven-install-plugin:2.5.2:install 
(default-install) @ streams-quickstart-java ---
[2021-06-19T00:38:17.539Z] [INFO] Installing 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_2.8/streams/quickstart/java/target/streams-quickstart-java-2.8.1-SNAPSHOT.jar
 to 
/home/jenkins/.m2/repository/org/apache/kafka/streams-quickstart-java/2.8.1-SNAPSHOT/streams-quickstart-java-2.8.1-SNAPSHOT.jar
[2021-06-19T00:38:17.539Z] [INFO] Installing 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_2.8/streams/quickstart/java/pom.xml
 to 
/home/jenkins/.m2/repository/org/apache/kafka/streams-quickstart-java/2.8.1-SNAPSHOT/streams-quickstart-java-2.8.1-SNAPSHOT.pom
[2021-06-19T00:38:17.539Z] [INFO] 
[2021-06-19T00:38:17.539Z] [INFO] --- 
maven-archetype-plugin:2.2:update-local-catalog (default-update-local-catalog) 
@ streams-quickstart-java ---
[2021-06-19T00:38:17.539Z] [INFO] 

[2021-06-19T00:38:17.539Z] [INFO] Reactor Summary for Kafka Streams :: 
Quickstart 2.8.1-SNAPSHOT:
[2021-06-19T00:38:17.539Z] [INFO] 
[2021-06-19T00:38:17.539Z] [INFO] Kafka Streams :: Quickstart 
 SUCCESS [  1.531 s]
[2021-06-19T00:38:17.539Z] [INFO] streams-quickstart-java 
 SUCCESS [  0.635 s]
[2021-06-19T00:38:17.539Z] [INFO] 

[2021-06-19T00:38:17.539Z] [INFO] BUILD SUCCESS
[2021-06-19T00:38:17.539Z] [INFO] 

[2021-06-19T00:38:17.539Z] [INFO] Total time:  2.412 s
[2021-06-19T00:38:17.539Z] [INFO] Finished at: 2021-06-19T00:38:16Z
[2021-06-19T00:38:17.539Z] [INFO] 

[Pipeline] dir
[2021-06-19T00:38:17.540Z] Running in 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_2.8/streams/quickstart/test-streams-archetype
[Pipeline] {
[Pipeline] sh
[2021-06-19T00:38:19.719Z] + echo Y
[2021-06-19T00:38:19.719Z] + mvn archetype:generate -DarchetypeCatalog=local 
-DarchetypeGroupId=org.apache.kafka 
-DarchetypeArtifactId=streams-quickstart-java -DarchetypeVersion=2.8.1-SNAPSHOT 
-DgroupId=streams.examples -DartifactId=streams.examples -Dversion=0.1 
-Dpackage=myapps
[2021-06-19T00:38:20.667Z] [INFO] Scanning for projects...
[2021-06-19T00:38:20.667Z] [INFO] 
[2021-06-19T00:38:20.667Z] [INFO] --< 
org.apache.maven:standalone-pom >---
[2021-06-19T00:38:20.667Z] [INFO] Building Maven Stub Project (No POM) 1
[2021-06-19T00:38:20.667Z] [INFO] [ pom 
]-
[2021-06-19T00:38:20.667Z] [INFO] 
[2021-06-19T00:38:20.667Z] [INFO] >>> maven-archetype-plugin:3.2.0:generate 
(default-cli) > generate-sources @ standalone-pom >>>
[2021-06-19T00:38:20.667Z] [INFO] 
[2021-06-19T00:38:20.667Z] [INFO] <<< maven-archetype-plugin:3.2.0:generate 
(default-cli) < generate-sources @ standalone-pom <<<
[2021-06-19T00:38:20.667Z] [INFO] 
[2021-06-19T00:38:20.667Z] [INFO] 
[2021-06-19T00:38:20.667Z] [INFO] --- maven-archetype-plugin:3.2.0:generate 
(default-cli) @ standalone-pom ---
[2021-06-19T00:38:21.613Z] [I

Build failed in Jenkins: Kafka » kafka-2.6-jdk8 #124

2021-06-18 Thread Apache Jenkins Server
See 


Changes:

[Randall Hauch] MINOR: Use MessageDigest equals when comparing signature 
(#10898)


--
[...truncated 6.35 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateSt

Unable to unsubscribe from this mail list

2021-06-18 Thread Meiling He
Hi,

I have sent an email to


dev-unsubscr...@kafka.apache.org

however, it doesn’t work.

Can anyone please suggest a different way to unsubscribe from this mailing list?

Thanks a lot!


Build failed in Jenkins: Kafka » kafka-2.4-jdk8 #27

2021-06-18 Thread Apache Jenkins Server
See 


Changes:

[Randall Hauch] MINOR: Use MessageDigest equals when comparing signature 
(#10898)


--
[...truncated 2.90 MB...]
org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTE

Re: Unable to unsubscribe from this mail list

2021-06-18 Thread Matthias J. Sax
The bot that maintains the list should reply to your email, and you need
to confirm it. Maybe check your spam folder?

-Matthias

On 6/18/21 8:54 PM, Meiling He wrote:
> Hi,
> 
> I have sent an email to
> 
> 
> dev-unsubscr...@kafka.apache.org
> 
> however, it doesn’t work.
> 
> Can anyone please suggest a different way to unsubscribe from this mailing 
> list?
> 
> Thanks a lot!
> 


Jenkins build is back to normal : Kafka » kafka-2.5-jdk8 #46

2021-06-18 Thread Apache Jenkins Server
See