Build failed in Jenkins: kafka-trunk-jdk11 #1350

2020-04-14 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9853: Improve performance of Log.fetchOffsetByTimestamp (#8474)

[github] KAFKA-9842; Add test case for OffsetsForLeaderEpoch grouping in Fetcher


--
[...truncated 3.04 MB...]
org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> T

Re: [kafka-clients] [VOTE] 2.5.0 RC3

2020-04-14 Thread Jonathan Santilli
Hello,

I have run the tests (passed)
Followed the quick start guide with Scala 2.12 (success)
+1


Thanks!
--
Jonathan

On Tue, Apr 14, 2020 at 1:16 AM Colin McCabe  wrote:

> +1 (binding)
>
> verified checksums
> ran unitTest
> ran check
>
> best,
> Colin
>
> On Tue, Apr 7, 2020, at 21:03, David Arthur wrote:
> > Hello Kafka users, developers and client-developers,
> >
> > This is the fourth candidate for release of Apache Kafka 2.5.0.
> >
> > * TLS 1.3 support (1.2 is now the default)
> > * Co-groups for Kafka Streams
> > * Incremental rebalance for Kafka Consumer
> > * New metrics for better operational insight
> > * Upgrade Zookeeper to 3.5.7
> > * Deprecate support for Scala 2.11
> >
> > Release notes for the 2.5.0 release:
> > https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Friday April 10th 5pm PT
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > https://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >
> > * Javadoc:
> > https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/javadoc/
> >
> > * Tag to be voted upon (off 2.5 branch) is the 2.5.0 tag:
> > https://github.com/apache/kafka/releases/tag/2.5.0-rc3
> >
> > * Documentation:
> > https://kafka.apache.org/25/documentation.html
> >
> > * Protocol:
> > https://kafka.apache.org/25/protocol.html
> >
> > Successful Jenkins builds to follow
> >
> > Thanks!
> > David
> >
>
> > --
> >  You received this message because you are subscribed to the Google
> Groups "kafka-clients" group.
> >  To unsubscribe from this group and stop receiving emails from it, send
> an email to kafka-clients+unsubscr...@googlegroups.com.
> >  To view this discussion on the web visit
> https://groups.google.com/d/msgid/kafka-clients/CA%2B0Ze6rUxaPRvddHb50RfVxRtHHvnJD8j9Q9ni18Okc9s-_DSQ%40mail.gmail.com
> <
> https://groups.google.com/d/msgid/kafka-clients/CA%2B0Ze6rUxaPRvddHb50RfVxRtHHvnJD8j9Q9ni18Okc9s-_DSQ%40mail.gmail.com?utm_medium=email&utm_source=footer
> >.
>


-- 
Santilli Jonathan


[jira] [Created] (KAFKA-9863) update the deprecated --zookeeper option in the documentation into --bootstrap-server

2020-04-14 Thread Luke Chen (Jira)
Luke Chen created KAFKA-9863:


 Summary: update the deprecated --zookeeper option in the 
documentation into --bootstrap-server
 Key: KAFKA-9863
 URL: https://issues.apache.org/jira/browse/KAFKA-9863
 Project: Kafka
  Issue Type: Bug
  Components: docs, documentation
Affects Versions: 2.4.1
Reporter: Luke Chen
Assignee: Luke Chen


Since v2.2.0, the --zookeeper option has been deprecated, because Kafka can 
connect directly to brokers with {{--bootstrap-server}} (KIP-377). But many 
example commands in the official documentation still use --zookeeper instead 
of --bootstrap-server. Following those commands, you'll get this warning, 
which is confusing:
{code:java}
Warning: --zookeeper is deprecated and will be removed in a future version of 
Kafka.
Use --bootstrap-server instead to specify a broker to connect to.{code}
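For illustration, the broker-direct path that {{--bootstrap-server}} stands for corresponds to the AdminClient connecting straight to a broker. A minimal sketch (the topic name and settings are made up):
{code:java}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicExample {
    public static void main(String[] args) throws Exception {
        // Equivalent of passing --bootstrap-server to the CLI tools:
        // talk to a broker directly instead of going through ZooKeeper.
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            admin.createTopics(Collections.singleton(new NewTopic("my-topic", 3, (short) 1)))
                 .all()
                 .get();
        }
    }
}
{code}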



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9864) Avoid expensive QuotaViolationException usage

2020-04-14 Thread Lucas Bradstreet (Jira)
Lucas Bradstreet created KAFKA-9864:
---

 Summary: Avoid expensive QuotaViolationException usage
 Key: KAFKA-9864
 URL: https://issues.apache.org/jira/browse/KAFKA-9864
 Project: Kafka
  Issue Type: Improvement
  Components: core
Reporter: Lucas Bradstreet


QuotaViolationException captures a stack trace and uses String.format when it 
is constructed. Because the exception is used for control flow, these costs 
add up even though the exception contents are ignored.
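One common mitigation, shown here as a sketch only (not necessarily the fix this ticket will land on), is to skip stack-trace capture at construction time and defer the String.format until the message is actually rendered:
{code:java}
// Sketch: an exception cheap enough for control flow. The protected
// RuntimeException(String, Throwable, boolean, boolean) constructor lets a
// subclass disable suppression and stack-trace capture; overriding
// getMessage() defers the String.format cost until the message is rendered.
public class CheapQuotaViolationException extends RuntimeException {
    private final String metricName;
    private final double value;
    private final double bound;

    public CheapQuotaViolationException(String metricName, double value, double bound) {
        super(null, null, false, false); // no suppression, no writable stack trace
        this.metricName = metricName;
        this.value = value;
        this.bound = bound;
    }

    @Override
    public String getMessage() {
        return String.format("'%s' violated quota. Actual: %f, quota: %f",
                             metricName, value, bound);
    }
}
{code}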



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9865) Expose output topic names from TopologyTestDriver

2020-04-14 Thread Andy Coates (Jira)
Andy Coates created KAFKA-9865:
--

 Summary: Expose output topic names from TopologyTestDriver
 Key: KAFKA-9865
 URL: https://issues.apache.org/jira/browse/KAFKA-9865
 Project: Kafka
  Issue Type: Bug
  Components: streams-test-utils
Affects Versions: 2.4.1
Reporter: Andy Coates


Expose the output topic names from TopologyTestDriver, i.e. 
`outputRecordsByTopic.keySet()`.

This is useful to users of the test driver, as they can use it to determine 
the names of all output topics, which can then be used to capture all output 
of a topology without having to manually list the output topics.
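For example, a test could then assert on everything the topology wrote, along the lines of this sketch (the accessor name {{producedTopicNames()}} is illustrative; the final name would be decided in the corresponding KIP):
{code:java}
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.TopologyTestDriver;

public class ProducedTopicsExample {
    public static void main(String[] args) {
        // A trivial topology that copies "input" to "output".
        StreamsBuilder builder = new StreamsBuilder();
        builder.<String, String>stream("input").to("output");
        Topology topology = builder.build();

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "produced-topics-example");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        try (TopologyTestDriver driver = new TopologyTestDriver(topology, props)) {
            driver.createInputTopic("input", new StringSerializer(), new StringSerializer())
                  .pipeInput("key", "value");

            // Proposed accessor, i.e. outputRecordsByTopic.keySet():
            for (String topic : driver.producedTopicNames()) {
                System.out.println("Topology produced to: " + topic);
            }
        }
    }
}
{code}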



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Apache Feathercast

2020-04-14 Thread Sönke Liebau
Hi all,

Rich is trying to reboot the Feathercast series [1] and looking for
projects that would be willing to give a quick intro about what they do.

I'd be happy to do this for Kafka if no one has any objections.

I am sure there are many people eminently more suited to this task than me
though, so happy to let someone else do the talking as well :)

Best regards,
Sönke

[1]
https://lists.apache.org/thread.html/re247e239da04256beaafc325aa7d411b2a8ad3884d90593b1a5bc139%40%3Cdev.community.apache.org%3E


[DISCUSS] KIP-594: Expose output topic names from TopologyTestDriver

2020-04-14 Thread Andy Coates
Hey all,
I would like to start off the discussion for KIP-594:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-594%3A+Expose+output+topic+names+from+TopologyTestDriver

This KIP proposes to expose, from the TopologyTestDriver class, the names of
the topics to which a topology produces records during a test run.

Let me know your thoughts!
Andy


[RESULTS] [VOTE] 2.5.0 RC3

2020-04-14 Thread David Arthur
Thanks everyone! The vote passes with 7 +1 votes (4 of which are binding)
and no 0 or -1 votes.

4 binding +1 votes from PMC members Manikumar, Jun, Colin, and Matthias
1 committer +1 vote from Bill
2 community +1 votes from Israel Ekpo and Jonathan Santilli

Voting email thread
http://mail-archives.apache.org/mod_mbox/kafka-dev/202004.mbox/%3CCA%2B0Ze6rUxaPRvddHb50RfVxRtHHvnJD8j9Q9ni18Okc9s-_DSQ%40mail.gmail.com%3E

I'll continue with the release steps and send out the announcement email
soon.

-David

On Tue, Apr 14, 2020 at 7:17 AM Jonathan Santilli <
jonathansanti...@gmail.com> wrote:

> Hello,
>
> I have run the tests (passed)
> Followed the quick start guide with Scala 2.12 (success)
> +1
>
>
> Thanks!
> --
> Jonathan
>
> On Tue, Apr 14, 2020 at 1:16 AM Colin McCabe  wrote:
>
>> +1 (binding)
>>
>> verified checksums
>> ran unitTest
>> ran check
>>
>> best,
>> Colin
>>
>> On Tue, Apr 7, 2020, at 21:03, David Arthur wrote:
>> > Hello Kafka users, developers and client-developers,
>> >
>> > This is the fourth candidate for release of Apache Kafka 2.5.0.
>> >
>> > * TLS 1.3 support (1.2 is now the default)
>> > * Co-groups for Kafka Streams
>> > * Incremental rebalance for Kafka Consumer
>> > * New metrics for better operational insight
>> > * Upgrade Zookeeper to 3.5.7
>> > * Deprecate support for Scala 2.11
>> >
>> > Release notes for the 2.5.0 release:
>> > https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/RELEASE_NOTES.html
>> >
>> > *** Please download, test and vote by Friday April 10th 5pm PT
>> >
>> > Kafka's KEYS file containing PGP keys we use to sign the release:
>> > https://kafka.apache.org/KEYS
>> >
>> > * Release artifacts to be voted upon (source and binary):
>> > https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/
>> >
>> > * Maven artifacts to be voted upon:
>> > https://repository.apache.org/content/groups/staging/org/apache/kafka/
>> >
>> > * Javadoc:
>> > https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/javadoc/
>> >
>> > * Tag to be voted upon (off 2.5 branch) is the 2.5.0 tag:
>> > https://github.com/apache/kafka/releases/tag/2.5.0-rc3
>> >
>> > * Documentation:
>> > https://kafka.apache.org/25/documentation.html
>> >
>> > * Protocol:
>> > https://kafka.apache.org/25/protocol.html
>> >
>> > Successful Jenkins builds to follow
>> >
>> > Thanks!
>> > David
>> >
>>
>> > --
>> >  You received this message because you are subscribed to the Google
>> Groups "kafka-clients" group.
>> >  To unsubscribe from this group and stop receiving emails from it, send
>> an email to kafka-clients+unsubscr...@googlegroups.com.
>> >  To view this discussion on the web visit
>> https://groups.google.com/d/msgid/kafka-clients/CA%2B0Ze6rUxaPRvddHb50RfVxRtHHvnJD8j9Q9ni18Okc9s-_DSQ%40mail.gmail.com
>> <
>> https://groups.google.com/d/msgid/kafka-clients/CA%2B0Ze6rUxaPRvddHb50RfVxRtHHvnJD8j9Q9ni18Okc9s-_DSQ%40mail.gmail.com?utm_medium=email&utm_source=footer
>> >.
>>
>
>
> --
> Santilli Jonathan
>


-- 
David Arthur


Essential of Kafka

2020-04-14 Thread Kara, Tamer
Hi There!

This is not really a development question, but I need some inspiration and 
feedback on my video about a Kafka streaming process. The link to the video is 
here: https://www.youtube.com/watch?v=G_kZc_eSLoo.

And this is the process I have animated:
[inline image: diagram of the animated Kafka streaming process]

Do you think that this video shows the essentials of Kafka? Do you think I can 
improve it or add something?

Regards,
Tamer Kara


Disclaimer

This e-mail and any attachments are confidential and are solely intended for 
the addressee. If you are not the intended recipient, please notify the sender 
and delete and/or destroy this message and any attachments immediately. It is 
prohibited to copy, to distribute, to disclose or to use this e-mail and any 
attachments in any other way. Ordina N.V. and/or its group companies do not 
accept any responsibility nor liability for any damage resulting from the 
content of and/or the transmission of this message.


Re: Apache Feathercast

2020-04-14 Thread Jun Rao
Hi, Sonke,

That sounds like a good opportunity for promoting Kafka. Thanks for
volunteering.

Jun

On Tue, Apr 14, 2020 at 8:08 AM Sönke Liebau
 wrote:

> Hi all,
>
> Rich is trying to reboot the Feathercast series [1] and looking for
> projects that would be willing to give a quick intro about what they do.
>
> I'd be happy to do this for Kafka if no one has any objections.
>
> I am sure there are many people eminently more suited to this task than me
> though, so happy to let someone else do the talking as well :)
>
> Best regards,
> Sönke
>
> [1]
>
> https://lists.apache.org/thread.html/re247e239da04256beaafc325aa7d411b2a8ad3884d90593b1a5bc139%40%3Cdev.community.apache.org%3E
>


Re: [DISCUSS] KIP-591: Add Kafka Streams config to set default store type

2020-04-14 Thread John Roesler
Hi all,

Thanks for starting this, Matthias! I've had multiple people mention this
feature request to me. Actually, the most recent such request was from
someone developing an LMDB-backed set of store implementations, as
a drop-in replacement for RocksDB, so Sophie's suggestion seems
relevant.

What do you think, instead of defining a StoreType enum, of defining
an interface like:
vvv

public interface StoreImplementation {
    KeyValueBytesStoreSupplier keyValueSupplier(
        String name
    );

    WindowBytesStoreSupplier windowBytesStoreSupplier(
        String name,
        Duration retentionPeriod,
        Duration windowSize,
        boolean retainDuplicates
    );

    SessionBytesStoreSupplier sessionBytesStoreSupplier(
        String name,
        Duration retentionPeriod
    );
}


Then the default.dsl.store.type you proposed would take a class name instead,
with the requirement that the class given must implement StoreImplementation,
and it must also have a zero-arg constructor so we can reflectively instantiate 
it.
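For reference, the reflective instantiation this implies would look roughly
like this (a sketch; exception handling elided, and "default.dsl.store.type"
is the config name proposed above):

    final String className = config.getString("default.dsl.store.type");
    final StoreImplementation storeImplementation =
        (StoreImplementation) Class.forName(className)
            .getDeclaredConstructor()
            .newInstance();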

The interface above is compatible with the existing "store supplier" "interface"
we have loosely defined in Stores. For concreteness, here's how we could
implement it:

vvv
public class RocksDBStoreImplementation implements StoreImplementation {

    @Override
    public KeyValueBytesStoreSupplier keyValueSupplier(final String name) {
        return Stores.persistentTimestampedKeyValueStore(name);
    }

    @Override
    public WindowBytesStoreSupplier windowBytesStoreSupplier(final String name,
                                                             final Duration retentionPeriod,
                                                             final Duration windowSize,
                                                             final boolean retainDuplicates) {
        return Stores.persistentTimestampedWindowStore(name, retentionPeriod, windowSize, retainDuplicates);
    }

    @Override
    public SessionBytesStoreSupplier sessionBytesStoreSupplier(final String name,
                                                               final Duration retentionPeriod) {
        return Stores.persistentSessionStore(name, retentionPeriod);
    }
}


public class InMemoryStoreImplementation implements StoreImplementation {

    @Override
    public KeyValueBytesStoreSupplier keyValueSupplier(final String name) {
        return Stores.inMemoryKeyValueStore(name);
    }

    @Override
    public WindowBytesStoreSupplier windowBytesStoreSupplier(final String name,
                                                             final Duration retentionPeriod,
                                                             final Duration windowSize,
                                                             final boolean retainDuplicates) {
        return Stores.inMemoryWindowStore(name, retentionPeriod, windowSize, retainDuplicates);
    }

    @Override
    public SessionBytesStoreSupplier sessionBytesStoreSupplier(final String name,
                                                               final Duration retentionPeriod) {
        return Stores.inMemorySessionStore(name, retentionPeriod);
    }
}


I liked your suggestion to add a new Materialized overload, as long as the
generics work out. I think we'd have to actually experiment with it to make
sure (might be nice to do this in a POC PR, rather than having to amend the
KIP later, but it's your call).

In fact, I have also gotten a lot of feedback that our StoreBuilder/
StoreSupplier/Stores/Materialized/etc. all amount to a pretty confusing ball
of code for users. It seems like this suggestion is a good opportunity to
clear out a lot of the confusion, by deprecating all the StoreSupplier
methods in Stores, as well as the other StoreSupplier methods on
Materialized, and just converging on passing around the StoreImplementation.

It seems like this general strategy actually nets a few benefits beyond just
being able to swap in a different "default" store implementation:
* It relieves users from having to specify the kind of store 
(KV/Window/Session) when they really just wanted to specify the 
implementation. Offhand, I don't think there's any situation in which you can 
"choose" which kind of store to use; it's always dictated by the topology, so 
it's purely a paper-cut fix as-is.
* It allows Streams to select the kind of store that it actually needs. E.g., 
it opens up a future opportunity for us to correctly choose to use a Windowed 
store everywhere downstream of a windowing operation.

Thanks for proposing this KIP! I think it'll be a great addition to Streams.
-John

On Mon, Apr 13, 2020, at 22:56, Sophie Blee-Goldman wrote:
> Hey Matthias,
> 
> Thanks for picking this up! This'll be really nice for testing in
> particular.
> 
> My only question is, do we want to make this available for use with custom
> state stores as well? I'm not sure how c

Re: [Discuss] KIP-581: Value of optional null field which has default value

2020-04-14 Thread Christopher Egerton
Hi Cheng,

Thanks for the KIP! I really appreciate the care that was taken to ensure
backwards compatibility for existing users, and the minimal changes to
public interface that are suggested to address this.

I have two quick requests for clarification:

1) Where is the proposed "accept.optional.null" property going to apply?
It's hinted that it'll take effect on the JSON converter but not actually
called out anywhere.

2) Assuming this takes effect on the JSON converter, is the intent to alter
the semantics for both serialization and deserialization? The code snippet
from the JSON converter that's included in the KIP comes from the
"convertToJson" method, which is used for serialization. However, based on
https://github.com/apache/kafka/blob/ea47a885b1fe47dfb87c1dc86db1b0e7eb8a273c/connect/json/src/main/java/org/apache/kafka/connect/json/JsonConverter.java#L712-L713
it
looks like the converter also inserts the default value for
optional-but-null data during deserialization.
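For concreteness, here is a minimal sketch of the serialization case in
question (the property name and proposed semantics are taken from the KIP;
the "today" output reflects the current default-substitution behavior):

    import java.util.Collections;
    import org.apache.kafka.connect.data.Schema;
    import org.apache.kafka.connect.data.SchemaBuilder;
    import org.apache.kafka.connect.data.Struct;
    import org.apache.kafka.connect.json.JsonConverter;

    Schema schema = SchemaBuilder.struct()
        .field("name", SchemaBuilder.string().optional().defaultValue("unknown").build())
        .build();
    Struct value = new Struct(schema); // "name" is left null

    JsonConverter converter = new JsonConverter();
    converter.configure(Collections.singletonMap("schemas.enable", "false"), false);
    byte[] json = converter.fromConnectData("topic", schema, value);

    // Today:    {"name":"unknown"}  (the default is substituted for null)
    // Proposed: {"name":null}       (when accept.optional.null=true)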

Thanks again for the KIP!

Cheers,

Chris

On Wed, Mar 18, 2020 at 12:00 AM Cheng Pan <379377...@qq.com> wrote:

> Hi all,
>
> I'd like to use this thread to discuss KIP-581: Value of optional null
> field which has default value, please see detail at:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-581%3A+Value+of+optional+null+field+which+has+default+value
>
>
> There are some previous discussion at:
> https://github.com/apache/kafka/pull/7112
>
>
> I'm a beginner with Apache projects, please let me know if I did anything
> wrong.
>
>
> Best regards,
> Cheng Pan


[jira] [Resolved] (KAFKA-9539) Add leader epoch in StopReplicaRequest

2020-04-14 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-9539.

Fix Version/s: 2.6.0
   Resolution: Fixed

> Add leader epoch in StopReplicaRequest
> --
>
> Key: KAFKA-9539
> URL: https://issues.apache.org/jira/browse/KAFKA-9539
> Project: Kafka
>  Issue Type: Improvement
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
> Fix For: 2.6.0
>
>
> Unlike the LeaderAndIsrRequest, the StopReplicaRequest does not include the 
> leader epoch, which makes it vulnerable to reordering. This KIP proposes to 
> add the leader epoch for each partition in the StopReplicaRequest, and the 
> broker will verify the epoch before proceeding with the StopReplicaRequest.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Preferred Partition Leaders

2020-04-14 Thread Łukasz Antoniak
Hi Everyone,

Recently I came across a Kafka setup where two data centers are close to each
other, but the company could not find a suitable place for a third one.
As a result, the third DC is a little farther away, with lower network
throughput, but still within range of decent network latency, qualifying for a
stretch cluster. Let us assume that client applications are deployed only on
the two "primary" DCs. My idea was to minimize network traffic between DC3 and
the other data centers (ideally restricting it to replication).

For the Kafka consumer, we can configure rack-awareness so that consumers will
read data from the closest replica (replica.selector.class).
Kafka producers have to send data to partition leaders. There is no way to
tell Kafka that we prefer partition leaders to be running in DC1 and DC2. Kafka
will also try to evenly balance leaders across brokers
(auto.leader.rebalance.enable).
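(For reference, the consumer-side rack-awareness mentioned above is wired up
roughly as follows; a sketch using the KIP-392 property names:)

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;

    // Broker side (server.properties), per KIP-392:
    //   broker.rack=DC1
    //   replica.selector.class=org.apache.kafka.common.replica.RackAwareReplicaSelector

    // Consumer side: declare the client's location so that fetches go to the
    // closest in-sync replica rather than always to the leader.
    Properties consumerProps = new Properties();
    consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-dc1:9092");
    consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "dc1-consumers");
    consumerProps.put(ConsumerConfig.CLIENT_RACK_CONFIG, "DC1");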

Does it sound like a good feature to make the choice of partition leaders
pluggable? Basically, users would be given a list of topic-partitions with
their ISRs and the racks they are running in, and could reshuffle them
according to custom logic.

Comments appreciated.

Kind regards,
Lukasz


Re: Apache Feathercast

2020-04-14 Thread Sönke Liebau
Thanks Jun!
I'll reach out to Rich about a date.


On Tue, 14 Apr 2020 at 19:11, Jun Rao  wrote:

> Hi, Sonke,
>
> That sounds like a good opportunity for promoting Kafka. Thanks for
> volunteering.
>
> Jun
>
> On Tue, Apr 14, 2020 at 8:08 AM Sönke Liebau
>  wrote:
>
> > Hi all,
> >
> > Rich is trying to reboot the Feathercast series [1] and looking for
> > projects that would be willing to give a quick intro about what they do.
> >
> > I'd be happy to do this for Kafka if no one has any objections.
> >
> > I am sure there are many people eminently more suited to this task than
> me
> > though, so happy to let someone else do the talking as well :)
> >
> > Best regards,
> > Sönke
> >
> > [1]
> >
> >
> https://lists.apache.org/thread.html/re247e239da04256beaafc325aa7d411b2a8ad3884d90593b1a5bc139%40%3Cdev.community.apache.org%3E
> >
>


[jira] [Created] (KAFKA-9866) Do not attempt to elect preferred leader replicas which are outside ISR

2020-04-14 Thread Stanislav Kozlovski (Jira)
Stanislav Kozlovski created KAFKA-9866:
--

 Summary: Do not attempt to elect preferred leader replicas which 
are outside ISR
 Key: KAFKA-9866
 URL: https://issues.apache.org/jira/browse/KAFKA-9866
 Project: Kafka
  Issue Type: Improvement
Reporter: Stanislav Kozlovski


The controller automatically triggers a preferred leader election every N 
minutes. It tries to elect the preferred leader of every partition without 
doing basic checks, such as whether that leader is in sync.

This leads to a multitude of errors which cause confusion:
{code:java}
April 14th 2020, 17:01:11.015   [Controller id=0] Partition TOPIC-9 failed to 
complete preferred replica leader election to 1. Leader is still 0{code}
{code:java}
April 14th 2020, 17:01:11.002   [Controller id=0] Error completing replica 
leader election (PREFERRED) for partition TOPIC-9
kafka.common.StateChangeFailedException: Failed to elect leader for partition 
TOPIC-9 under strategy PreferredReplicaPartitionLeaderElectionStrategy {code}
It would be better if the Controller filtered such elections out, did not 
attempt them at all, and perhaps logged an aggregate INFO-level summary instead.
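A sketch of the kind of eligibility check meant here (names are illustrative, not the controller's actual code):
{code:java}
import java.util.Set;

// Only attempt a preferred election when it can plausibly succeed: the
// preferred replica must be alive, in the ISR, and not already the leader.
static boolean shouldAttemptPreferredElection(int preferredReplica,
                                              int currentLeader,
                                              Set<Integer> isr,
                                              Set<Integer> liveBrokers) {
    return preferredReplica != currentLeader
        && isr.contains(preferredReplica)
        && liveBrokers.contains(preferredReplica);
}
{code}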



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Preferred Partition Leaders

2020-04-14 Thread Michael K. Edwards
I have clients with a similar need relating to disaster recovery.  (Three
replicas per partition within a data center / AWS AZ/region, fourth replica
elsewhere, ineligible to become the partition leader without manual
intervention.)

On Tue, Apr 14, 2020 at 12:31 PM Łukasz Antoniak 
wrote:

> Hi Everyone,
>
> Recently I came across Kafka setup where two data centers are close to each
> other, but the company could not find a suitable place for the third one.
> As a result third DC is little further, lower network throughput, but still
> within range of decent network latency, qualifying for stretch cluster. Let
> us assume that client applications are being deployed only on two "primary"
> DCs. My idea was to minimize network traffic between DC3 and other data
> centers (ideally only to replication).
>
> For Kafka consumer, we can configure rack-awareness, so that consumers will
> read data from closest replica (replica.selector.class).
> Kafka producers have to send data to partition leaders. There is no way to
> tell that we prefer replica leaders to be running in DC1 and DC2. Kafka
> will also try to evenly balance leaders across brokers
> (auto.leader.rebalance.enable).
>
> Does it sound like a good feature to make the choice of partition leaders
> pluggable? Basically, users would be given list of topic-partitions with
> ISRs and rack they are running, and could reshuffle them according to
> custom logic.
>
> Comments appreciated.
>
> Kind regards,
> Lukasz
>


[jira] [Created] (KAFKA-9867) Log cleaner throws InvalidOffsetException for some partitions

2020-04-14 Thread Bradley (Jira)
Bradley created KAFKA-9867:
--

 Summary: Log cleaner throws InvalidOffsetException for some 
partitions
 Key: KAFKA-9867
 URL: https://issues.apache.org/jira/browse/KAFKA-9867
 Project: Kafka
  Issue Type: Bug
  Components: log cleaner
Affects Versions: 2.4.0
 Environment: AWS EC2 with data on EBS volumes.
Reporter: Bradley


About half the partitions for one topic are marked "uncleanable". This is 
consistent across broker replicas – if a partition is uncleanable on one 
broker, its replicas are also uncleanable.

The log-cleaner log seems to suggest out of order offsets. We've seen corrupt 
indexes before, so I removed the indexes from the affected segments and let 
Kafka rebuild them, but it hit the same error.

I don't know when the error first occurred because we've restarted the brokers 
and rotated logs, but it is possible the brokers crashed at some point.

How can I repair these partitions?

{noformat}
[2020-04-09 00:14:16,979] INFO Cleaner 0: Cleaning segment 223293473 in log 
x-9 (largest timestamp Wed Nov 13 20:35:57 UTC 2019
) into 223293473, retaining deletes. (kafka.log.LogCleaner)
[2020-04-09 00:14:29,486] INFO Cleaner 0: Cleaning segment 226762315 in log 
x-9 (largest timestamp Wed Nov 06 18:17:10 UTC 2019
) into 223293473, retaining deletes. (kafka.log.LogCleaner)
[2020-04-09 00:14:29,502] WARN [kafka-log-cleaner-thread-0]: Unexpected 
exception thrown when cleaning log Log(/var/lib/kafka/x
-9). Marking its partition (x-9) as uncleanable (kafka.log.LogCleaner)
org.apache.kafka.common.errors.InvalidOffsetException: Attempt to append an 
offset (226765178) to position 941 no larger than the last offset appended 
(228774155) to /var/lib/kafka/x-9/000223293473.index.cleaned.
{noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-590: Redirect Zookeeper Mutation Protocols to The Controller

2020-04-14 Thread Boyang Chen
Thanks Raymond for the proposal! Yeah, adding a unified forwarding envelope
request is a good idea, but it doesn't really solve our problem in this KIP.

On Mon, Apr 13, 2020 at 11:14 PM Raymond Ng  wrote:

> Hi Boyang,
>
> Thanks for the KIP. Overall looks great.
>
> One suggestion: instead of bumping the version and adding an optional field
> (PrincipalName) for a number of requests, can we consider adding a general
> ProxyRequest that acts as an "envelope" for the forwarded requests?
>
> A few advantages to this approach come to mind:
>
>1. Add one (new) Request API instead of modifying a number of them
>2. Make the forwarded nature of the request explicit instead of
>implicitly relying on an optional field and a specific version that
> varies
>by type.
>3. This approach is flexible enough to be potentially useful beyond the
>current use case (e.g. federated, inter-cluster scenarios)
>
> As a bonus, the combination of 1. and 2. should also simplify
> implementation & validation.
>
>
Firstly, the broker has to differentiate old and new admin clients, as it
should only support forwarding for the old ones. Without a version bump, the
broker couldn't tell them apart. Besides, bumping the existing protocol is not
a big overhead compared with adding a new RPC, so I don't worry too much about
the complexity here.


> On the other hand, it's not clear if the underlying RPC request
> encoding/decode machinery supports embedded requests. Hopefully, even if it
> doesn't it would not be too difficult to extend.
>

Making the forwarding behavior more general is great, but it could also come
with problems we can't anticipate, such as potential for abuse, more changes
to the auto-generation framework, and increased metadata overhead. At the
moment, we don't expect direct forwarding to be a bottleneck, so I'm more
inclined to keep this proposal as simple as possible for now. If we do find a
strong need in the future, adding the ProxyRequest is definitely worth the
effort.
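(For illustration only, such an envelope could look something like the
following, in the JSON request-schema style Kafka uses elsewhere; this is a
hypothetical sketch, not part of the KIP:)

    {
      "type": "request",
      "name": "ProxyRequest",
      "validVersions": "0",
      "fields": [
        { "name": "PrincipalName", "type": "string", "versions": "0+",
          "about": "The principal of the original client, as authenticated by the forwarding broker." },
        { "name": "WrappedRequestData", "type": "bytes", "versions": "0+",
          "about": "The serialized bytes of the embedded request, including its header." }
      ]
    }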


> What do you think?
>
> Regards,
> Ray
>
>
> On Wed, Apr 8, 2020 at 4:36 PM Boyang Chen 
> wrote:
>
> > Thanks for the info Agam! Will add to the KIP.
> >
> > On Wed, Apr 8, 2020 at 4:26 PM Agam Brahma  wrote:
> >
> > > Hi Boyang,
> > >
> > > The KIP already talks about incorporating changes for FindCoordinator
> > > request routing, wanted to point out one additional case where internal
> > > topics are created "as a side effect":
> > >
> > > As part of handling metadata requests, if we are looking for metadata
> for
> > > an internal topic and auto-topic-creation is enabled [1], the broker
> > > currently goes ahead and creates the internal topic in the same way [2]
> > as
> > > it would for the FindCoordinator request.
> > >
> > > -Agam
> > >
> > > [1]
> > >
> > >
> >
> https://github.com/apache/kafka/blob/cd1e46c8bb46f1e5303c51f476c74e33b522fce8/core/src/main/scala/kafka/server/KafkaApis.scala#L1096
> > > [2]
> > >
> > >
> >
> https://github.com/apache/kafka/blob/cd1e46c8bb46f1e5303c51f476c74e33b522fce8/core/src/main/scala/kafka/server/KafkaApis.scala#L1041
> > >
> > >
> > >
> > > On Mon, Apr 6, 2020 at 8:25 PM Boyang Chen  >
> > > wrote:
> > >
> > > > Thanks for the various inputs everyone!
> > > >
> > > > I think Sonke and Colin's suggestions make sense. The tagged field
> also
> > > > avoids the unnecessary protocol changes for affected requests. Will
> add
> > > it
> > > > to the header. As for the verification, I'm not sure whether it's
> > > necessary
> > > > to require a higher permission level, as it is just an ignorable
> field?
> > > >
> > > > Guozhang's suggestions about metrics also sound great, I will think
> > > through
> > > > the use cases and make some changes to the KIP.
> > > >
> > > > Best,
> > > > Boyang
> > > >
> > > > On Mon, Apr 6, 2020 at 4:28 PM Guozhang Wang 
> > wrote:
> > > >
> > > > > Thanks for the KIP Boyang, this looks good to me. Some minor
> > comments:
> > > > >
> > > > > 1) I think in order to implement the forwarding mechanism the
> brokers
> > > > needs
> > > > > some purgatory to keep the forwarded requests; if that's true,
> should
> > > we
> > > > > add some broker-side metrics for those purgatories for debugging
> > > > purposes?
> > > > >
> > > > > 2) Should we also consider adding some extra metric counting old
> > > > versioned
> > > > > admin client request rates (this goes beyond
> > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-511%3A+Collect+and+Expose+Client%27s+Name+and+Version+in+the+Brokers
> > > > > since
> > > > > old versioned client would not report its Kafka version anyways);
> one
> > > use
> > > > > case I can think of besides debugging purposes, is that if we ever
> > > > decides
> > > > > to break compatibility in future versions way after the bridge
> > > releases,
> > > > to
> > > > > reject any v1 requests and hence can totally remove this forwarding
> > > logic
> > > > > on brokers, we can leverage on this metric

Re: Preferred Partition Leaders

2020-04-14 Thread Jamie
Hi All, 
There is a KIP which might be of interest to you: 
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=120736982 - it 
sounds like you want to blacklist brokers in DC3? 

Thanks, 
Jamie

-Original Message-
From: Michael K. Edwards 
To: dev@kafka.apache.org
Sent: Tue, 14 Apr 2020 20:50
Subject: Re: Preferred Partition Leaders

I have clients with a similar need relating to disaster recovery.  (Three
replicas per partition within a data center / AWS AZ/region, fourth replica
elsewhere, ineligible to become the partition leader without manual
intervention.)

On Tue, Apr 14, 2020 at 12:31 PM Łukasz Antoniak 
wrote:

> Hi Everyone,
>
> Recently I came across Kafka setup where two data centers are close to each
> other, but the company could not find a suitable place for the third one.
> As a result third DC is little further, lower network throughput, but still
> within range of decent network latency, qualifying for stretch cluster. Let
> us assume that client applications are being deployed only on two "primary"
> DCs. My idea was to minimize network traffic between DC3 and other data
> centers (ideally only to replication).
>
> For Kafka consumer, we can configure rack-awareness, so that consumers will
> read data from closest replica (replica.selector.class).
> Kafka producers have to send data to partition leaders. There is no way to
> tell that we prefer replica leaders to be running in DC1 and DC2. Kafka
> will also try to evenly balance leaders across brokers
> (auto.leader.rebalance.enable).
>
> Does it sound like a good feature to make the choice of partition leaders
> pluggable? Basically, users would be given list of topic-partitions with
> ISRs and rack they are running, and could reshuffle them according to
> custom logic.
>
> Comments appreciated.
>
> Kind regards,
> Lukasz
>

Re: Preferred Partition Leaders

2020-04-14 Thread Michael K. Edwards
#nailedit

On Tue, Apr 14, 2020 at 2:05 PM Jamie  wrote:

> Hi All,
>
> There is a KIP which might be of interest to you:
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=120736982 -
> it sounds like you want to blacklist brokers in DC3?
>
> Thanks,
>
> Jamie
>
> -Original Message-
> From: Michael K. Edwards 
> To: dev@kafka.apache.org
> Sent: Tue, 14 Apr 2020 20:50
> Subject: Re: Preferred Partition Leaders
>
> I have clients with a similar need relating to disaster recovery.  (Three
> replicas per partition within a data center / AWS AZ/region, fourth replica
> elsewhere, ineligible to become the partition leader without manual
> intervention.)
>
> On Tue, Apr 14, 2020 at 12:31 PM Łukasz Antoniak <
> lukasz.anton...@gmail.com>
> wrote:
>
> > Hi Everyone,
> >
> > Recently I came across Kafka setup where two data centers are close to
> each
> > other, but the company could not find a suitable place for the third one.
> > As a result third DC is little further, lower network throughput, but
> still
> > within range of decent network latency, qualifying for stretch cluster.
> Let
> > us assume that client applications are being deployed only on two
> "primary"
> > DCs. My idea was to minimize network traffic between DC3 and other data
> > centers (ideally only to replication).
> >
> > For Kafka consumer, we can configure rack-awareness, so that consumers
> will
> > read data from closest replica (replica.selector.class).
> > Kafka producers have to send data to partition leaders. There is no way
> to
> > tell that we prefer replica leaders to be running in DC1 and DC2. Kafka
> > will also try to evenly balance leaders across brokers
> > (auto.leader.rebalance.enable).
> >
> > Does it sound like a good feature to make the choice of partition leaders
> > pluggable? Basically, users would be given list of topic-partitions with
> > ISRs and rack they are running, and could reshuffle them according to
> > custom logic.
> >
> > Comments appreciated.
> >
> > Kind regards,
> > Lukasz
> >
>


Jenkins build is back to normal : kafka-trunk-jdk11 #1351

2020-04-14 Thread Apache Jenkins Server
See 




Re: Preferred Partition Leaders

2020-04-14 Thread Sönke Liebau
Hi Lukasz,

that sounds like a worthwhile addition to me! Just a few thoughts that
occurred to me while reading your mail
"Preferred Leader" as a term is somewhat taken in the Kafka namespace. It
refers to the concept that the first leader from the list of replicas is
the preferred leader and will be chosen as the leader for that partition if
possible. Quite similar to what you are proposing - but without the logic
behind it, as currently that preferred leader is chosen mostly at random.

If your need for this is urgent, you could fairly easily use the
AdminClient to generate a new partition assignment that takes into account
the rack ids of the brokers already, and Kafka would try to honor that
preferred leader for your partitions. But this would only work
retrospectively, not for new topics, for those a random distribution would
again be chosen.

Regarding your idea, I agree that replica and leader assignment is a topic
that is in need of some love. However, by "just" adding the rack id to this
logic we run the risk of making potential future work for things like load
or size based replica assignment to brokers / disks harder. I'm not saying
we need to do it now, but I think we should consider what it might look
like to ensure that your solution can lay to groundwork for later work to
build on.

Best regards,
Sönke




On Tue, 14 Apr 2020 at 23:32, Michael K. Edwards 
wrote:

> #nailedit
>
> On Tue, Apr 14, 2020 at 2:05 PM Jamie  wrote:
>
> > Hi All,
> >
> > There is a KIP which might be of interest to you:
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=120736982
> -
> > it sounds like you want to blacklist brokers in DC3?
> >
> > Thanks,
> >
> > Jamie
> >
> > -Original Message-
> > From: Michael K. Edwards 
> > To: dev@kafka.apache.org
> > Sent: Tue, 14 Apr 2020 20:50
> > Subject: Re: Preferred Partition Leaders
> >
> > I have clients with a similar need relating to disaster recovery.  (Three
> > replicas per partition within a data center / AWS AZ/region, fourth
> replica
> > elsewhere, ineligible to become the partition leader without manual
> > intervention.)
> >
> > On Tue, Apr 14, 2020 at 12:31 PM Łukasz Antoniak <
> > lukasz.anton...@gmail.com>
> > wrote:
> >
> > > Hi Everyone,
> > >
> > > Recently I came across Kafka setup where two data centers are close to
> > each
> > > other, but the company could not find a suitable place for the third
> one.
> > > As a result third DC is little further, lower network throughput, but
> > still
> > > within range of decent network latency, qualifying for stretch cluster.
> > Let
> > > us assume that client applications are being deployed only on two
> > "primary"
> > > DCs. My idea was to minimize network traffic between DC3 and other data
> > > centers (ideally only to replication).
> > >
> > > For Kafka consumer, we can configure rack-awareness, so that consumers
> > will
> > > read data from closest replica (replica.selector.class).
> > > Kafka producers have to send data to partition leaders. There is no way
> > to
> > > tell that we prefer replica leaders to be running in DC1 and DC2. Kafka
> > > will also try to evenly balance leaders across brokers
> > > (auto.leader.rebalance.enable).
> > >
> > > Does it sound like a good feature to make the choice of partition
> leaders
> > > pluggable? Basically, users would be given list of topic-partitions
> with
> > > ISRs and rack they are running, and could reshuffle them according to
> > > custom logic.
> > >
> > > Comments appreciated.
> > >
> > > Kind regards,
> > > Lukasz
> > >
> >
>


-- 
Sönke Liebau
Partner
Tel. +49 179 7940878
OpenCore GmbH & Co. KG - Thomas-Mann-Straße 8 - 22880 Wedel - Germany


[jira] [Created] (KAFKA-9868) Flaky Test EOSUncleanShutdownIntegrationTest.shouldWorkWithUncleanShutdownWipeOutStateStore

2020-04-14 Thread Sophie Blee-Goldman (Jira)
Sophie Blee-Goldman created KAFKA-9868:
--

 Summary: Flaky Test 
EOSUncleanShutdownIntegrationTest.shouldWorkWithUncleanShutdownWipeOutStateStore
 Key: KAFKA-9868
 URL: https://issues.apache.org/jira/browse/KAFKA-9868
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Sophie Blee-Goldman


h3. Error Message

java.lang.AssertionError: Condition not met within timeout 15000. Expected 
ERROR state but driver is on RUNNING



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-7544) Transient Failure: org.apache.kafka.streams.integration.EosIntegrationTest.shouldNotViolateEosIfOneTaskFails

2020-04-14 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-7544.

Resolution: Duplicate

Closing this ticket as it links to the older, non-parameterized version of the 
test, in favor of KAFKA-9831.

> Transient Failure: 
> org.apache.kafka.streams.integration.EosIntegrationTest.shouldNotViolateEosIfOneTaskFails
> 
>
> Key: KAFKA-7544
> URL: https://issues.apache.org/jira/browse/KAFKA-7544
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Reporter: John Roesler
>Assignee: Matthias J. Sax
>Priority: Major
>
> Observed on Java 11: 
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/264/testReport/junit/org.apache.kafka.streams.integration/EosIntegrationTest/shouldNotViolateEosIfOneTaskFails/]
> at 
> [https://github.com/apache/kafka/pull/5795/commits/5bdcd0e023c6f406d585155399f6541bb6a9f9c2]
>  
> stacktrace:
> {noformat}
> java.lang.AssertionError: 
> Expected: <[KeyValue(0, 0), KeyValue(0, 1), KeyValue(0, 2), KeyValue(0, 3), 
> KeyValue(0, 4), KeyValue(0, 5), KeyValue(0, 6), KeyValue(0, 7), KeyValue(0, 
> 8), KeyValue(0, 9)]>
>  but: was <[KeyValue(0, 0), KeyValue(0, 1), KeyValue(0, 2), KeyValue(0, 
> 3), KeyValue(0, 4), KeyValue(0, 5), KeyValue(0, 6), KeyValue(0, 7), 
> KeyValue(0, 8), KeyValue(0, 9), KeyValue(0, 10), KeyValue(0, 11), KeyValue(0, 
> 12), KeyValue(0, 13), KeyValue(0, 14)]>
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8)
>   at 
> org.apache.kafka.streams.integration.EosIntegrationTest.checkResultPerKey(EosIntegrationTest.java:218)
>   at 
> org.apache.kafka.streams.integration.EosIntegrationTest.shouldNotViolateEosIfOneTaskFails(EosIntegrationTest.java:360)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:106)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>   at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDi

[Vote] KIP-588: Allow producers to recover gracefully from transaction timeouts

2020-04-14 Thread Boyang Chen
Hey all,

I would like to start the vote for KIP-588:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-588%3A+Allow+producers+to+recover+gracefully+from+transaction+timeouts

Feel free to continue posting on the discussion thread if you have
any questions, thanks!

Best,
Boyang


Build failed in Jenkins: kafka-trunk-jdk11 #1352

2020-04-14 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9539; Add leader epoch in StopReplicaRequest (KIP-570) (#8257)


--
[...truncated 3.04 MB...]
org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

or

Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-04-14 Thread Jun Rao
Hi, Kowshik,

Thanks for addressing those comments. Just a few more minor comments.

200. The UpdateFeaturesRequest includes an AllowDowngrade field. It seems
that field needs to be persisted somewhere in ZK?

201. UpdateFeaturesResponse has the following top level fields. Should
those fields be per feature?

  "fields": [
{ "name": "ErrorCode", "type": "int16", "versions": "0+",
  "about": "The error code, or 0 if there was no error." },
{ "name": "ErrorMessage", "type": "string", "versions": "0+",
  "about": "The error message, or null if there was no error." }
  ]
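(For illustration, a per-feature shape could look something like the
following; a hypothetical sketch only:)

  "fields": [
    { "name": "Results", "type": "[]UpdatableFeatureResult", "versions": "0+",
      "about": "The results for each feature update.", "fields": [
        { "name": "Feature", "type": "string", "versions": "0+",
          "about": "The name of the feature." },
        { "name": "ErrorCode", "type": "int16", "versions": "0+",
          "about": "The error code, or 0 if there was no error." },
        { "name": "ErrorMessage", "type": "string", "versions": "0+",
          "about": "The error message, or null if there was no error." }
      ]}
  ]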

202. The /features path in ZK has a field min_version_level. Which API and
tool can change that value?

Jun

On Mon, Apr 13, 2020 at 5:12 PM Kowshik Prakasam 
wrote:

> Hi Jun,
>
> Thanks for the feedback! I have updated the KIP-584 addressing your
> comments.
> Please find my response below.
>
> > 100.6 You can look for the sentence "This operation requires ALTER on
> > CLUSTER." in KIP-455. Also, you can check its usage in
> > KafkaApis.authorize().
>
> (Kowshik): Done. Great point! For the newly introduced UPDATE_FEATURES api,
> I have added a
> requirement that AclOperation.ALTER is required on ResourceType.CLUSTER.
>
> > 110. Keeping the feature version as int is probably fine. I just felt
> that
> > for some of the common user interactions, it's more convenient to
> > relate that to a release version. For example, if a user wants to
> downgrade
> > to a release 2.5, it's easier for the user to use the tool like "tool
> > --downgrade 2.5" instead of "tool --downgrade --feature X --version 6".
>
> (Kowshik): Great point. Generally, maximum feature version levels are not
> downgradable after they are finalized in the cluster. This is because, as a
> guideline, bumping a feature version level is mainly used to convey important
> breaking changes.
> Despite the above, there may be some extreme/rare cases where a user wants
> to downgrade all features to a specific previous release. The user may want
> to do this just prior to rolling back a Kafka cluster to a previous release.
>
> To support the above, I have made a change to the KIP explaining that the
> CLI tool is versioned.
> The CLI tool internally has knowledge about a map of features to their
> respective max versions supported by the broker. The tool's knowledge of
> features and their version values is limited to the version of the CLI tool
> itself, i.e. the information is packaged into the CLI tool when it is
> released. Whenever a Kafka release introduces a new feature version, or
> modifies an existing feature version, the CLI tool shall also be updated
> with this information. Newer versions of the CLI tool will be released as
> part of the Kafka releases.
>
> Therefore, to achieve such a downgrade, the user just needs to run the
> version of the CLI tool that's part of the particular previous release that
> he/she is downgrading to.
> To help the user with this, there is a new command added to the CLI tool
> called `downgrade-all`.
> This essentially downgrades the max version levels of all features in the
> cluster to the versions known internally to the CLI tool.
>
> I have explained the above in the KIP under these sections:
>
> Tooling support (have explained that the CLI tool is versioned):
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Toolingsupport
>
> Regular CLI tool usage (please refer to point #3, and see the tooling
> example)
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-RegularCLItoolusage
>
> > 110. Similarly, if the client library finds a feature mismatch with the
> broker,
> > the client likely needs to log some error message for the user to take
> some
> > actions. It's much more actionable if the error message is "upgrade the
> > broker to release version 2.6" than just "upgrade the broker to feature
> > version 7".
>
> (Kowshik): That's a really good point! If we use ints for feature versions,
> the best message that the client can print for debugging is "broker doesn't
> support feature version 7", and, alongside that, the supported version
> range returned by the broker. Does it sound reasonable that the user could
> then reference the Kafka release notes to figure out which broker release
> needs to be deployed to support feature version 7? I couldn't think of a
> better strategy here.
>
> > 120. When should a developer bump up the version of a feature?
>
> (Kowshik): Great question! In the KIP, I have added a section, 'Guidelines
> on feature versions and workflows',
> providing some guidelines on when to use the versioned feature flags and
> what the regular workflows with the CLI tool are.
>
> Link to the relevant sections:
> Guidelines:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+sche

Re: [DISCUSS] KIP-591: Add Kafka Streams config to set default store type

2020-04-14 Thread Matthias J. Sax
While writing the KIP, the idea about custom stores also came to my
mind, but I thought it could be done in a follow-up KIP. However, given
that two people asked about it, it might be better to just do it right
away. Using a "super supplier interface" instead of a String config
seems to be the more natural choice, and we would not introduce a config
that we might later deprecate again.

Doing a POC sounds good to me. I'll start working on it and let you know
when I have something ready. I am not worried too much about generics,
but John's suggestion to get rid of the API that accepts StoreSuppliers
is larger in scope, and it's unclear to me atm how it will unfold.


-Matthias

On 4/14/20 11:24 AM, John Roesler wrote:
> Hi all,
> 
> Thanks for starting this, Matthias! I've had multiple people mention this
> feature request to me. Actually, the most recent such request was from
> someone developing an LMDB-backed set of store implementations, as
> a drop-in replacement for RocksDB, so Sophie's suggestion seems
> relevant.
> 
> What do you think, instead of defining a StoreType enum, of defining
> an interface like:
> vvv
> 
> public interface StoreImplementation {
> KeyValueBytesStoreSupplier keyValueSupplier(
> String name
> );
> 
> WindowBytesStoreSupplier windowBytesStoreSupplier(
> String name,
> Duration retentionPeriod,
> Duration windowSize,
> boolean retainDuplicates
> );
> 
> SessionBytesStoreSupplier sessionBytesStoreSupplier(
> String name,
> Duration retentionPeriod
> );
> }
> 
> 
> Then the default.dsl.store.type you proposed would take a class name instead,
> with the requirement that the class given must implement StoreImplementation,
> and it must also have a zero-arg constructor so we can reflectively 
> instantiate it.
> 
> The interface above is compatible with the existing "store supplier" 
> "interface"
> we have loosely defined in Stores. For veracity's sake, here's how we could
> implement it:
> 
> vvv
> public class RocksDBStoreImplementation implements StoreImplementation {
> 
> @Override
> public KeyValueBytesStoreSupplier keyValueSupplier(final String name) {
> return Stores.persistentTimestampedKeyValueStore(name);
> }
> 
> @Override
> public WindowBytesStoreSupplier windowBytesStoreSupplier(final String 
> name,
>  final Duration 
> retentionPeriod,
>  final Duration 
> windowSize,
>  final boolean 
> retainDuplicates) {
> return Stores.persistentTimestampedWindowStore(name, retentionPeriod, 
> windowSize, retainDuplicates);
> }
> 
> @Override
> public SessionBytesStoreSupplier sessionBytesStoreSupplier(final String 
> name, final Duration retentionPeriod) {
> return Stores.persistentSessionStore(name, retentionPeriod);
> }
> }
> 
> 
> public class InMemoryStoreImplementation implements StoreImplementation {
> 
> @Override
> public KeyValueBytesStoreSupplier keyValueSupplier(final String name) {
> return Stores.inMemoryKeyValueStore(name);
> }
> 
> @Override
> public WindowBytesStoreSupplier windowBytesStoreSupplier(final String 
> name,
>  final Duration 
> retentionPeriod,
>  final Duration 
> windowSize,
>  final boolean 
> retainDuplicates) {
> return Stores.inMemoryWindowStore(name, retentionPeriod, windowSize, 
> retainDuplicates);
> }
> 
> @Override
> public SessionBytesStoreSupplier sessionBytesStoreSupplier(final String 
> name, final Duration retentionPeriod) {
> return Stores.inMemorySessionStore(name, retentionPeriod);
> }
> }
> 
> 
> I liked your suggestion to add a new Materialized overload, as long as the
> generics work out. I think we'd have to actually experiment with it to make
> sure (might be nice to do this in a POC PR, rather than having to amend the
> KIP later, but it's your call).
> 
> In fact, I have also gotten a lot of feedback that our 
> StoreBuilder/StoreSupplier/
> Stores/Materialized/etc. all amount to a pretty confusing ball of code for 
> users.
> It seems like this suggestion is a good opportunity to clear out a lot of the
> confusion, by deprecating all the StoreSupplier methods in Stores, as well
> as the other StoreSupplier methods on Materialized, and just converging on
> passing around the StoreImplementation.
> 
> It seems like this general strategy actually nets a few benefits beyond just
> being able to swap in a d
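
For concreteness, here is a minimal usage sketch of how an application could
opt into a store implementation under this proposal. Note that the
default.dsl.store.type config and the StoreImplementation interface above are
only proposals at this point, so all names are illustrative:

    import java.util.Properties;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;

    public class StoreTypeConfigExample {
        public static void main(final String[] args) {
            final Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "store-type-example");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            // Proposed config: the class must implement StoreImplementation
            // and have a zero-arg constructor for reflective instantiation.
            props.put("default.dsl.store.type",
                      InMemoryStoreImplementation.class.getName());

            final StreamsBuilder builder = new StreamsBuilder();
            // ... build the topology; all DSL stores would then default to
            // the configured implementation ...
            final KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
        }
    }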

Re: [DISCUSS] KIP-594: Expose output topic names from TopologyTestDriver

2020-04-14 Thread Matthias J. Sax
Andy,

thanks for the KIP. The motivation is a little unclear to me:

> This information allows all the outputs of a test run to be captured without 
> prior knowledge of the output topics.

Given that the TTD user writes the `Topology` themselves, they should
always have prior knowledge of what output topics they use. So why would
this be useful?

Also, there is `Topology#describe()` to get all topic names (even the
names of internal topics -- to be fair, changelog topic names are not
exposed, only store names, but those can be used to infer changelog
topic names, too).
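
For example, something along these lines, using only the existing public API,
already enumerates the source and sink topics of a topology (a minimal
sketch; the builder parameter is assumed to be the application's
StreamsBuilder):

    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.Topology;
    import org.apache.kafka.streams.TopologyDescription;

    public class DescribeTopicsExample {
        public static void printTopics(final StreamsBuilder builder) {
            final Topology topology = builder.build();
            final TopologyDescription description = topology.describe();
            for (final TopologyDescription.Subtopology sub : description.subtopologies()) {
                for (final TopologyDescription.Node node : sub.nodes()) {
                    if (node instanceof TopologyDescription.Source) {
                        // topicSet() is null if the source subscribes via a Pattern
                        System.out.println("source topics: "
                            + ((TopologyDescription.Source) node).topicSet());
                    } else if (node instanceof TopologyDescription.Sink) {
                        // topic() is null if a dynamic TopicNameExtractor is used
                        System.out.println("sink topic: "
                            + ((TopologyDescription.Sink) node).topic());
                    }
                }
            }
        }
    }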

Can you elaborate on the motivation? So far, it's not convincing to me.



-Matthias


On 4/14/20 8:09 AM, Andy Coates wrote:
> Hey all,
> I would like to start off the discussion for KIP-594:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-594%3A+Expose+output+topic+names+from+TopologyTestDriver
> 
> This KIP proposes to expose, from the TopologyTestDriver class, the names
> of the topics a topology produces records to during a test run.
> 
> Let me know your thoughts!
> Andy
> 



signature.asc
Description: OpenPGP digital signature


Build failed in Jenkins: kafka-1.1-jdk8 #290

2020-04-14 Thread Apache Jenkins Server
See 


Changes:

[matthias] MINOR: Upgrade ducktape to 0.7.7 (#8487)


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H24 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/1.1^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/1.1^{commit} # timeout=10
Checking out Revision f1a5730500b99938497923995619ccd5730e1c5b 
(refs/remotes/origin/1.1)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f f1a5730500b99938497923995619ccd5730e1c5b
Commit message: "MINOR: Upgrade ducktape to 0.7.7 (#8487)"
 > git rev-list --no-walk 63a76f8cc42ca792c7f9ee43e601f46877ccf103 # timeout=10
ERROR: No tool found matching GRADLE_4_8_1_HOME
[kafka-1.1-jdk8] $ /bin/bash -xe /tmp/jenkins8923183289003514172.sh
+ rm -rf 
ERROR: No tool found matching GRADLE_4_8_1_HOME
[kafka-1.1-jdk8] $ /bin/bash -xe /tmp/jenkins1641511626317500408.sh
+ export 'GRADLE_OPTS=-Xmx1024m -XX:MaxPermSize=256m'
+ GRADLE_OPTS='-Xmx1024m -XX:MaxPermSize=256m'
+ ./gradlew --no-daemon -PmaxParallelForks=1 
-PtestLoggingEvents=started,passed,skipped,failed -PxmlFindBugsReport=true 
clean testAll
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; 
support was removed in 8.0
Exception in thread "main" java.lang.NullPointerException
at 
org.gradle.wrapper.BootstrapMainStarter.findLauncherJar(BootstrapMainStarter.java:38)
at 
org.gradle.wrapper.BootstrapMainStarter.start(BootstrapMainStarter.java:26)
at org.gradle.wrapper.WrapperExecutor.execute(WrapperExecutor.java:108)
at org.gradle.wrapper.GradleWrapperMain.main(GradleWrapperMain.java:61)
Build step 'Execute shell' marked build as failure
[FINDBUGS] Collecting findbugs analysis files...
ERROR: No tool found matching GRADLE_4_8_1_HOME
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
ERROR: No tool found matching GRADLE_4_8_1_HOME
No credentials specified
ERROR: No tool found matching GRADLE_4_8_1_HOME
 Using GitBlamer to create author and commit information for all 
warnings.
 GIT_COMMIT=f1a5730500b99938497923995619ccd5730e1c5b, 
workspace=
[FINDBUGS] Computing warning deltas based on reference build #287
Recording test results
ERROR: No tool found matching GRADLE_4_8_1_HOME
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
ERROR: No tool found matching GRADLE_4_8_1_HOME
Not sending mail to unregistered user konstant...@confluent.io


[jira] [Reopened] (KAFKA-5676) MockStreamsMetrics should be in o.a.k.test

2020-04-14 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reopened KAFKA-5676:


> MockStreamsMetrics should be in o.a.k.test
> --
>
> Key: KAFKA-5676
> URL: https://issues.apache.org/jira/browse/KAFKA-5676
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Priority: Major
>  Labels: newbie
>  Time Spent: 96h
>  Remaining Estimate: 0h
>
> {{MockStreamsMetrics}}'s package should be `o.a.k.test`, not 
> `o.a.k.streams.processor.internals`. 
> In addition, it should not require a {{Metrics}} parameter in its constructor, 
> as it is only needed for its extended base class; the right way of mocking 
> should be to implement {{StreamsMetrics}} with mock behavior rather than to 
> extend a real implementation of {{StreamsMetricsImpl}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
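
A rough sketch of the mocking approach suggested above, assuming Mockito (or
a similar mocking library) is available on the test classpath:

    import static org.mockito.Mockito.mock;

    import org.apache.kafka.streams.StreamsMetrics;

    public class MyProcessorTest {
        // Mock the StreamsMetrics interface directly instead of extending the
        // real StreamsMetricsImpl, so no Metrics instance is required.
        private final StreamsMetrics streamsMetrics = mock(StreamsMetrics.class);
    }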


[jira] [Resolved] (KAFKA-5676) MockStreamsMetrics should be in o.a.k.test

2020-04-14 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-5676.

Resolution: Fixed

> MockStreamsMetrics should be in o.a.k.test
> --
>
> Key: KAFKA-5676
> URL: https://issues.apache.org/jira/browse/KAFKA-5676
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Priority: Major
>  Labels: newbie
>  Time Spent: 96h
>  Remaining Estimate: 0h
>
> {{MockStreamsMetrics}}'s package should be `o.a.k.test`, not 
> `o.a.k.streams.processor.internals`. 
> In addition, it should not require a {{Metrics}} parameter in its constructor, 
> as it is only needed for its extended base class; the right way of mocking 
> should be to implement {{StreamsMetrics}} with mock behavior rather than to 
> extend a real implementation of {{StreamsMetricsImpl}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-04-14 Thread Kowshik Prakasam
Hi Jun,

Thanks a lot for the feedback and the questions!
Please find my response below.

> 200. The UpdateFeaturesRequest includes an AllowDowngrade field. It seems
> that field needs to be persisted somewhere in ZK?

(Kowshik): Great question! Below is my explanation. Please help me
understand if you feel there are cases where we would still need to persist
it in ZK.

Firstly, I have now added my thoughts to the KIP, under the 'guidelines'
section:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Guidelinesonfeatureversionsandworkflows

The allowDowngrade boolean field is just there to capture the user's intent,
and to remind them to double-check that intent before proceeding. It should
be set to true by the user in a request only when the intent is to forcefully
"attempt" a downgrade of a specific feature's max version level to the value
provided in the request.

We can extend this safeguard: the controller (on its end) can maintain
rules in the code that, for safety reasons, outright reject certain
downgrades from a specific max_version_level for a specific feature. Such
rejections may depend on the feature being downgraded and on the version
level it is being downgraded from.

The CLI tool only allows a downgrade attempt in conjunction with specific
flags and sub-commands. For example, only if the user uses the
'downgrade-all' command, or passes the '--allow-downgrade' flag when
updating a specific feature, will the tool translate this request into
setting the 'allowDowngrade' field in the request to the server.
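
To make the intent concrete, a downgrade attempt through the proposed admin
API could look roughly like the sketch below. FeatureUpdate,
UpdateFeaturesOptions, and updateFeatures are proposals of this KIP, so the
names and signatures are illustrative, not final API:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;

    public class FeatureDowngradeExample {
        public static void main(final String[] args) throws Exception {
            final Properties adminProps = new Properties();
            adminProps.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            try (final Admin admin = Admin.create(adminProps)) {
                // allowDowngrade = true signals a deliberate, forceful attempt.
                final FeatureUpdate update =
                    new FeatureUpdate((short) 6, true /* allowDowngrade */);
                admin.updateFeatures(Collections.singletonMap("feature_X", update),
                                     new UpdateFeaturesOptions()).all().get();
            }
        }
    }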

> 201. UpdateFeaturesResponse has the following top level fields. Should
> those fields be per feature?
>
>   "fields": [
> { "name": "ErrorCode", "type": "int16", "versions": "0+",
>   "about": "The error code, or 0 if there was no error." },
> { "name": "ErrorMessage", "type": "string", "versions": "0+",
>   "about": "The error message, or null if there was no error." }
>   ]

(Kowshik): Great question!
As such, the API is transactional, as explained in the sections linked
below.
Either all provided FeatureUpdates are applied, or none.
That's the reason I felt we can have just one error code + message.
Happy to extend this if you feel otherwise. Please let me know.

Link to sections:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-ChangestoKafkaController

https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Guarantees

> 202. The /features path in ZK has a field min_version_level. Which API and
> tool can change that value?

(Kowshik): Great question! Currently this cannot be modified by using the
API or the tool.
Feature version deprecation (by raising min_version_level) can be done only
by the Controller directly. The rationale is explained in this section:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Featureversiondeprecation


Cheers,
Kowshik

On Tue, Apr 14, 2020 at 5:33 PM Jun Rao  wrote:

> Hi, Kowshik,
>
> Thanks for addressing those comments. Just a few more minor comments.
>
> 200. The UpdateFeaturesRequest includes an AllowDowngrade field. It seems
> that field needs to be persisted somewhere in ZK?
>
> 201. UpdateFeaturesResponse has the following top level fields. Should
> those fields be per feature?
>
>   "fields": [
> { "name": "ErrorCode", "type": "int16", "versions": "0+",
>   "about": "The error code, or 0 if there was no error." },
> { "name": "ErrorMessage", "type": "string", "versions": "0+",
>   "about": "The error message, or null if there was no error." }
>   ]
>
> 202. The /features path in ZK has a field min_version_level. Which API and
> tool can change that value?
>
> Jun
>
> On Mon, Apr 13, 2020 at 5:12 PM Kowshik Prakasam 
> wrote:
>
> > Hi Jun,
> >
> > Thanks for the feedback! I have updated KIP-584 to address your comments.
> > Please find my response below.
> >
> > > 100.6 You can look for the sentence "This operation requires ALTER on
> > > CLUSTER." in KIP-455. Also, you can check its usage in
> > > KafkaApis.authorize().
> >
> > (Kowshik): Done. Great point! For the newly introduced UPDATE_FEATURES
> > API, I have added a requirement that AclOperation.ALTER be granted on
> > ResourceType.CLUSTER.
> >
> > > 110. Keeping the feature version as int is probably fine. I just felt
> > that
> > > for some of the common user interactions, it's more convenient to
> > > relate that to a release version. For example, if a user wants to
> > downgrade
> > > to a release 2.5, it's easier for the user to use the tool like "tool
> > > --downgrade 2.5" instead of "tool --downgrade --feature X --version 6".
> >
> > (Kowshik): Great point. Generally, maximum feature version levels are not
> > downgradable after
> 

Jenkins build is back to normal : kafka-1.0-jdk8 #293

2020-04-14 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk8 #4432

2020-04-14 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Upgrade ducktape to 0.7.7 (#8487)


--
[...truncated 3.02 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-sys

Re: [DISCUSS] KIP-591: Add Kafka Streams config to set default store type

2020-04-14 Thread Guozhang Wang
Thanks for writing this up Matthias. I'm also thinking that maybe we can
allow customized store types to be swapped in via a single config knob.

I'm even wondering if this can be extended to the Processor API besides the
DSL as well: e.g., if we allow classes like "KeyValueStoreBuilder" to take a
constructor like

KeyValueStoreBuilder(StoreImplementation)


Then the config would be used for StreamsBuilder#build, while Processor API
users can pass an impl to `Topology#addStateStore`; of course, if users
want, they can pass in different impl classes, which we cannot
forbid, but at least Processor API users can still partially benefit from
this (also, maybe we can rename the config to "default.store.impl.class"?).
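
A rough sketch of that idea, reusing the hypothetical StoreImplementation
interface from John's draft (Stores#keyValueStoreBuilder and
Topology#addStateStore below are existing API; everything else is
illustrative):

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.Topology;
    import org.apache.kafka.streams.state.KeyValueStore;
    import org.apache.kafka.streams.state.StoreBuilder;
    import org.apache.kafka.streams.state.Stores;

    public class PapiStoreExample {
        public static Topology buildTopology() {
            // Hypothetical: the implementation would normally come from config.
            final StoreImplementation impl = new InMemoryStoreImplementation();
            final StoreBuilder<KeyValueStore<String, Long>> countStore =
                Stores.keyValueStoreBuilder(
                    impl.keyValueSupplier("counts"), // implementation chosen in one place
                    Serdes.String(),
                    Serdes.Long());

            final Topology topology = new Topology();
            // ... addSource(...) and addProcessor("counting-processor", ...) here ...
            topology.addStateStore(countStore, "counting-processor");
            return topology;
        }
    }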

Guozhang

On Tue, Apr 14, 2020 at 5:44 PM Matthias J. Sax  wrote:

> While writing the KIP, the idea about custom stores also came to my
> mind, but I thought it could be done in a follow-up KIP. However, given
> that two people asked about it, it might be better to just do it right
> away. Using a "super supplier interface" instead of a String config
> seems to be the more natural choice, and we would not introduce a config
> that we might later deprecate again.
>
> Doing a POC sounds good to me. I'll start working on it and let you know
> when I have something ready. I am not worried too much about generics,
> but John's suggestion to get rid of the API that accepts StoreSuppliers
> is larger in scope, and it's unclear to me atm how it will unfold.
>
>
> -Matthias
>
> On 4/14/20 11:24 AM, John Roesler wrote:
> > Hi all,
> >
> > Thanks for starting this, Matthias! I've had multiple people mention this
> > feature request to me. Actually, the most recent such request was from
> > someone developing an LMDB-backed set of store implementations, as
> > a drop-in replacement for RocksDB, so Sophie's suggestion seems
> > relevant.
> >
> > What do you think, instead of defining a StoreType enum, of defining
> > an interface like:
> > vvv
> > 
> > public interface StoreImplementation {
> > KeyValueBytesStoreSupplier keyValueSupplier(
> > String name
> > );
> >
> > WindowBytesStoreSupplier windowBytesStoreSupplier(
> > String name,
> > Duration retentionPeriod,
> > Duration windowSize,
> > boolean retainDuplicates
> > );
> >
> > SessionBytesStoreSupplier sessionBytesStoreSupplier(
> > String name,
> > Duration retentionPeriod
> > );
> > }
> > 
> >
> > Then the default.dsl.store.type you proposed would take a class name
> instead,
> > with the requirement that the class given must implement
> StoreImplementation,
> > and it must also have a zero-arg constructor so we can reflectively
> instantiate it.
> >
> > The interface above is compatible with the existing "store supplier"
> "interface"
> > we have loosely defined in Stores. For veracity's sake, here's how we
> could
> > implement it:
> >
> > vvv
> > public class RocksDBStoreImplementation implements StoreImplementation {
> >
> > @Override
> > public KeyValueBytesStoreSupplier keyValueSupplier(final String
> name) {
> > return Stores.persistentTimestampedKeyValueStore(name);
> > }
> >
> > @Override
> > public WindowBytesStoreSupplier windowBytesStoreSupplier(final
> String name,
> >  final
> Duration retentionPeriod,
> >  final
> Duration windowSize,
> >  final
> boolean retainDuplicates) {
> > return Stores.persistentTimestampedWindowStore(name,
> retentionPeriod, windowSize, retainDuplicates);
> > }
> >
> > @Override
> > public SessionBytesStoreSupplier sessionBytesStoreSupplier(final
> String name, final Duration retentionPeriod) {
> > return Stores.persistentSessionStore(name, retentionPeriod);
> > }
> > }
> >
> >
> > public class InMemoryStoreImplementation implements StoreImplementation {
> >
> > @Override
> > public KeyValueBytesStoreSupplier keyValueSupplier(final String
> name) {
> > return Stores.inMemoryKeyValueStore(name);
> > }
> >
> > @Override
> > public WindowBytesStoreSupplier windowBytesStoreSupplier(final
> String name,
> >  final
> Duration retentionPeriod,
> >  final
> Duration windowSize,
> >  final
> boolean retainDuplicates) {
> > return Stores.inMemoryWindowStore(name, retentionPeriod,
> windowSize, retainDuplicates);
> > }
> >
> > @Override
> > public SessionBytesStoreSupplier sessionBytesStoreSupplier(final
> String name, final Duration retentionPeriod) {
> > 

Build failed in Jenkins: kafka-2.0-jdk8 #318

2020-04-14 Thread Apache Jenkins Server
See 


Changes:

[matthias] MINOR: Upgrade ducktape to 0.7.7 (#8487)


--
[...truncated 436.35 KB...]

kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandlerTest > 
shouldRetryPartitionWhenUnknownTopicOrPartitionError PASSED

kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandlerTest > 
shouldRetryPartitionWhenNotEnoughReplicasAfterAppendError STARTED

kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandlerTest > 
shouldRetryPartitionWhenNotEnoughReplicasAfterAppendError PASSED

kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandlerTest > 
shouldThrowIllegalStateExceptionWhenMessageTooLargeError STARTED

kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandlerTest > 
shouldThrowIllegalStateExceptionWhenMessageTooLargeError PASSED

kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandlerTest > 
shouldThrowIllegalStateExceptionIfErrorCodeNotAvailableForPid STARTED

kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandlerTest > 
shouldThrowIllegalStateExceptionIfErrorCodeNotAvailableForPid PASSED

kafka.coordinator.transaction.TransactionLogTest > shouldReadWriteMessages 
STARTED

kafka.coordinator.transaction.TransactionLogTest > shouldReadWriteMessages 
PASSED

kafka.coordinator.transaction.TransactionLogTest > 
shouldThrowExceptionWriteInvalidTxn STARTED

kafka.coordinator.transaction.TransactionLogTest > 
shouldThrowExceptionWriteInvalidTxn PASSED

kafka.coordinator.transaction.TransactionCoordinatorConcurrencyTest > 
testConcurrentGoodPathSequence STARTED

kafka.coordinator.transaction.TransactionCoordinatorConcurrencyTest > 
testConcurrentGoodPathSequence PASSED

kafka.coordinator.transaction.TransactionCoordinatorConcurrencyTest > 
testConcurrentTransactionExpiration STARTED

kafka.coordinator.transaction.TransactionCoordinatorConcurrencyTest > 
testConcurrentTransactionExpiration PASSED

kafka.coordinator.transaction.TransactionCoordinatorConcurrencyTest > 
testConcurrentRandomSequences STARTED

kafka.coordinator.transaction.TransactionCoordinatorConcurrencyTest > 
testConcurrentRandomSequences PASSED

kafka.coordinator.transaction.TransactionCoordinatorConcurrencyTest > 
testConcurrentLoadUnloadPartitions STARTED

kafka.coordinator.transaction.TransactionCoordinatorConcurrencyTest > 
testConcurrentLoadUnloadPartitions PASSED

kafka.network.SocketServerTest > testGracefulClose STARTED

kafka.network.SocketServerTest > testGracefulClose PASSED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingAlreadyDone STARTED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingAlreadyDone PASSED

kafka.network.SocketServerTest > controlThrowable STARTED

kafka.network.SocketServerTest > controlThrowable PASSED

kafka.network.SocketServerTest > testRequestMetricsAfterStop STARTED

kafka.network.SocketServerTest > testRequestMetricsAfterStop PASSED

kafka.network.SocketServerTest > testConnectionIdReuse STARTED

kafka.network.SocketServerTest > testConnectionIdReuse PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
STARTED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testProcessorMetricsTags STARTED

kafka.network.SocketServerTest > testProcessorMetricsTags PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > testConnectionId STARTED

kafka.network.SocketServerTest > testConnectionId PASSED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics STARTED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics PASSED

kafka.network.SocketServerTest > testNoOpAction STARTED

kafka.network.SocketServerTest > testNoOpAction PASSED

kafka.network.SocketServerTest > simpleRequest STARTED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > closingChannelException STARTED

kafka.network.SocketServerTest > closingChannelException PASSED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingInProgress STARTED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingInProgress PASSED

kafka.network.SocketServerTest > testIdleConnection STARTED

kafka.network.SocketServerTest > testIdleConnection PASSED

kafka.network.SocketServerTest > 
testClientDisconnectionWithStagedReceivesFullyProcessed STARTED

kafka.network.SocketServerTest > 
testClientDisconnectionWithStagedReceivesFullyProcessed PASSED

kafka.network.SocketServerTest > testZeroMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > testZeroMaxConnectionsPe

[jira] [Created] (KAFKA-9869) kafka offset is discontinuous when EOFException appears

2020-04-14 Thread Lee chen (Jira)
Lee chen created KAFKA-9869:
---

 Summary: kafka offset is discontinuous when EOFException appears
 Key: KAFKA-9869
 URL: https://issues.apache.org/jira/browse/KAFKA-9869
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.11.0.4
Reporter: Lee chen


 My Kafka version is 0.11.x. The producer I use sets the config 
compression.type = gzip, while the broker's compression.type is “producer”.
 When I send some messages to one topic and use SimpleConsumer to read them 
back, reading partition-1's offset 167923404 fails with an error: 
java.util.zip.ZipException.
 I used the tool ./kafka-simple-consumer-shell.sh to read partition-1's data 
at offset 167923404, but it cannot be read. It seems that this message has 
gone missing.
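
For reference, a minimal sketch of the producer setup described above (the
broker address and serializers are illustrative assumptions):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class GzipProducerExample {
        public static void main(final String[] args) {
            final Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                      StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                      StringSerializer.class.getName());
            // Producer-side gzip; with the broker's compression.type left at
            // "producer", batches are stored as compressed by the producer.
            props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "gzip");
            try (final KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // ... send records ...
            }
        }
    }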

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9870) Error is reported until the cluster is restarted. Error for partition [__consumer_offsets,19] to broker 0:org.apache.kafka.common.errors.NotLeaderForPartitionException: T

2020-04-14 Thread yonglu gao (Jira)
yonglu gao created KAFKA-9870:
-

 Summary: Error is reported until the cluster is restarted. Error 
for partition [__consumer_offsets,19] to broker 
0:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
not the leader for that topic-partition. 
 Key: KAFKA-9870
 URL: https://issues.apache.org/jira/browse/KAFKA-9870
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.11.0.0
 Environment: jdk 1.8
Reporter: yonglu gao


I have a Kafka cluster. When broker 2 restarts, errors begin and keep being 
reported until, after 3 hours, the cluster is restarted in turn:

 

broker2:
ERROR [KafkaApi-2] Error when handling request 
\{replica_id=0,max_wait_time=500,min_bytes=1,max_bytes=10485760,isolation_level=0,topics=[{topic=__consumer_offsets,partitions=[{partition=13,fetch_offset=17465556,log_start_offset=0,max_bytes=1048576}]}]}
 (kafka.server.KafkaApis)kafka.common.NotAssignedReplicaException: Leader 2 
failed to record follower 0's position -1 since the replica is not recognized 
to be one of the assigned replicas  for partition __consumer_offsets-13. at 
kafka.cluster.Partition.updateReplicaLogReadResult(Partition.scala:274) at 
kafka.server.ReplicaManager$$anonfun$updateFollowerLogReadResults$2.apply(ReplicaManager.scala:1091)
 at 
kafka.server.ReplicaManager$$anonfun$updateFollowerLogReadResults$2.apply(ReplicaManager.scala:1088)
 at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) 
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) at 
kafka.server.ReplicaManager.updateFollowerLogReadResults(ReplicaManager.scala:1088)
 at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:622) at 
kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:606) at 
kafka.server.KafkaApis.handle(KafkaApis.scala:98) at 
kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:66) at 
java.lang.Thread.run(Thread.java:748) [2020-04-11 03:00:32,285] INFO 
[ReplicaFetcherManager on broker 2] Removed fetcher for partitions 
test-2,base_registerAndCardAdd-25,pay-total-order-topic-30,base_registerAndCardAdd-7,pay-total-order-topic-12,base_registerAndCardAdd-51,mac-dbasync-topic-51,test-10,base_registerAndCardAdd-33,pay-total-order-topic-38,mac-base-topic-26,base_registerAndCardAdd-15,pay-total-order-topic-39,mac-dbasync-topic-59,mac-base-topic-8,pay-total-order-topic-21,test-0,base_registerAndCardAdd-23,pay-total-order-topic-3,pay-total-order-topic-47,mac-base-topic-16,base_registerAndCardAdd-24,pay-total-order-topic-29,__consumer_offsets-31,mac-dbasync-topic-31,mac-base-topic-24,mac-dbasync-topic-13,mac-dbasync-topic-57,mac-base-topic-6,__consumer_offsets-39,mac-dbasync-topic-39,mac-dbasync-topic-21,mac-base-topic-14,mac-dbasync-topic-47,__consumer_offsets-48,pay-total-order-topic-9,mac-dbasync-topic-48,__consumer_offsets-30,mac-dbasync-topic-30,mac-base-topic-4,__consumer_offsets-19,mac-dbasync-topic-19,base-beginnerOper-topic-56,__consumer_offsets-1,mac-dbasync-topic-1,base-beginnerOper-topic-38,__consumer_offsets-2,base-beginnerOper-topic-20,__consumer_offsets-28,base-beginnerOper-topic-46,__consumer_offsets-10,base-beginnerOper-topic-28,__consumer_offsets-36,base-beginnerOper-topic-54,__consumer_offsets-18,base_registerAndCardAdd-56,pay-total-order-topic-61,base-beginnerOper-topic-36,base_registerAndCardAdd-38,base-beginnerOper-topic-18,base-beginnerOper-topic-0,base-beginnerOper-topic-44,base_registerAndCardAdd-46,base-beginnerOper-topic-26,base-beginnerOper-topic-8,base-beginnerOper-topic-16,pay-total-order-topic-60,pay-total-order-topic-23,mac-dbasync-topic-62,base_registerAndCardAdd-0,base_registerAndCardAdd-44,mac-dbasync-topic-44,test-3,base_registerAndCardAdd-26,pay-total-order-topic-31,mac-base-topic-19,base_registerAndCardAdd-8,pay-total-order-topic-32,mac-base-topic-1,test-11,pay-total-order-topic-14,pay-total-order-topic-58,mac-base-topic-27,test-12,base_registerAndCardAdd-16,pay-total-order-topic-40,base_registerAndCardAdd-17,pay-total-order-topic-22,base_registerAndCardAdd-43,pay-total-order-topic-48,pay-total-order-topic-4,mac-dbasync-topic-24,mac-base-topic-17,mac-dbasync-topic-50,mac-dbasync-topic-6,mac-dbasync-topic-32,mac-base-topic-25,mac-dbasync-topic-58,mac-base-topic-7,pay-total-order-topic-20,mac-dbasync-topic-40,__consumer_offsets-41,pay-total-order-topic-2,mac-dbasync-topic-41,__consumer_offsets-23,mac-dbasync-topic-23,__consumer_offsets-49,base_registerAndCardAdd-5,mac-dbasync-topic-49,__consumer_offsets-12,mac-dbasync-topic-12,base-beginnerOper-topic-49,base-beginnerOper-topic-31,base-beginnerOper-topic-57,base-beginnerOper-topic-13,__consumer_offsets-21,base-beginnerOper-topic-39,__consumer_offsets-3,__consumer_offsets-47,mac-dbasync-topic-3,__consumer_offsets-29,mac-dbasync-topic-29,base-beginnerOper-topic-47,__consumer_offsets-11,mac-dbasync-topic-11,base-beginnerOper-topic-4

Build failed in Jenkins: kafka-2.2-jdk8 #45

2020-04-14 Thread Apache Jenkins Server
See 


Changes:

[matthias] MINOR: Upgrade ducktape to 0.7.7 (#8487)


--
[...truncated 2.74 MB...]
kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnWildcardResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnWildcardResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventWhenInterBrokerProtocolAtLeastKafkaV2 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventWhenInterBrokerProtocolAtLeastKafkaV2 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnPrefixedResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnPrefixedResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSingleCharacterResourceAcls 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSingleCharacterResourceAcls 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testEmptyAclThrowsException 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testEmptyAclThrowsException PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAllowAccessWithCustomPrincipal STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAllowAccessWithCustomPrincipal PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnWildcardResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnWildcardResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testChangeListenerTiming STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testChangeListenerTiming PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions
 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions
 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAccessAllowedIfAllowAclExistsOnPrefixedResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAccessAllowedIfAllowAclExistsOnPrefixedResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAuthorizeWithEmptyResourceName STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAuthorizeWithEmptyResourceName PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDeleteAllAclOnPrefixedResource STARTED

kafka.security.auth.S

Jenkins build is back to normal : kafka-trunk-jdk11 #1353

2020-04-14 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-9871) Consumer group description showing duplicate partition information

2020-04-14 Thread OM Singh (Jira)
OM Singh created KAFKA-9871:
---

 Summary: Consumer group description showing duplicate partition 
information
 Key: KAFKA-9871
 URL: https://issues.apache.org/jira/browse/KAFKA-9871
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 2.2.1
 Environment: Preprod environment (Staging )
Reporter: OM Singh
 Attachments: externalDevice-lag

The Kafka consumer-group describe command shows duplicate entries for the same partition:
qea1.preprod.fe.deviceTopic 3  3061365 3061365 
0   
feexternaldevicestreamproducer-preprod-63dcef9e-f721-457f-8273-d7761aa24844-StreamThread-5-consumer-248ba8e8-dc47-4763-884f-2e07592b66e5
  /10.243.227.103

qea1.preprod.feexternal.deviceTopic 3  2525385 2568565 
43180   -
 

The qea1.preprod.feexternal.deviceTopic topic has 60 partitions. We observed 
this issue with 12 partitions. It does not go away even after restarting 
containers or resetting the offsets.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9826) Log cleaning repeatedly picks same segment with no effect when first dirty offset is past start of active segment

2020-04-14 Thread Jun Rao (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-9826.

Fix Version/s: 2.6.0
   Resolution: Fixed

merged the PR to trunk

> Log cleaning repeatedly picks same segment with no effect when first dirty 
> offset is past start of active segment
> -
>
> Key: KAFKA-9826
> URL: https://issues.apache.org/jira/browse/KAFKA-9826
> Project: Kafka
>  Issue Type: Bug
>  Components: log cleaner
>Affects Versions: 2.4.1
>Reporter: Steve Rodrigues
>Assignee: Steve Rodrigues
>Priority: Major
> Fix For: 2.6.0
>
>
> Seen on a system where a given partition had a single segment, and for 
> whatever reason (deleteRecords?) the logStartOffset was greater than the 
> base offset of the log's base segment, there was a continuous series of
> ```
> [2020-03-03 16:56:31,374] WARN Resetting first dirty offset of  FOO-3 to log 
> start offset 55649 since the checkpointed offset 0 is invalid. 
> (kafka.log.LogCleanerManager$)
> ```
> messages (partition name changed, it wasn't really FOO). This was expected to 
> be resolved by KAFKA-6266 but clearly wasn't. 
> Further investigation revealed that  a few segments were continuously 
> cleaning and generating messages in the `log-cleaner.log` of the form:
> ```
> [2020-03-31 13:34:50,924] INFO Cleaner 1: Beginning cleaning of log FOO-3 
> (kafka.log.LogCleaner)
> [2020-03-31 13:34:50,924] INFO Cleaner 1: Building offset map for FOO-3... 
> (kafka.log.LogCleaner)
> [2020-03-31 13:34:50,927] INFO Cleaner 1: Building offset map for log FOO-3 
> for 0 segments in offset range [55287, 54237). (kafka.log.LogCleaner)
> [2020-03-31 13:34:50,927] INFO Cleaner 1: Offset map for log FOO-3 complete. 
> (kafka.log.LogCleaner)
> [2020-03-31 13:34:50,927] INFO Cleaner 1: Cleaning log FOO-3 (cleaning prior 
> to Wed Dec 31 19:00:00 EST 1969, discarding tombstones prior to Tue Dec 10 
> 13:39:08 EST 2019)... (kafka.log.LogCleaner)
> [2020-03-31 13:34:50,927] INFO [kafka-log-cleaner-thread-1]: Log cleaner 
> thread 1 cleaned log FOO-3 (dirty section = [55287, 55287])
> 0.0 MB of log processed in 0.0 seconds (0.0 MB/sec).
> Indexed 0.0 MB in 0.0 seconds (0.0 Mb/sec, 100.0% of total time)
> Buffer utilization: 0.0%
> Cleaned 0.0 MB in 0.0 seconds (NaN Mb/sec, 0.0% of total time)
> Start size: 0.0 MB (0 messages)
> End size: 0.0 MB (0 messages) NaN% size reduction (NaN% fewer messages) 
> (kafka.log.LogCleaner)
> ```
> What seems to have happened here (data determined for a different partition) 
> is:
> There exist a number of partitions here which get relatively low traffic, 
> including our friend FOO-5. For whatever reason, LogStartOffset of this 
> partition has moved beyond the baseOffset of the active segment. (Notes in 
> other issues indicate that this is a reasonable scenario.) So there’s one 
> segment, starting at 166266, and a log start of 166301.
> grabFilthiestCompactedLog runs and reads the checkpoint file. We see that 
> this topic-partition needs to be cleaned, and call cleanableOffsets on it, 
> which returns an OffsetsToClean with firstDirtyOffset == logStartOffset == 
> 166301 and firstUncleanableOffset = max(logStart, activeSegment.baseOffset) = 
> 166301, and forceCheckpoint = true.
> The checkpoint file is updated in grabFilthiestCompactedLog (this is the fix 
> for KAFKA-6266). We then create a LogToClean object based on the 
> firstDirtyOffset and firstUncleanableOffset of 166301 (past the active 
> segment’s base offset).
> The LogToClean object has cleanBytes = logSegments(-1, 
> firstDirtyOffset).map(_.size).sum → the size of this segment. It has 
> firstUncleanableOffset and cleanableBytes determined by 
> calculateCleanableBytes. calculateCleanableBytes returns:
> {{val firstUncleanableSegment = log.nonActiveLogSegmentsFrom(uncleanableOffset).headOption.getOrElse(log.activeSegment)}}
> {{val firstUncleanableOffset = firstUncleanableSegment.baseOffset}}
> {{val cleanableBytes = log.logSegments(firstDirtyOffset, math.max(firstDirtyOffset, firstUncleanableOffset)).map(_.size.toLong).sum}}
> {{(firstUncleanableOffset, cleanableBytes)}}
> firstUncleanableSegment is the activeSegment. firstUncleanableOffset is its base 
> offset, 166266. cleanableBytes looks at logSegments(166301, max(166301, 
> 166266)) → which _is the active segment_.
> So there are “cleanableBytes” > 0.
> We then filter for logs with totalBytes (clean + cleanable) > 0. This 
> log has totalBytes > 0, and it has cleanableBytes, so great! It’s the 
> filthiest.
> The cleaner picks it, calls cleanLog on it, which then does cleaner.clean, 
> which returns nextDirtyOffset and cleaner stats. cleaner.clean calls 
> doClean, which builds an offsetMap. The offsetMap 

Jenkins build is back to normal : kafka-2.1-jdk8 #265

2020-04-14 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-9834) Add interface to set ZSTD compresson level

2020-04-14 Thread jiamei xie (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jiamei xie resolved KAFKA-9834.
---
Resolution: Duplicate

Duplicate with KAFKA-7632

> Add interface to set ZSTD compresson level
> --
>
> Key: KAFKA-9834
> URL: https://issues.apache.org/jira/browse/KAFKA-9834
> Project: Kafka
>  Issue Type: Bug
>Reporter: jiamei xie
>Assignee: jiamei xie
>Priority: Major
>
> It seems kafka use zstd default compression level 3 and doesn't have support 
> for setting zstd compression level.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-2.5-jdk8 #93

2020-04-14 Thread Apache Jenkins Server
See 


Changes:

[matthias] MINOR: Upgrade ducktape to 0.7.7 (#8487)


--
[...truncated 5.88 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

Jenkins build is back to normal : kafka-trunk-jdk8 #4433

2020-04-14 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-2.4-jdk8 #187

2020-04-14 Thread Apache Jenkins Server
See 


Changes:

[matthias] MINOR: Upgrade ducktape to 0.7.7 (#8487)


--
[...truncated 7.05 MB...]

org.apache.kafka.connect.transforms.TimestampConverterTest > testKey PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > testSchemalessDateToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > testSchemalessDateToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > testSchemalessTimestampToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > testSchemalessTimestampToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > testSchemalessTimeToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > testSchemalessTimeToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > testWithSchemaTimestampToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > testWithSchemaTimestampToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > testWithSchemaTimestampToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > testWithSchemaTimestampToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > testWithSchemaTimestampToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > testWithSchemaTimestampToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > testWithSchemaNullValueToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > testWithSchemaNullValueToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > testWithSchemaNullValueToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > testWithSchemaNullValueToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > testWithSchemaNullValueToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > testWithSchemaNullValueToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > testWithSchemaNullValueToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > testWithSchemaNullValueToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > testWithSchemaUnixToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > testWithSchemaUnixToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > testWithSchemaNullValueToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > testWithSchemaNullValueToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > testSchemalessIdentity STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > testSchemalessIdentity PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > testWithSchemaNullFieldToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > testWithSchemaNullFieldToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > testWithSchemaFieldConversion STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > testWithSchemaFieldConversion PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > testWithSchemaNullFieldToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > testWithSchemaNullFieldToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > testSchemalessNullValueToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > testSchemalessNullValueToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > testSchemalessNullValueToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > testSchemalessNullValueToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > testWithSchemaDateToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > testWithSchemaDateToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > testWithSchemaIdentity STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > testWithSchemaIdentity PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > testSchemalessTimestampToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > testSchemalessTimestampToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > testSchemalessTimestampToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > testSchemalessTimestampToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > testSchemalessTimestampToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > testSchemalessTimestampToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest >