[jira] [Created] (KAFKA-9918) SslEngineFactory is NOT closed when channel is closing

2020-04-25 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-9918:
-

 Summary: SslEngineFactory is NOT closed when channel is closing
 Key: KAFKA-9918
 URL: https://issues.apache.org/jira/browse/KAFKA-9918
 Project: Kafka
  Issue Type: Bug
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


The default implementation (DefaultSslEngineFactory) does not hold any releasable 
resources, so this issue has gone unnoticed. However, it should be fixed for custom 
engine factories, which may hold resources that need to be released when the 
channel is closed.
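
For illustration (hypothetical custom factory; the names below are made up): a custom 
implementation may hold resources such as a background keystore-reload thread, which 
leaks unless the channel closes the factory:

{code:java}
// Sketch of a custom SslEngineFactory's close(): the reload thread is only
// released if the channel actually closes the factory when it shuts down.
@Override
public void close() {
    reloadExecutor.shutdownNow();
}
{code}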



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [Discuss] KIP-582 Add a "continue" option for Kafka Connect error handling

2020-04-25 Thread Andrew Schofield
Hi Zihan,
Thanks for the KIP. I have a question about the proposal.

Why do you think putting a broken record somewhere other than a dead-letter 
topic is better? For a reliable system, you really want zero broken records, or 
perhaps close to zero. Broken records represent exceptions that need to be 
alerted on and dealt with, either by fixing an incorrect application, or by 
improving the quality of the incoming data. I wouldn't imagine a second set of 
connectors reading dead-letter topics and storing the broken events elsewhere.

If you did want to store them in S3, HDFS or wherever, why couldn't you run 
another connector off the dead-letter topic, with the ByteArrayConverter, that 
just bundles up the broken records as raw bytes? This seems to me very close to 
what this KIP is trying to achieve, only without needing any interface or 
behaviour changes in the connectors. Yes, you need to run more connectors, but 
in a distributed Connect cluster, that's easy to achieve.
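
For example, a sink connector configuration along these lines should do it (just 
a sketch; the connector class, connector name and topic below are made up):

    name=dlq-to-s3
    connector.class=com.example.S3SinkConnector
    topics=my-connector-dlq
    key.converter=org.apache.kafka.connect.converters.ByteArrayConverter
    value.converter=org.apache.kafka.connect.converters.ByteArrayConverter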

Thanks,
Andrew Schofield
IBM

On 24/04/2020, 22:00, "Zihan Li"  wrote:

Hi Chris,

Thanks a lot for your comments. 

1. The complexity comes from maintaining an additional topic and a 
connector, rather than configuring them. Users need to spend extra time and 
money to maintain the additional connectors. I can imagine a case where a user 
has 3 topics consumed by S3, HDFS and JDBC respectively. The user has to 
maintain 3 more connectors to consume three DLQs, in order to put broken 
records in the place they should go. This new option gives users a choice 
to maintain only half of their connectors, while still having broken records 
stored in each destination system.

2. This is a great question. I updated my KIP to reflect the most recent 
plan. We can add a new method to SinkTask called “putBrokenRecord”, so that 
sink connectors are able to differentiate between well-formed records and broken 
records. The default implementation of this method should throw an error to 
indicate that the connector does not support broken record handling yet (a rough 
sketch is included after point 4 below).

3. I think the Schema should be Optional Byte Array, in order to handle all 
possibilities. But I’m open to suggestions on that. 

4. Yes, this rejected alternative plan makes sense to me. I’ll put that 
into the KIP. Compared with this alternative, the point of this proposal is to 
save the effort of maintaining twice as many connectors as necessary.
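
Here is the rough sketch mentioned in point 2 (illustrative only; the parameter 
type and exact wording are my assumptions, not something the KIP has settled):

    // Proposed addition to org.apache.kafka.connect.sink.SinkTask (sketch).
    // Broken records arrive as raw bytes; connectors that do not opt in fail fast.
    public void putBrokenRecord(Collection<SinkRecord> brokenRecords) {
        throw new UnsupportedOperationException(
                "This sink connector does not support broken record handling");
    }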

Thanks again. Looking forward to the discussion!

Sorry if you see this email twice; the previous one didn't show up on this 
discussion thread.

Best,
Zihan

On 2020/04/13 22:35:56, Christopher Egerton  wrote: 
> HI Zihan,
> 
> Thanks for the KIP! I have some questions that I'm hoping we can address 
to
> help better understand the motivation for this proposal.
> 
> 1. In the "Motivation" section it's written that "If users want to store
> their broken records, they have to config a broken record queue, which is
> too much work for them in some cases." Could you elaborate on what makes
> this a lot of work? Ideally, users should be able to configure the dead
> letter queue by specifying a value for the "
> errors.deadletterqueue.topic.name" property in their sink connector 
config;
> this doesn't seem like a lot of work on the surface.
> 
> 2. If the "errors.tolerance" property is set to "continue", would sink
> connectors be able to differentiate between well-formed records whose
> successfully-deserialized contents are byte arrays and malformed records
> whose contents are the still-serialized byte arrays of the Kafka message
> from which they came?
> 
> 3. I think it's somewhat implied by the KIP, but it'd be nice to see what
> the schema for a malformed record would be. Null? Byte array? Optional 
byte
> array?
> 
> 4. This is somewhat covered by the first question, but it seems worth
> pointing out that this exact functionality can already be achieved by 
using
> features already provided by the framework. Configure your connector to
> send malformed records to a dead letter queue topic, and configure a
> separate connector to consume from that dead letter queue topic, use the
> ByteArrayConverter to deserialize records, and send those records to the
> destination sink. It'd be nice if this were called out in the "Rejected
> Alternatives" section with a reason on why the changes proposed in the KIP
> are preferable, especially since it may still work as a viable workaround
> for users who are working on older versions of the Connect framework.
> 
> Looking forward to the discussion!
> 
> Cheers,
> 
> Chris
> 
> On Tue, Mar 24, 2020 at 11:50 AM Zihan Li  wrote:
> 
> > Hi,
> >
> > I just want to re-up this discussion thread about KIP-582 Add a 
"continue"
> > option for Kafka Connect error handling.
> >
> > Wiki page: 

Re: [Discuss] KIP-582 Add a "continue" option for Kafka Connect error handling

2020-04-25 Thread Christopher Egerton
Hi Zihan,

Thanks for the changes and the clarifications! I agree that maintaining a 
second topic and a second connector is a fair amount of
work; to Andrew's question, it seems less about the cost of just running
another connector, and more about managing that second connector (and
topic) when a lot of the logic is identical, such as topic ACLs,
credentials for the connector to access the external system, and other
fine-tuning.

However, I'm still curious about the general use case here. For example, if
a converter fails to deserialize a record, it seems like the right thing to
do would be to examine the record, try to understand why it's failing, and
then find a converter that can handle it. If the raw byte array for the
Kafka message gets written to the external system instead, what's the
benefit to the user? Yes, they won't have to configure another connector
and manage another topic, but they're still going to want to examine that
data at some point; why would it be easier to deal with malformed records
from an external system than it would from where they originally broke, in
Kafka?

If we're going to add a new feature like this to the framework, I just want
to make sure that there's a general use case for this that isn't tied to
one specific type of connector, external system, usage pattern, etc.

Oh, and one other question that came to mind--what would the expected
behavior be if a converter was unable to deserialize a record's key, but
was able to deserialize its value?

Cheers,

Chris

On Sat, Apr 25, 2020 at 12:27 PM Andrew Schofield 
wrote:

> Hi Zihan,
> Thanks for the KIP. I have a question about the proposal.
>
> Why do you think putting a broken record somewhere other than a
> dead-letter topic is
> better? For a reliable system, you really want zero broken records, or
> perhaps close to zero.
> Broken records represent exceptions that need to be alerted and dealt
> with, either by
> fixing an incorrect application, or improving the quality of the data
> incoming. I wouldn't
> imagine a second set of connectors reading dead-letter topics storing the
> broken events
> elsewhere.
>
> If you did want to store them in S3, HDFS or wherever, why couldn't you
> run another
> connector off the dead-letter topic, with the ByteArrayConverter, that
> just bundles up
> the broken records as raw bytes. This seems to me very close to what this
> KIP is trying to
> achieve, only without needing any interface or behaviour changes in the
> connectors. Yes,
> you need to run more connectors, but in a distributed connect cluster,
> that's easy to achieve.
>
> Thanks,
> Andrew Schofield
> IBM
>
> On 24/04/2020, 22:00, "Zihan Li"  wrote:
>
> Hi Chris,
>
> Thanks a lot for your comments.
>
> 1. The complexity comes from maintaining an additional topic and a
> connector, rather than configuring them. Users need to spend extra time and
> money to maintain the additional connectors. I can imagine a case where a
> user has 3 topics consumed by S3, HDFS and JDBC respectively  The user has
> to maintain 3 more connectors to consume three DLQs, in order to put broken
> records to the place they should go. This new option will give users a
> choice to only maintain half of their connectors, yet having broken records
> stored in each destination system.
>
> 2. This is a great question. I updated my KIP to reflect the most
> recent plan. We can add a new method to SinkTask called “putBrokenRecord”,
> so that sink connectors is able to differentiate between well-formed
> records and broken records. The default implementation of this method
> should be throwing errors to indicate that the connector does not support
> broken record handling yet.
>
> 3. I think the Schema should be Optional Byte Array, in order to
> handle all possibilities. But I’m open to suggestions on that.
>
> 4. Yes, this rejected alternative plan makes sense to me. I’ll put
> that into the KIP. Compared with this alternative, the point of this
> proposal is to save the effort to maintain twice as many connectors as
> necessary.
>
> Thanks again. Looking forward to the discussion!
>
> Sorry if you see this email twice, the previous one didn't show up on
> this discussion thread.
>
> Best,
> Zihan
>
> On 2020/04/13 22:35:56, Christopher Egerton 
> wrote:
> > HI Zihan,
> >
> > Thanks for the KIP! I have some questions that I'm hoping we can
> address to
> > help better understand the motivation for this proposal.
> >
> > 1. In the "Motivation" section it's written that "If users want to
> store
> > their broken records, they have to config a broken record queue,
> which is
> > too much work for them in some cases." Could you elaborate on what
> makes
> > this a lot of work? Ideally, users should be able to configure the
> dead
> > letter queue by specifying a value for the "
> > errors.deadletterqueue.topic.name" property in their sink connector
> co

Re: [VOTE] KIP-584: Versioning scheme for features

2020-04-25 Thread Kowshik Prakasam
Hi Colin,

Thanks for the explanation! I agree with you, and I have updated the KIP.
Here is a link to the relevant section:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Controller:ZKnodebootstrapwithdefaultvalues
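
As a side note for readers skimming the quoted discussion below, the intended use 
of the finalized range can be sketched roughly like this (illustrative only; this 
is not the KIP's actual API):

    // A feature version is safe to exercise only if it lies inside the finalized
    // [min_version_level, max_version_level] range, i.e. every broker supports it.
    static boolean isUsable(short version, short minVersionLevel, short maxVersionLevel) {
        return version >= minVersionLevel && version <= maxVersionLevel;
    }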


Cheers,
Kowshik

On Fri, Apr 24, 2020 at 8:50 PM Colin McCabe  wrote:

> On Fri, Apr 24, 2020, at 00:01, Kowshik Prakasam wrote:
> > (Kowshik): Great point! However for case #1, I'm not sure why we need to
> > create a '/features' ZK node with disabled features. Instead, do you see
> > any drawback if we just do not create it? i.e. if IBP is less than 2.6,
> the
> > controller treats the case as though the versioning system is completely
> > disabled, and would not create a non-existing '/features' node.
>
> Hi Kowshik,
>
> When the IBP is less than 2.6, but the software has been upgraded to a
> state where it supports this KIP, that
>  means the user is upgrading from an earlier version of the software.  In
> this case, we want to start with all the features disabled and allow the
> user to enable them when they are ready.
>
> Enabling all the possible features immediately after an upgrade could be
> harmful to the cluster.  On the other hand, for a new cluster, we do want
> to enable all the possible features immediately. I was proposing this as a
> way to distinguish the two cases (since the new cluster will never be
> started with an old IBP).
>
> > Colin McCabe wrote:
> > > And now, something a little bit bigger (sorry).  For finalized
> features,
> > > why do we need both min_version_level and max_version_level?  Assuming
> that
> > > we want all the brokers to be on the same feature version level, we
> really only care
> > > about three numbers for each feature, right?  The minimum supported
> version
> > > level, the maximum supported version level, and the current active
> version level.
> >
> > > We don't actually want different brokers to be on different versions of
> > > the same feature, right?  So we can just have one number for current
> > > version level, rather than two.  At least that's what I was thinking
> -- let
> > > me know if I missed something.
> >
> > (Kowshik): It is my understanding that the "current active version level"
> > that you have mentioned, is the "max_version_level". But we still
> > maintain/publish both min and max version levels, because, the detail
> about
> > min level is useful to external clients. This is described below.
> >
> > For any feature F, think of the closed range: [min_version_level,
> > max_version_level] as the range of finalized versions, that's guaranteed
> to
> > be supported by all brokers in the cluster.
> >  - "max_version_level" is the finalized highest common version among all
> > brokers,
> >  - "min_version_level" is the finalized lowest common version among all
> > brokers.
> >
> > Next, think of "client" here as the "user of the new feature versions
> > system". Imagine that such a client learns about finalized feature
> > versions, and exercises some logic based on the version. These clients
> can
> > be of 2 types:
> > 1. Some part of the broker code itself could behave like a client trying
> to
> > use some feature that's "internal" to the broker cluster. Such a client
> > would learn the latest finalized features via ZK.
> > 2. An external system (ex: Streams) could behave like a client, trying to
> > use some "external" facing feature. Such a client would learn latest
> > finalized features via ApiVersionsRequest. Ex: group_coordinator feature
> > described in the KIP.
> >
> > Next, imagine that for F, the max_version_level is successfully bumped by
> > +1 (via Controller API). Now it is guaranteed that all brokers (i.e.
> > internal clients) understand max_version_level + 1. However, it is still
> > not guaranteed that all external clients have support for (or have
> > activated) the logic for the newer version. Why? Because, this is
> > subjective as explained next:
> >
> > 1. On one hand, imagine F as an internal feature only relevant to
> Brokers.
> > The binary for the internal client logic is controlled by Broker cluster
> > deployments. When shipping a new Broker release, we wouldn't bump max
> > "supported" feature version for F by 1, unless we have introduced some
> new
> > logic (with a potentially breaking change) in the Broker. Furthermore,
> such
> > feature logic in the broker should/will not be implemented in a way that
> it
> > would activate logic for an older feature version after it has migrated
> to
> > using the logic for a newer feature version (because this could break the
> > cluster!). For these cases, max_version_level will be very useful for
> > decision making.
> >
> > 2. On the other hand, imagine F as an external facing feature. External
> > clients are not within the control of Broker cluster. An external client
> > may not have upgraded it's code (yet) to use 'max_version_level + 1'.
> But,
> > the Kafka clus

Re: [DISCUSS] KIP-585: Conditional SMT

2020-04-25 Thread Christopher Egerton
Hi Andrew,

I know a DSL seems like overkill, and I'm not attached to it by any means,
but I do think it serves a vital purpose in that it allows people who don't
know how to write Java code, or don't have the time, to act on data being
processed by connectors.

Are you proposing that the "RecordPredicate" be another pluggable interface
that users could then implement on their own? Or, would this be purely a
syntactic expansion to SMT configuration with some subset of hard-coded
predicates provided by the framework? I think there's value in the latter,
but the former doesn't seem like it'd bring much to the table as far as
users are concerned, and for developers, it is an improvement, but not a
major one.
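
If it's the former, I imagine the pluggable interface would look roughly like 
the following (a sketch only; the KIP hasn't defined it, and the names and 
generics here are assumptions):

    import java.io.Closeable;
    import java.util.Map;
    import org.apache.kafka.connect.connector.ConnectRecord;

    // Hypothetical guard predicate evaluated before a Transformation is applied.
    public interface RecordPredicate<R extends ConnectRecord<R>> extends Closeable {
        void configure(Map<String, ?> configs); // configured via the '?'-prefixed keys
        boolean test(R record);                 // true => apply the guarded Transformation
    }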

Cheers,

Chris

On Fri, Apr 24, 2020 at 12:04 PM Andrew Schofield <
schofieldandr...@gmail.com> wrote:

> I wonder whether we're getting a bit overcomplicated with this. I think all
> that's required here is to add an optional guard predicate for a
> Transformation.
> The predicate cannot end the Transformation chain, but it can allow or
> prevent a particular
> Transformation from running.
>
> How about this as syntax?
> transforms: extractInt
> transforms.extractInt.?type: org.apache.kafka.connect.predicates.TopicMatches
> transforms.extractInt.?regex: my-prefix-.*
> transforms.extractInt.type: org.apache.kafka.connect.transforms.ExtractField$Key
> transforms.extractInt.field: c1
>
> The idea is that a Transformation can have an optional RecordPredicate.
> The RecordPredicate
> can have configuration items, in a similar way to a Transformation. The
> syntax of
> using a '?' prefix to separate configuration for the RecordPredicate from
> the configuration
> for the Transformation could conceivably clash with an existing
> Transformation
> but the chances are minimal.
>
> This syntax doesn't allow for an 'else', but if the KIP offers say a
> TopicMatches predicate
> then that can be configured to return FALSE if the predicate matches.
>
> I feel that a DSL for SMTs is overkill. If you need something that
> complex, it's
> perhaps too complex for a transformation chain and it's really a streaming
> application.
>
> Andrew Schofield
> IBM Event Streams
>
> On 2020/04/08 21:39:31, Christopher Egerton  wrote:
> > Hi Tom,
> >
> > With regards to the potential Transformation::validate method, I don't
> > quite follow your objections. The AbstractHerder class, ConnectorConfig
> > class, and embedding of ConfigDefs that happens with the two is all
> > internal logic and we're allowed to modify it however we want, as long as
> > it doesn't alter any public-facing APIs (and if it does, we just have to
> > document those changes in a KIP). We don't have to embed the ConfigDef
> for
> > a transformation chain inside another ConfigDef if the API we want to
> > present to our users doesn't play well with that part of the code base.
> > Additionally, beyond aligning with the existing API provided for
> > connectors, another advantage is that it becomes possible to validate
> > properties for a configuration in the context of all other properties, so
> > to be clear, it's not just about preserving what may be perceived as a
> > superficial similarity and comes with real functional benefits that can't
> > be provided (at least, not as easily) by dynamic construction of a
> > ConfigDef object.
> >
> > As far as the new proposal goes, I hate to say it, but I think we're
> stuck
> > with the worst of both worlds here. Adding the new RecordPredicate
> > interface seems like it defeats the whole purpose of SMTs, which is to
> > allow manipulation of a connector's event stream by users who don't
> > necessarily know how or have the time to write Java code of their own.
> This
> > is also why I'm in favor of adding a lightweight DSL for the condition;
> > emphasizing readability for people who aren't very familiar with Connect
> > and just want to get something going quickly should be a priority for
> SMTs.
> > But if that's not going to happen with this KIP, I'd still choose the
> > simpler, less-flexible approach initially outlined, in order to keep
> things
> > simple for people creating connectors and try to let them accomplish what
> > they want via configuration, not code.
> >
> > With regards to the question about where the line should be drawn and how
> > much is too much and comparisons to other stream processing frameworks, I
> > think the nature of SMTs draws the line quite nicely: you can only
> process
> > one message at a time. There's plenty of use cases out there for
> > heavier-duty processing frameworks like Kafka Streams, with aggregate
> > operations, joining of streams, expanding a single message into multiple
> > messages, etc. With SMTs, none of this is possible; the general use case
> is
> > to filter and clean a stream of data. If any of the heavier-weight logic
> > provided by, e.g., Streams, isn't required for a project, it should be
> > possible to get along with just a collection of sink connectors, sou

Re: [DISCUSS] KIP-598: Augment TopologyDescription with store and source / sink serde information

2020-04-25 Thread Guozhang Wang
Hi John,

Thanks for the review! Replied inline.

On Fri, Apr 24, 2020 at 8:09 PM John Roesler  wrote:

> Hi Guozhang,
>
> Thanks for the KIP! I took a quick look, and I'm really happy to see this
> underway.
>
> Some quick questions:
>
> 1.  Can you elaborate on the reason that stores just have a list of
> serdes, whereas
> other components have an explicit key/value serde?
>

This is because of the existing API "List<Serde> StoreBuilder#serdes()".
Although both of its implementations would return two serdes (one for the key
and one for the value), the API is more general and returns a list. Hence
TopologyDescription#Store, which gets the serdes directly from the
StoreBuilder, exposes the same shape.

1.5. A side-effect of this seems to be that the string-formatted serde
> description is
> different, depending on whether the serdes are listed on a store or a
> topic. Just an
> observation.
>

Yes, I agree. I think we can probably change the "List<Serde>
StoreBuilder#serdes()" signature as well (which would be a breaking change,
though, so we should do that via deprecation), but I'm a bit concerned,
since it was designed for future store types which may not be in key-value
format any more.


> 2. You mentioned the key compatibility concern in my mind. We do know that
> such
> use cases exist. Namely, our own tests and
> https://zz85.github.io/kafka-streams-viz/
> I'm wondering if we can add a forward-compatible machine-readable format
> to the
> KIP, so that even though we must break the parsers right now, maybe we'll
> never
> have to break them again. For example, I'm thinking of a "toJson" method
> on the
> TopologyDescription that formats the entire topology description as a json
> string.
>
>
Yes, I also have concerns about that (as described in the compatibility
section). One proposal I had was to ONLY augment the toString result
if the TopologyDescription is from a Topology built via
`StreamsBuilder#build(Properties)`, which was only added recently, so most
existing usage would not see the change. But after thinking about this a bit
more, I'm now more inclined to just always augment the string, and also add
a `toJson` method in addition to `toString`.
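
Roughly, the usage I have in mind (method name tentative; the exact JSON format 
is not yet specified in the KIP) would be:

    // given a StreamsBuilder `builder` and a Properties `props`
    final Topology topology = builder.build(props);
    final TopologyDescription description = topology.describe();
    System.out.println(description.toString()); // human-readable, as today
    System.out.println(description.toJson());   // proposed machine-readable form

so that tools like the topology visualizer could parse the JSON instead of the 
formatted string.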


> Thanks again!
> -John
>
> On Fri, Apr 24, 2020, at 00:26, Guozhang Wang wrote:
> > Hello folks,
> >
> > I'd like to start the discussion for KIP-598:
> >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=148648762
> >
> > It proposes to augment the topology description's sub-classes with store
> > and source / sink serde information. Let me know what you think, thanks!
> >
> > --
> > -- Guozhang
> >
>


-- 
-- Guozhang


Build failed in Jenkins: kafka-trunk-jdk8 #4469

2020-04-25 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: reduce allocations in log start and recovery checkpoints (#8467)


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H35 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision cfc34cace523e6d9698a892b9ec7ba4af33d00ea 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f cfc34cace523e6d9698a892b9ec7ba4af33d00ea
Commit message: "MINOR: reduce allocations in log start and recovery 
checkpoints (#8467)"
 > git rev-list --no-walk 99b8b51f1ecc7fb92f3d7c48709b20133cf15bb2 # timeout=10
Setting GRADLE_4_10_3_HOME=/home/jenkins/tools/gradle/4.10.3
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/jenkins3964447030177504106.sh
+ rm -rf 
Setting GRADLE_4_10_3_HOME=/home/jenkins/tools/gradle/4.10.3
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/jenkins6199265098174858986.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew --no-daemon --continue -PmaxParallelForks=2 
-PtestLoggingEvents=started,passed,skipped,failed -PxmlFindBugsReport=true 
-PscalaVersion=2.12 clean test
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/6.3/userguide/gradle_daemon.html.

FAILURE: Build failed with an exception.

* What went wrong:
Unable to start the daemon process.
This problem might be caused by incorrect configuration of the daemon.
For example, an unrecognized jvm option is used.
Please refer to the User Manual chapter on the daemon at 
https://docs.gradle.org/6.3/userguide/gradle_daemon.html
Process command line: /usr/local/asfpackages/java/jdk1.8.0_241/bin/java -Xss2m 
-Xmx1024m -Dfile.encoding=ISO-8859-1 -Duser.country=US -Duser.language=en 
-Duser.variant -cp 
/home/jenkins/.gradle/wrapper/dists/gradle-6.3-all/b4awcolw9l59x95tu1obfh9i8/gradle-6.3/lib/gradle-launcher-6.3.jar
 org.gradle.launcher.daemon.bootstrap.GradleDaemon 6.3
Please read the following process output to find out more:
---
Error occurred during initialization of VM
java.lang.OutOfMemoryError: unable to create new native thread


* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org
Build step 'Execute shell' marked build as failure
[FINDBUGS] Collecting findbugs analysis files...
Setting GRADLE_4_10_3_HOME=/home/jenkins/tools/gradle/4.10.3
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/*bugs/*.xml
[FINDBUGS] No files found. Configuration error?
Setting GRADLE_4_10_3_HOME=/home/jenkins/tools/gradle/4.10.3
No credentials specified
Setting GRADLE_4_10_3_HOME=/home/jenkins/tools/gradle/4.10.3
 Using GitBlamer to create author and commit information for all 
warnings.
 GIT_COMMIT=cfc34cace523e6d9698a892b9ec7ba4af33d00ea, 
workspace=
[FINDBUGS] Computing warning deltas based on reference build #4468
Recording test results
Setting GRADLE_4_10_3_HOME=/home/jenkins/tools/gradle/4.10.3
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
Setting GRADLE_4_10_3_HOME=/home/jenkins/tools/gradle/4.10.3
Not sending mail to unregistered user nore...@github.com


[jira] [Created] (KAFKA-9919) Add logging to KafkaBasedLog

2020-04-25 Thread Chris Egerton (Jira)
Chris Egerton created KAFKA-9919:


 Summary: Add logging to KafkaBasedLog
 Key: KAFKA-9919
 URL: https://issues.apache.org/jira/browse/KAFKA-9919
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Chris Egerton
Assignee: Chris Egerton


The logging emitted on startup is a little thin, especially in the case where a 
worker is having trouble reading to the end of its offset, status, or config 
topic. We should add some {{TRACE}} and possibly {{DEBUG}} level logs to the 
{{KafkaBasedLog}} class so that it's clearer when this is happening.
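
For example (illustrative only; these are not the actual messages from a patch):

{code:java}
// Sketch: in KafkaBasedLog's read-to-end loop, make progress toward the end offsets visible.
log.debug("Reading to end of log for topic {}; end offsets: {}", topic, endOffsets);
log.trace("Still behind end offset for {}: consumed up to {}, end offset is {}",
        partition, position, endOffset);
{code}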



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk11 #1389

2020-04-25 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: reduce allocations in log start and recovery checkpoints (#8467)


--
[...truncated 3.03 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > shouldCloseProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldEnqueueLaterOutputsAfterEarlierOnes[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldEnqueueLaterOutputsAfterEarlierOnes[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eo

Jenkins build is back to normal : kafka-trunk-jdk14 #20

2020-04-25 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-9906) Is bytesSinceLastIndexEntry updated correctly in LogSegment.append()?

2020-04-25 Thread Xiang Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Zhang resolved KAFKA-9906.

Resolution: Not A Problem

> Is bytesSinceLastIndexEntry updated correctly in LogSegment.append()?
> -
>
> Key: KAFKA-9906
> URL: https://issues.apache.org/jira/browse/KAFKA-9906
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Xiang Zhang
>Priority: Major
>
> I was reading code in LogSegment.scala and I found the code below:
>  
> {code:java}
> def append(largestOffset: Long,
>largestTimestamp: Long,
>shallowOffsetOfMaxTimestamp: Long,
>records: MemoryRecords): Unit = {
>   ...
>   val appendedBytes = log.append(records)
>   if (bytesSinceLastIndexEntry > indexIntervalBytes) {
> offsetIndex.append(largestOffset, physicalPosition)
> timeIndex.maybeAppend(maxTimestampSoFar, offsetOfMaxTimestampSoFar)
> bytesSinceLastIndexEntry = 0
>   }
>   bytesSinceLastIndexEntry += records.sizeInBytes
> }
> {code}
> when bytesSinceLastIndexEntry > indexIntervalBytes, we update the offsetIndex 
> and maybe the timeIndex and set bytesSinceLastIndexEntry to zero, which makes 
> sense to me because we just updated the index. However, following that, 
> bytesSinceLastIndexEntry is incremented by records.sizeInBytes, which I find 
> confusing since the records are appended before the index is updated. Maybe 
> it should work like this:
> {code:java}
> if (bytesSinceLastIndexEntry > indexIntervalBytes) {
>   offsetIndex.append(largestOffset, physicalPosition)
>   timeIndex.maybeAppend(maxTimestampSoFar, offsetOfMaxTimestampSoFar)
>   bytesSinceLastIndexEntry = 0
> } else {
>   bytesSinceLastIndexEntry += records.sizeInBytes
> }{code}
>  Sorry if I misunderstood this.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-596 Safely abort Producer transactions during application shutdown

2020-04-25 Thread 张祥
Sorry to bump this thread, but this KIP is still open for discussion; any
comments and ideas would be appreciated. Thanks.

张祥  于2020年4月17日周五 下午1:04写道:

> Guozhang, thanks for the valuable suggestion.
>
> A new part called "suggested coding pattern" has been added and I copy the
> core code here:
>
> try {
> producer.beginTransaction();
> for (int i = 0; i < 100; i++)
> producer.send(new ProducerRecord<>("my-topic",
> Integer.toString(i), Integer.toString(i)));
> producer.commitTransaction();
> } catch (Exception e) {
> producer.abortTransaction();
> if(e instanceof IllegalStateException ||
> e instanceof ProducerFencedException ||
> e instanceof UnsupportedVersionException ||
> e instanceof AuthorizationException ||
> e instanceof OutOfOrderSequenceException) {
> producer.close();
> }
> }
>
> As you can see, in the catch block all fatal exceptions need to be
> listed. I am not sure I have listed all of them, and I wonder if there is a
> better way to do this.
>
>
> Guozhang Wang  于2020年4月17日周五 上午8:50写道:
>
>> Xiang, thanks for the written KIP. I just have one meta comment and
>> otherwise it looks good to me: could you also add a section about
>> suggested
>> coding patterns (especially how try - catch should be implemented) as we
>> discussed on the JIRA to the wiki page as well?
>>
>> And please also note that besides the javadoc of the function, on top of
>> the KafkaProducer class there are also comments regarding example snippet:
>>
>> ```
>>
>> * 
>> * {@code
>> * Properties props = new Properties();
>> * props.put("bootstrap.servers", "localhost:9092");
>> * props.put("transactional.id", "my-transactional-id");
>> * Producer<String, String> producer = new KafkaProducer<>(props, new
>> StringSerializer(), new StringSerializer());
>> *
>> * producer.initTransactions();
>> *
>> * try {
>> * producer.beginTransaction();
>> * for (int i = 0; i < 100; i++)
>> * producer.send(new ProducerRecord<>("my-topic",
>> Integer.toString(i), Integer.toString(i)));
>> * producer.commitTransaction();
>> * } catch (ProducerFencedException | OutOfOrderSequenceException |
>> AuthorizationException e) {
>> * // We can't recover from these exceptions, so our only option is
>> to close the producer and exit.
>> * producer.close();
>> * } catch (KafkaException e) {
>> * // For all other exceptions, just abort the transaction and try
>> again.
>> * producer.abortTransaction();
>> * }
>> * producer.close();
>>
>> * } 
>> ```
>>
>> I think with this change we do not need to educate users that they should
>> distinguish the types of exceptions when calling `abortTxn`, instead they
>> only need to depend on the exception to decide whether to `close` the
>> producer, so the above recommendation could look like:
>>
>> try {
>>
>> } catch (Exception e) {
>>
>> producer.abortTxn;
>>
>> if (e instanceof /*fatal exceptions*/) {
>> producer.close();
>> }
>> }
>>
>>
>> Guozhang
>>
>> On Thu, Apr 16, 2020 at 12:14 AM 张祥  wrote:
>>
>> > Thanks for the structure change Boyang. And I agree with you on the weak
>> > proposal part, I have adjusted it according to your suggestion. Thanks
>> > again!
>> >
>> > Boyang Chen  于2020年4月16日周四 下午2:39写道:
>> >
>> > > Thanks for the KIP Xiang!
>> > >
>> > > I think the motivation looks good, and I just did a slight structure
>> > change
>> > > to separate "Proposed Changes" and "Public Interfaces", hope you don't
>> > > mind.
>> > >
>> > > However, "we can determine whether the producer client is already in
>> > error
>> > > state in abortTransaction" sounds a bit weak about the actual
>> proposal,
>> > > instead we could propose something as "we would remember whether a
>> fatal
>> > > exception has already been thrown to the application level, so that in
>> > > abort transaction we will not throw again, thus making the function
>> safe
>> > to
>> > > be called in an error state".
>> > >
>> > > Other than that, I think the KIP is in pretty good shape.
>> > >
>> > > Boyang
>> > >
>> > > On Wed, Apr 15, 2020 at 7:07 PM 张祥  wrote:
>> > >
>> > > > Hi everyone,
>> > > >
>> > > > I have opened a small KIP about safely aborting transaction during
>> > > > shutdown. I'd like to use this thread to discuss about it and any
>> > > feedback
>> > > > is appreciated (sorry for earlier KIP number mistake). Here is a
>> link
>> > to
>> > > > KIP-596 :
>> > > >
>> > > >
>> > >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-596%3A+Safely+abort+Producer+transactions+during+application+shutdown
>> > > >
>> > > > Thank you!
>> > > >
>> > >
>> >
>>
>>
>> --
>> -- Guozhang
>>
>


[jira] [Created] (KAFKA-9920) Fix NetworkDegradeTest.test_rate test error

2020-04-25 Thread jiamei xie (Jira)
jiamei xie created KAFKA-9920:
-

 Summary: Fix NetworkDegradeTest.test_rate test error
 Key: KAFKA-9920
 URL: https://issues.apache.org/jira/browse/KAFKA-9920
 Project: Kafka
  Issue Type: Bug
  Components: core, system tests
Reporter: jiamei xie
Assignee: jiamei xie


The test case 
kafkatest.tests.core.network_degrade_test.NetworkDegradeTest.test_rate.
rate_limit_kbit=100.device_name=eth0.task_name=rate-1000-latency-50.latency_ms=50
failed with the error "Expected most of the measured rates to be within an order 
of magnitude of target 100. This means `tc` did not limit the bandwidth as 
expected." This was because the rate limit didn't take effect immediately after 
it was started.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk8 #4470

2020-04-25 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Partition is under reassignment when adding and removing (#8364)


--
[...truncated 3.01 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecord

[jira] [Resolved] (KAFKA-9901) TimeoutError: Never saw message indicating StreamsTest finished startup on ducker@ducker07

2020-04-25 Thread jiamei xie (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jiamei xie resolved KAFKA-9901.
---
Resolution: Duplicate

> TimeoutError: Never saw message indicating StreamsTest finished startup on 
> ducker@ducker07
> --
>
> Key: KAFKA-9901
> URL: https://issues.apache.org/jira/browse/KAFKA-9901
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, system tests
>Reporter: jiamei xie
>Assignee: jiamei xie
>Priority: Major
>
> When running  _DUCKTAPE_OPTIONS="--debug" 
> TC_PATHS="tests/kafkatest/tests/streams/streams_broker_bounce_test.py::StreamsBrokerBounceTest.test_all_brokers_bounce"
>  bash tests/docker/run_tests.sh | tee debug_logs.txt
> It failed because of below error.
> TimeoutError: Never saw message indicating StreamsTest finished startup on 
> ducker@ducker07
> https://github.com/apache/kafka/pull/8443 updated the constructor of 
> StreamsSmokeTestJobRunnerService, but the corresponding call in 
> streams_broker_bounce_test was not updated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk14 #21

2020-04-25 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Partition is under reassignment when adding and removing (#8364)


--
[...truncated 6.06 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldCloseProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldCloseProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldEnqueueLaterOutputsAfterEarlierOnes[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldEnqueueLaterOutputsAfterEarlierOnes[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
should