[jira] [Resolved] (KAFKA-9309) Add the ability to translate Message classes to and from JSON

2020-04-18 Thread Ismael Juma (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-9309.

Resolution: Fixed

> Add the ability to translate Message classes to and from JSON
> -
>
> Key: KAFKA-9309
> URL: https://issues.apache.org/jira/browse/KAFKA-9309
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Colin McCabe
>Assignee: Colin McCabe
>Priority: Major
>
> Add the ability to translate Message classes to and from JSON.  This will be 
> useful for logging, and eventually for modifying these classes via tools.
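
(Illustrative only, not the Kafka implementation: a minimal, self-contained 
sketch of what "translating a Message class to and from JSON" could look like 
for logging purposes, using Jackson. "ExampleMessageData" is a hypothetical 
stand-in for a generated Message class.)

{code:java}
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class MessageJsonSketch {

    // Hypothetical stand-in for a generated Message class.
    public static class ExampleMessageData {
        public String name = "topic-a";
        public int partitionCount = 3;
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        ExampleMessageData msg = new ExampleMessageData();

        // To JSON, e.g. for human-readable logging of a request/response payload.
        JsonNode json = mapper.valueToTree(msg);
        System.out.println(json); // {"name":"topic-a","partitionCount":3}

        // And back from JSON, e.g. for a tool that edits the payload as text.
        ExampleMessageData roundTripped = mapper.treeToValue(json, ExampleMessageData.class);
        System.out.println(roundTripped.name + " " + roundTripped.partitionCount);
    }
}
{code}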



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9798) Flaky test: org.apache.kafka.streams.integration.QueryableStateIntegrationTest.shouldAllowConcurrentAccesses

2020-04-18 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-9798.
--
Resolution: Fixed

> Flaky test: 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.shouldAllowConcurrentAccesses
> 
>
> Key: KAFKA-9798
> URL: https://issues.apache.org/jira/browse/KAFKA-9798
> Project: Kafka
>  Issue Type: Test
>  Components: streams, unit tests
>Reporter: Bill Bejeck
>Assignee: Guozhang Wang
>Priority: Major
>  Labels: flaky-test, test
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9889) Flaky RocksDBTimestampedStoreTest.shouldVerifyThatMetricsGetMeasurementsFromRocksDB

2020-04-18 Thread Guozhang Wang (Jira)
Guozhang Wang created KAFKA-9889:


 Summary: Flaky 
RocksDBTimestampedStoreTest.shouldVerifyThatMetricsGetMeasurementsFromRocksDB
 Key: KAFKA-9889
 URL: https://issues.apache.org/jira/browse/KAFKA-9889
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Guozhang Wang


Stacktrace
org.apache.kafka.streams.errors.ProcessorStateException: Error opening store 
db-name at location rocksdb/db-name
at 
org.apache.kafka.streams.state.internals.RocksDBStore.openRocksDB(RocksDBStore.java:221)
at 
org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:195)
at 
org.apache.kafka.streams.state.internals.RocksDBStore.init(RocksDBStore.java:231)
at 
org.apache.kafka.streams.state.internals.RocksDBStoreTest.shouldVerifyThatMetricsGetMeasurementsFromRocksDB(RocksDBStoreTest.java:633)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
at 
org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:118)
at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:182)
at 
org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:164

[jira] [Resolved] (KAFKA-8993) Add an EOS performance test suite similar to ProducerPerformance

2020-04-18 Thread Boyang Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boyang Chen resolved KAFKA-8993.

Resolution: Won't Fix

> Add an EOS performance test suite similar to ProducerPerformance
> 
>
> Key: KAFKA-8993
> URL: https://issues.apache.org/jira/browse/KAFKA-8993
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8803) Stream will not start due to TimeoutException: Timeout expired after 60000milliseconds while awaiting InitProducerId

2020-04-18 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-8803.
--
Fix Version/s: 2.5.0
 Assignee: Guozhang Wang  (was: Sophie Blee-Goldman)
   Resolution: Fixed

> Stream will not start due to TimeoutException: Timeout expired after 
> 60000milliseconds while awaiting InitProducerId
> 
>
> Key: KAFKA-8803
> URL: https://issues.apache.org/jira/browse/KAFKA-8803
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Raman Gupta
>Assignee: Guozhang Wang
>Priority: Major
> Fix For: 2.5.0
>
> Attachments: logs-20200311.txt.gz, logs-client-20200311.txt.gz, 
> logs.txt.gz, screenshot-1.png
>
>
> One streams app is consistently failing at startup with the following 
> exception:
> {code}
> 2019-08-14 17:02:29,568 ERROR --- [2ce1b-StreamThread-2] 
> org.apa.kaf.str.pro.int.StreamTask: task [0_36] Timeout 
> exception caught when initializing transactions for task 0_36. This might 
> happen if the broker is slow to respond, if the network connection to the 
> broker was interrupted, or if similar circumstances arise. You can increase 
> producer parameter `max.block.ms` to increase this timeout.
> org.apache.kafka.common.errors.TimeoutException: Timeout expired after 
> 60000milliseconds while awaiting InitProducerId
> {code}
> These same brokers are used by many other streams without any issue, 
> including some running in the very same process as the stream that 
> consistently throws this exception.
> *UPDATE 08/16:*
> The very first instance of this error was at August 13th 2019, 17:03:36.754, 
> and it happened for 4 different streams. For 3 of these streams, the error 
> only happened once, and then the stream recovered. For the 4th stream, the 
> error has kept happening and is still happening now.
> I looked up the broker logs for this time, and saw that at August 13th 2019, 
> 16:47:43, two of four brokers started reporting messages like this, for 
> multiple partitions:
> [2019-08-13 20:47:43,658] INFO [ReplicaFetcher replicaId=3, leaderId=1, 
> fetcherId=0] Retrying leaderEpoch request for partition xxx-1 as the leader 
> reported an error: UNKNOWN_LEADER_EPOCH (kafka.server.ReplicaFetcherThread)
> The UNKNOWN_LEADER_EPOCH messages continued for some time and then stopped; 
> here is a view of the count of these messages over time:
>  !screenshot-1.png! 
> However, as noted, the stream task timeout error continues to happen.
> I use the static consumer group protocol with Kafka 2.3.0 clients and 2.3.0 
> broker. The broker has a patch for KAFKA-8773.
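
(Side note on the `max.block.ms` suggestion in the error message above: a 
minimal sketch, assuming a Kafka Streams application, of raising the internal 
producer's max.block.ms via the producer config prefix. The application id and 
bootstrap servers are hypothetical, and this only widens the client-side 
timeout; the root cause discussed in this ticket was broker-side.)

{code:java}
import java.util.Properties;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class RaiseMaxBlockMs {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "example-app");     // hypothetical
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");  // hypothetical
        // Streams forwards "producer."-prefixed configs to its internal producers,
        // so this raises max.block.ms from the 60000 ms default to 120000 ms.
        props.put(StreamsConfig.producerPrefix(ProducerConfig.MAX_BLOCK_MS_CONFIG), "120000");
        // ... pass props to new KafkaStreams(topology, props) as usual.
    }
}
{code}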



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8894) Flaky org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromFileAfterResetWithoutIntermediateUserTopic

2020-04-18 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-8894.
--
Fix Version/s: 2.5.0
   Resolution: Fixed

https://github.com/apache/kafka/pull/8208

> Flaky 
> org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromFileAfterResetWithoutIntermediateUserTopic
> --
>
> Key: KAFKA-8894
> URL: https://issues.apache.org/jira/browse/KAFKA-8894
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Stanislav Kozlovski
>Priority: Minor
> Fix For: 2.5.0
>
>
> {code:java}
> java.lang.AssertionError: Condition not met within timeout 3. Topics are 
> not expected after 3 milli seconds.
>   at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:376)
>   at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:353)
>   at 
> org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster.waitForRemainingTopics(EmbeddedKafkaCluster.java:298)
>   at 
> org.apache.kafka.streams.integration.AbstractResetIntegrationTest.assertInternalTopicsGotDeleted(AbstractResetIntegrationTest.java:589)
>   at 
> org.apache.kafka.streams.integration.AbstractResetIntegrationTest.testReprocessingFromFileAfterResetWithoutIntermediateUserTopic(AbstractResetIntegrationTest.java:399)
>   at 
> org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromFileAfterResetWithoutIntermediateUserTopic(ResetIntegrationTest.java:82)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   {code}
> [https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/24733/testReport/junit/org.apache.kafka.streams.integration/ResetIntegrationTest/testReprocessingFromFileAfterResetWithoutIntermediateUserTopic/]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8893) Flaky ResetIntegrationTest. testReprocessingFromScratchAfterResetWithIntermediateUserTopic

2020-04-18 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-8893.
--
Fix Version/s: 2.5.0
   Resolution: Fixed

https://github.com/apache/kafka/pull/8208

> Flaky ResetIntegrationTest. 
> testReprocessingFromScratchAfterResetWithIntermediateUserTopic
> --
>
> Key: KAFKA-8893
> URL: https://issues.apache.org/jira/browse/KAFKA-8893
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Stanislav Kozlovski
>Priority: Major
> Fix For: 2.5.0
>
>
> {code:java}
> java.lang.AssertionError: Condition not met within timeout 6. Did not 
> receive all 10 records from topic outputTopic
>   at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:376)
>   at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:353)
>   at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(IntegrationTestUtils.java:506)
>   {code}
> [https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/24733/testReport/junit/org.apache.kafka.streams.integration/ResetIntegrationTest/testReprocessingFromScratchAfterResetWithIntermediateUserTopic/]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8895) Flaky org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromDateTimeAfterResetWithoutIntermediateUserTopic

2020-04-18 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-8895.
--
Resolution: Fixed

https://github.com/apache/kafka/pull/8208

> Flaky 
> org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromDateTimeAfterResetWithoutIntermediateUserTopic
> --
>
> Key: KAFKA-8895
> URL: https://issues.apache.org/jira/browse/KAFKA-8895
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Stanislav Kozlovski
>Priority: Minor
>
> {code:java}
> java.lang.AssertionError: Condition not met within timeout 3. Topics are 
> not expected after 3 milli seconds.
>   at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:376)
>   at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:353)
>   at 
> org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster.waitForRemainingTopics(EmbeddedKafkaCluster.java:298)
>   at 
> org.apache.kafka.streams.integration.AbstractResetIntegrationTest.assertInternalTopicsGotDeleted(AbstractResetIntegrationTest.java:589)
>  {code}
> [https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/24733/testReport/junit/org.apache.kafka.streams.integration/ResetIntegrationTest/testReprocessingFromDateTimeAfterResetWithoutIntermediateUserTopic/]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-590: Redirect Zookeeper Mutation Protocols to The Controller

2020-04-18 Thread Colin McCabe
On Fri, Apr 17, 2020, at 13:11, Ismael Juma wrote:
> Hi Colin,
> 
> The read/modify/write is protected by the zk version, right?
> 
> Ismael

No, we don't use the ZK version when doing the write to the config znodes.  We 
do for ACLs, I think.

This is something that we could fix just by using the ZK version, but there are 
other race conditions as well, such as deleting a topic while this config is 
being set.  A single writer is a lot easier to reason about.

best,
Colin
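
(To make the race concrete: below is a minimal, self-contained sketch, not 
Kafka code, of the read-modify-write "lost update" described further down this 
thread, and of how a version-checked write, analogous to ZooKeeper's 
setData(path, data, expectedVersion), rejects the stale writer instead of 
silently overwriting. The config values are made up.)

{code:java}
import java.util.concurrent.atomic.AtomicReference;

public class LostUpdateSketch {

    // A "znode" modeled as an immutable (version, value) pair.
    static final class Versioned {
        final int version;
        final String value;
        Versioned(int version, String value) { this.version = version; this.value = value; }
    }

    static final AtomicReference<Versioned> znode =
            new AtomicReference<>(new Versioned(0, "retention.ms=100"));

    // Unconditional write: last writer wins, earlier concurrent changes are silently lost.
    static void blindWrite(String newValue) {
        Versioned cur = znode.get();
        znode.set(new Versioned(cur.version + 1, newValue));
    }

    // Version-checked write: fails if anyone wrote since our read, so the caller
    // has to re-read and retry (analogous to KeeperException.BadVersionException).
    static boolean checkedWrite(int expectedVersion, String newValue) {
        Versioned cur = znode.get();
        if (cur.version != expectedVersion) {
            return false;
        }
        return znode.compareAndSet(cur, new Versioned(cur.version + 1, newValue));
    }

    public static void main(String[] args) {
        Versioned readByNode1 = znode.get();
        Versioned readByNode2 = znode.get();

        blindWrite(readByNode1.value + ",cleanup.policy=compact"); // node 1 writes its change
        blindWrite(readByNode2.value + ",segment.ms=600000");      // node 2 clobbers node 1's change
        System.out.println("after blind writes: " + znode.get().value); // node 1's edit is gone

        // With a version check, node 2's stale write is rejected instead of overwriting node 1.
        boolean accepted = checkedWrite(readByNode2.version, readByNode2.value + ",segment.ms=600000");
        System.out.println("stale version-checked write accepted? " + accepted); // false
    }
}
{code}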


> 
> On Fri, Apr 17, 2020 at 12:53 PM Colin McCabe  wrote:
> 
> > On Thu, Apr 16, 2020, at 08:51, Ismael Juma wrote:
> > > I don't think these requests are necessarily infrequent in multi-tenant
> > > environments though. I've seen Controller availability being an issue for
> > > describe topics, for example (before it was changed to go to any broker).
> >
> > Hi Ismael,
> >
> > I don't think DescribeTopics is a good comparison.  That RPC is available
> > to regular users and is used many orders of magnitude more frequently than
> > administrative operations like changing ACLs or setting quotas.
> >
> > The operations we're talking about redirecting here all require the
> > highest possible permissions and will not be frequent in any real-world
> > cluster... unless someone is running a stress-test or a benchmark.  We
> > didn't even notice some of the serious bugs in setting dynamic configs
> > until recently because the alterConfigs / incrementalAlterConfigs RPCs are
> > so infrequently called.
> >
> > Additionally, this KIP fixes some existing bugs.  The current approach of
> > having random writers do a read-modify-write cycle on a configuration znode
> > is buggy since it could be interleaved with another node's read-modify-write
> > cycle.  It has a "lost updates" problem.
> >
> > For example, node 1 reads a config znode.  Node 2 reads the same config
> > znode.  Node 1 writes back a modified version of the znode.  Node 2 writes
> > back its (differently) modified version, overwriting the changes from node
> > 1.
> >
> > I don't think anyone ever noticed this problem since, again, these
> > operations are very infrequent, making the chance of such a collision low.
> > But it is a serious bug that is fixed by having a single writer.  (We
> > should add this to the KIP...)
> >
> > >
> > > Would it be better to redirect once the controller quorum is there?
> >
> > This KIP is needed for the bridge release.  The bridge release upgrade
> > process relies on the old nodes sending their administrative operations to
> > the controller quorum, not directly to zookeeper.
> >
> > best,
> > Colin
> >
> >
> > >
> > > Note that this is different from things like AlterIsr since these calls
> > > are coming from clients versus other brokers.
> > >
> > > Ismael
> > >
> > > On Wed, Apr 15, 2020, 5:10 PM Colin McCabe  wrote:
> > >
> > > > Hi Ismael,
> > > >
> > > > I agree that sending these requests through the controller will not
> > > > work during the periods when there is no controller.  However, those
> > > > periods should be short-- otherwise we have bigger problems in the cluster.
> > > >
> > > > These requests are very infrequent because they are administrative
> > > > operations.  Basically the affected operations are changing ACLs,
> > > > changing dynamic configurations, and changing quotas.
> > > >
> > > > best,
> > > > Colin
> > > >
> > > >
> > > > On Wed, Apr 15, 2020, at 15:25, Ismael Juma wrote:
> > > > > Hi Boyang,
> > > > >
> > > > > Thanks for the KIP. Have we considered that this reduces availability
> > > > > for these operations since we have a single Controller instead of the
> > > > > ZK quorum?
> > > > >
> > > > > Ismael
> > > > >
> > > > > On Fri, Apr 3, 2020 at 4:45 PM Boyang Chen <reluctanthero...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hey all,
> > > > > >
> > > > > > I would like to start off the discussion for KIP-590, a follow-up
> > > > > > initiative after KIP-500:
> > > > > >
> > > > > >
> > > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-590%3A+Redirect+Zookeeper+Mutation+Protocols+to+The+Controller
> > > > > >
> > > > > > This KIP proposes to migrate existing Zookeeper mutation paths,
> > > > > > including configuration, security and quota changes, to controller-only
> > > > > > by always routing these alterations to the controller.
> > > > > >
> > > > > > Let me know your thoughts!
> > > > > >
> > > > > > Best,
> > > > > > Boyang
> > > > > >
> > > > >
> > > >
> > >
> >
>


Build failed in Jenkins: kafka-trunk-jdk8 #4446

2020-04-18 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: cleanup RocksDBStore tests  (#8510)


--
[...truncated 2.14 MB...]
org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:testClasses
> Task :streams:upgrade-system-tests-0101:checkstyleTest
> Task :streams:upgrade-system-tests-0101:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:test
> Task :streams:upgrade-system-tests-0102:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0102:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0102:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:compileTestJava
> Task :streams:upgrade-system-te

Build failed in Jenkins: kafka-trunk-jdk11 #1365

2020-04-18 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: cleanup RocksDBStore tests  (#8510)


--
[...truncated 2.16 MB...]

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp PASSED

org.apache.kafka.streams.i