Jenkins build is back to normal : kafka-trunk-jdk11 #1142

2020-02-09 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-2.5-jdk8 #9

2020-02-09 Thread Apache Jenkins Server
See 


Changes:

[matthias] KAFKA-7658: Follow up to original PR (#8027)


--
[...truncated 2.06 MB...]
org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldGroupByKey STARTED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldGroupByKey PASSED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldReduceWindowed STARTED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldReduceWindowed PASSED

org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinMaterializationIntegrationTest
 > shouldEmitTombstoneWhenDeletingNonJoiningRecords[materialized=false, 
queriable=false] STARTED

org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinMaterializationIntegrationTest
 > shouldEmitTombstoneWhenDeletingNonJoiningRecords[materialized=false, 
queriable=false] PASSED

org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinMaterializationIntegrationTest
 > shouldEmitTombstoneWhenDeletingNonJoiningRecords[materialized=true, 
queriable=false] STARTED

org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinMaterializationIntegrationTest
 > shouldEmitTombstoneWhenDeletingNonJoiningRecords[materialized=true, 
queriable=false] PASSED

org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinMaterializationIntegrationTest
 > shouldEmitTombstoneWhenDeletingNonJoiningRecords[materialized=true, 
queriable=true] STARTED

org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinMaterializationIntegrationTest
 > shouldEmitTombstoneWhenDeletingNonJoiningRecords[materialized=true, 
queriable=true] PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRunWithEosEnabled STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRunWithEosEnabled PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToCommitToMultiplePartitions STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToCommitToMultiplePartitions PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldNotViolateEosIfOneTaskFails STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldNotViolateEosIfOneTaskFails PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToPerformMultipleTransactions STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToPerformMultipleTransactions PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToCommitMultiplePartitionOffsets STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToCommitMultiplePartitionOffsets PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRunWithTwoSubtopologies STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRunWithTwoSubtopologies PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldNotViolateEosIfOneTaskGetsFencedUsingIsolatedAppInstances STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldNotViolateEosIfOneTaskGetsFencedUsingIsolatedAppInstances PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldNotViolateEosIfOneTaskFailsWithState STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldNotViolateEosIfOneTaskFailsWithState PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRunWithTwoSubtopologiesAndMultiplePartitions STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRunWithTwoSubtopologiesAndMultiplePartitions PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRestartAfterClose STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRestartAfterClose PASSED

org.apache.kafka.streams.integration.SmokeTestDriverIntegrationTest > 
shouldWorkWithRebalance STARTED
ERROR: Could not install GRADLE_4_10_3_HOME
java.lang.NullPointerException
at 
hudson.plugins.toolenv.ToolEnvBuildWrapper$1.buildEnvVars(ToolEnvBuildWrapper.java:46)
at hudson.model.AbstractBuild.getEnvironment(AbstractBuild.java:873)
at hudson.plugins.git.GitSCM.getParamExpandedRepos(GitSCM.java:484)
at 
hudson.plugins.git.GitSCM.compareRemoteRevisionWithImpl(GitSCM.java:693)
at hudson.plugins.git.GitSCM.compareRemoteRevisionWith(GitSCM.java:658)
at hudson.scm.SCM.compareRemoteRevisionWith(SCM.java:400)
at hudson.scm.SCM.poll(SCM.java:417)
at hudson.model.AbstractProject._poll(AbstractProject.java:1390)
at hudson.model.AbstractProject.poll(AbstractProject.java:1293)
at hudson.triggers.SCMTrigger$Runner.runPolling(SCMTrigger.java:603)
at hudson.triggers.SCMTrigger$Runner.run(SCMTrigger.java

[jira] [Created] (KAFKA-9529) Request Header v0 inconsistent between documentation and implementation

2020-02-09 Thread Diggory James Joshua Blake (Jira)
Diggory James Joshua Blake created KAFKA-9529:
-

 Summary: Request Header v0 inconsistent between documentation and 
implementation
 Key: KAFKA-9529
 URL: https://issues.apache.org/jira/browse/KAFKA-9529
 Project: Kafka
  Issue Type: Bug
  Components: documentation
Reporter: Diggory James Joshua Blake


The [protocol documentation|https://kafka.apache.org/protocol#protocol_messages] specifies Request Header v0 like this:
{code:java}
Request Header v0 => request_api_key request_api_version correlation_id 
  request_api_key => INT16
  request_api_version => INT16
  correlation_id => INT32
{code}
The `client_id` field is only added in Request Header v1.

However, Kafka errors on any request that omits `client_id`, even if the client sets the API version to zero. The JSON message definitions also specify that `client_id` has been present since version 0:

[https://github.com/apache/kafka/blob/e24d0e22abb0fb3e4cb3974284a3dad126544584/clients/src/main/resources/common/message/RequestHeader.json#L27]
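For illustration, here is a minimal, self-contained sketch (plain `java.nio`, not Kafka's own serialization classes; the class and method names are invented for this example) of the header layout the broker actually accepts, i.e. including the NULLABLE_STRING `client_id` (INT16 length, -1 for null) that the JSON definition declares for version 0:

{code:java}
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of the header layout the broker actually parses:
// request_api_key, request_api_version, correlation_id, plus a
// NULLABLE_STRING client_id -- i.e. what the docs call the v1 layout.
public class RequestHeaderSketch {

    static ByteBuffer encodeHeader(short apiKey, short apiVersion,
                                   int correlationId, String clientId) {
        byte[] id = clientId == null ? new byte[0]
                                     : clientId.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(2 + 2 + 4 + 2 + id.length);
        buf.putShort(apiKey);          // request_api_key     => INT16
        buf.putShort(apiVersion);      // request_api_version => INT16
        buf.putInt(correlationId);     // correlation_id      => INT32
        if (clientId == null) {
            buf.putShort((short) -1);  // null client_id encoded as length -1
        } else {
            buf.putShort((short) id.length);
            buf.put(id);
        }
        buf.flip();
        return buf;
    }

    public static void main(String[] args) {
        // Illustrative api_key/api_version values; omitting the client_id
        // bytes entirely (the documented v0 layout) is what triggers the
        // parsing error described above.
        ByteBuffer header = encodeHeader((short) 18, (short) 0, 1, "test-client");
        System.out.println("header length: " + header.remaining());
    }
}
{code}

Dropping the trailing `client_id` bytes, as the documented v0 layout suggests, reproduces the mismatch reported here.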

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9530) Flaky Test kafka.admin.DescribeConsumerGroupTest.testDescribeGroupWithShortInitializationTimeout

2020-02-09 Thread Bill Bejeck (Jira)
Bill Bejeck created KAFKA-9530:
--

 Summary: Flaky Test 
kafka.admin.DescribeConsumerGroupTest.testDescribeGroupWithShortInitializationTimeout
 Key: KAFKA-9530
 URL: https://issues.apache.org/jira/browse/KAFKA-9530
 Project: Kafka
  Issue Type: Test
  Components: core
Reporter: Bill Bejeck


[https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/4570/testReport/junit/kafka.admin/DescribeConsumerGroupTest/testDescribeGroupWithShortInitializationTimeout/]

 
{noformat}
Error Message
java.lang.AssertionError: assertion failed

Stacktrace
java.lang.AssertionError: assertion failed
at scala.Predef$.assert(Predef.scala:267)
at 
kafka.admin.DescribeConsumerGroupTest.testDescribeGroupWithShortInitializationTimeout(DescribeConsumerGroupTest.scala:585)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
at 
org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at jdk.internal.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:118)
at jdk.internal.reflect.GeneratedMethodAccessor27.invoke(Unknown Source)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.remote.internal.hub.MessageHubBack

Re: [DISCUSS] 2.4.1 Bug Fix Release

2020-02-09 Thread Bill Bejeck
Hi All,

I wanted to post an update on where we are with the 2.4.1 release.

Currently, there are four blockers; you can find the details on the release
plan: https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.4.1.

Three of the four blockers have PRs in flight, and the fourth blocker
depends on the Zookeeper 3.5.7 release.
I anticipate that an RC will be possible as soon as we merge the blocker PRs
and the new Zookeeper release is out, assuming we don't discover any
additional blockers.

Thanks for your patience,

Bill

On Tue, Feb 4, 2020 at 12:41 PM Colin McCabe  wrote:

> Hi Bill,
>
> Sounds good.  +1.
>
> best,
> Colin
>
> On Mon, Feb 3, 2020, at 17:27, Bill Bejeck wrote:
> > Hi All,
> >
> > I'd like to volunteer for the release manager of the 2.4.1 bug fix
> release.
> >
> > Kafka 2.4.0 was released on December 16, 2019, and so far 19 issues have
> > been fixed since then.
> >
> > Here is a complete list:
> >
> >
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Resolved%2C%20Closed)%20AND%20fixVersion%20%3D%202.4.1
> >
> > The release plan is documented here:
> > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.4.1
> >
> > Thanks!
> >
> > Bill
> >
>


[jira] [Resolved] (KAFKA-9284) Add documentation and system tests for TLS-encrypted Zookeeper connections

2020-02-09 Thread Ron Dagostino (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ron Dagostino resolved KAFKA-9284.
--
Fix Version/s: 2.5.0
   Resolution: Duplicate

Duplicate

> Add documentation and system tests for TLS-encrypted Zookeeper connections
> --
>
> Key: KAFKA-9284
> URL: https://issues.apache.org/jira/browse/KAFKA-9284
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation, system tests
>Affects Versions: 2.4.0
>Reporter: Ron Dagostino
>Assignee: Ron Dagostino
>Priority: Minor
> Fix For: 2.5.0
>
>
> TLS connectivity to Zookeeper became available in the 3.5.x versions. Now 
> that these Zookeeper versions are included, Kafka should supply 
> documentation that distills the steps required to take advantage of TLS and 
> include system tests to validate such setups.
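For readers following this: the underlying support comes from KIP-515 (targeted at 2.5.0), and a broker-side configuration for TLS to Zookeeper would look roughly like the sketch below. Property names follow KIP-515 as I understand it; treat this as an illustrative assumption rather than the documented procedure this ticket asks for, and verify the paths and passwords against your own environment.

{code}
# Illustrative sketch only -- property names per KIP-515; verify against the
# official documentation before use.
# Switch the ZooKeeper client to the Netty socket, which supports TLS.
zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
zookeeper.ssl.client.enable=true
# Broker identity and trust for the TLS connection to ZooKeeper.
zookeeper.ssl.keystore.location=/path/to/kafka.broker.keystore.jks
zookeeper.ssl.keystore.password=<keystore-password>
zookeeper.ssl.truststore.location=/path/to/kafka.broker.truststore.jks
zookeeper.ssl.truststore.password=<truststore-password>
{code}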



--
This message was sent by Atlassian Jira
(v8.3.4#803005)