Build failed in Jenkins: kafka-trunk-jdk11 #1289

2020-03-26 Thread Apache Jenkins Server
See 


Changes:

[github] DOCS-3625: Add section to config topic: parameters controlled by Kafka


--
[...truncated 4.85 MB...]

kafka.api.PlaintextAdminIntegrationTest > 
testCreateExistingTopicsThrowTopicExistsException STARTED

kafka.api.PlaintextAdminIntegrationTest > 
testCreateExistingTopicsThrowTopicExistsException PASSED

kafka.api.PlaintextAdminIntegrationTest > testCreateDeleteTopics STARTED

kafka.api.TransactionsTest > testCommitTransactionTimeout PASSED

kafka.api.UserClientIdQuotaTest > testProducerConsumerOverrideLowerQuota STARTED

kafka.api.PlaintextAdminIntegrationTest > testCreateDeleteTopics PASSED

kafka.api.PlaintextAdminIntegrationTest > testAuthorizedOperations STARTED

kafka.api.PlaintextAdminIntegrationTest > testAuthorizedOperations PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testAcls STARTED

kafka.api.UserClientIdQuotaTest > testProducerConsumerOverrideLowerQuota PASSED

kafka.api.UserClientIdQuotaTest > testProducerConsumerOverrideUnthrottled 
STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testAcls PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials STARTED

kafka.api.UserClientIdQuotaTest > testProducerConsumerOverrideUnthrottled PASSED

kafka.api.UserClientIdQuotaTest > testThrottledProducerConsumer STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeWithPrefixedAcls STARTED

kafka.api.UserClientIdQuotaTest > testThrottledProducerConsumer PASSED

kafka.api.UserClientIdQuotaTest > testQuotaOverrideDelete STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeWithPrefixedAcls PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeTopicAutoCreateTopicCreateAcl STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeTopicAutoCreateTopicCreateAcl PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeWithWildcardAcls STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeWithWildcardAcls PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe STARTED

kafka.api.UserClientIdQuotaTest > testQuotaOverrideDelete PASSED

kafka.api.UserClientIdQuotaTest > testThrottledRequest STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign STARTED

kafka.api.UserClientIdQuotaTest > testThrottledRequest PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer STARTED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoGroupAcl STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl 
STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl 
PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe PASSED

kafka.api.SaslSslAdminIntegrationTest > testAclDescribe STARTED

kafka.api.SaslSslAdminIntegrationTest > testAclDescribe PASSED

kafka.api.SaslSslAdminIntegrationTest > 
testLegacyAclOpsNeverAffectOrReturnPrefixed STARTED

kafka.api.SaslSslAdminIntegrationTest > 
testLegacyAclOpsNeverAffectOrReturnPrefixed PASSED

kafka.api.SaslSslAdminIntegrationTest > 
testCreateTopicsResponseMetadataAndConfig STARTED

kafka.api.SaslSslAdminIntegrationTest > 
testCreateTopicsResponseMetadataAndConfig PASSED

kafka.api.SaslSslAdminIntegrationTest > testAttemptToCreateInvalidAcls STARTED

kafka.api.SaslSslAdminIntegrationTest > testAttemptToCreateInvalidAcls PASSED

kafka.api.SaslSslAdminIntegrat

Build failed in Jenkins: kafka-trunk-jdk8 #4369

2020-03-26 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Allow topics with `null` leader on MockAdminClient createTopic.


--
[...truncated 5.95 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOUR

Build failed in Jenkins: kafka-2.2-jdk8 #36

2020-03-26 Thread Apache Jenkins Server
See 


Changes:

[konstantine] MINOR: Backport kafkatest per-broker overrides and extra JVM args


--
[...truncated 2.68 MB...]

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionOnTopicToWriteToNonExistentTopic STARTED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionOnTopicToWriteToNonExistentTopic PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testMetadataWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testMetadataWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicWrite STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testCreatePartitionsWithWildCardAuth 
STARTED

kafka.api.AuthorizerIntegrationTest > testCreatePartitionsWithWildCardAuth 
PASSED

kafka.api.AuthorizerIntegrationTest > 
testUnauthorizedDeleteTopicsWithoutDescribe STARTED

kafka.api.AuthorizerIntegrationTest > 
testUnauthorizedDeleteTopicsWithoutDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testDescribeGroupApiWithGroupDescribe 
STARTED

kafka.api.AuthorizerIntegrationTest > testDescribeGroupApiWithGroupDescribe 
PASSED

kafka.api.AuthorizerIntegrationTest > testAuthorizationWithTopicNotExisting 
STARTED

kafka.api.AuthorizerIntegrationTest > testAuthorizationWithTopicNotExisting 
PASSED

kafka.api.AuthorizerIntegrationTest > testListOffsetsWithTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testListOffsetsWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > 
testListGroupApiWithAndWithoutListGroupAcls STARTED

kafka.api.AuthorizerIntegrationTest > 
testListGroupApiWithAndWithoutListGroupAcls PASSED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionMetadataRequestAutoCreate STARTED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionMetadataRequestAutoCreate PASSED

kafka.api.AuthorizerIntegrationTest > testDeleteGroupApiWithNoDeleteGroupAcl2 
STARTED

kafka.api.AuthorizerIntegrationTest > testDeleteGroupApiWithNoDeleteGroupAcl2 
PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicAndGroupRead STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicAndGroupRead PASSED

kafka.api.AuthorizerIntegrationTest > 
testPatternSubscriptionNotMatchingInternalTopic STARTED

kafka.api.AuthorizerIntegrationTest > 
testPatternSubscriptionNotMatchingInternalTopic PASSED

kafka.api.AuthorizerIntegrationTest > testFetchAllOffsetsTopicAuthorization 
STARTED

kafka.api.AuthorizerIntegrationTest > testFetchAllOffsetsTopicAuthorization 
PASSED

kafka.api.AuthorizerIntegrationTest > 
shouldSendSuccessfullyWhenIdempotentAndHasCorrectACL STARTED

kafka.api.AuthorizerIntegrationTest > 
shouldSendSuccessfullyWhenIdempotentAndHasCorrectACL PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicWrite STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > 
testOffsetsForLeaderEpochClusterPermission STARTED

kafka.api.AuthorizerIntegrationTest > 
testOffsetsForLeaderEpochClusterPermission PASSED

kafka.api.AuthorizerIntegrationTest > 
testTransactionalProducerTopicAuthorizationExceptionInSendCallback STARTED

kafka.api.AuthorizerIntegrationTest > 
testTransactionalProducerTopicAuthorizationExceptionInSendCallback PASSED

kafka.api.AuthorizerIntegrationTest > 
testTransactionalProducerInitTransactionsNoWriteTransactionalIdAcl STARTED

kafka.api.AuthorizerIntegrationTest > 
testTransactionalProducerInitTransactionsNoWriteTransactionalIdAcl PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoGroupAccess STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoAccess STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedCreatePartitions STARTED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedCreatePartitions PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithoutTopicDescribeAccess 
STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithoutTopicDescribeAccess 
PASSED

kafka.api.AuthorizerIntegrationTest > 
shouldThrowTransactionalIdAuthorizationExceptionWhenNoTransactionAccessOnEndTransaction
 STARTED

kafka.api.AuthorizerIntegrationTest > 
shouldThrowTransactionalIdAuthorizationExceptionWhenNoTransactionAccessOnEndTransacti

Build failed in Jenkins: kafka-trunk-jdk8 #4370

2020-03-26 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Add synchronization to the protocol name check (#8349)

[github] MINOR: Improve performance of checkpointHighWatermarks, patch 1/2


--
[...truncated 2.94 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DA

Build failed in Jenkins: kafka-trunk-jdk11 #1290

2020-03-26 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Allow topics with `null` leader on MockAdminClient createTopic.

[github] MINOR: Add synchronization to the protocol name check (#8349)

[github] MINOR: Improve performance of checkpointHighWatermarks, patch 1/2


--
[...truncated 2.96 MB...]
org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-S

Jenkins build is back to normal : kafka-2.1-jdk8 #258

2020-03-26 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-9373) Improve shutdown performance via lazy accessing the offset and time indices.

2020-03-26 Thread Ismael Juma (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-9373.

Fix Version/s: 2.6.0
   Resolution: Fixed

> Improve shutdown performance via lazy accessing the offset and time indices.
> 
>
> Key: KAFKA-9373
> URL: https://issues.apache.org/jira/browse/KAFKA-9373
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 2.3.0, 2.4.0, 2.3.1
>Reporter: Adem Efe Gencer
>Assignee: Adem Efe Gencer
>Priority: Major
> Fix For: 2.6.0
>
>
> KAFKA-7283 enabled lazy mmap on index files by initializing indices on demand 
> rather than performing costly disk/memory operations when creating all 
> indices on broker startup. This helped reduce the startup time of brokers. 
> However, segment indices are still created when segments are closed, 
> regardless of whether they were ever accessed.
>  
> Ideally we should:
>  * Improve shutdown performance via lazy accessing the offset and time 
> indices.
>  * Eliminate redundant disk accesses and memory mapped operations while 
> deleting or renaming files that back segment indices.
>  * Prevent illegal accesses to underlying indices of a closed segment, which 
> would lead to memory leaks due to recreation of the underlying memory mapped 
> objects.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-570: Add leader epoch in StopReplicaRequest

2020-03-26 Thread David Jacot
I shouldn't have said "always", sorry.

When a replica is deleted, either due to a topic deletion or a reassignment,
the controller transitions the replica to the `OfflineReplica` state and
then to
the `ReplicaDeletionStarted` state. The first transition issues a
StopReplicaRequest with DeletePartitions=false and the second transition
issues a StopReplicaRequest with DeletePartitions=true. The two operations
do the same thing, except that the latter also deletes the replica.

At the moment, the `ControllerRequestBatch` partitions the accumulated stop
replica requests and sends two requests.
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/controller/ControllerChannelManager.scala#L554

Having a per-partition flag allows us to combine everything into one request,
provided the requests are sent within the same batch. This won't guarantee
that all requests are combined, though, because requests are batched
optimistically in the controller, but it opens the door to improving this in
the future.
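The batching difference described above can be sketched roughly as follows. This is a toy model in Python, not Kafka's controller code, and the field names are illustrative:

```python
def batch_old(replicas):
    """Old scheme: a top-level DeletePartitions flag forces the controller
    to split one batch into (up to) two StopReplica requests."""
    to_stop = [p for p, delete in replicas if not delete]
    to_delete = [p for p, delete in replicas if delete]
    requests = []
    if to_stop:
        requests.append({"DeletePartitions": False, "Partitions": to_stop})
    if to_delete:
        requests.append({"DeletePartitions": True, "Partitions": to_delete})
    return requests


def batch_new(replicas):
    """New scheme: a per-partition DeletePartition flag lets everything in
    the batch travel in a single combined request."""
    return [{"Partitions": [{"Partition": p, "DeletePartition": d}
                            for p, d in replicas]}]


replicas = [("topic-0", False), ("topic-1", True), ("topic-2", False)]
print(len(batch_old(replicas)))  # → 2
print(len(batch_new(replicas)))  # → 1
```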

Best,
David



On Wed, Mar 25, 2020 at 6:12 PM Ismael Juma  wrote:

> Is it really true that the controller always sends two requests? Aren't the
> operations different (stop replica with delete versus stop replica
> without)?
>
> On Wed, Mar 25, 2020, 9:59 AM David Jacot  wrote:
>
> > Hi all,
> >
> > I'd like to inform you that I have slightly changed the schema which was
> > proposed
> > in the KIP. During the implementation, I have realized that the proposed
> > schema
> > did not work. The new one reorganises how topics/partitions are stored.
> >
> > I'd like to amend the current KIP with the following:
> >
> > At the moment, the StopReplicaRequest has a top level field named
> > `DeletePartitions`
> > which indicates whether the partitions present in the request must be
> > deleted or not.
> > The downside of this is that the controller always ends up sending two
> > StopReplica
> > requests, one with DeletePartitions=true and one with
> > DeletePartitions=false.
> >
> > Instead, I'd like to add a per-partition DeletePartition field to combine
> > everything in
> > one request. This will reduce the number of requests sent to each broker
> > and also
> > increase the batching. I've already implemented it.
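For illustration only, a per-partition flag reorganises the request roughly like this; the field names below approximate what the KIP proposes and are not necessarily the final schema:

```json
{ "name": "StopReplicaRequest",
  "fields": [
    { "name": "ControllerId", "type": "int32" },
    { "name": "ControllerEpoch", "type": "int32" },
    { "name": "Topics", "type": "[]StopReplicaTopicState", "fields": [
      { "name": "TopicName", "type": "string" },
      { "name": "PartitionStates", "type": "[]StopReplicaPartitionState", "fields": [
        { "name": "PartitionIndex", "type": "int32" },
        { "name": "LeaderEpoch", "type": "int32" },
        { "name": "DeletePartition", "type": "bool" }
      ]}
    ]}
  ]
}
```

Moving DeletePartition next to PartitionIndex is what lets both "stop" and "stop and delete" replicas share one request.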
> >
> > I've already updated the schema in the KIP if you want to see it. I will
> > update the
> > KIP itself if you agree with the amendment.
> >
> > What do you think? Does it sound reasonable?
> >
> > Best,
> > David
> >
> > On Fri, Mar 6, 2020 at 3:37 PM David Jacot  wrote:
> >
> > > Hi all,
> > >
> > > The vote has passed with +3 binding votes (Jason Gustafson, Gwen
> Shapira,
> > > Jun Rao).
> > >
> > > Thanks to everyone!
> > >
> > > Best,
> > > David
> > >
> > > On Wed, Mar 4, 2020 at 9:02 AM David Jacot 
> wrote:
> > >
> > >> Hi Jun,
> > >>
> > >> You're right. I have noticed it while implementing it. I plan to use a
> > >> default
> > >> value as a sentinel in the protocol (e.g. -2) to cover this case.
> > >>
> > >> David
> > >>
> > >> On Wed, Mar 4, 2020 at 3:18 AM Jun Rao  wrote:
> > >>
> > >>> Hi, David,
> > >>>
> > >>> Thanks for the KIP. +1 from me too. Just one comment below.
> > >>>
> > >>> 1. Regarding the sentinel leader epoch to indicate topic deletion, it
> > >>> seems
> > >>> that we need to use a different sentinel value to indicate that the
> > >>> leader
> > >>> epoch is not present when the controller is still on the old version
> > >>> during
> > >>> upgrade.
> > >>>
> > >>> Jun
> > >>>
> > >>> On Mon, Mar 2, 2020 at 11:20 AM Gwen Shapira 
> > wrote:
> > >>>
> > >>> > +1
> > >>> >
> > >>> > On Mon, Feb 24, 2020, 2:16 AM David Jacot 
> > wrote:
> > >>> >
> > >>> > > Hi all,
> > >>> > >
> > >>> > > I would like to start a vote on KIP-570: Add leader epoch in
> > >>> > > StopReplicaRequest
> > >>> > >
> > >>> > > The KIP is here:
> > >>> > >
> > >>> > >
> > >>> >
> > >>>
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-570%3A+Add+leader+epoch+in+StopReplicaRequest
> > >>> > >
> > >>> > > Thanks,
> > >>> > > David
> > >>> > >
> > >>> >
> > >>>
> > >>
> >
>


Re: [DISCUSS] KIP-574: CLI Dynamic Configuration with file input

2020-03-26 Thread David Jacot
Rajini has made a good point. I don't feel strongly either way, but if people
are confused by this, it is probably better without it.

Best,
David

On Thu, Mar 26, 2020 at 7:23 AM Colin McCabe  wrote:

> Hi Kamal,
>
> Are you suggesting that we not support STDIN here?  I have mixed feelings.
>
> I think the ideal solution would be to support "-" in these tools whenever
> a file argument was expected.  But that would be a bigger change than what
> we're talking about here.  Maybe you are right and we should keep it simple
> for now.
>
> best,
> Colin
>
> On Wed, Mar 25, 2020, at 01:24, Kamal Chandraprakash wrote:
> > STDIN isn't standard practice in other scripts like
> > kafka-console-consumer.sh, kafka-console-producer.sh and kafka-acls.sh,
> > which accept a properties file via the consumer.config / producer.config /
> > command-config parameter.
> >
> > Shouldn't we maintain uniformity across scripts?
> >
> > On Mon, Mar 16, 2020 at 4:13 PM David Jacot  wrote:
> >
> > > Hi Aneel,
> > >
> > > Thanks for the updated KIP. I have made a second pass over it and the
> > > KIP looks good to me.
> > >
> > > Best,
> > > David
> > >
> > > On Tue, Mar 10, 2020 at 9:39 PM Aneel Nazareth 
> wrote:
> > >
> > > > After reading a bit more about it in the Kubernetes case, I think
> it's
> > > > reasonable to do this and be explicit that we're ignoring the value,
> > > > just deleting all keys that appear in the file.
> > > >
> > > > I've updated the KIP wiki page to reflect that:
> > > >
> > > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-574%3A+CLI+Dynamic+Configuration+with+file+input
> > > >
> > > > And updated my sample PR:
> > > > https://github.com/apache/kafka/pull/8184
> > > >
> > > > If there are no further comments, I'll request a vote in a few days.
> > > >
> > > > Thanks for the feedback!
> > > >
> > > > On Mon, Mar 9, 2020 at 1:24 PM Aneel Nazareth 
> > > wrote:
> > > > >
> > > > > Hi David,
> > > > >
> > > > > Is the expected behavior that the keys are deleted without
> checking the
> > > > values?
> > > > >
> > > > > Let's say I had this file new.properties:
> > > > > a=1
> > > > > b=2
> > > > >
> > > > > And ran:
> > > > >
> > > > > bin/kafka-configs --bootstrap-server localhost:9092 \
> > > > >   --entity-type brokers --entity-default \
> > > > >   --alter --add-config-file new.properties
> > > > >
> > > > > It seems clear what should happen if I run this immediately:
> > > > >
> > > > > bin/kafka-configs --bootstrap-server localhost:9092 \
> > > > >   --entity-type brokers --entity-default \
> > > > >   --alter --delete-config-file new.properties
> > > > >
> > > > > (Namely that both a and b would now have no values in the config)
> > > > >
> > > > > But what if this were run in-between:
> > > > >
> > > > > bin/kafka-configs --bootstrap-server localhost:9092 \
> > > > >   --entity-type brokers --entity-default \
> > > > >   --alter --add-config a=3
> > > > >
> > > > > Would it be surprising if the key/value pair a=3 was deleted, even
> > > > > though the config that is in the file is a=1? Or would that be
> > > > > expected?
> > > > >
> > > > > On Mon, Mar 9, 2020 at 1:02 PM David Jacot 
> > > wrote:
> > > > > >
> > > > > > Hi Colin,
> > > > > >
> > > > > > Yes, you're right. This is weird but convenient because you don't
> > > have
> > > > to
> > > > > > duplicate
> > > > > > the "keys". I was thinking about the kubernetes API which allows
> to
> > > > create
> > > > > > a Pod
> > > > > > based on a file and allows to delete it as well with the same
> file. I
> > > > have
> > > > > > always found
> > > > > > this convenient, especially when doing local tests.
> > > > > >
> > > > > > Best,
> > > > > > David
> > > > > >
> > > > > > On Mon, Mar 9, 2020 at 6:35 PM Colin McCabe 
> > > > wrote:
> > > > > >
> > > > > > > Hi Aneel,
> > > > > > >
> > > > > > > Thanks for the KIP.  I like the idea.
> > > > > > >
> > > > > > > You mention that "input from STDIN can be used instead of a
> file on
> > > > > > > disk."  The example given in the KIP seems to suggest that the
> > > > command
> > > > > > > defaults to reading from STDIN if no argument is given to
> > > > --add-config-file.
> > > > > > >
> > > > > > > I would argue against this particular command-line pattern.
> From
> > > the
> > > > > > > user's point of view, if they mess up and forget to supply an
> > > > argument, or
> > > > > > > for some reason the parser doesn't treat something like an
> > > argument,
> > > > the
> > > > > > > program will appear to hang in a confusing way.
> > > > > > >
> > > > > > > Instead, it would be better to follow the traditional UNIX
> pattern
> > > > where a
> > > > > > > dash indicates that STDIN should be read.  So
> "--add-config-file -"
> > > > would
> > > > > > > indicate that the program should read form STDIN.  This would
> be
> > > > difficult
> > > > > > > to trigger accidentally, and more in line with the traditional
> > > > conventions.
> > > > > > >
> > > > > > > On Mon, 

Re: [DISCUSS] KIP-574: CLI Dynamic Configuration with file input

2020-03-26 Thread Kamal Chandraprakash
Hi Colin,

We should not support STDIN, to maintain uniformity across scripts. If the
user wants to pass the arguments on the command line,
they can always use the existing --add-config option.




On Thu, Mar 26, 2020 at 7:20 PM David Jacot  wrote:

> Rajini has made a good point. I don't feel strong for either ways but if
> people
> are confused by this, it is probably better without it.
>
> Best,
> David
>
> On Thu, Mar 26, 2020 at 7:23 AM Colin McCabe  wrote:
>
> > Hi Kamal,
> >
> > Are you suggesting that we not support STDIN here?  I have mixed
> feelings.
> >
> > I think the ideal solution would be to support "-" in these tools
> whenever
> > a file argument was expected.  But that would be a bigger change than
> what
> > we're talking about here.  Maybe you are right and we should keep it
> simple
> > for now.
> >
> > best,
> > Colin
> >
> > On Wed, Mar 25, 2020, at 01:24, Kamal Chandraprakash wrote:
> > > STDIN wasn't standard practice in other scripts like
> > > kafka-console-consumer.sh, kafka-console-producer.sh and kafka-acls.sh
> > > in which the props file is accepted via consumer.config /
> > producer.config /
> > > command-config parameter.
> > >
> > > Shouldn't we have to maintain the uniformity across scripts?
> > >
> > > On Mon, Mar 16, 2020 at 4:13 PM David Jacot 
> wrote:
> > >
> > > > Hi Aneel,
> > > >
> > > > Thanks for the updated KIP. I have made a second pass over it and the
> > > > KIP looks good to me.
> > > >
> > > > Best,
> > > > David
> > > >
> > > > On Tue, Mar 10, 2020 at 9:39 PM Aneel Nazareth 
> > wrote:
> > > >
> > > > > After reading a bit more about it in the Kubernetes case, I think
> > it's
> > > > > reasonable to do this and be explicit that we're ignoring the
> value,
> > > > > just deleting all keys that appear in the file.
> > > > >
> > > > > I've updated the KIP wiki page to reflect that:
> > > > >
> > > > >
> > > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-574%3A+CLI+Dynamic+Configuration+with+file+input
> > > > >
> > > > > And updated my sample PR:
> > > > > https://github.com/apache/kafka/pull/8184
> > > > >
> > > > > If there are no further comments, I'll request a vote in a few
> days.
> > > > >
> > > > > Thanks for the feedback!
> > > > >
> > > > > On Mon, Mar 9, 2020 at 1:24 PM Aneel Nazareth 
> > > > wrote:
> > > > > >
> > > > > > Hi David,
> > > > > >
> > > > > > Is the expected behavior that the keys are deleted without
> > checking the
> > > > > values?
> > > > > >
> > > > > > Let's say I had this file new.properties:
> > > > > > a=1
> > > > > > b=2
> > > > > >
> > > > > > And ran:
> > > > > >
> > > > > > bin/kafka-configs --bootstrap-server localhost:9092 \
> > > > > >   --entity-type brokers --entity-default \
> > > > > >   --alter --add-config-file new.properties
> > > > > >
> > > > > > It seems clear what should happen if I run this immediately:
> > > > > >
> > > > > > bin/kafka-configs --bootstrap-server localhost:9092 \
> > > > > >   --entity-type brokers --entity-default \
> > > > > >   --alter --delete-config-file new.properties
> > > > > >
> > > > > > (Namely that both a and b would now have no values in the config)
> > > > > >
> > > > > > But what if this were run in-between:
> > > > > >
> > > > > > bin/kafka-configs --bootstrap-server localhost:9092 \
> > > > > >   --entity-type brokers --entity-default \
> > > > > >   --alter --add-config a=3
> > > > > >
> > > > > > Would it be surprising if the key/value pair a=3 was deleted,
> even
> > > > > > though the config that is in the file is a=1? Or would that be
> > > > > > expected?
> > > > > >
> > > > > > On Mon, Mar 9, 2020 at 1:02 PM David Jacot 
> > > > wrote:
> > > > > > >
> > > > > > > Hi Colin,
> > > > > > >
> > > > > > > Yes, you're right. This is weird but convenient because you
> don't
> > > > have
> > > > > to
> > > > > > > duplicate
> > > > > > > the "keys". I was thinking about the kubernetes API which
> allows
> > to
> > > > > create
> > > > > > > a Pod
> > > > > > > based on a file and allows to delete it as well with the same
> > file. I
> > > > > have
> > > > > > > always found
> > > > > > > this convenient, especially when doing local tests.
> > > > > > >
> > > > > > > Best,
> > > > > > > David
> > > > > > >
> > > > > > > On Mon, Mar 9, 2020 at 6:35 PM Colin McCabe <
> cmcc...@apache.org>
> > > > > wrote:
> > > > > > >
> > > > > > > > Hi Aneel,
> > > > > > > >
> > > > > > > > Thanks for the KIP.  I like the idea.
> > > > > > > >
> > > > > > > > You mention that "input from STDIN can be used instead of a
> > file on
> > > > > > > > disk."  The example given in the KIP seems to suggest that
> the
> > > > > command
> > > > > > > > defaults to reading from STDIN if no argument is given to
> > > > > --add-config-file.
> > > > > > > >
> > > > > > > > I would argue against this particular command-line pattern.
> > From
> > > > the
> > > > > > > > user's point of view, if they mess up and forget to supply an
> > > > > argument, or
> > > > > > > > for some reason the par

Build failed in Jenkins: kafka-trunk-jdk11 #1291

2020-03-26 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9373: Reduce shutdown time by avoiding unnecessary loading of

[github] MINOR: Fix a number of warnings in mirror/mirror-client (#8074)


--
[...truncated 2.96 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PA

Build failed in Jenkins: kafka-trunk-jdk8 #4371

2020-03-26 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9373: Reduce shutdown time by avoiding unnecessary loading of

[github] MINOR: Fix a number of warnings in mirror/mirror-client (#8074)


--
[...truncated 2.94 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDrive

Re: [DISCUSS] KIP-574: CLI Dynamic Configuration with file input

2020-03-26 Thread Aneel Nazareth
Hi Kamal,

Thanks for taking a look at this KIP.

Unfortunately the user actually can't pass the arguments on the
command line using the existing --add-config option if the values are
complex structures that contain commas. --add-config assumes that
commas separate distinct configuration properties. There's a
workaround using square brackets ("[a,b,c]") for simple lists, but it
doesn't work for things like nested lists or JSON values.
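A toy splitter makes the parsing issue concrete. This is illustrative only — it is not kafka-configs' actual parser, and the function name is invented — but it shows why a plain comma split tears list values apart while the square-bracket workaround keeps them together:

```python
def split_configs(s):
    """Toy splitter mimicking the bracket workaround described above:
    commas inside [...] are kept together; only top-level commas
    separate entries. Illustrative only -- not kafka-configs' code."""
    entries, depth, cur = [], 0, []
    for ch in s:
        if ch == '[':
            depth += 1
        elif ch == ']':
            depth -= 1
        if ch == ',' and depth == 0:
            entries.append(''.join(cur))
            cur = []
        else:
            cur.append(ch)
    if cur:
        entries.append(''.join(cur))
    return entries

# Naive comma splitting tears a list value apart...
assert "cleanup.policy=compact,delete".split(',') == \
    ['cleanup.policy=compact', 'delete']
# ...while bracket-aware splitting keeps it as one entry:
assert split_configs("retention.ms=100,cleanup.policy=[compact,delete]") == \
    ['retention.ms=100', 'cleanup.policy=[compact,delete]']
```

Even this workaround breaks down once values themselves contain brackets or JSON, which is the gap the file-based option fills.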

The motivation for allowing STDIN as well as files is to enable
grep/pipe workflows in scripts without creating a temporary file. I
don't know if such workflows will end up being common, and hopefully
someone with a complex enough use case to require it would also be
familiar with techniques for securely creating and cleaning up
temporary files.

I'm okay with excluding the option to allow STDIN in the name of
consistency, if the consensus thinks that's wise. Anyone else have
opinions on this?

On Thu, Mar 26, 2020 at 9:02 AM Kamal Chandraprakash
 wrote:
>
> Hi Colin,
>
> We should not support STDIN to maintain uniformity across scripts. If the
> user wants to pass the arguments in command line,
> they can always use the existing --add-config option.
>
>
>
>
> On Thu, Mar 26, 2020 at 7:20 PM David Jacot  wrote:
>
> > Rajini has made a good point. I don't feel strong for either ways but if
> > people
> > are confused by this, it is probably better without it.
> >
> > Best,
> > David
> >
> > On Thu, Mar 26, 2020 at 7:23 AM Colin McCabe  wrote:
> >
> > > Hi Kamal,
> > >
> > > Are you suggesting that we not support STDIN here?  I have mixed
> > feelings.
> > >
> > > I think the ideal solution would be to support "-" in these tools
> > whenever
> > > a file argument was expected.  But that would be a bigger change than
> > what
> > > we're talking about here.  Maybe you are right and we should keep it
> > simple
> > > for now.
> > >
> > > best,
> > > Colin
> > >
> > > On Wed, Mar 25, 2020, at 01:24, Kamal Chandraprakash wrote:
> > > > STDIN wasn't standard practice in other scripts like
> > > > kafka-console-consumer.sh, kafka-console-producer.sh and kafka-acls.sh
> > > > in which the props file is accepted via consumer.config /
> > > producer.config /
> > > > command-config parameter.
> > > >
> > > > Shouldn't we have to maintain the uniformity across scripts?
> > > >
> > > > On Mon, Mar 16, 2020 at 4:13 PM David Jacot 
> > wrote:
> > > >
> > > > > Hi Aneel,
> > > > >
> > > > > Thanks for the updated KIP. I have made a second pass over it and the
> > > > > KIP looks good to me.
> > > > >
> > > > > Best,
> > > > > David
> > > > >
> > > > > On Tue, Mar 10, 2020 at 9:39 PM Aneel Nazareth 
> > > wrote:
> > > > >
> > > > > > After reading a bit more about it in the Kubernetes case, I think
> > > it's
> > > > > > reasonable to do this and be explicit that we're ignoring the
> > value,
> > > > > > just deleting all keys that appear in the file.
> > > > > >
> > > > > > I've updated the KIP wiki page to reflect that:
> > > > > >
> > > > > >
> > > > >
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-574%3A+CLI+Dynamic+Configuration+with+file+input
> > > > > >
> > > > > > And updated my sample PR:
> > > > > > https://github.com/apache/kafka/pull/8184
> > > > > >
> > > > > > If there are no further comments, I'll request a vote in a few
> > days.
> > > > > >
> > > > > > Thanks for the feedback!
> > > > > >
> > > > > > On Mon, Mar 9, 2020 at 1:24 PM Aneel Nazareth 
> > > > > wrote:
> > > > > > >
> > > > > > > Hi David,
> > > > > > >
> > > > > > > Is the expected behavior that the keys are deleted without
> > > checking the
> > > > > > values?
> > > > > > >
> > > > > > > Let's say I had this file new.properties:
> > > > > > > a=1
> > > > > > > b=2
> > > > > > >
> > > > > > > And ran:
> > > > > > >
> > > > > > > bin/kafka-configs --bootstrap-server localhost:9092 \
> > > > > > >   --entity-type brokers --entity-default \
> > > > > > >   --alter --add-config-file new.properties
> > > > > > >
> > > > > > > It seems clear what should happen if I run this immediately:
> > > > > > >
> > > > > > > bin/kafka-configs --bootstrap-server localhost:9092 \
> > > > > > >   --entity-type brokers --entity-default \
> > > > > > >   --alter --delete-config-file new.properties
> > > > > > >
> > > > > > > (Namely that both a and b would now have no values in the config)
> > > > > > >
> > > > > > > But what if this were run in-between:
> > > > > > >
> > > > > > > bin/kafka-configs --bootstrap-server localhost:9092 \
> > > > > > >   --entity-type brokers --entity-default \
> > > > > > >   --alter --add-config a=3
> > > > > > >
> > > > > > > Would it be surprising if the key/value pair a=3 was deleted,
> > even
> > > > > > > though the config that is in the file is a=1? Or would that be
> > > > > > > expected?
> > > > > > >
> > > > > > > On Mon, Mar 9, 2020 at 1:02 PM David Jacot 
> > > > > wrote:
> > > > > > > >
> > > > > > > > Hi Colin,
> > > > > > > >
> > > > > > > > Yes, you're right. This is wei

Re: [DISCUSS] KIP-574: CLI Dynamic Configuration with file input

2020-03-26 Thread Aneel Nazareth
Hi Rajini,

Thanks for taking a look at this. It seems like the consensus is to
remove the --delete-config-file option, so I'll do so.
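For readers skimming the thread, the disputed semantics can be pinned down in a few lines. This sketch (a hypothetical helper, not Kafka code) shows what --delete-config-file would have done before it was dropped: every key named in the file is removed, and the file's values are ignored entirely.

```python
def delete_config_file(current, props_file_text):
    """Sketch of the (now-dropped) --delete-config-file semantics:
    keys named in the properties file are removed from the config,
    regardless of the values currently set. Hypothetical helper,
    not Kafka code."""
    keys = [line.split('=', 1)[0].strip()
            for line in props_file_text.splitlines()
            if line.strip() and not line.lstrip().startswith('#')]
    return {k: v for k, v in current.items() if k not in keys}

# David's scenario: new.properties said a=1, but a was later changed
# to 3; the delete still removes a because only the key is consulted.
cfg = {'a': '3', 'b': '2', 'c': 'keep'}
assert delete_config_file(cfg, "a=1\nb=2") == {'c': 'keep'}
```

That value-ignoring behavior is exactly the remove(key, value) mismatch Rajini objected to.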

On Wed, Mar 25, 2020 at 5:48 AM Rajini Sivaram  wrote:
>
> Hi Aneel,
>
> Thanks for the KIP. As configurations get more complex, the ability to
> provide compound configs in a file is really useful. I am not convinced
> about the `--delete-config-file` option though. I am not familiar with the
> Kubernetes case, but I guess if you create an entity with a file, it is
> reasonable to delete the entity with the same config file, even though some
> configs of the entity may have changed. But like their non-file
> counterparts, `--add-config-file` and `--delete-config-file` aren't
> creating or deleting anything, they are both updating configs of an entity.
> To be precise, they are updating config overrides, one adds an element to a
> hierarchy and the other removes an element from a hierarchy. The current
> proposal allows you to update configs of entityA using fileA and then
> delete configs of entityB using fileA, with totally unintended consequences
> since config values are not validated. Since configs are like maps, it is
> confusing if delete with value doesn't have the semantics of remove(key,
> value). And the option to specify both `--delete-config` and `
> --delete-config-file` just makes this option inconsistent. Do we really
> need a `--delete-config-file` option?
>
> Regards,
>
> Rajini
>
> On Wed, Mar 25, 2020 at 8:25 AM Kamal Chandraprakash <
> kamal.chandraprak...@gmail.com> wrote:
>
> > STDIN wasn't standard practice in other scripts like
> > kafka-console-consumer.sh, kafka-console-producer.sh and kafka-acls.sh
> > in which the props file is accepted via consumer.config / producer.config /
> > command-config parameter.
> >
> > Shouldn't we have to maintain the uniformity across scripts?
> >
> > On Mon, Mar 16, 2020 at 4:13 PM David Jacot  wrote:
> >
> > > Hi Aneel,
> > >
> > > Thanks for the updated KIP. I have made a second pass over it and the
> > > KIP looks good to me.
> > >
> > > Best,
> > > David
> > >
> > > On Tue, Mar 10, 2020 at 9:39 PM Aneel Nazareth 
> > wrote:
> > >
> > > > After reading a bit more about it in the Kubernetes case, I think it's
> > > > reasonable to do this and be explicit that we're ignoring the value,
> > > > just deleting all keys that appear in the file.
> > > >
> > > > I've updated the KIP wiki page to reflect that:
> > > >
> > > >
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-574%3A+CLI+Dynamic+Configuration+with+file+input
> > > >
> > > > And updated my sample PR:
> > > > https://github.com/apache/kafka/pull/8184
> > > >
> > > > If there are no further comments, I'll request a vote in a few days.
> > > >
> > > > Thanks for the feedback!
> > > >
> > > > On Mon, Mar 9, 2020 at 1:24 PM Aneel Nazareth 
> > > wrote:
> > > > >
> > > > > Hi David,
> > > > >
> > > > > Is the expected behavior that the keys are deleted without checking
> > the
> > > > values?
> > > > >
> > > > > Let's say I had this file new.properties:
> > > > > a=1
> > > > > b=2
> > > > >
> > > > > And ran:
> > > > >
> > > > > bin/kafka-configs --bootstrap-server localhost:9092 \
> > > > >   --entity-type brokers --entity-default \
> > > > >   --alter --add-config-file new.properties
> > > > >
> > > > > It seems clear what should happen if I run this immediately:
> > > > >
> > > > > bin/kafka-configs --bootstrap-server localhost:9092 \
> > > > >   --entity-type brokers --entity-default \
> > > > >   --alter --delete-config-file new.properties
> > > > >
> > > > > (Namely that both a and b would now have no values in the config)
> > > > >
> > > > > But what if this were run in-between:
> > > > >
> > > > > bin/kafka-configs --bootstrap-server localhost:9092 \
> > > > >   --entity-type brokers --entity-default \
> > > > >   --alter --add-config a=3
> > > > >
> > > > > Would it be surprising if the key/value pair a=3 was deleted, even
> > > > > though the config that is in the file is a=1? Or would that be
> > > > > expected?
> > > > >
> > > > > On Mon, Mar 9, 2020 at 1:02 PM David Jacot 
> > > wrote:
> > > > > >
> > > > > > Hi Colin,
> > > > > >
> > > > > > Yes, you're right. This is weird but convenient because you don't
> > > have
> > > > to
> > > > > > duplicate
> > > > > > the "keys". I was thinking about the kubernetes API which allows to
> > > > create
> > > > > > a Pod
> > > > > > based on a file and allows to delete it as well with the same
> > file. I
> > > > have
> > > > > > always found
> > > > > > this convenient, especially when doing local tests.
> > > > > >
> > > > > > Best,
> > > > > > David
> > > > > >
> > > > > > On Mon, Mar 9, 2020 at 6:35 PM Colin McCabe 
> > > > wrote:
> > > > > >
> > > > > > > Hi Aneel,
> > > > > > >
> > > > > > > Thanks for the KIP.  I like the idea.
> > > > > > >
> > > > > > > You mention that "input from STDIN can be used instead of a file
> > on
> > > > > > > disk."  The example given in the KIP seems to su

Re: [VOTE] KIP-519: Make SSL context/engine configuration extensible

2020-03-26 Thread Maulin Vasavada
FYI - we have also updated the KIP documentation with appropriate code
samples for the interfaces and a few important changes.

Thanks
Maulin

On Wed, Mar 25, 2020 at 10:21 AM Maulin Vasavada 
wrote:

> bump
>
> On Wed, Mar 25, 2020 at 10:20 AM Maulin Vasavada <
> maulin.vasav...@gmail.com> wrote:
>
>> Hi all
>>
>> After a long-awaited conclusion on the approach, we have a PR
>> https://github.com/apache/kafka/pull/8338.
>>
> Can you please provide your vote so that we can move this forward?
>>
>> Thanks
>> Maulin
>>
>> On Sun, Jan 26, 2020 at 11:03 PM Maulin Vasavada <
>> maulin.vasav...@gmail.com> wrote:
>>
>>> Hi all
>>>
>>> After a good discussion on the KIP at
>>> https://www.mail-archive.com/dev@kafka.apache.org/msg101011.html I
>>> think we are ready to start voting.
>>>
>>> KIP:
>>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=128650952
>>>
>>> The KIP proposes - Making SSLEngine creation pluggable to support
>>> customization of various security related aspects.
>>>
>>> Thanks
>>> Maulin
>>>
>>


[jira] [Resolved] (KAFKA-9760) Add EOS protocol changes to documentation

2020-03-26 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-9760.

Fix Version/s: 2.6.0
   Resolution: Fixed

> Add EOS protocol changes to documentation
> -
>
> Key: KAFKA-9760
> URL: https://issues.apache.org/jira/browse/KAFKA-9760
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
> Fix For: 2.6.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9729) Shrink inWriteLock time in SimpleAuthorizer

2020-03-26 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-9729.
--
Resolution: Fixed

> Shrink inWriteLock time in SimpleAuthorizer
> ---
>
> Key: KAFKA-9729
> URL: https://issues.apache.org/jira/browse/KAFKA-9729
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 1.1.0
>Reporter: Jiao Zhang
>Assignee: Lucas Bradstreet
>Priority: Minor
>
> The current SimpleAuthorizer needs 'inWriteLock' when processing add/remove acls 
> requests, while getAcls in authorize() needs 'inReadLock'.
>  That means handling add/remove acls requests would block all other requests, 
> for example produce and fetch requests.
>  When processing add/remove acls, updateResourceAcls() accesses zk to update 
> acls, which could take a long time in cases like a network glitch.
>  We did the simulation for zk delay.
>  When adding 100ms delay on zk side, 'inWriteLock' in addAcls()/removeAcls 
> lasts for 400ms~500ms.
>  When adding 500ms delay on zk side, 'inWriteLock' in addAcls()/removeAcls 
> lasts for 2000ms~2500ms.
> {code:java}
> override def addAcls(acls: Set[Acl], resource: Resource) {
>   if (acls != null && acls.nonEmpty) {
>     inWriteLock(lock) {
>       val startMs = Time.SYSTEM.milliseconds()
>       updateResourceAcls(resource) { currentAcls =>
>         currentAcls ++ acls
>       }
>       warn(s"inWriteLock in addAcls consumes ${Time.SYSTEM.milliseconds() - startMs} milliseconds.")
>     }
>   }
> }{code}
> Blocking produce/fetch requests for 2s would cause apparent performance 
> degradation for the whole cluster.
>  So we are considering whether it is possible to remove 'inWriteLock' from 
> addAcls/removeAcls and only take 'inWriteLock' inside updateCache, which is 
> called by addAcls/removeAcls.
> {code:java}
> // code placeholder
> private def updateCache(resource: Resource, versionedAcls: VersionedAcls) {
>   if (versionedAcls.acls.nonEmpty) {
>     aclCache.put(resource, versionedAcls)
>   } else {
>     aclCache.remove(resource)
>   }
> }
> {code}
> If we do this, the blocking time is only the time needed to update the local 
> cache, which is not influenced by network glitches. But we don't know whether 
> there were special concerns behind the current strict write lock, and we are 
> not sure whether there are side effects if the lock is only taken in 
> updateCache.
> Btw, the latest version uses 'inWriteLock' in the same places as version 1.1.0.
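The narrowing the reporter proposes — keeping the slow external update outside the lock and guarding only the in-memory cache mutation — can be sketched generically as follows (illustrative Java; the class and method names here are invented and are not Kafka's actual SimpleAuthorizer code):

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch: the slow external update (e.g. a ZooKeeper write) runs OUTSIDE the
// write lock; only the in-memory cache mutation is guarded. Readers then block
// for the duration of a map update instead of a full network round trip.
class AclCache {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final Map<String, Set<String>> cache = new ConcurrentHashMap<>();

    Set<String> getAcls(String resource) {
        lock.readLock().lock();
        try {
            return cache.getOrDefault(resource, Set.of());
        } finally {
            lock.readLock().unlock();
        }
    }

    void addAcls(String resource, Set<String> acls) {
        // Hypothetical slow external update, done WITHOUT holding the write lock.
        writeToExternalStore(resource, acls);
        lock.writeLock().lock();
        try {
            // Only the local cache mutation is inside the write lock.
            cache.merge(resource, acls, (old, add) -> {
                Set<String> merged = new HashSet<>(old);
                merged.addAll(add);
                return Set.copyOf(merged);
            });
        } finally {
            lock.writeLock().unlock();
        }
    }

    private void writeToExternalStore(String resource, Set<String> acls) {
        // placeholder for the ZooKeeper round trip
    }
}
```

Whether this ordering is safe in SimpleAuthorizer itself is exactly the open question in the ticket: with the external write outside the lock, two concurrent writers can race on the external store even though the cache stays consistent.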



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9769) ReplicaManager Partition.makeFollower Increases LeaderEpoch when ZooKeeper disconnect occurs

2020-03-26 Thread Andrew Choi (Jira)
Andrew Choi created KAFKA-9769:
--

 Summary: ReplicaManager Partition.makeFollower Increases 
LeaderEpoch when ZooKeeper disconnect occurs
 Key: KAFKA-9769
 URL: https://issues.apache.org/jira/browse/KAFKA-9769
 Project: Kafka
  Issue Type: Bug
  Components: replication
Reporter: Andrew Choi


The ZooKeeper session expired and got disconnected, and the broker received 
the 1st LeaderAndIsr request at the same time. While the broker was processing 
the 1st LeaderAndIsr request, the ZooKeeper session had not yet been 
reestablished.

Within the makeFollowers method, _partition.getOrCreateReplica_ is called 
before the fetcher begins. _partition.getOrCreateReplica_ needs to fetch 
information from ZooKeeper, but an exception is thrown when calling the 
ZooKeeper client because the session is invalid, causing the fetcher start to 
be skipped.

 

The Partition class's getOrCreateReplica method calls AdminZkClient's 
fetchEntityConfig(..), which throws an exception if the ZooKeeper session is 
invalid.

 
{code:java}
val props = adminZkClient.fetchEntityConfig(ConfigType.Topic, topic){code}
 

When this occurs, the leader epoch should not be incremented while the 
ZooKeeper session is invalid, because once the second LeaderAndIsr request 
comes in, the leader epoch could end up the same across brokers.

A few options I can think of for a fix; I think the third route could be feasible:

1 - Make LeaderEpoch update and fetch update atomic.

2 - Wait until all individual partitions have been processed successfully, then 
start the fetch.

3 - Catch the ZooKeeper exception in the caller code block 
(ReplicaManager.makeFollowers) and simply do not touch the remaining 
partitions, ensuring that the batch of partitions that succeeded up to that 
point is updated and processed (fetch).

4 - Or ensure the LeaderAndIsr request never arrives at the broker in case of 
a ZooKeeper disconnect. That would be safe because it is already possible for 
some replicas to receive the LeaderAndIsr request later than others. However, 
in that case, the code needs to make sure the controller will retry.

 
{code:java}
else if (requestLeaderEpoch > currentLeaderEpoch) {
  // If the leader epoch is valid record the epoch of the controller that made
  // the leadership decision. This is useful while updating the isr to maintain
  // the decision maker controller's epoch in the zookeeper path
  if (stateInfo.basePartitionState.replicas.contains(localBrokerId))
    partitionState.put(partition, stateInfo)
  else

def getOrCreateReplica(replicaId: Int, isNew: Boolean = false): Replica = {
  allReplicasMap.getAndMaybePut(replicaId, {
    if (isReplicaLocal(replicaId)) {
      val adminZkClient = new AdminZkClient(zkClient)
      val props = adminZkClient.fetchEntityConfig(ConfigType.Topic, topic)
{code}
 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-570: Add leader epoch in StopReplicaRequest

2020-03-26 Thread Jason Gustafson
I'm +1 for the change. I think the main advantage is that it simplifies the
batching. As with the LeaderAndIsr/UpdateMetadata requests, the request
grouping becomes less significant. I'm not sure how much of the benefit can
be realized given the need to keep compatibility, but it still seems like a
good improvement to the protocol.

-Jason

On Thu, Mar 26, 2020 at 6:25 AM David Jacot  wrote:

> I shouldn't have said "always", sorry.
>
> When a replica is deleted, either due to a topic deletion or a
> reassignment,
> the controller transitions the replica to the `OfflineReplica` state and
> then to
> the `ReplicaDeletionStarted` state. The first transition issues a
> StopReplicaRequest
> with DeletePartitions=false and the second transition issues a
> StopReplicaRequest
> with DeletePartitions=true. The two operations are actually doing the same
> except that the latter one deletes the replica.
>
> At the moment, the `ControllerRequestBatch` partitions the accumulated stop
> replica requests and sends two requests.
>
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/controller/ControllerChannelManager.scala#L554
>
> Having a per-partition flag allows us to combine everything into one
> request
> here if the requests are sent within the same batch obviously. This won't
> guarantee that all requests will be combined though because requests are
> batched optimistically in the controller but it opens the door to improve
> it in
> the future.
>
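To illustrate the batching argument above, here is a toy sketch (invented names, not Kafka's generated protocol classes) of why a top-level DeletePartitions flag forces the controller to send two requests while a per-partition flag allows one:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: models how stop-replica operations accumulated in a
// controller batch map onto outgoing requests under the two schemas.
class StopReplicaBatching {
    record Op(String topicPartition, boolean delete) {}

    // Old schema: the delete flag is request-level, so mixed operations must
    // be split into up to two requests (delete=true and delete=false).
    static List<List<Op>> oldSchemaRequests(List<Op> ops) {
        List<Op> del = new ArrayList<>();
        List<Op> keep = new ArrayList<>();
        for (Op op : ops) {
            (op.delete() ? del : keep).add(op);
        }
        List<List<Op>> requests = new ArrayList<>();
        if (!del.isEmpty()) requests.add(del);
        if (!keep.isEmpty()) requests.add(keep);
        return requests;
    }

    // New schema: the flag travels with each partition, so the whole batch
    // fits in a single request.
    static List<List<Op>> newSchemaRequests(List<Op> ops) {
        return ops.isEmpty() ? List.of() : List.of(ops);
    }
}
```

As David notes, this only helps when the operations land in the same controller batch; the sketch just shows the per-batch request-count reduction.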
> Best,
> David
>
>
> One side note: The batching of the two requests depends on how the
>
> On Wed, Mar 25, 2020 at 6:12 PM Ismael Juma  wrote:
>
> > Is it really true that the controller always sends two requests? Aren't
> the
> > operations different (stop replica with delete versus stop replica
> > without)?
> >
> > On Wed, Mar 25, 2020, 9:59 AM David Jacot  wrote:
> >
> > > Hi all,
> > >
> > > I'd like to inform you that I have slightly changed the schema which
> was
> > > proposed
> > > in the KIP. During the implementation, I have realized that the
> proposed
> > > schema
> > > did not work. The new one reorganises how topics/partitions are stored.
> > >
> > > I'd like to amend the current KIP with the following:
> > >
> > > At the moment, the StopReplicaRequest has a top level field named
> > > `DeletePartitions`
> > > which indicates whether the partitions present in the request must be
> > > deleted or not.
> > > The downside of this is that the controller always ends up sending two
> > > StopReplica
> > > requests, one with DeletePartitions=true and one with
> > > DeletePartitions=false.
> > >
> > > Instead, I'd like to add a per-partition DeletePartition field to
> combine
> > > everything in
> > > one request. This will reduce the number of requests sent to each
> broker
> > > and also
> > > increase the batching. I've already implemented it.
> > >
> > > I've already updated the schema in the KIP if you want to see it. I
> will
> > > update the
> > > KIP itself if you agree with the amendment.
> > >
> > > What do you think? Does it sound reasonable?
> > >
> > > Best,
> > > David
> > >
> > > On Fri, Mar 6, 2020 at 3:37 PM David Jacot 
> wrote:
> > >
> > > > Hi all,
> > > >
> > > > The vote has passed with +3 binding votes (Jason Gustafson, Gwen
> > Shapira,
> > > > Jun Rao).
> > > >
> > > > Thanks to everyone!
> > > >
> > > > Best,
> > > > David
> > > >
> > > > On Wed, Mar 4, 2020 at 9:02 AM David Jacot 
> > wrote:
> > > >
> > > >> Hi Jun,
> > > >>
> > > >> You're right. I have noticed it while implementing it. I plan to
> use a
> > > >> default
> > > >> value as a sentinel in the protocol (e.g. -2) to cover this case.
> > > >>
> > > >> David
> > > >>
> > > >> On Wed, Mar 4, 2020 at 3:18 AM Jun Rao  wrote:
> > > >>
> > > >>> Hi, David,
> > > >>>
> > > >>> Thanks for the KIP. +1 from me too. Just one comment below.
> > > >>>
> > > >>> 1. Regarding the sentinel leader epoch to indicate topic deletion,
> it
> > > >>> seems
> > > >>> that we need to use a different sentinel value to indicate that the
> > > >>> leader
> > > >>> epoch is not present when the controller is still on the old
> version
> > > >>> during
> > > >>> upgrade.
> > > >>>
> > > >>> Jun
> > > >>>
> > > >>> On Mon, Mar 2, 2020 at 11:20 AM Gwen Shapira 
> > > wrote:
> > > >>>
> > > >>> > +1
> > > >>> >
> > > >>> > On Mon, Feb 24, 2020, 2:16 AM David Jacot 
> > > wrote:
> > > >>> >
> > > >>> > > Hi all,
> > > >>> > >
> > > >>> > > I would like to start a vote on KIP-570: Add leader epoch in
> > > >>> > > StopReplicaRequest
> > > >>> > >
> > > >>> > > The KIP is here:
> > > >>> > >
> > > >>> > >
> > > >>> >
> > > >>>
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-570%3A+Add+leader+epoch+in+StopReplicaRequest
> > > >>> > >
> > > >>> > > Thanks,
> > > >>> > > David
> > > >>> > >
> > > >>> >
> > > >>>
> > > >>
> > >
> >
>


Build failed in Jenkins: kafka-2.2-jdk8 #37

2020-03-26 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-9752; New member timeout can leave group rebalance stuck 
(#8339)


--
[...truncated 2.55 MB...]
kafka.utils.CommandLineUtilsTest > testParseArgs STARTED

kafka.utils.CommandLineUtilsTest > testParseArgs PASSED

kafka.utils.CommandLineUtilsTest > testParseArgsWithMultipleDelimiters STARTED

kafka.utils.CommandLineUtilsTest > testParseArgsWithMultipleDelimiters PASSED

kafka.utils.CommandLineUtilsTest > testMaybeMergeOptionsDefaultValueIfNotExist 
STARTED

kafka.utils.CommandLineUtilsTest > testMaybeMergeOptionsDefaultValueIfNotExist 
PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgWithNoDelimiter STARTED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgWithNoDelimiter PASSED

kafka.utils.CommandLineUtilsTest > 
testMaybeMergeOptionsDefaultOverwriteExisting STARTED

kafka.utils.CommandLineUtilsTest > 
testMaybeMergeOptionsDefaultOverwriteExisting PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid STARTED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid PASSED

kafka.utils.CommandLineUtilsTest > testMaybeMergeOptionsNotOverwriteExisting 
STARTED

kafka.utils.CommandLineUtilsTest > testMaybeMergeOptionsNotOverwriteExisting 
PASSED

kafka.utils.JsonTest > testParseToWithInvalidJson STARTED

kafka.utils.JsonTest > testParseToWithInvalidJson PASSED

kafka.utils.JsonTest > testParseTo STARTED

kafka.utils.JsonTest > testParseTo PASSED

kafka.utils.JsonTest > testJsonParse STARTED

kafka.utils.JsonTest > testJsonParse PASSED

kafka.utils.JsonTest > testLegacyEncodeAsString STARTED

kafka.utils.JsonTest > testLegacyEncodeAsString PASSED

kafka.utils.JsonTest > testEncodeAsBytes STARTED

kafka.utils.JsonTest > testEncodeAsBytes PASSED

kafka.utils.JsonTest > testEncodeAsString STARTED

kafka.utils.JsonTest > testEncodeAsString PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr STARTED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.ZkUtilsTest > testGetSequenceIdMethod STARTED

kafka.utils.ZkUtilsTest > testGetSequenceIdMethod PASSED

kafka.utils.ZkUtilsTest > testAbortedConditionalDeletePath STARTED

kafka.utils.ZkUtilsTest > testAbortedConditionalDeletePath PASSED

kafka.utils.ZkUtilsTest > testGetAllPartitionsTopicWithoutPartitions STARTED

kafka.utils.ZkUtilsTest > testGetAllPartitionsTopicWithoutPartitions PASSED

kafka.utils.ZkUtilsTest > testSuccessfulConditionalDeletePath STARTED

kafka.utils.ZkUtilsTest > testSuccessfulConditionalDeletePath PASSED

kafka.utils.ZkUtilsTest > testPersistentSequentialPath STARTED

kafka.utils.ZkUtilsTest > testPersistentSequentialPath PASSED

kafka.utils.ZkUtilsTest > testClusterIdentifierJsonParsing STARTED

kafka.utils.ZkUtilsTest > testClusterIdentifierJsonParsing PASSED

kafka.utils.ZkUtilsTest > testGetLeaderIsrAndEpochForPartition STARTED

kafka.utils.ZkUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.utils.PasswordEncoderTest > testEncoderConfigChange STARTED

kafka.utils.PasswordEncoderTest > testEncoderConfigChange PASSED

kafka.utils.PasswordEncoderTest > testEncodeDecodeAlgorithms STARTED

kafka.utils.PasswordEncoderTest > testEncodeDecodeAlgorithms PASSED

kafka.utils.PasswordEncoderTest > testEncodeDecode STARTED

kafka.utils.PasswordEncoderTest > testEncodeDecode PASSED

kafka.utils.timer.TimerTaskListTest > testAll STARTED

kafka.utils.timer.TimerTaskListTest > testAll PASSED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask STARTED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask PASSED

kafka.utils.timer.TimerTest > testTaskExpiration STARTED

kafka.utils.timer.TimerTest > testTaskExpiration PASSED

kafka.utils.ShutdownableThreadTest > testShutdownWhenCalledAfterThreadStart 
STARTED

kafka.utils.ShutdownableThreadTest > testShutdownWhenCalledAfterThreadStart 
PASSED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask STARTED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask STARTED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask PASSED

kafka.utils.SchedulerTest > testNonPeriodicTask STARTED

kafka.utils.SchedulerTest > testNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testRestart STARTED

kafka.utils.SchedulerTest > testRestart PASSED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler STARTED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler PASSED

kafka.utils.SchedulerTest > testPeriodicTask STARTED

kafka.utils.SchedulerTest > testPeriodicTask PASSED

kafka.utils.json.JsonValueTest > testJsonObjectIterator STARTED

kafka.utils.json.JsonValueTest > testJsonObjectIterator PASSED

kafka.utils.json.JsonValueTest > testDecodeLong STARTED

kafka.utils.json.JsonValueTest > testDecodeLong PASSED

kafka.utils.json.JsonValueTest > testAsJsonObject STARTED

kafka.utils.json.

Build failed in Jenkins: kafka-trunk-jdk11 #1292

2020-03-26 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Don't process sasl.kerberos.principal.to.local.rules on

[github] KAFKA-9760: Add KIP-447 protocol change to upgrade notes (#8350)

[github] KAFKA-9688: kafka-topic.sh should show KIP-455 adding and removing


--
[...truncated 2.96 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.Topolog

Re: [VOTE] KIP-519: Make SSL context/engine configuration extensible

2020-03-26 Thread Rajini Sivaram
+1 (binding)
Thanks for the KIP, Maulin!

Regards,

Rajini

On Thu, Mar 26, 2020 at 4:14 PM Maulin Vasavada 
wrote:

> FYI - we have also updated the KIP documentation with appropriate code
> samples for the interfaces and a few important changes.
>
> Thanks
> Maulin
>
> On Wed, Mar 25, 2020 at 10:21 AM Maulin Vasavada <
> maulin.vasav...@gmail.com>
> wrote:
>
> > bump
> >
> > On Wed, Mar 25, 2020 at 10:20 AM Maulin Vasavada <
> > maulin.vasav...@gmail.com> wrote:
> >
> >> Hi all
> >>
> >> After much await on the approach conclusion we have a PR
> >> https://github.com/apache/kafka/pull/8338.
> >>
> >> Can you please provide your vote so that we can move this forward?
> >>
> >> Thanks
> >> Maulin
> >>
> >> On Sun, Jan 26, 2020 at 11:03 PM Maulin Vasavada <
> >> maulin.vasav...@gmail.com> wrote:
> >>
> >>> Hi all
> >>>
> >>> After a good discussion on the KIP at
> >>> https://www.mail-archive.com/dev@kafka.apache.org/msg101011.html I
> >>> think we are ready to start voting.
> >>>
> >>> KIP:
> >>>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=128650952
> >>>
> >>> The KIP proposes - Making SSLEngine creation pluggable to support
> >>> customization of various security related aspects.
> >>>
> >>> Thanks
> >>> Maulin
> >>>
> >>
>


Re: [VOTE] KIP-447: Producer scalability for exactly once semantics

2020-03-26 Thread Matthias J. Sax
One more change for KIP-447.

Currently, Kafka Streams collects task-level metrics called
"commit-latency-[max|avg]". However, with KIP-447 tasks don't get
committed individually any longer, and thus these metrics no longer make
sense going forward.

Therefore, we propose to remove those metrics in the 2.6 release.
Deprecation does not make much sense, as there is just nothing to be
recorded in a useful way any longer.

I also updated the upgrade path section, as the upgrade path is actually
simpler than described originally.


If there are any concerns, please let us know!


-Matthias


On 3/5/20 12:56 PM, Matthias J. Sax wrote:
> There is one more change to this KIP for the upgrade path of Kafka
> Streams applications:
> 
> We cannot detect broker versions reliably, and thus, we need users to
> manually opt-in to the feature. Thus, we need to add a third value for
> configuration parameter `processing.guarantee` that we call
> `exactly_once_beta` -- specifying this config will enable the producer
> per thread design.
> 
> I updated the KIP accordingly. Please let us know if there are any
> concerns.
> 
> 
> -Matthias
> 
> On 2/11/20 4:44 PM, Guozhang Wang wrote:
>> Boyang,
> 
>> Thanks for the update. This change makes sense to me.
> 
>> Guozhang
> 
>> On Tue, Feb 11, 2020 at 11:37 AM Boyang Chen
>>  wrote:
> 
>>> Hey there,
>>>
>>> we are adding a small change to the KIP-447 public API. The
>>> default value of
>>> `transaction.abort.timed.out.transaction.cleanup.interval.ms`
>>> shall be changed from 1 minute to 10 seconds. The goal here is to
>>> trigger the expired transaction more frequently in order to
>>> reduce the consumer pending offset fetch wait time.
>>>
>>> Let me know if you have further questions, thanks!
>>>
>>>
>>> On Wed, Jan 8, 2020 at 3:44 PM Boyang Chen
>>>  wrote:
>>>
 Thanks Guozhang for another review! I have addressed all the
 javadoc changes for PendingTransactionException in the KIP.
 For
>>> FENCED_INSTANCE_ID
 the only thrown place would be on the new send offsets API,
 which is also addressed.

 Thanks Matthias for the vote! As we have 3 binding votes
 (Guozhang,
>>> Jason,
 and Matthias), the KIP is officially accepted and prepared to
 ship in
>>> 2.5.

 Still feel free to put more thoughts on either discussion or
 voting
>>> thread
 to refine the KIP!


 On Wed, Jan 8, 2020 at 3:15 PM Matthias J. Sax
  wrote:

> I just re-read the KIP. Overall I am +1 as well.
>

> Some minor comments (also apply to the Google design doc):
>
> 1) As 2.4 was release, references should be updated to 2.5.
>
> Addressed

>
>
>> 2) About the upgrade path, the KIP says:
>
> 2a)
>
>> Broker must be upgraded to 2.4 first. This means the
> `inter.broker.protocol.version` (IBP) has to be set to the
> latest. Any produce request with higher version will
> automatically get fenced
>>> because
> of no support.
>
> From my understanding, this is not correct? After a broker is
> updated to the new binaries, it should accept new requests,
> even if IBP was not bumped yet?
>
> Your understanding was correct, after some offline discussion
> we should
 not worry about IBP in this case.

> 2b)
>
> About the two rolling bounces for KS apps and the statement
>
>> one should never allow task producer and thread producer
>> under the
>>> same
> application group
>
> In the second rolling bounce, we might actually mix both (ie,
> per-task and per-thread producers) but this is fine as
> explained in the KIP. The only case we cannot allow is, old
> per-task producers (without consumer generation fencing) to
> be mixed with per-thread producers (that rely solely on
> consumer generation fencing).
>
> Does this sound correct?
>
> Correct, that's the purpose of doing 2 rolling bounce, where
> the first
 one is to guarantee everyone's opt-in for generation fencing.

>
> 3) We should also document how users can use KS 2.5
> applications against older brokers -- for this case, we need
> to stay on per-task producers and cannot use the new fencing
> mechanism. Currently, the KIP only describe a single way how
> users could make this work: by setting (and keeping)
> UPGRADE_FROM config to 2.4 (what might not be an ideal
> solution and might also not be clear by itself that people
> would need to do
>>> this)?
>
>
> Yes exactly, at the moment we are actively working on a plan
> to acquire
 broker's IBP during stream start-up and initialize based off
 that information, so that user doesn't need to keep
 UPGRADE_FROM simply for working with
>>> old
 brokers.

>
> -Matthias
>
>
>
> On 9/18/19 4:41 PM, Boyang Chen wrote:
>> Bump this thread to see if someone could also

Jenkins build is back to normal : kafka-trunk-jdk8 #4372

2020-03-26 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-9770) Caching State Store does not Close Underlying State Store When Exception is Thrown During Flushing

2020-03-26 Thread Bruno Cadonna (Jira)
Bruno Cadonna created KAFKA-9770:


 Summary: Caching State Store does not Close Underlying State Store 
When Exception is Thrown During Flushing
 Key: KAFKA-9770
 URL: https://issues.apache.org/jira/browse/KAFKA-9770
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.6.0
Reporter: Bruno Cadonna


When a caching state store is closed, it calls its {{flush()}} method. If 
{{flush()}} throws an exception, the underlying state store is not closed.
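A common fix pattern for this class of bug is to close the wrapped store in a finally block so the close still runs when flush() throws. This is a generic sketch with a simplified interface, not Streams' actual StateStore API:

```java
// Simplified, hypothetical store interface for illustration.
interface StateStore {
    void flush();
    void close();
}

// Sketch of the fix: even if flush() throws during close(), the underlying
// store is still closed; the exception propagates after the finally block.
class CachingStore implements StateStore {
    private final StateStore underlying;

    CachingStore(StateStore underlying) {
        this.underlying = underlying;
    }

    @Override
    public void flush() {
        underlying.flush();
    }

    @Override
    public void close() {
        try {
            flush();            // may throw
        } finally {
            underlying.close(); // must run regardless
        }
    }
}
```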



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9771) Inter-worker SSL is broken for keystores with multiple certificates

2020-03-26 Thread Chris Egerton (Jira)
Chris Egerton created KAFKA-9771:


 Summary: Inter-worker SSL is broken for keystores with multiple 
certificates
 Key: KAFKA-9771
 URL: https://issues.apache.org/jira/browse/KAFKA-9771
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.5.0
Reporter: Chris Egerton
Assignee: Chris Egerton


The recent bump in Jetty version causes inter-worker communication to fail in 
Connect when SSL is enabled and the keystore for the worker contains multiple 
certificates (which it might, in the case that SNI is enabled and the worker's 
REST interface is bound to multiple domain names). This is caused by [changes 
introduced in Jetty 9.4.23|https://github.com/eclipse/jetty.project/pull/4085], 
which were later [fixed in Jetty 
9.4.25|https://github.com/eclipse/jetty.project/pull/4404].

We recently tried and failed to [upgrade to Jetty 
9.4.25|https://github.com/apache/kafka/pull/8183], so upgrading the Jetty 
version to fix this issue isn't a viable option. Additionally, the [earliest 
clean version of Jetty|https://www.eclipse.org/jetty/security-reports.html] (at 
the time of writing) with regards to CVEs is 9.4.24, so reverting to a 
pre-9.4.23 version is also not a viable option.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8470) State change logs should not be in TRACE level

2020-03-26 Thread Jun Rao (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-8470.

Fix Version/s: 2.6.0
   Resolution: Fixed

Merged the PR to trunk.

    1. Defaults state-change log level to INFO.

    2. INFO-level state-change logging includes (a) request-level logging with 
just partition counts; (b) the leader/isr changes per partition in the 
controller and in the broker (reduced to mostly just one log entry per 
partition).

> State change logs should not be in TRACE level
> --
>
> Key: KAFKA-8470
> URL: https://issues.apache.org/jira/browse/KAFKA-8470
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Stanislav Kozlovski
>Assignee: Stanislav Kozlovski
>Priority: Minor
> Fix For: 2.6.0
>
>
> The StateChange logger in Kafka should not be logging its state changes at 
> TRACE level.
> We consider these changes very useful in debugging, and we additionally 
> configure that logger to log at TRACE level by default.
> Since we consider it important enough to configure its own logger with a 
> separate log level, why don't we change those logs to INFO and have the 
> logger use the defaults?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] 2.5.0 RC2

2020-03-26 Thread Christopher Egerton
Hi all,

I'd like to request that https://issues.apache.org/jira/browse/KAFKA-9771 be
treated as a release blocker for 2.5.

This is a regression caused by the recent bump in Jetty version that causes
inter-worker communication to fail for Connect clusters that use SSL and a
keystore that contains multiple certificates (which is necessary for SNI in
the event that the Connect REST interface is bound to multiple domain
names).

The impact for affected users is quite high; either the Connect worker must
be reconfigured to listen on a single domain name and its keystore must be
wiped accordingly, or inter-worker SSL needs to be disabled entirely by
adding an unsecured listener and configuring the worker to advertise the
URL for that unsecured listener to other workers in the cluster.

I've already implemented a small fix that works with local testing, and
have opened a PR to add it to Kafka:
https://github.com/apache/kafka/pull/8369.

Would it be possible to get this fix included in 2.5.0, pending review?

Cheers,

Chris

On Fri, Mar 20, 2020 at 6:59 PM Ismael Juma  wrote:

> Hi Boyang,
>
> Is this a regression?
>
> Ismael
>
> On Fri, Mar 20, 2020, 5:43 PM Boyang Chen 
> wrote:
>
> > Hey David,
> >
> > I would like to raise https://issues.apache.org/jira/browse/KAFKA-9701
> as
> > a
> > 2.5 blocker. The impact of this bug is that it could throw a fatal
> > exception and kill a stream thread at the Kafka Streams level. It could
> > also create a crash scenario for plain Kafka Consumer users, as the
> > exception will be thrown all the way up.
> >
> > Let me know your thoughts.
> >
> > Boyang
> >
> > On Tue, Mar 17, 2020 at 8:10 AM David Arthur  wrote:
> >
> > > Hello Kafka users, developers and client-developers,
> > >
> > > This is the third candidate for release of Apache Kafka 2.5.0.
> > >
> > > * TLS 1.3 support (1.2 is now the default)
> > > * Co-groups for Kafka Streams
> > > * Incremental rebalance for Kafka Consumer
> > > * New metrics for better operational insight
> > > * Upgrade Zookeeper to 3.5.7
> > > * Deprecate support for Scala 2.11
> > >
> > >
> > >  Release notes for the 2.5.0 release:
> > >
> https://home.apache.org/~davidarthur/kafka-2.5.0-rc2/RELEASE_NOTES.html
> > >
> > > *** Please download, test and vote by Tuesday March 24, 2020 by 5pm PT.
> > >
> > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > https://kafka.apache.org/KEYS
> > >
> > > * Release artifacts to be voted upon (source and binary):
> > > https://home.apache.org/~davidarthur/kafka-2.5.0-rc2/
> > >
> > > * Maven artifacts to be voted upon:
> > > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> > >
> > > * Javadoc:
> > > https://home.apache.org/~davidarthur/kafka-2.5.0-rc2/javadoc/
> > >
> > > * Tag to be voted upon (off 2.5 branch) is the 2.5.0 tag:
> > > https://github.com/apache/kafka/releases/tag/2.5.0-rc2
> > >
> > > * Documentation:
> > > https://kafka.apache.org/25/documentation.html
> > >
> > > * Protocol:
> > > https://kafka.apache.org/25/protocol.html
> > >
> > >
> > > I'm thrilled to be able to include links to both build jobs with
> > successful
> > > builds! Thanks to everyone who has helped reduce our flaky test
> exposure
> > > these past few weeks :)
> > >
> > > * Successful Jenkins builds for the 2.5 branch:
> > > Unit/integration tests:
> https://builds.apache.org/job/kafka-2.5-jdk8/64/
> > > System tests:
> > > https://jenkins.confluent.io/job/system-test-kafka/job/2.5/42/
> > >
> > > --
> > > David Arthur
> > >
> >
>


Re: [DISCUSS] KIP-580: Exponential Backoff for Kafka Clients

2020-03-26 Thread Konstantine Karantasis
Thank you Sanjana.

I checked the updates on the KIP and it looks good to me too now. I'll vote
on the voting thread.

Best,
Konstantine

On Wed, Mar 25, 2020 at 1:57 PM Sanjana Kaundinya 
wrote:

> Hi Konstantine,
>
> Thanks for the insightful feedback. I’ll address it here as well as update
> the KIP accordingly.
>
> I think it is important to call out the fact that we are leaving out
> Connect and Streams in the proposed changes, so that it can be addressed in
> future KIP/changes. As you pointed out, Kafka Connect does utilize
> ConsumerNetworkClient and Metadata for its rebalancing protocol, and as a
> result the changes made to exponential backoff would affect the
> WorkerGroupMember that utilizes these classes. Any Kafka client that
> utilizes these classes would be making use of exponential backoff instead
> of the current static backoff.
>
> That being said, although Kafka Connect will be affected with respect to
> those two things, not all of the backoff configs are being replaced here.
> As you correctly stated, classes such as AbstractCoordinator,
> ConsumerCoordinator, and the Heartbeat thread would be utilizing the static
> backoff behavior - no changes will be made with respect to rebalancing.
>
> With respect to Compatibility, I will add into that section the things
> I’ve mentioned above - effects on Kafka Connect as well as no changes to
> anything related to the rebalance protocol. In addition, the reason why
> retry.backoff.max.ms shouldn’t default to the same value as
> retry.backoff.ms is that then if a user isn’t aware of this feature and
> doesn’t set this, they wouldn’t enjoy the exponential backoff. Instead it’s
> important to ensure that we provide this as a default feature for all Kafka
> clients. In addition defaulting the retry.backoff.max.ms to 1000 ms
> unconditionally wouldn’t give users the flexibility to tune their clients
> to their environments.
>
> Finally, yes you are correct, in order to have exponential backoff, we
> actually do need both configs, with retry.backoff.ms <
> retry.backoff.max.ms. I will update the KIP to reflect that as well as
> incorporate the wording change you have suggested.
>
> Thanks,
> Sanjana
>
> On Mar 25, 2020, 10:50 AM -0700, Konstantine Karantasis <
> konstant...@confluent.io>, wrote:
> > Hi Sanjana and thanks for the KIP!
> >
> > Sorry for the late response, but I still have a few questions that you
> > might find useful.
> >
> > The KIP currently does not mention Kafka Connect at all. I have read
> > the discussion above where it'd been decided to leave Connect and Streams
> > out of the proposed changes, but I feel this should be called out
> > explicitly. At the same time, Kafka Connect is also a Kafka client that
> > uses ConsumerNetworkClient and Metadata for its rebalancing protocol.
> It's
> > not clear to me whether changes in those classes will affect Connect
> > workers. Do you think it's worth clarifying that?
> >
> > Additionally, you might also want to add a section specifically to
> mention
> > how this new config affects the places where the current config
> > retry.backoff.ms is used today to back-off during rebalancing. Is
> > exponential backoff going to replace the old config in those places as
> > well? And if it does, should we add a mention that a very high value of
> the
> > new retry.backoff.max.ms might affect how quickly a consumer or worker
> > rejoins their group after it experiences a temporary network partitioning
> > from the broker coordinator?
> >
> > Places that explicitly use retry.backoff.ms at the moment include the
> > AbstractCoordinator, the ConsumerCoordinator and the Heartbeat thread. By
> > reading the previous discussion, I understand that these classes might
> keep
> > using the old static backoff. Even if that's the case, I think it's worth
> > mentioning that in the KIP for reference.
> >
> > In the rejected alternatives section, you mention that "existing behavior
> > is always maintained: for reasons explained in the compatibility
> section.".
> > However, the Compatibility section says that there are no compatibility
> > concerns. I'd suggest extending the compatibility section to help a bit
> > more in explaining why the alternatives were rejected. Also, in the
> > compatibility section you mention that the new config (
> retry.backoff.max.ms)
> > will replace the old one (retry.backoff.ms), but from reading at the
> > beginning, I understand that in order to have exponential increments, you
> > actually need both configs, with retry.backoff.ms < retry.backoff.max.ms
> .
> > Should the mention around replacement be removed?
> >
> > Finally, I have a minor suggestion that might help explain the following
> > sentence better:
> >
> > "If retry.backoff.ms is set to be greater than retry.backoff.max.ms,
> then
> > retry.backoff.max.ms will be used as a **constant backoff from the
> > beginning without exponential increase**." (highlighting the difference
> > only for reference here). Unless
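
The two-config semantics discussed in this thread — exponential growth from retry.backoff.ms up to the retry.backoff.max.ms cap, degrading to a constant backoff when the base already exceeds the cap — can be sketched as follows. This is a minimal illustration only; the function name and the jitter scheme are assumptions, not KIP-580's actual implementation.

```python
import random

def backoff_ms(attempt, retry_backoff_ms=100, retry_backoff_max_ms=1000, jitter=0.2):
    """Exponential backoff capped at retry_backoff_max_ms.

    If retry.backoff.ms >= retry.backoff.max.ms, the cap is used as a
    constant backoff from the first attempt, as described in the thread.
    """
    if retry_backoff_ms >= retry_backoff_max_ms:
        base = retry_backoff_max_ms
    else:
        base = min(retry_backoff_ms * (2 ** attempt), retry_backoff_max_ms)
    # Jitter spreads retries from many clients apart (exact scheme assumed).
    return base * random.uniform(1 - jitter, 1 + jitter)
```

With the defaults above, attempts back off 100 ms, 200 ms, 400 ms, ... until they flatten at the 1000 ms cap.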

Re: [VOTE] KIP-580: Exponential Backoff for Kafka Clients

2020-03-26 Thread Konstantine Karantasis
Thank you Sanjana!

+1 (binding) from me too.

Konstantine


On Wed, Mar 25, 2020 at 1:57 PM Sanjana Kaundinya 
wrote:

> Hi Konstantine,
>
> Thanks for the feedback, I have addressed it on the [DISCUSS] thread and
> will update the KIP shortly.
>
> Thanks,
> Sanjana
> On Mar 25, 2020, 10:52 AM -0700, Konstantine Karantasis <
> konstant...@confluent.io>, wrote:
> > Hi Sanjana.
> > Thanks for the KIP! Seems quite useful not to overwhelm the brokers with
> > the described requests from clients.
> >
> > You have the votes already, and I'm also in favor overall, but I've made
> a
> > couple of questions (sorry for the delay) regarding Connect, which is
> also
> > using retry.backoff.ms but currently is not mentioned in the KIP, as
> well
> > as a question around how we expect the new setting to work with
> rebalances
> > in clients that inherit from the AbstractCoordinator (Consumer and
> Connect
> > at a minimum).
> >
> > Maybe it's worth clarifying these points in the KIP, or the mailing list
> > thread in case I missed something w/r/t the intent of the changes.
> >
> > Best,
> > Konstantine
> >
> >
> > On Tue, Mar 24, 2020 at 9:42 PM David Jacot  wrote:
> >
> > > +1 (non-binding)
> > >
> > > Thanks for the KIP, great improvement!
> > >
> > > Le mer. 25 mars 2020 à 04:44, Gwen Shapira  a
> écrit :
> > >
> > > > +1 (binding) - thank you
> > > >
> > > > On Mon, Mar 23, 2020, 10:50 AM Sanjana Kaundinya <
> skaundi...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi Everyone,
> > > > >
> > > > > I’d like to start a vote for KIP-580: Exponential Backoff for Kafka
> > > > > Clients. The link to the KIP can be found here:
> > > > >
> > > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-580%3A+Exponential+Backoff+for+Kafka+Clients
> > > > > .
> > > > >
> > > > > Thanks,
> > > > > Sanjana
> > > > >
> > > > >
> > > >
> > >
>


Jenkins build is back to normal : kafka-1.0-jdk8 #290

2020-03-26 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-9707) InsertField.Key transformation should apply to tombstone records

2020-03-26 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis resolved KAFKA-9707.
---
Resolution: Fixed

This fix was merged in time so it will be available in 2.5.0 and 2.4.2, 2.3.2, 
2.2.3, 2.1.2, 2.0.2, 1.1.2, 1.0.3

> InsertField.Key transformation should apply to tombstone records
> 
>
> Key: KAFKA-9707
> URL: https://issues.apache.org/jira/browse/KAFKA-9707
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.0.3, 1.1.2, 2.0.2, 2.1.2, 2.2.2, 2.4.0, 2.3.1, 2.4.1
>Reporter: Greg Harris
>Assignee: Greg Harris
>Priority: Major
>  Labels: regression
> Fix For: 1.0.3, 1.1.2, 2.0.2, 2.1.2, 2.2.3, 2.5.0, 2.3.2, 2.4.2
>
>
> *Note: This was an inadvertent regression caused by KAFKA-8523.*
> Reproduction steps:
>  # Configure an InsertField.Key transformation
>  # Pass a tombstone record (with non-null key, but null value) through the 
> transform
> Expected behavior:
> The key field is inserted, and the value remains null
> Observed behavior:
> The key field is not inserted, and the value remains null



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
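
The expected behavior described in the ticket — insert the field into the key while the tombstone's null value passes through untouched — can be sketched in pseudologic. The record shape and function name here are illustrative assumptions, not Connect's actual SMT API.

```python
def insert_field_into_key(record, field_name, field_value):
    # A tombstone has a non-null key and a null value; the transform
    # should touch only the key and leave the null value intact.
    key = dict(record["key"]) if record["key"] is not None else {}
    key[field_name] = field_value
    return {"key": key, "value": record["value"]}

# A tombstone keeps its null value after the key-side insert:
tombstone = {"key": {"id": 1}, "value": None}
out = insert_field_into_key(tombstone, "source", "orders")
assert out["value"] is None and out["key"]["source"] == "orders"
```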


Jenkins build is back to normal : kafka-trunk-jdk11 #1293

2020-03-26 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-519: Make SSL context/engine configuration extensible

2020-03-26 Thread Zhou, Thomas
+1 (non-binding)

Regards,
Thomas

On 3/26/20, 12:36 PM, "Rajini Sivaram"  wrote:

+1 (binding)
Thanks for the KIP, Maulin!

Regards,

Rajini

On Thu, Mar 26, 2020 at 4:14 PM Maulin Vasavada 
wrote:

> FYI - we have updated the KIP documentation also with appropriate code
> samples for interfaces and few important changes.
>
> Thanks
> Maulin
>
> On Wed, Mar 25, 2020 at 10:21 AM Maulin Vasavada <
> maulin.vasav...@gmail.com>
> wrote:
>
> > bump
> >
> > On Wed, Mar 25, 2020 at 10:20 AM Maulin Vasavada <
> > maulin.vasav...@gmail.com> wrote:
> >
> >> Hi all
> >>
> >> After much await on the approach conclusion we have a PR
> >> https://github.com/apache/kafka/pull/8338.
> >>
> >> Can you please provide your vote so that we can more this forward?
> >>
> >> Thanks
> >> Maulin
> >>
> >> On Sun, Jan 26, 2020 at 11:03 PM Maulin Vasavada <
> >> maulin.vasav...@gmail.com> wrote:
> >>
> >>> Hi all
> >>>
> >>> After a good discussion on the KIP at
> >>> https://www.mail-archive.com/dev@kafka.apache.org/msg101011.html I
> >>> think we are ready to start voting.
> >>>
> >>> KIP:
> >>>
> 
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=128650952
> >>>
> >>> The KIP proposes - Making SSLEngine creation pluggable to support
> >>> customization of various security related aspects.
> >>>
> >>> Thanks
> >>> Maulin
> >>>
> >>
>




Build failed in Jenkins: kafka-2.5-jdk8 #79

2020-03-26 Thread Apache Jenkins Server
See 


Changes:

[konstantine] KAFKA-9707: Fix InsertField.Key should apply to keys of tombstone


--
[...truncated 2.90 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent STAR

Build failed in Jenkins: kafka-2.2-jdk8 #38

2020-03-26 Thread Apache Jenkins Server
See 


Changes:

[konstantine] KAFKA-9707: Fix InsertField.Key should apply to keys of tombstone


--
[...truncated 2.55 MB...]
kafka.log.LogConfigTest > ensureNoStaticInitializationOrderDependency STARTED

kafka.log.LogConfigTest > ensureNoStaticInitializationOrderDependency PASSED

kafka.log.LogConfigTest > shouldValidateThrottledReplicasConfig STARTED

kafka.log.LogConfigTest > shouldValidateThrottledReplicasConfig PASSED

kafka.log.LogConfigTest > testFromPropsEmpty STARTED

kafka.log.LogConfigTest > testFromPropsEmpty PASSED

kafka.log.LogConfigTest > testKafkaConfigToProps STARTED

kafka.log.LogConfigTest > testKafkaConfigToProps PASSED

kafka.log.LogConfigTest > testFromPropsInvalid STARTED

kafka.log.LogConfigTest > testFromPropsInvalid PASSED

kafka.log.LogCleanerTest > testCleanCorruptMessageSet STARTED

kafka.log.LogCleanerTest > testCleanCorruptMessageSet PASSED

kafka.log.LogCleanerTest > testAbortedTransactionSpanningSegments STARTED

kafka.log.LogCleanerTest > testAbortedTransactionSpanningSegments PASSED

kafka.log.LogCleanerTest > 
testLogCleanerRetainsLastSequenceEvenIfTransactionAborted STARTED

kafka.log.LogCleanerTest > 
testLogCleanerRetainsLastSequenceEvenIfTransactionAborted PASSED

kafka.log.LogCleanerTest > testBuildOffsetMap STARTED

kafka.log.LogCleanerTest > testBuildOffsetMap PASSED

kafka.log.LogCleanerTest > testAbortMarkerRemoval STARTED

kafka.log.LogCleanerTest > testAbortMarkerRemoval PASSED

kafka.log.LogCleanerTest > testBuildOffsetMapFakeLarge STARTED

kafka.log.LogCleanerTest > testBuildOffsetMapFakeLarge PASSED

kafka.log.LogCleanerTest > testSegmentGrouping STARTED

kafka.log.LogCleanerTest > testSegmentGrouping PASSED

kafka.log.LogCleanerTest > testCorruptMessageSizeLargerThanBytesAvailable 
STARTED

kafka.log.LogCleanerTest > testCorruptMessageSizeLargerThanBytesAvailable PASSED

kafka.log.LogCleanerTest > testSizeTrimmedForPreallocatedAndCompactedTopic 
STARTED

kafka.log.LogCleanerTest > testSizeTrimmedForPreallocatedAndCompactedTopic 
PASSED

kafka.log.LogCleanerTest > testCommitMarkerRetentionWithEmptyBatch STARTED

kafka.log.LogCleanerTest > testCommitMarkerRetentionWithEmptyBatch PASSED

kafka.log.LogCleanerTest > testLogCleanerRetainsProducerLastSequence STARTED

kafka.log.LogCleanerTest > testLogCleanerRetainsProducerLastSequence PASSED

kafka.log.LogCleanerTest > testCleanSegmentsWithAbort STARTED

kafka.log.LogCleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.LogCleanerTest > testDeletedBatchesWithNoMessagesRead STARTED

kafka.log.LogCleanerTest > testDeletedBatchesWithNoMessagesRead PASSED

kafka.log.LogCleanerTest > testSegmentGroupingWithSparseOffsets STARTED

kafka.log.LogCleanerTest > testSegmentGroupingWithSparseOffsets PASSED

kafka.log.LogCleanerTest > testLargeMessage STARTED

kafka.log.LogCleanerTest > testLargeMessage PASSED

kafka.log.LogCleanerTest > testCleanEmptyControlBatch STARTED

kafka.log.LogCleanerTest > testCleanEmptyControlBatch PASSED

kafka.log.LogCleanerTest > testRecoveryAfterCrash STARTED

kafka.log.LogCleanerTest > testRecoveryAfterCrash PASSED

kafka.log.LogCleanerTest > testCleanTombstone STARTED

kafka.log.LogCleanerTest > testCleanTombstone PASSED

kafka.log.LogCleanerTest > testDuplicateCheckAfterCleaning STARTED

kafka.log.LogCleanerTest > testDuplicateCheckAfterCleaning PASSED

kafka.log.LogCleanerTest > testAbortMarkerRetentionWithEmptyBatch STARTED

kafka.log.LogCleanerTest > testAbortMarkerRetentionWithEmptyBatch PASSED

kafka.log.LogCleanerTest > testCleaningWithUncleanableSection STARTED

kafka.log.LogCleanerTest > testCleaningWithUncleanableSection PASSED

kafka.log.LogCleanerTest > testLogToClean STARTED

kafka.log.LogCleanerTest > testLogToClean PASSED

kafka.log.LogCleanerTest > testCleaningWithDeletes STARTED

kafka.log.LogCleanerTest > testCleaningWithDeletes PASSED

kafka.log.LogCleanerTest > testClientHandlingOfCorruptMessageSet STARTED

kafka.log.LogCleanerTest > testClientHandlingOfCorruptMessageSet PASSED

kafka.log.LogCleanerTest > testCleanWithTransactionsSpanningSegments STARTED

kafka.log.LogCleanerTest > testCleanWithTransactionsSpanningSegments PASSED

kafka.log.LogCleanerTest > testEmptyBatchRemovalWithSequenceReuse STARTED

kafka.log.LogCleanerTest > testEmptyBatchRemovalWithSequenceReuse PASSED

kafka.log.LogCleanerTest > testCommittedTransactionSpanningSegments STARTED

kafka.log.LogCleanerTest > testCommittedTransactionSpanningSegments PASSED

kafka.log.LogCleanerTest > testMessageLargerThanMaxMessageSize STARTED

kafka.log.LogCleanerTest > testMessageLargerThanMaxMessageSize PASSED

kafka.log.LogCleanerTest > testMessageLargerThanMaxMessageSizeWithCorruptHeader 
STARTED

kafka.log.LogCleanerTest > testMessageLargerThanMaxMessageSizeWithCorruptHeader 
PASSED

kafka.log.LogCleanerTest > testCleaningBeyondMissingOffsets STARTED

kafka.log.LogCleanerTest > testCleanin

Build failed in Jenkins: kafka-2.1-jdk8 #259

2020-03-26 Thread Apache Jenkins Server
See 


Changes:

[konstantine] KAFKA-9707: Fix InsertField.Key should apply to keys of tombstone


--
[...truncated 923.69 KB...]
kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[20] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[20] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[21] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[21] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[22] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[22] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[23] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[23] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[24] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[24] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[25] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[25] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[26] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[26] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[27] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[27] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[28] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[28] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[29] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[29] PASSED

kafka.log.LogCleanerParameterizedIntegrationTest > cleanerConfigUpdateTest[0] 
STARTED

kafka.log.LogCleanerParameterizedIntegrationTest > cleanerConfigUpdateTest[0] 
PASSED

kafka.log.LogCleanerParameterizedIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[0] STARTED

kafka.log.LogCleanerParameterizedIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[0] PASSED

kafka.log.LogCleanerParameterizedIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[0] STARTED

kafka.log.LogCleanerParameterizedIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[0] PASSED

kafka.log.LogCleanerParameterizedIntegrationTest > cleanerTest[0] STARTED

kafka.log.LogCleanerParameterizedIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerParameterizedIntegrationTest > 
testCleanerWithMessageFormatV0[0] STARTED

kafka.log.LogCleanerParameterizedIntegrationTest > 
testCleanerWithMessageFormatV0[0] PASSED

kafka.log.LogCleanerParameterizedIntegrationTest > cleanerConfigUpdateTest[1] 
STARTED

kafka.log.LogCleanerParameterizedIntegrationTest > cleanerConfigUpdateTest[1] 
PASSED

kafka.log.LogCleanerParameterizedIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[1] STARTED

kafka.log.LogCleanerParameterizedIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[1] PASSED

kafka.log.LogCleanerParameterizedIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersion

Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-03-26 Thread Kowshik Prakasam
Hi Colin,

Thanks for the feedback! I've changed the KIP to address your suggestions.
Please find below my explanation. Here is a link to KIP 584:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features
.

1. '__data_version__' is the version of the finalized feature metadata
(i.e. actual ZK node contents), while the '__schema_version__' is the
version of the schema of the data persisted in ZK. These serve different
purposes. '__data_version__' is useful mainly to clients during reads,
to differentiate between the 2 versions of eventually consistent 'finalized
features' metadata (i.e. larger metadata version is more recent).
'__schema_version__' provides an additional degree of flexibility, where if
we decide to change the schema for '/features' node in ZK (in the future),
then we can manage broker roll outs suitably (i.e.
serialization/deserialization of the ZK data can be handled safely).

2. Regarding admin client needing min and max information - you are right!
I've changed the KIP such that the Admin API also allows the user to read
'supported features' from a specific broker. Please look at the section
"Admin API changes".

3. Regarding the use of `long` vs `Long` - it was not deliberate. I've
improved the KIP to just use `long` at all places.

4. Regarding kafka.admin.FeatureCommand tool - you are right! I've updated
the KIP sketching the functionality provided by this tool, with some
examples. Please look at the section "Tooling support examples".

Thank you!


Cheers,
Kowshik

On Wed, Mar 25, 2020 at 11:31 PM Colin McCabe  wrote:

> Thanks, Kowshik, this looks good.
>
> In the "Schema" section, do we really need both __schema_version__ and
> __data_version__?  Can we just have a single version field here?
>
> Shouldn't the Admin(Client) function have some way to get the min and max
> information that we're exposing as well?  I guess we could have min, max,
> and current.  Unrelated: is the use of Long rather than long deliberate
> here?
>
> It would be good to describe how the command line tool
> kafka.admin.FeatureCommand will work.  For example the flags that it will
> take and the output that it will generate to STDOUT.
>
> cheers,
> Colin
>
>
> On Tue, Mar 24, 2020, at 17:08, Kowshik Prakasam wrote:
> > Hi all,
> >
> > I've opened KIP-584 
> > which
> > is intended to provide a versioning scheme for features. I'd like to use
> > this thread to discuss the same. I'd appreciate any feedback on this.
> > Here
> > is a link to KIP-584:
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features
> >  .
> >
> > Thank you!
> >
> >
> > Cheers,
> > Kowshik
> >
>


Build failed in Jenkins: kafka-2.3-jdk8 #191

2020-03-26 Thread Apache Jenkins Server
See 


Changes:

[konstantine] KAFKA-9707: Fix InsertField.Key should apply to keys of tombstone


--
[...truncated 2.85 MB...]

kafka.zk.KafkaZkClientTest > testClusterIdMethods STARTED

kafka.zk.KafkaZkClientTest > testClusterIdMethods PASSED

kafka.zk.KafkaZkClientTest > testEntityConfigManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testEntityConfigManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testUpdateLeaderAndIsr STARTED

kafka.zk.KafkaZkClientTest > testUpdateLeaderAndIsr PASSED

kafka.zk.KafkaZkClientTest > testUpdateBrokerInfo STARTED

kafka.zk.KafkaZkClientTest > testUpdateBrokerInfo PASSED

kafka.zk.KafkaZkClientTest > testCreateRecursive STARTED

kafka.zk.KafkaZkClientTest > testCreateRecursive PASSED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData STARTED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData PASSED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods STARTED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods PASSED

kafka.zk.KafkaZkClientTest > testSetTopicPartitionStatesRaw STARTED

kafka.zk.KafkaZkClientTest > testSetTopicPartitionStatesRaw PASSED

kafka.zk.KafkaZkClientTest > testAclManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testAclManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testPreferredReplicaElectionMethods STARTED

kafka.zk.KafkaZkClientTest > testPreferredReplicaElectionMethods PASSED

kafka.zk.KafkaZkClientTest > testPropagateLogDir STARTED

kafka.zk.KafkaZkClientTest > testPropagateLogDir PASSED

kafka.zk.KafkaZkClientTest > testGetDataAndStat STARTED

kafka.zk.KafkaZkClientTest > testGetDataAndStat PASSED

kafka.zk.KafkaZkClientTest > testReassignPartitionsInProgress STARTED

kafka.zk.KafkaZkClientTest > testReassignPartitionsInProgress PASSED

kafka.zk.KafkaZkClientTest > testCreateTopLevelPaths STARTED

kafka.zk.KafkaZkClientTest > testCreateTopLevelPaths PASSED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationGetters STARTED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationGetters PASSED

kafka.zk.KafkaZkClientTest > testLogDirEventNotificationsDeletion STARTED

kafka.zk.KafkaZkClientTest > testLogDirEventNotificationsDeletion PASSED

kafka.zk.KafkaZkClientTest > testGetLogConfigs STARTED

kafka.zk.KafkaZkClientTest > testGetLogConfigs PASSED

kafka.zk.KafkaZkClientTest > testBrokerSequenceIdMethods STARTED

kafka.zk.KafkaZkClientTest > testBrokerSequenceIdMethods PASSED

kafka.zk.KafkaZkClientTest > testAclMethods STARTED

kafka.zk.KafkaZkClientTest > testAclMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath STARTED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath PASSED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath STARTED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath PASSED

kafka.zk.KafkaZkClientTest > testDeleteTopicZNode STARTED

kafka.zk.KafkaZkClientTest > testDeleteTopicZNode PASSED

kafka.zk.KafkaZkClientTest > testDeletePath STARTED

kafka.zk.KafkaZkClientTest > testDeletePath PASSED

kafka.zk.KafkaZkClientTest > testGetBrokerMethods STARTED

kafka.zk.KafkaZkClientTest > testGetBrokerMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateTokenChangeNotification STARTED

kafka.zk.KafkaZkClientTest > testCreateTokenChangeNotification PASSED

kafka.zk.KafkaZkClientTest > testGetTopicsAndPartitions STARTED

kafka.zk.KafkaZkClientTest > testGetTopicsAndPartitions PASSED

kafka.zk.KafkaZkClientTest > testRegisterBrokerInfo STARTED

kafka.zk.KafkaZkClientTest > testRegisterBrokerInfo PASSED

kafka.zk.KafkaZkClientTest > testRetryRegisterBrokerInfo STARTED

kafka.zk.KafkaZkClientTest > testRetryRegisterBrokerInfo PASSED

kafka.zk.KafkaZkClientTest > testConsumerOffsetPath STARTED

kafka.zk.KafkaZkClientTest > testConsumerOffsetPath PASSED

kafka.zk.KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck STARTED

kafka.zk.KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck PASSED

kafka.zk.KafkaZkClientTest > testControllerManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testControllerManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testTopicAssignmentMethods STARTED

kafka.zk.KafkaZkClientTest > testTopicAssignmentMethods PASSED

kafka.zk.KafkaZkClientTest > testPropagateIsrChanges STARTED

kafka.zk.KafkaZkClientTest > testPropagateIsrChanges PASSED

kafka.zk.KafkaZkClientTest > testControllerEpochMethods STARTED

kafka.zk.KafkaZkClientTest > testControllerEpochMethods PASSED

kafka.zk.KafkaZkClientTest > testDeleteRecursive STARTED

kafka.zk.KafkaZkClientTest > testDeleteRecursive PASSED

kafka.zk.KafkaZkClientTest > testGetTopicPartitionStates STARTED

kafka.zk.KafkaZkClientTest > testGetTopicPartitionStates PASSED

kafka.zk.KafkaZkClientTest > testCreateConfigChangeNotification STARTED

kafka.zk.KafkaZkClientTest > testCreateConfigChangeNotification PASSED

Build failed in Jenkins: kafka-trunk-jdk8 #4373

2020-03-26 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9688: kafka-topic.sh should show KIP-455 adding and removing

[manikumar] KAFKA-9729: avoid readLock in authorizer ACL lookups

[github] Minor: Don't swallow errors when altering log dirs in ReplicaManager

[github] KAFKA-9707: Fix InsertField.Key should apply to keys of tombstone

[github] KAFKA-8470: State change logs should not be in TRACE level (#8320)


--
[...truncated 5.95 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardR

[jira] [Created] (KAFKA-9772) Transactional offset commit fails with IllegalStateException

2020-03-26 Thread Dhruvil Shah (Jira)
Dhruvil Shah created KAFKA-9772:
---

 Summary: Transactional offset commit fails with 
IllegalStateException
 Key: KAFKA-9772
 URL: https://issues.apache.org/jira/browse/KAFKA-9772
 Project: Kafka
  Issue Type: Bug
Reporter: Dhruvil Shah


java.lang.IllegalStateException: Trying to complete a transactional offset commit for producerId 7090 and groupId application-id even though the offset commit record itself hasn't been appended to the log.
	at kafka.coordinator.group.GroupMetadata.$anonfun$completePendingTxnOffsetCommit$2(GroupMetadata.scala:677)
	at scala.collection.mutable.HashMap.$anonfun$foreach$1(HashMap.scala:149)
	at scala.collection.mutable.HashTable.foreachEntry(HashTable.scala:237)
	at scala.collection.mutable.HashTable.foreachEntry$(HashTable.scala:230)
	at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:44)
	at scala.collection.mutable.HashMap.foreach(HashMap.scala:149)
	at kafka.coordinator.group.GroupMetadata.$anonfun$completePendingTxnOffsetCommit$1(GroupMetadata.scala:674)
	at kafka.coordinator.group.GroupMetadata.completePendingTxnOffsetCommit(GroupMetadata.scala:673)
	at kafka.coordinator.group.GroupMetadataManager.$anonfun$handleTxnCompletion$2(GroupMetadataManager.scala:874)
	at kafka.coordinator.group.GroupMetadata.inLock(GroupMetadata.scala:228)
	at kafka.coordinator.group.GroupMetadataManager.$anonfun$handleTxnCompletion$1(GroupMetadataManager.scala:873)
	at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
	at kafka.coordinator.group.GroupMetadataManager.handleTxnCompletion(GroupMetadataManager.scala:870)
	at kafka.coordinator.group.GroupMetadataManager.$anonfun$scheduleHandleTxnCompletion$1(GroupMetadataManager.scala:865)
	at kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:114)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:834)
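For context, the invariant the exception reports can be modeled as a small state machine: a transactional offset commit is staged first, its log append completes later, and completion must never run before the append callback has fired. The following is a minimal, hypothetical Java sketch of that bookkeeping (class and method names are illustrative, not Kafka's actual coordinator code):

```java
import java.util.HashMap;
import java.util.Map;

// Simplified model of pending transactional-offset bookkeeping.
// Names are illustrative; this is not Kafka's GroupMetadata implementation.
class PendingTxnOffsets {
    // producerId -> whether the offset commit record has been appended to the log
    private final Map<Long, Boolean> appended = new HashMap<>();

    void prepareTxnOffsetCommit(long producerId) {
        appended.put(producerId, false); // commit staged, log append still pending
    }

    void onAppendComplete(long producerId) {
        appended.put(producerId, true);  // log append callback has fired
    }

    void completePendingTxnOffsetCommit(long producerId) {
        Boolean isAppended = appended.get(producerId);
        if (isAppended == null || !isAppended) {
            // The invariant whose violation produces the exception reported above:
            // completion raced ahead of the log append.
            throw new IllegalStateException(
                "Trying to complete a transactional offset commit for producerId "
                    + producerId
                    + " even though the offset commit record itself hasn't been"
                    + " appended to the log.");
        }
        appended.remove(producerId);
    }
}
```

In this model, the bug report corresponds to `completePendingTxnOffsetCommit` being scheduled before `onAppendComplete` has run for the same producerId.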



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk11 #1294

2020-03-26 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-8470: State change logs should not be in TRACE level (#8320)


--
[...truncated 5.98 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord PASSED

org.apa

[jira] [Resolved] (KAFKA-9756) Refactor the main loop to process more than one record of one task at a time

2020-03-26 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-9756.
--
Fix Version/s: 2.6.0
   Resolution: Fixed

> Refactor the main loop to process more than one record of one task at a time
> 
>
> Key: KAFKA-9756
> URL: https://issues.apache.org/jira/browse/KAFKA-9756
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>Priority: Major
> Fix For: 2.6.0
>
>
> Our current main loop is implemented as follows:
> 1. Loop over all tasks that have records to process, processing one record
> at a time from each.
> 2. After processing one record from each task, check whether commit /
> punctuate / poll etc. is needed.
> Because we process one record at a time from a task and then move on to
> the next task, we are effectively spending lots of time on context switches.
> Maybe we can first investigate what happens if each task is hosted by an
> individual thread, and see whether the context switch cost is not already
> worse (which would mean our current implementation is already a baseline).
> If that's true we can consider working on one task at a time, and see if it
> is more efficient.
> For num.Iterations:
> 1. Process one record from each of the tasks the thread owns.
> 2. Check if commit / punctuate / poll etc. is needed.
> But in 1) above we process tasks A,B,C,A,B,C,... and effectively we are
> introducing context switches within the thread, as it needs to load the task
> variables etc. for each record processed.
> What I was thinking is to process tasks as A,A,A,B,B,B,C,C,C,... so that we
> can reduce the context switches.
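The two processing orders discussed in the ticket can be sketched as follows. This is an illustrative Java model (not Kafka Streams code); `interleaved` visits tasks A,B,C,A,B,C,... while `batched` drains up to `iterations` records from each task before moving on, i.e. A,A,A,B,B,B,...:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Queue;

// Illustrative sketch of the two task-processing orders from KAFKA-9756.
// Task state is modeled as a name -> record-queue map; the returned list
// records which task each processed record came from.
class MainLoopSketch {
    // A,B,C,A,B,C,...: one record per task per pass, switching tasks every record.
    static List<String> interleaved(Map<String, Queue<String>> tasks, int iterations) {
        List<String> order = new ArrayList<>();
        for (int i = 0; i < iterations; i++)
            for (Map.Entry<String, Queue<String>> t : tasks.entrySet())
                if (!t.getValue().isEmpty()) {
                    t.getValue().poll();
                    order.add(t.getKey());
                }
        return order;
    }

    // A,A,A,B,B,B,...: up to `iterations` records per task before switching,
    // so per-task state is loaded once per batch instead of once per record.
    static List<String> batched(Map<String, Queue<String>> tasks, int iterations) {
        List<String> order = new ArrayList<>();
        for (Map.Entry<String, Queue<String>> t : tasks.entrySet())
            for (int i = 0; i < iterations && !t.getValue().isEmpty(); i++) {
                t.getValue().poll();
                order.add(t.getKey());
            }
        return order;
    }
}
```

With two tasks A and B holding two records each and two iterations, `interleaved` yields the order A,B,A,B while `batched` yields A,A,B,B, which is the reordering the ticket proposes to cut down on per-record context switching.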



--
This message was sent by Atlassian Jira
(v8.3.4#803005)