Jenkins build is back to normal : kafka-2.4-jdk8 #80

2019-11-20 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk11 #970

2019-11-20 Thread Apache Jenkins Server
See 


Changes:

[rhauch] KAFKA-9051: Prematurely complete source offset read requests for 
stopped


--
[...truncated 2.74 MB...]

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionWithNamedRepartitionTopic STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionWithNamedRepartitionTopic PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionJava STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionJava PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegion STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegion PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaJoin STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaJoin PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaSimple STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaSimple PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaAggregate STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaAggregate PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaProperties STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaProperties PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaTransform STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaTransform PASSED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed should 
create a Consumed with Serdes STARTED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed should 
create a Consumed with Serdes PASSED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed with 
timestampExtractor and resetPolicy should create a Consumed with Serdes, 
timestampExtractor and resetPolicy STARTED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed with 
timestampExtractor and resetPolicy should create a Consumed with Serdes, 
timestampExtractor and resetPolicy PASSED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed with 
timestampExtractor should create a Consumed with Serdes and timestampExtractor 
STARTED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed with 
timestampExtractor should create a Consumed with Serdes and timestampExtractor 
PASSED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed with 
resetPolicy should create a Consumed with Serdes and resetPolicy STARTED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed with 
resetPolicy should create a Consumed with Serdes and resetPolicy PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > filter a KTable should 
filter records satisfying the predicate STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > filter a KTable should 
filter records satisfying the predicate PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > filterNot a KTable should 
filter records not satisfying the predicate STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > filterNot a KTable should 
filter records not satisfying the predicate PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables should join 
correctly records STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables should join 
correctly records PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables with a 
Materialized should join correctly records and state store STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables with a 
Materialized should join correctly records and state store PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > windowed KTable#suppress 
should correctly suppress results using Suppressed.untilTimeLimit STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > windowed KTable#suppress 
should correctly suppress results using Suppressed.untilTimeLimit PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > windowed KTable#suppress 
should correctly suppress results using Suppressed.untilWindowCloses STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > windowed KTable#suppress 
should correctly suppress results using Suppressed.untilWindowCloses PASSED

or

Build failed in Jenkins: kafka-trunk-jdk8 #4056

2019-11-20 Thread Apache Jenkins Server
See 


Changes:

[rhauch] KAFKA-9051: Prematurely complete source offset read requests for 
stopped


--
[...truncated 2.74 MB...]
org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturn

[jira] [Created] (KAFKA-9215) Mirrormaker 2.0 (connect-standalone.sh)

2019-11-20 Thread Jan Arve Sundt (Jira)
Jan Arve Sundt created KAFKA-9215:
-

 Summary: Mirrormaker 2.0 (connect-standalone.sh)
 Key: KAFKA-9215
 URL: https://issues.apache.org/jira/browse/KAFKA-9215
 Project: Kafka
  Issue Type: Wish
Reporter: Jan Arve Sundt


I'm testing replication of all topics from one Kafka cluster to a replica Kafka 
cluster (active/passive), keeping the same topic names and including topic data, 
consumer offsets, and topic configuration settings. I can see the topic data and 
the consumer offsets, but I am not able to see the configuration settings for 
the topics. I also need the topics in the replica cluster to keep the same 
names. Can anyone explain what I am doing wrong?

 

Setup for my test is:

../bin/connect-standalone.sh worker.properties connect-mirror-source.properties 
> worker-connect.log

 

worker.properties:

bootstrap.servers=host:port
key.converter=org.apache.kafka.connect.converters.ByteArrayConverter
value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
offset.storage.file.filename=/tmp/connect.offsets

connect-mirror-source.properties:

name = local-mirror-source
connector.class = org.apache.kafka.connect.mirror.MirrorSourceConnector
topics = test-event
topics.blacklist = [".*\.internal", ".*\.replica", "__consumer_offsets"]
source.cluster.alias = upstream
target.cluster.alias =
source.cluster.bootstrap.servers = host:port
target.cluster.bootstrap.servers =  host:port
sync.topic.acls = false
rename.topics = true
tasks.max = 1
key.converter=org.apache.kafka.connect.converters.ByteArrayConverter
value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
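
Not part of the original report: one way to check whether topic-level 
configuration was actually copied to the target cluster is to describe the 
topic's configs there with the AdminClient. A minimal sketch, assuming the 
default replication policy names the remote topic "upstream.test-event" 
(source.cluster.alias=upstream) and using a placeholder bootstrap address:

import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class CheckReplicatedTopicConfig {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder: bootstrap servers of the *target* cluster
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "target-host:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // With the default replication policy, MM2 prefixes remote topics with the
            // source cluster alias, e.g. "upstream.test-event".
            ConfigResource topic =
                new ConfigResource(ConfigResource.Type.TOPIC, "upstream.test-event");
            Map<ConfigResource, Config> configs =
                admin.describeConfigs(Collections.singleton(topic)).all().get();
            // Print every topic-level config on the target for comparison with the source.
            configs.get(topic).entries().forEach(e ->
                System.out.println(e.name() + " = " + e.value()));
        }
    }
}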

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is back to normal : kafka-2.2-jdk8 #4

2019-11-20 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-2.3-jdk8 #137

2019-11-20 Thread Apache Jenkins Server
See 


Changes:

[rhauch] KAFKA-9051: Prematurely complete source offset read requests for 
stopped


--
[...truncated 2.92 MB...]
kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAuthorizeWithPrefixedResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAuthorizeWithPrefixedResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDeleteAllAclOnWildcardResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDeleteAllAclOnWildcardResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnWildcardResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnWildcardResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventWhenInterBrokerProtocolAtLeastKafkaV2 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventWhenInterBrokerProtocolAtLeastKafkaV2 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnPrefixedResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnPrefixedResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSingleCharacterResourceAcls 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSingleCharacterResourceAcls 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testEmptyAclThrowsException 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testEmptyAclThrowsException PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAllowAccessWithCustomPrincipal STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAllowAccessWithCustomPrincipal PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnWildcardResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnWildcardResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testChangeListenerTiming STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testChangeListenerTiming PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions
 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions
 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfIn

[VOTE] 2.4.0 RC1

2019-11-20 Thread Manikumar
Hello Kafka users, developers and client-developers,

This is the second candidate for release of Apache Kafka 2.4.0.

This release includes many new features, including:
- Allow consumers to fetch from closest replica
- Support for incremental cooperative rebalancing to the consumer rebalance
protocol
- MirrorMaker 2.0 (MM2), a new multi-cluster, cross-datacenter replication
engine
- New Java authorizer Interface
- Support for non-key joining in KTable
- Administrative API for replica reassignment
- Sticky partitioner
- Return topic metadata and configs in CreateTopics response
- Securing Internal connect REST endpoints
- API to delete consumer offsets and expose it via the AdminClient.

Release notes for the 2.4.0 release:
https://home.apache.org/~manikumar/kafka-2.4.0-rc1/RELEASE_NOTES.html

** Please download, test and vote by Tuesday, November 26, 9am PT **

Kafka's KEYS file containing PGP keys we use to sign the release:
https://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
https://home.apache.org/~manikumar/kafka-2.4.0-rc1/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/org/apache/kafka/

* Javadoc:
https://home.apache.org/~manikumar/kafka-2.4.0-rc1/javadoc/

* Tag to be voted upon (off 2.4 branch) is the 2.4.0 tag:
https://github.com/apache/kafka/releases/tag/2.4.0-rc1

* Documentation:
https://kafka.apache.org/24/documentation.html

* Protocol:
https://kafka.apache.org/24/protocol.html

Thanks,
Manikumar


Re: Subject: [VOTE] 2.2.2 RC2

2019-11-20 Thread Randall Hauch
Thanks for reviewing this release candidate!

This vote passes with 9 +1 votes (3 binding) and no 0 or -1 votes.

+1 votes
PMC Members:
* Matthias J. Sax
* Gwen Shapira
* Jason Gustafson

Committers:
* Mickael Maison
* David Arthur
* Bill Bejeck

Community:
* Eric Lalonde
* Satish Duggana
* Jakub Scholz

0 votes
* No votes

-1 votes
* No votes

Vote thread:
https://lists.apache.org/thread.html/94ebd543423e39fb04d8935d901d18e01803e42977440ac1476982eb@%3Cdev.kafka.apache.org%3E

I'll continue with the release process and the release announcement will
follow in the next few days.

Randall Hauch
Committer and AK 2.2.2 Release Manager

On Tue, Nov 19, 2019 at 11:15 PM Jason Gustafson  wrote:

> +1 (binding)
>
> Verified release notes and ran through the 2.12 quickstart.
>
> Thanks Randall!
>
> On Thu, Nov 14, 2019 at 8:52 PM Gwen Shapira  wrote:
>
> > +1 (binding)
> >
> > Thanks Randall. Verified signatures and tests.
> >
> > On Fri, Oct 25, 2019 at 7:10 AM Randall Hauch  wrote:
> > >
> > > Hello all, we identified around three dozen bug fixes, including an
> > update
> > > of a third party dependency, and wanted to release a patch release for
> > the
> > > Apache Kafka 2.2.0 release.
> > >
> > > This is the *second* candidate for release of Apache Kafka 2.2.2. (RC1
> > did
> > > not include a fix for https://issues.apache.org/jira/browse/KAFKA-9053
> ,
> > but
> > > the fix appeared before RC1 was announced so it was easier to just
> create
> > > RC2.)
> > >
> > > Check out the release notes for a complete list of the changes in this
> > > release candidate:
> > > https://home.apache.org/~rhauch/kafka-2.2.2-rc2/RELEASE_NOTES.html
> > >
> > > *** Please download, test and vote by Wednesday, October 30, 9am PT ***
> > >
> > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > https://kafka.apache.org/KEYS
> > >
> > > * Release artifacts to be voted upon (source and binary):
> > > https://home.apache.org/~rhauch/kafka-2.2.2-rc2/
> > >
> > > * Maven artifacts to be voted upon:
> > > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> > >
> > > * Javadoc:
> > > https://home.apache.org/~rhauch/kafka-2.2.2-rc2/javadoc/
> > >
> > > * Tag to be voted upon (off 2.2 branch) is the 2.2.2 tag:
> > > https://github.com/apache/kafka/releases/tag/2.2.2-rc2
> > >
> > > * Documentation:
> > > https://kafka.apache.org/22/documentation.html
> > >
> > > * Protocol:
> > > https://kafka.apache.org/22/protocol.html
> > >
> > > * Successful Jenkins builds for the 2.2 branch:
> > > Unit/integration tests:
> https://builds.apache.org/job/kafka-2.2-jdk8/1/
> > > System tests:
> > > https://jenkins.confluent.io/job/system-test-kafka/job/2.2/216/
> > >
> > > /**
> > >
> > > Thanks,
> > >
> > > Randall Hauch
> >
>


[jira] [Created] (KAFKA-9216) Enforce connect internal topic configuration at startup

2019-11-20 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-9216:


 Summary: Enforce connect internal topic configuration at startup
 Key: KAFKA-9216
 URL: https://issues.apache.org/jira/browse/KAFKA-9216
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Affects Versions: 0.11.0.0
Reporter: Randall Hauch


Users sometimes configure Connect's internal configuration topic with more than 
one partition. Exactly one partition is expected, however, and using more than 
one leads to unexpected behavior that is often not easy to spot.

Here's one example of a log message:

{noformat}
"textPayload": "[2019-11-20 11:12:14,049] INFO [Worker clientId=connect-1, 
groupId=td-connect-server] Current config state offset 284 does not match group 
assignment 274. Forcing rebalance. 
(org.apache.kafka.connect.runtime.distributed.DistributedHerder:942)\n"
{noformat}

Would it be possible to add a check in KafkaConfigBackingStore that prevents 
the worker from starting if the config topic's partition count is != 1?
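
For illustration only (not from the ticket): a minimal sketch of such a startup 
check using the AdminClient. The topic name and bootstrap address are 
placeholders for the worker's config.storage.topic and bootstrap.servers 
settings:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.config.ConfigException;

public class ConnectConfigTopicCheck {
    public static void main(String[] args) throws Exception {
        String configTopic = "connect-configs"; // placeholder for config.storage.topic

        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "host:9092"); // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            TopicDescription description =
                admin.describeTopics(Collections.singleton(configTopic))
                     .all().get().get(configTopic);
            int partitionCount = description.partitions().size();
            if (partitionCount != 1) {
                // Fail fast at startup instead of rebalancing endlessly later on.
                throw new ConfigException("Connect config topic '" + configTopic
                    + "' must have exactly 1 partition, but has " + partitionCount);
            }
        }
    }
}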



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is back to normal : kafka-trunk-jdk11 #971

2019-11-20 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-526: Reduce Producer Metadata Lookups for Large Number of Topics

2019-11-20 Thread Brian Byrne
Hello all,

I've refactored the KIP to remove implementing asynchronous metadata
fetching in the producer during send(). It's now exclusively focused on
reducing the topic metadata fetch payload and proposes adding a new
configuration flag to control topic expiry behavior. Please take a look
when possible.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-526
%3A+Reduce+Producer+Metadata+Lookups+for+Large+Number+of+Topics

Thanks,
Brian

On Fri, Oct 4, 2019 at 10:04 AM Brian Byrne  wrote:

> Lucas, Guozhang,
>
> Thank you for the comments. Good point on METADATA_MAX_AGE_CONFIG - it
> looks like the ProducerMetadata was differentiating between expiry and
> refresh, but it should be unnecessary to do so once the cost of fetching a
> single topic's metadata is reduced.
>
> I've updated the rejected alternatives and removed the config variables.
>
> Brian
>
> On Fri, Oct 4, 2019 at 9:20 AM Guozhang Wang  wrote:
>
>> Hello Brian,
>>
>> Thanks for the KIP.
>>
>> I think using asynchronous metadata update to address 1) metadata update
>> blocking send, but for other issues, currently at producer we do have a
>> configurable `METADATA_MAX_AGE_CONFIG` similar to consumer, by default is
>> 5min. So maybe we do not need to introduce new configs here, but only
>> change the semantics of that config from global expiry (today we just
>> enforce a full metadata update for the whole cluster) to single-topic
>> expiry, and we can also extend its expiry deadline whenever that metadata
>> is successfully used to send a produce request.
>>
>>
>> Guozhang
>>
>>
>>
>> On Thu, Oct 3, 2019 at 6:51 PM Lucas Bradstreet 
>> wrote:
>>
>> > Hi Brian,
>> >
>> > This looks great, and should help reduce blocking and high metadata
>> request
>> > volumes when the producer is sending to large numbers of topics,
>> especially
>> > at low volumes. I think the approach to make metadata fetching
>> asynchronous
>> > and batch metadata requests together will help significantly.
>> >
>> > The only other approach I can think of is to allow users to supply the
>> > producer with the expected topics upfront, allowing the producer to
>> perform
>> > a single initial metadata request before any sends occur. I see no real
>> > advantages to this approach compared to the async method you’ve
>> proposed,
>> > but maybe we could add it to the rejected alternatives section?
>> >
>> > Thanks,
>> >
>> > Lucas
>> >
>> > On Fri, 20 Sep 2019 at 11:46, Brian Byrne  wrote:
>> >
>> > > I've updated the 'Proposed Changes' to include two new producer
>> > > configuration variables: topic.expiry.ms and topic.refresh.ms. Please
>> > take
>> > > a look.
>> > >
>> > > Thanks,
>> > > Brian
>> > >
>> > > On Tue, Sep 17, 2019 at 12:59 PM Brian Byrne 
>> > wrote:
>> > >
>> > > > Dev team,
>> > > >
>> > > > Requesting discussion for improvement to the producer when dealing
>> > with a
>> > > > large number of topics.
>> > > >
>> > > > KIP:
>> > > >
>> > >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-526%3A+Reduce+Producer+Metadata+Lookups+for+Large+Number+of+Topics
>> > > >
>> > > > JIRA: https://issues.apache.org/jira/browse/KAFKA-8904
>> > > >
>> > > > Thoughts and feedback would be appreciated.
>> > > >
>> > > > Thanks,
>> > > > Brian
>> > > >
>> > >
>> >
>>
>>
>> --
>> -- Guozhang
>>
>
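
For reference, a minimal producer-configuration sketch of the 
`METADATA_MAX_AGE_CONFIG` (`metadata.max.age.ms`) setting discussed above; the 
bootstrap address is a placeholder and the value shown is just the documented 
default:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerMetadataAgeSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "host:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // metadata.max.age.ms bounds how long cached topic metadata is used before a
        // refresh is forced; 300000 (5 minutes) is the default.
        props.put(ProducerConfig.METADATA_MAX_AGE_CONFIG, "300000");

        // The first send() to a topic the producer has never seen still blocks on a
        // metadata fetch, which is the behavior KIP-526 aims to improve for large
        // numbers of topics.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.flush();
        }
    }
}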


Re: [DISCUSS] KIP-533: Add default api timeout to AdminClient

2019-11-20 Thread Jason Gustafson
This seems not very controversial, so I will go ahead and move to a vote in
the next couple days.

Thanks,
Jason

On Wed, Nov 13, 2019 at 10:23 AM Jason Gustafson  wrote:

> Hey Colin,
>
> Thanks for the suggestions. I've clarified the use of the API overrides
> and the request timeout.
>
> I think exponential backoff is a great idea. Can we do a separate KIP for
> this? I think we can add it to the consumer and producer at the same time.
>
> -Jason
>
> On Mon, Oct 21, 2019 at 2:01 PM Ismael Juma  wrote:
>
>> Thanks for the KIP, this makes sense to me. It will be good to align the
>> AdminClient better with the consumer and producer when it comes to
>> timeouts.
>>
>> Ismael
>>
>> On Wed, Oct 9, 2019 at 12:06 PM Jason Gustafson 
>> wrote:
>>
>> > Hi All,
>> >
>> > I wrote a short KIP to address a longstanding issue with timeout
>> behavior
>> > in the AdminClient:
>> >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-533%3A+Add+default+api+timeout+to+AdminClient
>> > .
>> >
>> > Take a look and let me know what you think.
>> >
>> > Thanks,
>> > Jason
>> >
>>
>


Build failed in Jenkins: kafka-trunk-jdk11 #972

2019-11-20 Thread Apache Jenkins Server
See 


Changes:

[github] Revert "KAFKA-9165: Fix jersey warnings in Trogdor (#7669)" (#7721)


--
[...truncated 2.75 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldCloseProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldCloseProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturn

Jenkins build is back to normal : kafka-trunk-jdk8 #4057

2019-11-20 Thread Apache Jenkins Server
See 




RE: [DISCUSS] KIP-280: Enhanced log compaction

2019-11-20 Thread Senthilnathan Muthusamy


Hi Guozhang & Jun,

Thanks for the detailed explanation of the scenarios.

#51 => Thanks for the details and the example, Guozhang. Won't the followers 
also be syncing the LEO with the leader? If so, always keeping the last record 
(i.e., not compacting it under the non-offset strategies) would work, and it is 
needed only if the new strategy would otherwise end up removing the LEO record, 
right? Also, I wasn't able to retrieve Jason's mail about creating an empty 
message batch; can you please forward it if you have it? I'm wondering how that 
can solve this particular issue unless we create a record with a random key 
that won't conflict with the producer/consumer keys for that topic/partition.

#53 => I see that this can happen when a low produce rate leaves the segment 
ineligible for compaction for an unbounded duration, during which 
"delete.retention.ms" triggers and removes the tombstone record. If that's the 
case (please correct me if I am missing any other scenarios), then we can 
suggest that Kafka users set "segment.ms" and "max.compaction.lag.ms" (since 
compaction won't happen on the active segment) to be smaller than 
"delete.retention.ms", and that should address this scenario, right?

Thanks,
Senthil

-Original Message-
From: Jun Rao  
Sent: Wednesday, November 13, 2019 9:31 AM
To: dev 
Subject: Re: [DISCUSS] KIP-280: Enhanced log compaction

Hi, Senthil,

51. The difference is that with the offset compaction strategy, the message 
corresponding to the last offset is always the winning record and will never be 
removed. But with the new strategies, it's now possible that the message 
corresponding to the last offset is a losing record and needs to be removed.

53. Similarly, with the offset compaction strategy, if we see a non-tombstone 
record after a tombstone record, the non-tombstone record is always the winning 
one. However, with the new strategies, that non-tombstone record with a larger 
offset could be a losing record. The question is then how do we retain the 
tombstone long enough so that we could still recognize that the non-tombstone 
record should be ignored.

Thanks,

Jun

-Original Message-
From: Guozhang Wang  
Sent: Tuesday, November 12, 2019 6:09 PM
To: dev 
Subject: Re: [DISCUSS] KIP-280: Enhanced log compaction

Hello Senthil,

Let me try to re-iterate on Jun's comments with some context here:

51: today with the offset-only compaction strategy, the last record of the log 
(we call it the log-end-record, whose offset is log-end-offset) would always be 
preserved and not compacted. This is kinda important for replication since 
followers reason about the log-end-offset on the leader.
Consider this case: three replicas of a partition, leader 1 and follower 2 and 
3.

Leader 1 has records a, b, c, d and d is the current last record of the 
partition, the current log-end-offset is 3 (assuming record a's offset is 0).
Follower 2 has replicated a, b, c, d. Log-end-offset is 3 Follower 3 has 
replicated a, b, c but not yet replicated d. Log-end-offset is 2.

NOTE that compaction triggering is independent across brokers: it is possible 
that leader 1 triggers compaction and deletes record d while the other followers 
have not triggered compaction yet. At this moment the leader's log becomes a, 
b, c. Now let's say follower 3 fetches from the leader after the compaction; it 
will no longer see record d.

Now suppose there's a leader migration and follower 3 becomes the new leader. 
It would accept new appends (say record e), and record e would be appended at 
*offset 3* on new leader 3's log, while on follower 2 the record at offset 3 is 
still d. Later, let's say follower 2 also triggers compaction and then fetches 
the new record e from new leader 3:

Follower 2's log would be *a(0), b(1), c(2), e(4)*, where the numbers in 
parentheses are offsets, while leader 3's log would be *a(0), b(1), c(2), 
e(3)*. Now the two logs diverge in offsets, although their log entries are the 
same.

-

One way to resolve this is to simply never remove the last message during 
compaction. Another way (suggested by Jason in the old VOTE thread) is to 
create an empty message batch to "take up" that offset slot.


53: Again, here's some context on when we can delete a tombstone (null): 
during compaction, if we see that the latest record for a certain key is a 
tombstone, we can remove all older records, BUT the tombstone itself cannot be 
removed immediately, since the older records may already have been fetched by 
some consumers while the tombstone has not been fetched by them yet. Also, the 
tombstone may not have been replicated to all other followers yet while the 
older records have already been replicated. Hence we have a config on the 
broker to "delay" the removal of the tombstone itself. You can find this 
config, named "delete.retention.ms", in 
https://kafka.apache.org/documentation/#brokerconfigs
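
Purely illustrative and not part of the discussion: the topic-level settings 
mentioned above can be adjusted per topic via the AdminClient. A sketch 
assuming a compacted topic named "my-compacted-topic", a placeholder bootstrap 
address, and example values only:

import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class TombstoneRetentionSettings {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "host:9092"); // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic =
                new ConfigResource(ConfigResource.Type.TOPIC, "my-compacted-topic");
            // Keep segment.ms and max.compaction.lag.ms well below delete.retention.ms so
            // tombstones can be compacted (and recognized) before their retention expires.
            Collection<AlterConfigOp> ops = Arrays.asList(
                new AlterConfigOp(new ConfigEntry("segment.ms", "600000"), AlterConfigOp.OpType.SET),
                new AlterConfigOp(new ConfigEntry("max.compaction.lag.ms", "3600000"), AlterConfigOp.OpType.SET),
                new AlterConfigOp(new ConfigEntry("delete.retention.ms", "86400000"), AlterConfigOp.OpType.SET));
            admin.incrementalAlterConfigs(Collections.singletonMap(topic, ops)).all().get();
        }
    }
}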

Re: [DISCUSS] KIP-526: Reduce Producer Metadata Lookups for Large Number of Topics

2019-11-20 Thread deng ziming
I think it's ok, and you can add another issue about `asynchronous
metadata` if `topic expiry` is not enough.


On Thu, Nov 21, 2019 at 6:20 AM Brian Byrne  wrote:

> Hello all,
>
> I've refactored the KIP to remove implementing asynchronous metadata
> fetching in the producer during send(). It's now exclusively focused on
> reducing the topic metadata fetch payload and proposes adding a new
> configuration flag to control topic expiry behavior. Please take a look
> when possible.
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-526
> %3A+Reduce+Producer+Metadata+Lookups+for+Large+Number+of+Topics
>
> Thanks,
> Brian
>
> On Fri, Oct 4, 2019 at 10:04 AM Brian Byrne  wrote:
>
> > Lucas, Guozhang,
> >
> > Thank you for the comments. Good point on METADATA_MAX_AGE_CONFIG - it
> > looks like the ProducerMetadata was differentiating between expiry and
> > refresh, but it should be unnecessary to do so once the cost of fetching
> a
> > single topic's metadata is reduced.
> >
> > I've updated the rejected alternatives and removed the config variables.
> >
> > Brian
> >
> > On Fri, Oct 4, 2019 at 9:20 AM Guozhang Wang  wrote:
> >
> >> Hello Brian,
> >>
> >> Thanks for the KIP.
> >>
> >> I think using asynchronous metadata update to address 1) metadata update
> >> blocking send, but for other issues, currently at producer we do have a
> >> configurable `METADATA_MAX_AGE_CONFIG` similar to consumer, by default
> is
> >> 5min. So maybe we do not need to introduce new configs here, but only
> >> change the semantics of that config from global expiry (today we just
> >> enforce a full metadata update for the whole cluster) to single-topic
> >> expiry, and we can also extend its expiry deadline whenever that
> metadata
> >> is successfully used to send a produce request.
> >>
> >>
> >> Guozhang
> >>
> >>
> >>
> >> On Thu, Oct 3, 2019 at 6:51 PM Lucas Bradstreet 
> >> wrote:
> >>
> >> > Hi Brian,
> >> >
> >> > This looks great, and should help reduce blocking and high metadata
> >> request
> >> > volumes when the producer is sending to large numbers of topics,
> >> especially
> >> > at low volumes. I think the approach to make metadata fetching
> >> asynchronous
> >> > and batch metadata requests together will help significantly.
> >> >
> >> > The only other approach I can think of is to allow users to supply the
> >> > producer with the expected topics upfront, allowing the producer to
> >> perform
> >> > a single initial metadata request before any sends occur. I see no
> real
> >> > advantages to this approach compared to the async method you’ve
> >> proposed,
> >> > but maybe we could add it to the rejected alternatives section?
> >> >
> >> > Thanks,
> >> >
> >> > Lucas
> >> >
> >> > On Fri, 20 Sep 2019 at 11:46, Brian Byrne 
> wrote:
> >> >
> >> > > I've updated the 'Proposed Changes' to include two new producer
> >> > > configuration variables: topic.expiry.ms and topic.refresh.ms.
> Please
> >> > take
> >> > > a look.
> >> > >
> >> > > Thanks,
> >> > > Brian
> >> > >
> >> > > On Tue, Sep 17, 2019 at 12:59 PM Brian Byrne 
> >> > wrote:
> >> > >
> >> > > > Dev team,
> >> > > >
> >> > > > Requesting discussion for improvement to the producer when dealing
> >> > with a
> >> > > > large number of topics.
> >> > > >
> >> > > > KIP:
> >> > > >
> >> > >
> >> >
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-526%3A+Reduce+Producer+Metadata+Lookups+for+Large+Number+of+Topics
> >> > > >
> >> > > > JIRA: https://issues.apache.org/jira/browse/KAFKA-8904
> >> > > >
> >> > > > Thoughts and feedback would be appreciated.
> >> > > >
> >> > > > Thanks,
> >> > > > Brian
> >> > > >
> >> > >
> >> >
> >>
> >>
> >> --
> >> -- Guozhang
> >>
> >
>


Build failed in Jenkins: kafka-trunk-jdk8 #4058

2019-11-20 Thread Apache Jenkins Server
See 


Changes:

[github] Revert "KAFKA-9165: Fix jersey warnings in Trogdor (#7669)" (#7721)

[github] KAFKA-8986: Allow null as a valid default for tagged fields. (#7585)


--
[...truncated 2.74 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

or

Re: [DISCUSS] KIP-547: Extend ConsumerInterceptor to allow modification of Consumer Commits

2019-11-20 Thread deng ziming
Hi, Eric
what's the use of this method? I reviewed the code and couldn't find much
of the metadata's usage, and I find its usage trivial.

On Tue, Nov 19, 2019 at 3:19 AM Eric Azama  wrote:

> Hi all,
>
> I'd like to open discussion on KIP-547: Extend ConsumerInterceptor to allow
> modification of Consumer Commits
>
> This KIP hopes to enable better access to the Metadata included while
> committing offsets.
>
> LINK:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-547%3A+Extend+ConsumerInterceptor+to+allow+modification+of+Consumer+Commits
>
>
> Thanks,
> Eric A.
>


Re: [DISCUSS] KIP-547: Extend ConsumerInterceptor to allow modification of Consumer Commits

2019-11-20 Thread Eric Azama
Hi Deng,

It's similar to Record Headers in that the metadata isn't used within Kafka
itself. The intended purpose of the Metadata is to store additional
information such as what application instance made the commit. Most of the
value is going to be in giving organizations the ability to do things like
audit the commit log.

About half of the commit methods have no way of setting the Metadata so
it's currently difficult to make use of.
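
For illustration only, a minimal sketch of the existing map-based commit that 
does accept metadata, using the public consumer API; the "instanceId" value is 
a hypothetical example of the kind of audit information described above:

import java.util.Collections;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class CommitMetadataSketch {
    // Attach free-form metadata (here, a hypothetical instance id) to a commit.
    static void commitWithMetadata(Consumer<String, String> consumer,
                                   TopicPartition partition, long nextOffset, String instanceId) {
        Map<TopicPartition, OffsetAndMetadata> offsets = Collections.singletonMap(
            partition, new OffsetAndMetadata(nextOffset, "committed-by=" + instanceId));
        consumer.commitSync(offsets);
    }
}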

Thanks,
Eric A.

On Wed, Nov 20, 2019 at 7:58 PM deng ziming 
wrote:

> Hi, Eric
> what's the use of this method? I reviewed the code and couldn't find much
> of the metadata's usage, and I find its usage trivial.
>
> On Tue, Nov 19, 2019 at 3:19 AM Eric Azama  wrote:
>
> > Hi all,
> >
> > I'd like to open discussion on KIP-547: Extend ConsumerInterceptor to
> allow
> > modification of Consumer Commits
> >
> > This KIP hopes to enable better access to the Metadata included while
> > committing offsets.
> >
> > LINK:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-547%3A+Extend+ConsumerInterceptor+to+allow+modification+of+Consumer+Commits
> >
> >
> > Thanks,
> > Eric A.
> >
>


Jenkins build is back to normal : kafka-trunk-jdk11 #973

2019-11-20 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-9217) Partial partition's log-end-offset is zero

2019-11-20 Thread lisen (Jira)
lisen created KAFKA-9217:


 Summary: Partial partition's log-end-offset is zero
 Key: KAFKA-9217
 URL: https://issues.apache.org/jira/browse/KAFKA-9217
 Project: Kafka
  Issue Type: Bug
  Components: log
Affects Versions: 0.10.0.1
 Environment: kafka0.10.0.1
Reporter: lisen
 Fix For: 0.11.0.1


The amount of data my consumers consumed is 400222, but viewing the consumption 
results with the command shows only 279789. The command output is as follows:

!Snipaste_2019-11-21_15-00-09.png!

The data for partition 5 is:

!Snipaste_2019-11-21_14-53-06.png!

I want to know if this is a Kafka bug. Thanks.
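
Not from the report: one way to cross-check the per-partition log-end offsets 
shown by the command is to query them directly with a consumer. A minimal 
sketch with placeholder bootstrap address and topic name:

import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class EndOffsetCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "host:9092"); // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());

        String topic = "my-topic"; // placeholder
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            List<TopicPartition> partitions = consumer.partitionsFor(topic).stream()
                .map(info -> new TopicPartition(topic, info.partition()))
                .collect(Collectors.toList());
            // endOffsets() reports each partition's log-end offset as seen by the broker.
            Map<TopicPartition, Long> endOffsets = consumer.endOffsets(partitions);
            endOffsets.forEach((tp, offset) -> System.out.println(tp + " -> " + offset));
        }
    }
}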



--
This message was sent by Atlassian Jira
(v8.3.4#803005)