[jira] [Resolved] (KAFKA-9266) KafkaConsumer manual assignment does not reset group assignment

2019-12-08 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-9266.
--
Fix Version/s: 2.3.1
   Resolution: Fixed

> KafkaConsumer manual assignment does not reset group assignment
> ---
>
> Key: KAFKA-9266
> URL: https://issues.apache.org/jira/browse/KAFKA-9266
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.3.0
>Reporter: G G
>Priority: Major
> Fix For: 2.3.1
>
>
> When using the manual assignment API, SubscriptionState's groupSubscription
> member still remembers topics to which the consumer is no longer subscribed.
> See the following code which shows the unexpected behavior:
> {code:java}
> TopicPartition tp1 = new TopicPartition("a", 0);
> TopicPartition tp2 = new TopicPartition("b", 0);
> LogContext logContext = new LogContext();
> SubscriptionState state = new SubscriptionState(logContext, 
> OffsetResetStrategy.NONE);
> state.assignFromUser(ImmutableSet.of(tp1, tp2));
> state.unsubscribe();
> state.assignFromUser(ImmutableSet.of(tp1));
> assertEquals(ImmutableSet.of("a"), state.groupSubscription()); // Succeeds
> 
> state.assignFromUser(ImmutableSet.of(tp1, tp2));
> state.assignFromUser(ImmutableSet.of(tp1));
> assertEquals(ImmutableSet.of("a"), state.groupSubscription()); // Fails: 
> Expected [a] but was [a, b]
> {code}
> The problem seems to be that within SubscriptionState.changeSubscription() 
> the groupSubscription only grows and is never trimmed if the assignment is 
> manual:
> {code}
> private boolean changeSubscription(Set<String> topicsToSubscribe) {
> ...
> groupSubscription = new HashSet<>(groupSubscription);
> groupSubscription.addAll(topicsToSubscribe);
> ...
> }
> {code}
> This behavior in turn leads to METADATA requests from the client that
> include topics which are actually no longer assigned:
> {code:java}
> KafkaConsumer consumer = ...;
> consumer.assign(ImmutableList.of(topicPartition1, topicPartition2));
> consumer.poll(Duration.ofMillis(100)); // Causes a MetadataRequest to be sent
> // to the broker for topic1 and topic2
> consumer.assign(ImmutableList.of(topicPartition1));
> consumer.poll(Duration.ofMillis(100)); // AGAIN causes a MetadataRequest for
> // topic1 and topic2 instead of only topic1
> {code}
> And this in turn causes the deletion of the topic behind topicPartition2 to
> fail. The workaround is to call consumer.unsubscribe() before the second
> consumer.assign().
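
For reference, a minimal sketch of that workaround; the bootstrap address and
deserializers below are placeholders (the topic names come from the snippet in
the report), so this is an illustration, not part of the original report:

{code:java}
import java.time.Duration;
import java.util.Arrays;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class AssignWorkaroundDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(Arrays.asList(new TopicPartition("a", 0), new TopicPartition("b", 0)));
            consumer.poll(Duration.ofMillis(100)); // metadata request covers topics a and b

            consumer.unsubscribe(); // clears the stale group subscription state
            consumer.assign(Collections.singletonList(new TopicPartition("a", 0)));
            consumer.poll(Duration.ofMillis(100)); // metadata request now covers topic a only
        }
    }
}
{code}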



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk8 #4091

2019-12-08 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-9212; Ensure LeaderAndIsr state updated in controller context


--
[...truncated 5.57 MB...]


Jenkins build is back to normal : kafka-trunk-jdk11 #1010

2019-12-08 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-526: Reduce Producer Metadata Lookups for Large Number of Topics

2019-12-08 Thread Guozhang Wang
Hello Brian,

Thanks for the updated PR and sorry for the late reply. I reviewed the page
again and here are some more comments:

Minor:

1. The addition of *metadata.expiry.ms* should be included in the public
interfaces section. Also, its semantics need more clarification (previously
it was hard-coded internally, so we did not need to explain it publicly, but
now that the value is configurable we do).
2. There are a couple of hard-coded parameters like 25 and 0.5 in the
proposal; maybe we should explain why these magic values make sense in
common scenarios.
3. In the Urgent set condition, do you actually mean "with no cached
metadata AND there is data already buffered for the topic"?

Meta:

One concern I have is whether we may introduce a regression, especially
during producer startup: since we only request up to 25 topics per metadata
request, send data may be buffered longer than it is today while metadata is
unavailable. I understand this is an acknowledged trade-off in our design,
but any regression that may surface to users needs to be considered very
carefully. I'm wondering, e.g., if we can tweak the algorithm for the Urgent
set to give topics with no cached metadata higher priority than topics whose
metadata has passed max.age but which have not yet been asked to send. More
specifically:

Urgent: topics that have been requested for sending but have no cached
metadata, and topics whose send requests failed with e.g. NOT_LEADER.
Non-urgent: topics that are not in Urgent but whose metadata has expired
max.age.

Then when sending a metadata request, we always include ALL topics in Urgent
(i.e. ignore the size limit), and only when they do not exceed the size limit
do we fill in more topics from Non-urgent, up to the limit.
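
To make that concrete, a rough sketch of the selection logic (the class and
method names and the 25-topic limit are illustrative, not from the KIP):

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the Urgent/Non-urgent selection described above.
class MetadataTopicSelector {
    static List<String> topicsForMetadataRequest(Set<String> urgent,
                                                 Set<String> nonUrgent,
                                                 int maxTopics) { // e.g. 25
        // Urgent topics are always included, even past the size limit.
        List<String> request = new ArrayList<>(urgent);
        // Remaining slots, if any, are filled from the max.age-expired set.
        for (String topic : nonUrgent) {
            if (request.size() >= maxTopics) {
                break;
            }
            request.add(topic);
        }
        return request;
    }
}
{code}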


Guozhang



On Wed, Nov 20, 2019 at 7:00 PM deng ziming 
wrote:

> I think it's ok, and you can add another issue about `asynchronous
> metadata` if `topic expiry` is not enough.
>
>
> On Thu, Nov 21, 2019 at 6:20 AM Brian Byrne  wrote:
>
> > Hello all,
> >
> > I've refactored the KIP to remove implementing asynchronous metadata
> > fetching in the producer during send(). It's now exclusively focused on
> > reducing the topic metadata fetch payload and proposes adding a new
> > configuration flag to control topic expiry behavior. Please take a look
> > when possible.
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-526%3A+Reduce+Producer+Metadata+Lookups+for+Large+Number+of+Topics
> >
> > Thanks,
> > Brian
> >
> > On Fri, Oct 4, 2019 at 10:04 AM Brian Byrne  wrote:
> >
> > > Lucas, Guozhang,
> > >
> > > Thank you for the comments. Good point on METADATA_MAX_AGE_CONFIG - it
> > > looks like the ProducerMetadata was differentiating between expiry and
> > > refresh, but it should be unnecessary to do so once the cost of
> fetching
> > a
> > > single topic's metadata is reduced.
> > >
> > > I've updated the rejected alternatives and removed the config
> variables.
> > >
> > > Brian
> > >
> > > On Fri, Oct 4, 2019 at 9:20 AM Guozhang Wang 
> wrote:
> > >
> > >> Hello Brian,
> > >>
> > >> Thanks for the KIP.
> > >>
> > >> I think asynchronous metadata updates address 1) metadata updates
> > >> blocking send; but for the other issues, the producer currently has a
> > >> configurable `METADATA_MAX_AGE_CONFIG` similar to the consumer's, 5min
> > >> by default. So maybe we do not need to introduce new configs here, but
> > >> only change the semantics of that config from global expiry (today we
> > >> just enforce a full metadata update for the whole cluster) to
> > >> single-topic expiry, and we can also extend a topic's expiry deadline
> > >> whenever its metadata is successfully used to send a produce request.
> > >>
> > >>
> > >> Guozhang
> > >>
> > >>
> > >>
> > >> On Thu, Oct 3, 2019 at 6:51 PM Lucas Bradstreet 
> > >> wrote:
> > >>
> > >> > Hi Brian,
> > >> >
> > >> > This looks great, and should help reduce blocking and high metadata
> > >> request
> > >> > volumes when the producer is sending to large numbers of topics,
> > >> especially
> > >> > at low volumes. I think the approach to make metadata fetching
> > >> asynchronous
> > >> > and batch metadata requests together will help significantly.
> > >> >
> > >> > The only other approach I can think of is to allow users to supply
> the
> > >> > producer with the expected topics upfront, allowing the producer to
> > >> perform
> > >> > a single initial metadata request before any sends occur. I see no
> > real
> > >> > advantages to this approach compared to the async method you’ve
> > >> proposed,
> > >> > but maybe we could add it to the rejected alternatives section?
> > >> >
> > >> > Thanks,
> > >> >
> > >> > Lucas
> > >> >
> > >> > On Fri, 20 Sep 2019 at 11:46, Brian Byrne 
> > wrote:
> > >> >
> > >> > > I've updated the 'Proposed Changes' to include two new producer
> > >> > > configuration variables: topic.expiry.ms and topic.refresh.ms. Please
> > >> > > take

[jira] [Resolved] (KAFKA-9208) Flaky Test SslAdminClientIntegrationTest.testCreatePartitions

2019-12-08 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-9208.
--
  Assignee: huxihx
Resolution: Fixed

This is resolved by the PR targeted for KAFKA-9069. Thanks to [~huxi_2b]

> Flaky Test SslAdminClientIntegrationTest.testCreatePartitions
> -
>
> Key: KAFKA-9208
> URL: https://issues.apache.org/jira/browse/KAFKA-9208
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 2.4.0
>Reporter: Sophie Blee-Goldman
>Assignee: huxihx
>Priority: Major
>  Labels: flaky-test
>
> Java 8 build failed on 2.4-targeted PR
> h3. Stacktrace
> java.lang.AssertionError: validateOnly expected:<3> but was:<1> at 
> org.junit.Assert.fail(Assert.java:89) at 
> org.junit.Assert.failNotEquals(Assert.java:835) at 
> org.junit.Assert.assertEquals(Assert.java:647) at 
> kafka.api.AdminClientIntegrationTest$$anonfun$testCreatePartitions$6.apply(AdminClientIntegrationTest.scala:625)
>  at 
> kafka.api.AdminClientIntegrationTest$$anonfun$testCreatePartitions$6.apply(AdminClientIntegrationTest.scala:599)
>  at scala.collection.immutable.List.foreach(List.scala:392) at 
> kafka.api.AdminClientIntegrationTest.testCreatePartitions(AdminClientIntegrationTest.scala:599)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:288)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:282)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
> java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-280: Enhanced log compaction

2019-12-08 Thread Guozhang Wang
Thanks for the updated KIP; I'm recasting my +1 vote on it.

Thanks for driving the KIP discussion, and please feel free to ping the
community when the PR is ready for review! :) One minor recommendation is
to break it into smaller PRs, which helps with faster reviews and merges.


Guozhang

On Tue, Nov 26, 2019 at 10:24 PM Senthilnathan Muthusamy
 wrote:

> Jun,
>
> If the updated KIP looks good, can you please vote for it.
>
> Thanks,
> Senthil
>
> -Original Message-
> From: Jun Rao 
> Sent: Thursday, November 7, 2019 4:33 PM
> To: dev 
> Subject: Re: [VOTE] KIP-280: Enhanced log compaction
>
> Hi, Senthil,
>
> Thanks for the KIP. Added a few more comments on the discussion thread.
>
> Jun
>
> On Wed, Nov 6, 2019 at 3:38 AM Senthilnathan Muthusamy <
> senth...@microsoft.com.invalid> wrote:
>
> > Thanks Matthias!
> >
> > Received 2 binding +1s... looking for one more binding +1!
> >
> > Regards,
> > Senthil
> >
> > -Original Message-
> > From: Matthias J. Sax 
> > Sent: Wednesday, November 6, 2019 12:10 AM
> > To: dev@kafka.apache.org
> > Subject: Re: [VOTE] KIP-280: Enhanced log compaction
> >
> > +1 (binding)
> >
> > On 11/5/19 11:44 AM, Senthilnathan Muthusamy wrote:
> > > Thanks Guozhang, and I have made a note in the JIRA item to update
> > > the wiki.
> > >
> > > Till now got 1 binding +1... waiting for 2 more binding +1s... thnx!
> > >
> > > Regards,
> > > Senthil
> > >
> > > -Original Message-
> > > From: Guozhang Wang 
> > > Sent: Monday, November 4, 2019 11:01 AM
> > > To: dev 
> > > Subject: Re: [VOTE] KIP-280: Enhanced log compaction
> > >
> > > I only have one minor comment on the DISCUSS thread, otherwise I'm
> > > +1
> > (binding).
> > >
> > > On Mon, Nov 4, 2019 at 9:53 AM Senthilnathan Muthusamy <
> > senth...@microsoft.com.invalid> wrote:
> > >
> > >> Hi all,
> > >>
> > >> I would like to start the vote on the updated KIP-280: Enhanced log
> > >> compaction. Thanks to Guozhang, Matthias & Tom for the valuable
> > >> feedback on the discussion thread...
> > >>
> > >> KIP:
> > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-280%3A+Enhanced+log+compaction
> > >>
> > >> JIRA:
> > >> https://issues.apache.org/jira/browse/KAFKA-7061
> > >>
> > >> Thanks,
> > >> Senthil
> > >>
> > >
> > >
> > > --
> > > -- Guozhang
> > >
> >
> >
>


-- 
-- Guozhang


Re: Standby Tasks stays in “created” hash map in AssignedTasks

2019-12-08 Thread Guozhang Wang
Hello Giridhar,

I'm not totally sure, but what you've described seems to be
https://issues.apache.org/jira/browse/KAFKA-8187 (I do remember some
similar bugs resulting in incorrect state directory cleanup in old
versions). Could you try out a newer version and see if this is fixed?


Guozhang

On Thu, Nov 14, 2019 at 9:48 PM Giridhar Addepalli 
wrote:

> We are using Kafka Streams version 1.1.0.
>
> We made some changes to the Kafka Streams code. We are observing the
> following sequence of events in our production environment, and we want to
> understand whether this sequence is also possible in vanilla 1.1.0.
>
> time T0
>
> StreamThread-1 : got assigned 0_1, 0_2 standby tasks
> StreamThread-2 : got assigned 0_3 standby task
>
> time T1 -
>
> Now let us say there is a consumer group rebalance.
>
> And task 0_1 got assigned to StreamThread-2 (i.e., the 0_1 standby task
> moved from StreamThread-1 to StreamThread-2).
>
> time T2 --
>
> StreamThread-2 sees that the new standby task, 0_1, is assigned to it. It
> tries to initializeStateStores for 0_1, but gets a *LockException* because
> the *owningThread* for the lock is still StreamThread-1.
>
> But the LockException is swallowed in the *initializeNewTasks* function of
> *AssignedTasks.java*,
>
> so 0_1 remains in the *created* map inside *AssignedTasks.java*.
>
> time T3 --
>
> StreamThread-1 realizes that 0_1 is no longer assigned to it and closes the
> suspended task.
> As part of closing the suspended task, the entry for 0_1 is deleted from the
> *locks* map in the *unlock* function of StateDirectory.java.
>
> time T4 --
>
> The *CleanupThread* came along after *cleanupDelayMs* and decided the 0_1
> directory on the local file system was obsolete, and deleted it!
> Since the task's local directory is deleted, and 0_1 is still in the
> *created* map, the changelog topic-partitions won't be read for the 0_1
> standby task until the next rebalance!
>
>
> Please let us know if this is a valid sequence; if not, what guards prevent
> it?
>
> We also see that in https://issues.apache.org/jira/browse/KAFKA-6122 the
> retries around locks were removed. Please let us know why the retry
> mechanism was removed.
>
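
For context, a toy model of the swallowed-exception behavior described in the
timeline above; the names echo AssignedTasks, but this is an illustration, not
the actual 1.1.0 source:

{code:java}
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Toy model: when initialization throws, the exception is swallowed and the
// task never leaves the `created` map, matching the T2 step above.
public class CreatedMapDemo {
    static class LockException extends RuntimeException {}

    static final Map<String, Runnable> created = new HashMap<>();

    static void initializeNewTasks() {
        for (Iterator<Map.Entry<String, Runnable>> it = created.entrySet().iterator(); it.hasNext(); ) {
            Map.Entry<String, Runnable> entry = it.next();
            try {
                entry.getValue().run(); // stands in for initializeStateStores()
                it.remove();            // only successfully initialized tasks are removed
            } catch (LockException e) {
                // swallowed: the task stays in `created` until a later attempt
            }
        }
    }

    public static void main(String[] args) {
        created.put("0_1", () -> { throw new LockException(); }); // lock held by another thread
        initializeNewTasks();
        System.out.println(created.keySet()); // prints [0_1] -- the task is stuck
    }
}
{code}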


-- 
-- Guozhang


[jira] [Resolved] (KAFKA-8953) Consider renaming `UsePreviousTimeOnInvalidTimestamp` timestamp extractor

2019-12-08 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-8953.

Fix Version/s: 2.5.0
   Resolution: Fixed

> Consider renaming `UsePreviousTimeOnInvalidTimestamp` timestamp extractor
> -
>
> Key: KAFKA-8953
> URL: https://issues.apache.org/jira/browse/KAFKA-8953
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Rabi Kumar K C
>Priority: Trivial
>  Labels: beginner, kip, newbie
> Fix For: 2.5.0
>
>
> Kafka Streams ships a couple of different timestamp extractors, one named
> `UsePreviousTimeOnInvalidTimestamp`.
> Given the latest improvements with regard to time tracking, it seems
> appropriate to rename this class to `UsePartitionTimeOnInvalidTimestamp`, as
> we now have a fixed definition of partition time and also pass partition
> time into the `#extract(...)` method, instead of some ill-defined
> "previous timestamp".
> KIP-530: 
> [https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=130028807]
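
For illustration, a sketch of a custom extractor with the fallback semantics
described above; the class name is made up, while the two-argument extract
signature is the public TimestampExtractor interface:

{code:java}
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;

// Sketch: fall back to the partition time whenever the record's own
// timestamp is invalid. Illustrative only, not the shipped extractor.
public class PartitionTimeFallbackExtractor implements TimestampExtractor {
    @Override
    public long extract(final ConsumerRecord<Object, Object> record, final long partitionTime) {
        final long timestamp = record.timestamp();
        // Negative timestamps are considered invalid.
        return timestamp >= 0 ? timestamp : partitionTime;
    }
}
{code}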



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk8 #4092

2019-12-08 Thread Apache Jenkins Server
See 


Changes:

[matthias] KAFKA-8953: Rename UsePreviousTimeOnInvalidTimestamp to


--
[...truncated 2.76 MB...]
