[jira] [Resolved] (KAFKA-7210) Add system test for log compaction

2018-08-20 Thread Ismael Juma (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-7210.

Resolution: Fixed

> Add system test for log compaction
> --
>
> Key: KAFKA-7210
> URL: https://issues.apache.org/jira/browse/KAFKA-7210
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Manikumar
>Assignee: Manikumar
>Priority: Major
> Fix For: 2.1.0
>
>
> Currently we have the TestLogCleaning tool for stress testing log compaction. 
> This JIRA is to integrate the tool into the system tests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-349 Priorities for Source Topics

2018-08-20 Thread Jan Filipiak



On 20.08.2018 00:19, Matthias J. Sax wrote:

@Nick: A KIP is only accepted if it got 3 binding votes, ie, votes from
committers. If you close the vote before that, the KIP would not be
accepted. Note that committers need to pay attention to a lot of KIPs
and it can take a while until people can look into it. Thanks for your
understanding.

@Jan: Can you give a little bit more context on your concerns? It's
unclear what you mean atm.
Just saying that we should peek at the Samza approach, it's a much more
powerful abstraction. We can ship a default MessageChooser that looks at
the topic's priority.
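
(For illustration, a minimal sketch of what such a pluggable chooser could
look like on the consumer side; the names and signatures below are made up
for this sketch and are not part of KIP-349 or Samza's actual API.)

import org.apache.kafka.clients.consumer.ConsumerRecord;

// Hypothetical strategy interface for picking the next record to process.
public interface MessageChooser {

    // Offer a fetched record to the chooser's internal buffer.
    void update(ConsumerRecord<byte[], byte[]> record);

    // Return the record to process next, or null if nothing is buffered.
    ConsumerRecord<byte[], byte[]> choose();
}

// A default implementation could simply prefer records whose topic has the
// highest configured priority, falling back to round-robin otherwise.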

@Adam: anyone can vote :)



-Matthias

On 8/19/18 9:58 AM, Adam Bellemare wrote:

While I am not sure if I can or can’t vote, my question re: Jan’s comment is, 
“should we be implementing it as Samza does?”

I am not familiar with the drawbacks of the current approach vs how samza does 
it.


On Aug 18, 2018, at 5:06 PM, n...@afshartous.com wrote:


I only saw one vote on KIP-349, just checking to see if anyone else would like 
to vote before closing this out.
--
  Nick



On Aug 13, 2018, at 9:19 PM, n...@afshartous.com wrote:


Hi All,

Calling for a vote on KIP-349

https://cwiki.apache.org/confluence/display/KAFKA/KIP-349%3A+Priorities+for+Source+Topics

--
 Nick











[ANNOUNCE] New Kafka PMC member: Dong Lin

2018-08-20 Thread Ismael Juma
Hi everyone,

Dong Lin became a committer in March 2018. Since then, he has remained
active in the community and contributed a number of patches, reviewed
several pull requests and participated in numerous KIP discussions. I am
happy to announce that Dong is now a member of the
Apache Kafka PMC.

Congratulation Dong! Looking forward to your future contributions.

Ismael, on behalf of the Apache Kafka PMC


Re: [ANNOUNCE] New Kafka PMC member: Dong Lin

2018-08-20 Thread Manikumar
Congrats,  Dong Lin!

On Mon, Aug 20, 2018 at 4:25 PM Ismael Juma  wrote:

> Hi everyone,
>
> Dong Lin became a committer in March 2018. Since then, he has remained
> active in the community and contributed a number of patches, reviewed
> several pull requests and participated in numerous KIP discussions. I am
> happy to announce that Dong is now a member of the
> Apache Kafka PMC.
>
> Congratulation Dong! Looking forward to your future contributions.
>
> Ismael, on behalf of the Apache Kafka PMC
>


Re: [ANNOUNCE] New Kafka PMC member: Dong Lin

2018-08-20 Thread Mickael Maison
Congratulations Dong!
On Mon, Aug 20, 2018 at 12:46 PM Manikumar  wrote:
>
> Congrats,  Dong Lin!
>
> On Mon, Aug 20, 2018 at 4:25 PM Ismael Juma  wrote:
>
> > Hi everyone,
> >
> > Dong Lin became a committer in March 2018. Since then, he has remained
> > active in the community and contributed a number of patches, reviewed
> > several pull requests and participated in numerous KIP discussions. I am
> > happy to announce that Dong is now a member of the
> > Apache Kafka PMC.
> >
> > Congratulation Dong! Looking forward to your future contributions.
> >
> > Ismael, on behalf of the Apache Kafka PMC
> >


Re: [ANNOUNCE] New Kafka PMC member: Dong Lin

2018-08-20 Thread Dongjin Lee
Congratulations!!

On Mon, Aug 20, 2018 at 9:22 PM Mickael Maison 
wrote:

> Congratulations Dong!
> On Mon, Aug 20, 2018 at 12:46 PM Manikumar 
> wrote:
> >
> > Congrats,  Dong Lin!
> >
> > On Mon, Aug 20, 2018 at 4:25 PM Ismael Juma  wrote:
> >
> > > Hi everyone,
> > >
> > > Dong Lin became a committer in March 2018. Since then, he has remained
> > > active in the community and contributed a number of patches, reviewed
> > > several pull requests and participated in numerous KIP discussions. I
> am
> > > happy to announce that Dong is now a member of the
> > > Apache Kafka PMC.
> > >
> > > Congratulation Dong! Looking forward to your future contributions.
> > >
> > > Ismael, on behalf of the Apache Kafka PMC
> > >
>
> --
> *Dongjin Lee*
>
> *A hitchhiker in the mathematical world.*
>
> *github:  github.com/dongjinleekr
> linkedin: kr.linkedin.com/in/dongjinleekr
> slideshare: 
> www.slideshare.net/dongjinleekr
> *
>


Re: [VOTE] KIP-349 Priorities for Source Topics

2018-08-20 Thread Thomas Becker
I agree with Jan. A strategy interface for choosing processing order is nice, 
and would hopefully be a step towards getting this in streams.

-Tommy

On Mon, 2018-08-20 at 12:52 +0200, Jan Filipiak wrote:
> On 20.08.2018 00:19, Matthias J. Sax wrote:
> > @Nick: A KIP is only accepted if it got 3 binding votes, ie, votes from
> > committers. If you close the vote before that, the KIP would not be
> > accepted. Note that committers need to pay attention to a lot of KIPs
> > and it can take a while until people can look into it. Thanks for your
> > understanding.
> >
> > @Jan: Can you give a little bit more context on your concerns? It's
> > unclear what you mean atm.
>
> Just saying that we should peek at the Samza approach, it's a much more
> powerful abstraction. We can ship a default MessageChooser that looks at
> the topic's priority.
>
> > @Adam: anyone can vote :)
> >
> > -Matthias
> >
> > On 8/19/18 9:58 AM, Adam Bellemare wrote:
> > > While I am not sure if I can or can’t vote, my question re: Jan’s
> > > comment is, “should we be implementing it as Samza does?”
> > >
> > > I am not familiar with the drawbacks of the current approach vs how
> > > samza does it.
> > >
> > > On Aug 18, 2018, at 5:06 PM, n...@afshartous.com wrote:
> > > >
> > > > I only saw one vote on KIP-349, just checking to see if anyone else
> > > > would like to vote before closing this out.
> > > > --
> > > >   Nick
> > > >
> > > > On Aug 13, 2018, at 9:19 PM, n...@afshartous.com wrote:
> > > > >
> > > > > Hi All,
> > > > >
> > > > > Calling for a vote on KIP-349
> > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-349%3A+Priorities+for+Source+Topics
> > > > >
> > > > > --
> > > > >  Nick



Build failed in Jenkins: kafka-trunk-jdk10 #416

2018-08-20 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-7210: Add system test to verify log compaction (#5226)

--
[...truncated 1.54 MB...]

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAppendNewMetadataToLogOnAddPartitionsWhenPartitionsAdded PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestOnEndTxnWhenTransactionalIdIsNull STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestOnEndTxnWhenTransactionalIdIsNull PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithCoordinatorLoadInProgressOnEndTxnWhenCoordinatorIsLoading 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithCoordinatorLoadInProgressOnEndTxnWhenCoordinatorIsLoading 
PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithSuccessOnAddPartitionsWhenStateIsCompleteAbort STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithSuccessOnAddPartitionsWhenStateIsCompleteAbort PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldWaitForCommitToCompleteOnHandleInitPidAndExistingTransactionInPrepareAbortState
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldWaitForCommitToCompleteOnHandleInitPidAndExistingTransactionInPrepareAbortState
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithErrorsNoneOnAddPartitionWhenNoErrorsAndPartitionsTheSame 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithErrorsNoneOnAddPartitionWhenNoErrorsAndPartitionsTheSame PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestOnEndTxnWhenTransactionalIdIsEmpty STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestOnEndTxnWhenTransactionalIdIsEmpty PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestAddPartitionsToTransactionWhenTransactionalIdIsEmpty
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestAddPartitionsToTransactionWhenTransactionalIdIsEmpty
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAppendPrepareAbortToLogOnEndTxnWhenStatusIsOngoingAndResultIsAbort STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAppendPrepareAbortToLogOnEndTxnWhenStatusIsOngoingAndResultIsAbort PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAcceptInitPidAndReturnNextPidWhenTransactionalIdIsNull STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAcceptInitPidAndReturnNextPidWhenTransactionalIdIsNull PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRemoveTransactionsForPartitionOnEmigration STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRemoveTransactionsForPartitionOnEmigration PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldWaitForCommitToCompleteOnHandleInitPidAndExistingTransactionInPrepareCommitState
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldWaitForCommitToCompleteOnHandleInitPidAndExistingTransactionInPrepareCommitState
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAbortExpiredTransactionsInOngoingStateAndBumpEpoch STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAbortExpiredTransactionsInOngoingStateAndBumpEpoch PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnInvalidTxnRequestOnEndTxnRequestWhenStatusIsCompleteCommitAndResultIsNotCommit
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnInvalidTxnRequestOnEndTxnRequestWhenStatusIsCompleteCommitAndResultIsNotCommit
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnOkOnEndTxnWhenStatusIsCompleteCommitAndResultIsCommit STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnOkOnEndTxnWhenStatusIsCompleteCommitAndResultIsCommit PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionsOnAddPartitionsWhenStateIsPrepareCommit 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionsOnAddPartitionsWhenStateIsPrepareCommit 
PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldIncrementEpochAndUpdateMetadataOnHandleInitPidWhenExistingCompleteTransaction
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldIncrementEpochAndUpdateMetadataOnHandleInitPidWhenExistingCompleteTransaction
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldGenerateNewProducerIdIfEpochsExh

Re: [ANNOUNCE] New Kafka PMC member: Dong Lin

2018-08-20 Thread Paolo Patierno
Congratulations Dong!

Paolo Patierno
Principal Software Engineer (IoT) @ Red Hat
Microsoft MVP on Azure & IoT
Microsoft Azure Advisor

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience


From: Dongjin Lee 
Sent: Monday, August 20, 2018 1:00 PM
To: dev@kafka.apache.org
Subject: Re: [ANNOUNCE] New Kafka PMC member: Dong Lin

Congratulations!!

On Mon, Aug 20, 2018 at 9:22 PM Mickael Maison 
wrote:

> Congratulations Dong!
> On Mon, Aug 20, 2018 at 12:46 PM Manikumar 
> wrote:
> >
> > Congrats,  Dong Lin!
> >
> > On Mon, Aug 20, 2018 at 4:25 PM Ismael Juma  wrote:
> >
> > > Hi everyone,
> > >
> > > Dong Lin became a committer in March 2018. Since then, he has remained
> > > active in the community and contributed a number of patches, reviewed
> > > several pull requests and participated in numerous KIP discussions. I
> am
> > > happy to announce that Dong is now a member of the
> > > Apache Kafka PMC.
> > >
> > > Congratulation Dong! Looking forward to your future contributions.
> > >
> > > Ismael, on behalf of the Apache Kafka PMC
> > >
>
> --
> *Dongjin Lee*
>
> *A hitchhiker in the mathematical world.*
>
> *github:  github.com/dongjinleekr
> linkedin: kr.linkedin.com/in/dongjinleekr
> slideshare: 
> www.slideshare.net/dongjinleekr
> *
>


Re: [DISCUSS] KIP-291: Have separate queues for control requests and data requests

2018-08-20 Thread Becket Qin
Hi Lucas,

In KIP-103, we introduced a convention to define and look up the listeners.
So it would be good if the later KIPs can follow the same convention.

From what I understand, the advertised.listeners is actually designed for
our purpose, i.e. providing a list of listeners that can be used in
different cases. In KIP-103 it was used to separate internal traffic from
the external traffic. It is not just for the user traffic or data
only. So adding
a controller listener is not repurposing the config. Also, ZK structure is
only visible to brokers, the clients will still only see the listeners they
are seeing today.

For this KIP, we are essentially trying to separate the controller traffic
from the inter-broker data traffic. So adding a new
listener.name.for.controller config seems reasonable. The behavior would
be:
1. If the listener.name.for.controller is set, the broker-controller
communication will go through that listener.
2. Otherwise, the controller traffic falls back to use
inter.broker.listener.name or inter.broker.security.protocol, which is the
current behavior.
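
(To make that concrete, a broker configuration under this scheme might look
roughly like the following; listener.name.for.controller is the name being
proposed in this thread, not an existing config, and the endpoints are
placeholders.)

listeners=INTERNAL://broker1:9092,CONTROLLER://broker1:9093
advertised.listeners=INTERNAL://broker1:9092,CONTROLLER://broker1:9093
listener.security.protocol.map=INTERNAL:PLAINTEXT,CONTROLLER:PLAINTEXT
inter.broker.listener.name=INTERNAL
# Proposed: if unset, controller traffic falls back to
# inter.broker.listener.name (today's behavior).
listener.name.for.controller=CONTROLLER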

Regarding updating the security protocol with a one-line change vs. a two-line
change, I am a little confused, can you elaborate?

Regarding the possibility of hurry and misreading: it is the system admin's
responsibility to configure the right listener to ensure that different
kinds of traffic are using the correct endpoints. So I think it is better
that we always follow the same convention instead of doing it in
different ways.

Thanks,

Jiangjie (Becket) Qin



On Fri, Aug 17, 2018 at 4:34 AM, Lucas Wang  wrote:

> Thanks for the review, Becket.
>
> (1) After comparing the two approaches, I still feel the current writeup is
> a little better.
> a. The current writeup asks for an explicit endpoint while reusing the
> existing "inter.broker.listener.name" with the exactly same semantic,
> and your proposed change asks for a new listener name for controller while
> reusing the existing "advertised.listeners" config with a slight semantic
> change since a new controller endpoint needs to be added to it.
> Hence conceptually the current writeup requires one config change instead
> of two.
> Also with one listener name, e.g. INTERNAL, for inter broker traffic,
> instead of two, e.g. "INTERNAL" and "CONTROLLER",
> if an operator decides to switch from PLAINTEXT to SSL for internal
> traffic, chances are that she wants to upgrade
> both controller connections and data connections, she only needs to update
> one line in
> the "listener.security.protocol.map" config, and avoids possible mistakes.
>
>
> b. When this KIP is picked up by an operator who is in a hurry without
> reading the docs, if she sees a
> new listener name for controller is required, and chances are there is
> already a list of listeners,
> it's possible for her to simply choose an existing listener name, without
> explicitly creating
> the new CONTROLLER listener and endpoints. If this is done, Kafka will be
> run with the existing
> behavior, defeating the purpose of this KIP.
> In comparison, if she sees a separate endpoint is being asked, I feel it's
> unlikely for her to
> copy and paste an existing endpoint.
>
> Please let me know your comments.
>
> (2) Good catch, it's a typo, and it's been fixed.
>
> Thanks,
> Lucas
>


Re: [ANNOUNCE] New Kafka PMC member: Dong Lin

2018-08-20 Thread Attila Sasvári
Congratulations Dong! Well deserved.

Regards,
Attila
Paolo Patierno wrote (on Mon, Aug 20, 2018, 15:09):

> Congratulations Dong!
>
> Paolo Patierno
> Principal Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Azure & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>
> 
> From: Dongjin Lee 
> Sent: Monday, August 20, 2018 1:00 PM
> To: dev@kafka.apache.org
> Subject: Re: [ANNOUNCE] New Kafka PMC member: Dong Lin
>
> Congratulations!!
>
> On Mon, Aug 20, 2018 at 9:22 PM Mickael Maison 
> wrote:
>
> > Congratulations Dong!
> > On Mon, Aug 20, 2018 at 12:46 PM Manikumar 
> > wrote:
> > >
> > > Congrats,  Dong Lin!
> > >
> > > On Mon, Aug 20, 2018 at 4:25 PM Ismael Juma  wrote:
> > >
> > > > Hi everyone,
> > > >
> > > > Dong Lin became a committer in March 2018. Since then, he has
> remained
> > > > active in the community and contributed a number of patches, reviewed
> > > > several pull requests and participated in numerous KIP discussions. I
> > am
> > > > happy to announce that Dong is now a member of the
> > > > Apache Kafka PMC.
> > > >
> > > > Congratulation Dong! Looking forward to your future contributions.
> > > >
> > > > Ismael, on behalf of the Apache Kafka PMC
> > > >
> >
> > --
> > *Dongjin Lee*
> >
> > *A hitchhiker in the mathematical world.*
> >
> > *github:  github.com/dongjinleekr
> > linkedin:
> kr.linkedin.com/in/dongjinleekr
> > slideshare:
> www.slideshare.net/dongjinleekr
> > *
> >
>


Build failed in Jenkins: kafka-trunk-jdk8 #2908

2018-08-20 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-7210: Add system test to verify log compaction (#5226)

--
[...truncated 2.48 MB...]
org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSink PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddSinkWithNullParents STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddSinkWithNullParents PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testNamedTopicMatchesAlreadyProvidedPattern STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testNamedTopicMatchesAlreadyProvidedPattern PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddSinkWithSameName STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddSinkWithSameName PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddSinkWithSelfParent STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddSinkWithSelfParent PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddProcessorWithSelfParent STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddProcessorWithSelfParent PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldAssociateStateStoreNameWhenStateStoreSupplierIsInternal STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldAssociateStateStoreNameWhenStateStoreSupplierIsInternal PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddStateStoreWithSink STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddStateStoreWithSink PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testTopicGroups STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testTopicGroups PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldSortProcessorNodesCorrectly STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldSortProcessorNodesCorrectly PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testBuild STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testBuild PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldNotAllowOffsetResetSourceWithoutTopics STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldNotAllowOffsetResetSourceWithoutTopics PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldNotAddNullStateStoreSupplier STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldNotAddNullStateStoreSupplier PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddProcessorWithNullParents STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddProcessorWithNullParents PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddSinkWithEmptyParents STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddSinkWithEmptyParents PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSource STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSource PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldNotAllowNullTopicWhenAddingSink STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldNotAllowNullTopicWhenAddingSink PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldNotAllowToAddGlobalStoreWithSourceNameEqualsProcessorName STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldNotAllowToAddGlobalStoreWithSourceNameEqualsProcessorName PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldAddSourceWithOffsetReset STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldAddSourceWithOffsetReset PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddStateStoreWithSource STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddStateStoreWithSource PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldAddInternalTopicConfigForRepartitionTopics STARTED

org.apache.ka

Re: [DISCUSS] KIP-291: Have separate queues for control requests and data requests

2018-08-20 Thread Eno Thereska
Hi folks,

I looked at the previous numbers that Lucas provided (thanks!) but it's
still not clear to me whether the performance benefits justify the added
complexity. I'm looking for some intuition here (a graph would be great but
not required): for a small/medium/large cluster, what is the expected
percentage of control requests today that will benefit from the change?
It's a bit hard to go through this level of detail without knowing the
expected end-to-end benefit. The best folks to answer this might be ones
running such clusters, and ideally should pitch in with some data.

Thanks
Eno

On Mon, Aug 20, 2018 at 7:29 AM, Becket Qin  wrote:

> Hi Lucas,
>
> In KIP-103, we introduced a convention to define and look up the listeners.
> So it would be good if the later KIPs can follow the same convention.
>
> From what I understand, the advertised.listeners is actually designed for
> our purpose, i.e. providing a list of listeners that can be used in
> different cases. In KIP-103 it was used to separate internal traffic from
> the external traffic. It is not just for the user traffic or data
> only. So adding
> a controller listener is not repurposing the config. Also, ZK structure is
> only visible to brokers, the clients will still only see the listeners they
> are seeing today.
>
> For this KIP, we are essentially trying to separate the controller traffic
> from the inter-broker data traffic. So adding a new
> listener.name.for.controller config seems reasonable. The behavior would
> be:
> 1. If the listener.name.for.controller is set, the broker-controller
> communication will go through that listener.
> 2. Otherwise, the controller traffic falls back to use
> inter.broker.listener.name or inter.broker.security.protocol, which is the
> current behavior.
>
> Regarding updating the security protocol with one line change v.s two-lines
> change, I am a little confused, can you elaborate?
>
> Regarding the possibility of hurry and misreading. It is the system admin's
> responsibility to configure the right listener to ensure that different
> kinds of traffic are using the correct endpoints. So I think it is better
> that we always follow the same of convention instead of doing it in
> different ways.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
>
>
> On Fri, Aug 17, 2018 at 4:34 AM, Lucas Wang  wrote:
>
> > Thanks for the review, Becket.
> >
> > (1) After comparing the two approaches, I still feel the current writeup
> is
> > a little better.
> > a. The current writeup asks for an explicit endpoint while reusing the
> > existing "inter.broker.listener.name" with the exactly same semantic,
> > and your proposed change asks for a new listener name for controller
> while
> > reusing the existing "advertised.listeners" config with a slight semantic
> > change since a new controller endpoint needs to be added to it.
> > Hence conceptually the current writeup requires one config change instead
> > of two.
> > Also with one listener name, e.g. INTERNAL, for inter broker traffic,
> > instead of two, e.g. "INTERNAL" and "CONTROLLER",
> > if an operator decides to switch from PLAINTEXT to SSL for internal
> > traffic, chances are that she wants to upgrade
> > both controller connections and data connections, she only needs to
> update
> > one line in
> > the "listener.security.protocol.map" config, and avoids possible
> mistakes.
> >
> >
> > b. When this KIP is picked up by an operator who is in a hurry without
> > reading the docs, if she sees a
> > new listener name for controller is required, and chances are there is
> > already a list of listeners,
> > it's possible for her to simply choose an existing listener name, without
> > explicitly creating
> > the new CONTROLLER listener and endpoints. If this is done, Kafka will be
> > run with the existing
> > behavior, defeating the purpose of this KIP.
> > In comparison, if she sees a separate endpoint is being asked, I feel
> it's
> > unlikely for her to
> > copy and paste an existing endpoint.
> >
> > Please let me know your comments.
> >
> > (2) Good catch, it's a typo, and it's been fixed.
> >
> > Thanks,
> > Lucas
> >
>


[jira] [Resolved] (KAFKA-7298) Concurrent DeleteRecords can lead to fatal OutOfSequence error in producer

2018-08-20 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-7298.

   Resolution: Fixed
Fix Version/s: 2.0.1

> Concurrent DeleteRecords can lead to fatal OutOfSequence error in producer
> --
>
> Key: KAFKA-7298
> URL: https://issues.apache.org/jira/browse/KAFKA-7298
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
> Fix For: 2.0.1
>
>
> We have logic in the producer to handle unknown producer errors. Basically 
> when the producer gets an unknown producer error, it checks whether the log 
> start offset is larger than the last acknowledged offset. If it is, then we 
> know the error is spurious and we reset the sequence number to 0, which the 
> broker will then accept.
> It can happen after a DeleteRecords call, however, that the only record 
> remaining in the log is a transaction marker, which does not have a sequence 
> number. The error we get in this case is OUT_OF_SEQUENCE rather than 
> UNKNOWN_PRODUCER, which is fatal.
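
(For context, a DeleteRecords request is typically issued through the admin
client; the sketch below uses made-up topic, partition, and offset values.)

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.RecordsToDelete;
import org.apache.kafka.common.TopicPartition;

public class DeleteRecordsExample {
  public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    try (AdminClient admin = AdminClient.create(props)) {
      // Delete everything before offset 100 in partition 0 of "my-topic".
      // This advances the log start offset, which is what the producer
      // compares against when it handles an unknown producer error.
      admin.deleteRecords(Collections.singletonMap(
          new TopicPartition("my-topic", 0),
          RecordsToDelete.beforeOffset(100L))).all().get();
    }
  }
}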



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6835) Enable topic unclean leader election to be enabled without controller change

2018-08-20 Thread Jun Rao (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-6835.

Resolution: Fixed

Merged the PR to trunk.

> Enable topic unclean leader election to be enabled without controller change
> 
>
> Key: KAFKA-6835
> URL: https://issues.apache.org/jira/browse/KAFKA-6835
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Rajini Sivaram
>Assignee: Manikumar
>Priority: Major
> Fix For: 2.1.0
>
>
> Dynamic update of broker's default unclean.leader.election.enable will be 
> processed without controller change (KAFKA-6526). We should probably do the 
> same for topic overrides as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7303) Kafka client is stuck when specifying wrong brokers

2018-08-20 Thread Chun Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chun Zhang resolved KAFKA-7303.
---
Resolution: Duplicate

> Kafka client is stuck when specifying wrong brokers
> ---
>
> Key: KAFKA-7303
> URL: https://issues.apache.org/jira/browse/KAFKA-7303
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 1.1.0
>Reporter: Chun Zhang
>Priority: Major
>
> {code:java}
> import java.util.Collections;
> import java.util.Properties;
> import org.apache.kafka.clients.CommonClientConfigs;
> import org.apache.kafka.clients.consumer.ConsumerConfig;
> import org.apache.kafka.clients.consumer.ConsumerRecord;
> import org.apache.kafka.clients.consumer.ConsumerRecords;
> import org.apache.kafka.clients.consumer.KafkaConsumer;
> public class KafkaBug {
>   public static void main(String[] args) throws Exception {
> Properties props = new Properties();
> // intentionally use an irrelevant address
> props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, 
> "issues.apache.org:80");
> props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "PLAINTEXT");
> props.put(ConsumerConfig.GROUP_ID_CONFIG, "group_id_string");
> props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
> "org.apache.kafka.common.serialization.ByteArrayDeserializer");
> props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
> "org.apache.kafka.common.serialization.ByteArrayDeserializer");
> KafkaConsumer consumer = new KafkaConsumer<>(props);
> consumer.subscribe(Collections.singleton("mytopic"));
> // This call will block forever.
> consumer.poll(1000);
>   }
> }
> {code}
> When I run the code above, I keep getting the error log below:
> {code:java}
> DEBUG [main] (21:21:25,959) - [Consumer clientId=consumer-1, 
> groupId=group_id_string] Connection with issues.apache.org/207.244.88.139 
> disconnected
> java.net.ConnectException: Connection timed out
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
> at 
> org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
> at 
> org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:106)
> at 
> org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:470)
> at org.apache.kafka.common.network.Selector.poll(Selector.java:424)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:460)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:261)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:156)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:228)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:205)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:279)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1149)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1115)
> at com.twosigma.example.kafka.bug.KafkaBug.main(KafkaBug.java:46)
> DEBUG [main] (21:21:25,963) - [Consumer clientId=consumer-1, 
> groupId=group_id_string] Node -1 disconnected.
> WARN [main] (21:21:25,963) - [Consumer clientId=consumer-1, 
> groupId=group_id_string] Connection to node -1 could not be established. 
> Broker may not be available.
> DEBUG [main] (21:21:25,963) - [Consumer clientId=consumer-1, 
> groupId=group_id_string] Give up sending metadata request since no node is 
> available
> DEBUG [main] (21:21:26,013) - [Consumer clientId=consumer-1, 
> groupId=group_id_string] Give up sending metadata request since no node is 
> available
> {code}
> I expect the program to fail when the wrong broker is specified.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-358: Migrate Streams API to Duration instead of long ms times

2018-08-20 Thread Nikolay Izhikov
Hello, Ted.

Thanks for the comment.

I've edited the KIP and changed the proposal to `windowSize`.

Guys, any other comments?
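
(For illustration, a rough sketch of the kind of Duration-based overloads
discussed below; ExampleWindows is a made-up class, not part of the Streams
API, and the KIP defines the real signatures.)

import java.time.Duration;

public class ExampleWindows {

    private final long sizeMs;

    private ExampleWindows(long sizeMs) {
        this.sizeMs = sizeMs;
    }

    // The old ms-based factory stays, deprecated, delegating to the new one.
    @Deprecated
    public static ExampleWindows of(long sizeMs) {
        return new ExampleWindows(sizeMs);
    }

    public static ExampleWindows of(Duration size) {
        // Internally millis are still stored to avoid extra allocations.
        return new ExampleWindows(size.toMillis());
    }

    @Deprecated
    public long size() {
        return sizeMs;
    }

    // No-arg methods cannot be overloaded by return type alone, hence the
    // rename discussed below (size() -> windowSize()).
    public Duration windowSize() {
        return Duration.ofMillis(sizeMs);
    }
}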


On Sun, 19/08/2018 at 14:57 -0700, Ted Yu wrote:
> bq. // or just Duration windowSize();
> 
> +1 to the above choice.
> The duration is obvious from the return type. For getter methods, we don't
> use get as prefix (as least for new code).
> 
> Cheers
> 
> On Sun, Aug 19, 2018 at 8:03 AM Nikolay Izhikov  wrote:
> 
> > Hello, John.
> > 
> > Thank you very much for your feedback!
> > I've addressed all your comments.
> > Please see my answers and let me know if anything in KIP [1] needs to be
> > improved.
> > 
> > > The correct choice is actually "Instant", not> "LocalDateTime"
> > 
> > I've changed the methods proposed in KIP [1] to use Instant.
> > 
> > > I noticed some recent APIs are> missing (see KIP-328)
> > > those APIs were just added and have never been released... you can just
> > 
> > replace them.
> > 
> > I've added new methods to KIP [1].
> > Not released methods marked for remove.
> > 
> > > any existing method that's already deprecated, don't bother
> > 
> > transitioning it to Duration.
> > 
> > Fixed.
> > 
> > > IllegalArgumentException... we should plan to mention this in the
> > 
> > javadoc for those methods.
> > 
> > Got it.
> > 
> > > In Stores, windowSize and segmentInterval should also be durations.
> > 
> > Fixed.
> > 
> > > StreamsMetrics, recordLatency ... this one is better left alone.
> > 
> > OK. I removed this method from KIP [1].
> > 
> > Two more questions question about implementation:
> > 
> > 1. We have several methods without parameters.
> > In Java we can't have two no-argument methods with the same name.
> > It wouldn't compile.
> > So we have to rename the new methods. Please see the suggested names and share
> > your thoughts:
> > 
> > Windows {
> > long size() -> Duration windowSize();
> > }
> > 
> > Window {
> > long start() -> Instant startTime();
> > long end() -> Instant endTime();
> > }
> > 
> > SessionWindows {
> > long inactivityGap() -> Duration inactivityGapDuration();
> > }
> > 
> > TimeWindowedDeserializer {
> > Long getWindowSize() -> Duration getWindowSizeDuration(); // or just
> > Duration windowSize();
> > }
> > 
> > SessionBytesStoreSupplier {
> > long retentionPeriod() -> Duration retentionPeriodDuration();
> > }
> > 
> > WindowBytesStoreSupplier {
> > long windowSize() -> Duration windowSizeDuration();
> > long retentionPeriod() -> Duration retentionPeriodDuration();
> > }
> > 
> > 2. Do we want to use Duration and Instant inside API implementations?
> > 
> > IGNITE-7277: "Durations potentially worsen memory pressure and gc
> > performance, so internally, we will still use longMs as the representation."
> > IGNITE-7222: Duration used to store retention.
> > 
> > [1]
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-358%3A+Migrate+Streams+API+to+Duration+instead+of+long+ms+times
> > [2]
> > https://github.com/apache/kafka/commit/b3771ba22acad7870e38ff7f58820c5b50946787#diff-47289575d3e3e2449f27b3a7b6788e1aR64
> > 
> > On Fri, 17/08/2018 at 14:46 -0500, John Roesler wrote:
> > > Hi Nikolay,
> > > 
> > > Thanks for this very nice KIP!
> > > 
> > > To answer your questions:
> > > 1. Correct, we should not delete existing methods that have been
> > 
> > released,
> > > but ...
> > > 
> > > 2. Yes, we should deprecate the 'long' variants so that we can drop them
> > > later on. Personally, I like to mention which version deprecated the
> > 
> > method
> > > so everyone can see later on how long it's been deprecated, but this may
> > 
> > be
> > > controversial, so let's let other weigh in.
> > > 
> > > 3. I think you're asking whether it's appropriate to drop the "Ms"
> > 
> > suffix,
> > > and I think yes. So "long inactivityGapMs" would become "Duration
> > > inactivityGap".
> > > In the places where the parameter's name is just "duration", I think we
> > 
> > can
> > > pick something more descriptive (I realize it was already "durationMs";
> > > this is just a good time to improve it).
> > > Also, you're correct that we shouldn't use a Duration to represent a
> > 
> > moment
> > > in time, like "startTime". The correct choice is actually "Instant", not
> > > "LocalDateTime", though.
> > > 
> > 
> > https://stackoverflow.com/questions/32437550/whats-the-difference-between-instant-and-localdatetime
> > > explains why.
> > > 
> > > I also had a few notes on the KIP itself:
> > > 4. You might want to pull trunk again. I noticed some recent APIs are
> > > missing (see KIP-328).
> > > 
> > > 5. Speaking of KIP-328: those APIs were just added and have never been
> > > released, so there's no need to deprecate the methods, you can just
> > 
> > replace
> > > them.
> > > 
> > > 6. For any existing method that's already deprecated, don't bother
> > > transitioning it to Duration. I think the examples I noticed were
> > > deprecated in KIP-328, so you'll see what I'm talking about when you pull
> > > trunk again.

Re: [ANNOUNCE] New Kafka PMC member: Dong Lin

2018-08-20 Thread Matthias J. Sax
Congrats Dong!

-Matthias

On 8/20/18 3:54 AM, Ismael Juma wrote:
> Hi everyone,
> 
> Dong Lin became a committer in March 2018. Since then, he has remained
> active in the community and contributed a number of patches, reviewed
> several pull requests and participated in numerous KIP discussions. I am
> happy to announce that Dong is now a member of the
> Apache Kafka PMC.
> 
> Congratulation Dong! Looking forward to your future contributions.
> 
> Ismael, on behalf of the Apache Kafka PMC
> 



signature.asc
Description: OpenPGP digital signature


[jira] [Resolved] (KAFKA-7278) replaceSegments() should not call asyncDeleteSegment() for segments which have been removed from segments list

2018-08-20 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-7278.

   Resolution: Fixed
Fix Version/s: 2.0.1

> replaceSegments() should not call asyncDeleteSegment() for segments which 
> have been removed from segments list
> --
>
> Key: KAFKA-7278
> URL: https://issues.apache.org/jira/browse/KAFKA-7278
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Dong Lin
>Assignee: Dong Lin
>Priority: Major
> Fix For: 2.0.1
>
>
> Currently Log.replaceSegments() will call `asyncDeleteSegment(...)` for every 
> segment listed in the `oldSegments`. oldSegments should be constructed from 
> Log.segments and only contain segments listed in Log.segments.
> However, Log.segments may be modified between the time oldSegments is 
> determined and the time Log.replaceSegments() is called. If there are 
> concurrent async deletion of the same log segment file, Log.replaceSegments() 
> will call asyncDeleteSegment() for a segment that does not exist and Kafka 
> server may shutdown the log directory due to NoSuchFileException.
> This is likely the root cause of 
> https://issues.apache.org/jira/browse/KAFKA-6188.
> Given the understanding of the problem, we should be able to fix the issue by 
> only deleting segment if the segment can be found in Log.segments.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7314) MirrorMaker example in documentation does not work

2018-08-20 Thread John Wilkinson (JIRA)
John Wilkinson created KAFKA-7314:
-

 Summary: MirrorMaker example in documentation does not work
 Key: KAFKA-7314
 URL: https://issues.apache.org/jira/browse/KAFKA-7314
 Project: Kafka
  Issue Type: Bug
Reporter: John Wilkinson


Kafka MirrorMaker as described in the documentation 
[here|https://kafka.apache.org/documentation/#basic_ops_mirror_maker] does not 
work. Instead of pulling messages from the consumer-defined 
{{bootstrap.servers}} and pushing to the producer-defined 
{{bootstrap.servers}}, it consumes and produces on the same topic on the same 
host repeatedly.

To replicate, set up two instances of kafka following 
[this|https://docs.confluent.io/current/installation/docker/docs/installation/recipes/single-node-client.html]
 guide. The schema registry and rest proxy are unnecessary. 
[Here|https://hub.docker.com/r/confluentinc/cp-kafka/] is the DockerHub page 
for the image.  The Kafka version is 2.1.0.

Using those two kafka instances, go {{docker exec}} into one and set up the 
{{consumer.properties}} and the {{producer.properties}} following the 
MirrorMaker guide.

Oddly, if you put in garbage unresolvable server addresses in the config, there 
will be an error, despite the configs not getting used.
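
(For reference, the documented setup being reported against boils down to two
config files and one command, roughly as below; hostnames are placeholders.
The tool is started with: bin/kafka-mirror-maker.sh --consumer.config
consumer.properties --producer.config producer.properties --whitelist ".*")

# consumer.properties (source cluster)
bootstrap.servers=source-kafka:9092
group.id=mirror-maker-group

# producer.properties (target cluster)
bootstrap.servers=target-kafka:9092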



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Jenkins build is back to normal : kafka-trunk-jdk10 #417

2018-08-20 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-7311) Sender should reset next batch expiry time between poll loops

2018-08-20 Thread Guozhang Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-7311.
--
Resolution: Fixed

> Sender should reset next batch expiry time between poll loops
> -
>
> Key: KAFKA-7311
> URL: https://issues.apache.org/jira/browse/KAFKA-7311
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.1.0
>Reporter: Rohan Desai
>Assignee: Rohan Desai
>Priority: Major
> Fix For: 2.1.0
>
>
> Sender/RecordAccumulator never resets the next batch expiry time. It's always 
> computed as the min of the current value and the expiry time for all batches 
> being processed. This means that it's always set to the expiry time of the 
> first batch, and once that time has passed Sender starts spinning on epoll 
> with a timeout of 0, which consumes a lot of CPU. This patch updates Sender 
> to reset the next batch expiry time on each poll loop so that a new value 
> reflecting the expiry time for the current set of batches is computed. We 
> observed this running KSQL when investigating why throughput would drop after 
> about 10 minutes (the default delivery timeout).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [ANNOUNCE] New Kafka PMC member: Dong Lin

2018-08-20 Thread Guozhang Wang
Congratulations Dong!

On Mon, Aug 20, 2018 at 10:59 AM, Matthias J. Sax 
wrote:

> Congrats Dong!
>
> -Matthias
>
> On 8/20/18 3:54 AM, Ismael Juma wrote:
> > Hi everyone,
> >
> > Dong Lin became a committer in March 2018. Since then, he has remained
> > active in the community and contributed a number of patches, reviewed
> > several pull requests and participated in numerous KIP discussions. I am
> > happy to announce that Dong is now a member of the
> > Apache Kafka PMC.
> >
> > Congratulation Dong! Looking forward to your future contributions.
> >
> > Ismael, on behalf of the Apache Kafka PMC
> >
>
>


-- 
-- Guozhang


[jira] [Created] (KAFKA-7315) Streams serialization docs contain a broken link for Avro

2018-08-20 Thread John Roesler (JIRA)
John Roesler created KAFKA-7315:
---

 Summary: Streams serialization docs contain a broken link for Avro
 Key: KAFKA-7315
 URL: https://issues.apache.org/jira/browse/KAFKA-7315
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: John Roesler


https://kafka.apache.org/documentation/streams/developer-guide/datatypes.html#avro



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Jenkins build is back to normal : kafka-trunk-jdk8 #2909

2018-08-20 Thread Apache Jenkins Server
See 




Re: [ANNOUNCE] New Kafka PMC member: Dong Lin

2018-08-20 Thread Vahid Hashemian
Congratulations Dong!

--Vahid

On Mon, Aug 20, 2018 at 1:08 PM Guozhang Wang  wrote:

> Congratulations Dong!
>
> On Mon, Aug 20, 2018 at 10:59 AM, Matthias J. Sax 
> wrote:
>
> > Congrats Dong!
> >
> > -Matthias
> >
> > On 8/20/18 3:54 AM, Ismael Juma wrote:
> > > Hi everyone,
> > >
> > > Dong Lin became a committer in March 2018. Since then, he has remained
> > > active in the community and contributed a number of patches, reviewed
> > > several pull requests and participated in numerous KIP discussions. I
> am
> > > happy to announce that Dong is now a member of the
> > > Apache Kafka PMC.
> > >
> > > Congratulation Dong! Looking forward to your future contributions.
> > >
> > > Ismael, on behalf of the Apache Kafka PMC
> > >
> >
> >
>
>
> --
> -- Guozhang
>


-- 

Thanks!
--Vahid


Re: [ANNOUNCE] New Kafka PMC member: Dong Lin

2018-08-20 Thread Gwen Shapira
Congrats Dong Lin! Well deserved!

On Mon, Aug 20, 2018, 3:55 AM Ismael Juma  wrote:

> Hi everyone,
>
> Dong Lin became a committer in March 2018. Since then, he has remained
> active in the community and contributed a number of patches, reviewed
> several pull requests and participated in numerous KIP discussions. I am
> happy to announce that Dong is now a member of the
> Apache Kafka PMC.
>
> Congratulation Dong! Looking forward to your future contributions.
>
> Ismael, on behalf of the Apache Kafka PMC
>


Build failed in Jenkins: kafka-trunk-jdk8 #2910

2018-08-20 Thread Apache Jenkins Server
See 


Changes:

[junrao] KAFKA-6835: Enable topic unclean leader election to be enabled without

[jason] KAFKA-7278; replaceSegments() should not call asyncDeleteSegment() for

[wangguoz] KAFKA-7311: Reset next batch expiry time on each poll loop

--
[...truncated 426.04 KB...]
kafka.zookeeper.ZooKeeperClientTest > testConnectionTimeout PASSED

kafka.zookeeper.ZooKeeperClientTest > 
testBlockOnRequestCompletionFromStateChangeHandler STARTED

kafka.zookeeper.ZooKeeperClientTest > 
testBlockOnRequestCompletionFromStateChangeHandler PASSED

kafka.zookeeper.ZooKeeperClientTest > testUnresolvableConnectString STARTED

kafka.zookeeper.ZooKeeperClientTest > testUnresolvableConnectString PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testPipelinedGetData STARTED

kafka.zookeeper.ZooKeeperClientTest > testPipelinedGetData PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChange 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChange 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetDataExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetDataExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testMixedPipeline STARTED

kafka.zookeeper.ZooKeeperClientTest > testMixedPipeline PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetDataExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetDataExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testDeleteExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testDeleteExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testSessionExpiry STARTED

kafka.zookeeper.ZooKeeperClientTest > testSessionExpiry PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetDataNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetDataNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testDeleteNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testDeleteNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testExistsExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testExistsExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperStateChangeRateMetrics 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperStateChangeRateMetrics PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDeletion STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDeletion PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetAclNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetAclNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testStateChangeHandlerForAuthFailure 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testStateChangeHandlerForAuthFailure 
PASSED

kafka.network.SocketServerTest > testGracefulClose STARTED

kafka.network.SocketServerTest > testGracefulClose PASSED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingAlreadyDone STARTED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingAlreadyDone PASSED

kafka.network.SocketServerTest > controlThrowable STARTED

kafka.network.SocketServerTest > controlThrowable PASSED

kafka.network.SocketServerTest > testRequestMetricsAfterStop STARTED

kafka.network.SocketServerTest > testRequestMetricsAfterStop PASSED

kafka.network.SocketServerTest > testConnectionIdReuse STARTED

kafka.network.SocketServerTest > testConnectionIdReuse PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
STARTED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testProcessorMetricsTags STARTED

kafka.network.SocketServerTest > testProcessorMetricsTags PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > testConnectionId STARTED

kafka.network.SocketServerTest > testConnectionId PASSED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics STARTED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics PASSED

kafka.network.SocketServerTest > testNoOpAction STARTED

kafka.network.SocketServerTest > testNoOpAction PASSED

kafka.network.SocketServerTest > simpleRequest STARTED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > closingChannelException STARTED

k

Re: [ANNOUNCE] New Kafka PMC member: Dong Lin

2018-08-20 Thread Mayuresh Gharat
Congrats Dong !!!

Thanks,

Mayuresh

On Mon, Aug 20, 2018 at 1:36 PM Gwen Shapira  wrote:

> Congrats Dong Lin! Well deserved!
>
> On Mon, Aug 20, 2018, 3:55 AM Ismael Juma  wrote:
>
> > Hi everyone,
> >
> > Dong Lin became a committer in March 2018. Since then, he has remained
> > active in the community and contributed a number of patches, reviewed
> > several pull requests and participated in numerous KIP discussions. I am
> > happy to announce that Dong is now a member of the
> > Apache Kafka PMC.
> >
> > Congratulation Dong! Looking forward to your future contributions.
> >
> > Ismael, on behalf of the Apache Kafka PMC
> >
>


-- 
-Regards,
Mayuresh R. Gharat
(862) 250-7125


Build failed in Jenkins: kafka-trunk-jdk10 #419

2018-08-20 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-7311: Reset next batch expiry time on each poll loop

[jason] KAFKA-6312; Document --reset-offsets option for consumer group tool

--
[...truncated 1.79 MB...]
at 
org.gradle.internal.event.BroadcastDispatch$SingletonDispatch.dispatch(BroadcastDispatch.java:149)
at org.gradle.internal.event.AbstractBroadcastDispatch.dispatch(AbstractBroadcastDispatch.java:58)
at org.gradle.internal.event.BroadcastDispatch$CompositeDispatch.dispatch(BroadcastDispatch.java:324)
at org.gradle.internal.event.BroadcastDispatch$CompositeDispatch.dispatch(BroadcastDispatch.java:234)
at org.gradle.internal.event.ListenerBroadcast.dispatch(ListenerBroadcast.java:140)
at org.gradle.internal.event.ListenerBroadcast.dispatch(ListenerBroadcast.java:37)
at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy72.output(Unknown Source)
at org.gradle.api.internal.tasks.testing.results.StateTrackingTestResultProcessor.output(StateTrackingTestResultProcessor.java:112)
at org.gradle.api.internal.tasks.testing.results.AttachParentTestResultProcessor.output(AttachParentTestResultProcessor.java:48)
at jdk.internal.reflect.GeneratedMethodAccessor284.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at org.gradle.internal.dispatch.FailureHandlingDispatch.dispatch(FailureHandlingDispatch.java:29)
at org.gradle.internal.dispatch.AsyncDispatch.dispatchMessages(AsyncDispatch.java:133)
at org.gradle.internal.dispatch.AsyncDispatch.access$000(AsyncDispatch.java:34)
at org.gradle.internal.dispatch.AsyncDispatch$1.run(AsyncDispatch.java:73)
at org.gradle.internal.operations.CurrentBuildOperationPreservingRunnable.run(CurrentBuildOperationPreservingRunnable.java:42)
at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63)
at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:46)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55)
at java.base/java.lang.Thread.run(Thread.java:844)
Caused by: java.io.IOException: No space left on device
at java.base/java.io.FileOutputStream.writeBytes(Native Method)
at java.base/java.io.FileOutputStream.write(FileOutputStream.java:355)
at com.esotericsoftware.kryo.io.Output.flush(Output.java:154)
... 54 more
java.io.IOException: No space left on device
com.esotericsoftware.kryo.KryoException: java.io.IOException: No space left on device
at com.esotericsoftware.kryo.io.Output.flush(Output.java:156)
at com.esotericsoftware.kryo.io.Output.require(Output.java:134)
at com.esotericsoftware.kryo.io.Output.writeBoolean(Output.java:578)
at org.gradle.internal.serialize.kryo.KryoBackedEncoder.writeBoolean(KryoBackedEncoder.java:63)
at org.gradle.api.internal.tasks.testing.junit.result.TestOutputStore$Writer.onOutput(TestOutputStore.java:99)
at org.gradle.api.internal.tasks.testing.junit.result.TestReportDataCollector.onOutput(TestReportDataCollector.java:144)
at jdk.internal.reflect.GeneratedMethodAccessor282.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at org.gradle.internal.event.AbstractBroadcastDispatch.dispatch(AbstractBroadcastDispatch.java:42)
at org.gradle.internal.event.BroadcastDispatch$SingletonDispatch.dispatch(BroadcastDispatch.java:230)
at org.gradle.internal.event.BroadcastDispatch$SingletonDispatch.dispatch(BroadcastDispatch.java:149)
at org.gradle.internal.event.AbstractBroadcastDispatch.dispatch(AbstractBroadcastDispatch.java:58)
at org.gradle.internal.event.BroadcastDispatch$CompositeDispatch.dispatch(BroadcastDispatch.java:324)
at

[jira] [Resolved] (KAFKA-5891) Cast transformation fails if record schema contains timestamp field

2018-08-20 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-5891.

   Resolution: Fixed
Fix Version/s: 2.0.1
   1.1.2
   1.0.3

> Cast transformation fails if record schema contains timestamp field
> ---
>
> Key: KAFKA-5891
> URL: https://issues.apache.org/jira/browse/KAFKA-5891
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Artem Plotnikov
>Priority: Major
> Fix For: 1.0.3, 1.1.2, 2.0.1
>
>
> I have the following simple type cast transformation:
> {code}
> name=postgresql-source-simple
> connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
> tasks.max=1
> connection.url=jdbc:postgresql://localhost:5432/testdb?user=postgres&password=mysecretpassword
> query=SELECT 1::INT as a, '2017-09-14 10:23:54'::TIMESTAMP as b
> transforms=Cast
> transforms.Cast.type=org.apache.kafka.connect.transforms.Cast$Value
> transforms.Cast.spec=a:boolean
> mode=bulk
> topic.prefix=clients
> {code}
> Which fails with the following exception at runtime:
> {code}
> [2017-09-14 16:51:01,885] ERROR Task postgresql-source-simple-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:148)
> org.apache.kafka.connect.errors.DataException: Invalid Java object for schema type INT64: class java.sql.Timestamp for field: "null"
>   at org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:239)
>   at org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:209)
>   at org.apache.kafka.connect.data.Struct.put(Struct.java:214)
>   at org.apache.kafka.connect.transforms.Cast.applyWithSchema(Cast.java:152)
>   at org.apache.kafka.connect.transforms.Cast.apply(Cast.java:108)
>   at org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:38)
>   at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:190)
>   at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:168)
>   at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:146)
>   at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:190)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> If I remove the transforms.* part of the connector, it works correctly. 
> Actually, it doesn't really matter which type I use in the transformation 
> for field 'a'; just the existence of a timestamp field triggers the exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7317) Use collections subscription for main consumer to reduce metadata

2018-08-20 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-7317:
--

 Summary: Use collections subscription for main consumer to reduce 
metadata
 Key: KAFKA-7317
 URL: https://issues.apache.org/jira/browse/KAFKA-7317
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Matthias J. Sax


In KAFKA-4633 we switched from "collection subscription" to "pattern 
subscription" for `Consumer#subscribe()` to avoid triggering auto topic 
creation on the broker. In KAFKA-5291, the metadata request was extended to 
overwrite the broker config within the request itself. However, this feature is 
only used in `KafkaAdminClient`.

This ticket proposes to use the same feature within Kafka Streams to allow the 
usage of collection-based subscription in all clients, reducing the metadata 
response size, which can be very large for a large number of partitions in the 
cluster.
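
For illustration only, here is a minimal sketch of the two subscription styles in 
question on a plain KafkaConsumer (topic and group names are made up; this is not 
the Streams-internal consumer code):

{code:java}
import java.util.Arrays;
import java.util.Properties;
import java.util.regex.Pattern;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SubscriptionStyles {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "metadata-demo");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Pattern subscription: the consumer fetches metadata for *all* topics
            // in order to evaluate the pattern, which is expensive on large clusters.
            consumer.subscribe(Pattern.compile("clients.*"));

            // Collection subscription: metadata is only requested for the listed topics,
            // so the response stays small even with many partitions in the cluster.
            consumer.subscribe(Arrays.asList("clients-a", "clients-b"));
        }
    }
}
{code}

The proposal is essentially to let Streams use the second form without the auto 
topic creation side effect mentioned above.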

Also, the new metadata request cannot be used in consumer clients atm. Thus, we 
either need an internal config to allow Streams to enable this feature on the 
consumer, or we do a KIP and add a public config to enable this feature for all 
users.

For the AdminClient that is used during rebalance, Streams should also switch 
from wildcard metadata requests to topic-collection requests.

Note that Streams needs to be able to detect whether it connects to older 
brokers that do not support the new metadata request, and must still use pattern 
subscription in that case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7316) Use of filter method in KTable.scala may result in StackOverflowError

2018-08-20 Thread Ted Yu (JIRA)
Ted Yu created KAFKA-7316:
-

 Summary: Use of filter method in KTable.scala may result in 
StackOverflowError
 Key: KAFKA-7316
 URL: https://issues.apache.org/jira/browse/KAFKA-7316
 Project: Kafka
  Issue Type: Bug
Reporter: Ted Yu


In this thread:

http://search-hadoop.com/m/Kafka/uyzND1dNbYKXzC4F1?subj=Issue+in+Kafka+2+0+0+

Druhin reported seeing a StackOverflowError when using the filter method from 
KTable.scala.

This can be reproduced with the following change:
{code}
diff --git a/streams/streams-scala/src/test/scala/org/apache/kafka/streams/scala/StreamToTableJoinScalaIntegrationTestImplicitSerdes.scala b/streams/streams-scala/src/test/scala
index 3d1bab5..e0a06f2 100644
--- a/streams/streams-scala/src/test/scala/org/apache/kafka/streams/scala/StreamToTableJoinScalaIntegrationTestImplicitSerdes.scala
+++ b/streams/streams-scala/src/test/scala/org/apache/kafka/streams/scala/StreamToTableJoinScalaIntegrationTestImplicitSerdes.scala
@@ -58,6 +58,7 @@ class StreamToTableJoinScalaIntegrationTestImplicitSerdes extends StreamToTableJ
 val userClicksStream: KStream[String, Long] = builder.stream(userClicksTopic)

 val userRegionsTable: KTable[String, String] = builder.table(userRegionsTopic)
+userRegionsTable.filter { case (_, count) => true }

 // Compute the total per region by summing the individual click counts per region.
 val clicksPerRegion: KTable[String, Long] =
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-1.1-jdk7 #184

2018-08-20 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-5891; Proper handling of LogicalTypes in Cast (#4633)

--
[...truncated 1.93 MB...]

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeft[caching enabled = false] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeft[caching enabled = false] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testMultiInner[caching enabled = false] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testMultiInner[caching enabled = false] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeftRepartitioned[caching enabled = false] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeftRepartitioned[caching enabled = false] PASSED

org.apache.kafka.streams.integration.KTableSourceTopicRestartIntegrationTest > 
shouldRestoreAndProgressWhenTopicNotWrittenToDuringRestoration STARTED

org.apache.kafka.streams.integration.KTableSourceTopicRestartIntegrationTest > 
shouldRestoreAndProgressWhenTopicNotWrittenToDuringRestoration PASSED

org.apache.kafka.streams.integration.KTableSourceTopicRestartIntegrationTest > 
shouldRestoreAndProgressWhenTopicWrittenToDuringRestorationWithEosEnabled 
STARTED

org.apache.kafka.streams.integration.KTableSourceTopicRestartIntegrationTest > 
shouldRestoreAndProgressWhenTopicWrittenToDuringRestorationWithEosEnabled PASSED

org.apache.kafka.streams.integration.KTableSourceTopicRestartIntegrationTest > 
shouldRestoreAndProgressWhenTopicWrittenToDuringRestorationWithEosDisabled 
STARTED

org.apache.kafka.streams.integration.KTableSourceTopicRestartIntegrationTest > 
shouldRestoreAndProgressWhenTopicWrittenToDuringRestorationWithEosDisabled 
PASSED

org.apache.kafka.streams.integration.StreamTableJoinIntegrationTest > 
testInner[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamTableJoinIntegrationTest > 
testInner[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamTableJoinIntegrationTest > 
testLeft[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamTableJoinIntegrationTest > 
testLeft[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamTableJoinIntegrationTest > 
testInner[caching enabled = false] STARTED

org.apache.kafka.streams.integration.StreamTableJoinIntegrationTest > 
testInner[caching enabled = false] PASSED

org.apache.kafka.streams.integration.StreamTableJoinIntegrationTest > 
testLeft[caching enabled = false] STARTED

org.apache.kafka.streams.integration.StreamTableJoinIntegrationTest > 
testLeft[caching enabled = false] PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithGlobalAutoOffsetResetLatest
 STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithGlobalAutoOffsetResetLatest
 PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingPattern STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingPattern PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingTopic STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingTopic PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldOnlyReadRecordsWhereEarliestSpecifiedWithInvalidCommittedOffsets 
STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldOnlyReadRecordsWhereEarliestSpecifiedWithInvalidCommittedOffsets PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithDefaultGlobalAutoOffsetResetEarliest
 STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithDefaultGlobalAutoOffsetResetEarliest
 PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowStreamsExceptionNoResetSpecified STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowStreamsExceptionNoResetSpecified PASSED

org.apache.kafka.streams.integration.ResetIntegrationTest > 
shouldNotAllowToResetWhenInputTopicAbsent STARTED

org.apache.kafka.streams.integration.ResetIntegrationTest > 
shouldNotAllowToResetWhenInputTopicAbsent SKIPPED

org.apache.kafka.streams.integration.Rese

Build failed in Jenkins: kafka-trunk-jdk10 #420

2018-08-20 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-5891; Proper handling of LogicalTypes in Cast (#4633)

--
[...truncated 1.54 MB...]

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAppendNewMetadataToLogOnAddPartitionsWhenPartitionsAdded PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestOnEndTxnWhenTransactionalIdIsNull STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestOnEndTxnWhenTransactionalIdIsNull PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithCoordinatorLoadInProgressOnEndTxnWhenCoordinatorIsLoading 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithCoordinatorLoadInProgressOnEndTxnWhenCoordinatorIsLoading 
PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithSuccessOnAddPartitionsWhenStateIsCompleteAbort STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithSuccessOnAddPartitionsWhenStateIsCompleteAbort PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldWaitForCommitToCompleteOnHandleInitPidAndExistingTransactionInPrepareAbortState
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldWaitForCommitToCompleteOnHandleInitPidAndExistingTransactionInPrepareAbortState
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithErrorsNoneOnAddPartitionWhenNoErrorsAndPartitionsTheSame 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithErrorsNoneOnAddPartitionWhenNoErrorsAndPartitionsTheSame PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestOnEndTxnWhenTransactionalIdIsEmpty STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestOnEndTxnWhenTransactionalIdIsEmpty PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestAddPartitionsToTransactionWhenTransactionalIdIsEmpty
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestAddPartitionsToTransactionWhenTransactionalIdIsEmpty
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAppendPrepareAbortToLogOnEndTxnWhenStatusIsOngoingAndResultIsAbort STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAppendPrepareAbortToLogOnEndTxnWhenStatusIsOngoingAndResultIsAbort PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAcceptInitPidAndReturnNextPidWhenTransactionalIdIsNull STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAcceptInitPidAndReturnNextPidWhenTransactionalIdIsNull PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRemoveTransactionsForPartitionOnEmigration STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRemoveTransactionsForPartitionOnEmigration PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldWaitForCommitToCompleteOnHandleInitPidAndExistingTransactionInPrepareCommitState
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldWaitForCommitToCompleteOnHandleInitPidAndExistingTransactionInPrepareCommitState
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAbortExpiredTransactionsInOngoingStateAndBumpEpoch STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAbortExpiredTransactionsInOngoingStateAndBumpEpoch PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnInvalidTxnRequestOnEndTxnRequestWhenStatusIsCompleteCommitAndResultIsNotCommit
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnInvalidTxnRequestOnEndTxnRequestWhenStatusIsCompleteCommitAndResultIsNotCommit
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnOkOnEndTxnWhenStatusIsCompleteCommitAndResultIsCommit STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnOkOnEndTxnWhenStatusIsCompleteCommitAndResultIsCommit PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionsOnAddPartitionsWhenStateIsPrepareCommit 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionsOnAddPartitionsWhenStateIsPrepareCommit 
PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldIncrementEpochAndUpdateMetadataOnHandleInitPidWhenExistingCompleteTransaction
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldIncrementEpochAndUpdateMetadataOnHandleInitPidWhenExistingCompleteTransaction
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldGenerateNewProducerIdIfEpochsExhau

Jenkins build is back to normal : kafka-trunk-jdk8 #2911

2018-08-20 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-345: Reduce multiple consumer rebalances by specifying member id

2018-08-20 Thread Boyang Chen
Thanks Matthias for the input. Sorry, I have been busy recently and haven't had time 
to update this thread. To summarize what we have come up with so far, here is a draft 
of the updated plan:


Introduce a new config called `member.name`, which is supposed to be provided 
uniquely by the consumer client. The broker will maintain a cache with 
[key: member.name, value: member.id]. A join group request with member.name set 
will be treated as using the `static-membership` strategy, and the broker will reject 
any join group request without member.name. So this coordination change will be 
differentiated from the `dynamic-membership` protocol we currently have.
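
As a purely hypothetical sketch of what a client opting into static membership might 
look like (the `member.name` key is only proposed here and does not exist in any 
released client; topic and group names are made up):

{code:java}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class StaticMemberConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "payment-processors");
        // Proposed by this KIP only -- today's clients would just log an
        // "unknown configuration" warning and ignore the key.
        props.put("member.name", "payment-processor-1");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("payments"));
        // ... poll as usual; the static-membership behavior itself would live on the broker.
    }
}
{code}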


When handling a static join group request (a toy sketch of cases 1 and 2 follows below):

  1.   The broker will check the membership to see whether this is a new member. If it 
is new, the broker allocates a unique member id, caches the mapping, and moves to the 
rebalance stage.
  2.   Following 1, if this is an existing member, the broker will not change the group 
state, and will return its cached member.id and the current assignment (unless this is 
the leader, in which case we shall trigger a rebalance).
  3.   Although Guozhang has mentioned we could rejoin with the member name and id as a 
pair, I think for the join group request it is ok to leave the member id blank, as the 
member name is the unique identifier. In the commit offset request we *must* have 
both.
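
To make cases 1 and 2 concrete, here is a toy, self-contained simulation of the 
proposed cache lookup. It is not broker code, and all names are made up:

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Toy simulation only: not GroupCoordinator code, just the cache-lookup idea above.
public class StaticJoinSketch {
    // Proposed broker-side cache: member.name -> member.id
    private final Map<String, String> memberIdByName = new HashMap<>();

    public String handleStaticJoin(String memberName) {
        String memberId = memberIdByName.get(memberName);
        if (memberId == null) {
            // Case 1: unknown member.name -> allocate a fresh member.id, cache it,
            // and move the group to the rebalance stage.
            memberId = UUID.randomUUID().toString();
            memberIdByName.put(memberName, memberId);
            System.out.println("new static member " + memberName + " -> rebalance");
        } else {
            // Case 2: known member.name -> keep the group state and hand back the
            // cached member.id (the real proposal also returns the current assignment).
            System.out.println("known static member " + memberName + " -> no rebalance");
        }
        return memberId;
    }
}
{code}

The real implementation would of course live in the group coordinator and would also 
cover the leader-rejoin case mentioned in point 2.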


When handling a commit offset request with static membership enabled, the commit 
request must each time contain both member.name and member.id in order to be 
identified as a `certified member`. If not, this means there are duplicate consumer 
members with the same member name, and the request will be rejected to guarantee 
consumption uniqueness.


When rolling restart or shutting down gracefully, the client will send a leave 
group request (static membership mode). In static membership, we will also 
define a `change-group-timeout` to hold off the rebalance initiated by the leader. So we 
will wait for all the members to rejoin the group and do exactly one rebalance, 
since all members are expected to rejoin within the timeout. If a consumer crashes, 
the join group request from the restarted consumer will be recognized as coming from an 
existing member and handled as described above; however, if the consumer 
takes longer than the session timeout to return, we shall still trigger a rebalance, 
but it could still try to catch the `change-group-timeout`. If it fails to catch 
that second timeout, its cached state on the broker will be garbage collected and 
a new rebalance will be triggered when it finally rejoins.


Also consider the switch between dynamic and static membership.

  1.  Dynamic to static: the first joiner shall revise the membership to static 
and wait for all the current members to restart, since their membership is 
still dynamic. Here our assumption is that the restart process shouldn't take a 
long time, as a long restart would break the `rebalance timeout` in whatever 
membership protocol we are using. Before the restart, all dynamic member join 
requests will be rejected.
  2.  Static to dynamic: this is more like a downgrade, which should be smooth: 
just erasing the cached mapping and waiting for the session timeout to trigger a 
rebalance should be sufficient. (Fallback to current behavior.)
  3.  Halfway switch: a corner case where some clients keep dynamic 
membership while others keep static membership. This would cause the group to 
rebalance forever without progress, because the dynamic/static states keep bouncing 
off each other. This at least guarantees that we will not let the consumer group work 
in a wrong state with half static and half dynamic members.

To guarantee correctness, we will also push the member name/id pair to the 
__consumer_offsets topic (as Matthias pointed out) and upgrade the API version; 
these details will be further discussed in the KIP.


Are there any concerns about this high-level proposal? Just want to reiterate 
the core idea of the KIP: "If the broker recognizes this consumer as an existing 
member, it shouldn't trigger a rebalance".

Thanks a lot for everyone's input! I feel this proposal is much more robust 
than the previous one!


Best,

Boyang


From: Matthias J. Sax 
Sent: Friday, August 10, 2018 2:24 AM
To: dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-345: Reduce multiple consumer rebalances by 
specifying member id

Hi,

thanks for the detailed discussion. I learned a lot about internals again :)

I like the idea of a user config `member.name` and of keeping `member.id`
internal. Also agree with Guozhang that reusing `client.id` might not
be a good idea.

To clarify the algorithm, each time we generate a new `member.id`, we
also need to update the "group membership" information (ie, mapping
[member.id, Assignment]), right? Ie, the new `member.id` replaces the
old entry in the cache.

I also think we need to preserve the `member.name -> member.id` mapping
in the `__consumer_offsets` topic. The KIP should mention this IMHO.

For changing the default value of config `leave.group.on.close`. I agree
with John, that we should not change the default config, because it
w

[jira] [Created] (KAFKA-7318) Should introduce a offset reset policy to consume only the messages after subscribing

2018-08-20 Thread joechen8...@gmail.com (JIRA)
joechen8...@gmail.com created KAFKA-7318:


 Summary: Should introduce a offset reset policy to consume only 
the messages after subscribing
 Key: KAFKA-7318
 URL: https://issues.apache.org/jira/browse/KAFKA-7318
 Project: Kafka
  Issue Type: Improvement
  Components: consumer
Affects Versions: 2.0.0, 1.1.1, 1.1.0
Reporter: joechen8...@gmail.com


In our situation, we want the consumers to consume only the messages that were 
produced after they subscribe.

Currently, Kafka supports 3 policies for auto.offset.reset, but it seems none of 
them supports the feature we want.
 * {{latest}} (the default): if a consumer subscribes to a new topic and then 
closes, and some messages are produced in the meantime, the consumer 
cannot poll these messages.
 * earliest: the consumer may consume all the messages that were produced on the topic 
before it subscribed.
 * none: not in this scope.

Before version 1.1.0, we made the consumer poll and commit right after subscribing, as 
shown below; this can mark the offset to 0 and works (enable.auto.commit is false).

 
{code:java}
consumer.subscribe(topics, consumerRebalanceListener);
if(consumer.assignment().isEmpty()) {
   consumer.poll(0);
   consumer.commitSync();
}
{code}
After upgrading the clients to >=1.1.0, this is broken. It seems it was broken by the 
fix 
[KAFKA-6397|https://github.com/apache/kafka/commit/677881afc52485aef94150be50d6258d7a340071#diff-267b7c1e68156c1301c56be63ae41dd0],
 but I am not sure about that. I then tried invoking 
consumer.position(partitions) in onPartitionsAssigned of the 
ConsumerRebalanceListener, and it works again, but it looks strange to get the 
position and then do nothing with it (a rough sketch of this workaround follows below).
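
For reference, a minimal sketch of that workaround (topic and group names are made up; 
whether relying on position() in the listener is a supported pattern is exactly the 
question of this ticket):

{code:java}
import java.time.Duration;
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class SubscribeFromNow {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "subscribe-from-now");
        props.put("enable.auto.commit", "false");
        props.put("auto.offset.reset", "latest");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("clients"), new ConsumerRebalanceListener() {
            @Override
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) { }

            @Override
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                // Resolving the position here forces the 'latest' reset to happen at
                // assignment time, before any records are handed to the application.
                for (TopicPartition tp : partitions) {
                    consumer.position(tp);
                }
            }
        });

        while (true) {
            consumer.poll(Duration.ofMillis(500))
                    .forEach(record -> System.out.println(record.value()));
        }
    }
}
{code}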

 

So we want to know whether there is a formal way to do this; introducing another 
policy for auto.offset.reset that consumes only the messages produced after the 
consumer subscribes would be perfect.

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)