Build failed in Jenkins: kafka-2.1-jdk8 #109

2019-01-19 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Log partition info when creating new request batch in controller

--
[...truncated 911.65 KB...]
kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnPrefixedResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnPrefixedResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSingleCharacterResourceAcls 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSingleCharacterResourceAcls 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testEmptyAclThrowsException 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testEmptyAclThrowsException PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAllowAccessWithCustomPrincipal STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAllowAccessWithCustomPrincipal PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnWildcardResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnWildcardResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testChangeListenerTiming STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testChangeListenerTiming PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions
 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions
 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAccessAllowedIfAllowAclExistsOnPrefixedResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAccessAllowedIfAllowAclExistsOnPrefixedResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAuthorizeWithEmptyResourceName STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAuthorizeWithEmptyResourceName PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDeleteAllAclOnPrefixedResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDeleteAllAclOnPrefixedResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnLiteralResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnLiteralResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAuthorizeThrowsOnNoneLiteralResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAuthorizeThrowsOnNoneLiteralResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testGetAclsPrincipal STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testGetAclsPrincipal PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnPrefiexedResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnPrefiexedResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventIfInterBrokerProtocolNotSet STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventIfInterBrokerProtocolNotSet PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAccessAllowedIfAllowAclExistsOnWildcardResource ST

Re: [DISCUSS] Kafka 2.2.0 in February 2019

2019-01-19 Thread Alex D
KIP-379?

On Fri, 18 Jan 2019, 22:33 Matthias J. Sax wrote:
> Just a quick update on the release.
>
>
> We have 22 KIPs atm:
>
> 81, 207, 258, 289, 313, 328, 331, 339, 341, 351, 359, 361, 367, 368,
> 371, 376, 377, 380, 386, 393, 394, 414
>
> Let me know if I missed any KIP that is targeted for the 2.2 release.
>
> 21 of those KIPs are accepted, and the vote for the last one is open and
> can be closed on time.
>
> The KIP deadline is Jan 24, so if any late KIPs are coming in, the vote
> must be started by Monday, Jan 21 at the latest, to be open for at least 72h
> and to meet the deadline.
>
> Also keep the feature freeze deadline in mind (31 Jan).
>
>
> Besides this, there are 91 open tickets and 41 tickets in progress. I
> will start to go through those tickets soon to see what will make it
> into 2.2 and what we need to defer. If you have any tickets assigned to
> yourself that are targeted for 2.2 and you know you cannot make it, I
> would appreciate it if you could update those tickets yourself to help
> streamline the release process. Thanks a lot for your support!
>
>
> -Matthias
>
>
> On 1/8/19 7:27 PM, Ismael Juma wrote:
> > Thanks for volunteering Matthias! The plan sounds good to me.
> >
> > Ismael
> >
> > On Tue, Jan 8, 2019, 1:07 PM Matthias J. Sax  wrote:
> >
> >> Hi all,
> >>
> >> I would like to propose a release plan (with me being release manager)
> >> for the next time-based feature release 2.2.0 in February.
> >>
> >> The recent Kafka release history can be found at
> >> https://cwiki.apache.org/confluence/display/KAFKA/Future+release+plan.
> >> The release plan (with open issues and planned KIPs) for 2.2.0 can be
> >> found at
> >>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=100827512
> >> .
> >>
> >>
> >> Here are the suggested dates for Apache Kafka 2.2.0 release:
> >>
> >> 1) KIP Freeze: Jan 24, 2019.
> >>
> >> (A KIP must be accepted by this date in order to be considered for this
> >> release.)
> >>
> >> 2) Feature Freeze: Jan 31, 2019
> >>
> >> Major features merged & working on stabilization, minor features have
> >> PR, release branch cut; anything not in this state will be automatically
> >> moved to the next release in JIRA.
> >>
> >> 3) Code Freeze: Feb 14, 2019
> >>
> >> The KIP and feature freeze date is about 2-3 weeks from now. Please plan
> >> accordingly for the features you want to push into Apache Kafka 2.2.0
> release.
> >>
> >> 4) Release Date: Feb 28, 2019 (tentative)
> >>
> >>
> >> -Matthias
> >>
> >>
> >
>
>


Re: [DISCUSS] Kafka 2.2.0 in February 2019

2019-01-19 Thread Dongjin Lee
Hi Matthias,

Thank you for taking the lead. KIP-389[^1] was accepted last week[^2], so
it seems it should be included.

Thanks,
Dongjin

[^1]:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-389%3A+Introduce+a+configurable+consumer+group+size+limit
[^2]:
https://lists.apache.org/thread.html/53b84cc35c93eddbc67c8d0dd86aedb93050e45016dfe0fc7b82caaa@%3Cdev.kafka.apache.org%3E

On Sat, Jan 19, 2019 at 9:04 PM Alex D  wrote:

> KIP-379?
>
> On Fri, 18 Jan 2019, 22:33 Matthias J. Sax 
> > Just a quick update on the release.
> >
> >
> > We have 22 KIP atm:
> >
> > 81, 207, 258, 289, 313, 328, 331, 339, 341, 351, 359, 361, 367, 368,
> > 371, 376, 377, 380, 386, 393, 394, 414
> >
> > Let me know if I missed any KIP that is targeted for 2.2 release.
> >
> > 21 of those KIPs are accepted, and the vote for the last one is open and
> > can be closed on time.
> >
> > The KIP deadline is Jan 24, so if any late KIPs are coming in, the vote
> > must be started latest next Monday Jan 21, to be open for at least 72h
> > and to meet the deadline.
> >
> > Also keep the feature freeze deadline in mind (31 Jan).
> >
> >
> > Besides this, there are 91 open tickets and 41 ticket in progress. I
> > will start to go through those tickets soon to see what will make it
> > into 2.2 and what we need to defer. If you have any tickets assigned to
> > yourself that are target for 2.2 and you know you cannot make it, I
> > would appreciate if you could update those ticket yourself to help
> > streamlining the release process. Thanks a lot for you support!
> >
> >
> > -Matthias
> >
> >
> > On 1/8/19 7:27 PM, Ismael Juma wrote:
> > > Thanks for volunteering Matthias! The plan sounds good to me.
> > >
> > > Ismael
> > >
> > > On Tue, Jan 8, 2019, 1:07 PM Matthias J. Sax  > wrote:
> > >
> > >> Hi all,
> > >>
> > >> I would like to propose a release plan (with me being release manager)
> > >> for the next time-based feature release 2.2.0 in February.
> > >>
> > >> The recent Kafka release history can be found at
> > >> https://cwiki.apache.org/confluence/display/KAFKA/Future+release+plan
> .
> > >> The release plan (with open issues and planned KIPs) for 2.2.0 can be
> > >> found at
> > >>
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=100827512
> > >> .
> > >>
> > >>
> > >> Here are the suggested dates for Apache Kafka 2.2.0 release:
> > >>
> > >> 1) KIP Freeze: Jan 24, 2019.
> > >>
> > >> A KIP must be accepted by this date in order to be considered for this
> > >> release)
> > >>
> > >> 2) Feature Freeze: Jan 31, 2019
> > >>
> > >> Major features merged & working on stabilization, minor features have
> > >> PR, release branch cut; anything not in this state will be
> automatically
> > >> moved to the next release in JIRA.
> > >>
> > >> 3) Code Freeze: Feb 14, 2019
> > >>
> > >> The KIP and feature freeze date is about 2-3 weeks from now. Please
> plan
> > >> accordingly for the features you want push into Apache Kafka 2.2.0
> > release.
> > >>
> > >> 4) Release Date: Feb 28, 2019 (tentative)
> > >>
> > >>
> > >> -Matthias
> > >>
> > >>
> > >
> >
> >
>


-- 
*Dongjin Lee*

*A hitchhiker in the mathematical world.*
*github:  github.com/dongjinleekr
linkedin: kr.linkedin.com/in/dongjinleekr
speakerdeck: speakerdeck.com/dongjin
*


Re: [DISCUSS] KIP-390: Add producer option to adjust compression level

2019-01-19 Thread Dongjin Lee
I just updated the draft PR. Please have a look when you are free.

Following the update, I made a minor change to the KIP document:
'compression.snappy.level' and 'compression.zstd.buffer.size' were removed,
since these options are meaningless. (Snappy doesn't support compression
levels, and zstd doesn't support a buffer size option.)
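
As an illustration, a producer using the proposed per-codec options might be
configured roughly as in the sketch below. The key 'compression.zstd.level'
is the draft name from the KIP and may still change; the level value is
arbitrary.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;

public class ZstdCompressionLevelSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("compression.type", "zstd");
        // Draft option name proposed in KIP-390; subject to change before the vote.
        props.put("compression.zstd.level", "7");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Send records as usual; batches would be compressed with zstd at level 7.
        }
    }
}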

If there is no other feedback, I hope to put this proposal to a vote by
next Monday, Jan 21.

Thanks,
Dongjin

On Tue, Jan 15, 2019 at 3:17 AM Dongjin Lee  wrote:

> I just realized that there was a missing hole in the KIP, so I fixed it.
> The draft implementation will be updated soon.
>
> In short, the proposed change did not regard the case of the topic or
> broker's 'compression.type' is 'producer'; in this case, the broker has to
> handle all kinds of the supported codec. So I added additional options
> (compression.[gzip,snappy,lz4, zstd].level, compression.[gzip,snappy,lz4,
> zstd].buffer.size) with handling routines.
>
> Please have a look when you are free.
>
> Thanks,
> Dongjin
>
> On Mon, Jan 7, 2019 at 6:23 AM Dongjin Lee  wrote:
>
>> Thanks for pointing out Ismael. It's now updated.
>>
>> Best,
>> Dongjin
>>
>> On Mon, Jan 7, 2019 at 4:36 AM Ismael Juma  wrote:
>>
>>> Thanks Dongjin. One minor suggestion: we should mention that the broker
>>> side configs are also topic configs (i.e. can be set for a given topic).
>>>
>>> Ismael
>>>
>>> On Sun, Jan 6, 2019, 10:37 AM Dongjin Lee >>
>>> > Happy new year.
>>> >
>>> > I just updated the title and contents of KIP and Jira issue, with
>>> updated
>>> > draft implementation. Now both of compression level and buffer size
>>> options
>>> > are available to producer and broker configuration. You can check the
>>> > updated KIP from modified URL:
>>> >
>>> >
>>> >
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-390%3A+Allow+fine-grained+configuration+for+compression
>>> >
>>> > Please have a look when you are free.
>>> >
>>> > Thanks,
>>> > Dongjin
>>> >
>>> > On Mon, Dec 3, 2018 at 12:50 AM Ismael Juma  wrote:
>>> >
>>> > > The updated title sounds fine to me.
>>> > >
>>> > > Ismael
>>> > >
>>> > > On Sun, Dec 2, 2018, 5:25 AM Dongjin Lee >> > >
>>> > > > Hi Ismael,
>>> > > >
>>> > > > Got it. Your direction is perfectly reasonable. I am now updating
>>> the
>>> > KIP
>>> > > > document and the implementation.
>>> > > >
>>> > > > By allowing the buffer/block size to be configurable, it would be
>>> > better
>>> > > to
>>> > > > update the title of the KIP like 'Allow fine-grained configuration
>>> for
>>> > > > compression'. Is that right?
>>> > > >
>>> > > > @Other committers:
>>> > > >
>>> > > > Is there any other opinion on allowing the buffer/block size to be
>>> > > > configurable?
>>> > > >
>>> > > > Thanks,
>>> > > > Dongjin
>>> > > >
>>> > > > On Thu, Nov 29, 2018 at 1:45 AM Ismael Juma 
>>> wrote:
>>> > > >
>>> > > > > Hi Dongjin,
>>> > > > >
>>> > > > > To clarify, I mean a broker topic config with regards to point
>>> 1. As
>>> > > you
>>> > > > > know, compression can be done by the producer and/or by the
>>> broker.
>>> > The
>>> > > > > default is for the broker to just use whatever compression was
>>> used
>>> > by
>>> > > > the
>>> > > > > producer, but this can be changed by the user on a per topic
>>> basis.
>>> > It
>>> > > > > seems like it would make sense for the configs to be . consistent
>>> > > between
>>> > > > > producer and broker.
>>> > > > >
>>> > > > > For point 2, I haven't looked at the implementation, but we
>>> could do
>>> > it
>>> > > > in
>>> > > > > the `CompressionType` enum by invoking the right constructor or
>>> > > > retrieving
>>> > > > > the default value via a constant (if defined). That's an
>>> > implementation
>>> > > > > detail and can be discussed in the PR. The more general point is
>>> to
>>> > > rely
>>> > > > on
>>> > > > > the library defaults instead of choosing one ourselves.
>>> > > > >
>>> > > > > For point 3, I'm in favour of doing that in this KIP.
>>> > > > >
>>> > > > > Ismael
>>> > > > >
>>> > > > > On Wed, Nov 28, 2018 at 7:01 AM Dongjin Lee 
>>> > > wrote:
>>> > > > >
>>> > > > > > Thank you Ismael, here are the answers:
>>> > > > > >
>>> > > > > > *1. About topic config*
>>> > > > > >
>>> > > > > > After some consideration, I concluded that topic config doesn't
>>> > need
>>> > > to
>>> > > > > > support compression.level. Here is why: since the compression
>>> is
>>> > > > > conducted
>>> > > > > > by the client, the one who can select the best compression
>>> level is
>>> > > the
>>> > > > > > client itself. Let us assume that the compression level is set
>>> at
>>> > the
>>> > > > > topic
>>> > > > > > config level. In that case, there is a possibility that the
>>> > > compression
>>> > > > > > level is not optimal for some producers. Actually, Kafka's go
>>> > client
>>> > > > also
>>> > > > > > supports compression level functionality for the producer
>>> config
>>> > > only.
>>> > > > > > 

Re: [DISCUSS] KIP-390: Add producer option to adjust compression level

2019-01-19 Thread Ismael Juma
Hi Dongjin,

For topic level, you can only have a single compression type, so the way it
was before was fine, right? The point you raise is how to set broker
defaults that vary depending on the compression type, correct?

Ismael

On Mon, Jan 14, 2019 at 10:18 AM Dongjin Lee  wrote:

> I just realized that there was a missing hole in the KIP, so I fixed it.
> The draft implementation will be updated soon.
>
> In short, the proposed change did not regard the case of the topic or
> broker's 'compression.type' is 'producer'; in this case, the broker has to
> handle all kinds of the supported codec. So I added additional options
> (compression.[gzip,snappy,lz4, zstd].level, compression.[gzip,snappy,lz4,
> zstd].buffer.size) with handling routines.
>
> Please have a look when you are free.
>
> Thanks,
> Dongjin
>
> On Mon, Jan 7, 2019 at 6:23 AM Dongjin Lee  wrote:
>
> > Thanks for pointing out Ismael. It's now updated.
> >
> > Best,
> > Dongjin
> >
> > On Mon, Jan 7, 2019 at 4:36 AM Ismael Juma  wrote:
> >
> >> Thanks Dongjin. One minor suggestion: we should mention that the broker
> >> side configs are also topic configs (i.e. can be set for a given topic).
> >>
> >> Ismael
> >>
> >> On Sun, Jan 6, 2019, 10:37 AM Dongjin Lee  >>
> >> > Happy new year.
> >> >
> >> > I just updated the title and contents of KIP and Jira issue, with
> >> updated
> >> > draft implementation. Now both of compression level and buffer size
> >> options
> >> > are available to producer and broker configuration. You can check the
> >> > updated KIP from modified URL:
> >> >
> >> >
> >> >
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-390%3A+Allow+fine-grained+configuration+for+compression
> >> >
> >> > Please have a look when you are free.
> >> >
> >> > Thanks,
> >> > Dongjin
> >> >
> >> > On Mon, Dec 3, 2018 at 12:50 AM Ismael Juma 
> wrote:
> >> >
> >> > > The updated title sounds fine to me.
> >> > >
> >> > > Ismael
> >> > >
> >> > > On Sun, Dec 2, 2018, 5:25 AM Dongjin Lee  >> > >
> >> > > > Hi Ismael,
> >> > > >
> >> > > > Got it. Your direction is perfectly reasonable. I am now updating
> >> the
> >> > KIP
> >> > > > document and the implementation.
> >> > > >
> >> > > > By allowing the buffer/block size to be configurable, it would be
> >> > better
> >> > > to
> >> > > > update the title of the KIP like 'Allow fine-grained configuration
> >> for
> >> > > > compression'. Is that right?
> >> > > >
> >> > > > @Other committers:
> >> > > >
> >> > > > Is there any other opinion on allowing the buffer/block size to be
> >> > > > configurable?
> >> > > >
> >> > > > Thanks,
> >> > > > Dongjin
> >> > > >
> >> > > > On Thu, Nov 29, 2018 at 1:45 AM Ismael Juma 
> >> wrote:
> >> > > >
> >> > > > > Hi Dongjin,
> >> > > > >
> >> > > > > To clarify, I mean a broker topic config with regards to point
> 1.
> >> As
> >> > > you
> >> > > > > know, compression can be done by the producer and/or by the
> >> broker.
> >> > The
> >> > > > > default is for the broker to just use whatever compression was
> >> used
> >> > by
> >> > > > the
> >> > > > > producer, but this can be changed by the user on a per topic
> >> basis.
> >> > It
> >> > > > > seems like it would make sense for the configs to be .
> consistent
> >> > > between
> >> > > > > producer and broker.
> >> > > > >
> >> > > > > For point 2, I haven't looked at the implementation, but we
> could
> >> do
> >> > it
> >> > > > in
> >> > > > > the `CompressionType` enum by invoking the right constructor or
> >> > > > retrieving
> >> > > > > the default value via a constant (if defined). That's an
> >> > implementation
> >> > > > > detail and can be discussed in the PR. The more general point is
> >> to
> >> > > rely
> >> > > > on
> >> > > > > the library defaults instead of choosing one ourselves.
> >> > > > >
> >> > > > > For point 3, I'm in favour of doing that in this KIP.
> >> > > > >
> >> > > > > Ismael
> >> > > > >
> >> > > > > On Wed, Nov 28, 2018 at 7:01 AM Dongjin Lee  >
> >> > > wrote:
> >> > > > >
> >> > > > > > Thank you Ismael, here are the answers:
> >> > > > > >
> >> > > > > > *1. About topic config*
> >> > > > > >
> >> > > > > > After some consideration, I concluded that topic config
> doesn't
> >> > need
> >> > > to
> >> > > > > > support compression.level. Here is why: since the compression
> is
> >> > > > > conducted
> >> > > > > > by the client, the one who can select the best compression
> >> level is
> >> > > the
> >> > > > > > client itself. Let us assume that the compression level is set
> >> at
> >> > the
> >> > > > > topic
> >> > > > > > config level. In that case, there is a possibility that the
> >> > > compression
> >> > > > > > level is not optimal for some producers. Actually, Kafka's go
> >> > client
> >> > > > also
> >> > > > > > supports compression level functionality for the producer
> config
> >> > > only.
> >> > > > > > 
> >> (wait,
> >> > do
> >> > > we
> >> > > > > > need
> >> > > > > > to 

Re: [VOTE] KIP-420: Add Single Value Fetch in Session Stores

2019-01-19 Thread Matthias J. Sax
Thanks for the KIP Guozhang!

Would it make sense to add a default implementation for the new method?
I am not sure, and I actually think it would not make sense:

 - Kafka Streams-provided stores will implement the method anyway
 - Kafka Streams relies on a proper implementation for custom stores
(because the new method is used during flush()).

Thus, it seems that not adding a default implementation and hitting a
compilation error is better than hitting a runtime error later.

However, I think it's worth mentioning this in the KIP (i.e., why a default
implementation is not added).
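
To illustrate the trade-off with a simplified, hypothetical store interface
(not the exact KIP-420 signature):

// Hypothetical interface, for illustration only -- not the KIP-420 API.
interface MySessionStore<K, V> {

    // Without a default implementation, a custom store compiled against the
    // new interface fails at compile time until the method is implemented.
    V fetchSession(K key, long earliestSessionEndTime, long latestSessionStartTime);

    // A throwing default (rejected here) would compile silently, and a custom
    // store that never overrides it would only fail later, at runtime, when
    // Kafka Streams calls the method (e.g. during flush()):
    //
    // default V fetchSession(K key, long earliestSessionEndTime,
    //                        long latestSessionStartTime) {
    //     throw new UnsupportedOperationException("fetchSession not implemented");
    // }
}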


-Matthias

On 1/18/19 10:33 PM, Guozhang Wang wrote:
> Hi Boyang,
> 
> Thanks for the feedback!
> 
> Although its direct result is a bug fix, it still changes the public APIs.
> And we cannot enlarge the scope of a voted / adopted KIP from a previous
> release, so I think a new one is worthwhile.
> 
> 
> Guozhang
> 
> On Fri, Jan 18, 2019 at 10:14 PM Boyang Chen  wrote:
> 
>> Hey Guozhang,
>>
>> this is nice catch! One question I have is that this seems more like a bug
>> fix than a new feature proposal, maybe we could just update KIP-261
>> interface and resolve the JIRA to track the change?
>>
>> Boyang
>>
>> 
>> From: Guozhang Wang 
>> Sent: Saturday, January 19, 2019 1:07 PM
>> To: dev
>> Subject: [VOTE] KIP-420: Add Single Value Fetch in Session Stores
>>
>> Hello folks,
>>
>> I'd like to calling for a last-minute vote on the following KIP:
>>
>>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-420%3A+Add+Single+Value+Fetch+in+Session+Stores
>>
>> The idea comes from debugging a long lurking bug, but as an afterthought I
>> think it should be included long time ago when we did KIP-261 [1].
>>
>> [1]
>>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-261%3A+Add+Single+Value+Fetch+in+Window+Stores
>>
>>
>> --
>> -- Guozhang
>>
> 
> 



signature.asc
Description: OpenPGP digital signature


Re: [DISCUSS] Kafka 2.2.0 in February 2019

2019-01-19 Thread Matthias J. Sax
Thanks you all!

Added 291, 379, 389, and 420 for tracking.


-Matthias


On 1/19/19 6:32 AM, Dongjin Lee wrote:
> Hi Matthias,
> 
> Thank you for taking the lead. KIP-389[^1] was accepted last week[^2], so
> it seems like to be included.
> 
> Thanks,
> Dongjin
> 
> [^1]:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-389%3A+Introduce+a+configurable+consumer+group+size+limit
> [^2]:
> https://lists.apache.org/thread.html/53b84cc35c93eddbc67c8d0dd86aedb93050e45016dfe0fc7b82caaa@%3Cdev.kafka.apache.org%3E
> 
> On Sat, Jan 19, 2019 at 9:04 PM Alex D  wrote:
> 
>> KIP-379?
>>
>> On Fri, 18 Jan 2019, 22:33 Matthias J. Sax >
>>> Just a quick update on the release.
>>>
>>>
>>> We have 22 KIP atm:
>>>
>>> 81, 207, 258, 289, 313, 328, 331, 339, 341, 351, 359, 361, 367, 368,
>>> 371, 376, 377, 380, 386, 393, 394, 414
>>>
>>> Let me know if I missed any KIP that is targeted for 2.2 release.
>>>
>>> 21 of those KIPs are accepted, and the vote for the last one is open and
>>> can be closed on time.
>>>
>>> The KIP deadline is Jan 24, so if any late KIPs are coming in, the vote
>>> must be started latest next Monday Jan 21, to be open for at least 72h
>>> and to meet the deadline.
>>>
>>> Also keep the feature freeze deadline in mind (31 Jan).
>>>
>>>
>>> Besides this, there are 91 open tickets and 41 ticket in progress. I
>>> will start to go through those tickets soon to see what will make it
>>> into 2.2 and what we need to defer. If you have any tickets assigned to
>>> yourself that are target for 2.2 and you know you cannot make it, I
>>> would appreciate if you could update those ticket yourself to help
>>> streamlining the release process. Thanks a lot for you support!
>>>
>>>
>>> -Matthias
>>>
>>>
>>> On 1/8/19 7:27 PM, Ismael Juma wrote:
 Thanks for volunteering Matthias! The plan sounds good to me.

 Ismael

 On Tue, Jan 8, 2019, 1:07 PM Matthias J. Sax >> wrote:

> Hi all,
>
> I would like to propose a release plan (with me being release manager)
> for the next time-based feature release 2.2.0 in February.
>
> The recent Kafka release history can be found at
> https://cwiki.apache.org/confluence/display/KAFKA/Future+release+plan
>> .
> The release plan (with open issues and planned KIPs) for 2.2.0 can be
> found at
>
>>>
>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=100827512
> .
>
>
> Here are the suggested dates for Apache Kafka 2.2.0 release:
>
> 1) KIP Freeze: Jan 24, 2019.
>
> A KIP must be accepted by this date in order to be considered for this
> release)
>
> 2) Feature Freeze: Jan 31, 2019
>
> Major features merged & working on stabilization, minor features have
> PR, release branch cut; anything not in this state will be
>> automatically
> moved to the next release in JIRA.
>
> 3) Code Freeze: Feb 14, 2019
>
> The KIP and feature freeze date is about 2-3 weeks from now. Please
>> plan
> accordingly for the features you want push into Apache Kafka 2.2.0
>>> release.
>
> 4) Release Date: Feb 28, 2019 (tentative)
>
>
> -Matthias
>
>

>>>
>>>
>>
> 
> 



signature.asc
Description: OpenPGP digital signature


Re: [DISCUSS] KIP-158: Kafka Connect should allow source connectors to set topic-specific settings for new topics

2019-01-19 Thread Randall Hauch
Hi,

Thanks again for all of the feedback. Based upon this feedback, I do agree
that we should indeed just solve the simple problem for the vast majority
of use cases that only require a few simple properties. I believe I was the
only person advocating for a more general and flexible solution, but that
flexibility is simply not worth the much greater complexity.

So, I've dramatically simplified the KIP as follows:

   1. The KIP no longer proposes changes to the Java API. Source connector
   implementations would be able to use this API only if they are
   willing to constrain the connectors to be deployed to AK 2.2 (or whichever
   is the first version that includes this feature) or later. I would be
   surprised if very many source connector developers would take advantage of
   this feature. Plus, removing this from the proposal eliminated a lot of
   complexity.
   2. A new `topic.creation.enable` Connect worker configuration property
   allows a Connect cluster operator to control whether or not connectors can
   use this feature. It defaults to not enabling the feature, which makes this
   fully backward compatible.
   3. The new properties specified in the source connector configurations
   are no longer rule-based and are straightforward: the default replication
   factor, number of partitions, and other topic-specific settings for any and
   all new topics created by Connect (see the sketch below). This also
   eliminates a lot of complexity of the design and should make using this
   feature much easier.
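
For concreteness, the sketch below shows roughly how this might look for a
worker and a source connector. The key names are taken from the discussion so
far and are placeholders; the KIP defines the exact property names, and the
values (and the connector class) are only illustrative.

import java.util.HashMap;
import java.util.Map;

public class TopicCreationConfigSketch {
    public static void main(String[] args) {
        // Worker config: topic creation stays disabled unless the Connect
        // cluster operator enables it explicitly.
        Map<String, String> workerConfig = new HashMap<>();
        workerConfig.put("topic.creation.enable", "true");

        // Source connector config: simple defaults for any new topics that
        // Connect creates for this connector. The connector class is hypothetical.
        Map<String, String> connectorConfig = new HashMap<>();
        connectorConfig.put("name", "my-source-connector");
        connectorConfig.put("connector.class", "com.example.MySourceConnector");
        connectorConfig.put("topic.creation.default.partitions", "5");
        connectorConfig.put("topic.creation.default.replication.factor", "3");
    }
}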

I would love for this to get into AK 2.2, which means voting needs to start
in the next few days. Short notice, but hopefully this smaller and simpler
proposal is something we can all agree to. Let's start simple and learn
from our users whether or not they need more flexibility and control.

Please respond with your thoughts. Thanks!

Best regards,

Randall

On Tue, Nov 27, 2018 at 7:36 PM Ryanne Dolan  wrote:

> Randall, have you considered something like:
>
> - introduce TopicCreationPolicy interface, with methods like
> partitionsForTopic(topic).
> - provide a DefaultTopicCreationPolicy implementation that implements the
> current behavior.
> - provide a SimpleTopicCreationPolicy that honors topic.creation.partitions
> property, etc.
> - perhaps also provide a RegexTopicCreationPolicy.
> - users can provide their own TopicCreationPolicy implementations when
> necessary.
> - support topic.creation.policy.class property in worker configs, with
> default = org.apache.kafka.connect.DefaultTopicCreationPolicy.
>
> This way, the only new configuration property is
> topic.creation.policy.class. The default behavior doesn't change from what
> we have today. If a user wants to change from the default, they can opt-in
> to one of the other policies or implement their own.
>
> Ryanne
>
> On Tue, Nov 27, 2018 at 6:31 PM Randall Hauch  wrote:
>
> > Thanks for the feedback. Some thoughts inline.
> >
> > On Tue, Nov 27, 2018 at 5:47 PM Ewen Cheslack-Postava  >
> > wrote:
> >
> > > re: AdminClient vs this proposal, one consideration is that AdminClient
> > > exposes a lot more surface area and probably a bunch of stuff we
> actually
> > > don't want Connectors to be able to do, such as deleting topics. You
> can
> > > always lock down by ACLs, but what the framework enables directly vs
> > > requiring the user to opt in via connector-specific config is an
> > important
> > > distinction.
> > >
> > > I'm not a fan of how complex the config is (same deal with
> > > transformations), and agree with Ryanne that any case requiring
> multiple
> > > rules is probably an outlier. A cleaner option for the common case
> might
> > be
> > > worth it. One option that's still aligned with the current state of the
> > KIP
> > > would be to change the default for topic.creation to a fixed default
> > value
> > > (e.g. 'default'), if that's the case turn the
> > topic.creation.default.regex
> > > default to .*, and then 99% use case would just be specifying the # of
> > > partitions with a single config and relying on cluster defaults for the
> > > rest. (I would suggest the same thing for transformations if we added a
> > > simple scripting transformation such that most use cases wouldn't need
> to
> > > compose multiple transformations.)
> > >
> >
> > I agree that any case requiring multiple rules is probably an outlier,
> and
> > I've been trying to think about how to start simple with a single case
> but
> > leave room if we really do need multiple rules in the future. I like
> Ewen's
> > suggestion a lot. IIUC, it would change the following common case:
> >
> > topic.creation=default
> > topic.creation.default.regex=.*
> > topic.creation.default.partitions=5
> >
> > into the following:
> >
> > topic.creation.default.partitions=5
> >
> > where the replication defaults to 3 and all others default to the
> brokers'
> > default topic settings. This is significantly simpler, yet it still
> allows
> > us to handle multiple rules if they'

Build failed in Jenkins: kafka-trunk-jdk8 #3329

2019-01-19 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-6455: Extend CacheFlushListener to forward timestamp (#6147)

--
[...truncated 2.27 MB...]

org.apache.kafka.connect.converters.ByteArrayConverterTest > testToConnect 
PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testFromConnect 
STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testFromConnect 
PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectInvalidValue STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectInvalidValue PASSED

org.apache.kafka.connect.converters.LongConverterTest > testBytesNullToNumber 
STARTED

org.apache.kafka.connect.converters.LongConverterTest > testBytesNullToNumber 
PASSED

org.apache.kafka.connect.converters.LongConverterTest > 
testSerializingIncorrectType STARTED

org.apache.kafka.connect.converters.LongConverterTest > 
testSerializingIncorrectType PASSED

org.apache.kafka.connect.converters.LongConverterTest > 
testDeserializingHeaderWithTooManyBytes STARTED

org.apache.kafka.connect.converters.LongConverterTest > 
testDeserializingHeaderWithTooManyBytes PASSED

org.apache.kafka.connect.converters.LongConverterTest > testNullToBytes STARTED

org.apache.kafka.connect.converters.LongConverterTest > testNullToBytes PASSED

org.apache.kafka.connect.converters.LongConverterTest > 
testSerializingIncorrectHeader STARTED

org.apache.kafka.connect.converters.LongConverterTest > 
testSerializingIncorrectHeader PASSED

org.apache.kafka.connect.converters.LongConverterTest > 
testDeserializingDataWithTooManyBytes STARTED

org.apache.kafka.connect.converters.LongConverterTest > 
testDeserializingDataWithTooManyBytes PASSED

org.apache.kafka.connect.converters.LongConverterTest > 
testConvertingSamplesToAndFromBytes STARTED

org.apache.kafka.connect.converters.LongConverterTest > 
testConvertingSamplesToAndFromBytes PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testBytesNullToNumber STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testBytesNullToNumber PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testSerializingIncorrectType STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testSerializingIncorrectType PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testDeserializingHeaderWithTooManyBytes STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testDeserializingHeaderWithTooManyBytes PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > testNullToBytes 
STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > testNullToBytes 
PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testSerializingIncorrectHeader STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testSerializingIncorrectHeader PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testDeserializingDataWithTooManyBytes STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testDeserializingDataWithTooManyBytes PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testConvertingSamplesToAndFromBytes STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testConvertingSamplesToAndFromBytes PASSED

org.apache.kafka.connect.converters.DoubleConverterTest > testBytesNullToNumber 
STARTED

org.apache.kafka.connect.converters.DoubleConverterTest > testBytesNullToNumber 
PASSED

org.apache.kafka.connect.converters.DoubleConverterTest > 
testSerializingIncorrectType STARTED

org.apache.kafka.connect.converters.DoubleConverterTest > 
testSerializingIncorrectType PASSED

org.apache.kafka.connect.converters.DoubleConverterTest > 
testDeserializingHeaderWithTooManyBytes STARTED

org.apache.kafka.connect.converters.DoubleConverterTest > 
testDeserializingHeaderWithTooManyBytes PASSED

org.apache.kafka.connect.converters.DoubleConverterTest > testNullToBytes 
STARTED

org.apache.kafka.connect.converters.DoubleConverterTest > testNullToBytes PASSED

org.apache.kafka.connect.converters.DoubleConverterTest > 
testSerializingIncorrectHeader STARTED

org.apache.kafka.connect.converters.DoubleConverterTest > 
testSerializingIncorrectHeader PASSED

org.apache.kafka.connect.converters.DoubleConverterTest > 
testDeserializingDataWithTooManyBytes STARTED

org.apache.kafka.connect.converters.DoubleConverterTest > 
testDeserializingDataWithTooManyBytes PASSED

org.apache.kafka.connect.converters.DoubleConverterTest > 
testConvertingSamplesToAndFromBytes STARTED

org.apache.kafka.connect.converters.DoubleConverterTest > 
testConvertingSamplesToAndFromBytes PASSED

org.apache.kafka.connect.integration.ExampleConnectIntegrationTest > 
testProduceConsumeConnector STARTED

org.apache.kafka.connect.integration.ExampleConnectIntegrationTest > 
testProduceConsumeConnector PAS

Build failed in Jenkins: kafka-trunk-jdk11 #226

2019-01-19 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-6455: Extend CacheFlushListener to forward timestamp (#6147)

--
[...truncated 2.27 MB...]

org.apache.kafka.connect.transforms.CastTest > castWholeRecordDefaultValue 
STARTED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordDefaultValue 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt8 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt8 PASSED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeyWithSchema 
STARTED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeyWithSchema 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt8 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt8 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanFalse STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanFalse PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidSchemaType 
STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidSchemaType 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessString PASSED

org.apache.kafka.connect.transforms.CastTest > castFieldsWithSchema STARTED

org.apache.kafka.connect.transforms.CastTest > castFieldsWithSchema PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessFieldConversion STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessFieldConversion PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidTargetType STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidTargetType PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessStringToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessStringToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > testKey STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > testKey PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessDateToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessDateToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimeToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimeToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaUnixToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaUnixToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessIdentity STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessIdentity PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaFieldConversion STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaFieldConversion PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaDateToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaDateToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaIdentity STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaIdentity PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToTime

Build failed in Jenkins: kafka-trunk-jdk11 #227

2019-01-19 Thread Apache Jenkins Server
See 


Changes:

[vahid.hashemian] Fix Javadoc of KafkaConsumer (#6155)

--
[...truncated 2.27 MB...]
org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowExceptionIfCommitIntervalMsIsNegative STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowExceptionIfCommitIntervalMsIsNegative PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideNonPrefixedCustomConfigsWithPrefixedConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideNonPrefixedCustomConfigsWithPrefixedConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
STARTED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowStreamsExceptionIfValueSerdeConfigFails STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowStreamsExceptionIfValueSerdeConfigFails PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfConsumerAutoCommitIsOverridden STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfConsumerAutoCommitIsOverridden PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowExceptionIfNotAtLestOnceOrExactlyOnce STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowExceptionIfNotAtLestOnceOrExactlyOnce PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingConsumerIsolationLevelIfEosDisabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingConsumerIsolationLevelIfEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSpecifyOptimizationWhenNotExplicitlyAddedToConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSpecifyOptimizationWhenNotExplicitlyAddedToConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetGlobalConsumerConfigs 
STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetGlobalConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedProducerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedProducerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfRestoreConsumerConfig STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfRestoreConsumerConfig PASSED

org.apache.kafka.streams.StreamsConfigTest > 
testGetRestoreConsumerConfigsWithRestoreConsumerOverridenPrefix STARTED

org.apache.kafka.streams.StreamsConfigTest > 
testGetRestoreConsumerConfigsWithRestoreConsumerOverridenPrefix PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowConfigExceptionIfMaxInFlightRequestsPerConnectionIsInvalidStringIfEosEnabled
 STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowConfigExceptionIfMaxInFlightRequestsPerConnectionIsInvalidStringIfEosEnabled
 PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultProducerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultProducerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfProducerConfig STARTED

o

Build failed in Jenkins: kafka-trunk-jdk8 #3330

2019-01-19 Thread Apache Jenkins Server
See 


Changes:

[vahid.hashemian] Fix Javadoc of KafkaConsumer (#6155)

--
[...truncated 2.27 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifie

Re: [DISCUSS] KIP-415: Incremental Cooperative Rebalancing in Kafka Connect

2019-01-19 Thread Randall Hauch
Thanks for all this work, Konstantine.

I have a question about when a member leaves. Here's the partial scenario,
repeated from the KIP:


Initial group and assignment: W1([AC0, AT1]), W2([AT2, BC0]), W3([BT1])
Config topic contains: AC0, AT1, AT2, BC0, BT1
W2 leaves
Rebalance is triggered
W1 joins with assignment: [AC0, AT1]
W3 joins with assignment: [BT1]
W1 becomes leader
W1 computes and sends assignments:
W1(delay: d, assigned: [AC0, AT1], revoked: [])
W3(delay: d, assigned: [BT1], revoked: [])
After delay d:
W1 joins with assignment: [AC0, AT1]
W3 joins with assignment: [BT1]
Rebalance is triggered
...

How does the leader/group know that all of the still-alive members have
joined and that it's safe to proceed by triggering the rebalance? Is it
because this time is capped by the heartbeat interval, or is it because the
leader sees a join from all of the workers to which it sent assignments?

Also, would it be helpful for the description of the "
scheduled.rebalance.max.delay.ms" property denote how well this might
tolerate workers that leave and rejoin? Without that context it might be
difficult for users to know what the value really means.

The KIP does a nice job showing how this change will better handle
evolution of the embedded protocol for Connect, especially with the
flatbuffers. How might the values of the "connect.protocol" property evolve
with those potential changes? Do the current literals lock us in more than
we'd like?
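
For reference, a rough sketch of how a worker might set the two properties
discussed above, using the names and literals as currently proposed in the
KIP (both could still change as the protocol evolves):

import java.util.Properties;

public class ConnectWorkerRebalanceConfigSketch {
    public static void main(String[] args) {
        Properties workerProps = new Properties();
        // Proposed in KIP-415: "eager", "compatible", and "cooperative" are
        // the literals discussed in this thread.
        workerProps.put("connect.protocol", "compatible");
        // Proposed upper bound on how long the leader waits for departed
        // workers to rejoin before reassigning their connectors and tasks.
        workerProps.put("scheduled.rebalance.max.delay.ms", "300000");
    }
}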

Finally, it's good to see the "Compatibility, Deprecation, and Migration
Plan" section discuss a plan for deprecating the current (v0) embedded
protocol in:

"Connect protocol version 0 will be marked deprecated in the next major
release of Apache Kafka (currently 3.0.0). After adding a deprecation
notice on this release, support of version 0 of the Connect protocol will
be removed on the subsequent major release of Apache Kafka (currently
4.0.0)."


One concern I have with this wording is that it leaves no room for proving
and validating the new cooperative protocol in real world use. Do we need
to instead suggest that the v0 protocol will be deprecated in a future
release after the cooperative protocol has been proven at least as good as
the V0 protocol, and removed in the first major release after that?

Once again, very nice work, Konstantine.

Best regards,

Randall


On Fri, Jan 18, 2019 at 2:00 PM Boyang Chen  wrote:

> Thanks a lot for the detailed explanation here Konstantine! I strongly
> agree that a rolling start of
> Kafka broker is not the optimal solution when we have an alternative to
> just upgrade the client. Also
> I fully understood your explanation on task shuffle minimum impact in the
> workers scenario, because
> the local storage usage is very limited.
>
> Focusing on the current KIP, a few more suggestions are:
>
> 1. I copy-pasted partial scenario on the Leader bounces  section
> d' is the remaining delay
> W1, which is the leader, leaves
> Rebalance is triggered
> W2 joins with assignment: []
> W3 joins with assignment: [BT1]
> W3 becomes leader.
> There's an active delay in progress.
> W3 computes and sends assignments:
> W2(delay: d'', assigned: [], revoked: [])
> W3(delay: d'', assigned: [BT1, AT1], revoked: [])
> after we start d' round of delayed rebalance. Why does W3 send assignment
> [BT1, AT1] instead of just [BT1] here? I guess we won't do the
> actual rebalance until the original scheduled delay d is reached right?
>
> 2. we are basically relying on the leader subscription to persist the
> group assignment across the generation and leader rejoin to trigger
> necessary rebalance. This assumption could potentially be broken with
> future upgrades of
> broker as we are discussing
> https://issues.apache.org/jira/browse/KAFKA-7728. This JIRA will be
> converted to a ready KIP by Mayuresh pretty soon,
> and our goal here is to avoid unnecessary rebalance due to leader bounces
> by specifying a field called JoinReason for broker to interpret. With that
> change in mind, I think it's worth mentioning this potential dependency
> within KIP-415 so that we don't forget to have corresponding change to adapt
> to 7728 broker upgrade in case JoinReason change happens before KIP-415.
> Am I clear on the situation explanation?
>
> 3. cooperative cmeans -> means that only Incremental Cooperative Connect
> protocol is enabled (version 1 or higher).
>
> 4. For the compatibility change, I'm wondering whether we could just use 2
> connect protocols instead of 3. Because the user knows when all the workers
> all upgraded to version 1, we could just use `compatible` for the first
> rolling bounce
> and 'cooperative' for the second bounce. Could you explain a bit why we
> need to start from `eager` stage?
>
> cc Mayuresh on this thread.
>
> Thanks,
> Boyang
>
> 
> From: Konstantine Karantasis 
> Sent: Friday, January 18, 2019 8:32 AM
> To: dev@kafka.apache.org
> Subject: Re: [DISCUSS] KIP-415: Incremental Cooperative Rebala