[jira] [Created] (KAFKA-8517) A lot of WARN messages in kafka log "Received a PartitionLeaderEpoch assignment for an epoch < latestEpoch:

2019-06-10 Thread JIRA
Jacek Żoch created KAFKA-8517:
-

 Summary: A lot of WARN messages in kafka log "Received a 
PartitionLeaderEpoch assignment for an epoch < latestEpoch: 
 Key: KAFKA-8517
 URL: https://issues.apache.org/jira/browse/KAFKA-8517
 Project: Kafka
  Issue Type: Bug
  Components: logging
Affects Versions: 0.11.0.1
 Environment: PRD
Reporter: Jacek Żoch


We are on version 2.0 now, but it was already happening in version 0.11.

In the Kafka log there are a lot of messages like:

"Received a PartitionLeaderEpoch assignment for an epoch < latestEpoch. This 
implies messages have arrived out of order."

On 23.05 we had:

Received a PartitionLeaderEpoch assignment for an epoch < latestEpoch. This 
implies messages have arrived out of order. New: {epoch:181, 
offset:23562380995}, Current: {epoch:362, offset:10365488611} for Partition: 
__consumer_offsets-30 (kafka.server.epoch.LeaderEpochFileCache)

Currently we have:

Received a PartitionLeaderEpoch assignment for an epoch < latestEpoch. This 
implies messages have arrived out of order. New: {epoch:199, 
offset:24588072027}, Current: {epoch:362, offset:10365488611} for Partition: 
__consumer_offsets-30 (kafka.server.epoch.LeaderEpochFileCache)

I think Kafka should either fix this "under the hood" or document how to fix 
it.

There is no information on how dangerous this is or how to fix it.





Re: [DISCUSS] KIP-467: Augment ProduceResponse error messaging

2019-06-10 Thread Kamal Chandraprakash
This KIP is in line with KIP-334, in which it's proposed that the consumer
will throw FaultyRecordException, carrying the corrupt/inoperative record's
offset and topic-partition, on encountering an invalid record.

In this KIP's proposed changes, you've mentioned skipping the corrupted record
while producing the RecordBatch. Will you also handle the same case while
consuming the records, by providing a callback to the consumer to either skip
or halt the processing? (This is a follow-up idea from KIP-334 which seems
relevant to this one.)

On Sat, Jun 8, 2019 at 5:29 AM Guozhang Wang  wrote:

> Bump up this thread again for whoever's interested.
>
> On Sat, May 11, 2019 at 12:34 PM Guozhang Wang  wrote:
>
> > Hello everyone,
> >
> > I'd like to start a discussion thread on this newly created KIP to
> improve
> > error communication and handling for producer response:
> >
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-467%3A+Augment+ProduceResponse+error+messaging+for+specific+culprit+records
> >
> > Thanks,
> > --
> > -- Guozhang
> >
>
>
> --
> -- Guozhang
>
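
For concreteness, a minimal sketch of the skip-or-halt handling suggested
above, assuming the FaultyRecordException proposed in KIP-334 (the exception
and its partition()/offset() accessors are proposals, not an existing client
API):

import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Poll loop applying a "skip" policy to corrupt records.
void pollSkippingCorruptRecords(KafkaConsumer<String, String> consumer) {
    while (true) {
        try {
            for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                handle(record); // application-specific processing
            }
        } catch (FaultyRecordException e) { // hypothetical, per KIP-334
            // Skip: seek past the corrupt record and keep consuming.
            // A "halt" policy would rethrow here instead.
            consumer.seek(e.partition(), e.offset() + 1);
        }
    }
}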


Re: [VOTE] KIP-434: Dead replica fetcher and log cleaner metrics

2019-06-10 Thread Kamal Chandraprakash
+1 (non-binding). Thanks for the KIP!

On Thu, Jun 6, 2019 at 8:12 PM Andrew Schofield 
wrote:

> +1 (non-binding)
>
> Andrew
>
> On 06/06/2019, 15:15, "Ryanne Dolan"  wrote:
>
> +1 (non-binding)
>
> Thanks
> Ryanne
>
> On Wed, Jun 5, 2019, 9:31 PM Satish Duggana 
> wrote:
>
> > Thanks Viktor, proposed metrics are really useful to monitor
> replication
> > status on brokers.
> >
> > +1 (non-binding)
> >
> > On Thu, Jun 6, 2019 at 2:05 AM Colin McCabe 
> wrote:
> >
> > > +1 (binding)
> > >
> > > best,
> > > Colin
> > >
> > >
> > > On Wed, Jun 5, 2019, at 03:38, Viktor Somogyi-Vass wrote:
> > > > Hi Folks,
> > > >
> > > > This vote sunk a bit, I'd like to draw some attention to this
> again in
> > > the
> > > > hope I get some feedback or votes.
> > > >
> > > > Thanks,
> > > > Viktor
> > > >
> > > > On Tue, May 7, 2019 at 4:28 PM Harsha  wrote:
> > > >
> > > > > Thanks for the kip. LGTM +1.
> > > > >
> > > > > -Harsha
> > > > >
> > > > > On Mon, Apr 29, 2019, at 8:14 AM, Viktor Somogyi-Vass wrote:
> > > > > > Hi Jason,
> > > > > >
> > > > > > I too agree this is more of a problem in older versions and
> > > therefore we
> > > > > > could backport it. Were you thinking of any specific
> versions? I
> > > guess
> > > > > the
> > > > > > 2.x and 1.x versions are definitely targets here but I was
> thinking
> > > that
> > > > > we
> > > > > > might not want to go further back.
> > > > > >
> > > > > > Viktor
> > > > > >
> > > > > > On Mon, Apr 29, 2019 at 12:55 AM Stanislav Kozlovski <
> > > > > stanis...@confluent.io>
> > > > > > wrote:
> > > > > >
> > > > > > > Thanks for the work done, Viktor! +1 (non-binding)
> > > > > > >
> > > > > > > I strongly agree with Jason that this monitoring-focused
> KIP is
> > > worth
> > > > > > > porting back to older versions. I am sure users will find
> it very
> > > > > useful
> > > > > > >
> > > > > > > Best,
> > > > > > > Stanislav
> > > > > > >
> > > > > > > On Fri, Apr 26, 2019 at 9:38 PM Jason Gustafson <
> > > ja...@confluent.io>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Thanks, that works for me. +1
> > > > > > > >
> > > > > > > > By the way, we don't normally port KIPs to older
> releases, but
> > I
> > > > > wonder
> > > > > > > if
> > > > > > > > it's worth making an exception here. From recent
> experience, it
> > > > > tends to
> > > > > > > be
> > > > > > > > the older versions that are more prone to fetcher
> failures.
> > > Thoughts?
> > > > > > > >
> > > > > > > > -Jason
> > > > > > > >
> > > > > > > > On Fri, Apr 26, 2019 at 5:18 AM Viktor Somogyi-Vass <
> > > > > > > > viktorsomo...@gmail.com>
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > > > Let me have a second thought, I'll just add the
> clientId
> > > instead to
> > > > > > > > follow
> > > > > > > > > the convention, so it'll change DeadFetcherThreadCount
> but
> > > with the
> > > > > > > > > clientId tag.
> > > > > > > > >
> > > > > > > > > On Fri, Apr 26, 2019 at 11:29 AM Viktor Somogyi-Vass <
> > > > > > > > > viktorsomo...@gmail.com> wrote:
> > > > > > > > >
> > > > > > > > > > Hi Jason,
> > > > > > > > > >
> > > > > > > > > > Yea I think it could make sense. In this case I would
> > rename
> > > the
> > > > > > > > > > DeadFetcherThreadCount to
> DeadReplicaFetcherThreadCount and
> > > > > introduce
> > > > > > > > the
> > > > > > > > > > metric you're referring to as
> DeadLogDirFetcherThreadCount.
> > > > > > > > > > I'll update the KIP to reflect this.
> > > > > > > > > >
> > > > > > > > > > Viktor
> > > > > > > > > >
> > > > > > > > > > On Thu, Apr 25, 2019 at 8:07 PM Jason Gustafson <
> > > > > ja...@confluent.io>
> > > > > > > > > > wrote:
> > > > > > > > > >
> > > > > > > > > >> Hi Viktor,
> > > > > > > > > >>
> > > > > > > > > >> This looks good. Just one question I had is whether
> we may
> > > as
> > > > > well
> > > > > > > > cover
> > > > > > > > > >> the log dir fetchers as well.
> > > > > > > > > >>
> > > > > > > > > >> Thanks,
> > > > > > > > > >> Jason
> > > > > > > > > >>
> > > > > > > > > >>
> > > > > > > > > >> On Thu, Apr 25, 2019 at 7:46 AM Viktor Somogyi-Vass
> <
> > > > > > > > > >> viktorsomo...@gmail.com>
> > > > > > > > > >> wrote:
> > > > > > > > > >>
> > > > > > > > > >> > Hi Folks,
> > > > > > > > > >> >
> > > > > > > > > >> > This thread sunk a bit but I'd like to bump it
> hoping to
> > > get
> > > > > some
> > > > > > > > > >> feedback
> > > > > > > > > >> > and/or votes.
> > > > > > > > > >> >
> > > > > > > > > >> > T
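
For anyone wanting to monitor the proposed gauges, a minimal sketch of reading
one over JMX (the MBean coordinates are an assumption based on the
DeadReplicaFetcherThreadCount naming discussed above):

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class DeadFetcherCheck {
    public static void main(String[] args) throws Exception {
        // Same-JVM lookup for brevity; a real monitor would connect to the
        // broker's JMX port via JMXConnectorFactory instead.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName gauge = new ObjectName(
            "kafka.server:type=ReplicaFetcherManager,name=DeadReplicaFetcherThreadCount,clientId=Replica");
        System.out.println("Dead replica fetcher threads: " + server.getAttribute(gauge, "Value"));
    }
}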

How spark structured streaming consumers initiated and invoked while reading multi-partitioned kafka topics?

2019-06-10 Thread Shyam P
Hi,
 Any suggestions regarding the issue below?


https://stackoverflow.com/questions/56524921/how-spark-structured-streaming-consumers-initiated-and-invoked-while-reading-mul


Thanks,
Shyam


Re: [DISCUSS] KIP-466: Add support for List serialization and deserialization

2019-06-10 Thread Development
Bump

> On May 24, 2019, at 2:09 PM, Development  wrote:
> 
> Hey,
> 
> - did we consider making the return type (i.e., ArrayList vs.
> LinkedList) configurable, or encoding it in the serialized bytes?
> 
> Not sure about this one. Could you elaborate?
> 
> - atm the size of each element is encoded individually; did we consider
> an optimization for fixed size elements (like Long) to avoid this overhead?
> 
> I cannot think of any clean way to do so. How would you see it?
> 
> Btw I resolved all your comments under PR
> 
> Best,
> Daniyar Yeralin
> 
>> On May 24, 2019, at 12:01 AM, Matthias J. Sax  wrote:
>> 
>> Thanks for the KIP. I also had a look into the PR and have two follow up
>> question:
>> 
>> 
>> - did we consider making the return type (i.e., ArrayList vs.
>> LinkedList) configurable, or encoding it in the serialized bytes?
>> 
>> - atm the size of each element is encoded individually; did we consider
>> an optimization for fixed size elements (like Long) to avoid this overhead?
>> 
>> 
>> 
>> -Matthias
>> 
>> On 5/15/19 6:05 PM, John Roesler wrote:
>>> Sounds good!
>>> 
>>> On Tue, May 14, 2019 at 9:21 AM Development  wrote:
 
 Hey,
 
 I think the proposal is finalized; no one has raised any concerns. Shall we 
 call it for a [VOTE]?
 
 Best,
 Daniyar Yeralin
 
> On May 10, 2019, at 10:17 AM, John Roesler  wrote:
> 
> Good observation, Daniyar.
> 
> Maybe we should just not implement support for serdeFrom.
> 
> We can always add it later, but I think you're right, we need some
> kind of more sophisticated support, or at least a second argument for
> the inner class.
> 
> For now, it seems like most use cases would be satisfied without
> serdeFrom(...List...)
> 
> -John
> 
> On Fri, May 10, 2019 at 8:57 AM Development  wrote:
>> 
>> Hi,
>> 
>> I was trying to add some test cases for the list serde, and it led me to 
>> this class `org.apache.kafka.common.serialization.SerializationTest`. I 
>> saw that it relies on the method 
>> `org.apache.kafka.common.serialization.Serdes.serdeFrom(Class<T> type)`
>> 
>> Now, I’m not sure how to adapt List serde for this method, since it 
>> will be a “nested class”. What is the best approach in this case?
>> 
>> I remember that in Jackson for example, one uses a TypeFactory, and 
>> constructs “collectionType” of two classes. For example, 
>> `constructCollectionType(List.class, String.class).getClass()`. I don’t 
>> think it applies here.
>> 
>> Any ideas?
>> 
>> Best,
>> Daniyar Yeralin
>> 
>>> On May 9, 2019, at 2:10 PM, Development  wrote:
>>> 
>>> Hey Sophie,
>>> 
>>> Thank you for your input. I think I’d rather finish this KIP as is, and 
>>> then open a new one for the Collections (if everyone agrees). I don’t 
>>> want to extend the current KIP-466, since most of the work is already 
>>> done for it.
>>> 
>>> Meanwhile, I’ll start adding some test cases for this new list serde 
>>> since this discussion seems to be approaching its logical end.
>>> 
>>> Best,
>>> Daniyar Yeralin
>>> 
 On May 9, 2019, at 1:35 PM, Sophie Blee-Goldman  
 wrote:
 
 Good point about serdes for other Collections. On the one hand I'd 
 guess
 that non-List Collections are probably relatively rare in practice (if
 anyone disagrees please correct me!) but on the other hand, a) even if 
 just
 a small number of people benefit I think it's worth the extra effort 
 and b)
 if we do end up needing/wanting them in the future it would save us a 
 KIP
 to just add them now. Personally I feel it would make sense to expand 
 the
 scope of this KIP a bit to include all Collections as a logical unit, 
 but
 the ROI could be low..
 
 (I know of at least one instance in the unit tests where a Set serde 
 could
 be useful, and there may be more)
 
 On Thu, May 9, 2019 at 7:27 AM Development  wrote:
 
> Hey,
> 
> I don’t see any replies. Seems like this proposal can be finalized and
> called for a vote?
> 
> Also I’ve been thinking. Do we need more serdes for other Collections?
> Like queue or set for example
> 
> Best,
> Daniyar Yeralin
> 
>> On May 8, 2019, at 2:28 PM, John Roesler  wrote:
>> 
>> Hi Daniyar,
>> 
>> No worries about the procedural stuff. Prior experience with KIPs is
>> not required :)
>> 
>> I was just trying to help you propose this stuff in a way that the
>> others will find easy to review.
>> 
>> Thanks for updating the KIP. Thanks to the others for helping out 
>> with
>> the synt
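
Two quick sketches for the open questions earlier in this thread. First, one
possible shape for the fixed-size-element optimization: when the inner type
always serializes to the same width (e.g. Long to 8 bytes), write the element
size once in a header instead of once per element (class and wire format are
illustrative, not part of the KIP):

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class FixedSizeLongListSerde {
    public byte[] serialize(List<Long> list) {
        ByteBuffer buf = ByteBuffer.allocate(4 + 4 + 8 * list.size());
        buf.putInt(list.size()); // element count
        buf.putInt(8);           // fixed element size, written once
        for (Long v : list) buf.putLong(v);
        return buf.array();
    }

    public List<Long> deserialize(byte[] data) {
        ByteBuffer buf = ByteBuffer.wrap(data);
        int count = buf.getInt();
        buf.getInt(); // fixed element size; 8 for Long
        List<Long> out = new ArrayList<>(count);
        for (int i = 0; i < count; i++) out.add(buf.getLong());
        return out;
    }
}

Second, on serdeFrom: since the single-argument serdeFrom(Class) cannot
express a nested type like List<String>, the existing two-argument overload
taking a serializer/deserializer pair sidesteps the problem (ListSerializer
and ListDeserializer are the classes proposed by this KIP; their constructor
signatures here are assumptions):

import java.util.ArrayList;
import java.util.List;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

class ListSerdeExample {
    // ListSerializer / ListDeserializer as proposed in this KIP.
    static Serde<List<String>> listSerde() {
        return Serdes.serdeFrom(
                new ListSerializer<>(new StringSerializer()),
                new ListDeserializer<>(ArrayList.class, new StringDeserializer()));
    }
}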

[jira] [Created] (KAFKA-8518) Update GitHub repo description to make it obvious that it is not a "mirror" anymore

2019-06-10 Thread Etienne Neveu (JIRA)
Etienne Neveu created KAFKA-8518:


 Summary: Update GitHub repo description to make it obvious that it 
is not a "mirror" anymore
 Key: KAFKA-8518
 URL: https://issues.apache.org/jira/browse/KAFKA-8518
 Project: Kafka
  Issue Type: Improvement
  Components: documentation
Reporter: Etienne Neveu


When I go to [https://github.com/apache/kafka], the description at the top is 
"Mirror of Apache Kafka", which makes me think that the development is done 
elsewhere and this is just a mirrored GitHub repo.

But I think the main development has now moved to GitHub, so it would be nice 
to change this description to make that more obvious.





Re: [kafka-clients] [VOTE] 2.3.0 RC1

2019-06-10 Thread Colin McCabe
Hi all,

Please take a look at the release candidate.  We haven't found any blockers yet!

best,
Colin

On Thu, Jun 6, 2019, at 14:45, Jason Gustafson wrote:
> Hi Colin,
> 
> After looking at these issues a little more, we do not believe they are
> blockers for the release.
> 
> Thanks,
> Jason
> 
> On Wed, Jun 5, 2019 at 1:45 PM Jason Gustafson  wrote:
> 
> > If we get in KAFKA-8487, we may as well do KAFKA-8386 since it causes a
> > similar problem. The patch is ready to be merged.
> >
> > -Jason
> >
> > On Wed, Jun 5, 2019 at 1:29 PM Guozhang Wang  wrote:
> >
> >> Hello Colin,
> >>
> >> I caught an issue which would affect KIP-345 (
> >> https://issues.apache.org/jira/browse/KAFKA-8487) and hence may need to
> >> be
> >> considered a blocker for this release.
> >>
> >>
> >> Guozhang
> >>
> >> On Tue, Jun 4, 2019 at 11:31 PM Colin McCabe  wrote:
> >>
> >> > On Tue, Jun 4, 2019, at 23:17, Colin McCabe wrote:
> >> > > Hi all,
> >> > >
> >> > > This is the first candidate for the release of Apache Kafka 2.3.0.
> >> > >
> >> > > This release includes many new features, including:
> >> > > * Support for incremental cooperative rebalancing
> >> > > * An in-memory session store and window store for Kafka Streams
> >> > > * An API for allowing users to determine what operations they are
> >> > authorized to perform on topics.
> >> > > * A new broker start time metric.
> >> > > * The ability for JMXTool to connect to secured RMI ports.
> >> > > * A new and improved API for setting topic and broker configurations.
> >> > > * Support for non-key joining in KTable
> >> >
> >> > One small correction here: support for non-key joining (KIP-213) slipped
> >> > from 2.3 due to time constraints.
> >> >
> >> > Regards,
> >> > Colin
> >> >
> >> > > * The ability to track the number of partitions which are under their
> >> > min ISR count.
> >> > > * The ability for consumers to opt out of automatic topic creation,
> >> even
> >> > when it is enabled on the broker.
> >> > > * The ability to use external configuration stores.
> >> > > * Improved replica fetcher behavior when errors are encountered.
> >> > >
> >> > > Check out the release notes for the 2.3.0 release here:
> >> > > https://home.apache.org/~cmccabe/kafka-2.3.0-rc1/RELEASE_NOTES.html
> >> > >
> >> > > The vote will go until Friday, June 7th, or until we create another
> >> > > RC.
> >> > >
> >> > > * Kafka's KEYS file containing PGP keys we use to sign the release can
> >> > be found here:
> >> > > https://kafka.apache.org/KEYS
> >> > >
> >> > > * The release artifacts to be voted upon (source and binary) are here:
> >> > > https://home.apache.org/~cmccabe/kafka-2.3.0-rc1/
> >> > >
> >> > > * Maven artifacts to be voted upon:
> >> > >
> >> https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >> > >
> >> > > * Javadoc:
> >> > > https://home.apache.org/~cmccabe/kafka-2.3.0-rc1/javadoc/
> >> > >
> >> > > * The tag to be voted upon (off the 2.3 branch) is the 2.3.0 tag:
> >> > > https://github.com/apache/kafka/releases/tag/2.3.0-rc1
> >> > >
> >> > > thanks,
> >> > > Colin
> >> > >
> >> > >
> >> >
> >> >
> >>
> >>
> >> --
> >> -- Guozhang
> >>
> >
>


[jira] [Resolved] (KAFKA-7315) Streams serialization docs contain a broken link for Avro

2019-06-10 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-7315.

Resolution: Fixed

> Streams serialization docs contain a broken link for Avro
> -
>
> Key: KAFKA-7315
> URL: https://issues.apache.org/jira/browse/KAFKA-7315
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: John Roesler
>Assignee: Victoria Bialas
>Priority: Major
>  Labels: documentation, newbie
>
> https://kafka.apache.org/documentation/streams/developer-guide/datatypes.html#avro





Re: [VOTE] KIP-476: Add Java AdminClient interface

2019-06-10 Thread Andy Coates
`AdminClient` would be deprecated purely because it would no longer serve
any purpose and would be virtually empty, getting all of its implementation
from the new interface. It would be nice to remove this from the API at the
next major version bump, hence the need to deprecate.

`AdminClient.create()` would return what it does today, (so not a breaking
change).

On Tue, 4 Jun 2019 at 22:24, Ryanne Dolan  wrote:

> > The existing `AdminClient` will be marked as deprecated.
>
> What's the reasoning behind this? I'm fine with the other changes, but
> would prefer to keep the existing public API intact if it's not hurting
> anything.
>
> Also, what will AdminClient.create() return? Would it be a breaking change?
>
> Ryanne
>
> On Tue, Jun 4, 2019, 11:17 AM Andy Coates  wrote:
>
>> Hi folks
>>
>> As there's been no chatter on this KIP I'm assuming it's non-contentious,
>> (or just boring), hence I'd like to call a vote for KIP-476:
>>
>>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-476%3A+Add+Java+AdminClient+Interface
>>
>> Thanks,
>>
>> Andy
>>
>


Re: [VOTE] KIP-476: Add Java AdminClient interface

2019-06-10 Thread Matthias J. Sax
Hmmm...

So the new interface returns an instance of a class that implements the
interface. This sounds a little bit like an anti-pattern? Shouldn't
interfaces actually not know anything about classes that implement the
interface?
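
For reference, the shape in question is roughly this (a sketch using the
names from the KIP; the exact factory call into the concrete client is an
assumption on my side):

import java.util.Map;

// Sketch only: the static create() factory is what makes the interface
// aware of a class that implements it.
public interface Admin extends AutoCloseable {

    static Admin create(Map<String, Object> conf) {
        // Delegates to the existing concrete client; the exact call is
        // illustrative.
        return KafkaAdminClient.createInternal(conf);
    }

    // ... the existing AdminClient methods would move here ...
}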


-Matthias

On 6/10/19 11:22 AM, Andy Coates wrote:
> `AdminClient` would be deprecated purely because it would no longer serve
> any purpose and would be virtually empty, getting all of its implementation
> from the new interface. It would be nice to remove this from the API at the
> next major version bump, hence the need to deprecate.
> 
> `AdminClient.create()` would return what it does today, (so not a breaking
> change).
> 
> On Tue, 4 Jun 2019 at 22:24, Ryanne Dolan  wrote:
> 
>>> The existing `AdminClient` will be marked as deprecated.
>>
>> What's the reasoning behind this? I'm fine with the other changes, but
>> would prefer to keep the existing public API intact if it's not hurting
>> anything.
>>
>> Also, what will AdminClient.create() return? Would it be a breaking change?
>>
>> Ryanne
>>
>> On Tue, Jun 4, 2019, 11:17 AM Andy Coates  wrote:
>>
>>> Hi folks
>>>
>>> As there's been no chatter on this KIP I'm assuming it's non-contentious,
>>> (or just boring), hence I'd like to call a vote for KIP-476:
>>>
>>>
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-476%3A+Add+Java+AdminClient+Interface
>>>
>>> Thanks,
>>>
>>> Andy
>>>
>>
> 





[jira] [Created] (KAFKA-8519) Trogdor should support network degradation

2019-06-10 Thread David Arthur (JIRA)
David Arthur created KAFKA-8519:
---

 Summary: Trogdor should support network degradation
 Key: KAFKA-8519
 URL: https://issues.apache.org/jira/browse/KAFKA-8519
 Project: Kafka
  Issue Type: Improvement
  Components: system tests
Reporter: David Arthur


Trogdor should allow us to simulate degraded networks, similar to the network 
partition spec.





Jenkins build is back to normal : kafka-1.0-jdk7 #273

2019-06-10 Thread Apache Jenkins Server
See 




Re: [kafka-clients] [VOTE] 2.3.0 RC1

2019-06-10 Thread Boyang Chen
Hey Colin and Jason,

I would like to mention that https://issues.apache.org/jira/browse/KAFKA-8500 
could also potentially be a blocker. The PR is ready for review: 
https://github.com/apache/kafka/pull/6899

Boyang


From: Colin McCabe 
Sent: Tuesday, June 11, 2019 12:48 AM
To: dev@kafka.apache.org
Subject: Re: [kafka-clients] [VOTE] 2.3.0 RC1

Hi all,

Please take a look at the release candidate.  We haven't found any blockers yet!

best,
Colin

On Thu, Jun 6, 2019, at 14:45, Jason Gustafson wrote:
> Hi Colin,
>
> After looking at these issues a little more, we do not believe they are
> blockers for the release.
>
> Thanks,
> Jason
>
> On Wed, Jun 5, 2019 at 1:45 PM Jason Gustafson  wrote:
>
> > If we get in KAFKA-8487, we may as well do KAFKA-8386 since it causes a
> > similar problem. The patch is ready to be merged.
> >
> > -Jason
> >
> > On Wed, Jun 5, 2019 at 1:29 PM Guozhang Wang  wrote:
> >
> >> Hello Colin,
> >>
> >> I caught an issue which would affect KIP-345 (
> >> https://issues.apache.org/jira/browse/KAFKA-8487) and hence may need to
> >> be
> >> considered a blocker for this release.
> >>
> >>
> >> Guozhang
> >>
> >> On Tue, Jun 4, 2019 at 11:31 PM Colin McCabe  wrote:
> >>
> >> > On Tue, Jun 4, 2019, at 23:17, Colin McCabe wrote:
> >> > > Hi all,
> >> > >
> >> > > This is the first candidate for the release of Apache Kafka 2.3.0.
> >> > >
> >> > > This release includes many new features, including:
> >> > > * Support for incremental cooperative rebalancing
> >> > > * An in-memory session store and window store for Kafka Streams
> >> > > * An API for allowing users to determine what operations they are
> >> > authorized to perform on topics.
> >> > > * A new broker start time metric.
> >> > > * The ability for JMXTool to connect to secured RMI ports.
> >> > > * A new and improved API for setting topic and broker configurations.
> >> > > * Support for non-key joining in KTable
> >> >
> >> > One small correction here: support for non-key joining (KIP-213) slipped
> >> > from 2.3 due to time constraints.
> >> >
> >> > Regards,
> >> > Colin
> >> >
> >> > > * The ability to track the number of partitions which are under their
> >> > min ISR count.
> >> > > * The ability for consumers to opt out of automatic topic creation,
> >> even
> >> > when it is enabled on the broker.
> >> > > * The ability to use external configuration stores.
> >> > > * Improved replica fetcher behavior when errors are encountered.
> >> > >
> >> > > Check out the release notes for the 2.3.0 release here:
> >> > > https://home.apache.org/~cmccabe/kafka-2.3.0-rc1/RELEASE_NOTES.html
> >> > >
> >> > > The vote will go until Friday, June 7th, or until we create another
> >> > > RC.
> >> > >
> >> > > * Kafka's KEYS file containing PGP keys we use to sign the release can
> >> > be found here:
> >> > > https://kafka.apache.org/KEYS
> >> > >
> >> > > * The release artifacts to be voted upon (source and binary) are here:
> >> > > https://home.apache.org/~cmccabe/kafka-2.3.0-rc1/
> >> > >
> >> > > * Maven artifacts to be voted upon:
> >> > >
> >> https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >> > >
> >> > > * Javadoc:
> >> > > https://home.apache.org/~cmccabe/kafka-2.3.0-rc1/javadoc/
> >> > >
> >> > > * The tag to be voted upon (off the 2.3 branch) is the 2.3.0 tag:
> >> > > https://github.com/apache/kafka/releases/tag/2.3.0-rc1
> >> > >
> >> > > thanks,
> >> > > Colin
> >> > >
> >> > >
> >> >
> >> >
> >>
> >>
> >> --
> >> -- Guozhang
> >>
> >
>


Build failed in Jenkins: kafka-trunk-jdk11 #616

2019-06-10 Thread Apache Jenkins Server
See 


Changes:

[matthias] KAFKA-7315 DOCS update TOC internal links serdes all versions (#6875)

--
[...truncated 2.50 MB...]
org.apache.kafka.connect.file.FileStreamSinkConnectorTest > 
testConnectorConfigValidation STARTED

org.apache.kafka.connect.file.FileStreamSinkConnectorTest > 
testConnectorConfigValidation PASSED

org.apache.kafka.connect.file.FileStreamSinkConnectorTest > testSinkTasksStdout 
STARTED

org.apache.kafka.connect.file.FileStreamSinkConnectorTest > testSinkTasksStdout 
PASSED

org.apache.kafka.connect.file.FileStreamSourceConnectorTest > testBlankTopic 
STARTED

org.apache.kafka.connect.file.FileStreamSourceConnectorTest > testBlankTopic 
PASSED

org.apache.kafka.connect.file.FileStreamSourceConnectorTest > testSourceTasks 
STARTED

org.apache.kafka.connect.file.FileStreamSourceConnectorTest > testSourceTasks 
PASSED

org.apache.kafka.connect.file.FileStreamSourceConnectorTest > testMissingTopic 
STARTED

org.apache.kafka.connect.file.FileStreamSourceConnectorTest > testMissingTopic 
PASSED

org.apache.kafka.connect.file.FileStreamSourceConnectorTest > 
testSourceTasksStdin STARTED

org.apache.kafka.connect.file.FileStreamSourceConnectorTest > 
testSourceTasksStdin PASSED

org.apache.kafka.connect.file.FileStreamSourceConnectorTest > testTaskClass 
STARTED

org.apache.kafka.connect.file.FileStreamSourceConnectorTest > testTaskClass 
PASSED

org.apache.kafka.connect.file.FileStreamSourceConnectorTest > 
testInvalidBatchSize STARTED

org.apache.kafka.connect.file.FileStreamSourceConnectorTest > 
testInvalidBatchSize PASSED

org.apache.kafka.connect.file.FileStreamSourceConnectorTest > 
testConnectorConfigValidation STARTED

org.apache.kafka.connect.file.FileStreamSourceConnectorTest > 
testConnectorConfigValidation PASSED

org.apache.kafka.connect.file.FileStreamSourceConnectorTest > 
testMultipleSourcesInvalid STARTED

org.apache.kafka.connect.file.FileStreamSourceConnectorTest > 
testMultipleSourcesInvalid PASSED

> Task :streams:streams-scala:test

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionWithNamedRepartitionTopic STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionWithNamedRepartitionTopic PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionJava STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionJava PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegion STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegion PASSED

org.apache.kafka.streams.scala.kstream.SuppressedTest > 
Suppressed.untilWindowCloses should produce the correct suppression STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > 
Suppressed.untilWindowCloses should produce the correct suppression PASSED

org.apache.kafka.streams.scala.kstream.SuppressedTest > 
Suppressed.untilTimeLimit should produce the correct suppression STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > 
Suppressed.untilTimeLimit should produce the correct suppression PASSED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig.maxRecords 
should produce the correct buffer config STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig.maxRecords 
should produce the correct buffer config PASSED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig.maxBytes 
should produce the correct buffer config STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig.maxBytes 
should produce the correct buffer config PASSED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig.unbounded 
should produce the correct buffer config STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig.unbounded 
should produce the correct buffer config PASSED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig should 
support very long chains of factory methods STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig should 
support very long chains of factory methods PASSED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed should 
create a Consumed with Serdes STARTED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed should 
create a Consumed with Serdes PASSED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed with 
timestampExtractor and resetPolicy should create a Consumed with Serdes, 
timestampExtractor and resetPolicy STARTED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create

[jira] [Created] (KAFKA-8520) TimeoutException in client side doesn't have stack trace

2019-06-10 Thread Shixiong Zhu (JIRA)
Shixiong Zhu created KAFKA-8520:
---

 Summary: TimeoutException in client side doesn't have stack trace
 Key: KAFKA-8520
 URL: https://issues.apache.org/jira/browse/KAFKA-8520
 Project: Kafka
  Issue Type: New Feature
  Components: clients
Reporter: Shixiong Zhu


When a TimeoutException is thrown directly on the client side, it doesn't have 
any stack trace because it inherits from 
"org.apache.kafka.common.errors.ApiException". This makes it hard for users to 
debug timeout issues, because it's hard to know which line in the user code 
threw this TimeoutException.

It would be great to add a new client-side TimeoutException which contains 
the stack trace.
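
For context, the stack trace is missing because ApiException overrides
Throwable.fillInStackTrace() to skip trace capture. A minimal sketch of the
suggested client-side variant (the class name is hypothetical):

import org.apache.kafka.common.errors.TimeoutException;

// Hypothetical: a client-side timeout exception that re-enables the stack
// trace capture which the inherited fillInStackTrace() override disables.
public class ClientTimeoutException extends TimeoutException {

    public ClientTimeoutException(String message) {
        super(message);
        // Capture the current stack explicitly, since the inherited
        // fillInStackTrace() is a no-op.
        setStackTrace(new Throwable().getStackTrace());
    }
}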





Re: [VOTE] KIP-476: Add Java AdminClient interface

2019-06-10 Thread Colin McCabe
Hi Andy,

This is a big change, and I don't think there has been a lot of discussion 
about the pros and cons.  What specific benefits do we get from transitioning 
to using an interface rather than an abstract class?

If we are serious about doing this, would it be cleaner to just change 
AdminClient from an abstract class to an interface in Kafka 3.0?  It would 
break binary compatibility, but you have to break binary compatibility in any 
case to get what you want here (no abstract class).  And it would have the 
advantage of not creating a lot of churn in the codebase as people replaced 
"AdminClient" with "Admin" all over the place.  What do you think?

On a related note, one problem I've seen is that people will subclass 
AdminClient for testing.  Then, every time Kafka adds a new API, we add a new 
abstract method to the base class, which breaks compilation for them.  Their 
test classes would have been fine simply failing when the new API was called.  
So perhaps one useful class would be a class that implements the AdminClient 
API, and throws UnimplementedException for every method.  The test classes 
could subclass this class and never have to worry about new methods getting 
added again.

Another pattern I've seen is people wanting to implement a class which is 
similar to KafkaAdminClient in every way, except for the behavior of close().  
Specifically, people want to implement reference counting in order to reuse 
AdminClient instances.  So they start by implementing essentially a delegating 
class, which just forwards every method to an underlying AdminClient instance.  
But this has the same problem that it breaks every time the upstream project 
adds an API.  In order to support this, we could have an official 
DelegatingAdminClient base class that forwarded every method to an underlying 
AdminClient instance.  Then the client code could override the methods they 
needed, like close.
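
A minimal sketch of that delegating shape (the class and its reference-counted 
close() are illustrative, not an existing Kafka API):

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.ListTopicsResult;

// Forward everything to an underlying AdminClient and override only the
// behavior that differs (here, a reference-counted close()).
public class RefCountedAdminClient implements AutoCloseable {
    private final AdminClient delegate;
    private int refCount = 1;

    public RefCountedAdminClient(AdminClient delegate) {
        this.delegate = delegate;
    }

    // One forwarding method shown; a real delegate would forward them all,
    // which is exactly what breaks when the upstream API grows.
    public ListTopicsResult listTopics() {
        return delegate.listTopics();
    }

    public synchronized void retain() {
        refCount++;
    }

    @Override
    public synchronized void close() {
        if (--refCount == 0) {
            delegate.close();
        }
    }
}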

best,
Colin


On Tue, Jun 4, 2019, at 09:17, Andy Coates wrote:
> Hi folks
> 
> As there's been no chatter on this KIP I'm assuming it's non-contentious,
> (or just boring), hence I'd like to call a vote for KIP-476:
> 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-476%3A+Add+Java+AdminClient+Interface
> 
> Thanks,
> 
> Andy
>


[jira] [Created] (KAFKA-8521) Client unable to get a complete transaction set of messages using a single poll call

2019-06-10 Thread Boris Rybalkin (JIRA)
Boris Rybalkin created KAFKA-8521:
-

 Summary: Client unable to get a complete transaction set of 
messages using a single poll call
 Key: KAFKA-8521
 URL: https://issues.apache.org/jira/browse/KAFKA-8521
 Project: Kafka
  Issue Type: Bug
  Components: clients
Reporter: Boris Rybalkin


I am unable to reliably get the complete set of messages from a successful 
transaction on the client side.

What I get instead sometimes is a subset of a complete transaction in one poll 
and the rest of the transaction in a second poll.

Am I right that poll should always give me the full transaction message set if 
the transaction was committed and the client uses the read_committed isolation 
level, or not?

 

Pseudo code:

Server:

begin transaction
send(1, "test1")
send(2, "test2")
commit transaction

Client (isolation level: read_committed):

poll -> [ 1 ]
poll -> [ 2 ]

What I want is:

poll -> [ 1, 2 ]

Also, what I observed: when the keys of the messages in the transaction are 
the same, I always get a complete message set in one poll, but when the keys 
inside the transaction are very different, the transaction is usually spread 
across multiple polls.

I can provide a working example if you think that this is a bug and not a 
misunderstanding of how poll works.
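
For reference, a runnable sketch of the scenario above (topic name, keys, and 
bootstrap address are placeholders):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TxPollExample {
    public static void main(String[] args) {
        Properties pp = new Properties();
        pp.put("bootstrap.servers", "localhost:9092");
        pp.put("transactional.id", "tx-example");
        pp.put("key.serializer", "org.apache.kafka.common.serialization.IntegerSerializer");
        pp.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<Integer, String> producer = new KafkaProducer<>(pp)) {
            producer.initTransactions();
            producer.beginTransaction();
            producer.send(new ProducerRecord<>("tx-topic", 1, "test1"));
            producer.send(new ProducerRecord<>("tx-topic", 2, "test2"));
            producer.commitTransaction();
        }

        Properties cp = new Properties();
        cp.put("bootstrap.servers", "localhost:9092");
        cp.put("group.id", "tx-reader");
        cp.put("isolation.level", "read_committed"); // only committed records are returned
        cp.put("auto.offset.reset", "earliest");
        cp.put("key.deserializer", "org.apache.kafka.common.serialization.IntegerDeserializer");
        cp.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(cp)) {
            consumer.subscribe(Collections.singletonList("tx-topic"));
            for (int i = 0; i < 10; i++) {
                // Note: read_committed guarantees no uncommitted records are
                // returned, but it does not guarantee that a whole transaction
                // arrives in a single poll, especially across partitions.
                for (ConsumerRecord<Integer, String> r : consumer.poll(Duration.ofSeconds(1))) {
                    System.out.println(r.key() + " -> " + r.value());
                }
            }
        }
    }
}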





Build failed in Jenkins: kafka-2.0-jdk8 #275

2019-06-10 Thread Apache Jenkins Server
See 


Changes:

[matthias] KAFKA-7315 DOCS update TOC internal links serdes all versions (#6875)

--
[...truncated 889.68 KB...]

kafka.controller.ReplicaStateMachineTest > 
testInvalidOfflineReplicaToNonexistentReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidOnlineReplicaToReplicaDeletionIneligibleTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidOnlineReplicaToReplicaDeletionIneligibleTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionSuccessfulToReplicaDeletionStartedTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionSuccessfulToReplicaDeletionStartedTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNewReplicaToReplicaDeletionSuccessfulTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNewReplicaToReplicaDeletionSuccessfulTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionIneligibleToReplicaDeletionStartedTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionIneligibleToReplicaDeletionStartedTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionStartedToOfflineReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionStartedToOfflineReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNewReplicaToReplicaDeletionStartedTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNewReplicaToReplicaDeletionStartedTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testOnlineReplicaToOnlineReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testOnlineReplicaToOnlineReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNewReplicaToReplicaDeletionIneligibleTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNewReplicaToReplicaDeletionIneligibleTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNonexistentReplicaToOfflineReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNonexistentReplicaToOfflineReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testReplicaDeletionStartedToReplicaDeletionSuccessfulTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testReplicaDeletionStartedToReplicaDeletionSuccessfulTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNonexistentReplicaToOnlineReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNonexistentReplicaToOnlineReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidOnlineReplicaToNewReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidOnlineReplicaToNewReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testReplicaDeletionStartedToReplicaDeletionIneligibleTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testReplicaDeletionStartedToReplicaDeletionIneligibleTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionSuccessfulToOfflineReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionSuccessfulToOfflineReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionIneligibleToNewReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionIneligibleToNewReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testReplicaDeletionIneligibleToOnlineReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testReplicaDeletionIneligibleToOnlineReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionStartedToNewReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionStartedToNewReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNonexistentReplicaToReplicaDeletionSuccessfulTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNonexistentReplicaToReplicaDeletionSuccessfulTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testNewReplicaToOnlineReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testNewReplicaToOnlineReplicaTransition PASSED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException 
STARTED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException PASSED

kafka.controller.ControllerEventManagerTest > testEventThatThrowsException 
STARTED

kafka.controller.ControllerEventManagerTest > testEventThatThrowsException 
PASSED

kafka.controller.ControllerEventManagerTest > testSuccessfulEvent STARTED

kafka.controller.Cont

Jenkins build is back to normal : kafka-2.1-jdk8 #203

2019-06-10 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-2.3-jdk8 #47

2019-06-10 Thread Apache Jenkins Server
See 


Changes:

[matthias] KAFKA-7315 DOCS update TOC internal links serdes all versions (#6875)

--
[...truncated 2.76 MB...]

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask STARTED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask PASSED

kafka.utils.SchedulerTest > testNonPeriodicTask STARTED

kafka.utils.SchedulerTest > testNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testRestart STARTED

kafka.utils.SchedulerTest > testRestart PASSED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler STARTED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler PASSED

kafka.utils.SchedulerTest > testPeriodicTask STARTED

kafka.utils.SchedulerTest > testPeriodicTask PASSED

kafka.utils.JsonTest > testParseToWithInvalidJson STARTED

kafka.utils.JsonTest > testParseToWithInvalidJson PASSED

kafka.utils.JsonTest > testParseTo STARTED

kafka.utils.JsonTest > testParseTo PASSED

kafka.utils.JsonTest > testJsonParse STARTED

kafka.utils.JsonTest > testJsonParse PASSED

kafka.utils.JsonTest > testLegacyEncodeAsString STARTED

kafka.utils.JsonTest > testLegacyEncodeAsString PASSED

kafka.utils.JsonTest > testEncodeAsBytes STARTED

kafka.utils.JsonTest > testEncodeAsBytes PASSED

kafka.utils.JsonTest > testEncodeAsString STARTED

kafka.utils.JsonTest > testEncodeAsString PASSED

kafka.utils.timer.TimerTaskListTest > testAll STARTED

kafka.utils.timer.TimerTaskListTest > testAll PASSED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask STARTED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask PASSED

kafka.utils.timer.TimerTest > testTaskExpiration STARTED

kafka.utils.timer.TimerTest > testTaskExpiration PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr STARTED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArg STARTED

kafka.utils.CommandLineUtilsTest > testParseEmptyArg PASSED

kafka.utils.CommandLineUtilsTest > testMaybeMergeOptionsOverwriteExisting 
STARTED

kafka.utils.CommandLineUtilsTest > testMaybeMergeOptionsOverwriteExisting PASSED

kafka.utils.CommandLineUtilsTest > testParseSingleArg STARTED

kafka.utils.CommandLineUtilsTest > testParseSingleArg PASSED

kafka.utils.CommandLineUtilsTest > testParseArgs STARTED

kafka.utils.CommandLineUtilsTest > testParseArgs PASSED

kafka.utils.CommandLineUtilsTest > testParseArgsWithMultipleDelimiters STARTED

kafka.utils.CommandLineUtilsTest > testParseArgsWithMultipleDelimiters PASSED

kafka.utils.CommandLineUtilsTest > testMaybeMergeOptionsDefaultValueIfNotExist 
STARTED

kafka.utils.CommandLineUtilsTest > testMaybeMergeOptionsDefaultValueIfNotExist 
PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgWithNoDelimiter STARTED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgWithNoDelimiter PASSED

kafka.utils.CommandLineUtilsTest > 
testMaybeMergeOptionsDefaultOverwriteExisting STARTED

kafka.utils.CommandLineUtilsTest > 
testMaybeMergeOptionsDefaultOverwriteExisting PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid STARTED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid PASSED

kafka.utils.CommandLineUtilsTest > testMaybeMergeOptionsNotOverwriteExisting 
STARTED

kafka.utils.CommandLineUtilsTest > testMaybeMergeOptionsNotOverwriteExisting 
PASSED

kafka.utils.ShutdownableThreadTest > testShutdownWhenCalledAfterThreadStart 
STARTED

kafka.utils.ShutdownableThreadTest > testShutdownWhenCalledAfterThreadStart 
PASSED

kafka.utils.json.JsonValueTest > testJsonObjectIterator STARTED

kafka.utils.json.JsonValueTest > testJsonObjectIterator PASSED

kafka.utils.json.JsonValueTest > testDecodeLong STARTED

kafka.utils.json.JsonValueTest > testDecodeLong PASSED

kafka.utils.json.JsonValueTest > testAsJsonObject STARTED

kafka.utils.json.JsonValueTest > testAsJsonObject PASSED

kafka.utils.json.JsonValueTest > testDecodeDouble STARTED

kafka.utils.json.JsonValueTest > testDecodeDouble PASSED

kafka.utils.json.JsonValueTest > testDecodeOption STARTED

kafka.utils.json.JsonValueTest > testDecodeOption PASSED

kafka.utils.json.JsonValueTest > testDecodeString STARTED

kafka.utils.json.JsonValueTest > testDecodeString PASSED

kafka.utils.json.JsonValueTest > testJsonValueToString STARTED

kafka.utils.json.JsonValueTest > testJsonValueToString PASSED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArray STARTED

kafka.utils.json.JsonValueTest > testAsJsonArray PASSED

kafka.utils.json.JsonValueTest > testJsonValueHashCode STARTED

kafka.utils.json.JsonValueTest > testJsonValueHashCod

Build failed in Jenkins: kafka-trunk-jdk8 #3713

2019-06-10 Thread Apache Jenkins Server
See 


Changes:

[matthias] KAFKA-7315 DOCS update TOC internal links serdes all versions (#6875)

--
[...truncated 4.76 MB...]
org.apache.kafka.common.network.SslTransportLayerTest > 
testClientEndpointNotValidated PASSED

org.apache.kafka.common.network.SslTransportLayerTest > testUnsupportedCiphers 
STARTED

org.apache.kafka.common.network.SslTransportLayerTest > testUnsupportedCiphers 
PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testUnsupportedTLSVersion STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testUnsupportedTLSVersion PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testIOExceptionsDuringHandshakeRead STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testIOExceptionsDuringHandshakeRead PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testClientAuthenticationRequiredNotProvided STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testClientAuthenticationRequiredNotProvided PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testGracefulRemoteCloseDuringHandshakeWrite STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testGracefulRemoteCloseDuringHandshakeWrite PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testClientAuthenticationRequestedNotProvided STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testClientAuthenticationRequestedNotProvided PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testIOExceptionsDuringHandshakeWrite STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testIOExceptionsDuringHandshakeWrite PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testInvalidKeystorePassword STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testInvalidKeystorePassword PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testClientAuthenticationDisabledNotProvided STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testClientAuthenticationDisabledNotProvided PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testValidEndpointIdentificationSanDns STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testValidEndpointIdentificationSanDns PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testEndpointIdentificationNoReverseLookup STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testEndpointIdentificationNoReverseLookup PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testUngracefulRemoteCloseDuringHandshakeWrite STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testUngracefulRemoteCloseDuringHandshakeWrite PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testGracefulRemoteCloseDuringHandshakeRead STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testGracefulRemoteCloseDuringHandshakeRead PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testInvalidSecureRandomImplementation STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testInvalidSecureRandomImplementation PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testInvalidEndpointIdentification STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testInvalidEndpointIdentification PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testValidEndpointIdentificationSanIp STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testValidEndpointIdentificationSanIp PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testEndpointIdentificationDisabled STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testEndpointIdentificationDisabled PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testInterBrokerSslConfigValidation STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testInterBrokerSslConfigValidation PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testServerTruststoreDynamicUpdate STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testServerTruststoreDynamicUpdate PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testNullTruststorePassword STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testNullTruststorePassword PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testUngracefulRemoteCloseDuringHandshakeRead STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testUngracefulRemoteCloseDuringHandshakeRead PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testClientAuthenticationRequiredUntrustedProvided STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testClientAuthenticationRequiredUntrustedProvided PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testClientAuthenticationDisabledUntrustedProvided S

Jenkins build is back to normal : kafka-trunk-jdk11 #617

2019-06-10 Thread Apache Jenkins Server
See 




Kafka streams issue

2019-06-10 Thread Brian Putt
Hello,

I'm working with the kafka streams api and am running into issues where I
subscribe to multiple topics and the consumer just hangs. It has a unique
application.id and I can see in kafka that the consumer group has been
created, but when I describe the group, I'll get: consumer group X has no
active members

The interesting thing is that this works when the topics list only contains
1 topic. I'm not interested in answers that create multiple sources, i.e.
source1 = builder.stream("topic1") and source2 = builder.stream("topic2"),
as StreamsBuilder.stream supports a collection of topics.

I've been able to subscribe to multiple topics before, I just can't
replicate how we've done this. (This code is running in a different
environment and working as expected, so not sure if it's a timing issue or
something else)

List<String> topics = Arrays.asList("topic1", "topic2");

StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> source = builder.stream(topics); // key/value types assumed String

source
    .transformValues(...)
    .map((key, value) -> ...)
    .to((key, value, record) -> ...);

new KafkaStreams(builder.build(), props).start();

This question has been posted on stackoverflow in case you want to
answer there: 
https://stackoverflow.com/questions/56535113/kafka-streams-listening-to-multiple-topics-hangs


[jira] [Resolved] (KAFKA-8333) Load high watermark checkpoint only once when handling LeaderAndIsr requests

2019-06-10 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-8333.

   Resolution: Fixed
Fix Version/s: 2.4.0

> Load high watermark checkpoint only once when handling LeaderAndIsr requests
> 
>
> Key: KAFKA-8333
> URL: https://issues.apache.org/jira/browse/KAFKA-8333
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
> Fix For: 2.4.0
>
>
> Currently we reload the checkpoint file separately for every partition that 
> is first initialized on the broker. It would be more efficient to do this one 
> time only when we receive the LeaderAndIsr request and to reuse the state.
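
A sketch of the intended shape (method names are illustrative, not the actual 
broker code): read the checkpoint once per LeaderAndIsr request and share the 
parsed offsets across all partitions being initialized:

import java.util.List;
import java.util.Map;
import org.apache.kafka.common.TopicPartition;

// One checkpoint read per LeaderAndIsr request, reused for every partition,
// instead of one read per partition.
void becomeLeaderOrFollower(List<TopicPartition> partitionsToInit) {
    Map<TopicPartition, Long> highWatermarks = readHighWatermarkCheckpoint(); // single disk read
    for (TopicPartition tp : partitionsToInit) {
        long hw = highWatermarks.getOrDefault(tp, 0L);
        initializePartition(tp, hw); // hypothetical helpers
    }
}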





[jira] [Resolved] (KAFKA-8349) Add Windows batch files corresponding to kafka-delete-records.sh and kafka-log-dirs.sh

2019-06-10 Thread Kengo Seki (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki resolved KAFKA-8349.
---
Resolution: Fixed

Closing this since it's already been merged. Thanks [~hachikuji]!

> Add Windows batch files corresponding to kafka-delete-records.sh and 
> kafka-log-dirs.sh
> --
>
> Key: KAFKA-8349
> URL: https://issues.apache.org/jira/browse/KAFKA-8349
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Minor
>
> Some shell scripts don't have corresponding batch files in bin\windows.
> For improving Windows platform support, I'd like to add the following batch 
> files:
> - bin\windows\kafka-delete-records.bat
> - bin\windows\kafka-log-dirs.bat





[jira] [Created] (KAFKA-8522) Tombstones can survive forever

2019-06-10 Thread Evelyn Bayes (JIRA)
Evelyn Bayes created KAFKA-8522:
---

 Summary: Tombstones can survive forever
 Key: KAFKA-8522
 URL: https://issues.apache.org/jira/browse/KAFKA-8522
 Project: Kafka
  Issue Type: Bug
  Components: log cleaner
Reporter: Evelyn Bayes


This is a bit of a grey area as to whether it's a "bug", but it is certainly 
unintended behaviour.

Under specific conditions tombstones effectively survive forever:
 * a small amount of throughput;
 * min.cleanable.dirty.ratio near or at 0; and
 * other parameters at default.

What happens is that all the data continuously gets cycled into the oldest 
segment. Old records get compacted away, but the new records continuously 
update the timestamp of the oldest segment, resetting the countdown for 
deleting tombstones.

So tombstones build up in the oldest segment forever.

While you could "fix" this by reducing the segment size, this can be 
undesirable, as a sudden change in throughput could cause a dangerous number 
of segments to be created.





Build failed in Jenkins: kafka-trunk-jdk8 #3714

2019-06-10 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Fix transient failure in

[github] MINOR: Increase timeouts to 30 seconds (#6852)

--
[...truncated 4.91 MB...]

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig.unbounded 
should produce the correct buffer config PASSED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig should 
support very long chains of factory methods STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig should 
support very long chains of factory methods PASSED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed should 
create a Consumed with Serdes STARTED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed should 
create a Consumed with Serdes PASSED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed with 
timestampExtractor and resetPolicy should create a Consumed with Serdes, 
timestampExtractor and resetPolicy STARTED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed with 
timestampExtractor and resetPolicy should create a Consumed with Serdes, 
timestampExtractor and resetPolicy PASSED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed with 
timestampExtractor should create a Consumed with Serdes and timestampExtractor 
STARTED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed with 
timestampExtractor should create a Consumed with Serdes and timestampExtractor 
PASSED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed with 
resetPolicy should create a Consumed with Serdes and resetPolicy STARTED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed with 
resetPolicy should create a Consumed with Serdes and resetPolicy PASSED

org.apache.kafka.streams.scala.kstream.ProducedTest > Create a Produced should 
create a Produced with Serdes STARTED

org.apache.kafka.streams.scala.kstream.ProducedTest > Create a Produced should 
create a Produced with Serdes PASSED

org.apache.kafka.streams.scala.kstream.ProducedTest > Create a Produced with 
timestampExtractor and resetPolicy should create a Consumed with Serdes, 
timestampExtractor and resetPolicy STARTED

org.apache.kafka.streams.scala.kstream.ProducedTest > Create a Produced with 
timestampExtractor and resetPolicy should create a Consumed with Serdes, 
timestampExtractor and resetPolicy PASSED

org.apache.kafka.streams.scala.kstream.GroupedTest > Create a Grouped should 
create a Grouped with Serdes STARTED

org.apache.kafka.streams.scala.kstream.GroupedTest > Create a Grouped should 
create a Grouped with Serdes PASSED

org.apache.kafka.streams.scala.kstream.GroupedTest > Create a Grouped with 
repartition topic name should create a Grouped with Serdes, and repartition 
topic name STARTED

org.apache.kafka.streams.scala.kstream.GroupedTest > Create a Grouped with 
repartition topic name should create a Grouped with Serdes, and repartition 
topic name PASSED

org.apache.kafka.streams.scala.kstream.JoinedTest > Create a Joined should 
create a Joined with Serdes STARTED

org.apache.kafka.streams.scala.kstream.JoinedTest > Create a Joined should 
create a Joined with Serdes PASSED

org.apache.kafka.streams.scala.kstream.JoinedTest > Create a Joined should 
create a Joined with Serdes and repartition topic name STARTED

org.apache.kafka.streams.scala.kstream.JoinedTest > Create a Joined should 
create a Joined with Serdes and repartition topic name PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > filter a KTable should 
filter records satisfying the predicate STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > filter a KTable should 
filter records satisfying the predicate PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > filterNot a KTable should 
filter records not satisfying the predicate STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > filterNot a KTable should 
filter records not satisfying the predicate PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables should join 
correctly records STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables should join 
correctly records PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables with a 
Materialized should join correctly records and state store STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables with a 
Materialized should join correctly records and state store PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > windowed KTable#suppress 
should correctly suppress results using Suppressed.untilTimeLimit STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > windowed KTable#suppress 
should correctly suppress results using Suppressed.untilTimeLimit PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > win

Jenkins build is back to normal : kafka-2.3-jdk8 #48

2019-06-10 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-2.0-jdk8 #276

2019-06-10 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-8193) Flaky Test MetricsIntegrationTest#testStreamMetricOfWindowStore

2019-06-10 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-8193.

Resolution: Duplicate

> Flaky Test MetricsIntegrationTest#testStreamMetricOfWindowStore
> ---
>
> Key: KAFKA-8193
> URL: https://issues.apache.org/jira/browse/KAFKA-8193
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Affects Versions: 2.3.0
>Reporter: Konstantine Karantasis
>Assignee: Guozhang Wang
>Priority: Major
>  Labels: flaky-test
> Fix For: 2.4.0
>
>
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3576/console]
>  *14:14:48* org.apache.kafka.streams.integration.MetricsIntegrationTest > 
> testStreamMetricOfWindowStore STARTED
> *14:14:59* 
> org.apache.kafka.streams.integration.MetricsIntegrationTest.testStreamMetricOfWindowStore
>  failed, log available in 
> /home/jenkins/jenkins-slave/workspace/kafka-pr-jdk11-scala2.12/streams/build/reports/testOutput/org.apache.kafka.streams.integration.MetricsIntegrationTest.testStreamMetricOfWindowStore.test.stdout
> *14:14:59* 
> *14:14:59* org.apache.kafka.streams.integration.MetricsIntegrationTest > 
> testStreamMetricOfWindowStore FAILED
> *14:14:59* java.lang.AssertionError: Condition not met within timeout 1. 
> testStoreMetricWindow -> Size of metrics of type:'put-latency-avg' must be 
> equal to:2 but it's equal to 0 expected:<2> but was:<0>
> *14:14:59* at 
> org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:361)
> *14:14:59* at 
> org.apache.kafka.streams.integration.MetricsIntegrationTest.testStreamMetricOfWindowStore(MetricsIntegrationTest.java:260)
> *14:15:01*




