RE: Use self contained tokens instead of ACL

2017-11-02 Thread Postmann, P. (Peter)
Manikumar, Sönke!

Thanks, the Patch looks very promising.

As far as I understood, the token is stored in Zookeeper and when the client 
reconnects or connects to another broker, it uses the tokenId and HMAC for 
authentication. 

I think that’s an optimization. It could be skipped, and instead we could send 
the token with every new connection made (and not store anything in Zookeeper).

Is this right, or am I overlooking something?

Kind Regards,
Peter
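
To illustrate the alternative with a minimal sketch: with a self-contained 
token, a broker could verify every new connection statelessly, using only a 
shared secret. This assumes a hypothetical HMAC-SHA256-signed 
"payload.signature" token format, not the KIP-48 wire format:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class SelfContainedTokenVerifier {

    // Verifies "payload.signature", where signature = Base64(HMAC-SHA256(payload)).
    public static boolean verify(String token, byte[] sharedSecret) throws Exception {
        int dot = token.lastIndexOf('.');
        if (dot < 0) {
            return false;
        }
        String payload = token.substring(0, dot);
        byte[] claimed = Base64.getUrlDecoder().decode(token.substring(dot + 1));

        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(sharedSecret, "HmacSHA256"));
        byte[] expected = mac.doFinal(payload.getBytes(StandardCharsets.UTF_8));

        // Constant-time comparison to avoid timing side channels.
        return MessageDigest.isEqual(expected, claimed);
    }
}

Nothing would need to be stored in Zookeeper under such a scheme; the 
trade-off is that explicit expiry and removal of individual tokens becomes 
harder.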

-----Original Message-----
From: Manikumar [mailto:manikumar.re...@gmail.com] 
Sent: Montag, 30. Oktober 2017 08:53
To: dev@kafka.apache.org
Subject: Re: Use self contained tokens instead of ACL

Hi,

In the first phase, we are trying to implement the components/design discussed 
in the KIP.
Yes, we can definitely improve some of the components to be more extensible.
We are planning to implement in future KIPs/PRs.

Thanks


On Fri, Oct 27, 2017 at 8:22 PM, Sönke Liebau < 
soenke.lie...@opencore.com.invalid> wrote:

> Hi Manikumar,
>
> I've looked over the KIP and had a quick look at the code in the PR as 
> well. In principle I think this would help Peter along, depending on 
> how pluggable some of the components are. Since Peter wants to generate 
> Tokens not in Kafka but in an external System the entire part in Kafka 
> of generating DelegationTokens would simply not be used, which I think 
> would be fine. To validate externally generated tokens, one would need an 
> option to substitute, for example, the TokenCache with a custom 
> implementation and/or to substitute the method of authenticating a 
> delegation token with a custom class.
>
> Apologies for asking questions I could look up in the code myself, but 
> at first glance I haven't seen any indication of this token system 
> being extensible. Do you plan to allow extending the system to 
> different external token providers? OAuth comes to mind as a 
> fairly widespread candidate that could probably be implemented fairly easily.
>
> Kind regards,
> Sönke
>
> On Fri, Oct 27, 2017 at 11:17 AM, Manikumar 
> 
> wrote:
>
> > Hi,
> >
> > We have an accepted KIP for adding delegation token support for Kafka.
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 48+Delegation+token+support+for+Kafka
> >
> > Currently the PR is under review. Maybe this can be used as a 
> > starting point for your requirement.
> >
> > https://github.com/apache/kafka/pull/3616
> >
> >
> >
> > On Fri, Oct 27, 2017 at 2:34 PM, Sönke Liebau < 
> > soenke.lie...@opencore.com.invalid> wrote:
> >
> > > Hi Peter,
> > >
> > > thanks for the explanation, it all makes sense now :)
> > >
> > > I can't say that I immediately see an easy way forward though, to be
> > > honest.
> > > The big issue, I think, is getting the token to Kafka (and hopefully
> > > there is an easy way that I simply don't know of and someone will
> > > correct me) - implementing a custom principalbuilder and authorizer
> > > should be almost trivial.
> > >
> > > If transmitting the token as part of the ssl certificate or a Kerberos
> > > ticket is out though, the air gets a bit thin if you don't want to
> > > maintain your own fork of Kafka. The only potential solution that I can
> > > come up with is to piggyback on SASL and provide your own LoginModule in
> > > Kafka's jaas file. If you use the SASL_SSL endpoint, certificate checking
> > > should still have occurred before the SASL handshake is initialized, so
> > > you have authenticated the user at that point. You could then use that
> > > handshake to transmit your token, have your custom principalbuilder
> > > extract the topics from that and your custom authorizer authorize based
> > > on the extracted topic names.
> > > A word of caution though: this is based on a few minutes looking at code
> > > and my dangerous half knowledge of SASL, so there are any number of
> > > things that could make this impossible, either with SASL or in the Kafka
> > > codebase itself. Might be a direction to explore though.
> > >
> > > Hopefully that makes sense and is targeted at least in the vicinity of
> > > what you are looking for?
> > >
> > > Kind regards,
> > > Sönke
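
To make the quoted idea concrete, a minimal illustration of the 
authorization step, assuming the principal built during the SASL handshake 
carries the topic names extracted from the token (all class and method names 
here are hypothetical and do not match Kafka's actual Authorizer interface):

import java.util.Set;

public class TokenTopicsAuthorizer {

    // Hypothetical principal produced by a custom principalbuilder: it keeps
    // the topic names that were embedded in the externally issued token.
    public static class TokenPrincipal {
        private final String name;
        private final Set<String> allowedTopics;

        public TokenPrincipal(String name, Set<String> allowedTopics) {
            this.name = name;
            this.allowedTopics = allowedTopics;
        }

        public String name() {
            return name;
        }

        public Set<String> allowedTopics() {
            return allowedTopics;
        }
    }

    // Authorization then reduces to a membership check on the token's claims.
    public boolean authorize(TokenPrincipal principal, String topic) {
        return principal.allowedTopics().contains(topic);
    }
}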
> > >
> > > On Fri, Oct 27, 2017 at 9:33 AM, Postmann, P. (Peter) < 
> > > peter.postm...@ing.com.invalid> wrote:
> > >
> > > > Hi Sönke,
> > > >
> > > > Thanks for your feedback, sorry that I didn’t give you the whole
> > > > picture in the first place:
> > > >
> > > > We are using an architecture which tries to avoid fetching or pulling
> > > > anything from a 3rd party during runtime. Therefore we are using
> > > > self-contained tokens and client-side load balancing with a
> > > > microservice-like architecture.
> > > >
> > > > In this architecture we have two tokens:
> > > > - the manifest, which enables services to provide APIs
> > > > - the peer token, which enables services to call APIs
> > > >
> > > > API providers publish their APIs in a

Re: [VOTE] KIP-204 : adding records deletion operation to the new Admin Client API

2017-11-02 Thread Paolo Patierno
Thanks Jason !


I have just updated the KIP with DeleteRecordsOptions definition.


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Azure & IoT
Microsoft Azure Advisor

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience



From: Jason Gustafson 
Sent: Wednesday, November 1, 2017 10:09 PM
To: dev@kafka.apache.org
Subject: Re: [VOTE] KIP-204 : adding records deletion operation to the new 
Admin Client API

+1 (binding). Just one nit: the KIP doesn't define the DeleteRecordsOptions
object. I see it's empty in the PR, but we may as well document the full
API in the KIP.

-Jason
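
For reference, a sketch of how the new call might look, based only on the 
names discussed in this thread (RecordsToDelete.beforeOffset plus the 
still-empty DeleteRecordsOptions); the final signatures are those defined by 
the KIP and the PR:

import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.DeleteRecordsResult;
import org.apache.kafka.clients.admin.RecordsToDelete;
import org.apache.kafka.common.TopicPartition;

public class DeleteRecordsExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address

        try (AdminClient admin = AdminClient.create(props)) {
            // Delete everything before offset 42 in partition 0 of "my-topic".
            Map<TopicPartition, RecordsToDelete> toDelete = new HashMap<>();
            toDelete.put(new TopicPartition("my-topic", 0),
                         RecordsToDelete.beforeOffset(42L));

            DeleteRecordsResult result = admin.deleteRecords(toDelete);
            result.all().get(); // block until the brokers confirm the deletion
        }
    }
}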


On Wed, Nov 1, 2017 at 2:54 PM, Guozhang Wang  wrote:

> Made a pass over the PR and left some comments. I'm +1 on the wiki design
> page as well.
>
> On Tue, Oct 31, 2017 at 7:13 AM, Bill Bejeck  wrote:
>
> > +1
> >
> > Thanks,
> > Bill
> >
> > On Tue, Oct 31, 2017 at 4:36 AM, Paolo Patierno 
> > wrote:
> >
> > > Hi all,
> > >
> > >
> > > because I don't see any further discussion around KIP-204 (
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 204+%3A+adding+records+deletion+operation+to+the+new+Admin+Client+API)
> > > and I have already opened a PR with the implementation, can we resume
> > > the vote started on October 18?
> > >
> > > There are only "non binding" votes up to now.
> > >
> > > Thanks,
> > >
> > >
> > > Paolo Patierno
> > > Senior Software Engineer (IoT) @ Red Hat
> > > Microsoft MVP on Azure & IoT
> > > Microsoft Azure Advisor
> > >
> > > Twitter : @ppatierno
> > > Linkedin : paolopatierno
> > > Blog : DevExperience
> > >
> > >
> > > 
> > > From: Viktor Somogyi 
> > > Sent: Wednesday, October 18, 2017 10:49 AM
> > > To: dev@kafka.apache.org
> > > Subject: Re: [VOTE] KIP-204 : adding records deletion operation to the
> > new
> > > Admin Client API
> > >
> > > +1 (non-binding)
> > >
> > > On Wed, Oct 18, 2017 at 8:23 AM, Manikumar 
> > > wrote:
> > >
> > > > +1 (non-binding)
> > > >
> > > >
> > > > Thanks,
> > > > Manikumar
> > > >
> > > > On Tue, Oct 17, 2017 at 7:42 AM, Dong Lin 
> wrote:
> > > >
> > > > > Thanks for the KIP. +1 (non-binding)
> > > > >
> > > > > On Wed, Oct 11, 2017 at 2:27 AM, Ted Yu 
> wrote:
> > > > >
> > > > > > +1
> > > > > >
> > > > > > On Mon, Oct 2, 2017 at 10:51 PM, Paolo Patierno <
> > ppatie...@live.com>
> > > > > > wrote:
> > > > > >
> > > > > > > Hi all,
> > > > > > >
> > > > > > > I didn't see any further discussion around this KIP, so I'd
> like
> > to
> > > > > start
> > > > > > > the vote for it.
> > > > > > >
> > > > > > > Just for reference : https://cwiki.apache.org/
> > > > > > > confluence/display/KAFKA/KIP-204+%3A+adding+records+
> > > > > > > deletion+operation+to+the+new+Admin+Client+API
> > > > > > >
> > > > > > >
> > > > > > > Thanks,
> > > > > > >
> > > > > > > Paolo Patierno
> > > > > > > Senior Software Engineer (IoT) @ Red Hat
> > > > > > > Microsoft MVP on Azure & IoT
> > > > > > > Microsoft Azure Advisor
> > > > > > >
> > > > > > > Twitter : @ppatierno
> > > > > > > Linkedin : paolopatierno
> > > > > > > Blog : DevExperience
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
>
>
> --
> -- Guozhang
>


Re: [DISCUSS] KIP-204 : adding records deletion operation to the new Admin Client API

2017-11-02 Thread Paolo Patierno
Hi Colin,


I have just updated the KIP mentioning that this new method should replace the 
"legacy" Scala API used for deleting records today.


Thanks,

Paolo.


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Azure & IoT
Microsoft Azure Advisor

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience



From: Paolo Patierno 
Sent: Tuesday, October 31, 2017 8:56 PM
To: dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-204 : adding records deletion operation to the new 
Admin Client API

Hi Colin,
thanks !

This morning (Italy time zone) I started the vote for this KIP. Up to now there 
are 5 non-binding votes.

In any case, I'll update the section you mentioned. I totally agree with you on 
giving more info to developers who are using the Scala API.

Thanks
Paolo

From: Colin McCabe 
Sent: Tuesday, October 31, 2017 9:49:59 PM
To: dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-204 : adding records deletion operation to the new 
Admin Client API

Hi Paolo,

This looks like a good proposal.  I think it's probably ready to take it
to a vote soon?

Also, in the "Compatibility, Deprecation, and Migration Plan" section,
you might want to mention the internal Scala interface for doing this
which was added in KIP-107.  We should expect users to migrate from that
interface to the new one over time.

best,
Colin


On Wed, Oct 25, 2017, at 03:47, Paolo Patierno wrote:
> Thanks for all your feedback guys. I have updated my current code as
> well.
>
> I know that the vote for this KIP has not started yet (actually I opened
> it due to no feedback on this KIP after a while, but then the discussion
> started and it was really useful!) but I have already opened a PR for
> that.
>
> Maybe feedback could be useful on that as well :
>
>
> https://github.com/apache/kafka/pull/4132
>
>
> Thanks
>
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Azure & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>
>
> 
> From: Colin McCabe 
> Sent: Monday, October 23, 2017 4:34 PM
> To: dev@kafka.apache.org
> Subject: Re: [DISCUSS] KIP-204 : adding records deletion operation to the
> new Admin Client API
>
> On Mon, Oct 23, 2017, at 01:37, Tom Bentley wrote:
> > At the risk of muddying the waters further, have you considered
> > "RecordsToDelete" as the name of the class? It's both shorter and more
> > descriptive imho.
>
> +1 for RecordsToDelete
>
> >
> > Also "deleteBefore()" as the factory method name isn't very future proof
> > if
> > we came to support time-based deletion. Something like "beforeOffset()"
> > would be clearer, imho.
>
> Great idea.
>
> best,
> Colin
>
> >
> > Putting these together: RecordsToDelete.beforeOffset() seems much clearer
> > to me than DeleteRecordsTarget.deleteBefore()
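
A sketch of the shape being compared above (hypothetical body; only the
factory name comes from this thread). The static factory keeps the door open
for other target types without changing callers:

public class RecordsToDelete {
    private final long beforeOffset;

    private RecordsToDelete(long beforeOffset) {
        this.beforeOffset = beforeOffset;
    }

    // Reads naturally at the call site: RecordsToDelete.beforeOffset(42L)
    public static RecordsToDelete beforeOffset(long offset) {
        return new RecordsToDelete(offset);
    }

    public long beforeOffset() {
        return beforeOffset;
    }

    // A future time-based variant, e.g. beforeTimestamp(long timestampMs),
    // could be added here without breaking existing callers.
}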
> >
> >
> > On 23 October 2017 at 08:45, Paolo Patierno  wrote:
> >
> > > About the name: I just started to have a doubt about DeletionTarget,
> > > because it could be tied to any deletion operation (i.e. delete topic,
> > > ...) and not just what we want now, i.e. records deletion.
> > >
> > > I have updated KIP-204 to use DeleteRecordsTarget, so it's clear that
> > > it's related to the delete records operation and what it means: the
> > > target for such an operation.
> > >
> > >
> > > Paolo Patierno
> > > Senior Software Engineer (IoT) @ Red Hat
> > > Microsoft MVP on Azure & IoT
> > > Microsoft Azure Advisor
> > >
> > > Twitter : @ppatierno
> > > Linkedin : paolopatierno
> > > Blog : DevExperience
> > >
> > >
> > > 
> > > From: Paolo Patierno 
> > > Sent: Monday, October 23, 2017 7:38 AM
> > > To: dev@kafka.apache.org
> > > Subject: Re: [DISCUSS] KIP-204 : adding records deletion operation to the
> > > new Admin Client API
> > >
> > > Hi Colin,
> > >
> > > I was using the long primitive in the code but had not updated the KIP
> > > yet, sorry ... now it's updated!
> > >
> > > At the same time I agree on using DeletionTarget ... KIP updated!
> > >
> > >
> > > Regarding the deleteBefore factory method, it's a pattern already used
> > > with NewPartitions.increaseTo, which I think is really clear and gives
> > > us more room to evolve this DeletionTarget class if we add different
> > > ways to specify such a target, not only offset-based.
> > >
> > >
> > > Thanks,
> > >
> > >
> > > Paolo Patierno
> > > Senior Software Engineer (IoT) @ Red Hat
> > > Microsoft MVP on Azure & IoT
> > > Microsoft Azure Advisor
> > >
> > > Twitter : @ppatierno

Re: Use self contained tokens instead of ACL

2017-11-02 Thread Manikumar
Hi,

Yes, the token details and SCRAM credentials are stored in Zookeeper, and
clients connect using the tokenId and HMAC for SCRAM authentication.
We plan to implement token storage as a pluggable interface.


Thanks,
Manikumar
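
As a rough illustration (hypothetical names, not the KIP-48 API), a pluggable
store could look something like the interface below; the default
implementation would be backed by Zookeeper, while an external token issuer
could plug in its own validation:

import java.util.Optional;

public interface DelegationTokenStore {

    void addToken(String tokenId, byte[] hmac, long expiryTimestampMs);

    Optional<byte[]> getHmac(String tokenId);

    void removeToken(String tokenId);

    // Validation is a constant-time comparison against the stored HMAC.
    default boolean validate(String tokenId, byte[] presentedHmac) {
        return getHmac(tokenId)
                .map(stored -> java.security.MessageDigest.isEqual(stored, presentedHmac))
                .orElse(false);
    }
}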

On Thu, Nov 2, 2017 at 2:09 PM, Postmann, P. (Peter) <
peter.postm...@ing.com.invalid> wrote:

> Manikumar, Sönke!
>
> Thanks, the Patch looks very promising.
>
> As far as I understood, the token is stored in Zookeeper and when the
> client reconnects or connects to another broker, it uses the tokenId and
> HMAC for authentication.
>
> I think that’s an optimization. It could be skipped, and instead we could
> send the token with every new connection made (and not store anything in
> Zookeeper).
>
> Is this right, or am I overlooking something?
>
> Kind Regards,
> Peter
>
> -----Original Message-----
> From: Manikumar [mailto:manikumar.re...@gmail.com]
> Sent: Montag, 30. Oktober 2017 08:53
> To: dev@kafka.apache.org
> Subject: Re: Use self contained tokens instead of ACL
>
> Hi,
>
> In the first phase, we are trying to implement the components/design
> discussed in the KIP.
> Yes, we can definitely improve some of the components to be more
> extensible.
> We are planning to implement in future KIPs/PRs.
>
> Thanks
>
>
> On Fri, Oct 27, 2017 at 8:22 PM, Sönke Liebau < soenke.lie...@opencore.com
> .invalid> wrote:
>
> > Hi Manikumar,
> >
> > I've looked over the KIP and had a quick look at the code in the PR as
> > well. In principle I think this would help Peter along, depending on
> > how pluggable some of the components are. Since Peter wants to generate
> > Tokens not in Kafka but in an external System the entire part in Kafka
> > of generating DelegationTokens would simply not be used, which I think
> > would be fine. To validate externally generated tokens, one would need
> > an option to substitute, for example, the TokenCache with a custom
> > implementation and/or to substitute the method of authenticating a
> > delegation token with a custom class.
> >
> > Apologies for asking questions I could look up in the code myself, but
> > at first glance I haven't seen any indication of this token system
> > being extensible. Do you plan to allow extending the system to
> > different external token providers? OAuth comes to mind as a
> > fairly widespread candidate that could probably be implemented fairly
> > easily.
> >
> > Kind regards,
> > Sönke
> >
> > On Fri, Oct 27, 2017 at 11:17 AM, Manikumar
> > 
> > wrote:
> >
> > > Hi,
> > >
> > > We have an accepted KIP for adding delegation token support for Kafka.
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 48+Delegation+token+support+for+Kafka
> > >
> > > Currently the PR is under review. Maybe this can be used as a
> > > starting point for your requirement.
> > >
> > > https://github.com/apache/kafka/pull/3616
> > >
> > >
> > >
> > > On Fri, Oct 27, 2017 at 2:34 PM, Sönke Liebau <
> > > soenke.lie...@opencore.com.invalid> wrote:
> > >
> > > > Hi Peter,
> > > >
> > > > thanks for the explanation, it all makes sense now :)
> > > >
> > > > I can't say that I immediately see an easy way forward though, to be
> > > > honest.
> > > > The big issue, I think, is getting the token to Kafka (and hopefully
> > > > there is an easy way that I simply don't know of and someone will
> > > > correct me) - implementing a custom principalbuilder and authorizer
> > > > should be almost trivial.
> > > >
> > > > If transmitting the token as part of the ssl certificate or a Kerberos
> > > > ticket is out though, the air gets a bit thin if you don't want to
> > > > maintain your own fork of Kafka. The only potential solution that I can
> > > > come up with is to piggyback on SASL and provide your own LoginModule in
> > > > Kafka's jaas file. If you use the SASL_SSL endpoint, certificate
> > > > checking should still have occurred before the SASL handshake is
> > > > initialized, so you have authenticated the user at that point. You could
> > > > then use that handshake to transmit your token, have your custom
> > > > principalbuilder extract the topics from that and your custom authorizer
> > > > authorize based on the extracted topic names.
> > > > A word of caution though: this is based on a few minutes looking at code
> > > > and my dangerous half knowledge of SASL, so there are any number of
> > > > things that could make this impossible, either with SASL or in the Kafka
> > > > codebase itself. Might be a direction to explore though.
> > > >
> > > > Hopefully that makes sense and is targeted at least in the vicinity of
> > > > what you are looking for?
> > > >
> > > > Kind regards,
> > > > Sönke
> > > >
> > > > On Fri, Oct 27, 2017 at 9:33 AM, Postmann, P. (Peter) <
> > > > peter.postm...@ing.com.invalid> wrote:
> > > >
> > > > > Hi Sönke,
> > > > >
> > > > > Thanks for your feedback, sorry that I didn’t give you the whole
>

Re: [DISCUSS] KIP-218: Make KafkaFuture.Function java 8 lambda compatible

2017-11-02 Thread Tom Bentley
Hi Steven,

I notice you've renamed the template's "Rejected Alternatives" section to
"Other Alternatives", suggesting they're not rejected yet (or, if you have
rejected them, I think you should give your reasons).

Personally, I'd like to understand the arguments against simply replacing
KafkaFuture with CompletableFuture in Kafka 2.0. In other words, if we were
starting without needing to support Java 7 what would be the arguments for
having our own KafkaFuture?

Thanks,

Tom
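
For context, a minimal sketch of the incompatibility in question, assuming
simplified signatures for KafkaFuture.Function (today an abstract class, so
a lambda cannot be passed and an anonymous subclass is required):

import java.util.Set;
import org.apache.kafka.common.KafkaFuture;

public class FutureLambdaExample {

    static KafkaFuture<Integer> countTopics(KafkaFuture<Set<String>> names) {
        // Pre-KIP-218: an anonymous subclass of the abstract Function class.
        return names.thenApply(new KafkaFuture.Function<Set<String>, Integer>() {
            @Override
            public Integer apply(Set<String> topicNames) {
                return topicNames.size();
            }
        });
        // With a functional interface (or CompletableFuture), this would
        // collapse to: names.thenApply(Set::size);
    }
}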

On 1 November 2017 at 16:01, Ted Yu  wrote:

> KAFKA-4423 is still open.
> When would Java 7 be dropped ?
>
> Thanks
>
> On Wed, Nov 1, 2017 at 8:56 AM, Ismael Juma  wrote:
>
> > On Wed, Nov 1, 2017 at 3:51 PM, Ted Yu  wrote:
> >
> > > bq. Wait for a kafka release which will not support java 7 anymore
> > >
> > > Do you want to raise a separate thread for the above ?
> > >
> >
> > There is already a KIP for this so a separate thread is not needed.
> >
> > Ismael
> >
>


Re: [ANNOUNCE] Apache Kafka 1.0.0 Released

2017-11-02 Thread Paolo Patierno
Congratulations on this milestone!


Thanks to Guozhang for running the release!


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Azure & IoT
Microsoft Azure Advisor

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience



From: Jaikiran Pai 
Sent: Thursday, November 2, 2017 2:59 AM
To: dev@kafka.apache.org
Cc: Users
Subject: Re: [ANNOUNCE] Apache Kafka 1.0.0 Released

Congratulations Kafka team on the release. Happy to see Kafka reach this
milestone. It has been a pleasure using Kafka and also interacting with
the Kafka team.

-Jaikiran


On 01/11/17 7:57 PM, Guozhang Wang wrote:
> The Apache Kafka community is pleased to announce the release for Apache
> Kafka 1.0.0.
>
> This is a major release of the Kafka project, and is no mere bump of the
> version number. The Apache Kafka Project Management Committee has packed a
> number of valuable enhancements into the release. Let me summarize a few of
> them:
>
> ** Since its introduction in version 0.10, the Streams API has become
> hugely popular among Kafka users, including the likes of Pinterest,
> Rabobank, Zalando, and The New York Times. In 1.0, the API continues to
> evolve at a healthy pace. To begin with, the builder API has been improved
> (KIP-120). A new API has been added to expose the state of active tasks at
> runtime (KIP-130). Debuggability gets easier with enhancements to the
> print() and writeAsText() methods (KIP-160). And if that’s not enough,
> check out KIP-138 and KIP-161 too. For more on streams, check out the
> Apache Kafka Streams documentation (https://kafka.apache.org/docu
> mentation/streams/), including some helpful new tutorial videos.
>
> ** Operating Kafka at scale requires that the system remain observable, and
> to make that easier, we’ve made a number of improvements to metrics. These
> are too many to summarize without becoming tedious, but Connect metrics
> have been significantly improved (KIP-196), a litany of new health check
> metrics are now exposed (KIP-188), and we now have a global topic and
> partition count (KIP-168). Check out KIP-164 and KIP-187 for even more.
>
> ** We now support Java 9, leading, among other things, to significantly
> faster TLS and CRC32C implementations. Over-the-wire encryption will be
> faster now, which will keep Kafka fast and compute costs low when
> encryption is enabled.
>
> ** In keeping with the security theme, KIP-152 cleans up the error handling
> on Simple Authentication Security Layer (SASL) authentication attempts.
> Previously, some authentication error conditions were indistinguishable
> from broker failures and were not logged in a clear way. This is cleaner
> now.
>
> ** Kafka can now tolerate disk failures better. Historically, JBOD storage
> configurations have not been recommended, but the architecture has
> nevertheless been tempting: after all, why not rely on Kafka’s own
> replication mechanism to protect against storage failure rather than using
> RAID? With KIP-112, Kafka now handles disk failure more gracefully. A
> single disk failure in a JBOD broker will not bring the entire broker down;
> rather, the broker will continue serving any log files that remain on
> functioning disks.
>
> ** Since release 0.11.0, the idempotent producer (which is the producer
> used in the presence of a transaction, which of course is the producer we
> use for exactly-once processing) required 
> max.in.flight.requests.per.connection
> to be equal to one. As anyone who has written or tested a wire protocol can
> attest, this put an upper bound on throughput. Thanks to KAFKA-5949, this
> can now be as large as five, relaxing the throughput constraint quite a bit.
>
>
> All of the changes in this release can be found in the release notes:
>
> https://dist.apache.org/repos/dist/release/kafka/1.0.0/RELEASE_NOTES.html
>
>
> You can download the source release from:
>
> https://www.apache.org/dyn/closer.cgi?path=/kafka/1.0.0/kafka-1.0.0-src.tgz
>
> and binary releases from:
>
> https://www.apache.org/dyn/closer.cgi?path=/kafka/1.0.0/kafka_2.11-1.0.0.tgz
> (Scala
> 2.11)
> https://www.apache.org/dyn/closer.cgi?path=/kafka/1.0.0/kafka_2.12-1.0.0.tgz
> (Scala
> 2.12)
>
>
> 
> ---
>
> Apache Kafka is a distributed streaming platform with four core APIs:
>
> ** The Producer API allows an application to publish a stream of records to one
> or more Kafka topics.
>
> ** The Consumer API allows an application to subscribe to one or more topics
> and process the stream of records produced to them.
>
> ** The Streams API allows an application to act as a stream processor,
> consuming
> an input stream from one or more topics and producing an output stream to
> one or more output topics, effectively transforming the inp

Re: [ANNOUNCE] Apache Kafka 1.0.0 Released

2017-11-02 Thread Xin Wang
Great Job!

- Xin

2017-11-02 18:30 GMT+08:00 Paolo Patierno :

> Congratulations on this milestone!
>
>
> Thanks to Guozhang for running the release!
>
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Azure & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>
>
> 
> From: Jaikiran Pai 
> Sent: Thursday, November 2, 2017 2:59 AM
> To: dev@kafka.apache.org
> Cc: Users
> Subject: Re: [ANNOUNCE] Apache Kafka 1.0.0 Released
>
> Congratulations Kafka team on the release. Happy to see Kafka reach this
> milestone. It has been a pleasure using Kafka and also interacting with
> the Kafka team.
>
> -Jaikiran
>
>
> On 01/11/17 7:57 PM, Guozhang Wang wrote:
> > The Apache Kafka community is pleased to announce the release for Apache
> > Kafka 1.0.0.
> >
> > This is a major release of the Kafka project, and is no mere bump of the
> > version number. The Apache Kafka Project Management Committee has packed
> a
> > number of valuable enhancements into the release. Let me summarize a few
> of
> > them:
> >
> > ** Since its introduction in version 0.10, the Streams API has become
> > hugely popular among Kafka users, including the likes of Pinterest,
> > Rabobank, Zalando, and The New York Times. In 1.0, the API continues to
> improved
> > (KIP-120). A new API has been added to expose the state of active tasks
> at
> > runtime (KIP-130). Debuggability gets easier with enhancements to the
> > print() and writeAsText() methods (KIP-160). And if that’s not enough,
> > check out KIP-138 and KIP-161 too. For more on streams, check out the
> > Apache Kafka Streams documentation (https://kafka.apache.org/docu
> > mentation/streams/), including some helpful new tutorial videos.
> >
> > ** Operating Kafka at scale requires that the system remain observable,
> and
> > to make that easier, we’ve made a number of improvements to metrics.
> These
> > are too many to summarize without becoming tedious, but Connect metrics
> > have been significantly improved (KIP-196), a litany of new health check
> > metrics are now exposed (KIP-188), and we now have a global topic and
> > partition count (KIP-168). Check out KIP-164 and KIP-187 for even more.
> >
> > ** We now support Java 9, leading, among other things, to significantly
> > faster TLS and CRC32C implementations. Over-the-wire encryption will be
> > faster now, which will keep Kafka fast and compute costs low when
> > encryption is enabled.
> >
> > ** In keeping with the security theme, KIP-152 cleans up the error
> handling
> > on Simple Authentication Security Layer (SASL) authentication attempts.
> > Previously, some authentication error conditions were indistinguishable
> > from broker failures and were not logged in a clear way. This is cleaner
> > now.
> >
> > ** Kafka can now tolerate disk failures better. Historically, JBOD
> storage
> > configurations have not been recommended, but the architecture has
> > nevertheless been tempting: after all, why not rely on Kafka’s own
> > replication mechanism to protect against storage failure rather than
> using
> > RAID? With KIP-112, Kafka now handles disk failure more gracefully. A
> > single disk failure in a JBOD broker will not bring the entire broker
> down;
> > rather, the broker will continue serving any log files that remain on
> > functioning disks.
> >
> > ** Since release 0.11.0, the idempotent producer (which is the producer
> > used in the presence of a transaction, which of course is the producer we
> > use for exactly-once processing) required
> > max.in.flight.requests.per.connection
> > to be equal to one. As anyone who has written or tested a wire protocol
> can
> > attest, this put an upper bound on throughput. Thanks to KAFKA-5949, this
> > can now be as large as five, relaxing the throughput constraint quite a
> bit.
> >
> >
> > All of the changes in this release can be found in the release notes:
> >
> > https://dist.apache.org/repos/dist/release/kafka/1.0.0/
> RELEASE_NOTES.html
> >
> >
> > You can download the source release from:
> >
> > https://www.apache.org/dyn/closer.cgi?path=/kafka/1.0.0/
> kafka-1.0.0-src.tgz
> >
> > and binary releases from:
> >
> > https://www.apache.org/dyn/closer.cgi?path=/kafka/1.0.0/
> kafka_2.11-1.0.0.tgz
> > (Scala
> > 2.11)
> > https://www.apache.org/dyn/closer.cgi?path=/kafka/1.0.0/
> kafka_2.12-1.0.0.tgz
> > (Scala
> > 2.12)
> >
> >
> > 
> > ---
> >
> > Apache Kafka is a distributed streaming platform with four core APIs:
> >
> > ** The Producer API allows an application to publish a stream of records
> > to one or more Kafka topics.
> >
> > ** The Consumer API allows an appl

Re: [ANNOUNCE] Apache Kafka 1.0.0 Released

2017-11-02 Thread Eno Thereska
Congrats!

Eno

On Thu, Nov 2, 2017 at 10:55 AM, Xin Wang  wrote:

> Great Job!
>
> - Xin
>
> 2017-11-02 18:30 GMT+08:00 Paolo Patierno :
>
> > Congratulations on this milestone!
> >
> >
> > Thanks to Guozhang for running the release!
> >
> >
> > Paolo Patierno
> > Senior Software Engineer (IoT) @ Red Hat
> > Microsoft MVP on Azure & IoT
> > Microsoft Azure Advisor
> >
> > Twitter : @ppatierno
> > Linkedin : paolopatierno
> > Blog : DevExperience
> >
> >
> > 
> > From: Jaikiran Pai 
> > Sent: Thursday, November 2, 2017 2:59 AM
> > To: dev@kafka.apache.org
> > Cc: Users
> > Subject: Re: [ANNOUNCE] Apache Kafka 1.0.0 Released
> >
> > Congratulations Kafka team on the release. Happy to see Kafka reach this
> > milestone. It has been a pleasure using Kafka and also interacting with
> > the Kafka team.
> >
> > -Jaikiran
> >
> >
> > On 01/11/17 7:57 PM, Guozhang Wang wrote:
> > > The Apache Kafka community is pleased to announce the release for
> Apache
> > > Kafka 1.0.0.
> > >
> > > This is a major release of the Kafka project, and is no mere bump of
> the
> > > version number. The Apache Kafka Project Management Committee has
> packed
> > a
> > > number of valuable enhancements into the release. Let me summarize a
> few
> > of
> > > them:
> > >
> > > ** Since its introduction in version 0.10, the Streams API has become
> > > hugely popular among Kafka users, including the likes of Pinterest,
> > > Rabobank, Zalando, and The New York Times. In 1.0, the API continues to
> > > evolve at a healthy pace. To begin with, the builder API has been
> > improved
> > > (KIP-120). A new API has been added to expose the state of active tasks
> > at
> > > runtime (KIP-130). Debuggability gets easier with enhancements to the
> > > print() and writeAsText() methods (KIP-160). And if that’s not enough,
> > > check out KIP-138 and KIP-161 too. For more on streams, check out the
> > > Apache Kafka Streams documentation (https://kafka.apache.org/docu
> > > mentation/streams/), including some helpful new tutorial videos.
> > >
> > > ** Operating Kafka at scale requires that the system remain observable,
> > and
> > > to make that easier, we’ve made a number of improvements to metrics.
> > These
> > > are too many to summarize without becoming tedious, but Connect metrics
> > > have been significantly improved (KIP-196), a litany of new health
> check
> > > metrics are now exposed (KIP-188), and we now have a global topic and
> > > partition count (KIP-168). Check out KIP-164 and KIP-187 for even more.
> > >
> > > ** We now support Java 9, leading, among other things, to significantly
> > > faster TLS and CRC32C implementations. Over-the-wire encryption will be
> > > faster now, which will keep Kafka fast and compute costs low when
> > > encryption is enabled.
> > >
> > > ** In keeping with the security theme, KIP-152 cleans up the error
> > handling
> > > on Simple Authentication Security Layer (SASL) authentication attempts.
> > > Previously, some authentication error conditions were indistinguishable
> > > from broker failures and were not logged in a clear way. This is
> cleaner
> > > now.
> > >
> > > ** Kafka can now tolerate disk failures better. Historically, JBOD
> > storage
> > > configurations have not been recommended, but the architecture has
> > > nevertheless been tempting: after all, why not rely on Kafka’s own
> > > replication mechanism to protect against storage failure rather than
> > using
> > > RAID? With KIP-112, Kafka now handles disk failure more gracefully. A
> > > single disk failure in a JBOD broker will not bring the entire broker
> > down;
> > > rather, the broker will continue serving any log files that remain on
> > > functioning disks.
> > >
> > > ** Since release 0.11.0, the idempotent producer (which is the producer
> > > used in the presence of a transaction, which of course is the producer
> we
> > > use for exactly-once processing) required
> > > max.in.flight.requests.per.connection
> > > to be equal to one. As anyone who has written or tested a wire protocol
> > can
> > > attest, this put an upper bound on throughput. Thanks to KAFKA-5949,
> this
> > > can now be as large as five, relaxing the throughput constraint quite a
> > bit.
> > >
> > >
> > > All of the changes in this release can be found in the release notes:
> > >
> > > https://dist.apache.org/repos/dist/release/kafka/1.0.0/
> > RELEASE_NOTES.html
> > >
> > >
> > > You can download the source release from:
> > >
> > > https://www.apache.org/dyn/closer.cgi?path=/kafka/1.0.0/
> > kafka-1.0.0-src.tgz
> > >
> > > and binary releases from:
> > >
> > > https://www.apache.org/dyn/closer.cgi?path=/kafka/1.0.0/
> > kafka_2.11-1.0.0.tgz
> > > (Scala
> > > 2.11)
> > > https://www.apache.org/dyn/closer.cgi?path=/kafka/1.0.0/
> > kafka_2.12-1.0.0.tgz
> > > (Scala
> > > 2.12)
> > >
> > >
> > > -

Re: [ANNOUNCE] Apache Kafka 1.0.0 Released

2017-11-02 Thread Mickael Maison
Great milestone ! Thanks for running this release.

On Thu, Nov 2, 2017 at 11:10 AM, Eno Thereska  wrote:
> Congrats!
>
> Eno
>
> On Thu, Nov 2, 2017 at 10:55 AM, Xin Wang  wrote:
>
>> Great Job!
>>
>> - Xin
>>
>> 2017-11-02 18:30 GMT+08:00 Paolo Patierno :
>>
>> > Congratulations on this milestone!
>> >
>> >
>> > Thanks to Guozhang for running the release!
>> >
>> >
>> > Paolo Patierno
>> > Senior Software Engineer (IoT) @ Red Hat
>> > Microsoft MVP on Azure & IoT
>> > Microsoft Azure Advisor
>> >
>> > Twitter : @ppatierno
>> > Linkedin : paolopatierno
>> > Blog : DevExperience
>> >
>> >
>> > 
>> > From: Jaikiran Pai 
>> > Sent: Thursday, November 2, 2017 2:59 AM
>> > To: dev@kafka.apache.org
>> > Cc: Users
>> > Subject: Re: [ANNOUNCE] Apache Kafka 1.0.0 Released
>> >
>> > Congratulations Kafka team on the release. Happy to see Kafka reach this
>> > milestone. It has been a pleasure using Kafka and also interacting with
>> > the Kafka team.
>> >
>> > -Jaikiran
>> >
>> >
>> > On 01/11/17 7:57 PM, Guozhang Wang wrote:
>> > > The Apache Kafka community is pleased to announce the release for
>> Apache
>> > > Kafka 1.0.0.
>> > >
>> > > This is a major release of the Kafka project, and is no mere bump of
>> the
>> > > version number. The Apache Kafka Project Management Committee has
>> packed
>> > a
>> > > number of valuable enhancements into the release. Let me summarize a
>> few
>> > of
>> > > them:
>> > >
>> > > ** Since its introduction in version 0.10, the Streams API has become
>> > > hugely popular among Kafka users, including the likes of Pinterest,
>> > > Rabobank, Zalando, and The New York Times. In 1.0, the API continues to
>> > > evolve at a healthy pace. To begin with, the builder API has been
>> > improved
>> > > (KIP-120). A new API has been added to expose the state of active tasks
>> > at
>> > > runtime (KIP-130). Debuggability gets easier with enhancements to the
>> > > print() and writeAsText() methods (KIP-160). And if that’s not enough,
>> > > check out KIP-138 and KIP-161 too. For more on streams, check out the
>> > > Apache Kafka Streams documentation (https://kafka.apache.org/docu
>> > > mentation/streams/), including some helpful new tutorial videos.
>> > >
>> > > ** Operating Kafka at scale requires that the system remain observable,
>> > and
>> > > to make that easier, we’ve made a number of improvements to metrics.
>> > These
>> > > are too many to summarize without becoming tedious, but Connect metrics
>> > > have been significantly improved (KIP-196), a litany of new health
>> check
>> > > metrics are now exposed (KIP-188), and we now have a global topic and
>> > > partition count (KIP-168). Check out KIP-164 and KIP-187 for even more.
>> > >
>> > > ** We now support Java 9, leading, among other things, to significantly
>> > > faster TLS and CRC32C implementations. Over-the-wire encryption will be
>> > > faster now, which will keep Kafka fast and compute costs low when
>> > > encryption is enabled.
>> > >
>> > > ** In keeping with the security theme, KIP-152 cleans up the error
>> > handling
>> > > on Simple Authentication Security Layer (SASL) authentication attempts.
>> > > Previously, some authentication error conditions were indistinguishable
>> > > from broker failures and were not logged in a clear way. This is
>> cleaner
>> > > now.
>> > >
>> > > ** Kafka can now tolerate disk failures better. Historically, JBOD
>> > storage
>> > > configurations have not been recommended, but the architecture has
>> > > nevertheless been tempting: after all, why not rely on Kafka’s own
>> > > replication mechanism to protect against storage failure rather than
>> > using
>> > > RAID? With KIP-112, Kafka now handles disk failure more gracefully. A
>> > > single disk failure in a JBOD broker will not bring the entire broker
>> > down;
>> > > rather, the broker will continue serving any log files that remain on
>> > > functioning disks.
>> > >
>> > > ** Since release 0.11.0, the idempotent producer (which is the producer
>> > > used in the presence of a transaction, which of course is the producer
>> we
>> > > use for exactly-once processing) required
>> > > max.in.flight.requests.per.connection
>> > > to be equal to one. As anyone who has written or tested a wire protocol
>> > can
>> > > attest, this put an upper bound on throughput. Thanks to KAFKA-5949,
>> this
>> > > can now be as large as five, relaxing the throughput constraint quite a
>> > bit.
>> > >
>> > >
>> > > All of the changes in this release can be found in the release notes:
>> > >
>> > > https://dist.apache.org/repos/dist/release/kafka/1.0.0/
>> > RELEASE_NOTES.html
>> > >
>> > >
>> > > You can download the source release from:
>> > >
>> > > https://www.apache.org/dyn/closer.cgi?path=/kafka/1.0.0/
>> > kafka-1.0.0-src.tgz
>> > >
>> > > and binary releases from:
>> > >
>> > > 

Re: [ANNOUNCE] Apache Kafka 1.0.0 Released

2017-11-02 Thread Tom Bentley
Thanks Guozhang!

On 2 November 2017 at 11:22, Mickael Maison 
wrote:

> Great milestone ! Thanks for running this release.
>
> On Thu, Nov 2, 2017 at 11:10 AM, Eno Thereska 
> wrote:
> > Congrats!
> >
> > Eno
> >
> > On Thu, Nov 2, 2017 at 10:55 AM, Xin Wang 
> wrote:
> >
> >> Great Job!
> >>
> >> - Xin
> >>
> >> 2017-11-02 18:30 GMT+08:00 Paolo Patierno :
> >>
> >> > Congratulations on this milestone!
> >> >
> >> >
> >> > Thanks to Guozhang for running the release!
> >> >
> >> >
> >> > Paolo Patierno
> >> > Senior Software Engineer (IoT) @ Red Hat
> >> > Microsoft MVP on Azure & IoT
> >> > Microsoft Azure Advisor
> >> >
> >> > Twitter : @ppatierno
> >> > Linkedin : paolopatierno
> >> > Blog : DevExperience
> >> >
> >> >
> >> > 
> >> > From: Jaikiran Pai 
> >> > Sent: Thursday, November 2, 2017 2:59 AM
> >> > To: dev@kafka.apache.org
> >> > Cc: Users
> >> > Subject: Re: [ANNOUNCE] Apache Kafka 1.0.0 Released
> >> >
> >> > Congratulations Kafka team on the release. Happy to see Kafka reach
> this
> >> > milestone. It has been a pleasure using Kafka and also interacting
> with
> >> > the Kafka team.
> >> >
> >> > -Jaikiran
> >> >
> >> >
> >> > On 01/11/17 7:57 PM, Guozhang Wang wrote:
> >> > > The Apache Kafka community is pleased to announce the release for
> >> Apache
> >> > > Kafka 1.0.0.
> >> > >
> >> > > This is a major release of the Kafka project, and is no mere bump of
> >> the
> >> > > version number. The Apache Kafka Project Management Committee has
> >> packed
> >> > a
> >> > > number of valuable enhancements into the release. Let me summarize a
> >> few
> >> > of
> >> > > them:
> >> > >
> >> > > ** Since its introduction in version 0.10, the Streams API has
> become
> >> > > hugely popular among Kafka users, including the likes of Pinterest,
> >> > > Rabobank, Zalando, and The New York Times. In 1.0, the API continues to
> >> > > evolve at a healthy pace. To begin with, the builder API has been
> >> > improved
> >> > > (KIP-120). A new API has been added to expose the state of active
> tasks
> >> > at
> >> > > runtime (KIP-130). Debuggability gets easier with enhancements to
> the
> >> > > print() and writeAsText() methods (KIP-160). And if that’s not
> enough,
> >> > > check out KIP-138 and KIP-161 too. For more on streams, check out
> the
> >> > > Apache Kafka Streams documentation (https://kafka.apache.org/docu
> >> > > mentation/streams/), including some helpful new tutorial videos.
> >> > >
> >> > > ** Operating Kafka at scale requires that the system remain
> observable,
> >> > and
> >> > > to make that easier, we’ve made a number of improvements to metrics.
> >> > These
> >> > > are too many to summarize without becoming tedious, but Connect
> metrics
> >> > > have been significantly improved (KIP-196), a litany of new health
> >> check
> >> > > metrics are now exposed (KIP-188), and we now have a global topic
> and
> >> > > partition count (KIP-168). Check out KIP-164 and KIP-187 for even
> more.
> >> > >
> >> > > ** We now support Java 9, leading, among other things, to
> significantly
> >> > > faster TLS and CRC32C implementations. Over-the-wire encryption
> will be
> >> > > faster now, which will keep Kafka fast and compute costs low when
> >> > > encryption is enabled.
> >> > >
> >> > > ** In keeping with the security theme, KIP-152 cleans up the error
> >> > handling
> >> > > on Simple Authentication Security Layer (SASL) authentication
> attempts.
> >> > > Previously, some authentication error conditions were
> indistinguishable
> >> > > from broker failures and were not logged in a clear way. This is
> >> cleaner
> >> > > now.
> >> > >
> >> > > ** Kafka can now tolerate disk failures better. Historically, JBOD
> >> > storage
> >> > > configurations have not been recommended, but the architecture has
> >> > > nevertheless been tempting: after all, why not rely on Kafka’s own
> >> > > replication mechanism to protect against storage failure rather than
> >> > using
> >> > > RAID? With KIP-112, Kafka now handles disk failure more gracefully.
> A
> >> > > single disk failure in a JBOD broker will not bring the entire
> broker
> >> > down;
> >> > > rather, the broker will continue serving any log files that remain
> on
> >> > > functioning disks.
> >> > >
> >> > > ** Since release 0.11.0, the idempotent producer (which is the
> >> > > producer used in the presence of a transaction, which of course is
> >> > > the producer we use for exactly-once processing) required
> >> > > max.in.flight.requests.per.connection
> >> > > to be equal to one. As anyone who has written or tested a wire
> protocol
> >> > can
> >> > > attest, this put an upper bound on throughput. Thanks to KAFKA-5949,
> >> this
> >> > > can now be as large as five, relaxing the throughput constraint
> quite a
> >> > bit.
> >> > >
> >> > >
> >> > > All of th

Re: [ANNOUNCE] Apache Kafka 1.0.0 Released

2017-11-02 Thread Ismael Juma
Thanks for running the release, Guozhang! Also thanks to all the
contributors who made 1.0 possible. :)

Ismael

On 1 Nov 2017 2:27 pm, "Guozhang Wang"  wrote:

The Apache Kafka community is pleased to announce the release for Apache
Kafka 1.0.0.

This is a major release of the Kafka project, and is no mere bump of the
version number. The Apache Kafka Project Management Committee has packed a
number of valuable enhancements into the release. Let me summarize a few of
them:

** Since its introduction in version 0.10, the Streams API has become
hugely popular among Kafka users, including the likes of Pinterest,
Rabobank, Zalando, and The New York Times. In 1.0, the API continues to
evolve at a healthy pace. To begin with, the builder API has been improved
(KIP-120). A new API has been added to expose the state of active tasks at
runtime (KIP-130). Debuggability gets easier with enhancements to the
print() and writeAsText() methods (KIP-160). And if that’s not enough,
check out KIP-138 and KIP-161 too. For more on streams, check out the
Apache Kafka Streams documentation (https://kafka.apache.org/docu
mentation/streams/),
including some helpful new tutorial videos.

** Operating Kafka at scale requires that the system remain observable, and
to make that easier, we’ve made a number of improvements to metrics. These
are too many to summarize without becoming tedious, but Connect metrics
have been significantly improved (KIP-196), a litany of new health check
metrics are now exposed (KIP-188), and we now have a global topic and
partition count (KIP-168). Check out KIP-164 and KIP-187 for even more.

** We now support Java 9, leading, among other things, to significantly
faster TLS and CRC32C implementations. Over-the-wire encryption will be
faster now, which will keep Kafka fast and compute costs low when
encryption is enabled.

** In keeping with the security theme, KIP-152 cleans up the error handling
on Simple Authentication Security Layer (SASL) authentication attempts.
Previously, some authentication error conditions were indistinguishable
from broker failures and were not logged in a clear way. This is cleaner
now.

** Kafka can now tolerate disk failures better. Historically, JBOD storage
configurations have not been recommended, but the architecture has
nevertheless been tempting: after all, why not rely on Kafka’s own
replication mechanism to protect against storage failure rather than using
RAID? With KIP-112, Kafka now handles disk failure more gracefully. A
single disk failure in a JBOD broker will not bring the entire broker down;
rather, the broker will continue serving any log files that remain on
functioning disks.

** Since release 0.11.0, the idempotent producer (which is the producer
used in the presence of a transaction, which of course is the producer we
use for exactly-once processing) required
max.in.flight.requests.per.connection
to be equal to one. As anyone who has written or tested a wire protocol can
attest, this put an upper bound on throughput. Thanks to KAFKA-5949, this
can now be as large as five, relaxing the throughput constraint quite a bit.
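
As an illustration, an idempotent producer taking advantage of the relaxed
limit might be configured like this (a sketch; broker address and topic are
placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class IdempotentProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("enable.idempotence", "true");
        // Before KAFKA-5949 the idempotent producer required this to be 1;
        // it can now be up to 5 without losing the ordering guarantees.
        props.put("max.in.flight.requests.per.connection", "5");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        }
    }
}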


All of the changes in this release can be found in the release notes:

https://dist.apache.org/repos/dist/release/kafka/1.0.0/RELEASE_NOTES.html


You can download the source release from:

https://www.apache.org/dyn/closer.cgi?path=/kafka/1.0.0/kafka-1.0.0-src.tgz

and binary releases from:

https://www.apache.org/dyn/closer.cgi?path=/kafka/1.0.0/kafka_2.11-1.0.0.tgz
(Scala
2.11)
https://www.apache.org/dyn/closer.cgi?path=/kafka/1.0.0/kafka_2.12-1.0.0.tgz
(Scala
2.12)



---

Apache Kafka is a distributed streaming platform with four core APIs:

** The Producer API allows an application to publish a stream of records to one
or more Kafka topics.

** The Consumer API allows an application to subscribe to one or more topics
and process the stream of records produced to them.

** The Streams API allows an application to act as a stream processor,
consuming
an input stream from one or more topics and producing an output stream to
one or more output topics, effectively transforming the input streams to
output streams.

** The Connector API allows building and running reusable producers or
consumers
that connect Kafka topics to existing applications or data systems. For
example, a connector to a relational database might capture every change to
a table.


With these APIs, Kafka can be used for two broad classes of application:

** Building real-time streaming data pipelines that reliably get data
between
systems or applications.

** Building real-time streaming applications that transform or react
to the streams
of data.


Apache Kafka is in use at large and small companies worldwide, including
Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
Targ

Re: [ANNOUNCE] Apache Kafka 1.0.0 Released

2017-11-02 Thread Damian Guy
Thanks Guozhang!

On Thu, 2 Nov 2017 at 11:42 Ismael Juma  wrote:

> Thanks for running the release, Guozhang! Also thanks to all the
> contributors who made 1.0 possible. :)
>
> Ismael
>
> On 1 Nov 2017 2:27 pm, "Guozhang Wang"  wrote:
>
> The Apache Kafka community is pleased to announce the release for Apache
> Kafka 1.0.0.
>
> This is a major release of the Kafka project, and is no mere bump of the
> version number. The Apache Kafka Project Management Committee has packed a
> number of valuable enhancements into the release. Let me summarize a few of
> them:
>
> ** Since its introduction in version 0.10, the Streams API has become
> hugely popular among Kafka users, including the likes of Pinterest,
> Rabobank, Zalando, and The New York Times. In 1.0, the API continues to
> evolve at a healthy pace. To begin with, the builder API has been improved
> (KIP-120). A new API has been added to expose the state of active tasks at
> runtime (KIP-130). Debuggability gets easier with enhancements to the
> print() and writeAsText() methods (KIP-160). And if that’s not enough,
> check out KIP-138 and KIP-161 too. For more on streams, check out the
> Apache Kafka Streams documentation (https://kafka.apache.org/docu
> mentation/streams/),
> including some helpful new tutorial videos.
>
> ** Operating Kafka at scale requires that the system remain observable, and
> to make that easier, we’ve made a number of improvements to metrics. These
> are too many to summarize without becoming tedious, but Connect metrics
> have been significantly improved (KIP-196), a litany of new health check
> metrics are now exposed (KIP-188), and we now have a global topic and
> partition count (KIP-168). Check out KIP-164 and KIP-187 for even more.
>
> ** We now support Java 9, leading, among other things, to significantly
> faster TLS and CRC32C implementations. Over-the-wire encryption will be
> faster now, which will keep Kafka fast and compute costs low when
> encryption is enabled.
>
> ** In keeping with the security theme, KIP-152 cleans up the error handling
> on Simple Authentication Security Layer (SASL) authentication attempts.
> Previously, some authentication error conditions were indistinguishable
> from broker failures and were not logged in a clear way. This is cleaner
> now.
>
> ** Kafka can now tolerate disk failures better. Historically, JBOD storage
> configurations have not been recommended, but the architecture has
> nevertheless been tempting: after all, why not rely on Kafka’s own
> replication mechanism to protect against storage failure rather than using
> RAID? With KIP-112, Kafka now handles disk failure more gracefully. A
> single disk failure in a JBOD broker will not bring the entire broker down;
> rather, the broker will continue serving any log files that remain on
> functioning disks.
>
> ** Since release 0.11.0, the idempotent producer (which is the producer
> used in the presence of a transaction, which of course is the producer we
> use for exactly-once processing) required
> max.in.flight.requests.per.connection
> to be equal to one. As anyone who has written or tested a wire protocol can
> attest, this put an upper bound on throughput. Thanks to KAFKA-5949, this
> can now be as large as five, relaxing the throughput constraint quite a
> bit.
>
>
> All of the changes in this release can be found in the release notes:
>
> https://dist.apache.org/repos/dist/release/kafka/1.0.0/RELEASE_NOTES.html
>
>
> You can download the source release from:
>
> https://www.apache.org/dyn/closer.cgi?path=/kafka/1.0.0/kafka-1.0.0-src.tgz
>
> and binary releases from:
>
>
> https://www.apache.org/dyn/closer.cgi?path=/kafka/1.0.0/kafka_2.11-1.0.0.tgz
> (Scala
> 2.11)
>
> https://www.apache.org/dyn/closer.cgi?path=/kafka/1.0.0/kafka_2.12-1.0.0.tgz
> (Scala
> 2.12)
>
>
> 
> ---
>
> Apache Kafka is a distributed streaming platform with four core APIs:
>
> ** The Producer API allows an application to publish a stream of records to
> one or more Kafka topics.
>
> ** The Consumer API allows an application to subscribe to one or more
> topics
> and process the stream of records produced to them.
>
> ** The Streams API allows an application to act as a stream processor,
> consuming
> an input stream from one or more topics and producing an output stream to
> one or more output topics, effectively transforming the input streams to
> output streams.
>
> ** The Connector API allows building and running reusable producers or
> consumers
> that connect Kafka topics to existing applications or data systems. For
> example, a connector to a relational database might capture every change to
> a table.
>
>
> With these APIs, Kafka can be used for two broad classes of application:
>
> ** Building real-time streami

Re: [ANNOUNCE] Apache Kafka 1.0.0 Released

2017-11-02 Thread UMESH CHAUDHARY
Great news, Congratulations to the team !

On Thu, 2 Nov 2017 at 17:17 Damian Guy  wrote:

> Thanks Guozhang!
>
> On Thu, 2 Nov 2017 at 11:42 Ismael Juma  wrote:
>
> > Thanks for running the release, Guozhang! Also thanks to all the
> > contributors who made 1.0 possible. :)
> >
> > Ismael
> >
> > On 1 Nov 2017 2:27 pm, "Guozhang Wang"  wrote:
> >
> > The Apache Kafka community is pleased to announce the release for Apache
> > Kafka 1.0.0.
> >
> > This is a major release of the Kafka project, and is no mere bump of the
> > version number. The Apache Kafka Project Management Committee has packed
> a
> > number of valuable enhancements into the release. Let me summarize a few
> of
> > them:
> >
> > ** Since its introduction in version 0.10, the Streams API has become
> > hugely popular among Kafka users, including the likes of Pinterest,
> > Rabobank, Zalando, and The New York Times. In 1.0, the API continues to
> > evolve at a healthy pace. To begin with, the builder API has been
> improved
> > (KIP-120). A new API has been added to expose the state of active tasks
> at
> > runtime (KIP-130). Debuggability gets easier with enhancements to the
> > print() and writeAsText() methods (KIP-160). And if that’s not enough,
> > check out KIP-138 and KIP-161 too. For more on streams, check out the
> > Apache Kafka Streams documentation (https://kafka.apache.org/docu
> > mentation/streams/),
> > including some helpful new tutorial videos.
> >
> > ** Operating Kafka at scale requires that the system remain observable,
> and
> > to make that easier, we’ve made a number of improvements to metrics.
> These
> > are too many to summarize without becoming tedious, but Connect metrics
> > have been significantly improved (KIP-196), a litany of new health check
> > metrics are now exposed (KIP-188), and we now have a global topic and
> > partition count (KIP-168). Check out KIP-164 and KIP-187 for even more.
> >
> > ** We now support Java 9, leading, among other things, to significantly
> > faster TLS and CRC32C implementations. Over-the-wire encryption will be
> > faster now, which will keep Kafka fast and compute costs low when
> > encryption is enabled.
> >
> > ** In keeping with the security theme, KIP-152 cleans up the error
> handling
> > on Simple Authentication Security Layer (SASL) authentication attempts.
> > Previously, some authentication error conditions were indistinguishable
> > from broker failures and were not logged in a clear way. This is cleaner
> > now.
> >
> > ** Kafka can now tolerate disk failures better. Historically, JBOD
> storage
> > configurations have not been recommended, but the architecture has
> > nevertheless been tempting: after all, why not rely on Kafka’s own
> > replication mechanism to protect against storage failure rather than
> using
> > RAID? With KIP-112, Kafka now handles disk failure more gracefully. A
> > single disk failure in a JBOD broker will not bring the entire broker
> down;
> > rather, the broker will continue serving any log files that remain on
> > functioning disks.
> >
> > ** Since release 0.11.0, the idempotent producer (which is the producer
> > used in the presence of a transaction, which of course is the producer we
> > use for exactly-once processing) required
> > max.in.flight.requests.per.connection
> > to be equal to one. As anyone who has written or tested a wire protocol
> can
> > attest, this put an upper bound on throughput. Thanks to KAFKA-5949, this
> > can now be as large as five, relaxing the throughput constraint quite a
> > bit.
> >
> >
> > All of the changes in this release can be found in the release notes:
> >
> >
> https://dist.apache.org/repos/dist/release/kafka/1.0.0/RELEASE_NOTES.html
> >
> >
> > You can download the source release from:
> >
> >
> https://www.apache.org/dyn/closer.cgi?path=/kafka/1.0.0/kafka-1.0.0-src.tgz
> >
> > and binary releases from:
> >
> >
> >
> https://www.apache.org/dyn/closer.cgi?path=/kafka/1.0.0/kafka_2.11-1.0.0.tgz
> > (Scala
> > 2.11)
> >
> >
> https://www.apache.org/dyn/closer.cgi?path=/kafka/1.0.0/kafka_2.12-1.0.0.tgz
> > (Scala
> > 2.12)
> >
> >
> > 
> > ---
> >
> > Apache Kafka is a distributed streaming platform with four core
> APIs:
> >
> > ** The Producer API allows an application to publish a stream of records to
> > one
> > or more Kafka topics.
> >
> > ** The Consumer API allows an application to subscribe to one or more
> > topics
> > and process the stream of records produced to them.
> >
> > ** The Streams API allows an application to act as a stream processor,
> > consuming
> > an input stream from one or more topics and producing an output stream to
> > one or more output topics, effectively transforming the input streams to
> > output streams.
> >
> > ** The Connector API allows building a

Re: [ANNOUNCE] Apache Kafka 1.0.0 Released

2017-11-02 Thread Rajini Sivaram
Guozhang,

Thank you for running the release!

Regards,

Rajini

On Thu, Nov 2, 2017 at 12:07 PM, UMESH CHAUDHARY 
wrote:

> Great news, congratulations to the team!
>
> On Thu, 2 Nov 2017 at 17:17 Damian Guy  wrote:
>
> > Thanks Guozhang!
> >
> > On Thu, 2 Nov 2017 at 11:42 Ismael Juma  wrote:
> >
> > > Thanks for running the release, Guozhang! Also thanks to all the
> > > contributors who made 1.0 possible. :)
> > >
> > > Ismael
> > >
> > > On 1 Nov 2017 2:27 pm, "Guozhang Wang"  wrote:
> > >
> > > The Apache Kafka community is pleased to announce the release for
> Apache
> > > Kafka 1.0.0.
> > >
> > > This is a major release of the Kafka project, and is no mere bump of
> the
> > > version number. The Apache Kafka Project Management Committee has
> packed
> > a
> > > number of valuable enhancements into the release. Let me summarize a
> few
> > of
> > > them:
> > >
> > > ** Since its introduction in version 0.10, the Streams API has become
> > > hugely popular among Kafka users, including the likes of Pinterest,
> > > Rabobank, Zalando, and The New York Times. In 1.0, the API
> continues
> > to
> > > evolve at a healthy pace. To begin with, the builder API has been
> > improved
> > > (KIP-120). A new API has been added to expose the state of active tasks
> > at
> > > runtime (KIP-130). Debuggability gets easier with enhancements to the
> > > print() and writeAsText() methods (KIP-160). And if that’s not enough,
> > > check out KIP-138 and KIP-161 too. For more on streams, check out the
> > > Apache Kafka Streams documentation (https://kafka.apache.org/documentation/streams/),
> > > including some helpful new tutorial videos.
> > >
> > > ** Operating Kafka at scale requires that the system remain observable,
> > and
> > > to make that easier, we’ve made a number of improvements to metrics.
> > These
> > > are too many to summarize without becoming tedious, but Connect metrics
> > > have been significantly improved (KIP-196), a litany of new health
> check
> > > metrics are now exposed (KIP-188), and we now have a global topic and
> > > partition count (KIP-168). Check out KIP-164 and KIP-187 for even more.
> > >
> > > ** We now support Java 9, leading, among other things, to significantly
> > > faster TLS and CRC32C implementations. Over-the-wire encryption will be
> > > faster now, which will keep Kafka fast and compute costs low when
> > > encryption is enabled.
> > >
> > > ** In keeping with the security theme, KIP-152 cleans up the error
> > handling
> > > on Simple Authentication and Security Layer (SASL) authentication attempts.
> > > Previously, some authentication error conditions were indistinguishable
> > > from broker failures and were not logged in a clear way. This is
> cleaner
> > > now.
> > >
> > > ** Kafka can now tolerate disk failures better. Historically, JBOD
> > storage
> > > configurations have not been recommended, but the architecture has
> > > nevertheless been tempting: after all, why not rely on Kafka’s own
> > > replication mechanism to protect against storage failure rather than
> > using
> > > RAID? With KIP-112, Kafka now handles disk failure more gracefully. A
> > > single disk failure in a JBOD broker will not bring the entire broker
> > down;
> > > rather, the broker will continue serving any log files that remain on
> > > functioning disks.
> > >
> > > ** Since release 0.11.0, the idempotent producer (which is the producer
> > > used in the presence of a transaction, which of course is the producer
> we
> > > use for exactly-once processing) required
> > > max.in.flight.requests.per.connection
> > > to be equal to one. As anyone who has written or tested a wire protocol
> > can
> > > attest, this put an upper bound on throughput. Thanks to KAFKA-5949,
> this
> > > can now be as large as five, relaxing the throughput constraint quite a
> > > bit.
> > >
> > >
> > > All of the changes in this release can be found in the release notes:
> > >
> > >
> > https://dist.apache.org/repos/dist/release/kafka/1.0.0/RELEASE_NOTES.html
> > >
> > >
> > > You can download the source release from:
> > >
> > >
> > https://www.apache.org/dyn/closer.cgi?path=/kafka/1.0.0/kafka-1.0.0-src.tgz
> > >
> > > and binary releases from:
> > >
> > >
> > >
> > https://www.apache.org/dyn/closer.cgi?path=/kafka/1.0.0/kafka_2.11-1.0.0.tgz
> > > (Scala
> > > 2.11)
> > >
> > >
> > https://www.apache.org/dyn/closer.cgi?path=/kafka/1.0.0/kafka_2.12-1.0.0.tgz
> > > (Scala
> > > 2.12)
> > >
> > >
> > > 
> > > ---
> > >
> > > Apache Kafka is a distributed streaming platform with four core
> > APIs:
> > >
> > > ** The Producer API allows an application to publish a stream of records
> to
> > > one
> > > or more Kafka topics.
> > >
> > > ** The Consumer API allows an application to subscribe

[jira] [Created] (KAFKA-6158) CONSUMER-ID and HOST values are concatenated if the CONSUMER-ID is > 50 chars

2017-11-02 Thread Gustav Westling (JIRA)
Gustav Westling created KAFKA-6158:
--

 Summary: CONSUMER-ID and HOST values are concatenated if the 
CONSUMER-ID is > 50 chars
 Key: KAFKA-6158
 URL: https://issues.apache.org/jira/browse/KAFKA-6158
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.11.0.0
Reporter: Gustav Westling


Using the command:
{noformat}
./kafka-consumer-groups.sh --bootstrap-server=localhost:9092 --describe --group 
foo-group
{noformat}

If the CONSUMER-ID is too long, the delimiter between CONSUMER-ID and HOST 
disappears.

Output:

{noformat}
TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG   
 CONSUMER-ID   HOST 
  CLIENT-ID
foobar-114 8948049 8948663 614
default-6697bb36-bf03-46e4-8f3e-4ef987177834-StreamThread-1-consumer-7c0345f5-4806-4957-be26-eb4b3bd6a9dc/10.2.3.40
 
default-6697bb36-bf03-46e4-8f3e-4ef987177834-StreamThread-1-consumer
{noformat}

Expected output:

{noformat}
TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG   
 CONSUMER-ID   HOST 
  CLIENT-ID
foobar-114 8948049 8948663 614
default-6697bb36-bf03-46e4-8f3e-4ef987177834-StreamThread-1-consumer-7c0345f5-4806-4957-be26-eb4b3bd6a9dc
 /10.2.3.40 
default-6697bb36-bf03-46e4-8f3e-4ef987177834-StreamThread-1-consumer
{noformat}

I suspect that the formatting rules at 
https://github.com/apache/kafka/blob/0.11.0/core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala#L137 
are incorrect.
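
For illustration, a small stand-alone Java sketch of the suspected cause (the 
widths and values below are made up): a fixed-width format such as {{%-50s}} 
pads values shorter than the width, but a longer value runs straight into the 
next column with no delimiter.

{code}
public class PaddingDemo {
    public static void main(String[] args) {
        String shortId = "consumer-1";
        String longId = "default-6697bb36-bf03-46e4-8f3e-4ef987177834-"
                + "StreamThread-1-consumer-7c0345f5-4806-4957-be26-eb4b3bd6a9dc";
        // Short value: the padding acts as the delimiter before the host column.
        System.out.println(String.format("%-50s%s", shortId, "/10.2.3.40"));
        // Long value: no padding is added, so consumer id and host fuse.
        System.out.println(String.format("%-50s%s", longId, "/10.2.3.40"));
    }
}
{code}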



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [ANNOUNCE] Apache Kafka 1.0.0 Released

2017-11-02 Thread Vahid S Hashemian
Great news. Thanks Guozhang!

--Vahid




From:   Rajini Sivaram 
To: dev 
Date:   11/02/2017 05:37 AM
Subject:Re: [ANNOUNCE] Apache Kafka 1.0.0 Released



Guozhang,

Thank you for running the release!

Regards,

Rajini

On Thu, Nov 2, 2017 at 12:07 PM, UMESH CHAUDHARY 
wrote:

> Great news, congratulations to the team!
>
> On Thu, 2 Nov 2017 at 17:17 Damian Guy  wrote:
>
> > Thanks Guozhang!
> >
> > On Thu, 2 Nov 2017 at 11:42 Ismael Juma  wrote:
> >
> > > Thanks for running the release, Guozhang! Also thanks to all the
> > > contributors who made 1.0 possible. :)
> > >
> > > Ismael
> > >
> > > On 1 Nov 2017 2:27 pm, "Guozhang Wang"  wrote:
> > >
> > > The Apache Kafka community is pleased to announce the release for
> Apache
> > > Kafka 1.0.0.
> > >
> > > This is a major release of the Kafka project, and is no mere bump of
> the
> > > version number. The Apache Kafka Project Management Committee has
> packed
> > a
> > > number of valuable enhancements into the release. Let me summarize a
> few
> > of
> > > them:
> > >
> > > ** Since its introduction in version 0.10, the Streams API has 
become
> > > hugely popular among Kafka users, including the likes of Pinterest,
> > > Rabobank, Zalando, and The New York Times. In 1.0, the API
> continues
> > to
> > > evolve at a healthy pace. To begin with, the builder API has been
> > improved
> > > (KIP-120). A new API has been added to expose the state of active 
tasks
> > at
> > > runtime (KIP-130). Debuggability gets easier with enhancements to 
the
> > > print() and writeAsText() methods (KIP-160). And if that’s not 
enough,
> > > check out KIP-138 and KIP-161 too. For more on streams, check out 
the
> > > Apache Kafka Streams documentation (https://kafka.apache.org/documentation/streams/),
> > > including some helpful new tutorial videos.
> > >
> > > ** Operating Kafka at scale requires that the system remain 
observable,
> > and
> > > to make that easier, we’ve made a number of improvements to metrics.
> > These
> > > are too many to summarize without becoming tedious, but Connect 
metrics
> > > have been significantly improved (KIP-196), a litany of new health
> check
> > > metrics are now exposed (KIP-188), and we now have a global topic 
and
> > > partition count (KIP-168). Check out KIP-164 and KIP-187 for even 
more.
> > >
> > > ** We now support Java 9, leading, among other things, to 
significantly
> > > faster TLS and CRC32C implementations. Over-the-wire encryption will 
be
> > > faster now, which will keep Kafka fast and compute costs low when
> > > encryption is enabled.
> > >
> > > ** In keeping with the security theme, KIP-152 cleans up the error
> > handling
> > > on Simple Authentication and Security Layer (SASL) authentication 
attempts.
> > > Previously, some authentication error conditions were 
indistinguishable
> > > from broker failures and were not logged in a clear way. This is
> cleaner
> > > now.
> > >
> > > ** Kafka can now tolerate disk failures better. Historically, JBOD
> > storage
> > > configurations have not been recommended, but the architecture has
> > > nevertheless been tempting: after all, why not rely on Kafka’s own
> > > replication mechanism to protect against storage failure rather than
> > using
> > > RAID? With KIP-112, Kafka now handles disk failure more gracefully. 
A
> > > single disk failure in a JBOD broker will not bring the entire 
broker
> > down;
> > > rather, the broker will continue serving any log files that remain 
on
> > > functioning disks.
> > >
> > > ** Since release 0.11.0, the idempotent producer (which is the 
producer
> > > used in the presence of a transaction, which of course is the 
producer
> we
> > > use for exactly-once processing) required
> > > max.in.flight.requests.per.connection
> > > to be equal to one. As anyone who has written or tested a wire 
protocol
> > can
> > > attest, this put an upper bound on throughput. Thanks to KAFKA-5949,
> this
> > > can now be as large as five, relaxing the throughput constraint 
quite a
> > > bit.
> > >
> > >
> > > All of the changes in this release can be found in the release 
notes:
> > >
> > >
> > https://dist.apache.org/repos/dist/release/kafka/1.0.0/RELEASE_NOTES.html

Re: [VOTE] KIP-210: Provide for custom error handling when Kafka Streams fails to produce

2017-11-02 Thread Garret Thompson
+1 (non-binding)

Thanks, Matt!


Re: [VOTE] KIP-210: Provide for custom error handling when Kafka Streams fails to produce

2017-11-02 Thread Ted Yu
+1

On Wed, Nov 1, 2017 at 4:50 PM, Guozhang Wang  wrote:

> +1 (binding) from me. Thanks!
>
> On Wed, Nov 1, 2017 at 4:50 PM, Guozhang Wang  wrote:
>
> > The vote should stay open for at least 72 hours. The bylaws can be found
> > here https://cwiki.apache.org/confluence/display/KAFKA/Bylaws
> >
> > On Wed, Nov 1, 2017 at 8:09 AM, Matt Farmer  wrote:
> >
> >> Hello all,
> >>
> >> It seems like discussion around KIP-210 has gone to a lull. I've got
> some
> >> candidate work underway for it already, so I'd like to go ahead and call
> >> it
> >> to a vote.
> >>
> >> For reference, the KIP can be found here:
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-210+-+
> >> Provide+for+custom+error+handling++when+Kafka+Streams+fails+to+produce
> >>
> >> Also, how long do vote threads generally stay open before the status of
> >> the KIP is changed?
> >>
> >> Cheers,
> >> Matt
> >>
> >
> >
> >
> > --
> > -- Guozhang
> >
>
>
>
> --
> -- Guozhang
>


Re: [DISCUSS] KIP-217: Expose a timeout to allow an expired ZK session to be re-created

2017-11-02 Thread Ted Yu
ZOOKEEPER-2184 is scheduled for 3.4.12, whose release date is unknown.

I think adding the session re-creation on the Kafka side should benefit Kafka
users, especially those who don't plan to move to 3.4.12+ in the near
future.

On Wed, Nov 1, 2017 at 6:34 PM, Jun Rao  wrote:

> Hi, Stephane,
>
> 3) The difference is that currently, there is no retry when re-creating the
> Zookeeper object when a ZK session expires. So, if the re-creation of
> Zookeeper fails, the broker just logs the error and the Zookeeper object
> will never be created again. With this KIP, we will keep retrying the
> creation of Zookeeper until success.
>
> Thanks,
>
> Jun
>
> On Tue, Oct 31, 2017 at 3:28 PM, Stephane Maarek <
> steph...@simplemachines.com.au> wrote:
>
> > Hi Jun,
> >
> > Thanks for the reply.
> >
> > 1) The reason I'm asking about it is I wonder if it's not worth focusing
> > the development efforts on taking ownership of the existing PR (
> > https://github.com/apache/zookeeper/pull/150)  to fix ZOOKEEPER-2184,
> > rebase it and have it merged into the ZK codebase shortly.  I feel this
> KIP
> > might introduce a setting that could be deprecated shortly and confuse
> the
> > end user a bit further with one more knob to turn.
> >
> > 3) I'm not sure if I fully understand, sorry for the beginner's question:
> > if the default timeout is infinite, then it won't change anything to how
> > Kafka works from today, does it? (unless I'm missing something sorry). If
> > not set to infinite, then we introduce the risk of a whole cluster
> shutting
> > down at once?
> >
> > Thanks,
> > Stephane
> >
> > On 31/10/17, 1:00 pm, "Jun Rao"  wrote:
> >
> > Hi, Stephane,
> >
> > Thanks for the reply.
> >
> > 1) Fixing the issue in ZK will be ideal. Not sure when it will happen
> > though. Once it's fixed, we can probably deprecate this config.
> >
> > 2) That could be useful. Is there a java api to do that at runtime?
> > Also,
> > invalidating DNS cache doesn't always fix the issue of unresolved
> > host. In
> > some of the cases, human intervention is needed.
> >
> > 3) The default timeout is infinite though.
> >
> > Jun
> >
> >
> > On Sat, Oct 28, 2017 at 11:48 PM, Stephane Maarek <
> > steph...@simplemachines.com.au> wrote:
> >
> > > Hi Jun,
> > >
> > > I think this is very helpful. Restarting Kafka brokers in case of
> > zookeeper
> > > host change is not a well known operation.
> > >
> > > Few questions:
> > > 1) would it not be worth fixing the problem at the source ? This
> has
> > been
> > > stuck for a while though, maybe a little push would help :
> > > https://issues.apache.org/jira/plugins/servlet/mobile#
> > issue/ZOOKEEPER-2184
> > >
> > > 2) upon recreating the zookeeper object , is it not possible to
> > invalidate
> > > the DNS cache so that it resolves the new hostname?
> > >
> > > 3) could the cluster be down in this situation: one migrates an
> > entire
> > > zookeeper cluster to new machines (one by one). The quorum is still
> > alive
> > > without downtime, but now every broker in a cluster can't resolve
> > zookeeper
> > > at the same time. They all shut down at the same time after the new
> > > time-out setting.
> > >
> > > Thanks !
> > > Stéphane
> > >
> > > On 28 Oct. 2017 9:42 am, "Jun Rao"  wrote:
> > >
> > > > Hi, Everyone,
> > > >
> > > > We created "KIP-217: Expose a timeout to allow an expired ZK
> > session to
> > > be
> > > > re-created".
> > > >
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > 217%3A+Expose+a+timeout+to+allow+an+expired+ZK+session+
> > to+be+re-created
> > > >
> > > > Please take a look and provide your feedback.
> > > >
> > > > Thanks,
> > > >
> > > > Jun
> > > >
> > >
> >
> >
> >
> >
>


[jira] [Created] (KAFKA-6159) Link to upgrade docs in 100 release notes is broken

2017-11-02 Thread JIRA
Martin Schröder created KAFKA-6159:
--

 Summary: Link to upgrade docs in 100 release notes is broken
 Key: KAFKA-6159
 URL: https://issues.apache.org/jira/browse/KAFKA-6159
 Project: Kafka
  Issue Type: Bug
  Components: documentation
Affects Versions: 1.0.0
Reporter: Martin Schröder


The release notes for 1.0.0 point to 
http://kafka.apache.org/100/documentation.html#upgrade for "upgrade 
documentation", but that gives a 404.

Maybe you mean http://kafka.apache.org/documentation.html#upgrade ?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-6137) RestoreIntegrationTest sometimes fails with assertion error

2017-11-02 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved KAFKA-6137.
---
Resolution: Cannot Reproduce

> RestoreIntegrationTest sometimes fails with assertion error
> ---
>
> Key: KAFKA-6137
> URL: https://issues.apache.org/jira/browse/KAFKA-6137
> Project: Kafka
>  Issue Type: Test
>  Components: streams, unit tests
>Reporter: Ted Yu
>Priority: Minor
>  Labels: flaky-test
>
> From https://builds.apache.org/job/kafka-1.0-jdk7/62 :
> {code}
> org.apache.kafka.streams.integration.RestoreIntegrationTest > 
> shouldSuccessfullyStartWhenLoggingDisabled FAILED
> java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.kafka.streams.integration.RestoreIntegrationTest.shouldSuccessfullyStartWhenLoggingDisabled(RestoreIntegrationTest.java:195)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [ANNOUNCE] Apache Kafka 1.0.0 Released

2017-11-02 Thread Becket Qin
Great news! Thanks for running the release, Guozhang!

On Thu, Nov 2, 2017 at 8:19 AM, Vahid S Hashemian  wrote:

> Great news. Thanks Guozhang!
>
> --Vahid
>
>
>
>
> From:   Rajini Sivaram 
> To: dev 
> Date:   11/02/2017 05:37 AM
> Subject:Re: [ANNOUNCE] Apache Kafka 1.0.0 Released
>
>
>
> Guozhang,
>
> Thank you for running the release!
>
> Regards,
>
> Rajini
>
> On Thu, Nov 2, 2017 at 12:07 PM, UMESH CHAUDHARY 
> wrote:
>
> > Great news, congratulations to the team!
> >
> > On Thu, 2 Nov 2017 at 17:17 Damian Guy  wrote:
> >
> > > Thanks Guozhang!
> > >
> > > On Thu, 2 Nov 2017 at 11:42 Ismael Juma  wrote:
> > >
> > > > Thanks for running the release, Guozhang! Also thanks to all the
> > > > contributors who made 1.0 possible. :)
> > > >
> > > > Ismael
> > > >
> > > > On 1 Nov 2017 2:27 pm, "Guozhang Wang"  wrote:
> > > >
> > > > The Apache Kafka community is pleased to announce the release for
> > Apache
> > > > Kafka 1.0.0.
> > > >
> > > > This is a major release of the Kafka project, and is no mere bump of
> > the
> > > > version number. The Apache Kafka Project Management Committee has
> > packed
> > > a
> > > > number of valuable enhancements into the release. Let me summarize a
> > few
> > > of
> > > > them:
> > > >
> > > > ** Since its introduction in version 0.10, the Streams API has
> become
> > > > hugely popular among Kafka users, including the likes of Pinterest,
> > > > Rabobank, Zalando, and The New York Times. In 1.0, the API
> > continues
> > > to
> > > > evolve at a healthy pace. To begin with, the builder API has been
> > > improved
> > > > (KIP-120). A new API has been added to expose the state of active
> tasks
> > > at
> > > > runtime (KIP-130). Debuggability gets easier with enhancements to
> the
> > > > print() and writeAsText() methods (KIP-160). And if that’s not
> enough,
> > > > check out KIP-138 and KIP-161 too. For more on streams, check out
> the
> > > > Apache Kafka Streams documentation (https://kafka.apache.org/documentation/streams/),
> > > > including some helpful new tutorial videos.
> > > >
> > > > ** Operating Kafka at scale requires that the system remain
> observable,
> > > and
> > > > to make that easier, we’ve made a number of improvements to metrics.
> > > These
> > > > are too many to summarize without becoming tedious, but Connect
> metrics
> > > > have been significantly improved (KIP-196), a litany of new health
> > check
> > > > metrics are now exposed (KIP-188), and we now have a global topic
> and
> > > > partition count (KIP-168). Check out KIP-164 and KIP-187 for even
> more.
> > > >
> > > > ** We now support Java 9, leading, among other things, to
> significantly
> > > > faster TLS and CRC32C implementations. Over-the-wire encryption will
> be
> > > > faster now, which will keep Kafka fast and compute costs low when
> > > > encryption is enabled.
> > > >
> > > > ** In keeping with the security theme, KIP-152 cleans up the error
> > > handling
> > > > on Simple Authentication and Security Layer (SASL) authentication
> attempts.
> > > > Previously, some authentication error conditions were
> indistinguishable
> > > > from broker failures and were not logged in a clear way. This is
> > cleaner
> > > > now.
> > > >
> > > > ** Kafka can now tolerate disk failures better. Historically, JBOD
> > > storage
> > > > configurations have not been recommended, but the architecture has
> > > > nevertheless been tempting: after all, why not rely on Kafka’s own
> > > > replication mechanism to protect against storage failure rather than
> > > using
> > > > RAID? With KIP-112, Kafka now handles disk failure more gracefully.
> A
> > > > single disk failure in a JBOD broker will not bring the entire
> broker
> > > down;
> > > > rather, the broker will continue serving any log files that remain
> on
> > > > functioning disks.
> > > >
> > > > ** Since release 0.11.0, the idempotent producer (which is the
> producer
> > > > used in the presence of a transaction, which of course is the
> producer
> > we
> > > > use for exactly-once processing) required
> > > > max.in.flight.requests.per.connection
> > > > to be equal to one. As anyone who ha

[jira] [Created] (KAFKA-6160) expose partitions under min isr in kafka-topics.sh command

2017-11-02 Thread Mitchell (JIRA)
Mitchell created KAFKA-6160:
---

 Summary: expose partitions under min isr in kafka-topics.sh command
 Key: KAFKA-6160
 URL: https://issues.apache.org/jira/browse/KAFKA-6160
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Reporter: Mitchell
Priority: Minor


It would be helpful to be able to list the partitions that are online but 
below their configured minimum ISR, instead of just the partitions that are 
under-replicated or offline. Kafka 1.0 has a metric for partitions under min 
ISR; it would be helpful to expose that via the kafka-topics command.
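
For example, a hypothetical option (the flag name below is only a suggestion, 
mirroring the existing {{--under-replicated-partitions}} option):

{noformat}
./kafka-topics.sh --zookeeper localhost:2181 --describe --under-min-isr-partitions
{noformat}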



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6161) Make configure() and close() empty by default in serde interfaces

2017-11-02 Thread Evgeny Veretennikov (JIRA)
Evgeny Veretennikov created KAFKA-6161:
--

 Summary: Make configure() and close() empty by default in serde 
interfaces
 Key: KAFKA-6161
 URL: https://issues.apache.org/jira/browse/KAFKA-6161
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Reporter: Evgeny Veretennikov
Assignee: Evgeny Veretennikov
Priority: Normal


{{Serializer}}, {{Deserializer}} and {{Serde}} interfaces have the methods 
{{configure()}} and {{close()}}. Pretty often one wants to leave these methods 
empty. For example, a lot of serializers inside the 
{{org.apache.kafka.common.serialization}} package have these methods empty:

{code}
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
    // nothing to do
}

@Override
public void close() {
    // nothing to do
}
{code}

Since we switched to source level 1.8 for javac, we can make these methods 
empty by default.
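
A minimal sketch of the proposed shape (simplified, not the exact Kafka 
interface source):

{code}
// Simplified sketch: with Java 8 default methods, implementations
// override configure()/close() only when they actually need them.
public interface Serializer<T> extends java.io.Closeable {

    default void configure(java.util.Map<String, ?> configs, boolean isKey) {
        // nothing to do by default
    }

    byte[] serialize(String topic, T data);

    @Override
    default void close() {
        // nothing to do by default
    }
}
{code}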

As another option, we could create new interfaces (like 
{{UnconfiguredSerializer}}) in which these methods are defined as empty, but 
just making the methods default seems more convenient.

As a side note, making the methods default shouldn't be binary incompatible, 
so it seems safe to make this change.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSS] KIP-218: Make KafkaFuture.Function java 8 lambda compatible

2017-11-02 Thread Steven Aerts
Hi Tom,

Nice observation.
I changed "Rejected Alternatives" section to "Other Alternatives", as
I see myself as too much of an outsider to the kafka community to be
able to decide without this discussion.

I see two major factors to decide:
 - how soon will KIP-118 (drop support of java 7) be implemented?
 - for which reasons do we drop backwards compatibility for public
interfaces marked as Evolving

If KIP-118 which is scheduled for version 2.0.0 is going to be
implemented soon, I agree with you that replacing KafkaFuture with
CompletableFuture (or CompletionStage) is a preferable option.
But as I am not familiar with the roadmap it is difficult to tell for me.
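
To make the compatibility question concrete, a simplified sketch (these are
not the real Kafka classes, just the two shapes under discussion):

// An abstract class cannot be the target of a lambda; a functional
// interface can.
public class LambdaCompatSketch {

    static abstract class AbstractFunction<A, B> {   // roughly the current shape
        public abstract B apply(A a);
    }

    @FunctionalInterface
    interface LambdaFriendlyFunction<A, B> {         // the lambda-compatible shape
        B apply(A a);
    }

    public static void main(String[] args) {
        AbstractFunction<Integer, String> before =
                new AbstractFunction<Integer, String>() {
                    @Override
                    public String apply(Integer i) { return "v" + i; }
                };
        LambdaFriendlyFunction<Integer, String> after = i -> "v" + i;
        System.out.println(before.apply(1) + " " + after.apply(2));
    }
}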


Thanks,


   Steven


2017-11-02 11:27 GMT+01:00 Tom Bentley :
> Hi Steven,
>
> I notice you've renamed the template's "Rejected Alternatives" section to
> "Other Alternatives", suggesting they're not rejected yet (or, if you have
> rejected them, I think you should give your reasons).
>
> Personally, I'd like to understand the arguments against simply replacing
> KafkaFuture with CompletableFuture in Kafka 2.0. In other words, if we were
> starting without needing to support Java 7 what would be the arguments for
> having our own KafkaFuture?
>
> Thanks,
>
> Tom
>
> On 1 November 2017 at 16:01, Ted Yu  wrote:
>
>> KAFKA-4423 is still open.
>> When would Java 7 be dropped?
>>
>> Thanks
>>
>> On Wed, Nov 1, 2017 at 8:56 AM, Ismael Juma  wrote:
>>
>> > On Wed, Nov 1, 2017 at 3:51 PM, Ted Yu  wrote:
>> >
>> > > bq. Wait for a kafka release which will not support java 7 anymore
>> > >
>> > > Do you want to raise a separate thread for the above ?
>> > >
>> >
>> > There is already a KIP for this so a separate thread is not needed.
>> >
>> > Ismael
>> >
>>


[GitHub] kafka pull request #4171: MINOR: Change version format in release notes pyth...

2017-11-02 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/4171

MINOR: Change version format in release notes python code

@ijuma @ewencp 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka KMinor-update-releasepy

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4171.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4171


commit 93e8e4abdf0f98db4d05fd623d3a47ed2dbbb7ff
Author: Guozhang Wang 
Date:   2017-11-02T19:22:23Z

Change minor version format




---


[jira] [Created] (KAFKA-6162) Stream Store tries to create directory with invalid name on Windows

2017-11-02 Thread Nitzan Niv (JIRA)
Nitzan Niv created KAFKA-6162:
-

 Summary: Stream Store tries to create directory with invalid name 
on Windows
 Key: KAFKA-6162
 URL: https://issues.apache.org/jira/browse/KAFKA-6162
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 1.0.0
 Environment: Windows
Reporter: Nitzan Niv
Priority: Major


The stream store attempts to create a directory whose name is generated from 
the segment ID (Segments.java, line 72): name + ":" + segmentId * segmentInterval

":" is not a valid character in a directory name on Windows, so the directory 
creation fails in RocksDB with the following exception:


org.apache.kafka.streams.errors.ProcessorStateException: task [1_0] Failed to 
flush state store XXX
at 
org.apache.kafka.streams.processor.internals.ProcessorStateManager.flush(ProcessorStateManager.java:248)
at 
org.apache.kafka.streams.processor.internals.AbstractTask.flushState(AbstractTask.java:196)
at 
org.apache.kafka.streams.processor.internals.StreamTask.flushState(StreamTask.java:324)
at 
org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:304)
at 
org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:208)
at 
org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:299)
at 
org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:289)
at 
org.apache.kafka.streams.processor.internals.AssignedTasks$2.apply(AssignedTasks.java:87)
at 
org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:451)
at 
org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:380)
at 
org.apache.kafka.streams.processor.internals.TaskManager.commitAll(TaskManager.java:309)
at 
org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:1018)
at 
org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:835)
at 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:774)
at 
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:744)
Caused by: org.apache.kafka.streams.errors.ProcessorStateException: Error 
opening store XXX:150962400 at location 
C:\Users\User\AppData\Local\Temp\embedded-kafka4250738387316061569\kafka-streams\Host-1509650758702-6227-aggregator\1_0\XXX\XXX:150962400
at 
org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:204)
at 
org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:174)
at 
org.apache.kafka.streams.state.internals.Segment.openDB(Segment.java:40)
at 
org.apache.kafka.streams.state.internals.Segments.getOrCreateSegment(Segments.java:89)
at 
org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStore.put(RocksDBSegmentedBytesStore.java:81)
at 
org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:43)
at 
org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:34)
at 
org.apache.kafka.streams.state.internals.ChangeLoggingWindowBytesStore.put(ChangeLoggingWindowBytesStore.java:67)
at 
org.apache.kafka.streams.state.internals.ChangeLoggingWindowBytesStore.put(ChangeLoggingWindowBytesStore.java:33)
at 
org.apache.kafka.streams.state.internals.CachingWindowStore$1.apply(CachingWindowStore.java:100)
at 
org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:141)
at 
org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:99)
at 
org.apache.kafka.streams.state.internals.ThreadCache.flush(ThreadCache.java:127)
at 
org.apache.kafka.streams.state.internals.CachingWindowStore.flush(CachingWindowStore.java:132)
at 
org.apache.kafka.streams.state.internals.MeteredWindowStore.flush(MeteredWindowStore.java:128)
at 
org.apache.kafka.streams.processor.internals.ProcessorStateManager.flush(ProcessorStateManager.java:245)
... 14 common frames omitted
Caused by: org.rocksdb.RocksDBException: Failed to create dir: 
C:\Users\User\AppData\Local\Temp\embedded-kafka4250738387316061569\kafka-streams\Host-1509650758702-6227-aggregator\1_0\XXX\XXX:150962400:
 Invalid argument
at org.rocksdb.RocksDB.open(Native Method)
at org.rocksdb.RocksDB.open(RocksDB.java:231)
at 
org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:197)
... 29 common frames omitted
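
A minimal stand-alone reproduction of the naming scheme (the values below are 
made up):

{code}
import java.io.File;

public class SegmentNameDemo {
    public static void main(String[] args) {
        String storeName = "XXX";
        long segmentId = 1L;
        long segmentInterval = 150962400L;
        // Same construction as in Segments.java: the ':' makes the name
        // invalid on Windows, so mkdirs() fails there.
        String segmentName = storeName + ":" + segmentId * segmentInterval;
        File dir = new File(System.getProperty("java.io.tmpdir"), segmentName);
        System.out.println(dir + " created: " + dir.mkdirs());
    }
}
{code}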
 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #4172: MINOR: Remove clients/out directory

2017-11-02 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/4172

MINOR: Remove clients/out directory

It was committed inadvertently.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka remove-out-folder

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4172.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4172


commit 3d87fe67a23d254fbb04bc1012e69482f91ef05a
Author: Ismael Juma 
Date:   2017-11-02T20:34:06Z

MINOR: Remove clients/out directory

It was committed inadvertently.




---


Kafka- Requesting for help

2017-11-02 Thread Suraj Ramesh
Hi,

I would like to request help with this issue.

I should not commit the offset when an exception occurs while processing a
message. I am using the approach below to manually commit offsets. Can you
please help me with getting the uncommitted offsets so that they can be
re-processed at a later point in time?


import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.listener.AcknowledgingMessageListener;
import org.springframework.kafka.support.Acknowledgment;

public class ConsumerKafka implements AcknowledgingMessageListener<String, String> {

    @Override
    @KafkaListener(id = "consumer", topics = {"${kafka.topic}"})
    public void onMessage(ConsumerRecord<String, String> data, Acknowledgment acknowledgment) {
        try {
            System.out.println("Read Record is : " + data.value());
            System.out.println("Offset is : " + data.offset());
            System.out.println("Topic is : " + data.topic());
            System.out.println("Partition is : " + data.partition());
            acknowledgment.acknowledge();
        } catch (Exception e) {
            System.out.println("Push the message to Error Stream : " + e);
        }
    }
}

If an exception occurs, the catch block doesn't commit the offset.

Kafka Config.

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.env.Environment;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.config.KafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import org.springframework.retry.backoff.FixedBackOffPolicy;
import org.springframework.retry.policy.AlwaysRetryPolicy;
import org.springframework.retry.support.RetryTemplate;

import com.learnbootkafka.consumer.ConsumerKafka;

@Configuration
@EnableKafka
public class KafkaConfig {

    @Autowired
    Environment env;

    /**
     * Consumer Config Starts
     */
    @Bean
    KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.getContainerProperties().setPollTimeout(3000);
        factory.getContainerProperties().setAckMode(AbstractMessageListenerContainer.AckMode.MANUAL);
        return factory;
    }

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs());
    }

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> propsMap = new HashMap<>();
        propsMap.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, env.getProperty("kafka.broker"));
        propsMap.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, env.getProperty("enable.auto.commit"));
        propsMap.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, env.getProperty("auto.commit.interval.ms"));
        propsMap.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        propsMap.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        propsMap.put(ConsumerConfig.GROUP_ID_CONFIG, env.getProperty("group.id"));
        propsMap.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, env.getProperty("kafka.auto.offset.reset"));
        return propsMap;
    }

    @Bean
    public ConsumerKafka listener() {
        return new ConsumerKafka();
    }
}
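
To make the goal concrete, here is a rough sketch using the plain consumer API
(not Spring; the helper method is just for illustration): seeking a partition
back to its last committed offset makes the consumer redeliver everything that
was never acknowledged.

import java.util.Collections;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class ReprocessUncommitted {

    // Illustration only: rewind a partition to its last committed offset so
    // that every record after it is delivered again on the next poll().
    public static void rewindToCommitted(KafkaConsumer<String, String> consumer,
                                         TopicPartition tp) {
        OffsetAndMetadata committed = consumer.committed(tp);
        if (committed == null) {
            // nothing committed yet: start from the beginning of the partition
            consumer.seekToBeginning(Collections.singletonList(tp));
        } else {
            consumer.seek(tp, committed.offset());
        }
    }
}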


I would be thankful for any help.

Thank You,
Suraj PR


Re: [DISCUSS] KIP-217: Expose a timeout to allow an expired ZK session to be re-created

2017-11-02 Thread Jun Rao
Stephane, Jeff,

Another option is to not expose the reconnect timeout config at all and just
retry the creation of the ZooKeeper object forever. This is an improvement
over the current situation, and if ZOOKEEPER-2184 is fixed in the future,
there will be no config to deprecate.

Thanks,

Jun

On Thu, Nov 2, 2017 at 9:02 AM, Ted Yu  wrote:

> ZOOKEEPER-2184 is scheduled for 3.4.12 whose release is unknown.
>
> I think adding the session recreation on Kafka side should benefit Kafka
> users, especially those who don't plan to move to 3.4.12+ in the near
> future.
>
> On Wed, Nov 1, 2017 at 6:34 PM, Jun Rao  wrote:
>
> > Hi, Stephane,
> >
> > 3) The difference is that currently, there is no retry when re-creating
> the
> > Zookeeper object when a ZK session expires. So, if the re-creation of
> > Zookeeper fails, the broker just logs the error and the Zookeeper object
> > will never be created again. With this KIP, we will keep retrying the
> > creation of Zookeeper until success.
> >
> > Thanks,
> >
> > Jun
> >
> > On Tue, Oct 31, 2017 at 3:28 PM, Stephane Maarek <
> > steph...@simplemachines.com.au> wrote:
> >
> > > Hi Jun,
> > >
> > > Thanks for the reply.
> > >
> > > 1) The reason I'm asking about it is I wonder if it's not worth
> focusing
> > > the development efforts on taking ownership of the existing PR (
> > > https://github.com/apache/zookeeper/pull/150)  to fix ZOOKEEPER-2184,
> > > rebase it and have it merged into the ZK codebase shortly.  I feel this
> > KIP
> > > might introduce a setting that could be deprecated shortly and confuse
> > the
> > > end user a bit further with one more knob to turn.
> > >
> > > 3) I'm not sure if I fully understand, sorry for the beginner's
> question:
> > > if the default timeout is infinite, then it won't change anything to
> how
> > > Kafka works from today, does it? (unless I'm missing something sorry).
> If
> > > not set to infinite, then we introduce the risk of a whole cluster
> > shutting
> > > down at once?
> > >
> > > Thanks,
> > > Stephane
> > >
> > > On 31/10/17, 1:00 pm, "Jun Rao"  wrote:
> > >
> > > Hi, Stephane,
> > >
> > > Thanks for the reply.
> > >
> > > 1) Fixing the issue in ZK will be ideal. Not sure when it will
> happen
> > > though. Once it's fixed, we can probably deprecate this config.
> > >
> > > 2) That could be useful. Is there a java api to do that at runtime?
> > > Also,
> > > invalidating DNS cache doesn't always fix the issue of unresolved
> > > host. In
> > > some of the cases, human intervention is needed.
> > >
> > > 3) The default timeout is infinite though.
> > >
> > > Jun
> > >
> > >
> > > On Sat, Oct 28, 2017 at 11:48 PM, Stephane Maarek <
> > > steph...@simplemachines.com.au> wrote:
> > >
> > > > Hi Jun,
> > > >
> > > > I think this is very helpful. Restarting Kafka brokers in case of
> > > zookeeper
> > > > host change is not a well known operation.
> > > >
> > > > Few questions:
> > > > 1) would it not be worth fixing the problem at the source ? This
> > has
> > > been
> > > > stuck for a while though, maybe a little push would help :
> > > > https://issues.apache.org/jira/plugins/servlet/mobile#
> > > issue/ZOOKEEPER-2184
> > > >
> > > > 2) upon recreating the zookeeper object , is it not possible to
> > > invalidate
> > > > the DNS cache so that it resolves the new hostname?
> > > >
> > > > 3) could the cluster be down in this situation: one migrates an
> > > entire
> > > > zookeeper cluster to new machines (one by one). The quorum is
> still
> > > alive
> > > > without downtime, but now every broker in a cluster can't resolve
> > > zookeeper
> > > > at the same time. They all shut down at the same time after the
> new
> > > > time-out setting.
> > > >
> > > > Thanks !
> > > > Stéphane
> > > >
> > > > On 28 Oct. 2017 9:42 am, "Jun Rao"  wrote:
> > > >
> > > > > Hi, Everyone,
> > > > >
> > > > > We created "KIP-217: Expose a timeout to allow an expired ZK
> > > session to
> > > > be
> > > > > re-created".
> > > > >
> > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > 217%3A+Expose+a+timeout+to+allow+an+expired+ZK+session+
> > > to+be+re-created
> > > > >
> > > > > Please take a look and provide your feedback.
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Jun
> > > > >
> > > >
> > >
> > >
> > >
> > >
> >
>


Re: [DISCUSS] KIP-217: Expose a timeout to allow an expired ZK session to be re-created

2017-11-02 Thread Stephane Maarek
Hi Jun

I think this is a better option. Would that change still require a KIP, given
that it's not a change in a public API?

@Ted it was marked as a blocker for 3.4.11, but they pushed it back. It seems
that the owner of the PR hasn't acted in over a year, and I think someone
needs to take ownership of that. Additionally, this would be a change in
Kafka's ZooKeeper client dependency, so there would be no need to update your
ZooKeeper quorum to benefit from the change.

Thanks
Stéphane


On 3 Nov. 2017 8:45 am, "Jun Rao"  wrote:

Stephane, Jeff,

Another option is to not expose the reconnect timeout config at all and just
retry the creation of the ZooKeeper object forever. This is an improvement
over the current situation, and if ZOOKEEPER-2184 is fixed in the future,
there will be no config to deprecate.

Thanks,

Jun

On Thu, Nov 2, 2017 at 9:02 AM, Ted Yu  wrote:

> ZOOKEEPER-2184 is scheduled for 3.4.12 whose release is unknown.
>
> I think adding the session recreation on Kafka side should benefit Kafka
> users, especially those who don't plan to move to 3.4.12+ in the near
> future.
>
> On Wed, Nov 1, 2017 at 6:34 PM, Jun Rao  wrote:
>
> > Hi, Stephane,
> >
> > 3) The difference is that currently, there is no retry when re-creating
> the
> > Zookeeper object when a ZK session expires. So, if the re-creation of
> > Zookeeper fails, the broker just logs the error and the Zookeeper object
> > will never be created again. With this KIP, we will keep retrying the
> > creation of Zookeeper until success.
> >
> > Thanks,
> >
> > Jun
> >
> > On Tue, Oct 31, 2017 at 3:28 PM, Stephane Maarek <
> > steph...@simplemachines.com.au> wrote:
> >
> > > Hi Jun,
> > >
> > > Thanks for the reply.
> > >
> > > 1) The reason I'm asking about it is I wonder if it's not worth
> focusing
> > > the development efforts on taking ownership of the existing PR (
> > > https://github.com/apache/zookeeper/pull/150)  to fix ZOOKEEPER-2184,
> > > rebase it and have it merged into the ZK codebase shortly.  I feel
this
> > KIP
> > > might introduce a setting that could be deprecated shortly and confuse
> > the
> > > end user a bit further with one more knob to turn.
> > >
> > > 3) I'm not sure if I fully understand, sorry for the beginner's
> question:
> > > if the default timeout is infinite, then it won't change anything to
> how
> > > Kafka works from today, does it? (unless I'm missing something sorry).
> If
> > > not set to infinite, then we introduce the risk of a whole cluster
> > shutting
> > > down at once?
> > >
> > > Thanks,
> > > Stephane
> > >
> > > On 31/10/17, 1:00 pm, "Jun Rao"  wrote:
> > >
> > > Hi, Stephane,
> > >
> > > Thanks for the reply.
> > >
> > > 1) Fixing the issue in ZK will be ideal. Not sure when it will
> happen
> > > though. Once it's fixed, we can probably deprecate this config.
> > >
> > > 2) That could be useful. Is there a java api to do that at
runtime?
> > > Also,
> > > invalidating DNS cache doesn't always fix the issue of unresolved
> > > host. In
> > > some of the cases, human intervention is needed.
> > >
> > > 3) The default timeout is infinite though.
> > >
> > > Jun
> > >
> > >
> > > On Sat, Oct 28, 2017 at 11:48 PM, Stephane Maarek <
> > > steph...@simplemachines.com.au> wrote:
> > >
> > > > Hi Jun,
> > > >
> > > > I think this is very helpful. Restarting Kafka brokers in case
of
> > > zookeeper
> > > > host change is not a well known operation.
> > > >
> > > > Few questions:
> > > > 1) would it not be worth fixing the problem at the source ? This
> > has
> > > been
> > > > stuck for a while though, maybe a little push would help :
> > > > https://issues.apache.org/jira/plugins/servlet/mobile#
> > > issue/ZOOKEEPER-2184
> > > >
> > > > 2) upon recreating the zookeeper object , is it not possible to
> > > invalidate
> > > > the DNS cache so that it resolves the new hostname?
> > > >
> > > > 3) could the cluster be down in this situation: one migrates an
> > > entire
> > > > zookeeper cluster to new machines (one by one). The quorum is
> still
> > > alive
> > > > without downtime, but now every broker in a cluster can't
resolve
> > > zookeeper
> > > > at the same time. They all shut down at the same time after the
> new
> > > > time-out setting.
> > > >
> > > > Thanks !
> > > > Stéphane
> > > >
> > > > On 28 Oct. 2017 9:42 am, "Jun Rao"  wrote:
> > > >
> > > > > Hi, Everyone,
> > > > >
> > > > > We created "KIP-217: Expose a timeout to allow an expired ZK
> > > session to
> > > > be
> > > > > re-created".
> > > > >
> > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > 217%3A+Expose+a+timeout+to+allow+an+expired+ZK+session+
> > > to+be+re-created
> > > > >
> > > > > Please take a look and provide your feedback.
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Jun
> > > > >
> > > >
> > >
> > >
> > >
> > >
> >
>


[GitHub] kafka pull request #4163: MINOR: build.gradle: sourceCompatibility, targetCo...

2017-11-02 Thread cmccabe
Github user cmccabe closed the pull request at:

https://github.com/apache/kafka/pull/4163


---


Re: [DISCUSS] KIP-217: Expose a timeout to allow an expired ZK session to be re-created

2017-11-02 Thread Ted Yu
Stephane:
bq. hasn't acted in over a year

The above fact implies some reluctance from the ZooKeeper community to
fully solve the issue (maybe due to technical issues).
In any case, we should plan on not relying on the fix going through in the
near future.

As for Jun's latest suggestion, I think we should add periodic logging
indicating the retry.

A KIP is not needed if we go that route.
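
Something along these lines, as a rough sketch (the names and logging are made
up; this is not the actual broker code):

import java.io.IOException;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ZkCreateWithRetries {

    // Retry forever; log only every Nth attempt to avoid flooding the log.
    public static ZooKeeper createWithRetries(String connect, int sessionTimeoutMs,
                                              Watcher watcher, long backoffMs)
            throws InterruptedException {
        int attempt = 0;
        while (true) {
            try {
                return new ZooKeeper(connect, sessionTimeoutMs, watcher);
            } catch (IOException e) {
                attempt++;
                if (attempt % 10 == 1) {
                    System.err.println("ZooKeeper re-creation failed, "
                            + attempt + " attempts so far: " + e);
                }
                Thread.sleep(backoffMs);
            }
        }
    }
}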

Cheers

On Thu, Nov 2, 2017 at 2:54 PM, Stephane Maarek <
steph...@simplemachines.com.au> wrote:

> Hi Jun
>
> I think this is a better option. Would that change still require a KIP,
> given that it's not a change in a public API?
>
> @Ted it was marked as a blocker for 3.4.11, but they pushed it back. It seems
> that the owner of the PR hasn't acted in over a year, and I think someone
> needs to take ownership of that. Additionally, this would be a change in
> Kafka's ZooKeeper client dependency, so there would be no need to update your
> ZooKeeper quorum to benefit from the change.
>
> Thanks
> Stéphane
>
>
> On 3 Nov. 2017 8:45 am, "Jun Rao"  wrote:
>
> Stephane, Jeff,
>
> Another option is to not expose the reconnect timeout config at all and just
> retry the creation of the ZooKeeper object forever. This is an improvement
> over the current situation, and if ZOOKEEPER-2184 is fixed in the future,
> there will be no config to deprecate.
>
> Thanks,
>
> Jun
>
> On Thu, Nov 2, 2017 at 9:02 AM, Ted Yu  wrote:
>
> > ZOOKEEPER-2184 is scheduled for 3.4.12 whose release is unknown.
> >
> > I think adding the session recreation on Kafka side should benefit Kafka
> > users, especially those who don't plan to move to 3.4.12+ in the near
> > future.
> >
> > On Wed, Nov 1, 2017 at 6:34 PM, Jun Rao  wrote:
> >
> > > Hi, Stephane,
> > >
> > > 3) The difference is that currently, there is no retry when re-creating
> > the
> > > Zookeeper object when a ZK session expires. So, if the re-creation of
> > > Zookeeper fails, the broker just logs the error and the Zookeeper
> object
> > > will never be created again. With this KIP, we will keep retrying the
> > > creation of Zookeeper until success.
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Tue, Oct 31, 2017 at 3:28 PM, Stephane Maarek <
> > > steph...@simplemachines.com.au> wrote:
> > >
> > > > Hi Jun,
> > > >
> > > > Thanks for the reply.
> > > >
> > > > 1) The reason I'm asking about it is I wonder if it's not worth
> > focusing
> > > > the development efforts on taking ownership of the existing PR (
> > > > https://github.com/apache/zookeeper/pull/150)  to fix
> ZOOKEEPER-2184,
> > > > rebase it and have it merged into the ZK codebase shortly.  I feel
> this
> > > KIP
> > > > might introduce a setting that could be deprecated shortly and
> confuse
> > > the
> > > > end user a bit further with one more knob to turn.
> > > >
> > > > 3) I'm not sure if I fully understand, sorry for the beginner's
> > question:
> > > > if the default timeout is infinite, then it won't change anything to
> > how
> > > > Kafka works from today, does it? (unless I'm missing something
> sorry).
> > If
> > > > not set to infinite, then we introduce the risk of a whole cluster
> > > shutting
> > > > down at once?
> > > >
> > > > Thanks,
> > > > Stephane
> > > >
> > > > On 31/10/17, 1:00 pm, "Jun Rao"  wrote:
> > > >
> > > > Hi, Stephane,
> > > >
> > > > Thanks for the reply.
> > > >
> > > > 1) Fixing the issue in ZK will be ideal. Not sure when it will
> > happen
> > > > though. Once it's fixed, we can probably deprecate this config.
> > > >
> > > > 2) That could be useful. Is there a java api to do that at
> runtime?
> > > > Also,
> > > > invalidating DNS cache doesn't always fix the issue of unresolved
> > > > host. In
> > > > some of the cases, human intervention is needed.
> > > >
> > > > 3) The default timeout is infinite though.
> > > >
> > > > Jun
> > > >
> > > >
> > > > On Sat, Oct 28, 2017 at 11:48 PM, Stephane Maarek <
> > > > steph...@simplemachines.com.au> wrote:
> > > >
> > > > > Hi Jun,
> > > > >
> > > > > I think this is very helpful. Restarting Kafka brokers in case
> of
> > > > zookeeper
> > > > > host change is not a well known operation.
> > > > >
> > > > > Few questions:
> > > > > 1) would it not be worth fixing the problem at the source ?
> This
> > > has
> > > > been
> > > > > stuck for a while though, maybe a little push would help :
> > > > > https://issues.apache.org/jira/plugins/servlet/mobile#
> > > > issue/ZOOKEEPER-2184
> > > > >
> > > > > 2) upon recreating the zookeeper object , is it not possible to
> > > > invalidate
> > > > > the DNS cache so that it resolves the new hostname?
> > > > >
> > > > > 3) could the cluster be down in this situation: one migrates an
> > > > entire
> > > > > zookeeper cluster to new machines (one by one). The quorum is
> > still
> > > > alive
> > > > > without downtime, but now every broker in a cluster can't
> resolve
> > > > zookeeper
> > > > > at the same time. They all shut down at th

[jira] [Created] (KAFKA-6163) Broker should fail fast on startup if an error occurs while loading logs

2017-11-02 Thread JIRA
Xavier Léauté created KAFKA-6163:


 Summary: Broker should fail fast on startup if an error occurs 
while loading logs
 Key: KAFKA-6163
 URL: https://issues.apache.org/jira/browse/KAFKA-6163
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.11.0.0
Reporter: Xavier Léauté
Priority: Normal


If the broker fails to load one of the logs during startup, we currently don't 
fail fast. The {{LogManager}} will log an error and initiate the shutdown 
sequence, but will continue loading all the remaining logs before shutting down.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #4172: MINOR: Remove clients/out directory

2017-11-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4172


---


Build failed in Jenkins: kafka-trunk-jdk9 #165

2017-11-02 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] MINOR: Remove clients/out directory

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H25 (couchdbtest ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 554e0b529884e0dd6b4968a0fc58b02f58e95a07 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 554e0b529884e0dd6b4968a0fc58b02f58e95a07
Commit message: "MINOR: Remove clients/out directory"
 > git rev-list f88fdbd3115cdb0f1bd26817513f3d33359512b1 # timeout=10
Setting 
GRADLE_3_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_3.4-rc-2
[kafka-trunk-jdk9] $ /bin/bash -xe /tmp/jenkins1691870081897104231.sh
+ rm -rf 
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_3.4-rc-2/bin/gradle

FAILURE: Build failed with an exception.

* What went wrong:
Could not determine java version from '9.0.1'.

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_3_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_3.4-rc-2
ERROR: Step 'Publish JUnit test result report' failed: Test reports were found 
but none of them are new. Did tests run? 
For example, 

 is 6 days 7 hr old

Setting 
GRADLE_3_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_3.4-rc-2
Not sending mail to unregistered user wangg...@gmail.com
Not sending mail to unregistered user rajinisiva...@googlemail.com


[jira] [Created] (KAFKA-6164) ClientQuotaManager threads prevent shutdown when encountering an error loading logs

2017-11-02 Thread JIRA
Xavier Léauté created KAFKA-6164:


 Summary: ClientQuotaManager threads prevent shutdown when 
encountering an error loading logs
 Key: KAFKA-6164
 URL: https://issues.apache.org/jira/browse/KAFKA-6164
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.11.0.0, 1.0.0
Reporter: Xavier Léauté
Priority: Major


While diagnosing KAFKA-6163, we noticed that when the broker initiates a 
shutdown sequence in response to an error loading the logs, the process never 
exits. The JVM appears to be waiting indefinitely for several non-daemon 
threads to terminate.

The threads in question are {{ThrottledRequestReaper-Request}}, 
{{ThrottledRequestReaper-Produce}}, and {{ThrottledRequestReaper-Fetch}}, so it 
appears we don't properly shut down {{ClientQuotaManager}} in this situation.

QuotaManager shutdown is currently handled by KafkaApis; however, KafkaApis will 
never be instantiated in the first place if we encounter an error loading the 
logs, so the quota managers are left dangling in that case.
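
A minimal Java sketch of the hang (names are invented): a non-daemon thread 
keeps the JVM alive until it is explicitly stopped, which never happens if the 
component that would stop it is never created.

public class ReaperSketch {
    public static void main(String[] args) {
        Thread reaper = new Thread(() -> {
            try {
                while (true) {
                    Thread.sleep(1000); // pretend to expire throttled requests
                }
            } catch (InterruptedException e) {
                // interruption is the shutdown signal
            }
        }, "ThrottledRequestReaper-Request");
        reaper.setDaemon(false); // non-daemon: the JVM cannot exit while it runs
        reaper.start();
        // main() returns here, but the process does not exit; uncommenting
        // the next line is the equivalent of a proper shutdown() call.
        // reaper.interrupt();
    }
}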



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSS] KIP-217: Expose a timeout to allow an expired ZK session to be re-created

2017-11-02 Thread Ted Yu
The following JIRA provides some background on why upgrading immediately
following a new release may not be prudent (though I expect this to be rare):

ZOOKEEPER-2347

On Thu, Nov 2, 2017 at 3:00 PM, Ted Yu  wrote:

> Stephane:
> bq. hasn't acted in over a year
>
> The above fact implies some reluctance from the zookeeper community to
> fully solve the issue (maybe due to technical issues).
> Anyway, we should plan on not relying on the fix to go through in the near
> future.
>
> As for Jun's latest suggestion, I think we should add periodic logging
> indicating the retry.
>
> A KIP is not needed if we go that route.
>
> Cheers
>
> On Thu, Nov 2, 2017 at 2:54 PM, Stephane Maarek <
> steph...@simplemachines.com.au> wrote:
>
>> Hi Jun
>>
>> I think this is a better option. Would that change require a kip then as
>> it's not a change in public API ?
>>
>> @ted it was marked as a blocker for 3.4.11 but they pushed it. It seems
>> that the owner of the PR hasn't acted in over a year and I think someone
>> needs to take ownership of that. Additionally, this would be a change in
>> the Kafka zookeeper client dependency, so there is no need to update your
>> zookeeper quorum to benefit from the change
>>
>> Thanks
>> Stéphane
>>
>>
>> On 3 Nov. 2017 8:45 am, "Jun Rao"  wrote:
>>
>> Stephane, Jeff,
>>
>> Another option is to not expose the reconnect timeout config and just
>> retry
>> the creation of Zookeeper forever. This is an improvement from the current
>> situation and if zookeeper-2184 is fixed in the future, we don't need to
>> deprecate the config.
>>
>> Thanks,
>>
>> Jun
>>
>> On Thu, Nov 2, 2017 at 9:02 AM, Ted Yu  wrote:
>>
>> > ZOOKEEPER-2184 is scheduled for 3.4.12 whose release is unknown.
>> >
>> > I think adding the session recreation on Kafka side should benefit Kafka
>> > users, especially those who don't plan to move to 3.4.12+ in the near
>> > future.
>> >
>> > On Wed, Nov 1, 2017 at 6:34 PM, Jun Rao  wrote:
>> >
>> > > Hi, Stephane,
>> > >
>> > > 3) The difference is that currently, there is no retry when
>> re-creating
>> > the
>> > > Zookeeper object when a ZK session expires. So, if the re-creation of
>> > > Zookeeper fails, the broker just logs the error and the Zookeeper
>> object
>> > > will never be created again. With this KIP, we will keep retrying the
>> > > creation of Zookeeper until success.
>> > >
>> > > Thanks,
>> > >
>> > > Jun
>> > >
>> > > On Tue, Oct 31, 2017 at 3:28 PM, Stephane Maarek <
>> > > steph...@simplemachines.com.au> wrote:
>> > >
>> > > > Hi Jun,
>> > > >
>> > > > Thanks for the reply.
>> > > >
>> > > > 1) The reason I'm asking about it is I wonder if it's not worth
>> > focusing
>> > > > the development efforts on taking ownership of the existing PR (
>> > > > https://github.com/apache/zookeeper/pull/150)  to fix
>> ZOOKEEPER-2184,
>> > > > rebase it and have it merged into the ZK codebase shortly.  I feel
>> this
>> > > KIP
>> > > > might introduce a setting that could be deprecated shortly and
>> confuse
>> > > the
>> > > > end user a bit further with one more knob to turn.
>> > > >
>> > > > 3) I'm not sure if I fully understand, sorry for the beginner's
>> > question:
>> > > > if the default timeout is infinite, then it won't change anything to
>> > how
>> > > > Kafka works from today, does it? (unless I'm missing something
>> sorry).
>> > If
>> > > > not set to infinite, then we introduce the risk of a whole cluster
>> > > shutting
>> > > > down at once?
>> > > >
>> > > > Thanks,
>> > > > Stephane
>> > > >
>> > > > On 31/10/17, 1:00 pm, "Jun Rao"  wrote:
>> > > >
>> > > > Hi, Stephane,
>> > > >
>> > > > Thanks for the reply.
>> > > >
>> > > > 1) Fixing the issue in ZK will be ideal. Not sure when it will
>> > happen
>> > > > though. Once it's fixed, we can probably deprecate this config.
>> > > >
>> > > > 2) That could be useful. Is there a java api to do that at
>> runtime?
>> > > > Also,
>> > > > invalidating DNS cache doesn't always fix the issue of
>> unresolved
>> > > > host. In
>> > > > some of the cases, human intervention is needed.
>> > > >
>> > > > 3) The default timeout is infinite though.
>> > > >
>> > > > Jun
>> > > >
>> > > >
>> > > > On Sat, Oct 28, 2017 at 11:48 PM, Stephane Maarek <
>> > > > steph...@simplemachines.com.au> wrote:
>> > > >
>> > > > > Hi Jun,
>> > > > >
>> > > > > I think this is very helpful. Restarting Kafka brokers in case
>> of
>> > > > zookeeper
>> > > > > host change is not a well known operation.
>> > > > >
>> > > > > Few questions:
>> > > > > 1) would it not be worth fixing the problem at the source ?
>> This
>> > > has
>> > > > been
>> > > > > stuck for a while though, maybe a little push would help :
>> > > > > https://issues.apache.org/jira/plugins/servlet/mobile#
>> > > > issue/ZOOKEEPER-2184
>> > > > >
>> > > > > 2) upon recreating the zookeeper object , is it not possible
>> to
>> > > > invalidate
>> > > > > the DNS cache so that it reso
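
On question 2 above: the closest thing to a supported runtime knob in the JDK 
is the networkaddress.cache.ttl security property, which must be set before the 
first lookup; there is no public API to flush an already-populated cache, which 
is consistent with Jun's caveat. A sketch, assuming a hypothetical host name:

import java.net.InetAddress;
import java.security.Security;

public class DnsCacheSketch {
    public static void main(String[] args) throws Exception {
        // Must run before InetAddress performs its first lookup.
        Security.setProperty("networkaddress.cache.ttl", "0");          // no positive caching
        Security.setProperty("networkaddress.cache.negative.ttl", "0"); // no negative caching
        System.out.println(InetAddress.getByName("zookeeper.example.com"));
    }
}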

Build failed in Jenkins: kafka-trunk-jdk7 #2940

2017-11-02 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] MINOR: Remove clients/out directory

--
[...truncated 381.87 KB...]

kafka.api.TransactionsTest > testFencingOnSendOffsets STARTED

kafka.api.TransactionsTest > testFencingOnSendOffsets PASSED

kafka.api.TransactionsTest > testFencingOnAddPartitions STARTED

kafka.api.TransactionsTest > testFencingOnAddPartitions PASSED

kafka.api.TransactionsTest > testFencingOnTransactionExpiration STARTED

kafka.api.TransactionsTest > testFencingOnTransactionExpiration PASSED

kafka.api.TransactionsTest > testDelayedFetchIncludesAbortedTransaction STARTED

kafka.api.TransactionsTest > testDelayedFetchIncludesAbortedTransaction PASSED

kafka.api.TransactionsTest > testReadCommittedConsumerShouldNotSeeUndecidedData 
STARTED

kafka.api.TransactionsTest > testReadCommittedConsumerShouldNotSeeUndecidedData 
PASSED

kafka.api.TransactionsTest > testFencingOnSend STARTED

kafka.api.TransactionsTest > testFencingOnSend PASSED

kafka.api.TransactionsTest > testFencingOnCommit STARTED

kafka.api.TransactionsTest > testFencingOnCommit PASSED

kafka.api.TransactionsTest > testMultipleMarkersOneLeader STARTED

kafka.api.TransactionsTest > testMultipleMarkersOneLeader PASSED

kafka.api.TransactionsTest > testSendOffsets STARTED

kafka.api.TransactionsTest > testSendOffsets PASSED

kafka.api.GroupCoordinatorIntegrationTest > 
testGroupCoordinatorPropagatesOfffsetsTopicCompressionCodec STARTED

kafka.api.GroupCoordinatorIntegrationTest > 
testGroupCoordinatorPropagatesOfffsetsTopicCompressionCodec PASSED

kafka.api.AdminClientIntegrationTest > testDescribeReplicaLogDirs STARTED

kafka.api.AdminClientIntegrationTest > testDescribeReplicaLogDirs PASSED

kafka.api.AdminClientIntegrationTest > testInvalidAlterConfigs STARTED

kafka.api.AdminClientIntegrationTest > testInvalidAlterConfigs PASSED

kafka.api.AdminClientIntegrationTest > testClose STARTED

kafka.api.AdminClientIntegrationTest > testClose PASSED

kafka.api.AdminClientIntegrationTest > testMinimumRequestTimeouts STARTED

kafka.api.AdminClientIntegrationTest > testMinimumRequestTimeouts PASSED

kafka.api.AdminClientIntegrationTest > testForceClose STARTED

kafka.api.AdminClientIntegrationTest > testForceClose PASSED

kafka.api.AdminClientIntegrationTest > testListNodes STARTED

kafka.api.AdminClientIntegrationTest > testListNodes PASSED

kafka.api.AdminClientIntegrationTest > testDelayedClose STARTED

kafka.api.AdminClientIntegrationTest > testDelayedClose PASSED

kafka.api.AdminClientIntegrationTest > testCreateDeleteTopics STARTED

kafka.api.AdminClientIntegrationTest > testCreateDeleteTopics PASSED

kafka.api.AdminClientIntegrationTest > testDescribeLogDirs STARTED

kafka.api.AdminClientIntegrationTest > testDescribeLogDirs PASSED

kafka.api.AdminClientIntegrationTest > testAlterReplicaLogDirs STARTED

kafka.api.AdminClientIntegrationTest > testAlterReplicaLogDirs PASSED

kafka.api.AdminClientIntegrationTest > testAclOperations STARTED

kafka.api.AdminClientIntegrationTest > testAclOperations PASSED

kafka.api.AdminClientIntegrationTest > testDescribeCluster STARTED

kafka.api.AdminClientIntegrationTest > testDescribeCluster PASSED

kafka.api.AdminClientIntegrationTest > testCreatePartitions STARTED

kafka.api.AdminClientIntegrationTest > testCreatePartitions PASSED

kafka.api.AdminClientIntegrationTest > testDescribeNonExistingTopic STARTED

kafka.api.AdminClientIntegrationTest > testDescribeNonExistingTopic PASSED

kafka.api.AdminClientIntegrationTest > testDescribeAndAlterConfigs STARTED

kafka.api.AdminClientIntegrationTest > testDescribeAndAlterConfigs PASSED

kafka.api.AdminClientIntegrationTest > testCallInFlightTimeouts STARTED

kafka.api.AdminClientIntegrationTest > testCallInFlightTimeouts PASSED

kafka.api.LogAppendTimeTest > testProduceConsume STARTED

kafka.api.LogAppendTimeTest > testProduceConsume PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testTransactionalProducerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testTransactionalProducerWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testProducerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testProducerWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerGroupServiceWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerGroupServiceWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAutoCommitDisabledWithAuthenticationFailure 
STARTED

kafka.api.SaslClientsWit

[GitHub] kafka pull request #4171: MINOR: Change version format in release notes pyth...

2017-11-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4171


---


[GitHub] kafka pull request #4168: MINOR: update producer client request timeout in s...

2017-11-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4168


---


Jenkins build is back to normal : kafka-trunk-jdk8 #2184

2017-11-02 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-217: Expose a timeout to allow an expired ZK session to be re-created

2017-11-02 Thread Jeff Widman
+1 for permanent retry under the covers (without an exposed/later
deprecated config).

That said, I understand the reality that sometimes we have to work around an
unfixed issue in another project, so if you think it best to expose a config,
then I have no objections. Mainly I wanted to make sure you'd tried to get
upstream to fix it, as that is almost always the cleaner solution.

> The above fact implies some reluctance from the zookeeper community to fully
> solve the issue (maybe due to technical issues).

@Ted - I spent some time a few months ago poking through issues on the ZK
issue tracker, and it looked like there wasn't much activity on the project
lately. So my guess is that it's less about problems with this particular
solution, and more that the solution has just enough moving parts that no
one with commit rights has had the time to review it. As a volunteer
maintainer on a number of projects, I certainly empathize with them,
although it would be nice to get some more committers onto the Zookeeper
project who have the time to review some of these semi-abandoned PRs and
either accept or reject them.
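
For concreteness, a rough sketch of the retry-forever-with-periodic-logging
approach under discussion (invented names; not the actual KIP patch):

import java.util.concurrent.TimeUnit;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.ZooKeeper;

public class ZkReconnectSketch {
    static ZooKeeper recreateSessionForever(String connectString, int sessionTimeoutMs)
            throws InterruptedException {
        long attempts = 0;
        while (true) {
            try {
                // Throws IOException if, e.g., the connect string cannot be resolved.
                return new ZooKeeper(connectString, sessionTimeoutMs, (WatchedEvent e) -> { });
            } catch (Exception e) {
                attempts++;
                if (attempts % 10 == 0) { // periodic logging, per Ted's suggestion
                    System.err.println("Still retrying ZooKeeper session creation, attempt " + attempts);
                }
                TimeUnit.SECONDS.sleep(1);
            }
        }
    }
}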



On Thu, Nov 2, 2017 at 3:00 PM, Ted Yu  wrote:

> Stephane:
> bq. hasn't acted in over a year
>
> The above fact implies some reluctance from the zookeeper community to
> fully solve the issue (maybe due to technical issues).
> Anyway, we should plan on not relying on the fix to go through in the near
> future.
>
> As for Jun's latest suggestion, I think we should add periodic logging
> indicating the retry.
>
> A KIP is not needed if we go that route.
>
> Cheers
>
> On Thu, Nov 2, 2017 at 2:54 PM, Stephane Maarek <
> steph...@simplemachines.com.au> wrote:
>
> > Hi Jun
> >
> > I think this is a better option. Would that change require a kip then as
> > it's not a change in public API ?
> >
> > @ted it was marked as a blocker for 3.4.11 but they pushed it. It seems
> > that the owner of the PR hasn't acted in over a year and I think someone
> > needs to take ownership of that. Additionally, this would be a change in
> > the Kafka zookeeper client dependency, so there is no need to update your
> > zookeeper quorum to benefit from the change
> >
> > Thanks
> > Stéphane
> >
> >
> > On 3 Nov. 2017 8:45 am, "Jun Rao"  wrote:
> >
> > Stephane, Jeff,
> >
> > Another option is to not expose the reconnect timeout config and just
> retry
> > the creation of Zookeeper forever. This is an improvement from the
> current
> > situation and if zookeeper-2184 is fixed in the future, we don't need to
> > deprecate the config.
> >
> > Thanks,
> >
> > Jun
> >
> > On Thu, Nov 2, 2017 at 9:02 AM, Ted Yu  wrote:
> >
> > > ZOOKEEPER-2184 is scheduled for 3.4.12 whose release is unknown.
> > >
> > > I think adding the session recreation on Kafka side should benefit
> Kafka
> > > users, especially those who don't plan to move to 3.4.12+ in the near
> > > future.
> > >
> > > On Wed, Nov 1, 2017 at 6:34 PM, Jun Rao  wrote:
> > >
> > > > Hi, Stephane,
> > > >
> > > > 3) The difference is that currently, there is no retry when
> re-creating
> > > the
> > > > Zookeeper object when a ZK session expires. So, if the re-creation of
> > > > Zookeeper fails, the broker just logs the error and the Zookeeper
> > object
> > > > will never be created again. With this KIP, we will keep retrying the
> > > > creation of Zookeeper until success.
> > > >
> > > > Thanks,
> > > >
> > > > Jun
> > > >
> > > > On Tue, Oct 31, 2017 at 3:28 PM, Stephane Maarek <
> > > > steph...@simplemachines.com.au> wrote:
> > > >
> > > > > Hi Jun,
> > > > >
> > > > > Thanks for the reply.
> > > > >
> > > > > 1) The reason I'm asking about it is I wonder if it's not worth
> > > focusing
> > > > > the development efforts on taking ownership of the existing PR (
> > > > > https://github.com/apache/zookeeper/pull/150)  to fix
> > ZOOKEEPER-2184,
> > > > > rebase it and have it merged into the ZK codebase shortly.  I feel
> > this
> > > > KIP
> > > > > might introduce a setting that could be deprecated shortly and
> > confuse
> > > > the
> > > > > end user a bit further with one more knob to turn.
> > > > >
> > > > > 3) I'm not sure if I fully understand, sorry for the beginner's
> > > question:
> > > > > if the default timeout is infinite, then it won't change anything
> to
> > > how
> > > > > Kafka works from today, does it? (unless I'm missing something
> > sorry).
> > > If
> > > > > not set to infinite, then we introduce the risk of a whole cluster
> > > > shutting
> > > > > down at once?
> > > > >
> > > > > Thanks,
> > > > > Stephane
> > > > >
> > > > > On 31/10/17, 1:00 pm, "Jun Rao"  wrote:
> > > > >
> > > > > Hi, Stephane,
> > > > >
> > > > > Thanks for the reply.
> > > > >
> > > > > 1) Fixing the issue in ZK will be ideal. Not sure when it will
> > > happen
> > > > > though. Once it's fixed, we can probably deprecate this config.
> > > > >
> > > > > 2) That could be useful. Is there a java api to do that at
> > runtime?
> > > > > Also,
> > > > > invalidating D

kafka-pr-jdk9-scala2.12 keeps failing

2017-11-02 Thread Ted Yu
Hi,
I took a look at recent runs under
https://builds.apache.org/job/kafka-pr-jdk9-scala2.12

All the recent runs failed with:

Could not update commit status of the Pull Request on GitHub.
org.kohsuke.github.HttpException: Server returned HTTP response code:
201, message: 'Created' for URL:
https://api.github.com/repos/apache/kafka/statuses/3d96c6f5b2edd3c1dbea11dab003c4ac78ee141a
at org.kohsuke.github.Requester.parse(Requester.java:633)
at org.kohsuke.github.Requester.parse(Requester.java:594)
at org.kohsuke.github.Requester._to(Requester.java:272)
at org.kohsuke.github.Requester.to(Requester.java:234)
at 
org.kohsuke.github.GHRepository.createCommitStatus(GHRepository.java:1071)

...

Caused by: com.fasterxml.jackson.databind.JsonMappingException:
Numeric value (4298492118) out of range of int
 at [Source: 
{"url":"https://api.github.com/repos/apache/kafka/statuses/3d96c6f5b2edd3c1dbea11dab003c4ac78ee141a","id":4298492118,"state":"pending","description":"Build
started sha1 is
merged.","target_url":"https://builds.apache.org/job/kafka-pr-jdk9-scala2.12/2397/","context":"JDK
9 and Scala 2.12",


Should we upgrade the version for jackson ?
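
For reference, a minimal reproduction of the coercion failure (the Status
classes are invented stand-ins for the github-api model class, whose id field
appears to be declared int):

import com.fasterxml.jackson.databind.ObjectMapper;

public class JacksonOverflowSketch {
    static class IntStatus  { public int id; }
    static class LongStatus { public long id; }

    public static void main(String[] args) throws Exception {
        String json = "{\"id\":4298492118}";      // exceeds Integer.MAX_VALUE
        ObjectMapper mapper = new ObjectMapper();
        System.out.println(mapper.readValue(json, LongStatus.class).id); // fine
        mapper.readValue(json, IntStatus.class);  // JsonMappingException:
        // "Numeric value (4298492118) out of range of int"
    }
}

If so, upgrading jackson would not help by itself; the int field in the client
library would need to become a long.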


Cheers


Re: kafka-pr-jdk9-scala2.12 keeps failing

2017-11-02 Thread Guozhang Wang
Noticed that as well. Could we track down which git commit / version
upgrade caused the issue?


Guozhang

On Thu, Nov 2, 2017 at 6:25 PM, Ted Yu  wrote:

> Hi,
> I took a look at recent runs under https://builds.apache.
> org/job/kafka-pr-jdk9-scala2.12
>
> All the recent runs failed with:
>
> Could not update commit status of the Pull Request on GitHub.
> org.kohsuke.github.HttpException: Server returned HTTP response code:
> 201, message: 'Created' for URL:
> https://api.github.com/repos/apache/kafka/statuses/
> 3d96c6f5b2edd3c1dbea11dab003c4ac78ee141a
> at org.kohsuke.github.Requester.parse(Requester.java:633)
> at org.kohsuke.github.Requester.parse(Requester.java:594)
> at org.kohsuke.github.Requester._to(Requester.java:272)
> at org.kohsuke.github.Requester.to(Requester.java:234)
> at org.kohsuke.github.GHRepository.createCommitStatus(
> GHRepository.java:1071)
>
> ...
>
> Caused by: com.fasterxml.jackson.databind.JsonMappingException:
> Numeric value (4298492118) out of range of int
>  at [Source: {"url":"https://api.github.com/repos/apache/kafka/statuses/
> 3d96c6f5b2edd3c1dbea11dab003c4ac78ee141a","id":4298492118,"
> state":"pending","description":"Build
> started sha1 is
> merged.","target_url":"https://builds.apache.org/job/kafka-
> pr-jdk9-scala2.12/2397/","context":"JDK
> 9 and Scala 2.12",
>
>
> Should we upgrade the version for jackson ?
>
>
> Cheers
>



-- 
-- Guozhang


Build failed in Jenkins: kafka-trunk-jdk9 #166

2017-11-02 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: Change version format in release notes python code

[wangguoz] MINOR: update producer client request timeout in system test

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H25 (couchdbtest ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision e4208b1d5fa1c28ac7e64e2cb039404a14084dc0 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f e4208b1d5fa1c28ac7e64e2cb039404a14084dc0
Commit message: "MINOR: update producer client request timeout in system test"
 > git rev-list 554e0b529884e0dd6b4968a0fc58b02f58e95a07 # timeout=10
Setting 
GRADLE_3_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_3.4-rc-2
[kafka-trunk-jdk9] $ /bin/bash -xe /tmp/jenkins3539969489941454604.sh
+ rm -rf 
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_3.4-rc-2/bin/gradle

FAILURE: Build failed with an exception.

* What went wrong:
Could not determine java version from '9.0.1'.

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_3_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_3.4-rc-2
ERROR: Step 'Publish JUnit test result report' failed: Test reports were found 
but none of them are new. Did tests run? 
For example, 

 is 6 days 10 hr old

Setting 
GRADLE_3_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_3.4-rc-2
Not sending mail to unregistered user wangg...@gmail.com
Not sending mail to unregistered user rajinisiva...@googlemail.com


Jenkins build is back to normal : kafka-trunk-jdk7 #2941

2017-11-02 Thread Apache Jenkins Server
See 




Re: kafka-pr-jdk9-scala2.12 keeps failing

2017-11-02 Thread Ted Yu
Looking at earlier runs, e.g. :
https://builds.apache.org/job/kafka-pr-jdk9-scala2.12/2384/console

FAILURE: Build failed with an exception.

* What went wrong:
Could not determine java version from '9.0.1'.


This was the first build with 'out of range of int' exception:


https://builds.apache.org/job/kafka-pr-jdk9-scala2.12/2389/console


However, I haven't found the commit that was at the tip of the repo at that time.


On Thu, Nov 2, 2017 at 6:40 PM, Guozhang Wang  wrote:

> Noticed that as well, could we track down to which git commit / version
> upgrade caused the issue?
>
>
> Guozhang
>
> On Thu, Nov 2, 2017 at 6:25 PM, Ted Yu  wrote:
>
> > Hi,
> > I took a look at recent runs under https://builds.apache.
> > org/job/kafka-pr-jdk9-scala2.12
> >
> > All the recent runs failed with:
> >
> > Could not update commit status of the Pull Request on GitHub.
> > org.kohsuke.github.HttpException: Server returned HTTP response code:
> > 201, message: 'Created' for URL:
> > https://api.github.com/repos/apache/kafka/statuses/
> > 3d96c6f5b2edd3c1dbea11dab003c4ac78ee141a
> > at org.kohsuke.github.Requester.parse(Requester.java:633)
> > at org.kohsuke.github.Requester.parse(Requester.java:594)
> > at org.kohsuke.github.Requester._to(Requester.java:272)
> > at org.kohsuke.github.Requester.to(Requester.java:234)
> > at org.kohsuke.github.GHRepository.createCommitStatus(
> > GHRepository.java:1071)
> >
> > ...
> >
> > Caused by: com.fasterxml.jackson.databind.JsonMappingException:
> > Numeric value (4298492118) out of range of int
> >  at [Source: {"url":"https://api.github.com/repos/apache/kafka/statuses/
> > 3d96c6f5b2edd3c1dbea11dab003c4ac78ee141a","id":4298492118,"
> > state":"pending","description":"Build
> > started sha1 is
> > merged.","target_url":"https://builds.apache.org/job/kafka-
> > pr-jdk9-scala2.12/2397/","context":"JDK
> > 9 and Scala 2.12",
> >
> >
> > Should we upgrade the version for jackson ?
> >
> >
> > Cheers
> >
>
>
>
> --
> -- Guozhang
>


Build failed in Jenkins: kafka-trunk-jdk7 #2942

2017-11-02 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: update producer client request timeout in system test

--
[...truncated 380.55 KB...]
kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldFollowLeaderEpochBasicWorkflow STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldFollowLeaderEpochBasicWorkflow PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldNotAllowDivergentLogs STARTED
ERROR: Could not install GRADLE_3_4_RC_2_HOME
java.lang.NullPointerException
at 
hudson.plugins.toolenv.ToolEnvBuildWrapper$1.buildEnvVars(ToolEnvBuildWrapper.java:46)
at hudson.model

[GitHub] kafka pull request #4173: KAFKA-6156: Metric tag name should not contain col...

2017-11-02 Thread huxihx
GitHub user huxihx opened a pull request:

https://github.com/apache/kafka/pull/4173

KAFKA-6156: Metric tag name should not contain colons.

Windows directory paths often contain colons, which are not allowed in 
yammer metrics. We should convert to the corresponding Unix-style path before 
creating metrics.
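
A hypothetical illustration of the sanitization described (the actual patch
may differ):

public class MetricTagSketch {
    // Convert a Windows log-dir path into a colon-free, Unix-style tag value.
    static String toMetricTagValue(String logDir) {
        return logDir.replace('\\', '/').replace(':', '_');
    }

    public static void main(String[] args) {
        System.out.println(toMetricTagValue("C:\\tmp\\kafka-logs")); // C_/tmp/kafka-logs
    }
}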

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/huxihx/kafka KAFKA-6156

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4173.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4173


commit 36a1188a3f9c93d21f68b591d4fd16fa90bd8bad
Author: huxihx 
Date:   2017-11-03T06:13:47Z

KAFKA-6156: Metric tag name should not contain colons.

Windows directory paths often contain colons, which are not allowed in 
yammer metrics. We should convert to the corresponding Unix-style path before 
creating metrics.




---


[jira] [Created] (KAFKA-6165) Kafka Brokers goes down with outOfMemoryError.

2017-11-02 Thread kaushik srinivas (JIRA)
kaushik srinivas created KAFKA-6165:
---

 Summary: Kafka Brokers goes down with outOfMemoryError.
 Key: KAFKA-6165
 URL: https://issues.apache.org/jira/browse/KAFKA-6165
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.11.0.0
 Environment: DCOS cluster with 4 agent nodes and 3 masters.

agent machine config :
RAM : 384 GB
DISK : 4TB


Reporter: kaushik srinivas
Priority: Major
 Attachments: kafka_config.txt, stderr_broker1.txt, stderr_broker2.txt, 
stdout_broker1.txt, stdout_broker2.txt

Performance testing Kafka with end-to-end pipelines:
Kafka Data Producer -> kafka -> spark streaming -> hdfs -- stream1
Kafka Data Producer -> kafka -> flume -> hdfs -- stream2

stream1 kafka configs :
No of topics : 10
No of partitions : 20 for all the topics

stream2 kafka configs :
No of topics : 10
No of partitions : 20 for all the topics

Some important Kafka Configuration :
"BROKER_MEM": "32768"(32GB)
"BROKER_JAVA_HEAP": "16384"(16GB)
"BROKER_COUNT": "3"
"KAFKA_MESSAGE_MAX_BYTES": "112"(1MB)
"KAFKA_REPLICA_FETCH_MAX_BYTES": "1048576"(1MB)
"KAFKA_NUM_PARTITIONS": "20"
"BROKER_DISK_SIZE": "5000" (5GB)
"KAFKA_LOG_SEGMENT_BYTES": "5000",(50MB)
"KAFKA_LOG_RETENTION_BYTES": "50"(5GB)

Data Producer to Kafka throughput:

message rate : approx. 500,000 (5 lakh) messages/sec across all the 3 brokers and 
topics/partitions.
message size : approx. 300 to 400 bytes.

Issues observed with these configs:

Issue 1:

stack trace:

[2017-11-03 00:56:28,484] FATAL [Replica Manager on Broker 0]: Halting due to 
unrecoverable I/O error while handling produce request:  
(kafka.server.ReplicaManager)
kafka.common.KafkaStorageException: I/O exception in append to log 
'store_sales-16'
at kafka.log.Log.append(Log.scala:349)
at kafka.cluster.Partition$$anonfun$10.apply(Partition.scala:443)
at kafka.cluster.Partition$$anonfun$10.apply(Partition.scala:429)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:234)
at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:240)
at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:429)
at 
kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:407)
at 
kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:393)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
at 
scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
at 
scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at 
kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:393)
at kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:330)
at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:425)
at kafka.server.KafkaApis.handle(KafkaApis.scala:78)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Map failed
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:940)
at 
kafka.log.AbstractIndex$$anonfun$resize$1.apply(AbstractIndex.scala:116)
at 
kafka.log.AbstractIndex$$anonfun$resize$1.apply(AbstractIndex.scala:106)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:234)
at kafka.log.AbstractIndex.resize(AbstractIndex.scala:106)
at 
kafka.log.AbstractIndex$$anonfun$trimToValidSize$1.apply$mcV$sp(AbstractIndex.scala:160)
at 
kafka.log.AbstractIndex$$anonfun$trimToValidSize$1.apply(AbstractIndex.scala:160)
at 
kafka.log.AbstractIndex$$anonfun$trimToValidSize$1.apply(AbstractIndex.scala:160)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:234)
at kafka.log.AbstractIndex.trimToValidSize(AbstractIndex.scala:159)
at kafka.log.Log.roll(Log.scala:771)
at kafka.log.Log.maybeRoll(Log.scala:742)
at kafka.log.Log.append(Log.scala:405)
... 22 more
Caused by: java.lang.OutOfMemoryError: Map failed
at sun.nio.ch.FileChannelImpl.map0(Native Method)
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:937)
... 34 more
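
A back-of-the-envelope estimate of the mmap pressure implied by the
configuration above (all inputs are assumptions read from the config, not
measurements): each active log segment maps its offset index and time index,
so small segments across many partitions multiply the number of memory-mapped
files, and exhausting the process map limit surfaces exactly as
java.lang.OutOfMemoryError: Map failed.

public class MmapEstimateSketch {
    public static void main(String[] args) {
        long retentionBytes = 5L * 1024 * 1024 * 1024; // 5 GB retention per partition (assumed)
        long segmentBytes   = 50L * 1024 * 1024;       // 50 MB segments (assumed)
        int  topics = 20, partitionsPerTopic = 20;     // both streams combined (assumed)
        int  mapsPerSegment = 2;                       // offset index + time index

        long segmentsPerPartition = retentionBytes / segmentBytes; // ~102
        long totalMaps = (long) topics * partitionsPerTopic
                * segmentsPerPartition * mapsPerSegment;           // ~81,600 cluster-wide
        System.out.println("Approximate concurrent mmaps: " + totalMaps);
    }
}

Raising the OS map limit (vm.max_map_count on Linux) or using larger segments
are the usual mitigations.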


Issue 2 :

stack trace :

[2017-11-02 23:55:49,602] FATAL [ReplicaFetcherThread-0-0], Disk error while 
replicating data for catalog_sales-3 (kafka.server.ReplicaFetcherThread)

Re: kafka-pr-jdk9-scala2.12 keeps failing

2017-11-02 Thread Ismael Juma
This looks to be an issue in Jenkins, not in Kafka. Apache Infra updated
Java 9 to 9.0.1 and it seems to have broken some of the Jenkins code.

Ismael

On 3 Nov 2017 1:53 am, "Ted Yu"  wrote:

> Looking at earlier runs, e.g. :
> https://builds.apache.org/job/kafka-pr-jdk9-scala2.12/2384/console
>
> FAILURE: Build failed with an exception.
>
> * What went wrong:
> Could not determine java version from '9.0.1'.
>
>
> This was the first build with 'out of range of int' exception:
>
>
> https://builds.apache.org/job/kafka-pr-jdk9-scala2.12/2389/console
>
>
> However, I haven't found the commit which was at the tip of repo at that
> time.
>
>
> On Thu, Nov 2, 2017 at 6:40 PM, Guozhang Wang  wrote:
>
> > Noticed that as well, could we track down to which git commit / version
> > upgrade caused the issue?
> >
> >
> > Guozhang
> >
> > On Thu, Nov 2, 2017 at 6:25 PM, Ted Yu  wrote:
> >
> > > Hi,
> > > I took a look at recent runs under https://builds.apache.
> > > org/job/kafka-pr-jdk9-scala2.12
> > >
> > > All the recent runs failed with:
> > >
> > > Could not update commit status of the Pull Request on GitHub.
> > > org.kohsuke.github.HttpException: Server returned HTTP response code:
> > > 201, message: 'Created' for URL:
> > > https://api.github.com/repos/apache/kafka/statuses/
> > > 3d96c6f5b2edd3c1dbea11dab003c4ac78ee141a
> > > at org.kohsuke.github.Requester.parse(Requester.java:633)
> > > at org.kohsuke.github.Requester.parse(Requester.java:594)
> > > at org.kohsuke.github.Requester._to(Requester.java:272)
> > > at org.kohsuke.github.Requester.to(Requester.java:234)
> > > at org.kohsuke.github.GHRepository.createCommitStatus(
> > > GHRepository.java:1071)
> > >
> > > ...
> > >
> > > Caused by: com.fasterxml.jackson.databind.JsonMappingException:
> > > Numeric value (4298492118) out of range of int
> > >  at [Source: {"url":"https://api.github.com/repos/apache/kafka/
> statuses/
> > > 3d96c6f5b2edd3c1dbea11dab003c4ac78ee141a","id":4298492118,"
> > > state":"pending","description":"Build
> > > started sha1 is
> > > merged.","target_url":"https://builds.apache.org/job/kafka-
> > > pr-jdk9-scala2.12/2397/","context":"JDK
> > > 9 and Scala 2.12",
> > >
> > >
> > > Should we upgrade the version for jackson ?
> > >
> > >
> > > Cheers
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>