[jira] [Created] (KAFKA-6067) how to process when all isr crashed

2017-10-17 Thread haiyangyu (JIRA)
haiyangyu created KAFKA-6067:


 Summary: how to process when all isr crashed
 Key: KAFKA-6067
 URL: https://issues.apache.org/jira/browse/KAFKA-6067
 Project: Kafka
  Issue Type: New Feature
Reporter: haiyangyu


  When all replicas in a topic partition's ISR have crashed, the partition 
becomes unavailable. Why doesn't Kafka act like HDFS, which, when a replica 
in the ISR crashes, automatically moves the replica to another live node?
  What is the original intention of this design?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSS] KIP-209 Connection String Support

2017-10-17 Thread Satish Duggana
 You may need to update the KIP's proposed changes section with the details
discussed in this thread.

>>My proposed format for the connection string would be:
>>IP1:host1,IP2:host2,...IPN:hostn;parameterName=value1;parameterName2=value2;...
parameterNameN=valueN
The format should be:
host1:port1,host2:port2,…host:portn;param-name1=param-val1,..

>>Invalid conversions would throw InvalidArgumentException (with a
description of the invalid conversion)
>>Invalid parameters would throw InvalidArgumentException (with the name of
the invalid parameter).

Both should throw IllegalArgumentException with an appropriate message.

Thanks,
Satish.

On Tue, Oct 17, 2017 at 4:46 AM, Clebert Suconic 
wrote:

> That works.
>
> On Mon, Oct 16, 2017 at 6:59 PM Ted Yu  wrote:
>
> > Can't you use IllegalArgumentException ?
> >
> > Some example in current code base:
> >
> > clients/src/main/java/org/apache/kafka/clients/Metadata.java:
> >  throw new IllegalArgumentException("Max time to wait for metadata
> updates
> > should not be < 0 milliseconds");
> >
> > On Mon, Oct 16, 2017 at 3:06 PM, Clebert Suconic <
> > clebert.suco...@gmail.com>
> > wrote:
> >
> > > I updated the wiki with the list on the proposed arguments.
> > >
> > > I also changed it to include a new Exception class that would be named
> > > InvalidParameterException (since I couldn't find an existing Exception
> > > class that I could reuse into this). (I could review the name or the
> > > exception of course.. just my current proposal)
> > >
> > > On Mon, Oct 16, 2017 at 5:55 PM, Jakub Scholz  wrote:
> > > > Hi Clebert,
> > > >
> > > > I think it would be good if this could cover not only KafkaConsumer
> and
> > > > KafkaProducer but also the AdminClient. So that all three can be
> > > configured
> > > > the same way.
> > > >
> > > > The bootstrap servers are a list - you can provide multiple bootstrap
> > > > servers. Maybe you add an example of how that will be configured. I
> > > assume
> > > > it will be
> > > > "host:port,host2:port2;parameterName=value1;parameterName2=value2"
> but
> > > it
> > > > would be great to have it documented.
> > > >
> > > > Thanks & Regards
> > > > Jakub
> > > >
> > > > On Mon, Oct 16, 2017 at 11:30 PM, Clebert Suconic <
> > > clebert.suco...@gmail.com
> > > >> wrote:
> > > >
> > > >> I would like to start a discussion about KIP-209
> > > >> (https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > >> 209+-+Connection+String+Support)
> > > >>
> > > >> This is an extension of my previous thread:
> > > >> http://mail-archives.apache.org/mod_mbox/kafka-dev/201710.
> > > >> mbox/%3cCAKF+bsoFbN13D-u20tUsP6G+aHX4BUNk=S8M4KyJxAt_
> > > >> oyv...@mail.gmail.com%3e
> > > >>
> > > >> this could make the bootstrap of a consumer or producer similar to
> > > >> what users are already used when connecting into other systems,
> being
> > > >> a simple addition to Producer and Consumer, without breaking any
> > > >> previous client usage.
> > > >>
> > > >>
> > > >> --
> > > >> Clebert Suconic
> > > >>
> > >
> > >
> > >
> > > --
> > > Clebert Suconic
> > >
> >
> --
> Clebert Suconic
>
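
A minimal sketch, assuming the corrected format above (a comma-separated
host:port list, then ';'-separated name=value parameters), of how such a
connection string might be parsed. The class and method names here are
hypothetical illustrations, not part of KIP-209:

    import java.util.Properties;

    public final class ConnectionStringParser {
        // Parses "host1:port1,host2:port2;param1=val1;param2=val2" into
        // client Properties. Illustration only, not the KIP's actual API.
        public static Properties parse(String connectionString) {
            String[] segments = connectionString.split(";");
            Properties props = new Properties();
            // The first segment is the comma-separated bootstrap server list.
            props.put("bootstrap.servers", segments[0]);
            // The remaining segments are parameterName=value pairs.
            for (int i = 1; i < segments.length; i++) {
                int eq = segments[i].indexOf('=');
                if (eq <= 0)
                    // Per this thread, an invalid parameter raises
                    // IllegalArgumentException naming the offending parameter.
                    throw new IllegalArgumentException(
                            "Invalid parameter: " + segments[i]);
                props.put(segments[i].substring(0, eq),
                          segments[i].substring(eq + 1));
            }
            return props;
        }

        public static void main(String[] args) {
            System.out.println(parse("host1:9092,host2:9092;acks=all;linger.ms=5"));
        }
    }
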


Re: [DISCUSS] KIP-209 Connection String Support

2017-10-17 Thread Tom Bentley
Hi Clebert,

The motivation section is written as more of a summary and doesn't really
give any motivation for this change. Can you explain why it would be
beneficial for Kafka to have this change? For example, if you have use
cases where the current way of instantiating a producer, consumer or admin
client is sub-optimal you should mention them.

Cheers,

Tom

On 17 October 2017 at 08:15, Satish Duggana 
wrote:

>  You may need to update KIP with the details discussed in this thread in
> proposed changes section.
>
> >>My proposed format for the connection string would be:
> >>IP1:host1,IP2:host2,...IPN:hostn;parameterName=value1;
> parameterName2=value2;...
> parameterNameN=valueN
> Format should be
> host1:port1,host2:port2,…host:portn;param-name1=param-val1,..
>
> >>Invalid conversions would throw InvalidArgumentException (with a
> description of the invalid conversion)
> >>Invalid parameters would throw InvalidArgumentException (with the name of
> the invalid parameter).
>
> Should throw IllegalArgumentException with respective message.
>
> Thanks,
> Satish.
>
> On Tue, Oct 17, 2017 at 4:46 AM, Clebert Suconic <
> clebert.suco...@gmail.com>
> wrote:
>
> > That works.
> >
> > On Mon, Oct 16, 2017 at 6:59 PM Ted Yu  wrote:
> >
> > > Can't you use IllegalArgumentException ?
> > >
> > > Some example in current code base:
> > >
> > > clients/src/main/java/org/apache/kafka/clients/Metadata.java:
> > >  throw new IllegalArgumentException("Max time to wait for metadata
> > updates
> > > should not be < 0 milliseconds");
> > >
> > > On Mon, Oct 16, 2017 at 3:06 PM, Clebert Suconic <
> > > clebert.suco...@gmail.com>
> > > wrote:
> > >
> > > > I updated the wiki with the list on the proposed arguments.
> > > >
> > > > I also changed it to include a new Exception class that would be
> named
> > > > InvalidParameterException (since I couldn't find an existing
> Exception
> > > > class that I could reuse into this). (I could review the name or the
> > > > exception of course.. just my current proposal)
> > > >
> > > > On Mon, Oct 16, 2017 at 5:55 PM, Jakub Scholz 
> wrote:
> > > > > Hi Clebert,
> > > > >
> > > > > I think it would be good if this could cover not only KafkaConsumer
> > and
> > > > > KafkaProducer but also the AdminClient. So that all three can be
> > > > configured
> > > > > the same way.
> > > > >
> > > > > The bootstrap servers are a list - you can provide multiple
> bootstrap
> > > > > servers. Maybe you add an example of how that will be configured. I
> > > > assume
> > > > > it will be
> > > > > "host:port,host2:port2;parameterName=value1;parameterName2=value2"
> > but
> > > > it
> > > > > would be great to have it documented.
> > > > >
> > > > > Thanks & Regards
> > > > > Jakub
> > > > >
> > > > > On Mon, Oct 16, 2017 at 11:30 PM, Clebert Suconic <
> > > > clebert.suco...@gmail.com
> > > > >> wrote:
> > > > >
> > > > >> I would like to start a discussion about KIP-209
> > > > >> (https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > >> 209+-+Connection+String+Support)
> > > > >>
> > > > >> This is an extension of my previous thread:
> > > > >> http://mail-archives.apache.org/mod_mbox/kafka-dev/201710.
> > > > >> mbox/%3cCAKF+bsoFbN13D-u20tUsP6G+aHX4BUNk=S8M4KyJxAt_
> > > > >> oyv...@mail.gmail.com%3e
> > > > >>
> > > > >> this could make the bootstrap of a consumer or producer similar to
> > > > >> what users are already used when connecting into other systems,
> > being
> > > > >> a simple addition to Producer and Consumer, without breaking any
> > > > >> previous client usage.
> > > > >>
> > > > >>
> > > > >> --
> > > > >> Clebert Suconic
> > > > >>
> > > >
> > > >
> > > >
> > > > --
> > > > Clebert Suconic
> > > >
> > >
> > --
> > Clebert Suconic
> >
>


[jira] [Created] (KAFKA-6068) kafka-topic.sh alter replication-factor raise broker failure

2017-10-17 Thread haiyangyu (JIRA)
haiyangyu created KAFKA-6068:


 Summary: kafka-topic.sh alter replication-factor raise broker 
failure
 Key: KAFKA-6068
 URL: https://issues.apache.org/jira/browse/KAFKA-6068
 Project: Kafka
  Issue Type: New Feature
  Components: admin
Affects Versions: 0.10.0.0
Reporter: haiyangyu
 Attachments: alter_rep.png, exception.png

1) When I alter my topic's replication factor from 2 to 3, it only updates 
the topic data in ZooKeeper.
2) Then, when I shut down my controller, my broker throws 
NotAssignedReplicaException.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3814: KAFKA-4504: update retention.bytes config descript...

2017-10-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3814


---


[jira] [Resolved] (KAFKA-4504) Details of retention.bytes property at Topic level are not clear on how they impact partition size

2017-10-17 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4504.
--
   Resolution: Fixed
 Assignee: Manikumar
Fix Version/s: 1.0.0

> Details of retention.bytes property at Topic level are not clear on how they 
> impact partition size
> --
>
> Key: KAFKA-4504
> URL: https://issues.apache.org/jira/browse/KAFKA-4504
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.10.0.1
>Reporter: Justin Manchester
>Assignee: Manikumar
> Fix For: 1.0.0
>
>
> Problem:
> Details of the retention.bytes property at the topic level are not clear on 
> how it impacts partition size.
> Business Impact:
> Users are setting retention.bytes and not seeing the desired amount of 
> stored data.
> Current Text:
> This configuration controls the maximum size a log can grow to before we will 
> discard old log segments to free up space if we are using the "delete" 
> retention policy. By default there is no size limit, only a time limit.
> Proposed change:
> This configuration controls the maximum size a log can grow to before we will 
> discard old log segments to free up space if we are using the "delete" 
> retention policy. By default there is no size limit, only a time limit. 
> Please note that the total amount of disk space used is calculated as 
> retention.bytes * the number of partitions of the given topic.
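
(As a worked example of the proposed wording: with retention.bytes=1073741824, 
i.e. 1 GiB, on a topic with 8 partitions, the topic as a whole can retain up 
to roughly 8 GiB on disk.)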



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #4079: MINOR: JavaDoc improvements for RangeAssignor

2017-10-17 Thread astubbs
GitHub user astubbs opened a pull request:

https://github.com/apache/kafka/pull/4079

MINOR: JavaDoc improvements for RangeAssignor



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/astubbs/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4079.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4079


commit 3f4a020aeba4e1573e0be4c0b6f02ffe5ab09515
Author: Antony Stubbs 
Date:   2017-10-17T10:59:32Z

MINOR: JavaDoc improvements for RangeAssignor




---


Build failed in Jenkins: kafka-trunk-jdk8 #2142

2017-10-17 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-4504; Clarify that retention.bytes is a partition level config

--
[...truncated 369.10 KB...]

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest STARTED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete STARTED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
STARTED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics STARTED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithInvalidReplication 
STARTED

kafka.i

Jenkins build is back to normal : kafka-trunk-jdk7 #2895

2017-10-17 Thread Apache Jenkins Server
See 




Re: [VOTE] 1.0.0 RC1

2017-10-17 Thread Thomas Crayford
Hi Guozhang,

We have indeed started our performance testing at Heroku for RC1. However,
we are more than happy to retest once RC2 is available, especially given
the larger amount of time to do so.

Thanks

Tom Crayford
Heroku Kafka

On Tue, Oct 17, 2017 at 2:50 AM, Ismael Juma  wrote:

> If you don't use the default Scala version, you have to set the
> SCALA_VERSION environment variable for the bin scripts to work.
>
> Ismael
>
> On 17 Oct 2017 1:30 am, "Vahid S Hashemian" 
> wrote:
>
> Hi Guozhang,
>
> I'm not sure if this should be covered by "Java 9 support" in the RC note,
> but when I try to build jars from source using Java 9 (./gradlew
> -PscalaVersion=2.12 jar) even though the build reports as succeeded, it
> doesn't seem to have been successful:
>
> $ bin/zookeeper-server-start.sh config/zookeeper.properties
> Error: Could not find or load main class
> org.apache.zookeeper.server.quorum.QuorumPeerMain
> Caused by: java.lang.ClassNotFoundException:
> org.apache.zookeeper.server.quorum.QuorumPeerMain
>
> Please advise if I'm missing something.
>
> Thanks.
> --Vahid
>
>
>
>
> From:   Guozhang Wang 
> To: "dev@kafka.apache.org" ,
> "us...@kafka.apache.org" , kafka-clients
> 
> Date:   10/13/2017 01:12 PM
> Subject:[VOTE] 1.0.0 RC1
>
>
>
> Hello Kafka users, developers and client-developers,
>
> This is the second candidate for release of Apache Kafka 1.0.0.
>
> It's worth noting that starting in this version we are using a different
> version protocol with three digits: *major.minor.bug-fix*
>
> Any and all testing is welcome, but the following areas are worth
> highlighting:
>
> 1. Client developers should verify that their clients can produce/consume
> to/from 1.0.0 brokers (ideally with compressed and uncompressed data).
> 2. Performance and stress testing. Heroku and LinkedIn have helped with
> this in the past (and issues have been found and fixed).
> 3. End users can verify that their apps work correctly with the new
> release.
>
> This is a major version release of Apache Kafka. It includes 29 new KIPs.
> See the release notes and release plan
> (https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=71764913)
> for more details. A few feature highlights:
>
> * Java 9 support with significantly faster TLS and CRC32C implementations
> (KIP)
> * JBOD improvements: disk failure only disables failed disk but not the
> broker (KIP-112/KIP-113)
> * Newly added metrics across all the modules (KIP-164, KIP-168, KIP-187,
> KIP-188, KIP-196)
> * Kafka Streams API improvements (KIP-120 / 130 / 138 / 150 / 160 / 161),
> and drop compatibility "Evolving" annotations
>
> Release notes for the 1.0.0 release:
> http://home.apache.org/~guozhang/kafka-1.0.0-rc1/RELEASE_NOTES.html
>
>
>
> *** Please download, test and vote by Tuesday, October 13, 8pm PT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
>
> * Release artifacts to be voted upon (source and binary):
> http://home.apache.org/~guozhang/kafka-1.0.0-rc1/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/

[GitHub] kafka pull request #4080: Minor: Print units for the performance consumer

2017-10-17 Thread astubbs
GitHub user astubbs opened a pull request:

https://github.com/apache/kafka/pull/4080

Minor: Print units for the performance consumer



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/astubbs/kafka perfcons

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4080.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4080


commit c5c97caedb82f6a831d6c01aa5bf814a25b22ced
Author: Antony Stubbs 
Date:   2017-10-17T11:59:38Z

MINOR: Remove duplicate field records/sec

commit 33ddfacf24499a405fb26bf4a5df47b4566b3936
Author: Antony Stubbs 
Date:   2017-10-17T13:29:01Z

Minor: Print units for the performance consumer

More closely matches the performance producer.




---


[jira] [Created] (KAFKA-6069) Streams metrics tagged incorrectly

2017-10-17 Thread Tommy Becker (JIRA)
Tommy Becker created KAFKA-6069:
---

 Summary: Streams metrics tagged incorrectly
 Key: KAFKA-6069
 URL: https://issues.apache.org/jira/browse/KAFKA-6069
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 1.0.0
Reporter: Tommy Becker
Assignee: Tommy Becker
Priority: Minor


KafkaStreams attempts to tag many (all?) of its metrics with the client id. 
But instead of retrieving the value from the config, it tags them with the 
literal "client.id", as can be seen at 
org/apache/kafka/streams/processor/internals/StreamsKafkaClient.java:114
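
A minimal sketch of the bug class described above (illustrative only; this
is not the actual StreamsKafkaClient source):

    import java.util.HashMap;
    import java.util.Map;

    public class MetricTagSketch {
        // Bug: the tag value is the literal config key "client.id" ...
        static Map<String, String> buggyTags() {
            Map<String, String> tags = new HashMap<>();
            tags.put("client-id", "client.id");
            return tags;
        }

        // ... fix: the tag value is the client id configured by the user.
        static Map<String, String> fixedTags(String configuredClientId) {
            Map<String, String> tags = new HashMap<>();
            tags.put("client-id", configuredClientId);
            return tags;
        }
    }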



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #4081: KAFKA-6069: Properly tag KafkaStreams metrics with...

2017-10-17 Thread twbecker
GitHub user twbecker opened a pull request:

https://github.com/apache/kafka/pull/4081

KAFKA-6069: Properly tag KafkaStreams metrics with the client id.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/twbecker/kafka KAFKA-6069

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4081.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4081


commit d97881df251674b4c7f3d3f9a270b4691dc262c4
Author: Tommy Becker 
Date:   2017-10-17T13:48:07Z

Properly tag KafkaStreams metrics with the client id.




---


[GitHub] kafka pull request #4082: MINOR: Adds an option to consume continuously

2017-10-17 Thread astubbs
GitHub user astubbs opened a pull request:

https://github.com/apache/kafka/pull/4082

MINOR: Adds an option to consume continuously



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/astubbs/kafka continuous-poll

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4082.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4082


commit 5d8efd0591ef3260189e777e13699b0412a94310
Author: Antony Stubbs 
Date:   2017-10-17T15:32:06Z

MINOR: Adds an option to consume continuously




---


Re: [DISCUSS] KIP-208: Add SSL support to Kafka Connect REST interface

2017-10-17 Thread Jakub Scholz
Ok, so I updated the KIP according to what we discussed. Please have a look
at the updates. Two points I'm not 100% sure about:

1) Should we mark the rest.host.name and rest.port options as deprecated?

2) I also needed to address the advertised hostname / port. With multiple
listeners it is no longer clear which one should be used. One option I saw
was to add an advertised.listeners option and some modified version of the
inter.broker.listener.name option, following what is done in Kafka brokers.
But unlike the Kafka broker, the Connect REST interface does not advertise
its address to clients. We only need to tell other workers how to connect -
and for that we need only one advertised address. So I decided to reuse the
existing rest.advertised.host.name and rest.advertised.port options and add
an additional option, rest.advertised.security.protocol, to specify whether
HTTP or HTTPS should be used. Does this make sense to you? Do you think this
is the right approach?

Thanks & Regards
Jakub
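
A sketch of what a worker configuration could look like under the approach
described above; the option names follow this thread's discussion and are
assumptions, not the final KIP-208 names:

    # One HTTP and one HTTPS REST listener, following the broker's
    # "listeners" convention discussed below.
    listeners=http://myhost:8083,https://myhost:8443
    # Advertise the HTTPS address to the other workers.
    rest.advertised.host.name=myhost
    rest.advertised.port=8443
    rest.advertised.security.protocol=HTTPS
    # Per-listener override of the worker-level SSL options.
    listener.https.ssl.keystore.location=/var/private/ssl/worker.keystore.jks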

On Mon, Oct 16, 2017 at 6:34 PM, Randall Hauch  wrote:

> The broker's configuration options are "listeners" (plural) and
> "listeners.security.protocol.map". I agree that following the pattern set
> by the broker is better, so these are really good ideas. However, at this
> point I don't see a need for the "listeners.security.procotol.map", which
> for the broker must be set if the listener name is not a security protocol.
> Can we not simply just allow "HTTP" and "HTTPS" as the names of the
> listeners (rather than the broker's "PLAINTEXT", "SSL", etc.)? If so, then
> for example "listeners" might be set to "http://myhost:8081,
> https://myhost:80";, which seems to work out nicely without needing
> listener
> names other than security protocols.
>
> I also like using the worker's SSL and SASL security configs by default if
> "https" is included in the listener, but allowing the overriding of this
> via other additional properties. Here, I'm not a big fan of
> "listeners.name.https.*" prefix, which I think is pretty verbose, but I
> could see "listener.https.*" as a prefix. This allows us to add other
> security protocols at some point, if that ever becomes necessary.
>
> +1 for continuing down this road. Nice work.
>
> On Mon, Oct 16, 2017 at 9:51 AM, Ted Yu  wrote:
>
> > +1 to this proposal.
> >
> > On Mon, Oct 16, 2017 at 7:49 AM, Jakub Scholz  wrote:
> >
> > > I was having some more thoughts about it. We can simply take over what
> > > Kafka broker implements for the listeners:
> > > - We can take over the "listener" and "listener.security.protocol.map"
> > > options to define multiple REST listeners and the security protocol
> they
> > > should use
> > > - The HTTPS interface will by default use the default configuration
> > options
> > > ("ssl.keystore.localtion" etc.). But if desired, the values can be
> > > overridden for given listener (again, as in Kafka broker "
> listener.name
> > > ..ssl.keystore.location")
> > >
> > > This should address both issues raised. But before I incorporate it
> into
> > > the KIP, I would love to get some feedback if this sounds OK. Please
> let
> > me
> > > know what do you think ...
> > >
> > > Jakub
> > >
> > > On Sun, Oct 15, 2017 at 12:23 AM, Jakub Scholz 
> wrote:
> > >
> > > > I agree, adding both HTTP and HTTPS is not complicated. I just didn't
> > saw
> > > > the use case for it. But I can add it. Would you add just support
> for a
> > > > single HTTP and single HTTPS interface? Or do you see some value even
> > in
> > > > allowing more than 2 interfaces (for example one HTTP and two HTTPS
> > with
> > > > different configuration)? It could be done similarly to how the Kafka
> > > > broker does it through the "listener" configuration parameter with
> > comma
> > > > separated list. What do you think?
> > > >
> > > > As for the "rest" prefix - if we remove it, some of the same
> > > configuration
> > > > options are already used today as the option for connecting from
> Kafka
> > > > Connect to Kafka broker. So I'm not sure we should mix them. I can
> > > > definitely imagine some cases where the client SSL configuration will
> > not
> > > > be the same as the REST HTTPS configuration. That is why I added the
> > > > prefix. If we remove the prefix, how would you deal with this?
> > > >
> > > > On Fri, Oct 13, 2017 at 6:25 PM, Randall Hauch 
> > wrote:
> > > >
> > > >> Also, do we need these properties to be preceded with `rest`? I'd
> > argue
> > > >> that we're just configuring the worker's SSL information, and that
> the
> > > >> REST
> > > >> API would just use that. If we added another non-REST API, we'd want
> > to
> > > >> use
> > > >> the same security configuration.
> > > >>
> > > >> It's not that complicated in Jetty to support both "http" and
> "https"
> > > >> simultaneously, so IMO we should add that from the beginning.
> > > >>
> > > >> On Fri, Oct 13, 2017 at 9:34 AM, Randall Hauch 
> > > wrote:
> > > >>
> > > >> > It'd be useful to specify the default value

[GitHub] kafka pull request #3874: KAFKA-5163; Support replicas movement between log ...

2017-10-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3874


---


Build failed in Jenkins: kafka-trunk-jdk8 #2143

2017-10-17 Thread Apache Jenkins Server
See 


Changes:

[junrao] KAFKA-5163; Support replicas movement between log directories (KIP-113)

--
[...truncated 461.57 KB...]

kafka.tools.ConsoleConsumerTest > 
shouldParseValidOldConsumerConfigWithAutoOffsetResetLargest PASSED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile STARTED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile PASSED

kafka.tools.ConsoleConsumerTest > groupIdsProvidedInDifferentPlacesMustMatch 
STARTED

kafka.tools.ConsoleConsumerTest > groupIdsProvidedInDifferentPlacesMustMatch 
PASSED

kafka.tools.ConsoleConsumerTest > 
shouldExitOnInvalidConfigWithAutoOffsetResetAndConflictingFromBeginningOldConsumer
 STARTED

kafka.tools.ConsoleConsumerTest > 
shouldExitOnInvalidConfigWithAutoOffsetResetAndConflictingFromBeginningOldConsumer
 PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithNumericOffset STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithNumericOffset PASSED

kafka.tools.ConsoleConsumerTest > testDefaultConsumer STARTED

kafka.tools.ConsoleConsumerTest > testDefaultConsumer PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewConsumerConfigWithAutoOffsetResetLatest STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewConsumerConfigWithAutoOffsetResetLatest PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidOldConsumerValidConfig STARTED

kafka.tools.ConsoleConsumerTest > shouldParseValidOldConsumerValidConfig PASSED

kafka.tools.ConsumerPerformanceTest > testDetailedHeaderMatchBody STARTED

kafka.tools.ConsumerPerformanceTest > testDetailedHeaderMatchBody PASSED

kafka.tools.ConsumerPerformanceTest > testNonDetailedHeaderMatchBody STARTED

kafka.tools.ConsumerPerformanceTest > testNonDetailedHeaderMatchBody PASSED

kafka.security.auth.PermissionTypeTest > testJavaConversions STARTED

kafka.security.auth.PermissionTypeTest > testJavaConversions PASSED

kafka.security.auth.PermissionTypeTest > testFromString STARTED

kafka.security.auth.PermissionTypeTest > testFromString PASSED

kafka.security.auth.ResourceTypeTest > testJavaConversions STARTED

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testJavaConversions STARTED

kafka.security.auth.OperationTest > testJavaConversions PASSED

kafka.security.auth.AclTest > testAclJsonConversion STARTED

kafka.security.auth.AclTest > testAclJsonConversion PASSED

kafka.security.auth.ZkAuthorizationTest > classMethod STARTED

kafka.security.auth.ZkAuthorizationTest > classMethod FAILED
java.lang.AssertionError: Found unexpected threads, 
allThreads=Set(kafka-scheduler-28, kafka-request-handler-6, 
ThrottledRequestReaper-Produce, ZkClient-EventThread-23173-127.0.0.1:46839, 
kafka-scheduler-29, kafka-request-handler-7, Reference Handler, 
ExpirationReaper-0-Produce, kafka-scheduler-22, ReplicaFetcherThread-0-0, 
kafka-request-handler-0, 
kafka-network-thread-1-ListenerName(PLAINTEXT)-PLAINTEXT-0, kafka-scheduler-23, 
kafka-request-handler-1, ThrottledRequestReaper-Request, 
kafka-network-thread-1-ListenerName(PLAINTEXT)-PLAINTEXT-1, kafka-scheduler-24, 
kafka-request-handler-2, daemon-broker-bouncer-SendThread(127.0.0.1:46839), 
kafka-network-thread-1-ListenerName(PLAINTEXT)-PLAINTEXT-2, Test worker, 
kafka-request-handler-3, kafka-scheduler-25, kafka-scheduler-26, 
kafka-request-handler-4, ExpirationReaper-1-Heartbeat, SensorExpiryThread, 
kafka-scheduler-20, kafka-log-cleaner-thread-0, kafka-scheduler-21, 
kafka-socket-acceptor-ListenerName(PLAINTEXT)-PLAINTEXT-47261, 
metrics-meter-tick-thread-1, TxnMarkerSenderThread-1, 
metrics-meter-tick-thread-2, Signal Dispatcher, main, ForkJoinPool-1-worker-1, 
transaction-log-manager-0, ExpirationReaper-0-DeleteRecords, 
controller-event-thread, ThrottledRequestReaper-Fetch, 
ExpirationReaper-1-Rebalance, ExpirationReaper-1-Fetch, 
ExpirationReaper-1-topic, /0:0:0:0:0:0:0:1:35296 to /0:0:0:0:0:0:0:1:33940 
workers Thread 2, /0:0:0:0:0:0:0:1:35296 to /0:0:0:0:0:0:0:1:33940 workers 
Thread 3, ReplicaFetcherThread-0-2, group-metadata-manager-0, 
LogDirFailureHandler, ExpirationReaper-1-Produce, 
daemon-broker-bouncer-EventThread, ExpirationReaper-0-Fetch, 
ExpirationReaper-1-DeleteRecords, Finalizer, kafka-scheduler-27, 
kafka-request-handler-5)

kafka.security.auth.ZkAuthorizationTest > classMethod STARTED

kafka.security.auth.ZkAuthorizationTest > classMethod FAILED
java.lang.AssertionError: Found unexpected threads, 
allThreads=Set(kafka-scheduler-28, kafka-request-handler-6, 
ThrottledRequestReaper-Produce, ZkClient-EventThread-23173-127.0.0.1:46839, 
kafka-scheduler-29, kafka-request-handler-7, Reference Handler, 
ExpirationReaper-0-Produce, kafka-scheduler-

[GitHub] kafka-site pull request #98: Added Pinterest logo to streams page

2017-10-17 Thread manjuapu
GitHub user manjuapu opened a pull request:

https://github.com/apache/kafka-site/pull/98

Added Pinterest logo to streams page

@guozhangwang Please review.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/manjuapu/kafka-site asf-site

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka-site/pull/98.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #98


commit d415111bcb15b5a4f7549ab410c85088a41d79f1
Author: Manjula K 
Date:   2017-10-10T02:42:05Z

Change apache-kafka image permission as image not appearing in twitter

commit 1d2370bb574c7c81c2b241a78e6a7749e9d4f660
Author: Manjula Kumar 
Date:   2017-10-17T16:19:58Z

Added pinterest logo to streams page

commit 8d83733f3051e83a0b3501488aa49559a700153f
Author: Manjula Kumar 
Date:   2017-10-17T16:30:49Z

Added links




---


Build failed in Jenkins: kafka-trunk-jdk9 #131

2017-10-17 Thread Apache Jenkins Server
See 


Changes:

[junrao] KAFKA-5163; Support replicas movement between log directories (KIP-113)

--
[...truncated 1.79 MB...]

org.apache.kafka.streams.kstream.internals.KGroupedTableImplTest > 
shouldNotAllowNullSubtractorOnReduce STARTED

org.apache.kafka.streams.kstream.internals.KGroupedTableImplTest > 
shouldNotAllowNullSubtractorOnReduce PASSED

org.apache.kafka.streams.kstream.internals.KGroupedTableImplTest > 
shouldReduceWithInternalStoreName STARTED

org.apache.kafka.streams.kstream.internals.KGroupedTableImplTest > 
shouldReduceWithInternalStoreName PASSED

org.apache.kafka.streams.kstream.internals.KGroupedTableImplTest > 
shouldReduceAndMaterializeResults STARTED

org.apache.kafka.streams.kstream.internals.KGroupedTableImplTest > 
shouldReduceAndMaterializeResults PASSED

org.apache.kafka.streams.kstream.internals.KGroupedTableImplTest > 
shouldThrowNullPointerOnReduceWhenMaterializedIsNull STARTED

org.apache.kafka.streams.kstream.internals.KGroupedTableImplTest > 
shouldThrowNullPointerOnReduceWhenMaterializedIsNull PASSED

org.apache.kafka.streams.kstream.internals.KGroupedTableImplTest > 
shouldNotAllowInvalidStoreNameOnReduce STARTED

org.apache.kafka.streams.kstream.internals.KGroupedTableImplTest > 
shouldNotAllowInvalidStoreNameOnReduce PASSED

org.apache.kafka.streams.kstream.internals.KGroupedTableImplTest > 
shouldThrowNullPointerOnReduceWhenSubtractorIsNull STARTED

org.apache.kafka.streams.kstream.internals.KGroupedTableImplTest > 
shouldThrowNullPointerOnReduceWhenSubtractorIsNull PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > testJoin 
STARTED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > testJoin 
PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > 
testNotSendingOldValues STARTED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > 
testNotSendingOldValues PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > 
testQueryableNotSendingOldValues STARTED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > 
testQueryableNotSendingOldValues PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > 
testSendingOldValues STARTED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > 
testSendingOldValues PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > 
testQueryableJoin STARTED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > 
testQueryableJoin PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedReduceIfReducerIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedReduceIfReducerIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedAggregateIfInitializerIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedAggregateIfInitializerIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedAggregateIfMaterializedIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedAggregateIfMaterializedIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedAggregateIfMergerIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedAggregateIfMergerIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldMaterializeCount STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldMaterializeCount PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldMaterializeWithoutSpecifyingSerdes STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldMaterializeWithoutSpecifyingSerdes PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldMaterializeAggregated STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldMaterializeAggregated PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnCountIfMaterializedIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnCountIfMaterializedIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnAggregateIfAggregatorIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shou

[VOTE] 1.0.0 RC2

2017-10-17 Thread Guozhang Wang
Hello Kafka users, developers and client-developers,

This is the third candidate for release of Apache Kafka 1.0.0. The main PRs
that got merged in after RC1 are the following:

https://github.com/apache/kafka/commit/dc6bfa553e73ffccd1e604963e076c78d8ddcd69

It's worth noting that starting in this version we are using a different
version protocol with three digits: *major.minor.bug-fix*

Any and all testing is welcome, but the following areas are worth
highlighting:

1. Client developers should verify that their clients can produce/consume
to/from 1.0.0 brokers (ideally with compressed and uncompressed data).
2. Performance and stress testing. Heroku and LinkedIn have helped with
this in the past (and issues have been found and fixed).
3. End users can verify that their apps work correctly with the new release.

This is a major version release of Apache Kafka. It includes 29 new KIPs.
See the release notes and release plan
(https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=71764913)
for more details. A few feature highlights:

* Java 9 support with significantly faster TLS and CRC32C implementations
* JBOD improvements: disk failure only disables failed disk but not the
broker (KIP-112/KIP-113)
* Controller improvements: async ZK access for faster administrative
request handling
* Newly added metrics across all the modules (KIP-164, KIP-168, KIP-187,
KIP-188, KIP-196)
* Kafka Streams API improvements (KIP-120 / 130 / 138 / 150 / 160 / 161),
and drop compatibility "Evolving" annotations

Release notes for the 1.0.0 release:
http://home.apache.org/~guozhang/kafka-1.0.0-rc2/RELEASE_NOTES.html



*** Please download, test and vote by Friday, October 20, 8pm PT

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
http://home.apache.org/~guozhang/kafka-1.0.0-rc2/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/org/apache/kafka/

* Javadoc:
http://home.apache.org/~guozhang/kafka-1.0.0-rc2/javadoc/

* Tag to be voted upon (off 1.0 branch) is the 1.0.0-rc2 tag:

https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=51d5f12e190a38547839c7d2710c97faaeaca586

* Documentation:
Note the documentation can't be pushed live due to changes that will not go
live until the release. You can manually verify by downloading
http://home.apache.org/~guozhang/kafka-1.0.0-rc2/kafka_2.11-1.0.0-site-docs.tgz

* Successful Jenkins builds for the 1.0.0 branch:
Unit/integration tests: https://builds.apache.org/job/kafka-1.0-jdk7/40/
System test: https://jenkins.confluent.io/job/system-test-kafka-1.0/6/




Thanks,
-- Guozhang


Re: [DISCUSS] KIP-209 Connection String Support

2017-10-17 Thread Clebert Suconic
I have tweaked that section a bit... although I thought the benefit was
clear. Since it seemed obvious, I thought describing the feature and the
API simplification would have been enough.


I am hoping it is clearer now.

On Tue, Oct 17, 2017 at 4:37 AM, Tom Bentley  wrote:
> Hi Clebert,
>
> The motivation section is written as more of a summary and doesn't really
> give any motivation for this change. Can you explain why it would be
> beneficial for Kafka to have this change? For example, if you have use
> cases where the current way of instantiating a producer, consumer or admin
> client is sub-optimal you should mention them.
>
> Cheers,
>
> Tom
>
> On 17 October 2017 at 08:15, Satish Duggana 
> wrote:
>
>>  You may need to update KIP with the details discussed in this thread in
>> proposed changes section.
>>
>> >>My proposed format for the connection string would be:
>> >>IP1:host1,IP2:host2,...IPN:hostn;parameterName=value1;
>> parameterName2=value2;...
>> parameterNameN=valueN
>> Format should be
>> host1:port1,host2:port2,…host:portn;param-name1=param-val1,..
>>
>> >>Invalid conversions would throw InvalidArgumentException (with a
>> description of the invalid conversion)
>> >>Invalid parameters would throw InvalidArgumentException (with the name of
>> the invalid parameter).
>>
>> Should throw IllegalArgumentException with respective message.
>>
>> Thanks,
>> Satish.
>>
>> On Tue, Oct 17, 2017 at 4:46 AM, Clebert Suconic <
>> clebert.suco...@gmail.com>
>> wrote:
>>
>> > That works.
>> >
>> > On Mon, Oct 16, 2017 at 6:59 PM Ted Yu  wrote:
>> >
>> > > Can't you use IllegalArgumentException ?
>> > >
>> > > Some example in current code base:
>> > >
>> > > clients/src/main/java/org/apache/kafka/clients/Metadata.java:
>> > >  throw new IllegalArgumentException("Max time to wait for metadata
>> > updates
>> > > should not be < 0 milliseconds");
>> > >
>> > > On Mon, Oct 16, 2017 at 3:06 PM, Clebert Suconic <
>> > > clebert.suco...@gmail.com>
>> > > wrote:
>> > >
>> > > > I updated the wiki with the list on the proposed arguments.
>> > > >
>> > > > I also changed it to include a new Exception class that would be
>> named
>> > > > InvalidParameterException (since I couldn't find an existing
>> Exception
>> > > > class that I could reuse into this). (I could review the name or the
>> > > > exception of course.. just my current proposal)
>> > > >
>> > > > On Mon, Oct 16, 2017 at 5:55 PM, Jakub Scholz 
>> wrote:
>> > > > > Hi Clebert,
>> > > > >
>> > > > > I think it would be good if this could cover not only KafkaConsumer
>> > and
>> > > > > KafkaProducer but also the AdminClient. So that all three can be
>> > > > configured
>> > > > > the same way.
>> > > > >
>> > > > > The bootstrap servers are a list - you can provide multiple
>> bootstrap
>> > > > > servers. Maybe you add an example of how that will be configured. I
>> > > > assume
>> > > > > it will be
>> > > > > "host:port,host2:port2;parameterName=value1;parameterName2=value2"
>> > but
>> > > > it
>> > > > > would be great to have it documented.
>> > > > >
>> > > > > Thanks & Regards
>> > > > > Jakub
>> > > > >
>> > > > > On Mon, Oct 16, 2017 at 11:30 PM, Clebert Suconic <
>> > > > clebert.suco...@gmail.com
>> > > > >> wrote:
>> > > > >
>> > > > >> I would like to start a discussion about KIP-209
>> > > > >> (https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>> > > > >> 209+-+Connection+String+Support)
>> > > > >>
>> > > > >> This is an extension of my previous thread:
>> > > > >> http://mail-archives.apache.org/mod_mbox/kafka-dev/201710.
>> > > > >> mbox/%3cCAKF+bsoFbN13D-u20tUsP6G+aHX4BUNk=S8M4KyJxAt_
>> > > > >> oyv...@mail.gmail.com%3e
>> > > > >>
>> > > > >> this could make the bootstrap of a consumer or producer similar to
>> > > > >> what users are already used when connecting into other systems,
>> > being
>> > > > >> a simple addition to Producer and Consumer, without breaking any
>> > > > >> previous client usage.
>> > > > >>
>> > > > >>
>> > > > >> --
>> > > > >> Clebert Suconic
>> > > > >>
>> > > >
>> > > >
>> > > >
>> > > > --
>> > > > Clebert Suconic
>> > > >
>> > >
>> > --
>> > Clebert Suconic
>> >
>>



-- 
Clebert Suconic


Re: [DISCUSS] KIP-209 Connection String Support

2017-10-17 Thread Clebert Suconic
I had these updates in already... you just changed the names in the
string, but it was pretty much the same thing, I think. I have taken your
suggestions, though.


The exceptions would be implementation details... all I wanted to make
sure of is that users would get the name of the invalid parameter as part
of the message string.

On Tue, Oct 17, 2017 at 3:15 AM, Satish Duggana
 wrote:
>  You may need to update KIP with the details discussed in this thread in
> proposed changes section.
>
>>>My proposed format for the connection string would be:
>>>IP1:host1,IP2:host2,...IPN:hostn;parameterName=value1;parameterName2=value2;...
> parameterNameN=valueN
> Format should be
> host1:port1,host2:port2,…host:portn;param-name1=param-val1,..
>
>>>Invalid conversions would throw InvalidArgumentException (with a
> description of the invalid conversion)
>>>Invalid parameters would throw InvalidArgumentException (with the name of
> the invalid parameter).
>
> Should throw IllegalArgumentException with respective message.
>
> Thanks,
> Satish.
>
> On Tue, Oct 17, 2017 at 4:46 AM, Clebert Suconic 
> wrote:
>
>> That works.
>>
>> On Mon, Oct 16, 2017 at 6:59 PM Ted Yu  wrote:
>>
>> > Can't you use IllegalArgumentException ?
>> >
>> > Some example in current code base:
>> >
>> > clients/src/main/java/org/apache/kafka/clients/Metadata.java:
>> >  throw new IllegalArgumentException("Max time to wait for metadata
>> updates
>> > should not be < 0 milliseconds");
>> >
>> > On Mon, Oct 16, 2017 at 3:06 PM, Clebert Suconic <
>> > clebert.suco...@gmail.com>
>> > wrote:
>> >
>> > > I updated the wiki with the list on the proposed arguments.
>> > >
>> > > I also changed it to include a new Exception class that would be named
>> > > InvalidParameterException (since I couldn't find an existing Exception
>> > > class that I could reuse into this). (I could review the name or the
>> > > exception of course.. just my current proposal)
>> > >
>> > > On Mon, Oct 16, 2017 at 5:55 PM, Jakub Scholz  wrote:
>> > > > Hi Clebert,
>> > > >
>> > > > I think it would be good if this could cover not only KafkaConsumer
>> and
>> > > > KafkaProducer but also the AdminClient. So that all three can be
>> > > configured
>> > > > the same way.
>> > > >
>> > > > The bootstrap servers are a list - you can provide multiple bootstrap
>> > > > servers. Maybe you add an example of how that will be configured. I
>> > > assume
>> > > > it will be
>> > > > "host:port,host2:port2;parameterName=value1;parameterName2=value2"
>> but
>> > > it
>> > > > would be great to have it documented.
>> > > >
>> > > > Thanks & Regards
>> > > > Jakub
>> > > >
>> > > > On Mon, Oct 16, 2017 at 11:30 PM, Clebert Suconic <
>> > > clebert.suco...@gmail.com
>> > > >> wrote:
>> > > >
>> > > >> I would like to start a discussion about KIP-209
>> > > >> (https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>> > > >> 209+-+Connection+String+Support)
>> > > >>
>> > > >> This is an extension of my previous thread:
>> > > >> http://mail-archives.apache.org/mod_mbox/kafka-dev/201710.
>> > > >> mbox/%3cCAKF+bsoFbN13D-u20tUsP6G+aHX4BUNk=S8M4KyJxAt_
>> > > >> oyv...@mail.gmail.com%3e
>> > > >>
>> > > >> this could make the bootstrap of a consumer or producer similar to
>> > > >> what users are already used when connecting into other systems,
>> being
>> > > >> a simple addition to Producer and Consumer, without breaking any
>> > > >> previous client usage.
>> > > >>
>> > > >>
>> > > >> --
>> > > >> Clebert Suconic
>> > > >>
>> > >
>> > >
>> > >
>> > > --
>> > > Clebert Suconic
>> > >
>> >
>> --
>> Clebert Suconic
>>



-- 
Clebert Suconic


Can you please subscribe me in this project

2017-10-17 Thread Nikhil Deore
Hi,

I want to learn about and contribute to this project.
Please subscribe me to the mailing list.

Thanks,
Nikhil


[VOTE] KIP-207:The Offsets which ListOffsetsResponse returns should monotonically increase even during a partition leader change

2017-10-17 Thread Colin McCabe
Hi all,

I'd like to start the voting process for KIP-207:The  Offsets which
ListOffsetsResponse returns should monotonically increase even during a
partition leader change.

See
https://cwiki.apache.org/confluence/display/KAFKA/KIP-207%3A+Offsets+returned+by+ListOffsetsResponse+should+be+monotonically+increasing+even+during+a+partition+leader+change
for details.

The voting process will run for at least 72 hours.

regards,
Colin


Re: [VOTE] KIP-207:The Offsets which ListOffsetsResponse returns should monotonically increase even during a partition leader change

2017-10-17 Thread Apurva Mehta
+1 (non-binding)

On Tue, Oct 17, 2017 at 11:11 AM, Colin McCabe  wrote:

> Hi all,
>
> I'd like to start the voting process for KIP-207:The  Offsets which
> ListOffsetsResponse returns should monotonically increase even during a
> partition leader change.
>
> See
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 207%3A+Offsets+returned+by+ListOffsetsResponse+should+be+
> monotonically+increasing+even+during+a+partition+leader+change
> for details.
>
> The voting process will run for at least 72 hours.
>
> regards,
> Colin
>


Re: [VOTE] KIP-207:The Offsets which ListOffsetsResponse returns should monotonically increase even during a partition leader change

2017-10-17 Thread Ted Yu
+1

On Tue, Oct 17, 2017 at 11:23 AM, Apurva Mehta  wrote:

> +1 (non-binding)
>
> On Tue, Oct 17, 2017 at 11:11 AM, Colin McCabe  wrote:
>
> > Hi all,
> >
> > I'd like to start the voting process for KIP-207:The  Offsets which
> > ListOffsetsResponse returns should monotonically increase even during a
> > partition leader change.
> >
> > See
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 207%3A+Offsets+returned+by+ListOffsetsResponse+should+be+
> > monotonically+increasing+even+during+a+partition+leader+change
> > for details.
> >
> > The voting process will run for at least 72 hours.
> >
> > regards,
> > Colin
> >
>


Re: [DISCUSS] KIP-204 : adding records deletion operation to the new Admin Client API

2017-10-17 Thread Colin McCabe
Hi Paolo,

This is a nice improvement.

I agree that the discussion of creating a DeleteTopicPolicy can wait
until later.  Perhaps we can do it in a follow-on KIP.  However, we do
need to specify what ACL permissions are needed to invoke this API. 
That should be in the JavaDoc comments as well.  Based on the previous
discussion, I am assuming that this means DELETE on the TOPIC resource? 
Can you add this to the KIP?

Right now you have the signature:
> DeleteRecordsResult deleteRecords(Map 
> partitionsAndOffsets)

Since this function is all about deleting records that come before a
certain offset, how about calling it deleteRecordsBeforeOffset?  That
way, if we come up with another way of deleting records in the future
(such as a timestamp or transaction-based way) it will not be confusing.

On Mon, Oct 16, 2017, at 20:50, Becket Qin wrote:
> Hi Paolo,
> 
> Thanks for the KIP and sorry for being late on the thread. I am wondering
> what is the KafkaFuture returned by all() call? Should it be a
> Map instead?

Good point.

cheers,
Colin
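
A sketch of how the draft API might be invoked, assuming the map keys are
TopicPartitions and the values are the offsets before which records are
deleted (the generic type parameters of the signature were stripped in this
archive). The deleteRecords call itself is commented out because its exact
signature and result type were still under discussion:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.common.TopicPartition;

    public class DeleteRecordsSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                // Delete everything before offset 42 in partition 0 of "my-topic".
                Map<TopicPartition, Long> partitionsAndOffsets = new HashMap<>();
                partitionsAndOffsets.put(new TopicPartition("my-topic", 0), 42L);
                // Draft KIP-204 call, signature not final:
                // DeleteRecordsResult result = admin.deleteRecords(partitionsAndOffsets);
                // result.all().get();
            }
        }
    }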


> 
> Thanks,
> 
> Jiangjie (Becket) QIn
> 
> On Thu, Sep 28, 2017 at 3:48 AM, Paolo Patierno 
> wrote:
> 
> > Hi,
> >
> >
> > maybe we want to start without the delete records policy for now waiting
> > for a real needs. So I'm removing it from the KIP.
> >
> > I hope for more comments on this KIP-204 so that we can start a vote on
> > Monday.
> >
> >
> > Thanks.
> >
> >
> > Paolo Patierno
> > Senior Software Engineer (IoT) @ Red Hat
> > Microsoft MVP on Azure & IoT
> > Microsoft Azure Advisor
> >
> > Twitter : @ppatierno
> > Linkedin : paolopatierno
> > Blog : DevExperience
> >
> >
> > 
> > From: Paolo Patierno 
> > Sent: Thursday, September 28, 2017 5:56 AM
> > To: dev@kafka.apache.org
> > Subject: Re: [DISCUSS] KIP-204 : adding records deletion operation to the
> > new Admin Client API
> >
> > Hi,
> >
> >
> > I have just updated the KIP-204 description with the new
> > TopicDeletionPolicy suggested by the KIP-201.
> >
> >
> > Paolo Patierno
> > Senior Software Engineer (IoT) @ Red Hat
> > Microsoft MVP on Azure & IoT
> > Microsoft Azure Advisor
> >
> > Twitter : @ppatierno
> > Linkedin : paolopatierno
> > Blog : DevExperience
> >
> >
> > 
> > From: Paolo Patierno 
> > Sent: Tuesday, September 26, 2017 4:57 PM
> > To: dev@kafka.apache.org
> > Subject: Re: [DISCUSS] KIP-204 : adding records deletion operation to the
> > new Admin Client API
> >
> > Hi Tom,
> >
> > as I said in the KIP-201 discussion I'm ok with having a unique
> > DeleteTopicPolicy but then it could be useful having more information then
> > just the topic name; partitions and offset for messages deletion could be
> > useful for a fine grained use cases.
> >
> >
> > Paolo Patierno
> > Senior Software Engineer (IoT) @ Red Hat
> > Microsoft MVP on Azure & IoT
> > Microsoft Azure Advisor
> >
> > Twitter : @ppatierno
> > Linkedin : paolopatierno
> > Blog : DevExperience
> >
> >
> > 
> > From: Tom Bentley 
> > Sent: Tuesday, September 26, 2017 4:32 PM
> > To: dev@kafka.apache.org
> > Subject: Re: [DISCUSS] KIP-204 : adding records deletion operation to the
> > new Admin Client API
> >
> > Hi Paolo,
> >
> > I guess a RecordDeletionPolicy should work at the partition level, whereas
> > the TopicDeletionPolicy should work at the topic level. But then we run
> > into a similar situation as described in the motivation for KIP-201, where
> > the administrator might have to write+configure two policies in order to
> > express their intended rules. For example, it's no good preventing people
> > from deleting topics if they can delete all the messages in those topics,
> > or vice versa.
> >
> > On that reasoning, perhaps there should be a single policy interface
> > covering topic deletion and message deletion. Alternatively, the topic
> > deletion API could also invoke the record deletion policy (before the topic
> > deletion policy I mean). But the former would be more consistent with
> > what's proposed in KIP-201.
> >
> > Wdyt? I can add this to KIP-201 if you want.
> >
> > Cheers,
> >
> > Tom
> >
> >
> >
> >
> >
> > On 26 September 2017 at 17:01, Paolo Patierno  wrote:
> >
> > > Hi Tom,
> > >
> > > I think that we could live with the current authorizer based on topic
> > > deletion (for both deleting messages and the topic as a whole), but then
> > > the RecordsDeletePolicy could be even more fine grained, giving the
> > > possibility to avoid deleting messages for specific partitions (inside
> > > the topic) and starting from a specific offset.
> > >
> > > I can think of some user solutions where 

Re: [VOTE] 1.0.0 RC1

2017-10-17 Thread Vahid S Hashemian
Thanks Ismael for the tip.
I missed it in the Readme page (
https://github.com/apache/kafka#running-a-task-on-a-particular-version-of-scala-either-211x-or-212x
)

--Vahid



From:   Ismael Juma 
To: dev@kafka.apache.org
Cc: Kafka Users 
Date:   10/16/2017 06:50 PM
Subject:Re: [VOTE] 1.0.0 RC1



If you don't use the default Scala version, you have to set the
SCALA_VERSION environment variable for the bin scripts to work.
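
For example, with a 2.12 build, something like the following should work (the
exact value has to match the Scala version your build actually used):

  SCALA_VERSION=2.12.4 bin/zookeeper-server-start.sh config/zookeeper.properties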

Ismael

On 17 Oct 2017 1:30 am, "Vahid S Hashemian" 
wrote:

Hi Guozhang,

I'm not sure if this should be covered by "Java 9 support" in the RC note,
but when I try to build jars from source using Java 9 (./gradlew
-PscalaVersion=2.12 jar), the build reports success but the result doesn't
seem to work:

$ bin/zookeeper-server-start.sh config/zookeeper.properties
Error: Could not find or load main class
org.apache.zookeeper.server.quorum.QuorumPeerMain
Caused by: java.lang.ClassNotFoundException:
org.apache.zookeeper.server.quorum.QuorumPeerMain

Please advise if I'm missing something.

Thanks.
--Vahid




From:   Guozhang Wang 
To: "dev@kafka.apache.org" ,
"us...@kafka.apache.org" , kafka-clients

Date:   10/13/2017 01:12 PM
Subject:[VOTE] 1.0.0 RC1



Hello Kafka users, developers and client-developers,

This is the second candidate for release of Apache Kafka 1.0.0.

It's worth noting that starting in this version we are using a different
versioning scheme with three digits: *major.minor.bug-fix*

Any and all testing is welcome, but the following areas are worth
highlighting:

1. Client developers should verify that their clients can produce/consume
to/from 1.0.0 brokers (ideally with compressed and uncompressed data).
2. Performance and stress testing. Heroku and LinkedIn have helped with
this in the past (and issues have been found and fixed).
3. End users can verify that their apps work correctly with the new
release.

This is a major version release of Apache Kafka. It includes 29 new KIPs.
See the release notes and release plan
(https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=71764913)
for more details. A few feature highlights:

* Java 9 support with significantly faster TLS and CRC32C implementations
(KIP)
* JBOD improvements: disk failure only disables failed disk but not the
broker (KIP-112/KIP-113)
* Newly added metrics across all the modules (KIP-164, KIP-168, KIP-187,
KIP-188, KIP-196)
* Kafka Streams API improvements (KIP-120 / 130 / 138 / 150 / 160 / 161),
and drop compatibility "Evolving" annotations

Release notes for the 1.0.0 release:
http://home.apache.org/~guozhang/kafka-1.0.0-rc1/RELEASE_NOTES.html



*** Please download, test and vote by Tuesday, October 13, 8pm PT

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS


* Release artifacts to be voted upon (source and binary):
http://home.apache.org/~guozhang/kafka-1.0.0-rc1/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/org/apache/kafka/

[GitHub] kafka-site issue #98: Added Pinterest logo to streams page

2017-10-17 Thread guozhangwang
Github user guozhangwang commented on the issue:

https://github.com/apache/kafka-site/pull/98
  
LGTM. Merged to asf-site.


---


[GitHub] kafka-site pull request #98: Added Pinterest logo to streams page

2017-10-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka-site/pull/98


---


Re: [VOTE] KIP-207: The offsets which ListOffsetsResponse returns should monotonically increase even during a partition leader change

2017-10-17 Thread Jun Rao
Hi, Colin,

Thanks for the KIP. +1. Just a minor comment. For the old client requests,
would it be better to return a LEADER_NOT_AVAILABLE error instead?

Jun

On Tue, Oct 17, 2017 at 11:11 AM, Colin McCabe  wrote:

> Hi all,
>
> > I'd like to start the voting process for KIP-207: The offsets which
> ListOffsetsResponse returns should monotonically increase even during a
> partition leader change.
>
> See
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 207%3A+Offsets+returned+by+ListOffsetsResponse+should+be+
> monotonically+increasing+even+during+a+partition+leader+change
> for details.
>
> The voting process will run for at least 72 hours.
>
> regards,
> Colin
>


[GitHub] kafka pull request #4083: MINOR: Improve a Windows quickstart instruction

2017-10-17 Thread vahidhashemian
GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/4083

MINOR: Improve a Windows quickstart instruction

The output of `wmic` can be very long and could truncate the search 
keywords in the existing command. If those keywords are truncated, no process 
is returned in the output. This patch updates the command so that the query is 
performed inside the `wmic` command itself instead of using pipes and 
`find`.
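
As a rough sketch of the two command shapes (the exact keywords used in the
quickstart may differ; "java.exe" and "kafka" here are placeholders):

  :: before: filter the (possibly truncated) wmic output with pipes
  wmic process get processid,caption,commandline | find "java.exe" | find "kafka"

  :: after: let wmic evaluate the query itself
  wmic process where "caption = 'java.exe' and commandline like '%kafka%'" get processid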

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka 
minor/improve_quickstart_for_windows_wmic

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4083.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4083


commit 9f52c1a6c29095a7e1b24b397644e24da5d51d79
Author: Vahid Hashemian 
Date:   2017-10-17T20:25:02Z

MINOR: Improve quickstart instruction for Windows OS

The output of `wmic` can be very long and could truncate the search 
keywords in the existing command. If those keywords are truncated, no process 
is returned by the existing command.
The command is updated so the query is performed inside the 
`wmic` command itself instead of using pipes and `find`.




---


[jira] [Created] (KAFKA-6070) ducker-ak: add ipaddress and enum34 dependencies to docker image

2017-10-17 Thread Colin P. McCabe (JIRA)
Colin P. McCabe created KAFKA-6070:
--

 Summary: ducker-ak: add ipaddress and enum34 dependencies to 
docker image
 Key: KAFKA-6070
 URL: https://issues.apache.org/jira/browse/KAFKA-6070
 Project: Kafka
  Issue Type: Bug
Reporter: Colin P. McCabe
Assignee: Colin P. McCabe


ducker-ak: add ipaddress and enum34 dependencies to docker image



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #4084: KAFKA-6070: add ipaddress and enum34 dependencies ...

2017-10-17 Thread cmccabe
GitHub user cmccabe opened a pull request:

https://github.com/apache/kafka/pull/4084

KAFKA-6070: add ipaddress and enum34 dependencies to docker image



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cmccabe/kafka KAFKA-6070

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4084.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4084


commit 8882bfe65e49bdae9b5b52f4fa5bdc3bc46eaa4f
Author: Colin P. Mccabe 
Date:   2017-10-17T21:55:57Z

KAFKA-6070: add ipaddress and enum34 dependencies to docker image




---


[GitHub] kafka pull request #4085: HOTFIX: poll with zero millis during restoration

2017-10-17 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/4085

HOTFIX: poll with zero millis during restoration



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka KHotfix-0110-restore-only

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4085.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4085


commit 941a567f0388b4c74d095c444165e4315ff5b9df
Author: Guozhang Wang 
Date:   2017-10-17T20:29:15Z

poll with zero millis during restoration




---


[jira] [Created] (KAFKA-6071) Use ZookeeperClient in LogManager

2017-10-17 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-6071:
--

 Summary: Use ZookeeperClient in LogManager 
 Key: KAFKA-6071
 URL: https://issues.apache.org/jira/browse/KAFKA-6071
 Project: Kafka
  Issue Type: Sub-task
  Components: core
Affects Versions: 1.1.0
Reporter: Jun Rao


We want to replace the usage of ZkUtils in LogManager with ZookeeperClient.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6072) Use ZookeeperClient in GroupCoordinator

2017-10-17 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-6072:
--

 Summary: Use ZookeeperClient in GroupCoordinator
 Key: KAFKA-6072
 URL: https://issues.apache.org/jira/browse/KAFKA-6072
 Project: Kafka
  Issue Type: Sub-task
  Components: core
Affects Versions: 1.1.0
Reporter: Jun Rao


We want to replace the usage of ZkUtils in GroupCoordinator with 
ZookeeperClient.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6073) Use ZookeeperClient in KafkaApis

2017-10-17 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-6073:
--

 Summary: Use ZookeeperClient in KafkaApis
 Key: KAFKA-6073
 URL: https://issues.apache.org/jira/browse/KAFKA-6073
 Project: Kafka
  Issue Type: Sub-task
  Components: core
Affects Versions: 1.1.0
Reporter: Jun Rao
 Fix For: 1.1.0


We want to replace the usage of ZkUtils with ZookeeperClient in KafkaApis.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6074) Use ZookeeperClient in ReplicaManager and Partition

2017-10-17 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-6074:
--

 Summary: Use ZookeeperClient in ReplicaManager and Partition
 Key: KAFKA-6074
 URL: https://issues.apache.org/jira/browse/KAFKA-6074
 Project: Kafka
  Issue Type: Sub-task
  Components: core
Affects Versions: 1.1.0
Reporter: Jun Rao
 Fix For: 1.1.0


We want to replace the usage of ZkUtils with ZookeeperClient in ReplicaManager 
and Partition.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSS] KIP-204 : adding records deletion operation to the new Admin Client API

2017-10-17 Thread Dong Lin
Hey Colin,

I have also thought about deleteRecordsBeforeOffset so that we can keep the
name consistent with the existing API in the Scala AdminClient. But then I
think it may be better to be able to specify in DeleteRecordsOptions
whether the deletion is before/after timestamp or offset. By doing this we
have one API rather than four APIs in the Java AdminClient going forward. What
do you think?
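
To sketch what I mean (the names below are purely illustrative, not a
concrete proposal):

// One entry point; the options object says how the per-partition cutoff
// values are interpreted, e.g. as offsets or as timestamps.
DeleteRecordsResult deleteRecords(Map<TopicPartition, Long> cutoffs,
                                  DeleteRecordsOptions options);

DeleteRecordsOptions options = new DeleteRecordsOptions()
        .cutoffType(DeleteRecordsOptions.CutoffType.BEFORE_OFFSET);
        // ...or BEFORE_TIMESTAMP once timestamp-based deletion is added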

Thanks,
Dong

On Tue, Oct 17, 2017 at 11:35 AM, Colin McCabe  wrote:

> Hi Paolo,
>
> This is a nice improvement.
>
> I agree that the discussion of creating a DeleteTopicPolicy can wait
> until later.  Perhaps we can do it in a follow-on KIP.  However, we do
> need to specify what ACL permissions are needed to invoke this API.
> That should be in the JavaDoc comments as well.  Based on the previous
> discussion, I am assuming that this means DELETE on the TOPIC resource?
> Can you add this to the KIP?
>
> Right now you have the signature:
> > DeleteRecordsResult deleteRecords(Map<TopicPartition, Long>
> > partitionsAndOffsets)
>
> Since this function is all about deleting records that come before a
> certain offset, how about calling it deleteRecordsBeforeOffset?  That
> way, if we come up with another way of deleting records in the future
> (such as a timestamp or transaction-based way) it will not be confusing.
>
> On Mon, Oct 16, 2017, at 20:50, Becket Qin wrote:
> > Hi Paolo,
> >
> > Thanks for the KIP and sorry for being late on the thread. I am wondering
> > what is the KafkaFuture returned by the all() call? Should it be a
> > Map instead?
>
> Good point.
>
> cheers,
> Colin
>
>
> >
> > Thanks,
> >
> > Jiangjie (Becket) Qin
> >
> > On Thu, Sep 28, 2017 at 3:48 AM, Paolo Patierno 
> > wrote:
> >
> > > Hi,
> > >
> > >
> > > maybe we want to start without the delete records policy for now,
> > > waiting for a real need. So I'm removing it from the KIP.
> > >
> > > I hope for more comments on this KIP-204 so that we can start a vote on
> > > Monday.
> > >
> > >
> > > Thanks.
> > >
> > >
> > > Paolo Patierno
> > > Senior Software Engineer (IoT) @ Red Hat
> > > Microsoft MVP on Azure & IoT
> > > Microsoft Azure Advisor
> > >
> > > Twitter : @ppatierno
> > > Linkedin : paolopatierno
> > > Blog : DevExperience
> > >
> > >
> > > 
> > > From: Paolo Patierno 
> > > Sent: Thursday, September 28, 2017 5:56 AM
> > > To: dev@kafka.apache.org
> > > Subject: Re: [DISCUSS] KIP-204 : adding records deletion operation to
> the
> > > new Admin Client API
> > >
> > > Hi,
> > >
> > >
> > > I have just updated the KIP-204 description with the new
> > > TopicDeletionPolicy suggested by the KIP-201.
> > >
> > >
> > > Paolo Patierno
> > > Senior Software Engineer (IoT) @ Red Hat
> > > Microsoft MVP on Azure & IoT
> > > Microsoft Azure Advisor
> > >
> > > Twitter : @ppatierno
> > > Linkedin : paolopatierno
> > > Blog : DevExperience
> > >
> > >
> > > 
> > > From: Paolo Patierno 
> > > Sent: Tuesday, September 26, 2017 4:57 PM
> > > To: dev@kafka.apache.org
> > > Subject: Re: [DISCUSS] KIP-204 : adding records deletion operation to
> the
> > > new Admin Client API
> > >
> > > Hi Tom,
> > >
> > > as I said in the KIP-201 discussion I'm ok with having a unique
> > > DeleteTopicPolicy, but then it could be useful to have more information
> > > than just the topic name; partitions and offsets for message deletion
> > > could be useful for fine-grained use cases.
> > >
> > >
> > > Paolo Patierno
> > > Senior Software Engineer (IoT) @ Red Hat
> > > Microsoft MVP on Azure & IoT
> > > Microsoft Azure Advisor
> > >
> > > Twitter : @ppatierno
> > > Linkedin : paolopatierno
> > > Blog : DevExperience
> > >
> > >
> > > 
> > > From: Tom Bentley 
> > > Sent: Tuesday, September 26, 2017 4:32 PM
> > > To: dev@kafka.apache.org
> > > Subject: Re: [DISCUSS] KIP-204 : adding records deletion operation to
> the
> > > new Admin Client API
> > >
> > > Hi Paolo,
> > >
> > > I guess a RecordDeletionPolicy should work at the partition level,
> whereas
> > > the TopicDeletionPolicy should work at the topic level. But then we run
> > > into a similar situation as described in the motivation for KIP-201,
> where
> > > the administrator might have to write+configure two policies in order
> to
> > > express their intended rules. For example, it's no good preventing
> people
> > > from deleting topics if they can delete all the messages in those
> topics,
> > > or vice versa.
> > >
> > > On that reasoning, perhaps there should be a single policy interface
> > > covering topic deletion and message deletion. Alternatively, the topic
> > > deletion API could also invoke the record deletion policy (before the
> > > topic deletion policy I mean). But the former would be more consistent
> > > with what's proposed in KIP-201.

[jira] [Created] (KAFKA-6075) Kafka cannot recover after an unclean shutdown on Windows

2017-10-17 Thread Vahid Hashemian (JIRA)
Vahid Hashemian created KAFKA-6075:
--

 Summary: Kafka cannot recover after an unclean shutdown on Windows
 Key: KAFKA-6075
 URL: https://issues.apache.org/jira/browse/KAFKA-6075
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.11.0.1
Reporter: Vahid Hashemian


Kafka cannot recover from an unclean broker shutdown on Windows. Steps to 
reproduce from a fresh build:
# Start zookeeper
# Start a broker
# Create a topic {{test}}
# Do an unclean shutdown of the broker (find the process id with {{wmic process where 
"caption = 'java.exe' and commandline like '%server.properties%'" get 
processid}}), then kill the process with {{taskkill /pid  /f}}
# Start the broker again

This leads to the following errors:
{code}
[2017-10-17 17:13:24,819] ERROR Error while loading log dir C:\tmp\kafka-logs 
(kafka.log.LogManager)
java.nio.file.FileSystemException: 
C:\tmp\kafka-logs\test-0\.timeindex: The process cannot 
access the file because it is being used by another process.

at 
sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
at 
sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
at 
sun.nio.fs.AbstractFileSystemProvider.deleteIfExists(AbstractFileSystemProvider.java:108)
at java.nio.file.Files.deleteIfExists(Files.java:1165)
at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:333)
at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:295)
at 
scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at 
scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at kafka.log.Log.loadSegmentFiles(Log.scala:295)
at kafka.log.Log.loadSegments(Log.scala:404)
at kafka.log.Log.<init>(Log.scala:201)
at kafka.log.Log$.apply(Log.scala:1729)
at 
kafka.log.LogManager.kafka$log$LogManager$$loadLog(LogManager.scala:221)
at 
kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$8$$anonfun$apply$16$$anonfun$apply$2.apply$mcV$sp(LogManager.scala:292)
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:61)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[2017-10-17 17:13:24,819] ERROR Error while deleting the clean shutdown file in 
dir C:\tmp\kafka-logs (kafka.server.LogDirFailureChannel)
java.nio.file.FileSystemException: 
C:\tmp\kafka-logs\test-0\.timeindex: The process cannot 
access the file because it is being used by another process.

at 
sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
at 
sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
at 
sun.nio.fs.AbstractFileSystemProvider.deleteIfExists(AbstractFileSystemProvider.java:108)
at java.nio.file.Files.deleteIfExists(Files.java:1165)
at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:333)
at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:295)
at 
scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at 
scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at kafka.log.Log.loadSegmentFiles(Log.scala:295)
at kafka.log.Log.loadSegments(Log.scala:404)
at kafka.log.Log.<init>(Log.scala:201)
at kafka.log.Log$.apply(Log.scala:1729)
at 
kafka.log.LogManager.kafka$log$LogManager$$loadLog(LogManager.scala:221)
at 
kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$8$$anonfun$apply$16$$anonfun$apply$2.apply$mcV$sp(LogManager.scala:292)
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:61)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask

[GitHub] kafka pull request #4086: HOTFIX: Normal poll with zero during restoration

2017-10-17 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/4086

HOTFIX: Normal poll with zero during restoration

Mirror of #4085 against trunk.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka KHotfix-restore-only

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4086.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4086


commit 9f4a0e9e4b88929901b26d8599b6f5fbecac
Author: Guozhang Wang 
Date:   2017-10-18T01:27:35Z

normal poll with zero during restoration




---


[GitHub] kafka pull request #3782: KAFKA-5829; Speedup broker startup after unclean s...

2017-10-17 Thread lindong28
Github user lindong28 closed the pull request at:

https://github.com/apache/kafka/pull/3782


---


Re: [VOTE] KIP-207: The offsets which ListOffsetsResponse returns should monotonically increase even during a partition leader change

2017-10-17 Thread Satish Duggana
+1 (non binding)

Thanks,
Satish.

On Wed, Oct 18, 2017 at 1:56 AM, Jun Rao  wrote:

> Hi, Colin,
>
> Thanks for the KIP. +1. Just a minor comment. For the old client requests,
> would it be better to return a LEADER_NOT_AVAILABLE error instead?
>
> Jun
>
> On Tue, Oct 17, 2017 at 11:11 AM, Colin McCabe  wrote:
>
> > Hi all,
> >
> > > I'd like to start the voting process for KIP-207: The offsets which
> > ListOffsetsResponse returns should monotonically increase even during a
> > partition leader change.
> >
> > See
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 207%3A+Offsets+returned+by+ListOffsetsResponse+should+be+
> > monotonically+increasing+even+during+a+partition+leader+change
> > for details.
> >
> > The voting process will run for at least 72 hours.
> >
> > regards,
> > Colin
> >
>


[jira] [Created] (KAFKA-6076) Using the new transactional producer API twice fails when the server runs on Windows OS

2017-10-17 Thread Orwen Xiang (JIRA)
Orwen Xiang created KAFKA-6076:
--

 Summary: Using the new transactional producer API twice fails when 
the server runs on Windows OS
 Key: KAFKA-6076
 URL: https://issues.apache.org/jira/browse/KAFKA-6076
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.11.0.1
 Environment: OS: Windows 10 64bit
Kafka:  
kafka_2.11-0.11.0.1(https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.1/kafka_2.11-0.11.0.1.tgz)
JDK: 1.8.0_144  64bit
Reporter: Orwen Xiang


The second (begin, commit) transaction cycle on the same KafkaProducer 
instance fails when the producer is connected to a Kafka server running on 
Windows 10.

The same code runs successfully when the Kafka server runs on CentOS 7.3 
64-bit with the same Kafka code base and config.

Producer code looks like:

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.errors.AuthorizationException;
import org.apache.kafka.common.errors.OutOfOrderSequenceException;
import org.apache.kafka.common.errors.ProducerFencedException;
import org.apache.kafka.common.serialization.StringSerializer;

Map<String, Object> props = new HashMap<>();
props.put("bootstrap.servers", "localhost:9092");
props.put("transactional.id", "my-transactional-id");
Producer<String, String> producer = new KafkaProducer<>(props,
        new StringSerializer(), new StringSerializer());
producer.initTransactions();
try {
    producer.beginTransaction();
    for (int i = 0; i < 100; i++)
        producer.send(new ProducerRecord<>("test-2", Integer.toString(i),
                Integer.toString(i)));
    producer.commitTransaction();
    System.out.println("sent one time done");
    // The second transaction cycle below is the one that fails on Windows.
    producer.beginTransaction();
    for (int i = 0; i < 100; i++)
        producer.send(new ProducerRecord<>("test-2", Integer.toString(i),
                Integer.toString(i)));
    producer.commitTransaction();
    System.out.println("sent two time done");
} catch (ProducerFencedException | OutOfOrderSequenceException |
        AuthorizationException e) {
    // Fatal errors: the producer must be closed.
    producer.close();
} catch (KafkaException e) {
    // Transient errors: abort the transaction and retry if desired.
    producer.abortTransaction();
}
producer.close();



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [VOTE] KIP-204 : adding records deletion operation to the new Admin Client API

2017-10-17 Thread Manikumar
+1 (non-binding)


Thanks,
Manikumar

On Tue, Oct 17, 2017 at 7:42 AM, Dong Lin  wrote:

> Thanks for the KIP. +1 (non-binding)
>
> On Wed, Oct 11, 2017 at 2:27 AM, Ted Yu  wrote:
>
> > +1
> >
> > On Mon, Oct 2, 2017 at 10:51 PM, Paolo Patierno 
> > wrote:
> >
> > > Hi all,
> > >
> > > I didn't see any further discussion around this KIP, so I'd like to
> start
> > > the vote for it.
> > >
> > > Just for reference : https://cwiki.apache.org/
> > > confluence/display/KAFKA/KIP-204+%3A+adding+records+
> > > deletion+operation+to+the+new+Admin+Client+API
> > >
> > >
> > > Thanks,
> > >
> > > Paolo Patierno
> > > Senior Software Engineer (IoT) @ Red Hat
> > > Microsoft MVP on Azure & IoT
> > > Microsoft Azure Advisor
> > >
> > > Twitter : @ppatierno
> > > Linkedin : paolopatierno
> > > Blog : DevExperience
> > >
> >
>


[jira] [Created] (KAFKA-6077) Let SimpleConsumer support Kerberos authentication

2017-10-17 Thread huangjianan (JIRA)
huangjianan created KAFKA-6077:
--

 Summary: Let SimpleConsumer support Kerberos authentication
 Key: KAFKA-6077
 URL: https://issues.apache.org/jira/browse/KAFKA-6077
 Project: Kafka
  Issue Type: Improvement
  Components: consumer
Affects Versions: 0.10.0.0
Reporter: huangjianan


SimpleConsumer cannot be used in a Kerberos-secured Kafka environment.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)