Re: [DISCUSS] KIP-43: Kafka SASL enhancements

2016-02-29 Thread Rajini Sivaram
I have added some more detail to the KIP based on the discussion in the
last KIP meeting to simplify support for multiple mechanisms. Have also
changed the property names to reflect this.

Also updated the PR in https://issues.apache.org/jira/browse/KAFKA-3149 to
reflect the KIP.

Any feedback is appreciated.


On Tue, Feb 23, 2016 at 9:36 PM, Rajini Sivaram <
rajinisiva...@googlemail.com> wrote:

> I have updated the KIP based on the discussion in the KIP meeting today.
>
> Comments and feedback are welcome.
>
> On Wed, Feb 3, 2016 at 7:20 PM, Rajini Sivaram <
> rajinisiva...@googlemail.com> wrote:
>
>> Hi Harsha,
>>
>> Thank you for the review. Can you clarify - I think you are saying that
>> the client should send its mechanism over the wire to the server. Is that
>> correct? The exchange is slightly different in the KIP (the PR matches the
>> KIP) from the one you described to enable interoperability with 0.9.0.0.
>>
>>
>> On Wed, Feb 3, 2016 at 1:56 PM, Harsha  wrote:
>>
>>> Rajini,
>>>I looked at the PR you have. I think it's better with your
>>>earlier approach rather than extending the protocol.
>>> What I was thinking initially is: the broker has a config option of, say,
>>> sasl.mechanism = GSSAPI, PLAIN
>>> and the client can have a similar config of sasl.mechanism=PLAIN. The client
>>> can send its SASL mechanism before the handshake starts, and if the
>>> broker accepts that particular mechanism then it can go ahead with the
>>> handshake; otherwise it returns an error saying that the mechanism is not
>>> allowed.
>>>
>>> Thanks,
>>> Harsha
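
For illustration, the exchange described above might look roughly like the
following configuration (a sketch only: the sasl.mechanism property name and
the comma-separated broker-side list are taken from this discussion, not from
the final KIP, so treat them as hypothetical):

# broker: the mechanisms the broker is willing to accept
sasl.mechanism=GSSAPI,PLAIN

# client: the single mechanism this client uses; the client sends it to the
# broker before the SASL handshake, and the broker returns an error if the
# mechanism is not in its list
sasl.mechanism=PLAIN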
>>>
>>> On Wed, Feb 3, 2016, at 04:58 AM, Rajini Sivaram wrote:
>>> > A slightly different approach for supporting different SASL mechanisms
>>> > within a broker is to allow the same "*security protocol*" to be used
>>> on
>>> > different ports with different configuration options. An advantage of
>>> > this
>>> > approach is that it extends the configurability of not just SASL, but
>>> any
>>> > protocol. For instance, it would enable the use of SSL with mutual
>>> client
>>> > authentication on one port or different certificate chains on another.
>>> > And
>>> > it avoids the need for SASL mechanism negotiation.
>>> >
>>> > Kafka would have the same "*security protocols" *defined as today, but
>>> > with
>>> > (a single) configurable SASL mechanism. To have different
>>> configurations
>>> > of
>>> > a protocol within a broker, users can define new protocol names which
>>> are
>>> > configured versions of existing protocols, perhaps using just
>>> > configuration
>>> > entries and no additional code.
>>> >
>>> > For example:
>>> >
>>> > A single mechanism broker would be configured as:
>>> >
>>> > listeners=SASL_SSL://:9092
>>> > sasl.mechanism=GSSAPI
>>> > sasl.kerberos.class.name=kafka
>>> > ...
>>> >
>>> >
>>> > And a multi-mechanism broker would be configured as:
>>> >
>>> > listeners=gssapi://:9092,plain://:9093,custom://:9094
>>> > gssapi.security.protocol=SASL_SSL
>>> > gssapi.sasl.mechanism=GSSAPI
>>> > gssapi.sasl.kerberos.class.name=kafka
>>> > ...
>>> > plain.security.protocol=SASL_SSL
>>> > plain.sasl.mechanism=PLAIN
>>> > ..
>>> > custom.security.protocol=SASL_PLAINTEXT
>>> > custom.sasl.mechanism=CUSTOM
>>> > custom.sasl.callback.handler.class=example.CustomCallbackHandler
>>> >
>>> >
>>> >
>>> > This is still a big change because it affects the currently fixed
>>> > enumeration of security protocol definitions, but one that is perhaps
>>> > more
>>> > flexible than defining every new SASL mechanism as a new security
>>> > protocol.
>>> >
>>> > Thoughts?
>>> >
>>> >
>>> > On Tue, Feb 2, 2016 at 12:20 PM, Rajini Sivaram <
>>> > rajinisiva...@googlemail.com> wrote:
>>> >
>>> > > As Ismael has said, we do not have a requirement to support multiple
>>> > > protocols in a broker. But I agree with Jun's observation that some
>>> > > companies might want to support a different authentication mechanism
>>> for
>>> > > internal users or partners. For instance, we do use two different
>>> > > authentication mechanisms, it just so happens that we are able to use
>>> > > certificate-based authentication for internal users, and hence don't
>>> > > require multiple SASL mechanisms in a broker.
>>> > >
>>> > > As Tao has pointed out, mechanism negotiation is a common usage
>>> pattern.
>>> > > Many existing protocols that support SASL do already use this
>>> pattern. AMQP
>>> > > (
>>> > >
>>> http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-security-v1.0-os.html#type-sasl-mechanisms
>>> ),
>>> > > which, as a messaging protocol, is perhaps closer to Kafka in use cases
>>> than
>>> > > Zookeeper, is an example. Other examples where the client negotiates
>>> or
>>> > > sends SASL mechanism to server include ACAP that is used as an
>>> example in
>>> > > the SASL RFCs, POP3, LDAP, SMTP etc. This is not to say that Kafka
>>> > > shouldn't use a different type of mechanism selection that fits
>>> better with
>>> > > the existing Kafka design. Just that negotiati

[jira] [Commented] (KAFKA-3296) All consumer reads hang indefinitely

2016-02-29 Thread Simon Cooper (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15171739#comment-15171739
 ] 

Simon Cooper commented on KAFKA-3296:
-

This is done right after Kafka is installed on the VM - we install the 
binaries, start up the broker, and create some topics. From that point, 
consumers don't work.

We've put debug logging on the server, so the next time this happens, we'll 
have broker debug logs.

> All consumer reads hang indefinitely
> 
>
> Key: KAFKA-3296
> URL: https://issues.apache.org/jira/browse/KAFKA-3296
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0, 0.9.0.1
>Reporter: Simon Cooper
>Priority: Critical
>
> We've got several integration tests that bring up systems on VMs for testing. 
> We've recently upgraded to 0.9, and very occasionally we see an 
> issue where every consumer that tries to read from the broker hangs, spamming 
> the following in its logs:
> {code}2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.NetworkClient 
> [pool-10-thread-1] | Sending metadata request 
> ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=21905,client_id=consumer-1},
>  body={topics=[Topic1]}), isInitiatedByNetworkClient, 
> createdTimeMs=1456489537856, sendTimeMs=0) to node 1
> 2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.Metadata [pool-10-thread-1] | 
> Updated cluster metadata version 10954 to Cluster(nodes = [Node(1, 
> server.internal, 9092)], partitions = [Partition(topic = Topic1, partition = 
> 0, leader = 1, replicas = [1,], isr = [1,]])
> 2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Issuing group metadata request to broker 1
> 2016-02-26T12:25:37,857 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Group metadata response 
> ClientResponse(receivedTimeMs=1456489537857, disconnected=false, 
> request=ClientRequest(expectResponse=true, 
> callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@28edb273,
>  
> request=RequestSend(header={api_key=10,api_version=0,correlation_id=21906,client_id=consumer-1},
>  body={group_id=}), createdTimeMs=1456489537856, sendTimeMs=1456489537856), 
> responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}})
> 2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.NetworkClient [pool-10-thread-1] | 
> Sending metadata request ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=21907,client_id=consumer-1},
>  body={topics=[Topic1]}), isInitiatedByNetworkClient, 
> createdTimeMs=1456489537956, sendTimeMs=0) to node 1
> 2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.Metadata [pool-10-thread-1] | 
> Updated cluster metadata version 10955 to Cluster(nodes = [Node(1, 
> server.internal, 9092)], partitions = [Partition(topic = Topic1, partition = 
> 0, leader = 1, replicas = [1,], isr = [1,]])
> 2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Issuing group metadata request to broker 1
> 2016-02-26T12:25:37,957 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Group metadata response 
> ClientResponse(receivedTimeMs=1456489537957, disconnected=false, 
> request=ClientRequest(expectResponse=true, 
> callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@40cee8cc,
>  
> request=RequestSend(header={api_key=10,api_version=0,correlation_id=21908,client_id=consumer-1},
>  body={group_id=}), createdTimeMs=1456489537956, sendTimeMs=1456489537956), 
> responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}})
> 2016-02-26T12:25:38,056 | DEBUG | o.a.k.c.NetworkClient [pool-10-thread-1] | 
> Sending metadata request ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=21909,client_id=consumer-1},
>  body={topics=[Topic1]}), isInitiatedByNetworkClient, 
> createdTimeMs=1456489538056, sendTimeMs=0) to node 1
> 2016-02-26T12:25:38,056 | DEBUG | o.a.k.c.Metadata [pool-10-thread-1] | 
> Updated cluster metadata version 10956 to Cluster(nodes = [Node(1, 
> server.internal, 9092)], partitions = [Partition(topic = Topic1, partition = 
> 0, leader = 1, replicas = [1,], isr = [1,]])
> 2016-02-26T12:25:38,056 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Issuing group metadata request to broker 1
> 2016-02-26T12:25:38,057 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Group metadata response 
> ClientResponse(receivedTimeMs=1456489538057, disconnected=false, 
> request=ClientRequest(expectResponse=true, 
> callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$Reques

[jira] [Commented] (KAFKA-3218) Kafka-0.9.0.0 does not work as OSGi module

2016-02-29 Thread Robert Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15171787#comment-15171787
 ] 

Robert Lowe commented on KAFKA-3218:


Using the PR https://github.com/apache/kafka/pull/888 works in the Karaf 
container with Activator.java code updated to include 
{code:java}
properties.put("key.serializer", 
"org.apache.kafka.common.serialization.StringSerializer");
properties.put("value.serializer", 
"org.apache.kafka.common.serialization.StringSerializer");
properties.put("partitioner.class", 
"org.apache.kafka.clients.producer.internals.DefaultPartitioner");
properties.put("bootstrap.servers","10.0.25.96:13072");
KafkaProducer kafkaProducer = new KafkaProducer(properties);
{code}

> Kafka-0.9.0.0 does not work as OSGi module
> --
>
> Key: KAFKA-3218
> URL: https://issues.apache.org/jira/browse/KAFKA-3218
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
> Environment: Apache Felix OSGi container
> jdk_1.8.0_60
>Reporter: Joe O'Connor
>Assignee: Rajini Sivaram
> Attachments: ContextClassLoaderBug.tar.gz
>
>
> KAFKA-2295 changed all Class.forName() calls to use 
> currentThread().getContextClassLoader() instead of the default "classloader 
> that loaded the current class". 
> OSGi loads each module's classes using a separate classloader so this is now 
> broken.
> Steps to reproduce: 
> # install the kafka-clients servicemix OSGi module 0.9.0.0_1
> # attempt to initialize the Kafka producer client from Java code 
> Expected results: 
> - call to "new KafkaProducer()" succeeds
> Actual results: 
> - "new KafkaProducer()" throws ConfigException:
> {quote}Suppressed: java.lang.Exception: Error starting bundle54: 
> Activator start error in bundle com.openet.testcase.ContextClassLoaderBug 
> [54].
> at 
> org.apache.karaf.bundle.command.BundlesCommand.doExecute(BundlesCommand.java:66)
> ... 12 more
> Caused by: org.osgi.framework.BundleException: Activator start error 
> in bundle com.openet.testcase.ContextClassLoaderBug [54].
> at 
> org.apache.felix.framework.Felix.activateBundle(Felix.java:2276)
> at 
> org.apache.felix.framework.Felix.startBundle(Felix.java:2144)
> at 
> org.apache.felix.framework.BundleImpl.start(BundleImpl.java:998)
> at 
> org.apache.karaf.bundle.command.Start.executeOnBundle(Start.java:38)
> at 
> org.apache.karaf.bundle.command.BundlesCommand.doExecute(BundlesCommand.java:64)
> ... 12 more
> Caused by: java.lang.ExceptionInInitializerError
> at 
> org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:156)
> at com.openet.testcase.Activator.start(Activator.java:16)
> at 
> org.apache.felix.framework.util.SecureAction.startActivator(SecureAction.java:697)
> at 
> org.apache.felix.framework.Felix.activateBundle(Felix.java:2226)
> ... 16 more
> *Caused by: org.apache.kafka.common.config.ConfigException: Invalid 
> value org.apache.kafka.clients.producer.internals.DefaultPartitioner for 
> configuration partitioner.class: Class* 
> *org.apache.kafka.clients.producer.internals.DefaultPartitioner could not be 
> found.*
> at 
> org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:255)
> at 
> org.apache.kafka.common.config.ConfigDef.define(ConfigDef.java:78)
> at 
> org.apache.kafka.common.config.ConfigDef.define(ConfigDef.java:94)
> at 
> org.apache.kafka.clients.producer.ProducerConfig.<clinit>(ProducerConfig.java:206)
> {quote}
> Workaround is to call "currentThread().setContextClassLoader(null)" before 
> initializing the kafka producer.
> Possible fix is to catch ClassNotFoundException at ConfigDef.java:247 and 
> retry the Class.forName() call with the default classloader. However with 
> this fix there is still a problem at AbstractConfig.java:206,  where the 
> newInstance() call succeeds but "instanceof" is false because the classes 
> were loaded by different classloaders.
> Testcase attached, see README.txt for instructions.
> See also SM-2743
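
A minimal sketch of the workaround described above, written as it might appear at the start of an Activator-style method (illustrative only; the serializer settings mirror the earlier comment, and the setContextClassLoader(null) call is the actual workaround):
{code:java}
// Workaround sketch: clear the thread context classloader so the client does not
// resolve classes through the OSGi bundle's context classloader, then restore it
// once the producer has been constructed.
ClassLoader original = Thread.currentThread().getContextClassLoader();
Thread.currentThread().setContextClassLoader(null);
try {
    Properties properties = new Properties();
    properties.put("bootstrap.servers", "localhost:9092");
    properties.put("key.serializer",
        "org.apache.kafka.common.serialization.StringSerializer");
    properties.put("value.serializer",
        "org.apache.kafka.common.serialization.StringSerializer");
    KafkaProducer<String, String> kafkaProducer = new KafkaProducer<>(properties);
    // ... use kafkaProducer ...
    kafkaProducer.close();
} finally {
    Thread.currentThread().setContextClassLoader(original);
}
{code}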





[jira] [Comment Edited] (KAFKA-3218) Kafka-0.9.0.0 does not work as OSGi module

2016-02-29 Thread Robert Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15171787#comment-15171787
 ] 

Robert Lowe edited comment on KAFKA-3218 at 2/29/16 11:57 AM:
--

Using the PR https://github.com/apache/kafka/pull/888 works in the Karaf 
container with Activator.java code updated to include 
{code:java}
properties.put("key.serializer", 
"org.apache.kafka.common.serialization.StringSerializer");
properties.put("value.serializer", 
"org.apache.kafka.common.serialization.StringSerializer");
properties.put("partitioner.class", 
"org.apache.kafka.clients.producer.internals.DefaultPartitioner");
properties.put("bootstrap.servers","");
KafkaProducer kafkaProducer = new KafkaProducer(properties);
{code}


was (Author: lowerobert):
Using the PR https://github.com/apache/kafka/pull/888 works in the Karaf 
container with Activator.java code updated to include 
{code:java}
properties.put("key.serializer", 
"org.apache.kafka.common.serialization.StringSerializer");
properties.put("value.serializer", 
"org.apache.kafka.common.serialization.StringSerializer");
properties.put("partitioner.class", 
"org.apache.kafka.clients.producer.internals.DefaultPartitioner");
properties.put("bootstrap.servers","10.0.25.96:13072");
KafkaProducer kafkaProducer = new KafkaProducer(properties);
{code}

> Kafka-0.9.0.0 does not work as OSGi module
> --
>
> Key: KAFKA-3218
> URL: https://issues.apache.org/jira/browse/KAFKA-3218
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
> Environment: Apache Felix OSGi container
> jdk_1.8.0_60
>Reporter: Joe O'Connor
>Assignee: Rajini Sivaram
> Attachments: ContextClassLoaderBug.tar.gz
>
>
> KAFKA-2295 changed all Class.forName() calls to use 
> currentThread().getContextClassLoader() instead of the default "classloader 
> that loaded the current class". 
> OSGi loads each module's classes using a separate classloader so this is now 
> broken.
> Steps to reproduce: 
> # install the kafka-clients servicemix OSGi module 0.9.0.0_1
> # attempt to initialize the Kafka producer client from Java code 
> Expected results: 
> - call to "new KafkaProducer()" succeeds
> Actual results: 
> - "new KafkaProducer()" throws ConfigException:
> {quote}Suppressed: java.lang.Exception: Error starting bundle54: 
> Activator start error in bundle com.openet.testcase.ContextClassLoaderBug 
> [54].
> at 
> org.apache.karaf.bundle.command.BundlesCommand.doExecute(BundlesCommand.java:66)
> ... 12 more
> Caused by: org.osgi.framework.BundleException: Activator start error 
> in bundle com.openet.testcase.ContextClassLoaderBug [54].
> at 
> org.apache.felix.framework.Felix.activateBundle(Felix.java:2276)
> at 
> org.apache.felix.framework.Felix.startBundle(Felix.java:2144)
> at 
> org.apache.felix.framework.BundleImpl.start(BundleImpl.java:998)
> at 
> org.apache.karaf.bundle.command.Start.executeOnBundle(Start.java:38)
> at 
> org.apache.karaf.bundle.command.BundlesCommand.doExecute(BundlesCommand.java:64)
> ... 12 more
> Caused by: java.lang.ExceptionInInitializerError
> at 
> org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:156)
> at com.openet.testcase.Activator.start(Activator.java:16)
> at 
> org.apache.felix.framework.util.SecureAction.startActivator(SecureAction.java:697)
> at 
> org.apache.felix.framework.Felix.activateBundle(Felix.java:2226)
> ... 16 more
> *Caused by: org.apache.kafka.common.config.ConfigException: Invalid 
> value org.apache.kafka.clients.producer.internals.DefaultPartitioner for 
> configuration partitioner.class: Class* 
> *org.apache.kafka.clients.producer.internals.DefaultPartitioner could not be 
> found.*
> at 
> org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:255)
> at 
> org.apache.kafka.common.config.ConfigDef.define(ConfigDef.java:78)
> at 
> org.apache.kafka.common.config.ConfigDef.define(ConfigDef.java:94)
> at 
> org.apache.kafka.clients.producer.ProducerConfig.<clinit>(ProducerConfig.java:206)
> {quote}
> Workaround is to call "currentThread().setContextClassLoader(null)" before 
> initializing the kafka producer.
> Possible fix is to catch ClassNotFoundException at ConfigDef.java:247 and 
> retry the Class.forName() call with the default classloader. However with 
> this fix there is still a problem at AbstractConfig.java:206,  where the 
> newInstance() call succeeds but "instanceof" is false because the classes 
> were loaded by different classl

[jira] [Comment Edited] (KAFKA-3218) Kafka-0.9.0.0 does not work as OSGi module

2016-02-29 Thread Robert Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15171787#comment-15171787
 ] 

Robert Lowe edited comment on KAFKA-3218 at 2/29/16 11:58 AM:
--

Using the PR https://github.com/apache/kafka/pull/888 works in the Karaf 
container with Activator.java code updated to include 
{code:java}
properties.put("key.serializer", 
"org.apache.kafka.common.serialization.StringSerializer");
properties.put("value.serializer", 
"org.apache.kafka.common.serialization.StringSerializer");
properties.put("partitioner.class", 
"org.apache.kafka.clients.producer.internals.DefaultPartitioner");
properties.put("bootstrap.servers","localhost:12345");
KafkaProducer kafkaProducer = new KafkaProducer(properties);
{code}


was (Author: lowerobert):
Using the PR https://github.com/apache/kafka/pull/888 works in the Karaf 
container with Activator.java code updated to include 
{code:java}
properties.put("key.serializer", 
"org.apache.kafka.common.serialization.StringSerializer");
properties.put("value.serializer", 
"org.apache.kafka.common.serialization.StringSerializer");
properties.put("partitioner.class", 
"org.apache.kafka.clients.producer.internals.DefaultPartitioner");
properties.put("bootstrap.servers","");
KafkaProducer kafkaProducer = new KafkaProducer(properties);
{code}

> Kafka-0.9.0.0 does not work as OSGi module
> --
>
> Key: KAFKA-3218
> URL: https://issues.apache.org/jira/browse/KAFKA-3218
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
> Environment: Apache Felix OSGi container
> jdk_1.8.0_60
>Reporter: Joe O'Connor
>Assignee: Rajini Sivaram
> Attachments: ContextClassLoaderBug.tar.gz
>
>
> KAFKA-2295 changed all Class.forName() calls to use 
> currentThread().getContextClassLoader() instead of the default "classloader 
> that loaded the current class". 
> OSGi loads each module's classes using a separate classloader so this is now 
> broken.
> Steps to reproduce: 
> # install the kafka-clients servicemix OSGi module 0.9.0.0_1
> # attempt to initialize the Kafka producer client from Java code 
> Expected results: 
> - call to "new KafkaProducer()" succeeds
> Actual results: 
> - "new KafkaProducer()" throws ConfigException:
> {quote}Suppressed: java.lang.Exception: Error starting bundle54: 
> Activator start error in bundle com.openet.testcase.ContextClassLoaderBug 
> [54].
> at 
> org.apache.karaf.bundle.command.BundlesCommand.doExecute(BundlesCommand.java:66)
> ... 12 more
> Caused by: org.osgi.framework.BundleException: Activator start error 
> in bundle com.openet.testcase.ContextClassLoaderBug [54].
> at 
> org.apache.felix.framework.Felix.activateBundle(Felix.java:2276)
> at 
> org.apache.felix.framework.Felix.startBundle(Felix.java:2144)
> at 
> org.apache.felix.framework.BundleImpl.start(BundleImpl.java:998)
> at 
> org.apache.karaf.bundle.command.Start.executeOnBundle(Start.java:38)
> at 
> org.apache.karaf.bundle.command.BundlesCommand.doExecute(BundlesCommand.java:64)
> ... 12 more
> Caused by: java.lang.ExceptionInInitializerError
> at 
> org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:156)
> at com.openet.testcase.Activator.start(Activator.java:16)
> at 
> org.apache.felix.framework.util.SecureAction.startActivator(SecureAction.java:697)
> at 
> org.apache.felix.framework.Felix.activateBundle(Felix.java:2226)
> ... 16 more
> *Caused by: org.apache.kafka.common.config.ConfigException: Invalid 
> value org.apache.kafka.clients.producer.internals.DefaultPartitioner for 
> configuration partitioner.class: Class* 
> *org.apache.kafka.clients.producer.internals.DefaultPartitioner could not be 
> found.*
> at 
> org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:255)
> at 
> org.apache.kafka.common.config.ConfigDef.define(ConfigDef.java:78)
> at 
> org.apache.kafka.common.config.ConfigDef.define(ConfigDef.java:94)
> at 
> org.apache.kafka.clients.producer.ProducerConfig.<clinit>(ProducerConfig.java:206)
> {quote}
> Workaround is to call "currentThread().setContextClassLoader(null)" before 
> initializing the kafka producer.
> Possible fix is to catch ClassNotFoundException at ConfigDef.java:247 and 
> retry the Class.forName() call with the default classloader. However with 
> this fix there is still a problem at AbstractConfig.java:206,  where the 
> newInstance() call succeeds but "instanceof" is false because the classes 
> were loaded by different classlo

[jira] [Updated] (KAFKA-3296) All consumer reads hang indefinitely

2016-02-29 Thread Simon Cooper (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Cooper updated KAFKA-3296:

Attachment: kafkalogs.zip

These are debug logs from the broker when it's in this broken state.

> All consumer reads hang indefinitely
> 
>
> Key: KAFKA-3296
> URL: https://issues.apache.org/jira/browse/KAFKA-3296
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0, 0.9.0.1
>Reporter: Simon Cooper
>Priority: Critical
> Attachments: kafkalogs.zip
>
>
> We've got several integration tests that bring up systems on VMs for testing. 
> We've recently upgraded to 0.9, and very occasionally we see an 
> issue where every consumer that tries to read from the broker hangs, spamming 
> the following in its logs:
> {code}2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.NetworkClient 
> [pool-10-thread-1] | Sending metadata request 
> ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=21905,client_id=consumer-1},
>  body={topics=[Topic1]}), isInitiatedByNetworkClient, 
> createdTimeMs=1456489537856, sendTimeMs=0) to node 1
> 2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.Metadata [pool-10-thread-1] | 
> Updated cluster metadata version 10954 to Cluster(nodes = [Node(1, 
> server.internal, 9092)], partitions = [Partition(topic = Topic1, partition = 
> 0, leader = 1, replicas = [1,], isr = [1,]])
> 2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Issuing group metadata request to broker 1
> 2016-02-26T12:25:37,857 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Group metadata response 
> ClientResponse(receivedTimeMs=1456489537857, disconnected=false, 
> request=ClientRequest(expectResponse=true, 
> callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@28edb273,
>  
> request=RequestSend(header={api_key=10,api_version=0,correlation_id=21906,client_id=consumer-1},
>  body={group_id=}), createdTimeMs=1456489537856, sendTimeMs=1456489537856), 
> responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}})
> 2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.NetworkClient [pool-10-thread-1] | 
> Sending metadata request ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=21907,client_id=consumer-1},
>  body={topics=[Topic1]}), isInitiatedByNetworkClient, 
> createdTimeMs=1456489537956, sendTimeMs=0) to node 1
> 2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.Metadata [pool-10-thread-1] | 
> Updated cluster metadata version 10955 to Cluster(nodes = [Node(1, 
> server.internal, 9092)], partitions = [Partition(topic = Topic1, partition = 
> 0, leader = 1, replicas = [1,], isr = [1,]])
> 2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Issuing group metadata request to broker 1
> 2016-02-26T12:25:37,957 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Group metadata response 
> ClientResponse(receivedTimeMs=1456489537957, disconnected=false, 
> request=ClientRequest(expectResponse=true, 
> callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@40cee8cc,
>  
> request=RequestSend(header={api_key=10,api_version=0,correlation_id=21908,client_id=consumer-1},
>  body={group_id=}), createdTimeMs=1456489537956, sendTimeMs=1456489537956), 
> responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}})
> 2016-02-26T12:25:38,056 | DEBUG | o.a.k.c.NetworkClient [pool-10-thread-1] | 
> Sending metadata request ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=21909,client_id=consumer-1},
>  body={topics=[Topic1]}), isInitiatedByNetworkClient, 
> createdTimeMs=1456489538056, sendTimeMs=0) to node 1
> 2016-02-26T12:25:38,056 | DEBUG | o.a.k.c.Metadata [pool-10-thread-1] | 
> Updated cluster metadata version 10956 to Cluster(nodes = [Node(1, 
> server.internal, 9092)], partitions = [Partition(topic = Topic1, partition = 
> 0, leader = 1, replicas = [1,], isr = [1,]])
> 2016-02-26T12:25:38,056 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Issuing group metadata request to broker 1
> 2016-02-26T12:25:38,057 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Group metadata response 
> ClientResponse(receivedTimeMs=1456489538057, disconnected=false, 
> request=ClientRequest(expectResponse=true, 
> callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@439e25fb,
>  
> request=RequestSend(header={api_key=10,api_version=0,correlation_id=21910,client_id=consumer-1},
>  body={group_id=}), createdTimeMs=1456489538056, 

Re: [DISCUSS] KIP-43: Kafka SASL enhancements

2016-02-29 Thread Harsha
Rajini,
  Thanks for the changes to the KIP. It looks good to me. I
  think we can move to voting.
Thanks,
Harsha

On Mon, Feb 29, 2016, at 12:43 AM, Rajini Sivaram wrote:
> I have added some more detail to the KIP based on the discussion in the
> last KIP meeting to simplify support for multiple mechanisms. Have also
> changed the property names to reflect this.
> 
> Also updated the PR in https://issues.apache.org/jira/browse/KAFKA-3149
> to
> reflect the KIP.
> 
> Any feedback is appreciated.
> 
> 
> On Tue, Feb 23, 2016 at 9:36 PM, Rajini Sivaram <
> rajinisiva...@googlemail.com> wrote:
> 
> > I have updated the KIP based on the discussion in the KIP meeting today.
> >
> > Comments and feedback are welcome.
> >
> > On Wed, Feb 3, 2016 at 7:20 PM, Rajini Sivaram <
> > rajinisiva...@googlemail.com> wrote:
> >
> >> Hi Harsha,
> >>
> >> Thank you for the review. Can you clarify - I think you are saying that
> >> the client should send its mechanism over the wire to the server. Is that
> >> correct? The exchange is slightly different in the KIP (the PR matches the
> >> KIP) from the one you described to enable interoperability with 0.9.0.0.
> >>
> >>
> >> On Wed, Feb 3, 2016 at 1:56 PM, Harsha  wrote:
> >>
> >>> Rajini,
> >>>I looked at the PR you have. I think it's better with your
> >>>earlier approach rather than extending the protocol.
> >>> What I was thinking initially is: the broker has a config option of, say,
> >>> sasl.mechanism = GSSAPI, PLAIN
> >>> and the client can have a similar config of sasl.mechanism=PLAIN. The client
> >>> can send its SASL mechanism before the handshake starts, and if the
> >>> broker accepts that particular mechanism then it can go ahead with the
> >>> handshake; otherwise it returns an error saying that the mechanism is not
> >>> allowed.
> >>>
> >>> Thanks,
> >>> Harsha
> >>>
> >>> On Wed, Feb 3, 2016, at 04:58 AM, Rajini Sivaram wrote:
> >>> > A slightly different approach for supporting different SASL mechanisms
> >>> > within a broker is to allow the same "*security protocol*" to be used
> >>> on
> >>> > different ports with different configuration options. An advantage of
> >>> > this
> >>> > approach is that it extends the configurability of not just SASL, but
> >>> any
> >>> > protocol. For instance, it would enable the use of SSL with mutual
> >>> client
> >>> > authentication on one port or different certificate chains on another.
> >>> > And
> >>> > it avoids the need for SASL mechanism negotiation.
> >>> >
> >>> > Kafka would have the same "*security protocols" *defined as today, but
> >>> > with
> >>> > (a single) configurable SASL mechanism. To have different
> >>> configurations
> >>> > of
> >>> > a protocol within a broker, users can define new protocol names which
> >>> are
> >>> > configured versions of existing protocols, perhaps using just
> >>> > configuration
> >>> > entries and no additional code.
> >>> >
> >>> > For example:
> >>> >
> >>> > A single mechanism broker would be configured as:
> >>> >
> >>> > listeners=SASL_SSL://:9092
> >>> > sasl.mechanism=GSSAPI
> >>> > sasl.kerberos.class.name=kafka
> >>> > ...
> >>> >
> >>> >
> >>> > And a multi-mechanism broker would be configured as:
> >>> >
> >>> > listeners=gssapi://:9092,plain://:9093,custom://:9094
> >>> > gssapi.security.protocol=SASL_SSL
> >>> > gssapi.sasl.mechanism=GSSAPI
> >>> > gssapi.sasl.kerberos.class.name=kafka
> >>> > ...
> >>> > plain.security.protocol=SASL_SSL
> >>> > plain.sasl.mechanism=PLAIN
> >>> > ..
> >>> > custom.security.protocol=SASL_PLAINTEXT
> >>> > custom.sasl.mechanism=CUSTOM
> >>> > custom.sasl.callback.handler.class=example.CustomCallbackHandler
> >>> >
> >>> >
> >>> >
> >>> > This is still a big change because it affects the currently fixed
> >>> > enumeration of security protocol definitions, but one that is perhaps
> >>> > more
> >>> > flexible than defining every new SASL mechanism as a new security
> >>> > protocol.
> >>> >
> >>> > Thoughts?
> >>> >
> >>> >
> >>> > On Tue, Feb 2, 2016 at 12:20 PM, Rajini Sivaram <
> >>> > rajinisiva...@googlemail.com> wrote:
> >>> >
> >>> > > As Ismael has said, we do not have a requirement to support multiple
> >>> > > protocols in a broker. But I agree with Jun's observation that some
> >>> > > companies might want to support a different authentication mechanism
> >>> for
> >>> > > internal users or partners. For instance, we do use two different
> >>> > > authentication mechanisms, it just so happens that we are able to use
> >>> > > certificate-based authentication for internal users, and hence don't
> >>> > > require multiple SASL mechanisms in a broker.
> >>> > >
> >>> > > As Tao has pointed out, mechanism negotiation is a common usage
> >>> pattern.
> >>> > > Many existing protocols that support SASL do already use this
> >>> pattern. AMQP
> >>> > > (
> >>> > >
> >>> http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-security-v1.0-os.html#type-sasl-mechanisms
> >>> ),
> >>> > > which, as a me

Re: [DISCUSS] KIP-49: Fair Partition Assignment Strategy

2016-02-29 Thread Olson,Andrew
Thanks for the feedback. I have added a concrete example to the document that I 
think illustrates the benefit relatively well.

The observation about scaling the workload of individual consumers is certainly 
valid. I had not really considered this. Our primary concern is being able to 
gradually roll out consumption configuration changes in a minimally disruptive 
fashion, including load-balancing. If the round robin strategy can be enhanced 
to adequately handle that use case, we would be happy. Is there a Jira open for 
the "flaw" that you mentioned? 




On 2/26/16, 7:22 PM, "Joel Koshy"  wrote:

>Hi Andrew,
>
>Thanks for the wiki. Just a couple of comments:
>
>   - The disruptive config change issue that you mentioned is pretty much a
>   non-issue in the new consumer due to central assignment.
>   - Optional: but it may be helpful to add a concrete example.
>   - More of an orthogonal observation than a comment: with heavily skewed
>   subscriptions fairness is sort of moot. i.e., people would generally scale
>   up or down subscription counts with the express purpose of
>   reducing/increasing load on those instances.
>   - WRT roundrobin we later realized a significant flaw in the way we lay
>   out partitions: we originally wanted to randomize the partition layout to
>   reduce the likelihood of most partitions of the same topic from ending up
>   on a given consumer which is important if you have a few very large topics.
>   Unfortunately we used hashCode - which does a splendid job of clumping
>   partitions from the same topic together :( We can probably just "fix" that
>   in the new consumer's roundrobin assignor.
>
>Thanks,
>
>Joel
>
>
>On Fri, Feb 26, 2016 at 2:32 PM, Olson,Andrew  wrote:
>
>> Here is a proposal for a new partition assignment strategy,
>>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-49+-+Fair+Partition+Assignment+Strategy
>>
>> This KIP corresponds to these two pending pull requests,
>> https://github.com/apache/kafka/pull/146
>> https://github.com/apache/kafka/pull/979
>>
>> thanks,
>> Andrew
>>
>>
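
As a side note on the hashCode observation above: a small self-contained
illustration (topic names made up) of why ordering topic-partitions by the hash
code of their string form keeps partitions of the same topic adjacent instead of
spreading them out:

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class HashCodeClumpingDemo {
    public static void main(String[] args) {
        List<String> partitions = new ArrayList<>();
        for (String topic : new String[] {"orders", "clicks"}) {
            for (int p = 0; p < 5; p++) {
                partitions.add(topic + "-" + p);
            }
        }
        // Strings that differ only in the trailing partition number have nearly
        // identical hash codes, so sorting by hashCode keeps each topic's
        // partitions clumped together rather than interleaving the topics.
        partitions.sort(Comparator.comparingInt(String::hashCode));
        System.out.println(partitions);
    }
}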


[GitHub] kafka pull request: KAFKA-3291: DumpLogSegment tool should also pr...

2016-02-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/975




[jira] [Resolved] (KAFKA-3291) DumpLogSegment tool should also provide an option to only verify index sanity.

2016-02-29 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani resolved KAFKA-3291.
---
   Resolution: Fixed
Fix Version/s: 0.10.0.0

Issue resolved by pull request 975
[https://github.com/apache/kafka/pull/975]

> DumpLogSegment tool should also provide an option to only verify index sanity.
> --
>
> Key: KAFKA-3291
> URL: https://issues.apache.org/jira/browse/KAFKA-3291
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Parth Brahmbhatt
>Assignee: Parth Brahmbhatt
> Fix For: 0.10.0.0
>
>
> DumpLogSegment tool should call index.sanityCheck function as part of index 
> sanity check as that function determines if an index will be rebuilt on 
> restart or not. This is a cheap check as it only checks the file size and can 
> help in scenarios where a customer is trying to figure out which index files 
> will be rebuilt on startup, which directly affects the broker bootstrap time. 
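
For context, a sketch of how an index-only check might be invoked from the command line once the option exists (the --files option is the tool's existing way to select files; the --index-sanity-check flag name below is illustrative, see the linked pull request for the actual option name):
{noformat}
bin/kafka-run-class.sh kafka.tools.DumpLogSegments \
  --files /var/kafka-logs/my-topic-0/00000000000000000000.index \
  --index-sanity-check
{noformat}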





[jira] [Commented] (KAFKA-3291) DumpLogSegment tool should also provide an option to only verify index sanity.

2016-02-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15172169#comment-15172169
 ] 

ASF GitHub Bot commented on KAFKA-3291:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/975


> DumpLogSegment tool should also provide an option to only verify index sanity.
> --
>
> Key: KAFKA-3291
> URL: https://issues.apache.org/jira/browse/KAFKA-3291
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Parth Brahmbhatt
>Assignee: Parth Brahmbhatt
> Fix For: 0.10.0.0
>
>
> DumpLogSegment tool should call index.sanityCheck function as part of index 
> sanity check as that function determines if an index will be rebuilt on 
> restart or not. This is a cheap check as it only checks the file size and can 
> help in scenarios where a customer is trying to figure out which index files 
> will be rebuilt on startup, which directly affects the broker bootstrap time. 





[jira] [Updated] (KAFKA-3300) Calculate the initial/max size of offset index files and reduce the memory footprint for memory mapped index files.

2016-02-29 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-3300:

Status: Patch Available  (was: Open)

> Calculate the initial/max size of offset index files and reduce the memory 
> footprint for memory mapped index files.
> ---
>
> Key: KAFKA-3300
> URL: https://issues.apache.org/jira/browse/KAFKA-3300
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.1
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.10.0.0
>
>
> Currently the initial/max size of the offset index file is configured by 
> {{log.index.max.bytes}}. This will be the offset index file size for the active 
> log segment until it rolls. 
> Theoretically, we can calculate the upper bound of offset index size using 
> the following formula:
> {noformat}
> log.segment.bytes / index.interval.bytes * 8
> {noformat}
> With the default settings, the bytes needed for an offset index are 1GB / 4K * 
> 8 = 2MB, while the default log.index.max.bytes is 10MB.
> This means we are over-allocating at least 8MB on disk and mapping it to 
> memory.
> We can probably do the following:
> 1. When creating a new offset index, calculate the size using the above 
> formula,
> 2. If the result in (1) is greater than log.index.max.bytes, we allocate 
> log.index.max.bytes instead.
> This should be able to significantly save memory if a broker has a lot of 
> partitions on it.
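
For reference, a quick check of the arithmetic above using the default broker settings (the values are hard-coded here purely for illustration):
{code:java}
// Upper bound on offset index size = (segment bytes / index interval) * 8 bytes per entry
long segmentBytes       = 1024L * 1024 * 1024; // log.segment.bytes default (1 GB)
long indexIntervalBytes = 4096L;               // index.interval.bytes default (4 KB)
long bytesPerEntry      = 8L;                  // size of one offset index entry

long maxNeeded = segmentBytes / indexIntervalBytes * bytesPerEntry; // 2,097,152 bytes = 2 MB
long allocated = 10L * 1024 * 1024;                                 // log.index.max.bytes default (10 MB)
long wasted    = allocated - maxNeeded;                             // roughly 8 MB per active segment
{code}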





[jira] [Created] (KAFKA-3302) Pass kerberos keytab and principal as part of client config

2016-02-29 Thread Sriharsha Chintalapani (JIRA)
Sriharsha Chintalapani created KAFKA-3302:
-

 Summary: Pass kerberos keytab and principal as part of client 
config 
 Key: KAFKA-3302
 URL: https://issues.apache.org/jira/browse/KAFKA-3302
 Project: Kafka
  Issue Type: Improvement
  Components: security
Reporter: Sriharsha Chintalapani
Assignee: Sriharsha Chintalapani
 Fix For: 0.10.0.0








Re: [DISCUSS] KIP-43: Kafka SASL enhancements

2016-02-29 Thread Rajini Sivaram
Harsha,

Thank you for the review. I will wait another day to see if there is more
feedback and then start a voting thread.

Rajini

On Mon, Feb 29, 2016 at 2:51 PM, Harsha  wrote:

> Rajini,
>   Thanks for the changes to the KIP. It looks good to me. I
>   think we can move to voting.
> Thanks,
> Harsha
>
> On Mon, Feb 29, 2016, at 12:43 AM, Rajini Sivaram wrote:
> > I have added some more detail to the KIP based on the discussion in the
> > last KIP meeting to simplify support for multiple mechanisms. Have also
> > changed the property names to reflect this.
> >
> > Also updated the PR in https://issues.apache.org/jira/browse/KAFKA-3149
> > to
> > reflect the KIP.
> >
> > Any feedback is appreciated.
> >
> >
> > On Tue, Feb 23, 2016 at 9:36 PM, Rajini Sivaram <
> > rajinisiva...@googlemail.com> wrote:
> >
> > > I have updated the KIP based on the discussion in the KIP meeting
> today.
> > >
> > > Comments and feedback are welcome.
> > >
> > > On Wed, Feb 3, 2016 at 7:20 PM, Rajini Sivaram <
> > > rajinisiva...@googlemail.com> wrote:
> > >
> > >> Hi Harsha,
> > >>
> > >> Thank you for the review. Can you clarify - I think you are saying
> that
> > >> the client should send its mechanism over the wire to the server. Is
> that
> > >> correct? The exchange is slightly different in the KIP (the PR
> matches the
> > >> KIP) from the one you described to enable interoperability with
> 0.9.0.0.
> > >>
> > >>
> > >> On Wed, Feb 3, 2016 at 1:56 PM, Harsha  wrote:
> > >>
> > >>> Rajini,
> > >>>I looked at the PR you have. I think its better with your
> > >>>earlier approach rather than extending the protocol.
> > >>> What I was thinking initially is, Broker has a config option of say
> > >>> sasl.mechanism = GSSAPI, PLAIN
> > >>> and the client can have similar config of sasl.mechanism=PLAIN.
> Client
> > >>> can send its sasl mechanism before the handshake starts and if the
> > >>> broker accepts that particular mechanism than it can go ahead with
> > >>> handshake otherwise return a error saying that the mechanism not
> > >>> allowed.
> > >>>
> > >>> Thanks,
> > >>> Harsha
> > >>>
> > >>> On Wed, Feb 3, 2016, at 04:58 AM, Rajini Sivaram wrote:
> > >>> > A slightly different approach for supporting different SASL
> mechanisms
> > >>> > within a broker is to allow the same "*security protocol*" to be
> used
> > >>> on
> > >>> > different ports with different configuration options. An advantage
> of
> > >>> > this
> > >>> > approach is that it extends the configurability of not just SASL,
> but
> > >>> any
> > >>> > protocol. For instance, it would enable the use of SSL with mutual
> > >>> client
> > >>> > authentication on one port or different certificate chains on
> another.
> > >>> > And
> > >>> > it avoids the need for SASL mechanism negotiation.
> > >>> >
> > >>> > Kafka would have the same "*security protocols" *defined as today,
> but
> > >>> > with
> > >>> > (a single) configurable SASL mechanism. To have different
> > >>> configurations
> > >>> > of
> > >>> > a protocol within a broker, users can define new protocol names
> which
> > >>> are
> > >>> > configured versions of existing protocols, perhaps using just
> > >>> > configuration
> > >>> > entries and no additional code.
> > >>> >
> > >>> > For example:
> > >>> >
> > >>> > A single mechanism broker would be configured as:
> > >>> >
> > >>> > listeners=SASL_SSL://:9092
> > >>> > sasl.mechanism=GSSAPI
> > >>> > sasl.kerberos.class.name=kafka
> > >>> > ...
> > >>> >
> > >>> >
> > >>> > And a multi-mechanism broker would be configured as:
> > >>> >
> > >>> > listeners=gssapi://:9092,plain://:9093,custom://:9094
> > >>> > gssapi.security.protocol=SASL_SSL
> > >>> > gssapi.sasl.mechanism=GSSAPI
> > >>> > gssapi.sasl.kerberos.class.name=kafka
> > >>> > ...
> > >>> > plain.security.protocol=SASL_SSL
> > >>> > plain.sasl.mechanism=PLAIN
> > >>> > ..
> > >>> > custom.security.protocol=SASL_PLAINTEXT
> > >>> > custom.sasl.mechanism=CUSTOM
> > >>> > custom.sasl.callback.handler.class=example.CustomCallbackHandler
> > >>> >
> > >>> >
> > >>> >
> > >>> > This is still a big change because it affects the currently fixed
> > >>> > enumeration of security protocol definitions, but one that is
> perhaps
> > >>> > more
> > >>> > flexible than defining every new SASL mechanism as a new security
> > >>> > protocol.
> > >>> >
> > >>> > Thoughts?
> > >>> >
> > >>> >
> > >>> > On Tue, Feb 2, 2016 at 12:20 PM, Rajini Sivaram <
> > >>> > rajinisiva...@googlemail.com> wrote:
> > >>> >
> > >>> > > As Ismael has said, we do not have a requirement to support
> multiple
> > >>> > > protocols in a broker. But I agree with Jun's observation that
> some
> > >>> > > companies might want to support a different authentication
> mechanism
> > >>> for
> > >>> > > internal users or partners. For instance, we do use two different
> > >>> > > authentication mechanisms, it just so happens that we are able
> to use
> > >>> > > certificate-based a

Build failed in Jenkins: kafka-trunk-jdk7 #1072

2016-02-29 Thread Apache Jenkins Server
See 

Changes:

[harsha] KAFKA-3291; DumpLogSegment tool should also provide an option to 
only…

--
[...truncated 5617 lines...]

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > readTaskState 
PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > putTaskState 
PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putConnectorStateNonRetriableFailure PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putConnectorStateShouldOverride PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putConnectorStateRetriableFailure PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putSafeOverridesValueSetBySameWorker PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
readConnectorState PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putSafeConnectorIgnoresStaleStatus PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putConnectorState PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
putAndGetConnectorStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
putAndGetTaskStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteTaskStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteConnectorStatus PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullKeyFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testNoOffsetsToFlush 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testFlushFailureReplacesOffsets PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testAlreadyFlushing 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelBeforeAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelAfterAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testWriteFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullValueFlush PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testSaveRestore 
PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testListConnectors PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testListConnectorsNotLeader PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testListConnectorsNotSynced PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnector PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorNotLeader PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorExists PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testDeleteConnector PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testDeleteConnectorNotLeader PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testDeleteConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnector PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorConfig PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorConfigConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorTaskConfigs PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorTaskConfigsConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorTaskConfigs PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorTaskConfigsConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testJoinAssignment PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRebalance PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testHaltCleansUpWorker PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorAlreadyExists PASSED

org.apache.kafka.connect.runtime.distributed.Distrib

[jira] [Commented] (KAFKA-3296) All consumer reads hang indefinitely

2016-02-29 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15172416#comment-15172416
 ] 

Jason Gustafson commented on KAFKA-3296:


[~thecoop1984] Thanks for the logs. Is there only one broker in the cluster? 
Also, is there any chance you could enable DEBUG logging for the controller? 
What I see is the controller resigning about two minutes after startup and then 
there's nothing else in the controller logs. That means either that another 
broker took over as the controller, or somehow we've missed an expected event 
for the broker to become controller again. I know there have been several JIRAs 
tracking failures to receive some Zookeeper events, so we might be hitting one 
of those cases. Can you describe how you run Zookeeper in this test? Anyway, 
the controller being down explains why the consumer cannot find the group 
coordinator.
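
For reference, controller DEBUG logging can normally be enabled through the broker's log4j configuration; a sketch assuming the stock config/log4j.properties layout (the appender names are the ones defined in that file):
{noformat}
# config/log4j.properties on the broker
log4j.logger.kafka.controller=DEBUG, controllerAppender
log4j.additivity.kafka.controller=false

# the state change log is often useful alongside the controller log
log4j.logger.state.change.logger=DEBUG, stateChangeAppender
log4j.additivity.state.change.logger=false
{noformat}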

> All consumer reads hang indefinitely
> 
>
> Key: KAFKA-3296
> URL: https://issues.apache.org/jira/browse/KAFKA-3296
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0, 0.9.0.1
>Reporter: Simon Cooper
>Priority: Critical
> Attachments: kafkalogs.zip
>
>
> We've got several integration tests that bring up systems on VMs for testing. 
> We've recently upgraded to 0.9, and very occasionally we see an 
> issue where every consumer that tries to read from the broker hangs, spamming 
> the following in its logs:
> {code}2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.NetworkClient 
> [pool-10-thread-1] | Sending metadata request 
> ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=21905,client_id=consumer-1},
>  body={topics=[Topic1]}), isInitiatedByNetworkClient, 
> createdTimeMs=1456489537856, sendTimeMs=0) to node 1
> 2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.Metadata [pool-10-thread-1] | 
> Updated cluster metadata version 10954 to Cluster(nodes = [Node(1, 
> server.internal, 9092)], partitions = [Partition(topic = Topic1, partition = 
> 0, leader = 1, replicas = [1,], isr = [1,]])
> 2016-02-26T12:25:37,856 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Issuing group metadata request to broker 1
> 2016-02-26T12:25:37,857 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Group metadata response 
> ClientResponse(receivedTimeMs=1456489537857, disconnected=false, 
> request=ClientRequest(expectResponse=true, 
> callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@28edb273,
>  
> request=RequestSend(header={api_key=10,api_version=0,correlation_id=21906,client_id=consumer-1},
>  body={group_id=}), createdTimeMs=1456489537856, sendTimeMs=1456489537856), 
> responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}})
> 2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.NetworkClient [pool-10-thread-1] | 
> Sending metadata request ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=21907,client_id=consumer-1},
>  body={topics=[Topic1]}), isInitiatedByNetworkClient, 
> createdTimeMs=1456489537956, sendTimeMs=0) to node 1
> 2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.Metadata [pool-10-thread-1] | 
> Updated cluster metadata version 10955 to Cluster(nodes = [Node(1, 
> server.internal, 9092)], partitions = [Partition(topic = Topic1, partition = 
> 0, leader = 1, replicas = [1,], isr = [1,]])
> 2016-02-26T12:25:37,956 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Issuing group metadata request to broker 1
> 2016-02-26T12:25:37,957 | DEBUG | o.a.k.c.c.i.AbstractCoordinator 
> [pool-10-thread-1] | Group metadata response 
> ClientResponse(receivedTimeMs=1456489537957, disconnected=false, 
> request=ClientRequest(expectResponse=true, 
> callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@40cee8cc,
>  
> request=RequestSend(header={api_key=10,api_version=0,correlation_id=21908,client_id=consumer-1},
>  body={group_id=}), createdTimeMs=1456489537956, sendTimeMs=1456489537956), 
> responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}})
> 2016-02-26T12:25:38,056 | DEBUG | o.a.k.c.NetworkClient [pool-10-thread-1] | 
> Sending metadata request ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=21909,client_id=consumer-1},
>  body={topics=[Topic1]}), isInitiatedByNetworkClient, 
> createdTimeMs=1456489538056, sendTimeMs=0) to node 1
> 2016-02-26T12:25:38,056 | DEBUG | o.a.k.c.Metadata [pool-10-thread-1] | 
> Updated cluster metadata version 10956 to Cluster(nodes = [Node(1, 
> server.internal, 9092)], partitions = [Partition(topic = Topic1, partition = 

Build failed in Jenkins: kafka-trunk-jdk8 #399

2016-02-29 Thread Apache Jenkins Server
See 

Changes:

[harsha] KAFKA-3291; DumpLogSegment tool should also provide an option to only…

--
[...truncated 5025 lines...]

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
testStreamPartitioner PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterNonPersistentStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testLockStateDirectory PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testGetStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testClose PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testChangeLogOffsets PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterPersistentStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testNoTopic PASSED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > 
testStorePartitions PASSED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > testUpdateKTable 
PASSED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > 
testUpdateNonPersistentStore PASSED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > testUpdate PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMaybeCommit 
PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testPartitionAssignmentChange PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMaybeClean 
PASSED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUnite PASSED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUniteMany 
PASSED

org.apache.kafka.streams.processor.internals.MinTimestampTrackerTest > 
testTracking PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingMultiplexingTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingStatefulTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingSimpleTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testTopologyMetadata PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithStandbyReplicas PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithNewTasks PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithStates PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignBasic PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testOnAssignment PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testSubscription PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testProcessOrder 
PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testPauseResume 
PASSED

org.apache.kafka.streams.processor.internals.RecordQueueTest > testTimeTracking 
PASSED

org.apache.kafka.streams.processor.internals.assignment.AssginmentInfoTest > 
testEncodeDecode PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testStickiness PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testAssignWithStandby PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testAssignWithoutStandby PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
testEncodeDecode PASSED

org.apache.kafka.streams.processor.internals.PartitionGroupTest > 
testTimeTracking PASSED

org.apache.kafka.streams.processor.internals.PunctuationQueueTest > 
testPunctuationInterval PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSameName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSelfParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithSelfParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSink PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testTopicGroups PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testBuild PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSource PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSourceWithSameName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithSameName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSourceWithSameTopic PASSED

org.apache.kafka.streams.processor.TopologyBuild

[GitHub] kafka pull request: HOTFIX: Missing streams jar in release

2016-02-29 Thread enothereska
GitHub user enothereska opened a pull request:

https://github.com/apache/kafka/pull/984

HOTFIX: Missing streams jar in release

Observation: when running "gradlew releaseTarGz", the streams jar was not 
included in the tarball. This adds a line to include it. @ijuma @guozhangwang, 
could you please review? Thanks.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/enothereska/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/984.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #984


commit 2e8162a88d55e7a7934dd3b9a7188846f94ec383
Author: Eno Thereska 
Date:   2016-02-29T20:09:34Z

Missing streams jar in release






[GitHub] kafka pull request: KAFKA-3133: Add putIfAbsent function to KeyVal...

2016-02-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/912




[jira] [Resolved] (KAFKA-3133) Add putIfAbsent function to KeyValueStore

2016-02-29 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-3133.
--
   Resolution: Fixed
Fix Version/s: 0.10.0.0

Issue resolved by pull request 912
[https://github.com/apache/kafka/pull/912]

> Add putIfAbsent function to KeyValueStore
> -
>
> Key: KAFKA-3133
> URL: https://issues.apache.org/jira/browse/KAFKA-3133
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
> Fix For: 0.10.0.0
>
>
> Since a local store will only be accessed by a single stream thread, there are 
> no atomicity concerns, and hence this API should be easy to add.



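For illustration, a minimal sketch of the putIfAbsent described above, assuming a 
simplified KeyValueStore-like interface (the actual Kafka Streams interface and its 
generics differ):

{code}
// A minimal sketch, assuming a simplified KeyValueStore-like interface; the
// real Kafka Streams interface has more methods and different generics.
interface SimpleKeyValueStore<K, V> {

    V get(K key);

    void put(K key, V value);

    // Safe without synchronization because a local store is only ever
    // accessed by the single stream thread that owns the task.
    default V putIfAbsent(K key, V value) {
        V existing = get(key);
        if (existing == null) {
            put(key, value);
        }
        return existing;
    }
}
{code}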


[jira] [Commented] (KAFKA-3133) Add putIfAbsent function to KeyValueStore

2016-02-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15172571#comment-15172571
 ] 

ASF GitHub Bot commented on KAFKA-3133:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/912


> Add putIfAbsent function to KeyValueStore
> -
>
> Key: KAFKA-3133
> URL: https://issues.apache.org/jira/browse/KAFKA-3133
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
> Fix For: 0.10.0.0
>
>
> Since a local store will only be accessed by a single stream thread, there are 
> no atomicity concerns, and hence this API should be easy to add.





[GitHub] kafka pull request: HOTFIX: Missing streams jar in release

2016-02-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/984




[jira] [Commented] (KAFKA-3229) Root statestore is not registered with ProcessorStateManager, inner state store is registered instead

2016-02-29 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15172592#comment-15172592
 ] 

Guozhang Wang commented on KAFKA-3229:
--

[~tom_dearman] I feel adding one more parameter to the register() call may be a 
bit much for users of the processor topology API; could we instead separate 
the logic of adding the store to the stores map from registering the inner store 
(including restoring, etc.) so that we can still keep the register API simple, 
with one parameter?

> Root statestore is not registered with ProcessorStateManager, inner state 
> store is registered instead
> -
>
> Key: KAFKA-3229
> URL: https://issues.apache.org/jira/browse/KAFKA-3229
> Project: Kafka
>  Issue Type: Bug
>  Components: kafka streams
>Affects Versions: 0.10.0.0
> Environment: MacOS El Capitan
>Reporter: Tom Dearman
> Fix For: 0.10.0.0
>
>
> When the hierarchy of nested StateStores is created, init is called on the 
> root store, but parent StateStores such as MeteredKeyValueStore just call 
> the contained StateStore until a store such as MemoryStore calls 
> ProcessorContext.register; it passes 'this' to the method, so only that 
> inner state store (MemoryStore in this case) is registered with 
> ProcessorStateManager. As state is added to the store, none of the parent 
> stores' code will be called: no metering, and no StoreChangeLogger to put the 
> state on the Kafka topic.



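To make the reported problem concrete, here is a deliberately simplified, hypothetical 
sketch of the wrapping pattern; the real MeteredKeyValueStore and ProcessorContext 
APIs differ:

{code}
// Hypothetical, simplified types to illustrate the reported problem; the real
// Kafka Streams store and context interfaces are richer.
interface Store {
    void init(Context context);
}

interface Context {
    // The store passed here is the one that will receive restore callbacks,
    // change-logging, metering, etc.
    void register(Store store);
}

// Outer "root" store that is supposed to add metering around an inner store.
class MeteringStore implements Store {
    private final Store inner;

    MeteringStore(Store inner) { this.inner = inner; }

    @Override
    public void init(Context context) {
        // Delegates straight to the inner store...
        inner.init(context);
    }
}

class InMemoryStore implements Store {
    @Override
    public void init(Context context) {
        // ...which registers itself, so the metering wrapper above is never
        // the store that the state manager knows about.
        context.register(this);
    }
}
{code}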


[jira] [Created] (KAFKA-3303) Pass partial record metadata to Interceptor onAcknowledgement in case of errors

2016-02-29 Thread Anna Povzner (JIRA)
Anna Povzner created KAFKA-3303:
---

 Summary: Pass partial record metadata to Interceptor 
onAcknowledgement in case of errors
 Key: KAFKA-3303
 URL: https://issues.apache.org/jira/browse/KAFKA-3303
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.10.0.0
Reporter: Anna Povzner
Assignee: Anna Povzner
 Fix For: 0.10.0.0


Currently Interceptor.onAcknowledgement behaves similarly to Callback. If an 
exception occurred and the exception is passed to onAcknowledgement, the 
metadata parameter is set to null.

However, it would be useful to pass the topic, and the partition if available, 
to the interceptor so that it knows which topic/partition got an error.

This is part of KIP-42.



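A sketch of the kind of interceptor this would help, based on the ProducerInterceptor 
callback from KIP-42; the error handling shown is only illustrative:

{code}
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

// Sketch of an interceptor that wants to know which topic/partition failed.
// Today it cannot, because metadata is null whenever exception != null.
public class ErrorLoggingInterceptor implements ProducerInterceptor<String, String> {

    @Override
    public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
        return record;   // pass the record through unchanged
    }

    @Override
    public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
        if (exception != null) {
            if (metadata != null) {
                // With partial metadata, errors could be attributed per topic/partition.
                System.err.println("Send failed for " + metadata.topic() + "-" + metadata.partition());
            } else {
                // Current behaviour: no way to tell which topic/partition failed.
                System.err.println("Send failed: " + exception.getMessage());
            }
        }
    }

    @Override
    public void close() {}

    @Override
    public void configure(Map<String, ?> configs) {}
}
{code}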


[jira] [Updated] (KAFKA-3133) Add putIfAbsent function to KeyValueStore

2016-02-29 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3133:
-
Assignee: Kim Christensen

> Add putIfAbsent function to KeyValueStore
> -
>
> Key: KAFKA-3133
> URL: https://issues.apache.org/jira/browse/KAFKA-3133
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Kim Christensen
> Fix For: 0.10.0.0
>
>
> Since a local store will only be accessed by a single stream thread, there are 
> no atomicity concerns, and hence this API should be easy to add.





[jira] [Updated] (KAFKA-3229) Root statestore is not registered with ProcessorStateManager, inner state store is registered instead

2016-02-29 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3229:
-
Assignee: Tom Dearman

> Root statestore is not registered with ProcessorStateManager, inner state 
> store is registered instead
> -
>
> Key: KAFKA-3229
> URL: https://issues.apache.org/jira/browse/KAFKA-3229
> Project: Kafka
>  Issue Type: Bug
>  Components: kafka streams
>Affects Versions: 0.10.0.0
> Environment: MacOS El Capitan
>Reporter: Tom Dearman
>Assignee: Tom Dearman
> Fix For: 0.10.0.0
>
>
> When the hierarchy of nested StateStores is created, init is called on the 
> root store, but parent StateStores such as MeteredKeyValueStore just call 
> the contained StateStore until a store such as MemoryStore calls 
> ProcessorContext.register; it passes 'this' to the method, so only that 
> inner state store (MemoryStore in this case) is registered with 
> ProcessorStateManager. As state is added to the store, none of the parent 
> stores' code will be called: no metering, and no StoreChangeLogger to put the 
> state on the Kafka topic.





[jira] [Commented] (KAFKA-3133) Add putIfAbsent function to KeyValueStore

2016-02-29 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15172692#comment-15172692
 ] 

Guozhang Wang commented on KAFKA-3133:
--

Thanks for the patch [~kichristensen]. I have added you to the contributor list 
so you can assign yourself to Kafka JIRAs in the future.

> Add putIfAbsent function to KeyValueStore
> -
>
> Key: KAFKA-3133
> URL: https://issues.apache.org/jira/browse/KAFKA-3133
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Kim Christensen
> Fix For: 0.10.0.0
>
>
> Since a local store will only be accessed by a single stream thread, there are 
> no atomicity concerns, and hence this API should be easy to add.





[jira] [Resolved] (KAFKA-3192) Add implicit unlimited windowed aggregation for KStream

2016-02-29 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-3192.
--
   Resolution: Fixed
Fix Version/s: 0.10.0.0

Issue resolved by pull request 870
[https://github.com/apache/kafka/pull/870]

> Add implicit unlimited windowed aggregation for KStream
> ---
>
> Key: KAFKA-3192
> URL: https://issues.apache.org/jira/browse/KAFKA-3192
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
> Fix For: 0.10.0.0
>
>
> Some users would want to have a convenient way to specify "unlimited windowed 
> aggregation" for KStreams. We can add that as syntax sugar like the 
> following:
> {code}
> KTable<K, V> aggregateByKey(aggregator)
> {code}
> where it computes the aggregate WITHOUT windowing, the underlying 
> implementation just uses a RocksDBStore instead of a RocksDBWindowStore, and 
> the returned type will be KTable<K, V>, not KTable<Windowed<K>, V>. 
> With this we can also remove the UnlimitedWindows specs.



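A sketch of the two signatures being contrasted, using hypothetical simplified types; 
the real KStream API names, serde parameters and windowed-key class differ:

{code}
// Hypothetical, simplified signatures to illustrate the proposed sugar.
interface Aggregator<K, V, T> {
    T apply(K key, V value, T aggregate);
}

interface Windows { }

// Key wrapper carrying the window boundaries for windowed results.
interface WindowedKey<K> { }

interface SketchKTable<K, T> { }

interface SketchKStream<K, V> {

    // Existing style: windowed aggregation, backed by a windowed store,
    // producing one aggregate per (key, window).
    <T> SketchKTable<WindowedKey<K>, T> aggregateByKey(Aggregator<K, V, T> aggregator,
                                                       Windows windows);

    // Proposed sugar: no windows, backed by a plain key-value store,
    // producing a single running aggregate per key.
    <T> SketchKTable<K, T> aggregateByKey(Aggregator<K, V, T> aggregator);
}
{code}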


[jira] [Commented] (KAFKA-3192) Add implicit unlimited windowed aggregation for KStream

2016-02-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15172703#comment-15172703
 ] 

ASF GitHub Bot commented on KAFKA-3192:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/870


> Add implicit unlimited windowed aggregation for KStream
> ---
>
> Key: KAFKA-3192
> URL: https://issues.apache.org/jira/browse/KAFKA-3192
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
> Fix For: 0.10.0.0
>
>
> Some users would want to have a convenient way to specify "unlimited windowed 
> aggregation" for KStreams. We can add that as syntax sugar like the 
> following:
> {code}
> KTable<K, V> aggregateByKey(aggregator)
> {code}
> where it computes the aggregate WITHOUT windowing, the underlying 
> implementation just uses a RocksDBStore instead of a RocksDBWindowStore, and 
> the returned type will be KTable<K, V>, not KTable<Windowed<K>, V>. 
> With this we can also remove the UnlimitedWindows specs.





[GitHub] kafka pull request: KAFKA-3192: Add unwindowed aggregations for KS...

2016-02-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/870




[jira] [Assigned] (KAFKA-3192) Add implicit unlimited windowed aggregation for KStream

2016-02-29 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang reassigned KAFKA-3192:


Assignee: Guozhang Wang

> Add implicit unlimited windowed aggregation for KStream
> ---
>
> Key: KAFKA-3192
> URL: https://issues.apache.org/jira/browse/KAFKA-3192
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.10.0.0
>
>
> Some users would want to have a convenient way to specify "unlimited windowed 
> aggregation" for KStreams. We can add that as syntax sugar like the 
> following:
> {code}
> KTable<K, V> aggregateByKey(aggregator)
> {code}
> where it computes the aggregate WITHOUT windowing, the underlying 
> implementation just uses a RocksDBStore instead of a RocksDBWindowStore, and 
> the returned type will be KTable<K, V>, not KTable<Windowed<K>, V>. 
> With this we can also remove the UnlimitedWindows specs.





[jira] [Assigned] (KAFKA-2073) Replace TopicMetadata request/response with o.a.k.requests.metadata

2016-02-29 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson reassigned KAFKA-2073:
--

Assignee: Jason Gustafson  (was: Andrii Biletskyi)

> Replace TopicMetadata request/response with o.a.k.requests.metadata
> ---
>
> Key: KAFKA-2073
> URL: https://issues.apache.org/jira/browse/KAFKA-2073
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>Assignee: Jason Gustafson
>
> Replace TopicMetadata request/response with o.a.k.requests.metadata.
> Note, this is more challenging than it appears because, while the wire 
> protocol is identical, the objects are completely different.





[jira] [Commented] (KAFKA-2073) Replace TopicMetadata request/response with o.a.k.requests.metadata

2016-02-29 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15172713#comment-15172713
 ] 

Jason Gustafson commented on KAFKA-2073:


[~abiletskyi] I went ahead and assigned this to myself. Feel free to assign 
back if you are still working on it.

> Replace TopicMetadata request/response with o.a.k.requests.metadata
> ---
>
> Key: KAFKA-2073
> URL: https://issues.apache.org/jira/browse/KAFKA-2073
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>Assignee: Jason Gustafson
>
> Replace TopicMetadata request/response with o.a.k.requests.metadata.
> Note, this is more challenging than it appears because, while the wire 
> protocol is identical, the objects are completely different.





Build failed in Jenkins: kafka-trunk-jdk8 #400

2016-02-29 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-3133: Add putIfAbsent function to KeyValueStore

[wangguoz] HOTFIX: Missing streams jar in release

--
[...truncated 1474 lines...]

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGroupingWithSparseOffsets PASSED

kafka.log.CleanerTest > testRecoveryAfterCrash PASSED

kafka.log.CleanerTest > testLogToClean PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupStable PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatIllegalGeneration 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testDescribeGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupRebalancing 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaderFailureInSyncGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testGenerationIdIncrementsOnRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFromIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testInvalidGroupId PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesStableGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatDuringRebalanceCausesRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentGroupProtocol PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooLarge PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooSmall PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupEmptyAssignment 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetWithDefaultGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedLeaderShouldRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesRebalancingGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFollowerAfterLeader PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetInAwaitingSync 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentProtocolType PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetFromUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testLeaveGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerNewGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedFollowerDoesNotRebalance PASSED

k

[jira] [Created] (KAFKA-3304) KIP-35 - Retrieving protocol version

2016-02-29 Thread Ashish K Singh (JIRA)
Ashish K Singh created KAFKA-3304:
-

 Summary: KIP-35 - Retrieving protocol version
 Key: KAFKA-3304
 URL: https://issues.apache.org/jira/browse/KAFKA-3304
 Project: Kafka
  Issue Type: Improvement
Reporter: Ashish K Singh


Uber JIRA to track adding of functionality to retrieve protocol versions.





[jira] [Commented] (KAFKA-3304) KIP-35 - Retrieving protocol version

2016-02-29 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15172727#comment-15172727
 ] 

Ashish K Singh commented on KAFKA-3304:
---

[~edenhill] is it OK if I assign this JIRA to myself and post patches? I would 
love to have you as a reviewer. I have a WIP patch ready.

> KIP-35 - Retrieving protocol version
> 
>
> Key: KAFKA-3304
> URL: https://issues.apache.org/jira/browse/KAFKA-3304
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ashish K Singh
>
> Uber JIRA to track adding of functionality to retrieve protocol versions.





[GitHub] kafka pull request: KAFKA-3236: Honor Producer Configuration "bloc...

2016-02-29 Thread knusbaum
GitHub user knusbaum reopened a pull request:

https://github.com/apache/kafka/pull/934

KAFKA-3236: Honor Producer Configuration "block.on.buffer.full"



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/knusbaum/kafka KAFKA-3236-master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/934.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #934


commit e8b63f045ff5d854d3d3d4bc0751cbba0d37d69f
Author: Sanjiv Raj 
Date:   2016-02-11T22:20:26Z

[ADDHR-1240] Honor block.on.buffer.full producer configuration

commit 50e4f01bc5f2b004533ac60bad5d8c396508e762
Author: Sanjiv Raj 
Date:   2016-02-18T18:43:20Z

Fix failing producer integration test

commit 836afe6159ee3b902a4c809cefd0345f61e6b026
Author: Kyle Nusbaum 
Date:   2016-02-17T17:05:38Z

Updating config documentation.

commit 6009eccb3a65c0a8cc8f441c89d902708475271e
Author: Kyle Nusbaum 
Date:   2016-02-18T21:45:39Z

Fixing TestUtils

commit 5cf40a2065674d72298bdcce2a64ada1c6ca0163
Author: Kyle Nusbaum 
Date:   2016-02-19T16:50:46Z

Merge branch 'trunk' of github.com:apache/kafka into KAFKA-3236-master

commit 6e2d64ee8eedc90efff54fdd952c8d5f98a8b0d5
Author: Kyle Nusbaum 
Date:   2016-02-24T21:31:03Z

Fixing config descriptions.






[GitHub] kafka pull request: KAFKA-3236: Honor Producer Configuration "bloc...

2016-02-29 Thread knusbaum
Github user knusbaum closed the pull request at:

https://github.com/apache/kafka/pull/934




[jira] [Commented] (KAFKA-3236) Honor Producer Configuration "block.on.buffer.full"

2016-02-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15172729#comment-15172729
 ] 

ASF GitHub Bot commented on KAFKA-3236:
---

Github user knusbaum closed the pull request at:

https://github.com/apache/kafka/pull/934


> Honor Producer Configuration "block.on.buffer.full"
> ---
>
> Key: KAFKA-3236
> URL: https://issues.apache.org/jira/browse/KAFKA-3236
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer 
>Affects Versions: 0.9.0.0
>Reporter: Thomas Graves
>Assignee: Thomas Graves
>
> In Kafka-0.9, "max.block.ms" is used to control how long the following 
> methods will block.
> KafkaProducer.send() when
>* Buffer is full
>* Metadata is unavailable
> KafkaProducer.partitionsFor() when
>* Metadata is unavailable
> However, when "block.on.buffer.full" is set to false, "max.block.ms" is in 
> effect whenever a buffer is requested/allocated from the Producer BufferPool. 
> Instead it should throw a BufferExhaustedException without waiting for 
> "max.block.ms".
> This is particularly useful if a producer application does not wish to block 
> at all on KafkaProducer.send(). We avoid waiting on KafkaProducer.send() 
> when metadata is unavailable by invoking send() only if the producer instance 
> has fetched the metadata for the topic in a different thread using the same 
> producer instance. However, "max.block.ms" is still required to specify a 
> timeout for bootstrapping the metadata fetch.
> We should resolve this limitation by decoupling "max.block.ms" and 
> "block.on.buffer.full".
>* "max.block.ms" will be used exclusively for fetching metadata when 
> "block.on.buffer.full" = false (in pure non-blocking mode)
>* "max.block.ms" will be applicable to both fetching metadata as well as 
> buffer allocation when "block.on.buffer.full" = true.





[jira] [Commented] (KAFKA-3236) Honor Producer Configuration "block.on.buffer.full"

2016-02-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15172730#comment-15172730
 ] 

ASF GitHub Bot commented on KAFKA-3236:
---

GitHub user knusbaum reopened a pull request:

https://github.com/apache/kafka/pull/934

KAFKA-3236: Honor Producer Configuration "block.on.buffer.full"



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/knusbaum/kafka KAFKA-3236-master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/934.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #934


commit e8b63f045ff5d854d3d3d4bc0751cbba0d37d69f
Author: Sanjiv Raj 
Date:   2016-02-11T22:20:26Z

[ADDHR-1240] Honor block.on.buffer.full producer configuration

commit 50e4f01bc5f2b004533ac60bad5d8c396508e762
Author: Sanjiv Raj 
Date:   2016-02-18T18:43:20Z

Fix failing producer integration test

commit 836afe6159ee3b902a4c809cefd0345f61e6b026
Author: Kyle Nusbaum 
Date:   2016-02-17T17:05:38Z

Updating config documentation.

commit 6009eccb3a65c0a8cc8f441c89d902708475271e
Author: Kyle Nusbaum 
Date:   2016-02-18T21:45:39Z

Fixing TestUtils

commit 5cf40a2065674d72298bdcce2a64ada1c6ca0163
Author: Kyle Nusbaum 
Date:   2016-02-19T16:50:46Z

Merge branch 'trunk' of github.com:apache/kafka into KAFKA-3236-master

commit 6e2d64ee8eedc90efff54fdd952c8d5f98a8b0d5
Author: Kyle Nusbaum 
Date:   2016-02-24T21:31:03Z

Fixing config descriptions.




> Honor Producer Configuration "block.on.buffer.full"
> ---
>
> Key: KAFKA-3236
> URL: https://issues.apache.org/jira/browse/KAFKA-3236
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer 
>Affects Versions: 0.9.0.0
>Reporter: Thomas Graves
>Assignee: Thomas Graves
>
> In Kafka-0.9, "max.block.ms" is used to control how long the following 
> methods will block.
> KafkaProducer.send() when
>* Buffer is full
>* Metadata is unavailable
> KafkaProducer.partitionsFor() when
>* Metadata is unavailable
> However, when "block.on.buffer.full" is set to false, "max.block.ms" is in 
> effect whenever a buffer is requested/allocated from the Producer BufferPool. 
> Instead it should throw a BufferExhaustedException without waiting for 
> "max.block.ms".
> This is particularly useful if a producer application does not wish to block 
> at all on KafkaProducer.send(). We avoid waiting on KafkaProducer.send() 
> when metadata is unavailable by invoking send() only if the producer instance 
> has fetched the metadata for the topic in a different thread using the same 
> producer instance. However, "max.block.ms" is still required to specify a 
> timeout for bootstrapping the metadata fetch.
> We should resolve this limitation by decoupling "max.block.ms" and 
> "block.on.buffer.full".
>* "max.block.ms" will be used exclusively for fetching metadata when 
> "block.on.buffer.full" = false (in pure non-blocking mode)
>* "max.block.ms" will be applicable to both fetching metadata as well as 
> buffer allocation when "block.on.buffer.full" = true.





[jira] [Updated] (KAFKA-3304) KIP-35 - Retrieving protocol version

2016-02-29 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh updated KAFKA-3304:
--
Description: Uber JIRA to track adding of functionality to retrieve 
protocol versions. More discussion can be found on 
[KIP-35|https://cwiki.apache.org/confluence/display/KAFKA/KIP-35+-+Retrieving+protocol+version].
  (was: Uber JIRA to track adding of functionality to retrieve protocol 
versions.)

> KIP-35 - Retrieving protocol version
> 
>
> Key: KAFKA-3304
> URL: https://issues.apache.org/jira/browse/KAFKA-3304
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ashish K Singh
>
> Uber JIRA to track adding of functionality to retrieve protocol versions. 
> More discussion can be found on 
> [KIP-35|https://cwiki.apache.org/confluence/display/KAFKA/KIP-35+-+Retrieving+protocol+version].





[jira] [Created] (KAFKA-3305) JMX in AWS instance

2016-02-29 Thread Flavio Junqueira (JIRA)
Flavio Junqueira created KAFKA-3305:
---

 Summary: JMX in AWS instance
 Key: KAFKA-3305
 URL: https://issues.apache.org/jira/browse/KAFKA-3305
 Project: Kafka
  Issue Type: Improvement
 Environment: AWS
Reporter: Flavio Junqueira


While connecting JConsole to a broker running on AWS, I've needed to set the 
following properties to get it to work:

{noformat}
-Djava.net.preferIPv4Stack=true 
-Djava.rmi.server.hostname=xxx.compute.amazonaws.com 
-Dcom.sun.management.jmxremote.rmi.port=9996
{noformat}

in addition to setting the JMX_PORT variable. I suggest we at least document 
these options and possibly have them in {{kafka-run-class.sh}}.





Possible bug in Streams → KStreamImpl.java

2016-02-29 Thread Avi Flax
I was just playing around with Streams’ join features, just to get a
feel for them, and I think I may have noticed a bug in the code, in
KStreamImpl.java on line 310:

https://github.com/apache/kafka/blob/845c6eae1f6c6bcf117f5baa53bb19b4611c0528/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KStreamImpl.java#L310

(I’m linking to the latest commit that changed this file so that the
link will be stable, but line 310 is currently identical in this
commit and trunk.)

the line reads:

.withValues(otherValueSerializer, otherValueDeserializer)

but I think maybe it’s supposed to read:

.withValues(thisValueSerializer, thisValueDeserializer)

I took a look at the tests and it seems they’re not catching this
because in the current tests, the serdes for both streams are the same
— it might be a good idea to add a test wherein they’re different.

If Streams was stable I’d offer to prepare a PR but given that it’s a
WIP I figured it would be better to just share this observation.

HTH!

Avi
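
A small, self-contained illustration (not Kafka code) of why the swap stays hidden 
when both streams use the same serde and only surfaces once they differ:

{code}
import java.nio.charset.StandardCharsets;

// Tiny illustration of the bug in miniature: picking the "other" stream's
// serializer for "this" stream's values only breaks when the serdes differ.
public class SerdeSwapDemo {

    interface Ser<T> { byte[] serialize(T value); }

    static final Ser<String> STRING_SER = s -> s.getBytes(StandardCharsets.UTF_8);
    static final Ser<Long> LONG_SER = l -> java.nio.ByteBuffer.allocate(8).putLong(l).array();

    // Models the join internals choosing a serializer for "this" stream's values.
    static <V> byte[] materializeThisValue(V value, Ser<V> thisSer, Ser<?> otherSer) {
        @SuppressWarnings("unchecked")
        Ser<V> chosen = (Ser<V>) otherSer;   // the swap: should be thisSer
        return chosen.serialize(value);
    }

    public static void main(String[] args) {
        // Same serde on both sides: the bug is invisible.
        System.out.println(materializeThisValue("a", STRING_SER, STRING_SER).length);
        // Different serdes: a ClassCastException at runtime exposes the swap.
        System.out.println(materializeThisValue("a", STRING_SER, LONG_SER).length);
    }
}
{code}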


[jira] [Updated] (KAFKA-3306) Change metadata response to include required additional fields

2016-02-29 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3306:
---
Summary: Change metadata response to include required additional fields  
(was: Update metadata response to include required additional fields)

> Change metadata response to include required additional fields
> -
>
> Key: KAFKA-3306
> URL: https://issues.apache.org/jira/browse/KAFKA-3306
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> This includes:
> - Rack information from KAFKA-1215 (KIP-36)
> - A boolean to indicate if a topic is marked for deletion





[jira] [Created] (KAFKA-3306) Update metadata response to include required additional fields

2016-02-29 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-3306:
--

 Summary: Update metadata response to include required additional 
fields
 Key: KAFKA-3306
 URL: https://issues.apache.org/jira/browse/KAFKA-3306
 Project: Kafka
  Issue Type: Sub-task
Reporter: Grant Henke
Assignee: Grant Henke


This includes:
- Rack information from KAFKA-1215 (KIP-36)
- A boolean to indicate if a topic is marked for deletion





[jira] [Resolved] (KAFKA-3289) Update Kafka protocol guide wiki for KIP-31 / KIP-32

2016-02-29 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin resolved KAFKA-3289.
-
Resolution: Fixed

> Update Kafka protocol guide wiki for KIP-31 / KIP-32
> 
>
> Key: KAFKA-3289
> URL: https://issues.apache.org/jira/browse/KAFKA-3289
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.10.0.0
>
>






[GitHub] kafka pull request: MINOR: add AUTO_OFFSET_RESET_CONFIG to Streams...

2016-02-29 Thread ymatsuda
GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/985

MINOR: add AUTO_OFFSET_RESET_CONFIG to StreamsConfig,

and remove TOTAL_RECORDS_TO_PROCESS
@guozhangwang 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka config_params

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/985.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #985


commit 4074465fa0032c242a066a7684748df7eb432e02
Author: Yasuhiro Matsuda 
Date:   2016-02-29T22:47:04Z

MINOR: add AUTO_OFFSET_RESET_CONFIG to StreamsConfig, remove 
TOTAL_RECORDS_TO_PROCESS

commit 1ee31381eb2ed55aceb836fce214a2a5b3f3f5cc
Author: Yasuhiro Matsuda 
Date:   2016-02-29T22:47:14Z

Merge branch 'trunk' of github.com:apache/kafka into config_params






[jira] [Commented] (KAFKA-3236) Honor Producer Configuration "block.on.buffer.full"

2016-02-29 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15172838#comment-15172838
 ] 

Jiangjie Qin commented on KAFKA-3236:
-

[~tgraves] We had extensive discussion on the configurations in KIP-19.
https://cwiki.apache.org/confluence/display/KAFKA/KIP-19+-+Add+a+request+timeout+to+NetworkClient

We will actually deprecate the {{block.on.buffer.full}} configuration in the 
next release. The rationale behind this is that any synchronous call on the 
producer is guaranteed to return within {{max.block.ms}}. That includes buffer 
full, metadata refresh and so on. 

In your use case, {{block.on.buffer.full = false}} and {{max.block.ms > 0}} is 
not a pure non-blocking mode, because {{producer.send()}} can still block for 
up to {{max.block.ms}}, right?

> Honor Producer Configuration "block.on.buffer.full"
> ---
>
> Key: KAFKA-3236
> URL: https://issues.apache.org/jira/browse/KAFKA-3236
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer 
>Affects Versions: 0.9.0.0
>Reporter: Thomas Graves
>Assignee: Thomas Graves
>
> In Kafka-0.9, "max.block.ms" is used to control how long the following 
> methods will block.
> KafkaProducer.send() when
>* Buffer is full
>* Metadata is unavailable
> KafkaProducer.partitionsFor() when
>* Metadata is unavailable
> However, when "block.on.buffer.full" is set to false, "max.block.ms" is in 
> effect whenever a buffer is requested/allocated from the Producer BufferPool. 
> Instead it should throw a BufferExhaustedException without waiting for 
> "max.block.ms".
> This is particularly useful if a producer application does not wish to block 
> at all on KafkaProducer.send(). We avoid waiting on KafkaProducer.send() 
> when metadata is unavailable by invoking send() only if the producer instance 
> has fetched the metadata for the topic in a different thread using the same 
> producer instance. However, "max.block.ms" is still required to specify a 
> timeout for bootstrapping the metadata fetch.
> We should resolve this limitation by decoupling "max.block.ms" and 
> "block.on.buffer.full".
>* "max.block.ms" will be used exclusively for fetching metadata when 
> "block.on.buffer.full" = false (in pure non-blocking mode)
>* "max.block.ms" will be applicable to both fetching metadata as well as 
> buffer allocation when "block.on.buffer.full" = true.



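A minimal sketch of the behaviour described above, using the public producer API; 
the broker address and topic are placeholders:

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// Sketch: max.block.ms is the single bound on how long synchronous producer
// calls (send(), partitionsFor()) may block, whether for buffer space or for
// metadata.
public class MaxBlockExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Upper bound for blocking in send()/partitionsFor(), including the
        // initial metadata fetch.
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "500");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // If the buffer is full or metadata is unavailable, this call blocks
            // at most ~500 ms before failing with a TimeoutException.
            producer.send(new ProducerRecord<>("test-topic", "key", "value"));
        }
    }
}
{code}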


[jira] [Commented] (KAFKA-3304) KIP-35 - Retrieving protocol version

2016-02-29 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15172872#comment-15172872
 ] 

Ashish K Singh commented on KAFKA-3304:
---

Assigning the JIRA to myself.

> KIP-35 - Retrieving protocol version
> 
>
> Key: KAFKA-3304
> URL: https://issues.apache.org/jira/browse/KAFKA-3304
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ashish K Singh
>
> Uber JIRA to track adding of functionality to retrieve protocol versions. 
> More discussion can be found on 
> [KIP-35|https://cwiki.apache.org/confluence/display/KAFKA/KIP-35+-+Retrieving+protocol+version].





[jira] [Assigned] (KAFKA-3304) KIP-35 - Retrieving protocol version

2016-02-29 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh reassigned KAFKA-3304:
-

Assignee: Ashish K Singh

> KIP-35 - Retrieving protocol version
> 
>
> Key: KAFKA-3304
> URL: https://issues.apache.org/jira/browse/KAFKA-3304
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Uber JIRA to track adding of functionality to retrieve protocol versions. 
> More discussion can be found on 
> [KIP-35|https://cwiki.apache.org/confluence/display/KAFKA/KIP-35+-+Retrieving+protocol+version].





Build failed in Jenkins: kafka-trunk-jdk8 #401

2016-02-29 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-3192: Add unwindowed aggregations for KStream; and make all

--
[...truncated 5659 lines...]
org.apache.kafka.connect.storage.KafkaConfigStorageTest > 
testPutTaskConfigsDoesNotResolveAllInconsistencies PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testStartStop 
PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > 
testReloadOnStart PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testSetFailure 
PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testMissingTopic 
PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > readTaskState 
PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > putTaskState 
PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putConnectorStateNonRetriableFailure PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putConnectorStateShouldOverride PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putConnectorStateRetriableFailure PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putSafeOverridesValueSetBySameWorker PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
readConnectorState PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putSafeConnectorIgnoresStaleStatus PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
putConnectorState PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
putAndGetConnectorStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
putAndGetTaskStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteTaskStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteConnectorStatus PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testWriteFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullValueFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullKeyFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testNoOffsetsToFlush 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testFlushFailureReplacesOffsets PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testAlreadyFlushing 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelBeforeAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelAfterAwaitFlush PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testSaveRestore 
PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testListConnectors PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testDeleteConnectorNotLeader PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testDeleteConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnector PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorConfig PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorConfigConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorTaskConfigs PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorTaskConfigsConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorTaskConfigs PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorTaskConfigsConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testListConnectorsNotLeader PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testListConnectorsNotSynced PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnector PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorNotLeader PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorExists PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testDeleteConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testJoinAssignment 

Build failed in Jenkins: kafka-trunk-jdk7 #1073

2016-02-29 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-3133: Add putIfAbsent function to KeyValueStore

[wangguoz] HOTFIX: Missing streams jar in release

--
[...truncated 5623 lines...]

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testStartStop 
PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > 
testReloadOnStart PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testSetFailure 
PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testMissingTopic 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testWriteFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullValueFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullKeyFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testNoOffsetsToFlush 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testFlushFailureReplacesOffsets PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testAlreadyFlushing 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelBeforeAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelAfterAwaitFlush PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testSaveRestore 
PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testStartStop PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testPutTaskConfigs 
PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testRestore PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > 
testPutTaskConfigsDoesNotResolveAllInconsistencies PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
putAndGetConnectorStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
putAndGetTaskStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteTaskStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteConnectorStatus PASSED

org.apache.kafka.connect.runtime.WorkerTest > testStartAndStopConnector PASSED

org.apache.kafka.connect.runtime.WorkerTest > testAddConnectorByAlias PASSED

org.apache.kafka.connect.runtime.WorkerTest > testAddConnectorByShortAlias 
PASSED

org.apache.kafka.connect.runtime.WorkerTest > testStopInvalidConnector PASSED

org.apache.kafka.connect.runtime.WorkerTest > testReconfigureConnectorTasks 
PASSED

org.apache.kafka.connect.runtime.WorkerTest > testAddRemoveTask PASSED

org.apache.kafka.connect.runtime.WorkerTest > testStopInvalidTask PASSED

org.apache.kafka.connect.runtime.WorkerTest > testCleanupTasksOnStop PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > testPollRedelivery PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > 
testErrorInRebalancePartitionRevocation PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > 
testErrorInRebalancePartitionAssignment PASSED

org.apache.kafka.connect.runtime.WorkerTaskTest > stopBeforeStarting PASSED

org.apache.kafka.connect.runtime.WorkerTaskTest > standardStartup PASSED

org.apache.kafka.connect.runtime.WorkerTaskTest > cancelBeforeStopping PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testPollsInBackground 
FAILED
java.lang.AssertionError: 
  Expectation failure on verify:
Listener.onStartup(job-0): expected: 1, actual: 1
Listener.onShutdown(job-0): expected: 1, actual: 0
at org.easymock.internal.MocksControl.verify(MocksControl.java:225)
at 
org.powermock.api.easymock.internal.invocationcontrol.EasyMockMethodInvocationControl.verify(EasyMockMethodInvocationControl.java:132)
at org.powermock.api.easymock.PowerMock.verify(PowerMock.java:1466)
at org.powermock.api.easymock.PowerMock.verifyAll(PowerMock.java:1405)
at 
org.apache.kafka.connect.runtime.WorkerSourceTaskTest.testPollsInBackground(WorkerSourceTaskTest.java:151)

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testFailureInPoll PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testCommit FAILED
java.lang.AssertionError: 
  Expectation failure on verify:
Listener.onStartup(job-0): expected: 1, actual: 1
Listener.onShutdown(job-0): expected: 1, actual: 0
at org.easymock.internal.MocksControl.verify(MocksControl.java:225)
at 
org.powermock.api.easymock.internal.invocationcontrol.EasyMockMethodInvocationControl.verify(EasyMockMethodInvocationControl.java:132)
at org.powermock.api.easymock.PowerMock.verify(PowerMock.java:1466)
at org.po

[jira] [Created] (KAFKA-3307) Add ProtocolVersion request/response and server side handling.

2016-02-29 Thread Ashish K Singh (JIRA)
Ashish K Singh created KAFKA-3307:
-

 Summary: Add ProtocolVersion request/response and server side 
handling.
 Key: KAFKA-3307
 URL: https://issues.apache.org/jira/browse/KAFKA-3307
 Project: Kafka
  Issue Type: Sub-task
Reporter: Ashish K Singh
Assignee: Ashish K Singh








[jira] [Commented] (KAFKA-2431) Test SSL/TLS impact on performance

2016-02-29 Thread Simon Suo (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15172875#comment-15172875
 ] 

Simon Suo commented on KAFKA-2431:
--

Hi all. This is Simon, a data infra intern working on LinkedIn's Kafka team. I 
am currently evaluating solutions to reduce the performance overhead of Kafka's 
security features.

The summary report here discusses the possibility of an optional OpenSSL 
implementation that may achieve a 4 to 5 times speedup. Is this being developed 
right now? Do you have any additional benchmark data to show the potential 
performance gain?

Let me know if you have any relevant information and time for a small 
discussion. I can be reached at s...@linkedin.com

Best regards,
Simon Suo

> Test SSL/TLS impact on performance
> --
>
> Key: KAFKA-2431
> URL: https://issues.apache.org/jira/browse/KAFKA-2431
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Ben Stopford
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Test new Producer and new Consumer performance with and without SSL/TLS once 
> the SSL/TLS branch is integrated.
> The ideal scenario is that SSL/TLS would not have an impact if disabled. When 
> enabled, there will be some overhead (encryption and the inability to use 
> `SendFile`) and it will be good to quantify it. The encryption overhead is 
> reduced if recent JDKs are used with CPUs that support AES-specific 
> instructions (https://en.wikipedia.org/wiki/AES_instruction_set).





Re: Possible bug in Streams → KStreamImpl.java

2016-02-29 Thread Guozhang Wang
Hi Avi,

Thanks for pointing it out! And yes it is indeed a bug in the code. Could
you file a HOTFIX PR fixing this and also modify the existing unit test to
cover this case?

Thanks,
Guozhang

On Mon, Feb 29, 2016 at 2:15 PM, Avi Flax  wrote:

> I was just playing around with Streams’ join features, just to get a
> feel for them, and I think I may have noticed a bug in the code, in
> KStreamImpl.java on line 310:
>
>
> https://github.com/apache/kafka/blob/845c6eae1f6c6bcf117f5baa53bb19b4611c0528/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KStreamImpl.java#L310
>
> (I’m linking to the latest commit that changed this file so that the
> link will be stable, but line 310 is currently identical in this
> commit and trunk.)
>
> the line reads:
>
> .withValues(otherValueSerializer, otherValueDeserializer)
>
> but I think maybe it’s supposed to read:
>
> .withValues(thisValueSerializer, thisValueDeserializer)
>
> I took a look at the tests and it seems they’re not catching this
> because in the current tests, the serdes for both streams are the same
> — it might be a good idea to add a test wherein they’re different.
>
> If Streams were stable I’d offer to prepare a PR, but given that it’s a
> WIP I figured it would be better to just share this observation.
>
> HTH!
>
> Avi
>



-- 
-- Guozhang


[GitHub] kafka pull request: Kafka 3307: Add ProtocolVersion request/respon...

2016-02-29 Thread SinghAsDev
GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/986

Kafka 3307: Add ProtocolVersion request/response and server side handling.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-3307

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/986.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #986


commit 3765aa58300f5ffc6997cad22e86ad90f383129e
Author: Ashish Singh 
Date:   2016-02-29T00:42:36Z

Init patch. Add req/resp. Add req handling. Add basic ut.

commit e4465910e94cf3300589f94aeeafe55c0ff7ed3e
Author: Ashish Singh 
Date:   2016-02-29T04:23:52Z

Respond with empty response body for invalid requests

commit 2985d6dad7241092c8fe9170d56e85f621d8fa1b
Author: Ashish Singh 
Date:   2016-02-29T04:28:40Z

Remove commented code

commit 3f103945afee264a9a5d663c6a8048e1388896ad
Author: Ashish Singh 
Date:   2016-02-29T22:14:28Z

Add mechanism to deprecate a protocol version. Populate ProtocolVersion's 
apiDeprecatedVersions using this mechanism.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: KAFKA-3307: Add ProtocolVersion request/respon...

2016-02-29 Thread SinghAsDev
GitHub user SinghAsDev reopened a pull request:

https://github.com/apache/kafka/pull/986

KAFKA-3307: Add ProtocolVersion request/response and server side handling.

The patch does the following.
1. For unknown requests or protocol versions, broker sends an empty 
response, instead of simply closing the connection (see the sketch below).
2. Adds ProtocolVersion request and response, and server side 
implementation.
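
To make the first point concrete, an illustrative-only sketch of that behaviour;
the class, field and method names below are hypothetical and are not taken from
the actual KafkaApis code:

    import java.nio.ByteBuffer;
    import java.util.HashMap;
    import java.util.Map;

    public class VersionGuardSketch {
        // Highest request version this toy broker understands, per api key (values made up).
        private final Map<Short, Short> maxSupportedVersion = new HashMap<>();

        public VersionGuardSketch() {
            maxSupportedVersion.put((short) 0, (short) 2);
            maxSupportedVersion.put((short) 1, (short) 2);
        }

        ByteBuffer handle(short apiKey, short apiVersion, ByteBuffer requestBody) {
            Short max = maxSupportedVersion.get(apiKey);
            if (max == null || apiVersion > max) {
                // Unknown api or version: return an empty response body rather than
                // closing the connection, so the client gets a definite answer back.
                return ByteBuffer.allocate(0);
            }
            return requestBody; // stand-in for dispatching to the real handler
        }
    }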

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-3307

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/986.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #986


commit 3765aa58300f5ffc6997cad22e86ad90f383129e
Author: Ashish Singh 
Date:   2016-02-29T00:42:36Z

Init patch. Add req/resp. Add req handling. Add basic ut.

commit e4465910e94cf3300589f94aeeafe55c0ff7ed3e
Author: Ashish Singh 
Date:   2016-02-29T04:23:52Z

Respond with empty response body for invalid requests

commit 2985d6dad7241092c8fe9170d56e85f621d8fa1b
Author: Ashish Singh 
Date:   2016-02-29T04:28:40Z

Remove commented code

commit 3f103945afee264a9a5d663c6a8048e1388896ad
Author: Ashish Singh 
Date:   2016-02-29T22:14:28Z

Add mechanism to deprecate a protocol version. Populate ProtocolVersion's 
apiDeprecatedVersions using this mechanism.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: KAFKA-3307: Add ProtocolVersion request/respon...

2016-02-29 Thread SinghAsDev
Github user SinghAsDev closed the pull request at:

https://github.com/apache/kafka/pull/986


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3307) Add ProtocolVersion request/response and server side handling.

2016-02-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15172879#comment-15172879
 ] 

ASF GitHub Bot commented on KAFKA-3307:
---

Github user SinghAsDev closed the pull request at:

https://github.com/apache/kafka/pull/986


> Add ProtocolVersion request/response and server side handling.
> --
>
> Key: KAFKA-3307
> URL: https://issues.apache.org/jira/browse/KAFKA-3307
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3307) Add ProtocolVersion request/response and server side handling.

2016-02-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15172880#comment-15172880
 ] 

ASF GitHub Bot commented on KAFKA-3307:
---

GitHub user SinghAsDev reopened a pull request:

https://github.com/apache/kafka/pull/986

KAFKA-3307: Add ProtocolVersion request/response and server side handling.

The patch does the following.
1. For unknown requests or protocol versions, broker sends an empty 
response, instead of simply closing the connection.
2. Adds ProtocolVersion request and response, and server side 
implementation.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-3307

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/986.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #986


commit 3765aa58300f5ffc6997cad22e86ad90f383129e
Author: Ashish Singh 
Date:   2016-02-29T00:42:36Z

Init patch. Add req/resp. Add req handling. Add basic ut.

commit e4465910e94cf3300589f94aeeafe55c0ff7ed3e
Author: Ashish Singh 
Date:   2016-02-29T04:23:52Z

Respond with empty response body for invalid requests

commit 2985d6dad7241092c8fe9170d56e85f621d8fa1b
Author: Ashish Singh 
Date:   2016-02-29T04:28:40Z

Remove commented code

commit 3f103945afee264a9a5d663c6a8048e1388896ad
Author: Ashish Singh 
Date:   2016-02-29T22:14:28Z

Add mechanism to deprecate a protocol version. Populate ProtocolVersion's 
apiDeprecatedVersions using this mechanism.




> Add ProtocolVersion request/response and server side handling.
> --
>
> Key: KAFKA-3307
> URL: https://issues.apache.org/jira/browse/KAFKA-3307
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3307) Add ProtocolVersion request/response and server side handling.

2016-02-29 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh updated KAFKA-3307:
--
Status: Patch Available  (was: Open)

> Add ProtocolVersion request/response and server side handling.
> --
>
> Key: KAFKA-3307
> URL: https://issues.apache.org/jira/browse/KAFKA-3307
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3307) Add ProtocolVersion request/response and server side handling.

2016-02-29 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh updated KAFKA-3307:
--
Affects Version/s: 0.10.0.0

> Add ProtocolVersion request/response and server side handling.
> --
>
> Key: KAFKA-3307
> URL: https://issues.apache.org/jira/browse/KAFKA-3307
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.10.0.0
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3308) Use ProtocolVersion during rolling upgrades to decide on a mutually agreed protocol version between brokers, eliminating the need of inter.broker.protocol.version.

2016-02-29 Thread Ashish K Singh (JIRA)
Ashish K Singh created KAFKA-3308:
-

 Summary: Use ProtocolVersion during rolling upgrades to decide on 
a mutually agreed protocol version between brokers, eliminating the need of 
inter.broker.protocol.version.
 Key: KAFKA-3308
 URL: https://issues.apache.org/jira/browse/KAFKA-3308
 Project: Kafka
  Issue Type: Sub-task
Reporter: Ashish K Singh
Assignee: Ashish K Singh






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-35 - Retrieve protocol version

2016-02-29 Thread Ashish Singh
I hope it is OK for me to make some progress here. I have made the
following changes.

1. Updated KIP-35, to adopt Jay's suggestion on maintaining separate list
of deprecated versions, instead of using a version of -1.
2. Added information on required permissions, Describe action on Cluster
resource, to be able to retrieve protocol versions from an auth-enabled
Kafka cluster.

Created https://issues.apache.org/jira/browse/KAFKA-3304. Primary patch is
available to review, https://github.com/apache/kafka/pull/986

On Thu, Feb 25, 2016 at 1:27 PM, Ashish Singh  wrote:

> Kafka clients in Hadoop ecosystem, Flume, Spark, etc, have found it really
> difficult to cope with Kafka releases as they want to support users on
> different Kafka versions. Capability to retrieve protocol version will go a
> long way to ease out those pain points. I will be happy to help out with
> the work on this KIP. @Magnus, thanks for driving this, is it OK if I carry
> forward the work from here. It will be ideal to have this in 0.10.0.0.
>
> On Mon, Oct 12, 2015 at 9:29 PM, Jay Kreps  wrote:
>
>> I wonder if we need to solve the error problem? I think this KIP gives a
>> decent workaround.
>>
>> Probably we should have included an error in the response header, but we
>> debated it at the time and decided not to, and now it is pretty hard to add
>> because the headers aren't versioned (d'oh).
>>
>> It seems like any other solution is going to be kind of a hack, right?
>> Sending malformed responses back seems like not a clean solution...
>>
>> (Not sure if I was pro- having a top-level error or not, but in any case
>> the rationale for the decision was that so many of the requests were
>> per-partition or per-topic or whatever and hence fail or succeed at that
>> level and this makes it hard to know what the right top-level error code
>> is
>> and hard for the client to figure out what to do with the top level error
>> if some of the partitions succeed but there is a top-level error).
>>
>> I think this new API actually gives a way to handle this
>> gracefully on the client side by just having clients that want to be
>> graceful check for support for their version. Clients that do that will
>> have a graceful message.
>>
>> At some point if we're ever reworking the headers we should really
>> consider
>> (a) versioning them and (b) adding a top-level error code in the response.
>> But given this would be a big breaking change and this is really just to
>> give a nicer error message seems like it probably isn't worth it to try to
>> do something now.
>>
>> -Jay
>>
>>
>>
>> On Mon, Oct 12, 2015 at 8:11 PM, Jiangjie Qin 
>> wrote:
>>
>> > I am thinking instead of returning an empty response, it would be
>> better to
>> > return an explicit UnsupportedVersionException code.
>> >
>> > Today KafkaApis handles the error in the following way:
>> > 1. For requests/responses using old Scala classes, KafkaApis uses
>> > RequestOrResponse.handleError() to return an error response.
>> > 2. For requests/response using Java classes (only JoinGroupRequest and
>> > Heartbeat now), KafkaApis calls AbstractRequest.getErrorResponse() to
>> > return an error response.
>> >
>> > In KAFKA-2512, I am returning an UnsupportedVersionException for case
>> [1]
>> > when see an unsupported version. This will put the error code per topic
>> or
>> > partition for most of the requests, but might not work all the time.
>> e.g.
>> > TopicMetadataRequest with an empty topic set.
>> >
>> > Case [2] does not quite work for unsupported version, because we will
>> > throw an uncaught exception when version is not recognized (BTW this
>> is a
>> > bug). Part of the reason is that for some response types, error code is
>> not
>> > part of the response level field.
>> >
>> > Maybe it worth checking how each response is dealing with error code
>> today.
>> > A scan of the response formats gives the following result:
>> > 1. TopicMetadataResponse - per topic error code, does not work when the
>> > topic set is empty in the request.
>> > 2. ProduceResonse - per partition error code.
>> > 3. OffsetCommitResponse - per partition.
>> > 4. OffsetFetchResponse - per partition.
>> > 5. OffsetResponse - per partition.
>> > 6. FetchResponse - per partition
>> > 7. ConsumerMetadataResponse - response level
>> > 8. ControlledShutdownResponse - response level
>> > 9. JoinGroupResponse - response level
>> > 10. HearbeatResponse - response level
>> > 11. LeaderAndIsrResponse - response level
>> > 12. StopReplicaResponse - response level
>> > 13. UpdateMetadataResponse - response level
>> >
>> > So from the list above it looks for each response we are actually able
>> to
>> > return an error code, as long as we make sure the topic or partition
>> won't
>> > be empty when the error code is at topic or partition level. Luckily in
>> the
>> > above list we only need to worry about TopicMetadataResponse.
>> >
>> > Maybe error handling is out of the scope of this KIP, but I prefer we
>> 

[jira] [Created] (KAFKA-3309) Update Protocol Documentation WIP patch

2016-02-29 Thread Ashish K Singh (JIRA)
Ashish K Singh created KAFKA-3309:
-

 Summary: Update Protocol Documentation WIP patch
 Key: KAFKA-3309
 URL: https://issues.apache.org/jira/browse/KAFKA-3309
 Project: Kafka
  Issue Type: Sub-task
Reporter: Ashish K Singh
Assignee: Ashish K Singh


https://github.com/apache/kafka/pull/970 has the WIP patch. Update to reflect 
changes made for KIP-35.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: MINOR: DefaultMessageFormatter custom deserial...

2016-02-29 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/987

MINOR: DefaultMessageFormatter custom deserializer fixes

The ability to specify a deserializer for keys and values was added in a 
recent commit (845c6eae1f6c6bcf11), but it contained a few issues.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka console-consumer-cleanups

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/987.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #987


commit 06b5beedb7c9baa9e76a726a523e1218073f20fc
Author: Ismael Juma 
Date:   2016-02-29T22:54:15Z

Remove `println` that were merged to trunk by mistake

commit a48db9f8abcad1c5121b26bf43212252f0d81c12
Author: Ismael Juma 
Date:   2016-02-29T22:56:07Z

Change `key.decoder` to `key.deserializer` and `value.decoder` to 
`value.deserializer`

This makes it consistent with the new consumer.

commit 29ead51aafbd25fca280a38ea0418c2465b4a33c
Author: Ismael Juma 
Date:   2016-02-29T23:42:56Z

Handle case where user doesn't provide deserializer correctly




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: MINOR: add AUTO_OFFSET_RESET_CONFIG to Streams...

2016-02-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/985


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-3310) fetch requests can trigger repeated NPE when quota is enabled

2016-02-29 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-3310:
--

 Summary: fetch requests can trigger repeated NPE when quota is 
enabled
 Key: KAFKA-3310
 URL: https://issues.apache.org/jira/browse/KAFKA-3310
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.9.0.1
Reporter: Jun Rao


We saw the following NPE when consumer quota is enabled. NPE is triggered on 
every fetch request from the client.

java.lang.NullPointerException
at 
kafka.server.ClientQuotaManager.recordAndMaybeThrottle(ClientQuotaManager.scala:122)
at 
kafka.server.KafkaApis.kafka$server$KafkaApis$$sendResponseCallback$3(KafkaApis.scala:419)
at 
kafka.server.KafkaApis$$anonfun$handleFetchRequest$1.apply(KafkaApis.scala:436)
at 
kafka.server.KafkaApis$$anonfun$handleFetchRequest$1.apply(KafkaApis.scala:436)
at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:481)
at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:431)
at kafka.server.KafkaApis.handle(KafkaApis.scala:69)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
at java.lang.Thread.run(Thread.java:745)

One possible cause of this is the logic of removing inactive sensors. 
Currently, in ClientQuotaManager, we create two sensors per clientId: a 
throttleTimeSensor and a quotaSensor. Each sensor expires if it's not actively 
updated for 1 hour. What can happen is that initially, the quota is not 
exceeded. So, quotaSensor is being updated actively, but throttleTimeSensor is 
not. At some point, throttleTimeSensor is removed by the expiring thread. Now, 
we are in a situation that quotaSensor is registered, but throttleTimeSensor is 
not. Later on, if the quota is exceeded, we will hit the above NPE when trying 
to update throttleTimeSensor.
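
A simplified, illustrative-only model of that expiry race; this is not the
actual ClientQuotaManager code, and the class and key names are made up for the
demo:

    import java.util.HashMap;
    import java.util.Map;

    public class SensorExpiryNpeDemo {
        static class Sensor {
            void record(double value) { /* no-op for the demo */ }
        }

        static final Map<String, Sensor> sensors = new HashMap<>();

        public static void main(String[] args) {
            String clientId = "consumer-1";
            sensors.put(clientId + ":quota", new Sensor());
            sensors.put(clientId + ":throttleTime", new Sensor());

            // quotaSensor stays busy while the quota is not exceeded; throttleTimeSensor
            // is idle, so the expiration thread removes it after an hour of inactivity.
            sensors.remove(clientId + ":throttleTime");

            // Later the quota is exceeded and the handler records a throttle time without
            // re-creating the sensor -> NullPointerException, matching the trace above.
            sensors.get(clientId + ":throttleTime").record(42.0);
        }
    }

Either lazily re-creating the missing sensor or recording a zero throttle time on
every request would keep that lookup from returning null.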




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-35 - Retrieve protocol version

2016-02-29 Thread Dana Powers
This is a fantastic and much-needed KIP. All third-party clients have had
to deal with this issue. In my experience most clients are either declaring
they only support version broker version X, or are spending a lot of time
hacking around the issue. I think the community of non-java drivers would
see significant benefit from this proposal.

My specific thought is that for kafka-python it has been easier to manage
compatibility using broker release version to gate various features by
api-protocol version. For example, only enable group coordination apis if
>= (0, 9), kafka-backed offsets >= (0, 8, 2), etc. As an example, here are
some backwards compatibility issues that I think are difficult to capture
w/ just the protocol versions:

- LZ4 compression only supported in brokers >= 0.8.2, but no protocol
change.
- kafka-backed offset storage, in addition to requiring new offset
commit/fetch protocol versions, also requires adding support for tracking
the group coordinator.
- 0.8.2-beta OffsetCommit api [different than 0.8.X release]


Could release version be added to the api response in this proposal?
Perhaps include a KafkaRelease string in the Response before the array of
api versions?

Thanks for the great KIP, Magnus. And thanks for restarting the discussion,
Ashish. I also would like to see this addressed in 0.10

-Dana
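
As a rough illustration of the release-based gating described above, a hedged
sketch; the version tuples and feature names mirror the examples in this
message, but the parsing and the hypothetical KafkaRelease string are
assumptions, not part of any existing Kafka API:

    public class FeatureGateSketch {
        // Lexicographic comparison of release tuples, e.g. {0, 9, 0} >= {0, 8, 2}.
        static boolean atLeast(int[] broker, int... required) {
            for (int i = 0; i < required.length; i++) {
                int b = i < broker.length ? broker[i] : 0;
                if (b != required[i]) return b > required[i];
            }
            return true;
        }

        public static void main(String[] args) {
            int[] brokerRelease = {0, 9, 0}; // e.g. parsed from a hypothetical KafkaRelease string
            boolean groupCoordinationApis = atLeast(brokerRelease, 0, 9);    // >= 0.9
            boolean kafkaBackedOffsets    = atLeast(brokerRelease, 0, 8, 2); // >= 0.8.2
            boolean lz4Compression        = atLeast(brokerRelease, 0, 8, 2); // no protocol change, release-gated
            System.out.println("groups=" + groupCoordinationApis
                    + " offsets=" + kafkaBackedOffsets + " lz4=" + lz4Compression);
        }
    }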


On Mon, Feb 29, 2016 at 3:55 PM, Ashish Singh  wrote:

> I hope it is OK for me to make some progress here. I have made the
> following changes.
>
> 1. Updated KIP-35, to adopt Jay's suggestion on maintaining separate list
> of deprecated versions, instead of using a version of -1.
> 2. Added information on required permissions, Describe action on Cluster
> resource, to be able to retrieve protocol versions from an auth-enabled
> Kafka cluster.
>
> Created https://issues.apache.org/jira/browse/KAFKA-3304. Primary patch is
> available to review, https://github.com/apache/kafka/pull/986
>
> On Thu, Feb 25, 2016 at 1:27 PM, Ashish Singh  wrote:
>
> > Kafka clients in Hadoop ecosystem, Flume, Spark, etc, have found it
> really
> > difficult to cope with Kafka releases as they want to support users on
> > different Kafka versions. Capability to retrieve protocol version will
> go a
> > long way to ease out those pain points. I will be happy to help out with
> > the work on this KIP. @Magnus, thanks for driving this, is it OK if I
> carry
> > forward the work from here. It will be ideal to have this in 0.10.0.0.
> >
> > On Mon, Oct 12, 2015 at 9:29 PM, Jay Kreps  wrote:
> >
> >> I wonder if we need to solve the error problem? I think this KIP gives a
> >> decent workaround.
> >>
> >> Probably we should have included an error in the response header, but we
> >> debated it at the time and decided not to, and now it is pretty hard to add
> >> because the headers aren't versioned (d'oh).
> >>
> >> It seems like any other solution is going to be kind of a hack, right?
> >> Sending malformed responses back seems like not a clean solution...
> >>
> >> (Not sure if I was pro- having a top-level error or not, but in any case
> >> the rationale for the decision was that so many of the requests were
> >> per-partition or per-topic or whatever and hence fail or succeed at that
> >> level and this makes it hard to know what the right top-level error code
> >> is
> >> and hard for the client to figure out what to do with the top level
> error
> >> if some of the partitions succeed but there is a top-level error).
> >>
> >> I think this new API actually gives a way to handle this
> >> gracefully on the client side by just having clients that want to be
> >> graceful check for support for their version. Clients that do that will
> >> have a graceful message.
> >>
> >> At some point if we're ever reworking the headers we should really
> >> consider
> >> (a) versioning them and (b) adding a top-level error code in the
> response.
> >> But given this would be a big breaking change and this is really just to
> >> give a nicer error message seems like it probably isn't worth it to try
> to
> >> do something now.
> >>
> >> -Jay
> >>
> >>
> >>
> >> On Mon, Oct 12, 2015 at 8:11 PM, Jiangjie Qin  >
> >> wrote:
> >>
> >> > I am thinking instead of returning an empty response, it would be
> >> better to
> >> > return an explicit UnsupportedVersionException code.
> >> >
> >> > Today KafkaApis handles the error in the following way:
> >> > 1. For requests/responses using old Scala classes, KafkaApis uses
> >> > RequestOrResponse.handleError() to return an error response.
> >> > 2. For requests/response using Java classes (only JoinGroupRequest and
> >> > Heartbeat now), KafkaApis calls AbstractRequest.getErrorResponse() to
> >> > return an error response.
> >> >
> >> > In KAFKA-2512, I am returning an UnsupportedVersionException for case
> >> [1]
> >> > when see an unsupported version. This will put the error code per
> topic
> >> or
> >> > partition for most of the requests, but might not work all the time.
>

[jira] [Commented] (KAFKA-3310) fetch requests can trigger repeated NPE when quota is enabled

2016-02-29 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15172950#comment-15172950
 ] 

Jun Rao commented on KAFKA-3310:


[~aauradkar], do you think this is a problem?

> fetch requests can trigger repeated NPE when quota is enabled
> -
>
> Key: KAFKA-3310
> URL: https://issues.apache.org/jira/browse/KAFKA-3310
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: Jun Rao
>
> We saw the following NPE when consumer quota is enabled. NPE is triggered on 
> every fetch request from the client.
> java.lang.NullPointerException
> at 
> kafka.server.ClientQuotaManager.recordAndMaybeThrottle(ClientQuotaManager.scala:122)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$sendResponseCallback$3(KafkaApis.scala:419)
> at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$1.apply(KafkaApis.scala:436)
> at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$1.apply(KafkaApis.scala:436)
> at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:481)
> at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:431)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:69)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> One possible cause of this is the logic of removing inactive sensors. 
> Currently, in ClientQuotaManager, we create two sensors per clientId: a 
> throttleTimeSensor and a quotaSensor. Each sensor expires if it's not 
> actively updated for 1 hour. What can happen is that initially, the quota is 
> not exceeded. So, quotaSensor is being updated actively, but 
> throttleTimeSensor is not. At some point, throttleTimeSensor is removed by 
> the expiring thread. Now, we are in a situation that quotaSensor is 
> registered, but throttleTimeSensor is not. Later on, if the quota is 
> exceeded, we will hit the above NPE when trying to update throttleTimeSensor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: MINOR: DefaultMessageFormatter custom deserial...

2016-02-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/987


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk7 #1074

2016-02-29 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-3192: Add unwindowed aggregations for KStream; and make all

--
[...truncated 1483 lines...]

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testTopicMetadataRequest 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsWrongSetValue PASSED

kafka.KafkaTest > testKafkaSslPasswords PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgs PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheEnd PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsOnly PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheBegging PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.utils.UtilsTest > testAbs PASSED

kafka.utils.UtilsTest > testReplaceSuffix PASSED

kafka.utils.UtilsTest > testCircularIterator PASSED

kafka.utils.UtilsTest > testReadBytes PASSED

kafka.utils.UtilsTest > testCsvList PASSED

kafka.utils.UtilsTest > testReadInt PASSED

kafka.utils.UtilsTest > testCsvMap PASSED

kafka.utils.UtilsTest > testInLock PASSED

kafka.utils.UtilsTest > testSwallow PASSED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask PASSED

kafka.utils.SchedulerTest > testNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testRestart PASSED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler PASSED

kafka.utils.SchedulerTest > testPeriodicTask PASSED

kafka.utils.ByteBoundedBlockingQueueTest > testByteBoundedBlockingQueue PASSED

kafka.utils.timer.TimerTaskListTest > testAll PASSED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask PASSED

kafka.utils.timer.TimerTest > testTaskExpiration PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArg PASSED

kafka.utils.CommandLineUtilsTest > testParseSingleArg PASSED

kafka.utils.CommandLineUtilsTest > testParseArgs PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid PASSED

kafka.utils.IteratorTemplateTest > testIterator PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.ReplicationUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.utils.JsonTest > testJsonEncoding PASSED

kafka.message.MessageCompressionTest > testCompressSize PASSED

kafka.message.MessageCompressionTest > testSimpleCompressDecompress PASSED

kafka.message.MessageWriterTest > testWithNoCompressionAttribute PASSED

kafka.message.MessageWriterTest > testWithCompressionAttribute PASSED

kafka.message.MessageWriterTest > testBufferingOutputStream PASSED

kafka.message.MessageWriterTest > testWithKey PASSED

kafka.message.MessageTest > testChecksum PASSED

kafka.message.MessageTest > testInvalidTimestamp PASSED

kafka.message.MessageTest > testIsHashable PASSED

kafka.message.MessageTest > testInvalidTimestampAndMagicValueCombination PASSED

kafka.message.MessageTest > testExceptionMapping PASSED

kafka.message.MessageTest > testFieldValues PASSED

kafka.message.MessageTest > testInvalidMagicByte PASSED

kafka.message.MessageTest > testEquality PASSED

kafka.message.MessageTest > testMessageFormatConversion PASSED

kafka.message.ByteBufferMessageSetTest > testMessageWithProvidedOffsetSeq PASSED

kafka.message.ByteBufferMessageSetTest > testValidBytes PASSED

kafka.message.ByteBufferMessageSetTest > testValidBytesWithCompression PASSED

kafka.message.ByteBufferMessageSetTest > 
testOffsetAssignmentAfterMessageFormatConversion PASSED

kafka.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.message.ByteBufferMessageSetTest > testAbsoluteOffsetAssignment PASSED

kafka.message.ByteBufferMessageSetTest > testCreateTime PASSED

kafka.message.ByteBufferMessageSetTest > testInvalidCreateTime PASSED

kafka.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.message.ByteBufferMessageSetTest > testLogAppendTime PASSED

kafka.message.ByteBufferMessageSetTest > testWriteTo PASSED

Build failed in Jenkins: kafka-trunk-jdk8 #402

2016-02-29 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: Add AUTO_OFFSET_RESET_CONFIG to StreamsConfig; remove

[wangguoz] MINOR: DefaultMessageFormatter custom deserializer fixes

--
[...truncated 129 lines...]

^
:298:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
if (offsetAndMetadata.commitTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

  ^
:307:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.uncleanLeaderElectionRate
^
:308:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.leaderElectionTimer
^
:246:
 method readLine in class DeprecatedConsole is deprecated: Use the method in 
scala.io.StdIn
Console.readLine().equalsIgnoreCase("y")
^
:371:
 method readLine in class DeprecatedConsole is deprecated: Use the method in 
scala.io.StdIn
if (!Console.readLine().equalsIgnoreCase("y")) {
 ^
:391:
 constructor UpdateMetadataRequest in class UpdateMetadataRequest is 
deprecated: see corresponding Javadoc for more information.
new UpdateMetadataRequest(controllerId, controllerEpoch, 
liveBrokers.asJava, partitionStates.asJava)
^
:129:
 method readFromReadableChannel in class NetworkReceive is deprecated: see 
corresponding Javadoc for more information.
  response.readFromReadableChannel(channel)
   ^
there were 15 feature warnings; re-run with -feature for details
12 warnings found
warning: [options] bootstrap class path not set in conjunction with -source 1.7
1 warning
:kafka-trunk-jdk8:core:processResources UP-TO-DATE
:kafka-trunk-jdk8:core:classes
:kafka-trunk-jdk8:clients:compileTestJava UP-TO-DATE
:kafka-trunk-jdk8:clients:processTestResources UP-TO-DATE
:kafka-trunk-jdk8:clients:testClasses UP-TO-DATE
:kafka-trunk-jdk8:core:copyDependantLibs
:kafka-trunk-jdk8:core:copyDependantTestLibs
:kafka-trunk-jdk8:core:jar
:clients:compileJava UP-TO-DATE
:clients:processResources UP-TO-DATE
:clients:classes UP-TO-DATE
:clients:determineCommitId UP-TO-DATE
:clients:createVersionFile
:clients:jar UP-TO-DATE
:core:compileJava UP-TO-DATE
:core:compileScala
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0

:79:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
:401:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated

Re: [VOTE] KIP-33 - Add a time based log index to Kafka

2016-02-29 Thread Jun Rao
Hi, Becket,

I thought that your proposal to build time-based index just based off
index.interval.bytes
is reasonable. Is there a particular need to also add
time.index.interval.bytes?

Compute the pre-allocated index file size based on log segment file size
can be useful. However, the tricky thing is that log segment size can be
changed dynamically. Also, for mmap files, they don't use heap space, just
virtual memory, which will be paged in on demand. So, I am not sure if
memory space is a big concern there. The simplest thing is probably to
change the default index size to 2MB to match the default log segment size.

A couple of other things to think through.

1. Currently, LogSegment supports truncating to an offset. How do we do
that on a time-based index?

2. Since it's possible to have an empty time-based index (if all message
timestamps are smaller than the largest timestamp in previous segment), we
need to figure out what timestamp to use for retaining such log segment. In
the extreme case, it can happen that after we delete an old log segment,
all of the new log segments have an empty time-based index, in this case,
how do we avoid losing track of the latest timestamp?

Thanks,

Jun

On Sun, Feb 28, 2016 at 3:26 PM, Becket Qin  wrote:

> Hi Guozhang,
>
> The size of memory mapped index file was also our concern as well. That is
> why we are suggesting minute level time indexing instead of second level.
> There are a few thoughts on the extra memory cost of time index.
>
> 1. Currently all the index files are loaded as memory mapped files. Notice
> that only the index of the active segment is of the default size 10MB.
> Typically the index of the old segments are much smaller than 10MB. So if
> we use the same initial size for time index files, the total amount of
> memory won't be doubled, but the memory cost of active segments will be
> doubled. (However, the 10MB value itself seems problematic, see later
> reasoning).
>
> 2. It is likely that the time index is much smaller than the offset index
> because user would adjust the time index interval ms depending on the topic
> volume. i.e for a low volume topic the time index interval ms will be much
> longer so that we can avoid inserting one time index entry for each message
> in the extreme case.
>
> 3. To further guard against the unnecessary frequent insertion of time
> index entry, we used the index.interval.bytes as a restriction for time
> index entry as well. Such that even for a newly created topic with the
> default time.index.interval.ms we don't need to worry about overly
> aggressive time index entry insertion.
>
> Considering the above. The overall memory cost for time index should be
> much smaller compared with the offset index. However, as you pointed out
> for (1) might still be an issue. I am actually not sure about why we always
> allocate 10 MB for the index file. This itself looks a problem given we
> actually have a pretty good way to know the upper bound of memory taken by
> an offset index.
>
> Theoretically, the offset index file will at most have (log.segment.bytes /
> index.interval.bytes) entries. In our default configuration,
> log.segment.size=1GB, and index.interval.bytes=4K. This means we only need
> (1GB/4K)*8 Bytes = 2MB. Allocating 10 MB is really a big waste of memory.
>
> I suggest we do the following:
> 1. When creating the log index file, we always allocate memory using the
> above calculation.
> 2. If the memory calculated in (1) is greater than segment.index.bytes, we
> use segment.index.bytes instead. Otherwise we simply use the result in (1)
>
> If we do this I believe the memory for index file will probably be smaller
> even if we have the time index added. I will create a separate ticket for
> the index file initial size.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
> On Thu, Feb 25, 2016 at 3:30 PM, Guozhang Wang  wrote:
>
> > Jiangjie,
> >
> > I was originally only thinking about the "time.index.size.max.bytes"
> config
> > in addition to the "offset.index.size.max.bytes". Since the latter's
> > default size is 10MB, and for memory mapped file, we will allocate that
> > much of memory at the start which could be a pressure on RAM if we double
> > it.
> >
> > Guozhang
> >
> > On Wed, Feb 24, 2016 at 4:56 PM, Becket Qin 
> wrote:
> >
> > > Hi Guozhang,
> > >
> > > I thought about this again and it seems we stilll need the
> > > time.index.interval.ms configuration to avoid unnecessary frequent
> time
> > > index insertion.
> > >
> > > I just updated the wiki to add index.interval.bytes as an additional
> > > constraints for time index entry insertion. Another slight change made
> > was
> > > that as long as a message timestamp shows time.index.interval.ms has
> > > passed
> > > since the timestamp of last time index entry, we will insert another
> > > timestmap index entry. Previously we always insert time index at
> > > time.index.interval.ms bucket boundaries.
> > >
> > > Thanks,
> > >
> > > Jiangjie (Becke
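
The index-sizing bound quoted above, written out as a small self-checking
sketch; the default values, the 8-byte entry width, and the
min-with-segment.index.bytes rule are taken from the quoted mail:

    public class OffsetIndexSizeBound {
        public static void main(String[] args) {
            long logSegmentBytes    = 1L << 30;   // log.segment.bytes = 1GB (default)
            long indexIntervalBytes = 4 * 1024L;  // index.interval.bytes = 4KB (default)
            long bytesPerEntry      = 8L;         // one offset index entry, per the quoted mail

            long maxEntries    = logSegmentBytes / indexIntervalBytes; // 262,144 entries
            long maxIndexBytes = maxEntries * bytesPerEntry;           // 2,097,152 bytes = 2MB

            long segmentIndexBytes = 10L * 1024 * 1024;                     // current 10MB default
            long preallocate = Math.min(maxIndexBytes, segmentIndexBytes);  // suggested rule

            System.out.println("upper bound = " + maxIndexBytes
                    + " bytes, preallocate = " + preallocate + " bytes");
        }
    }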

Build failed in Jenkins: kafka-trunk-jdk7 #1075

2016-02-29 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: Add AUTO_OFFSET_RESET_CONFIG to StreamsConfig; remove

[wangguoz] MINOR: DefaultMessageFormatter custom deserializer fixes

--
[...truncated 4994 lines...]
org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSameName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSelfParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithSelfParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSink PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testTopicGroups PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testBuild PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSource PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSourceWithSameName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithSameName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSourceWithSameTopic PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testTopicGroupsByStateStore PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithDuplicates PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithWrongParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkConnectedWithMultipleParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithWrongParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testAddStateStore 
PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkConnectedWithParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithNonExistingProcessor PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterNonPersistentStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testLockStateDirectory PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testGetStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testClose PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testChangeLogOffsets PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterPersistentStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testNoTopic PASSED

org.apache.kafka.streams.processor.internals.MinTimestampTrackerTest > 
testTracking PASSED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > 
testStorePartitions PASSED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > testUpdateKTable 
PASSED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > 
testUpdateNonPersistentStore PASSED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > testUpdate PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testProcessOrder 
PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testPauseResume 
PASSED

org.apache.kafka.streams.processor.internals.PartitionGroupTest > 
testTimeTracking PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testStickiness PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testAssignWithStandby PASSED

org.apache.kafka.streams.processor.internals.assignment.TaskAssignorTest > 
testAssignWithoutStandby PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
testEncodeDecode PASSED

org.apache.kafka.streams.processor.internals.assignment.AssginmentInfoTest > 
testEncodeDecode PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingMultiplexingTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingStatefulTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testDrivingSimpleTopology PASSED

org.apache.kafka.streams.processor.internals.ProcessorTopologyTest > 
testTopologyMetadata PASSED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUnite PASSED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUniteMany 
PASSED

org.apache.kafka.streams.processor.internals.PunctuationQueueTest > 
testPunctuationInterval PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithStandbyReplicas PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithNewTasks PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithStates PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionA

Re: [DISCUSS] KIP-43: Kafka SASL enhancements

2016-02-29 Thread Jun Rao
Rajini,

Thanks for the updates. Just a couple of minor comments.

1. With the default GSSAPI, what's the first packet that the client sends
to the server? Is that completely different from the packet format that we
will use for non-GSSAPI mechanisms?

2. In the server response, it doesn't seem that we need to include the
version since the client knows the version of the request that it sends.

Jun

On Mon, Feb 29, 2016 at 10:14 AM, Rajini Sivaram <
rajinisiva...@googlemail.com> wrote:

> Harsha,
>
> Thank you for the review. I will wait another day to see if there is more
> feedback and then start a voting thread.
>
> Rajini
>
> On Mon, Feb 29, 2016 at 2:51 PM, Harsha  wrote:
>
> > Rajini,
> >   Thanks for the changes to the KIP. It looks good to me. I
> >   think we can move to voting.
> > Thanks,
> > Harsha
> >
> > On Mon, Feb 29, 2016, at 12:43 AM, Rajini Sivaram wrote:
> > > I have added some more detail to the KIP based on the discussion in the
> > > last KIP meeting to simplify support for multiple mechanisms. Have also
> > > changed the property names to reflect this.
> > >
> > > Also updated the PR in
> https://issues.apache.org/jira/browse/KAFKA-3149
> > > to
> > > reflect the KIP.
> > >
> > > Any feedback is appreciated.
> > >
> > >
> > > On Tue, Feb 23, 2016 at 9:36 PM, Rajini Sivaram <
> > > rajinisiva...@googlemail.com> wrote:
> > >
> > > > I have updated the KIP based on the discussion in the KIP meeting
> > today.
> > > >
> > > > Comments and feedback are welcome.
> > > >
> > > > On Wed, Feb 3, 2016 at 7:20 PM, Rajini Sivaram <
> > > > rajinisiva...@googlemail.com> wrote:
> > > >
> > > >> Hi Harsha,
> > > >>
> > > >> Thank you for the review. Can you clarify - I think you are saying
> > that
> > > >> the client should send its mechanism over the wire to the server. Is
> > that
> > > >> correct? The exchange is slightly different in the KIP (the PR
> > matches the
> > > >> KIP) from the one you described to enable interoperability with
> > 0.9.0.0.
> > > >>
> > > >>
> > > >> On Wed, Feb 3, 2016 at 1:56 PM, Harsha  wrote:
> > > >>
> > > >>> Rajini,
> > > >>>I looked at the PR you have. I think its better with
> your
> > > >>>earlier approach rather than extending the protocol.
> > > >>> What I was thinking initially is, Broker has a config option of say
> > > >>> sasl.mechanism = GSSAPI, PLAIN
> > > >>> and the client can have similar config of sasl.mechanism=PLAIN.
> > Client
> > > >>> can send its sasl mechanism before the handshake starts and if the
> > > >>> broker accepts that particular mechanism than it can go ahead with
> > > >>> handshake otherwise return a error saying that the mechanism not
> > > >>> allowed.
> > > >>>
> > > >>> Thanks,
> > > >>> Harsha
> > > >>>
> > > >>> On Wed, Feb 3, 2016, at 04:58 AM, Rajini Sivaram wrote:
> > > >>> > A slightly different approach for supporting different SASL
> > mechanisms
> > > >>> > within a broker is to allow the same "*security protocol*" to be
> > used
> > > >>> on
> > > >>> > different ports with different configuration options. An
> advantage
> > of
> > > >>> > this
> > > >>> > approach is that it extends the configurability of not just SASL,
> > but
> > > >>> any
> > > >>> > protocol. For instance, it would enable the use of SSL with
> mutual
> > > >>> client
> > > >>> > authentication on one port or different certificate chains on
> > another.
> > > >>> > And
> > > >>> > it avoids the need for SASL mechanism negotiation.
> > > >>> >
> > > >>> > Kafka would have the same "*security protocols" *defined as
> today,
> > but
> > > >>> > with
> > > >>> > (a single) configurable SASL mechanism. To have different
> > > >>> configurations
> > > >>> > of
> > > >>> > a protocol within a broker, users can define new protocol names
> > which
> > > >>> are
> > > >>> > configured versions of existing protocols, perhaps using just
> > > >>> > configuration
> > > >>> > entries and no additional code.
> > > >>> >
> > > >>> > For example:
> > > >>> >
> > > >>> > A single mechanism broker would be configured as:
> > > >>> >
> > > >>> > listeners=SASL_SSL://:9092
> > > >>> > sasl.mechanism=GSSAPI
> > > >>> > sasl.kerberos.class.name=kafka
> > > >>> > ...
> > > >>> >
> > > >>> >
> > > >>> > And a multi-mechanism broker would be configured as:
> > > >>> >
> > > >>> > listeners=gssapi://:9092,plain://:9093,custom://:9094
> > > >>> > gssapi.security.protocol=SASL_SSL
> > > >>> > gssapi.sasl.mechanism=GSSAPI
> > > >>> > gssapi.sasl.kerberos.class.name=kafka
> > > >>> > ...
> > > >>> > plain.security.protocol=SASL_SSL
> > > >>> > plain.sasl.mechanism=PLAIN
> > > >>> > ..
> > > >>> > custom.security.protocol=SASL_PLAINTEXT
> > > >>> > custom.sasl.mechanism=CUSTOM
> > > >>> > custom.sasl.callback.handler.class=example.CustomCallbackHandler
> > > >>> >
> > > >>> >
> > > >>> >
> > > >>> > This is still a big change because it affects the currently fixed
> > > >>> > enumeration of security protocol definitions, but o

[GitHub] kafka pull request: KAFKA-3273; MessageFormatter and MessageReader...

2016-02-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/972


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3273) MessageFormatter and MessageReader interfaces should be resilient to changes

2016-02-29 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-3273:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 972
[https://github.com/apache/kafka/pull/972]

> MessageFormatter and MessageReader interfaces should be resilient to changes
> 
>
> Key: KAFKA-3273
> URL: https://issues.apache.org/jira/browse/KAFKA-3273
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.10.0.0
>
>
> They should use `ConsumerRecord` and `ProducerRecord` as parameters and 
> return types respectively in order to avoid breaking users each time a new 
> parameter is added.
> An additional question is whether we need to maintain compatibility with 
> previous releases. [~junrao] suggested that we do not, but [~ewencp] thought 
> we should.
> Note that the KIP-31/32 change has broken compatibility for 
> `MessageFormatter` so we need to do _something_ for the next release.
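
A hedged sketch of the record-based shape the ticket argues for; the exact
signatures here are an assumption for illustration, not necessarily what was
merged in pull request 972:

    import java.io.PrintStream;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.producer.ProducerRecord;

    // Taking whole records keeps the interfaces stable when new record fields
    // (e.g. the timestamps introduced by KIP-31/32) are added later.
    interface MessageFormatter {
        void writeTo(ConsumerRecord<byte[], byte[]> record, PrintStream output);
    }

    interface MessageReader {
        ProducerRecord<byte[], byte[]> readMessage();
    }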



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3273) MessageFormatter and MessageReader interfaces should be resilient to changes

2016-02-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15173135#comment-15173135
 ] 

ASF GitHub Bot commented on KAFKA-3273:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/972


> MessageFormatter and MessageReader interfaces should be resilient to changes
> 
>
> Key: KAFKA-3273
> URL: https://issues.apache.org/jira/browse/KAFKA-3273
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.10.0.0
>
>
> They should use `ConsumerRecord` and `ProducerRecord` as parameters and 
> return types respectively in order to avoid breaking users each time a new 
> parameter is added.
> An additional question is whether we need to maintain compatibility with 
> previous releases. [~junrao] suggested that we do not, but [~ewencp] thought 
> we should.
> Note that the KIP-31/32 change has broken compatibility for 
> `MessageFormatter` so we need to do _something_ for the next release.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] KIP-33 - Add a time based log index to Kafka

2016-02-29 Thread Becket Qin
Hi Jun,

I think index.interval.bytes is used to control the density of the offset
index. The counterpart of index.interval.bytes for time index is
time.index.interval.ms. If we did not change the semantic of log.roll.ms,
log.roll.ms/time.index.interval.ms and
log.segment.bytes/index.interval.bytes are a perfect mapping from bytes to
time. However, because we changed the behavior of log.roll.ms, we need to
guard against a potentially excessively large time index. We can either
reuse index.interval.bytes or introduce time.index.interval.bytes, but I
cannot think of additional usage for time.index.interval.bytes other than
limiting the time index size.

I agree that the memory mapped file is probably not a big issue here and we
can change the default index size to 2MB.

For the two cases you mentioned.
1. Because the message offset in the time index is also monotonically
increasing, truncating should be straightforward. i.e. only keep the
entries that are pointing to the offsets earlier than the truncated to
offsets.

2. The current assumption is that if the time index of a segment is empty
and there are no previous time index entry, we will assume that segment
should be removed - because all the older segment with even larger
timestamp have been removed. So in the case you mentioned, during startup
we will remove all the segments and roll out a new empty segment.

Thanks,

Jiangjie (Becket) Qin
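
As a rough, standalone illustration of the truncation rule in point 1 above
(not the actual LogSegment/TimeIndex code; the entry layout is made up for the
sketch):

    import java.util.ArrayList;
    import java.util.List;

    public class TimeIndexTruncationSketch {
        // entries[i] = {timestamp, offset}, ordered by strictly increasing offset.
        static long[][] truncateTo(long[][] entries, long truncateToOffset) {
            List<long[]> kept = new ArrayList<>();
            for (long[] e : entries) {
                if (e[1] < truncateToOffset) {
                    kept.add(e);
                } else {
                    break; // offsets are monotonically increasing, nothing later can qualify
                }
            }
            return kept.toArray(new long[0][]);
        }

        public static void main(String[] args) {
            long[][] index = {{1000L, 0L}, {2000L, 50L}, {3000L, 120L}};
            long[][] truncated = truncateTo(index, 100L); // keeps the first two entries
            System.out.println("entries kept: " + truncated.length);
        }
    }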



On Mon, Feb 29, 2016 at 6:09 PM, Jun Rao  wrote:

> Hi, Becket,
>
> I thought that your proposal to build time-based index just based off
> index.interval.bytes
> is reasonable. Is there a particular need to also add
> time.index.interval.bytes?
>
> Compute the pre-allocated index file size based on log segment file size
> can be useful. However, the tricky thing is that log segment size can be
> changed dynamically. Also, for mmap files, they don't use heap space, just
> virtual memory, which will be paged in on demand. So, I am not sure if
> memory space is a big concern there. The simplest thing is probably to
> change the default index size to 2MB to match the default log segment size.
>
> A couple of other things to think through.
>
> 1. Currently, LogSegment supports truncating to an offset. How do we do
> that on a time-based index?
>
> 2. Since it's possible to have an empty time-based index (if all message
> timestamps are smaller than the largest timestamp in previous segment), we
> need to figure out what timestamp to use for retaining such log segment. In
> the extreme case, it can happen that after we delete an old log segment,
> all of the new log segments have an empty time-based index, in this case,
> how do we avoid losing track of the latest timestamp?
>
> Thanks,
>
> Jun
>
> On Sun, Feb 28, 2016 at 3:26 PM, Becket Qin  wrote:
>
> > Hi Guozhang,
> >
> > The size of the memory mapped index files was our concern as well. That is
> > why we are suggesting minute-level time indexing instead of second level.
> > There are a few thoughts on the extra memory cost of the time index.
> >
> > 1. Currently all the index files are loaded as memory mapped files. Notice
> > that only the index of the active segment is of the default size 10MB.
> > Typically the indexes of the old segments are much smaller than 10MB. So
> > if we use the same initial size for time index files, the total amount of
> > memory won't be doubled, but the memory cost of the active segments will
> > be doubled. (However, the 10MB value itself seems problematic; see the
> > reasoning later.)
> >
> > 2. It is likely that the time index is much smaller than the offset index,
> > because users would adjust time.index.interval.ms depending on the topic
> > volume, i.e. for a low-volume topic time.index.interval.ms will be much
> > longer, so that we can avoid inserting one time index entry for each
> > message in the extreme case.
> >
> > 3. To further guard against unnecessarily frequent insertion of time index
> > entries, we use index.interval.bytes as a restriction for time index
> > entries as well, so that even for a newly created topic with the default
> > time.index.interval.ms we don't need to worry about overly aggressive time
> > index entry insertion.
> >
> > Considering the above, the overall memory cost of the time index should be
> > much smaller compared with the offset index. However, as you pointed out,
> > (1) might still be an issue. I am actually not sure why we always allocate
> > 10 MB for the index file. This itself looks like a problem, given that we
> > actually have a pretty good way to know the upper bound of the memory
> > taken by an offset index.
> >
> > Theoretically, the offset index file will at most have
> > (log.segment.bytes / index.interval.bytes) entries. In our default
> > configuration, log.segment.bytes=1GB and index.interval.bytes=4K. This
> > means we only need (1GB/4K)*8 bytes = 2MB. Allocating 10 MB is really a
> > big waste of memory.
> >
> > I suggest we do the following:
> > 1. When creating

[jira] [Commented] (KAFKA-3310) fetch requests can trigger repeated NPE when quota is enabled

2016-02-29 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15173204#comment-15173204
 ] 

Aditya Auradkar commented on KAFKA-3310:


[~junrao] - Let me investigate this. If this is a problem, it should be easy
to fix by recording 0 on the throttle time sensor every time.
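
For illustration only, a minimal Scala sketch of that idea, assuming
hypothetical quotaSensor and throttleTimeSensor handles (the real
ClientQuotaManager wiring differs):

import org.apache.kafka.common.metrics.Sensor

// Record on both sensors for every request, so that neither sensor can sit
// idle for an hour and get removed by the expiration thread while the other
// one stays registered.
def recordRequest(quotaSensor: Sensor, throttleTimeSensor: Sensor, value: Double): Unit = {
  quotaSensor.record(value)
  // Record a zero throttle time even when the request is not throttled,
  // keeping the two sensors' last-update times in sync.
  throttleTimeSensor.record(0.0)
}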

> fetch requests can trigger repeated NPE when quota is enabled
> -
>
> Key: KAFKA-3310
> URL: https://issues.apache.org/jira/browse/KAFKA-3310
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: Jun Rao
>
> We saw the following NPE when consumer quota is enabled. NPE is triggered on 
> every fetch request from the client.
> java.lang.NullPointerException
> at 
> kafka.server.ClientQuotaManager.recordAndMaybeThrottle(ClientQuotaManager.scala:122)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$sendResponseCallback$3(KafkaApis.scala:419)
> at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$1.apply(KafkaApis.scala:436)
> at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$1.apply(KafkaApis.scala:436)
> at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:481)
> at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:431)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:69)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> One possible cause of this is the logic of removing inactive sensors. 
> Currently, in ClientQuotaManager, we create two sensors per clientId: a 
> throttleTimeSensor and a quotaSensor. Each sensor expires if it's not 
> actively updated for 1 hour. What can happen is that initially, the quota is 
> not exceeded. So, quotaSensor is being updated actively, but 
> throttleTimeSensor is not. At some point, throttleTimeSensor is removed by 
> the expiring thread. Now, we are in a situation that quotaSensor is 
> registered, but throttleTimeSensor is not. Later on, if the quota is 
> exceeded, we will hit the above NPE when trying to update throttleTimeSensor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #403

2016-02-29 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-3273; MessageFormatter and MessageReader interfaces should be

--
[...truncated 5651 lines...]

org.apache.kafka.connect.runtime.WorkerTaskTest > stopBeforeStarting PASSED

org.apache.kafka.connect.runtime.WorkerTaskTest > standardStartup PASSED

org.apache.kafka.connect.runtime.WorkerTaskTest > cancelBeforeStopping PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testDeleteConnector PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testDeleteConnectorNotLeader PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testDeleteConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnector PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorConfig PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorTaskConfigs PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorTaskConfigsConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorTaskConfigs PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorTaskConfigsConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testListConnectors PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testListConnectorsNotLeader PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testListConnectorsNotSynced PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnector PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorNotLeader PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorExists PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorConfigConnectorNotFound PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testPollsInBackground PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommit PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitTaskFlushFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitTaskSuccessAndFlushFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitConsumerFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommitTimeout 
PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testAssignmentPauseResume PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testRewind PASSED

org.apache.kafka.connect.runtime.WorkerTest > testStartAndStopConnector PASSED

org.apache.kafka.connect.runtime.WorkerTest > testAddConnectorByAlias PASSED

org.apache.kafka.connect.runtime.WorkerTest > testAddConnectorByShortAlias 
PASSED

org.apache.kafka.connect.runtime.WorkerTest > testStopInvalidConnector PASSED

org.apache.kafka.connect.runtime.WorkerTest > testReconfigureConnectorTasks 
PASSED

org.apache.kafka.connect.runtime.WorkerTest > testAddRemoveTask PASSED

org.apache.kafka.connect.runtime.WorkerTest > testStopInvalidTask PASSED

org.apache.kafka.connect.runtime.WorkerTest > testCleanupTasksOnStop PASSED

org.apache.kafka.connect.runtime.AbstractHerderTest > connectorStatus PASSED

org.apache.kafka.connect.runtime.AbstractHerderTest > taskStatus PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testPollsInBackground 
FAILED
java.lang.AssertionError: 
  Expectation failure on verify:
Listener.onStartup(job-0): expected: 1, actual: 1
Listener.onShutdown(job-0): expected: 1, actual: 1
at org.easymock.internal.MocksControl.verify(MocksControl.java:225)
at 
org.powermock.api.easymock.internal.invocationcontrol.EasyMockMethodInvocationControl.verify(EasyMockMethodInvocationControl.java:132)
at org.powermock.api.easymock.PowerMock.verify(PowerMock.java:1466)
at org.powermock.api.easymock.PowerMock.verifyAll(PowerMock.java:1405)
at 
org.apache.kafka.connect.runtime.WorkerSourceTaskTest.testPollsInBackground(WorkerSourceTaskTest.java:151)

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testCommit PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testSlowTaskStart PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testFailureInPoll PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testCommitFailure PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > 
t

[jira] [Commented] (KAFKA-3310) fetch requests can trigger repeated NPE when quota is enabled

2016-02-29 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15173223#comment-15173223
 ] 

Aditya Auradkar commented on KAFKA-3310:


[~junrao] - Just making sure: you observe that the response is still delayed,
right? The throttle time sensor is the last thing that is recorded, and the
element has been added to the delay queue, so the fetchResponseCallback should
fire after the throttle time.

> fetch requests can trigger repeated NPE when quota is enabled
> -
>
> Key: KAFKA-3310
> URL: https://issues.apache.org/jira/browse/KAFKA-3310
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: Jun Rao
>
> We saw the following NPE when consumer quota is enabled. NPE is triggered on 
> every fetch request from the client.
> java.lang.NullPointerException
> at 
> kafka.server.ClientQuotaManager.recordAndMaybeThrottle(ClientQuotaManager.scala:122)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$sendResponseCallback$3(KafkaApis.scala:419)
> at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$1.apply(KafkaApis.scala:436)
> at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$1.apply(KafkaApis.scala:436)
> at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:481)
> at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:431)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:69)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> One possible cause of this is the logic of removing inactive sensors. 
> Currently, in ClientQuotaManager, we create two sensors per clientId: a 
> throttleTimeSensor and a quotaSensor. Each sensor expires if it's not 
> actively updated for 1 hour. What can happen is that initially, the quota is 
> not exceeded. So, quotaSensor is being updated actively, but 
> throttleTimeSensor is not. At some point, throttleTimeSensor is removed by 
> the expiring thread. Now, we are in a situation that quotaSensor is 
> registered, but throttleTimeSensor is not. Later on, if the quota is 
> exceeded, we will hit the above NPE when trying to update throttleTimeSensor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1076

2016-02-29 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-3273; MessageFormatter and MessageReader interfaces should be

--
[...truncated 5623 lines...]

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testStartStop 
PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > 
testReloadOnStart PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testMissingTopic 
PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testSetFailure 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testWriteFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullValueFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullKeyFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testNoOffsetsToFlush 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testFlushFailureReplacesOffsets PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testAlreadyFlushing 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelBeforeAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelAfterAwaitFlush PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testSaveRestore 
PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testStartStop PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testPutTaskConfigs 
PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testRestore PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > 
testPutTaskConfigsDoesNotResolveAllInconsistencies PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
putAndGetConnectorStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
putAndGetTaskStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteTaskStatus PASSED

org.apache.kafka.connect.storage.MemoryStatusBackingStoreTest > 
deleteConnectorStatus PASSED

org.apache.kafka.connect.runtime.WorkerTest > testStopInvalidTask PASSED

org.apache.kafka.connect.runtime.WorkerTest > testCleanupTasksOnStop PASSED

org.apache.kafka.connect.runtime.WorkerTest > testStartAndStopConnector PASSED

org.apache.kafka.connect.runtime.WorkerTest > testAddConnectorByAlias PASSED

org.apache.kafka.connect.runtime.WorkerTest > testAddConnectorByShortAlias 
PASSED

org.apache.kafka.connect.runtime.WorkerTest > testStopInvalidConnector PASSED

org.apache.kafka.connect.runtime.WorkerTest > testReconfigureConnectorTasks 
PASSED

org.apache.kafka.connect.runtime.WorkerTest > testAddRemoveTask PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > testPollRedelivery PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > 
testErrorInRebalancePartitionRevocation PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > 
testErrorInRebalancePartitionAssignment PASSED

org.apache.kafka.connect.runtime.WorkerTaskTest > stopBeforeStarting PASSED

org.apache.kafka.connect.runtime.WorkerTaskTest > standardStartup PASSED

org.apache.kafka.connect.runtime.WorkerTaskTest > cancelBeforeStopping PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testPollsInBackground 
FAILED
java.lang.AssertionError: 
  Expectation failure on verify:
Listener.onStartup(job-0): expected: 1, actual: 1
Listener.onShutdown(job-0): expected: 1, actual: 0
at org.easymock.internal.MocksControl.verify(MocksControl.java:225)
at 
org.powermock.api.easymock.internal.invocationcontrol.EasyMockMethodInvocationControl.verify(EasyMockMethodInvocationControl.java:132)
at org.powermock.api.easymock.PowerMock.verify(PowerMock.java:1466)
at org.powermock.api.easymock.PowerMock.verifyAll(PowerMock.java:1405)
at 
org.apache.kafka.connect.runtime.WorkerSourceTaskTest.testPollsInBackground(WorkerSourceTaskTest.java:151)

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testFailureInPoll PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testCommit FAILED
java.lang.AssertionError: 
  Expectation failure on verify:
Listener.onStartup(job-0): expected: 1, actual: 1
Listener.onShutdown(job-0): expected: 1, actual: 0
at org.easymock.internal.MocksControl.verify(MocksControl.java:225)
at 
org.powermock.api.easymock.internal.invocationcontrol.EasyMockMethodInvocationControl.verify(EasyMockMethodInvocationControl.java:132)
at org.powermock.api.easymock.PowerMock.verify(PowerMock.java:1466)
at org.powermock.api.easymock.PowerMock.verifyAl

[jira] [Commented] (KAFKA-3310) fetch requests can trigger repeated NPE when quota is enabled

2016-02-29 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15173295#comment-15173295
 ] 

Jun Rao commented on KAFKA-3310:


[~aauradkar], that depends. In this case, the NPE is triggered directly when
handling the fetch request in KafkaApis. The throttle time sensor is actually
recorded before we add the request to the delay queue. So, we will send an
empty fetch response with an unexpected error. However, the same NPE could be
triggered when we try to complete a fetch request from the fetch purgatory.
In that case, we won't even be able to send a fetch response, so the fetch
request will time out. What's worse is that there could be other fetch
requests (both consumer and follower) in the fetch purgatory on the same key.
Since we hit the unexpected exception while evaluating the completeness of
this particular fetch request, we will skip the checking of other fetch
requests on the same chain and therefore may delay them.

It seems that this problem can show up pretty easily. Just upgrade the broker 
to 0.9.0, start a consumer, wait for more than an hour, then set the consumer 
quota. If the consumer fetch request is now throttled, we will hit the NPE.

Recording 0 on the throttle time sensor probably fixes most of the problem,
but I am not sure if it fixes this completely. Since these two sensors are not
updated at exactly the same time, it seems that it's still possible for the
throttle time sensor to expire before the quota sensor?

> fetch requests can trigger repeated NPE when quota is enabled
> -
>
> Key: KAFKA-3310
> URL: https://issues.apache.org/jira/browse/KAFKA-3310
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: Jun Rao
>
> We saw the following NPE when consumer quota is enabled. NPE is triggered on 
> every fetch request from the client.
> java.lang.NullPointerException
> at 
> kafka.server.ClientQuotaManager.recordAndMaybeThrottle(ClientQuotaManager.scala:122)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$sendResponseCallback$3(KafkaApis.scala:419)
> at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$1.apply(KafkaApis.scala:436)
> at 
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$1.apply(KafkaApis.scala:436)
> at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:481)
> at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:431)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:69)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> One possible cause of this is the logic of removing inactive sensors. 
> Currently, in ClientQuotaManager, we create two sensors per clientId: a 
> throttleTimeSensor and a quotaSensor. Each sensor expires if it's not 
> actively updated for 1 hour. What can happen is that initially, the quota is 
> not exceeded. So, quotaSensor is being updated actively, but 
> throttleTimeSensor is not. At some point, throttleTimeSensor is removed by 
> the expiring thread. Now, we are in a situation that quotaSensor is 
> registered, but throttleTimeSensor is not. Later on, if the quota is 
> exceeded, we will hit the above NPE when trying to update throttleTimeSensor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)