[GitHub] kafka pull request: KAFKA-3085: BrokerChangeListener computes inco...

2016-01-11 Thread dajac
GitHub user dajac opened a pull request:

https://github.com/apache/kafka/pull/752

KAFKA-3085: BrokerChangeListener computes inconsistent live/dead broker list



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dajac/kafka KAFKA-3085

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/752.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #752


commit 706cf9248769bb408ec228c11f7b717770b573d0
Author: David Jacot 
Date:   2016-01-11T08:51:25Z

BrokerChangeListener computes inconsistent live/dead broker list




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3085) BrokerChangeListener computes inconsistent live/dead broker list

2016-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15091625#comment-15091625
 ] 

ASF GitHub Bot commented on KAFKA-3085:
---

GitHub user dajac opened a pull request:

https://github.com/apache/kafka/pull/752

KAFKA-3085: BrokerChangeListener computes inconsistent live/dead broker list



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dajac/kafka KAFKA-3085

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/752.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #752


commit 706cf9248769bb408ec228c11f7b717770b573d0
Author: David Jacot 
Date:   2016-01-11T08:51:25Z

BrokerChangeListener computes inconsistent live/dead broker list




> BrokerChangeListener computes inconsistent live/dead broker list
> 
>
> Key: KAFKA-3085
> URL: https://issues.apache.org/jira/browse/KAFKA-3085
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: David Jacot
>
> On a broker change ZK event, BrokerChangeListener gets the current broker 
> list from ZK. It then computes a new broker list, a dead broker list, and a 
> live broker list with more detailed broker info. The new and live broker lists 
> are computed by reading the value associated with each of the current brokers 
> twice. If a broker is de-registered in between, these two lists will not be 
> consistent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
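
A note on the race described above: one way to avoid it is to read each broker's
znode only once and derive both the broker id list and the detailed broker list
from that single snapshot. The following is a purely illustrative Java sketch of
that idea (the real controller code is Scala, and all names here are hypothetical):

    import java.util.*;

    final class BrokerSnapshotExample {
        // Read each broker's data exactly once; brokers that disappeared
        // between listing the ids and reading the data are simply skipped,
        // so the id set and the detailed info always agree.
        static Map<Integer, String> snapshotBrokers(Set<Integer> currentIds,
                                                    Map<Integer, String> zkData) {
            Map<Integer, String> snapshot = new HashMap<>();
            for (Integer id : currentIds) {
                String info = zkData.get(id);   // stand-in for a single ZK read
                if (info != null)
                    snapshot.put(id, info);
            }
            return snapshot;
        }

        public static void main(String[] args) {
            Map<Integer, String> zk = new HashMap<>();
            zk.put(1, "broker-1");
            zk.put(2, "broker-2");
            // Broker 3 was listed but de-registered before its data was read.
            Map<Integer, String> live = snapshotBrokers(new HashSet<>(Arrays.asList(1, 2, 3)), zk);
            System.out.println(live.keySet() + " " + live.values());
        }
    }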


[jira] [Updated] (KAFKA-3085) BrokerChangeListener computes inconsistent live/dead broker list

2016-01-11 Thread David Jacot (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot updated KAFKA-3085:
---
Status: Patch Available  (was: Open)

> BrokerChangeListener computes inconsistent live/dead broker list
> 
>
> Key: KAFKA-3085
> URL: https://issues.apache.org/jira/browse/KAFKA-3085
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: David Jacot
>
> On a broker change ZK event, BrokerChangeListener gets the current broker 
> list from ZK. It then computes a new broker list, a dead broker list, and a 
> live broker list with more detailed broker info. The new and live broker lists 
> are computed by reading the value associated with each of the current brokers 
> twice. If a broker is de-registered in between, these two lists will not be 
> consistent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: MINOR: Fix a type of variable

2016-01-11 Thread oyld
GitHub user oyld opened a pull request:

https://github.com/apache/kafka/pull/753

MINOR: Fix a type of variable

requestTimeoutMs should be int.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/oyld/kafka type_improvement

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/753.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #753


commit e9f7d1a58fa2127f0ae30432fbc0d764b856ad3d
Author: oyld 
Date:   2016-01-11T09:21:43Z

MINOR: Fix a type of variable




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-3087) Fix documentation for retention.ms property

2016-01-11 Thread Raju Bairishetti (JIRA)
Raju Bairishetti created KAFKA-3087:
---

 Summary: Fix documentation for retention.ms property
 Key: KAFKA-3087
 URL: https://issues.apache.org/jira/browse/KAFKA-3087
 Project: Kafka
  Issue Type: Bug
  Components: log
Reporter: Raju Bairishetti
Assignee: Jay Kreps
Priority: Critical


Log retention settings can be set on the broker, and some properties can be 
overridden at the topic level. 
|Property |Default|Server Default property| Description|
|retention.ms|7 days|log.retention.minutes|This configuration controls the 
maximum time we will retain a log before we will discard old log segments to 
free up space if we are using the "delete" retention policy. This represents an 
SLA on how soon consumers must read their data.|

But retention.ms is in milliseconds, not in minutes. So the corresponding property 
should be *log.retention.ms* instead of *log.retention.minutes*.

It would be better to state whether the time is in millis/minutes/hours on the 
documentation page and in the code as well (right now, the code just says *age*; 
we should specify the time granularity).




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3087) Fix documentation for retention.ms property and update documentation for LogConfig.scala class

2016-01-11 Thread Raju Bairishetti (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raju Bairishetti updated KAFKA-3087:

Summary: Fix documentation for retention.ms property and update 
documentation for LogConfig.scala class  (was: Fix documentation for 
retention.ms property)

> Fix documentation for retention.ms property and update documentation for 
> LogConfig.scala class
> --
>
> Key: KAFKA-3087
> URL: https://issues.apache.org/jira/browse/KAFKA-3087
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Reporter: Raju Bairishetti
>Assignee: Jay Kreps
>Priority: Critical
>  Labels: documentation
>
> Log retention settings can be set on the broker, and some properties can be 
> overridden at the topic level. 
> |Property |Default|Server Default property| Description|
> |retention.ms|7 days|log.retention.minutes|This configuration controls the 
> maximum time we will retain a log before we will discard old log segments to 
> free up space if we are using the "delete" retention policy. This represents 
> an SLA on how soon consumers must read their data.|
> But retention.ms is in milliseconds, not in minutes. So the corresponding 
> property should be *log.retention.ms* instead of *log.retention.minutes*.
> It would be better to state whether the time is in millis/minutes/hours on the 
> documentation page and in the code as well (right now, the code just says 
> *age*; we should specify the time granularity).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3087) Fix documentation for retention.ms property and update documentation for LogConfig.scala class

2016-01-11 Thread Raju Bairishetti (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15091671#comment-15091671
 ] 

Raju Bairishetti commented on KAFKA-3087:
-

Could anyone please add me as a contributor?

> Fix documentation for retention.ms property and update documentation for 
> LogConfig.scala class
> --
>
> Key: KAFKA-3087
> URL: https://issues.apache.org/jira/browse/KAFKA-3087
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Reporter: Raju Bairishetti
>Assignee: Jay Kreps
>Priority: Critical
>  Labels: documentation
>
> Log retention settings can be set on the broker, and some properties can be 
> overridden at the topic level. 
> |Property |Default|Server Default property| Description|
> |retention.ms|7 days|log.retention.minutes|This configuration controls the 
> maximum time we will retain a log before we will discard old log segments to 
> free up space if we are using the "delete" retention policy. This represents 
> an SLA on how soon consumers must read their data.|
> But retention.ms is in milliseconds, not in minutes. So the corresponding 
> property should be *log.retention.ms* instead of *log.retention.minutes*.
> It would be better to state whether the time is in millis/minutes/hours on the 
> documentation page and in the code as well (right now, the code just says 
> *age*; we should specify the time granularity).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3087) Fix documentation for retention.ms property and update documentation for LogConfig.scala class

2016-01-11 Thread Raju Bairishetti (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raju Bairishetti updated KAFKA-3087:

Description: 
Log retention settings can be set on the broker, and some properties can be 
overridden at the topic level. 
|Property |Default|Server Default property| Description|
|retention.ms|7 days|log.retention.minutes|This configuration controls the 
maximum time we will retain a log before we will discard old log segments to 
free up space if we are using the "delete" retention policy. This represents an 
SLA on how soon consumers must read their data.|

But retention.ms is in milliseconds, not in minutes. So the corresponding *Server 
Default property* should be *log.retention.ms* instead of 
*log.retention.minutes*.

It would be better to state whether the time is in millis/minutes/hours on the 
documentation page and in the code as well (right now, the code just says *age*; 
we should specify the time granularity).


  was:
Log retention settings can be set on the broker, and some properties can be 
overridden at the topic level. 
|Property |Default|Server Default property| Description|
|retention.ms|7 days|log.retention.minutes|This configuration controls the 
maximum time we will retain a log before we will discard old log segments to 
free up space if we are using the "delete" retention policy. This represents an 
SLA on how soon consumers must read their data.|

But retention.ms is in milliseconds, not in minutes. So the corresponding property 
should be *log.retention.ms* instead of *log.retention.minutes*.

It would be better to state whether the time is in millis/minutes/hours on the 
documentation page and in the code as well (right now, the code just says *age*; 
we should specify the time granularity).



> Fix documentation for retention.ms property and update documentation for 
> LogConfig.scala class
> --
>
> Key: KAFKA-3087
> URL: https://issues.apache.org/jira/browse/KAFKA-3087
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Reporter: Raju Bairishetti
>Assignee: Jay Kreps
>Priority: Critical
>  Labels: documentation
>
> Log retention settings can be set on the broker, and some properties can be 
> overridden at the topic level. 
> |Property |Default|Server Default property| Description|
> |retention.ms|7 days|log.retention.minutes|This configuration controls the 
> maximum time we will retain a log before we will discard old log segments to 
> free up space if we are using the "delete" retention policy. This represents 
> an SLA on how soon consumers must read their data.|
> But retention.ms is in milliseconds, not in minutes. So the corresponding *Server 
> Default property* should be *log.retention.ms* instead of 
> *log.retention.minutes*.
> It would be better to state whether the time is in millis/minutes/hours on the 
> documentation page and in the code as well (right now, the code just says 
> *age*; we should specify the time granularity).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3054) Connect Herder fail forever if sent a wrong connector config or task config

2016-01-11 Thread jin xing (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15091994#comment-15091994
 ] 

jin xing commented on KAFKA-3054:
-

[~ewencp]
Thanks for the comment :)
Currently the DistributedHerder only catches ConfigException, so any other 
exceptions thrown during connector or task startup will kill the 
DistributedHerder and Worker.
If the cluster has only one DistributedHerder, restarting will keep failing forever.
It makes sense to let the herder swallow all exceptions thrown by a connector or 
task while handling the connector/task life cycle, so that the Herder and 
Worker can keep running.
What do you think?


> Connect Herder fail forever if sent a wrong connector config or task config
> ---
>
> Key: KAFKA-3054
> URL: https://issues.apache.org/jira/browse/KAFKA-3054
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Affects Versions: 0.9.0.0
>Reporter: jin xing
>Assignee: jin xing
>
> The Connect Herder throws ConnectException and shuts down if it is sent a wrong 
> config; restarting the herder will keep failing with the wrong config. It makes 
> sense for the herder to stay available when starting a connector or task fails. 
> After receiving a delete connector request, the herder can delete the wrong 
> config from "config storage".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3019) Add an exceptionName method to Errors

2016-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15092037#comment-15092037
 ] 

ASF GitHub Bot commented on KAFKA-3019:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/754

KAFKA-3019: Add an exceptionName method to Errors



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka exception-name

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/754.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #754


commit a302fcd0592dd8bd306c95471346beeea92c23ca
Author: Grant Henke 
Date:   2016-01-11T14:53:53Z

KAFKA-3019: Add an exceptionName method to Errors




> Add an exceptionName method to Errors
> -
>
> Key: KAFKA-3019
> URL: https://issues.apache.org/jira/browse/KAFKA-3019
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> The Errors class is often used to get and print the name of an exception 
> related to an Error. Adding an exceptionName method and updating all usages 
> would help provide clearer and less error-prone code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
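
For illustration only (this is not the actual patch), an exceptionName helper on
an Errors-style enum could look roughly like this, letting callers skip the
null check and getClass().getName() boilerplate:

    // Hypothetical stand-in for org.apache.kafka.common.protocol.Errors.
    enum ErrorsExample {
        NONE(null),
        CORRUPT_MESSAGE(new RuntimeException("message failed its CRC checksum"));

        private final Exception exception;

        ErrorsExample(Exception exception) {
            this.exception = exception;
        }

        public Exception exception() {
            return exception;
        }

        // Returns the exception class name, or null when there is no exception (NONE).
        public String exceptionName() {
            return exception == null ? null : exception.getClass().getName();
        }
    }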


[GitHub] kafka pull request: KAFKA-3019: Add an exceptionName method to Err...

2016-01-11 Thread granthenke
GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/754

KAFKA-3019: Add an exceptionName method to Errors



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka exception-name

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/754.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #754


commit a302fcd0592dd8bd306c95471346beeea92c23ca
Author: Grant Henke 
Date:   2016-01-11T14:53:53Z

KAFKA-3019: Add an exceptionName method to Errors




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3019) Add an exceptionName method to Errors

2016-01-11 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-3019:
---
Status: Patch Available  (was: Open)

> Add an exceptionName method to Errors
> -
>
> Key: KAFKA-3019
> URL: https://issues.apache.org/jira/browse/KAFKA-3019
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> The Errors class is often used to get and print the name of an exception 
> related to an Error. Adding an exceptionName method and updating all usages 
> would help provide clearer and less error-prone code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Automated PR builds

2016-01-11 Thread Grant Henke
It looks like the automated builds have stopped triggering. Any ideas what
could be the cause?

Thanks,
Grant
-- 
Grant Henke
Software Engineer | Cloudera
gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke


Re: Automated PR builds

2016-01-11 Thread Ismael Juma
I filed https://issues.apache.org/jira/browse/INFRA-11065 about an hour ago.

Ismael

On Mon, Jan 11, 2016 at 3:08 PM, Grant Henke  wrote:

> It looks like the automated builds have stopped triggering. Any ideas what
> could be the cause?
>
> Thanks,
> Grant
> --
> Grant Henke
> Software Engineer | Cloudera
> gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
>


[jira] [Resolved] (KAFKA-2685) "alter topic" on non-existent topic exits without error

2016-01-11 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke resolved KAFKA-2685.

Resolution: Fixed
  Assignee: Grant Henke  (was: Edward Ribeiro)

> "alter topic" on non-existent topic exits without error
> ---
>
> Key: KAFKA-2685
> URL: https://issues.apache.org/jira/browse/KAFKA-2685
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Grant Henke
>
> When running:
> kafka-topics --zookeeper localhost:2181 --alter --topic test --config 
> unclean.leader.election.enable=false
> and topic "test" does not exist, the command simply returns with no error 
> message.
> We expect to see an error when trying to modify non-existent topics, so users 
> will have a chance to catch and correct typos.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


ClassLoading in OSGi environment

2016-01-11 Thread Ramon Gordillo
Hi.

I have tried using 0.9.0.0, building an OSGi bundle and exporting the
packages. However, when creating a Producer, I get an exception:

Caused by: org.apache.kafka.common.config.ConfigException: Invalid value
org.apache.kafka.clients.producer.internals.DefaultPartitioner for
configuration partitioner.class: Class
org.apache.kafka.clients.producer.internals.DefaultPartitioner could not be
found.

at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:255)
~[kafka-clients-0.9.0.0.jar:na]
at org.apache.kafka.common.config.ConfigDef.define(ConfigDef.java:78)
~[kafka-clients-0.9.0.0.jar:na]
at org.apache.kafka.common.config.ConfigDef.define(ConfigDef.java:94)
~[kafka-clients-0.9.0.0.jar:na]
at org.apache.kafka.clients.producer.ProducerConfig.<init>(
ProducerConfig.java:206) ~[kafka-clients-0.9.0.0.jar:na]

That is because the static ProducerConfig initializer sets the class name
and ConfigDef does a Class.forName, which does not work well in OSGi
environments. But there is another way to set those "class" parameters:
using the class directly. So in my OSGi environment, changing
ProducerConfig from:

    .define(PARTITIONER_CLASS_CONFIG,
            Type.CLASS,
            DefaultPartitioner.class.getName(),
            Importance.MEDIUM,
            PARTITIONER_CLASS_DOC)

to:

    .define(PARTITIONER_CLASS_CONFIG,
            Type.CLASS,
            DefaultPartitioner.class,
            Importance.MEDIUM,
            PARTITIONER_CLASS_DOC)

works fine in OSGi too.

What do you think about this?

Thanks in advance.


Re: ClassLoading in OSGi environment

2016-01-11 Thread Rajini Sivaram
There are multiple places in Kafka where the context class loader or
Class.forName() is used to load classes. Perhaps it would be better to use
a common utility for dynamic classloading everywhere, with an option to use
the right classloader.loadClass() so that it works with OSGi?

Regards,

Rajini

On Mon, Jan 11, 2016 at 1:49 PM, Ramon Gordillo 
wrote:

> Hi.
>
> I have tried using 0.9.0.0, building an OSGi bundle and exporting the
> packages. However, when creating a Producer, I get an exception:
>
> Caused by: org.apache.kafka.common.config.ConfigException: Invalid value
> org.apache.kafka.clients.producer.internals.DefaultPartitioner for
> configuration partitioner.class: Class
> org.apache.kafka.clients.producer.internals.DefaultPartitioner could not be
> found.
>
> at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:255)
> ~[kafka-clients-0.9.0.0.jar:na]
> at org.apache.kafka.common.config.ConfigDef.define(ConfigDef.java:78)
> ~[kafka-clients-0.9.0.0.jar:na]
> at org.apache.kafka.common.config.ConfigDef.define(ConfigDef.java:94)
> ~[kafka-clients-0.9.0.0.jar:na]
> at org.apache.kafka.clients.producer.ProducerConfig.<init>(
> ProducerConfig.java:206) ~[kafka-clients-0.9.0.0.jar:na]
>
> That is because the static ProducerConfig initializer sets the class name
> and ConfigDef does a Class.forName, which does not work well in OSGi
> environments. But there is another way to set those "class" parameters:
> using the class directly. So in my OSGi environment, changing
> ProducerConfig from:
>
>     .define(PARTITIONER_CLASS_CONFIG,
>             Type.CLASS,
>             DefaultPartitioner.class.getName(),
>             Importance.MEDIUM,
>             PARTITIONER_CLASS_DOC)
>
> to:
>
>     .define(PARTITIONER_CLASS_CONFIG,
>             Type.CLASS,
>             DefaultPartitioner.class,
>             Importance.MEDIUM,
>             PARTITIONER_CLASS_DOC)
>
> works fine in OSGi too.
>
> What do you think about this?
>
> Thanks in advance.
>
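
The "common utility" idea above could look roughly like the sketch below
(illustrative only, not existing Kafka code): try the thread context
classloader first and fall back to the classloader that loaded the utility
itself, instead of a bare Class.forName().

    final class ClassLoadingUtilExample {
        static Class<?> loadClass(String className) throws ClassNotFoundException {
            ClassLoader contextLoader = Thread.currentThread().getContextClassLoader();
            if (contextLoader != null) {
                try {
                    return Class.forName(className, true, contextLoader);
                } catch (ClassNotFoundException e) {
                    // fall through and try the defining classloader below
                }
            }
            return Class.forName(className, true, ClassLoadingUtilExample.class.getClassLoader());
        }
    }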


[jira] [Updated] (KAFKA-2260) Allow specifying expected offset on produce

2016-01-11 Thread Flavio Junqueira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flavio Junqueira updated KAFKA-2260:


The patch is stale; I'm uploading an updated version. 

> Allow specifying expected offset on produce
> ---
>
> Key: KAFKA-2260
> URL: https://issues.apache.org/jira/browse/KAFKA-2260
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ben Kirwin
>Assignee: Ewen Cheslack-Postava
>Priority: Minor
> Attachments: expected-offsets.patch
>
>
> I'd like to propose a change that adds a simple CAS-like mechanism to the 
> Kafka producer. This update has a small footprint, but enables a bunch of 
> interesting uses in stream processing or as a commit log for process state.
> h4. Proposed Change
> In short:
> - Allow the user to attach a specific offset to each message produced.
> - The server assigns offsets to messages in the usual way. However, if the 
> expected offset doesn't match the actual offset, the server should fail the 
> produce request instead of completing the write.
> This is a form of optimistic concurrency control, like the ubiquitous 
> check-and-set -- but instead of checking the current value of some state, it 
> checks the current offset of the log.
> h4. Motivation
> Much like check-and-set, this feature is only useful when there's very low 
> contention. Happily, when Kafka is used as a commit log or as a 
> stream-processing transport, it's common to have just one producer (or a 
> small number) for a given partition -- and in many of these cases, predicting 
> offsets turns out to be quite useful.
> - We get the same benefits as the 'idempotent producer' proposal: a producer 
> can retry a write indefinitely and be sure that at most one of those attempts 
> will succeed; and if two producers accidentally write to the end of the 
> partition at once, we can be certain that at least one of them will fail.
> - It's possible to 'bulk load' Kafka this way -- you can write a list of n 
> messages consecutively to a partition, even if the list is much larger than 
> the buffer size or the producer has to be restarted.
> - If a process is using Kafka as a commit log -- reading from a partition to 
> bootstrap, then writing any updates to that same partition -- it can be sure 
> that it's seen all of the messages in that partition at the moment it does 
> its first (successful) write.
> There's a bunch of other similar use-cases here, but they all have roughly 
> the same flavour.
> h4. Implementation
> The major advantage of this proposal over other suggested transaction / 
> idempotency mechanisms is its minimality: it gives the 'obvious' meaning to a 
> currently-unused field, adds no new APIs, and requires very little new code 
> or additional work from the server.
> - Produced messages already carry an offset field, which is currently ignored 
> by the server. This field could be used for the 'expected offset', with a 
> sigil value for the current behaviour. (-1 is a natural choice, since it's 
> already used to mean 'next available offset'.)
> - We'd need a new error and error code for a 'CAS failure'.
> - The server assigns offsets to produced messages in 
> {{ByteBufferMessageSet.validateMessagesAndAssignOffsets}}. After this 
> change, this method would assign offsets in the same way -- but if they 
> don't match the offset in the message, we'd return an error instead of 
> completing the write.
> - To avoid breaking existing clients, this behaviour would need to live 
> behind some config flag. (Possibly global, but probably more useful 
> per-topic?)
> I understand all this is unsolicited and possibly strange: happy to answer 
> questions, and if this seems interesting, I'd be glad to flesh this out into 
> a full KIP or patch. (And apologies if this is the wrong venue for this sort 
> of thing!)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
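
The server-side check proposed above is small enough to sketch directly
(illustrative only; the names are hypothetical and this is not the attached patch):

    final class ExpectedOffsetCheckExample {
        static final long ANY_OFFSET = -1L;   // sigil: keep today's behaviour

        // Returns the offset actually assigned, or fails the produce request
        // when the producer's expected offset does not match the log end offset.
        static long assignOrFail(long expectedOffset, long logEndOffset) {
            if (expectedOffset != ANY_OFFSET && expectedOffset != logEndOffset)
                throw new IllegalStateException("CAS failure: expected offset "
                        + expectedOffset + " but log end offset is " + logEndOffset);
            return logEndOffset;
        }
    }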


[jira] [Updated] (KAFKA-2260) Allow specifying expected offset on produce

2016-01-11 Thread Flavio Junqueira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flavio Junqueira updated KAFKA-2260:

Attachment: KAFKA-2260.patch

I understand we are doing patches through GitHub pull requests, but here I'm 
just uploading an updated version of what was already here.

> Allow specifying expected offset on produce
> ---
>
> Key: KAFKA-2260
> URL: https://issues.apache.org/jira/browse/KAFKA-2260
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ben Kirwin
>Assignee: Ewen Cheslack-Postava
>Priority: Minor
> Attachments: KAFKA-2260.patch, expected-offsets.patch
>
>
> I'd like to propose a change that adds a simple CAS-like mechanism to the 
> Kafka producer. This update has a small footprint, but enables a bunch of 
> interesting uses in stream processing or as a commit log for process state.
> h4. Proposed Change
> In short:
> - Allow the user to attach a specific offset to each message produced.
> - The server assigns offsets to messages in the usual way. However, if the 
> expected offset doesn't match the actual offset, the server should fail the 
> produce request instead of completing the write.
> This is a form of optimistic concurrency control, like the ubiquitous 
> check-and-set -- but instead of checking the current value of some state, it 
> checks the current offset of the log.
> h4. Motivation
> Much like check-and-set, this feature is only useful when there's very low 
> contention. Happily, when Kafka is used as a commit log or as a 
> stream-processing transport, it's common to have just one producer (or a 
> small number) for a given partition -- and in many of these cases, predicting 
> offsets turns out to be quite useful.
> - We get the same benefits as the 'idempotent producer' proposal: a producer 
> can retry a write indefinitely and be sure that at most one of those attempts 
> will succeed; and if two producers accidentally write to the end of the 
> partition at once, we can be certain that at least one of them will fail.
> - It's possible to 'bulk load' Kafka this way -- you can write a list of n 
> messages consecutively to a partition, even if the list is much larger than 
> the buffer size or the producer has to be restarted.
> - If a process is using Kafka as a commit log -- reading from a partition to 
> bootstrap, then writing any updates to that same partition -- it can be sure 
> that it's seen all of the messages in that partition at the moment it does 
> its first (successful) write.
> There's a bunch of other similar use-cases here, but they all have roughly 
> the same flavour.
> h4. Implementation
> The major advantage of this proposal over other suggested transaction / 
> idempotency mechanisms is its minimality: it gives the 'obvious' meaning to a 
> currently-unused field, adds no new APIs, and requires very little new code 
> or additional work from the server.
> - Produced messages already carry an offset field, which is currently ignored 
> by the server. This field could be used for the 'expected offset', with a 
> sigil value for the current behaviour. (-1 is a natural choice, since it's 
> already used to mean 'next available offset'.)
> - We'd need a new error and error code for a 'CAS failure'.
> - The server assigns offsets to produced messages in 
> {{ByteBufferMessageSet.validateMessagesAndAssignOffsets}}. After this 
> change, this method would assign offsets in the same way -- but if they 
> don't match the offset in the message, we'd return an error instead of 
> completing the write.
> - To avoid breaking existing clients, this behaviour would need to live 
> behind some config flag. (Possibly global, but probably more useful 
> per-topic?)
> I understand all this is unsolicited and possibly strange: happy to answer 
> questions, and if this seems interesting, I'd be glad to flesh this out into 
> a full KIP or patch. (And apologies if this is the wrong venue for this sort 
> of thing!)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #278

2016-01-11 Thread Apache Jenkins Server
See 

Changes:

[me] MINOR: speed up connect startup when full connector class name is

[me] MINOR: Add property to configure showing of standard streams in Gradle

[me] MINOR: Security doc fixes

[me] KAFKA-3044: Re-word consumer.poll behaviour

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 7059158c48ec05288c0b45ccac05d62b38f1bf76 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 7059158c48ec05288c0b45ccac05d62b38f1bf76
 > git rev-list ccdf552749135b2c40f5a0afb4aa121115165ed2 # timeout=10
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson7106217985747398177.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 11.965 secs
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson6461167595562478901.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '

[jira] [Commented] (KAFKA-3054) Connect Herder fail forever if sent a wrong connector config or task config

2016-01-11 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15092392#comment-15092392
 ] 

Ewen Cheslack-Postava commented on KAFKA-3054:
--

[~jinxing6...@126.com] We do want to catch them, but probably don't want to 
just swallow them. Although that might be a short-term solution for this 
specific problem. We don't do a good job of tracking connector/task status in 
Connect right now. We'll need to track this information (and also expose it via 
the REST API, and allow control via APIs as suggested in KAFKA-2370). I know 
[~hachikuji] is also working on KAFKA-2886 now, which also faces the same 
problem -- we can sort of half fix the issue before we have support for 
tracking status info.

I'd say a good short-term solution would be to catch other exceptions and at a 
minimum log them at ERROR level. I haven't thought through the types of 
exceptions that might be generated, but it's possible we'll want to treat 
different exceptions somewhat differently (e.g. if they throw a 
ConnectException, the connector may have hit an issue, but is behaving well; if 
they throw anything that we can only classify as Throwable, we probably want to 
treat that as a bug in the connector itself and complain more loudly about it 
in the log). Then you might want to file a follow-up JIRA to make sure we don't 
lose track of that status change when we have support for tracking it.

> Connect Herder fail forever if sent a wrong connector config or task config
> ---
>
> Key: KAFKA-3054
> URL: https://issues.apache.org/jira/browse/KAFKA-3054
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Affects Versions: 0.9.0.0
>Reporter: jin xing
>Assignee: jin xing
>
> The Connect Herder throws ConnectException and shuts down if it is sent a wrong 
> config; restarting the herder will keep failing with the wrong config. It makes 
> sense for the herder to stay available when starting a connector or task fails. 
> After receiving a delete connector request, the herder can delete the wrong 
> config from "config storage".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
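
A rough sketch of the short-term handling described above (illustrative only;
the startConnector Runnable and class names are stand-ins, not real herder code):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    final class StartupErrorHandlingExample {
        private static final Logger log = LoggerFactory.getLogger(StartupErrorHandlingExample.class);

        // Keep the herder alive on startup failures, but be noisier about
        // exceptions that look like bugs in the connector itself.
        void startConnectorSafely(String connectorName, Runnable startConnector) {
            try {
                startConnector.run();
            } catch (RuntimeException e) {   // e.g. a ConnectException-style failure
                log.error("Connector {} failed to start", connectorName, e);
            } catch (Throwable t) {          // unexpected: likely a connector bug
                log.error("Unexpected error starting connector {}; likely a bug in the connector",
                        connectorName, t);
            }
        }
    }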


[GitHub] kafka pull request: KAFKA-3084: Topic existence checks in topic co...

2016-01-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/744


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3084) Topic existence checks in topic commands (create, alter, delete)

2016-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15092396#comment-15092396
 ] 

ASF GitHub Bot commented on KAFKA-3084:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/744


> Topic existence checks in topic commands (create, alter, delete)
> 
>
> Key: KAFKA-3084
> URL: https://issues.apache.org/jira/browse/KAFKA-3084
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>
> In Kafka 0.9.0 error codes were added to the topic commands. However, often 
> users only want to perform an action based on the existence of a topic, and 
> they don't want the command to fail just because the topic does or does not exist.
> Adding an if-exists option for the topic delete and alter commands and an 
> if-not-exists option for the create command allows users to build scripts that 
> can handle this expected state without error codes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
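
For illustration of the if-exists / if-not-exists semantics (not the actual
TopicCommand change): the command skips quietly instead of failing when the
topic's existence already matches what the flag anticipates.

    final class TopicExistenceCheckExample {
        // Returns true if the alter/delete should proceed, false if it should
        // be silently skipped because --if-exists was given and the topic is absent.
        static boolean shouldProceed(boolean topicExists, boolean ifExists) {
            if (topicExists)
                return true;
            if (ifExists)
                return false;
            throw new IllegalArgumentException("Topic does not exist");
        }
    }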


[jira] [Updated] (KAFKA-3084) Topic existence checks in topic commands (create, alter, delete)

2016-01-11 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3084:
-
   Resolution: Fixed
Fix Version/s: 0.9.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 744
[https://github.com/apache/kafka/pull/744]

> Topic existence checks in topic commands (create, alter, delete)
> 
>
> Key: KAFKA-3084
> URL: https://issues.apache.org/jira/browse/KAFKA-3084
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>
> In Kafka 0.9.0 error codes were added to the topic commands. However, often 
> users only want to perform an action based on the existence of a topic, and 
> they don't want the command to fail just because the topic does or does not exist.
> Adding an if-exists option for the topic delete and alter commands and an 
> if-not-exists option for the create command allows users to build scripts that 
> can handle this expected state without error codes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #951

2016-01-11 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-3084: Topic existence checks in topic commands (create, alter,

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu3 (Ubuntu ubuntu legacy-ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 3a0fc125f4337a670ea52009afb1a254179ac07b 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 3a0fc125f4337a670ea52009afb1a254179ac07b
 > git rev-list 7059158c48ec05288c0b45ccac05d62b38f1bf76 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson2884643473462564791.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 13.829 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson1366370850867135526.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk7/clients/src/main/java/org/apache/kafka/common/errors/ClusterAuthorizationException.java'
>  to cache fileHashes.bin 
> (/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk7/.gradle/2.10/taskArtifacts/fileHashes.bin).

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 14.124 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


[jira] [Created] (KAFKA-3088) 0.9.0.0 broker crash on receipt of produce request with empty client ID

2016-01-11 Thread Dave Peterson (JIRA)
Dave Peterson created KAFKA-3088:


 Summary: 0.9.0.0 broker crash on receipt of produce request with 
empty client ID
 Key: KAFKA-3088
 URL: https://issues.apache.org/jira/browse/KAFKA-3088
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 0.9.0.0
Reporter: Dave Peterson
Assignee: Jun Rao


Sending a produce request with an empty client ID to a 0.9.0.0 broker causes 
the broker to crash as shown below.  More details can be found in the following 
email thread:

http://mail-archives.apache.org/mod_mbox/kafka-users/201601.mbox/%3c5693ecd9.4050...@dspeterson.com%3e



   [2016-01-10 23:03:44,957] ERROR [KafkaApi-3] error when handling request 
Name: ProducerRequest; Version: 0; CorrelationId: 1; ClientId: null; 
RequiredAcks: 1; AckTimeoutMs: 1 ms; TopicAndPartition: [topic_1,3] -> 37 
(kafka.server.KafkaApis)
   java.lang.NullPointerException
  at 
org.apache.kafka.common.metrics.JmxReporter.getMBeanName(JmxReporter.java:127)
  at 
org.apache.kafka.common.metrics.JmxReporter.addAttribute(JmxReporter.java:106)
  at 
org.apache.kafka.common.metrics.JmxReporter.metricChange(JmxReporter.java:76)
  at 
org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:288)
  at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:177)
  at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:162)
  at 
kafka.server.ClientQuotaManager.getOrCreateQuotaSensors(ClientQuotaManager.scala:209)
  at 
kafka.server.ClientQuotaManager.recordAndMaybeThrottle(ClientQuotaManager.scala:111)
  at 
kafka.server.KafkaApis.kafka$server$KafkaApis$$sendResponseCallback$2(KafkaApis.scala:353)
  at 
kafka.server.KafkaApis$$anonfun$handleProducerRequest$1.apply(KafkaApis.scala:371)
  at 
kafka.server.KafkaApis$$anonfun$handleProducerRequest$1.apply(KafkaApis.scala:371)
  at 
kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:348)
  at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:366)
  at kafka.server.KafkaApis.handle(KafkaApis.scala:68)
  at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
  at java.lang.Thread.run(Thread.java:745)




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
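
The NullPointerException above comes from the null client id reaching the
metrics/JMX layer when the client quota sensors are created. Purely as an
illustration (not the actual fix), the kind of guard that avoids it:

    final class ClientIdExample {
        // Normalise a missing client id to the empty string before it is used
        // as a JMX MBean name component, so a malformed produce request cannot
        // blow up the broker's request handling path.
        static String sanitizeClientId(String clientId) {
            return clientId == null ? "" : clientId;
        }
    }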


[GitHub] kafka pull request: KAFKA-3019: Add an exceptionName method to Err...

2016-01-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/754


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3019) Add an exceptionName method to Errors

2016-01-11 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3019:
-
   Resolution: Fixed
Fix Version/s: 0.9.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 754
[https://github.com/apache/kafka/pull/754]

> Add an exceptionName method to Errors
> -
>
> Key: KAFKA-3019
> URL: https://issues.apache.org/jira/browse/KAFKA-3019
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>
> The Errors class is often used to get and print the name of an exception 
> related to an Error. Adding an exceptionName method and updating all usages 
> would help provide clearer and less error-prone code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3019) Add an exceptionName method to Errors

2016-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15092503#comment-15092503
 ] 

ASF GitHub Bot commented on KAFKA-3019:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/754


> Add an exceptionName method to Errors
> -
>
> Key: KAFKA-3019
> URL: https://issues.apache.org/jira/browse/KAFKA-3019
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>
> The Errors class is often used to get and print the name of an exception 
> related to an Error. Adding an exceptionName method and updating all usages 
> would help provide clearer and less error-prone code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3079) org.apache.kafka.common.KafkaException: java.lang.SecurityException: Configuration Error:

2016-01-11 Thread Mohit Anchlia (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15092506#comment-15092506
 ] 

Mohit Anchlia commented on KAFKA-3079:
--

I've tried multiple entries and for some reason I keep seeing the same error. 
Not sure where it's picking up "Zookeeper" from.

> org.apache.kafka.common.KafkaException: java.lang.SecurityException: 
> Configuration Error:
> -
>
> Key: KAFKA-3079
> URL: https://issues.apache.org/jira/browse/KAFKA-3079
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.9.0.0
> Environment: RHEL 6
>Reporter: Mohit Anchlia
> Attachments: kafka_server_jaas.conf
>
>
> After enabling security I am seeing the following error even though the JAAS 
> file has no mention of "Zookeeper". I used the following steps:
> http://docs.confluent.io/2.0.0/kafka/sasl.html
> [2016-01-07 19:05:15,329] FATAL Fatal error during KafkaServer startup. 
> Prepare to shutdown (kafka.server.KafkaServer)
> org.apache.kafka.common.KafkaException: java.lang.SecurityException: 
> Configuration Error:
> Line 8: expected [{], found [Zookeeper]
> at 
> org.apache.kafka.common.security.JaasUtils.isZkSecurityEnabled(JaasUtils.java:102)
> at kafka.server.KafkaServer.initZk(KafkaServer.scala:262)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:168)
> at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37)
> at kafka.Kafka$.main(Kafka.scala:67)
> at kafka.Kafka.main(Kafka.scala)
> Caused by: java.lang.SecurityException: Configuration Error:
> Line 8: expected [{], found [Zookeeper]
> at com.sun.security.auth.login.ConfigFile.<init>(ConfigFile.java:110)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at java.lang.Class.newInstance(Class.java:374)
> at 
> javax.security.auth.login.Configuration$2.run(Configuration.java:258)
> at 
> javax.security.auth.login.Configuration$2.run(Configuration.java:250)
> at java.security.AccessController.doPrivileged(Native Method)
> at 
> javax.security.auth.login.Configuration.getConfiguration(Configuration.java:249)
> at 
> org.apache.kafka.common.security.JaasUtils.isZkSecurityEnabled(JaasUtils.java:99)
> ... 5 more
> Caused by: java.io.IOException: Configuration Error:



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3079) org.apache.kafka.common.KafkaException: java.lang.SecurityException: Configuration Error:

2016-01-11 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15092524#comment-15092524
 ] 

Ismael Juma commented on KAFKA-3079:


[~mohitanchlia], what happens if you remove the "# Zookeeper client 
authentication" line from your file?

> org.apache.kafka.common.KafkaException: java.lang.SecurityException: 
> Configuration Error:
> -
>
> Key: KAFKA-3079
> URL: https://issues.apache.org/jira/browse/KAFKA-3079
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.9.0.0
> Environment: RHEL 6
>Reporter: Mohit Anchlia
> Attachments: kafka_server_jaas.conf
>
>
> After enabling security I am seeing the following error even though the JAAS 
> file has no mention of "Zookeeper". I used the following steps:
> http://docs.confluent.io/2.0.0/kafka/sasl.html
> [2016-01-07 19:05:15,329] FATAL Fatal error during KafkaServer startup. 
> Prepare to shutdown (kafka.server.KafkaServer)
> org.apache.kafka.common.KafkaException: java.lang.SecurityException: 
> Configuration Error:
> Line 8: expected [{], found [Zookeeper]
> at 
> org.apache.kafka.common.security.JaasUtils.isZkSecurityEnabled(JaasUtils.java:102)
> at kafka.server.KafkaServer.initZk(KafkaServer.scala:262)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:168)
> at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37)
> at kafka.Kafka$.main(Kafka.scala:67)
> at kafka.Kafka.main(Kafka.scala)
> Caused by: java.lang.SecurityException: Configuration Error:
> Line 8: expected [{], found [Zookeeper]
> at com.sun.security.auth.login.ConfigFile.<init>(ConfigFile.java:110)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at java.lang.Class.newInstance(Class.java:374)
> at 
> javax.security.auth.login.Configuration$2.run(Configuration.java:258)
> at 
> javax.security.auth.login.Configuration$2.run(Configuration.java:250)
> at java.security.AccessController.doPrivileged(Native Method)
> at 
> javax.security.auth.login.Configuration.getConfiguration(Configuration.java:249)
> at 
> org.apache.kafka.common.security.JaasUtils.isZkSecurityEnabled(JaasUtils.java:99)
> ... 5 more
> Caused by: java.io.IOException: Configuration Error:



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : kafka-trunk-jdk8 #279

2016-01-11 Thread Apache Jenkins Server
See 



Build failed in Jenkins: kafka-trunk-jdk8 #280

2016-01-11 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-3019: Add an exceptionName method to Errors

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 9c998dd8cd4a489512b6ed34a05afce88a0b1ba2 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 9c998dd8cd4a489512b6ed34a05afce88a0b1ba2
 > git rev-list 3a0fc125f4337a670ea52009afb1a254179ac07b # timeout=10
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson2026079921971043764.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 11.85 secs
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson5783407636092447882.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 11.114 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2


Re: Consider increasing the default reserved.broker.max.id

2016-01-11 Thread Grant Henke
Any thoughts on adding a "broker.id.generation.enabled" configuration for
auto generating broker ids?

If we added this and defaulted it to false, users who upgrade don't need to
worry about the reserved.broker.max.id configuration since it will not be
used. Users using the standard/old way of manually configuring ids can keep
doing so without concern. And when users enable broker id generation, the
docs can inform them how to correctly set reserved.broker.max.id.
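
To make the proposal concrete, a sketch of the relevant server.properties settings
(broker.id.generation.enabled is the name proposed in this thread, not an existing
configuration, and the values are only illustrative):

    # proposed switch; defaulting it to false would keep upgraded clusters on
    # manually assigned ids and leave reserved.broker.max.id unused
    broker.id.generation.enabled=false

    # only matters once generation is enabled: generated ids are handed out above
    # this value, so manually assigned broker.id values must stay at or below it
    reserved.broker.max.id=1000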

On Fri, Jan 8, 2016 at 4:18 PM, Grant Henke  wrote:

> I agree that many people id their brokers differently and increasing the
> default will only handle a subset of those schemes. Though I think
> increasing it to some reasonable value may help decrease issues drastically
> regardless.
>
> I also think some longer term fix that avoids collisions all together
> would be nice. Though I am not sure what that long term solution is. We
> would need to introduce some id range that a manually configured broker id is not
> allowed to use. Any ideas?
>
> I also wanted to note here that while investigating this I found some
> interesting special cases/rules for the reserved.broker.max.id config.
>
> 1. Because a zookeeper sequence value is added to that value to generate
> the unique ids, the value, once configured and used, *cannot be decreased*
> while still guaranteeing no collisions.
>
> 2. Because the id was generated it can never be manually set in the
> config. Therefore if you need to stand up a new machine with the same
> broker ids (perhaps for recovery) you can't set this value manually. The
> workaround would be to set the value in the meta.properties file of all the
> log directories. (note: I haven't fully vetted this yet)
>
>
>
>
> On Wed, Dec 23, 2015 at 5:25 PM, Ewen Cheslack-Postava 
> wrote:
>
>> Which other numbering schemes do we want to be able to un-break by
>> increasing this default? For example, I know some people use the IP
>> address
>> with dots removed -- we'd have to use a very large # to make sure that
>> worked. Before making another change, it'd be good to know what other
>> schemes people are using and that we'd really be fixing the issue for
>> someone.
>>
>> -Ewen
>>
>> On Fri, Dec 18, 2015 at 9:37 AM, Ismael Juma  wrote:
>>
>> > On Fri, Dec 18, 2015 at 4:44 PM, Grant Henke 
>> wrote:
>> >
>> > > There is some discussion on KAFKA-1070
>> > >  around the design
>> > > choice
>> > > and compatibility. The value 1000 was thrown out as a quick example
>> but
>> > it
> > was never discussed beyond that. The discussion also cites a few cases
> > where a value of 1000 would cause issues.
>> > >
>> >
>> > Thanks for digging that up. Also worth noting that Jay said:
>> >
>> > "I think we can get around the problem you point out by just defaulting
>> the
>> > node id sequence to 1000. This could theoretically conflict but most
>> people
>> > number from 0 or 1 and we can discuss this in the release notes. Our
>> plan
>> > will be to release with support for both configured node ids and
>> assigned
>> > node ids for compatibility. After a couple of releases we will remove
>> the
>> > config."
>> >
>> > Ismael
>> >
>>
>>
>>
>> --
>> Thanks,
>> Ewen
>>
>
>
>
> --
> Grant Henke
> Software Engineer | Cloudera
> gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
>



-- 
Grant Henke
Software Engineer | Cloudera
gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke


Build failed in Jenkins: kafka-trunk-jdk7 #952

2016-01-11 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-3019: Add an exceptionName method to Errors

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 9c998dd8cd4a489512b6ed34a05afce88a0b1ba2 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 9c998dd8cd4a489512b6ed34a05afce88a0b1ba2
 > git rev-list 3a0fc125f4337a670ea52009afb1a254179ac07b # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson7809920625156671595.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 12.034 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson4408829152533411241.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean
:connect:clean UP-TO-DATE
:core:clean
:examples:clean
:log4j-appender:clean
:streams:clean
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJavaNote: 

 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.

:kafka-trunk-jdk7:clients:processResources UP-TO-DATE
:kafka-trunk-jdk7:clients:classes
:kafka-trunk-jdk7:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk7:clients:createVersionFile
:kafka-trunk-jdk7:clients:jar
:kafka-trunk-jdk7:core:compileJava UP-TO-DATE
:kafka-trunk-jdk7:core:compileScala
:79:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
:394:
 value DEFAULT_TIMESTAMP in object OffsetCo

What about Audit feature?

2016-01-11 Thread Luciano Afranllie
Hi

The Kafka documentation mentions an Audit feature in section 6.6, but
https://issues.apache.org/jira/browse/KAFKA-260 is resolved as Won't Fix.

Should this section of the documentation be removed?

Regards
Luciano


Kafka KIP meeting Jan. 12 at 11:00am PST

2016-01-11 Thread Jun Rao
Hi, Everyone,

We will have a Kafka KIP meeting tomorrow at 11:00am PST. If you plan to
attend but haven't received an invite, please let me know. The following is
the agenda.

Agenda:
KIP-36: Rack-aware replica assignment
KIP-41: KafkaConsumer Max Records

Thanks,

Jun


[jira] [Commented] (KAFKA-3021) Centralize dependency version management

2016-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15092794#comment-15092794
 ] 

ASF GitHub Bot commented on KAFKA-3021:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/741


> Centralize dependency version management
> ---
>
> Key: KAFKA-3021
> URL: https://issues.apache.org/jira/browse/KAFKA-3021
> Project: Kafka
>  Issue Type: Sub-task
>  Components: build
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3021: Centralize dependency version mana...

2016-01-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/741


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3021) Centralize dependency version management

2016-01-11 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3021:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 741
[https://github.com/apache/kafka/pull/741]

> Centralize dependency version management
> ---
>
> Key: KAFKA-3021
> URL: https://issues.apache.org/jira/browse/KAFKA-3021
> Project: Kafka
>  Issue Type: Sub-task
>  Components: build
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #953

2016-01-11 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-3021: Centralize dependency version management

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision f009c30947c49fec4f41efcc31b9d5d72b6f7f37 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f f009c30947c49fec4f41efcc31b9d5d72b6f7f37
 > git rev-list 9c998dd8cd4a489512b6ed34a05afce88a0b1ba2 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson3540618209887406693.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 14.729 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson4300841336211883661.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean
:connect:clean UP-TO-DATE
:core:clean
:examples:clean
:log4j-appender:clean
:streams:clean
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 17.822 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


[jira] [Created] (KAFKA-3089) VerifiableProducer should do a clean shutdown in stop_node()

2016-01-11 Thread Dong Lin (JIRA)
Dong Lin created KAFKA-3089:
---

 Summary: VerifiableProducer should do a clean shutdown in 
stop_node()
 Key: KAFKA-3089
 URL: https://issues.apache.org/jira/browse/KAFKA-3089
 Project: Kafka
  Issue Type: Improvement
Reporter: Dong Lin
Assignee: Dong Lin


VerifiableProducer is closed by SIGKILL when stop_node() is called. For this 
reason, when stop_producer_and_consumer() is invoked in 
ProduceConsumeValidateTest, VerifiableProducer is killed immediately without 
allowing it to wait for acknowledgement. The reported number of messages 
produced by VerifiableProducer will thus be much smaller than the reported 
number of messages consumed by the consumer, causing confusion to developers.

For almost all other services, such as VerifiableConsumer and ConsoleConsumer, 
we send SIGINT when stop_node() is called. It is not clear why 
VerifiableProducer is different from them.
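
For context, the practical difference between the two signals: SIGKILL cannot be
caught, while SIGINT lets the JVM run its shutdown hooks, so a producer-based tool
gets a chance to wait for outstanding acknowledgements before exiting. A minimal
sketch of such a hook (illustration only, not the actual VerifiableProducer code;
broker address and topic are placeholders):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class CleanShutdownSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            final KafkaProducer<String, String> producer =
                    new KafkaProducer<String, String>(props);

            // Runs on SIGINT/SIGTERM but not on SIGKILL: block until outstanding
            // records are acknowledged, then release the producer's resources.
            Runtime.getRuntime().addShutdownHook(new Thread() {
                public void run() {
                    producer.flush();
                    producer.close();
                }
            });

            for (long i = 0; ; i++)
                producer.send(new ProducerRecord<String, String>(
                        "test-topic", Long.toString(i)));
        }
    }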



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3089; VerifiableProducer should do a cle...

2016-01-11 Thread lindong28
GitHub user lindong28 opened a pull request:

https://github.com/apache/kafka/pull/755

KAFKA-3089; VerifiableProducer should do a clean shutdown in stop_node()



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lindong28/kafka KAFKA-3089

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/755.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #755


commit 1fdf7dc454f17da2f8ca95e62953949bf7cc0e3d
Author: Dong Lin 
Date:   2016-01-11T22:36:37Z

KAFKA-3089; VerifiableProducer should do a clean shutdown in stop_node()




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3089) VerifiableProducer should do a clean shutdown in stop_node()

2016-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15092874#comment-15092874
 ] 

ASF GitHub Bot commented on KAFKA-3089:
---

GitHub user lindong28 opened a pull request:

https://github.com/apache/kafka/pull/755

KAFKA-3089; VerifiableProducer should do a clean shutdown in stop_node()



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lindong28/kafka KAFKA-3089

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/755.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #755


commit 1fdf7dc454f17da2f8ca95e62953949bf7cc0e3d
Author: Dong Lin 
Date:   2016-01-11T22:36:37Z

KAFKA-3089; VerifiableProducer should do a clean shutdown in stop_node()




> VerifiableProducer should do a clean shutdown in stop_node()
> 
>
> Key: KAFKA-3089
> URL: https://issues.apache.org/jira/browse/KAFKA-3089
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Dong Lin
>Assignee: Dong Lin
>
> VerifiableProducer is closed by SIGKILL when stop_node() is called. For this 
> reason, when stop_producer_and_consumer() is invoked in 
> ProduceConsumeValidateTest, VerifiableProducer is killed immediately without 
> allowing it to wait for acknowledgement. The reported number of messages 
> produced by VerifiableProducer will thus be much smaller than the reported 
> number of messages consumed by the consumer, causing confusion to developers.
> For almost all other services, such as VerifiableConsumer and 
> ConsoleConsumer, we send SIGINT when stop_node() is called. It is not clear 
> why VerifiableProducer is different from them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3085: BrokerChangeListener computes inco...

2016-01-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/752


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3085) BrokerChangeListener computes inconsistent live/dead broker list

2016-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15092892#comment-15092892
 ] 

ASF GitHub Bot commented on KAFKA-3085:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/752


> BrokerChangeListener computes inconsistent live/dead broker list
> 
>
> Key: KAFKA-3085
> URL: https://issues.apache.org/jira/browse/KAFKA-3085
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: David Jacot
>
> On a broker change ZK event, BrokerChangeListener gets the current broker 
> list from ZK. It then computes a new broker list, a dead broker list, and a 
> live broker list with more detailed broker info. The new and live broker lists
> are computed by reading the value associated with each of the current brokers
> twice. If a broker is de-registered in between, these two lists will not be 
> consistent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #281

2016-01-11 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-3021: Centralize dependency version management

[junrao] KAFKA-3085; BrokerChangeListener computes inconsistent live/dead broker

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision e789a35d3bce916f25705915b5e86353462a1454 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f e789a35d3bce916f25705915b5e86353462a1454
 > git rev-list 9c998dd8cd4a489512b6ed34a05afce88a0b1ba2 # timeout=10
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson2373071239267360448.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 11.729 secs
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson4605922759111883427.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 11.665 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2


[jira] [Commented] (KAFKA-3085) BrokerChangeListener computes inconsistent live/dead broker list

2016-01-11 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15092923#comment-15092923
 ] 

Jun Rao commented on KAFKA-3085:


[~dajac], sorry, I overlooked this when reviewing the patch. The fix in the 
above PR is still not complete. So I am leaving this jira open. The issue is 
that we compute things like newBrokerIds based on currentBrokerList, which is 
the list of broker ids returned from ZK. However, if a broker disappears from ZK 
when we try to read the broker info using the broker id, the broker essentially 
doesn't exist and shouldn't be included in newBrokerIds. So, perhaps the fix 
should be (1) compute curBrokers from currentBrokerList and only include live 
brokers at that time; (2) derive newBrokerIds and deadBrokerIds from curBrokers 
by comparing with controllerContext.liveOrShuttingDownBrokerIds. Do you want to 
give this another try? Thanks,
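
A sketch of that ordering in plain Java over standard collections (not the actual
Scala in kafka.controller, just an illustration of steps (1) and (2) above):

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    public class BrokerDiffSketch {
        /**
         * ids:                broker ids listed under /brokers/ids when the event fired
         * registrations:      broker info read from ZK per id; null if the znode is already gone
         * liveOrShuttingDown: the controller's current liveOrShuttingDownBrokerIds view
         */
        static void diff(Set<Integer> ids, Map<Integer, String> registrations,
                         Set<Integer> liveOrShuttingDown) {
            // (1) curBrokers: keep only ids whose registration could still be read
            Map<Integer, String> curBrokers = new HashMap<Integer, String>();
            for (Integer id : ids) {
                String info = registrations.get(id);
                if (info != null)
                    curBrokers.put(id, info);
            }
            // (2) derive new/dead ids from this single consistent snapshot
            Set<Integer> newBrokerIds = new HashSet<Integer>(curBrokers.keySet());
            newBrokerIds.removeAll(liveOrShuttingDown);
            Set<Integer> deadBrokerIds = new HashSet<Integer>(liveOrShuttingDown);
            deadBrokerIds.removeAll(curBrokers.keySet());
            System.out.println("new=" + newBrokerIds + ", dead=" + deadBrokerIds);
        }
    }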

> BrokerChangeListener computes inconsistent live/dead broker list
> 
>
> Key: KAFKA-3085
> URL: https://issues.apache.org/jira/browse/KAFKA-3085
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: David Jacot
>
> On a broker change ZK event, BrokerChangeListener gets the current broker 
> list from ZK. It then computes a new broker list, a dead broker list, and a 
> live broker list with more detailed broker info. The new and live broker lists
> are computed by reading the value associated with each of the current brokers
> twice. If a broker is de-registered in between, these two lists will not be 
> consistent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #954

2016-01-11 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-3085; BrokerChangeListener computes inconsistent live/dead broker

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision e789a35d3bce916f25705915b5e86353462a1454 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f e789a35d3bce916f25705915b5e86353462a1454
 > git rev-list f009c30947c49fec4f41efcc31b9d5d72b6f7f37 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson4573160968842705591.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 10.964 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson5405754020655539621.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 13.923 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


[jira] [Assigned] (KAFKA-2998) New Consumer should not retry indefinitely if no broker is available

2016-01-11 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson reassigned KAFKA-2998:
--

Assignee: Jason Gustafson

> New Consumer should not retry indefinitely if no broker is available
> 
>
> Key: KAFKA-2998
> URL: https://issues.apache.org/jira/browse/KAFKA-2998
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Florian Hussonnois
>Assignee: Jason Gustafson
>Priority: Minor
>
> If no broker from bootstrap.servers is available, the consumer retries
> indefinitely with debug log messages:
>  
> DEBUG 17:16:13 Give up sending metadata request since no node is available
> DEBUG 17:16:13 Initialize connection to node -1 for sending metadata request
> DEBUG 17:16:13 Initiating connection to node -1 at localhost:9091.
> At least, an ERROR message should be logged after a number of retries.
> In addition, maybe the consumer should fail in such a case? This behavior
> could be controlled by a configuration property?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2998) New Consumer should not retry indefinitely if no broker is available

2016-01-11 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15093078#comment-15093078
 ] 

Jason Gustafson commented on KAFKA-2998:


[~ijuma] Maybe we can treat this as a separate issue. It seems common to 
accidentally point the bootstrap brokers to zookeeper, for example, and the 
current behavior is to silently retry forever. Maybe we can address the common 
misconfiguration problem here without needing the full patch for KAFKA-1894? A 
simple approach might be to log disconnects to any of the bootstrap brokers at 
the error level.
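
A tiny illustration of that misconfiguration (values are made up): with a consumer
config like

    # wrong: points at ZooKeeper instead of a Kafka broker, e.g. localhost:9092
    bootstrap.servers=localhost:2181

the consumer just keeps retrying with DEBUG-level log output (as in the report
above), which is why surfacing disconnects from bootstrap brokers at ERROR level
would help.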

> New Consumer should not retry indefinitely if no broker is available
> 
>
> Key: KAFKA-2998
> URL: https://issues.apache.org/jira/browse/KAFKA-2998
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Florian Hussonnois
>Assignee: Jason Gustafson
>Priority: Minor
>
> If no broker from bootstrap.servers is available, the consumer retries
> indefinitely with debug log messages:
>  
> DEBUG 17:16:13 Give up sending metadata request since no node is available
> DEBUG 17:16:13 Initialize connection to node -1 for sending metadata request
> DEBUG 17:16:13 Initiating connection to node -1 at localhost:9091.
> At least, an ERROR message should be logged after a number of retries.
> In addition, maybe the consumer should fail in such a case? This behavior
> could be controlled by a configuration property?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: ClassLoading in OSGi environment

2016-01-11 Thread Guozhang Wang
This makes sense, could you file a JIRA to keep track of this?

Guozhang

On Mon, Jan 11, 2016 at 8:17 AM, Rajini Sivaram <
rajinisiva...@googlemail.com> wrote:

> There are multiple places in Kafka where the context class loader or
> Class.forName() is used to load classes. Perhaps it would be better to use
> a common utility everywhere for dynamic classloading with an option to use
> the right classloader.loadClass() that works with OSGi?
>
> Regards,
>
> Rajini
>
> On Mon, Jan 11, 2016 at 1:49 PM, Ramon Gordillo 
> wrote:
>
> > Hi.
> >
> > I have tried using 0.9.0.0, building an OSGi bundle and exporting the
> > packages. However, when creating a Producer, I get an Ex:
> >
> > Caused by: org.apache.kafka.common.config.ConfigException: Invalid value
> > org.apache.kafka.clients.producer.internals.DefaultPartitioner for
> > configuration partitioner.class: Class
> > org.apache.kafka.clients.producer.internals.DefaultPartitioner could not
> be
> > found.
> >
> > at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:255)
> > ~[kafka-clients-0.9.0.0.jar:na]
> > at org.apache.kafka.common.config.ConfigDef.define(ConfigDef.java:78)
> > ~[kafka-clients-0.9.0.0.jar:na]
> > at org.apache.kafka.common.config.ConfigDef.define(ConfigDef.java:94)
> > ~[kafka-clients-0.9.0.0.jar:na]
> > at org.apache.kafka.clients.producer.ProducerConfig.<init>(
> > ProducerConfig.java:206) ~[kafka-clients-0.9.0.0.jar:na]
> >
> > That is because the static ProducerConfig initializer sets the class name
> > and ConfigDef does a Class.forName, which does not work well in OSGi
> > environments. But there is another way to set those "class" parameters,
> > and that is to use the class directly. So in my OSGi environment, changing
> > ProducerConfig:
> >
> >
> >.define(PARTITIONER_CLASS_CONFIG,
> > Type.CLASS,
> > DefaultPartitioner.class.getName(),
> > Importance.MEDIUM,
> > PARTITIONER_CLASS_DOC)
> > for
> >
> >.define(PARTITIONER_CLASS_CONFIG,
> > Type.CLASS,
> > DefaultPartitioner.class,
> > Importance.MEDIUM,
> > PARTITIONER_CLASS_DOC)
> >
> > works fine in OSGi too.
> >
> > What do you think about this?
> >
> > Thanks in advance.
> >
>



-- 
-- Guozhang
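
For reference, a minimal sketch of the kind of shared classloading helper suggested
above (the class and method names are illustrative, not an existing Kafka utility):
try the thread context class loader first and, if it cannot resolve the name, fall
back to the caller's own loader, which under OSGi is the bundle's loader.

    public final class ClassLoaderUtilsSketch {
        private ClassLoaderUtilsSketch() {}

        /** Load a class by name, preferring the context class loader when it works. */
        public static Class<?> loadClass(String name, Class<?> caller)
                throws ClassNotFoundException {
            ClassLoader contextLoader = Thread.currentThread().getContextClassLoader();
            if (contextLoader != null) {
                try {
                    return Class.forName(name, true, contextLoader);
                } catch (ClassNotFoundException e) {
                    // fall through to the caller's loader below
                }
            }
            return Class.forName(name, true, caller.getClassLoader());
        }
    }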


[jira] [Commented] (KAFKA-2998) New Consumer should not retry indefinitely if no broker is available

2016-01-11 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15093118#comment-15093118
 ] 

Ismael Juma commented on KAFKA-2998:


Sounds like a good plan.

> New Consumer should not retry indefinitely if no broker is available
> 
>
> Key: KAFKA-2998
> URL: https://issues.apache.org/jira/browse/KAFKA-2998
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Florian Hussonnois
>Assignee: Jason Gustafson
>Priority: Minor
>
> If no broker from bootstrap.servers is available, the consumer retries
> indefinitely with debug log messages:
>  
> DEBUG 17:16:13 Give up sending metadata request since no node is available
> DEBUG 17:16:13 Initialize connection to node -1 for sending metadata request
> DEBUG 17:16:13 Initiating connection to node -1 at localhost:9091.
> At least, an ERROR message should be logged after a number of retries.
> In addition, maybe the consumer should fail in such a case? This behavior
> could be controlled by a configuration property?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2478) KafkaConsumer javadoc example seems wrong

2016-01-11 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2478:
-
Assignee: Dmitry Stratiychuk  (was: Neha Narkhede)

> KafkaConsumer javadoc example seems wrong
> -
>
> Key: KAFKA-2478
> URL: https://issues.apache.org/jira/browse/KAFKA-2478
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Dmitry Stratiychuk
>Assignee: Dmitry Stratiychuk
>
> I was looking at this KafkaConsumer example in the javadoc:
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java#L199
> As I understand, the commit() method commits the maximum offsets returned by the
> most recent invocation of the poll() method.
> In this example, there's a danger of losing data.
> Imagine the case where 300 records are returned by consumer.poll().
> The commit will happen after inserting 200 records into the database.
> But it will also commit the offsets for 100 records that are still
> unprocessed.
> So if the consumer fails before the buffer is dumped into the database again,
> then those 100 records will never be processed.
> If I'm wrong, could you please clarify the behaviour of the commit() method?
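
One way to make the cited example safer (a sketch of the general technique, not the
javadoc's code; insertIntoDb is a hypothetical sink) is to commit explicit offsets
for the records actually written, instead of relying on commitSync() committing
everything the last poll() returned:

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    public class CommitProcessedSketch {
        static void insertIntoDb(ConsumerRecord<String, String> record) {
            // hypothetical sink
        }

        static void processAndCommit(KafkaConsumer<String, String> consumer,
                                     ConsumerRecords<String, String> records) {
            Map<TopicPartition, OffsetAndMetadata> processed =
                    new HashMap<TopicPartition, OffsetAndMetadata>();
            for (ConsumerRecord<String, String> record : records) {
                insertIntoDb(record);
                // record the offset of the *next* message to consume for this partition
                processed.put(new TopicPartition(record.topic(), record.partition()),
                              new OffsetAndMetadata(record.offset() + 1));
            }
            // commits only what was written above, not everything poll() returned
            consumer.commitSync(processed);
        }
    }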



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: Minor - remove unused TimeUnit from MetricConf...

2016-01-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/600


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk7 #955

2016-01-11 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: remove unused TimeUnit from MetricConfig

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-4 (docker Ubuntu ubuntu4 ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision c9114488b3c266fe60cf96426bfa143b4b8109c0 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f c9114488b3c266fe60cf96426bfa143b4b8109c0
 > git rev-list e789a35d3bce916f25705915b5e86353462a1454 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson8767340901438274474.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 17.45 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson7044117306701243392.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean
:connect:clean UP-TO-DATE
:core:clean
:examples:clean
:log4j-appender:clean
:streams:clean
:tools:clean
:connect:api:clean
:connect:file:clean
:connect:json:clean
:connect:runtime:clean
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 19.423 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


Build failed in Jenkins: kafka-trunk-jdk8 #282

2016-01-11 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: remove unused TimeUnit from MetricConfig

--
[...truncated 1452 lines...]

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsOnly PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheBegging PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.utils.UtilsTest > testAbs PASSED

kafka.utils.UtilsTest > testReplaceSuffix PASSED

kafka.utils.UtilsTest > testCircularIterator PASSED

kafka.utils.UtilsTest > testReadBytes PASSED

kafka.utils.UtilsTest > testCsvList PASSED

kafka.utils.UtilsTest > testReadInt PASSED

kafka.utils.UtilsTest > testCsvMap PASSED

kafka.utils.UtilsTest > testInLock PASSED

kafka.utils.UtilsTest > testSwallow PASSED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask PASSED

kafka.utils.SchedulerTest > testNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testRestart PASSED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler PASSED

kafka.utils.SchedulerTest > testPeriodicTask PASSED

kafka.utils.ByteBoundedBlockingQueueTest > testByteBoundedBlockingQueue PASSED

kafka.utils.timer.TimerTaskListTest > testAll PASSED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask PASSED

kafka.utils.timer.TimerTest > testTaskExpiration PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArg PASSED

kafka.utils.CommandLineUtilsTest > testParseSingleArg PASSED

kafka.utils.CommandLineUtilsTest > testParseArgs PASSED

kafka.utils.IteratorTemplateTest > testIterator PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.ReplicationUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.utils.JsonTest > testJsonEncoding PASSED

kafka.message.MessageCompressionTest > testCompressSize PASSED

kafka.message.MessageCompressionTest > testSimpleCompressDecompress PASSED

kafka.message.MessageWriterTest > testWithNoCompressionAttribute PASSED

kafka.message.MessageWriterTest > testWithCompressionAttribute PASSED

kafka.message.MessageWriterTest > testBufferingOutputStream PASSED

kafka.message.MessageWriterTest > testWithKey PASSED

kafka.message.MessageTest > testChecksum PASSED

kafka.message.MessageTest > testIsHashable PASSED

kafka.message.MessageTest > testFieldValues PASSED

kafka.message.MessageTest > testEquality PASSED

kafka.message.ByteBufferMessageSetTest > testOffsetAssignment PASSED

kafka.message.ByteBufferMessageSetTest > testValidBytes PASSED

kafka.message.ByteBufferMessageSetTest > testValidBytesWithCompression PASSED

kafka.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.message.ByteBufferMessageSetTest > testWriteTo PASSED

kafka.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.message.ByteBufferMessageSetTest > testIterator PASSED

kafka.tools.ConsoleProducerTest > testParseKeyProp PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsOldProducer PASSED

kafka.tools.ConsoleProducerTest > testInvalidConfigs PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsNewProducer PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > testSessionPrincipal PASSED

kafka.network.SocketServerTest > testSocketsCloseOnShutdown PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIPOverrides PASSED

kafka.network.SocketServerTest > testSslSocketServer PASSED

kafka.network.SocketServerTest > tooBigRequestIsRejected PASSED

kafka.zk.ZKPathTest > testCreatePersistentSequentialThrowsException PASSED

kafka.zk.ZKPathTest > testCreatePersistentSequentialExists PASSED

kafka.zk.ZKPathTest > testCreateEphemeralPathExists PASSED

kafka.zk.ZKPathTest > testCreatePersistentPath PASSED

kafka.zk.ZKPathTest > testMakeSurePersistsPathExistsThrowsException PASSED

kafka.zk.ZKPathTest > testCreateEphemeralPathThrowsException PASSED

kafka.zk.ZKPathTest > testCreatePersistentPathThrowsException PASSED

kafka.zk.ZKPathTest > testMakeSurePersistsPathExists PASSED

kafka.zk.ZKEphemeralTest > testOverlappingSessions[0] PASSED

kafka.zk.ZKEphemeralTest > testEphemeralNodeCleanup[0] PASSED

kafka.zk.ZKEphemeralTest > testZkWatchedEphemeral[0] PASSED

kafka.zk.ZKEphemeralTest > testSameSession[0] PASSED

kafka.zk.ZKEphemeralTest > testOverlappingSessions[1] PASSED

kafka.zk.ZKEphemeralTest > testEphemeralNodeCleanup[1] PASSED

kafka.zk.ZKEphemeralTest > testZkWatchedEphemeral[1] PASSED

kafka.zk.ZKEphemeralTest > testSameSession[1] PASSED

kafka.common.ConfigTest > testInvalidGroupIds PASSED

kafka.common.ConfigTest > testInvalidClientIds PASSED

kafka.common.TopicTest > testInvalidTopicNames PASSED

kafka.common.TopicTest > testTopicHasCollision PA

[jira] [Commented] (KAFKA-3085) BrokerChangeListener computes inconsistent live/dead broker list

2016-01-11 Thread David Jacot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15093334#comment-15093334
 ] 

David Jacot commented on KAFKA-3085:


[~junrao] No worries. I'll update it accordingly.

> BrokerChangeListener computes inconsistent live/dead broker list
> 
>
> Key: KAFKA-3085
> URL: https://issues.apache.org/jira/browse/KAFKA-3085
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: David Jacot
>
> On a broker change ZK event, BrokerChangeListener gets the current broker 
> list from ZK. It then computes a new broker list, a dead broker list, and a 
> live broker list with more detailed broker info. The new and live broker lists
> are computed by reading the value associated with each of the current brokers
> twice. If a broker is de-registered in between, these two lists will not be 
> consistent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3085) BrokerChangeListener computes inconsistent live/dead broker list

2016-01-11 Thread David Jacot (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot updated KAFKA-3085:
---
Status: In Progress  (was: Patch Available)

> BrokerChangeListener computes inconsistent live/dead broker list
> 
>
> Key: KAFKA-3085
> URL: https://issues.apache.org/jira/browse/KAFKA-3085
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: David Jacot
>
> On a broker change ZK event, BrokerChangeListener gets the current broker 
> list from ZK. It then computes a new broker list, a dead broker list, and a 
> live broker list with more detailed broker info. The new and live broker lists
> are computed by reading the value associated with each of the current brokers
> twice. If a broker is de-registered in between, these two lists will not be 
> consistent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] KIP-32 Add CreateTime and LogAppendTime to Kafka message.

2016-01-11 Thread Becket Qin
That makes sense to me, too. I will update the KIP to reflect this. Thanks, 
Anna.
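
To make the suggestion concrete, roughly what it would look like from the producer
client's side (a sketch only: `producer` is assumed to be a configured
KafkaProducer<String, String>, and RecordMetadata.timestamp() is the accessor the
suggestion would add, not an API that exists at the time of this thread):

    producer.send(new ProducerRecord<String, String>("my-topic", "key", "value"),
            new Callback() {
                public void onCompletion(RecordMetadata metadata, Exception exception) {
                    if (exception == null)
                        // with LogAppendTime this would be the broker-assigned timestamp,
                        // with CreateTime the one set by the client
                        System.out.println("offset=" + metadata.offset()
                                + " timestamp=" + metadata.timestamp());
                }
            });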

> On Jan 9, 2016, at 9:37 AM, Neha Narkhede  wrote:
> 
> Anna - Good suggestion. Sounds good to me as well
> 
> On Fri, Jan 8, 2016 at 2:32 PM, Aditya Auradkar <
> aaurad...@linkedin.com.invalid> wrote:
> 
>> Anna,
>> 
>> That sounds good to me as well.
>> 
>> Aditya
>> 
>>> On Fri, Jan 8, 2016 at 2:11 PM, Gwen Shapira  wrote:
>>> 
>>> Sounds good to me too. Seems pretty easy to add and can be useful for
>>> producers.
>>> 
 On Fri, Jan 8, 2016 at 1:22 PM, Joel Koshy  wrote:
 
 Hi Anna,
 
 That sounds good to me - Becket/others any thoughts?
 
 Thanks,
 
 Joel
 
 On Fri, Jan 8, 2016 at 12:41 PM, Anna Povzner 
>> wrote:
 
> Hi Becket and everyone,
> 
> Could we please add the following functionality to this KIP. I think
>> it
> would be very useful for the broker to return the timestamp in the
>> ack
>>> to
> the producer (in response: timestamp per partition) and propagate it
>>> back
> to client in RecordMetadata. This way, if timestamp type is
 LogAppendTime,
> the producer client will see what timestamp was actually set -- and
>> it
> would match the timestamp that consumer sees. Also, returning the
 timestamp
> in RecordMetadata is also useful for timestamp type = CreateTime,
>> since
> timestamp could be also set in KafkaProducer (if client set timestamp
>>> in
> ProducerRecord to 0).
> 
> Since this requires protocol change as well, it will be better to
 implement
> this as part of KIP-32, rather than proposing a new KIP.
> 
> Thanks,
> Anna
> 
> 
> On Fri, Jan 8, 2016 at 12:53 PM, Joel Koshy 
>>> wrote:
> 
>> +1 from me
>> 
>> Looking through this thread it seems there was some confusion on
>> the
>> migration discussion. This discussion in fact happened in the
>> KIP-31
>> discuss thread, not so much in the KIP hangout. There is
>> considerable
>> overlap in discussions between KIP-3[1,2,3] so it makes sense to
>> cross-reference all of these.
>> 
>> I'm finding the Apache list archive a little cumbersome to use
>> (e.g.,
 the
>> current link in KIP-31 points to the beginning of September
>> archives)
 but
>> the emails discussing migration were in October:
>>> http://mail-archives.apache.org/mod_mbox/kafka-dev/201510.mbox/thread
>> 
>> Markmail has a better interface but interestingly it has not
>> indexed
 any
> of
>> the emails from August, September and early October (
>> http://markmail.org/search/?q=list%3Aorg.apache.incubator.kafka-dev+date%3A201509-201511+order%3Adate-backward
>> ).
>> Perhaps KIPs should include a permalink to the first message of the
> DISCUSS
>> thread. E.g.,
>> http://mail-archives.apache.org/mod_mbox/kafka-dev/201509.mbox/%3CCAHrRUm5jvL_dPeZWnfBD-vONgSZWOq1VL1Ss8OSUOCPXmtg8rQ%40mail.gmail.com%3E
>> 
>> Also, just to clarify Jay's comments on the content of KIPs: I
>> think
> having
>> a pseudo-code spec/implementation guide is useful (especially for
>> client-side KIPs). While the motivation should definitely capture
>>> “why
 we
>> are doing the KIP” it probably shouldn’t have to exhaustively
>> capture
> “why
>> we are doing the KIP *this way*”. i.e., some of the discussions are
>> extremely nuanced and in this case spans multiple KIPs so links to
 other
>> KIPs and the discuss threads and KIP hangout recordings are perhaps
>> sufficient to fill this gap - or maybe a new section that
>> summarizes
 the
>> discussions.
>> 
>> Thanks,
>> 
>> Joel
>> 
>>> On Wed, Jan 6, 2016 at 9:29 AM, Jun Rao  wrote:
>>> 
>>> Hi, Jiangjie,
>>> 
>>> 52. Replacing MessageSet with o.a.k.common.record will be ideal.
>>> Unfortunately, we use MessageSet in SimpleConsumer, which is part of the
>>> public api. Replacing MessageSet with o.a.k.common.record will be an
>>> incompatible api change. So, we probably should do this after we
>>> deprecate SimpleConsumer.
>>> 
>>> My original question is actually whether we just bump up magic byte in
>>> Message once to incorporate both the offset and the timestamp change. It
>>> seems that the answer is yes. Could you reflect that in the KIP?
>>> 
>>> Thanks,
>>> 
>>> Jun
>>> 
>>> 
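For illustration of the single magic-byte bump being discussed above: parsers
would gate both new pieces (relative offsets from KIP-31, the timestamp from
KIP-32) on one version check instead of two. The sketch below is schematic
only; the field offsets are assumed for the example and do not reproduce the
actual Kafka message layout.

    import java.nio.ByteBuffer;

    // Schematic sketch: both KIP-31 and KIP-32 changes hang off one magic bump.
    final class MagicByteSketch {
        static final byte MAGIC_V0 = 0; // current format
        static final byte MAGIC_V1 = 1; // assumed value after the combined bump

        // Returns the message timestamp, or -1 if the format predates the bump.
        static long timestampOf(ByteBuffer message) {
            byte magic = message.get(4);   // assumed layout: 4-byte CRC, then magic
            if (magic >= MAGIC_V1) {
                return message.getLong(6); // assumed: attributes byte, then timestamp
            }
            return -1L;                    // v0 messages carry no timestamp
        }
    }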
 On Wed, Jan 6, 2016 at 7:01 AM, Becket Qin  wrote:
>>> 
 Thanks a lot for the careful reading, Jun.
 Please see inline replies.
 
 
> On Jan 6, 2016, at 3:24 AM, Jun Rao  wrote:
> 
> Jiangjie,
> 
> Thanks for the updated KIP. Overall, a +1 on the proposal. A few minor
> comments on the KIP.
> 
> KIP-32:
> 50. 6.c says "The log rolling

[jira] [Commented] (KAFKA-3085) BrokerChangeListener computes inconsistent live/dead broker list

2016-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15093411#comment-15093411
 ] 

ASF GitHub Bot commented on KAFKA-3085:
---

GitHub user dajac opened a pull request:

https://github.com/apache/kafka/pull/756

KAFKA-3085: BrokerChangeListener computes inconsistent live/dead broker 
list.

Follow up PR as per comments in the ticket.

@junrao It should be correct now as `curBrokers` includes only live brokers 
and live/dead brokers are computed based on it. Could you take a look when you 
have time?
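
A minimal sketch of that approach, using plain Java collections and
hypothetical names rather than the actual Scala controller code: read each
broker's znode exactly once, drop brokers whose znode has already vanished,
and derive every list from that single snapshot.

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    final class BrokerChangeSketch {
        interface Zk {
            Set<Integer> registeredBrokerIds();  // ids currently under /brokers/ids
            String brokerInfo(int brokerId);     // null if the znode has vanished
        }

        static void onBrokerChange(Zk zk, Set<Integer> liveOrShuttingDownBrokerIds) {
            // One read per broker: ids whose data can no longer be fetched are
            // skipped, so curBrokers only holds brokers alive at snapshot time.
            Map<Integer, String> curBrokers = new HashMap<>();
            for (int id : zk.registeredBrokerIds()) {
                String info = zk.brokerInfo(id);
                if (info != null) {
                    curBrokers.put(id, info);
                }
            }

            Set<Integer> curBrokerIds = curBrokers.keySet();
            Set<Integer> newBrokerIds = new HashSet<>(curBrokerIds);
            newBrokerIds.removeAll(liveOrShuttingDownBrokerIds);
            Set<Integer> deadBrokerIds = new HashSet<>(liveOrShuttingDownBrokerIds);
            deadBrokerIds.removeAll(curBrokerIds);
            // newBrokerIds, deadBrokerIds and curBrokers all derive from the same
            // snapshot, so a broker de-registering mid-computation cannot make
            // them disagree with each other.
        }
    }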

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dajac/kafka KAFKA-3085

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/756.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #756


commit 94edafcf9d77a25ef5506cff9a3e9a1e76ff14ca
Author: David Jacot 
Date:   2016-01-12T06:42:40Z

BrokerChangeListener computes inconsistent live/dead broker list.




> BrokerChangeListener computes inconsistent live/dead broker list
> 
>
> Key: KAFKA-3085
> URL: https://issues.apache.org/jira/browse/KAFKA-3085
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: David Jacot
>
> On a broker change ZK event, BrokerChangeListener gets the current broker 
> list from ZK. It then computes a new broker list, a dead broker list, and a 
> live broker list with more detailed broker info. The new and live broker lists 
> are computed by reading the value associated with each of the current brokers 
> twice. If a broker is de-registered in between, these two lists will not be 
> consistent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3085: BrokerChangeListener computes inco...

2016-01-11 Thread dajac
GitHub user dajac opened a pull request:

https://github.com/apache/kafka/pull/756

KAFKA-3085: BrokerChangeListener computes inconsistent live/dead broker 
list.

Follow up PR as per comments in the ticket.

@junrao It should be correct now as `curBrokers` includes only live brokers 
and live/dead brokers are computed based on it. Could you take a look when you 
have time?

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dajac/kafka KAFKA-3085

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/756.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #756


commit 94edafcf9d77a25ef5506cff9a3e9a1e76ff14ca
Author: David Jacot 
Date:   2016-01-12T06:42:40Z

BrokerChangeListener computes inconsistent live/dead broker list.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3085) BrokerChangeListener computes inconsistent live/dead broker list

2016-01-11 Thread David Jacot (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot updated KAFKA-3085:
---
Status: Patch Available  (was: In Progress)

> BrokerChangeListener computes inconsistent live/dead broker list
> 
>
> Key: KAFKA-3085
> URL: https://issues.apache.org/jira/browse/KAFKA-3085
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: David Jacot
>
> On a broker change ZK event, BrokerChangeListener gets the current broker 
> list from ZK. It then computes a new broker list, a dead broker list, and a 
> live broker list with more detailed broker info. The new and live broker lists 
> are computed by reading the value associated with each of the current brokers 
> twice. If a broker is de-registered in between, these two lists will not be 
> consistent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-3077) Enable KafkaLog4jAppender to work with SASL enabled brokers.

2016-01-11 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-3077.
--
   Resolution: Fixed
Fix Version/s: 0.9.1.0

Issue resolved by pull request 740
[https://github.com/apache/kafka/pull/740]

> Enable KafkaLog4jAppender to work with SASL enabled brokers.
> 
>
> Key: KAFKA-3077
> URL: https://issues.apache.org/jira/browse/KAFKA-3077
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.9.1.0
>
>
> KafkaLog4jAppender is not enhanced to talk to a SASL enabled cluster. This JIRA 
> aims at adding that support, thus enabling users of the log4j appender to 
> publish to a SASL enabled Kafka cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3077: Enable KafkaLog4jAppender to work ...

2016-01-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/740


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3077) Enable KafkaLog4jAppender to work with SASL enabled brokers.

2016-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15093453#comment-15093453
 ] 

ASF GitHub Bot commented on KAFKA-3077:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/740


> Enable KafkaLog4jAppender to work with SASL enabled brokers.
> 
>
> Key: KAFKA-3077
> URL: https://issues.apache.org/jira/browse/KAFKA-3077
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.9.1.0
>
>
> KafkaLog4jAppender is not enhanced to talk to a SASL enabled cluster. This JIRA 
> aims at adding that support, thus enabling users of the log4j appender to 
> publish to a SASL enabled Kafka cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: Kafka 3078

2016-01-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/747


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-3078) Add ducktape tests for KafkaLog4jAppender producing to SASL enabled Kafka cluster

2016-01-11 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-3078.
--
   Resolution: Fixed
Fix Version/s: 0.9.1.0

Issue resolved by pull request 747
[https://github.com/apache/kafka/pull/747]

> Add ducktape tests for KafkaLog4jAppender producing to SASL enabled Kafka 
> cluster
> -
>
> Key: KAFKA-3078
> URL: https://issues.apache.org/jira/browse/KAFKA-3078
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.9.1.0
>
>
> Parent JIRA, KAFKA-3077, enables KafkaLog4jAppender to produce to SASL 
> enabled clusters. This JIRA is to add ducktape system tests to verify that 
> functionality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3077) Enable KafkaLog4jAppender to work with SASL enabled brokers.

2016-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15093458#comment-15093458
 ] 

ASF GitHub Bot commented on KAFKA-3077:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/747


> Enable KafkaLog4jAppender to work with SASL enabled brokers.
> 
>
> Key: KAFKA-3077
> URL: https://issues.apache.org/jira/browse/KAFKA-3077
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.9.1.0
>
>
> KafkaLog4jAppender is not enhanced to talk to a SASL enabled cluster. This JIRA 
> aims at adding that support, thus enabling users of the log4j appender to 
> publish to a SASL enabled Kafka cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #956

2016-01-11 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-3077: Enable KafkaLog4jAppender to work with SASL enabled brokers

[me] KAFKA-3078: Add ducktape tests for KafkaLog4jAppender producing to SASL

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 3e5afbfa0dd4ddfca65fae1f3b2a268ae1ed2025 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 3e5afbfa0dd4ddfca65fae1f3b2a268ae1ed2025
 > git rev-list c9114488b3c266fe60cf96426bfa143b4b8109c0 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson5261244958212892421.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 11.303 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson3233632596213235478.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 14.041 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51