topic's partition have no leader and isr

2014-07-03 Thread 鞠大升
hi, all

I have a topic with 32 partitions; after some reassignment operations, 2
partitions ended up with no leader and no ISR.
---
Topic:org.mobile_nginx  PartitionCount:32   ReplicationFactor:1 Configs:
Topic: org.mobile_nginx Partition: 0    Leader: 3   Replicas: 3 Isr: 3
Topic: org.mobile_nginx Partition: 1    Leader: 4   Replicas: 4 Isr: 4
Topic: org.mobile_nginx Partition: 2    Leader: 5   Replicas: 5 Isr: 5
Topic: org.mobile_nginx Partition: 3    Leader: 6   Replicas: 6 Isr: 6
Topic: org.mobile_nginx Partition: 4    Leader: 3   Replicas: 3 Isr: 3
Topic: org.mobile_nginx Partition: 5    Leader: 4   Replicas: 4 Isr: 4
Topic: org.mobile_nginx Partition: 6    Leader: 5   Replicas: 5 Isr: 5
Topic: org.mobile_nginx Partition: 7    Leader: 6   Replicas: 6 Isr: 6
Topic: org.mobile_nginx Partition: 8    Leader: 3   Replicas: 3 Isr: 3
Topic: org.mobile_nginx Partition: 9    Leader: 4   Replicas: 4 Isr: 4
Topic: org.mobile_nginx Partition: 10   Leader: 2   Replicas: 1 Isr: 2
Topic: org.mobile_nginx Partition: 11   Leader: 2   Replicas: 2 Isr: 2
Topic: org.mobile_nginx Partition: 12   Leader: 3   Replicas: 1 Isr: 3
Topic: org.mobile_nginx Partition: 13   Leader: 2   Replicas: 2 Isr: 2
Topic: org.mobile_nginx Partition: 14   Leader: 4   Replicas: 4 Isr: 4
Topic: org.mobile_nginx Partition: 15   Leader: 2   Replicas: 2 Isr: 2
Topic: org.mobile_nginx Partition: 16   Leader: 4   Replicas: 4 Isr: 4
Topic: org.mobile_nginx Partition: 17   Leader: 5   Replicas: 5 Isr: 5
Topic: org.mobile_nginx Partition: 18   Leader: 6   Replicas: 6 Isr: 6
Topic: org.mobile_nginx Partition: 19   Leader: 5   Replicas: 5 Isr: 5
Topic: org.mobile_nginx Partition: 20   Leader: 2   Replicas: 2 Isr: 2
Topic: org.mobile_nginx Partition: 21   Leader: 3   Replicas: 3 Isr: 3
Topic: org.mobile_nginx Partition: 22   Leader: 4   Replicas: 4 Isr: 4
Topic: org.mobile_nginx Partition: 23   Leader: 5   Replicas: 5 Isr: 5
Topic: org.mobile_nginx Partition: 24   Leader: 6   Replicas: 6 Isr: 6
Topic: org.mobile_nginx Partition: 25   Leader: -1  Replicas: 6,1   Isr:
Topic: org.mobile_nginx Partition: 26   Leader: 2   Replicas: 2 Isr: 2
Topic: org.mobile_nginx Partition: 27   Leader: 3   Replicas: 3 Isr: 3
Topic: org.mobile_nginx Partition: 28   Leader: 4   Replicas: 4 Isr: 4
Topic: org.mobile_nginx Partition: 29   Leader: 5   Replicas: 5 Isr: 5
Topic: org.mobile_nginx Partition: 30   Leader: 6   Replicas: 6 Isr: 6
Topic: org.mobile_nginx Partition: 31   Leader: -1  Replicas: 3,1   Isr:
---
Partition 25 and partition 31 have no leader and no ISR.
No matter how many reassignment or leader-election operations I run, I cannot
reduce the replica count, and no leader has been elected for 4 days.

Anyone have any idea how to resolve this problem?
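For reference, the leaderless partitions can be picked out of the `--describe`
output mechanically. A minimal sketch (sample lines inlined from the output
above; assumes the whitespace-separated format shown there):

```python
# Sketch: find partitions whose Leader field is -1 in
# `kafka-topics.sh --describe` output. A few sample lines are inlined.
describe_output = """\
Topic: org.mobile_nginx Partition: 24 Leader: 6 Replicas: 6 Isr: 6
Topic: org.mobile_nginx Partition: 25 Leader: -1 Replicas: 6,1 Isr:
Topic: org.mobile_nginx Partition: 31 Leader: -1 Replicas: 3,1 Isr:
"""

def leaderless_partitions(describe_text: str) -> list[int]:
    """Return the partition ids whose Leader field is -1."""
    result = []
    for line in describe_text.splitlines():
        fields = line.split()
        if "Leader:" in fields and "Partition:" in fields:
            leader = fields[fields.index("Leader:") + 1]
            if leader == "-1":
                result.append(int(fields[fields.index("Partition:") + 1]))
    return result

print(leaderless_partitions(describe_output))  # -> [25, 31]
```

In practice the real command output would be piped in instead of the inlined
sample.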

-- 
dashengju
+86 13810875910
dashen...@gmail.com


[jira] [Created] (KAFKA-1519) Console consumer: expose configuration option to enable/disable writing the line separator

2014-07-03 Thread Michael Noll (JIRA)
Michael Noll created KAFKA-1519:
---

 Summary: Console consumer: expose configuration option to 
enable/disable writing the line separator
 Key: KAFKA-1519
 URL: https://issues.apache.org/jira/browse/KAFKA-1519
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 0.8.1.1
Reporter: Michael Noll
Assignee: Neha Narkhede
Priority: Minor


The current console consumer includes a {{DefaultMessageFormatter}}, which 
exposes a few user-configurable options which can be set on the command line 
via --property, e.g. "--property line.separator=XYZ".

Unfortunately, the current implementation does not allow the user to completely 
disable writing any such line separator.  However, this functionality would be 
helpful to enable users to capture data "as is" from a Kafka topic to a snapshot 
file.  Capturing data "as is" -- without an artificial line separator -- is 
particularly nice for data in a binary format (including Avro).



*No workaround*

A potential workaround would be to pass an empty string as the property value 
of "line.separator", but this doesn't work in the current implementation.

The following variants throw an "Invalid parser arguments" exception:

{code}
--property line.separator=   # "nothing"
--property line.separator=""   # double quotes
--property line.separator='' # single quotes
{code}

Escape tricks via {{\}} backslash don't work either.

If there actually is a workaround please let me know.

*How to fix*

We can introduce a "print.line" option to enable/disable writing 
"line.separator" similar to how the code already uses "print.key" to 
enable/disable writing "key.separator".

This change is trivial.  To preserve backwards compatibility, the "print.line" 
option would be set to true by default (unlike the "print.key" option, which 
defaults to false).
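A minimal sketch of the proposed write path (illustrative Python only; the
actual formatter is Scala code inside Kafka's console consumer, and the
function name here is hypothetical):

```python
import io

def write_message(out, value: bytes, print_line: bool = True,
                  line_separator: bytes = b"\n") -> None:
    # Sketch of the proposed behavior: always write the message value,
    # and write the line separator only when print.line=true (the default,
    # which preserves today's behavior).
    out.write(value)
    if print_line:
        out.write(line_separator)

buf = io.BytesIO()
write_message(buf, b"payload", print_line=False)  # binary-safe, no separator
print(buf.getvalue())  # -> b'payload'
```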

*Alternatives*

Apart from modifying the built-in {{DefaultMessageFormatter}}, users could of 
course implement their own custom {{MessageFormatter}}.  But given that it's a) 
a trivial change to the {{DefaultMessageFormatter}} and b) a nice user feature 
I'd say changing the built-in {{DefaultMessageFormatter}} would be the better 
approach.  This way, Kafka would support writing data as-is to a file out of 
the box.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (KAFKA-1519) Console consumer: expose configuration option to enable/disable writing the line separator

2014-07-03 Thread Michael Noll (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Noll updated KAFKA-1519:


Issue Type: Improvement  (was: Bug)

> Console consumer: expose configuration option to enable/disable writing the 
> line separator
> --
>
> Key: KAFKA-1519
> URL: https://issues.apache.org/jira/browse/KAFKA-1519
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Affects Versions: 0.8.1.1
>Reporter: Michael Noll
>Assignee: Neha Narkhede
>Priority: Minor
>
> The current console consumer includes a {{DefaultMessageFormatter}}, which 
> exposes a few user-configurable options which can be set on the command line 
> via --property, e.g. "--property line.separator=XYZ".
> Unfortunately, the current implementation does not allow the user to 
> completely disable writing any such line separator.  However, this 
> functionality would be helpful to enable users to capture data "as is" from a 
> Kafka topic to snapshot file.  Capturing data "as is" -- without an 
> artificial line separator -- is particularly nice for data in a binary format 
> (including Avro).
> *No workaround*
> A potential workaround would be to pass an empty string as the property value 
> of "line.separator", but this doesn't work in the current implementation.
> The following variants throw an "Invalid parser arguments" exception:
> {code}
> --property line.separator=   # "nothing"
> --property line.separator="" # double quotes
> --property line.separator='' # single quotes
> {code}
> Escape tricks via a backslash don't work either.
> If there actually is a workaround please let me know.
> *How to fix*
> We can introduce a "print.line" option to enable/disable writing 
> "line.separator" similar to how the code already uses "print.key" to 
> enable/disable writing "key.separator".
> This change is trivial.  To preserve backwards compatibility, the 
> "print.line" option would be set to true by default (unlike the "print.key" 
> option, which defaults to false).
> *Alternatives*
> Apart from modifying the built-in {{DefaultMessageFormatter}}, users could of 
> course implement their own custom {{MessageFormatter}}.  But given that it's 
> a) a trivial change to the {{DefaultMessageFormatter}} and b) a nice user 
> feature I'd say changing the built-in {{DefaultMessageFormatter}} would be 
> the better approach.  This way, Kafka would support writing data as-is to a 
> file out of the box.





[jira] [Updated] (KAFKA-1519) Console consumer: expose configuration option to enable/disable writing the line separator

2014-07-03 Thread Michael Noll (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Noll updated KAFKA-1519:


Description: 
The current console consumer includes a {{DefaultMessageFormatter}}, which 
exposes a few user-configurable options which can be set on the command line 
via --property, e.g. "--property line.separator=XYZ".

Unfortunately, the current implementation does not allow the user to completely 
disable writing any such line separator.  However, this functionality would be 
helpful to enable users to capture data "as is" from a Kafka topic to snapshot 
file.  Capturing data "as is" -- without an artificial line separator -- is 
particularly nice for data in a binary format (including Avro).



*No workaround*

A potential workaround would be to pass an empty string as the property value 
of "line.separator", but this doesn't work in the current implementation.

The following variants throw an "Invalid parser arguments" exception:

{code}
--property line.separator=   # "nothing"
--property line.separator="" # double quotes
--property line.separator='' # single quotes
{code}

Escape tricks via a backslash don't work either.

If there actually is a workaround please let me know.

*How to fix*

We can introduce a "print.line" option to enable/disable writing 
"line.separator" similar to how the code already uses "print.key" to 
enable/disable writing "key.separator".

This change is trivial.  To preserve backwards compatibility, the "print.line" 
option would be set to true by default (unlike the "print.key" option, which 
defaults to false).

*Alternatives*

Apart from modifying the built-in {{DefaultMessageFormatter}}, users could of 
course implement their own custom {{MessageFormatter}}.  But given that it's a) 
a trivial change to the {{DefaultMessageFormatter}} and b) a nice user feature 
I'd say changing the built-in {{DefaultMessageFormatter}} would be the better 
approach.  This way, Kafka would support writing data as-is to a file out of 
the box.

  was:
The current console consumer includes a {{DefaultMessageFormatter}}, which 
exposes a few user-configurable options which can be set on the command line 
via --property, e.g. "--property line.separator=XYZ".

Unfortunately, the current implementation does not allow the user to completely 
disable writing any such line separator.  However, this functionality would be 
helpful to enable users to capture data "as is" from a Kafka topic to snapshot 
file.  Capturing data "as is" -- without an artificial line separator -- is 
particularly nice for data in a binary format (including Avro).



*No workaround*

A potential workaround would be to pass an empty string as the property value 
of "line.separator", but this doesn't work in the current implementation.

The following variants throw an "Invalid parser arguments" exception:

{code}
--property line.separator=   # "nothing"
--property line.separator="" # double quotes
--property line.separator='' # single quotes
{code}

Escape tricks via {{\}} backslash don't work either.

If there actually is a workaround please let me know.

*How to fix*

We can introduce a "print.line" option to enable/disable writing 
"line.separator" similar to how the code already uses "print.key" to 
enable/disable writing "key.separator".

This change is trivial.  To preserve backwards compatibility, the "print.line" 
option would be set to true by default (unlike the "print.key" option, which 
defaults to false).

*Alternatives*

Apart from modifying the built-in {{DefaultMessageFormatter}}, users could of 
course implement their own custom {{MessageFormatter}}.  But given that it's a) 
a trivial change to the {{DefaultMessageFormatter}} and b) a nice user feature 
I'd say changing the built-in {{DefaultMessageFormatter}} would be the better 
approach.  This way, Kafka would support writing data as-is to a file out of 
the box.


> Console consumer: expose configuration option to enable/disable writing the 
> line separator
> --
>
> Key: KAFKA-1519
> URL: https://issues.apache.org/jira/browse/KAFKA-1519
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.1.1
>Reporter: Michael Noll
>Assignee: Neha Narkhede
>Priority: Minor
>
> The current console consumer includes a {{DefaultMessageFormatter}}, which 
> exposes a few user-configurable options which can be set on the command line 
> via --property, e.g. "--property line.separator=XYZ".
> Unfortunately, the current implementation does not allow the user to 
> completely disable writing any such line separator.  However, this 
> functionality would be helpful to enable users to capture data "as is" from a 
> Kafka topic to snapshot file.  

[jira] [Updated] (KAFKA-1519) Console consumer: expose configuration option to enable/disable writing the line separator

2014-07-03 Thread Michael Noll (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Noll updated KAFKA-1519:


Description: 
The current console consumer includes a {{DefaultMessageFormatter}}, which 
exposes a few user-configurable options which can be set on the command line 
via --property, e.g. "--property line.separator=XYZ".

Unfortunately, the current implementation does not allow the user to completely 
disable writing any such line separator.  However, this functionality would be 
helpful to enable users to capture data "as is" from a Kafka topic to snapshot 
file.  Capturing data "as is" -- without an artificial line separator -- is 
particularly nice for data in a binary format (including Avro).



*No workaround*

A potential workaround would be to pass an empty string as the property value 
of "line.separator", but this doesn't work in the current implementation.

The following variants throw an "Invalid parser arguments" exception:

{code}
--property line.separator=   # "nothing"
--property line.separator="" # double quotes
--property line.separator='' # single quotes
{code}

Escape tricks via {{\}} backslash don't work either.

If there actually is a workaround please let me know.

*How to fix*

We can introduce a "print.line" option to enable/disable writing 
"line.separator" similar to how the code already uses "print.key" to 
enable/disable writing "key.separator".

This change is trivial.  To preserve backwards compatibility, the "print.line" 
option would be set to true by default (unlike the "print.key" option, which 
defaults to false).

*Alternatives*

Apart from modifying the built-in {{DefaultMessageFormatter}}, users could of 
course implement their own custom {{MessageFormatter}}.  But given that it's a) 
a trivial change to the {{DefaultMessageFormatter}} and b) a nice user feature 
I'd say changing the built-in {{DefaultMessageFormatter}} would be the better 
approach.  This way, Kafka would support writing data as-is to a file out of 
the box.

  was:
The current console consumer includes a {{DefaultMessageFormatter}}, which 
exposes a few user-configurable options which can be set on the command line 
via --property, e.g. "--property line.separator=XYZ".

Unfortunately, the current implementation does not allow the user to completely 
disable writing any such line separator.  However, this functionality would be 
helpful to enable users to capture data "as is" from a Kafka topic to snapshot 
file.  Capturing data "as is" -- without an artificial line separator -- is 
particularly nice for data in a binary format (including Avro).



*No workaround*

A potential workaround would be to pass an empty string as the property value 
of "line.separator", but this doesn't work in the current implementation.

The following variants throw an "Invalid parser arguments" exception:

{code}
--property line.separator=   # "nothing"
--property line.separator=""   # double quotes
--property line.separator='' # single quotes
{code}

Escape tricks via {{\}} backslash don't work either.

If there actually is a workaround please let me know.

*How to fix*

We can introduce a "print.line" option to enable/disable writing 
"line.separator" similar to how the code already uses "print.key" to 
enable/disable writing "key.separator".

This change is trivial.  To preserve backwards compatibility, the "print.line" 
option would be set to true by default (unlike the "print.key" option, which 
defaults to false).

*Alternatives*

Apart from modifying the built-in {{DefaultMessageFormatter}}, users could of 
course implement their own custom {{MessageFormatter}}.  But given that it's a) 
a trivial change to the {{DefaultMessageFormatter}} and b) a nice user feature 
I'd say changing the built-in {{DefaultMessageFormatter}} would be the better 
approach.  This way, Kafka would support writing data as-is to a file out of 
the box.


> Console consumer: expose configuration option to enable/disable writing the 
> line separator
> --
>
> Key: KAFKA-1519
> URL: https://issues.apache.org/jira/browse/KAFKA-1519
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.1.1
>Reporter: Michael Noll
>Assignee: Neha Narkhede
>Priority: Minor
>
> The current console consumer includes a {{DefaultMessageFormatter}}, which 
> exposes a few user-configurable options which can be set on the command line 
> via --property, e.g. "--property line.separator=XYZ".
> Unfortunately, the current implementation does not allow the user to 
> completely disable writing any such line separator.  However, this 
> functionality would be helpful to enable users to capture data "as is" from a 
> Kafka topic to snapshot file

[jira] [Created] (KAFKA-1520) Tools for contributors

2014-07-03 Thread Evgeny Vereshchagin (JIRA)
Evgeny Vereshchagin created KAFKA-1520:
--

 Summary: Tools for contributors
 Key: KAFKA-1520
 URL: https://issues.apache.org/jira/browse/KAFKA-1520
 Project: Kafka
  Issue Type: Improvement
Reporter: Evgeny Vereshchagin
Priority: Minor


Hi!

It's too hard to contribute.
I spent two days configuring JIRA, Review Board, and Jenkins.
Manual installation of jira-python and RBTools with these 
[instructions|https://cwiki.apache.org/confluence/display/KAFKA/Patch+submission+and+review]
 was painful.

I created a [package|https://pypi.python.org/pypi/kafka-dev-tools/] with the 
kafka-patch-review script. Now all dependencies are installed automatically and 
the script is installed to the right place.

Are there any other tools for automating developer tasks? I can add them to 
the package.





[jira] [Commented] (KAFKA-1519) Console consumer: expose configuration option to enable/disable writing the line separator

2014-07-03 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14051607#comment-14051607
 ] 

Jun Rao commented on KAFKA-1519:


Could you try something like --property="line.separator="? 

> Console consumer: expose configuration option to enable/disable writing the 
> line separator
> --
>
> Key: KAFKA-1519
> URL: https://issues.apache.org/jira/browse/KAFKA-1519
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Affects Versions: 0.8.1.1
>Reporter: Michael Noll
>Assignee: Neha Narkhede
>Priority: Minor
>
> The current console consumer includes a {{DefaultMessageFormatter}}, which 
> exposes a few user-configurable options which can be set on the command line 
> via --property, e.g. "--property line.separator=XYZ".
> Unfortunately, the current implementation does not allow the user to 
> completely disable writing any such line separator.  However, this 
> functionality would be helpful to enable users to capture data "as is" from a 
> Kafka topic to snapshot file.  Capturing data "as is" -- without an 
> artificial line separator -- is particularly nice for data in a binary format 
> (including Avro).
> *No workaround*
> A potential workaround would be to pass an empty string as the property value 
> of "line.separator", but this doesn't work in the current implementation.
> The following variants throw an "Invalid parser arguments" exception:
> {code}
> --property line.separator=   # "nothing"
> --property line.separator="" # double quotes
> --property line.separator='' # single quotes
> {code}
> Escape tricks via a backslash don't work either.
> If there actually is a workaround please let me know.
> *How to fix*
> We can introduce a "print.line" option to enable/disable writing 
> "line.separator" similar to how the code already uses "print.key" to 
> enable/disable writing "key.separator".
> This change is trivial.  To preserve backwards compatibility, the 
> "print.line" option would be set to true by default (unlike the "print.key" 
> option, which defaults to false).
> *Alternatives*
> Apart from modifying the built-in {{DefaultMessageFormatter}}, users could of 
> course implement their own custom {{MessageFormatter}}.  But given that it's 
> a) a trivial change to the {{DefaultMessageFormatter}} and b) a nice user 
> feature I'd say changing the built-in {{DefaultMessageFormatter}} would be 
> the better approach.  This way, Kafka would support writing data as-is to a 
> file out of the box.





[jira] [Commented] (KAFKA-1519) Console consumer: expose configuration option to enable/disable writing the line separator

2014-07-03 Thread Michael Noll (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14051615#comment-14051615
 ] 

Michael Noll commented on KAFKA-1519:
-

Unfortunately, those don't work either:

{code}
$ bin/kafka-console-consumer.sh ... --property="line.separator="
Invalid parser arguments: line.separator=

$ bin/kafka-console-consumer.sh ... --property "line.separator="
Invalid parser arguments: line.separator=
{code}

> Console consumer: expose configuration option to enable/disable writing the 
> line separator
> --
>
> Key: KAFKA-1519
> URL: https://issues.apache.org/jira/browse/KAFKA-1519
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Affects Versions: 0.8.1.1
>Reporter: Michael Noll
>Assignee: Neha Narkhede
>Priority: Minor
>
> The current console consumer includes a {{DefaultMessageFormatter}}, which 
> exposes a few user-configurable options which can be set on the command line 
> via --property, e.g. "--property line.separator=XYZ".
> Unfortunately, the current implementation does not allow the user to 
> completely disable writing any such line separator.  However, this 
> functionality would be helpful to enable users to capture data "as is" from a 
> Kafka topic to snapshot file.  Capturing data "as is" -- without an 
> artificial line separator -- is particularly nice for data in a binary format 
> (including Avro).
> *No workaround*
> A potential workaround would be to pass an empty string as the property value 
> of "line.separator", but this doesn't work in the current implementation.
> The following variants throw an "Invalid parser arguments" exception:
> {code}
> --property line.separator=   # "nothing"
> --property line.separator="" # double quotes
> --property line.separator='' # single quotes
> {code}
> Escape tricks via a backslash don't work either.
> If there actually is a workaround please let me know.
> *How to fix*
> We can introduce a "print.line" option to enable/disable writing 
> "line.separator" similar to how the code already uses "print.key" to 
> enable/disable writing "key.separator".
> This change is trivial.  To preserve backwards compatibility, the 
> "print.line" option would be set to true by default (unlike the "print.key" 
> option, which defaults to false).
> *Alternatives*
> Apart from modifying the built-in {{DefaultMessageFormatter}}, users could of 
> course implement their own custom {{MessageFormatter}}.  But given that it's 
> a) a trivial change to the {{DefaultMessageFormatter}} and b) a nice user 
> feature I'd say changing the built-in {{DefaultMessageFormatter}} would be 
> the better approach.  This way, Kafka would support writing data as-is to a 
> file out of the box.





[jira] [Commented] (KAFKA-1520) Tools for contributors

2014-07-03 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14051619#comment-14051619
 ] 

Jun Rao commented on KAFKA-1520:


Thanks for creating that package. That can be very helpful. I'm not sure where 
the best place to put it is. At the minimum, we can link to it from our website 
or wiki.

Could you add a bit more description to the README to include info such as (1) 
the list of packages installed and (2) the supported OS and the corresponding 
installation command? 

> Tools for contributors
> --
>
> Key: KAFKA-1520
> URL: https://issues.apache.org/jira/browse/KAFKA-1520
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Evgeny Vereshchagin
>Priority: Minor
>
> Hi!
> It's too hard to contribute.
> I spent two days configuring JIRA, Review Board, and Jenkins.
> Manual installation of jira-python and RBTools with these 
> [instructions|https://cwiki.apache.org/confluence/display/KAFKA/Patch+submission+and+review]
>  was painful.
> I created a [package|https://pypi.python.org/pypi/kafka-dev-tools/] with the 
> kafka-patch-review script. Now all dependencies are installed automatically 
> and the script is installed to the right place.
> Are there any other tools for automating developer tasks? I can add them to 
> the package.





Re: topic's partition have no leader and isr

2014-07-03 Thread Guozhang Wang
Did you see any errors in the controller log?

Guozhang


On Thu, Jul 3, 2014 at 2:26 AM, 鞠大升  wrote:

> hi, all
>
> I have a topic with 32 partitions, after some reassign operation, 2
> partitions became to no leader and isr.
>
> ---
> Topic:org.mobile_nginx  PartitionCount:32   ReplicationFactor:1
> Configs:
> Topic: org.mobile_nginx Partition: 0Leader: 3   Replicas: 3
> Isr: 3
> Topic: org.mobile_nginx Partition: 1Leader: 4   Replicas: 4
> Isr: 4
> Topic: org.mobile_nginx Partition: 2Leader: 5   Replicas: 5
> Isr: 5
> Topic: org.mobile_nginx Partition: 3Leader: 6   Replicas: 6
> Isr: 6
> Topic: org.mobile_nginx Partition: 4Leader: 3   Replicas: 3
> Isr: 3
> Topic: org.mobile_nginx Partition: 5Leader: 4   Replicas: 4
> Isr: 4
> Topic: org.mobile_nginx Partition: 6Leader: 5   Replicas: 5
> Isr: 5
> Topic: org.mobile_nginx Partition: 7Leader: 6   Replicas: 6
> Isr: 6
> Topic: org.mobile_nginx Partition: 8Leader: 3   Replicas: 3
> Isr: 3
> Topic: org.mobile_nginx Partition: 9Leader: 4   Replicas: 4
> Isr: 4
> Topic: org.mobile_nginx Partition: 10   Leader: 2   Replicas: 1
> Isr: 2
> Topic: org.mobile_nginx Partition: 11   Leader: 2   Replicas: 2
> Isr: 2
> Topic: org.mobile_nginx Partition: 12   Leader: 3   Replicas: 1
> Isr: 3
> Topic: org.mobile_nginx Partition: 13   Leader: 2   Replicas: 2
> Isr: 2
> Topic: org.mobile_nginx Partition: 14   Leader: 4   Replicas: 4
> Isr: 4
> Topic: org.mobile_nginx Partition: 15   Leader: 2   Replicas: 2
> Isr: 2
> Topic: org.mobile_nginx Partition: 16   Leader: 4   Replicas: 4
> Isr: 4
> Topic: org.mobile_nginx Partition: 17   Leader: 5   Replicas: 5
> Isr: 5
> Topic: org.mobile_nginx Partition: 18   Leader: 6   Replicas: 6
> Isr: 6
> Topic: org.mobile_nginx Partition: 19   Leader: 5   Replicas: 5
> Isr: 5
> Topic: org.mobile_nginx Partition: 20   Leader: 2   Replicas: 2
> Isr: 2
> Topic: org.mobile_nginx Partition: 21   Leader: 3   Replicas: 3
> Isr: 3
> Topic: org.mobile_nginx Partition: 22   Leader: 4   Replicas: 4
> Isr: 4
> Topic: org.mobile_nginx Partition: 23   Leader: 5   Replicas: 5
> Isr: 5
> Topic: org.mobile_nginx Partition: 24   Leader: 6   Replicas: 6
> Isr: 6
> Topic: org.mobile_nginx Partition: 25   Leader: -1  Replicas:
> 6,1   Isr:
> Topic: org.mobile_nginx Partition: 26   Leader: 2   Replicas: 2
> Isr: 2
> Topic: org.mobile_nginx Partition: 27   Leader: 3   Replicas: 3
> Isr: 3
> Topic: org.mobile_nginx Partition: 28   Leader: 4   Replicas: 4
> Isr: 4
> Topic: org.mobile_nginx Partition: 29   Leader: 5   Replicas: 5
> Isr: 5
> Topic: org.mobile_nginx Partition: 30   Leader: 6   Replicas: 6
> Isr: 6
> Topic: org.mobile_nginx Partition: 31   Leader: -1  Replicas:
> 3,1   Isr:
>
> ---
> Partition 25 and partition 31 have no leader and no ISR.
> No matter how many reassignment or leader-election operations I run, I cannot
> reduce the replica count, and no leader has been elected for 4 days.
>
> Anyone have any idea how to resolve this problem?
>
> --
> dashengju
> +86 13810875910
> dashen...@gmail.com
>



-- 
-- Guozhang


Review Request 23266: Fix KAFKA-1515

2014-07-03 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23266/
---

Review request for kafka.


Bugs: KAFKA-1515
https://issues.apache.org/jira/browse/KAFKA-1515


Repository: kafka


Description
---

1. Move the waiting logic out of Metadata into KafkaProducer (KafkaConsumer 
would not wait on metadata fetches, so it will not share this code), and wake 
up the sender when waiting on metadata.
2. Set the refresh timestamp to now instead of 0.
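As an illustration of the "wait with wake-up" pattern described in point 1, a
hypothetical sketch (Python; not Kafka's actual Java code, and the class and
method names are made up for the example):

```python
import threading
import time

class MetadataStub:
    """Hypothetical sketch of blocking on a metadata update with a timeout."""

    def __init__(self) -> None:
        self._cond = threading.Condition()
        self._version = 0

    def update(self) -> None:
        # A metadata response arrived: bump the version and wake all waiters.
        with self._cond:
            self._version += 1
            self._cond.notify_all()

    def await_update(self, last_version: int, timeout_s: float) -> int:
        # Block the caller (e.g. the producer's send path) until the version
        # advances past last_version, or raise after timeout_s.
        deadline = time.monotonic() + timeout_s
        with self._cond:
            while self._version <= last_version:
                remaining = deadline - time.monotonic()
                if remaining <= 0:
                    raise TimeoutError("timed out waiting for metadata update")
                self._cond.wait(remaining)
            return self._version

md = MetadataStub()
threading.Timer(0.05, md.update).start()  # simulate the sender finishing a fetch
print(md.await_update(0, timeout_s=2.0))  # -> 1
```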


Diffs
-

  clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
d85ca30001dc3d6122a890c34092551654315458 
  
clients/src/main/java/org/apache/kafka/clients/producer/internals/Metadata.java 
8890aa2e3ce5fffc159b3c8528138226e8c8cfd3 
  clients/src/test/java/org/apache/kafka/clients/producer/MetadataTest.java 
0d7d04ca5d71d4db22da54ed2153f7c0e10cdf78 
  clients/src/test/resources/log4j.properties 
b1d5b7f2b4091040bdcfb0a60fd5879f45a0 
  core/src/test/resources/log4j.properties 
1b7d5d8f7d5fae7d272849715714781cad05d77b 

Diff: https://reviews.apache.org/r/23266/diff/


Testing
---


Thanks,

Guozhang Wang



Re: Review Request 23266: Fix KAFKA-1515

2014-07-03 Thread Jun Rao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23266/#review47302
---



clients/src/test/resources/log4j.properties


Are those changes intended?


- Jun Rao


On July 3, 2014, 4:32 p.m., Guozhang Wang wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/23266/
> ---
> 
> (Updated July 3, 2014, 4:32 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1515
> https://issues.apache.org/jira/browse/KAFKA-1515
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> 1. Move the waiting logic out of Metadata into KafkaProducer (KafkaConsumer 
> would not wait on fetch metadata, so will not share this code); and wake-up 
> sender upon waiting metadata 2. Set the refresh timestamp to now instead of 0
> 
> 
> Diffs
> -
> 
>   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
> d85ca30001dc3d6122a890c34092551654315458 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/Metadata.java
>  8890aa2e3ce5fffc159b3c8528138226e8c8cfd3 
>   clients/src/test/java/org/apache/kafka/clients/producer/MetadataTest.java 
> 0d7d04ca5d71d4db22da54ed2153f7c0e10cdf78 
>   clients/src/test/resources/log4j.properties 
> b1d5b7f2b4091040bdcfb0a60fd5879f45a0 
>   core/src/test/resources/log4j.properties 
> 1b7d5d8f7d5fae7d272849715714781cad05d77b 
> 
> Diff: https://reviews.apache.org/r/23266/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Guozhang Wang
> 
>



[jira] [Commented] (KAFKA-1519) Console consumer: expose configuration option to enable/disable writing the line separator

2014-07-03 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14051695#comment-14051695
 ] 

Jun Rao commented on KAFKA-1519:


Yes, it is a problem, and what you suggested would fix it. However, the root 
cause seems to be that CommandLineUtils.parseKeyValueArgs() doesn't support an 
empty value for a key. I am wondering if we should just support empty values 
in parseKeyValueArgs().
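For illustration only, here is a minimal Java sketch of the behavior being proposed (the real method lives in Scala, in kafka.utils.CommandLineUtils, and may differ): a key=value parser that splits on the first '=' and accepts an empty value, so `line.separator=` becomes the empty string rather than a parse error.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class KeyValueArgs {
    // Parse args of the form "key=value"; an empty value ("line.separator=") is allowed.
    public static Map<String, String> parse(String... args) {
        Map<String, String> props = new LinkedHashMap<>();
        for (String arg : args) {
            int eq = arg.indexOf('=');
            if (eq < 0)
                throw new IllegalArgumentException("Invalid parser arguments: " + arg);
            // Everything after the first '=' is the value; it may be empty.
            props.put(arg.substring(0, eq), arg.substring(eq + 1));
        }
        return props;
    }

    public static void main(String[] args) {
        Map<String, String> p = parse("line.separator=", "print.key=true");
        System.out.println("line.separator -> '" + p.get("line.separator") + "'");
    }
}
```

With this scheme an empty value needs no new option at all, which is the intuition behind supporting it directly.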


> Console consumer: expose configuration option to enable/disable writing the 
> line separator
> --
>
> Key: KAFKA-1519
> URL: https://issues.apache.org/jira/browse/KAFKA-1519
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Affects Versions: 0.8.1.1
>Reporter: Michael Noll
>Assignee: Neha Narkhede
>Priority: Minor
>
> The current console consumer includes a {{DefaultMessageFormatter}}, which 
> exposes a few user-configurable options which can be set on the command line 
> via --property, e.g. "--property line.separator=XYZ".
> Unfortunately, the current implementation does not allow the user to 
> completely disable writing any such line separator.  However, this 
> functionality would be helpful to enable users to capture data "as is" from a 
> Kafka topic to snapshot file.  Capturing data "as is" -- without an 
> artificial line separator -- is particularly nice for data in a binary format 
> (including Avro).
> *No workaround*
> A potential workaround would be to pass an empty string as the property value 
> of "line.separator", but this doesn't work in the current implementation.
> The following variants throw an "Invalid parser arguments" exception:
> {code}
> --property line.separator=   # "nothing"
> --property line.separator="" # double quotes
> --property line.separator='' # single quotes
> {code}
> Escape tricks via a backslash don't work either.
> If there actually is a workaround please let me know.
> *How to fix*
> We can introduce a "print.line" option to enable/disable writing 
> "line.separator" similar to how the code already uses "print.key" to 
> enable/disable writing "key.separator".
> This change is trivial.  To preserve backwards compatibility, the 
> "print.line" option would be set to true by default (unlike the "print.key" 
> option, which defaults to false).
> *Alternatives*
> Apart from modifying the built-in {{DefaultMessageFormatter}}, users could of 
> course implement their own custom {{MessageFormatter}}.  But given that it's 
> a) a trivial change to the {{DefaultMessageFormatter}} and b) a nice user 
> feature I'd say changing the built-in {{DefaultMessageFormatter}} would be 
> the better approach.  This way, Kafka would support writing data as-is to a 
> file out of the box.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (KAFKA-1515) Wake-up Sender upon blocked on fetching leader metadata

2014-07-03 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-1515:
-

Attachment: KAFKA-1515_2014-07-03_10:19:28.patch

> Wake-up Sender upon blocked on fetching leader metadata
> ---
>
> Key: KAFKA-1515
> URL: https://issues.apache.org/jira/browse/KAFKA-1515
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.9.0
>
> Attachments: KAFKA-1515_2014-07-03_10:19:28.patch
>
>
> Currently the new KafkaProducer will not wake up the sender thread upon 
> forcing metadata fetch, and hence if the sender is polling with a long 
> timeout (e.g. the metadata.age period) this wait will usually timeout and 
> fail.
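The fix can be illustrated with a self-contained sketch (hypothetical class and method names, not the actual KafkaProducer/Sender internals): a sender blocked in a long wait is woken as soon as a metadata update is forced, instead of sleeping out its full timeout.

```java
public class MetadataWakeup {
    private boolean needsUpdate = false;

    // Force a metadata refresh and wake any sender blocked in awaitPoll().
    public synchronized void requestUpdate() {
        needsUpdate = true;
        notifyAll(); // without this, the sender would sleep out its full timeout
    }

    // Sender loop: wait up to timeoutMs, but return early if an update was requested.
    public synchronized boolean awaitPoll(long timeoutMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!needsUpdate) {
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0)
                return false; // timed out without an update request
            wait(remaining);
        }
        needsUpdate = false;
        return true; // woken by requestUpdate()
    }

    public static void main(String[] args) throws Exception {
        MetadataWakeup m = new MetadataWakeup();
        Thread sender = new Thread(() -> {
            try {
                System.out.println("woken early: " + m.awaitPoll(60_000));
            } catch (InterruptedException e) { }
        });
        sender.start();
        Thread.sleep(100);
        m.requestUpdate(); // sender returns well before the 60s timeout
        sender.join();
    }
}
```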





Re: Review Request 23266: Fix KAFKA-1515

2014-07-03 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23266/
---

(Updated July 3, 2014, 5:19 p.m.)


Review request for kafka.


Bugs: KAFKA-1515
https://issues.apache.org/jira/browse/KAFKA-1515


Repository: kafka


Description
---

1. Move the waiting logic out of Metadata into KafkaProducer (the KafkaConsumer 
would not wait on fetching metadata, so it will not share this code), and wake 
up the sender when waiting on metadata. 2. Set the refresh timestamp to now 
instead of 0.


Diffs (updated)
-

  clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
d85ca30001dc3d6122a890c34092551654315458 
  
clients/src/main/java/org/apache/kafka/clients/producer/internals/Metadata.java 
8890aa2e3ce5fffc159b3c8528138226e8c8cfd3 
  clients/src/test/java/org/apache/kafka/clients/producer/MetadataTest.java 
0d7d04ca5d71d4db22da54ed2153f7c0e10cdf78 

Diff: https://reviews.apache.org/r/23266/diff/


Testing
---


Thanks,

Guozhang Wang



[jira] [Commented] (KAFKA-1515) Wake-up Sender upon blocked on fetching leader metadata

2014-07-03 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14051703#comment-14051703
 ] 

Guozhang Wang commented on KAFKA-1515:
--

Updated reviewboard https://reviews.apache.org/r/23266/
 against branch origin/trunk






Re: Review Request 23266: Fix KAFKA-1515

2014-07-03 Thread Guozhang Wang


> On July 3, 2014, 5:05 p.m., Jun Rao wrote:
> > clients/src/test/resources/log4j.properties, lines 15-16
> > 
> >
> > Are those changes intended?

Nope, they are not.


- Guozhang


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23266/#review47302
---


On July 3, 2014, 5:19 p.m., Guozhang Wang wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/23266/
> ---
> 
> (Updated July 3, 2014, 5:19 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1515
> https://issues.apache.org/jira/browse/KAFKA-1515
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> 1. Move the waiting logic out of Metadata into KafkaProducer (KafkaConsumer 
> would not wait on fetch metadata, so will not share this code); and wake-up 
> sender upon waiting metadata 2. Set the refresh timestamp to now instead of 0
> 
> 
> Diffs
> -
> 
>   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
> d85ca30001dc3d6122a890c34092551654315458 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/Metadata.java
>  8890aa2e3ce5fffc159b3c8528138226e8c8cfd3 
>   clients/src/test/java/org/apache/kafka/clients/producer/MetadataTest.java 
> 0d7d04ca5d71d4db22da54ed2153f7c0e10cdf78 
> 
> Diff: https://reviews.apache.org/r/23266/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Guozhang Wang
> 
>



[jira] [Commented] (KAFKA-1519) Console consumer: expose configuration option to enable/disable writing the line separator

2014-07-03 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14051737#comment-14051737
 ] 

Gwen Shapira commented on KAFKA-1519:
-

btw, I can't see any unit tests for the command line utils.
This could be a good excuse to add a few for parseKeyValueArgs.

Then again, maybe I'm looking in the wrong place.






[jira] [Commented] (KAFKA-1519) Console consumer: expose configuration option to enable/disable writing the line separator

2014-07-03 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14051714#comment-14051714
 ] 

Gwen Shapira commented on KAFKA-1519:
-

Just supporting an empty value seems far more intuitive than adding a new 
option. Not to mention that it will probably be useful in other scenarios.






Producer Graceful Shutdown issue in Container (Kafka version 0.8.x.x)

2014-07-03 Thread Bhavesh Mistry
Hi Kafka Team,

We are running multiple webapps in a Tomcat container, and we have producers
managed by a ServletContextListener (lifecycle). On contextInitialized we
create the producer, and on contextDestroyed we call producer.close(), but the
underlying metrics library does not shut down, so we have a thread leak. I had
to call Metrics.defaultRegistry().shutdown() to resolve this. Is this a known
issue? I know the metrics library has a JVM shutdown hook, but it will not be
invoked, since the container thread is un-deploying the web app: the class
loader goes away, and the leaked thread can no longer find the underlying
Kafka classes. Because of this, Tomcat does not shut down gracefully.

Are you planning to unregister metrics when Producer.close() is called, or to
shut down the metrics pool for the client.id?


SEVERE: The web application [  ] appears to have started a thread named [
*metrics-meter-tick-thread-1*] but has failed to stop it. This is very
likely to create a memory leak.
SEVERE: The web application [] appears to have started a thread named [
*metrics-meter-tick-thread-2*] but has failed to stop it. This is very
likely to create a memory leak.

Thanks,

Bhavesh
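The leak pattern above can be sketched in plain Java. The Kafka producer and the metrics library's "metrics-meter-tick-thread" pool are replaced here by a hypothetical stand-in scheduled executor, and the listener methods are simplified (no actual servlet API): a non-daemon thread that nobody shuts down keeps the webapp's classloader pinned, and an explicit shutdown in contextDestroyed (analogous to Metrics.defaultRegistry().shutdown()) is what fixes it.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ProducerLifecycle {
    // Stand-in for the metrics library's non-daemon tick-thread pool.
    private final ScheduledExecutorService metricsPool = Executors.newScheduledThreadPool(1, r -> {
        Thread t = new Thread(r, "metrics-meter-tick-thread-1");
        t.setDaemon(false); // non-daemon: prevents a clean container stop if leaked
        return t;
    });

    public void contextInitialized() {
        metricsPool.scheduleAtFixedRate(() -> { /* tick meters */ }, 0, 5, TimeUnit.SECONDS);
    }

    // producer.close() alone would leave metricsPool running; shut it down explicitly.
    public void contextDestroyed() throws InterruptedException {
        metricsPool.shutdown();
        metricsPool.awaitTermination(5, TimeUnit.SECONDS);
    }

    public boolean stopped() {
        return metricsPool.isTerminated();
    }

    public static void main(String[] args) throws Exception {
        ProducerLifecycle app = new ProducerLifecycle();
        app.contextInitialized();
        app.contextDestroyed();
        System.out.println("metrics pool terminated: " + app.stopped());
    }
}
```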


[jira] [Commented] (KAFKA-1519) Console consumer: expose configuration option to enable/disable writing the line separator

2014-07-03 Thread Michael Noll (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14051783#comment-14051783
 ] 

Michael Noll commented on KAFKA-1519:
-

Btw, for others reading this ticket:  In Kafka 0.8.1.1 -- which is listed as 
the "Affects version" -- the parsing of key-value args for the console consumer 
happens in {{MessageFormatter.tryParseFormatterArgs()}} 
({{ConsoleConsumer.scala}}).  In Kafka trunk, this functionality was moved to 
{{CommandLineUtils.parseKeyValueArgs()}} ({{CommandLineUtils.scala}}) as Jun 
pointed out.






[jira] [Commented] (KAFKA-1519) Console consumer: expose configuration option to enable/disable writing the line separator

2014-07-03 Thread Michael Noll (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14051790#comment-14051790
 ] 

Michael Noll commented on KAFKA-1519:
-

Supporting an empty value is of course another option.  That's fine with me, 
too.

That said I'm not sure whether it's actually more intuitive (it feels similar 
to special-purposing "null" instead of returning "None"), but I'm a small 
sample size. :-)

I'd have two questions:

* Where would one find unit tests, as Gwen mentioned, for the command line 
parsing?  I did a quick grep search of the code* but found none.
* Would this approach (support empty value) break any backwards compatibility?  
This is difficult to answer for me without more intimate code knowledge and 
because there don't seem to be any unit tests yet, see above.


*grep'ed with:

{code}
$ find . -type f | xargs grep parseKeyValueArgs
./core/src/main/scala/kafka/tools/ConsoleConsumer.scala:val formatterArgs = 
CommandLineUtils.parseKeyValueArgs(options.valuesOf(messageFormatterArgOpt))
./core/src/main/scala/kafka/tools/ConsoleProducer.scala:val cmdLineProps = 
CommandLineUtils.parseKeyValueArgs(options.valuesOf(propertyOpt))
./core/src/main/scala/kafka/tools/ReplayLogProducer.scala:val producerProps 
= CommandLineUtils.parseKeyValueArgs(options.valuesOf(propertyOpt))
./core/src/main/scala/kafka/tools/SimpleConsumerShell.scala:val 
formatterArgs = 
CommandLineUtils.parseKeyValueArgs(options.valuesOf(messageFormatterArgOpt))
./core/src/main/scala/kafka/utils/CommandLineUtils.scala:  def 
parseKeyValueArgs(args: Iterable[String]): Properties = {
{code}






[jira] Subscription: outstanding kafka patches

2014-07-03 Thread jira
Issue Subscription
Filter: outstanding kafka patches (96 issues)
The list of outstanding kafka patches
Subscriber: kafka-mailing-list

Key Summary
KAFKA-1515  Wake-up Sender upon blocked on fetching leader metadata
https://issues.apache.org/jira/browse/KAFKA-1515
KAFKA-1512  Limit the maximum number of connections per ip address
https://issues.apache.org/jira/browse/KAFKA-1512
KAFKA-1509  Restart of destination broker after unreplicated partition move 
leaves partitions without leader
https://issues.apache.org/jira/browse/KAFKA-1509
KAFKA-1500  adding new consumer requests using the new protocol
https://issues.apache.org/jira/browse/KAFKA-1500
KAFKA-1498  new producer performance and bug improvements
https://issues.apache.org/jira/browse/KAFKA-1498
KAFKA-1496  Using batch message in sync producer only sends the first message 
if we use a Scala Stream as the argument 
https://issues.apache.org/jira/browse/KAFKA-1496
KAFKA-1481  Stop using dashes AND underscores as separators in MBean names
https://issues.apache.org/jira/browse/KAFKA-1481
KAFKA-1477  add authentication layer and initial JKS x509 implementation for 
brokers, producers and consumer for network communication
https://issues.apache.org/jira/browse/KAFKA-1477
KAFKA-1475  Kafka consumer stops LeaderFinder/FetcherThreads, but application 
does not know
https://issues.apache.org/jira/browse/KAFKA-1475
KAFKA-1471  Add Producer Unit Tests for LZ4 and LZ4HC compression
https://issues.apache.org/jira/browse/KAFKA-1471
KAFKA-1468  Improve perf tests
https://issues.apache.org/jira/browse/KAFKA-1468
KAFKA-1460  NoReplicaOnlineException: No replica for partition
https://issues.apache.org/jira/browse/KAFKA-1460
KAFKA-1450  check invalid leader in a more robust way
https://issues.apache.org/jira/browse/KAFKA-1450
KAFKA-1430  Purgatory redesign
https://issues.apache.org/jira/browse/KAFKA-1430
KAFKA-1394  Ensure last segment isn't deleted on expiration when there are 
unflushed messages
https://issues.apache.org/jira/browse/KAFKA-1394
KAFKA-1372  Upgrade to Gradle 1.10
https://issues.apache.org/jira/browse/KAFKA-1372
KAFKA-1367  Broker topic metadata not kept in sync with ZooKeeper
https://issues.apache.org/jira/browse/KAFKA-1367
KAFKA-1351  String.format is very expensive in Scala
https://issues.apache.org/jira/browse/KAFKA-1351
KAFKA-1343  Kafka consumer iterator thread stalls
https://issues.apache.org/jira/browse/KAFKA-1343
KAFKA-1329  Add metadata fetch and refresh functionality to the consumer
https://issues.apache.org/jira/browse/KAFKA-1329
KAFKA-1324  Debian packaging
https://issues.apache.org/jira/browse/KAFKA-1324
KAFKA-1303  metadata request in the new producer can be delayed
https://issues.apache.org/jira/browse/KAFKA-1303
KAFKA-1300  Added WaitForReplaction admin tool.
https://issues.apache.org/jira/browse/KAFKA-1300
KAFKA-1235  Enable server to indefinitely retry on controlled shutdown
https://issues.apache.org/jira/browse/KAFKA-1235
KAFKA-1234  All kafka-run-class.sh to source in user config file (to set env 
vars like KAFKA_OPTS)
https://issues.apache.org/jira/browse/KAFKA-1234
KAFKA-1230  shell script files under bin don't work with cygwin (bash on 
windows)
https://issues.apache.org/jira/browse/KAFKA-1230
KAFKA-1215  Rack-Aware replica assignment option
https://issues.apache.org/jira/browse/KAFKA-1215
KAFKA-1207  Launch Kafka from within Apache Mesos
https://issues.apache.org/jira/browse/KAFKA-1207
KAFKA-1206  allow Kafka to start from a resource negotiator system
https://issues.apache.org/jira/browse/KAFKA-1206
KAFKA-1194  The kafka broker cannot delete the old log files after the 
configured time
https://issues.apache.org/jira/browse/KAFKA-1194
KAFKA-1190  create a draw performance graph script
https://issues.apache.org/jira/browse/KAFKA-1190
KAFKA-1180  WhiteList topic filter gets a NullPointerException on complex Regex
https://issues.apache.org/jira/browse/KAFKA-1180
KAFKA-1173  Using Vagrant to get up and running with Apache Kafka
https://issues.apache.org/jira/browse/KAFKA-1173
KAFKA-1150  Fetch on a replicated topic does not return as soon as possible
https://issues.apache.org/jira/browse/KAFKA-1150
KAFKA-1147  Consumer socket timeout should be greater than fetch max wait
https://issues.apache.org/jira/browse/KAFKA-1147
KAFKA-1145  Broker fail to sync after restart
https://issues.apache.org/jira/browse/KAFKA-1145
KAFKA-1144  commitOffsets can be passed the offsets to commit
https://issues.apache.org/jira/browse/KAFKA-1144
KAFKA-1130  "log.dirs" is a confusing property name
https://is

[jira] [Commented] (KAFKA-1512) Limit the maximum number of connections per ip address

2014-07-03 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14051899#comment-14051899
 ] 

Jay Kreps commented on KAFKA-1512:
--

A proposal from the LI ops team is to also add an override for this so you can 
have custom limits for ips if you want 
  max.connections.per.ip.overrides=192.168.1.1:5, 192.168.1.2:, 192.168.1.3:45
If no objections I will implement this too.
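A sketch of how such an override string could be interpreted (hypothetical parsing, written in Java for illustration; the class and method names are not the actual Kafka implementation, and entries with a missing limit, like the `192.168.1.2:` in the proposal, are not handled here):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ConnectionQuotas {
    // Parse the proposed "ip:limit, ip:limit" override string into per-IP limits.
    public static Map<String, Integer> parseOverrides(String config) {
        Map<String, Integer> limits = new LinkedHashMap<>();
        if (config == null || config.trim().isEmpty())
            return limits;
        for (String entry : config.split(",")) {
            String pair = entry.trim();
            int colon = pair.lastIndexOf(':'); // split on the last ':' only
            limits.put(pair.substring(0, colon), Integer.parseInt(pair.substring(colon + 1)));
        }
        return limits;
    }

    // Effective limit for an address: the per-IP override if present, else the global default.
    public static int limitFor(Map<String, Integer> overrides, int defaultLimit, String ip) {
        return overrides.getOrDefault(ip, defaultLimit);
    }

    public static void main(String[] args) {
        Map<String, Integer> o = parseOverrides("192.168.1.1:5, 192.168.1.3:45");
        System.out.println(limitFor(o, Integer.MAX_VALUE, "192.168.1.1"));
        System.out.println(limitFor(o, Integer.MAX_VALUE, "10.0.0.1"));
    }
}
```

Under this reading, an override of 0 for an IP blocks it entirely, and a global default of 0 plus per-IP overrides would whitelist only the listed addresses.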

> Limit the maximum number of connections per ip address
> --
>
> Key: KAFKA-1512
> URL: https://issues.apache.org/jira/browse/KAFKA-1512
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Jay Kreps
>Assignee: Jay Kreps
> Attachments: KAFKA-1512.patch, KAFKA-1512.patch
>
>
> To protect against client connection leaks add a new configuration
>   max.connections.per.ip
> that causes the SocketServer to enforce a limit on the maximum number of 
> connections from each InetAddress instance. For backwards compatibility this 
> will default to 2 billion.





[jira] [Comment Edited] (KAFKA-1512) Limit the maximum number of connections per ip address

2014-07-03 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14051899#comment-14051899
 ] 

Jay Kreps edited comment on KAFKA-1512 at 7/3/14 8:46 PM:
--

A proposal from the LI ops team is to also add an override for this so you can 
have custom limits for ips if you want:
{code}
  max.connections.per.ip.overrides=192.168.1.1:5, 192.168.1.2:, 192.168.1.3:45
{code}
If no objections I will implement this too.


was (Author: jkreps):
A proposal from the LI ops team is to also add an override for this so you can 
have custom limits for ips if you want 
  max.connections.per.ip.overrides=192.168.1.1:5, 192.168.1.2:, 192.168.1.3:45
If no objections I will implement this too.






[jira] [Commented] (KAFKA-1519) Console consumer: expose configuration option to enable/disable writing the line separator

2014-07-03 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14051909#comment-14051909
 ] 

Jay Kreps commented on KAFKA-1519:
--

I would also vote for supporting an empty separator using the existing 
argument. It would be great to add kafka.utils.CommandLineUtilsTest with a test 
to cover that method.

I suspect that there would not be any backwards compatibility issues since that 
argument is not currently accepted and it would be unlikely anyone would depend 
on the tool rejecting that argument.






[jira] [Commented] (KAFKA-1512) Limit the maximum number of connections per ip address

2014-07-03 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14051914#comment-14051914
 ] 

Gwen Shapira commented on KAFKA-1512:
-

It sounds like I can completely block specific IPs with:

max.connections.per.ip.overrides=192.168.1.1:0

Or selectively allow only specific IP to connect with:
max.connections.per.ip=0
max.connections.per.ip.overrides=192.168.1.1:20

Not an objection, just checking my understanding of this feature.
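Both readings above can be checked against a small model of the override logic. This is a stand-alone sketch under the assumption that an entry in max.connections.per.ip.overrides takes precedence over the default limit; the real SocketServer code may differ:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of per-IP connection limits: "overrides" maps an address to
// its own limit ("host:count" pairs, comma-separated), and every other
// address falls back to the default maximum.
class ConnectionQuotas {
    final int defaultMax;
    final Map<String, Integer> overrides = new HashMap<>();
    final Map<String, Integer> counts = new HashMap<>();

    ConnectionQuotas(int defaultMax, String overrideProp) {
        this.defaultMax = defaultMax;
        for (String pair : overrideProp.split(",")) {
            if (pair.isEmpty()) continue;
            int sep = pair.lastIndexOf(':');
            overrides.put(pair.substring(0, sep),
                          Integer.parseInt(pair.substring(sep + 1)));
        }
    }

    // Returns false (connection rejected) once the address is at its limit.
    boolean tryConnect(String ip) {
        int max = overrides.getOrDefault(ip, defaultMax);
        int current = counts.getOrDefault(ip, 0);
        if (current >= max) return false;
        counts.put(ip, current + 1);
        return true;
    }
}
```

Under this model, an override of 0 blocks an address entirely, and a default of 0 plus a positive override admits only the listed address, matching the two scenarios described.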

> Limit the maximum number of connections per ip address
> --
>
> Key: KAFKA-1512
> URL: https://issues.apache.org/jira/browse/KAFKA-1512
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Jay Kreps
>Assignee: Jay Kreps
> Attachments: KAFKA-1512.patch, KAFKA-1512.patch
>
>
> To protect against client connection leaks add a new configuration
>   max.connections.per.ip
> that causes the SocketServer to enforce a limit on the maximum number of 
> connections from each InetAddress instance. For backwards compatibility this 
> will default to 2 billion.





[jira] [Commented] (KAFKA-1520) Tools for contributors

2014-07-03 Thread Evgeny Vereshchagin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14051917#comment-14051917
 ] 

Evgeny Vereshchagin commented on KAFKA-1520:


It's a demo version, and it's too early to document it :)
And maybe it's the wrong approach.

Theoretically you can install it on any platform that supports Python 2.7.
The installation instructions are the same as 
[here|https://www.reviewboard.org/docs/rbtools/0.6/#installation]

That package depends on the tools required for the kafka-patch-review script: 
RBTools and jira-python.

I want to add features to automate, for example, these 
[steps|https://cwiki.apache.org/confluence/display/KAFKA/Patch+submission+and+review#Patchsubmissionandreview-Simplecontributorworkflow],
 because manual branch management and rebasing are too magical for most people.

But I don't understand the workflow very well yet.

And maybe some Kafka developers have useful Python/Bash/Ruby helper scripts or 
[Vagrant|http://www.vagrantup.com/] files for creating a development environment.


> Tools for contributors
> --
>
> Key: KAFKA-1520
> URL: https://issues.apache.org/jira/browse/KAFKA-1520
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Evgeny Vereshchagin
>Priority: Minor
>
> Hi!
> It's too hard to contribute.
> I spent two days configuring JIRA, Review Board, and Jenkins.
> Manual installation of jira-python and RBTools with these 
> [instructions|https://cwiki.apache.org/confluence/display/KAFKA/Patch+submission+and+review]
>  was awful.
> I created a [package|https://pypi.python.org/pypi/kafka-dev-tools/] with the 
> kafka-patch-review script. Now all dependencies install automatically and the 
> script installs to the required place.
> Are there any tools for automating developer tasks? I can add them to the 
> package.





[jira] [Commented] (KAFKA-1520) Tools for contributors

2014-07-03 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14051948#comment-14051948
 ] 

Jay Kreps commented on KAFKA-1520:
--

Hey [~evvers],

The intention of the review tool was to help regular committers easily deal with 
the Apache tools, which aren't themselves integrated (JIRA, RB, etc.). If you just 
want to jump in and contribute a patch, you should not need to do that. Do we 
have instructions somewhere that seem to imply you need the review tools all 
set up to send us a patch? If so, we should probably revise them...

> Tools for contributors
> --
>
> Key: KAFKA-1520
> URL: https://issues.apache.org/jira/browse/KAFKA-1520
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Evgeny Vereshchagin
>Priority: Minor
>
> Hi!
> It's too hard to contribute.
> I spent two days configuring JIRA, Review Board, and Jenkins.
> Manual installation of jira-python and RBTools with these 
> [instructions|https://cwiki.apache.org/confluence/display/KAFKA/Patch+submission+and+review]
>  was awful.
> I created a [package|https://pypi.python.org/pypi/kafka-dev-tools/] with the 
> kafka-patch-review script. Now all dependencies install automatically and the 
> script installs to the required place.
> Are there any tools for automating developer tasks? I can add them to the 
> package.





[jira] [Commented] (KAFKA-1512) Limit the maximum number of connections per ip address

2014-07-03 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14051951#comment-14051951
 ] 

Jay Kreps commented on KAFKA-1512:
--

Yes, I hadn't thought of that. Disabling connections could potentially be 
useful. The intended use was actually the other way around: default 
most things to something reasonable like 10, but have a way to whitelist some 
IPs to have unlimited connections.

The background here is that we were previously having clients bootstrap 
metadata through a VIP (which appears to the Kafka nodes as a single IP). We 
just had an issue where a 200-node cluster that uses Kafka started creating and 
leaking connections through the VIP, which brought down a big shared cluster. So 
we thought we should have some limits. The hope was to change the VIP to DNS 
round-robin and gradually migrate the clients to that. In the meantime we 
thought it would be useful to be able to enforce the limit but whitelist the 
VIP with unlimited connections.

Thinking about this, maybe it is a little crazy to hard-code IP/host names in 
config?
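The whitelisting setup described above might look roughly like this in the broker config; the VIP address is a placeholder, and 2147483647 (Int.MaxValue, the "2 billion" default from the issue) stands in for "unlimited":

```
# Sensible default for ordinary clients
max.connections.per.ip=10
# Effectively unlimited connections for the bootstrap VIP (hypothetical address)
max.connections.per.ip.overrides=10.1.2.3:2147483647
```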

> Limit the maximum number of connections per ip address
> --
>
> Key: KAFKA-1512
> URL: https://issues.apache.org/jira/browse/KAFKA-1512
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Jay Kreps
>Assignee: Jay Kreps
> Attachments: KAFKA-1512.patch, KAFKA-1512.patch
>
>
> To protect against client connection leaks add a new configuration
>   max.connections.per.ip
> that causes the SocketServer to enforce a limit on the maximum number of 
> connections from each InetAddress instance. For backwards compatibility this 
> will default to 2 billion.





[jira] [Commented] (KAFKA-1520) Tools for contributors

2014-07-03 Thread Evgeny Vereshchagin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14051960#comment-14051960
 ] 

Evgeny Vereshchagin commented on KAFKA-1520:


{quote}
Do we have instructions somewhere that seem to imply you need the review tools 
all set up to send us a patch?
{quote}

[~jkreps], [here|http://kafka.apache.org/contributing.html]:
{quote}
You can find some help using git to contribute, rebase, and create patches here.
{quote}
And link to [Kafka patch review 
tool|https://cwiki.apache.org/confluence/display/KAFKA/Patch+submission+and+review]

> Tools for contributors
> --
>
> Key: KAFKA-1520
> URL: https://issues.apache.org/jira/browse/KAFKA-1520
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Evgeny Vereshchagin
>Priority: Minor
>
> Hi!
> It's too hard to contribute.
> I spent two days configuring JIRA, Review Board, and Jenkins.
> Manual installation of jira-python and RBTools with these 
> [instructions|https://cwiki.apache.org/confluence/display/KAFKA/Patch+submission+and+review]
>  was awful.
> I created a [package|https://pypi.python.org/pypi/kafka-dev-tools/] with the 
> kafka-patch-review script. Now all dependencies install automatically and the 
> script installs to the required place.
> Are there any tools for automating developer tasks? I can add them to the 
> package.





Re: Review Request 23208: Patch for KAFKA-1512

2014-07-03 Thread Jay Kreps

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23208/
---

(Updated July 3, 2014, 10:18 p.m.)


Review request for kafka.


Bugs: KAFKA-1512
https://issues.apache.org/jira/browse/KAFKA-1512


Repository: kafka


Description
---

KAFKA-1512 Add per-ip connection limits.


Diffs (updated)
-

  core/src/main/scala/kafka/network/SocketServer.scala 
4976d9c3a66bc965f5870a0736e21c7b32650bab 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
ef75b67b67676ae5b8931902cbc8c0c2cc72c0d3 
  core/src/main/scala/kafka/server/KafkaServer.scala 
c22e51e0412843ec993721ad3230824c0aadd2ba 
  core/src/test/scala/unit/kafka/network/SocketServerTest.scala 
1c492de8fde6582ca2342842a551739575d1f46c 

Diff: https://reviews.apache.org/r/23208/diff/


Testing
---


Thanks,

Jay Kreps



[jira] [Commented] (KAFKA-1512) Limit the maximum number of connections per ip address

2014-07-03 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14051973#comment-14051973
 ] 

Jay Kreps commented on KAFKA-1512:
--

Updated reviewboard https://reviews.apache.org/r/23208/
 against branch trunk

> Limit the maximum number of connections per ip address
> --
>
> Key: KAFKA-1512
> URL: https://issues.apache.org/jira/browse/KAFKA-1512
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Jay Kreps
>Assignee: Jay Kreps
> Attachments: KAFKA-1512.patch, KAFKA-1512.patch, 
> KAFKA-1512_2014-07-03_15:17:55.patch
>
>
> To protect against client connection leaks add a new configuration
>   max.connections.per.ip
> that causes the SocketServer to enforce a limit on the maximum number of 
> connections from each InetAddress instance. For backwards compatibility this 
> will default to 2 billion.





[jira] [Updated] (KAFKA-1512) Limit the maximum number of connections per ip address

2014-07-03 Thread Jay Kreps (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Kreps updated KAFKA-1512:
-

Attachment: KAFKA-1512_2014-07-03_15:17:55.patch

> Limit the maximum number of connections per ip address
> --
>
> Key: KAFKA-1512
> URL: https://issues.apache.org/jira/browse/KAFKA-1512
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Jay Kreps
>Assignee: Jay Kreps
> Attachments: KAFKA-1512.patch, KAFKA-1512.patch, 
> KAFKA-1512_2014-07-03_15:17:55.patch
>
>
> To protect against client connection leaks add a new configuration
>   max.connections.per.ip
> that causes the SocketServer to enforce a limit on the maximum number of 
> connections from each InetAddress instance. For backwards compatibility this 
> will default to 2 billion.





Re: Review Request 23266: Fix KAFKA-1515

2014-07-03 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23266/
---

(Updated July 3, 2014, 11:42 p.m.)


Review request for kafka.


Bugs: KAFKA-1515
https://issues.apache.org/jira/browse/KAFKA-1515


Repository: kafka


Description
---

1. Move the waiting logic out of Metadata into KafkaProducer (KafkaConsumer 
would not wait on fetching metadata, so it will not share this code), and wake 
up the sender when waiting on metadata.
2. Set the refresh timestamp to now instead of 0.


Diffs (updated)
-

  clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
d21f9225539b070f9b50b7a76601d80b83daf7ef 
  clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
d85ca30001dc3d6122a890c34092551654315458 
  
clients/src/main/java/org/apache/kafka/clients/producer/internals/Metadata.java 
8890aa2e3ce5fffc159b3c8528138226e8c8cfd3 
  clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
37b9d1a462d42b811fffd2af3c418e3a9179f00f 
  clients/src/test/java/org/apache/kafka/clients/producer/MetadataTest.java 
0d7d04ca5d71d4db22da54ed2153f7c0e10cdf78 

Diff: https://reviews.apache.org/r/23266/diff/


Testing
---


Thanks,

Guozhang Wang



[jira] [Commented] (KAFKA-1515) Wake-up Sender upon blocked on fetching leader metadata

2014-07-03 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14052007#comment-14052007
 ] 

Guozhang Wang commented on KAFKA-1515:
--

Updated reviewboard https://reviews.apache.org/r/23266/
 against branch origin/trunk

> Wake-up Sender upon blocked on fetching leader metadata
> ---
>
> Key: KAFKA-1515
> URL: https://issues.apache.org/jira/browse/KAFKA-1515
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.9.0
>
> Attachments: KAFKA-1515_2014-07-03_10:19:28.patch, 
> KAFKA-1515_2014-07-03_16:43:05.patch
>
>
> Currently the new KafkaProducer will not wake up the sender thread upon 
> forcing a metadata fetch, and hence if the sender is polling with a long 
> timeout (e.g. the metadata.age period) this wait will usually time out and 
> fail.





[jira] [Updated] (KAFKA-1515) Wake-up Sender upon blocked on fetching leader metadata

2014-07-03 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-1515:
-

Attachment: KAFKA-1515_2014-07-03_16:43:05.patch

> Wake-up Sender upon blocked on fetching leader metadata
> ---
>
> Key: KAFKA-1515
> URL: https://issues.apache.org/jira/browse/KAFKA-1515
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.9.0
>
> Attachments: KAFKA-1515_2014-07-03_10:19:28.patch, 
> KAFKA-1515_2014-07-03_16:43:05.patch
>
>
> Currently the new KafkaProducer will not wake up the sender thread upon 
> forcing a metadata fetch, and hence if the sender is polling with a long 
> timeout (e.g. the metadata.age period) this wait will usually time out and 
> fail.





[jira] [Created] (KAFKA-1521) Producer Graceful Shutdown issue in Container (Kafka version 0.8.x.x)

2014-07-03 Thread Bravesh Mistry (JIRA)
Bravesh Mistry created KAFKA-1521:
-

 Summary: Producer Graceful Shutdown issue in Container (Kafka 
version 0.8.x.x)
 Key: KAFKA-1521
 URL: https://issues.apache.org/jira/browse/KAFKA-1521
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 0.8.1.1, 0.8.0
 Environment: Tomcat Container or Any other J2EE container
Reporter: Bravesh Mistry
Assignee: Jun Rao
Priority: Minor


Hi Kafka Team,

We are running multiple webapps in a Tomcat container, and we have producers 
which are managed by a ServletContextListener (lifecycle).  Upon 
contextInitialized we create the producer, and on contextDestroyed we call 
producer.close(), but the underlying metrics library does not shut down.  So we 
have a thread leak due to this issue.  I had to call 
Metrics.defaultRegistry().shutdown() to resolve it.  Is this a known issue?  
I know the metrics library has a JVM shutdown hook, but it will not be invoked, 
since the container thread is un-deploying the web app, the class loader goes 
away, and the leaking thread cannot find the underlying Kafka class.  Because 
of this, Tomcat does not shut down gracefully.

Are you planning to un-register metrics when Producer.close() is called, or 
shut down the metrics pool for the client.id?

Here are the logs:

SEVERE: The web application [  ] appears to have started a thread named 
[metrics-meter-tick-thread-1] but has failed to stop it. This is very likely to 
create a memory leak.
SEVERE: The web application [] appears to have started a thread named 
[metrics-meter-tick-thread-2] but has failed to stop it. This is very likely to 
create a memory leak.

Thanks,

Bhavesh





[jira] [Commented] (KAFKA-1521) Producer Graceful Shutdown issue in Container (Kafka version 0.8.x.x)

2014-07-03 Thread Bravesh Mistry (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14052056#comment-14052056
 ] 

Bravesh Mistry commented on KAFKA-1521:
---

Here is confirmation from [~guozhang]  :

Guozhang Wang   Thu, Jul 3, 2014 at 3:47 PM
To: Bhavesh Mistry 

This is indeed an issue. Could you file a jira?


On Thu, Jul 3, 2014 at 3:43 PM, Bhavesh Mistry  
wrote:

Hi Guozhang,

Is this expected?  Should I file an issue?  Or is this a known issue?

Thanks,

Bhavesh


On Thu, Jul 3, 2014 at 11:40 AM, Bhavesh Mistry 
 wrote:

Hi Guozhang,

Thank you for your quick response.

This is version 0.8.0 producer package kafka.javaapi.producer.Producer.

Thanks,
Bhavesh


On Thu, Jul 3, 2014 at 11:04 AM, Guozhang Wang  
wrote:

Hi Bhavesh,

Is this the new producer under clients or the original producer 
under core?

Guozhang



> Producer Graceful Shutdown issue in Container (Kafka version 0.8.x.x)
> -
>
> Key: KAFKA-1521
> URL: https://issues.apache.org/jira/browse/KAFKA-1521
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.0, 0.8.1.1
> Environment: Tomcat Container or Any other J2EE container
>Reporter: Bravesh Mistry
>Assignee: Jun Rao
>Priority: Minor
>
> Hi Kafka Team,
> We are running multiple webapps in a Tomcat container, and we have producers 
> which are managed by a ServletContextListener (lifecycle).  Upon 
> contextInitialized we create the producer, and on contextDestroyed we call 
> producer.close(), but the underlying metrics library does not shut down.  So 
> we have a thread leak due to this issue.  I had to call 
> Metrics.defaultRegistry().shutdown() to resolve it.  Is this a known issue?  
> I know the metrics library has a JVM shutdown hook, but it will not be 
> invoked, since the container thread is un-deploying the web app, the class 
> loader goes away, and the leaking thread cannot find the underlying Kafka 
> class.  Because of this, Tomcat does not shut down gracefully.
> Are you planning to un-register metrics when Producer.close() is called, or 
> shut down the metrics pool for the client.id?
> Here are the logs:
> SEVERE: The web application [  ] appears to have started a thread named 
> [metrics-meter-tick-thread-1] but has failed to stop it. This is very likely 
> to create a memory leak.
> SEVERE: The web application [] appears to have started a thread named 
> [metrics-meter-tick-thread-2] but has failed to stop it. This is very likely 
> to create a memory leak.
> Thanks,
> Bhavesh





[jira] [Commented] (KAFKA-1521) Producer Graceful Shutdown issue in Container (Kafka version 0.8.x.x)

2014-07-03 Thread Bravesh Mistry (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14052062#comment-14052062
 ] 

Bravesh Mistry commented on KAFKA-1521:
---

Just for your information, I am using the async mode of the Kafka producer.

> Producer Graceful Shutdown issue in Container (Kafka version 0.8.x.x)
> -
>
> Key: KAFKA-1521
> URL: https://issues.apache.org/jira/browse/KAFKA-1521
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.0, 0.8.1.1
> Environment: Tomcat Container or Any other J2EE container
>Reporter: Bravesh Mistry
>Assignee: Jun Rao
>Priority: Minor
>
> Hi Kafka Team,
> We are running multiple webapps in a Tomcat container, and we have producers 
> which are managed by a ServletContextListener (lifecycle).  Upon 
> contextInitialized we create the producer, and on contextDestroyed we call 
> producer.close(), but the underlying metrics library does not shut down.  So 
> we have a thread leak due to this issue.  I had to call 
> Metrics.defaultRegistry().shutdown() to resolve it.  Is this a known issue?  
> I know the metrics library has a JVM shutdown hook, but it will not be 
> invoked, since the container thread is un-deploying the web app, the class 
> loader goes away, and the leaking thread cannot find the underlying Kafka 
> class.  Because of this, Tomcat does not shut down gracefully.
> Are you planning to un-register metrics when Producer.close() is called, or 
> shut down the metrics pool for the client.id?
> Here are the logs:
> SEVERE: The web application [  ] appears to have started a thread named 
> [metrics-meter-tick-thread-1] but has failed to stop it. This is very likely 
> to create a memory leak.
> SEVERE: The web application [] appears to have started a thread named 
> [metrics-meter-tick-thread-2] but has failed to stop it. This is very likely 
> to create a memory leak.
> Thanks,
> Bhavesh





[jira] [Updated] (KAFKA-1519) Console consumer: expose configuration option to enable/disable writing the line separator

2014-07-03 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1519:


Attachment: KAFKA-1519.patch

> Console consumer: expose configuration option to enable/disable writing the 
> line separator
> --
>
> Key: KAFKA-1519
> URL: https://issues.apache.org/jira/browse/KAFKA-1519
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Affects Versions: 0.8.1.1
>Reporter: Michael Noll
>Assignee: Neha Narkhede
>Priority: Minor
> Attachments: KAFKA-1519.patch
>
>
> The current console consumer includes a {{DefaultMessageFormatter}}, which 
> exposes a few user-configurable options which can be set on the command line 
> via --property, e.g. "--property line.separator=XYZ".
> Unfortunately, the current implementation does not allow the user to 
> completely disable writing any such line separator.  However, this 
> functionality would be helpful to enable users to capture data "as is" from a 
> Kafka topic to snapshot file.  Capturing data "as is" -- without an 
> artificial line separator -- is particularly nice for data in a binary format 
> (including Avro).
> *No workaround*
> A potential workaround would be to pass an empty string as the property value 
> of "line.separator", but this doesn't work in the current implementation.
> The following variants throw an "Invalid parser arguments" exception:
> {code}
> --property line.separator=   # "nothing"
> --property line.separator="" # double quotes
> --property line.separator='' # single quotes
> {code}
> Escape tricks via a backslash don't work either.
> If there actually is a workaround please let me know.
> *How to fix*
> We can introduce a "print.line" option to enable/disable writing 
> "line.separator" similar to how the code already uses "print.key" to 
> enable/disable writing "key.separator".
> This change is trivial.  To preserve backwards compatibility, the 
> "print.line" option would be set to true by default (unlike the "print.key" 
> option, which defaults to false).
> *Alternatives*
> Apart from modifying the built-in {{DefaultMessageFormatter}}, users could of 
> course implement their own custom {{MessageFormatter}}.  But given that it's 
> a) a trivial change to the {{DefaultMessageFormatter}} and b) a nice user 
> feature I'd say changing the built-in {{DefaultMessageFormatter}} would be 
> the better approach.  This way, Kafka would support writing data as-is to a 
> file out of the box.





[jira] [Updated] (KAFKA-1519) Console consumer: expose configuration option to enable/disable writing the line separator

2014-07-03 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1519:


Status: Patch Available  (was: Open)

Attached a patch.
Note that I did not specifically test the behavior of line.separator and what 
will happen if the value of the property is the empty string.

> Console consumer: expose configuration option to enable/disable writing the 
> line separator
> --
>
> Key: KAFKA-1519
> URL: https://issues.apache.org/jira/browse/KAFKA-1519
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Affects Versions: 0.8.1.1
>Reporter: Michael Noll
>Assignee: Neha Narkhede
>Priority: Minor
> Attachments: KAFKA-1519.patch
>
>
> The current console consumer includes a {{DefaultMessageFormatter}}, which 
> exposes a few user-configurable options which can be set on the command line 
> via --property, e.g. "--property line.separator=XYZ".
> Unfortunately, the current implementation does not allow the user to 
> completely disable writing any such line separator.  However, this 
> functionality would be helpful to enable users to capture data "as is" from a 
> Kafka topic to snapshot file.  Capturing data "as is" -- without an 
> artificial line separator -- is particularly nice for data in a binary format 
> (including Avro).
> *No workaround*
> A potential workaround would be to pass an empty string as the property value 
> of "line.separator", but this doesn't work in the current implementation.
> The following variants throw an "Invalid parser arguments" exception:
> {code}
> --property line.separator=   # "nothing"
> --property line.separator="" # double quotes
> --property line.separator='' # single quotes
> {code}
> Escape tricks via a backslash don't work either.
> If there actually is a workaround please let me know.
> *How to fix*
> We can introduce a "print.line" option to enable/disable writing 
> "line.separator" similar to how the code already uses "print.key" to 
> enable/disable writing "key.separator".
> This change is trivial.  To preserve backwards compatibility, the 
> "print.line" option would be set to true by default (unlike the "print.key" 
> option, which defaults to false).
> *Alternatives*
> Apart from modifying the built-in {{DefaultMessageFormatter}}, users could of 
> course implement their own custom {{MessageFormatter}}.  But given that it's 
> a) a trivial change to the {{DefaultMessageFormatter}} and b) a nice user 
> feature I'd say changing the built-in {{DefaultMessageFormatter}} would be 
> the better approach.  This way, Kafka would support writing data as-is to a 
> file out of the box.





[jira] [Commented] (KAFKA-1519) Console consumer: expose configuration option to enable/disable writing the line separator

2014-07-03 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14052113#comment-14052113
 ] 

Jay Kreps commented on KAFKA-1519:
--

Looks good. I think this change might also interpret --property line.separator 
as the same as --property line.separator=

Not sure if that is good or confusing...

> Console consumer: expose configuration option to enable/disable writing the 
> line separator
> --
>
> Key: KAFKA-1519
> URL: https://issues.apache.org/jira/browse/KAFKA-1519
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Affects Versions: 0.8.1.1
>Reporter: Michael Noll
>Assignee: Neha Narkhede
>Priority: Minor
> Attachments: KAFKA-1519.patch
>
>
> The current console consumer includes a {{DefaultMessageFormatter}}, which 
> exposes a few user-configurable options which can be set on the command line 
> via --property, e.g. "--property line.separator=XYZ".
> Unfortunately, the current implementation does not allow the user to 
> completely disable writing any such line separator.  However, this 
> functionality would be helpful to enable users to capture data "as is" from a 
> Kafka topic to snapshot file.  Capturing data "as is" -- without an 
> artificial line separator -- is particularly nice for data in a binary format 
> (including Avro).
> *No workaround*
> A potential workaround would be to pass an empty string as the property value 
> of "line.separator", but this doesn't work in the current implementation.
> The following variants throw an "Invalid parser arguments" exception:
> {code}
> --property line.separator=   # "nothing"
> --property line.separator="" # double quotes
> --property line.separator='' # single quotes
> {code}
> Escape tricks via a backslash don't work either.
> If there actually is a workaround please let me know.
> *How to fix*
> We can introduce a "print.line" option to enable/disable writing 
> "line.separator" similar to how the code already uses "print.key" to 
> enable/disable writing "key.separator".
> This change is trivial.  To preserve backwards compatibility, the 
> "print.line" option would be set to true by default (unlike the "print.key" 
> option, which defaults to false).
> *Alternatives*
> Apart from modifying the built-in {{DefaultMessageFormatter}}, users could of 
> course implement their own custom {{MessageFormatter}}.  But given that it's 
> a) a trivial change to the {{DefaultMessageFormatter}} and b) a nice user 
> feature I'd say changing the built-in {{DefaultMessageFormatter}} would be 
> the better approach.  This way, Kafka would support writing data as-is to a 
> file out of the box.





Re: Review Request 23266: Fix KAFKA-1515

2014-07-03 Thread Jay Kreps

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23266/#review47334
---



clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java


Is there a race condition here? What if the sender immediately executes and 
the flag isn't set yet?

Maybe the usage needs to be something like
  long version = metadata.version()
  metadata.requestUpdate()
  sender.wakeup()
  metadata.awaitUpdate(version)

The version would just be a long counter that we increment in 
Metadata.update().
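The version-counter handshake suggested above can be modeled with plain monitor locking. MetadataSketch is illustrative, not the actual client code: update() bumps the counter and notifies waiters, and awaitUpdate() loops until the version moves past the caller's snapshot, so a refresh that completes before the caller starts waiting is still observed and the race disappears.

```java
// Sketch of the version-based wait: callers snapshot version(), call
// requestUpdate(), wake the sender, then awaitUpdate(snapshot, timeout).
class MetadataSketch {
    private long version = 0;
    private boolean needUpdate = false;

    public synchronized long version() { return version; }

    public synchronized void requestUpdate() { needUpdate = true; }

    // Called by the sender thread when a metadata refresh completes.
    public synchronized void update() {
        needUpdate = false;
        version += 1;
        notifyAll();
    }

    // Blocks until the version advances past lastVersion or maxWaitMs elapses.
    public synchronized void awaitUpdate(long lastVersion, long maxWaitMs) {
        long deadline = System.currentTimeMillis() + maxWaitMs;
        while (version <= lastVersion) {
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0)
                throw new RuntimeException("timed out waiting for metadata update");
            try {
                wait(remaining);
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        }
    }
}
```

Because the wait condition is the counter rather than a boolean flag, an update that lands between requestUpdate() and awaitUpdate() satisfies the caller immediately instead of being lost.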



clients/src/main/java/org/apache/kafka/clients/producer/internals/Metadata.java


This method seems a bit ad hoc. Would it be better to just have the caller 
directly do
  metadata.fetch().partitionsForTopic != null?
or if we want to make it a little more readable, add a .hasTopic method to 
Cluster and do
  metadata.fetch().hasTopic(t)



clients/src/main/java/org/apache/kafka/clients/producer/internals/Metadata.java


All the Java APIs that wait with some timeout take a max wait duration, not 
an end timestamp in milliseconds. Would that be better?



clients/src/main/java/org/apache/kafka/clients/producer/internals/Metadata.java


Not sure about this logic. Imagine that the last refresh was an hour ago 
and the original max wait time was 30 secs. How long does this wait?


- Jay Kreps


On July 3, 2014, 11:42 p.m., Guozhang Wang wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/23266/
> ---
> 
> (Updated July 3, 2014, 11:42 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1515
> https://issues.apache.org/jira/browse/KAFKA-1515
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> 1. Move the waiting logic out of Metadata into KafkaProducer (KafkaConsumer 
> would not wait on fetching metadata, so it will not share this code), and 
> wake up the sender when waiting on metadata.
> 2. Set the refresh timestamp to now instead of 0.
> 
> 
> Diffs
> -
> 
>   clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
> d21f9225539b070f9b50b7a76601d80b83daf7ef 
>   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
> d85ca30001dc3d6122a890c34092551654315458 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/Metadata.java
>  8890aa2e3ce5fffc159b3c8528138226e8c8cfd3 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
> 37b9d1a462d42b811fffd2af3c418e3a9179f00f 
>   clients/src/test/java/org/apache/kafka/clients/producer/MetadataTest.java 
> 0d7d04ca5d71d4db22da54ed2153f7c0e10cdf78 
> 
> Diff: https://reviews.apache.org/r/23266/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Guozhang Wang
> 
>



[jira] [Commented] (KAFKA-1519) Console consumer: expose configuration option to enable/disable writing the line separator

2014-07-03 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14052129#comment-14052129
 ] 

Gwen Shapira commented on KAFKA-1519:
-

The Java Properties class explicitly supports the lack of an "=", and I think 
we should match that behavior, since the whole point is to generate a 
Properties object.

http://docs.oracle.com/javase/7/docs/api/java/util/Properties.html

Quoting:
"
As a third example, the line:

cheeses
 
specifies that the key is "cheeses" and the associated element is the empty 
string "".
"

> Console consumer: expose configuration option to enable/disable writing the 
> line separator
> --
>
> Key: KAFKA-1519
> URL: https://issues.apache.org/jira/browse/KAFKA-1519
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Affects Versions: 0.8.1.1
>Reporter: Michael Noll
>Assignee: Neha Narkhede
>Priority: Minor
> Attachments: KAFKA-1519.patch
>
>
> The current console consumer includes a {{DefaultMessageFormatter}}, which 
> exposes a few user-configurable options which can be set on the command line 
> via --property, e.g. "--property line.separator=XYZ".
> Unfortunately, the current implementation does not allow the user to 
> completely disable writing any such line separator.  However, this 
> functionality would be helpful to enable users to capture data "as is" from a 
> Kafka topic to snapshot file.  Capturing data "as is" -- without an 
> artificial line separator -- is particularly nice for data in a binary format 
> (including Avro).
> *No workaround*
> A potential workaround would be to pass an empty string as the property value 
> of "line.separator", but this doesn't work in the current implementation.
> The following variants throw an "Invalid parser arguments" exception:
> {code}
> --property line.separator=   # "nothing"
> --property line.separator="" # double quotes
> --property line.separator='' # single quotes
> {code}
> Escape tricks via a backslash don't work either.
> If there actually is a workaround please let me know.
> *How to fix*
> We can introduce a "print.line" option to enable/disable writing 
> "line.separator" similar to how the code already uses "print.key" to 
> enable/disable writing "key.separator".
> This change is trivial.  To preserve backwards compatibility, the 
> "print.line" option would be set to true by default (unlike the "print.key" 
> option, which defaults to false).
> *Alternatives*
> Apart from modifying the built-in {{DefaultMessageFormatter}}, users could of 
> course implement their own custom {{MessageFormatter}}.  But given that it's 
> a) a trivial change to the {{DefaultMessageFormatter}} and b) a nice user 
> feature I'd say changing the built-in {{DefaultMessageFormatter}} would be 
> the better approach.  This way, Kafka would support writing data as-is to a 
> file out of the box.
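
The fix described in the issue can be sketched as follows. This is a
hypothetical formatter modeled on the JIRA description, not the actual
Kafka patch: a "print.line" flag, true by default for backwards
compatibility, gates whether the line separator is written after each
message.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Sketch of the proposed "print.line" option (illustrative names; the
// real console consumer formatter is Scala code in the Kafka tools).
public class FormatterSketch {
    private final boolean printLine;
    private final byte[] lineSeparator;

    FormatterSketch(boolean printLine, String lineSeparator) {
        this.printLine = printLine;
        this.lineSeparator = lineSeparator.getBytes();
    }

    void writeTo(byte[] value, OutputStream out) throws IOException {
        out.write(value);
        if (printLine)  // defaults to true, preserving current behavior
            out.write(lineSeparator);
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream withSep = new ByteArrayOutputStream();
        new FormatterSketch(true, "\n").writeTo("msg".getBytes(), withSep);
        ByteArrayOutputStream asIs = new ByteArrayOutputStream();
        new FormatterSketch(false, "\n").writeTo("msg".getBytes(), asIs);
        System.out.println(withSep.size()); // 4 (message plus separator)
        System.out.println(asIs.size());    // 3 (message written as-is)
    }
}
```

Disabling the separator this way captures binary payloads (e.g. Avro)
byte-for-byte, which an empty line.separator value cannot achieve today.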



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (KAFKA-1519) Console consumer: expose configuration option to enable/disable writing the line separator

2014-07-03 Thread Jay Kreps (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Kreps updated KAFKA-1519:
-

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Makes sense. Committed. Thanks!






Build failed in Jenkins: Kafka-trunk #216

2014-07-03 Thread Apache Jenkins Server
See 

Changes:

[jay.kreps] KAFKA-1519 Make it possible to disable the line seperator in the 
console consumer. Patch from Gwen Shapira.

--
[...truncated 972 lines...]

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.RollingBounceTest > testRollingBounce PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride PASSED

kafka.integration.TopicMetadataTest > testTopicMetadataRequest PASSED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.FetcherTest > testFetcher PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.log4j.KafkaLog4jAppenderTest > testKafkaLog4jConfigs PASSED

kafka.log4j.KafkaLog4jAppenderTest > testLog4jAppends PASSED

kafka.api.ApiUtilsTest > testShortStringNonASCII PASSED

kafka.api.ApiUtilsTest > testShortStringASCII PASSED

kafka.api.RequestResponseSerializationTest > 
testSerializationAndDeserialization PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckZero PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne PASSED

kafka.api.ProducerFailureHandlingTest > testNonExistentTopic PASSED

kafka.api.ProducerFailureHandlingTest > testWrongBrokerList PASSED

kafka.api.ProducerFailureHandlingTest > testNoResponse PASSED

kafka.api.ProducerFailureHandlingTest > testInvalidPartition PASSED

kafka.api.ProducerFailureHandlingTest > testSendAfterClosed PASSED

kafka.api.ProducerFailureHandlingTest > testBrokerFailure PASSED

kafka.api.test.ProducerSendTest > testAutoCreateTopic PASSED

kafka.api.test.ProducerSendTest > testSendOffset PASSED

kafka.api.test.ProducerSendTest > testClose PASSED

kafka.api.test.ProducerSendTest > testSendToPartition PASSED

kafka.api.test.ProducerCompressionTest > testCompression[0] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[1] PASSED

kafka.admin.AddPartitionsTest > testTopicDoesNotExist PASSED

kafka.admin.AddPartitionsTest > testWrongReplicaCount PASSED

kafka.admin.AddPartitionsTest > testIncrementPartitions PASSED

kafka.admin.AddPartitionsTest > testManualAssignmentOfReplicas PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacement PASSED

kafka.admin.AdminTest > testReplicaAssignment PASSED

kafka.admin.AdminTest > testManualReplicaAssignment PASSED

kafka.admin.AdminTest > testTopicCreationInZK PASSED

kafka.admin.AdminTest > testPartitionReassignmentWithLeaderInNewReplicas PASSED

kafka.admin.AdminTest > testPartitionReassignmentWithLeaderNotInNewReplicas 
PASSED

kafka.admin.AdminTest > testPartitionReassignmentNonOverlappingReplicas PASSED

kafka.admin.AdminTest > testReassigningNonExistingPartition PASSED

kafka.admin.AdminTest > testResumePartitionReassignmentThatWasCompleted PASSED

kafka.admin.AdminTest > testPreferredReplicaJsonData PASSED

kafka.admin.AdminTest > testBasicPreferredReplicaElection PASSED

kafka.admin.AdminTest > testShutdownBroker PASSED

kafka.admin.AdminTest > testTopicConfigChange PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicWithAllAliveReplicas PASSED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicWithRecoveredFollower PASSED

kafka.admin.DeleteTopicTest > testResumeDeleteTopicOnControllerFailover PASSED

kafka.admin.DeleteTopicTest > testPartitionReassignmentDuringDeleteTopic PASSED

kafka.admin.DeleteTopicTest > testDeleteTopicDuringAddPartition PASSED

kafka.admin.DeleteTopicTest > testAddPartitionDuringDeleteTopic PASSED

kafka.admin.DeleteTopicTest > testRecreateTopicAfterDeletion PASSED

kafka.admin.DeleteTopicTest > testAutoCreateAfterDeleteTopic FAILED
org.scalatest.junit.JUnitTestFailedError: Topic should have been auto 
created
at 
org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:101)
at 
org.scalatest.junit.JUnit3Suite.newAssertionFailedException(JUnit3Suite.scala:149)
at org.scalatest.Assertions$class.fail(Assertions.scala:711)
at org.scalatest.junit.JUnit3Suite.fail(JUnit3Suite.scala:149)
at 
kafka.admin.DeleteTopicTest.testAutoCreateAfterDeleteTopic(DeleteTopicTest.scala:222)

kafka.admin.DeleteTopicTest > testDeleteNonExistingTopic PASSED

kafka.message.MessageTest > testFiel