[jira] [Updated] (KAFKA-3138) 0.9.0 docs still say that log compaction doesn't work on compressed topics.

2016-01-24 Thread James Cheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Cheng updated KAFKA-3138:
---
Status: Patch Available  (was: Open)

> 0.9.0 docs still say that log compaction doesn't work on compressed topics.
> ---
>
> Key: KAFKA-3138
> URL: https://issues.apache.org/jira/browse/KAFKA-3138
> Project: Kafka
>  Issue Type: Bug
>Reporter: James Cheng
>
> The 0.9.0 docs say "Log compaction is not yet compatible with compressed 
> topics." But I believe that was fixed in 0.9.0.
> Is the fix simply to remove that line from the docs? It sounds newbie-level. 
> If so, I would like to work on this JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: Fixed undefined method `update_guest'

2016-01-24 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/802


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk7 #985

2016-01-24 Thread Apache Jenkins Server
See 

Changes:

[me] MINOR: Fixed undefined method `update_guest'

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision c8b60b6344e659b805ea04fc976abcae2bf9fcf8 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f c8b60b6344e659b805ea04fc976abcae2bf9fcf8
 > git rev-list d00cf520fb0b36c7c705250b1773db2f242d5f44 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson4652776313393113203.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 14.591 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson8874198943486626544.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 17.574 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


[GitHub] kafka pull request: KAFKA-3134: Fix missing value.deserializer err...

2016-01-24 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/803




[jira] [Resolved] (KAFKA-3134) Missing required configuration "value.deserializer" when initializing a KafkaConsumer with a valid "valueDeserializer"

2016-01-24 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-3134.
--
   Resolution: Fixed
Fix Version/s: 0.9.1.0
   0.9.0.1

Issue resolved by pull request 803
[https://github.com/apache/kafka/pull/803]

> Missing required configuration "value.deserializer" when initializing a 
> KafkaConsumer with a valid "valueDeserializer"
> --
>
> Key: KAFKA-3134
> URL: https://issues.apache.org/jira/browse/KAFKA-3134
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Yifan Ying
> Fix For: 0.9.0.1, 0.9.1.0
>
>
> I tried to initialize a KafkaConsumer object with a null 
> keyDeserializer and a non-null valueDeserializer:
> {code}
> public KafkaConsumer(Properties properties, Deserializer<K> keyDeserializer,
>                      Deserializer<V> valueDeserializer)
> {code}
> Then I got an exception as follows:
> {code}
> Caused by: org.apache.kafka.common.config.ConfigException: Missing required 
> configuration "value.deserializer" which has no default value.
>   at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:148)
>   at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:49)
>   at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:56)
>   at org.apache.kafka.clients.consumer.ConsumerConfig.<init>(ConsumerConfig.java:336)
>   at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:518)
>   ...
> {code}
> Then I went to ConsumerConfig.java file and found this block of code causing 
> the problem:
> {code}
> public static Map<String, Object> addDeserializerToConfig(Map<String, Object> configs,
>                                                           Deserializer<?> keyDeserializer,
>                                                           Deserializer<?> valueDeserializer) {
>     Map<String, Object> newConfigs = new HashMap<String, Object>();
>     newConfigs.putAll(configs);
>     if (keyDeserializer != null)
>         newConfigs.put(KEY_DESERIALIZER_CLASS_CONFIG, keyDeserializer.getClass());
>     if (keyDeserializer != null)
>         newConfigs.put(VALUE_DESERIALIZER_CLASS_CONFIG, valueDeserializer.getClass());
>     return newConfigs;
> }
>
> public static Properties addDeserializerToConfig(Properties properties,
>                                                  Deserializer<?> keyDeserializer,
>                                                  Deserializer<?> valueDeserializer) {
>     Properties newProperties = new Properties();
>     newProperties.putAll(properties);
>     if (keyDeserializer != null)
>         newProperties.put(KEY_DESERIALIZER_CLASS_CONFIG, keyDeserializer.getClass().getName());
>     if (keyDeserializer != null)
>         newProperties.put(VALUE_DESERIALIZER_CLASS_CONFIG, valueDeserializer.getClass().getName());
>     return newProperties;
> }
> {code}
> Instead of checking valueDeserializer, the code checks keyDeserializer every 
> time. So when keyDeserializer is null but valueDeserializer is not, the 
> valueDeserializer property will never get set. 
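The bug described above comes down to a copy-paste error in the second null check. A minimal, self-contained sketch of the corrected logic (illustrative only; the class name and the plain `Object` parameters are stand-ins for the real Kafka types):

```java
import java.util.HashMap;
import java.util.Map;

public class DeserializerConfigFix {
    static final String KEY_DESERIALIZER_CLASS_CONFIG = "key.deserializer";
    static final String VALUE_DESERIALIZER_CLASS_CONFIG = "value.deserializer";

    // Corrected version: the second condition tests valueDeserializer, so the
    // value deserializer is registered even when the key deserializer is null.
    static Map<String, Object> addDeserializerToConfig(Map<String, Object> configs,
                                                       Object keyDeserializer,
                                                       Object valueDeserializer) {
        Map<String, Object> newConfigs = new HashMap<>(configs);
        if (keyDeserializer != null)
            newConfigs.put(KEY_DESERIALIZER_CLASS_CONFIG, keyDeserializer.getClass());
        if (valueDeserializer != null) // fixed: the original tested keyDeserializer here
            newConfigs.put(VALUE_DESERIALIZER_CLASS_CONFIG, valueDeserializer.getClass());
        return newConfigs;
    }
}
```

With this change, passing a null key deserializer and a non-null value deserializer sets only the "value.deserializer" entry, matching the behavior the reporter expected.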





[jira] [Commented] (KAFKA-3134) Missing required configuration "value.deserializer" when initializing a KafkaConsumer with a valid "valueDeserializer"

2016-01-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15114255#comment-15114255
 ] 

ASF GitHub Bot commented on KAFKA-3134:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/803


> Missing required configuration "value.deserializer" when initializing a 
> KafkaConsumer with a valid "valueDeserializer"
> --
>
> Key: KAFKA-3134
> URL: https://issues.apache.org/jira/browse/KAFKA-3134
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Yifan Ying
> Fix For: 0.9.0.1, 0.9.1.0
>
>
> I tried to initialize a KafkaConsumer object with a null 
> keyDeserializer and a non-null valueDeserializer:
> {code}
> public KafkaConsumer(Properties properties, Deserializer<K> keyDeserializer,
>                      Deserializer<V> valueDeserializer)
> {code}
> Then I got an exception as follows:
> {code}
> Caused by: org.apache.kafka.common.config.ConfigException: Missing required 
> configuration "value.deserializer" which has no default value.
>   at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:148)
>   at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:49)
>   at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:56)
>   at org.apache.kafka.clients.consumer.ConsumerConfig.<init>(ConsumerConfig.java:336)
>   at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:518)
>   ...
> {code}
> Then I went to ConsumerConfig.java file and found this block of code causing 
> the problem:
> {code}
> public static Map<String, Object> addDeserializerToConfig(Map<String, Object> configs,
>                                                           Deserializer<?> keyDeserializer,
>                                                           Deserializer<?> valueDeserializer) {
>     Map<String, Object> newConfigs = new HashMap<String, Object>();
>     newConfigs.putAll(configs);
>     if (keyDeserializer != null)
>         newConfigs.put(KEY_DESERIALIZER_CLASS_CONFIG, keyDeserializer.getClass());
>     if (keyDeserializer != null)
>         newConfigs.put(VALUE_DESERIALIZER_CLASS_CONFIG, valueDeserializer.getClass());
>     return newConfigs;
> }
>
> public static Properties addDeserializerToConfig(Properties properties,
>                                                  Deserializer<?> keyDeserializer,
>                                                  Deserializer<?> valueDeserializer) {
>     Properties newProperties = new Properties();
>     newProperties.putAll(properties);
>     if (keyDeserializer != null)
>         newProperties.put(KEY_DESERIALIZER_CLASS_CONFIG, keyDeserializer.getClass().getName());
>     if (keyDeserializer != null)
>         newProperties.put(VALUE_DESERIALIZER_CLASS_CONFIG, valueDeserializer.getClass().getName());
>     return newProperties;
> }
> {code}
> Instead of checking valueDeserializer, the code checks keyDeserializer every 
> time. So when keyDeserializer is null but valueDeserializer is not, the 
> valueDeserializer property will never get set. 





Build failed in Jenkins: kafka-trunk-jdk8 #311

2016-01-24 Thread Apache Jenkins Server
See 

Changes:

[me] MINOR: Fixed undefined method `update_guest'

[me] KAFKA-3134: Fix missing value.deserializer error during KafkaConsumer

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-2 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 5e8a084834ad35506ee74e1da15a3964642a512e 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 5e8a084834ad35506ee74e1da15a3964642a512e
 > git rev-list d00cf520fb0b36c7c705250b1773db2f242d5f44 # timeout=10
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson1454130029835881017.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 15.654 secs
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson6440679830465674655.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 13.762 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2


[jira] [Commented] (KAFKA-2426) A Kafka node tries to connect to itself through its advertised hostname

2016-01-24 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15114260#comment-15114260
 ] 

Ewen Cheslack-Postava commented on KAFKA-2426:
--

Although it should be *very* unusual to hit this problem, it would be nice if 
the one case where a node connects to itself could use the local 
interface/address. In environments like AWS this avoids going through any NATs 
or external routing, since the connection would use the local IP rather than a 
public IP that requires additional routing and proxying.

That said, I'm not sure it's worth special-casing this -- one reason it's 
probably not easy is that the local/advertised hostname info probably isn't 
tracked far enough to switch between the two.
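The suggestion above amounts to a small routing decision. A purely hypothetical sketch (this is not Kafka's networking code; the method and parameter names are invented for illustration): when a broker opens a connection to a broker id equal to its own, prefer a local address over the advertised, possibly NAT'd hostname.

```java
public class SelfConnectionAddress {
    // Pick the address to dial: only the self-connection bypasses the
    // advertised hostname and uses the local interface/address instead.
    static String connectAddress(int selfId, int targetId,
                                 String advertisedHost, String localHost) {
        return (targetId == selfId) ? localHost : advertisedHost;
    }
}
```

As the comment notes, the hard part in practice would be threading both the advertised and the local address far enough through the connection code to make this choice at dial time.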

> A Kafka node tries to connect to itself through its advertised hostname
> ---
>
> Key: KAFKA-2426
> URL: https://issues.apache.org/jira/browse/KAFKA-2426
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 0.8.2.1
> Environment: Docker https://github.com/wurstmeister/kafka-docker, 
> managed by a Kubernetes cluster, with an "iptables proxy".
>Reporter: Mikaël Cluseau
>Assignee: Jun Rao
>
> Hi,
> when used behind a firewall, Apache Kafka nodes try to connect to 
> themselves using their advertised hostnames. This means that if you have a 
> service IP managed by the docker host using *only* iptables DNAT rules, the 
> node's connection to "itself" times out.
> This is the case in any setup where a host DNATs the service IP to the 
> instance's IP and sends the packet back on the same interface, over a Linux 
> Bridge port not configured in "hairpin" mode. It's because of this: 
> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/net/bridge/br_forward.c#n30
> The specific part of the kubernetes issue is here: 
> https://github.com/BenTheElder/kubernetes/issues/3#issuecomment-123925060 .
> The timeout means that even if the partition's leader is elected, it then 
> fails to accept writes from the other members, causing a write lock and 
> generating very heavy logs (as fast as Kafka usually is, but through log4j 
> this time ;)).
> This also means that the normal docker case works by going through the 
> userspace proxy, which necessarily impacts performance.
> The workaround for us was to add a "127.0.0.2 advertised-hostname" entry to 
> /etc/hosts in the container startup script.





[jira] [Commented] (KAFKA-3023) Log Compaction documentation still says compressed messages are not supported

2016-01-24 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15114261#comment-15114261
 ] 

Ewen Cheslack-Postava commented on KAFKA-3023:
--

Marking as a duplicate of the newer bug, since KAFKA-3138 has a patch.

> Log Compaction documentation still says compressed messages are not supported
> -
>
> Key: KAFKA-3023
> URL: https://issues.apache.org/jira/browse/KAFKA-3023
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>
> Looks like we can now compact topics with compressed messages  
> (https://issues.apache.org/jira/browse/KAFKA-1374) but the docs still say we 
> can't:
> http://kafka.apache.org/documentation.html#design_compactionlimitations





[jira] [Resolved] (KAFKA-3023) Log Compaction documentation still says compressed messages are not supported

2016-01-24 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-3023.
--
Resolution: Duplicate

> Log Compaction documentation still says compressed messages are not supported
> -
>
> Key: KAFKA-3023
> URL: https://issues.apache.org/jira/browse/KAFKA-3023
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>
> Looks like we can now compact topics with compressed messages  
> (https://issues.apache.org/jira/browse/KAFKA-1374) but the docs still say we 
> can't:
> http://kafka.apache.org/documentation.html#design_compactionlimitations





[jira] [Updated] (KAFKA-3138) 0.9.0 docs still say that log compaction doesn't work on compressed topics.

2016-01-24 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3138:
-
   Resolution: Fixed
Fix Version/s: 0.9.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 807
[https://github.com/apache/kafka/pull/807]

> 0.9.0 docs still say that log compaction doesn't work on compressed topics.
> ---
>
> Key: KAFKA-3138
> URL: https://issues.apache.org/jira/browse/KAFKA-3138
> Project: Kafka
>  Issue Type: Bug
>Reporter: James Cheng
> Fix For: 0.9.1.0
>
>
> The 0.9.0 docs say "Log compaction is not yet compatible with compressed 
> topics." But I believe that was fixed in 0.9.0.
> Is the fix simply to remove that line from the docs? It sounds newbie-level. 
> If so, I would like to work on this JIRA.





[jira] [Commented] (KAFKA-3138) 0.9.0 docs still say that log compaction doesn't work on compressed topics.

2016-01-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15114264#comment-15114264
 ] 

ASF GitHub Bot commented on KAFKA-3138:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/807


> 0.9.0 docs still say that log compaction doesn't work on compressed topics.
> ---
>
> Key: KAFKA-3138
> URL: https://issues.apache.org/jira/browse/KAFKA-3138
> Project: Kafka
>  Issue Type: Bug
>Reporter: James Cheng
> Fix For: 0.9.1.0
>
>
> The 0.9.0 docs say "Log compaction is not yet compatible with compressed 
> topics." But I believe that was fixed in 0.9.0.
> Is the fix simply to remove that line from the docs? It sounds newbie-level. 
> If so, I would like to work on this JIRA.





[GitHub] kafka pull request: KAFKA-3138: 0.9.0 docs still say that log comp...

2016-01-24 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/807




Build failed in Jenkins: kafka-trunk-jdk7 #986

2016-01-24 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-3134: Fix missing value.deserializer error during KafkaConsumer

[me] KAFKA-3138: 0.9.0 docs still say that log compaction doesn't work on

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision fa6b90f97cb366a02eae8057124ec22db1f7e9cd 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f fa6b90f97cb366a02eae8057124ec22db1f7e9cd
 > git rev-list c8b60b6344e659b805ea04fc976abcae2bf9fcf8 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson7051455153581681733.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 16.814 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson5355479549527019536.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 21.344 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


Build failed in Jenkins: kafka-trunk-jdk8 #312

2016-01-24 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-3138: 0.9.0 docs still say that log compaction doesn't work on

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-2 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision fa6b90f97cb366a02eae8057124ec22db1f7e9cd 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f fa6b90f97cb366a02eae8057124ec22db1f7e9cd
 > git rev-list 5e8a084834ad35506ee74e1da15a3964642a512e # timeout=10
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson5859266234426913771.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 8.899 secs
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson1481640306634556333.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 9.923 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2


Jenkins build is back to normal : kafka_0.9.0_jdk7 #93

2016-01-24 Thread Apache Jenkins Server
See 



[jira] [Created] (KAFKA-3143) inconsistent state in ZK when all replicas are dead

2016-01-24 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-3143:
--

 Summary: inconsistent state in ZK when all replicas are dead
 Key: KAFKA-3143
 URL: https://issues.apache.org/jira/browse/KAFKA-3143
 Project: Kafka
  Issue Type: Bug
Reporter: Jun Rao


This issue can be reproduced with the following steps.

1. Start 3 brokers: 1, 2 and 3.
2. Create a topic with a single partition and 2 replicas, say on brokers 1 and 2.

If we stop both replicas 1 and 2, then depending on where the controller is, the 
leader and ISR stored in ZK end up different.

If the controller is on broker 3, what's stored in ZK will be -1 for leader and 
an empty set for ISR.

On the other hand, if the controller is on broker 2 and we stop broker 1 
followed by broker 2, what's stored in ZK will be 2 for leader and 2 for ISR.

The issue is that in the first case, the controller will call 
ReplicaStateMachine to transition to OfflineReplica, which will change the 
leader and isr. However, in the second case, the controller fails over, but we 
don't transition ReplicaStateMachine to OfflineReplica during controller 
initialization.
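The divergence described above can be sketched in a few lines. The following is an illustrative Python simulation of the two orderings (not the actual controller code), assuming the simplified rule that a live controller moves each stopped replica to OfflineReplica, shrinking the ISR and reassigning the leader:

```python
# Minimal simulation of the two cases described above (illustrative only).

NO_LEADER = -1

def stop_replica(state, replica, controller_alive):
    """Controller's reaction to a replica going offline."""
    if not controller_alive:
        # The controller was on this broker: it dies with it, and the
        # OfflineReplica transition never runs for this replica.
        return
    # ReplicaStateMachine -> OfflineReplica: shrink ISR, move the leader.
    state["isr"] = [r for r in state["isr"] if r != replica]
    if state["leader"] == replica:
        state["leader"] = state["isr"][0] if state["isr"] else NO_LEADER

# Case 1: controller on broker 3 (survives both shutdowns).
s1 = {"leader": 1, "isr": [1, 2]}
stop_replica(s1, 1, controller_alive=True)
stop_replica(s1, 2, controller_alive=True)
# -> leader = -1, isr = []

# Case 2: controller on broker 2; stop broker 1, then broker 2.
s2 = {"leader": 1, "isr": [1, 2]}
stop_replica(s2, 1, controller_alive=True)   # leader moves to 2
stop_replica(s2, 2, controller_alive=False)  # controller dies; no transition
# -> leader = 2, isr = [2]  (stale state left in ZK)

print(s1, s2)
```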



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3143) inconsistent state in ZK when all replicas are dead

2016-01-24 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15114526#comment-15114526
 ] 

Jun Rao commented on KAFKA-3143:


The fix is probably to transition all offline replicas to OfflineReplica state 
during controller failover. Also, in OfflinePartitionLeaderSelector, we throw a 
NoReplicaOnlineException if there is no live assigned replica. To be consistent 
with the logic in KafkaController.removeReplicaFromIsr(), it seems that we 
should just set the leader to NoLeader.

> inconsistent state in ZK when all replicas are dead
> ---
>
> Key: KAFKA-3143
> URL: https://issues.apache.org/jira/browse/KAFKA-3143
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jun Rao
>
> This issue can be recreated in the following steps.
> 1. Start 3 brokers, 1, 2 and 3.
> 2. Create a topic with a single partition and 2 replicas, say on broker 1 and 
> 2.
> If we stop both replicas 1 and 2, depending on where the controller is, the 
> leader and isr stored in ZK in the end are different.
> If the controller is on broker 3, what's stored in ZK will be -1 for leader 
> and an empty set for ISR.
> On the other hand, if the controller is on broker 2 and we stop broker 1 
> followed by broker 2, what's stored in ZK will be 2 for leader and 2 for ISR.
> The issue is that in the first case, the controller will call 
> ReplicaStateMachine to transition to OfflineReplica, which will change the 
> leader and isr. However, in the second case, the controller fails over, but 
> we don't transition ReplicaStateMachine to OfflineReplica during controller 
> initialization.





[jira] [Commented] (KAFKA-3143) inconsistent state in ZK when all replicas are dead

2016-01-24 Thread Eno Thereska (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15114532#comment-15114532
 ] 

Eno Thereska commented on KAFKA-3143:
-

[~junrao]: if you want I can have a look since I'm looking at the controller 
code.

> inconsistent state in ZK when all replicas are dead
> ---
>
> Key: KAFKA-3143
> URL: https://issues.apache.org/jira/browse/KAFKA-3143
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jun Rao
>
> This issue can be recreated in the following steps.
> 1. Start 3 brokers, 1, 2 and 3.
> 2. Create a topic with a single partition and 2 replicas, say on broker 1 and 
> 2.
> If we stop both replicas 1 and 2, depending on where the controller is, the 
> leader and isr stored in ZK in the end are different.
> If the controller is on broker 3, what's stored in ZK will be -1 for leader 
> and an empty set for ISR.
> On the other hand, if the controller is on broker 2 and we stop broker 1 
> followed by broker 2, what's stored in ZK will be 2 for leader and 2 for ISR.
> The issue is that in the first case, the controller will call 
> ReplicaStateMachine to transition to OfflineReplica, which will change the 
> leader and isr. However, in the second case, the controller fails over, but 
> we don't transition ReplicaStateMachine to OfflineReplica during controller 
> initialization.





[jira] [Updated] (KAFKA-2673) Log JmxTool output to logger

2016-01-24 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eno Thereska updated KAFKA-2673:

Assignee: (was: Eno Thereska)

> Log JmxTool output to logger
> 
>
> Key: KAFKA-2673
> URL: https://issues.apache.org/jira/browse/KAFKA-2673
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Eno Thereska
>Priority: Trivial
>  Labels: newbie
> Fix For: 0.8.1.2
>
>
> Currently JmxTool outputs the data into a CSV file. It could be of value to 
> have the data sent to a logger specified in a log4j configuration file.
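The suggested improvement would amount to a log4j fragment along these lines (the logger and appender names here are hypothetical, not an existing Kafka configuration):

```
# Hypothetical log4j fragment routing JmxTool output to its own file
log4j.logger.kafka.tools.JmxTool=INFO, jmxAppender
log4j.appender.jmxAppender=org.apache.log4j.RollingFileAppender
log4j.appender.jmxAppender.File=/var/log/kafka/jmx-metrics.log
log4j.appender.jmxAppender.MaxFileSize=10MB
log4j.appender.jmxAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.jmxAppender.layout.ConversionPattern=%d %m%n
```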





[jira] [Commented] (KAFKA-3143) inconsistent state in ZK when all replicas are dead

2016-01-24 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15114543#comment-15114543
 ] 

Ismael Juma commented on KAFKA-3143:


Isn't this the same as KAFKA-3096? I created a PR for that a few weeks ago.

> inconsistent state in ZK when all replicas are dead
> ---
>
> Key: KAFKA-3143
> URL: https://issues.apache.org/jira/browse/KAFKA-3143
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jun Rao
>
> This issue can be recreated in the following steps.
> 1. Start 3 brokers, 1, 2 and 3.
> 2. Create a topic with a single partition and 2 replicas, say on broker 1 and 
> 2.
> If we stop both replicas 1 and 2, depending on where the controller is, the 
> leader and isr stored in ZK in the end are different.
> If the controller is on broker 3, what's stored in ZK will be -1 for leader 
> and an empty set for ISR.
> On the other hand, if the controller is on broker 2 and we stop broker 1 
> followed by broker 2, what's stored in ZK will be 2 for leader and 2 for ISR.
> The issue is that in the first case, the controller will call 
> ReplicaStateMachine to transition to OfflineReplica, which will change the 
> leader and isr. However, in the second case, the controller fails over, but 
> we don't transition ReplicaStateMachine to OfflineReplica during controller 
> initialization.





[jira] [Commented] (KAFKA-2426) A Kafka node tries to connect to itself through its advertised hostname

2016-01-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-2426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15114610#comment-15114610
 ] 

Mikaël Cluseau commented on KAFKA-2426:
---

Couldn't a "cluster.host.name" parameter be managed like "advertised.host.name"?

> A Kafka node tries to connect to itself through its advertised hostname
> ---
>
> Key: KAFKA-2426
> URL: https://issues.apache.org/jira/browse/KAFKA-2426
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 0.8.2.1
> Environment: Docker https://github.com/wurstmeister/kafka-docker, 
> managed by a Kubernetes cluster, with an "iptables proxy".
>Reporter: Mikaël Cluseau
>Assignee: Jun Rao
>
> Hi,
> when used behind a firewall, Apache Kafka nodes try to connect to 
> themselves using their advertised hostnames. This means that if you have a 
> service IP managed by the docker host using *only* iptables DNAT rules, the 
> node's connection to "itself" times out.
> This is the case in any setup where a host DNATs the service IP to the 
> instance's IP and sends the packet back on the same interface over a Linux 
> Bridge port not configured in "hairpin" mode. It's because of this: 
> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/net/bridge/br_forward.c#n30
> The specific part of the kubernetes issue is here: 
> https://github.com/BenTheElder/kubernetes/issues/3#issuecomment-123925060 .
> The timeout means that even if the partition's leader is elected, it then 
> fails to accept writes from the other members, causing a write lock and 
> generating very heavy logs (as fast as Kafka usually is, but through log4j 
> this time ;)).
> This also means that the normal docker case only works by going through the 
> userspace proxy, which necessarily impacts performance.
> The workaround for us was to add a "127.0.0.2 advertised-hostname" to 
> /etc/hosts in the container startup script.
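The workaround in the last line amounts to a hosts-file entry written by the container startup script, roughly like this (the hostname is a placeholder for the broker's actual advertised.host.name value):

```
# appended to /etc/hosts by the container entrypoint (illustrative)
127.0.0.2 advertised-hostname
```

With the advertised name resolving locally, the broker's connection to "itself" no longer traverses the DNAT'd service IP.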





Re: [DISCUSS] KIP-42: Add Producer and Consumer Interceptors

2016-01-24 Thread Becket Qin
This could be a useful feature, and I think there are some use cases for
mutating the data, like the one mentioned in rejected alternative one.

I am wondering if there is functional overlap between
ProducerInterceptor.onAcknowledgement() and the producer callback. I can
see that the Callback is a per-record setting while
onAcknowledgement() is a producer-level setting. Other than that, is there
any difference between them?

Thanks,

Jiangjie (Becket) Qin

On Fri, Jan 22, 2016 at 6:21 PM, Neha Narkhede  wrote:

> James,
>
> That is one of the many monitoring use cases for the interceptor interface.
>
> Thanks,
> Neha
>
> On Fri, Jan 22, 2016 at 6:18 PM, James Cheng  wrote:
>
> > Anna,
> >
> > I'm trying to understand a concrete use case. It sounds like producer
> > interceptors could be used to implement part of LinkedIn's Kafka Audit
> > tool? https://engineering.linkedin.com/kafka/running-kafka-scale
> >
> > Part of that is done by a wrapper library around the kafka producer that
> > keeps a count of the number of messages produced, and then sends that
> count
> > to a side-topic. It sounds like the producer interceptors could possibly
> be
> > used to implement that?
> >
> > -James
> >
> > > On Jan 22, 2016, at 4:33 PM, Anna Povzner  wrote:
> > >
> > > Hi,
> > >
> > > I just created a KIP-42 for adding producer and consumer interceptors
> for
> > > intercepting messages at different points on producer and consumer.
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-42%3A+Add+Producer+and+Consumer+Interceptors
> > >
> > > Comments and suggestions are welcome!
> > >
> > > Thanks,
> > > Anna
> >
> >
> > 
> >
> > This email and any attachments may contain confidential and privileged
> > material for the sole use of the intended recipient. Any review, copying,
> > or distribution of this email (or any attachments) by others is
> prohibited.
> > If you are not the intended recipient, please contact the sender
> > immediately and permanently delete this email and any attachments. No
> > employee or agent of TiVo Inc. is authorized to conclude any binding
> > agreement on behalf of TiVo Inc. by email. Binding agreements with TiVo
> > Inc. may only be made by a signed written agreement.
> >
>
>
>
> --
> Thanks,
> Neha
>
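The audit-count use case and the callback-vs-interceptor question above can be illustrated with a toy model. This is plain Python miming the shape of the proposed interfaces (the real API is Java and all names here are illustrative, not the actual KIP-42 definitions): the interceptor is installed once per producer, while the Callback remains a per-record, per-send argument.

```python
class CountingInterceptor:
    """Producer-level hook: installed once, sees every record (an
    illustrative stand-in for the proposed onSend/onAcknowledgement)."""
    def __init__(self):
        self.sent = 0
        self.acked = 0

    def on_send(self, record):
        self.sent += 1
        return record          # could also mutate/replace the record here

    def on_acknowledgement(self, metadata, exception):
        if exception is None:
            self.acked += 1

def produce(records, interceptor, per_record_callback=None):
    """Toy produce path: the interceptor wraps every record, while the
    callback is supplied by the application for a specific send."""
    for i, rec in enumerate(records):
        rec = interceptor.on_send(rec)
        metadata = {"offset": i}            # pretend the broker acked
        interceptor.on_acknowledgement(metadata, None)
        if per_record_callback:
            per_record_callback(metadata, None)

audited = CountingInterceptor()
offsets = []
produce([{"value": v} for v in range(5)], audited,
        per_record_callback=lambda md, err: offsets.append(md["offset"]))
print(audited.sent, audited.acked, offsets)
```

The count kept by the interceptor is exactly the kind of number the audit tool discussed above would periodically publish to a side topic.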


[jira] [Commented] (KAFKA-3143) inconsistent state in ZK when all replicas are dead

2016-01-24 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15114623#comment-15114623
 ] 

Jun Rao commented on KAFKA-3143:


[~ijuma], yes, this seems to be similar to KAFKA-3096, but may not be exactly 
the same. It's probably better to fix them together. Left some comments in 
KAFKA-3096.

> inconsistent state in ZK when all replicas are dead
> ---
>
> Key: KAFKA-3143
> URL: https://issues.apache.org/jira/browse/KAFKA-3143
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jun Rao
>
> This issue can be recreated in the following steps.
> 1. Start 3 brokers, 1, 2 and 3.
> 2. Create a topic with a single partition and 2 replicas, say on broker 1 and 
> 2.
> If we stop both replicas 1 and 2, depending on where the controller is, the 
> leader and isr stored in ZK in the end are different.
> If the controller is on broker 3, what's stored in ZK will be -1 for leader 
> and an empty set for ISR.
> On the other hand, if the controller is on broker 2 and we stop broker 1 
> followed by broker 2, what's stored in ZK will be 2 for leader and 2 for ISR.
> The issue is that in the first case, the controller will call 
> ReplicaStateMachine to transition to OfflineReplica, which will change the 
> leader and isr. However, in the second case, the controller fails over, but 
> we don't transition ReplicaStateMachine to OfflineReplica during controller 
> initialization.





[jira] [Created] (KAFKA-3144) report members with no assigned partitions in ConsumerGroupCommand

2016-01-24 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-3144:
--

 Summary: report members with no assigned partitions in 
ConsumerGroupCommand
 Key: KAFKA-3144
 URL: https://issues.apache.org/jira/browse/KAFKA-3144
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.9.0.0
Reporter: Jun Rao


A couple of suggestions on improving ConsumerGroupCommand. 

1. It would be useful to list members with no assigned partitions when doing 
describe in ConsumerGroupCommand.

2. Currently, we show the client.id of each member when doing describe in 
ConsumerGroupCommand. Since client.id is supposed to be the logical application 
id, all members in the same group are expected to set the same client.id. So, 
it would be clearer if we showed the member id in addition to the client id.
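Suggestion 1 above amounts to output along these lines. The member data below is hypothetical and the formatting is only a sketch of what describe could print:

```python
# Hypothetical group state: one member with an assignment, one without.
members = {
    "consumer-1-uuid": [("my-topic", 0)],
    "consumer-2-uuid": [],          # joined the group, nothing assigned
}

def describe(members):
    """Render one line per assignment, and still report members whose
    assignment is empty instead of silently omitting them."""
    lines = []
    for member, assignment in sorted(members.items()):
        if assignment:
            for topic, partition in assignment:
                lines.append(f"{member} {topic} {partition}")
        else:
            lines.append(f"{member} - -")   # unassigned member still shown
    return lines

for line in describe(members):
    print(line)
```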





[jira] [Commented] (KAFKA-2875) Class path contains multiple SLF4J bindings warnings when using scripts under bin

2016-01-24 Thread jin xing (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15114793#comment-15114793
 ] 

jin xing commented on KAFKA-2875:
-

[~ijuma]
Could you please take a look at the PR?

> Class path contains multiple SLF4J bindings warnings when using scripts under 
> bin
> -
>
> Key: KAFKA-2875
> URL: https://issues.apache.org/jira/browse/KAFKA-2875
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: jin xing
>Priority: Minor
>  Labels: patch
> Fix For: 0.9.1.0
>
>
> This adds a lot of noise when running the scripts, see example when running 
> kafka-console-producer.sh:
> {code}
> ~/D/s/kafka-0.9.0.0-src ❯❯❯ ./bin/kafka-console-producer.sh --topic topic 
> --broker-list localhost:9092 ⏎
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/tools/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/api/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/runtime/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/file/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/json/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> {code}
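The warning comes from SLF4J finding more than one StaticLoggerBinder on the class path. The duplicate jars can be spotted mechanically; the following small script is illustrative only (paths are abbreviated from the output above):

```python
import re

# Jar paths as they appear in the warning above (abbreviated).
classpath = [
    "core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.10.jar",
    "core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.6.jar",
    "tools/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.6.jar",
    "connect/api/build/dependant-libs/slf4j-log4j12-1.7.6.jar",
]

def binding_jars(paths):
    """Return the jars that would each contribute a StaticLoggerBinder."""
    return [p for p in paths if re.search(r"slf4j-log4j12-[\d.]+\.jar$", p)]

dupes = binding_jars(classpath)
if len(dupes) > 1:
    print(f"Class path contains {len(dupes)} SLF4J bindings")
```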


