Re: Review Request 29301: Patch for KAFKA-1694

2015-01-06 Thread Joe Stein

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29301/#review66801
---



build.gradle


We need to have uploadArchives here.



build.gradle


If we can do this without an upgrade, that would be great, since we are in fact 
just requiring one function.



build.gradle


???



clients/src/main/java/org/apache/kafka/common/protocol/types/MaybeOf.java


Not sure about this name. You are making an Option[] for the protocol value, and 
the name doesn't make sense if you look at how it is used rather than how it works.



clients/src/main/java/org/apache/kafka/common/protocol/types/MaybeOf.java


Why wouldn't the size of a null value be 0?
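
For context, here is a minimal, self-contained sketch of what an option-like wire 
type can look like. It is illustrative only: the Type interface below stands in 
for Kafka's protocol type abstraction, and the actual patch may encode absence 
differently (e.g. via a -1 length), in which case the size of null could indeed 
be 0.

    import java.nio.ByteBuffer;

    // Illustrative only: a nullable field encoded as a one-byte presence flag
    // followed by the value when present.
    interface Type {
        void write(ByteBuffer buffer, Object o);
        Object read(ByteBuffer buffer);
        int sizeOf(Object o);
    }

    final class MaybeOf implements Type {
        private final Type inner;

        MaybeOf(Type inner) { this.inner = inner; }

        @Override
        public void write(ByteBuffer buffer, Object o) {
            if (o == null) {
                buffer.put((byte) 0);        // absent
            } else {
                buffer.put((byte) 1);        // present
                inner.write(buffer, o);
            }
        }

        @Override
        public Object read(ByteBuffer buffer) {
            return buffer.get() == 0 ? null : inner.read(buffer);
        }

        @Override
        public int sizeOf(Object o) {
            // Under this encoding the size of null is 1 (the flag itself), not 0,
            // which is one possible answer to the question above.
            return o == null ? 1 : 1 + inner.sizeOf(o);
        }
    }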



core/src/main/scala/kafka/common/ErrorMapping.scala


This is for trying to send an admin request to a broker that is not the 
controller? I think it should be named for that more specifically and clearly.


- Joe Stein


On Dec. 24, 2014, 7:22 p.m., Andrii Biletskyi wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/29301/
> ---
> 
> (Updated Dec. 24, 2014, 7:22 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1694
> https://issues.apache.org/jira/browse/KAFKA-1694
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-1694 - introduced new type for Wire protocol, ported 
> ClusterMetadataResponse to it
> 
> 
> KAFKA-1694 - Split Admin RQ/RP to separate messages
> 
> 
> KAFKA-1694 - Admin commands can be handled only by controller; 
> DeleteTopicCommand NPE fix
> 
> 
> Diffs
> -
> 
>   bin/kafka.sh PRE-CREATION 
>   bin/windows/kafka.bat PRE-CREATION 
>   build.gradle 18f86e4c8a10618d50ac78572d119c6e100ed85b 
>   clients/src/main/java/org/apache/kafka/common/protocol/ApiKeys.java 
> 109fc965e09b2ed186a073351bd037ac8af20a4c 
>   clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java 
> 7517b879866fc5dad5f8d8ad30636da8bbe7784a 
>   clients/src/main/java/org/apache/kafka/common/protocol/types/MaybeOf.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/common/protocol/types/Struct.java 
> 121e880a941fcd3e6392859edba11a94236494cc 
>   
> clients/src/main/java/org/apache/kafka/common/requests/ClusterMetadataRequest.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/requests/ClusterMetadataResponse.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/requests/admin/AlterTopicRequest.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/requests/admin/AlterTopicResponse.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/requests/admin/CreateTopicRequest.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/requests/admin/CreateTopicResponse.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/requests/admin/DeleteTopicRequest.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/requests/admin/DeleteTopicResponse.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/requests/admin/DescribeTopicOutput.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/requests/admin/DescribeTopicRequest.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/requests/admin/DescribeTopicResponse.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/requests/admin/ListTopicsOutput.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/requests/admin/ListTopicsRequest.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/requests/admin/ListTopicsResponse.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/requests/admin/TopicConfigDetails.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/requests/admin/TopicPartitionsDetails.java
>  PRE-CREATION 
>   
> clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java
>  df37fc6d8f0db0b8192a948426af603be3444da4 
>   core/src/main/scala/kafka/api/ApiUtils.scala 
> 1f80de1638978901500df808ca5133308c9d1fca 
>   core/src/main/scala/kafka/api/ClusterMetadataRequest.scala PRE-CREATION 
>   core/src/main/scala/kafka/api/ClusterMetadataResponse.scala PRE-CREATION 
>   core/src/main/scala/kafka/api/RequestKeys.scala 
> c24c0345feedc7b9e2e9f40af11bfa1b8d328c43 
>   core/src/main/scala/kafka/api/admin/AlterTopicRequest.scala PRE-CREATION 
>   core/src/mai

[jira] [Commented] (KAFKA-1646) Improve consumer read performance for Windows

2015-01-06 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14266290#comment-14266290
 ] 

Sriharsha Chintalapani commented on KAFKA-1646:
---

[~jkreps] [~junrao] Can you please take a look at the latest patch? Thanks.

> Improve consumer read performance for Windows
> -
>
> Key: KAFKA-1646
> URL: https://issues.apache.org/jira/browse/KAFKA-1646
> Project: Kafka
>  Issue Type: Improvement
>  Components: log
>Affects Versions: 0.8.1.1
> Environment: Windows
>Reporter: xueqiang wang
>  Labels: newbie, patch
> Attachments: Improve consumer read performance for Windows.patch, 
> KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
> KAFKA-1646_20141216_163008.patch
>
>
> This patch is for the Windows platform only. On Windows, if more than one 
> replica is writing to disk, the segment log files will not be contiguous on 
> disk, and consumer read performance drops greatly. This fix allocates more 
> disk space when rolling a new segment, which improves consumer read 
> performance on the NTFS file system.
> This patch doesn't affect file allocation on other filesystems, as it only 
> adds statements like 'if (Os.isWindows)' or adds methods used on Windows.
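
A self-contained sketch of the preallocation idea (not the actual patch; the 
segment size and the OS check below are paraphrased):

    import java.io.File;
    import java.io.IOException;
    import java.io.RandomAccessFile;

    public class PreallocateSketch {
        public static void main(String[] args) throws IOException {
            File segment = File.createTempFile("00000000000000000000", ".log");
            segment.deleteOnExit();
            long segmentBytes = 64L * 1024 * 1024; // demo size; real segments are larger
            boolean isWindows =
                System.getProperty("os.name").toLowerCase().startsWith("windows");
            try (RandomAccessFile raf = new RandomAccessFile(segment, "rw")) {
                // Grow the file to its full size up front so NTFS keeps it
                // contiguous; appends then fill the preallocated region.
                if (isWindows)
                    raf.setLength(segmentBytes);
            }
        }
    }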



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Possible Memory Leak in Kafka with Tomcat

2015-01-06 Thread Marcel Alburg
Hello,

I am trying to use spring-integration-kafka, and after stopping Tomcat I get
a "possible memory leak" warning from the class loader.

I talked with the spring-integration-kafka people; you can see the discussion,
with a lot of debug messages and a reproducible project, at:

https://github.com/spring-projects/spring-integration-kafka/issues/10

Thanks in advance


Re: Review Request 29590: 1. Removed defaults for serializer/deserializer. 2. Converted type cast exception to serialization exception in the producer. 3. Added string ser/deser. 4. Moved the isKey fl

2015-01-06 Thread Neha Narkhede

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29590/#review66861
---

Ship it!



clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java


The error message can be made clearer by specifying the name of the class 
configured under "key.serializer" as well.
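
For illustration, a standalone sketch of the clearer message being suggested; 
the class and property handling below are paraphrased, not quoted from the patch:

    import java.util.HashMap;
    import java.util.Map;

    // Build an error message that names both the record's key class and the
    // serializer class configured under "key.serializer", instead of letting a
    // bare ClassCastException escape.
    public class SerializerErrorMessageSketch {
        public static void main(String[] args) {
            Map<String, Object> configs = new HashMap<>();
            configs.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

            Object key = 42; // an Integer key handed to a String serializer
            String configured = (String) configs.get("key.serializer");

            String message = "Can't convert key of class " + key.getClass().getName()
                + " to " + configured + " specified in key.serializer";
            System.out.println(message);
        }
    }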



clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java


ditto


- Neha Narkhede


On Jan. 5, 2015, 7:47 p.m., Jun Rao wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/29590/
> ---
> 
> (Updated Jan. 5, 2015, 7:47 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: kafka-1797
> https://issues.apache.org/jira/browse/kafka-1797
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> addressing Jay's comments
> 
> 
> Diffs
> -
> 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/ByteArrayDeserializer.java
>  514cbd2c27a8d1ce13489d315f7880dfade7ffde 
>   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
> 1d64f08762b0c33fcaebde0f41039b327060215a 
>   clients/src/main/java/org/apache/kafka/clients/consumer/Deserializer.java 
> c774a199db71fbc00776cd1256af57b2d9e55a66 
>   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
> fe9066388f4b7910512d85ef088a1b96749735ac 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/ByteArraySerializer.java
>  9005b74a328c997663232fe3a0999b25d2267efe 
>   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
> d859fc588a276eb36bcfd621ae6d7978ad0decdd 
>   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
> 9cdc13d6cbb372b350acf90a21538b8ba495d2e8 
>   clients/src/main/java/org/apache/kafka/clients/producer/Serializer.java 
> de87f9c1caeadd176195be75d0db43fc0a518380 
>   clients/src/main/java/org/apache/kafka/common/config/AbstractConfig.java 
> 3d4ab7228926f50309c07f0672f33416ce4fa37f 
>   
> clients/src/main/java/org/apache/kafka/common/errors/DeserializationException.java
>  a5433398fb9788e260a4250da32e4be607f3f207 
>   
> clients/src/main/java/org/apache/kafka/common/serialization/StringDeserializer.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/serialization/StringSerializer.java
>  PRE-CREATION 
>   
> clients/src/test/java/org/apache/kafka/common/serialization/SerializationTest.java
>  PRE-CREATION 
>   core/src/main/scala/kafka/producer/KafkaLog4jAppender.scala 
> e194942492324092f811b86f9c1f28f79b366cfd 
>   core/src/main/scala/kafka/tools/ConsoleProducer.scala 
> 397d80da08c925757649b7d104d8360f56c604c3 
>   core/src/main/scala/kafka/tools/MirrorMaker.scala 
> 2126f6e55c5ec6a418165d340cc9a4f445af5045 
>   core/src/main/scala/kafka/tools/ProducerPerformance.scala 
> f2dc4ed2f04f0e9656e10b02db5ed1d39c4a4d39 
>   core/src/main/scala/kafka/tools/ReplayLogProducer.scala 
> f541987b2876a438c43ea9088ae8fed708ba82a3 
>   core/src/main/scala/kafka/tools/TestEndToEndLatency.scala 
> 2ebc7bf643ea91bd93ba37b8e64e8a5a9bb37ece 
>   core/src/main/scala/kafka/tools/TestLogCleaning.scala 
> b81010ec0fa9835bfe48ce6aad0c491cdc67e7ef 
>   core/src/test/scala/integration/kafka/api/ProducerCompressionTest.scala 
> 1505fd4464dc9ac71cce52d9b64406a21e5e45d2 
>   core/src/test/scala/integration/kafka/api/ProducerSendTest.scala 
> 6196060edf9f1650720ec916f88933953a1daa2c 
>   core/src/test/scala/unit/kafka/utils/TestUtils.scala 
> 94d0028d8c4907e747aa8a74a13d719b974c97bf 
> 
> Diff: https://reviews.apache.org/r/29590/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Jun Rao
> 
>



Re: Review Request 29379: Patch for KAFKA-1788

2015-01-06 Thread Parth Brahmbhatt

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29379/
---

(Updated Jan. 6, 2015, 6:42 p.m.)


Review request for kafka.


Bugs: KAFKA-1788
https://issues.apache.org/jira/browse/KAFKA-1788


Repository: kafka


Description (updated)
---

Merge remote-tracking branch 'origin/trunk' into KAFKA-1788


KAFKA-1788: addressed Ewen's comments.


Diffs (updated)
-

  clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
f61efb35db7e0de590556e6a94a7b5cb850cdae9 
  clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
a893d88c2f4e21509b6c70d6817b4b2cdd0fd657 
  
clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java
 c15485d1af304ef53691d478f113f332fe67af77 
  
clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordBatch.java
 dd0af8aee98abed5d4a0dc50989e37888bb353fe 
  
clients/src/test/java/org/apache/kafka/clients/producer/RecordAccumulatorTest.java
 2c9932401d573549c40f16fda8c4e3e11309cb85 
  clients/src/test/java/org/apache/kafka/clients/producer/SenderTest.java 
ef2ca65cabe97b909f17b62027a1bb06827e88fe 

Diff: https://reviews.apache.org/r/29379/diff/


Testing
---

Unit test added. 


Thanks,

Parth Brahmbhatt



Re: Review Request 29379: Patch for KAFKA-1788

2015-01-06 Thread Parth Brahmbhatt

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29379/#review66879
---



clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java


pulled up the synchronized dequeue block.



clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java


Yes, replaced with batchExpirationMillis.



clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java


sender.completeBatch() is only called as part of produce response handling or a 
disconnect, neither of which will ever be invoked when there is no broker. I 
could add the sender as a member of the record accumulator, or pass it as the 
callback arg as part of the ready() method, but all of that is too hacky.

Let me know if you see some other alternative.



clients/src/test/java/org/apache/kafka/clients/producer/RecordAccumulatorTest.java


done.


- Parth Brahmbhatt


On Jan. 6, 2015, 6:42 p.m., Parth Brahmbhatt wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/29379/
> ---
> 
> (Updated Jan. 6, 2015, 6:42 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1788
> https://issues.apache.org/jira/browse/KAFKA-1788
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Merge remote-tracking branch 'origin/trunk' into KAFKA-1788
> 
> 
> KAFKA-1788: addressed Ewen's comments.
> 
> 
> Diffs
> -
> 
>   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
> f61efb35db7e0de590556e6a94a7b5cb850cdae9 
>   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
> a893d88c2f4e21509b6c70d6817b4b2cdd0fd657 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java
>  c15485d1af304ef53691d478f113f332fe67af77 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordBatch.java
>  dd0af8aee98abed5d4a0dc50989e37888bb353fe 
>   
> clients/src/test/java/org/apache/kafka/clients/producer/RecordAccumulatorTest.java
>  2c9932401d573549c40f16fda8c4e3e11309cb85 
>   clients/src/test/java/org/apache/kafka/clients/producer/SenderTest.java 
> ef2ca65cabe97b909f17b62027a1bb06827e88fe 
> 
> Diff: https://reviews.apache.org/r/29379/diff/
> 
> 
> Testing
> ---
> 
> Unit test added. 
> 
> 
> Thanks,
> 
> Parth Brahmbhatt
> 
>



[jira] [Commented] (KAFKA-1788) producer record can stay in RecordAccumulator forever if leader is not available

2015-01-06 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14266538#comment-14266538
 ] 

Parth Brahmbhatt commented on KAFKA-1788:
-

Updated reviewboard https://reviews.apache.org/r/29379/diff/
 against branch origin/trunk

> producer record can stay in RecordAccumulator forever if leader is not 
> available
> ---
>
> Key: KAFKA-1788
> URL: https://issues.apache.org/jira/browse/KAFKA-1788
> Project: Kafka
>  Issue Type: Bug
>  Components: core, producer 
>Affects Versions: 0.8.2
>Reporter: Jun Rao
>Assignee: Jun Rao
>  Labels: newbie++
> Fix For: 0.8.3
>
> Attachments: KAFKA-1788.patch, KAFKA-1788_2015-01-06_13:42:37.patch
>
>
> In the new producer, when a partition has no leader for a long time (e.g., 
> all replicas are down), the records for that partition will stay in the 
> RecordAccumulator until the leader is available. This may cause the 
> bufferpool to be full and the callback for the produced message to block for 
> a long time.
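
A sketch of the expiration idea discussed in the review: batches older than a 
configured timeout are drained from the accumulator and failed back to the 
caller instead of waiting forever for a leader. "batchExpirationMillis" is the 
name mentioned in the review; the Batch type and method shapes here are 
illustrative, not the actual patch.

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Deque;
    import java.util.List;

    public class BatchExpirationSketch {
        static class Batch {
            final long createdMs;
            Batch(long createdMs) { this.createdMs = createdMs; }
        }

        // Drain batches older than batchExpirationMillis from a partition's deque.
        static List<Batch> expireOldBatches(Deque<Batch> deque, long nowMs,
                                            long batchExpirationMillis) {
            List<Batch> expired = new ArrayList<>();
            synchronized (deque) {  // the "synchronized dequeue block" from the review
                Batch head;
                while ((head = deque.peekFirst()) != null
                        && nowMs - head.createdMs > batchExpirationMillis) {
                    expired.add(deque.pollFirst()); // fail these with a timeout error
                }
            }
            return expired;
        }

        public static void main(String[] args) {
            Deque<Batch> deque = new ArrayDeque<>();
            deque.add(new Batch(0L));
            deque.add(new Batch(9_000L));
            // With a 5s expiration checked at t=10s, only the first batch expires.
            System.out.println(expireOldBatches(deque, 10_000L, 5_000L).size()); // 1
        }
    }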



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1788) producer record can stay in RecordAccumulator forever if leader is not available

2015-01-06 Thread Parth Brahmbhatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Brahmbhatt updated KAFKA-1788:

Attachment: KAFKA-1788_2015-01-06_13:42:37.patch

> producer record can stay in RecordAccumulator forever if leader is not 
> available
> ---
>
> Key: KAFKA-1788
> URL: https://issues.apache.org/jira/browse/KAFKA-1788
> Project: Kafka
>  Issue Type: Bug
>  Components: core, producer 
>Affects Versions: 0.8.2
>Reporter: Jun Rao
>Assignee: Jun Rao
>  Labels: newbie++
> Fix For: 0.8.3
>
> Attachments: KAFKA-1788.patch, KAFKA-1788_2015-01-06_13:42:37.patch
>
>
> In the new producer, when a partition has no leader for a long time (e.g., 
> all replicas are down), the records for that partition will stay in the 
> RecordAccumulator until the leader is available. This may cause the 
> bufferpool to be full and the callback for the produced message to block for 
> a long time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 29379: Patch for KAFKA-1788

2015-01-06 Thread Parth Brahmbhatt

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29379/
---

(Updated Jan. 6, 2015, 6:44 p.m.)


Review request for kafka.


Bugs: KAFKA-1788
https://issues.apache.org/jira/browse/KAFKA-1788


Repository: kafka


Description
---

Merge remote-tracking branch 'origin/trunk' into KAFKA-1788


KAFKA-1788: addressed Ewen's comments.


Diffs (updated)
-

  clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
f61efb35db7e0de590556e6a94a7b5cb850cdae9 
  clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
a893d88c2f4e21509b6c70d6817b4b2cdd0fd657 
  
clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java
 c15485d1af304ef53691d478f113f332fe67af77 
  
clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordBatch.java
 dd0af8aee98abed5d4a0dc50989e37888bb353fe 
  
clients/src/test/java/org/apache/kafka/clients/producer/RecordAccumulatorTest.java
 2c9932401d573549c40f16fda8c4e3e11309cb85 
  clients/src/test/java/org/apache/kafka/clients/producer/SenderTest.java 
ef2ca65cabe97b909f17b62027a1bb06827e88fe 

Diff: https://reviews.apache.org/r/29379/diff/


Testing
---

Unit test added. 


Thanks,

Parth Brahmbhatt



[jira] [Commented] (KAFKA-1788) producer record can stay in RecordAccumulator forever if leader is not available

2015-01-06 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14266541#comment-14266541
 ] 

Parth Brahmbhatt commented on KAFKA-1788:
-

Updated reviewboard https://reviews.apache.org/r/29379/diff/
 against branch origin/trunk

> producer record can stay in RecordAccumulator forever if leader is not 
> available
> ---
>
> Key: KAFKA-1788
> URL: https://issues.apache.org/jira/browse/KAFKA-1788
> Project: Kafka
>  Issue Type: Bug
>  Components: core, producer 
>Affects Versions: 0.8.2
>Reporter: Jun Rao
>Assignee: Jun Rao
>  Labels: newbie++
> Fix For: 0.8.3
>
> Attachments: KAFKA-1788.patch, KAFKA-1788_2015-01-06_13:42:37.patch, 
> KAFKA-1788_2015-01-06_13:44:41.patch
>
>
> In the new producer, when a partition has no leader for a long time (e.g., 
> all replicas are down), the records for that partition will stay in the 
> RecordAccumulator until the leader is available. This may cause the 
> bufferpool to be full and the callback for the produced message to block for 
> a long time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1788) producer record can stay in RecordAccumulator forever if leader is not available

2015-01-06 Thread Parth Brahmbhatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Brahmbhatt updated KAFKA-1788:

Attachment: KAFKA-1788_2015-01-06_13:44:41.patch

> producer record can stay in RecordAccumulator forever if leader is not 
> available
> ---
>
> Key: KAFKA-1788
> URL: https://issues.apache.org/jira/browse/KAFKA-1788
> Project: Kafka
>  Issue Type: Bug
>  Components: core, producer 
>Affects Versions: 0.8.2
>Reporter: Jun Rao
>Assignee: Jun Rao
>  Labels: newbie++
> Fix For: 0.8.3
>
> Attachments: KAFKA-1788.patch, KAFKA-1788_2015-01-06_13:42:37.patch, 
> KAFKA-1788_2015-01-06_13:44:41.patch
>
>
> In the new producer, when a partition has no leader for a long time (e.g., 
> all replicas are down), the records for that partition will stay in the 
> RecordAccumulator until the leader is available. This may cause the 
> bufferpool to be full and the callback for the produced message to block for 
> a long time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1642) [Java New Producer Kafka Trunk] CPU Usage Spike to 100% when network connection is lost

2015-01-06 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-1642:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the latest followup patch. +1 and committed to both 0.8.2 and trunk.

> [Java New Producer Kafka Trunk] CPU Usage Spike to 100% when network 
> connection is lost
> ---
>
> Key: KAFKA-1642
> URL: https://issues.apache.org/jira/browse/KAFKA-1642
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.2
>Reporter: Bhavesh Mistry
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
> Fix For: 0.8.2
>
> Attachments: 
> 0001-Initial-CPU-Hish-Usage-by-Kafka-FIX-and-Also-fix-CLO.patch, 
> KAFKA-1642.patch, KAFKA-1642.patch, KAFKA-1642_2014-10-20_17:33:57.patch, 
> KAFKA-1642_2014-10-23_16:19:41.patch, KAFKA-1642_2015-01-05_18:56:55.patch
>
>
> I see my CPU spike to 100% when the network connection is lost for a while. It 
> seems the network I/O threads are very busy logging the following error 
> message. Is this expected behavior?
> 2014-09-17 14:06:16.830 [kafka-producer-network-thread] ERROR 
> org.apache.kafka.clients.producer.internals.Sender - Uncaught error in kafka 
> producer I/O thread: 
> java.lang.IllegalStateException: No entry found for node -2
> at 
> org.apache.kafka.clients.ClusterConnectionStates.nodeState(ClusterConnectionStates.java:110)
> at 
> org.apache.kafka.clients.ClusterConnectionStates.disconnected(ClusterConnectionStates.java:99)
> at 
> org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:394)
> at 
> org.apache.kafka.clients.NetworkClient.maybeUpdateMetadata(NetworkClient.java:380)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:174)
> at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:175)
> at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:115)
> at java.lang.Thread.run(Thread.java:744)
> Thanks,
> Bhavesh



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-1842) New producer/consumer should support configurable connection timeouts

2015-01-06 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-1842:


 Summary: New producer/consumer should support configurable 
connection timeouts
 Key: KAFKA-1842
 URL: https://issues.apache.org/jira/browse/KAFKA-1842
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.8.2
Reporter: Ewen Cheslack-Postava


During discussion of KAFKA-1642 it became clear that the current connection 
handling code for the new clients doesn't give enough flexibility in some 
failure cases. We need to support connection timeouts that are configurable via 
Kafka configs rather than relying on the underlying TCP stack's default 
settings. This would give the user control over how aggressively they want to 
try new servers when trying to fetch metadata (currently dependent on the 
underlying OS timeouts and some implementation details of 
NetworkClient.maybeUpdateMetadata and NetworkClient.leastLoadedNode), which is 
the specific issue that came up in KAFKA-1642. More generally it gives better 
control over how fast the user sees failures when there are network failures.
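
As a sketch of the kind of knob this asks for (the property name below is 
hypothetical, not an existing client config):

    import java.util.Properties;

    public class ConnectionTimeoutConfigSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092,broker2:9092");
            // Bound how long a connect attempt may take before the client gives
            // up and tries the next node, instead of relying on OS TCP defaults.
            props.put("connections.setup.timeout.ms", "10000"); // hypothetical name
            System.out.println(props);
        }
    }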



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-1843) Metadata fetch/refresh in new producer should handle all node connection states gracefully

2015-01-06 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-1843:


 Summary: Metadata fetch/refresh in new producer should handle all 
node connection states gracefully
 Key: KAFKA-1843
 URL: https://issues.apache.org/jira/browse/KAFKA-1843
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.8.2
Reporter: Ewen Cheslack-Postava


KAFKA-1642 resolved some issues with the handling of broker connection states 
to avoid high CPU usage, but made the minimal fix rather than the ideal one. 
The code for handling the metadata fetch is difficult to get right because it 
has to handle a lot of possible connectivity states and failure modes across 
all the known nodes. It also needs to correctly integrate with the surrounding 
event loop, providing correct poll() timeouts to both avoid busy looping and 
make sure it wakes up and tries new nodes in the face of both connection and 
request failures.

A patch here should address a few issues:
1. Make sure connection timeouts, as implemented in KAFKA-1842, are cleanly 
integrated. This mostly means that when a connecting node is selected to fetch 
metadata from, that the code notices that and sets the next timeout based on 
the connection timeout rather than some other backoff.
2. Rethink the logic and naming of NetworkClient.leastLoadedNode. That method 
actually takes into account a) the current connectivity of each node, b) 
whether the node had a recent connection failure, c) the "load" in terms of in 
flight requests. It also needs to ensure that different clients don't use the 
same ordering across multiple calls (which is already addressed in the current 
code by nodeIndexOffset) and that we always eventually try all nodes in the 
face of connection failures (which isn't currently handled by leastLoadedNode 
and probably cannot be without tracking additional state). This method also has 
to work for new consumer use cases even though it is currently only used by the 
new producer's metadata fetch. Finally it has to properly handle when other 
code calls initiateConnect() since the normal path for sending messages also 
initiates connections.

We can already say that there is an order of preference given a single call (as 
follows; a short sketch of this single-call ordering appears after item 4 
below), but making this work across multiple calls when some initial choices 
fail to connect or return metadata *and* connection states may be changing is 
much more difficult.
 * Connected, zero in flight requests - the request can be sent immediately
 * Connecting node - it will hopefully be connected very soon and by definition 
has no in flight requests
 * Disconnected - same reasoning as for a connecting node
 * Connected, > 0 in flight requests - we consider any # of in flight requests 
as a big enough backlog to delay the request a lot.
We could use an approach that better accounts for # of in flight requests 
rather than just turning it into a boolean variable, but that probably 
introduces much more complexity than it is worth.

3. The most difficult case to handle so far has been when leastLoadedNode 
returns a disconnected node to maybeUpdateMetadata as its best option. Properly 
handling the two resulting cases (initiateConnect fails immediately vs. taking 
some time to possibly establish the connection) is tricky.
4. Consider optimizing for the failure cases. The most common cases are when 
you already have an active connection and can immediately get the metadata or 
you need to establish a connection, but the connection and metadata 
request/response happen very quickly. These common cases are infrequent enough 
(default every 5 min) that establishing an extra connection isn't a big deal as 
long as it's eventually cleaned up. The edge cases, like network partitions 
where some subset of nodes become unreachable for a long period, are harder to 
reason about but we should be sure we will always be able to gracefully recover 
from them.
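
Here is a self-contained sketch of that single-call preference order. It is 
illustrative only: it is not the actual leastLoadedNode implementation, and 
NodeState is an invented stand-in.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    public class NodePreferenceSketch {
        enum ConnState { CONNECTED, CONNECTING, DISCONNECTED }

        static class NodeState {
            final String id;
            final ConnState conn;
            final int inFlight;

            NodeState(String id, ConnState conn, int inFlight) {
                this.id = id; this.conn = conn; this.inFlight = inFlight;
            }

            int preferenceRank() {
                if (conn == ConnState.CONNECTED && inFlight == 0) return 0; // send now
                if (conn == ConnState.CONNECTING) return 1; // soon connected, no backlog
                if (conn == ConnState.DISCONNECTED) return 2; // same reasoning
                return 3; // connected but backlogged
            }
        }

        public static void main(String[] args) {
            List<NodeState> nodes = new ArrayList<>();
            nodes.add(new NodeState("a", ConnState.CONNECTED, 5));
            nodes.add(new NodeState("b", ConnState.DISCONNECTED, 0));
            nodes.add(new NodeState("c", ConnState.CONNECTING, 0));
            nodes.add(new NodeState("d", ConnState.CONNECTED, 0));
            nodes.sort(Comparator.comparingInt(NodeState::preferenceRank));
            nodes.forEach(n -> System.out.println(n.id)); // d, c, b, a
        }
    }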

KAFKA-1642 enumerated the possible outcomes of a single call to 
maybeUpdateMetadata. A good fix for this would consider all of those outcomes 
for repeated calls to 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 28769: Patch for KAFKA-1809

2015-01-06 Thread Gwen Shapira

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28769/
---

(Updated Jan. 6, 2015, 7:46 p.m.)


Review request for kafka.


Bugs: KAFKA-1809
https://issues.apache.org/jira/browse/KAFKA-1809


Repository: kafka


Description (updated)
---

first commit of refactoring.


changed topicmetadata to include brokerendpoints and fixed few unit tests


fixing systest and support for binding to default address


fixed system tests


fix default address binding and ipv6 support


fix some issues regarding endpoint parsing. Also, larger segments for systest 
make the validation much faster


added link to security wiki in doc


fixing unit test after rename of ProtocolType to SecurityProtocol


Following Joe's advice, added security protocol enum on client side, and 
modified protocol to use ID instead of string (a sketch of the idea follows at 
the end of this description).


validate producer config against enum


add a second protocol for testing and modify SocketServerTests to check on 
multi-ports


Reverted the metadata request changes and removed the explicit security 
protocol argument. Instead the socketserver will determine the protocol based 
on the port and add this to the request


bump version for UpdateMetadataRequest and added support for rolling upgrades 
with new config


following tests - fixed LeaderAndISR protocol and ZK registration for backward 
compatibility


cleaned up some changes that were actually not necessary. hopefully making this 
patch slightly easier to review


undoing some changes that don't belong here


bring back config lost in cleanup


fixes necessary for an all non-plaintext cluster to work
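
A sketch of the enum-by-ID idea from the description above (illustrative only; 
the real class is org.apache.kafka.common.protocol.SecurityProtocol in the 
diff below):

    public enum SecurityProtocolSketch {
        PLAINTEXT((short) 0),
        TRACE(Short.MAX_VALUE); // a second protocol used only for testing

        public final short id; // sent on the wire instead of the protocol name

        SecurityProtocolSketch(short id) { this.id = id; }

        public static SecurityProtocolSketch forId(short id) {
            for (SecurityProtocolSketch p : values())
                if (p.id == id)
                    return p;
            throw new IllegalArgumentException("Unknown security protocol id: " + id);
        }
    }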


Diffs (updated)
-

  clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
525b95e98010cd2053eacd8c321d079bcac2f910 
  clients/src/main/java/org/apache/kafka/common/protocol/SecurityProtocol.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/utils/Utils.java 
527dd0f9c47fce7310b7a37a9b95bf87f1b9c292 
  clients/src/test/java/org/apache/kafka/common/utils/ClientUtilsTest.java 
6e37ea553f73d9c584641c48c56dbf6e62ba5f88 
  clients/src/test/java/org/apache/kafka/common/utils/UtilsTest.java 
a39fab532f73148316a56c0f8e9197f38ea66f79 
  config/server.properties b0e4496a8ca736b6abe965a430e8ce87b0e8287f 
  core/src/main/scala/kafka/admin/AdminUtils.scala 
28b12c7b89a56c113b665fbde1b95f873f8624a3 
  core/src/main/scala/kafka/admin/TopicCommand.scala 
285c0333ff43543d3e46444c1cd9374bb883bb59 
  core/src/main/scala/kafka/api/ConsumerMetadataResponse.scala 
84f60178f6ebae735c8aa3e79ed93fe21ac4aea7 
  core/src/main/scala/kafka/api/LeaderAndIsrRequest.scala 
4ff7e8f8cc695551dd5d2fe65c74f6b6c571e340 
  core/src/main/scala/kafka/api/TopicMetadata.scala 
0190076df0adf906ecd332284f222ff974b315fc 
  core/src/main/scala/kafka/api/TopicMetadataResponse.scala 
92ac4e687be22e4800199c0666bfac5e0059e5bb 
  core/src/main/scala/kafka/api/UpdateMetadataRequest.scala 
530982e36b17934b8cc5fb668075a5342e142c59 
  core/src/main/scala/kafka/client/ClientUtils.scala 
ebba87f0566684c796c26cb76c64b4640a5ccfde 
  core/src/main/scala/kafka/cluster/Broker.scala 
0060add008bb3bc4b0092f2173c469fce0120be6 
  core/src/main/scala/kafka/cluster/BrokerEndPoint.scala PRE-CREATION 
  core/src/main/scala/kafka/cluster/EndPoint.scala PRE-CREATION 
  core/src/main/scala/kafka/cluster/SecurityProtocol.scala PRE-CREATION 
  core/src/main/scala/kafka/common/BrokerEndPointNotAvailableException.scala 
PRE-CREATION 
  core/src/main/scala/kafka/consumer/ConsumerConfig.scala 
9ebbee6c16dc83767297c729d2d74ebbd063a993 
  core/src/main/scala/kafka/consumer/ConsumerFetcherManager.scala 
b9e2bea7b442a19bcebd1b350d39541a8c9dd068 
  core/src/main/scala/kafka/consumer/ConsumerFetcherThread.scala 
ee6139c901082358382c5ef892281386bf6fc91b 
  core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
191a8677444e53b043e9ad6e94c5a9191c32599e 
  core/src/main/scala/kafka/controller/ControllerChannelManager.scala 
eb492f00449744bc8d63f55b393e2a1659d38454 
  core/src/main/scala/kafka/controller/KafkaController.scala 
66df6d2fbdbdd556da6bea0df84f93e0472c8fbf 
  core/src/main/scala/kafka/javaapi/ConsumerMetadataResponse.scala 
1b28861cdf7dfb30fc696b54f8f8f05f730f4ece 
  core/src/main/scala/kafka/javaapi/TopicMetadata.scala 
f384e04678df10a5b46a439f475c63371bf8e32b 
  core/src/main/scala/kafka/javaapi/TopicMetadataRequest.scala 
b0b7be14d494ae8c87f4443b52db69d273c20316 
  core/src/main/scala/kafka/network/BlockingChannel.scala 
6e2a38eee8e568f9032f95c75fa5899e9715b433 
  core/src/main/scala/kafka/network/RequestChannel.scala 
7b1db3dbbb2c0676f166890f566c14aa248467ab 
  core/src/main/scala/kafka/network/SocketServer.scala 
e451592fe358158548117f47a80e807007dd8b98 
  core/src/main/scala/kafka/producer/ProducerPool.scala 
43df70bb461dd3e385e6b20396adef3c4016a3fc 
  core/src/main/scala/kafka/server/AbstractFetcherManager.sca

[jira] [Updated] (KAFKA-1809) Refactor brokers to allow listening on multiple ports and IPs

2015-01-06 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1809:

Attachment: KAFKA-1809_2015-01-06_11:46:22.patch

> Refactor brokers to allow listening on multiple ports and IPs 
> --
>
> Key: KAFKA-1809
> URL: https://issues.apache.org/jira/browse/KAFKA-1809
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
> Attachments: KAFKA-1809.patch, KAFKA-1809_2014-12-25_11:04:17.patch, 
> KAFKA-1809_2014-12-27_12:03:35.patch, KAFKA-1809_2014-12-27_12:22:56.patch, 
> KAFKA-1809_2014-12-29_12:11:36.patch, KAFKA-1809_2014-12-30_11:25:29.patch, 
> KAFKA-1809_2014-12-30_18:48:46.patch, KAFKA-1809_2015-01-05_14:23:57.patch, 
> KAFKA-1809_2015-01-05_20:25:15.patch, KAFKA-1809_2015-01-05_21:40:14.patch, 
> KAFKA-1809_2015-01-06_11:46:22.patch
>
>
> The goal is to eventually support different security mechanisms on different 
> ports. 
> Currently brokers are defined as host+port pair, and this definition exists 
> throughout the code-base, therefore some refactoring is needed to support 
> multiple ports for a single broker.
> The detailed design is here: 
> https://cwiki.apache.org/confluence/display/KAFKA/Multiple+Listeners+for+Kafka+Brokers
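
As a sketch of what multi-endpoint broker definitions can look like 
(illustrative; the config format below is an assumption, not quoted from the 
design):

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class EndPointParseSketch {
        public static void main(String[] args) {
            // Hypothetical listener string: one "PROTOCOL://host:port" per endpoint.
            String listeners = "PLAINTEXT://broker1:9092,TRACE://broker1:9093";
            Map<String, String> byProtocol = new LinkedHashMap<>();
            for (String ep : listeners.split(",")) {
                String[] parts = ep.split("://", 2);
                byProtocol.put(parts[0], parts[1]); // protocol -> host:port
            }
            System.out.println(byProtocol); // {PLAINTEXT=broker1:9092, TRACE=broker1:9093}
        }
    }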



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1809) Refactor brokers to allow listening on multiple ports and IPs

2015-01-06 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14266634#comment-14266634
 ] 

Gwen Shapira commented on KAFKA-1809:
-

Updated reviewboard https://reviews.apache.org/r/28769/diff/
 against branch trunk

> Refactor brokers to allow listening on multiple ports and IPs 
> --
>
> Key: KAFKA-1809
> URL: https://issues.apache.org/jira/browse/KAFKA-1809
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
> Attachments: KAFKA-1809.patch, KAFKA-1809_2014-12-25_11:04:17.patch, 
> KAFKA-1809_2014-12-27_12:03:35.patch, KAFKA-1809_2014-12-27_12:22:56.patch, 
> KAFKA-1809_2014-12-29_12:11:36.patch, KAFKA-1809_2014-12-30_11:25:29.patch, 
> KAFKA-1809_2014-12-30_18:48:46.patch, KAFKA-1809_2015-01-05_14:23:57.patch, 
> KAFKA-1809_2015-01-05_20:25:15.patch, KAFKA-1809_2015-01-05_21:40:14.patch, 
> KAFKA-1809_2015-01-06_11:46:22.patch
>
>
> The goal is to eventually support different security mechanisms on different 
> ports. 
> Currently brokers are defined as host+port pair, and this definition exists 
> throughout the code-base, therefore some refactoring is needed to support 
> multiple ports for a single broker.
> The detailed design is here: 
> https://cwiki.apache.org/confluence/display/KAFKA/Multiple+Listeners+for+Kafka+Brokers



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 29590: 1. Removed defaults for serializer/deserializer. 2. Converted type cast exception to serialization exception in the producer. 3. Added string ser/deser. 4. Moved the isKey fl

2015-01-06 Thread Jun Rao


> On Jan. 6, 2015, 2:13 a.m., Jay Kreps wrote:
> > clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java, 
> > line 433
> > 
> >
> > This does suck. An alternative would be to default the serializer to 
> > the string "" which would not show in the docs...?

This doesn't quite work with ConfigDef. It will throw a ConfigException on 
empty string since it can't find the class by name.


- Jun


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29590/#review66773
---


On Jan. 5, 2015, 7:47 p.m., Jun Rao wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/29590/
> ---
> 
> (Updated Jan. 5, 2015, 7:47 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: kafka-1797
> https://issues.apache.org/jira/browse/kafka-1797
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> addressing Jay's comments
> 
> 
> Diffs
> -
> 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/ByteArrayDeserializer.java
>  514cbd2c27a8d1ce13489d315f7880dfade7ffde 
>   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
> 1d64f08762b0c33fcaebde0f41039b327060215a 
>   clients/src/main/java/org/apache/kafka/clients/consumer/Deserializer.java 
> c774a199db71fbc00776cd1256af57b2d9e55a66 
>   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
> fe9066388f4b7910512d85ef088a1b96749735ac 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/ByteArraySerializer.java
>  9005b74a328c997663232fe3a0999b25d2267efe 
>   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
> d859fc588a276eb36bcfd621ae6d7978ad0decdd 
>   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
> 9cdc13d6cbb372b350acf90a21538b8ba495d2e8 
>   clients/src/main/java/org/apache/kafka/clients/producer/Serializer.java 
> de87f9c1caeadd176195be75d0db43fc0a518380 
>   clients/src/main/java/org/apache/kafka/common/config/AbstractConfig.java 
> 3d4ab7228926f50309c07f0672f33416ce4fa37f 
>   
> clients/src/main/java/org/apache/kafka/common/errors/DeserializationException.java
>  a5433398fb9788e260a4250da32e4be607f3f207 
>   
> clients/src/main/java/org/apache/kafka/common/serialization/StringDeserializer.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/serialization/StringSerializer.java
>  PRE-CREATION 
>   
> clients/src/test/java/org/apache/kafka/common/serialization/SerializationTest.java
>  PRE-CREATION 
>   core/src/main/scala/kafka/producer/KafkaLog4jAppender.scala 
> e194942492324092f811b86f9c1f28f79b366cfd 
>   core/src/main/scala/kafka/tools/ConsoleProducer.scala 
> 397d80da08c925757649b7d104d8360f56c604c3 
>   core/src/main/scala/kafka/tools/MirrorMaker.scala 
> 2126f6e55c5ec6a418165d340cc9a4f445af5045 
>   core/src/main/scala/kafka/tools/ProducerPerformance.scala 
> f2dc4ed2f04f0e9656e10b02db5ed1d39c4a4d39 
>   core/src/main/scala/kafka/tools/ReplayLogProducer.scala 
> f541987b2876a438c43ea9088ae8fed708ba82a3 
>   core/src/main/scala/kafka/tools/TestEndToEndLatency.scala 
> 2ebc7bf643ea91bd93ba37b8e64e8a5a9bb37ece 
>   core/src/main/scala/kafka/tools/TestLogCleaning.scala 
> b81010ec0fa9835bfe48ce6aad0c491cdc67e7ef 
>   core/src/test/scala/integration/kafka/api/ProducerCompressionTest.scala 
> 1505fd4464dc9ac71cce52d9b64406a21e5e45d2 
>   core/src/test/scala/integration/kafka/api/ProducerSendTest.scala 
> 6196060edf9f1650720ec916f88933953a1daa2c 
>   core/src/test/scala/unit/kafka/utils/TestUtils.scala 
> 94d0028d8c4907e747aa8a74a13d719b974c97bf 
> 
> Diff: https://reviews.apache.org/r/29590/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Jun Rao
> 
>



[jira] [Updated] (KAFKA-1489) Global threshold on data retention size

2015-01-06 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1489:
-
Fix Version/s: 0.8.2

> Global threshold on data retention size
> ---
>
> Key: KAFKA-1489
> URL: https://issues.apache.org/jira/browse/KAFKA-1489
> Project: Kafka
>  Issue Type: New Feature
>  Components: log
>Affects Versions: 0.8.1.1
>Reporter: Andras Sereny
>Assignee: Jay Kreps
>  Labels: newbie
> Fix For: 0.8.2
>
>
> Currently, Kafka has per topic settings to control the size of one single log 
> (log.retention.bytes). With lots of topics of different volume and as they 
> grow in number, it could become tedious to maintain topic level settings 
> applying to a single log. 
> Often, a chunk of disk space is dedicated to Kafka that hosts all logs 
> stored, so it'd make sense to have a configurable threshold to control how 
> much space *all* data in one Kafka log data directory can take up.
> See also:
> http://mail-archives.apache.org/mod_mbox/kafka-users/201406.mbox/browser
> http://mail-archives.apache.org/mod_mbox/kafka-users/201311.mbox/%3c20131107015125.gc9...@jkoshy-ld.linkedin.biz%3E
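
A sketch of the requested behavior (the global threshold below is hypothetical; 
no such config exists yet):

    import java.io.File;

    public class GlobalRetentionSketch {
        // Total bytes used under a Kafka log data directory.
        static long dirSizeBytes(File dir) {
            long total = 0;
            File[] files = dir.listFiles();
            if (files != null)
                for (File f : files)
                    total += f.isDirectory() ? dirSizeBytes(f) : f.length();
            return total;
        }

        public static void main(String[] args) {
            long globalRetentionBytes = 500L * 1024 * 1024 * 1024; // hypothetical cap
            long used = dirSizeBytes(new File("/var/kafka-logs"));
            if (used > globalRetentionBytes)
                System.out.println("Over global threshold; delete oldest segments across topics.");
        }
    }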



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1489) Global threshold on data retention size

2015-01-06 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1489:
-
Fix Version/s: (was: 0.8.2)
   0.8.3
 Assignee: (was: Jay Kreps)

> Global threshold on data retention size
> ---
>
> Key: KAFKA-1489
> URL: https://issues.apache.org/jira/browse/KAFKA-1489
> Project: Kafka
>  Issue Type: New Feature
>  Components: log
>Affects Versions: 0.8.1.1
>Reporter: Andras Sereny
>  Labels: newbie
> Fix For: 0.8.3
>
>
> Currently, Kafka has per topic settings to control the size of one single log 
> (log.retention.bytes). With lots of topics of different volume and as they 
> grow in number, it could become tedious to maintain topic level settings 
> applying to a single log. 
> Often, a chunk of disk space is dedicated to Kafka that hosts all logs 
> stored, so it'd make sense to have a configurable threshold to control how 
> much space *all* data in one Kafka log data directory can take up.
> See also:
> http://mail-archives.apache.org/mod_mbox/kafka-users/201406.mbox/browser
> http://mail-archives.apache.org/mod_mbox/kafka-users/201311.mbox/%3c20131107015125.gc9...@jkoshy-ld.linkedin.biz%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Kafka-trunk #360

2015-01-06 Thread Apache Jenkins Server
See 

Changes:

[junrao] kafka-1797; (follow-up patch) add the serializer/deserializer api to 
the new java client; patched by Jun Rao; reviewed by Jay Kreps

--
[...truncated 1731 lines...]
kafka.log.LogTest > testCompressedMessages PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testLogRecoversToCorrectOffset PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testBogusIndexSegmentsAreRemoved PASSED

kafka.log.LogTest > testReopenThenTruncate PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testOpenDeletesObsoleteFiles PASSED

kafka.log.LogTest > testAppendMessageWithNullPayload PASSED

kafka.log.LogTest > testCorruptLog PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.LogTest > testParseTopicPartitionName PASSED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName PASSED

kafka.log.LogTest > testParseTopicPartitionNameForNull PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingSeparator PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingPartition PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogConfigTest > testFromPropsDefaults PASSED

kafka.log.LogConfigTest > testFromPropsEmpty PASSED

kafka.log.LogConfigTest > testFromPropsToProps PASSED

kafka.log.LogConfigTest > testFromPropsInvalid PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest PASSED

kafka.log4j.KafkaLog4jAppenderTest > testKafkaLog4jConfigs PASSED

kafka.log4j.KafkaLog4jAppenderTest > testLog4jAppends PASSED

kafka.admin.AdminTest > testReplicaAssignment PASSED

kafka.admin.AdminTest > testManualReplicaAssignment PASSED

kafka.admin.AdminTest > testTopicCreationInZK PASSED

kafka.admin.AdminTest > testPartitionReassignmentWithLeaderInNewReplicas PASSED

kafka.admin.AdminTest > testPartitionReassignmentWithLeaderNotInNewReplicas 
PASSED

kafka.admin.AdminTest > testPartitionReassignmentNonOverlappingReplicas PASSED

kafka.admin.AdminTest > testReassigningNonExistingPartition PASSED

kafka.admin.AdminTest >

Build failed in Jenkins: Kafka-trunk #361

2015-01-06 Thread Apache Jenkins Server
See 

Changes:

[junrao] kafka-1797; (delta follow-up patch) add the serializer/deserializer 
api to the new java client; patched by Jun Rao; reviewed by Neha Narkhede

--
[...truncated 1704 lines...]
kafka.utils.UtilsTest > testReadInt PASSED

kafka.utils.UtilsTest > testCsvList PASSED

kafka.utils.UtilsTest > testCsvMap PASSED

kafka.utils.UtilsTest > testInLock PASSED

kafka.utils.UtilsTest > testDoublyLinkedList PASSED

kafka.utils.IteratorTemplateTest > testIterator PASSED

kafka.zk.ZKEphemeralTest > testEphemeralNodeCleanup PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testTimeBasedLogRollJitter PASSED

kafka.log.LogTest > testSizeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testAppendAndReadWithSequentialOffsets PASSED

kafka.log.LogTest > testAppendAndReadWithNonSequentialOffsets PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testCompressedMessages PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testLogRecoversToCorrectOffset PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testBogusIndexSegmentsAreRemoved PASSED

kafka.log.LogTest > testReopenThenTruncate PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testOpenDeletesObsoleteFiles PASSED

kafka.log.LogTest > testAppendMessageWithNullPayload PASSED

kafka.log.LogTest > testCorruptLog PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.LogTest > testParseTopicPartitionName PASSED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName PASSED

kafka.log.LogTest > testParseTopicPartitionNameForNull PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingSeparator PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingPartition PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogConfigTest > testFromPropsDefaults PASSED

kafka.log.LogConfigTest > testFromPropsEmpty PASSED

kafka.log.LogConfigTest > testFromPropsToProps PASSED

kafka.log.LogConfigTest > testFromPropsInvalid PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testCheckpointRecover



Re: Review Request 29301: Patch for KAFKA-1694

2015-01-06 Thread Jeff Holoman

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29301/#review66838
---



tools/src/main/java/org/apache/kafka/cli/command/AlterTopicCommand.java


minor nit: grammar. "The topic to be created, altered, or described"


- Jeff Holoman


On Dec. 24, 2014, 7:22 p.m., Andrii Biletskyi wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/29301/
> ---
> 
> (Updated Dec. 24, 2014, 7:22 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1694
> https://issues.apache.org/jira/browse/KAFKA-1694
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-1694 - introduced new type for Wire protocol, ported 
> ClusterMetadataResponse to it
> 
> 
> KAFKA-1694 - Split Admin RQ/RP to separate messages
> 
> 
> KAFKA-1694 - Admin commands can be handled only by controller; 
> DeleteTopicCommand NPE fix
> 
> 
> Diffs
> -
> 
>   bin/kafka.sh PRE-CREATION 
>   bin/windows/kafka.bat PRE-CREATION 
>   build.gradle 18f86e4c8a10618d50ac78572d119c6e100ed85b 
>   clients/src/main/java/org/apache/kafka/common/protocol/ApiKeys.java 
> 109fc965e09b2ed186a073351bd037ac8af20a4c 
>   clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java 
> 7517b879866fc5dad5f8d8ad30636da8bbe7784a 
>   clients/src/main/java/org/apache/kafka/common/protocol/types/MaybeOf.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/common/protocol/types/Struct.java 
> 121e880a941fcd3e6392859edba11a94236494cc 
>   
> clients/src/main/java/org/apache/kafka/common/requests/ClusterMetadataRequest.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/requests/ClusterMetadataResponse.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/requests/admin/AlterTopicRequest.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/requests/admin/AlterTopicResponse.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/requests/admin/CreateTopicRequest.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/requests/admin/CreateTopicResponse.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/requests/admin/DeleteTopicRequest.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/requests/admin/DeleteTopicResponse.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/requests/admin/DescribeTopicOutput.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/requests/admin/DescribeTopicRequest.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/requests/admin/DescribeTopicResponse.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/requests/admin/ListTopicsOutput.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/requests/admin/ListTopicsRequest.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/requests/admin/ListTopicsResponse.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/requests/admin/TopicConfigDetails.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/requests/admin/TopicPartitionsDetails.java
>  PRE-CREATION 
>   
> clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java
>  df37fc6d8f0db0b8192a948426af603be3444da4 
>   core/src/main/scala/kafka/api/ApiUtils.scala 
> 1f80de1638978901500df808ca5133308c9d1fca 
>   core/src/main/scala/kafka/api/ClusterMetadataRequest.scala PRE-CREATION 
>   core/src/main/scala/kafka/api/ClusterMetadataResponse.scala PRE-CREATION 
>   core/src/main/scala/kafka/api/RequestKeys.scala 
> c24c0345feedc7b9e2e9f40af11bfa1b8d328c43 
>   core/src/main/scala/kafka/api/admin/AlterTopicRequest.scala PRE-CREATION 
>   core/src/main/scala/kafka/api/admin/AlterTopicResponse.scala PRE-CREATION 
>   core/src/main/scala/kafka/api/admin/CreateTopicRequest.scala PRE-CREATION 
>   core/src/main/scala/kafka/api/admin/CreateTopicResponse.scala PRE-CREATION 
>   core/src/main/scala/kafka/api/admin/DeleteTopicRequest.scala PRE-CREATION 
>   core/src/main/scala/kafka/api/admin/DeleteTopicResponse.scala PRE-CREATION 
>   core/src/main/scala/kafka/api/admin/DescribeTopicRequest.scala PRE-CREATION 
>   core/src/main/scala/kafka/api/admin/DescribeTopicResponse.scala 
> PRE-CREATION 
>   core/src/main/scala/kafka/api/admin/ListTopicsRequest.scala PRE-CREATION 
>   core/src/main/scala/kafka/api/admin/ListTopicsResponse.scala PRE-CREATION 
>   core/src/main/scala/kafka/common/AdminRequestFailedException.scala 
> PRE-CREATION 
>   core/src/main/scala/kafka/common/ErrorMapping.scala 
> eedc2f5f21dd8755fba8919

Re: Possible Memory Leak in Kafka with Tomcat

2015-01-06 Thread Bhavesh Mistry
Hi Marcel,

Memory leaks happen when there are background threads started by the webapp
that are not shut down (a library like Coda Hale's Metrics registers a
shutdown hook, but in a web app you do not get to execute shutdown hooks, so
you end up with memory leaks or ClassNotFound errors). I have faced this, so
you need to use a web context listener and close all threads there.

With the old producer I have faced this issue: I had to explicitly call
Metrics.defaultRegistry().shutdown(), because producer.close() does not shut
down the metrics threads. The same issue exists on the consumer side, so you
have to call consumer.shutdown() as well.
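
For anyone hitting the same thing, here is a minimal sketch of the listener
approach (the class name is made up and the consumer wiring is up to your app;
Metrics is the Yammer/Coda Hale 2.x registry that the old clients use):

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import com.yammer.metrics.Metrics;
import kafka.javaapi.consumer.ConsumerConnector;

public class KafkaShutdownListener implements ServletContextListener {

    private ConsumerConnector consumer;  // however your app creates/exposes it

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // create or look up the consumer here
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // stop the consumer threads before the webapp classloader goes away
        if (consumer != null)
            consumer.shutdown();
        // close()/shutdown() on the old clients do not stop the metrics reaper
        Metrics.defaultRegistry().shutdown();
    }
}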

I hope this helps!

Thanks,

Bhavesh


On Tue, Jan 6, 2015 at 1:53 PM, Marcel Alburg wrote:

> Hello,
>
> I am trying to use spring-integration-kafka, and after stopping Tomcat I get
> a "possible memory leak" warning from the class loader.
>
> I talked with the people of spring-integration-kafka; you can see that
> conversation, with a lot of debug messages and a reproducible project, under:
>
> https://github.com/spring-projects/spring-integration-kafka/issues/10
>
> Thanks in advance
>


Re: Review Request 29379: Patch for KAFKA-1788

2015-01-06 Thread Ewen Cheslack-Postava

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29379/#review66942
---


Thinking about this more, I think this patch only covers one very specific case 
of the bug. Since we only check for timeouts when the leader lookup returns 
null, this only handles the case where we're unable to update metadata (which 
is the only reason we would consistently be missing leader info). It doesn't 
handle other important cases, e.g. we know the leader but can't send for a long 
time because we got disconnected and can't reconnect, or we're still connected 
but requests are getting through too slowly, so the batch sits in the queue for 
a long time.

I think it might make sense to restructure this a bit to address those cases 
and handle the issue with the missing stats/completeBatch call. How about 
always checking expiration as we're iterating through all these items, 
including the expired batches in the ReadyCheckResult return value as a 
collection (like we do with readyNodes), and then letting the caller, Sender, 
use completeBatch() to clean up? This avoids some duplicate code, gets the 
error stats right, and makes sure batches are always considered for expiration, 
so it should completely solve the problem. You might need to add a while() loop 
to pull off all the expired batches and then use the existing code to process 
the next remaining batch (if there is one left).
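
To make that concrete, something like this (only a sketch; the field and
method names are placeholders, not a final API):

// in RecordAccumulator (org.apache.kafka.clients.producer.internals):
// report batches that expired in the queue alongside the ready nodes
public final static class ReadyCheckResult {
    public final java.util.Set<org.apache.kafka.common.Node> readyNodes;
    public final long nextReadyCheckDelayMs;
    public final java.util.List<RecordBatch> expiredBatches;  // new field

    public ReadyCheckResult(java.util.Set<org.apache.kafka.common.Node> readyNodes,
                            long nextReadyCheckDelayMs,
                            java.util.List<RecordBatch> expiredBatches) {
        this.readyNodes = readyNodes;
        this.nextReadyCheckDelayMs = nextReadyCheckDelayMs;
        this.expiredBatches = expiredBatches;
    }
}

// in Sender.run(): route expired batches through the existing cleanup path so
// the error stats get recorded and the batch memory is freed, e.g.
//   ReadyCheckResult result = accumulator.ready(cluster, now);
//   for (RecordBatch batch : result.expiredBatches)
//       completeBatch(batch, Errors.REQUEST_TIMED_OUT, -1L, now);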

Another issue: the current patch relies on ready() being called frequently for 
the batches to be removed promptly after they expire. However, there are 
conditions where poll() will be called with large timeouts, potentially up to 5 
minutes using the default settings. If we followed the approach described 
above, we'd probably need to track another value similar to 
nextReadyCheckDelayMs which would indicate when we would next need to wake up 
to expire the batch with the earliest expiration time.

The fixes you made did address the problems. It looks like this no longer 
applies to trunk due to 50b734690, so it'll need rebasing.

- Ewen Cheslack-Postava


On Jan. 6, 2015, 6:44 p.m., Parth Brahmbhatt wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/29379/
> ---
> 
> (Updated Jan. 6, 2015, 6:44 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1788
> https://issues.apache.org/jira/browse/KAFKA-1788
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Merge remote-tracking branch 'origin/trunk' into KAFKA-1788
> 
> 
> KAFKA-1788: addressed Ewen's comments.
> 
> 
> Diffs
> -
> 
>   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
> f61efb35db7e0de590556e6a94a7b5cb850cdae9 
>   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
> a893d88c2f4e21509b6c70d6817b4b2cdd0fd657 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java
>  c15485d1af304ef53691d478f113f332fe67af77 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordBatch.java
>  dd0af8aee98abed5d4a0dc50989e37888bb353fe 
>   
> clients/src/test/java/org/apache/kafka/clients/producer/RecordAccumulatorTest.java
>  2c9932401d573549c40f16fda8c4e3e11309cb85 
>   clients/src/test/java/org/apache/kafka/clients/producer/SenderTest.java 
> ef2ca65cabe97b909f17b62027a1bb06827e88fe 
> 
> Diff: https://reviews.apache.org/r/29379/diff/
> 
> 
> Testing
> ---
> 
> Unit test added. 
> 
> 
> Thanks,
> 
> Parth Brahmbhatt
> 
>



Re: Possible Memory Leak in Kafka with Tomcat

2015-01-06 Thread Marcel Alburg
Hi,

thanks for the answer.

I create a Spring bean which starts the Kafka consumer, and this bean has a
"destroy" function. You can see it here:
https://github.com/marcelalburg/spring-integration-kafka-memory-leak/blob/master/src/main/java/org/example/kafkamemoryleak/listener/KafkaConsumerStarter.java#L30

You write that I have to shut down the consumer. I have a ConsumerContext and I
call "destroy" directly, but the error occurs again (in the example above those
lines are commented out, but I tried that too).

But in the end, I get this error:

---
SCHWERWIEGEND: The web application [/spring-integration-kafka-memory-leak] 
created a ThreadLocal with key of type [scala.util.DynamicVariable$$anon$1] 
(value [scala.util.DynamicVariable$$anon$1@526effe5]) and a value of type 
[scala.Some] (value [Some([1.91] failure: end of input

{"jmx_port":-1,"timestamp":"1420541126553","host":"127.0.0.1","version":1,"port":9092}

  ^)]) but failed to remove it when the web application was stopped. 
Threads are going to be renewed over time to try and avoid a probable memory 
leak.
Jan 06, 2015 11:55:58 AM org.apache.catalina.loader.WebappClassLoader 
checkThreadLocalMapForLeaks
SCHWERWIEGEND: The web application [/spring-integration-kafka-memory-leak] 
created a ThreadLocal with key of type [scala.util.DynamicVariable$$anon$1] 
(value [scala.util.DynamicVariable$$anon$1@202d458a]) and a value of type 
[scala.None$] (value [None]) but failed to remove it when the web application 
was stopped. Threads are going to be renewed over time to try and avoid a 
probable memory leak.
Jan 06, 2015 11:55:58 AM org.apache.coyote.AbstractProtocol stop
INFORMATION: Stopping ProtocolHandler ["http-nio-8080"]

Exception: java.lang.IllegalStateException thrown from the 
UncaughtExceptionHandler in thread 
"kafkaTaskExecutor-1-SendThread(127.0.0.1:2181)"
Exception in thread 
"default_MacBook.local-1420541738588-56bc7077_watcher_executor" 
java.lang.IllegalStateException: Can't overwrite cause with 
java.lang.IllegalStateException: Illegal access: this web application instance 
has been stopped already.  Could not load 
kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anon$1$$anonfun$run$4.
  The eventual following stack trace is caused by an error thrown for debugging 
purposes as well as to attempt to terminate the thread which caused the illegal 
access, and has no functional impact.
at java.lang.Throwable.initCause(Throwable.java:457)
at 
org.apache.catalina.loader.WebappClassLoader.checkStateForClassLoading(WebappClassLoader.java:1331)
at 
org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1212)
at 
org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1173)
at 
kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anon$1.run(ZookeeperConsumerConnector.scala:360)
Caused by: java.lang.ClassNotFoundException
at 
org.apache.catalina.loader.WebappClassLoader.checkStateForClassLoading(WebappClassLoader.java:1330)
... 3 more
---


Thanks

> On 07.01.2015 at 00:06, Bhavesh Mistry wrote:
> 
> Hi Marcel,
> 
> Memory leaks happen when there are background threads started by the webapp 
> that are not shut down (a library like Coda Hale's Metrics registers a 
> shutdown hook, but in a web app you do not get to execute shutdown hooks, so 
> you end up with memory leaks or ClassNotFound errors). I have faced this, so 
> you need to use a web context listener and close all threads there.
> 
> With the old producer I have faced this issue: I had to explicitly call 
> Metrics.defaultRegistry().shutdown(), because producer.close() does not shut 
> down the metrics threads. The same issue exists on the consumer side, so you 
> have to call consumer.shutdown() as well. 
> 
> I hope this helps!
> 
> Thanks,
> 
> Bhavesh 
> 
> 
> On Tue, Jan 6, 2015 at 1:53 PM, Marcel Alburg wrote:
> Hello,
> 
> I am trying to use spring-integration-kafka, and after stopping Tomcat I get
> a "possible memory leak" warning from the class loader.
> 
> I talked with the people of spring-integration-kafka; you can see that
> conversation, with a lot of debug messages and a reproducible project, under:
> 
> https://github.com/spring-projects/spring-integration-kafka/issues/10 
> 
> 
> Thanks in advance
> 





Review Request 29647: Patch for KAFKA-1697

2015-01-06 Thread Gwen Shapira

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29647/
---

Review request for kafka.


Bugs: KAFKA-1697
https://issues.apache.org/jira/browse/KAFKA-1697


Repository: kafka


Description
---

removed broker code for handling acks>1 and made 
NotEnoughReplicasAfterAppendException non-retriable


Diffs
-

  
clients/src/main/java/org/apache/kafka/common/errors/NotEnoughReplicasAfterAppendException.java
 75c80a97e43089cb3f924a38f86d67b5a8dd2b89 
  core/src/main/scala/kafka/cluster/Partition.scala 
b230e9a1fb1a3161f4c9d164e4890a16eceb2ad4 

Diff: https://reviews.apache.org/r/29647/diff/


Testing
---


Thanks,

Gwen Shapira



[jira] [Updated] (KAFKA-1697) remove code related to ack>1 on the broker

2015-01-06 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1697:

Attachment: KAFKA-1697.patch

> remove code related to ack>1 on the broker
> --
>
> Key: KAFKA-1697
> URL: https://issues.apache.org/jira/browse/KAFKA-1697
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jun Rao
>Assignee: Gwen Shapira
> Fix For: 0.8.3
>
> Attachments: KAFKA-1697.patch
>
>
> We removed the ack>1 support from the producer client in kafka-1555. We can 
> completely remove the code in the broker that supports ack>1.
> Also, we probably want to make NotEnoughReplicasAfterAppend a non-retriable 
> exception and let the client decide what to do.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1697) remove code related to ack>1 on the broker

2015-01-06 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14267048#comment-14267048
 ] 

Gwen Shapira commented on KAFKA-1697:
-

Created reviewboard https://reviews.apache.org/r/29647/diff/ against branch trunk

> remove code related to ack>1 on the broker
> --
>
> Key: KAFKA-1697
> URL: https://issues.apache.org/jira/browse/KAFKA-1697
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jun Rao
>Assignee: Gwen Shapira
> Fix For: 0.8.3
>
> Attachments: KAFKA-1697.patch
>
>
> We removed the ack>1 support from the producer client in kafka-1555. We can 
> completely remove the code in the broker that supports ack>1.
> Also, we probably want to make NotEnoughReplicasAfterAppend a non-retriable 
> exception and let the client decide what to do.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1697) remove code related to ack>1 on the broker

2015-01-06 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1697:

Status: Patch Available  (was: Open)

> remove code related to ack>1 on the broker
> --
>
> Key: KAFKA-1697
> URL: https://issues.apache.org/jira/browse/KAFKA-1697
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jun Rao
>Assignee: Gwen Shapira
> Fix For: 0.8.3
>
> Attachments: KAFKA-1697.patch
>
>
> We removed the ack>1 support from the producer client in kafka-1555. We can 
> completely remove the code in the broker that supports ack>1.
> Also, we probably want to make NotEnoughReplicasAfterAppend a non-retriable 
> exception and let the client decide what to do.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1697) remove code related to ack>1 on the broker

2015-01-06 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14267057#comment-14267057
 ] 

Gwen Shapira commented on KAFKA-1697:
-

Either I'm missing something or this was trivial :)

From what I can see:
If acks == 0 or acks == 1, we only append to the local log.
If acks == -1, we produce with delay.
So I simply removed the part where we count acks in the delayed produce.
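
For reference, here is roughly how a client deals with this once the decision
is pushed there (a sketch against the 0.8.2 Java producer; the topic name and
the retry choice are just illustrative):

import java.util.Properties;
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.errors.NotEnoughReplicasAfterAppendException;

public class AcksExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("acks", "-1");  // 0 and 1 append locally; -1 waits for the ISR
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.send(new ProducerRecord<>("test", "key", "value"), new Callback() {
            public void onCompletion(RecordMetadata metadata, Exception e) {
                if (e instanceof NotEnoughReplicasAfterAppendException) {
                    // the message is in the leader's log but not enough replicas
                    // acked; resending may duplicate it, so the application,
                    // not the client library, decides whether to retry
                }
            }
        });
        producer.close();
    }
}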



> remove code related to ack>1 on the broker
> --
>
> Key: KAFKA-1697
> URL: https://issues.apache.org/jira/browse/KAFKA-1697
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jun Rao
>Assignee: Gwen Shapira
> Fix For: 0.8.3
>
> Attachments: KAFKA-1697.patch
>
>
> We removed the ack>1 support from the producer client in kafka-1555. We can 
> completely remove the code in the broker that supports ack>1.
> Also, we probably want to make NotEnoughReplicasAfterAppend a non-retriable 
> exception and let the client decide what to do.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 28769: Patch for KAFKA-1809

2015-01-06 Thread Jun Rao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28769/#review66944
---


Thanks for the patch. It's a lot of work! A few general comments below, in 
addition to the more detailed comments.

1. Formatting: add a space after the comma in function signatures and function 
calls. So instead of 
def foo(a: A,b: B)
  use
def foo(a: A, b: B)

2. In some of the files, imports can be optimized.
3. Some new files are missing the license header.
4. It seems that the client needs to know the security protocol in addition to 
the port. Otherwise, it doesn't know which protocol to use to establish the 
socket connection. So, perhaps the broker.list should now be 
protocol://host:port?
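
For example, something like this (the exact format is only a suggestion at 
this point):

broker.list=PLAINTEXT://broker1:9092,SSL://broker2:9093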


core/src/main/scala/kafka/client/ClientUtils.scala


Is protocolType used?



core/src/main/scala/kafka/cluster/Broker.scala


It would be good to document the format for both v1 and v2.

Also, would it be cleaner if in v2, we remove the host/port fields and only 
include the endpoints field? Currently, v2 writes host/port, but doesn't use it 
itself. So, it's really just for backward compatibility. Since we have the 
"use.new.wire.protocol" config, we can let each broker register in ZK using v2 
format only after that config is set to true (when all brokers are upgraded to 
the new binary).
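
For example, the ZK registration could then look something like this (a 
sketch; only the endpoints field is the point, the other fields mirror 
today's v1):

v1: {"version":1,"jmx_port":-1,"timestamp":"1420541126553","host":"127.0.0.1","port":9092}
v2: {"version":2,"jmx_port":-1,"timestamp":"1420541126553","endpoints":["PLAINTEXT://127.0.0.1:9092","SSL://127.0.0.1:9093"]}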



core/src/main/scala/kafka/cluster/Broker.scala


Is this used?



core/src/main/scala/kafka/cluster/Broker.scala


Instead of converting a seq to a map, perhaps it's simpler to just do 
seq.find(predicate).



core/src/main/scala/kafka/cluster/EndPoint.scala


Do we need the dot after -?



core/src/main/scala/kafka/cluster/EndPoint.scala


Do we want to support the IPv6 format for host?



core/src/main/scala/kafka/javaapi/TopicMetadataRequest.scala


securityProtocol doesn't seem to be used here.



core/src/main/scala/kafka/network/BlockingChannel.scala


Adding this as an error log may pollute the log if a broker is unreachable 
and a client keeps trying to connect to it in a certain window (e.g., a replica 
fetcher keeps fetching from the old leader until the new leader info is 
obtained from the controller).



core/src/main/scala/kafka/network/SocketServer.scala


Should this be private?



core/src/main/scala/kafka/network/SocketServer.scala


I think each acceptor needs its own set of processors. We probably need to 
change the doc for num.network.threads to indicate that it's per endpoint.



core/src/main/scala/kafka/producer/ProducerPool.scala


Do we need to recreate the endpoint here?



core/src/main/scala/kafka/server/KafkaConfig.scala


To make the upgrade easier, it may be useful to support the old configs in 
a backward compatible way. For example, if the user doesn't specify listeners, 
but specifies host.name and port, we can automatically set "listeners" to a 
single listener based on host.name and port. Ditto for advertised listeners.

Also, it would be useful to verify that all ports in the listeners are 
unique.
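
For example (assuming the new config ends up being called "listeners"), the 
old-style

host.name=broker1
port=9092

would be treated as if the user had written

listeners=PLAINTEXT://broker1:9092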



core/src/main/scala/kafka/server/KafkaConfig.scala


In KafkaApi, it seems that we assume there is at most one endpoint per 
protocol (since we look for endpoint by protocol). If so, it would be good to 
validate that at config time.



core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala


Is this change necessary?


- Jun Rao


On Jan. 6, 2015, 7:46 p.m., Gwen Shapira wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/28769/
> ---
> 
> (Updated Jan. 6, 2015, 7:46 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1809
> https://issues.apache.org/jira/browse/KAFKA-1809
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> first commit of refactoring.
> 
> 
> changed topicmetadata to include brokerendpoints and fixed few unit tests
> 
> 
> fixing systest and support for binding to default address
> 
> 
> fixed system tests
> 
> 
> fix default address binding and ipv6 support
> 

[jira] [Updated] (KAFKA-1512) Limit the maximum number of connections per ip address

2015-01-06 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1512:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to the 0.8.2 branch and trunk. Thanks for the patch, Jeff!
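
For reference, the new setting goes in the broker config; the value here is 
just an example (the default stays effectively unlimited, i.e. 2 billion):

max.connections.per.ip=1000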

> Limit the maximum number of connections per ip address
> --
>
> Key: KAFKA-1512
> URL: https://issues.apache.org/jira/browse/KAFKA-1512
> Project: Kafka
>  Issue Type: New Feature
>Affects Versions: 0.8.2
>Reporter: Jay Kreps
>Assignee: Jeff Holoman
> Fix For: 0.8.2
>
> Attachments: KAFKA-1512-082.patch, KAFKA-1512.patch, 
> KAFKA-1512.patch, KAFKA-1512_2014-07-03_15:17:55.patch, 
> KAFKA-1512_2014-07-14_13:28:15.patch, KAFKA-1512_2014-12-23_21:47:23.patch
>
>
> To protect against client connection leaks, add a new configuration
>   max.connections.per.ip
> that causes the SocketServer to enforce a limit on the maximum number of 
> connections from each InetAddress instance. For backwards compatibility this 
> will default to 2 billion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : Kafka-trunk #362

2015-01-06 Thread Apache Jenkins Server
See 



Review request for patch attached to KAFKA-1836

2015-01-06 Thread Jaikiran Pai

Hello Kafka team,

I was looking through some newbie bugs in JIRA over the weekend and decided 
to take up https://issues.apache.org/jira/browse/KAFKA-1836. I have a patch, 
attached to that JIRA a few days back, which I would like to have reviewed. 
The patch handles the case where the awaitUpdate timeout value for metadata 
updates is less than or equal to 0. It also updates the documentation and 
config for the metadata.fetch.timeout.ms property. Finally, it has a test 
case that verifies this change.
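
The gist of the change is to never call Object.wait() with a zero timeout 
(the sketch below is slightly simplified; the patch attached to the JIRA is 
the authoritative version):

// in org.apache.kafka.clients.Metadata (sketch; TimeoutException is
// org.apache.kafka.common.errors.TimeoutException)
public synchronized void awaitUpdate(final int lastVersion, final long maxWaitMs)
        throws InterruptedException {
    if (maxWaitMs < 0)
        throw new IllegalArgumentException(
                "Max time to wait for metadata updates should not be < 0 ms");
    long begin = System.currentTimeMillis();
    long remainingWaitMs = maxWaitMs;
    while (this.version <= lastVersion) {
        // wait(0) means "wait forever", so never pass 0 through to wait()
        if (remainingWaitMs != 0)
            wait(remainingWaitMs);
        long elapsed = System.currentTimeMillis() - begin;
        if (elapsed >= maxWaitMs)
            throw new TimeoutException("Failed to update metadata after " + maxWaitMs + " ms.");
        remainingWaitMs = maxWaitMs - elapsed;
    }
}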


Could one of you please take a look?


-Jaikiran


Re: Review request for patch attached to KAFKA-1836

2015-01-06 Thread Joe Stein
Hi Jaikiran, do you mind using the Kafka patch review tool, please?
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+patch+review+tool
It is helpful for folks commenting on your code line by line, as it posts to
Review Board for you and updates JIRA too.

/***
 Joe Stein
 Founder, Principal Consultant
 Big Data Open Source Security LLC
 http://www.stealth.ly
 Twitter: @allthingshadoop 
/

On Tue, Jan 6, 2015 at 11:26 PM, Jaikiran Pai wrote:

> Hello Kafka team,
>
> I was looking through some newbie bugs in JIRA over the weekend and decided
> to take up https://issues.apache.org/jira/browse/KAFKA-1836. I have a patch,
> attached to that JIRA a few days back, which I would like to have reviewed.
> The patch handles the case where the awaitUpdate timeout value for metadata
> updates is less than or equal to 0. It also updates the documentation and
> config for the metadata.fetch.timeout.ms property. Finally, it has a test
> case that verifies this change.
>
> Could one of you please take a look?
>
>
> -Jaikiran
>


[jira] [Updated] (KAFKA-1836) metadata.fetch.timeout.ms set to zero blocks forever

2015-01-06 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1836:
-
Fix Version/s: 0.8.3

> metadata.fetch.timeout.ms set to zero blocks forever
> 
>
> Key: KAFKA-1836
> URL: https://issues.apache.org/jira/browse/KAFKA-1836
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.8.2
>Reporter: Paul Pearcy
>Priority: Minor
>  Labels: newbie
> Fix For: 0.8.3
>
> Attachments: KAFKA-1836.patch
>
>
> You can easily work around this by setting the timeout value to 1ms, but 0ms 
> should mean 0ms or at least have the behavior documented. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review request for patch attached to KAFKA-1836

2015-01-06 Thread Jaikiran Pai
Thanks Joe, I'll do that. I had indeed quickly looked at that document over 
the weekend, but I saw that it was missing instructions for Debian-based 
systems (I am on Linux Mint), so I decided to skip it for now until I 
understand the whole tool chain. I'll see which of these tools I need to 
install and how to set them up (and, if needed, add those instructions to 
that document).


-Jaikiran
On Wednesday 07 January 2015 10:04 AM, Joe Stein wrote:

Hi Jaikiran, do you mind using the Kafka patch review tool, please?
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+patch+review+tool
It is helpful for folks commenting on your code line by line, as it posts to
Review Board for you and updates JIRA too.

/***
  Joe Stein
  Founder, Principal Consultant
  Big Data Open Source Security LLC
  http://www.stealth.ly
  Twitter: @allthingshadoop 
/

On Tue, Jan 6, 2015 at 11:26 PM, Jaikiran Pai wrote:


Hello Kafka team,

I was looking through some newbie bugs in JIRA over the weekend and decided
to take up https://issues.apache.org/jira/browse/KAFKA-1836. I have a patch,
attached to that JIRA a few days back, which I would like to have reviewed.
The patch handles the case where the awaitUpdate timeout value for metadata
updates is less than or equal to 0. It also updates the documentation and
config for the metadata.fetch.timeout.ms property. Finally, it has a test
case that verifies this change.

Could one of you please take a look?


-Jaikiran





Re: Review request for patch attached to KAFKA-1836

2015-01-06 Thread Joe Stein
Sounds good. If you need access to Confluence, send along your Confluence
username and I will grant you access so you can update the document.

/***
 Joe Stein
 Founder, Principal Consultant
 Big Data Open Source Security LLC
 http://www.stealth.ly
 Twitter: @allthingshadoop 
/

On Wed, Jan 7, 2015 at 12:11 AM, Jaikiran Pai wrote:

> Thanks Joe, I'll do that. I had indeed quickly looked at that document
> over the weekend, but I saw that it was missing instructions for Debian-based
> systems (I am on Linux Mint), so I decided to skip it for now until I
> understand the whole tool chain. I'll see which of these tools I need to
> install and how to set them up (and, if needed, add those instructions to
> that document).
>
> -Jaikiran
> On Wednesday 07 January 2015 10:04 AM, Joe Stein wrote:
>
>> Hi Jaikiran, do you mind using the Kafka patch review tool, please?
>> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+patch+review+tool
>> It is helpful for folks commenting on your code line by line, as it posts to
>> Review Board for you and updates JIRA too.
>>
>> /***
>>   Joe Stein
>>   Founder, Principal Consultant
>>   Big Data Open Source Security LLC
>>   http://www.stealth.ly
>>   Twitter: @allthingshadoop 
>> /
>>
>> On Tue, Jan 6, 2015 at 11:26 PM, Jaikiran Pai wrote:
>>
>>> Hello Kafka team,
>>>
>>> I was looking through some newbie bugs in JIRA over the weekend and decided
>>> to take up https://issues.apache.org/jira/browse/KAFKA-1836. I have a patch,
>>> attached to that JIRA a few days back, which I would like to have reviewed.
>>> The patch handles the case where the awaitUpdate timeout value for metadata
>>> updates is less than or equal to 0. It also updates the documentation and
>>> config for the metadata.fetch.timeout.ms property. Finally, it has a test
>>> case that verifies this change.
>>>
>>> Could one of you please take a look?
>>>
>>>
>>> -Jaikiran
>>>
>>>
>