Re: Review Request 33204: Patch for KAFKA-1646 add test cases

2015-05-11 Thread Honghai Chen

-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33204/
-----------------------------------------------------------

(Updated May 11, 2015, 10 a.m.)


Review request for kafka.


Summary (updated)
-----------------

Patch for KAFKA-1646 add test cases


Bugs: KAFKA-1646
https://issues.apache.org/jira/browse/KAFKA-1646


Repository: kafka


Description (updated)
---------------------

Fix for KAFKA-1646: add test cases


Diffs (updated)
---------------

  core/src/main/scala/kafka/log/FileMessageSet.scala 
2522604bd985c513527fa0c863a7df677ff7a503 
  core/src/main/scala/kafka/log/Log.scala 
84e7b8fe9dd014884b60c4fbe13c835cf02a40e4 
  core/src/main/scala/kafka/log/LogConfig.scala 
a907da09e1ccede3b446459225e407cd1ae6d8b3 
  core/src/main/scala/kafka/log/LogSegment.scala 
ed039539ac18ea4d65144073915cf112f7374631 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
9efa15ca5567b295ab412ee9eea7c03eb4cdc18b 
  core/src/main/scala/kafka/server/KafkaServer.scala 
b7d2a2842e17411a823b93bdedc84657cbd62be1 
  core/src/main/scala/kafka/utils/CoreUtils.scala 
d0a8fa701564b4c13b3cd6501e1b6218d77e8e06 
  core/src/test/scala/unit/kafka/log/FileMessageSetTest.scala 
cec1caecc51507ae339ebf8f3b8a028b12a1a056 
  core/src/test/scala/unit/kafka/log/LogSegmentTest.scala 
03fb3512c4a4450eac83d4cd4b0919baeaa22942 

Diff: https://reviews.apache.org/r/33204/diff/


Testing
-------


Thanks,

Honghai Chen



[jira] [Commented] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14537786#comment-14537786
 ] 

Honghai Chen commented on KAFKA-1646:
-------------------------------------

Added test cases: https://reviews.apache.org/r/33204/diff/3/

> Improve consumer read performance for Windows
> ---------------------------------------------
>
> Key: KAFKA-1646
> URL: https://issues.apache.org/jira/browse/KAFKA-1646
> Project: Kafka
>  Issue Type: Improvement
>  Components: log
>Affects Versions: 0.8.1.1
> Environment: Windows
>Reporter: xueqiang wang
>Assignee: xueqiang wang
>  Labels: newbie, patch
> Attachments: Improve consumer read performance for Windows.patch, 
> KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
> KAFKA-1646.patch, KAFKA-1646_20141216_163008.patch, 
> KAFKA-1646_20150306_005526.patch, KAFKA-1646_20150312_200352.patch, 
> KAFKA-1646_20150414_035415.patch, KAFKA-1646_20150414_184503.patch, 
> KAFKA-1646_20150422.patch
>
>
> This patch is for the Windows platform only. On Windows, when more than
> one replica is writing to disk, the segment log files are not laid out
> contiguously on disk, and consumer read performance drops sharply. This
> fix allocates more disk space when rolling a new segment, which improves
> consumer read performance on the NTFS file system. The patch does not
> affect file allocation on other filesystems, since it only adds
> statements guarded by 'if(Os.iswindow)' checks or methods used only on
> Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
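The preallocation idea described in the issue can be sketched as follows. This is an illustrative example using the JDK's RandomAccessFile, not the actual patch code; the class name, method name, and size constant are made up for the sketch:

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

public class PreallocateDemo {
    // Reserve a segment-sized region up front so later appends from several
    // replicas do not interleave and fragment the file on disk (the idea
    // behind the KAFKA-1646 change for NTFS).
    static RandomAccessFile openPreallocated(File f, long initFileSize) throws IOException {
        RandomAccessFile raf = new RandomAccessFile(f, "rw");
        raf.setLength(initFileSize); // allocate the space immediately
        raf.seek(0);                 // appends still begin at offset 0
        return raf;
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("segment", ".log");
        try (RandomAccessFile raf = openPreallocated(f, 1024 * 1024)) {
            System.out.println(raf.length()); // reports the preallocated size
        }
        f.delete();
    }
}
```

The trade-off is that a rolled segment carries trailing zero bytes until it is truncated back to its real length, which is why the related patches also deal with truncating trailing zeros on broker restart.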


[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: (was: KAFKA-1646_20150422.patch)



[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: KAFKA-1646_20150422.patch



[jira] [Commented] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14537787#comment-14537787
 ] 

Honghai Chen commented on KAFKA-1646:
-------------------------------------

Created reviewboard against branch origin/trunk



[jira] [Issue Comment Deleted] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Comment: was deleted

(was: Created reviewboard against branch origin/trunk)



[jira] [Commented] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14537791#comment-14537791
 ] 

Honghai Chen commented on KAFKA-1646:
-------------------------------------

Created reviewboard against branch origin/trunk



[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: (was: KAFKA-1646_20150511_AddTestcases.patch)



[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: KAFKA-1646_20150511_AddTestcases.patch



[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: KAFKA-1646_20150511_AddTestcases.patch



[jira] [Issue Comment Deleted] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Comment: was deleted

(was: Created reviewboard against branch origin/trunk)



[jira] [Commented] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14537793#comment-14537793
 ] 

Honghai Chen commented on KAFKA-1646:
-------------------------------------

Created reviewboard against branch origin/trunk



[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: (was: KAFKA-1646_20150511_AddTestcases.patch)



[jira] [Issue Comment Deleted] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Comment: was deleted

(was: Created reviewboard against branch origin/trunk)



[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: (was: KAFKA-1646_20150422.patch)



[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: KAFKA-1646_20150511_AddTestcases.patch



[jira] [Commented] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14537796#comment-14537796
 ] 

Honghai Chen commented on KAFKA-1646:
-------------------------------------

Created reviewboard against branch origin/trunk



[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: (was: KAFKA-1646_20150511_AddTestcases.patch)



[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: KAFKA-1646_20150511_AddTestcases.patch



[jira] [Commented] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14537798#comment-14537798
 ] 

Honghai Chen commented on KAFKA-1646:
-------------------------------------

Created reviewboard against branch origin/trunk

> Improve consumer read performance for Windows
> -
>
> Key: KAFKA-1646
> URL: https://issues.apache.org/jira/browse/KAFKA-1646
> Project: Kafka
>  Issue Type: Improvement
>  Components: log
>Affects Versions: 0.8.1.1
> Environment: Windows
>Reporter: xueqiang wang
>Assignee: xueqiang wang
>  Labels: newbie, patch
> Attachments: Improve consumer read performance for Windows.patch, 
> KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
> KAFKA-1646.patch, KAFKA-1646_20141216_163008.patch, 
> KAFKA-1646_20150306_005526.patch, KAFKA-1646_20150312_200352.patch, 
> KAFKA-1646_20150414_035415.patch, KAFKA-1646_20150414_184503.patch, 
> KAFKA-1646_20150511_AddTestcases.patch
>


[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: (was: KAFKA-1646.patch)

> Improve consumer read performance for Windows
> -
>
> Key: KAFKA-1646
> URL: https://issues.apache.org/jira/browse/KAFKA-1646
> Project: Kafka
>  Issue Type: Improvement
>  Components: log
>Affects Versions: 0.8.1.1
> Environment: Windows
>Reporter: xueqiang wang
>Assignee: xueqiang wang
>  Labels: newbie, patch
> Attachments: Improve consumer read performance for Windows.patch, 
> KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
> KAFKA-1646_20141216_163008.patch, KAFKA-1646_20150306_005526.patch, 
> KAFKA-1646_20150312_200352.patch, KAFKA-1646_20150414_035415.patch, 
> KAFKA-1646_20150414_184503.patch, KAFKA-1646_20150511_AddTestcases.patch
>


[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: (was: KAFKA-1646_20150312_200352.patch)

> Improve consumer read performance for Windows
> -
>
> Key: KAFKA-1646
> URL: https://issues.apache.org/jira/browse/KAFKA-1646
> Project: Kafka
>  Issue Type: Improvement
>  Components: log
>Affects Versions: 0.8.1.1
> Environment: Windows
>Reporter: xueqiang wang
>Assignee: xueqiang wang
>  Labels: newbie, patch
> Attachments: Improve consumer read performance for Windows.patch, 
> KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
> KAFKA-1646_20141216_163008.patch, KAFKA-1646_20150306_005526.patch, 
> KAFKA-1646_20150414_035415.patch, KAFKA-1646_20150414_184503.patch, 
> KAFKA-1646_20150511_AddTestcases.patch
>


[jira] [Issue Comment Deleted] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Comment: was deleted

(was: Created reviewboard  against branch origin/trunk)

> Improve consumer read performance for Windows
> -
>
> Key: KAFKA-1646
> URL: https://issues.apache.org/jira/browse/KAFKA-1646
> Project: Kafka
>  Issue Type: Improvement
>  Components: log
>Affects Versions: 0.8.1.1
> Environment: Windows
>Reporter: xueqiang wang
>Assignee: xueqiang wang
>  Labels: newbie, patch
> Attachments: Improve consumer read performance for Windows.patch, 
> KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
> KAFKA-1646_20141216_163008.patch, KAFKA-1646_20150306_005526.patch, 
> KAFKA-1646_20150511_AddTestcases.patch
>


[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: (was: KAFKA-1646_20150414_184503.patch)

> Improve consumer read performance for Windows
> -
>
> Key: KAFKA-1646
> URL: https://issues.apache.org/jira/browse/KAFKA-1646
> Project: Kafka
>  Issue Type: Improvement
>  Components: log
>Affects Versions: 0.8.1.1
> Environment: Windows
>Reporter: xueqiang wang
>Assignee: xueqiang wang
>  Labels: newbie, patch
> Attachments: Improve consumer read performance for Windows.patch, 
> KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
> KAFKA-1646_20141216_163008.patch, KAFKA-1646_20150306_005526.patch, 
> KAFKA-1646_20150511_AddTestcases.patch
>


[jira] [Updated] (KAFKA-1646) Improve consumer read performance for Windows

2015-05-11 Thread Honghai Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghai Chen updated KAFKA-1646:

Attachment: (was: KAFKA-1646_20150414_035415.patch)

> Improve consumer read performance for Windows
> -
>
> Key: KAFKA-1646
> URL: https://issues.apache.org/jira/browse/KAFKA-1646
> Project: Kafka
>  Issue Type: Improvement
>  Components: log
>Affects Versions: 0.8.1.1
> Environment: Windows
>Reporter: xueqiang wang
>Assignee: xueqiang wang
>  Labels: newbie, patch
> Attachments: Improve consumer read performance for Windows.patch, 
> KAFKA-1646-truncate-off-trailing-zeros-on-broker-restart-if-bro.patch, 
> KAFKA-1646_20141216_163008.patch, KAFKA-1646_20150306_005526.patch, 
> KAFKA-1646_20150511_AddTestcases.patch
>


[jira] [Updated] (KAFKA-1977) Make logEndOffset available in the Zookeeper consumer

2015-05-11 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-1977:
---
Status: In Progress  (was: Patch Available)

[~willf], thanks for the patch. This is a useful feature to add. I agree with 
Joe that we probably should just patch this in the new java consumer.

> Make logEndOffset available in the Zookeeper consumer
> -
>
> Key: KAFKA-1977
> URL: https://issues.apache.org/jira/browse/KAFKA-1977
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Will Funnell
>Priority: Minor
> Attachments: 
> Make_logEndOffset_available_in_the_Zookeeper_consumer.patch
>
>
> The requirement is to create a snapshot from the Kafka topic but NOT do 
> continual reads after that point. For example you might be creating a backup 
> of the data to a file.
> In order to achieve that, a recommended solution by Joel Koshy and Jay Kreps 
> was to expose the high watermark, as maxEndOffset, from the FetchResponse 
> object through to each MessageAndMetadata object in order to be aware when 
> the consumer has reached the end of each partition.
> The submitted patch achieves this by adding the maxEndOffset to the 
> PartitionTopicInfo, which is updated when a new message arrives in the 
> ConsumerFetcherThread and then exposed in MessageAndMetadata.
> See here for discussion:
> http://search-hadoop.com/m/4TaT4TpJy71





[jira] [Commented] (KAFKA-2169) Upgrade to zkclient-0.5

2015-05-11 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538199#comment-14538199
 ] 

Jun Rao commented on KAFKA-2169:


Parth,

Thanks for the patch. Since we haven't officially switched to github, could you 
still attach the patch to the jira?

A few comments on the patch.
1. Could you verify that zkClient 0.5 is api compatible with 0.3? For example, 
in the 0.8.2.0 release, if you just replace zkclient 0.3 jar with the 0.5 jar, 
does the consumer still work?
2. I agree with [~i_maravic]. If we can't establish a new ZK session, we should 
probably just let the broker or the consumer exit.
3. In the broker, the broker and the controller use the same zkclient instance. 
So the actual logic for handleSessionEstablishmentError() just needs to be 
done in one place. Similarly, on the consumer side, there is only one zkclient 
instance.
 

> Upgrade to zkclient-0.5
> ---
>
> Key: KAFKA-2169
> URL: https://issues.apache.org/jira/browse/KAFKA-2169
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Neha Narkhede
>Assignee: Parth Brahmbhatt
>Priority: Critical
>
> zkclient-0.5 is released 
> http://mvnrepository.com/artifact/com.101tec/zkclient/0.5 and has the fix for 
> KAFKA-824





Re: Review Request 33557: Patch for KAFKA-1936

2015-05-11 Thread Joel Koshy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33557/#review83234
---



core/src/main/scala/kafka/server/KafkaRequestHandler.scala


How about
markBrokerTopicMeters
or
updateBrokerTopicStats

I think an even clearer approach would be to have explicit methods:

```
messagesIn(n)
bytesIn(n)
bytesOut(n)
bytesRejected(n)
```
and so on.

The current code assumes everything is a meter (which it is) but the above 
may be clearer and makes fewer assumptions about the underlying metric types.

It may also eliminate the need for the enumeration.

What do you think?
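[Editorial sketch] The explicit-methods idea suggested here could look something like the following. Everything in it is hypothetical: plain `AtomicLong` counters stand in for the Yammer meters, and the class name is made up; the real code would delegate each method to the corresponding metric.

```java
import java.util.concurrent.atomic.AtomicLong;

// Explicit methods per stat, instead of a generic mark(statName, n) keyed by
// an enumeration. Call sites name the stat they update, so the compiler (not
// a string/enum lookup) catches mistakes, and each method is free to back a
// different metric type later.
public class BrokerTopicStatsSketch {
    private final AtomicLong messagesIn = new AtomicLong();
    private final AtomicLong bytesIn = new AtomicLong();
    private final AtomicLong bytesOut = new AtomicLong();
    private final AtomicLong bytesRejected = new AtomicLong();

    public void messagesIn(long n)    { messagesIn.addAndGet(n); }
    public void bytesIn(long n)       { bytesIn.addAndGet(n); }
    public void bytesOut(long n)      { bytesOut.addAndGet(n); }
    public void bytesRejected(long n) { bytesRejected.addAndGet(n); }

    public long totalMessagesIn() { return messagesIn.get(); }
    public long totalBytesIn()    { return bytesIn.get(); }

    public static void main(String[] args) {
        BrokerTopicStatsSketch stats = new BrokerTopicStatsSketch();
        stats.messagesIn(3);
        stats.bytesIn(1024);
        stats.bytesIn(512);
        System.out.println(stats.totalMessagesIn()); // 3
        System.out.println(stats.totalBytesIn());    // 1536
    }
}
```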


- Joel Koshy


On May 4, 2015, 10:18 p.m., Dong Lin wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/33557/
> ---
> 
> (Updated May 4, 2015, 10:18 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1936
> https://issues.apache.org/jira/browse/KAFKA-1936
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-1936; Track offset commit requests separately from producer requests
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/log/Log.scala 
> 84e7b8fe9dd014884b60c4fbe13c835cf02a40e4 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> b4004aa3a1456d337199aa1245fb0ae61f6add46 
>   core/src/main/scala/kafka/server/KafkaRequestHandler.scala 
> a1558afed20bc651ca442a774920d782890167a5 
>   core/src/main/scala/kafka/server/ReplicaManager.scala 
> 59c9bc3ac3a8afc07a6f8c88c5871304db588d17 
>   core/src/test/scala/unit/kafka/server/OffsetCommitTest.scala 
> 652208a70f66045b854549d93cbbc2b77c24b10b 
> 
> Diff: https://reviews.apache.org/r/33557/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Dong Lin
> 
>



[jira] [Updated] (KAFKA-2169) Upgrade to zkclient-0.5

2015-05-11 Thread Parth Brahmbhatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Brahmbhatt updated KAFKA-2169:

Status: Patch Available  (was: Open)

> Upgrade to zkclient-0.5
> ---
>
> Key: KAFKA-2169
> URL: https://issues.apache.org/jira/browse/KAFKA-2169
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Neha Narkhede
>Assignee: Parth Brahmbhatt
>Priority: Critical
> Attachments: KAFKA-2169.patch
>


Review Request 34047: Patch for KAFKA-2169

2015-05-11 Thread Parth Brahmbhatt

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34047/
---

Review request for kafka.


Bugs: KAFKA-2169
https://issues.apache.org/jira/browse/KAFKA-2169


Repository: kafka


Description
---

KAFKA-2169: Moving to zkClient 0.5 release.


Diffs
-

  build.gradle fef515b3b2276b1f861e7cc2e33e74c3ce5e405b 
  core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
aa8d9404a3e78a365df06404b79d0d8f694b4bd6 
  core/src/main/scala/kafka/consumer/ZookeeperTopicEventWatcher.scala 
38f4ec0bd1b388cc8fc04b38bbb2e7aaa1c3f43b 
  core/src/main/scala/kafka/controller/KafkaController.scala 
a6351163f5b6f080d6fa50bcc3533d445fcbc067 
  core/src/main/scala/kafka/server/KafkaHealthcheck.scala 
861b7f644941f88ce04a4e95f6b28d18bf1db16d 

Diff: https://reviews.apache.org/r/34047/diff/


Testing
---


Thanks,

Parth Brahmbhatt



[jira] [Updated] (KAFKA-2169) Upgrade to zkclient-0.5

2015-05-11 Thread Parth Brahmbhatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Brahmbhatt updated KAFKA-2169:

Attachment: KAFKA-2169.patch

> Upgrade to zkclient-0.5
> ---
>
> Key: KAFKA-2169
> URL: https://issues.apache.org/jira/browse/KAFKA-2169
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Neha Narkhede
>Assignee: Parth Brahmbhatt
>Priority: Critical
> Attachments: KAFKA-2169.patch
>


[jira] [Commented] (KAFKA-2169) Upgrade to zkclient-0.5

2015-05-11 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538313#comment-14538313
 ] 

Parth Brahmbhatt commented on KAFKA-2169:
-

Created reviewboard https://reviews.apache.org/r/34047/diff/
 against branch origin/trunk

> Upgrade to zkclient-0.5
> ---
>
> Key: KAFKA-2169
> URL: https://issues.apache.org/jira/browse/KAFKA-2169
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Neha Narkhede
>Assignee: Parth Brahmbhatt
>Priority: Critical
> Attachments: KAFKA-2169.patch
>


Review Request 34050: Patch for KAFKA-2169

2015-05-11 Thread Parth Brahmbhatt

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34050/
---

Review request for kafka.


Bugs: KAFKA-2169
https://issues.apache.org/jira/browse/KAFKA-2169


Repository: kafka


Description
---

Call System.exit instead of throwing a RuntimeException when ZooKeeper session 
establishment fails.


Diffs
-

  build.gradle fef515b3b2276b1f861e7cc2e33e74c3ce5e405b 
  core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
aa8d9404a3e78a365df06404b79d0d8f694b4bd6 
  core/src/main/scala/kafka/consumer/ZookeeperTopicEventWatcher.scala 
38f4ec0bd1b388cc8fc04b38bbb2e7aaa1c3f43b 
  core/src/main/scala/kafka/controller/KafkaController.scala 
a6351163f5b6f080d6fa50bcc3533d445fcbc067 
  core/src/main/scala/kafka/server/KafkaHealthcheck.scala 
861b7f644941f88ce04a4e95f6b28d18bf1db16d 

Diff: https://reviews.apache.org/r/34050/diff/


Testing
---


Thanks,

Parth Brahmbhatt



Re: [DISCUSS] KIP-21 Configuration Management

2015-05-11 Thread Jay Kreps
I totally agree that ZK is not in-and-of-itself a configuration management
solution and it would be better if we could just keep all our config in
files. Anyone who has followed the various config discussions over the past
few years of discussion knows I'm the biggest proponent of immutable
file-driven config.

The analogy to "normal unix services" isn't actually quite right though.
The problem Kafka has is that a number of the configurable entities it
manages are added dynamically--topics, clients, consumer groups, etc. What
this actually resembles is not a unix service like HTTPD but a database,
and databases typically do manage config dynamically for exactly the same
reason.

The last few emails are arguing that files > ZK as a config solution. I
agree with this, but that isn't really the question, right? The reality is
that we need to be able to configure dynamically created entities and we
won't get a satisfactory solution to that using files (e.g. rsync is not an
acceptable topic creation mechanism). What we are discussing is having a
single config mechanism or multiple. If we have multiple you need to solve
the whole config lifecycle problem for both--management, audit, rollback,
etc.

Gwen, you were saying we couldn't get rid of the configuration file, not
sure if I understand. Is that because we need to give the URL for ZK?
Wouldn't the same argument work to say that we can't use configuration
files because we have to specify the file path? I think we can just give
the server the same --zookeeper argument we use everywhere else, right?

-Jay

On Sun, May 10, 2015 at 11:28 AM, Todd Palino  wrote:

> I've been watching this discussion for a while, and I have to jump in and
> side with Gwen here. I see no benefit to putting the configs into Zookeeper
> entirely, and a lot of downside. The two biggest problems I have with this
> are:
>
> 1) Configuration management. OK, so you can write glue for Chef to put
> configs into Zookeeper. You also need to write glue for Puppet. And
> Cfengine. And everything else out there. Files are an industry standard
> practice, they're how just about everyone handles it, and there's reasons
> for that, not just "it's the way it's always been done".
>
> 2) Auditing. Configuration files can easily be managed in a source
> repository system which tracks what changes were made and who made them. It
> also easily allows for rolling back to a previous version. Zookeeper does
> not.
>
> I see absolutely nothing wrong with putting the quota (client) configs and
> the topic config overrides in Zookeeper, and keeping everything else
> exactly where it is, in the configuration file. To handle configurations
> for the broker that can be changed at runtime without a restart, you can
> use the industry standard practice of catching SIGHUP and rereading the
> configuration file at that point.
>
> -Todd
>
>
> On Sun, May 10, 2015 at 4:00 AM, Gwen Shapira 
> wrote:
>
> > I am still not clear about the benefits of managing configuration in
> > ZooKeeper vs. keeping the local file and adding a "refresh" mechanism
> > (signal, protocol, zookeeper, or other).
> >
> > Benefits of staying with configuration file:
> > 1. In line with pretty much any Linux service that exists, so admins
> have a
> > lot of related experience.
> > 2. Much smaller change to our code-base, so easier to patch, review and
> > test. Lower risk overall.
> >
> > Can you walk me over the benefits of using Zookeeper? Especially since it
> > looks like we can't get rid of the file entirely?
> >
> > Gwen
> >
> > On Thu, May 7, 2015 at 3:33 AM, Jun Rao  wrote:
> >
> > > One of the Chef users confirmed that Chef integration could still work
> if
> > > all configs are moved to ZK. My rough understanding of how Chef works
> is
> > > that a user first registers a service host with a Chef server. After
> > that,
> > > a Chef client will be run on the service host. The user can then push
> > > config changes intended for a service/host to the Chef server. The
> server
> > > is then responsible for pushing the changes to Chef clients. Chef
> clients
> > > support pluggable logic. For example, it can generate a config file
> that
> > > Kafka broker will take. If we move all configs to ZK, we can customize
> > the
> > > Chef client to use our config CLI to make the config changes in Kafka.
> In
> > > this model, one probably doesn't need to register every broker in Chef
> > for
> > > the config push. Not sure if Puppet works in a similar way.
> > >
> > > Also for storing the configs, we probably can't store the broker/global
> > > level configs in Kafka itself (e.g. in a special topic). The reason is
> > that
> > > in order to start a broker, we likely need to make some broker level
> > config
> > > changes (e.g., the default log.dir may not be present, the default port
> > may
> > > not be available, etc). If we need a broker to be up to make those
> > changes,
> > > we get into this chicken and egg problem.
> > >
> > > Thanks,
> 

[jira] [Commented] (KAFKA-2169) Upgrade to zkclient-0.5

2015-05-11 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538376#comment-14538376
 ] 

Parth Brahmbhatt commented on KAFKA-2169:
-

Created reviewboard https://reviews.apache.org/r/34050/diff/
 against branch origin/trunk

> Upgrade to zkclient-0.5
> ---
>
> Key: KAFKA-2169
> URL: https://issues.apache.org/jira/browse/KAFKA-2169
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Neha Narkhede
>Assignee: Parth Brahmbhatt
>Priority: Critical
> Attachments: KAFKA-2169.patch, KAFKA-2169.patch
>


[jira] [Updated] (KAFKA-2169) Upgrade to zkclient-0.5

2015-05-11 Thread Parth Brahmbhatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Brahmbhatt updated KAFKA-2169:

Attachment: KAFKA-2169.patch

> Upgrade to zkclient-0.5
> ---
>
> Key: KAFKA-2169
> URL: https://issues.apache.org/jira/browse/KAFKA-2169
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Neha Narkhede
>Assignee: Parth Brahmbhatt
>Priority: Critical
> Attachments: KAFKA-2169.patch, KAFKA-2169.patch
>


[jira] [Commented] (KAFKA-2169) Upgrade to zkclient-0.5

2015-05-11 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538409#comment-14538409
 ] 

Parth Brahmbhatt commented on KAFKA-2169:
-

Posted a review on review board. https://reviews.apache.org/r/34050/diff/
1) I tried console-producer and console-consumer on trunk with only my changes 
applied, and it works.
2) I don't disagree with the approach; however, that is a change in behavior, 
and I was trying to get the upgrade in, given it's blocking other jiras, without 
having to tie that behavior-change discussion to this jira. I have modified the 
behavior so it will not call System.exit.
3) Not sure what you mean here; we are handling it as part of 
handleSessionEstablishmentError() in all cases.
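[Editorial sketch] To make the trade-off in (2) concrete, the two policies being debated might look like this. The listener interface below is a simplified stand-in for zkclient 0.5's state-listener callback, not the real API, and the exit action is injected rather than hard-wired to `System.exit` so callers (and tests) can choose:

```java
public class SessionErrorPolicySketch {

    // Simplified stand-in for the session-establishment-error callback
    // that zkclient 0.5 exposes to its state listeners.
    interface StateListener {
        void handleSessionEstablishmentError(Throwable error);
    }

    // Policy A: treat a failed session as fatal; the caller supplies the
    // exit action (e.g. () -> System.exit(1) in a broker).
    static StateListener exitOnError(Runnable exitAction) {
        return error -> {
            System.err.println("Fatal: could not establish ZooKeeper session: " + error);
            exitAction.run();
        };
    }

    // Policy B (what the patch currently does): surface the error to the
    // caller instead of killing the JVM.
    static StateListener propagate() {
        return error -> {
            throw new RuntimeException("Could not establish ZooKeeper session", error);
        };
    }
}
```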

> Upgrade to zkclient-0.5
> ---
>
> Key: KAFKA-2169
> URL: https://issues.apache.org/jira/browse/KAFKA-2169
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Neha Narkhede
>Assignee: Parth Brahmbhatt
>Priority: Critical
> Attachments: KAFKA-2169.patch, KAFKA-2169.patch
>


[jira] [Created] (KAFKA-2184) ConsumerConfig does not honor default java.util.Properties

2015-05-11 Thread Jason Whaley (JIRA)
Jason Whaley created KAFKA-2184:
---

 Summary: ConsumerConfig does not honor default java.util.Properties
 Key: KAFKA-2184
 URL: https://issues.apache.org/jira/browse/KAFKA-2184
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 0.8.2.0
Reporter: Jason Whaley
Assignee: Neha Narkhede
Priority: Minor


When creating a ConsumerConfig from java.util.Properties, an 
IllegalArgumentException is thrown when the Properties instance is converted to 
a VerifiableProperties instance.  To reproduce:

{code}
package com.test;

import kafka.consumer.ConsumerConfig;

import java.util.Properties;

public class ContainsKeyTest {
    public static void main(String[] args) {
        Properties defaultProperties = new Properties();
        defaultProperties.put("zookeeper.connect", "192.168.50.4:2181");
        defaultProperties.put("zookeeper.session.timeout.ms", "400");
        defaultProperties.put("zookeeper.sync.time.ms", "200");
        defaultProperties.put("auto.commit.interval.ms", "1000");
        defaultProperties.put("group.id", "consumerGroup");

        Properties props = new Properties(defaultProperties);

        // prints 192.168.50.4:2181
        System.out.println(props.getProperty("zookeeper.connect"));

        // throws java.lang.IllegalArgumentException: requirement failed:
        // Missing required property 'zookeeper.connect'
        ConsumerConfig config = new ConsumerConfig(props);
    }
}
{code}

This is easy enough to work around, but default Properties should be honored: 
the kafka.utils.VerifiableProperties#getString method should not rely on 
containsKey, which does not see a Properties instance's defaults.
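[Editorial sketch] One possible workaround until this is fixed (illustrative only, not an official API): flatten the defaults into a plain `Properties` before constructing the config, since `stringPropertyNames()` walks the defaults chain while `containsKey()` does not. The class name below is made up:

```java
import java.util.Properties;

public class FlattenDefaults {

    // Copy every key visible through the defaults chain into a flat
    // Properties, so Hashtable.containsKey()-style checks see them all.
    public static Properties flatten(Properties props) {
        Properties flat = new Properties();
        for (String name : props.stringPropertyNames()) {
            flat.setProperty(name, props.getProperty(name));
        }
        return flat;
    }

    public static void main(String[] args) {
        Properties defaults = new Properties();
        defaults.put("zookeeper.connect", "192.168.50.4:2181");
        Properties props = new Properties(defaults);

        System.out.println(props.containsKey("zookeeper.connect"));          // false
        System.out.println(flatten(props).containsKey("zookeeper.connect")); // true
    }
}
```

Passing `flatten(props)` to the ConsumerConfig constructor would then satisfy the containsKey-based required-property check.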






[jira] [Updated] (KAFKA-2184) ConsumerConfig does not honor default java.util.Properties

2015-05-11 Thread Jason Whaley (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Whaley updated KAFKA-2184:

Description: 
When creating a ConsumerConfig from java.util.Properties, an 
IllegalArgumentException is thrown when the Properties instance is converted to 
a VerifiableProperties instance.  To reproduce:

{code}
package com.test;

import kafka.consumer.ConsumerConfig;

import java.util.Properties;

public class ContainsKeyTest {
    public static void main(String[] args) {
        Properties defaultProperties = new Properties();
        defaultProperties.put("zookeeper.connect", "192.168.50.4:2181");
        defaultProperties.put("zookeeper.session.timeout.ms", "400");
        defaultProperties.put("zookeeper.sync.time.ms", "200");
        defaultProperties.put("auto.commit.interval.ms", "1000");
        defaultProperties.put("group.id", "consumerGroup");

        Properties props = new Properties(defaultProperties);

        // prints 192.168.50.4:2181
        System.out.println(props.getProperty("zookeeper.connect"));

        // throws java.lang.IllegalArgumentException: requirement failed:
        // Missing required property 'zookeeper.connect'
        ConsumerConfig config = new ConsumerConfig(props);
    }
}
{code}

This is easy enough to work around, but default Properties should be honored: 
kafka.utils.VerifiableProperties#getString should not rely on containsKey, 
which does not see a Properties instance's defaults.


  was:
When creating a ConsumerConfig from java.util.Properties, an 
IllegalArgumentException is thrown when the Properties instance is converted to 
a VerifiableProperties instance.  To reproduce:

{code}
package com.test;

import kafka.consumer.ConsumerConfig;

import java.util.Properties;

public class ContainsKeyTest {
public static void main(String[] args) {
Properties defaultProperties = new Properties();
defaultProperties.put("zookeeper.connect", "192.168.50.4:2181");
defaultProperties.put("zookeeper.session.timeout.ms", "400");
defaultProperties.put("zookeeper.sync.time.ms", "200");
defaultProperties.put("auto.commit.interval.ms", "1000");
defaultProperties.put("group.id", "consumerGroup");

Properties props = new Properties(defaultProperties);

//prints 192.168.50.4:2181
System.out.println(props.getProperty("zookeeper.connect"));  

//throws java.lang.IllegalArgumentException: requirement failed: 
Missing required property 'zookeeper.connect'
ConsumerConfig config = new ConsumerConfig(props); 
}
}
{code}

This is easy enough to work around, but default Properties should be honored by 
not calling containsKey inside of kafka.utils.VerifiableProperties#getString 
method



> ConsumerConfig does not honor default java.util.Properties
> --
>
> Key: KAFKA-2184
> URL: https://issues.apache.org/jira/browse/KAFKA-2184
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.2.0
>Reporter: Jason Whaley
>Assignee: Neha Narkhede
>Priority: Minor
>
> When creating a ConsumerConfig from java.util.Properties, an 
> IllegalArgumentException is thrown when the Properties instance is converted 
> to a VerifiableProperties instance.  To reproduce:
> {code}
> package com.test;
> import kafka.consumer.ConsumerConfig;
> import java.util.Properties;
> public class ContainsKeyTest {
> public static void main(String[] args) {
> Properties defaultProperties = new Properties();
> defaultProperties.put("zookeeper.connect", "192.168.50.4:2181");
> defaultProperties.put("zookeeper.session.timeout.ms", "400");
> defaultProperties.put("zookeeper.sync.time.ms", "200");
> defaultProperties.put("auto.commit.interval.ms", "1000");
> defaultProperties.put("group.id", "consumerGroup");
> Properties props = new Properties(defaultProperties);
> //prints 192.168.50.4:2181
> System.out.println(props.getProperty("zookeeper.connect"));  
> //throws java.lang.IllegalArgumentException: requirement failed: 
> Missing required property 'zookeeper.connect'
> ConsumerConfig config = new ConsumerConfig(props); 
> }
> }
> {code}
> This is easy enough to work around, but default Properties should be honored 
> by not calling containsKey inside of 
> kafka.utils.VerifiableProperties#getString





Re: Review Request 33916: Patch for KAFKA-2163

2015-05-11 Thread Jun Rao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33916/#review83264
---

Ship it!


Thanks for the patch. Looks good. Just a couple of minor comments below.


core/src/main/scala/kafka/server/OffsetManager.scala


Should this be changed from debug to info?



core/src/main/scala/kafka/server/OffsetManager.scala


Should this be info level logging?


- Jun Rao


On May 6, 2015, 10:06 p.m., Joel Koshy wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/33916/
> ---
> 
> (Updated May 6, 2015, 10:06 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2163
> https://issues.apache.org/jira/browse/KAFKA-2163
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> fix
> 
> 
> renames and logging improvements
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/cluster/Partition.scala 
> 122b1dbbe45cb27aed79b5be1e735fb617c716b0 
>   core/src/main/scala/kafka/server/OffsetManager.scala 
> 18680ce100f10035175cc0263ba7787ab0f6a17a 
> 
> Diff: https://reviews.apache.org/r/33916/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Joel Koshy
> 
>



Re: [DISCUSS] KIP-21 Configuration Management

2015-05-11 Thread Todd Palino
I understand your point here, Jay, but I disagree that we can't have two
configuration systems. We have two different types of configuration
information. We have configuration that relates to the service itself (the
Kafka broker), and we have configuration that relates to the content within
the service (topics). I would put the client configuration (quotas) in
with the second part, as it is dynamic information. I just don't see a good
argument for effectively degrading the configuration for the service
because of trying to keep it paired with the configuration of dynamic
resources.

-Todd

On Mon, May 11, 2015 at 11:33 AM, Jay Kreps  wrote:

> I totally agree that ZK is not in-and-of-itself a configuration management
> solution and it would be better if we could just keep all our config in
> files. Anyone who has followed the various config discussions over the past
> few years of discussion knows I'm the biggest proponent of immutable
> file-driven config.
>
> The analogy to "normal unix services" isn't actually quite right though.
> The problem Kafka has is that a number of the configurable entities it
> manages are added dynamically--topics, clients, consumer groups, etc. What
> this actually resembles is not a unix service like HTTPD but a database,
> and databases typically do manage config dynamically for exactly the same
> reason.
>
> The last few emails are arguing that files > ZK as a config solution. I
> agree with this, but that isn't really the question, right? The reality is
> that we need to be able to configure dynamically created entities and we
> won't get a satisfactory solution to that using files (e.g. rsync is not an
> acceptable topic creation mechanism). What we are discussing is having a
> single config mechanism or multiple. If we have multiple you need to solve
> the whole config lifecycle problem for both--management, audit, rollback,
> etc.
>
> Gwen, you were saying we couldn't get rid of the configuration file, not
> sure if I understand. Is that because we need to give the URL for ZK?
> Wouldn't the same argument work to say that we can't use configuration
> files because we have to specify the file path? I think we can just give
> the server the same --zookeeper argument we use everywhere else, right?
>
> -Jay
>
> On Sun, May 10, 2015 at 11:28 AM, Todd Palino  wrote:
>
> > I've been watching this discussion for a while, and I have to jump in and
> > side with Gwen here. I see no benefit to putting the configs into
> Zookeeper
> > entirely, and a lot of downside. The two biggest problems I have with
> this
> > are:
> >
> > 1) Configuration management. OK, so you can write glue for Chef to put
> > configs into Zookeeper. You also need to write glue for Puppet. And
> > Cfengine. And everything else out there. Files are an industry standard
> > practice, they're how just about everyone handles it, and there are reasons
> > for that, not just "it's the way it's always been done".
> >
> > 2) Auditing. Configuration files can easily be managed in a source
> > repository system which tracks what changes were made and who made them.
> It
> > also easily allows for rolling back to a previous version. Zookeeper does
> > not.
> >
> > I see absolutely nothing wrong with putting the quota (client) configs
> and
> > the topic config overrides in Zookeeper, and keeping everything else
> > exactly where it is, in the configuration file. To handle configurations
> > for the broker that can be changed at runtime without a restart, you can
> > use the industry standard practice of catching SIGHUP and rereading the
> > configuration file at that point.
> >
> > -Todd
> >
> >
> > On Sun, May 10, 2015 at 4:00 AM, Gwen Shapira 
> > wrote:
> >
> > > I am still not clear about the benefits of managing configuration in
> > > ZooKeeper vs. keeping the local file and adding a "refresh" mechanism
> > > (signal, protocol, zookeeper, or other).
> > >
> > > Benefits of staying with configuration file:
> > > 1. In line with pretty much any Linux service that exists, so admins
> > have a
> > > lot of related experience.
> > > 2. Much smaller change to our code-base, so easier to patch, review and
> > > test. Lower risk overall.
> > >
> > > Can you walk me over the benefits of using Zookeeper? Especially since
> it
> > > looks like we can't get rid of the file entirely?
> > >
> > > Gwen
> > >
> > > On Thu, May 7, 2015 at 3:33 AM, Jun Rao  wrote:
> > >
> > > > One of the Chef users confirmed that Chef integration could still
> work
> > if
> > > > all configs are moved to ZK. My rough understanding of how Chef works
> > is
> > > > that a user first registers a service host with a Chef server. After
> > > that,
> > > > a Chef client will be run on the service host. The user can then push
> > > > config changes intended for a service/host to the Chef server. The
> > server
> > > > is then responsible for pushing the changes to Chef clients. Chef
> > clients
> > > > support pluggable logic. For example, it can gene
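The SIGHUP-and-reread pattern Todd mentions can be sketched as below. This is a minimal sketch, not Kafka code: the class and key names are illustrative, and `sun.misc.Signal` is a JDK-internal API used here only to keep the example short (a production broker would need a portable mechanism).

```java
import java.io.FileReader;
import java.io.IOException;
import java.util.Properties;
import java.util.concurrent.atomic.AtomicReference;

class ReloadableConfig {
    private final String path;
    private final AtomicReference<Properties> current =
            new AtomicReference<>(new Properties());

    ReloadableConfig(String path) { this.path = path; }

    // Re-read the config file and atomically swap it in. In a real broker only
    // the keys that are safe to change at runtime would be applied.
    boolean reload() {
        Properties p = new Properties();
        try (FileReader r = new FileReader(path)) {
            p.load(r);
        } catch (IOException e) {
            System.err.println("Config reload failed, keeping old config: " + e);
            return false;
        }
        current.set(p);
        return true;
    }

    String get(String key) { return current.get().getProperty(key); }

    // Reload when the process receives SIGHUP, e.g. after `kill -HUP <pid>`.
    void installSighupHandler() {
        sun.misc.Signal.handle(new sun.misc.Signal("HUP"), sig -> reload());
    }
}
```

Readers of `get()` always see a consistent snapshot because the whole Properties object is swapped atomically rather than mutated in place.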

[jira] [Created] (KAFKA-2185) Update to Gradle 2.4

2015-05-11 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-2185:
--

 Summary: Update to Gradle 2.4
 Key: KAFKA-2185
 URL: https://issues.apache.org/jira/browse/KAFKA-2185
 Project: Kafka
  Issue Type: Improvement
  Components: build
Reporter: Ismael Juma
Assignee: Ismael Juma
Priority: Minor


Gradle 2.4 has been released recently while Kafka is still using Gradle 2.0. 
There have been a large number of improvements over the various releases 
(including performance improvements):

https://gradle.org/docs/2.1/release-notes
https://gradle.org/docs/2.2/release-notes
https://gradle.org/docs/2.3/release-notes
http://gradle.org/docs/current/release-notes





Review Request 34056: Patch for KAFKA-2185

2015-05-11 Thread Ismael Juma

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34056/
---

Review request for kafka.


Bugs: KAFKA-2185
https://issues.apache.org/jira/browse/KAFKA-2185


Repository: kafka


Description
---

Update gradle to 2.4


Diffs
-

  build.gradle fef515b3b2276b1f861e7cc2e33e74c3ce5e405b 

Diff: https://reviews.apache.org/r/34056/diff/


Testing
---


Thanks,

Ismael Juma



[jira] [Commented] (KAFKA-2185) Update to Gradle 2.4

2015-05-11 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538537#comment-14538537
 ] 

Ismael Juma commented on KAFKA-2185:


Created reviewboard https://reviews.apache.org/r/34056/diff/
 against branch upstream/trunk

> Update to Gradle 2.4
> 
>
> Key: KAFKA-2185
> URL: https://issues.apache.org/jira/browse/KAFKA-2185
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Minor
> Attachments: KAFKA-2185.patch
>
>
> Gradle 2.4 has been released recently while Kafka is still using Gradle 2.0. 
> There have been a large number of improvements over the various releases 
> (including performance improvements):
> https://gradle.org/docs/2.1/release-notes
> https://gradle.org/docs/2.2/release-notes
> https://gradle.org/docs/2.3/release-notes
> http://gradle.org/docs/current/release-notes





[jira] [Updated] (KAFKA-2185) Update to Gradle 2.4

2015-05-11 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2185:
---
Status: Patch Available  (was: Open)

> Update to Gradle 2.4
> 
>
> Key: KAFKA-2185
> URL: https://issues.apache.org/jira/browse/KAFKA-2185
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Minor
> Attachments: KAFKA-2185.patch
>
>
> Gradle 2.4 has been released recently while Kafka is still using Gradle 2.0. 
> There have been a large number of improvements over the various releases 
> (including performance improvements):
> https://gradle.org/docs/2.1/release-notes
> https://gradle.org/docs/2.2/release-notes
> https://gradle.org/docs/2.3/release-notes
> http://gradle.org/docs/current/release-notes





[jira] [Updated] (KAFKA-2185) Update to Gradle 2.4

2015-05-11 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2185:
---
Attachment: KAFKA-2185.patch

> Update to Gradle 2.4
> 
>
> Key: KAFKA-2185
> URL: https://issues.apache.org/jira/browse/KAFKA-2185
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Minor
> Attachments: KAFKA-2185.patch
>
>
> Gradle 2.4 has been released recently while Kafka is still using Gradle 2.0. 
> There have been a large number of improvements over the various releases 
> (including performance improvements):
> https://gradle.org/docs/2.1/release-notes
> https://gradle.org/docs/2.2/release-notes
> https://gradle.org/docs/2.3/release-notes
> http://gradle.org/docs/current/release-notes





[jira] [Updated] (KAFKA-2185) Update to Gradle 2.4

2015-05-11 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2185:
---
Assignee: (was: Ismael Juma)

> Update to Gradle 2.4
> 
>
> Key: KAFKA-2185
> URL: https://issues.apache.org/jira/browse/KAFKA-2185
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Reporter: Ismael Juma
>Priority: Minor
> Attachments: KAFKA-2185.patch
>
>
> Gradle 2.4 has been released recently while Kafka is still using Gradle 2.0. 
> There have been a large number of improvements over the various releases 
> (including performance improvements):
> https://gradle.org/docs/2.1/release-notes
> https://gradle.org/docs/2.2/release-notes
> https://gradle.org/docs/2.3/release-notes
> http://gradle.org/docs/current/release-notes





Re: Review Request 34056: Patch for KAFKA-2185

2015-05-11 Thread Ismael Juma

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34056/
---

(Updated May 11, 2015, 7:55 p.m.)


Review request for kafka.


Bugs: KAFKA-2185
https://issues.apache.org/jira/browse/KAFKA-2185


Repository: kafka


Description
---

Update gradle to 2.4


Diffs (updated)
-

  build.gradle fef515b3b2276b1f861e7cc2e33e74c3ce5e405b 

Diff: https://reviews.apache.org/r/34056/diff/


Testing
---


Thanks,

Ismael Juma



[jira] [Updated] (KAFKA-2185) Update to Gradle 2.4

2015-05-11 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2185:
---
Attachment: KAFKA-2185_2015-05-11_20:55:08.patch

> Update to Gradle 2.4
> 
>
> Key: KAFKA-2185
> URL: https://issues.apache.org/jira/browse/KAFKA-2185
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Reporter: Ismael Juma
>Priority: Minor
> Attachments: KAFKA-2185_2015-05-11_20:55:08.patch
>
>
> Gradle 2.4 has been released recently while Kafka is still using Gradle 2.0. 
> There have been a large number of improvements over the various releases 
> (including performance improvements):
> https://gradle.org/docs/2.1/release-notes
> https://gradle.org/docs/2.2/release-notes
> https://gradle.org/docs/2.3/release-notes
> http://gradle.org/docs/current/release-notes





[jira] [Updated] (KAFKA-2185) Update to Gradle 2.4

2015-05-11 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2185:
---
Attachment: (was: KAFKA-2185.patch)

> Update to Gradle 2.4
> 
>
> Key: KAFKA-2185
> URL: https://issues.apache.org/jira/browse/KAFKA-2185
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Reporter: Ismael Juma
>Priority: Minor
> Attachments: KAFKA-2185_2015-05-11_20:55:08.patch
>
>
> Gradle 2.4 has been released recently while Kafka is still using Gradle 2.0. 
> There have been a large number of improvements over the various releases 
> (including performance improvements):
> https://gradle.org/docs/2.1/release-notes
> https://gradle.org/docs/2.2/release-notes
> https://gradle.org/docs/2.3/release-notes
> http://gradle.org/docs/current/release-notes





[jira] [Commented] (KAFKA-2185) Update to Gradle 2.4

2015-05-11 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538544#comment-14538544
 ] 

Ismael Juma commented on KAFKA-2185:


Updated reviewboard https://reviews.apache.org/r/34056/diff/
 against branch upstream/trunk

> Update to Gradle 2.4
> 
>
> Key: KAFKA-2185
> URL: https://issues.apache.org/jira/browse/KAFKA-2185
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Reporter: Ismael Juma
>Priority: Minor
> Attachments: KAFKA-2185_2015-05-11_20:55:08.patch
>
>
> Gradle 2.4 has been released recently while Kafka is still using Gradle 2.0. 
> There have been a large number of improvements over the various releases 
> (including performance improvements):
> https://gradle.org/docs/2.1/release-notes
> https://gradle.org/docs/2.2/release-notes
> https://gradle.org/docs/2.3/release-notes
> http://gradle.org/docs/current/release-notes





Re: Review Request 34056: Patch for KAFKA-2185

2015-05-11 Thread Ismael Juma

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34056/
---

(Updated May 11, 2015, 8:02 p.m.)


Review request for kafka.


Bugs: KAFKA-2185
https://issues.apache.org/jira/browse/KAFKA-2185


Repository: kafka


Description
---

Update gradle to 2.4


Diffs
-

  build.gradle fef515b3b2276b1f861e7cc2e33e74c3ce5e405b 

Diff: https://reviews.apache.org/r/34056/diff/


Testing (updated)
---

Rebuilt the gradle wrapper via `gradle` and then ran various build commands 
like:

- ./gradlew releaseTarGz
- ./gradlew jarAll
- ./gradlew test
- ./gradlew -PscalaVersion=2.11.6 test


Thanks,

Ismael Juma



Re: [DISCUSS] KIP-21 Configuration Management

2015-05-11 Thread Gwen Shapira
Hi Jay,

I don't say we can't get rid of the configuration file, I believe we can - it
is just a lot of work and not a good idea IMO.

I think the analogy to "normal unix services" stands. MySQL and Postgres
use configuration files.

I think there are two topics here:
1. Configuring dynamically created entities - topics, clients, etc. Topic
config is managed in ZK now, right? And we can do the same for clients, I
guess. Is this what we are discussing here?
2. Dynamic configuration of the broker itself - I think it makes more sense
to add a refresh-from-file mechanism and use Puppet to manage broker
configuration (like normal services). I don't think we have any example of
that kind of configuration yet, right?

Gwen


On Mon, May 11, 2015 at 9:33 PM, Jay Kreps  wrote:

> I totally agree that ZK is not in-and-of-itself a configuration management
> solution and it would be better if we could just keep all our config in
> files. Anyone who has followed the various config discussions over the past
> few years of discussion knows I'm the biggest proponent of immutable
> file-driven config.
>
> The analogy to "normal unix services" isn't actually quite right though.
> The problem Kafka has is that a number of the configurable entities it
> manages are added dynamically--topics, clients, consumer groups, etc. What
> this actually resembles is not a unix service like HTTPD but a database,
> and databases typically do manage config dynamically for exactly the same
> reason.
>
> The last few emails are arguing that files > ZK as a config solution. I
> agree with this, but that isn't really the question, right? The reality is
> that we need to be able to configure dynamically created entities and we
> won't get a satisfactory solution to that using files (e.g. rsync is not an
> acceptable topic creation mechanism). What we are discussing is having a
> single config mechanism or multiple. If we have multiple you need to solve
> the whole config lifecycle problem for both--management, audit, rollback,
> etc.
>
> Gwen, you were saying we couldn't get rid of the configuration file, not
> sure if I understand. Is that because we need to give the URL for ZK?
> Wouldn't the same argument work to say that we can't use configuration
> files because we have to specify the file path? I think we can just give
> the server the same --zookeeper argument we use everywhere else, right?
>
> -Jay
>
> On Sun, May 10, 2015 at 11:28 AM, Todd Palino  wrote:
>
> > I've been watching this discussion for a while, and I have to jump in and
> > side with Gwen here. I see no benefit to putting the configs into
> Zookeeper
> > entirely, and a lot of downside. The two biggest problems I have with
> this
> > are:
> >
> > 1) Configuration management. OK, so you can write glue for Chef to put
> > configs into Zookeeper. You also need to write glue for Puppet. And
> > Cfengine. And everything else out there. Files are an industry standard
> > practice, they're how just about everyone handles it, and there's reasons
> > for that, not just "it's the way it's always been done".
> >
> > 2) Auditing. Configuration files can easily be managed in a source
> > repository system which tracks what changes were made and who made them.
> It
> > also easily allows for rolling back to a previous version. Zookeeper does
> > not.
> >
> > I see absolutely nothing wrong with putting the quota (client) configs
> and
> > the topic config overrides in Zookeeper, and keeping everything else
> > exactly where it is, in the configuration file. To handle configurations
> > for the broker that can be changed at runtime without a restart, you can
> > use the industry standard practice of catching SIGHUP and rereading the
> > configuration file at that point.
> >
> > -Todd
> >
> >
> > On Sun, May 10, 2015 at 4:00 AM, Gwen Shapira 
> > wrote:
> >
> > > I am still not clear about the benefits of managing configuration in
> > > ZooKeeper vs. keeping the local file and adding a "refresh" mechanism
> > > (signal, protocol, zookeeper, or other).
> > >
> > > Benefits of staying with configuration file:
> > > 1. In line with pretty much any Linux service that exists, so admins
> > have a
> > > lot of related experience.
> > > 2. Much smaller change to our code-base, so easier to patch, review and
> > > test. Lower risk overall.
> > >
> > > Can you walk me over the benefits of using Zookeeper? Especially since
> it
> > > looks like we can't get rid of the file entirely?
> > >
> > > Gwen
> > >
> > > On Thu, May 7, 2015 at 3:33 AM, Jun Rao  wrote:
> > >
> > > > One of the Chef users confirmed that Chef integration could still
> work
> > if
> > > > all configs are moved to ZK. My rough understanding of how Chef works
> > is
> > > > that a user first registers a service host with a Chef server. After
> > > that,
> > > > a Chef client will be run on the service host. The user can then push
> > > > config changes intended for a service/host to the Chef server. The
> > server
> > > > is then responsi

Re: [DISCUSS] KIP-21 Configuration Management

2015-05-11 Thread Gwen Shapira
What Todd said :)

(I think my ops background is showing...)

On Mon, May 11, 2015 at 10:17 PM, Todd Palino  wrote:

> I understand your point here, Jay, but I disagree that we can't have two
> configuration systems. We have two different types of configuration
> information. We have configuration that relates to the service itself (the
> Kafka broker), and we have configuration that relates to the content within
> the service (topics). I would put the client configuration (quotas) in
> with the second part, as it is dynamic information. I just don't see a good
> argument for effectively degrading the configuration for the service
> because of trying to keep it paired with the configuration of dynamic
> resources.
>
> -Todd
>
> On Mon, May 11, 2015 at 11:33 AM, Jay Kreps  wrote:
>
> > I totally agree that ZK is not in-and-of-itself a configuration
> management
> > solution and it would be better if we could just keep all our config in
> > files. Anyone who has followed the various config discussions over the
> past
> > few years of discussion knows I'm the biggest proponent of immutable
> > file-driven config.
> >
> > The analogy to "normal unix services" isn't actually quite right though.
> > The problem Kafka has is that a number of the configurable entities it
> > manages are added dynamically--topics, clients, consumer groups, etc.
> What
> > this actually resembles is not a unix service like HTTPD but a database,
> > and databases typically do manage config dynamically for exactly the same
> > reason.
> >
> > The last few emails are arguing that files > ZK as a config solution. I
> > agree with this, but that isn't really the question, right? The reality is
> > that we need to be able to configure dynamically created entities and we
> > won't get a satisfactory solution to that using files (e.g. rsync is not
> an
> > acceptable topic creation mechanism). What we are discussing is having a
> > single config mechanism or multiple. If we have multiple you need to
> solve
> > the whole config lifecycle problem for both--management, audit, rollback,
> > etc.
> >
> > Gwen, you were saying we couldn't get rid of the configuration file, not
> > sure if I understand. Is that because we need to give the URL for ZK?
> > Wouldn't the same argument work to say that we can't use configuration
> > files because we have to specify the file path? I think we can just give
> > the server the same --zookeeper argument we use everywhere else, right?
> >
> > -Jay
> >
> > On Sun, May 10, 2015 at 11:28 AM, Todd Palino  wrote:
> >
> > > I've been watching this discussion for a while, and I have to jump in
> and
> > > side with Gwen here. I see no benefit to putting the configs into
> > Zookeeper
> > > entirely, and a lot of downside. The two biggest problems I have with
> > this
> > > are:
> > >
> > > 1) Configuration management. OK, so you can write glue for Chef to put
> > > configs into Zookeeper. You also need to write glue for Puppet. And
> > > Cfengine. And everything else out there. Files are an industry standard
> > > practice, they're how just about everyone handles it, and there's
> reasons
> > > for that, not just "it's the way it's always been done".
> > >
> > > 2) Auditing. Configuration files can easily be managed in a source
> > > repository system which tracks what changes were made and who made
> them.
> > It
> > > also easily allows for rolling back to a previous version. Zookeeper
> does
> > > not.
> > >
> > > I see absolutely nothing wrong with putting the quota (client) configs
> > and
> > > the topic config overrides in Zookeeper, and keeping everything else
> > > exactly where it is, in the configuration file. To handle
> configurations
> > > for the broker that can be changed at runtime without a restart, you
> can
> > > use the industry standard practice of catching SIGHUP and rereading the
> > > configuration file at that point.
> > >
> > > -Todd
> > >
> > >
> > > On Sun, May 10, 2015 at 4:00 AM, Gwen Shapira 
> > > wrote:
> > >
> > > > I am still not clear about the benefits of managing configuration in
> > > > ZooKeeper vs. keeping the local file and adding a "refresh" mechanism
> > > > (signal, protocol, zookeeper, or other).
> > > >
> > > > Benefits of staying with configuration file:
> > > > 1. In line with pretty much any Linux service that exists, so admins
> > > have a
> > > > lot of related experience.
> > > > 2. Much smaller change to our code-base, so easier to patch, review
> and
> > > > test. Lower risk overall.
> > > >
> > > > Can you walk me over the benefits of using Zookeeper? Especially
> since
> > it
> > > > looks like we can't get rid of the file entirely?
> > > >
> > > > Gwen
> > > >
> > > > On Thu, May 7, 2015 at 3:33 AM, Jun Rao  wrote:
> > > >
> > > > > One of the Chef users confirmed that Chef integration could still
> > work
> > > if
> > > > > all configs are moved to ZK. My rough understanding of how Chef
> works
> > > is
> > > > > that a user first registers a service 

Re: Review Request 34050: Patch for KAFKA-2169

2015-05-11 Thread Jun Rao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34050/#review83278
---


Thanks for the patch. A couple of comments below.


core/src/main/scala/kafka/consumer/ZookeeperTopicEventWatcher.scala


This method throws Exception instead of RuntimeException.



core/src/main/scala/kafka/controller/KafkaController.scala


We register two StateListeners on the same zkclient instance in the broker. 
If we can't establish a new ZK session, both listeners will be called. However, 
we only need to exit in one of the listeners. So, we can just do the logging and 
exit in handleSessionEstablishmentError() in KafkaHealthcheck and add a 
comment in the listener in KafkaController that the actual logic is done in the 
other listener.

Ditto to the two listeners in the consumer.


- Jun Rao


On May 11, 2015, 6:34 p.m., Parth Brahmbhatt wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/34050/
> ---
> 
> (Updated May 11, 2015, 6:34 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2169
> https://issues.apache.org/jira/browse/KAFKA-2169
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> System.exit instead of throwing RuntimeException when zookeeper session 
> establishment fails.
> 
> 
> Diffs
> -
> 
>   build.gradle fef515b3b2276b1f861e7cc2e33e74c3ce5e405b 
>   core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
> aa8d9404a3e78a365df06404b79d0d8f694b4bd6 
>   core/src/main/scala/kafka/consumer/ZookeeperTopicEventWatcher.scala 
> 38f4ec0bd1b388cc8fc04b38bbb2e7aaa1c3f43b 
>   core/src/main/scala/kafka/controller/KafkaController.scala 
> a6351163f5b6f080d6fa50bcc3533d445fcbc067 
>   core/src/main/scala/kafka/server/KafkaHealthcheck.scala 
> 861b7f644941f88ce04a4e95f6b28d18bf1db16d 
> 
> Diff: https://reviews.apache.org/r/34050/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Parth Brahmbhatt
> 
>
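The arrangement Jun suggests — both listeners receive the session-establishment-error callback, but only the health-check listener logs and exits — can be sketched as below. This is a self-contained illustration: `SessionErrorListener` is a hypothetical stand-in for the relevant part of zkclient 0.5's `IZkStateListener`, and the exit action is injected so the sketch is testable (the real code would call `System.exit(1)` directly).

```java
import java.util.function.Consumer;

// Stand-in for the zkclient 0.5 callback being discussed.
interface SessionErrorListener {
    void handleSessionEstablishmentError(Throwable error);
}

// Mirrors KafkaHealthcheck's listener: the one place that logs and exits.
class HealthcheckListener implements SessionErrorListener {
    private final Consumer<Integer> exit; // injectable for testing

    HealthcheckListener(Consumer<Integer> exit) { this.exit = exit; }

    @Override
    public void handleSessionEstablishmentError(Throwable error) {
        System.err.println("Fatal: ZK session could not be established: " + error);
        exit.accept(1); // System.exit(1) in the real broker
    }
}

// Mirrors KafkaController's listener: intentionally a no-op, with a comment
// pointing at the listener that actually handles the error.
class ControllerListener implements SessionErrorListener {
    @Override
    public void handleSessionEstablishmentError(Throwable error) {
        // The actual logging and exit happen in KafkaHealthcheck's listener.
    }
}
```

Keeping the exit in exactly one listener avoids racing two shutdown paths when both callbacks fire on the same failed session.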



[jira] [Comment Edited] (KAFKA-2169) Upgrade to zkclient-0.5

2015-05-11 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538409#comment-14538409
 ] 

Parth Brahmbhatt edited comment on KAFKA-2169 at 5/11/15 8:25 PM:
--

Posted a review on review board. https://reviews.apache.org/r/34050/diff/
1) I tried console-producer and console-consumer at trunk with only my changes 
applied and it works.
2) I do not disagree with the approach, however that is a change in behavior 
and I was trying to get the upgrade in given its blocking other jiras without 
having to tie that behavior change discussion to this jira. I have modified the 
behavior so it will now do System.exit.
3) Not sure what you mean here; we are handling it as part of 
handleSessionEstablishmentError() in all cases. 


was (Author: parth.brahmbhatt):
Posted a review on review board. https://reviews.apache.org/r/34050/diff/
1) I tried console-producer and console-consumer at trunk with only my changes 
applied and it works.
2) I do not disagree with the approach, however that is a change in behavior 
and I was trying to get the upgrade in given its blocking other jiras without 
having to tie that behavior change discussion to this jira. I have modified the 
behavior so it will not do System.exit.
3) Not sure what you mean here; we are handling it as part of 
handleSessionEstablishmentError() in all cases. 

> Upgrade to zkclient-0.5
> ---
>
> Key: KAFKA-2169
> URL: https://issues.apache.org/jira/browse/KAFKA-2169
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Neha Narkhede
>Assignee: Parth Brahmbhatt
>Priority: Critical
> Attachments: KAFKA-2169.patch, KAFKA-2169.patch
>
>
> zkclient-0.5 is released 
> http://mvnrepository.com/artifact/com.101tec/zkclient/0.5 and has the fix for 
> KAFKA-824





[jira] [Commented] (KAFKA-2169) Upgrade to zkclient-0.5

2015-05-11 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538592#comment-14538592
 ] 

Jun Rao commented on KAFKA-2169:


1) By api compatibility, I meant the following. Let's say an application uses a 
third party library that includes a Kafka consumer. Let's say that the third 
party library is built with Kafka 0.8.2 jars. If the api is compatible, the 
application can upgrade to Kafka 0.8.3 with the same third party library w/o 
forcing it to recompile. To test this out, you can get a Kafka 0.8.2 binary 
release, replace everything in libs with the jars in a Kafka 0.8.3 binary 
release (in particular, the new zkclient jar) and see if console consumer in 
Kafka 0.8.2 still works.
3) Commented on the RB. 

> Upgrade to zkclient-0.5
> ---
>
> Key: KAFKA-2169
> URL: https://issues.apache.org/jira/browse/KAFKA-2169
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Neha Narkhede
>Assignee: Parth Brahmbhatt
>Priority: Critical
> Attachments: KAFKA-2169.patch, KAFKA-2169.patch
>
>
> zkclient-0.5 is released 
> http://mvnrepository.com/artifact/com.101tec/zkclient/0.5 and has the fix for 
> KAFKA-824





Re: Review Request 34050: Patch for KAFKA-2169

2015-05-11 Thread Parth Brahmbhatt

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34050/
---

(Updated May 11, 2015, 8:53 p.m.)


Review request for kafka.


Bugs: KAFKA-2169
https://issues.apache.org/jira/browse/KAFKA-2169


Repository: kafka


Description (updated)
---

System.exit instead of throwing RuntimeException when zookeeper session 
establishment fails.


Removing the unnecessary @throws.


Diffs (updated)
-

  build.gradle fef515b3b2276b1f861e7cc2e33e74c3ce5e405b 
  core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
aa8d9404a3e78a365df06404b79d0d8f694b4bd6 
  core/src/main/scala/kafka/consumer/ZookeeperTopicEventWatcher.scala 
38f4ec0bd1b388cc8fc04b38bbb2e7aaa1c3f43b 
  core/src/main/scala/kafka/controller/KafkaController.scala 
a6351163f5b6f080d6fa50bcc3533d445fcbc067 
  core/src/main/scala/kafka/server/KafkaHealthcheck.scala 
861b7f644941f88ce04a4e95f6b28d18bf1db16d 

Diff: https://reviews.apache.org/r/34050/diff/


Testing
---


Thanks,

Parth Brahmbhatt



[jira] [Updated] (KAFKA-2169) Upgrade to zkclient-0.5

2015-05-11 Thread Parth Brahmbhatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Brahmbhatt updated KAFKA-2169:

Attachment: KAFKA-2169_2015-05-11_13:52:57.patch

> Upgrade to zkclient-0.5
> ---
>
> Key: KAFKA-2169
> URL: https://issues.apache.org/jira/browse/KAFKA-2169
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Neha Narkhede
>Assignee: Parth Brahmbhatt
>Priority: Critical
> Attachments: KAFKA-2169.patch, KAFKA-2169.patch, 
> KAFKA-2169_2015-05-11_13:52:57.patch
>
>
> zkclient-0.5 is released 
> http://mvnrepository.com/artifact/com.101tec/zkclient/0.5 and has the fix for 
> KAFKA-824





[jira] [Commented] (KAFKA-2169) Upgrade to zkclient-0.5

2015-05-11 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538629#comment-14538629
 ] 

Parth Brahmbhatt commented on KAFKA-2169:
-

Updated reviewboard https://reviews.apache.org/r/34050/diff/ against branch origin/trunk

> Upgrade to zkclient-0.5
> ---
>
> Key: KAFKA-2169
> URL: https://issues.apache.org/jira/browse/KAFKA-2169
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Neha Narkhede
>Assignee: Parth Brahmbhatt
>Priority: Critical
> Attachments: KAFKA-2169.patch, KAFKA-2169.patch, 
> KAFKA-2169_2015-05-11_13:52:57.patch
>
>
> zkclient-0.5 is released 
> http://mvnrepository.com/artifact/com.101tec/zkclient/0.5 and has the fix for 
> KAFKA-824





Re: Review Request 33378: Patch for KAFKA-2136

2015-05-11 Thread Aditya Auradkar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33378/
---

(Updated May 11, 2015, 9:51 p.m.)


Review request for kafka, Joel Koshy and Jun Rao.


Bugs: KAFKA-2136
https://issues.apache.org/jira/browse/KAFKA-2136


Repository: kafka


Description (updated)
---

Fixing bug


Diffs (updated)
-

  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java 
ef9dd5238fbc771496029866ece1d85db6d7b7a5 
  clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
b2db91ca14bbd17fef5ce85839679144fff3f689 
  clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java 
3dc8b015afd2347a41c9a9dbc02b8e367da5f75f 
  clients/src/main/java/org/apache/kafka/common/requests/FetchRequest.java 
8686d83aa52e435c6adafbe9ff4bd1602281072a 
  clients/src/main/java/org/apache/kafka/common/requests/FetchResponse.java 
eb8951fba48c335095cc43fc3672de1c733e07ff 
  clients/src/main/java/org/apache/kafka/common/requests/ProduceRequest.java 
fabeae3083a8ea55cdacbb9568f3847ccd85bab4 
  clients/src/main/java/org/apache/kafka/common/requests/ProduceResponse.java 
37ec0b79beafcf5735c386b066eb319fb697eff5 
  
clients/src/test/java/org/apache/kafka/clients/consumer/internals/FetcherTest.java
 419541011d652becf0cda7a5e62ce813cddb1732 
  
clients/src/test/java/org/apache/kafka/clients/producer/internals/SenderTest.java
 8b1805d3d2bcb9fe2bacb37d870c3236aa9532c4 
  
clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java 
e3cc1967e407b64cc734548c19e30de700b64ba8 
  core/src/main/scala/kafka/api/FetchRequest.scala 
b038c15186c0cbcc65b59479324052498361b717 
  core/src/main/scala/kafka/api/FetchResponse.scala 
75aaf57fb76ec01660d93701a57ae953d877d81c 
  core/src/main/scala/kafka/api/ProducerRequest.scala 
570b2da1d865086f9830aa919a49063abbbe574d 
  core/src/main/scala/kafka/api/ProducerResponse.scala 
5d1fac4cb8943f5bfaa487f8e9d9d2856efbd330 
  core/src/main/scala/kafka/consumer/SimpleConsumer.scala 
31a2639477bf66f9a05d2b9b07794572d7ec393b 
  core/src/main/scala/kafka/server/AbstractFetcherThread.scala 
a439046e118b6efcc3a5a9d9e8acb79f85e40398 
  core/src/main/scala/kafka/server/DelayedFetch.scala 
de6cf5bdaa0e70394162febc63b50b55ca0a92db 
  core/src/main/scala/kafka/server/DelayedProduce.scala 
05078b24ef28f2f4e099afa943e43f1d00359fda 
  core/src/main/scala/kafka/server/KafkaApis.scala 
417960dd1ab407ebebad8fdb0e97415db3e91a2f 
  core/src/main/scala/kafka/server/OffsetManager.scala 
18680ce100f10035175cc0263ba7787ab0f6a17a 
  core/src/main/scala/kafka/server/ReplicaFetcherThread.scala 
b31b432a226ba79546dd22ef1d2acbb439c2e9a3 
  core/src/main/scala/kafka/server/ReplicaManager.scala 
59c9bc3ac3a8afc07a6f8c88c5871304db588d17 
  core/src/test/scala/unit/kafka/api/RequestResponseSerializationTest.scala 
5717165f2344823fabe8f7cfafae4bb8af2d949a 
  core/src/test/scala/unit/kafka/server/DelayedOperationTest.scala 
f3ab3f4ff8eb1aa6b2ab87ba75f72eceb6649620 
  core/src/test/scala/unit/kafka/server/ReplicaManagerTest.scala 
00d59337a99ac135e8689bd1ecd928f7b1423d79 

Diff: https://reviews.apache.org/r/33378/diff/


Testing
---

New tests added


Thanks,

Aditya Auradkar



[jira] [Commented] (KAFKA-2136) Client side protocol changes to return quota delays

2015-05-11 Thread Aditya A Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538703#comment-14538703
 ] 

Aditya A Auradkar commented on KAFKA-2136:
--

Updated reviewboard https://reviews.apache.org/r/33378/diff/ against branch origin/trunk

> Client side protocol changes to return quota delays
> ---
>
> Key: KAFKA-2136
> URL: https://issues.apache.org/jira/browse/KAFKA-2136
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
> Attachments: KAFKA-2136.patch, KAFKA-2136_2015-05-06_18:32:48.patch, 
> KAFKA-2136_2015-05-06_18:35:54.patch, KAFKA-2136_2015-05-11_14:50:56.patch
>
>
> As described in KIP-13, evolve the protocol to return a throttle_time_ms in 
> the Fetch and the ProduceResponse objects. Add client side metrics on the new 
> producer and consumer to expose the delay time.
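The version-gated field described above can be sketched as follows — a toy, self-contained illustration of how a v1 response can carry throttle_time_ms while v0 readers remain unaffected. The layout and names are illustrative only, not Kafka's actual wire format:

```java
import java.nio.ByteBuffer;

// Illustrative sketch of a version-gated response field, NOT the real Kafka
// wire format: v0 has no throttle time; v1 prepends a 32-bit throttle_time_ms.
public class ThrottledResponse {
    final int throttleTimeMs;
    final long payload; // stands in for the rest of the response body

    ThrottledResponse(int throttleTimeMs, long payload) {
        this.throttleTimeMs = throttleTimeMs;
        this.payload = payload;
    }

    ByteBuffer serialize(int version) {
        ByteBuffer buf = ByteBuffer.allocate(12);
        if (version >= 1)
            buf.putInt(throttleTimeMs); // new field, only written for v1+
        buf.putLong(payload);
        buf.flip();
        return buf;
    }

    static ThrottledResponse parse(ByteBuffer buf, int version) {
        // v0 readers never see the field, so old clients keep working
        int throttle = version >= 1 ? buf.getInt() : 0;
        return new ThrottledResponse(throttle, buf.getLong());
    }

    public static void main(String[] args) {
        ThrottledResponse r = new ThrottledResponse(250, 42L);
        System.out.println(parse(r.serialize(1), 1).throttleTimeMs); // 250
        System.out.println(parse(r.serialize(0), 0).throttleTimeMs); // 0
    }
}
```

A client-side metric exposing the delay would simply record throttleTimeMs after each parse.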





Re: Review Request 34050: Patch for KAFKA-2169

2015-05-11 Thread Jun Rao


> On May 11, 2015, 8:25 p.m., Jun Rao wrote:
> > core/src/main/scala/kafka/controller/KafkaController.scala, lines 1116-1120
> > 
> >
> > We register two StateListeners on the same zkclient instance in the 
> > broker. If we can't establish a new ZK session, both listeners will be 
> > called. However, we only need to exit in one of the listeners. So, we can 
> > just do the logging and exit in handleSessionEstablishmentError() in 
> > KafkaHealthcheck and add a comment in the listener in KafkaController that 
> > the actual logic is done in the other listener.
> > 
> > Ditto to the two listeners in the consumer.

Is this issue addressed?


- Jun


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34050/#review83278
---


On May 11, 2015, 8:53 p.m., Parth Brahmbhatt wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/34050/
> ---
> 
> (Updated May 11, 2015, 8:53 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2169
> https://issues.apache.org/jira/browse/KAFKA-2169
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Call System.exit instead of throwing a RuntimeException when ZooKeeper session 
> establishment fails.
> 
> 
> Removing the unnecessary @throws.
> 
> 
> Diffs
> -
> 
>   build.gradle fef515b3b2276b1f861e7cc2e33e74c3ce5e405b 
>   core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
> aa8d9404a3e78a365df06404b79d0d8f694b4bd6 
>   core/src/main/scala/kafka/consumer/ZookeeperTopicEventWatcher.scala 
> 38f4ec0bd1b388cc8fc04b38bbb2e7aaa1c3f43b 
>   core/src/main/scala/kafka/controller/KafkaController.scala 
> a6351163f5b6f080d6fa50bcc3533d445fcbc067 
>   core/src/main/scala/kafka/server/KafkaHealthcheck.scala 
> 861b7f644941f88ce04a4e95f6b28d18bf1db16d 
> 
> Diff: https://reviews.apache.org/r/34050/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Parth Brahmbhatt
> 
>



[jira] [Updated] (KAFKA-2136) Client side protocol changes to return quota delays

2015-05-11 Thread Aditya A Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya A Auradkar updated KAFKA-2136:
-
Attachment: KAFKA-2136_2015-05-11_14:50:56.patch

> Client side protocol changes to return quota delays
> ---
>
> Key: KAFKA-2136
> URL: https://issues.apache.org/jira/browse/KAFKA-2136
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
> Attachments: KAFKA-2136.patch, KAFKA-2136_2015-05-06_18:32:48.patch, 
> KAFKA-2136_2015-05-06_18:35:54.patch, KAFKA-2136_2015-05-11_14:50:56.patch
>
>
> As described in KIP-13, evolve the protocol to return a throttle_time_ms in 
> the Fetch and the ProduceResponse objects. Add client side metrics on the new 
> producer and consumer to expose the delay time.





[jira] [Commented] (KAFKA-2169) Upgrade to zkclient-0.5

2015-05-11 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538709#comment-14538709
 ] 

Jun Rao commented on KAFKA-2169:


Parth,

1. Have you done the api compatibility test?
3. Did you address the comment on handleSessionEstablishmentError() in the RB?

Thanks,

> Upgrade to zkclient-0.5
> ---
>
> Key: KAFKA-2169
> URL: https://issues.apache.org/jira/browse/KAFKA-2169
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Neha Narkhede
>Assignee: Parth Brahmbhatt
>Priority: Critical
> Attachments: KAFKA-2169.patch, KAFKA-2169.patch, 
> KAFKA-2169_2015-05-11_13:52:57.patch
>
>
> zkclient-0.5 is released 
> http://mvnrepository.com/artifact/com.101tec/zkclient/0.5 and has the fix for 
> KAFKA-824





[jira] [Commented] (KAFKA-2169) Upgrade to zkclient-0.5

2015-05-11 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538713#comment-14538713
 ] 

Parth Brahmbhatt commented on KAFKA-2169:
-

[~junrao]
1) Yes I tested with 0.8.2 and it works fine.
2) I commented on the RB and updated it.

> Upgrade to zkclient-0.5
> ---
>
> Key: KAFKA-2169
> URL: https://issues.apache.org/jira/browse/KAFKA-2169
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Neha Narkhede
>Assignee: Parth Brahmbhatt
>Priority: Critical
> Attachments: KAFKA-2169.patch, KAFKA-2169.patch, 
> KAFKA-2169_2015-05-11_13:52:57.patch
>
>
> zkclient-0.5 is released 
> http://mvnrepository.com/artifact/com.101tec/zkclient/0.5 and has the fix for 
> KAFKA-824





Re: Review Request 34050: Patch for KAFKA-2169

2015-05-11 Thread Parth Brahmbhatt

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34050/#review83286
---



core/src/main/scala/kafka/consumer/ZookeeperTopicEventWatcher.scala


Actually it does not throw any exception now that we are just using 
System.exit. I have removed the @throws annotation.



core/src/main/scala/kafka/controller/KafkaController.scala


Why would we want to do this? If the listeners are invoked twice, as long as 
both of them exit, whichever one gets invoked first will just kill the process 
and the other one will not be invoked. Why would we care which System.exit 
kills the process?
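Either way, the structure Jun suggests can be sketched like this — a self-contained toy (none of these names are the actual zkclient or Kafka classes) in which both listeners receive the callback but only one carries the exit logic; an in-process flag stands in for System.exit so the behavior is observable:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative sketch (not the actual zkclient/Kafka classes): two state
// listeners are registered on the same client, so a failed session
// establishment invokes both; only the health-check listener performs the
// fatal exit, while the controller listener is a documented no-op.
public class SessionErrorDemo {
    interface StateListener { void handleSessionEstablishmentError(Throwable t); }

    // Returns true if the (simulated) fatal exit was triggered.
    static boolean simulate() {
        AtomicBoolean exited = new AtomicBoolean(false);
        // Stand-in for System.exit(1) so the sketch is observable in-process.
        Runnable fatalExit = () -> exited.set(true);

        StateListener healthcheck = t -> fatalExit.run();         // logs + exits here
        StateListener controller  = t -> { /* no-op: healthcheck listener exits */ };

        Throwable err = new RuntimeException("ZK session establishment failed");
        controller.handleSessionEstablishmentError(err);
        healthcheck.handleSessionEstablishmentError(err);
        return exited.get();
    }

    public static void main(String[] args) {
        System.out.println(simulate()); // true
    }
}
```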


- Parth Brahmbhatt


On May 11, 2015, 8:53 p.m., Parth Brahmbhatt wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/34050/
> ---
> 
> (Updated May 11, 2015, 8:53 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2169
> https://issues.apache.org/jira/browse/KAFKA-2169
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Call System.exit instead of throwing a RuntimeException when ZooKeeper session 
> establishment fails.
> 
> 
> Removing the unnecessary @throws.
> 
> 
> Diffs
> -
> 
>   build.gradle fef515b3b2276b1f861e7cc2e33e74c3ce5e405b 
>   core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
> aa8d9404a3e78a365df06404b79d0d8f694b4bd6 
>   core/src/main/scala/kafka/consumer/ZookeeperTopicEventWatcher.scala 
> 38f4ec0bd1b388cc8fc04b38bbb2e7aaa1c3f43b 
>   core/src/main/scala/kafka/controller/KafkaController.scala 
> a6351163f5b6f080d6fa50bcc3533d445fcbc067 
>   core/src/main/scala/kafka/server/KafkaHealthcheck.scala 
> 861b7f644941f88ce04a4e95f6b28d18bf1db16d 
> 
> Diff: https://reviews.apache.org/r/34050/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Parth Brahmbhatt
> 
>



[jira] [Commented] (KAFKA-1997) Refactor Mirror Maker

2015-05-11 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538762#comment-14538762
 ] 

Jun Rao commented on KAFKA-1997:


A late comment: shouldn't we hardcode the key/value serializer to be 
ByteArraySerializer in the producer? Both of them are required properties.
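A hedged sketch of what that hardcoding could look like when mirror maker builds its producer config — the helper name here is made up, and ByteArraySerializer is assumed to be the byte[] serializer class shipped with the new producer:

```java
import java.util.Properties;

// Sketch: mirror maker could pin the producer serializers to byte[] rather
// than making users supply them, since it only forwards raw bytes.
public class MirrorMakerProducerConfig {
    static Properties withByteArraySerializers(Properties userProps) {
        Properties props = new Properties();
        props.putAll(userProps);
        // Overrides any user-supplied value: mirror maker always handles byte[].
        props.setProperty("key.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.setProperty("value.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        return props;
    }

    public static void main(String[] args) {
        Properties user = new Properties();
        user.setProperty("bootstrap.servers", "localhost:9092");
        System.out.println(withByteArraySerializers(user).getProperty("key.serializer"));
    }
}
```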

> Refactor Mirror Maker
> -
>
> Key: KAFKA-1997
> URL: https://issues.apache.org/jira/browse/KAFKA-1997
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-1997.patch, KAFKA-1997.patch, 
> KAFKA-1997_2015-03-03_16:28:46.patch, KAFKA-1997_2015-03-04_15:07:46.patch, 
> KAFKA-1997_2015-03-04_15:42:45.patch, KAFKA-1997_2015-03-05_20:14:58.patch, 
> KAFKA-1997_2015-03-09_18:55:54.patch, KAFKA-1997_2015-03-10_18:31:34.patch, 
> KAFKA-1997_2015-03-11_15:20:18.patch, KAFKA-1997_2015-03-11_19:10:53.patch, 
> KAFKA-1997_2015-03-13_14:43:34.patch, KAFKA-1997_2015-03-17_13:47:01.patch, 
> KAFKA-1997_2015-03-18_12:47:32.patch
>
>
> Refactor mirror maker based on KIP-3





[jira] [Commented] (KAFKA-1997) Refactor Mirror Maker

2015-05-11 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538801#comment-14538801
 ] 

Jiangjie Qin commented on KAFKA-1997:
-

Hey [~junrao], good point. I will do that when I incorporate the close(timeout) 
into Mirror Maker.
Actually, I'm a little bit confused about why we don't provide a default 
serializer/deserializer. For a new user of Kafka, it might be difficult to find 
the right class to use. Right?

> Refactor Mirror Maker
> -
>
> Key: KAFKA-1997
> URL: https://issues.apache.org/jira/browse/KAFKA-1997
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-1997.patch, KAFKA-1997.patch, 
> KAFKA-1997_2015-03-03_16:28:46.patch, KAFKA-1997_2015-03-04_15:07:46.patch, 
> KAFKA-1997_2015-03-04_15:42:45.patch, KAFKA-1997_2015-03-05_20:14:58.patch, 
> KAFKA-1997_2015-03-09_18:55:54.patch, KAFKA-1997_2015-03-10_18:31:34.patch, 
> KAFKA-1997_2015-03-11_15:20:18.patch, KAFKA-1997_2015-03-11_19:10:53.patch, 
> KAFKA-1997_2015-03-13_14:43:34.patch, KAFKA-1997_2015-03-17_13:47:01.patch, 
> KAFKA-1997_2015-03-18_12:47:32.patch
>
>
> Refactor mirror maker based on KIP-3





[jira] [Updated] (KAFKA-1690) new java producer needs ssl support as a client

2015-05-11 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-1690:
--
Attachment: KAFKA-1690_2015-05-11_16:09:36.patch

> new java producer needs ssl support as a client
> ---
>
> Key: KAFKA-1690
> URL: https://issues.apache.org/jira/browse/KAFKA-1690
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joe Stein
>Assignee: Sriharsha Chintalapani
> Fix For: 0.8.3
>
> Attachments: KAFKA-1690.patch, KAFKA-1690.patch, 
> KAFKA-1690_2015-05-10_23:20:30.patch, KAFKA-1690_2015-05-10_23:31:42.patch, 
> KAFKA-1690_2015-05-11_16:09:36.patch
>
>






Re: Review Request 33620: Patch for KAFKA-1690

2015-05-11 Thread Sriharsha Chintalapani

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33620/
---

(Updated May 11, 2015, 11:09 p.m.)


Review request for kafka.


Bugs: KAFKA-1690
https://issues.apache.org/jira/browse/KAFKA-1690


Repository: kafka


Description (updated)
---

KAFKA-1690. new java producer needs ssl support as a client.


KAFKA-1690. new java producer needs ssl support as a client.


KAFKA-1690. new java producer needs ssl support as a client.


KAFKA-1690. new java producer needs ssl support as a client. SSLFactory tests.
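For context, client-side SSL support ultimately rests on JSSE; a minimal sketch of creating a client-mode SSLEngine is below. The actual SSLFactory in this patch configures key stores, trust stores, and protocols from Kafka config — here the JVM defaults are used, so this is illustrative only:

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;

// Illustrative JSSE usage only -- the SSLFactory in this patch builds the
// context from configured key/trust stores; here we use JVM defaults.
public class ClientSslSketch {
    static SSLEngine clientEngine(String host, int port) throws Exception {
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, null, null); // default key managers, trust managers, RNG
        SSLEngine engine = ctx.createSSLEngine(host, port); // peer hint for session reuse
        engine.setUseClientMode(true); // the producer acts as the TLS client
        return engine;
    }

    public static void main(String[] args) throws Exception {
        SSLEngine e = clientEngine("broker.example.com", 9093);
        System.out.println(e.getUseClientMode()); // true
    }
}
```

The non-blocking Selector then drives this engine's handshake over the channel, which is what the new SSLTransportLayer in the diff below wraps.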


Diffs (updated)
-

  build.gradle fef515b3b2276b1f861e7cc2e33e74c3ce5e405b 
  checkstyle/checkstyle.xml a215ff36e9252879f1e0be5a86fef9a875bb8f38 
  checkstyle/import-control.xml f2e6cec267e67ce8e261341e373718e14a8e8e03 
  clients/src/main/java/org/apache/kafka/clients/ClientUtils.java 
0d68bf1e1e90fe9d5d4397ddf817b9a9af8d9f7a 
  clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
cf32e4e7c40738fe6d8adc36ae0cfad459ac5b0b 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
bdff518b732105823058e6182f445248b45dc388 
  clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
d301be4709f7b112e1f3a39f3c04cfa65f00fa60 
  clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
42b12928781463b56fc4a45d96bb4da2745b6d95 
  clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
187d0004c8c46b6664ddaffecc6166d4b47351e5 
  clients/src/main/java/org/apache/kafka/common/config/AbstractConfig.java 
c4fa058692f50abb4f47bd344119d805c60123f5 
  clients/src/main/java/org/apache/kafka/common/network/Authenticator.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/network/Channel.java 
PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/common/network/DefaultAuthenticator.java 
PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/common/network/PlainTextTransportLayer.java
 PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/network/SSLFactory.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/network/SSLTransportLayer.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/network/Selectable.java 
b5f8d83e89f9026dc0853e5f92c00b2d7f043e22 
  clients/src/main/java/org/apache/kafka/common/network/Selector.java 
57de0585e5e9a53eb9dcd99cac1ab3eb2086a302 
  clients/src/main/java/org/apache/kafka/common/network/TransportLayer.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/protocol/SecurityProtocol.java 
dab1a94dd29563688b6ecf4eeb0e180b06049d3f 
  clients/src/main/java/org/apache/kafka/common/utils/Utils.java 
f73eedb030987f018d8446bb1dcd98d19fa97331 
  clients/src/test/java/org/apache/kafka/common/network/EchoServer.java 
PRE-CREATION 
  clients/src/test/java/org/apache/kafka/common/network/SSLFactoryTest.java 
PRE-CREATION 
  clients/src/test/java/org/apache/kafka/common/network/SSLSelectorTest.java 
PRE-CREATION 
  clients/src/test/java/org/apache/kafka/common/network/SelectorTest.java 
d5b306b026e788b4e5479f3419805aa49ae889f3 
  clients/src/test/java/org/apache/kafka/common/utils/UtilsTest.java 
2ebe3c21f611dc133a2dbb8c7dfb0845f8c21498 
  clients/src/test/java/org/apache/kafka/test/TestSSLUtils.java PRE-CREATION 

Diff: https://reviews.apache.org/r/33620/diff/


Testing
---


Thanks,

Sriharsha Chintalapani



[jira] [Commented] (KAFKA-1690) new java producer needs ssl support as a client

2015-05-11 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538838#comment-14538838
 ] 

Sriharsha Chintalapani commented on KAFKA-1690:
---

Updated reviewboard https://reviews.apache.org/r/33620/diff/ against branch origin/trunk

> new java producer needs ssl support as a client
> ---
>
> Key: KAFKA-1690
> URL: https://issues.apache.org/jira/browse/KAFKA-1690
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joe Stein
>Assignee: Sriharsha Chintalapani
> Fix For: 0.8.3
>
> Attachments: KAFKA-1690.patch, KAFKA-1690.patch, 
> KAFKA-1690_2015-05-10_23:20:30.patch, KAFKA-1690_2015-05-10_23:31:42.patch, 
> KAFKA-1690_2015-05-11_16:09:36.patch
>
>






Re: Review Request 33049: Patch for KAFKA-2084

2015-05-11 Thread Aditya Auradkar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33049/
---

(Updated May 11, 2015, 11:16 p.m.)


Review request for kafka, Joel Koshy and Jun Rao.


Bugs: KAFKA-2084
https://issues.apache.org/jira/browse/KAFKA-2084


Repository: kafka


Description (updated)
---

This is currently not being used anywhere in the code because I haven't yet 
figured out how to enforce delays, i.e., purgatory vs. delay queue. I'll have a 
better idea once I look at the new purgatory implementation. Hopefully, this 
smaller patch is easier to review.

Added more test cases


Some locking changes for reading/creating the sensors


WIP patch


Sample usage in ReplicaManager


Updated patch for quotas. This patch does the following: 1. Add per-client 
metrics for both producer and consumers 2. Add configuration for quotas 3. 
Compute delay times in the metrics package and return the delay times in 
QuotaViolationException 4. Add a DelayQueue in KafkaApi's that can be used to 
throttle any type of request. Implemented request throttling for produce and 
fetch requests. 5. Added unit and integration test cases. I've not yet added 
integration test cases testing the consumer delays; I will update the patch once 
those are ready.


Incorporated Jun's comments


Adding javadoc


KAFKA-2084 - Moved the callbacks to ClientQuotaMetrics


Adding more configs


Don't quota replica traffic


Diffs (updated)
-

  clients/src/main/java/org/apache/kafka/common/metrics/MetricConfig.java 
dfa1b0a11042ad9d127226f0e0cec8b1d42b8441 
  clients/src/main/java/org/apache/kafka/common/metrics/Quota.java 
d82bb0c055e631425bc1ebbc7d387baac76aeeaa 
  
clients/src/main/java/org/apache/kafka/common/metrics/QuotaViolationException.java
 a451e5385c9eca76b38b425e8ac856b2715fcffe 
  clients/src/main/java/org/apache/kafka/common/metrics/Sensor.java 
ca823fd4639523018311b814fde69b6177e73b97 
  clients/src/test/java/org/apache/kafka/common/utils/MockTime.java  
  core/src/main/scala/kafka/server/ClientQuotaMetrics.scala PRE-CREATION 
  core/src/main/scala/kafka/server/KafkaApis.scala 
417960dd1ab407ebebad8fdb0e97415db3e91a2f 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
9efa15ca5567b295ab412ee9eea7c03eb4cdc18b 
  core/src/main/scala/kafka/server/KafkaServer.scala 
b7d2a2842e17411a823b93bdedc84657cbd62be1 
  core/src/main/scala/kafka/server/ReplicaManager.scala 
59c9bc3ac3a8afc07a6f8c88c5871304db588d17 
  core/src/main/scala/kafka/server/ThrottledRequest.scala PRE-CREATION 
  core/src/main/scala/kafka/utils/ShutdownableThread.scala 
fc226c863095b7761290292cd8755cd7ad0f155c 
  core/src/test/scala/integration/kafka/api/QuotasTest.scala PRE-CREATION 
  core/src/test/scala/unit/kafka/server/ClientQuotaMetricsTest.scala 
PRE-CREATION 
  core/src/test/scala/unit/kafka/server/KafkaConfigConfigDefTest.scala 
8014a5a6c362785539f24eb03d77278434614fe6 
  core/src/test/scala/unit/kafka/server/ThrottledRequestExpirationTest.scala 
PRE-CREATION 

Diff: https://reviews.apache.org/r/33049/diff/


Testing
---


Thanks,

Aditya Auradkar



[jira] [Updated] (KAFKA-2084) byte rate metrics per client ID (producer and consumer)

2015-05-11 Thread Aditya A Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya A Auradkar updated KAFKA-2084:
-
Attachment: KAFKA-2084_2015-05-11_16:16:01.patch

> byte rate metrics per client ID (producer and consumer)
> ---
>
> Key: KAFKA-2084
> URL: https://issues.apache.org/jira/browse/KAFKA-2084
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
> Attachments: KAFKA-2084.patch, KAFKA-2084_2015-04-09_18:10:56.patch, 
> KAFKA-2084_2015-04-10_17:24:34.patch, KAFKA-2084_2015-04-21_12:21:18.patch, 
> KAFKA-2084_2015-04-21_12:28:05.patch, KAFKA-2084_2015-05-05_15:27:35.patch, 
> KAFKA-2084_2015-05-05_17:52:02.patch, KAFKA-2084_2015-05-11_16:16:01.patch
>
>
> We need to be able to track the bytes-in/bytes-out rate on a per-client ID 
> basis. This is necessary for quotas.





[jira] [Commented] (KAFKA-2084) byte rate metrics per client ID (producer and consumer)

2015-05-11 Thread Aditya A Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538850#comment-14538850
 ] 

Aditya A Auradkar commented on KAFKA-2084:
--

Updated reviewboard https://reviews.apache.org/r/33049/diff/ against branch origin/trunk

> byte rate metrics per client ID (producer and consumer)
> ---
>
> Key: KAFKA-2084
> URL: https://issues.apache.org/jira/browse/KAFKA-2084
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
> Attachments: KAFKA-2084.patch, KAFKA-2084_2015-04-09_18:10:56.patch, 
> KAFKA-2084_2015-04-10_17:24:34.patch, KAFKA-2084_2015-04-21_12:21:18.patch, 
> KAFKA-2084_2015-04-21_12:28:05.patch, KAFKA-2084_2015-05-05_15:27:35.patch, 
> KAFKA-2084_2015-05-05_17:52:02.patch, KAFKA-2084_2015-05-11_16:16:01.patch
>
>
> We need to be able to track the bytes-in/bytes-out rate on a per-client ID 
> basis. This is necessary for quotas.





Re: Review Request 33049: Patch for KAFKA-2084

2015-05-11 Thread Aditya Auradkar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33049/
---

(Updated May 11, 2015, 11:17 p.m.)


Review request for kafka, Joel Koshy and Jun Rao.


Bugs: KAFKA-2084
https://issues.apache.org/jira/browse/KAFKA-2084


Repository: kafka


Description (updated)
---

Updated patch for quotas. This patch does the following: 
1. Add per-client metrics for both producer and consumers 
2. Add configuration for quotas 
3. Compute delay times in the metrics package and return the delay times in 
QuotaViolationException 
4. Add a DelayQueue in KafkaApi's that can be used to throttle any type of 
request. Implemented request throttling for produce and fetch requests. 
5. Added unit and integration test cases.
6. This doesn't include a system test. There is a separate ticket for that
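Item 3 above — computing a delay time on quota violation — can be sketched with a simple proportional formula. The names and the exact formula here are illustrative assumptions, not necessarily what the patch implements:

```java
// Sketch of a proportional throttle-time computation: if the observed byte
// rate exceeds the quota, delay long enough that the average rate over the
// extended window falls back to the quota. Names/formula are illustrative.
public class QuotaDelay {
    static long delayMs(double observedBytesPerSec, double quotaBytesPerSec, long windowMs) {
        if (observedBytesPerSec <= quotaBytesPerSec)
            return 0; // within quota, no throttling
        // Bytes seen in the window must spread over window + delay to meet the
        // quota: observed * W = quota * (W + delay) => delay = (observed/quota - 1) * W
        return (long) ((observedBytesPerSec / quotaBytesPerSec - 1.0) * windowMs);
    }

    public static void main(String[] args) {
        System.out.println(delayMs(2_000_000, 1_000_000, 10_000)); // 10000
        System.out.println(delayMs(500_000, 1_000_000, 10_000));   // 0
    }
}
```

The computed delay is what a DelayQueue entry (item 4) would wait out before the response is sent.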


Diffs
-

  clients/src/main/java/org/apache/kafka/common/metrics/MetricConfig.java 
dfa1b0a11042ad9d127226f0e0cec8b1d42b8441 
  clients/src/main/java/org/apache/kafka/common/metrics/Quota.java 
d82bb0c055e631425bc1ebbc7d387baac76aeeaa 
  
clients/src/main/java/org/apache/kafka/common/metrics/QuotaViolationException.java
 a451e5385c9eca76b38b425e8ac856b2715fcffe 
  clients/src/main/java/org/apache/kafka/common/metrics/Sensor.java 
ca823fd4639523018311b814fde69b6177e73b97 
  clients/src/test/java/org/apache/kafka/common/utils/MockTime.java  
  core/src/main/scala/kafka/server/ClientQuotaMetrics.scala PRE-CREATION 
  core/src/main/scala/kafka/server/KafkaApis.scala 
417960dd1ab407ebebad8fdb0e97415db3e91a2f 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
9efa15ca5567b295ab412ee9eea7c03eb4cdc18b 
  core/src/main/scala/kafka/server/KafkaServer.scala 
b7d2a2842e17411a823b93bdedc84657cbd62be1 
  core/src/main/scala/kafka/server/ReplicaManager.scala 
59c9bc3ac3a8afc07a6f8c88c5871304db588d17 
  core/src/main/scala/kafka/server/ThrottledRequest.scala PRE-CREATION 
  core/src/main/scala/kafka/utils/ShutdownableThread.scala 
fc226c863095b7761290292cd8755cd7ad0f155c 
  core/src/test/scala/integration/kafka/api/QuotasTest.scala PRE-CREATION 
  core/src/test/scala/unit/kafka/server/ClientQuotaMetricsTest.scala 
PRE-CREATION 
  core/src/test/scala/unit/kafka/server/KafkaConfigConfigDefTest.scala 
8014a5a6c362785539f24eb03d77278434614fe6 
  core/src/test/scala/unit/kafka/server/ThrottledRequestExpirationTest.scala 
PRE-CREATION 

Diff: https://reviews.apache.org/r/33049/diff/


Testing
---


Thanks,

Aditya Auradkar



Kafka KIP hangout May 12

2015-05-11 Thread Jun Rao
Hi, Everyone,

We will have a KIP hangout at 11 PST on May 12. The following is the
agenda. If you want to attend and are not on the invite, please let me know.

Agenda:
KIP-11 (authorization): any remaining issues
KIP-12 (sasl/ssl authentication): status check
KIP-19 (Add a request timeout to NetworkClient)
KIP-21 (configuration management)

Thanks,

Jun


[jira] [Commented] (KAFKA-1997) Refactor Mirror Maker

2015-05-11 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538900#comment-14538900
 ] 

Jun Rao commented on KAFKA-1997:


The reasoning is that there is no good default value to set. By making these 
required, we are forcing the users to tell us what they want.

> Refactor Mirror Maker
> -
>
> Key: KAFKA-1997
> URL: https://issues.apache.org/jira/browse/KAFKA-1997
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-1997.patch, KAFKA-1997.patch, 
> KAFKA-1997_2015-03-03_16:28:46.patch, KAFKA-1997_2015-03-04_15:07:46.patch, 
> KAFKA-1997_2015-03-04_15:42:45.patch, KAFKA-1997_2015-03-05_20:14:58.patch, 
> KAFKA-1997_2015-03-09_18:55:54.patch, KAFKA-1997_2015-03-10_18:31:34.patch, 
> KAFKA-1997_2015-03-11_15:20:18.patch, KAFKA-1997_2015-03-11_19:10:53.patch, 
> KAFKA-1997_2015-03-13_14:43:34.patch, KAFKA-1997_2015-03-17_13:47:01.patch, 
> KAFKA-1997_2015-03-18_12:47:32.patch
>
>
> Refactor mirror maker based on KIP-3



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-21 Configuration Management

2015-05-11 Thread Joel Koshy
So the general concern here is the dichotomy of configs (which we
already have - i.e., in the form of broker config files vs topic
configs in zookeeper). We (at LinkedIn) had some discussions on this
last week and had this very question for the operations team whose
opinion is I think to a large degree a touchstone for this decision:
"Has the operations team at LinkedIn experienced any pain so far with
managing topic configs in ZooKeeper (while broker configs are
file-based)?" It turns out that ops overwhelmingly favors the current
approach. i.e., service configs as file-based configs and client/topic
configs in ZooKeeper is intuitive and works great. This may be
somewhat counter-intuitive to devs, but this is one of those decisions
for which ops input is very critical - because for all practical
purposes, they are the users in this discussion.

If we continue with this dichotomy and need to support dynamic config
for client/topic configs as well as select service configs then there
will need to be dichotomy in the config change mechanism as well.
i.e., client/topic configs will change via (say) a ZooKeeper watch and
the service config will change via a config file re-read (on SIGHUP)
after config changes have been pushed out to local files. Is this a
bad thing? Personally, I don't think it is - i.e. I'm in favor of this
approach. What do others think?

Thanks,

Joel

On Mon, May 11, 2015 at 11:08:44PM +0300, Gwen Shapira wrote:
> What Todd said :)
> 
> (I think my ops background is showing...)
> 
> On Mon, May 11, 2015 at 10:17 PM, Todd Palino  wrote:
> 
> > I understand your point here, Jay, but I disagree that we can't have two
> > configuration systems. We have two different types of configuration
> > information. We have configuration that relates to the service itself (the
> > Kafka broker), and we have configuration that relates to the content within
> > the service (topics). I would put the client configuration (quotas) in
> > with the second part, as it is dynamic information. I just don't see a good
> > argument for effectively degrading the configuration for the service
> > because of trying to keep it paired with the configuration of dynamic
> > resources.
> >
> > -Todd
> >
> > On Mon, May 11, 2015 at 11:33 AM, Jay Kreps  wrote:
> >
> > > I totally agree that ZK is not in-and-of-itself a configuration
> > management
> > > solution and it would be better if we could just keep all our config in
> > > files. Anyone who has followed the various config discussions over the
> > past
> > > few years of discussion knows I'm the biggest proponent of immutable
> > > file-driven config.
> > >
> > > The analogy to "normal unix services" isn't actually quite right though.
> > > The problem Kafka has is that a number of the configurable entities it
> > > manages are added dynamically--topics, clients, consumer groups, etc.
> > What
> > > this actually resembles is not a unix service like HTTPD but a database,
> > > and databases typically do manage config dynamically for exactly the same
> > > reason.
> > >
> > > The last few emails are arguing that files > ZK as a config solution. I
> > > agree with this, but that isn't really the question, right? The reality is
> > > that we need to be able to configure dynamically created entities and we
> > > won't get a satisfactory solution to that using files (e.g. rsync is not
> > an
> > > acceptable topic creation mechanism). What we are discussing is having a
> > > single config mechanism or multiple. If we have multiple you need to
> > solve
> > > the whole config lifecycle problem for both--management, audit, rollback,
> > > etc.
> > >
> > > Gwen, you were saying we couldn't get rid of the configuration file, not
> > > sure if I understand. Is that because we need to give the URL for ZK?
> > > Wouldn't the same argument work to say that we can't use configuration
> > > files because we have to specify the file path? I think we can just give
> > > the server the same --zookeeper argument we use everywhere else, right?
> > >
> > > -Jay
> > >
> > > On Sun, May 10, 2015 at 11:28 AM, Todd Palino  wrote:
> > >
> > > > I've been watching this discussion for a while, and I have to jump in
> > and
> > > > side with Gwen here. I see no benefit to putting the configs into
> > > Zookeeper
> > > > entirely, and a lot of downside. The two biggest problems I have with
> > > this
> > > > are:
> > > >
> > > > 1) Configuration management. OK, so you can write glue for Chef to put
> > > > configs into Zookeeper. You also need to write glue for Puppet. And
> > > > Cfengine. And everything else out there. Files are an industry standard
> > > > practice, they're how just about everyone handles it, and there's
> > reasons
> > > > for that, not just "it's the way it's always been done".
> > > >
> > > > 2) Auditing. Configuration files can easily be managed in a source
> > > > repository system which tracks what changes were made and who made
> > them.
> > > It
> > > > also easily al

[jira] [Commented] (KAFKA-2150) FetcherThread backoff need to grab lock before wait on condition.

2015-05-11 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14538994#comment-14538994
 ] 

Guozhang Wang commented on KAFKA-2150:
--

My bad on missing this while reviewing. +1 and committed to trunk.

> FetcherThread backoff need to grab lock before wait on condition.
> -
>
> Key: KAFKA-2150
> URL: https://issues.apache.org/jira/browse/KAFKA-2150
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Sriharsha Chintalapani
>  Labels: newbie++
> Attachments: KAFKA-2150.patch, KAFKA-2150_2015-04-25_13:14:05.patch, 
> KAFKA-2150_2015-04-25_13:18:35.patch, KAFKA-2150_2015-04-25_13:35:36.patch
>
>
> Saw the following error: 
> kafka.api.ProducerBounceTest > testBrokerFailure STANDARD_OUT
> [2015-04-25 00:40:43,997] ERROR [ReplicaFetcherThread-0-0], Error due to  
> (kafka.server.ReplicaFetcherThread:103)
> java.lang.IllegalMonitorStateException
>   at 
> java.util.concurrent.locks.ReentrantLock$Sync.tryRelease(ReentrantLock.java:127)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1239)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.fullyRelease(AbstractQueuedSynchronizer.java:1668)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2107)
>   at 
> kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:95)
>   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> [2015-04-25 00:40:47,064] ERROR [ReplicaFetcherThread-0-1], Error due to  
> (kafka.server.ReplicaFetcherThread:103)
> java.lang.IllegalMonitorStateException
>   at 
> java.util.concurrent.locks.ReentrantLock$Sync.tryRelease(ReentrantLock.java:127)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1239)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.fullyRelease(AbstractQueuedSynchronizer.java:1668)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2107)
>   at 
> kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:95)
>   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> We should grab the lock before waiting on the condition.
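The stack traces above follow from the standard `java.util.concurrent.locks` contract: `Condition.await` must be called while holding the lock the condition was created from, otherwise it throws `IllegalMonitorStateException`. A minimal sketch of the bug and the fix (illustrative only, not the actual Kafka code; the names `partitionMapLock`/`partitionMapCond` are taken from the commit message):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class BackoffDemo {
    private static final ReentrantLock partitionMapLock = new ReentrantLock();
    private static final Condition partitionMapCond = partitionMapLock.newCondition();

    // Buggy variant: awaiting without holding the lock throws
    // IllegalMonitorStateException, as in the stack traces above.
    static boolean buggyBackoff() {
        try {
            partitionMapCond.await(10, TimeUnit.MILLISECONDS);
            return true;
        } catch (IllegalMonitorStateException e) {
            return false; // lock was not held
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    // Fixed variant: acquire the lock first, await, release in finally.
    static boolean fixedBackoff() {
        partitionMapLock.lock();
        try {
            partitionMapCond.await(10, TimeUnit.MILLISECONDS); // times out harmlessly
            return true;
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        } finally {
            partitionMapLock.unlock();
        }
    }

    public static void main(String[] args) {
        System.out.println(buggyBackoff()); // false: IllegalMonitorStateException
        System.out.println(fixedBackoff()); // true: waits out the timeout
    }
}
```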





Re: Review Request 33551: Patch for KAFKA-2150

2015-05-11 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33551/#review83332
---

Ship it!


Ship It!

- Guozhang Wang


On April 25, 2015, 8:35 p.m., Sriharsha Chintalapani wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/33551/
> ---
> 
> (Updated April 25, 2015, 8:35 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2150
> https://issues.apache.org/jira/browse/KAFKA-2150
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-2150. FetcherThread backoff need to grab lock before wait on condition.
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/server/AbstractFetcherThread.scala 
> a439046e118b6efcc3a5a9d9e8acb79f85e40398 
> 
> Diff: https://reviews.apache.org/r/33551/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Sriharsha Chintalapani
> 
>



[jira] [Updated] (KAFKA-2150) FetcherThread backoff need to grab lock before wait on condition.

2015-05-11 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2150:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> FetcherThread backoff need to grab lock before wait on condition.
> -
>
> Key: KAFKA-2150
> URL: https://issues.apache.org/jira/browse/KAFKA-2150
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Sriharsha Chintalapani
>  Labels: newbie++
> Attachments: KAFKA-2150.patch, KAFKA-2150_2015-04-25_13:14:05.patch, 
> KAFKA-2150_2015-04-25_13:18:35.patch, KAFKA-2150_2015-04-25_13:35:36.patch
>
>
> Saw the following error: 
> kafka.api.ProducerBounceTest > testBrokerFailure STANDARD_OUT
> [2015-04-25 00:40:43,997] ERROR [ReplicaFetcherThread-0-0], Error due to  
> (kafka.server.ReplicaFetcherThread:103)
> java.lang.IllegalMonitorStateException
>   at 
> java.util.concurrent.locks.ReentrantLock$Sync.tryRelease(ReentrantLock.java:127)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1239)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.fullyRelease(AbstractQueuedSynchronizer.java:1668)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2107)
>   at 
> kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:95)
>   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> [2015-04-25 00:40:47,064] ERROR [ReplicaFetcherThread-0-1], Error due to  
> (kafka.server.ReplicaFetcherThread:103)
> java.lang.IllegalMonitorStateException
>   at 
> java.util.concurrent.locks.ReentrantLock$Sync.tryRelease(ReentrantLock.java:127)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1239)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.fullyRelease(AbstractQueuedSynchronizer.java:1668)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2107)
>   at 
> kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:95)
>   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> We should grab the lock before waiting on the condition.





Re: Review Request 33916: Patch for KAFKA-2163

2015-05-11 Thread Joel Koshy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33916/#review8
---



core/src/main/scala/kafka/server/OffsetManager.scala


I think either is fine; i.e., this is one of those logs that is useful in 
debugging and is not completely out of place at the INFO level. It should only 
occur once per offset retention check interval, which is 10 minutes by default. 
It is helpful for debugging offset management issues, if any. When we remove 
offsets after topic deletion (KAFKA-2000), it would help get a breakdown of what 
was removed due to expiration and what was removed due to topic deletion.

I'm okay either way though. Let me know if you have a strong preference.



core/src/main/scala/kafka/server/OffsetManager.scala


Similar to above - this used to be debug level in the earlier version, but 
can be very useful for troubleshooting any issues.


- Joel Koshy


On May 6, 2015, 10:06 p.m., Joel Koshy wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/33916/
> ---
> 
> (Updated May 6, 2015, 10:06 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2163
> https://issues.apache.org/jira/browse/KAFKA-2163
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> fix
> 
> 
> renames and logging improvements
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/cluster/Partition.scala 
> 122b1dbbe45cb27aed79b5be1e735fb617c716b0 
>   core/src/main/scala/kafka/server/OffsetManager.scala 
> 18680ce100f10035175cc0263ba7787ab0f6a17a 
> 
> Diff: https://reviews.apache.org/r/33916/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Joel Koshy
> 
>



[jira] [Created] (KAFKA-2186) Follow-up patch of KAFKA-1650

2015-05-11 Thread Jiangjie Qin (JIRA)
Jiangjie Qin created KAFKA-2186:
---

 Summary: Follow-up patch of KAFKA-1650
 Key: KAFKA-2186
 URL: https://issues.apache.org/jira/browse/KAFKA-2186
 Project: Kafka
  Issue Type: Bug
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin


Committing offsets with an explicit offset map was added in KAFKA-1650. It 
should also be added to the consumer connector Java API.
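A hedged sketch of the call shape this issue asks for: committing an explicit per-partition offset map rather than "everything consumed so far". The `TopicAndPartition`/`OffsetAndMetadata` records and the interface below are simplified stand-ins (Java 16+ records for brevity), not the real Kafka classes, and the exact signature in the eventual patch may differ:

```java
import java.util.HashMap;
import java.util.Map;

public class OffsetCommitSketch {
    // Simplified stand-ins for kafka.common.TopicAndPartition and
    // kafka.common.OffsetAndMetadata -- illustrative only.
    record TopicAndPartition(String topic, int partition) {}
    record OffsetAndMetadata(long offset, String metadata) {}

    // Assumed shape of the map-based commit on the Java ConsumerConnector.
    interface ConsumerConnector {
        void commitOffsets(Map<TopicAndPartition, OffsetAndMetadata> offsets,
                           boolean retryOnFailure);
    }

    static long demoCommit() {
        Map<TopicAndPartition, OffsetAndMetadata> toCommit = new HashMap<>();
        toCommit.put(new TopicAndPartition("my-topic", 0),
                     new OffsetAndMetadata(42L, ""));

        // A trivial in-memory connector that just records the request.
        Map<TopicAndPartition, OffsetAndMetadata> committed = new HashMap<>();
        ConsumerConnector connector = (offsets, retry) -> committed.putAll(offsets);
        connector.commitOffsets(toCommit, true);

        return committed.get(new TopicAndPartition("my-topic", 0)).offset();
    }

    public static void main(String[] args) {
        System.out.println(demoCommit()); // prints 42
    }
}
```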





RE: [DISCUSS] KIP-21 Configuration Management

2015-05-11 Thread Aditya Auradkar
I did initially think having everything in ZK was better than having the 
dichotomy Joel referred to, primarily because all Kafka configs could be managed 
consistently.

I guess the biggest disadvantage of driving broker config primarily from ZK is 
that it requires everyone to manage Kafka configuration separately from other 
services. Several people have separately mentioned integration issues with 
systems like Puppet and Chef. While they may support pluggable logic, it does 
require everyone to write that additional piece of logic specific to Kafka. We 
will have to implement group, fabric, tag hierarchy (as Ashish mentioned), 
auditing and ACL management. While this potential consistency is nice, perhaps 
the tradeoff isn't worth it given that the resulting system isn't much superior 
to pushing out new config files and is also quite disruptive. Since this 
impacts operations teams the most, I also think their input is probably the 
most valuable and should perhaps drive the outcome.

I also think it is fine to treat topic and client configuration separately 
because they are more like metadata than actual service configuration. 

Aditya

From: Joel Koshy [jjkosh...@gmail.com]
Sent: Monday, May 11, 2015 4:54 PM
To: dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-21 Configuration Management

So the general concern here is the dichotomy of configs (which we
already have - i.e., in the form of broker config files vs topic
configs in zookeeper). We (at LinkedIn) had some discussions on this
last week and had this very question for the operations team whose
opinion is I think to a large degree a touchstone for this decision:
"Has the operations team at LinkedIn experienced any pain so far with
managing topic configs in ZooKeeper (while broker configs are
file-based)?" It turns out that ops overwhelmingly favors the current
approach. i.e., service configs as file-based configs and client/topic
configs in ZooKeeper is intuitive and works great. This may be
somewhat counter-intuitive to devs, but this is one of those decisions
for which ops input is very critical - because for all practical
purposes, they are the users in this discussion.

If we continue with this dichotomy and need to support dynamic config
for client/topic configs as well as select service configs then there
will need to be a dichotomy in the config change mechanism as well.
i.e., client/topic configs will change via (say) a ZooKeeper watch and
the service config will change via a config file re-read (on SIGHUP)
after config changes have been pushed out to local files. Is this a
bad thing? Personally, I don't think it is - i.e. I'm in favor of this
approach. What do others think?

Thanks,

Joel

On Mon, May 11, 2015 at 11:08:44PM +0300, Gwen Shapira wrote:
> What Todd said :)
>
> (I think my ops background is showing...)
>
> On Mon, May 11, 2015 at 10:17 PM, Todd Palino  wrote:
>
> > I understand your point here, Jay, but I disagree that we can't have two
> > configuration systems. We have two different types of configuration
> > information. We have configuration that relates to the service itself (the
> > Kafka broker), and we have configuration that relates to the content within
> > the service (topics). I would put the client configuration (quotas) in
> > with the second part, as it is dynamic information. I just don't see a good
> > argument for effectively degrading the configuration for the service
> > because of trying to keep it paired with the configuration of dynamic
> > resources.
> >
> > -Todd
> >
> > On Mon, May 11, 2015 at 11:33 AM, Jay Kreps  wrote:
> >
> > > I totally agree that ZK is not in-and-of-itself a configuration
> > management
> > > solution and it would be better if we could just keep all our config in
> > > files. Anyone who has followed the various config discussions over the
> > past
> > > few years of discussion knows I'm the biggest proponent of immutable
> > > file-driven config.
> > >
> > > The analogy to "normal unix services" isn't actually quite right though.
> > > The problem Kafka has is that a number of the configurable entities it
> > > manages are added dynamically--topics, clients, consumer groups, etc.
> > What
> > > this actually resembles is not a unix service like HTTPD but a database,
> > > and databases typically do manage config dynamically for exactly the same
> > > reason.
> > >
> > > The last few emails are arguing that files > ZK as a config solution. I
> > > agree with this, but that isn't really the question, right? The reality is
> > > that we need to be able to configure dynamically created entities and we
> > > won't get a satisfactory solution to that using files (e.g. rsync is not
> > an
> > > acceptable topic creation mechanism). What we are discussing is having a
> > > single config mechanism or multiple. If we have multiple you need to
> > solve
> > > the whole config lifecycle problem for both--management, audit, rollback,
> > > etc.
> >

Re: Review Request 33378: Patch for KAFKA-2136

2015-05-11 Thread Dong Lin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33378/#review83341
---



core/src/main/scala/kafka/api/FetchResponse.scala


Should delayTimeSize be deducted from expectedBytesToWrite?


- Dong Lin


On May 11, 2015, 9:51 p.m., Aditya Auradkar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/33378/
> ---
> 
> (Updated May 11, 2015, 9:51 p.m.)
> 
> 
> Review request for kafka, Joel Koshy and Jun Rao.
> 
> 
> Bugs: KAFKA-2136
> https://issues.apache.org/jira/browse/KAFKA-2136
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Fixing bug
> 
> 
> Diffs
> -
> 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java
>  ef9dd5238fbc771496029866ece1d85db6d7b7a5 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
> b2db91ca14bbd17fef5ce85839679144fff3f689 
>   clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java 
> 3dc8b015afd2347a41c9a9dbc02b8e367da5f75f 
>   clients/src/main/java/org/apache/kafka/common/requests/FetchRequest.java 
> 8686d83aa52e435c6adafbe9ff4bd1602281072a 
>   clients/src/main/java/org/apache/kafka/common/requests/FetchResponse.java 
> eb8951fba48c335095cc43fc3672de1c733e07ff 
>   clients/src/main/java/org/apache/kafka/common/requests/ProduceRequest.java 
> fabeae3083a8ea55cdacbb9568f3847ccd85bab4 
>   clients/src/main/java/org/apache/kafka/common/requests/ProduceResponse.java 
> 37ec0b79beafcf5735c386b066eb319fb697eff5 
>   
> clients/src/test/java/org/apache/kafka/clients/consumer/internals/FetcherTest.java
>  419541011d652becf0cda7a5e62ce813cddb1732 
>   
> clients/src/test/java/org/apache/kafka/clients/producer/internals/SenderTest.java
>  8b1805d3d2bcb9fe2bacb37d870c3236aa9532c4 
>   
> clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java
>  e3cc1967e407b64cc734548c19e30de700b64ba8 
>   core/src/main/scala/kafka/api/FetchRequest.scala 
> b038c15186c0cbcc65b59479324052498361b717 
>   core/src/main/scala/kafka/api/FetchResponse.scala 
> 75aaf57fb76ec01660d93701a57ae953d877d81c 
>   core/src/main/scala/kafka/api/ProducerRequest.scala 
> 570b2da1d865086f9830aa919a49063abbbe574d 
>   core/src/main/scala/kafka/api/ProducerResponse.scala 
> 5d1fac4cb8943f5bfaa487f8e9d9d2856efbd330 
>   core/src/main/scala/kafka/consumer/SimpleConsumer.scala 
> 31a2639477bf66f9a05d2b9b07794572d7ec393b 
>   core/src/main/scala/kafka/server/AbstractFetcherThread.scala 
> a439046e118b6efcc3a5a9d9e8acb79f85e40398 
>   core/src/main/scala/kafka/server/DelayedFetch.scala 
> de6cf5bdaa0e70394162febc63b50b55ca0a92db 
>   core/src/main/scala/kafka/server/DelayedProduce.scala 
> 05078b24ef28f2f4e099afa943e43f1d00359fda 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> 417960dd1ab407ebebad8fdb0e97415db3e91a2f 
>   core/src/main/scala/kafka/server/OffsetManager.scala 
> 18680ce100f10035175cc0263ba7787ab0f6a17a 
>   core/src/main/scala/kafka/server/ReplicaFetcherThread.scala 
> b31b432a226ba79546dd22ef1d2acbb439c2e9a3 
>   core/src/main/scala/kafka/server/ReplicaManager.scala 
> 59c9bc3ac3a8afc07a6f8c88c5871304db588d17 
>   core/src/test/scala/unit/kafka/api/RequestResponseSerializationTest.scala 
> 5717165f2344823fabe8f7cfafae4bb8af2d949a 
>   core/src/test/scala/unit/kafka/server/DelayedOperationTest.scala 
> f3ab3f4ff8eb1aa6b2ab87ba75f72eceb6649620 
>   core/src/test/scala/unit/kafka/server/ReplicaManagerTest.scala 
> 00d59337a99ac135e8689bd1ecd928f7b1423d79 
> 
> Diff: https://reviews.apache.org/r/33378/diff/
> 
> 
> Testing
> ---
> 
> New tests added
> 
> 
> Thanks,
> 
> Aditya Auradkar
> 
>



Review Request 34070: Patch for KAFKA-2186

2015-05-11 Thread Jiangjie Qin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34070/
---

Review request for kafka.


Bugs: KAFKA-2186
https://issues.apache.org/jira/browse/KAFKA-2186


Repository: kafka


Description
---

Patch for KAFKA-2186, a follow-up to KAFKA-1650: add the missing map-based 
offset commit to the Java API.


Diffs
-

  core/src/main/scala/kafka/javaapi/consumer/ConsumerConnector.java 
cc3400ff81fc0db69b5129ad7b440f20a211a79d 

Diff: https://reviews.apache.org/r/34070/diff/


Testing
---


Thanks,

Jiangjie Qin



[jira] [Updated] (KAFKA-2186) Follow-up patch of KAFKA-1650

2015-05-11 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-2186:

Attachment: KAFKA-2186.patch

> Follow-up patch of KAFKA-1650
> -
>
> Key: KAFKA-2186
> URL: https://issues.apache.org/jira/browse/KAFKA-2186
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-2186.patch
>
>
> Offsets commit with a map was added in KAFKA-1650. It should be added to 
> consumer connector java API also.





[jira] [Updated] (KAFKA-2186) Follow-up patch of KAFKA-1650

2015-05-11 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-2186:

Status: Patch Available  (was: Open)

> Follow-up patch of KAFKA-1650
> -
>
> Key: KAFKA-2186
> URL: https://issues.apache.org/jira/browse/KAFKA-2186
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-2186.patch
>
>
> Offsets commit with a map was added in KAFKA-1650. It should be added to 
> consumer connector java API also.





[jira] [Commented] (KAFKA-2186) Follow-up patch of KAFKA-1650

2015-05-11 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14539074#comment-14539074
 ] 

Jiangjie Qin commented on KAFKA-2186:
-

Created reviewboard https://reviews.apache.org/r/34070/diff/
 against branch origin/trunk

> Follow-up patch of KAFKA-1650
> -
>
> Key: KAFKA-2186
> URL: https://issues.apache.org/jira/browse/KAFKA-2186
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-2186.patch
>
>
> Offsets commit with a map was added in KAFKA-1650. It should be added to 
> consumer connector java API also.





Build failed in Jenkins: KafkaPreCommit #99

2015-05-11 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2150; move partitionMapCond.await into partitionMapLock; 
reviewed by Guozhang Wang

--
[...truncated 2140 lines...]
kafka.utils.SchedulerTest > testRestart PASSED

kafka.utils.JsonTest > testJsonEncoding PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.ReplicationUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.utils.timer.TimerTaskListTest > testAll PASSED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask PASSED

kafka.utils.timer.TimerTest > testTaskExpiration PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testTimeBasedLogRollJitter PASSED

kafka.log.LogTest > testSizeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testAppendAndReadWithSequentialOffsets PASSED

kafka.log.LogTest > testAppendAndReadWithNonSequentialOffsets PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testCompressedMessages PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testCompactedTopicConstraints PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testLogRecoversToCorrectOffset PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testBogusIndexSegmentsAreRemoved PASSED

kafka.log.LogTest > testReopenThenTruncate PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testOpenDeletesObsoleteFiles PASSED

kafka.log.LogTest > testAppendMessageWithNullPayload PASSED

kafka.log.LogTest > testCorruptLog PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.LogTest > testParseTopicPartitionName PASSED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName PASSED

kafka.log.LogTest > testParseTopicPartition

Re: Review Request 34070: Patch for KAFKA-2186

2015-05-11 Thread Aditya Auradkar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34070/#review83343
---

Ship it!



core/src/main/scala/kafka/javaapi/consumer/ConsumerConnector.java


How does this work if the consumer doesn't own these partitions? Is it 
possible to commit offsets for any topic? Just curious..


- Aditya Auradkar


On May 12, 2015, 1:39 a.m., Jiangjie Qin wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/34070/
> ---
> 
> (Updated May 12, 2015, 1:39 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2186
> https://issues.apache.org/jira/browse/KAFKA-2186
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Patch for KAFKA-2186 follow-up patch of KAFKA-1650, add the missing offset 
> commit with map in java api
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/javaapi/consumer/ConsumerConnector.java 
> cc3400ff81fc0db69b5129ad7b440f20a211a79d 
> 
> Diff: https://reviews.apache.org/r/34070/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Jiangjie Qin
> 
>



[jira] [Updated] (KAFKA-2146) adding partition did not find the correct startIndex

2015-05-11 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2146:
-
Status: In Progress  (was: Patch Available)

> adding partition did not find the correct startIndex 
> -
>
> Key: KAFKA-2146
> URL: https://issues.apache.org/jira/browse/KAFKA-2146
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.8.2.0
>Reporter: chenshangan
>Priority: Minor
> Fix For: 0.8.3
>
> Attachments: KAFKA-2146.patch
>
>
> TopicCommand provides a tool to add partitions to existing topics. It tries to 
> find the startIndex from existing partitions. There's a minor flaw in this 
> process: it uses the first partition fetched from zookeeper as the start 
> partition, and the first replica id in that partition as the startIndex.
> First, the first partition fetched from zookeeper is not necessarily the 
> start partition. As partition ids begin from zero, we should use the partition 
> with id zero as the start partition.
> Second, broker ids do not necessarily begin from 0, so the startIndex is not 
> necessarily the first replica id in the start partition.





[jira] [Commented] (KAFKA-2146) adding partition did not find the correct startIndex

2015-05-11 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14539129#comment-14539129
 ] 

Guozhang Wang commented on KAFKA-2146:
--

[~chenshangan...@163.com] Thanks for the patch, and sorry for the late review.

I agree with you on #1, but for #2 there is a case when some brokers are 
temporarily not available, while the existing replica list is set statically. 
So 


brokerList.indexOf(existingReplicaList.head)


may return -1 in this case, causing a random pick of the starting index. In 
this case, we should probably pick the next available broker as the starting 
broker, i.e. if the available broker list is (1,2,5) and the head replica of 
partition 0 is 3, we should pick broker-5 as the starting broker.

Also for #1, I think for better replica distribution we should pick the 
starting index as the first replica of the LAST partition's list plus one. That 
is, if we already have three partitions with replica list:

1, 2, 3
2, 3, 4
3, 4, 5

Then if we add another partition, its starting broker should be 4 if all 
(1,2,3,4,5) brokers are available, or 5 if only (1,2,3,5) are available.
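A minimal sketch of the starting-broker selection described in this comment: take the first replica of the last partition plus one, and fall back to the next available broker id (wrapping around) when that broker is not in the live list. This is illustrative only; the class and method names are hypothetical and this is not Kafka's actual AdminUtils code.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class StartBrokerPicker {
    /**
     * Pick the starting broker for a newly added partition:
     * take (first replica of the last partition) + 1, and if that broker id
     * is not available, fall back to the next available id, wrapping around.
     */
    static int pickStartingBroker(List<Integer> availableBrokers,
                                  List<Integer> lastPartitionReplicas) {
        int candidate = lastPartitionReplicas.get(0) + 1;
        List<Integer> sorted = new ArrayList<>(availableBrokers);
        Collections.sort(sorted);
        for (int b : sorted) {
            if (b >= candidate) {
                return b; // first available broker id at or after the candidate
            }
        }
        return sorted.get(0); // wrap around to the smallest available id
    }

    public static void main(String[] args) {
        // Existing replica lists: (1,2,3), (2,3,4), (3,4,5); the last starts at 3.
        List<Integer> last = Arrays.asList(3, 4, 5);
        System.out.println(pickStartingBroker(Arrays.asList(1, 2, 3, 4, 5), last)); // 4
        System.out.println(pickStartingBroker(Arrays.asList(1, 2, 3, 5), last));    // 5
    }
}
```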

Another minor comment: not introduced in this patch, but the name 
"existingReplicaList" was originally misleading; we could rename it to 
"existingReplicaListForLastPartition" or "existingReplicaListForPartitionZero" 
with your current patch.







RE: [DISCUSS] KIP 20 Enable log preallocate to improve consume performance under windows and some old Linux file system

2015-05-11 Thread Honghai Chen
All issues fixed, test cases added, and performance results on Windows attached.
The patch can help improve consume performance by around 25%~50%.

Thanks, Honghai Chen 
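For context, the preallocation discussed in this KIP amounts to setting the segment file to its full length at creation, so later sequential appends never have to extend the file. A minimal sketch of that pattern follows; it is illustrative only and not Kafka's actual FileMessageSet code (the method name is hypothetical).

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

public class PreallocateDemo {
    /**
     * Open a log-segment-like file preallocated to a fixed size, so that
     * subsequent sequential writes never extend the file (the pattern the
     * log.preallocate config in KIP-20 enables).
     */
    static RandomAccessFile openPreallocated(File f, long size) throws IOException {
        RandomAccessFile raf = new RandomAccessFile(f, "rw");
        raf.setLength(size); // reserve the full segment size up front
        raf.seek(0);         // writes still begin at the start of the file
        return raf;
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("segment", ".log");
        f.deleteOnExit();
        try (RandomAccessFile raf = openPreallocated(f, 1024 * 1024)) {
            System.out.println(raf.length()); // 1048576
        }
    }
}
```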

-Original Message-
From: Jun Rao [mailto:j...@confluent.io] 
Sent: Wednesday, May 6, 2015 5:39 AM
To: dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP 20 Enable log preallocate to improve consume 
performance under windows and some old Linux file system

Thanks. Could you update the wiki? Also, commented on the jira.

Jun

On Tue, May 5, 2015 at 12:48 AM, Honghai Chen 
wrote:

> Using config.segmentSize should be ok.   Previously I added that one to make
> sure the file did not exceed config.segmentSize; actually the function 
> maybeRoll already makes sure of that.
> While trying to add a test case for recovery, I was blocked by the 
> rename-related issue and opened a jira at 
> https://issues.apache.org/jira/browse/KAFKA-2170 . Any recommendation for 
> fixing that issue?
>
> Thanks, Honghai Chen
>
> -Original Message-
> From: Jun Rao [mailto:j...@confluent.io]
> Sent: Tuesday, May 5, 2015 12:51 PM
> To: dev@kafka.apache.org
> Subject: Re: [DISCUSS] KIP 20 Enable log preallocate to improve 
> consume performance under windows and some old Linux file system
>
> This seems similar to what's in
> https://issues.apache.org/jira/browse/KAFKA-1065.
>
> Also, could you explain why the preallocated size is set to 
> config.segmentSize
> - 2 * config.maxMessageSize, instead of just config.segmentSize?
>
> Thanks,
>
> Jun
>
> On Mon, May 4, 2015 at 8:12 PM, Honghai Chen 
> 
> wrote:
>
> >   Hi guys,
> > I'm trying to add test cases, but the case below crashes at the line
> > "segReopen.recover(64*1024) --> index.trimToValidSize()"; any idea
> > why? Appreciate your help.
> > The case assumes Kafka suddenly crashed, and the last segment needs
> > to be recovered.
> >
> > kafka.log.LogSegmentTest > testCreateWithInitFileSizeCrash FAILED
> > java.io.IOException: The requested operation cannot be performed
> > on a file with a user-mapped section open
> > at java.io.RandomAccessFile.setLength(Native Method)
> > at kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:292)
> > at kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:283)
> > at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:288)
> > at kafka.log.OffsetIndex.resize(OffsetIndex.scala:283)
> > at kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply$mcV$sp(OffsetIndex.scala:272)
> > at kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:272)
> > at kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:272)
> > at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:288)
> > at kafka.log.OffsetIndex.trimToValidSize(OffsetIndex.scala:271)
> > at kafka.log.LogSegment.recover(LogSegment.scala:199)
> > at kafka.log.LogSegmentTest.testCreateWithInitFileSizeCrash(LogSegmentTest.scala:306)
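The exception above is Windows-specific: RandomAccessFile.setLength cannot resize a file while a live MappedByteBuffer (here, the offset index's mmap) still covers it, whereas Linux and macOS typically allow it. A minimal, illustrative reproduction of the conflicting pattern, not Kafka's actual OffsetIndex code:

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedResizeDemo {
    /**
     * Resize a file while a MappedByteBuffer over it is still reachable.
     * On Windows this throws "The requested operation cannot be performed
     * on a file with a user-mapped section open"; on Linux/macOS the
     * resize typically succeeds.
     */
    static boolean tryResizeWhileMapped(File f) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            raf.setLength(4096);
            MappedByteBuffer buf =
                raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            buf.put(0, (byte) 1); // the mapping is live at this point
            try {
                raf.setLength(1024); // roughly what OffsetIndex.resize does
                return true;         // succeeded (typical on Linux/macOS)
            } catch (IOException e) {
                return false;        // Windows refuses while the mapping is open
            }
        }
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("index", ".idx");
        f.deleteOnExit();
        System.out.println(tryResizeWhileMapped(f));
    }
}
```

This is why the recovery path crashes in the test: the index file is shrunk via setLength while its memory map has not yet been unmapped.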
> >
> >   def recover(maxMessageSize: Int): Int = {
> >     index.truncate()
> >     index.resize(index.maxIndexSize)
> >     var validBytes = 0
> >     var lastIndexEntry = 0
> >     val iter = log.iterator(maxMessageSize)
> >     try {
> >       while(iter.hasNext) {
> >         val entry = iter.next
> >         entry.message.ensureValid()
> >         if(validBytes - lastIndexEntry > indexIntervalBytes) {
> >           // we need to decompress the message, if required, to get the offset of the first uncompressed message
> >           val startOffset =
> >             entry.message.compressionCodec match {
> >               case NoCompressionCodec =>
> >                 entry.offset
> >               case _ =>
> >                 ByteBufferMessageSet.deepIterator(entry.message).next().offset
> >             }
> >           index.append(startOffset, validBytes)
> >           lastIndexEntry = validBytes
> >         }
> >         validBytes += MessageSet.entrySize(entry.message)
> >       }
> >     } catch {
> >       case e: InvalidMessageException =>
> >         logger.warn("Found invalid messages in log segment %s at byte offset %d: %s.".format(log.file.getAbsolutePath, validBytes, e.getMessage))
> >     }
> >     val truncated = log.sizeInBytes - validBytes
> >     log.truncateTo(validBytes)
> >     index.trimToValidSize()
> >     truncated
> >   }
> >
> > /* create a segment with pre-allocate and crash */
> >   @Test
> >   def testCreateWithInitFileSizeCrash() {
> >     val tempDir = TestUtils.tempDir()
> >     val seg = new LogSegment(tempDir, 40, 1, 1000, 0, SystemTime,
> >       false, 512*1024*1024, true)
> >
> >     val ms = messages(50, "hello", "there")
> >     seg.append(50, ms)
> >     val ms2 = messages(60, "alpha", "beta")
> >     seg.append(60, ms2)
> >     val read = seg.read(startOffset = 55, maxSize = 200, maxOffset = None)
> >     assertEquals(ms2.toList, read.messageSet.toList)
> >     va

[jira] [Updated] (KAFKA-2186) Follow-up patch of KAFKA-1650

2015-05-11 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2186:
---
Reviewer: Joel Koshy

> Follow-up patch of KAFKA-1650
> -
>
> Key: KAFKA-2186
> URL: https://issues.apache.org/jira/browse/KAFKA-2186
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-2186.patch
>
>
> Offset commit with a map was added in KAFKA-1650. It should also be added to 
> the consumer connector Java API.





[jira] [Updated] (KAFKA-2169) Upgrade to zkclient-0.5

2015-05-11 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2169:
---
Reviewer: Jun Rao

> Upgrade to zkclient-0.5
> ---
>
> Key: KAFKA-2169
> URL: https://issues.apache.org/jira/browse/KAFKA-2169
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Neha Narkhede
>Assignee: Parth Brahmbhatt
>Priority: Critical
> Attachments: KAFKA-2169.patch, KAFKA-2169.patch, 
> KAFKA-2169_2015-05-11_13:52:57.patch
>
>
> zkclient-0.5 is released 
> http://mvnrepository.com/artifact/com.101tec/zkclient/0.5 and has the fix for 
> KAFKA-824





[jira] [Updated] (KAFKA-2185) Update to Gradle 2.4

2015-05-11 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2185:
---
Reviewer: Jun Rao
Assignee: Ismael Juma

> Update to Gradle 2.4
> 
>
> Key: KAFKA-2185
> URL: https://issues.apache.org/jira/browse/KAFKA-2185
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Minor
> Attachments: KAFKA-2185_2015-05-11_20:55:08.patch
>
>
> Gradle 2.4 has been released recently while Kafka is still using Gradle 2.0. 
> There have been a large number of improvements over the various releases 
> (including performance improvements):
> https://gradle.org/docs/2.1/release-notes
> https://gradle.org/docs/2.2/release-notes
> https://gradle.org/docs/2.3/release-notes
> http://gradle.org/docs/current/release-notes





[jira] [Updated] (KAFKA-2132) Move Log4J appender to clients module

2015-05-11 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2132:
---
Reviewer: Jay Kreps  (was: Gwen Shapira)

> Move Log4J appender to clients module
> -
>
> Key: KAFKA-2132
> URL: https://issues.apache.org/jira/browse/KAFKA-2132
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Gwen Shapira
>Assignee: Ashish K Singh
> Attachments: KAFKA-2132.patch, KAFKA-2132_2015-04-27_19:59:46.patch, 
> KAFKA-2132_2015-04-30_12:22:02.patch, KAFKA-2132_2015-04-30_15:53:17.patch
>
>
> Log4j appender is just a producer.
> Since we have a new producer in the clients module, no need to keep Log4J 
> appender in "core" and force people to package all of Kafka with their apps.
> Let's move the Log4jAppender to the clients module.





[jira] [Updated] (KAFKA-2178) Loss of highwatermarks on incorrect cluster shutdown/restart

2015-05-11 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2178:
---
Reviewer: Jun Rao

> Loss of highwatermarks on incorrect cluster shutdown/restart
> 
>
> Key: KAFKA-2178
> URL: https://issues.apache.org/jira/browse/KAFKA-2178
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.2.1
>Reporter: Alexey Ozeritskiy
> Attachments: KAFKA-2178.patch
>
>
> ReplicaManager flushes highwatermarks only for partitions which it received 
> from the Controller.
> If the Controller sends an incomplete list of partitions, then ReplicaManager 
> will write an incomplete list of highwatermarks.
> As a result one can lose a lot of data during an incorrect broker restart.
> We hit this situation in real life on our cluster.




