[jira] [Created] (KAFKA-1779) FetchRequest.maxWait has no effect

2014-11-17 Thread Magnus Vojbacke (JIRA)
Magnus Vojbacke created KAFKA-1779:
--

 Summary: FetchRequest.maxWait has no effect
 Key: KAFKA-1779
 URL: https://issues.apache.org/jira/browse/KAFKA-1779
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.8.2
Reporter: Magnus Vojbacke
Priority: Minor
 Fix For: 0.8.2


Setting the maxWait field in a kafka.api.FetchRequest does not appear to have 
any effect. My assumption is: if I send a fetch request for messages after 
offset X for a partition where there are currently no messages with offsets 
after X, a fetch request built with the maxWait option should block on the 
broker side for up to $maxWait milliseconds, waiting for a new message to 
arrive.

Currently, the request seems to return an empty result immediately. As a 
result, our client is forced to sleep manually after each fetch request that 
returns an empty result.

On the mailing list, it was stated that this bug should be fixed in 0.8.2, but 
I'm still seeing this issue on 0.8.2-beta.

{code}
import kafka.api.{FetchRequestBuilder, OffsetRequest, OffsetResponse, PartitionOffsetRequestInfo}
import kafka.common.TopicAndPartition
import kafka.consumer.SimpleConsumer

// Choose a suitable topic / partition / lead broker combination
val host = ???
val port = ???
val topic = ???
val partition = ???

val cons = new SimpleConsumer(host, port, 5000, 5000, "my-id")

val topicAndPartition = new TopicAndPartition(topic, partition)
println(topicAndPartition)
val requestInfo = Map(topicAndPartition ->
  new PartitionOffsetRequestInfo(kafka.api.OffsetRequest.LatestTime, 1))
println(requestInfo)
val request = new OffsetRequest(requestInfo)
println(request)
val response: OffsetResponse = cons.getOffsetsBefore(request)
println("code=" + response.partitionErrorAndOffsets(topicAndPartition).error)
println(response)
val offset = response.partitionErrorAndOffsets(topicAndPartition).offsets(0)
val req = new FetchRequestBuilder().clientId("my-id")
  .addFetch(topic, partition, offset, 50).maxWait(1)

// The following requests appear to return within a few hundred milliseconds
// of each other, but my assumption is that maxWait(1) should make each
// request block for at least 1 millisecond before returning an empty result.
println(System.currentTimeMillis + " " + cons.fetch(req.build()))
println(System.currentTimeMillis + " " + cons.fetch(req.build()))
println(System.currentTimeMillis + " " + cons.fetch(req.build()))
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1481) Stop using dashes AND underscores as separators in MBean names

2014-11-17 Thread Vladimir Tretyakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Tretyakov updated KAFKA-1481:
--
Attachment: KAFKA-1481_2014-11-17_14-33-03.patch

> Stop using dashes AND underscores as separators in MBean names
> --
>
> Key: KAFKA-1481
> URL: https://issues.apache.org/jira/browse/KAFKA-1481
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1.1
>Reporter: Otis Gospodnetic
>Priority: Critical
>  Labels: patch
> Fix For: 0.8.3
>
> Attachments: KAFKA-1481_2014-06-06_13-06-35.patch, 
> KAFKA-1481_2014-10-13_18-23-35.patch, KAFKA-1481_2014-10-14_21-53-35.patch, 
> KAFKA-1481_2014-10-15_10-23-35.patch, KAFKA-1481_2014-10-20_23-14-35.patch, 
> KAFKA-1481_2014-10-21_09-14-35.patch, KAFKA-1481_2014-10-30_21-35-43.patch, 
> KAFKA-1481_2014-10-31_14-35-43.patch, 
> KAFKA-1481_2014-11-03_16-39-41_doc.patch, 
> KAFKA-1481_2014-11-03_17-02-23.patch, 
> KAFKA-1481_2014-11-10_20-39-41_doc.patch, 
> KAFKA-1481_2014-11-10_21-02-23.patch, KAFKA-1481_2014-11-14_16-33-03.patch, 
> KAFKA-1481_2014-11-14_16-39-41_doc.patch, 
> KAFKA-1481_2014-11-17_14-33-03.patch, 
> KAFKA-1481_IDEA_IDE_2014-10-14_21-53-35.patch, 
> KAFKA-1481_IDEA_IDE_2014-10-15_10-23-35.patch, 
> KAFKA-1481_IDEA_IDE_2014-10-20_20-14-35.patch, 
> KAFKA-1481_IDEA_IDE_2014-10-20_23-14-35.patch, alternateLayout1.png, 
> alternateLayout2.png, diff-for-alternate-layout1.patch, 
> diff-for-alternate-layout2.patch, originalLayout.png
>
>
> MBeans should not use dashes or underscores as separators because these 
> characters are allowed in hostnames, topics, group and consumer IDs, etc., 
> and these are embedded in MBean names, making it impossible to parse out 
> individual bits from MBean names.
> Perhaps a pipe character should be used to avoid the conflict. 
> This looks like a major blocker because it means nobody can write Kafka 0.8.x 
> monitoring tools unless they are doing it for themselves AND do not use 
> dashes AND do not use underscores.
> See: http://search-hadoop.com/m/4TaT4lonIW
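
As a hedged illustration of the parsing problem (the MBean name below is made 
up, not taken from Kafka):

{code}
// With dashes as separators, the group and topic cannot be recovered
// unambiguously, because both may themselves contain dashes.
object MBeanSplitAmbiguity extends App {
  val mbeanName = "my-group-my-topic-BytesPerSec"
  // Each of these (group, topic) splits is consistent with the name:
  //   ("my", "group-my-topic"), ("my-group", "my-topic"), ("my-group-my", "topic")
  println(mbeanName.split("-").mkString(" | ")) // my | group | my | topic | BytesPerSec
}
{code}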



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1481) Stop using dashes AND underscores as separators in MBean names

2014-11-17 Thread Vladimir Tretyakov (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214559#comment-14214559
 ] 

Vladimir Tretyakov commented on KAFKA-1481:
---

Hi, I've provided a new patch. Hopefully the last one? :)


80.1 Done
80.2 Done
80.3 Done
80.4 Do you mean 'new producer'? Can I just reformulate KAFKA-1768? Something 
like: "Expose version via JMX in new client"?
81. Done
82. Done
83. Left as is.

> Stop using dashes AND underscores as separators in MBean names
> --
>
> Key: KAFKA-1481
> URL: https://issues.apache.org/jira/browse/KAFKA-1481
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1.1
>Reporter: Otis Gospodnetic
>Priority: Critical
>  Labels: patch
> Fix For: 0.8.3
>
> Attachments: KAFKA-1481_2014-06-06_13-06-35.patch, 
> KAFKA-1481_2014-10-13_18-23-35.patch, KAFKA-1481_2014-10-14_21-53-35.patch, 
> KAFKA-1481_2014-10-15_10-23-35.patch, KAFKA-1481_2014-10-20_23-14-35.patch, 
> KAFKA-1481_2014-10-21_09-14-35.patch, KAFKA-1481_2014-10-30_21-35-43.patch, 
> KAFKA-1481_2014-10-31_14-35-43.patch, 
> KAFKA-1481_2014-11-03_16-39-41_doc.patch, 
> KAFKA-1481_2014-11-03_17-02-23.patch, 
> KAFKA-1481_2014-11-10_20-39-41_doc.patch, 
> KAFKA-1481_2014-11-10_21-02-23.patch, KAFKA-1481_2014-11-14_16-33-03.patch, 
> KAFKA-1481_2014-11-14_16-39-41_doc.patch, 
> KAFKA-1481_2014-11-17_14-33-03.patch, 
> KAFKA-1481_IDEA_IDE_2014-10-14_21-53-35.patch, 
> KAFKA-1481_IDEA_IDE_2014-10-15_10-23-35.patch, 
> KAFKA-1481_IDEA_IDE_2014-10-20_20-14-35.patch, 
> KAFKA-1481_IDEA_IDE_2014-10-20_23-14-35.patch, alternateLayout1.png, 
> alternateLayout2.png, diff-for-alternate-layout1.patch, 
> diff-for-alternate-layout2.patch, originalLayout.png
>
>
> MBeans should not use dashes or underscores as separators because these 
> characters are allowed in hostnames, topics, group and consumer IDs, etc., 
> and these are embedded in MBean names, making it impossible to parse out 
> individual bits from MBean names.
> Perhaps a pipe character should be used to avoid the conflict. 
> This looks like a major blocker because it means nobody can write Kafka 0.8.x 
> monitoring tools unless they are doing it for themselves AND do not use 
> dashes AND do not use underscores.
> See: http://search-hadoop.com/m/4TaT4lonIW



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-1481) Stop using dashes AND underscores as separators in MBean names

2014-11-17 Thread Vladimir Tretyakov (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214559#comment-14214559
 ] 

Vladimir Tretyakov edited comment on KAFKA-1481 at 11/17/14 11:52 AM:
--

Hi, I've provided a new patch. Hopefully the last one? :)


80.1 Done
80.2 Done
80.3 Done
80.4 Do you mean 'new producer'? Can I just reformulate KAFKA-1768? Something 
like: "Expose version via JMX in new client"?
81. Done
82. Done
83. Left as is.

PS: I'll be traveling for 3 weeks starting 19.11 (Wednesday) and won't have 
Internet access. Can we get this committed before I go, please?


was (Author: vladimir.tretyakov):
Hi, I've provided a new patch. Hopefully the last one? :)


80.1 Done
80.2 Done
80.3 Done
80.4 Do you mean 'new producer'? Can I just reformulate KAFKA-1768? Something 
like: "Expose version via JMX in new client"?
81. Done
82. Done
83. Left as is.

> Stop using dashes AND underscores as separators in MBean names
> --
>
> Key: KAFKA-1481
> URL: https://issues.apache.org/jira/browse/KAFKA-1481
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1.1
>Reporter: Otis Gospodnetic
>Priority: Critical
>  Labels: patch
> Fix For: 0.8.3
>
> Attachments: KAFKA-1481_2014-06-06_13-06-35.patch, 
> KAFKA-1481_2014-10-13_18-23-35.patch, KAFKA-1481_2014-10-14_21-53-35.patch, 
> KAFKA-1481_2014-10-15_10-23-35.patch, KAFKA-1481_2014-10-20_23-14-35.patch, 
> KAFKA-1481_2014-10-21_09-14-35.patch, KAFKA-1481_2014-10-30_21-35-43.patch, 
> KAFKA-1481_2014-10-31_14-35-43.patch, 
> KAFKA-1481_2014-11-03_16-39-41_doc.patch, 
> KAFKA-1481_2014-11-03_17-02-23.patch, 
> KAFKA-1481_2014-11-10_20-39-41_doc.patch, 
> KAFKA-1481_2014-11-10_21-02-23.patch, KAFKA-1481_2014-11-14_16-33-03.patch, 
> KAFKA-1481_2014-11-14_16-39-41_doc.patch, 
> KAFKA-1481_2014-11-17_14-33-03.patch, 
> KAFKA-1481_IDEA_IDE_2014-10-14_21-53-35.patch, 
> KAFKA-1481_IDEA_IDE_2014-10-15_10-23-35.patch, 
> KAFKA-1481_IDEA_IDE_2014-10-20_20-14-35.patch, 
> KAFKA-1481_IDEA_IDE_2014-10-20_23-14-35.patch, alternateLayout1.png, 
> alternateLayout2.png, diff-for-alternate-layout1.patch, 
> diff-for-alternate-layout2.patch, originalLayout.png
>
>
> MBeans should not use dashes or underscores as separators because these 
> characters are allowed in hostnames, topics, group and consumer IDs, etc., 
> and these are embedded in MBean names, making it impossible to parse out 
> individual bits from MBean names.
> Perhaps a pipe character should be used to avoid the conflict. 
> This looks like a major blocker because it means nobody can write Kafka 0.8.x 
> monitoring tools unless they are doing it for themselves AND do not use 
> dashes AND do not use underscores.
> See: http://search-hadoop.com/m/4TaT4lonIW



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 27990: Patch for KAFKA-1751

2014-11-17 Thread Dmitry Pekar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27990/
---

(Updated Nov. 17, 2014, 2:25 p.m.)


Review request for kafka.


Bugs: KAFKA-1751
https://issues.apache.org/jira/browse/KAFKA-1751


Repository: kafka


Description
---

KAFKA-1751: handle "broker not exists" and "topic not exists" scenarios


Diffs (updated)
-

  core/src/main/scala/kafka/admin/ReassignPartitionsCommand.scala 
979992b68af3723cd229845faff81c641123bb88 
  core/src/test/scala/unit/kafka/admin/AdminTest.scala 
e28979827110dfbbb92fe5b152e7f1cc973de400 
  core/src/test/scala/unit/kafka/admin/DeleteTopicTest.scala 
29cc01bcef9cacd8dec1f5d662644fc6fe4994bc 

Diff: https://reviews.apache.org/r/27990/diff/


Testing
---


Thanks,

Dmitry Pekar



[jira] [Commented] (KAFKA-1751) handle "broker not exists" and "topic not exists" scenarios

2014-11-17 Thread Dmitry Pekar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214687#comment-14214687
 ] 

Dmitry Pekar commented on KAFKA-1751:
-

Updated reviewboard https://reviews.apache.org/r/27990/diff/
 against branch origin/trunk

> handle "broker not exists" and "topic not exists" scenarios
> ---
>
> Key: KAFKA-1751
> URL: https://issues.apache.org/jira/browse/KAFKA-1751
> Project: Kafka
>  Issue Type: Sub-task
>  Components: tools
>Reporter: Dmitry Pekar
>Assignee: Dmitry Pekar
> Fix For: 0.8.3
>
> Attachments: KAFKA-1751.patch, KAFKA-1751_2014-11-17_16:25:14.patch, 
> kafka-1751.patch
>
>
> merged with KAFKA-1750 so both can go through a single code review process



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1751) handle "broker not exists" and "topic not exists" scenarios

2014-11-17 Thread Dmitry Pekar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Pekar updated KAFKA-1751:

Attachment: KAFKA-1751_2014-11-17_16:25:14.patch

> handle "broker not exists" and "topic not exists" scenarios
> ---
>
> Key: KAFKA-1751
> URL: https://issues.apache.org/jira/browse/KAFKA-1751
> Project: Kafka
>  Issue Type: Sub-task
>  Components: tools
>Reporter: Dmitry Pekar
>Assignee: Dmitry Pekar
> Fix For: 0.8.3
>
> Attachments: KAFKA-1751.patch, KAFKA-1751_2014-11-17_16:25:14.patch, 
> kafka-1751.patch
>
>
> merged with KAFKA-1750 so both can go through a single code review process



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 27990: Patch for KAFKA-1751

2014-11-17 Thread Dmitry Pekar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27990/
---

(Updated Nov. 17, 2014, 2:33 p.m.)


Review request for kafka.


Bugs: KAFKA-1751
https://issues.apache.org/jira/browse/KAFKA-1751


Repository: kafka


Description (updated)
---

KAFKA-1751 / CR fixes


Diffs (updated)
-

  core/src/main/scala/kafka/admin/ReassignPartitionsCommand.scala 
979992b68af3723cd229845faff81c641123bb88 
  core/src/test/scala/unit/kafka/admin/AdminTest.scala 
e28979827110dfbbb92fe5b152e7f1cc973de400 
  core/src/test/scala/unit/kafka/admin/DeleteTopicTest.scala 
29cc01bcef9cacd8dec1f5d662644fc6fe4994bc 

Diff: https://reviews.apache.org/r/27990/diff/


Testing
---


Thanks,

Dmitry Pekar



[jira] [Commented] (KAFKA-1751) handle "broker not exists" and "topic not exists" scenarios

2014-11-17 Thread Dmitry Pekar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214694#comment-14214694
 ] 

Dmitry Pekar commented on KAFKA-1751:
-

Updated reviewboard https://reviews.apache.org/r/27990/diff/
 against branch origin/trunk

> handle "broker not exists" and "topic not exists" scenarios
> ---
>
> Key: KAFKA-1751
> URL: https://issues.apache.org/jira/browse/KAFKA-1751
> Project: Kafka
>  Issue Type: Sub-task
>  Components: tools
>Reporter: Dmitry Pekar
>Assignee: Dmitry Pekar
> Fix For: 0.8.3
>
> Attachments: KAFKA-1751.patch, KAFKA-1751_2014-11-17_16:25:14.patch, 
> KAFKA-1751_2014-11-17_16:33:43.patch, kafka-1751.patch
>
>
> merged with KAFKA-1750 so both can go through a single code review process



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1751) handle "broker not exists" and "topic not exists" scenarios

2014-11-17 Thread Dmitry Pekar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Pekar updated KAFKA-1751:

Attachment: KAFKA-1751_2014-11-17_16:33:43.patch

> handle "broker not exists" and "topic not exists" scenarios
> ---
>
> Key: KAFKA-1751
> URL: https://issues.apache.org/jira/browse/KAFKA-1751
> Project: Kafka
>  Issue Type: Sub-task
>  Components: tools
>Reporter: Dmitry Pekar
>Assignee: Dmitry Pekar
> Fix For: 0.8.3
>
> Attachments: KAFKA-1751.patch, KAFKA-1751_2014-11-17_16:25:14.patch, 
> KAFKA-1751_2014-11-17_16:33:43.patch, kafka-1751.patch
>
>
> merged with KAFKA-1750 so both can go through a single code review process



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1173) Using Vagrant to get up and running with Apache Kafka

2014-11-17 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214748#comment-14214748
 ] 

Joe Stein commented on KAFKA-1173:
--

I haven't had a chance to review the latest changes for the issues found with 
AWS, but hopefully I will do that today.

> Using Vagrant to get up and running with Apache Kafka
> -
>
> Key: KAFKA-1173
> URL: https://issues.apache.org/jira/browse/KAFKA-1173
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Joe Stein
>Assignee: Ewen Cheslack-Postava
> Attachments: KAFKA-1173.patch, KAFKA-1173_2013-12-07_12:07:55.patch, 
> KAFKA-1173_2014-11-11_13:50:55.patch, KAFKA-1173_2014-11-12_11:32:09.patch
>
>
> Vagrant has been getting a lot of pickup in the tech communities.  I have 
> found it very useful for development and testing and working with a few 
> clients now using it to help virtualize their environments in repeatable ways.
> Using Vagrant to get up and running.
> For 0.8.0 I have a patch on github https://github.com/stealthly/kafka
> 1) Install Vagrant [http://www.vagrantup.com/](http://www.vagrantup.com/)
> 2) Install Virtual Box 
> [https://www.virtualbox.org/](https://www.virtualbox.org/)
> In the main kafka folder
> 1) ./sbt update
> 2) ./sbt package
> 3) ./sbt assembly-package-dependency
> 4) vagrant up
> once this is done 
> * Zookeeper will be running 192.168.50.5
> * Broker 1 on 192.168.50.10
> * Broker 2 on 192.168.50.20
> * Broker 3 on 192.168.50.30
> When you are all up and running, you will be back at a command prompt.  
> If you want, you can log in to the machines using vagrant ssh  but 
> you don't need to.
> You can access the brokers and zookeeper by their IP
> e.g.
> bin/kafka-console-producer.sh --broker-list 
> 192.168.50.10:9092,192.168.50.20:9092,192.168.50.30:9092 --topic sandbox
> bin/kafka-console-consumer.sh --zookeeper 192.168.50.5:2181 --topic sandbox 
> --from-beginning



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-1772) Add an Admin message type for request response

2014-11-17 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein reassigned KAFKA-1772:


Assignee: Joe Stein

> Add an Admin message type for request response
> --
>
> Key: KAFKA-1772
> URL: https://issues.apache.org/jira/browse/KAFKA-1772
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joe Stein
>Assignee: Joe Stein
> Fix For: 0.8.3
>
>
> - timestamp
> - utility int8
> - command int8
> - format int8
> - args variable length bytes
> utility 
> 0 - Broker
> 1 - Topic
> 2 - Replication
> 3 - Controller
> 4 - Consumer
> 5 - Producer
> Command
> 0 - Create
> 1 - Alter
> 3 - Delete
> 4 - List
> 5 - Audit
> format
> 0 - JSON
> args e.g. (which would equate to the data structure values == 2,1,0)
> "meta-store": {
> {"zookeeper":"localhost:12913/kafka"}
> }"args": {
>  "partitions":
>   [
> {"topic": "topic1", "partition": "0"},
> {"topic": "topic1", "partition": "1"},
> {"topic": "topic1", "partition": "2"},
>  
> {"topic": "topic2", "partition": "0"},
> {"topic": "topic2", "partition": "1"},
>   ]
> }
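
For illustration only (a sketch, not part of any attached patch; the class 
name and the length-prefixed args encoding are assumptions), the proposed 
fields might be laid out on the wire like this:

{code}
import java.nio.ByteBuffer
import java.nio.charset.StandardCharsets

// Hypothetical encoding of the fields listed in the description above.
case class AdminMessage(timestamp: Long, utility: Byte, command: Byte,
                        format: Byte, args: String) {
  def encode: ByteBuffer = {
    val argBytes = args.getBytes(StandardCharsets.UTF_8)
    val buf = ByteBuffer.allocate(8 + 3 + 4 + argBytes.length)
    buf.putLong(timestamp)
    buf.put(utility)            // e.g. 2 = Replication
    buf.put(command)            // e.g. 1 = Alter
    buf.put(format)             // e.g. 0 = JSON
    buf.putInt(argBytes.length) // length prefix for the variable-length args
    buf.put(argBytes)
    buf.flip()
    buf
  }
}

// The "2,1,0" example from the description: Replication / Alter / JSON args.
val msg = AdminMessage(System.currentTimeMillis, 2, 1, 0,
  """{"partitions": [{"topic": "topic1", "partition": "0"}]}""")
{code}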



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1772) Add an Admin message type for request response

2014-11-17 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1772:
-
Assignee: (was: Joe Stein)

> Add an Admin message type for request response
> --
>
> Key: KAFKA-1772
> URL: https://issues.apache.org/jira/browse/KAFKA-1772
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joe Stein
> Fix For: 0.8.3
>
>
> - timestamp
> - utility int8
> - command int8
> - format int8
> - args variable length bytes
> utility 
> 0 - Broker
> 1 - Topic
> 2 - Replication
> 3 - Controller
> 4 - Consumer
> 5 - Producer
> Command
> 0 - Create
> 1 - Alter
> 3 - Delete
> 4 - List
> 5 - Audit
> format
> 0 - JSON
> args e.g. (which would equate to the data structure values == 2,1,0)
> "meta-store": {
> {"zookeeper":"localhost:12913/kafka"}
> }"args": {
>  "partitions":
>   [
> {"topic": "topic1", "partition": "0"},
> {"topic": "topic1", "partition": "1"},
> {"topic": "topic1", "partition": "2"},
>  
> {"topic": "topic2", "partition": "0"},
> {"topic": "topic2", "partition": "1"},
>   ]
> }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1772) Add an Admin message type for request response

2014-11-17 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1772:
-
Assignee: Andrii Biletskyi

> Add an Admin message type for request response
> --
>
> Key: KAFKA-1772
> URL: https://issues.apache.org/jira/browse/KAFKA-1772
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joe Stein
>Assignee: Andrii Biletskyi
> Fix For: 0.8.3
>
>
> - timestamp
> - utility int8
> - command int8
> - format int8
> - args variable length bytes
> utility 
> 0 - Broker
> 1 - Topic
> 2 - Replication
> 3 - Controller
> 4 - Consumer
> 5 - Producer
> Command
> 0 - Create
> 1 - Alter
> 3 - Delete
> 4 - List
> 5 - Audit
> format
> 0 - JSON
> args e.g. (which would equate to the data structure values == 2,1,0)
> "meta-store": {
> {"zookeeper":"localhost:12913/kafka"}
> }"args": {
>  "partitions":
>   [
> {"topic": "topic1", "partition": "0"},
> {"topic": "topic1", "partition": "1"},
> {"topic": "topic1", "partition": "2"},
>  
> {"topic": "topic2", "partition": "0"},
> {"topic": "topic2", "partition": "1"},
>   ]
> }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1772) Add an Admin message type for request response

2014-11-17 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1772:
-
Reviewer: Joe Stein

> Add an Admin message type for request response
> --
>
> Key: KAFKA-1772
> URL: https://issues.apache.org/jira/browse/KAFKA-1772
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joe Stein
>Assignee: Andrii Biletskyi
> Fix For: 0.8.3
>
>
> - timestamp
> - utility int8
> - command int8
> - format int8
> - args variable length bytes
> utility 
> 0 - Broker
> 1 - Topic
> 2 - Replication
> 3 - Controller
> 4 - Consumer
> 5 - Producer
> Command
> 0 - Create
> 1 - Alter
> 3 - Delete
> 4 - List
> 5 - Audit
> format
> 0 - JSON
> args e.g. (which would equate to the data structure values == 2,1,0)
> "meta-store": {
> {"zookeeper":"localhost:12913/kafka"}
> }"args": {
>  "partitions":
>   [
> {"topic": "topic1", "partition": "0"},
> {"topic": "topic1", "partition": "1"},
> {"topic": "topic1", "partition": "2"},
>  
> {"topic": "topic2", "partition": "0"},
> {"topic": "topic2", "partition": "1"},
>   ]
> }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1772) Add an Admin message type for request response

2014-11-17 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1772:
-
Description: 
- utility int8
- command int8
- format int8
- args variable length bytes

utility 
0 - Broker
1 - Topic
2 - Replication
3 - Controller
4 - Consumer
5 - Producer

Command

0 - Create
1 - Alter
3 - Delete
4 - List
5 - Audit

format
0 - JSON

args e.g. (which would equate to the data structure values == 2,1,0)

"meta-store": {
{"zookeeper":"localhost:12913/kafka"}
}"args": {
 "partitions":
  [
{"topic": "topic1", "partition": "0"},
{"topic": "topic1", "partition": "1"},
{"topic": "topic1", "partition": "2"},
 
{"topic": "topic2", "partition": "0"},
{"topic": "topic2", "partition": "1"},
  ]
}



  was:
- timestamp
- utility int8
- command int8
- format int8
- args variable length bytes

utility 
0 - Broker
1 - Topic
2 - Replication
3 - Controller
4 - Consumer
5 - Producer

Command

0 - Create
1 - Alter
3 - Delete
4 - List
5 - Audit

format
0 - JSON

args e.g. (which would equate to the data structure values == 2,1,0)

"meta-store": {
{"zookeeper":"localhost:12913/kafka"}
}"args": {
 "partitions":
  [
{"topic": "topic1", "partition": "0"},
{"topic": "topic1", "partition": "1"},
{"topic": "topic1", "partition": "2"},
 
{"topic": "topic2", "partition": "0"},
{"topic": "topic2", "partition": "1"},
  ]
}




> Add an Admin message type for request response
> --
>
> Key: KAFKA-1772
> URL: https://issues.apache.org/jira/browse/KAFKA-1772
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joe Stein
>Assignee: Andrii Biletskyi
> Fix For: 0.8.3
>
>
> - utility int8
> - command int8
> - format int8
> - args variable length bytes
> utility 
> 0 - Broker
> 1 - Topic
> 2 - Replication
> 3 - Controller
> 4 - Consumer
> 5 - Producer
> Command
> 0 - Create
> 1 - Alter
> 3 - Delete
> 4 - List
> 5 - Audit
> format
> 0 - JSON
> args e.g. (which would equate to the data structure values == 2,1,0)
> "meta-store": {
> {"zookeeper":"localhost:12913/kafka"}
> }"args": {
>  "partitions":
>   [
> {"topic": "topic1", "partition": "0"},
> {"topic": "topic1", "partition": "1"},
> {"topic": "topic1", "partition": "2"},
>  
> {"topic": "topic2", "partition": "0"},
> {"topic": "topic2", "partition": "1"},
>   ]
> }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1779) FetchRequest.maxWait has no effect

2014-11-17 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214841#comment-14214841
 ] 

Guozhang Wang commented on KAFKA-1779:
--

Hi Magnus,

The semantics of maxWait do not apply to not-yet-existing offsets: if you send 
a fetch request whose starting offset is "out-of-range", meaning the offset 
does not exist in the log, it will immediately return an empty response with 
the error code set to "OutOfRange".

maxWait works together with minBytes, another field in the fetch request that 
indicates the minimum number of bytes to return in the response. For example, 
if you send a fetch with starting offset 5 and minBytes 100, and assuming each 
message is 10 bytes and the log has messages from offset 0 to 10, then 
returning the message set from 5 to 10 would only give 60 bytes. So instead 
the broker holds the fetch request and waits until more messages are appended 
to the log, up to the message with offset 14 (or until maxWait expires).
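
To make that concrete, a minimal sketch against the 0.8.x SimpleConsumer API 
(broker address, topic, and values are placeholders):

{code}
import kafka.api.FetchRequestBuilder
import kafka.consumer.SimpleConsumer

// The broker holds the fetch until at least minBytes bytes are available
// or maxWait milliseconds pass, whichever comes first.
val consumer = new SimpleConsumer("localhost", 9092, 5000, 64 * 1024, "my-id")
val request = new FetchRequestBuilder()
  .clientId("my-id")
  .addFetch("topic1", 0, 0L, 100000)
  .minBytes(1)     // > 0, so an empty fetch makes the broker wait
  .maxWait(3000)   // milliseconds; keep it below the consumer's socket timeout
  .build()
val response = consumer.fetch(request)
{code}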

> FetchRequest.maxWait has no effect
> --
>
> Key: KAFKA-1779
> URL: https://issues.apache.org/jira/browse/KAFKA-1779
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.8.2
>Reporter: Magnus Vojbacke
>Priority: Minor
> Fix For: 0.8.2
>
>
> Setting the maxWait field in a kafka.api.FetchRequest does not appear to 
> have any effect. My assumption is: if I send a fetch request for messages 
> after offset X for a partition where there are currently no messages with 
> offsets after X, a fetch request built with the maxWait option should block 
> on the broker side for up to $maxWait milliseconds, waiting for a new 
> message to arrive.
> Currently, the request seems to return an empty result immediately. As a 
> result, our client is forced to sleep manually after each fetch request that 
> returns an empty result.
> On the mailing list, it was stated that this bug should be fixed in 0.8.2, 
> but I'm still seeing this issue on 0.8.2-beta.
> {code}
> import kafka.api.{FetchRequestBuilder, OffsetRequest, OffsetResponse, PartitionOffsetRequestInfo}
> import kafka.common.TopicAndPartition
> import kafka.consumer.SimpleConsumer
>
> // Choose a suitable topic / partition / lead broker combination
> val host = ???
> val port = ???
> val topic = ???
> val partition = ???
>
> val cons = new SimpleConsumer(host, port, 5000, 5000, "my-id")
>
> val topicAndPartition = new TopicAndPartition(topic, partition)
> println(topicAndPartition)
> val requestInfo = Map(topicAndPartition ->
>   new PartitionOffsetRequestInfo(kafka.api.OffsetRequest.LatestTime, 1))
> println(requestInfo)
> val request = new OffsetRequest(requestInfo)
> println(request)
> val response: OffsetResponse = cons.getOffsetsBefore(request)
> println("code=" + response.partitionErrorAndOffsets(topicAndPartition).error)
> println(response)
> val offset = response.partitionErrorAndOffsets(topicAndPartition).offsets(0)
> val req = new FetchRequestBuilder().clientId("my-id")
>   .addFetch(topic, partition, offset, 50).maxWait(1)
>
> // The following requests appear to return within a few hundred milliseconds
> // of each other, but my assumption is that maxWait(1) should make each
> // request block for at least 1 millisecond before returning an empty result.
> println(System.currentTimeMillis + " " + cons.fetch(req.build()))
> println(System.currentTimeMillis + " " + cons.fetch(req.build()))
> println(System.currentTimeMillis + " " + cons.fetch(req.build()))
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Kafka Simple Consumer API for 0.9

2014-11-17 Thread Guozhang Wang
Hi Dibyendu,

Yes, we are changing the consumer API in 0.9, and it will be different from
the current high-level consumer API. We are also trying to figure out a way
to preserve the existing functionality: KAFKA-1655


Guozhang

On Fri, Nov 14, 2014 at 5:29 PM, Dibyendu Bhattacharya <
dibyendu.bhattach...@gmail.com> wrote:

> Hi,
>
> Will the Simple Consumer API change in Kafka 0.9?
>
> I can see a Consumer Re-design approach for Kafka 0.9, which I believe
> will not impact any client written using the Simple Consumer API. Is that
> correct?
>
>
> Regards,
> Dibyendu
>



-- 
-- Guozhang


[jira] [Created] (KAFKA-1780) Add peek()/poll() for ConsumerIterator

2014-11-17 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-1780:


 Summary: Add peek()/poll() for ConsumerIterator
 Key: KAFKA-1780
 URL: https://issues.apache.org/jira/browse/KAFKA-1780
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 0.8.1.1
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava


Currently, all consumer operations (next(), hasNext()) block. This is 
problematic for a couple of use cases. Most obviously, a peek() method would be 
nice so you can at least check whether any data is immediately available, 
getting a null value back if it's not.

A more difficult example is a proxy with a timeout, i.e. it consumes messages 
for up to N ms or M messages, and returns whatever it has at the end of that 
period. It's possible to approximate that with peek, but requires aggressive 
polling to match the proxy's timeout. A poll(timeout) method would allow for a 
correct implementation, where each call to poll gets a single message, but also 
allows the user to specify a custom timeout.
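
As a usage sketch of the proposed poll(timeout) (the signature and the 
null-on-timeout behavior are assumptions here, not a committed API), such a 
proxy loop might look like:

{code}
import kafka.consumer.ConsumerIterator
import kafka.message.MessageAndMetadata
import scala.collection.mutable.ArrayBuffer

// Collect up to maxMessages, or whatever arrived when the deadline passes.
def drain[K, V](it: ConsumerIterator[K, V], maxMessages: Int,
                timeoutMs: Long): Seq[MessageAndMetadata[K, V]] = {
  val deadline = System.currentTimeMillis + timeoutMs
  val out = ArrayBuffer.empty[MessageAndMetadata[K, V]]
  while (out.size < maxMessages && System.currentTimeMillis < deadline) {
    val msg = it.poll(deadline - System.currentTimeMillis) // assumed: null on timeout
    if (msg != null) out += msg
  }
  out
}
{code}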



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 28121: Patch for KAFKA-1780

2014-11-17 Thread Ewen Cheslack-Postava

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28121/
---

Review request for kafka.


Bugs: KAFKA-1780
https://issues.apache.org/jira/browse/KAFKA-1780


Repository: kafka


Description
---

KAFKA-1780 Add peek and poll methods to ConsumerIterator.


Diffs
-

  core/src/main/scala/kafka/consumer/ConsumerIterator.scala 
78fbf75651583e390258af2d9f09df6911a97b59 
  core/src/main/scala/kafka/utils/IteratorTemplate.scala 
fd952f3ec0f04a3ba639c02779634265489fd186 
  core/src/test/scala/unit/kafka/consumer/ConsumerIteratorTest.scala 
c0355cc0135c6af2e346b4715659353a31723b86 
  core/src/test/scala/unit/kafka/utils/IteratorTemplateTest.scala 
46a4e899ef293c56a931bfa5bcf9a07d07ec5792 

Diff: https://reviews.apache.org/r/28121/diff/


Testing
---


Thanks,

Ewen Cheslack-Postava



[jira] [Updated] (KAFKA-1780) Add peek()/poll() for ConsumerIterator

2014-11-17 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-1780:
-
Status: Patch Available  (was: Open)

> Add peek()/poll() for ConsumerIterator
> --
>
> Key: KAFKA-1780
> URL: https://issues.apache.org/jira/browse/KAFKA-1780
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.1.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Attachments: KAFKA-1780.patch
>
>
> Currently, all consumer operations (next(), hasNext()) block. This is 
> problematic for a couple of use cases. Most obviously, a peek() method would 
> be nice so you can at least check whether any data is immediately available, 
> getting a null value back if it's not.
> A more difficult example is a proxy with a timeout, i.e. it consumes messages 
> for up to N ms or M messages, and returns whatever it has at the end of that 
> period. It's possible to approximate that with peek, but requires aggressive 
> polling to match the proxy's timeout. A poll(timeout) method would allow for 
> a correct implementation, where each call to poll gets a single message, but 
> also allows the user to specify a custom timeout.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1780) Add peek()/poll() for ConsumerIterator

2014-11-17 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-1780:
-
Attachment: KAFKA-1780.patch

> Add peek()/poll() for ConsumerIterator
> --
>
> Key: KAFKA-1780
> URL: https://issues.apache.org/jira/browse/KAFKA-1780
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.1.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Attachments: KAFKA-1780.patch
>
>
> Currently, all consumer operations (next(), hasNext()) block. This is 
> problematic for a couple of use cases. Most obviously, a peek() method would 
> be nice so you can at least check whether any data is immediately available, 
> getting a null value back if it's not.
> A more difficult example is a proxy with a timeout, i.e. it consumes messages 
> for up to N ms or M messages, and returns whatever it has at the end of that 
> period. It's possible to approximate that with peek, but requires aggressive 
> polling to match the proxy's timeout. A poll(timeout) method would allow for 
> a correct implementation, where each call to poll gets a single message, but 
> also allows the user to specify a custom timeout.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1780) Add peek()/poll() for ConsumerIterator

2014-11-17 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214888#comment-14214888
 ] 

Ewen Cheslack-Postava commented on KAFKA-1780:
--

Created reviewboard https://reviews.apache.org/r/28121/diff/
 against branch origin/trunk

> Add peek()/poll() for ConsumerIterator
> --
>
> Key: KAFKA-1780
> URL: https://issues.apache.org/jira/browse/KAFKA-1780
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.1.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Attachments: KAFKA-1780.patch
>
>
> Currently, all consumer operations (next(), hasNext()) block. This is 
> problematic for a couple of use cases. Most obviously, a peek() method would 
> be nice so you can at least check whether any data is immediately available, 
> getting a null value back if it's not.
> A more difficult example is a proxy with a timeout, i.e. it consumes messages 
> for up to N ms or M messages, and returns whatever it has at the end of that 
> period. It's possible to approximate that with peek, but requires aggressive 
> polling to match the proxy's timeout. A poll(timeout) method would allow for 
> a correct implementation, where each call to poll gets a single message, but 
> also allows the user to specify a custom timeout.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1780) Add peek()/poll() for ConsumerIterator

2014-11-17 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-1780:
-
Reviewer: Jun Rao

> Add peek()/poll() for ConsumerIterator
> --
>
> Key: KAFKA-1780
> URL: https://issues.apache.org/jira/browse/KAFKA-1780
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.1.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Attachments: KAFKA-1780.patch
>
>
> Currently, all consumer operations (next(), hasNext()) block. This is 
> problematic for a couple of use cases. Most obviously, a peek() method would 
> be nice so you can at least check whether any data is immediately available, 
> getting a null value back if it's not.
> A more difficult example is a proxy with a timeout, i.e. it consumes messages 
> for up to N ms or M messages, and returns whatever it has at the end of that 
> period. It's possible to approximate that with peek, but requires aggressive 
> polling to match the proxy's timeout. A poll(timeout) method would allow for 
> a correct implementation, where each call to poll gets a single message, but 
> also allows the user to specify a custom timeout.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1780) Add peek()/poll() for ConsumerIterator

2014-11-17 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214897#comment-14214897
 ] 

Ewen Cheslack-Postava commented on KAFKA-1780:
--

The attached patch adds a NonBlockingIteratorTemplate subclass of 
IteratorTemplate and makes ConsumerIterator implement it. A few notes:

* I moved peek() and added poll() to a subclass of IteratorTemplate because not 
all implementations of IteratorTemplate can implement them correctly.
* The logic to figure out timeouts and throw ConsumerTimeoutExceptions is now a 
bit confusing because we have 2 ways of setting timeouts. Would have been nicer 
to just do the peek()/poll() in the first place.
* Adjusted the constraints on the IteratorTemplate/NonBlockingIteratorTemplate 
item type. The code already assumed nullable types, but didn't enforce it. In 
fact, the IteratorTemplateTest used a non-nullable type and just happens to 
work ok. As far as I can tell, adding this restriction has no negative effects. 
Not adding it and using null.asInstanceOf[T] where necessary permits invalid 
iterator implementations.
* Removed peek() from IteratorTemplate. It wasn't implemented correctly anyway 
and isn't used anywhere in the Kafka code. However, it is a public interface, 
so I'm not sure if we should actually remove it.
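
For reference, the rough shape of the change as described above might be (an 
assumption based on these notes, not the patch itself):

{code}
import kafka.utils.IteratorTemplate

// The item type is constrained to nullable (T >: Null) so that peek() and
// poll() can use null to signal "nothing available".
abstract class NonBlockingIteratorTemplate[T >: Null] extends IteratorTemplate[T] {
  def peek(): T                 // null if no item is immediately available
  def poll(timeoutMs: Long): T  // null if nothing arrives within the timeout
}
{code}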

> Add peek()/poll() for ConsumerIterator
> --
>
> Key: KAFKA-1780
> URL: https://issues.apache.org/jira/browse/KAFKA-1780
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.1.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Attachments: KAFKA-1780.patch
>
>
> Currently, all consumer operations (next(), hasNext()) block. This is 
> problematic for a couple of use cases. Most obviously, a peek() method would 
> be nice so you can at least check whether any data is immediately available, 
> getting a null value back if it's not.
> A more difficult example is a proxy with a timeout, i.e. it consumes messages 
> for up to N ms or M messages, and returns whatever it has at the end of that 
> period. It's possible to approximate that with peek, but requires aggressive 
> polling to match the proxy's timeout. A poll(timeout) method would allow for 
> a correct implementation, where each call to poll gets a single message, but 
> also allows the user to specify a custom timeout.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Support for topics+streams at class WildcardStreamsHandler

2014-11-17 Thread Guozhang Wang
Just added an entry in the FAQ page:

https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Whycan%27tIspecifythenumberofstreamsparallelismpertopicmapusingwildcardstreamasIusestaticstreamhandler?

On Mon, Nov 10, 2014 at 7:56 AM, Guozhang Wang  wrote:

> Hi Alan,
>
> The reason we do not have a per-topic parallelism spec in the wildcard API
> is twofold: 1) we used a per-topic hash-based partition algorithm, and hence
> giving each topic the same number of streams may give us better load
> balance; 2) with the topicFilter we will not know exactly which topics to
> consume at construction time, hence there is no way to specify per-topic
> specs.
>
> 1) has been lifted since we implemented the new partitioning algorithm,
> and for 2) we need to think about how to support it if we really want to;
> perhaps we could use a regexed topic-count map, while ensuring that each
> regex in the map stays within the topic filter and that the regexes do not
> overlap with each other, etc. What is your use case that requires a
> per-topic numStreams spec?
>
> Guozhang
>
> On Sun, Nov 9, 2014 at 6:03 AM, Alan Lavintman 
> wrote:
>
>> Hi guys, I have seen that when I create message streams using
>>
>> createMessageStreams
>>
>> I can define a map with Topic->#Streams.
>>
>> Is there a reason why createMessageStreamsByFilter is not giving the same
>> support? I have only a TopicFilter and numStreams interface, such as:
>>
>> public List<KafkaStream<byte[], byte[]>>
>> createMessageStreamsByFilter(TopicFilter topicFilter, int numStreams);
>>
>> But it does not allow me to specify the parallelism per topic. Am I
>> missing something, or is my assumption correct?
>>
>> Bests and thanks,
>> Alan.
>>
>
>
>
> --
> -- Guozhang
>



-- 
-- Guozhang
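
To show the asymmetry in one place, a sketch against the 0.8.x high-level 
consumer API (connection settings are placeholders):

{code}
import java.util.Properties
import kafka.consumer.{Consumer, ConsumerConfig, Whitelist}

val props = new Properties()
props.put("zookeeper.connect", "localhost:2181")
props.put("group.id", "example-group")
val connector = Consumer.create(new ConsumerConfig(props))

// Static variant: per-topic parallelism via a topic -> numStreams map.
val byTopic = connector.createMessageStreams(Map("topic1" -> 4, "topic2" -> 2))

// Wildcard variant: a single stream count shared by every matched topic.
val byFilter = connector.createMessageStreamsByFilter(new Whitelist("topic.*"), numStreams = 3)
{code}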


[jira] [Created] (KAFKA-1781) Readme should specify that Gradle 2.0 is required for initial bootstrap

2014-11-17 Thread Jean-Francois Im (JIRA)
Jean-Francois Im created KAFKA-1781:
---

 Summary: Readme should specify that Gradle 2.0 is required for 
initial bootstrap
 Key: KAFKA-1781
 URL: https://issues.apache.org/jira/browse/KAFKA-1781
 Project: Kafka
  Issue Type: Bug
  Components: build
Affects Versions: 0.8.2
Reporter: Jean-Francois Im
Priority: Trivial


Current README.md says "You need to have gradle installed."

As the bootstrap procedure doesn't work with gradle 1.12, this needs to say 
that 2.0 or greater is needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1781) Readme should specify that Gradle 2.0 is required for initial bootstrap

2014-11-17 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1781:
-
Fix Version/s: 0.8.2

> Readme should specify that Gradle 2.0 is required for initial bootstrap
> ---
>
> Key: KAFKA-1781
> URL: https://issues.apache.org/jira/browse/KAFKA-1781
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.8.2
>Reporter: Jean-Francois Im
>Priority: Trivial
> Fix For: 0.8.2
>
> Attachments: gradle-2.0-readme.patch
>
>
> Current README.md says "You need to have gradle installed."
> As the bootstrap procedure doesn't work with gradle 1.12, this needs to say 
> that 2.0 or greater is needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1781) Readme should specify that Gradle 2.0 is required for initial bootstrap

2014-11-17 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1781:
-
Priority: Blocker  (was: Trivial)

> Readme should specify that Gradle 2.0 is required for initial bootstrap
> ---
>
> Key: KAFKA-1781
> URL: https://issues.apache.org/jira/browse/KAFKA-1781
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.8.2
>Reporter: Jean-Francois Im
>Priority: Blocker
> Fix For: 0.8.2
>
> Attachments: gradle-2.0-readme.patch
>
>
> Current README.md says "You need to have gradle installed."
> As the bootstrap procedure doesn't work with gradle 1.12, this needs to say 
> that 2.0 or greater is needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1781) Readme should specify that Gradle 2.0 is required for initial bootstrap

2014-11-17 Thread Jean-Francois Im (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Francois Im updated KAFKA-1781:

Attachment: gradle-2.0-readme.patch

Documentation patch that changes README.md to say "You need to have gradle 2.0 
or greater installed."

> Readme should specify that Gradle 2.0 is required for initial bootstrap
> ---
>
> Key: KAFKA-1781
> URL: https://issues.apache.org/jira/browse/KAFKA-1781
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.8.2
>Reporter: Jean-Francois Im
>Priority: Trivial
> Fix For: 0.8.2
>
> Attachments: gradle-2.0-readme.patch
>
>
> Current README.md says "You need to have gradle installed."
> As the bootstrap procedure doesn't work with gradle 1.12, this needs to say 
> that 2.0 or greater is needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 28040: Patch for KAFKA-1770

2014-11-17 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28040/#review61751
---


UnknownTopicOrPartitionException can be thrown either on the producer side, 
when the cached metadata does not have the info for this partition id, or on 
the server side, when the specified partition does not exist on the broker. So 
how about:

Indicates one of the following situations:
1. The producer does not know the partition metadata for this id when sending 
messages
2. The broker does not have the specified partition by id when receiving 
messages
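
Applied to the class, that wording might read as follows (illustrative 
scaladoc only, not the actual file contents):

{code}
/**
 * Indicates one of the following situations:
 * 1. The producer does not know the partition metadata for this id when sending messages
 * 2. The broker does not have the specified partition by id when receiving messages
 */
class UnknownTopicOrPartitionException(message: String) extends RuntimeException(message)
{code}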


core/src/main/scala/kafka/common/UnknownTopicOrPartitionException.scala


remove " (and the end " on line 23)


- Guozhang Wang


On Nov. 14, 2014, 4:57 p.m., Jiangjie Qin wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/28040/
> ---
> 
> (Updated Nov. 14, 2014, 4:57 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1770
> https://issues.apache.org/jira/browse/KAFKA-1770
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Modified doc for UnknownTopicOrPartitionException
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/common/UnknownTopicOrPartitionException.scala 
> 781e551e5b78b5f436431575c2961fe15acd1414 
> 
> Diff: https://reviews.apache.org/r/28040/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Jiangjie Qin
> 
>



[jira] [Commented] (KAFKA-1781) Readme should specify that Gradle 2.0 is required for initial bootstrap

2014-11-17 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214951#comment-14214951
 ] 

Joe Stein commented on KAFKA-1781:
--

[~jean-francois.l...@teximus.com] I have this working with Gradle 1.8 on my 
machine, but your point is valid; we should go with the minimum version (let's 
set that as 1.8). Do you mind updating the patch? Thanks!

> Readme should specify that Gradle 2.0 is required for initial bootstrap
> ---
>
> Key: KAFKA-1781
> URL: https://issues.apache.org/jira/browse/KAFKA-1781
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.8.2
>Reporter: Jean-Francois Im
>Priority: Blocker
> Fix For: 0.8.2
>
> Attachments: gradle-2.0-readme.patch
>
>
> Current README.md says "You need to have gradle installed."
> As the bootstrap procedure doesn't work with gradle 1.12, this needs to say 
> that 2.0 or greater is needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1624) building on JDK 8 fails

2014-11-17 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1624:
-
Fix Version/s: (was: 0.9.0)
   0.8.2

> building on JDK 8 fails
> ---
>
> Key: KAFKA-1624
> URL: https://issues.apache.org/jira/browse/KAFKA-1624
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>  Labels: newbie
> Fix For: 0.8.2
>
>
> {code}
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; 
> support was removed in 8.0
> error: error while loading CharSequence, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/lang/CharSequence.class)' is 
> broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 10)
> error: error while loading Comparator, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/util/Comparator.class)' is 
> broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 20)
> error: error while loading AnnotatedElement, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/lang/reflect/AnnotatedElement.class)'
>  is broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 76)
> error: error while loading Arrays, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/util/Arrays.class)' is broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 765)
> /tmp/sbt_53783b12/xsbt/ExtractAPI.scala:395: error: java.util.Comparator does 
> not take type parameters
>   private[this] val sortClasses = new Comparator[Symbol] {
> ^
> 5 errors found
> :core:compileScala FAILED
> FAILURE: Build failed with an exception.
> * What went wrong:
> Execution failed for task ':core:compileScala'.
> > org.gradle.messaging.remote.internal.PlaceholderException (no error message)
> * Try:
> Run with --stacktrace option to get the stack trace. Run with --info or 
> --debug option to get more log output.
> BUILD FAILED
> Total time: 1 mins 48.298 secs
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (KAFKA-1624) building on JDK 8 fails

2014-11-17 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein reopened KAFKA-1624:
--

> building on JDK 8 fails
> ---
>
> Key: KAFKA-1624
> URL: https://issues.apache.org/jira/browse/KAFKA-1624
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>  Labels: newbie
> Fix For: 0.8.2
>
>
> {code}
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; 
> support was removed in 8.0
> error: error while loading CharSequence, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/lang/CharSequence.class)' is 
> broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 10)
> error: error while loading Comparator, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/util/Comparator.class)' is 
> broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 20)
> error: error while loading AnnotatedElement, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/lang/reflect/AnnotatedElement.class)'
>  is broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 76)
> error: error while loading Arrays, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/util/Arrays.class)' is broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 765)
> /tmp/sbt_53783b12/xsbt/ExtractAPI.scala:395: error: java.util.Comparator does 
> not take type parameters
>   private[this] val sortClasses = new Comparator[Symbol] {
> ^
> 5 errors found
> :core:compileScala FAILED
> FAILURE: Build failed with an exception.
> * What went wrong:
> Execution failed for task ':core:compileScala'.
> > org.gradle.messaging.remote.internal.PlaceholderException (no error message)
> * Try:
> Run with --stacktrace option to get the stack trace. Run with --info or 
> --debug option to get more log output.
> BUILD FAILED
> Total time: 1 mins 48.298 secs
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1624) building on JDK 8 fails

2014-11-17 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214956#comment-14214956
 ] 

Joe Stein commented on KAFKA-1624:
--

<< I did some tests locally with various Scala versions. Only the default 
2.10.1 seems not to compile with Java 8; 2.10.2, 2.10.3 and 2.11 are all 
compatible with it. Shall we change the default version of Scala to at least 
2.10.2?

That makes sense to me. Folks are going to keep bringing this up more and more 
going forward, and there's no reason to keep making them apply a minor change 
we can ship in 0.8.2 final (I think it would be OK to do it there).

Do we want to go with 2.10.3 instead of 2.10.2, since it is a later version? 

Anyone else have issues with doing this for 0.8.2?

> building on JDK 8 fails
> ---
>
> Key: KAFKA-1624
> URL: https://issues.apache.org/jira/browse/KAFKA-1624
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>  Labels: newbie
> Fix For: 0.8.2
>
>
> {code}
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; 
> support was removed in 8.0
> error: error while loading CharSequence, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/lang/CharSequence.class)' is 
> broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 10)
> error: error while loading Comparator, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/util/Comparator.class)' is 
> broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 20)
> error: error while loading AnnotatedElement, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/lang/reflect/AnnotatedElement.class)'
>  is broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 76)
> error: error while loading Arrays, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/util/Arrays.class)' is broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 765)
> /tmp/sbt_53783b12/xsbt/ExtractAPI.scala:395: error: java.util.Comparator does 
> not take type parameters
>   private[this] val sortClasses = new Comparator[Symbol] {
> ^
> 5 errors found
> :core:compileScala FAILED
> FAILURE: Build failed with an exception.
> * What went wrong:
> Execution failed for task ':core:compileScala'.
> > org.gradle.messaging.remote.internal.PlaceholderException (no error message)
> * Try:
> Run with --stacktrace option to get the stack trace. Run with --info or 
> --debug option to get more log output.
> BUILD FAILED
> Total time: 1 mins 48.298 secs
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-1624) building on JDK 8 fails

2014-11-17 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214956#comment-14214956
 ] 

Joe Stein edited comment on KAFKA-1624 at 11/17/14 6:30 PM:


<< I did some tests locally with various Scala versions. Only the default 
2.10.1 seems not to compile with Java 8; 2.10.2, 2.10.3 and 2.11 are all 
compatible with it. Shall we change the default version of Scala to at least 
2.10.2?

[~guozhang] Thanks for testing the version out. Your suggestion makes sense to 
me. Folks are going to keep bringing this up more and more going forward, and 
there's no reason to keep making them apply a minor change we can ship in 
0.8.2 final (I think it would be OK to do it there).

Do we want to go with 2.10.3 instead of 2.10.2, since it is a later version? 

Anyone else have issues with doing this for 0.8.2?


was (Author: joestein):
<< I did some tests locally with various Scala versions. Only the default 
2.10.1 seems not to compile with Java 8; 2.10.2, 2.10.3 and 2.11 are all 
compatible with it. Shall we change the default version of Scala to at least 
2.10.2?

That makes sense to me. Folks are going to keep bringing this up more and more 
going forward, and there's no reason to keep making them apply a minor change 
we can ship in 0.8.2 final (I think it would be OK to do it there).

Do we want to go with 2.10.3 instead of 2.10.2, since it is a later version? 

Anyone else have issues with doing this for 0.8.2?

> building on JDK 8 fails
> ---
>
> Key: KAFKA-1624
> URL: https://issues.apache.org/jira/browse/KAFKA-1624
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>  Labels: newbie
> Fix For: 0.8.2
>
>
> {code}
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; 
> support was removed in 8.0
> error: error while loading CharSequence, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/lang/CharSequence.class)' is 
> broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 10)
> error: error while loading Comparator, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/util/Comparator.class)' is 
> broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 20)
> error: error while loading AnnotatedElement, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/lang/reflect/AnnotatedElement.class)'
>  is broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 76)
> error: error while loading Arrays, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/util/Arrays.class)' is broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 765)
> /tmp/sbt_53783b12/xsbt/ExtractAPI.scala:395: error: java.util.Comparator does 
> not take type parameters
>   private[this] val sortClasses = new Comparator[Symbol] {
> ^
> 5 errors found
> :core:compileScala FAILED
> FAILURE: Build failed with an exception.
> * What went wrong:
> Execution failed for task ':core:compileScala'.
> > org.gradle.messaging.remote.internal.PlaceholderException (no error message)
> * Try:
> Run with --stacktrace option to get the stack trace. Run with --info or 
> --debug option to get more log output.
> BUILD FAILED
> Total time: 1 mins 48.298 secs
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-1624) building on JDK 8 fails

2014-11-17 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214956#comment-14214956
 ] 

Joe Stein edited comment on KAFKA-1624 at 11/17/14 6:30 PM:


<< I did some tests locally with various Scala versions. Only the default 
2.10.1 seems not to compile with Java 8; 2.10.2, 2.10.3 and 2.11 are all 
compatible with it. Shall we change the default version of Scala to at least 
2.10.2?

[~guozhang] Thanks for testing the versions out. Your suggestion makes sense 
to me; folks are going to keep bringing this up more and more moving forward, 
and there is no reason to make them keep applying a minor change we can ship 
in 0.8.2 final (I think it would be OK to do it there).

Do we want to go with 2.10.3 instead of 2.10.2, since it is a later version? 

Anyone else have issues with doing this for 0.8.2?


was (Author: joestein):
<< I did some tests locally with various Scala versions. Only the default 
2.10.1 seems not to compile with Java 8; 2.10.2, 2.10.3 and 2.11 are all 
compatible with it. Shall we change the default version of Scala to at least 
2.10.2?

[~guozhang] Thanks for testing the version out. Your suggestion makes sense to 
me; folks are going to keep bringing this up more and more moving forward, and 
there is no reason to make them keep applying a minor change we can ship in 
0.8.2 final (I think it would be OK to do it there).

Do we want to go with 2.10.3 instead of 2.10.2, since it is a later version? 

Anyone else have issues with doing this for 0.8.2?

> building on JDK 8 fails
> ---
>
> Key: KAFKA-1624
> URL: https://issues.apache.org/jira/browse/KAFKA-1624
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>  Labels: newbie
> Fix For: 0.8.2
>
>
> {code}
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; 
> support was removed in 8.0
> error: error while loading CharSequence, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/lang/CharSequence.class)' is 
> broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 10)
> error: error while loading Comparator, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/util/Comparator.class)' is 
> broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 20)
> error: error while loading AnnotatedElement, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/lang/reflect/AnnotatedElement.class)'
>  is broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 76)
> error: error while loading Arrays, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/util/Arrays.class)' is broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 765)
> /tmp/sbt_53783b12/xsbt/ExtractAPI.scala:395: error: java.util.Comparator does 
> not take type parameters
>   private[this] val sortClasses = new Comparator[Symbol] {
> ^
> 5 errors found
> :core:compileScala FAILED
> FAILURE: Build failed with an exception.
> * What went wrong:
> Execution failed for task ':core:compileScala'.
> > org.gradle.messaging.remote.internal.PlaceholderException (no error message)
> * Try:
> Run with --stacktrace option to get the stack trace. Run with --info or 
> --debug option to get more log output.
> BUILD FAILED
> Total time: 1 mins 48.298 secs
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1781) Readme should specify that Gradle 2.0 is required for initial bootstrap

2014-11-17 Thread Jean-Francois Im (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214969#comment-14214969
 ] 

Jean-Francois Im commented on KAFKA-1781:
-

It doesn't seem to work with 1.8.

{quote}
$ rm -rf gradle/wrapper/
$ gradle -version


Gradle 1.8


Build time:   2013-09-24 07:32:33 UTC
Build number: none
Revision: 7970ec3503b4f5767ee1c1c69f8b4186c4763e3d

[snip]
$ gradle
[snip]
Building project 'core' with Scala version 2.10.1

FAILURE: Build failed with an exception.

* Where:
Build file '/home/jfim/projects/kafka/build.gradle' line: 199

* What went wrong:
A problem occurred evaluating root project 'kafka'.
> Could not create task of type 'ScalaDoc'.

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 15.229 secs
$ ./gradlew
Error: Could not find or load main class org.gradle.wrapper.GradleWrapperMain
{quote}

This is what happens with 2.0. I also tested with 1.12; it behaves the same as 
1.8.

{quote}
$ rm -rf gradle/wrapper/
$ gradle -version


Gradle 2.0


Build time:   2014-07-01 07:45:34 UTC
Build number: none
Revision: b6ead6fa452dfdadec484059191eb641d817226c

[snip]

$ gradle
Building project 'core' with Scala version 2.10.1
:downloadWrapper

BUILD SUCCESSFUL

Total time: 7.239 secs
$ ./gradlew
Building project 'core' with Scala version 2.10.1
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 6.937 secs
{quote}

> Readme should specify that Gradle 2.0 is required for initial bootstrap
> ---
>
> Key: KAFKA-1781
> URL: https://issues.apache.org/jira/browse/KAFKA-1781
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.8.2
>Reporter: Jean-Francois Im
>Priority: Blocker
> Fix For: 0.8.2
>
> Attachments: gradle-2.0-readme.patch
>
>
> Current README.md says "You need to have gradle installed."
> As the bootstrap procedure doesn't work with gradle 1.12, this needs to say 
> that 2.0 or greater is needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 28025: Patch for KAFKA-345

2014-11-17 Thread Joel Koshy


> On Nov. 14, 2014, 11:36 p.m., Joel Koshy wrote:
> > core/src/main/scala/kafka/javaapi/consumer/ConsumerRebalanceListener.java, 
> > line 29
> > 
> >
> > It would be worth elaborating the use-case for these hooks (i.e., in 
> > the context of mirror maker data loss and how these will be used). Do we 
> > need all of these hooks?
> 
> Jiangjie Qin wrote:
> I checked the new consumer. The rebalance callback has 2 hooks, 
> onPartitionRevoked and onPartitionAssigned. I think we can keep them just for 
> future compatibility. It seems people potentially need before and after hooks 
> (that's why this ticket was initially created). For mirror maker we need the 
> hook after the fetchers have stopped and before the partition ownerships are 
> released. So it seems we still need 5 hooks...

It may be better, for the purposes of code review, to explain the motivation by 
also including the patch for the mirror-maker, or a summary of how the hooks 
will be used. I'm more inclined to just have the hooks that are absolutely 
necessary. Furthermore, if the mirror-maker use-case warrants it, we can 
consider whether any changes/additions are required in the new consumer hooks 
as well.
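
For concreteness, here is a minimal sketch of the kind of before/after hooks 
being discussed -- the trait and method names here are hypothetical, not what 
the patch actually defines:

{code}
// Hypothetical sketch only. The mirror-maker case needs the hook that fires
// after the fetchers have stopped but before partition ownership is released.
trait RebalanceHooks {
  // fetchers stopped, ownership not yet released: a safe place to flush/commit
  def beforeReleasingPartitions(owned: Map[String, Set[Int]]): Unit
  // new assignment decided, fetchers not yet restarted
  def beforeStartingFetchers(consumerId: String,
                             assignment: Map[String, Map[Int, String]]): Unit
}
{code}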


- Joel


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28025/#review61556
---


On Nov. 15, 2014, 9:20 a.m., Jiangjie Qin wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/28025/
> ---
> 
> (Updated Nov. 15, 2014, 9:20 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-345
> https://issues.apache.org/jira/browse/KAFKA-345
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Added new unit test.
> 
> 
> Incorporated Joel's comments
> 
> 
> Incorporated Joel's comments
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
> f476973eeff653473a60c3ecf36e870e386536bc 
>   core/src/main/scala/kafka/javaapi/consumer/ConsumerRebalanceListener.java 
> PRE-CREATION 
>   core/src/main/scala/kafka/javaapi/consumer/ZookeeperConsumerConnector.scala 
> 1f98db5d692adc113189ec8c75a4fad29d6b6ffe 
>   
> core/src/test/scala/unit/kafka/consumer/ZookeeperConsumerConnectorTest.scala 
> e1d87112a2a587aa3a2f5875f278b276c32f45ac 
> 
> Diff: https://reviews.apache.org/r/28025/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Jiangjie Qin
> 
>



[jira] [Commented] (KAFKA-1781) Readme should specify that Gradle 2.0 is required for initial bootstrap

2014-11-17 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14214978#comment-14214978
 ] 

Joe Stein commented on KAFKA-1781:
--

That is weird; I tried from a fresh clone just now:

{code}

new-host:apache_kafka joestein$ git clone 
https://git-wip-us.apache.org/repos/asf/kafka.git KAFKA-1781
Cloning into 'KAFKA-1781'...
remote: Counting objects: 21794, done.
remote: Compressing objects: 100% (7216/7216), done.
remote: Total 21794 (delta 12923), reused 19669 (delta 11330)
Receiving objects: 100% (21794/21794), 15.18 MiB | 623 KiB/s, done.
Resolving deltas: 100% (12923/12923), done.
new-host:apache_kafka joestein$ cd KAFKA-1781/
new-host:KAFKA-1781 joestein$ git checkout -b 0.8.2 origin/0.8.2
Branch 0.8.2 set up to track remote branch 0.8.2 from origin.
Switched to a new branch '0.8.2'
new-host:KAFKA-1781 joestein$ gradle --version


Gradle 1.8


Build time:   2013-09-24 07:32:33 UTC
Build number: none
Revision: 7970ec3503b4f5767ee1c1c69f8b4186c4763e3d

Groovy:   1.8.6
Ant:  Apache Ant(TM) version 1.9.2 compiled on July 8 2013
Ivy:  2.2.0
JVM:  1.7.0_25 (Oracle Corporation 23.25-b01)
OS:   Mac OS X 10.8.5 x86_64

new-host:KAFKA-1781 joestein$ gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/1.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.1
:downloadWrapper

BUILD SUCCESSFUL

Total time: 18.37 secs
new-host:KAFKA-1781 joestein$ ./gradlew jar
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.0/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.1
:clients:compileJava
Download 
http://repo1.maven.org/maven2/org/xerial/snappy/snappy-java/1.1.1.6/snappy-java-1.1.1.6.pom
Download 
http://repo1.maven.org/maven2/org/xerial/snappy/snappy-java/1.1.1.6/snappy-java-1.1.1.6.jar
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
:clients:processResources UP-TO-DATE
:clients:classes
:clients:jar
:contrib:compileJava UP-TO-DATE
:contrib:processResources UP-TO-DATE
:contrib:classes UP-TO-DATE
:contrib:jar
:core:compileJava UP-TO-DATE
:core:compileScala
/opt/apache_kafka/KAFKA-1781/core/src/main/scala/kafka/admin/AdminUtils.scala:259:
 non-variable type argument String in type pattern 
scala.collection.Map[String,_] is unchecked since it is eliminated by erasure
case Some(map: Map[String, _]) => 
   ^
/opt/apache_kafka/KAFKA-1781/core/src/main/scala/kafka/admin/AdminUtils.scala:262:
 non-variable type argument String in type pattern 
scala.collection.Map[String,String] is unchecked since it is eliminated by 
erasure
case Some(config: Map[String, String]) =>
  ^
/opt/apache_kafka/KAFKA-1781/core/src/main/scala/kafka/server/KafkaServer.scala:168:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.uncleanLeaderElectionRate
^
/opt/apache_kafka/KAFKA-1781/core/src/main/scala/kafka/server/KafkaServer.scala:169:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.leaderElectionTimer
^
/opt/apache_kafka/KAFKA-1781/core/src/main/scala/kafka/utils/Utils.scala:81: a 
pure expression does nothing in statement position; you may be omitting 
necessary parentheses
daemonThread(name, runnable(fun))
^
/opt/apache_kafka/KAFKA-1781/core/src/main/scala/kafka/network/SocketServer.scala:361:
 Visited SCOPE_EXIT before visiting corresponding SCOPE_ENTER. SI-6049
  maybeCloseOldestConnection
  ^
/opt/apache_kafka/KAFKA-1781/core/src/main/scala/kafka/network/SocketServer.scala:381:
 Visited SCOPE_EXIT before visiting corresponding SCOPE_ENTER. SI-6049
  try {
  ^
there were 12 feature warning(s); re-run with -feature for details
8 warnings found
:core:processResources UP-TO-DATE
:core:classes
:core:copyDependantLibs
:core:jar
:examples:compileJava
:examples:processResources UP-TO-DATE
:examples:classes
:examples:jar
:contrib:hadoop-consumer:compileJava
Note: Some input files use or override a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
:contrib:hadoop-consumer:processResources UP-TO-DATE
:contrib:hadoop-consumer:classes
:contrib:hadoop-consumer:jar
:contrib:hadoop-producer:compileJava
:contrib:hadoop-producer:processResources UP-TO-DATE
:contrib:hadoop-producer:classes
:contrib:had

[jira] [Commented] (KAFKA-1781) Readme should specify that Gradle 2.0 is required for initial bootstrap

2014-11-17 Thread Jean-Francois Im (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215013#comment-14215013
 ] 

Jean-Francois Im commented on KAFKA-1781:
-

It is weird! This is what I get:

{quote}
$ git clone https://git-wip-us.apache.org/repos/asf/kafka.git KAFKA-1781
Cloning into 'KAFKA-1781'...
remote: Counting objects: 21794, done.
remote: Compressing objects: 100% (7216/7216), done.
remote: Total 21794 (delta 12925), reused 19667 (delta 11330)
Receiving objects: 100% (21794/21794), 15.17 MiB | 2.57 MiB/s, done.
Resolving deltas: 100% (12925/12925), done.
$ cd KAFKA-1781
$ git checkout -b 0.8.2 origin/0.8.2
Branch 0.8.2 set up to track remote branch 0.8.2 from origin.
Switched to a new branch '0.8.2'
$ gradle --version


Gradle 1.8


Build time:   2013-09-24 07:32:33 UTC
Build number: none
Revision: 7970ec3503b4f5767ee1c1c69f8b4186c4763e3d

Groovy:   1.8.6
Ant:  Apache Ant(TM) version 1.9.2 compiled on July 8 2013
Ivy:  2.2.0
JVM:  1.8.0_05 (Oracle Corporation 25.5-b02)
OS:   Linux 2.6.32-358.6.2.el6.x86_64 amd64

$ gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/1.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.1

FAILURE: Build failed with an exception.

* Where:
Build file '/home/jfim/projects/KAFKA-1781/build.gradle' line: 199

* What went wrong:
A problem occurred evaluating root project 'KAFKA-1781'.
> Could not create task of type 'ScalaDoc'.

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 10.612 secs
{quote}

The differences between our setups seem to be JDK version (1.7.0_25 for you, 
1.8.0_05 on my end) and OS (Mac OS X vs Linux). Gradle 2.0 seems to work fine 
with the commands you use.

{quote}
$ rm -rf KAFKA-1781/
$ git clone https://git-wip-us.apache.org/repos/asf/kafka.git KAFKA-1781
Cloning into 'KAFKA-1781'...
remote: Counting objects: 21794, done.
remote: Compressing objects: 100% (7216/7216), done.
remote: Total 21794 (delta 12924), reused 19668 (delta 11330)
Receiving objects: 100% (21794/21794), 15.17 MiB | 2.74 MiB/s, done.
Resolving deltas: 100% (12924/12924), done.
$ cd KAFKA-1781
$ git checkout -b 0.8.2 origin/0.8.2
Branch 0.8.2 set up to track remote branch 0.8.2 from origin.
Switched to a new branch '0.8.2'
$ gradle --version


Gradle 2.0


Build time:   2014-07-01 07:45:34 UTC
Build number: none
Revision: b6ead6fa452dfdadec484059191eb641d817226c

Groovy:   2.3.3
Ant:  Apache Ant(TM) version 1.9.3 compiled on December 23 2013
JVM:  1.8.0_05 (Oracle Corporation 25.5-b02)
OS:   Linux 2.6.32-358.6.2.el6.x86_64 amd64

$ gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.0/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.1
:downloadWrapper

BUILD SUCCESSFUL

Total time: 11.341 secs
$ ./gradlew
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.0/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.1
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 7.159 secs
{quote}

I don't mind amending the patch to reflect a lower version, but 2.0 and 2.2 
both appear to work on my end, while 1.8 and 1.12 don't.

> Readme should specify that Gradle 2.0 is required for initial bootstrap
> ---
>
> Key: KAFKA-1781
> URL: https://issues.apache.org/jira/browse/KAFKA-1781
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.8.2
>Reporter: Jean-Francois Im
>Priority: Blocker
> Fix For: 0.8.2
>
> Attachments: gradle-2.0-readme.patch
>
>
> Current README.md says "You need to have gradle installed."
> As the bootstrap procedure doesn't work with gradle 1.12, this needs to say 
> that 2.0 or greater is needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1781) Readme should specify that Gradle 2.0 is required for initial bootstrap

2014-11-17 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215030#comment-14215030
 ] 

Joe Stein commented on KAFKA-1781:
--

I think your issue is related to your JVM being 8 instead of 7 (there is some 
more info on this in KAFKA-1624), not the Gradle version.

> Readme should specify that Gradle 2.0 is required for initial bootstrap
> ---
>
> Key: KAFKA-1781
> URL: https://issues.apache.org/jira/browse/KAFKA-1781
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.8.2
>Reporter: Jean-Francois Im
>Priority: Blocker
> Fix For: 0.8.2
>
> Attachments: gradle-2.0-readme.patch
>
>
> Current README.md says "You need to have gradle installed."
> As the bootstrap procedure doesn't work with gradle 1.12, this needs to say 
> that 2.0 or greater is needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1781) Readme should specify that Gradle 2.0 is required for initial bootstrap

2014-11-17 Thread Jean-Francois Im (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215036#comment-14215036
 ] 

Jean-Francois Im commented on KAFKA-1781:
-

It seems to work with 1.8 and 1.12 if I switch the JDK to 1.7.0_51. Is JDK 8 
supported by Kafka?

> Readme should specify that Gradle 2.0 is required for initial bootstrap
> ---
>
> Key: KAFKA-1781
> URL: https://issues.apache.org/jira/browse/KAFKA-1781
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.8.2
>Reporter: Jean-Francois Im
>Priority: Blocker
> Fix For: 0.8.2
>
> Attachments: gradle-2.0-readme.patch
>
>
> Current README.md says "You need to have gradle installed."
> As the bootstrap procedure doesn't work with gradle 1.12, this needs to say 
> that 2.0 or greater is needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1781) Readme should specify that Gradle 2.0 is required for initial bootstrap

2014-11-17 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215040#comment-14215040
 ] 

Joe Stein commented on KAFKA-1781:
--

JDK 8 related support information is captured in KAFKA-1624.

> Readme should specify that Gradle 2.0 is required for initial bootstrap
> ---
>
> Key: KAFKA-1781
> URL: https://issues.apache.org/jira/browse/KAFKA-1781
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.8.2
>Reporter: Jean-Francois Im
>Priority: Blocker
> Fix For: 0.8.2
>
> Attachments: gradle-2.0-readme.patch
>
>
> Current README.md says "You need to have gradle installed."
> As the bootstrap procedure doesn't work with gradle 1.12, this needs to say 
> that 2.0 or greater is needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-1771) replicate_testsuite data verification broken if num_partitions > replica_factor

2014-11-17 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215065#comment-14215065
 ] 

Jiangjie Qin edited comment on KAFKA-1771 at 11/17/14 7:50 PM:
---

[~ewencp], have you had a chance to work on this?


was (Author: becket_qin):
Hi Ewen, have you had a chance to work on this?

> replicate_testsuite data verification broken if num_partitions > 
> replica_factor
> ---
>
> Key: KAFKA-1771
> URL: https://issues.apache.org/jira/browse/KAFKA-1771
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Affects Versions: 0.8.1.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>
> As discussed in KAFKA-1763,   testcase_0131,  testcase_0132, and 
> testcase_0133 currently fail with an exception:
> {quote}
> Traceback (most recent call last):
> File
> "/mnt/u001/kafka_replication_system_test/system_test/replication_testsuite/
> replica_basic_test.py", line 434, in runTest
> kafka_system_test_utils.validate_simple_consumer_data_matched_across_replic
> as(self.systemTestEnv, self.testcaseEnv)
> File
> "/mnt/u001/kafka_replication_system_test/system_test/utils/kafka_system_tes
> t_utils.py", line 2223, in
> validate_simple_consumer_data_matched_across_replicas
> replicaIdxMsgIdList[replicaIdx - 1][topicPartition] = consumerMsgIdList
> IndexError: list index out of range
> {quote}
> The root cause seems to be kafka_system_test_utils.start_simple_consumer. The 
> current logic seems incorrect. It should be generating one consumer per 
> partition per replica so it can verify the data from all sources, but it 
> currently has a loop involving the list of brokers, where that loop variable 
> isn't even used.
> But probably a bigger issue is that it's generating multiple processes in the 
> background. It records pids to the single well-known entity pid path, which 
> means only the last pid is saved and we could easily leave zombie processes 
> if one of them hangs for some reason.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1771) replicate_testsuite data verification broken if num_partitions > replica_factor

2014-11-17 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215065#comment-14215065
 ] 

Jiangjie Qin commented on KAFKA-1771:
-

Hi Ewen, have you had a chance to work on this?

> replicate_testsuite data verification broken if num_partitions > 
> replica_factor
> ---
>
> Key: KAFKA-1771
> URL: https://issues.apache.org/jira/browse/KAFKA-1771
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Affects Versions: 0.8.1.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>
> As discussed in KAFKA-1763,   testcase_0131,  testcase_0132, and 
> testcase_0133 currently fail with an exception:
> {quote}
> Traceback (most recent call last):
> File
> "/mnt/u001/kafka_replication_system_test/system_test/replication_testsuite/
> replica_basic_test.py", line 434, in runTest
> kafka_system_test_utils.validate_simple_consumer_data_matched_across_replic
> as(self.systemTestEnv, self.testcaseEnv)
> File
> "/mnt/u001/kafka_replication_system_test/system_test/utils/kafka_system_tes
> t_utils.py", line 2223, in
> validate_simple_consumer_data_matched_across_replicas
> replicaIdxMsgIdList[replicaIdx - 1][topicPartition] = consumerMsgIdList
> IndexError: list index out of range
> {quote}
> The root cause seems to be kafka_system_test_utils.start_simple_consumer. The 
> current logic seems incorrect. It should be generating one consumer per 
> partition per replica so it can verify the data from all sources, but it 
> currently has a loop involving the list of brokers, where that loop variable 
> isn't even used.
> But probably a bigger issue is that it's generating multiple processes in the 
> background. It records pids to the single well-known entity pid path, which 
> means only the last pid is saved and we could easily leave zombie processes 
> if one of them hangs for some reason.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1781) Readme should specify that Gradle 2.0 is required for initial bootstrap

2014-11-17 Thread Jean-Francois Im (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215074#comment-14215074
 ] 

Jean-Francois Im commented on KAFKA-1781:
-

I think the two are both related to JDK 8, but they are distinct issues. For 
example, running with JDK 8, Gradle 1.8 and Scala version 2.11:

{quote}
$ gradle -version


Gradle 1.8


Build time:   2013-09-24 07:32:33 UTC
Build number: none
Revision: 7970ec3503b4f5767ee1c1c69f8b4186c4763e3d

Groovy:   1.8.6
Ant:  Apache Ant(TM) version 1.9.2 compiled on July 8 2013
Ivy:  2.2.0
JVM:  1.8.0_05 (Oracle Corporation 25.5-b02)
OS:   Linux 2.6.32-358.6.2.el6.x86_64 amd64

$ gradle -PscalaVersion=2.11
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/1.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.11

FAILURE: Build failed with an exception.

* Where:
Build file '/home/jfim/projects/KAFKA-1781/build.gradle' line: 199

* What went wrong:
A problem occurred evaluating root project 'KAFKA-1781'.
> Could not create task of type 'ScalaDoc'.

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 7.725 secs
{quote}

I think the takeaway for the wrapper download is that 2.0 is required if 
running on JDK 8 and that 1.8 (and potentially lower) works on JDK 7. The rest 
of the build is still broken with Scala 2.10.1 on JDK 8, even on Gradle 2.0.

I'm not sure what the proper resolution to this issue would be. Requiring 2.0 
seems rather safe, but it is only needed on JDK 8 and does not fix KAFKA-1624. 
Perhaps adding a note that JDK 8 is not supported at this point in time is the 
proper resolution?

> Readme should specify that Gradle 2.0 is required for initial bootstrap
> ---
>
> Key: KAFKA-1781
> URL: https://issues.apache.org/jira/browse/KAFKA-1781
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.8.2
>Reporter: Jean-Francois Im
>Priority: Blocker
> Fix For: 0.8.2
>
> Attachments: gradle-2.0-readme.patch
>
>
> Current README.md says "You need to have gradle installed."
> As the bootstrap procedure doesn't work with gradle 1.12, this needs to say 
> that 2.0 or greater is needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1781) Readme should specify that Gradle 2.0 is required for initial bootstrap

2014-11-17 Thread Jean-Francois Im (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215079#comment-14215079
 ] 

Jean-Francois Im commented on KAFKA-1781:
-

Also, see https://issues.gradle.org/browse/GRADLE-3094, which was fixed in 
Gradle 2.0.

> Readme should specify that Gradle 2.0 is required for initial bootstrap
> ---
>
> Key: KAFKA-1781
> URL: https://issues.apache.org/jira/browse/KAFKA-1781
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.8.2
>Reporter: Jean-Francois Im
>Priority: Blocker
> Fix For: 0.8.2
>
> Attachments: gradle-2.0-readme.patch
>
>
> Current README.md says "You need to have gradle installed."
> As the bootstrap procedure doesn't work with gradle 1.12, this needs to say 
> that 2.0 or greater is needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1771) replicate_testsuite data verification broken if num_partitions > replica_factor

2014-11-17 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-1771:
-
Attachment: kafka-1771.wip.patch

[~becket_qin], I'm attaching the WIP patch I created just based on my 
investigation of the problem -- it changes start_simple_consumer to iterate 
over partitions and replicas, which is what I think was originally intended.

This gets rid of the uncaught exception (at least for testcase_0131), but the 
test still isn't passing:

{quote}
Validate for data matched on topic [test_1] across replicas  :  FAILED
{quote}

I haven't had time to look into this any further than that.
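
For reference, the intended structure -- one consumer per partition per 
replica -- is roughly the following. The real utility is Python 
(kafka_system_test_utils.py), so this is only an illustrative sketch with 
hypothetical names:

{code}
// Illustrative sketch only -- the real logic lives in kafka_system_test_utils.py.
// Start one consumer per (partition, replica) pair instead of looping over the
// broker list with an unused loop variable.
for (partition <- 0 until numPartitions; replicaIdx <- 1 to replicationFactor)
  startSimpleConsumer(topic, partition, replicaIdx)  // hypothetical helper
{code}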

> replicate_testsuite data verification broken if num_partitions > 
> replica_factor
> ---
>
> Key: KAFKA-1771
> URL: https://issues.apache.org/jira/browse/KAFKA-1771
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Affects Versions: 0.8.1.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Attachments: kafka-1771.wip.patch
>
>
> As discussed in KAFKA-1763,   testcase_0131,  testcase_0132, and 
> testcase_0133 currently fail with an exception:
> {quote}
> Traceback (most recent call last):
> File
> "/mnt/u001/kafka_replication_system_test/system_test/replication_testsuite/
> replica_basic_test.py", line 434, in runTest
> kafka_system_test_utils.validate_simple_consumer_data_matched_across_replic
> as(self.systemTestEnv, self.testcaseEnv)
> File
> "/mnt/u001/kafka_replication_system_test/system_test/utils/kafka_system_tes
> t_utils.py", line 2223, in
> validate_simple_consumer_data_matched_across_replicas
> replicaIdxMsgIdList[replicaIdx - 1][topicPartition] = consumerMsgIdList
> IndexError: list index out of range
> {quote}
> The root cause seems to be kafka_system_test_utils.start_simple_consumer. The 
> current logic seems incorrect. It should be generating one consumer per 
> partition per replica so it can verify the data from all sources, but it 
> currently has a loop involving the list of brokers, where that loop variable 
> isn't even used.
> But probably a bigger issue is that it's generating multiple processes in the 
> background. It records pids to the single well-known entity pid path, which 
> means only the last pid is saved and we could easily leave zombie processes 
> if one of them hangs for some reason.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 25995: Patch for KAFKA-1650

2014-11-17 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25995/#review61528
---



core/src/main/scala/kafka/tools/MirrorMaker.scala


based on the hash value of the source topic-partitionId string
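
(As a rough illustration of the routing described here -- a sketch with 
hypothetical names, not the actual MirrorMaker code:)

{code}
// Sketch (hypothetical names): route each message to a producer data channel
// by hashing the source "topic-partitionId" string, so that all messages from
// one source partition keep their order within one channel.
val key = topic + "-" + partitionId
val channelIndex = (key.hashCode & 0x7fffffff) % numChannels  // non-negative index
{code}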



core/src/main/scala/kafka/tools/MirrorMaker.scala


Could you make this consistent with the others, as offset.commit.interval.ms?



core/src/main/scala/kafka/tools/MirrorMaker.scala


In this case, for MaxInFlightRequests > 1 we are not only at risk of 
message reordering but also at risk of message loss upon failures, right?



core/src/main/scala/kafka/tools/MirrorMaker.scala


Actually, can we move this comment into the top comment of the MM class as 
a "NOTE" under the producer paragraph?



core/src/main/scala/kafka/tools/MirrorMaker.scala


Is it possible that the current chunk has been consumed completely and the 
fetcher thread has not yet put in a new chunk, and hence hasNext() will return 
false? In this case, shall we stop the consumer or just let it block?



core/src/main/scala/kafka/tools/MirrorMaker.scala


Since we already have the logIdent as the threadName, I think we do not need 
this.getName here?



core/src/main/scala/kafka/tools/MirrorMaker.scala


With the logIdent it will become:

FATAL [mirrormaker-offset-commit-thread] Offset commit thread exits due to 
...

which is a bit verbose?


- Guozhang Wang


On Nov. 12, 2014, 5:51 p.m., Jiangjie Qin wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/25995/
> ---
> 
> (Updated Nov. 12, 2014, 5:51 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1650 and KAKFA-1650
> https://issues.apache.org/jira/browse/KAFKA-1650
> https://issues.apache.org/jira/browse/KAKFA-1650
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Addressed Guozhang's comments.
> 
> 
> Addressed Guozhang's comments
> 
> 
> commit before switch to trunk
> 
> 
> commit before rebase
> 
> 
> Rebased on trunk, Addressed Guozhang's comments.
> 
> 
> Addressed Guozhang's comments on MaxInFlightRequests
> 
> 
> Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into 
> mirrormaker-redesign
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
> fbc680fde21b02f11285a4f4b442987356abd17b 
>   core/src/main/scala/kafka/tools/MirrorMaker.scala 
> f399105087588946987bbc84e3759935d9498b6a 
> 
> Diff: https://reviews.apache.org/r/25995/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Jiangjie Qin
> 
>



[jira] [Commented] (KAFKA-1624) building on JDK 8 fails

2014-11-17 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215288#comment-14215288
 ] 

Guozhang Wang commented on KAFKA-1624:
--

Thanks Joe. So I will bump up the default version to 2.10.3 if no one else has 
issues with it.

> building on JDK 8 fails
> ---
>
> Key: KAFKA-1624
> URL: https://issues.apache.org/jira/browse/KAFKA-1624
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>  Labels: newbie
> Fix For: 0.8.2
>
>
> {code}
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; 
> support was removed in 8.0
> error: error while loading CharSequence, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/lang/CharSequence.class)' is 
> broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 10)
> error: error while loading Comparator, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/util/Comparator.class)' is 
> broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 20)
> error: error while loading AnnotatedElement, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/lang/reflect/AnnotatedElement.class)'
>  is broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 76)
> error: error while loading Arrays, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/util/Arrays.class)' is broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 765)
> /tmp/sbt_53783b12/xsbt/ExtractAPI.scala:395: error: java.util.Comparator does 
> not take type parameters
>   private[this] val sortClasses = new Comparator[Symbol] {
> ^
> 5 errors found
> :core:compileScala FAILED
> FAILURE: Build failed with an exception.
> * What went wrong:
> Execution failed for task ':core:compileScala'.
> > org.gradle.messaging.remote.internal.PlaceholderException (no error message)
> * Try:
> Run with --stacktrace option to get the stack trace. Run with --info or 
> --debug option to get more log output.
> BUILD FAILED
> Total time: 1 mins 48.298 secs
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1481) Stop using dashes AND underscores as separators in MBean names

2014-11-17 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215304#comment-14215304
 ] 

Jun Rao commented on KAFKA-1481:


Thanks for the patch. +1 from me. Just a few minor comments.

90. Kafka: No need to call AppInfo.registerInfo().

91. ZookeeperConsumerConnector: Could we rename the following method to 
ownedPartitionsCountMetricTags?
def ownedPartitionsCountMetricName

Could you also provide a patch for trunk?

[~jjkoshy], do you want to look at the patch again?

> Stop using dashes AND underscores as separators in MBean names
> --
>
> Key: KAFKA-1481
> URL: https://issues.apache.org/jira/browse/KAFKA-1481
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1.1
>Reporter: Otis Gospodnetic
>Priority: Critical
>  Labels: patch
> Fix For: 0.8.3
>
> Attachments: KAFKA-1481_2014-06-06_13-06-35.patch, 
> KAFKA-1481_2014-10-13_18-23-35.patch, KAFKA-1481_2014-10-14_21-53-35.patch, 
> KAFKA-1481_2014-10-15_10-23-35.patch, KAFKA-1481_2014-10-20_23-14-35.patch, 
> KAFKA-1481_2014-10-21_09-14-35.patch, KAFKA-1481_2014-10-30_21-35-43.patch, 
> KAFKA-1481_2014-10-31_14-35-43.patch, 
> KAFKA-1481_2014-11-03_16-39-41_doc.patch, 
> KAFKA-1481_2014-11-03_17-02-23.patch, 
> KAFKA-1481_2014-11-10_20-39-41_doc.patch, 
> KAFKA-1481_2014-11-10_21-02-23.patch, KAFKA-1481_2014-11-14_16-33-03.patch, 
> KAFKA-1481_2014-11-14_16-39-41_doc.patch, 
> KAFKA-1481_2014-11-17_14-33-03.patch, 
> KAFKA-1481_IDEA_IDE_2014-10-14_21-53-35.patch, 
> KAFKA-1481_IDEA_IDE_2014-10-15_10-23-35.patch, 
> KAFKA-1481_IDEA_IDE_2014-10-20_20-14-35.patch, 
> KAFKA-1481_IDEA_IDE_2014-10-20_23-14-35.patch, alternateLayout1.png, 
> alternateLayout2.png, diff-for-alternate-layout1.patch, 
> diff-for-alternate-layout2.patch, originalLayout.png
>
>
> MBeans should not use dashes or underscores as separators because these 
> characters are allowed in hostnames, topics, group and consumer IDs, etc., 
> and these are embedded in MBeans names making it impossible to parse out 
> individual bits from MBeans.
> Perhaps a pipe character should be used to avoid the conflict. 
> This looks like a major blocker because it means nobody can write Kafka 0.8.x 
> monitoring tools unless they are doing it for themselves AND do not use 
> dashes AND do not use underscores.
> See: http://search-hadoop.com/m/4TaT4lonIW



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 27391: Fix KAFKA-1634: Add the global retentionTime (in milis) and disable the per-offset timestamp

2014-11-17 Thread Joel Koshy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27391/#review61761
---



clients/src/main/java/org/apache/kafka/common/requests/OffsetCommitRequest.java


Not sure if this will make it worse; would it be clearer to call this 
DEFAULT_TIMESTAMP_V0_V1?



clients/src/main/java/org/apache/kafka/common/requests/OffsetCommitRequest.java


Made a follow-up comment in the earlier RB. But pasting here as well:

Agree that it is still "common" in the object but it is completely removed 
from the wire protocol - i.e., OFFSET_COMMIT_REQUEST_PARTITION_V1 which is in 
the V2 OffsetCommitRequest does not have a timestamp.

This method should probably be read as "initCommonFieldsInStruct" - i.e., 
effectively the wire protocol. That said, I'm loath to add another init method 
which reads initCommonFieldsInV0AndV1. So I think, rather than checking 
fetchPartitionData.timestamp, it would be better to explicitly check the 
(already set) request version in the struct. If v0 or v1, then set the 
timestamp key name.



core/src/main/scala/kafka/api/OffsetCommitRequest.scala


>= 2 (or we may forget when we go to version 3)



core/src/main/scala/kafka/common/OffsetMetadataAndError.scala


I think our coding convention is to use the parameterless form in the 
absence of side-effects



core/src/main/scala/kafka/common/OffsetMetadataAndError.scala


Similar comment as above.



core/src/main/scala/kafka/server/KafkaApis.scala


See reply to earlier comment.


- Joel Koshy


On Nov. 8, 2014, 12:54 a.m., Guozhang Wang wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27391/
> ---
> 
> (Updated Nov. 8, 2014, 12:54 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1634
> https://issues.apache.org/jira/browse/KAFKA-1634
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> The timestamp field of OffsetAndMetadata is preserved since we need to be 
> backward compatible with older versions
> 
> 
> Diffs
> -
> 
>   clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java 
> 7517b879866fc5dad5f8d8ad30636da8bbe7784a 
>   
> clients/src/main/java/org/apache/kafka/common/requests/OffsetCommitRequest.java
>  3ee5cbad55ce836fd04bb954dcf6ef2f9bc3288f 
>   
> clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java
>  df37fc6d8f0db0b8192a948426af603be3444da4 
>   core/src/main/scala/kafka/api/OffsetCommitRequest.scala 
> 050615c72efe7dbaa4634f53943bd73273d20ffb 
>   core/src/main/scala/kafka/api/OffsetFetchRequest.scala 
> c7604b9cdeb8f46507f04ed7d35fcb3d5f150713 
>   core/src/main/scala/kafka/common/OffsetMetadataAndError.scala 
> 4cabffeacea09a49913505db19a96a55d58c0909 
>   core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
> fbc680fde21b02f11285a4f4b442987356abd17b 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> 968b0c4f809ea9f9c3f322aef9db105f9d2e0e23 
>   core/src/main/scala/kafka/server/KafkaServer.scala 
> 4de812374e8fb1fed834d2be3f9655f55b511a74 
>   core/src/main/scala/kafka/server/OffsetManager.scala 
> 2957bc435102bc4004d8f100dbcdd56287c8ffae 
>   core/src/test/scala/unit/kafka/api/RequestResponseSerializationTest.scala 
> cd16ced5465d098be7a60498326b2a98c248f343 
>   core/src/test/scala/unit/kafka/server/OffsetCommitTest.scala 
> 8c5364fa97da1be09973c176d1baeb339455d319 
> 
> Diff: https://reviews.apache.org/r/27391/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Guozhang Wang
> 
>



Re: Review Request 27391: Fix KAFKA-1634: Add the global retentionTime (in milis) and disable the per-offset timestamp

2014-11-17 Thread Joel Koshy


> On Nov. 7, 2014, 3 a.m., Joel Koshy wrote:
> > clients/src/main/java/org/apache/kafka/common/requests/OffsetCommitRequest.java,
> >  line 152
> > 
> >
> > This confused me a bit, and I think it is because initCommonFields is 
> > intended to initialize fields common to all versions of the request. It is 
> > a useful helper method but it becomes somewhat clunky when removing fields. 
> > The partition-level timestamp is no longer a common field.
> > 
> > If this is v2 then we should _never_ set anything in the timestamp 
> > field of the struct; and if it is < v2 then we should _always_ set the 
> > timestamp field (even if it is the default). However, since the timestamp 
> > field in the Field declaration for OFFSET_COMMIT_REQUEST_PARTITION_V0 does 
> > not have a default explicitly specified, I think this will break with a 
> > SchemaException("missing value...") for offset commit request v0, v1 if we 
> > choose to write to a bytebuffer under those versions with this code.
> > 
> > One option is to explicitly pass in the constructor version (0, 1, 2) 
> > to initCommonFields and use that to decide whether to include/exclude this 
> > field, but that is weird. Another alternative is a separate helper method 
> > for v0v1. That is even weirder.
> 
> Guozhang Wang wrote:
> Actually, the partition-level timestamp is still a common field (we are 
> just deprecating it, and chose not to serialize / de-serialize it in v2). I 
> agree this is a bit weird as it is written this way; I thought about this 
> when I started the first version but did not come up with a better approach.

Agree that it is still "common" in the object but it is completely removed from 
the wire protocol - i.e., OFFSET_COMMIT_REQUEST_PARTITION_V1 which is in the V2 
OffsetCommitRequest does not have a timestamp.

This method should probably be read as "initCommonFieldsInStruct" - i.e., 
effectively the wire protocol. That said, I'm loath to add another init method 
which reads initCommonFieldsInV0AndV1. So I think, rather than checking 
fetchPartitionData.timestamp, it would be better to explicitly check the 
(already set) request version in the struct. If v0 or v1, then set the 
timestamp key name.
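
Roughly (a sketch only -- the accessors here are hypothetical):

{code}
// Sketch (hypothetical accessors): gate on the request version that is already
// set in the struct, not on whether a timestamp happens to be supplied.
if (requestVersion <= 1)
  partitionStruct.set("timestamp", partitionData.timestamp)  // v0/v1 only
// for v2 the timestamp key is never set, matching the v2 wire protocol
{code}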


> On Nov. 7, 2014, 3 a.m., Joel Koshy wrote:
> > core/src/main/scala/kafka/common/OffsetMetadataAndError.scala, line 22
> > 
> >
> > Can we mark this @Deprecated as well?
> > 
> > We should probably make the primary constructor without timestamp and 
> > add a secondary constructor with timestamp and mark deprecated there.
> > 
> > Also, can we use case class.copy if timestamp needs to be modified? 
> > However, per comment further down I don't think it needs to be touched.
> 
> Guozhang Wang wrote:
> Actually we cannot make it deprecated as it will be preserved even in the 
> new version, right? Note this is not used for the wire protocol but for the 
> cache / disk format.
> 
> Guozhang Wang wrote:
> I should say "not only for the wire protocol but also for the cache / disk 
> storage format". And thinking about this twice, I will change it to two 
> separate classes, one for the wire protocol and one for the server storage 
> format.

Yes that is what I was thinking - we should ideally have a separate wire 
protocol and storage format.


> On Nov. 7, 2014, 3 a.m., Joel Koshy wrote:
> > core/src/main/scala/kafka/server/OffsetManager.scala, line 498
> > 
> >
> > Actually, since we always set the retention period (for v0, v1) in 
> > KafkaApis do we need to even touch this timestamp? i.e., we should 
> > basically ignore it right? So we only need to do:
> > value.set(VALUE_TIMESTAMP_FIELD, expirationTimestamp).
> 
> Guozhang Wang wrote:
> I think note. In v0/v1, if the timestamp is explicitly specified (i.e. 
> not -1) we need to use it as the expiration timestamp, or at least that was 
> how I understood the semantics.
> 
> Guozhang Wang wrote:
> "I think we cannot not"

Right - what I meant was in KafkaApis we can just compute the retentionPeriod 
if v0 or v1.

So if v0/v1 and timestamp = (now + 7 days), then set retention to 7 days.
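
i.e., something like (a sketch with hypothetical names):

{code}
// Sketch (hypothetical names): for v0/v1, derive the retention period from an
// explicitly supplied timestamp; otherwise use the configured default.
val retentionMs =
  if (requestVersion <= 1 && timestamp != DefaultTimestamp)  // DefaultTimestamp == -1
    timestamp - now   // e.g. timestamp = now + 7 days  =>  retain for 7 days
  else
    defaultRetentionMs
{code}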


- Joel


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27391/#review60285
---


On Nov. 8, 2014, 12:54 a.m., Guozhang Wang wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27391/
> ---
> 
> (Updated Nov. 8, 2014, 12:54 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1634
> https://issues.apache.org/jira/br

How to recover from ConsumerRebalanceFailedException ?

2014-11-17 Thread Bhavesh Mistry
Hi Kafka Team,


I intermittently get the following exception due to ZK/network issues. How do
I *programmatically* detect that a consumer thread has died, recover, and
restart the source? We have alerts showing that, due to this error, partition
OWNERSHIP is *none*. Please let me know how to detect that a consumer thread
has died and how to restart the source.
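
What I have in mind is something like the following -- a minimal sketch only,
assuming the high-level consumer API (the 30-second backoff is a placeholder):

{code}
// Minimal recovery sketch (assumes the high-level consumer API). Shut the
// connector down on rebalance failure and build a fresh one so that partition
// ownership is re-registered. Alerting/backoff policy wiring is left out.
import kafka.consumer.{Consumer, ConsumerConfig, ConsumerConnector}
import kafka.common.ConsumerRebalanceFailedException

def consumeWithRestart(config: ConsumerConfig): Unit = {
  while (true) {
    var connector: ConsumerConnector = null
    try {
      connector = Consumer.create(config)
      // ... create message streams and consume ...
      return
    } catch {
      case e: ConsumerRebalanceFailedException =>
        if (connector != null) connector.shutdown()
        Thread.sleep(30 * 1000) // back off before recreating the consumer
    }
  }
}
{code}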



17 Nov 2014 04:29:41,180 ERROR [
ZkClient-EventThread-65-dare-msgq00.sv.walmartlabs.com:9091,
dare-msgq01.sv.walmartlabs.com:9091,dare-msgq02.sv.walmartlabs.com:9091]
(org.I0Itec.zkclient.ZkEventThread.run:77)  - Error handling event
ZkEvent[New session event sent to
kafka.consumer.ZookeeperConsumerConnector$ZKSessionExpireListener@552c5ea8]
kafka.common.ConsumerRebalanceFailedException:
mupd_statsd_logmon_metric_events18_sdc-stg-logsearch-app1-1416023311208-14501d68
can't rebalance after 8 retries
at
kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.syncedRebalance(ZookeeperConsumerConnector.scala:626)
at
kafka.consumer.ZookeeperConsumerConnector$ZKSessionExpireListener.handleNewSession(ZookeeperConsumerConnector.scala:481)
at org.I0Itec.zkclient.ZkClient$4.run(ZkClient.java:472)
at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)





ZK Connection Issues:

java.net.SocketException: Transport endpoint is not connected
at sun.nio.ch.SocketChannelImpl.shutdown(Native Method)
at
sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:633)
at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
at
org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1205)
at
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1170)




at
org.I0Itec.zkclient.exception.ZkException.create(ZkException.java:47)
at
org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:685)
at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:766)
at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:761)
at kafka.utils.ZkUtils$.readData(ZkUtils.scala:449)
at
kafka.consumer.TopicCount$.constructTopicCount(TopicCount.scala:61)
at
kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.kafka$consumer$ZookeeperConsumerConnector$ZKRebalancerListener$$rebalance(ZookeeperConsumerConnector.scala:630)
at
kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$syncedRebalance$1$$anonfun$apply$mcV$sp$1.apply$mcVI$sp(ZookeeperConsumerConnector.scala:601)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at
kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$syncedRebalance$1.apply$mcV$sp(ZookeeperConsumerConnector.scala:595)
at
kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$syncedRebalance$1.apply(ZookeeperConsumerConnector.scala:592)
at
kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$syncedRebalance$1.apply(ZookeeperConsumerConnector.scala:592)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at
kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.syncedRebalance(ZookeeperConsumerConnector.scala:591)
at
kafka.consumer.ZookeeperConsumerConnector$ZKSessionExpireListener.handleNewSession(ZookeeperConsumerConnector.scala:481)
at org.I0Itec.zkclient.ZkClient$4.run(ZkClient.java:472)
at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
Caused by: org.apache.zookeeper.KeeperException$NoNodeException:
KeeperErrorCode = NoNode for
/consumers/mupd_statsd_logmon_metric_events18/ids/mupd_statsd_logmon_metric_events18_sdc-stg-logsearch-app1-1416023311208-14501d68
at
org.apache.zookeeper.KeeperException.create(KeeperException.java:102)
at
org.apache.zookeeper.KeeperException.create(KeeperException.java:42)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:921)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:950)
at org.I0Itec.zkclient.ZkConnection.readData(ZkConnection.java:103)
at org.I0Itec.zkclient.ZkClient$9.call(ZkClient.java:770)
at org.I0Itec.zkclient.ZkClient$9.call(ZkClient.java:766)
at
org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:675)


[jira] [Commented] (KAFKA-1481) Stop using dashes AND underscores as separators in MBean names

2014-11-17 Thread Joel Koshy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215371#comment-14215371
 ] 

Joel Koshy commented on KAFKA-1481:
---

+1  (for 0.8.2)

> Stop using dashes AND underscores as separators in MBean names
> --
>
> Key: KAFKA-1481
> URL: https://issues.apache.org/jira/browse/KAFKA-1481
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1.1
>Reporter: Otis Gospodnetic
>Priority: Critical
>  Labels: patch
> Fix For: 0.8.3
>
> Attachments: KAFKA-1481_2014-06-06_13-06-35.patch, 
> KAFKA-1481_2014-10-13_18-23-35.patch, KAFKA-1481_2014-10-14_21-53-35.patch, 
> KAFKA-1481_2014-10-15_10-23-35.patch, KAFKA-1481_2014-10-20_23-14-35.patch, 
> KAFKA-1481_2014-10-21_09-14-35.patch, KAFKA-1481_2014-10-30_21-35-43.patch, 
> KAFKA-1481_2014-10-31_14-35-43.patch, 
> KAFKA-1481_2014-11-03_16-39-41_doc.patch, 
> KAFKA-1481_2014-11-03_17-02-23.patch, 
> KAFKA-1481_2014-11-10_20-39-41_doc.patch, 
> KAFKA-1481_2014-11-10_21-02-23.patch, KAFKA-1481_2014-11-14_16-33-03.patch, 
> KAFKA-1481_2014-11-14_16-39-41_doc.patch, 
> KAFKA-1481_2014-11-17_14-33-03.patch, 
> KAFKA-1481_IDEA_IDE_2014-10-14_21-53-35.patch, 
> KAFKA-1481_IDEA_IDE_2014-10-15_10-23-35.patch, 
> KAFKA-1481_IDEA_IDE_2014-10-20_20-14-35.patch, 
> KAFKA-1481_IDEA_IDE_2014-10-20_23-14-35.patch, alternateLayout1.png, 
> alternateLayout2.png, diff-for-alternate-layout1.patch, 
> diff-for-alternate-layout2.patch, originalLayout.png
>
>
> MBeans should not use dashes or underscores as separators because these 
> characters are allowed in hostnames, topics, group and consumer IDs, etc., 
> and these are embedded in MBeans names making it impossible to parse out 
> individual bits from MBeans.
> Perhaps a pipe character should be used to avoid the conflict. 
> This looks like a major blocker because it means nobody can write Kafka 0.8.x 
> monitoring tools unless they are doing it for themselves AND do not use 
> dashes AND do not use underscores.
> See: http://search-hadoop.com/m/4TaT4lonIW



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1624) building on JDK 8 fails

2014-11-17 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215420#comment-14215420
 ] 

Ewen Cheslack-Postava commented on KAFKA-1624:
--

Isn't 2.10.4 the latest version in the 2.10 series?

> building on JDK 8 fails
> ---
>
> Key: KAFKA-1624
> URL: https://issues.apache.org/jira/browse/KAFKA-1624
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>  Labels: newbie
> Fix For: 0.8.2
>
>
> {code}
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; 
> support was removed in 8.0
> error: error while loading CharSequence, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/lang/CharSequence.class)' is 
> broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 10)
> error: error while loading Comparator, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/util/Comparator.class)' is 
> broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 20)
> error: error while loading AnnotatedElement, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/lang/reflect/AnnotatedElement.class)'
>  is broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 76)
> error: error while loading Arrays, class file 
> '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/util/Arrays.class)' is broken
> (class java.lang.RuntimeException/bad constant pool tag 18 at byte 765)
> /tmp/sbt_53783b12/xsbt/ExtractAPI.scala:395: error: java.util.Comparator does 
> not take type parameters
>   private[this] val sortClasses = new Comparator[Symbol] {
> ^
> 5 errors found
> :core:compileScala FAILED
> FAILURE: Build failed with an exception.
> * What went wrong:
> Execution failed for task ':core:compileScala'.
> > org.gradle.messaging.remote.internal.PlaceholderException (no error message)
> * Try:
> Run with --stacktrace option to get the stack trace. Run with --info or 
> --debug option to get more log output.
> BUILD FAILED
> Total time: 1 mins 48.298 secs
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1773) Add a tool to check available consumer groups

2014-11-17 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1773:
-
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

> Add a tool to check available consumer groups
> -
>
> Key: KAFKA-1773
> URL: https://issues.apache.org/jira/browse/KAFKA-1773
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Ashish Kumar Singh
>Assignee: Ashish Kumar Singh
> Attachments: KAFKA-1773.patch
>
>
> Right now ConsumerOffsetChecker expects a consumer group. However, there is no 
> tool to get available consumer groups.
> This JIRA intends to add a tool to get available consumer groups from 
> ZooKeeper.
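
For context, such a tool essentially lists the children of the /consumers path,
where each consumer group registers itself. A minimal sketch, assuming ZooKeeper
at localhost:2181 and the ZkClient library Kafka already depends on:

{code}
import org.I0Itec.zkclient.ZkClient
import scala.collection.JavaConversions._

val zkClient = new ZkClient("localhost:2181", 5000, 5000)
// Each consumer group is a child znode of /consumers.
zkClient.getChildren("/consumers").foreach(println)
{code}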



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1773) Add a tool to check available consumer groups

2014-11-17 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215506#comment-14215506
 ] 

Neha Narkhede commented on KAFKA-1773:
--

Thanks for the patch. The plan is to instead consolidate on a generic 
ConsumerCommand. See KAFKA-1476. We can handle this as part of that JIRA. It 
will be great to get your review on that patch.

> Add a tool to check available consumer groups
> -
>
> Key: KAFKA-1773
> URL: https://issues.apache.org/jira/browse/KAFKA-1773
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Ashish Kumar Singh
>Assignee: Ashish Kumar Singh
> Attachments: KAFKA-1773.patch
>
>
> Right now ConsumerOffsetChecker expects a consumer group. However, there is no 
> tool to get available consumer groups.
> This JIRA intends to add a tool to get available consumer groups from 
> ZooKeeper.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1173) Using Vagrant to get up and running with Apache Kafka

2014-11-17 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215515#comment-14215515
 ] 

Joe Stein commented on KAFKA-1173:
--

FYI - tried a new AWS secret key but got an error because the key AWS generated 
contained a "+".

There is a syntax error in the following Vagrantfile. The syntax error
message is reproduced below for convenience:

Vagrantfile.local:3: syntax error, unexpected tIDENTIFIER, expecting 
end-of-input

I am going to generate a new key and continue to test

> Using Vagrant to get up and running with Apache Kafka
> -
>
> Key: KAFKA-1173
> URL: https://issues.apache.org/jira/browse/KAFKA-1173
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Joe Stein
>Assignee: Ewen Cheslack-Postava
> Attachments: KAFKA-1173.patch, KAFKA-1173_2013-12-07_12:07:55.patch, 
> KAFKA-1173_2014-11-11_13:50:55.patch, KAFKA-1173_2014-11-12_11:32:09.patch
>
>
> Vagrant has been getting a lot of pickup in the tech communities.  I have 
> found it very useful for development and testing and working with a few 
> clients now using it to help virtualize their environments in repeatable ways.
> Using Vagrant to get up and running.
> For 0.8.0 I have a patch on github https://github.com/stealthly/kafka
> 1) Install Vagrant [http://www.vagrantup.com/](http://www.vagrantup.com/)
> 2) Install Virtual Box 
> [https://www.virtualbox.org/](https://www.virtualbox.org/)
> In the main kafka folder
> 1) ./sbt update
> 2) ./sbt package
> 3) ./sbt assembly-package-dependency
> 4) vagrant up
> once this is done 
> * Zookeeper will be running on 192.168.50.5
> * Broker 1 on 192.168.50.10
> * Broker 2 on 192.168.50.20
> * Broker 3 on 192.168.50.30
> When you are all up and running you will be back at a command prompt.  
> If you want you can log in to the machines using vagrant ssh  but 
> you don't need to.
> You can access the brokers and zookeeper by their IP
> e.g.
> bin/kafka-console-producer.sh --broker-list 
> 192.168.50.10:9092,192.168.50.20:9092,192.168.50.30:9092 --topic sandbox
> bin/kafka-console-consumer.sh --zookeeper 192.168.50.5:2181 --topic sandbox 
> --from-beginning



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1173) Using Vagrant to get up and running with Apache Kafka

2014-11-17 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215523#comment-14215523
 ] 

Joe Stein commented on KAFKA-1173:
--

Ignore the last issue; I quoted everything and it is spinning up now.
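
For anyone else who hits the earlier syntax error: Vagrantfile.local is parsed
as Ruby, so credential values containing characters like "+" must be quoted as
strings. A sketch (variable names assumed, keys fake):

{code}
# Quote the values; unquoted, a "+" in the secret is a Ruby syntax error.
ec2_access_key = "AKIAFAKEEXAMPLE"
ec2_secret_key = "abc+def/fakeExampleSecret"
{code}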

> Using Vagrant to get up and running with Apache Kafka
> -
>
> Key: KAFKA-1173
> URL: https://issues.apache.org/jira/browse/KAFKA-1173
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Joe Stein
>Assignee: Ewen Cheslack-Postava
> Attachments: KAFKA-1173.patch, KAFKA-1173_2013-12-07_12:07:55.patch, 
> KAFKA-1173_2014-11-11_13:50:55.patch, KAFKA-1173_2014-11-12_11:32:09.patch
>
>
> Vagrant has been getting a lot of pickup in the tech communities.  I have 
> found it very useful for development and testing and working with a few 
> clients now using it to help virtualize their environments in repeatable ways.
> Using Vagrant to get up and running.
> For 0.8.0 I have a patch on github https://github.com/stealthly/kafka
> 1) Install Vagrant [http://www.vagrantup.com/](http://www.vagrantup.com/)
> 2) Install Virtual Box 
> [https://www.virtualbox.org/](https://www.virtualbox.org/)
> In the main kafka folder
> 1) ./sbt update
> 2) ./sbt package
> 3) ./sbt assembly-package-dependency
> 4) vagrant up
> once this is done 
> * Zookeeper will be running on 192.168.50.5
> * Broker 1 on 192.168.50.10
> * Broker 2 on 192.168.50.20
> * Broker 3 on 192.168.50.30
> When you are all up and running you will be back at a command prompt.  
> If you want you can log in to the machines using vagrant ssh  but 
> you don't need to.
> You can access the brokers and zookeeper by their IP
> e.g.
> bin/kafka-console-producer.sh --broker-list 
> 192.168.50.10:9092,192.168.50.20:9092,192.168.50.30:9092 --topic sandbox
> bin/kafka-console-consumer.sh --zookeeper 192.168.50.5:2181 --topic sandbox 
> --from-beginning



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1173) Using Vagrant to get up and running with Apache Kafka

2014-11-17 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1173:
-
Fix Version/s: 0.8.3

> Using Vagrant to get up and running with Apache Kafka
> -
>
> Key: KAFKA-1173
> URL: https://issues.apache.org/jira/browse/KAFKA-1173
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Joe Stein
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.8.3
>
> Attachments: KAFKA-1173.patch, KAFKA-1173_2013-12-07_12:07:55.patch, 
> KAFKA-1173_2014-11-11_13:50:55.patch, KAFKA-1173_2014-11-12_11:32:09.patch
>
>
> Vagrant has been getting a lot of pickup in the tech communities.  I have 
> found it very useful for development and testing and working with a few 
> clients now using it to help virtualize their environments in repeatable ways.
> Using Vagrant to get up and running.
> For 0.8.0 I have a patch on github https://github.com/stealthly/kafka
> 1) Install Vagrant [http://www.vagrantup.com/](http://www.vagrantup.com/)
> 2) Install Virtual Box 
> [https://www.virtualbox.org/](https://www.virtualbox.org/)
> In the main kafka folder
> 1) ./sbt update
> 2) ./sbt package
> 3) ./sbt assembly-package-dependency
> 4) vagrant up
> once this is done 
> * Zookeeper will be running on 192.168.50.5
> * Broker 1 on 192.168.50.10
> * Broker 2 on 192.168.50.20
> * Broker 3 on 192.168.50.30
> When you are all up and running you will be back at a command prompt.  
> If you want you can log in to the machines using vagrant ssh  but 
> you don't need to.
> You can access the brokers and zookeeper by their IP
> e.g.
> bin/kafka-console-producer.sh --broker-list 
> 192.168.50.10:9092,192.168.50.20:9092,192.168.50.30:9092 --topic sandbox
> bin/kafka-console-consumer.sh --zookeeper 192.168.50.5:2181 --topic sandbox 
> --from-beginning



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 27391: Fix KAFKA-1634: Add the global retentionTime (in millis) and disable the per-offset timestamp

2014-11-17 Thread Guozhang Wang


> On Nov. 17, 2014, 10:31 p.m., Joel Koshy wrote:
> > clients/src/main/java/org/apache/kafka/common/requests/OffsetCommitRequest.java,
> >  line 151
> > 
> >
> > Made a follow-up comment in the earlier RB. But pasting here as well:
> > 
> > Agree that it is still "common" in the object but it is completely 
> > removed from the wire protocol - i.e., OFFSET_COMMIT_REQUEST_PARTITION_V1 
> > which is in the V2 OffsetCommitRequest does not have a timestamp.
> > 
> > This method should probably be read as "initCommonFieldsInStruct" - 
> > i.e., effectively the wire protocol. That said, I'm loath to add another 
> > init method which reads initCommonFieldsInV0AndV1. So I think rather than 
> > checking fetchPartitionData.timestamp it would be better to explicitly 
> > check the (already set) request version in the struct. If v0 or v1 then set 
> > the timestamp key name.

The version number info is not in the struct, so we cannot get its value unless 
we pass it from the request header into the constructor. But we can check 
whether the struct has that field or not. Changed accordingly.
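
A sketch of what "check if the struct has that field" amounts to (constant and
field names assumed for illustration):

{code}
// Hedged sketch: branch on the presence of the timestamp field rather than
// on a version number that the struct does not carry.
if (struct.hasField(TIMESTAMP_KEY_NAME))
  struct.set(TIMESTAMP_KEY_NAME, fetchPartitionData.timestamp)
{code}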


- Guozhang


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27391/#review61761
---


On Nov. 8, 2014, 12:54 a.m., Guozhang Wang wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27391/
> ---
> 
> (Updated Nov. 8, 2014, 12:54 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1634
> https://issues.apache.org/jira/browse/KAFKA-1634
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> The timestamp field of OffsetAndMetadata is preserved since we need to be 
> backward compatible with older versions
> 
> 
> Diffs
> -
> 
>   clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java 
> 7517b879866fc5dad5f8d8ad30636da8bbe7784a 
>   
> clients/src/main/java/org/apache/kafka/common/requests/OffsetCommitRequest.java
>  3ee5cbad55ce836fd04bb954dcf6ef2f9bc3288f 
>   
> clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java
>  df37fc6d8f0db0b8192a948426af603be3444da4 
>   core/src/main/scala/kafka/api/OffsetCommitRequest.scala 
> 050615c72efe7dbaa4634f53943bd73273d20ffb 
>   core/src/main/scala/kafka/api/OffsetFetchRequest.scala 
> c7604b9cdeb8f46507f04ed7d35fcb3d5f150713 
>   core/src/main/scala/kafka/common/OffsetMetadataAndError.scala 
> 4cabffeacea09a49913505db19a96a55d58c0909 
>   core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
> fbc680fde21b02f11285a4f4b442987356abd17b 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> 968b0c4f809ea9f9c3f322aef9db105f9d2e0e23 
>   core/src/main/scala/kafka/server/KafkaServer.scala 
> 4de812374e8fb1fed834d2be3f9655f55b511a74 
>   core/src/main/scala/kafka/server/OffsetManager.scala 
> 2957bc435102bc4004d8f100dbcdd56287c8ffae 
>   core/src/test/scala/unit/kafka/api/RequestResponseSerializationTest.scala 
> cd16ced5465d098be7a60498326b2a98c248f343 
>   core/src/test/scala/unit/kafka/server/OffsetCommitTest.scala 
> 8c5364fa97da1be09973c176d1baeb339455d319 
> 
> Diff: https://reviews.apache.org/r/27391/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Guozhang Wang
> 
>



Re: Review Request 27391: Fix KAFKA-1634: Add the global retentionTime (in millis) and disable the per-offset timestamp

2014-11-17 Thread Guozhang Wang


> On Nov. 7, 2014, 3 a.m., Joel Koshy wrote:
> > core/src/main/scala/kafka/server/OffsetManager.scala, line 498
> > 
> >
> > Actually, since we always set the retention period (for v0, v1) in 
> > KafkaApis do we need to even touch this timestamp? i.e., we should 
> > basically ignore it right? So we only need to do:
> > value.set(VALUE_TIMESTAMP_FIELD, expirationTimestamp).
> 
> Guozhang Wang wrote:
> I think note. In v0/v1, if the timestamp is explicitly specified (i.e. 
> not -1) we need to use it as the expiration timestamp, or at least that was 
> how I understood the semantics.
> 
> Guozhang Wang wrote:
> "I think we cannot not"
> 
> Joel Koshy wrote:
> Right - what I meant was in KafkaApis we can just compute the 
> retentionPeriod if v0 or v1.
> 
> So if v0/v1 and timestamp = (now + 7 days), then set retention to 7 days.

retention is a global parameter which we use later to set the per-message 
timestamp; but if the timestamps from the v0/v1 requests are different then we 
cannot just use a single retention right?
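
A sketch of the semantics under discussion (names hypothetical): an explicit
v0/v1 timestamp acts as a per-offset expiration, so it cannot be folded into a
single retention value.

{code}
// -1 means "no explicit timestamp", so the global retention applies;
// anything else is treated as that offset's own expiration time.
val expirationTimestamp =
  if (offsetAndMetadata.timestamp == -1L) now + retentionMs
  else offsetAndMetadata.timestamp
{code}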


- Guozhang


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27391/#review60285
---


On Nov. 8, 2014, 12:54 a.m., Guozhang Wang wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27391/
> ---
> 
> (Updated Nov. 8, 2014, 12:54 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1634
> https://issues.apache.org/jira/browse/KAFKA-1634
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> The timestamp field of OffsetAndMetadata is preserved since we need to be 
> backward compatible with older versions
> 
> 
> Diffs
> -
> 
>   clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java 
> 7517b879866fc5dad5f8d8ad30636da8bbe7784a 
>   
> clients/src/main/java/org/apache/kafka/common/requests/OffsetCommitRequest.java
>  3ee5cbad55ce836fd04bb954dcf6ef2f9bc3288f 
>   
> clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java
>  df37fc6d8f0db0b8192a948426af603be3444da4 
>   core/src/main/scala/kafka/api/OffsetCommitRequest.scala 
> 050615c72efe7dbaa4634f53943bd73273d20ffb 
>   core/src/main/scala/kafka/api/OffsetFetchRequest.scala 
> c7604b9cdeb8f46507f04ed7d35fcb3d5f150713 
>   core/src/main/scala/kafka/common/OffsetMetadataAndError.scala 
> 4cabffeacea09a49913505db19a96a55d58c0909 
>   core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
> fbc680fde21b02f11285a4f4b442987356abd17b 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> 968b0c4f809ea9f9c3f322aef9db105f9d2e0e23 
>   core/src/main/scala/kafka/server/KafkaServer.scala 
> 4de812374e8fb1fed834d2be3f9655f55b511a74 
>   core/src/main/scala/kafka/server/OffsetManager.scala 
> 2957bc435102bc4004d8f100dbcdd56287c8ffae 
>   core/src/test/scala/unit/kafka/api/RequestResponseSerializationTest.scala 
> cd16ced5465d098be7a60498326b2a98c248f343 
>   core/src/test/scala/unit/kafka/server/OffsetCommitTest.scala 
> 8c5364fa97da1be09973c176d1baeb339455d319 
> 
> Diff: https://reviews.apache.org/r/27391/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Guozhang Wang
> 
>



Re: Review Request 27391: Fix KAFKA-1634

2014-11-17 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27391/
---

(Updated Nov. 18, 2014, 1:42 a.m.)


Review request for kafka.


Summary (updated)
-

Fix KAFKA-1634


Bugs: KAFKA-1634
https://issues.apache.org/jira/browse/KAFKA-1634


Repository: kafka


Description
---

The timestamp field of OffsetAndMetadata is preserved since we need to be 
backward compatible with older versions


Diffs (updated)
-

  clients/src/main/java/org/apache/kafka/common/protocol/Protocol.java 
7517b879866fc5dad5f8d8ad30636da8bbe7784a 
  
clients/src/main/java/org/apache/kafka/common/requests/OffsetCommitRequest.java 
3ee5cbad55ce836fd04bb954dcf6ef2f9bc3288f 
  
clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java 
df37fc6d8f0db0b8192a948426af603be3444da4 
  core/src/main/scala/kafka/api/OffsetCommitRequest.scala 
050615c72efe7dbaa4634f53943bd73273d20ffb 
  core/src/main/scala/kafka/api/OffsetFetchRequest.scala 
c7604b9cdeb8f46507f04ed7d35fcb3d5f150713 
  core/src/main/scala/kafka/common/OffsetMetadataAndError.scala 
4cabffeacea09a49913505db19a96a55d58c0909 
  core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
f476973eeff653473a60c3ecf36e870e386536bc 
  core/src/main/scala/kafka/server/KafkaApis.scala 
968b0c4f809ea9f9c3f322aef9db105f9d2e0e23 
  core/src/main/scala/kafka/server/KafkaServer.scala 
4de812374e8fb1fed834d2be3f9655f55b511a74 
  core/src/main/scala/kafka/server/OffsetManager.scala 
2957bc435102bc4004d8f100dbcdd56287c8ffae 
  core/src/test/scala/unit/kafka/api/RequestResponseSerializationTest.scala 
cd16ced5465d098be7a60498326b2a98c248f343 
  core/src/test/scala/unit/kafka/server/OffsetCommitTest.scala 
8c5364fa97da1be09973c176d1baeb339455d319 

Diff: https://reviews.apache.org/r/27391/diff/


Testing
---


Thanks,

Guozhang Wang



Re: Review Request 28025: Patch for KAFKA-345

2014-11-17 Thread Jiangjie Qin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28025/
---

(Updated Nov. 18, 2014, 1:42 a.m.)


Review request for kafka.


Bugs: KAFKA-345
https://issues.apache.org/jira/browse/KAFKA-345


Repository: kafka


Description (updated)
---

Added new unit test.


Incorporated Joel's comments


Incorporated Joel's comments


Addressed Joel's comments.


Diffs (updated)
-

  core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
f476973eeff653473a60c3ecf36e870e386536bc 
  core/src/main/scala/kafka/javaapi/consumer/ConsumerRebalanceListener.java 
PRE-CREATION 
  core/src/main/scala/kafka/javaapi/consumer/ZookeeperConsumerConnector.scala 
1f98db5d692adc113189ec8c75a4fad29d6b6ffe 
  core/src/test/scala/unit/kafka/consumer/ZookeeperConsumerConnectorTest.scala 
e1d87112a2a587aa3a2f5875f278b276c32f45ac 

Diff: https://reviews.apache.org/r/28025/diff/


Testing
---


Thanks,

Jiangjie Qin



Re: Review Request 27990: Patch for KAFKA-1751

2014-11-17 Thread Gwen Shapira

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27990/#review61855
---

Ship it!


Looks good! Two minor readability comments from my side, and a non-binding +1.


core/src/main/scala/kafka/admin/ReassignPartitionsCommand.scala


I still think its pretty difficult to tell here that entry._2 refers to 
replicas.



core/src/test/scala/unit/kafka/admin/AdminTest.scala


Maybe comment on why you want to fail here. Unlike assertions, these tests 
are not self documenting.


- Gwen Shapira


On Nov. 17, 2014, 2:33 p.m., Dmitry Pekar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27990/
> ---
> 
> (Updated Nov. 17, 2014, 2:33 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1751
> https://issues.apache.org/jira/browse/KAFKA-1751
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-1751 / CR fixes
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/admin/ReassignPartitionsCommand.scala 
> 979992b68af3723cd229845faff81c641123bb88 
>   core/src/test/scala/unit/kafka/admin/AdminTest.scala 
> e28979827110dfbbb92fe5b152e7f1cc973de400 
>   core/src/test/scala/unit/kafka/admin/DeleteTopicTest.scala 
> 29cc01bcef9cacd8dec1f5d662644fc6fe4994bc 
> 
> Diff: https://reviews.apache.org/r/27990/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Dmitry Pekar
> 
>



[jira] [Updated] (KAFKA-345) Add a listener to ZookeeperConsumerConnector to get notified on rebalance events

2014-11-17 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-345:
---
Attachment: KAFKA-345_2014-11-17_17:42:42.patch

> Add a listener to ZookeeperConsumerConnector to get notified on rebalance 
> events
> 
>
> Key: KAFKA-345
> URL: https://issues.apache.org/jira/browse/KAFKA-345
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.7, 0.8.0
>Reporter: Peter Romianowski
> Attachments: KAFKA-345.patch, KAFKA-345.patch, 
> KAFKA-345_2014-11-15_01:00:55.patch, KAFKA-345_2014-11-15_01:19:56.patch, 
> KAFKA-345_2014-11-17_17:42:42.patch
>
>
> A sample use-case
> In our scenario we partition events by userid and then apply these to some 
> kind of state machine, that modifies the actual state of a user. So events 
> trigger state transitions. In order to avoid the need of loading user's state 
> upon each event processed, we cache that. But if a user's partition is moved 
> to another consumer and then back to the previous consumer we have stale 
> caches and hell breaks loose. I guess the same kind of problem occurs in 
> other scenarios like counting numbers by user, too.
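
A sketch of how the listener this JIRA asks for would address the use case
above (interface and method shape assumed for illustration, not taken from the
final patch):

{code}
// Drop cached per-user state before partitions are released, so a partition
// that moves away and comes back never sees a stale cache.
import kafka.javaapi.consumer.ConsumerRebalanceListener

class CacheClearingListener(cache: scala.collection.mutable.Map[String, AnyRef])
    extends ConsumerRebalanceListener {
  override def beforeReleasingPartitions(
      partitionOwnership: java.util.Map[String, java.util.Set[java.lang.Integer]]): Unit =
    cache.clear()
}
{code}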



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1634) Improve semantics of timestamp in OffsetCommitRequests and update documentation

2014-11-17 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-1634:
-
Attachment: KAFKA-1634_2014-11-17_17:42:44.patch

> Improve semantics of timestamp in OffsetCommitRequests and update 
> documentation
> ---
>
> Key: KAFKA-1634
> URL: https://issues.apache.org/jira/browse/KAFKA-1634
> Project: Kafka
>  Issue Type: Bug
>Reporter: Neha Narkhede
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.8.3
>
> Attachments: KAFKA-1634.patch, KAFKA-1634_2014-11-06_15:35:46.patch, 
> KAFKA-1634_2014-11-07_16:54:33.patch, KAFKA-1634_2014-11-17_17:42:44.patch
>
>
> From the mailing list -
> following up on this -- I think the online API docs for OffsetCommitRequest
> still incorrectly refer to client-side timestamps:
> https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-OffsetCommitRequest
> Wasn't that removed and now always handled server-side now?  Would one of
> the devs mind updating the API spec wiki?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1634) Improve semantics of timestamp in OffsetCommitRequests and update documentation

2014-11-17 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215585#comment-14215585
 ] 

Guozhang Wang commented on KAFKA-1634:
--

Updated reviewboard https://reviews.apache.org/r/27391/diff/
 against branch origin/trunk

> Improve semantics of timestamp in OffsetCommitRequests and update 
> documentation
> ---
>
> Key: KAFKA-1634
> URL: https://issues.apache.org/jira/browse/KAFKA-1634
> Project: Kafka
>  Issue Type: Bug
>Reporter: Neha Narkhede
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.8.3
>
> Attachments: KAFKA-1634.patch, KAFKA-1634_2014-11-06_15:35:46.patch, 
> KAFKA-1634_2014-11-07_16:54:33.patch, KAFKA-1634_2014-11-17_17:42:44.patch
>
>
> From the mailing list -
> following up on this -- I think the online API docs for OffsetCommitRequest
> still incorrectly refer to client-side timestamps:
> https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-OffsetCommitRequest
> Wasn't that removed and now always handled server-side now?  Would one of
> the devs mind updating the API spec wiki?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-345) Add a listener to ZookeeperConsumerConnector to get notified on rebalance events

2014-11-17 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215584#comment-14215584
 ] 

Jiangjie Qin commented on KAFKA-345:


Updated reviewboard https://reviews.apache.org/r/28025/diff/
 against branch origin/trunk

> Add a listener to ZookeeperConsumerConnector to get notified on rebalance 
> events
> 
>
> Key: KAFKA-345
> URL: https://issues.apache.org/jira/browse/KAFKA-345
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.7, 0.8.0
>Reporter: Peter Romianowski
> Attachments: KAFKA-345.patch, KAFKA-345.patch, 
> KAFKA-345_2014-11-15_01:00:55.patch, KAFKA-345_2014-11-15_01:19:56.patch, 
> KAFKA-345_2014-11-17_17:42:42.patch
>
>
> A sample use-case
> In our scenario we partition events by userid and then apply these to some 
> kind of state machine, that modifies the actual state of a user. So events 
> trigger state transitions. In order to avoid the need of loading user's state 
> upon each event processed, we cache that. But if a user's partition is moved 
> to another consumer and then back to the previous consumer we have stale 
> caches and hell breaks loose. I guess the same kind of problem occurs in 
> other scenarios like counting numbers by user, too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1173) Using Vagrant to get up and running with Apache Kafka

2014-11-17 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215589#comment-14215589
 ] 

Joe Stein commented on KAFKA-1173:
--

[~ewencp] The aws parts are further along and servers are spinning up with the 
code and configs installed, but I am still hitting an issue.

All of the DNS hosts are on the public IP but my default security group is only 
setup for 22 on the outside and only the internal security group for inside.  
Can this change to use the private IP instead of the public address?

should I have set
ec2_associate_public_ip = false

I ran vagrant outside the VPC, but maybe I should change it to false, though the 
comment above that line is the reason I didn't?


> Using Vagrant to get up and running with Apache Kafka
> -
>
> Key: KAFKA-1173
> URL: https://issues.apache.org/jira/browse/KAFKA-1173
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Joe Stein
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.8.3
>
> Attachments: KAFKA-1173.patch, KAFKA-1173_2013-12-07_12:07:55.patch, 
> KAFKA-1173_2014-11-11_13:50:55.patch, KAFKA-1173_2014-11-12_11:32:09.patch
>
>
> Vagrant has been getting a lot of pickup in the tech communities.  I have 
> found it very useful for development and testing and working with a few 
> clients now using it to help virtualize their environments in repeatable ways.
> Using Vagrant to get up and running.
> For 0.8.0 I have a patch on github https://github.com/stealthly/kafka
> 1) Install Vagrant [http://www.vagrantup.com/](http://www.vagrantup.com/)
> 2) Install Virtual Box 
> [https://www.virtualbox.org/](https://www.virtualbox.org/)
> In the main kafka folder
> 1) ./sbt update
> 2) ./sbt package
> 3) ./sbt assembly-package-dependency
> 4) vagrant up
> once this is done 
> * Zookeeper will be running on 192.168.50.5
> * Broker 1 on 192.168.50.10
> * Broker 2 on 192.168.50.20
> * Broker 3 on 192.168.50.30
> When you are all up and running you will be back at a command prompt.  
> If you want you can log in to the machines using vagrant ssh  but 
> you don't need to.
> You can access the brokers and zookeeper by their IP
> e.g.
> bin/kafka-console-producer.sh --broker-list 
> 192.168.50.10:9092,192.168.50.20:9092,192.168.50.30:9092 --topic sandbox
> bin/kafka-console-consumer.sh --zookeeper 192.168.50.5:2181 --topic sandbox 
> --from-beginning



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-1173) Using Vagrant to get up and running with Apache Kafka

2014-11-17 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215589#comment-14215589
 ] 

Joe Stein edited comment on KAFKA-1173 at 11/18/14 1:47 AM:


[~ewencp] The aws parts are further along and servers are spinning up with the 
code and configs installed, but I am still hitting an issue.

All of the DNS hosts are on the public IP but my default security group is only 
setup for 22 on the outside and only the internal security group for inside.  
Can this change to use the private IP instead of the public address?

should I have set
ec2_associate_public_ip = false

I ran vagrant outside the VPC, but maybe I should change it to false, though the 
comment above that line is the reason I didn't?

Maybe something else, but right now the servers can only communicate with each 
other on the internal private IP while hosts spin up on the public IP (however 
we fix that). Thanks!



was (Author: joestein):
[~ewencp] The aws parts are further along and servers are spinning up with the 
code and configs installed, but I am still hitting an issue.

All of the DNS hosts are on the public IP but my default security group is only 
setup for 22 on the outside and only the internal security group for inside.  
Can this change to use the private IP instead of the public address?

should I have set
ec2_associate_public_ip = false

I ran vagrant outside the VPC, but maybe I should change it to false, though the 
comment above that line is the reason I didn't?


> Using Vagrant to get up and running with Apache Kafka
> -
>
> Key: KAFKA-1173
> URL: https://issues.apache.org/jira/browse/KAFKA-1173
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Joe Stein
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.8.3
>
> Attachments: KAFKA-1173.patch, KAFKA-1173_2013-12-07_12:07:55.patch, 
> KAFKA-1173_2014-11-11_13:50:55.patch, KAFKA-1173_2014-11-12_11:32:09.patch
>
>
> Vagrant has been getting a lot of pickup in the tech communities.  I have 
> found it very useful for development and testing and working with a few 
> clients now using it to help virtualize their environments in repeatable ways.
> Using Vagrant to get up and running.
> For 0.8.0 I have a patch on github https://github.com/stealthly/kafka
> 1) Install Vagrant [http://www.vagrantup.com/](http://www.vagrantup.com/)
> 2) Install Virtual Box 
> [https://www.virtualbox.org/](https://www.virtualbox.org/)
> In the main kafka folder
> 1) ./sbt update
> 2) ./sbt package
> 3) ./sbt assembly-package-dependency
> 4) vagrant up
> once this is done 
> * Zookeeper will be running on 192.168.50.5
> * Broker 1 on 192.168.50.10
> * Broker 2 on 192.168.50.20
> * Broker 3 on 192.168.50.30
> When you are all up and running you will be back at a command prompt.  
> If you want you can log in to the machines using vagrant ssh  but 
> you don't need to.
> You can access the brokers and zookeeper by their IP
> e.g.
> bin/kafka-console-producer.sh --broker-list 
> 192.168.50.10:9092,192.168.50.20:9092,192.168.50.30:9092 --topic sandbox
> bin/kafka-console-consumer.sh --zookeeper 192.168.50.5:2181 --topic sandbox 
> --from-beginning



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1173) Using Vagrant to get up and running with Apache Kafka

2014-11-17 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215639#comment-14215639
 ] 

Ewen Cheslack-Postava commented on KAFKA-1173:
--

You need ec2_associate_public_ip if you want to be able to SSH externally. I 
think for your setup you'd want to set override.hostmanager.ignore_private_ip = 
false, whereas I specifically set it to true to get the public address. You'd 
only want to turn off ec2_associate_public_ip if everything including vagrant 
was running inside your VPC, e.g. internal automated tests that leverage the 
Vagrant script to setup the cluster.
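
Put concretely, the combination suggested here would be (a sketch using only
the toggles named in this thread):

{code}
ec2_associate_public_ip = true                    # keep SSH reachable from outside the VPC
override.hostmanager.ignore_private_ip = false    # but resolve hosts to the private IPs
{code}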

To explain why I wrote things the way I did -- my primary use case is 
development of and on top of Kafka. I want to make it easy to setup the 
cluster, run admin commands to inspect its state, setup producer/consumer 
processes, and, when necessary, SSH in and debug things. A lot of that can be 
done from my laptop, so supporting access from outside the VPC is handy. My 
setup is definitely not secure, but that's kind of by design -- if I'm just 
dumping test data into the cluster, I'm not particularly concerned about the 
security of the data. (But it would suck if someone else randomly started 
publishing data to my cluster...).

I'm hesitant to add even more toggles -- eventually it gets so complex that 
it's easier for each person to write their own custom Vagrantfile. And the 
amount of effort to get up and running on EC2 is already pretty high. Thoughts 
on a good compromise? The primary use cases I was thinking about were Kafka 
development (i.e. patch needs testing against a real cluster, system tests are 
breaking and I need to reproduce the issue), demo/tutorial (i.e. help users get 
a real cluster they can test against up and running), and a testbed for 
application-level code and benchmarks. It sounds like you either have a 
slightly different use case or just have a different workflow for using EC2 
during development.

> Using Vagrant to get up and running with Apache Kafka
> -
>
> Key: KAFKA-1173
> URL: https://issues.apache.org/jira/browse/KAFKA-1173
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Joe Stein
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.8.3
>
> Attachments: KAFKA-1173.patch, KAFKA-1173_2013-12-07_12:07:55.patch, 
> KAFKA-1173_2014-11-11_13:50:55.patch, KAFKA-1173_2014-11-12_11:32:09.patch
>
>
> Vagrant has been getting a lot of pickup in the tech communities.  I have 
> found it very useful for development and testing and working with a few 
> clients now using it to help virtualize their environments in repeatable ways.
> Using Vagrant to get up and running.
> For 0.8.0 I have a patch on github https://github.com/stealthly/kafka
> 1) Install Vagrant [http://www.vagrantup.com/](http://www.vagrantup.com/)
> 2) Install Virtual Box 
> [https://www.virtualbox.org/](https://www.virtualbox.org/)
> In the main kafka folder
> 1) ./sbt update
> 2) ./sbt package
> 3) ./sbt assembly-package-dependency
> 4) vagrant up
> once this is done 
> * Zookeeper will be running on 192.168.50.5
> * Broker 1 on 192.168.50.10
> * Broker 2 on 192.168.50.20
> * Broker 3 on 192.168.50.30
> When you are all up and running you will be back at a command prompt.  
> If you want you can log in to the machines using vagrant ssh  but 
> you don't need to.
> You can access the brokers and zookeeper by their IP
> e.g.
> bin/kafka-console-producer.sh --broker-list 
> 192.168.50.10:9092,192.168.50.20:9092,192.168.50.30:9092 --topic sandbox
> bin/kafka-console-consumer.sh --zookeeper 192.168.50.5:2181 --topic sandbox 
> --from-beginning



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1481) Stop using dashes AND underscores as separators in MBean names

2014-11-17 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215645#comment-14215645
 ] 

Jun Rao commented on KAFKA-1481:


[~vladimir.tretyakov], thanks a lot for your work. Committed to 0.8.2 after 
fixing 90 and 91.

Will leave the jira open until trunk is patched too.

> Stop using dashes AND underscores as separators in MBean names
> --
>
> Key: KAFKA-1481
> URL: https://issues.apache.org/jira/browse/KAFKA-1481
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1.1
>Reporter: Otis Gospodnetic
>Priority: Critical
>  Labels: patch
> Fix For: 0.8.3
>
> Attachments: KAFKA-1481_2014-06-06_13-06-35.patch, 
> KAFKA-1481_2014-10-13_18-23-35.patch, KAFKA-1481_2014-10-14_21-53-35.patch, 
> KAFKA-1481_2014-10-15_10-23-35.patch, KAFKA-1481_2014-10-20_23-14-35.patch, 
> KAFKA-1481_2014-10-21_09-14-35.patch, KAFKA-1481_2014-10-30_21-35-43.patch, 
> KAFKA-1481_2014-10-31_14-35-43.patch, 
> KAFKA-1481_2014-11-03_16-39-41_doc.patch, 
> KAFKA-1481_2014-11-03_17-02-23.patch, 
> KAFKA-1481_2014-11-10_20-39-41_doc.patch, 
> KAFKA-1481_2014-11-10_21-02-23.patch, KAFKA-1481_2014-11-14_16-33-03.patch, 
> KAFKA-1481_2014-11-14_16-39-41_doc.patch, 
> KAFKA-1481_2014-11-17_14-33-03.patch, 
> KAFKA-1481_IDEA_IDE_2014-10-14_21-53-35.patch, 
> KAFKA-1481_IDEA_IDE_2014-10-15_10-23-35.patch, 
> KAFKA-1481_IDEA_IDE_2014-10-20_20-14-35.patch, 
> KAFKA-1481_IDEA_IDE_2014-10-20_23-14-35.patch, alternateLayout1.png, 
> alternateLayout2.png, diff-for-alternate-layout1.patch, 
> diff-for-alternate-layout2.patch, originalLayout.png
>
>
> MBeans should not use dashes or underscores as separators because these 
> characters are allowed in hostnames, topics, group and consumer IDs, etc., 
> and these are embedded in MBeans names making it impossible to parse out 
> individual bits from MBeans.
> Perhaps a pipe character should be used to avoid the conflict. 
> This looks like a major blocker because it means nobody can write Kafka 0.8.x 
> monitoring tools unless they are doing it for themselves AND do not use 
> dashes AND do not use underscores.
> See: http://search-hadoop.com/m/4TaT4lonIW



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 25995: Patch for KAFKA-1650

2014-11-17 Thread Jiangjie Qin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25995/
---

(Updated Nov. 18, 2014, 2:44 a.m.)


Review request for kafka.


Bugs: KAFKA-1650
https://issues.apache.org/jira/browse/KAFKA-1650


Repository: kafka


Description (updated)
---

Addressed Guozhang's comments.


Addressed Guozhang's comments


commit before switch to trunk


commit before rebase


Rebased on trunk, Addressed Guozhang's comments.


Addressed Guozhang's comments on MaxInFlightRequests


Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into 
mirrormaker-redesign


Incorporated Guozhang's comments


Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into 
mirrormaker-redesign


Diffs (updated)
-

  core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
f476973eeff653473a60c3ecf36e870e386536bc 
  core/src/main/scala/kafka/tools/MirrorMaker.scala 
f399105087588946987bbc84e3759935d9498b6a 

Diff: https://reviews.apache.org/r/25995/diff/


Testing
---


Thanks,

Jiangjie Qin



[jira] [Commented] (KAFKA-1173) Using Vagrant to get up and running with Apache Kafka

2014-11-17 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215653#comment-14215653
 ] 

Joe Stein commented on KAFKA-1173:
--

[~ewencp] I agree, no more toggles unless required. I am looking at this from 
the community perspective: once we commit this we have to support it, and all 
the questions/issues people are going to have will come over the mailing list 
(etc). 

I flipped ignore_private_ip to false in the aws section of the vagrant file and 
giving it another try.

> Using Vagrant to get up and running with Apache Kafka
> -
>
> Key: KAFKA-1173
> URL: https://issues.apache.org/jira/browse/KAFKA-1173
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Joe Stein
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.8.3
>
> Attachments: KAFKA-1173.patch, KAFKA-1173_2013-12-07_12:07:55.patch, 
> KAFKA-1173_2014-11-11_13:50:55.patch, KAFKA-1173_2014-11-12_11:32:09.patch
>
>
> Vagrant has been getting a lot of pickup in the tech communities.  I have 
> found it very useful for development and testing and working with a few 
> clients now using it to help virtualize their environments in repeatable ways.
> Using Vagrant to get up and running.
> For 0.8.0 I have a patch on github https://github.com/stealthly/kafka
> 1) Install Vagrant [http://www.vagrantup.com/](http://www.vagrantup.com/)
> 2) Install Virtual Box 
> [https://www.virtualbox.org/](https://www.virtualbox.org/)
> In the main kafka folder
> 1) ./sbt update
> 2) ./sbt package
> 3) ./sbt assembly-package-dependency
> 4) vagrant up
> once this is done 
> * Zookeeper will be running on 192.168.50.5
> * Broker 1 on 192.168.50.10
> * Broker 2 on 192.168.50.20
> * Broker 3 on 192.168.50.30
> When you are all up and running you will be back at a command prompt.  
> If you want you can log in to the machines using vagrant ssh  but 
> you don't need to.
> You can access the brokers and zookeeper by their IP
> e.g.
> bin/kafka-console-producer.sh --broker-list 
> 192.168.50.10:9092,192.168.50.20:9092,192.168.50.30:9092 --topic sandbox
> bin/kafka-console-consumer.sh --zookeeper 192.168.50.5:2181 --topic sandbox 
> --from-beginning



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1650) Mirror Maker could lose data on unclean shutdown.

2014-11-17 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-1650:

Attachment: KAFKA-1650_2014-11-17_18:44:37.patch

> Mirror Maker could lose data on unclean shutdown.
> -
>
> Key: KAFKA-1650
> URL: https://issues.apache.org/jira/browse/KAFKA-1650
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-1650.patch, KAFKA-1650_2014-10-06_10:17:46.patch, 
> KAFKA-1650_2014-11-12_09:51:30.patch, KAFKA-1650_2014-11-17_18:44:37.patch
>
>
> Currently if mirror maker is shut down uncleanly, the data in the data 
> channel and buffer could potentially be lost. With the new producer's 
> callback, this issue could be solved.
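
A sketch of the idea (bookkeeping names hypothetical): the new producer's
send() takes a callback, so an offset can be marked safe to commit only after
the broker has acknowledged the corresponding message.

{code}
import org.apache.kafka.clients.producer.{Callback, RecordMetadata}

producer.send(record, new Callback {
  override def onCompletion(metadata: RecordMetadata, exception: Exception): Unit = {
    if (exception == null)
      safeOffsets.put(topicAndPartition, sourceOffset + 1)  // hypothetical bookkeeping
  }
})
{code}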



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1650) Mirror Maker could lose data on unclean shutdown.

2014-11-17 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215654#comment-14215654
 ] 

Jiangjie Qin commented on KAFKA-1650:
-

Updated reviewboard https://reviews.apache.org/r/25995/diff/
 against branch origin/trunk

> Mirror Maker could lose data on unclean shutdown.
> -
>
> Key: KAFKA-1650
> URL: https://issues.apache.org/jira/browse/KAFKA-1650
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-1650.patch, KAFKA-1650_2014-10-06_10:17:46.patch, 
> KAFKA-1650_2014-11-12_09:51:30.patch, KAFKA-1650_2014-11-17_18:44:37.patch
>
>
> Currently if mirror maker is shut down uncleanly, the data in the data 
> channel and buffer could potentially be lost. With the new producer's 
> callback, this issue could be solved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 25995: Patch for KAFKA-1650

2014-11-17 Thread Jiangjie Qin


> On Nov. 17, 2014, 9:48 p.m., Guozhang Wang wrote:
> > core/src/main/scala/kafka/tools/MirrorMaker.scala, line 338
> > 
> >
> > Is it possible that the current chunk has been consumed completely and 
> > the fetcher thread has yet put in a new chunk, and hence hasNext() will 
> > return false? If this case shall we stop the consumer or just let it block?

I don't think it is possible. If the consumer timeout is set to -1, hasNext() 
seems to return false only when:
1. A shutdown message is received.
2. An incomplete message is received (in which case we should probably exit).
Otherwise it will block until the next data chunk is put into the queue.
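
In other words (a sketch of the blocking behavior described, with
consumer.timeout.ms assumed to be -1):

{code}
val iter = stream.iterator()
while (iter.hasNext()) {  // blocks while the queue is empty
  val msg = iter.next()
  // ... mirror the message
}
// hasNext() returning false here means shutdown (or an incomplete message).
{code}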


- Jiangjie


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25995/#review61528
---


On Nov. 18, 2014, 2:44 a.m., Jiangjie Qin wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/25995/
> ---
> 
> (Updated Nov. 18, 2014, 2:44 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1650
> https://issues.apache.org/jira/browse/KAFKA-1650
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Addressed Guozhang's comments.
> 
> 
> Addressed Guozhang's comments
> 
> 
> commit before switch to trunk
> 
> 
> commit before rebase
> 
> 
> Rebased on trunk, Addressed Guozhang's comments.
> 
> 
> Addressed Guozhang's comments on MaxInFlightRequests
> 
> 
> Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into 
> mirrormaker-redesign
> 
> 
> Incorporated Guozhang's comments
> 
> 
> Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into 
> mirrormaker-redesign
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
> f476973eeff653473a60c3ecf36e870e386536bc 
>   core/src/main/scala/kafka/tools/MirrorMaker.scala 
> f399105087588946987bbc84e3759935d9498b6a 
> 
> Diff: https://reviews.apache.org/r/25995/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Jiangjie Qin
> 
>



Re: Review Request 27684: Patch for KAFKA-1743

2014-11-17 Thread Jun Rao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27684/#review61863
---


Thanks for the patch. Got the following compilation error.

:core:compileTestScala
/Users/junrao/intellij/kafka/core/src/test/scala/unit/kafka/consumer/ZookeeperConsumerConnectorTest.scala:116:
 overloaded method value commitOffsets with alternatives:
  => Unit 
  (isAutoCommit: Boolean)Unit
 cannot be applied to ()
zkConsumerConnector1.commitOffsets()
 ^
/Users/junrao/intellij/kafka/core/src/test/scala/unit/kafka/consumer/ZookeeperConsumerConnectorTest.scala:204:
 overloaded method value commitOffsets with alternatives:
  => Unit 
  (isAutoCommit: Boolean)Unit
 cannot be applied to ()
zkConsumerConnector1.commitOffsets()
 ^
two errors found
 FAILED
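
For context, this is a plain Scala overloading issue rather than anything
specific to the patch: with a parameterless method and a one-argument overload,
no alternative matches an empty argument list. A sketch:

{code}
def commitOffsets: Unit = commitOffsets(true)           // parameterless
def commitOffsets(isAutoCommit: Boolean): Unit = { /* ... */ }

commitOffsets        // OK: the parameterless method
commitOffsets(true)  // OK: the Boolean overload
// commitOffsets()   // does not compile: neither alternative takes ()
{code}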

- Jun Rao


On Nov. 16, 2014, 6:42 a.m., Manikumar Reddy O wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27684/
> ---
> 
> (Updated Nov. 16, 2014, 6:42 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1743
> https://issues.apache.org/jira/browse/KAFKA-1743
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> def commitOffsets method added to make ConsumerConnector backward 
> compatible; addressing Jun's comments
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/consumer/ConsumerConnector.scala 
> 07677c1c26768ef9c9032626180d0015f12cb0e0 
>   core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
> f476973eeff653473a60c3ecf36e870e386536bc 
>   core/src/main/scala/kafka/javaapi/consumer/ZookeeperConsumerConnector.scala 
> 1f98db5d692adc113189ec8c75a4fad29d6b6ffe 
> 
> Diff: https://reviews.apache.org/r/27684/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Manikumar Reddy O
> 
>



[jira] [Updated] (KAFKA-1769) javadoc should only include client facing packages

2014-11-17 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-1769:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the review. Double committed to trunk and 0.8.2.

> javadoc should only include client facing packages
> --
>
> Key: KAFKA-1769
> URL: https://issues.apache.org/jira/browse/KAFKA-1769
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.2
>Reporter: Jun Rao
>Assignee: Jun Rao
>Priority: Blocker
> Fix For: 0.8.2
>
> Attachments: kafka-1769.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1173) Using Vagrant to get up and running with Apache Kafka

2014-11-17 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215661#comment-14215661
 ] 

Ewen Cheslack-Postava commented on KAFKA-1173:
--

[~joestein] Got it, makes sense. The EC2 security group thing is a pain, it 
ends up requiring too much manual effort. The Spark EC2 scripts do a nice job 
of just setting up a usable default so it's really easy to get up and running, 
but I'm also hesitant to have the script automatically muck with users' 
security settings.

> Using Vagrant to get up and running with Apache Kafka
> -
>
> Key: KAFKA-1173
> URL: https://issues.apache.org/jira/browse/KAFKA-1173
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Joe Stein
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.8.3
>
> Attachments: KAFKA-1173.patch, KAFKA-1173_2013-12-07_12:07:55.patch, 
> KAFKA-1173_2014-11-11_13:50:55.patch, KAFKA-1173_2014-11-12_11:32:09.patch
>
>
> Vagrant has been getting a lot of pickup in the tech communities.  I have 
> found it very useful for development and testing and working with a few 
> clients now using it to help virtualize their environments in repeatable ways.
> Using Vagrant to get up and running.
> For 0.8.0 I have a patch on github https://github.com/stealthly/kafka
> 1) Install Vagrant [http://www.vagrantup.com/](http://www.vagrantup.com/)
> 2) Install Virtual Box 
> [https://www.virtualbox.org/](https://www.virtualbox.org/)
> In the main kafka folder
> 1) ./sbt update
> 2) ./sbt package
> 3) ./sbt assembly-package-dependency
> 4) vagrant up
> once this is done 
> * Zookeeper will be running on 192.168.50.5
> * Broker 1 on 192.168.50.10
> * Broker 2 on 192.168.50.20
> * Broker 3 on 192.168.50.30
> When you are all up and running you will be back at a command prompt.  
> If you want you can log in to the machines using vagrant ssh  but 
> you don't need to.
> You can access the brokers and zookeeper by their IP
> e.g.
> bin/kafka-console-producer.sh --broker-list 
> 192.168.50.10:9092,192.168.50.20:9092,192.168.50.30:9092 --topic sandbox
> bin/kafka-console-consumer.sh --zookeeper 192.168.50.5:2181 --topic sandbox 
> --from-beginning



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1173) Using Vagrant to get up and running with Apache Kafka

2014-11-17 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215675#comment-14215675
 ] 

Joe Stein commented on KAFKA-1173:
--

[~ewencp] changing override.hostmanager.ignore_private_ip = false in the AWS 
section didn't work :( The host manager is setting the hosts to the 192 address:
cat /etc/hosts
## vagrant-hostmanager-start
192.168.50.51   broker1 
192.168.50.52   broker2 
192.168.50.53   broker3 
192.168.50.11   zk1 

The virtual box parts are great, I think, for folks to jump in and get up and 
running quickly using vagrant; it is helpful for development and works 
without futzing with it, yup. One option is we could commit that part and move 
the AWS pieces to another ticket. I don't mind that, but I am ok with helping to 
keep testing the EC2 parts, as long as the end game is that it works for folks 
out of the box with few issues/steps. I should have a chance to try this all 
again and/or review whatever changes on Wednesday & Thursday (FYI gotta knock 
off for the evening and tomorrow is packed). Many folks have VPCs; we should try 
to accommodate them, otherwise it just looks like Kafka isn't working (or is 
harder to set up than it really is). We already get a lot of emails about EC2 
and advertising hosts and everything else, so this could be really helpful for 
folks once it is working more. 

<< The Spark EC2 scripts do a nice job of just setting up a usable default so 
it's really easy to get up and running, but I'm also hesitant to have the 
script automatically muck with users' security settings.

You could make it a flag, with some detail about changing the flag 
and what is going to happen if you do (so it is not automatic). I think if 
things are going to work then folks can decide for themselves whether the 
impact is worth it for them. I think having it just not work without having 
to do a lot, though, kind of takes away from the "spin up 
something working" aspect of the change.

We will also learn a lot more about what the community wants, and wants done 
differently, as this gets out into the world.


> Using Vagrant to get up and running with Apache Kafka
> -
>
> Key: KAFKA-1173
> URL: https://issues.apache.org/jira/browse/KAFKA-1173
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Joe Stein
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.8.3
>
> Attachments: KAFKA-1173.patch, KAFKA-1173_2013-12-07_12:07:55.patch, 
> KAFKA-1173_2014-11-11_13:50:55.patch, KAFKA-1173_2014-11-12_11:32:09.patch
>
>
> Vagrant has been getting a lot of pickup in the tech communities.  I have 
> found it very useful for development and testing and working with a few 
> clients now using it to help virtualize their environments in repeatable ways.
> Using Vagrant to get up and running.
> For 0.8.0 I have a patch on github https://github.com/stealthly/kafka
> 1) Install Vagrant [http://www.vagrantup.com/](http://www.vagrantup.com/)
> 2) Install Virtual Box 
> [https://www.virtualbox.org/](https://www.virtualbox.org/)
> In the main kafka folder
> 1) ./sbt update
> 2) ./sbt package
> 3) ./sbt assembly-package-dependency
> 4) vagrant up
> once this is done 
> * Zookeeper will be running on 192.168.50.5
> * Broker 1 on 192.168.50.10
> * Broker 2 on 192.168.50.20
> * Broker 3 on 192.168.50.30
> When you are all up and running you will be back at a command prompt.  
> If you want you can log in to the machines using vagrant ssh  but 
> you don't need to.
> You can access the brokers and zookeeper by their IP
> e.g.
> bin/kafka-console-producer.sh --broker-list 
> 192.168.50.10:9092,192.168.50.20:9092,192.168.50.30:9092 --topic sandbox
> bin/kafka-console-consumer.sh --zookeeper 192.168.50.5:2181 --topic sandbox 
> --from-beginning



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1773) Add a tool to check available consumer groups

2014-11-17 Thread Ashish Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215716#comment-14215716
 ] 

Ashish Kumar Singh commented on KAFKA-1773:
---

[~nehanarkhede] thanks for pointing to KAFKA-1476. It makes sense to close 
this. Will look into KAFKA-1476.

> Add a tool to check available consumer groups
> -
>
> Key: KAFKA-1773
> URL: https://issues.apache.org/jira/browse/KAFKA-1773
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Ashish Kumar Singh
>Assignee: Ashish Kumar Singh
> Attachments: KAFKA-1773.patch
>
>
> Right now ConsumerOffsetChecker expects a consumer group. However, there is no 
> tool to get available consumer groups.
> This JIRA intends to add a tool to get available consumer groups from 
> ZooKeeper.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 27693: Patch for KAFKA-1476

2014-11-17 Thread Ashish Singh

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27693/#review61873
---


Good work! Left some comments.


core/src/main/scala/kafka/tools/ConsumerCommand.scala


NIT: Not sure if there is any standard we have around max line length, but 
keeping lines short usually makes code more readable.



core/src/main/scala/kafka/tools/ConsumerCommand.scala


Any particular reason we are hiding the stacktrace here? Adding a message is 
always good, but the stacktrace is always more useful :)
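
i.e., something along these lines (a sketch):

{code}
try {
  // ... the ZooKeeper call
} catch {
  case e: Throwable =>
    println("Failed to list consumer groups: " + e.getMessage)
    e.printStackTrace()  // the stacktrace stays visible
}
{code}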



core/src/main/scala/kafka/tools/ConsumerCommand.scala


Given that ConsumerCommand serves multiple functions, i.e., prints several 
kinds of information, it is important that we say what we are printing. 
Something like "Available consumer groups" is required here.



core/src/main/scala/kafka/tools/ConsumerCommand.scala


Same as above about not hiding the stacktrace.



core/src/main/scala/kafka/utils/ZkUtils.scala


Use ZkUtils.ConsumersPath


- Ashish Singh


On Nov. 10, 2014, 7:06 p.m., Balaji Seshadri wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27693/
> ---
> 
> (Updated Nov. 10, 2014, 7:06 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1476
> https://issues.apache.org/jira/browse/KAFKA-1476
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-1476 Get list of consumer groups - Review Comments
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/tools/ConsumerCommand.scala PRE-CREATION 
>   core/src/main/scala/kafka/utils/ZkUtils.scala 
> 56e3e88e0cc6d917b0ffd1254e173295c1c4aabd 
> 
> Diff: https://reviews.apache.org/r/27693/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Balaji Seshadri
> 
>



[jira] [Commented] (KAFKA-1476) Get a list of consumer groups

2014-11-17 Thread Ashish Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215730#comment-14215730
 ] 

Ashish Kumar Singh commented on KAFKA-1476:
---

[~techybalaji] Left some comments on RB.

> Get a list of consumer groups
> -
>
> Key: KAFKA-1476
> URL: https://issues.apache.org/jira/browse/KAFKA-1476
> Project: Kafka
>  Issue Type: Wish
>  Components: tools
>Affects Versions: 0.8.1.1
>Reporter: Ryan Williams
>Assignee: BalajiSeshadri
>  Labels: newbie
> Fix For: 0.9.0
>
> Attachments: ConsumerCommand.scala, KAFKA-1476-LIST-GROUPS.patch, 
> KAFKA-1476-RENAME.patch, KAFKA-1476-REVIEW-COMMENTS.patch, KAFKA-1476.patch, 
> KAFKA-1476.patch, KAFKA-1476.patch, KAFKA-1476_2014-11-10_11:58:26.patch, 
> KAFKA-1476_2014-11-10_12:04:01.patch, KAFKA-1476_2014-11-10_12:06:35.patch
>
>
> It would be useful to have a way to list the currently active consumer groups 
> via some tool/script that ships with Kafka. This would make the system tools 
> easier to explore.
> For example, ConsumerOffsetChecker requires a group 
> option:
> bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --topic test --group 
> ?
> But when just getting started with Kafka, using the console producer and 
> consumer, it is not clear what value to use for the group option. If the 
> consumer groups could be listed, then it would be clear what value to use.
> Background:
> http://mail-archives.apache.org/mod_mbox/kafka-users/201405.mbox/%3cCAOq_b1w=slze5jrnakxvak0gu9ctdkpazak1g4dygvqzbsg...@mail.gmail.com%3e



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 27684: Patch for KAFKA-1743

2014-11-17 Thread Manikumar Reddy O

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27684/
---

(Updated Nov. 18, 2014, 5:29 a.m.)


Review request for kafka.


Bugs: KAFKA-1743
https://issues.apache.org/jira/browse/KAFKA-1743


Repository: kafka


Description
---

A def commitOffsets method was added to make ConsumerConnector backward 
compatible; addressing Jun's comments.
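
As a rough sketch of the shape of such a fix (hypothetical, not the actual 
patch): a no-argument overload can delegate to the new one, so 0.8.1-style 
call sites keep compiling. Note that a Scala default argument alone does not 
help Java callers or already-compiled clients, since defaults are filled in 
at the call site at compile time.

{code}
// Hypothetical sketch, not the actual patch: keep the new 0.8.2 overload
// while restoring a no-argument commitOffsets for 0.8.1-style callers.
trait BackwardCompatibleConnector {
  def commitOffsets(retryOnFailure: Boolean): Unit

  // 0.8.1.x exposed commitOffsets with no arguments; delegating to the
  // new overload keeps those call sites working.
  def commitOffsets(): Unit = commitOffsets(retryOnFailure = true)
}
{code}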


Diffs (updated)
-

  core/src/main/scala/kafka/consumer/ConsumerConnector.scala 
07677c1c26768ef9c9032626180d0015f12cb0e0 
  core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
fe9d8e028cf08db844f0d72de4dd1e78f0e4258c 
  core/src/main/scala/kafka/javaapi/consumer/ZookeeperConsumerConnector.scala 
1f98db5d692adc113189ec8c75a4fad29d6b6ffe 
  core/src/test/scala/unit/kafka/consumer/ZookeeperConsumerConnectorTest.scala 
bad099a904967651bc3a38b6bb9a9cdb592b832b 

Diff: https://reviews.apache.org/r/27684/diff/


Testing
---


Thanks,

Manikumar Reddy O



[jira] [Updated] (KAFKA-1743) ConsumerConnector.commitOffsets in 0.8.2 is not backward compatible

2014-11-17 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-1743:
---
Attachment: KAFKA-1743_2014-11-18_10:59:05.patch

> ConsumerConnector.commitOffsets in 0.8.2 is not backward compatible
> ---
>
> Key: KAFKA-1743
> URL: https://issues.apache.org/jira/browse/KAFKA-1743
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2
>Reporter: Jun Rao
>Assignee: Manikumar Reddy
>Priority: Blocker
> Fix For: 0.8.2
>
> Attachments: KAFKA-1743.patch, KAFKA-1743_2014-11-08_11:49:31.patch, 
> KAFKA-1743_2014-11-14_22:29:21.patch, KAFKA-1743_2014-11-16_12:11:51.patch, 
> KAFKA-1743_2014-11-18_10:59:05.patch
>
>
> In 0.8.1.x, ConsumerConnector has the following API:
>   def commitOffsets
> This is changed to the following in 0.8.2, which breaks compatibility:
>   def commitOffsets(retryOnFailure: Boolean = true)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1743) ConsumerConnector.commitOffsets in 0.8.2 is not backward compatible

2014-11-17 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215733#comment-14215733
 ] 

Manikumar Reddy commented on KAFKA-1743:


Updated reviewboard https://reviews.apache.org/r/27684/diff/
 against branch origin/0.8.2

> ConsumerConnector.commitOffsets in 0.8.2 is not backward compatible
> ---
>
> Key: KAFKA-1743
> URL: https://issues.apache.org/jira/browse/KAFKA-1743
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2
>Reporter: Jun Rao
>Assignee: Manikumar Reddy
>Priority: Blocker
> Fix For: 0.8.2
>
> Attachments: KAFKA-1743.patch, KAFKA-1743_2014-11-08_11:49:31.patch, 
> KAFKA-1743_2014-11-14_22:29:21.patch, KAFKA-1743_2014-11-16_12:11:51.patch, 
> KAFKA-1743_2014-11-18_10:59:05.patch
>
>
> In 0.8.1.x, ConsumerConnector has the following API:
>   def commitOffsets
> This is changed to the following in 0.8.2, which breaks compatibility:
>   def commitOffsets(retryOnFailure: Boolean = true)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 27684: Patch for KAFKA-1743

2014-11-17 Thread Manikumar Reddy O


> On Nov. 18, 2014, 2:49 a.m., Jun Rao wrote:
> > Thanks for the patch. Got the following compilation error.
> > 
> > :core:compileTestScala
> > /Users/junrao/intellij/kafka/core/src/test/scala/unit/kafka/consumer/ZookeeperConsumerConnectorTest.scala:116:
> >  overloaded method value commitOffsets with alternatives:
> >   => Unit 
> >   (isAutoCommit: Boolean)Unit
> >  cannot be applied to ()
> > zkConsumerConnector1.commitOffsets()
> >  ^
> > /Users/junrao/intellij/kafka/core/src/test/scala/unit/kafka/consumer/ZookeeperConsumerConnectorTest.scala:204:
> >  overloaded method value commitOffsets with alternatives:
> >   => Unit 
> >   (isAutoCommit: Boolean)Unit
> >  cannot be applied to ()
> > zkConsumerConnector1.commitOffsets()
> >  ^
> > two errors found
> >  FAILED

Oh, my bad! I missed the test classes. Please review the latest patch.
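
For context, the failure is reproducible outside Kafka (illustrative sketch): 
a Scala method declared without a parameter list cannot be applied to (), so 
the call commitOffsets() matches neither alternative. Declaring the 
no-argument variant with explicit parentheses, def commitOffsets(), is one 
way to let such call sites compile.

{code}
// Minimal reproduction of the compile error above (not Kafka code).
object OverloadDemo extends App {
  def commitOffsets: Unit = commitOffsets(true)        // declared without parens
  def commitOffsets(isAutoCommit: Boolean): Unit = ()  // the 0.8.2-style overload

  commitOffsets        // OK: selects the parameterless variant
  commitOffsets(true)  // OK: selects the Boolean overload
  // commitOffsets()   // error: overloaded method ... cannot be applied to ()
}
{code}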


- Manikumar Reddy


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27684/#review61863
---


On Nov. 18, 2014, 5:29 a.m., Manikumar Reddy O wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27684/
> ---
> 
> (Updated Nov. 18, 2014, 5:29 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1743
> https://issues.apache.org/jira/browse/KAFKA-1743
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> A def commitOffsets method was added to make ConsumerConnector backward 
> compatible; addressing Jun's comments.
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/consumer/ConsumerConnector.scala 
> 07677c1c26768ef9c9032626180d0015f12cb0e0 
>   core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
> fe9d8e028cf08db844f0d72de4dd1e78f0e4258c 
>   core/src/main/scala/kafka/javaapi/consumer/ZookeeperConsumerConnector.scala 
> 1f98db5d692adc113189ec8c75a4fad29d6b6ffe 
>   
> core/src/test/scala/unit/kafka/consumer/ZookeeperConsumerConnectorTest.scala 
> bad099a904967651bc3a38b6bb9a9cdb592b832b 
> 
> Diff: https://reviews.apache.org/r/27684/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Manikumar Reddy O
> 
>