[jira] [Commented] (KAFKA-1095) Kafka does not compile with sbt

2014-02-05 Thread jobs wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13891965#comment-13891965
 ] 

jobs wang commented on KAFKA-1095:
--

I added 2.9.3 to the sbt build file, but it reports errors!

> Kafka does not compile with sbt
> ---
>
> Key: KAFKA-1095
> URL: https://issues.apache.org/jira/browse/KAFKA-1095
> Project: Kafka
>  Issue Type: Bug
>  Components: packaging
>Affects Versions: 0.9.0
> Environment: Linux 64bit, OpenJDK 1.7
>Reporter: Marcel Lohmann
>Priority: Blocker
>
> Expected behaviour:
> After `git pull`, `./sbt update` and `./sbt package` the current snapshot 
> version should compile without errors.
> Current behaviour:
> It fails with different error messages. This is one possible error log: 
> https://gist.github.com/mumrah/7086356
> With `./sbt "++2.10.1 package"` the errors are different but still result in 
> a failed compile. Other Scala versions fail, too.
> The responsible commit which leads to the error messages is unknown to me.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (KAFKA-1242) Get added to the asf-cla group

2014-02-05 Thread Tomas Uribe (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13892265#comment-13892265
 ] 

Tomas Uribe commented on KAFKA-1242:


I don't seem to have edit permission yet. Who's in charge of adding people to the 
list?

> Get added to the asf-cla group
> --
>
> Key: KAFKA-1242
> URL: https://issues.apache.org/jira/browse/KAFKA-1242
> Project: Kafka
>  Issue Type: Task
>  Components: website
>Reporter: Tomas Uribe
>Assignee: Joe Stein
>Priority: Blocker
>
> Would like to be added to the wiki-editing group. ICLA is on file now, user 
> tomas.uribe. Thx.





Re: Review Request 17537: Patch for KAFKA-1028

2014-02-05 Thread Andrew Olson


> On Feb. 4, 2014, 11:21 p.m., Neha Narkhede wrote:
> > core/src/main/scala/kafka/controller/PartitionLeaderSelector.scala, line 64
> > 
> >
> > the config object should already have the per topic preference for 
> > unclean leader election. So we don't have to read from zookeeper again.

It doesn't look like this is actually the case. The KafkaConfig is passed from 
the KafkaServer to the KafkaController with no topic context, and the 
controller does not appear to be integrated with the topic log configuration 
logic in the TopicConfigManager/LogManager.

Just to confirm my understanding of the code, I removed this Zookeeper read and 
doing so caused the two TopicOverride integration tests that I added to fail. 
Is there a simpler or less awkward way to implement this as a per-topic 
configuration? Reading the config on demand from ZK seems like the simplest and 
least invasive option since this should not be a frequently executed code path, 
but I could be missing something obvious.
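The on-demand lookup described above can be sketched as follows. This is a minimal illustration only: the fetch function stands in for the actual ZooKeeper read, and the property name and helper are assumptions, not Kafka's real API.

```scala
// Sketch of an on-demand per-topic config lookup with a broker-level default.
// `fetchTopicConfig` stands in for a ZooKeeper read of the topic's config
// overrides; the key name and method are illustrative.
object UncleanElectionConfig {
  val PropName = "unclean.leader.election.enable" // hypothetical key

  def uncleanLeaderElectionEnabled(topic: String,
                                   brokerDefault: Boolean,
                                   fetchTopicConfig: String => Map[String, String]): Boolean =
    // Read the topic's overrides on demand; fall back to the broker default
    // when the topic has no override. Since leader election is a rarely
    // executed code path, the extra read is cheap.
    fetchTopicConfig(topic).get(PropName) match {
      case Some(v) => v.toBoolean
      case None    => brokerDefault
    }
}
```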


- Andrew


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/17537/#review33658
---


On Jan. 30, 2014, 7:45 p.m., Andrew Olson wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/17537/
> ---
> 
> (Updated Jan. 30, 2014, 7:45 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1028
> https://issues.apache.org/jira/browse/KAFKA-1028
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-1028: per topic configuration of preference for consistency over 
> availability
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/admin/AdminUtils.scala 
> a167756f0fd358574c8ccb42c5c96aaf13def4f5 
>   core/src/main/scala/kafka/common/NoReplicaOnlineException.scala 
> a1e12794978adf79020936c71259bbdabca8ee68 
>   core/src/main/scala/kafka/controller/KafkaController.scala 
> a0267ae2670e8d5f365e49ec0fa5db1f62b815bf 
>   core/src/main/scala/kafka/controller/PartitionLeaderSelector.scala 
> fd9200f3bf941aab54df798bb5899eeb552ea3a3 
>   core/src/main/scala/kafka/log/LogConfig.scala 
> 0b32aeeffcd9d4755ac90573448d197d3f729749 
>   core/src/main/scala/kafka/server/KafkaConfig.scala 
> 3c3aafc2b3f06fc8f3168a8a9c1e0b08e944c1ef 
>   core/src/main/scala/kafka/server/ReplicaFetcherThread.scala 
> 73e605eb31bc71642d48b0bb8bd1632fd70b9dca 
>   core/src/test/scala/unit/kafka/integration/RollingBounceTest.scala 
> b585f0ec0b1c402d95a3b34934dab7545dcfcb1f 
>   core/src/test/scala/unit/kafka/integration/UncleanLeaderElectionTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala 
> 89c207a3f56c7a7711f8cee6fb277626329882a6 
>   core/src/test/scala/unit/kafka/utils/TestUtils.scala 
> 426b1a7bea1d83a64081f2c6b672c88c928713b7 
> 
> Diff: https://reviews.apache.org/r/17537/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Andrew Olson
> 
>



Re: Review Request 17460: Patch for KAFKA-330

2014-02-05 Thread Neha Narkhede

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/17460/
---

(Updated Feb. 5, 2014, 5:31 p.m.)


Review request for kafka.


Bugs: KAFKA-330
https://issues.apache.org/jira/browse/KAFKA-330


Repository: kafka


Description (updated)
---

Joel's review suggestions - Changed the controllerLock instances to inLock 
instead of synchronized, fixed some logging
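The inLock change mentioned above refers to the usual lock-with-finally helper. A minimal sketch of the pattern (the exact signature in kafka.utils may differ):

```scala
import java.util.concurrent.locks.Lock

// Sketch of the inLock helper pattern: acquire the lock, run the body,
// and always release in a finally block, returning the body's result.
object LockUtils {
  def inLock[T](lock: Lock)(body: => T): T = {
    lock.lock()
    try body
    finally lock.unlock()
  }
}
```

Unlike `synchronized`, this works with explicit `java.util.concurrent` locks, so the same lock object can also hand out Conditions for wait/notify-style coordination.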


Removed init() API from TopicDeletionManager and added docs to 
TopicDeletionManager to describe the lifecycle of topic deletion


Updated docs for the new states. Removed the changes to log4j.properties


Cleanup unused APIs, consolidated APIs of TopicDeletionManager, added docs, 
unit tests working


Moved deletion states into ReplicaStateMachine. All unit tests pass. Cleanup of 
some APIs pending


Changed controller to reference APIs in TopicDeletionManager. All unit tests 
pass


Introduced a TopicDeletionManager. KafkaController changes pending to use the 
new TopicDeletionManager


Addressed Guozhang's review comments


Fixed docs in a few places


Fixed the resume logic for partition reassignment to also include topics that 
are queued up for deletion, since topic deletion is halted until partition 
reassignment can finish anyway. We need to let partition reassignment finish 
(since it started before topic deletion) so that topic deletion can resume


Organized imports


Moved offline replica handling to controller failover


Reading replica assignment from zookeeper instead of local cache


Deleting unused APIs


Reverting the change to the stop replica request protocol. Instead hacking 
around with callbacks


All functionality and all unit tests working


Rebased with trunk after controller cleanup patch


Diffs (updated)
-

  core/src/main/scala/kafka/admin/AdminUtils.scala 
a167756f0fd358574c8ccb42c5c96aaf13def4f5 
  core/src/main/scala/kafka/admin/TopicCommand.scala 
842c11047cca0531fbc572fdb25523244ba2b626 
  core/src/main/scala/kafka/api/ControlledShutdownResponse.scala 
a80aa4924cfe9a4670591d03258dd82c428bc3af 
  core/src/main/scala/kafka/api/LeaderAndIsrRequest.scala 
a984878fbd8147b21211829a49de511fd1335421 
  core/src/main/scala/kafka/api/StopReplicaRequest.scala 
820f0f57b00849a588a840358d07f3a4a31772d4 
  core/src/main/scala/kafka/api/StopReplicaResponse.scala 
d7e36308263aec2298e8adff8f22e18212e33fca 
  core/src/main/scala/kafka/api/UpdateMetadataRequest.scala 
54dd7bd4e195cc2ff4637ac93e2f9b681e316024 
  core/src/main/scala/kafka/controller/ControllerChannelManager.scala 
ea8485b479155b479c575ebc89a4f73086c872cb 
  core/src/main/scala/kafka/controller/DeleteTopicsThread.scala PRE-CREATION 
  core/src/main/scala/kafka/controller/KafkaController.scala 
a0267ae2670e8d5f365e49ec0fa5db1f62b815bf 
  core/src/main/scala/kafka/controller/PartitionLeaderSelector.scala 
fd9200f3bf941aab54df798bb5899eeb552ea3a3 
  core/src/main/scala/kafka/controller/PartitionStateMachine.scala 
ac4262a403fc73edaecbddf55858703c640b11c0 
  core/src/main/scala/kafka/controller/ReplicaStateMachine.scala 
483559aa64726c51320d18b64a1b48f8fb2905a0 
  core/src/main/scala/kafka/controller/TopicDeletionManager.scala PRE-CREATION 
  core/src/main/scala/kafka/network/BlockingChannel.scala 
d22dabdf4fc2346c5487b9fd94cadfbcab70040d 
  core/src/main/scala/kafka/server/KafkaApis.scala 
bd7940b80ca1f1fa4a671c49cf6be1aeec2bbd7e 
  core/src/main/scala/kafka/server/KafkaHealthcheck.scala 
9dca55c9254948f1196ba17e1d3ebacdcd66be0c 
  core/src/main/scala/kafka/server/OffsetCheckpoint.scala 
b5719f89f79b9f2df4b6cb0f1c869b6eae9f8a7b 
  core/src/main/scala/kafka/server/ReplicaManager.scala 
f9d10d385cee49a1e3be8c82e3ffa22ef87a8fd6 
  core/src/main/scala/kafka/server/TopicConfigManager.scala 
42e98dd66f3269e6e3a8210934dabfd65df2dba9 
  core/src/main/scala/kafka/server/ZookeeperLeaderElector.scala 
b189619bdc1b0d2bba8e8f88467fce014be96ccd 
  core/src/main/scala/kafka/utils/ZkUtils.scala 
b42e52b8e5668383b287b2a86385df65e51b5108 
  core/src/test/scala/unit/kafka/admin/AdminTest.scala 
59de1b469fece0b28e1d04dcd7b7015c12576abb 
  core/src/test/scala/unit/kafka/admin/DeleteTopicTest.scala PRE-CREATION 
  core/src/test/scala/unit/kafka/api/RequestResponseSerializationTest.scala 
8df0982a1e71e3f50a073c4ae181096d32914f3e 
  core/src/test/scala/unit/kafka/server/LogOffsetTest.scala 
9aea67b140e50c6f9f1868ce2e2aac9e7530fa77 
  core/src/test/scala/unit/kafka/server/OffsetCommitTest.scala 
c0475d07a778ff957ad266c08a7a81ea500debd2 
  core/src/test/scala/unit/kafka/server/SimpleFetchTest.scala 
03e6266ffdad5891ec81df786bd094066b78b4c0 
  core/src/test/scala/unit/kafka/utils/TestUtils.scala 
426b1a7bea1d83a64081f2c6b672c88c928713b7 

Diff: https://reviews.apache.org/r/17460/diff/


Testing
---

Several integration tests added to test -

1. Topic deletion when all replica brokers are alive
2. Halt and resume topic deletion after

[jira] [Commented] (KAFKA-330) Add delete topic support

2014-02-05 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13892325#comment-13892325
 ] 

Neha Narkhede commented on KAFKA-330:
-

Updated reviewboard https://reviews.apache.org/r/17460/
 against branch trunk

> Add delete topic support 
> -
>
> Key: KAFKA-330
> URL: https://issues.apache.org/jira/browse/KAFKA-330
> Project: Kafka
>  Issue Type: Bug
>  Components: controller, log, replication
>Affects Versions: 0.8.0, 0.8.1
>Reporter: Neha Narkhede
>Assignee: Neha Narkhede
>  Labels: features, project
> Fix For: 0.8.1
>
> Attachments: KAFKA-330.patch, KAFKA-330_2014-01-28_15:19:20.patch, 
> KAFKA-330_2014-01-28_22:01:16.patch, KAFKA-330_2014-01-31_14:19:14.patch, 
> KAFKA-330_2014-01-31_17:45:25.patch, KAFKA-330_2014-02-01_11:30:32.patch, 
> KAFKA-330_2014-02-01_14:58:31.patch, KAFKA-330_2014-02-05_09:31:30.patch, 
> kafka-330-v1.patch, kafka-330-v2.patch
>
>
> One proposal of this API is here - 
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+replication+detailed+design+V2#KafkareplicationdetaileddesignV2-Deletetopic





Re: Review Request 17460: Patch for KAFKA-330

2014-02-05 Thread Neha Narkhede


> On Feb. 5, 2014, 2:50 a.m., Joel Koshy wrote:
> > core/src/main/scala/kafka/controller/ControllerChannelManager.scala, line 
> > 304
> > 
> >
> > You mean just put all in a single StopReplicaRequest? If so, any reason 
> > not to do it now?
> 
> Neha Narkhede wrote:
> Was planning on making the change after I received a more detailed 
> review. Will probably include it in the next patch.

Actually, thinking about this more: the reason I didn't include the batching 
here is that stop replica is designed to work essentially one replica at 
a time, with the callback also associated with every replica. Batching this 
would be a pretty large refactor of the existing code across the board. 
Prefer to do that separately. 


- Neha


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/17460/#review33668
---


On Feb. 1, 2014, 10:58 p.m., Neha Narkhede wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/17460/
> ---
> 
> (Updated Feb. 1, 2014, 10:58 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-330
> https://issues.apache.org/jira/browse/KAFKA-330
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Removed init() API from TopicDeletionManager and added docs to 
> TopicDeletionManager to describe the lifecycle of topic deletion
> 
> 
> Updated docs for the new states. Removed the changes to log4j.properties
> 
> 
> Cleanup unused APIs, consolidated APIs of TopicDeletionManager, added docs, 
> unit tests working
> 
> 
> Moved deletion states into ReplicaStateMachine. All unit tests pass. Cleanup 
> of some APIs pending
> 
> 
> Changed controller to reference APIs in TopicDeletionManager. All unit tests 
> pass
> 
> 
> Introduced a TopicDeletionManager. KafkaController changes pending to use the 
> new TopicDeletionManager
> 
> 
> Addressed Guozhang's review comments
> 
> 
> Fixed docs in a few places
> 
> 
> Fixed the resume logic for partition reassignment to also include topics that 
> are queued up for deletion, since topic deletion is halted until partition 
> reassignment can finish anyway. We need to let partition reassignment finish 
> (since it started before topic deletion) so that topic deletion can resume
> 
> 
> Organized imports
> 
> 
> Moved offline replica handling to controller failover
> 
> 
> Reading replica assignment from zookeeper instead of local cache
> 
> 
> Deleting unused APIs
> 
> 
> Reverting the change to the stop replica request protocol. Instead hacking 
> around with callbacks
> 
> 
> All functionality and all unit tests working
> 
> 
> Rebased with trunk after controller cleanup patch
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/admin/AdminUtils.scala 
> a167756f0fd358574c8ccb42c5c96aaf13def4f5 
>   core/src/main/scala/kafka/admin/TopicCommand.scala 
> 842c11047cca0531fbc572fdb25523244ba2b626 
>   core/src/main/scala/kafka/api/ControlledShutdownResponse.scala 
> a80aa4924cfe9a4670591d03258dd82c428bc3af 
>   core/src/main/scala/kafka/api/LeaderAndIsrRequest.scala 
> a984878fbd8147b21211829a49de511fd1335421 
>   core/src/main/scala/kafka/api/StopReplicaRequest.scala 
> 820f0f57b00849a588a840358d07f3a4a31772d4 
>   core/src/main/scala/kafka/api/StopReplicaResponse.scala 
> d7e36308263aec2298e8adff8f22e18212e33fca 
>   core/src/main/scala/kafka/api/UpdateMetadataRequest.scala 
> 54dd7bd4e195cc2ff4637ac93e2f9b681e316024 
>   core/src/main/scala/kafka/controller/ControllerChannelManager.scala 
> ea8485b479155b479c575ebc89a4f73086c872cb 
>   core/src/main/scala/kafka/controller/DeleteTopicsThread.scala PRE-CREATION 
>   core/src/main/scala/kafka/controller/KafkaController.scala 
> a0267ae2670e8d5f365e49ec0fa5db1f62b815bf 
>   core/src/main/scala/kafka/controller/PartitionLeaderSelector.scala 
> fd9200f3bf941aab54df798bb5899eeb552ea3a3 
>   core/src/main/scala/kafka/controller/PartitionStateMachine.scala 
> ac4262a403fc73edaecbddf55858703c640b11c0 
>   core/src/main/scala/kafka/controller/ReplicaStateMachine.scala 
> 483559aa64726c51320d18b64a1b48f8fb2905a0 
>   core/src/main/scala/kafka/controller/TopicDeletionManager.scala 
> PRE-CREATION 
>   core/src/main/scala/kafka/network/BlockingChannel.scala 
> d22dabdf4fc2346c5487b9fd94cadfbcab70040d 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> bd7940b80ca1f1fa4a671c49cf6be1aeec2bbd7e 
>   core/src/main/scala/kafka/server/KafkaHealthcheck.scala 
> 9dca55c9254948f1196ba17e1d3ebacdcd66be0c 
>   core/src/main/scala/kafka/server/OffsetCheckpoint.scala 
> b5719f89f79b9f2df4b6cb0f1c869b6eae9f8a7b 
>   core/src/main/scala/kafka/server/ReplicaManager.

[jira] [Updated] (KAFKA-1171) Gradle build for Kafka

2014-02-05 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-1171:
---

Attachment: kafka-1171_v13.patch

Attached patch v13. This fixes all known issues.

1. Added hadoop-producer and hadoop-consumer projects.
2. Fixed maven publishing to use the project name. kafka.jar is still published 
under kafka, the same as we have in maven now. Other projects like hadoop-consumer 
will now be called kafka-hadoop-consumer to make the naming clear. Since those jars 
are not widely used, changing the name is probably ok.
3. Renamed the *_all tasks a bit and the new tasks are summarized in README.
4. For copyDependantLibs, I kept the name. The old assemblyDependency target in 
sbt builds a fat jar. Here, the jars are copied individually and not assembled 
together, which is clearer.
5. ./gradlew is probably used to distinguish it from the locally installed gradle 
executable.

Future stuff:
1. I think we should keep the core jar name kafka until the old 
producer/consumer code is phased out, at which point, we can rename it to 
kafka-server. Before that, since it has the old client, we should leave it in 
the old coordinate so that an application can dedup the jars correctly if 
multiple versions of kafka jar are dragged in through transitive dependencies.

> Gradle build for Kafka
> --
>
> Key: KAFKA-1171
> URL: https://issues.apache.org/jira/browse/KAFKA-1171
> Project: Kafka
>  Issue Type: Improvement
>  Components: packaging
>Affects Versions: 0.8.1, 0.9.0
>Reporter: David Arthur
>Assignee: David Arthur
> Attachments: 0001-Adding-basic-Gradle-build.patch, 
> 0001-Adding-basic-Gradle-build.patch, 0001-Adding-basic-Gradle-build.patch, 
> 0001-Adding-basic-Gradle-build.patch, 0001-Adding-basic-Gradle-build.patch, 
> 0001-Adding-basic-Gradle-build.patch, 0001-Adding-basic-Gradle-build.patch, 
> kafka-1171_v10.patch, kafka-1171_v11.patch, kafka-1171_v12.patch, 
> kafka-1171_v13.patch, kafka-1171_v6.patch, kafka-1171_v7.patch, 
> kafka-1171_v8.patch, kafka-1171_v9.patch
>
>
> We have previously discussed moving away from SBT to an 
> easier-to-comprehend-and-debug build system such as Ant or Gradle. I put up a 
> patch for an Ant+Ivy build a while ago[1], and it sounded like people wanted 
> to check out Gradle as well.
> 1. https://issues.apache.org/jira/browse/KAFKA-855





[jira] [Updated] (KAFKA-330) Add delete topic support

2014-02-05 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-330:


Attachment: KAFKA-330_2014-02-05_09:31:30.patch

> Add delete topic support 
> -
>
> Key: KAFKA-330
> URL: https://issues.apache.org/jira/browse/KAFKA-330
> Project: Kafka
>  Issue Type: Bug
>  Components: controller, log, replication
>Affects Versions: 0.8.0, 0.8.1
>Reporter: Neha Narkhede
>Assignee: Neha Narkhede
>  Labels: features, project
> Fix For: 0.8.1
>
> Attachments: KAFKA-330.patch, KAFKA-330_2014-01-28_15:19:20.patch, 
> KAFKA-330_2014-01-28_22:01:16.patch, KAFKA-330_2014-01-31_14:19:14.patch, 
> KAFKA-330_2014-01-31_17:45:25.patch, KAFKA-330_2014-02-01_11:30:32.patch, 
> KAFKA-330_2014-02-01_14:58:31.patch, KAFKA-330_2014-02-05_09:31:30.patch, 
> kafka-330-v1.patch, kafka-330-v2.patch
>
>
> One proposal of this API is here - 
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+replication+detailed+design+V2#KafkareplicationdetaileddesignV2-Deletetopic





[jira] [Commented] (KAFKA-1171) Gradle build for Kafka

2014-02-05 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13892351#comment-13892351
 ] 

Guozhang Wang commented on KAFKA-1171:
--

Wondering how to run a single unit test:

./gradlew -Dtest.single=RequestResponseSerializationTest core:test

would execute all the tests in core.

./gradlew -Dtest.single=RequestResponseSerializationTest 
core:test:[SpecificTestName]

does not seem to be the correct syntax.

> Gradle build for Kafka
> --
>
> Key: KAFKA-1171
> URL: https://issues.apache.org/jira/browse/KAFKA-1171
> Project: Kafka
>  Issue Type: Improvement
>  Components: packaging
>Affects Versions: 0.8.1, 0.9.0
>Reporter: David Arthur
>Assignee: David Arthur
> Attachments: 0001-Adding-basic-Gradle-build.patch, 
> 0001-Adding-basic-Gradle-build.patch, 0001-Adding-basic-Gradle-build.patch, 
> 0001-Adding-basic-Gradle-build.patch, 0001-Adding-basic-Gradle-build.patch, 
> 0001-Adding-basic-Gradle-build.patch, 0001-Adding-basic-Gradle-build.patch, 
> kafka-1171_v10.patch, kafka-1171_v11.patch, kafka-1171_v12.patch, 
> kafka-1171_v13.patch, kafka-1171_v6.patch, kafka-1171_v7.patch, 
> kafka-1171_v8.patch, kafka-1171_v9.patch
>
>
> We have previously discussed moving away from SBT to an 
> easier-to-comprehend-and-debug build system such as Ant or Gradle. I put up a 
> patch for an Ant+Ivy build a while ago[1], and it sounded like people wanted 
> to check out Gradle as well.
> 1. https://issues.apache.org/jira/browse/KAFKA-855





[jira] [Commented] (KAFKA-1171) Gradle build for Kafka

2014-02-05 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13892366#comment-13892366
 ] 

Jun Rao commented on KAFKA-1171:


Guozhang,

I tried 
./gradlew -Dtest.single=RequestResponseSerializationTest core:test

and it only ran 1 test.

> Gradle build for Kafka
> --
>
> Key: KAFKA-1171
> URL: https://issues.apache.org/jira/browse/KAFKA-1171
> Project: Kafka
>  Issue Type: Improvement
>  Components: packaging
>Affects Versions: 0.8.1, 0.9.0
>Reporter: David Arthur
>Assignee: David Arthur
> Attachments: 0001-Adding-basic-Gradle-build.patch, 
> 0001-Adding-basic-Gradle-build.patch, 0001-Adding-basic-Gradle-build.patch, 
> 0001-Adding-basic-Gradle-build.patch, 0001-Adding-basic-Gradle-build.patch, 
> 0001-Adding-basic-Gradle-build.patch, 0001-Adding-basic-Gradle-build.patch, 
> kafka-1171_v10.patch, kafka-1171_v11.patch, kafka-1171_v12.patch, 
> kafka-1171_v13.patch, kafka-1171_v14.patch, kafka-1171_v6.patch, 
> kafka-1171_v7.patch, kafka-1171_v8.patch, kafka-1171_v9.patch
>
>
> We have previously discussed moving away from SBT to an 
> easier-to-comprehend-and-debug build system such as Ant or Gradle. I put up a 
> patch for an Ant+Ivy build a while ago[1], and it sounded like people wanted 
> to check out Gradle as well.
> 1. https://issues.apache.org/jira/browse/KAFKA-855





[jira] [Updated] (KAFKA-1171) Gradle build for Kafka

2014-02-05 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-1171:
---

Attachment: kafka-1171_v14.patch

Attached patch v14 to fix a typo in the README.

> Gradle build for Kafka
> --
>
> Key: KAFKA-1171
> URL: https://issues.apache.org/jira/browse/KAFKA-1171
> Project: Kafka
>  Issue Type: Improvement
>  Components: packaging
>Affects Versions: 0.8.1, 0.9.0
>Reporter: David Arthur
>Assignee: David Arthur
> Attachments: 0001-Adding-basic-Gradle-build.patch, 
> 0001-Adding-basic-Gradle-build.patch, 0001-Adding-basic-Gradle-build.patch, 
> 0001-Adding-basic-Gradle-build.patch, 0001-Adding-basic-Gradle-build.patch, 
> 0001-Adding-basic-Gradle-build.patch, 0001-Adding-basic-Gradle-build.patch, 
> kafka-1171_v10.patch, kafka-1171_v11.patch, kafka-1171_v12.patch, 
> kafka-1171_v13.patch, kafka-1171_v14.patch, kafka-1171_v6.patch, 
> kafka-1171_v7.patch, kafka-1171_v8.patch, kafka-1171_v9.patch
>
>
> We have previously discussed moving away from SBT to an 
> easier-to-comprehend-and-debug build system such as Ant or Gradle. I put up a 
> patch for an Ant+Ivy build a while ago[1], and it sounded like people wanted 
> to check out Gradle as well.
> 1. https://issues.apache.org/jira/browse/KAFKA-855





[jira] [Commented] (KAFKA-1171) Gradle build for Kafka

2014-02-05 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13892463#comment-13892463
 ] 

Guozhang Wang commented on KAFKA-1171:
--

When I run ./gradlew -Dtest.single=RequestResponseSerializationTest core:test, 
I get:

-
The TaskContainer.add() method has been deprecated and is scheduled to be 
removed in Gradle 2.0. Please use the create() method instead.
Building project 'core' with Scala version 2.8.0
Building project 'perf' with Scala version 2.8.0
:core:compileJava UP-TO-DATE
:core:compileScala UP-TO-DATE
:core:processResources UP-TO-DATE
:core:classes UP-TO-DATE
:core:compileTestJava UP-TO-DATE
:core:compileTestScala UP-TO-DATE
:core:processTestResources UP-TO-DATE
:core:testClasses UP-TO-DATE
:core:test

BUILD SUCCESSFUL

Total time: 1 mins 19.067 secs

--

This is similar to what I see when I run ./gradlew build, except with more tasks 
UP-TO-DATE. Is there a way to show which tests were actually executed?

> Gradle build for Kafka
> --
>
> Key: KAFKA-1171
> URL: https://issues.apache.org/jira/browse/KAFKA-1171
> Project: Kafka
>  Issue Type: Improvement
>  Components: packaging
>Affects Versions: 0.8.1, 0.9.0
>Reporter: David Arthur
>Assignee: David Arthur
> Attachments: 0001-Adding-basic-Gradle-build.patch, 
> 0001-Adding-basic-Gradle-build.patch, 0001-Adding-basic-Gradle-build.patch, 
> 0001-Adding-basic-Gradle-build.patch, 0001-Adding-basic-Gradle-build.patch, 
> 0001-Adding-basic-Gradle-build.patch, 0001-Adding-basic-Gradle-build.patch, 
> kafka-1171_v10.patch, kafka-1171_v11.patch, kafka-1171_v12.patch, 
> kafka-1171_v13.patch, kafka-1171_v14.patch, kafka-1171_v6.patch, 
> kafka-1171_v7.patch, kafka-1171_v8.patch, kafka-1171_v9.patch
>
>
> We have previously discussed moving away from SBT to an 
> easier-to-comprehend-and-debug build system such as Ant or Gradle. I put up a 
> patch for an Ant+Ivy build a while ago[1], and it sounded like people wanted 
> to check out Gradle as well.
> 1. https://issues.apache.org/jira/browse/KAFKA-855





Re: Review Request 17460: Patch for KAFKA-330

2014-02-05 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/17460/#review33711
---


In the follow-up patch that serializes all the admin tasks in the background 
thread, I would suggest switching away from using callbacks to trigger state 
changes during the process, and instead depending on a ZK path change, as we 
did for partition re-assignment. Since the controller-broker communication is 
already async, I think it is OK not to retry the stopReplicaRequest, and to 
instead let the brokers detect that replicas they currently hold have already 
been deleted via MetadataRequest, which will become the source of truth 
anyway.


core/src/main/scala/kafka/api/ControlledShutdownResponse.scala


Ditto as below



core/src/main/scala/kafka/api/LeaderAndIsrRequest.scala


This import may be removed: this is the only change in this file.



core/src/main/scala/kafka/api/StopReplicaResponse.scala


Could you change to (topicAndPartition, errorCode) <- responseMap ?
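The suggested destructuring is the standard Scala for-comprehension over a Map; a small illustration with simplified stand-in types for the real (TopicAndPartition, errorCode) entries:

```scala
// Sketch of the suggested for-comprehension destructuring over the
// response map; the key/value types are illustrative stand-ins.
object ResponseFormat {
  def format(responseMap: Map[(String, Int), Short]): String =
    (for ((topicAndPartition, errorCode) <- responseMap)
      yield s"$topicAndPartition -> $errorCode").mkString("; ")
}
```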



core/src/main/scala/kafka/api/UpdateMetadataRequest.scala


Ditto as above.



core/src/main/scala/kafka/controller/ControllerChannelManager.scala


Instead of checking the replicaId == -1 case here, I feel it is better to 
handle it in the ReplicaStateMachine.handleStateChange function, to indicate to 
developers that it is possible for the leader to become -1.



core/src/main/scala/kafka/controller/ControllerChannelManager.scala


Ditto as above.



core/src/main/scala/kafka/controller/ControllerChannelManager.scala


When the broker is down, the RequestSendThread will just keep trying to 
resend, and the callback function will not be executed until the broker is back 
and a receive is returned from channel.receive(). Is that correct? If yes, 
will the process be blocked for the entire time the broker is down?



core/src/main/scala/kafka/controller/TopicDeletionManager.scala


I remember the coding convention for this kind of function is to omit the ()?



core/src/main/scala/kafka/controller/TopicDeletionManager.scala


It is an exception case if !topicsToBeDeleted.contains(topic).



core/src/main/scala/kafka/controller/TopicDeletionManager.scala


Ditto above



core/src/main/scala/kafka/controller/TopicDeletionManager.scala


Use inLock here?



core/src/test/scala/unit/kafka/admin/DeleteTopicTest.scala


One general comment: since each test needs to set up ZK and a broker beforehand 
and tear them down afterward, which can dominate the running time of these 
tests, maybe we can merge some of the test cases into one?


- Guozhang Wang


On Feb. 5, 2014, 5:31 p.m., Neha Narkhede wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/17460/
> ---
> 
> (Updated Feb. 5, 2014, 5:31 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-330
> https://issues.apache.org/jira/browse/KAFKA-330
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Joel's review suggestions - Changed the controllerLock instances to inLock 
> instead of synchronized, fixed some logging
> 
> 
> Removed init() API from TopicDeletionManager and added docs to 
> TopicDeletionManager to describe the lifecycle of topic deletion
> 
> 
> Updated docs for the new states. Removed the changes to log4j.properties
> 
> 
> Cleanup unused APIs, consolidated APIs of TopicDeletionManager, added docs, 
> unit tests working
> 
> 
> Moved deletion states into ReplicaStateMachine. All unit tests pass. Cleanup 
> of some APIs pending
> 
> 
> Changed controller to reference APIs in TopicDeletionManager. All unit tests 
> pass
> 
> 
> Introduced a TopicDeletionManager. KafkaController changes pending to use the 
> new TopicDeletionManager
> 
> 
> Addressed Guozhang's review comments
> 
> 
> Fixed docs in a few places
> 
> 
> Fixed the resume logic for partition reassignment to also include topics that 
> are queued up for deletion, since topic deletion is halted until partition 
> reassignment can finish anyway. We need to let partition reassignment finish 
> (since it started before topic deletion) so that topic deletion can resume
> 
> 
> Or

Re: Review Request 17460: Patch for KAFKA-330

2014-02-05 Thread Jun Rao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/17460/#review33699
---


Some high level comments.

1. While most of the replica states are now managed in ReplicaStateMachine, 
there are a few still managed in TopicDeletionManager through 
haltedTopicsForDeletion and topicDeletionInProgress. It would probably be 
clearer if those were managed in ReplicaStateMachine too. 
topicDeletionInProgress seems redundant, since it is equivalent to having at 
least one replica in the ReplicaDeletionStarted state; we can just add a helper 
function in ReplicaStateMachine. We may need to add a new replica state in 
ReplicaStateManager to represent haltedTopicsForDeletion, but perhaps we can 
just reuse ReplicaDeletionFailed (and give it a more general name).

2. The actual deletion logic is split between TopicDeletionManager and 
DeleteTopicsThread, which makes it a bit hard to read. I was thinking that 
TopicDeletionManager would only have methods for synchronization with other 
threads (through the condition) and that all the real work would be included 
in DeleteTopicsThread. Compared with partition reassignment, the logic in 
topic deletion is a bit harder to read. Part of the reason is that in 
partition reassignment, all the logic is written linearly in one method, 
while in topic deletion the logic is not linear since it's driven by various 
callbacks. Perhaps just putting all the logic in one place, with the pieces 
close to each other, would help. Also, a bunch of helper methods in 
TopicDeletionManager like the following should really be in 
ReplicaStateMachine:
isAtLeastOneReplicaInDeletionStartedState()
replicasInState()
allReplicasForTopicDeleted()
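The idea of moving these predicates next to the state they inspect can be 
sketched as follows. This is an illustrative Java sketch only (the actual 
controller code is Scala, and all names here are made up); it also shows why 
a separate topicDeletionInProgress set becomes redundant once such a helper 
exists:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a state machine that owns the per-replica state map,
// so predicates like replicasInState() live next to the state itself rather
// than in TopicDeletionManager.
class ReplicaStateMachineSketch {
    enum ReplicaState {
        ONLINE_REPLICA, REPLICA_DELETION_STARTED,
        REPLICA_DELETION_SUCCESSFUL, REPLICA_DELETION_FAILED
    }

    // Keyed by a "topic-partition" string for simplicity.
    private final Map<String, ReplicaState> replicaStates = new HashMap<>();

    void putState(String replica, ReplicaState state) {
        replicaStates.put(replica, state);
    }

    // True if at least one replica of the topic is in the given state.
    // "Deletion in progress" is then just a query, not a separate set.
    boolean isAnyReplicaInState(String topic, ReplicaState state) {
        return replicaStates.entrySet().stream()
            .anyMatch(e -> e.getKey().startsWith(topic + "-")
                        && e.getValue() == state);
    }

    // True if every replica of the topic is in the given state.
    boolean allReplicasForTopicInState(String topic, ReplicaState state) {
        return replicaStates.entrySet().stream()
            .filter(e -> e.getKey().startsWith(topic + "-"))
            .allMatch(e -> e.getValue() == state);
    }
}
```

With helpers like these, TopicDeletionManager would only ask questions of 
the state machine instead of mirroring its state.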

3. When a topic is in the process of being deleted, we prevent future 
operations like partition reassignment and leader rebalancing on that topic. 
However, if one of those operations has already started, we allow topic 
deletion to start, and it then gets blocked by those operations. Another 
approach: if a topic is to be deleted, don't start the deletion until other 
ongoing operations like partition reassignment finish (once finished, they 
can't be started again since they will see the topic being deleted). This 
way, the logic in DeleteTopicsThread would be somewhat simpler since we 
wouldn't have to check whether it can interfere with other operations.

4. In TopicDeletionManager, when doing wait/notify (and changing internal 
states), we expect the caller to hold the lock. All callers probably do hold 
the lock. However, I am wondering if it's better to acquire the lock anyway 
in TopicDeletionManager to make it more self-contained. The lock is 
re-entrant, so locking it again won't hurt.
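The re-entrancy point can be illustrated with a small Java sketch (names 
here are hypothetical, not the actual controller code): a caller that 
already holds the lock can call into a manager method that locks again, and 
the nested acquisition succeeds.

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch: the manager acquires the shared lock defensively,
// even though callers typically already hold it. ReentrantLock permits
// nested acquisition by the same thread, so this is safe.
class DeletionLockSketch {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition deletionCondition = lock.newCondition();

    // Caller-side path: takes the lock, then calls a method that locks again.
    int holdCountInsideNestedCall() {
        lock.lock();
        try {
            return signalDeletion();
        } finally {
            lock.unlock();
        }
    }

    // Manager-side path: acquires the same lock to be self-contained.
    private int signalDeletion() {
        lock.lock();
        try {
            deletionCondition.signalAll(); // requires holding the lock
            return lock.getHoldCount();    // 2 while nested: re-entrancy at work
        } finally {
            lock.unlock();
        }
    }
}
```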

5. TopicDeletionManager: It seems that replicas in ReplicaDeletionStarted state 
remain in that state until the topic is successfully deleted. So, it seems that 
when calling startReplicaDeletion(), we can pass in replicas already in 
ReplicaDeletionSuccessful state. However, transitioning from 
ReplicaDeletionSuccessful to ReplicaDeletionStarted is not allowed.
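The kind of transition check being discussed can be sketched with a validity 
table. The table below is purely illustrative (state names are shortened and 
the entries are assumptions, not the actual Kafka state machine); it shows 
why startReplicaDeletion() would need to filter out replicas already in the 
successful-deletion state:

```java
import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

// Hypothetical transition table: deletion of an already-deleted replica
// cannot be restarted, while a failed deletion can be retried.
class TransitionTableSketch {
    enum State { ONLINE, DELETION_STARTED, DELETION_SUCCESSFUL, DELETION_FAILED }

    private static final Map<State, Set<State>> VALID = new EnumMap<>(State.class);
    static {
        VALID.put(State.ONLINE, EnumSet.of(State.DELETION_STARTED));
        VALID.put(State.DELETION_STARTED,
                  EnumSet.of(State.DELETION_SUCCESSFUL, State.DELETION_FAILED));
        VALID.put(State.DELETION_FAILED, EnumSet.of(State.DELETION_STARTED));
        // No transitions out of DELETION_SUCCESSFUL: retrying is disallowed.
        VALID.put(State.DELETION_SUCCESSFUL, EnumSet.noneOf(State.class));
    }

    static boolean isValidTransition(State from, State to) {
        return VALID.getOrDefault(from, EnumSet.noneOf(State.class)).contains(to);
    }
}
```

Under such a table, passing an already-successful replica into the "start 
deletion" path would trip an invalid-transition check.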




core/src/main/scala/kafka/controller/DeleteTopicsThread.scala


If the deletion of a replica is started and another failed broker is 
started immediately afterward, will we be missing the only chance of starting 
the deletion of the replica on the newly started broker (assuming there is a 
replica there not yet deleted)?



core/src/main/scala/kafka/controller/DeleteTopicsThread.scala


Can just use topicsToBeDeleted. Could we just merge this block and the 
previous block in the same foreach?



core/src/main/scala/kafka/controller/KafkaController.scala


Why do we need to read from ZK, instead of from the cache?



core/src/main/scala/kafka/controller/KafkaController.scala


For replicas that are being deleted, should we move them to OnlineReplica 
state?



core/src/main/scala/kafka/controller/KafkaController.scala


Should we disallow adding partitions when a topic is being deleted?



core/src/main/scala/kafka/controller/ReplicaStateMachine.scala


Could we add the new replica states in the comment?



core/src/main/scala/kafka/controller/ReplicaStateMachine.scala


Some of the state transitions are missing, e.g., ReplicaDeletionFailed -> 
ReplicaDeletionStarted.



core/src/main/scala/kafka/controller/TopicDeletionManager.scala


This method should be private and it would be better if it's placed close 
to onPartitionDeletion().



core/src/main/scala/kafka/con

Re: Config for new clients (and server)

2014-02-05 Thread Joel Koshy
Overall, +1 on sticking with key-values for configs.


> Con: The IDE gives nice auto-completion for pojos.
> 
> Con: There are some advantages to javadoc as a documentation mechanism for
> java people.

Optionally, both the above cons can be addressed (to some degree) by
wrapper config POJOs that read in the config. i.e., the client will
provide a KV config, but then we (internally) would load that into a
specific config POJO that will be helpful for auto-completion and
javadocs and convenience for our internal implementation (as opposed
to using getLong/getString, etc. which could cause runtime exceptions
if done incorrectly). The javadoc in the pojo would need a @value link
to the original config key string if it is to show up in the generated
javadoc.
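The wrapper-POJO idea could look roughly like the following. All names here 
are invented for illustration (this is not the actual client API): the 
public surface stays key-value, but the values are parsed once into typed 
fields.

```java
import java.util.Map;

// Illustrative internal config POJO: loads a KV config into typed fields,
// so call sites get auto-completion and type safety instead of scattered
// getLong/getString calls that can fail at arbitrary points at runtime.
class ProducerConfigSketch {
    static final String BATCH_SIZE_CONFIG = "batch.size";
    static final String CLIENT_ID_CONFIG = "client.id";

    final long batchSize;   // typed once at load time...
    final String clientId;  // ...no stringly-typed lookups later

    ProducerConfigSketch(Map<String, String> props) {
        // A malformed value fails here, at construction, rather than at
        // some later getLong() call deep inside the client.
        this.batchSize =
            Long.parseLong(props.getOrDefault(BATCH_SIZE_CONFIG, "16384"));
        this.clientId = props.getOrDefault(CLIENT_ID_CONFIG, "");
    }
}
```

The client would still accept plain key-values; only the internals would see 
the POJO.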


> show you the value of the constant, just the variable name (unless you
> discover how to unhide it). That is fine for the clients, but for the

Figuring out a way to un-hide it would be preferable to having to keep
the website as the single source of documentation (even if it is
generated from the javadoc) and make the javadoc link to it. I tried,
but was unsuccessful so unless someone knows how to do that the above
approach is the next-best alternative.

> server would be very weird especially for non-java people. We could attempt
> to duplicate documentation between the javadoc and the ConfigDef but given
> our struggle to get well-documented config in a single place this seems
> unwise.
> 
> So I recommend we have a single source for documentation of these and that
> that source be the website documentation on configuration that covers
> clients and server and that that be generated off the config defs. The
> javadoc on KafkaProducer will link to this table so it should be quite
> convenient to discover.



Re: Config for new clients (and server)

2014-02-05 Thread Guozhang Wang
I like the helper functions in all except parseType: is it better to be
strict about types, i.e. not allowing the string "true" if the type is really
Boolean?
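The strict-versus-lenient distinction being asked about can be sketched like 
this (parseType here is a made-up stand-in, not the real ConfigDef method):

```java
// Lenient parsing coerces the string "true" into a Boolean; strict parsing
// accepts only an actual Boolean and rejects everything else.
class ParseTypeSketch {
    static Boolean parseBooleanLenient(Object value) {
        if (value instanceof Boolean) return (Boolean) value;
        if (value instanceof String) return Boolean.parseBoolean((String) value);
        throw new IllegalArgumentException("Cannot coerce to Boolean: " + value);
    }

    static Boolean parseBooleanStrict(Object value) {
        if (value instanceof Boolean) return (Boolean) value;
        throw new IllegalArgumentException("Expected a Boolean, got: " + value);
    }
}
```

Strictness trades convenience for catching mistyped configs early; the 
question is which default the config framework should pick.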


On Wed, Feb 5, 2014 at 5:06 PM, Joel Koshy  wrote:

> Overall, +1 on sticking with key-values for configs.
>
>
> > Con: The IDE gives nice auto-completion for pojos.
> >
> > Con: There are some advantages to javadoc as a documentation mechanism
> for
> > java people.
>
> Optionally, both the above cons can be addressed (to some degree) by
> wrapper config POJOs that read in the config. i.e., the client will
> provide a KV config, but then we (internally) would load that into a
> specific config POJO that will be helpful for auto-completion and
> javadocs and convenience for our internal implementation (as opposed
> to using getLong/getString, etc. which could cause runtime exceptions
> if done incorrectly). The javadoc in the pojo would need a @value link
> to the original config key string if it is to show up in the generated
> javadoc.
>
>
> > show you the value of the constant, just the variable name (unless you
> > discover how to unhide it). That is fine for the clients, but for the
>
> Figuring out a way to un-hide it would be preferable to having to keep
> the website as the single source of documentation (even if it is
> generated from the javadoc) and make the javadoc link to it. I tried,
> but was unsuccessful so unless someone knows how to do that the above
> approach is the next-best alternative.
>
> > server would be very weird especially for non-java people. We could
> attempt
> > to duplicate documentation between the javadoc and the ConfigDef but
> given
> > our struggle to get well-documented config in a single place this seems
> > unwise.
> >
> > So I recommend we have a single source for documentation of these and
> that
> > that source be the website documentation on configuration that covers
> > clients and server and that that be generated off the config defs. The
> > javadoc on KafkaProducer will link to this table so it should be quite
> > convenient to discover.
>
>


-- 
-- Guozhang


[jira] [Commented] (KAFKA-1171) Gradle build for Kafka

2014-02-05 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13892908#comment-13892908
 ] 

Jakob Homan commented on KAFKA-1171:


There's the test output HTML, but you can also have Gradle be more verbose 
about which tests it's running: 
http://forums.gradle.org/gradle/topics/whats_new_in_gradle_1_1_test_logging


> Gradle build for Kafka
> --
>
> Key: KAFKA-1171
> URL: https://issues.apache.org/jira/browse/KAFKA-1171
> Project: Kafka
>  Issue Type: Improvement
>  Components: packaging
>Affects Versions: 0.8.1, 0.9.0
>Reporter: David Arthur
>Assignee: David Arthur
> Attachments: 0001-Adding-basic-Gradle-build.patch, 
> 0001-Adding-basic-Gradle-build.patch, 0001-Adding-basic-Gradle-build.patch, 
> 0001-Adding-basic-Gradle-build.patch, 0001-Adding-basic-Gradle-build.patch, 
> 0001-Adding-basic-Gradle-build.patch, 0001-Adding-basic-Gradle-build.patch, 
> kafka-1171_v10.patch, kafka-1171_v11.patch, kafka-1171_v12.patch, 
> kafka-1171_v13.patch, kafka-1171_v14.patch, kafka-1171_v6.patch, 
> kafka-1171_v7.patch, kafka-1171_v8.patch, kafka-1171_v9.patch
>
>
> We have previously discussed moving away from SBT to an 
> easier-to-comprehend-and-debug build system such as Ant or Gradle. I put up a 
> patch for an Ant+Ivy build a while ago[1], and it sounded like people wanted 
> to check out Gradle as well.
> 1. https://issues.apache.org/jira/browse/KAFKA-855



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)