Re: Kafka security

2013-09-17 Thread Neha Narkhede
At this point, the wiki is in a preliminary stage. Feel free to comment on
the wiki with any ideas that you may have on Kafka Security. If it helps,
we can take the discussion to a JIRA.

Thanks,
Neha


On Mon, Sep 16, 2013 at 1:51 AM, Joe Brown  wrote:

> https://cwiki.apache.org/confluence/display/KAFKA/Security we'd like to
> help with this - who is best to contact?
>
> Joe


Re: Random Partitioning Issue

2013-09-17 Thread Joel Koshy
I agree that minimizing the number of producer connections (while
being a good thing) is really only required in very large production
deployments, and that the net effect of the existing change is
counter-intuitive to users who expect an immediate even distribution
across _all_ partitions of the topic.

However, I don't think it is a hack because it is almost exactly the
same behavior as 0.7 in one of its modes. The 0.7 producer (which I
think was even more confusing) had three modes:
i) ZK send
ii) Config send(a): static list of broker1:port1,broker2:port2,etc.
iii) Config send(b): static list of a hardwareVIP:VIPport

(i) and (ii) would achieve even distribution. (iii) would effectively
select one broker and distribute to partitions on that broker within
each reconnect interval. (iii) is very similar to what we now do in
0.8. (Although we stick to one partition during each metadata refresh
interval, that could be changed to stick to one broker and distribute
across partitions on that broker).

At the same time, I agree with Joe's suggestion that we should keep
the more intuitive pre-KAFKA-1017 behavior as the default and move the
change in KAFKA-1017 to a more specific partitioner implementation.

Joel
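
Joel's suggestion above is to ship the KAFKA-1017 behavior as a specific
partitioner implementation rather than as the default. Below is a minimal
sketch of what such a partitioner might look like, assuming a 0.8-style
interface that maps a key and a partition count to a partition id; the trait
name and signature are approximations, not the actual Kafka or KAFKA-1017 code.

    import scala.util.Random

    // Approximation of the 0.8 producer partitioner contract (signature
    // assumed, not copied from the Kafka source).
    trait Partitioner[T] {
      def partition(key: T, numPartitions: Int): Int
    }

    // Sticks to one randomly chosen partition for refreshIntervalMs, then
    // picks a new one -- roughly the "random refresh" idea in this thread.
    class RandomRefreshPartitioner[T](refreshIntervalMs: Long = 10 * 60 * 1000)
        extends Partitioner[T] {
      private var lastRefreshMs = 0L
      private var cachedPartition = 0

      def partition(key: T, numPartitions: Int): Int = synchronized {
        val now = System.currentTimeMillis()
        if (now - lastRefreshMs > refreshIntervalMs || cachedPartition >= numPartitions) {
          cachedPartition = Random.nextInt(numPartitions)
          lastRefreshMs = now
        }
        cachedPartition
      }
    }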


On Sun, Sep 15, 2013 at 8:44 AM, Jay Kreps  wrote:
> Let me ask another question which I think is more objective. Let's say 100
> random, smart infrastructure specialists try Kafka. Of these 100, how many
> do you believe will
> 1. Say that this behavior is what they expected to happen?
> 2. Be happy with this behavior?
> I am not being facetious; I am genuinely looking for a numerical estimate. I
> am trying to figure out if nobody thought about this or if my estimate is
> just really different. For what it is worth, my estimate is 0 and 5
> respectively.
>
> This would be fine except that we changed it from the good behavior to the
> bad behavior to fix an issue that probably only we have.
>
> -Jay
>
>
> On Sun, Sep 15, 2013 at 8:37 AM, Jay Kreps  wrote:
>
>> I just took a look at this change. I agree with Joe, not to put too fine a
>> point on it, but this is a confusing hack.
>>
>> Jun, I don't think wanting to minimize the number of TCP connections is
>> going to be a very common need for people with less than 10k producers. I
>> also don't think people are going to get very good load balancing out of
>> this because most people don't have a ton of producers. I think instead we
>> will spend the next year explaining this behavior, which 99% of people will
>> think is a bug (because it is crazy, non-intuitive, and breaks their usage).
>>
>> Why was this done by adding special default behavior in the null key case
>> instead of as a partitioner? The argument that the partitioner interface
>> doesn't have sufficient information to choose a partition is not a good
>> argument for hacking in changes to the default; it is an argument for
>> *improving* the partitioner interface.
>>
>> The whole point of a partitioner interface is to make it possible to plug
>> in non-standard behavior like this, right?
>>
>> -Jay
>>
>>
>> On Sat, Sep 14, 2013 at 8:15 PM, Jun Rao  wrote:
>>
>>> Joe,
>>>
>>> Thanks for bringing this up. I want to clarify this a bit.
>>>
>>> 1. Currently, the producer side logic is that if the partitioning key is
>>> not provided (i.e., it is null), the partitioner won't be called. We did
>>> that because we want to select a random and "available" partition to send
>>> messages to, so that if some partitions are temporarily unavailable
>>> (because of broker failures), messages can still be sent to other
>>> partitions. Doing this in the partitioner is difficult since the
>>> partitioner doesn't know which partitions are currently available (the
>>> DefaultEventHandler does).
>>>
>>> 2. As Joel said, the common use case in production is that there are many
>>> more producers than #partitions in a topic. In this case, sticking to a
>>> partition for a few minutes is not going to cause too much imbalance in the
>>> partitions and has the benefit of reducing the # of socket connections. My
>>> feeling is that this will benefit most production users. In fact, if one
>>> uses a hardware load balancer for producing data in 0.7, it behaves in
>>> exactly the same way (a producer will stick to a broker until the reconnect
>>> interval is reached).
>>>
>>> 3. It is true that if one is testing a topic with more than one partition
>>> (which is not the default value), this behavior can be a bit weird.
>>> However, I think it can be mitigated by running multiple test producer
>>> instances.
>>>
>>> 4. Someone reported in the mailing list that all data shows in only one
>>> partition after a few weeks. This is clearly not the expected behavior. We
>>> can take a closer look to see if this is a real issue.
>>>
>>> Do you think these address your concerns?
>>>
>>> Thanks,
>>>
>>> Jun
>>>
>>>
>>>
>>> On Sat, Sep 14, 2013 at 11:18 AM, Joe Stein  wrote:
>>>
>>> > How about creating a new class called RandomRefreshPartion
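
Jun's point 1 above explains why the random choice lives in the
DefaultEventHandler rather than in a partitioner: only the event handler knows
which partitions currently have a leader. A rough sketch of that selection
logic follows, using a stand-in PartitionMeta type rather than the real
metadata classes.

    import scala.util.Random

    // Stand-in for the real partition metadata class; only the fields
    // needed for this sketch.
    case class PartitionMeta(partitionId: Int, leaderBrokerId: Option[Int])

    // Pick a random partition among those that currently have a leader,
    // falling back to all partitions if none do.
    def randomAvailablePartition(partitions: Seq[PartitionMeta]): Int = {
      require(partitions.nonEmpty, "no partitions for topic")
      val available = partitions.filter(_.leaderBrokerId.isDefined)
      val candidates = if (available.nonEmpty) available else partitions
      candidates(Random.nextInt(candidates.size)).partitionId
    }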

Re: [VOTE] Bylaws!

2013-09-17 Thread Jay Kreps
Tally is 6 for and 0 against, so those are now the Kafka bylaws. :-)

-Jay


On Mon, Sep 16, 2013 at 4:09 PM, Prashanth Menon  wrote:

> +1
>
>
> On Fri, Sep 13, 2013 at 7:36 PM, Jun Rao  wrote:
>
> > +1
> >
> > Jun
> >
> >
> > On Fri, Sep 13, 2013 at 9:53 AM, Jay Kreps  wrote:
> >
> > > It was pointed out to us that Apache projects should formalize their
> > > bylaws--basically how votes are done, who votes on what, and what the
> > > criteria are for voting. We have been acting informally so far and
> > haven't
> > > really taken the time to do this.
> > >
> > > Here is a set of bylaws which I propose we adopt:
> > > https://cwiki.apache.org/confluence/display/KAFKA/Bylaws
> > >
> > > In true open source fashion this is cut-and-pasted and improved from
> the
> > > Hive bylaws.
> > >
> > > I am a binding +1. :-)
> > >
> > > -Jay
> > >
> >
>


Re: Review Request 14041: Patch for KAFKA-1030

2013-09-17 Thread Swapnil Ghike

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14041/#review26179
---

Ship it!


Ship It!

- Swapnil Ghike


On Sept. 17, 2013, 6 p.m., Guozhang Wang wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/14041/
> ---
> 
> (Updated Sept. 17, 2013, 6 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1030
> https://issues.apache.org/jira/browse/KAFKA-1030
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Using the approach of reading directly from ZK.
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
> 81bf0bda3229e94ecb6b6aff3ffc9fde852df61b 
> 
> Diff: https://reviews.apache.org/r/14041/diff/
> 
> 
> Testing
> ---
> 
> unit tests
> 
> 
> Thanks,
> 
> Guozhang Wang
> 
>



Re: Review Request 14041: Patch for KAFKA-1030

2013-09-17 Thread Neha Narkhede

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14041/#review26180
---

Ship it!


- Neha Narkhede


On Sept. 17, 2013, 6 p.m., Guozhang Wang wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/14041/
> ---
> 
> (Updated Sept. 17, 2013, 6 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1030
> https://issues.apache.org/jira/browse/KAFKA-1030
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Using the approach of reading directly from ZK.
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
> 81bf0bda3229e94ecb6b6aff3ffc9fde852df61b 
> 
> Diff: https://reviews.apache.org/r/14041/diff/
> 
> 
> Testing
> ---
> 
> unit tests
> 
> 
> Thanks,
> 
> Guozhang Wang
> 
>



Re: Kafka security

2013-09-17 Thread Jay Kreps
Yeah, it would be great to get help. Right now we are basically just
fleshing out requirements. We don't have a concrete date for when anyone
would start work but we've done some prototyping on the auth front,
basically just to understand Kerberos and SASL.

-Jay


On Tue, Sep 17, 2013 at 9:08 AM, Neha Narkhede wrote:

> At this point, the wiki is in a preliminary stage. Feel free to comment on
> the wiki with any ideas that you may have on Kafka Security. If it helps,
> we can take the discussion to a JIRA.
>
> Thanks,
> Neha
>
>
> On Mon, Sep 16, 2013 at 1:51 AM, Joe Brown  wrote:
>
> > https://cwiki.apache.org/confluence/display/KAFKA/Security we'd like to
> > help with this - who is best to contact?
> >
> > Joe
>


[jira] [Commented] (KAFKA-1056) Evenly Distribute Intervals in OffsetIndex

2013-09-17 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769664#comment-13769664
 ] 

Jay Kreps commented on KAFKA-1056:
--

Agreed. This becomes a problem in the case where producer batch size >> index 
interval. This basically just requires properly calculating a set of offset 
index entries to append. This is a bit tricky but doable, I think.
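
A rough sketch of the calculation described above, assuming a byte-based index
interval; the names are illustrative and this is not the OffsetIndex API.

    // Given the messages of one produce request as (offset, size in bytes),
    // return the offsets that should receive index entries so that entries
    // stay roughly indexIntervalBytes apart even when the batch is much
    // larger than the interval.
    def indexEntriesForBatch(messages: Seq[(Long, Int)],
                             bytesSinceLastEntry: Int,
                             indexIntervalBytes: Int): Seq[Long] = {
      var accumulated = bytesSinceLastEntry
      val entries = scala.collection.mutable.ArrayBuffer.empty[Long]
      for ((offset, size) <- messages) {
        if (accumulated >= indexIntervalBytes) {
          // index entry points at the first message past the interval
          entries += offset
          accumulated = 0
        }
        accumulated += size
      }
      entries.toSeq
    }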

> Evenly Distribute Intervals in OffsetIndex
> --
>
> Key: KAFKA-1056
> URL: https://issues.apache.org/jira/browse/KAFKA-1056
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.8.1
>
>
> Today a new entry will be created in OffsetIndex for each produce request 
> regardless of the number of messages it contains. It is better to evenly 
> distribute the intervals between index entries for index search efficiency.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: Review Request 14041: Patch for KAFKA-1030

2013-09-17 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14041/
---

(Updated Sept. 17, 2013, 6 p.m.)


Review request for kafka.


Summary (updated)
-

Patch for KAFKA-1030


Bugs: KAFKA-1030
https://issues.apache.org/jira/browse/KAFKA-1030


Repository: kafka


Description
---

Using the approach of reading directly from ZK.


Diffs (updated)
-

  core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
81bf0bda3229e94ecb6b6aff3ffc9fde852df61b 

Diff: https://reviews.apache.org/r/14041/diff/


Testing
---

unit tests


Thanks,

Guozhang Wang
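
The description above only says that the consumer now reads partition data
directly from ZK. As a rough illustration of what that lookup involves, here
is a sketch that assumes the 0.8 ZooKeeper layout
/brokers/topics/<topic>/partitions/<partitionId> and the ZkClient library; it
is not code from the patch.

    import org.I0Itec.zkclient.ZkClient
    import scala.collection.JavaConverters._

    // Read the partition ids registered for a topic straight from ZooKeeper,
    // bypassing the (possibly stale) topic metadata cached on the brokers.
    def partitionsForTopic(zkClient: ZkClient, topic: String): Seq[Int] = {
      val path = "/brokers/topics/%s/partitions".format(topic)
      if (!zkClient.exists(path)) Seq.empty
      else zkClient.getChildren(path).asScala.map(_.toInt).sorted
    }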



[jira] [Updated] (KAFKA-1030) Addition of partitions requires bouncing all the consumers of that topic

2013-09-17 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-1030:
-

Attachment: KAFKA-1030-v1.patch

> Addition of partitions requires bouncing all the consumers of that topic
> 
>
> Key: KAFKA-1030
> URL: https://issues.apache.org/jira/browse/KAFKA-1030
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8
>Reporter: Swapnil Ghike
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.8
>
> Attachments: KAFKA-1030-v1.patch
>
>
> Consumer may not notice new partitions because the propagation of the 
> metadata to servers can be delayed. 
> Options:
> 1. As Jun suggested on KAFKA-956, the easiest fix would be to read the new 
> partition data from zookeeper instead of a kafka server.
> 2. Run a fetch metadata loop in consumer, and set auto.offset.reset to 
> smallest once the consumer has started.
> 1 sounds easier to do. If 1 causes long delays in reading all partitions at 
> the start of every rebalance, 2 may be worth considering.
>  
> The same issue affects MirrorMaker when new topics are created, MirrorMaker 
> may not notice all partitions of the new topics until the next rebalance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (KAFKA-1030) Addition of partitions requires bouncing all the consumers of that topic

2013-09-17 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769744#comment-13769744
 ] 

Guozhang Wang commented on KAFKA-1030:
--

Updated reviewboard https://reviews.apache.org/r/14041/


> Addition of partitions requires bouncing all the consumers of that topic
> 
>
> Key: KAFKA-1030
> URL: https://issues.apache.org/jira/browse/KAFKA-1030
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8
>Reporter: Swapnil Ghike
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.8
>
> Attachments: KAFKA-1030-v1.patch
>
>
> Consumer may not notice new partitions because the propagation of the 
> metadata to servers can be delayed. 
> Options:
> 1. As Jun suggested on KAFKA-956, the easiest fix would be to read the new 
> partition data from zookeeper instead of a kafka server.
> 2. Run a fetch metadata loop in consumer, and set auto.offset.reset to 
> smallest once the consumer has started.
> 1 sounds easier to do. If 1 causes long delays in reading all partitions at 
> the start of every rebalance, 2 may be worth considering.
>  
> The same issue affects MirrorMaker when new topics are created, MirrorMaker 
> may not notice all partitions of the new topics until the next rebalance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: Random Partitioning Issue

2013-09-17 Thread Jay Kreps
I would be in favor of that. I agree this is better than 0.7.

-Jay


On Tue, Sep 17, 2013 at 10:19 AM, Joel Koshy  wrote:

> I agree that minimizing the number of producer connections (while
> being a good thing) is really only required in very large production
> deployments, and that the net effect of the existing change is
> counter-intuitive to users who expect an immediate even distribution
> across _all_ partitions of the topic.
>
> However, I don't think it is a hack because it is almost exactly the
> same behavior as 0.7 in one of its modes. The 0.7 producer (which I
> think was even more confusing) had three modes:
> i) ZK send
> ii) Config send(a): static list of broker1:port1,broker2:port2,etc.
> iii) Config send(b): static list of a hardwareVIP:VIPport
>
> (i) and (ii) would achieve even distribution. (iii) would effectively
> select one broker and distribute to partitions on that broker within
> each reconnect interval. (iii) is very similar to what we now do in
> 0.8. (Although we stick to one partition during each metadata refresh
> interval, that could be changed to stick to one broker and distribute
> across partitions on that broker).
>
> At the same time, I agree with Joe's suggestion that we should keep
> the more intuitive pre-KAFKA-1017 behavior as the default and move the
> change in KAFKA-1017 to a more specific partitioner implementation.
>
> Joel
>
>
> On Sun, Sep 15, 2013 at 8:44 AM, Jay Kreps  wrote:
> > Let me ask another question which I think is more objective. Let's say 100
> > random, smart infrastructure specialists try Kafka. Of these 100, how many
> > do you believe will
> > 1. Say that this behavior is what they expected to happen?
> > 2. Be happy with this behavior?
> > I am not being facetious; I am genuinely looking for a numerical estimate.
> > I am trying to figure out if nobody thought about this or if my estimate is
> > just really different. For what it is worth, my estimate is 0 and 5
> > respectively.
> >
> > This would be fine except that we changed it from the good behavior to the
> > bad behavior to fix an issue that probably only we have.
> >
> > -Jay
> >
> >
> > On Sun, Sep 15, 2013 at 8:37 AM, Jay Kreps  wrote:
> >
> >> I just took a look at this change. I agree with Joe, not to put too fine a
> >> point on it, but this is a confusing hack.
> >>
> >> Jun, I don't think wanting to minimize the number of TCP connections is
> >> going to be a very common need for people with less than 10k producers. I
> >> also don't think people are going to get very good load balancing out of
> >> this because most people don't have a ton of producers. I think instead we
> >> will spend the next year explaining this behavior, which 99% of people will
> >> think is a bug (because it is crazy, non-intuitive, and breaks their usage).
> >>
> >> Why was this done by adding special default behavior in the null key case
> >> instead of as a partitioner? The argument that the partitioner interface
> >> doesn't have sufficient information to choose a partition is not a good
> >> argument for hacking in changes to the default; it is an argument for
> >> *improving* the partitioner interface.
> >>
> >> The whole point of a partitioner interface is to make it possible to plug
> >> in non-standard behavior like this, right?
> >>
> >> -Jay
> >>
> >>
> >> On Sat, Sep 14, 2013 at 8:15 PM, Jun Rao  wrote:
> >>
> >>> Joe,
> >>>
> >>> Thanks for bringing this up. I want to clarify this a bit.
> >>>
> >>> 1. Currently, the producer side logic is that if the partitioning key is
> >>> not provided (i.e., it is null), the partitioner won't be called. We did
> >>> that because we want to select a random and "available" partition to send
> >>> messages to, so that if some partitions are temporarily unavailable
> >>> (because of broker failures), messages can still be sent to other
> >>> partitions. Doing this in the partitioner is difficult since the
> >>> partitioner doesn't know which partitions are currently available (the
> >>> DefaultEventHandler does).
> >>>
> >>> 2. As Joel said, the common use case in production is that there are many
> >>> more producers than #partitions in a topic. In this case, sticking to a
> >>> partition for a few minutes is not going to cause too much imbalance in
> >>> the partitions and has the benefit of reducing the # of socket
> >>> connections. My feeling is that this will benefit most production users.
> >>> In fact, if one uses a hardware load balancer for producing data in 0.7,
> >>> it behaves in exactly the same way (a producer will stick to a broker
> >>> until the reconnect interval is reached).
> >>>
> >>> 3. It is true that if one is testing a topic with more than one partition
> >>> (which is not the default value), this behavior can be a bit weird.
> >>> However, I think it can be mitigated by running multiple test producer
> >>> instances.
> >>>
> >>> 4. Someone reported in the mailing list that all data shows in o

[jira] [Commented] (KAFKA-1030) Addition of partitions requires bouncing all the consumers of that topic

2013-09-17 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769879#comment-13769879
 ] 

Guozhang Wang commented on KAFKA-1030:
--

Here are the performance testing results:

Setup: 1) 5 instances of mirror maker consuming from around 3800 
topic/partitions, 2) 1 instance of console consumer consuming from around 300 
topic/partitions.

1) Bouncing mirror makers:

ZK-located-in-same-DC: 4 minutes 20 seconds with the fix; 3 minutes 50 seconds without the fix
ZK-located-in-other-DC: 8 minutes 2 seconds with the fix; 7 minutes 6 seconds without the fix

2) Bouncing console consumer:

ZK-located-in-same-DC: 15 seconds with the fix; 15 seconds without the fix

---

Given the results, I think it is worth pushing this approach (read-from-ZK) in
0.8, and we can later pursue the other approach Joel proposed on the
reviewboard in trunk.


> Addition of partitions requires bouncing all the consumers of that topic
> 
>
> Key: KAFKA-1030
> URL: https://issues.apache.org/jira/browse/KAFKA-1030
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8
>Reporter: Swapnil Ghike
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.8
>
> Attachments: KAFKA-1030-v1.patch
>
>
> Consumer may not notice new partitions because the propagation of the 
> metadata to servers can be delayed. 
> Options:
> 1. As Jun suggested on KAFKA-956, the easiest fix would be to read the new 
> partition data from zookeeper instead of a kafka server.
> 2. Run a fetch metadata loop in consumer, and set auto.offset.reset to 
> smallest once the consumer has started.
> 1 sounds easier to do. If 1 causes long delays in reading all partitions at 
> the start of every rebalance, 2 may be worth considering.
>  
> The same issue affects MirrorMaker when new topics are created, MirrorMaker 
> may not notice all partitions of the new topics until the next rebalance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: Review Request 14041: Patch for KAFKA-1030

2013-09-17 Thread Neha Narkhede

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14041/#review26183
---

Ship it!


Ship It!

- Neha Narkhede


On Sept. 17, 2013, 6 p.m., Guozhang Wang wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/14041/
> ---
> 
> (Updated Sept. 17, 2013, 6 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1030
> https://issues.apache.org/jira/browse/KAFKA-1030
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Using the approach of reading directly from ZK.
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
> 81bf0bda3229e94ecb6b6aff3ffc9fde852df61b 
> 
> Diff: https://reviews.apache.org/r/14041/diff/
> 
> 
> Testing
> ---
> 
> unit tests
> 
> 
> Thanks,
> 
> Guozhang Wang
> 
>



[jira] [Commented] (KAFKA-1030) Addition of partitions requires bouncing all the consumers of that topic

2013-09-17 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769884#comment-13769884
 ] 

Swapnil Ghike commented on KAFKA-1030:
--

+1 on that, Guozhang. Thanks for running the tests.

> Addition of partitions requires bouncing all the consumers of that topic
> 
>
> Key: KAFKA-1030
> URL: https://issues.apache.org/jira/browse/KAFKA-1030
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8
>Reporter: Swapnil Ghike
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.8
>
> Attachments: KAFKA-1030-v1.patch
>
>
> Consumer may not notice new partitions because the propagation of the 
> metadata to servers can be delayed. 
> Options:
> 1. As Jun suggested on KAFKA-956, the easiest fix would be to read the new 
> partition data from zookeeper instead of a kafka server.
> 2. Run a fetch metadata loop in consumer, and set auto.offset.reset to 
> smallest once the consumer has started.
> 1 sounds easier to do. If 1 causes long delays in reading all partitions at 
> the start of every rebalance, 2 may be worth considering.
>  
> The same issue affects MirrorMaker when new topics are created, MirrorMaker 
> may not notice all partitions of the new topics until the next rebalance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (KAFKA-1046) Added support for Scala 2.10 builds while maintaining compatibility with 2.8.x

2013-09-17 Thread Andrew Otto (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769853#comment-13769853
 ] 

Andrew Otto commented on KAFKA-1046:


Hm, I seem to be getting the same compilation error that is in the attached
Screen Shot. How do I fix it?

> Added support for Scala 2.10 builds while maintaining compatibility with 2.8.x
> --
>
> Key: KAFKA-1046
> URL: https://issues.apache.org/jira/browse/KAFKA-1046
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8
>Reporter: Christopher Freeman
>Assignee: Christopher Freeman
> Fix For: 0.8
>
> Attachments: kafka_2_10_refactor_0.8.patch, 
> kafka_2_10_refactor.patch, Screen Shot 2013-09-09 at 9.34.09 AM.png
>
>
> I refactored the project such that it will compile against Scala 2.10.1.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


RE: [jira] [Updated] (KAFKA-1046) Added support for Scala 2.10 builds while maintaining compatibility with 2.8.x

2013-09-17 Thread Yu, Libo
Hi team,

Is it safe to apply the 0.8 patch to 0.8 beta1?

Regards,

Libo

-Original Message-
From: Joe Stein [mailto:crypt...@gmail.com] 
Sent: Friday, September 13, 2013 4:10 PM
To: dev@kafka.apache.org; us...@kafka.apache.org
Subject: Re: [jira] [Updated] (KAFKA-1046) Added support for Scala 2.10 builds 
while maintaining compatibility with 2.8.x

Thanks Chris for the patches and Neha for reviewing and committing them!!!

It is great that we now have support for Scala 2.10 in Kafka trunk and also the
0.8 branch, without losing any existing support for anything else.

/***
 Joe Stein
 Founder, Principal Consultant
 Big Data Open Source Security LLC
 http://www.stealth.ly
 Twitter: @allthingshadoop 
/


On Fri, Sep 13, 2013 at 3:57 PM, Neha Narkhede (JIRA) wrote:

>
>  [
> https://issues.apache.org/jira/browse/KAFKA-1046?page=com.atlassian.ji
> ra.plugin.system.issuetabpanels:all-tabpanel]
>
> Neha Narkhede updated KAFKA-1046:
> -
>
> Resolution: Fixed
> Status: Resolved  (was: Patch Available)
>
> Thanks for the patches. Checked in your patch to trunk.
>
> > Added support for Scala 2.10 builds while maintaining compatibility 
> > with
> 2.8.x
> >
> --
> 
> >
> > Key: KAFKA-1046
> > URL: https://issues.apache.org/jira/browse/KAFKA-1046
> > Project: Kafka
> >  Issue Type: Improvement
> >Affects Versions: 0.8
> >Reporter: Christopher Freeman
> >Assignee: Christopher Freeman
> > Fix For: 0.8
> >
> > Attachments: kafka_2_10_refactor_0.8.patch,
> kafka_2_10_refactor.patch, Screen Shot 2013-09-09 at 9.34.09 AM.png
> >
> >
> > I refactored the project such that it will compile against Scala 2.10.1.
>
> --
> This message is automatically generated by JIRA.
> If you think it was sent incorrectly, please contact your JIRA 
> administrators For more information on JIRA, see: 
> http://www.atlassian.com/software/jira
>


[jira] [Commented] (KAFKA-1030) Addition of partitions requires bouncing all the consumers of that topic

2013-09-17 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769994#comment-13769994
 ] 

Neha Narkhede commented on KAFKA-1030:
--

Thanks for the updated patch and the performance comparison analysis. I agree 
that the ideal change might prove to be too large for 0.8 and will require a
non-trivial amount of time to stabilize since it is fairly tricky. We can
just do it properly on trunk and live with this minor performance hit for 
consumer rebalancing on 0.8.


> Addition of partitions requires bouncing all the consumers of that topic
> 
>
> Key: KAFKA-1030
> URL: https://issues.apache.org/jira/browse/KAFKA-1030
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8
>Reporter: Swapnil Ghike
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.8
>
> Attachments: KAFKA-1030-v1.patch
>
>
> Consumer may not notice new partitions because the propagation of the 
> metadata to servers can be delayed. 
> Options:
> 1. As Jun suggested on KAFKA-956, the easiest fix would be to read the new 
> partition data from zookeeper instead of a kafka server.
> 2. Run a fetch metadata loop in consumer, and set auto.offset.reset to 
> smallest once the consumer has started.
> 1 sounds easier to do. If 1 causes long delays in reading all partitions at 
> the start of every rebalance, 2 may be worth considering.
>  
> The same issue affects MirrorMaker when new topics are created, MirrorMaker 
> may not notice all partitions of the new topics until the next rebalance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (KAFKA-1030) Addition of partitions requires bouncing all the consumers of that topic

2013-09-17 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede resolved KAFKA-1030.
--

Resolution: Fixed


Checked in the latest patch to 0.8

> Addition of partitions requires bouncing all the consumers of that topic
> 
>
> Key: KAFKA-1030
> URL: https://issues.apache.org/jira/browse/KAFKA-1030
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8
>Reporter: Swapnil Ghike
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.8
>
> Attachments: KAFKA-1030-v1.patch
>
>
> Consumer may not notice new partitions because the propagation of the 
> metadata to servers can be delayed. 
> Options:
> 1. As Jun suggested on KAFKA-956, the easiest fix would be to read the new 
> partition data from zookeeper instead of a kafka server.
> 2. Run a fetch metadata loop in consumer, and set auto.offset.reset to 
> smallest once the consumer has started.
> 1 sounds easier to do. If 1 causes long delays in reading all partitions at 
> the start of every rebalance, 2 may be worth considering.
>  
> The same issue affects MirrorMaker when new topics are created, MirrorMaker 
> may not notice all partitions of the new topics until the next rebalance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Review Request 14184: Draft patch for KAFKA-1049

2013-09-17 Thread joel koshy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14184/
---

Review request for kafka.


Bugs: KAFKA-1049
https://issues.apache.org/jira/browse/KAFKA-1049


Repository: kafka


Description
---

This doesn't work yet; trying out the patch review tool.


Diffs
-

  core/src/main/scala/kafka/producer/Producer.scala 
f5829198ebe7555f1e80bcdd02b688c918050426 
  core/src/main/scala/kafka/serializer/Encoder.scala 
020e73c72a310e874ba07cf0691517a61c1fc35f 
  core/src/main/scala/kafka/utils/Utils.scala 
e0a5a27c72abf3560f68fc6c2dbfc67d90cc5cd9 
  core/src/main/scala/kafka/utils/VerifiableProperties.scala 
d694ba98522a0aa2fc9cac84ebcfc4bd51505300 

Diff: https://reviews.apache.org/r/14184/diff/


Testing
---


Thanks,

joel koshy



[jira] [Created] (KAFKA-1057) Trim whitespaces from user specified configs

2013-09-17 Thread Neha Narkhede (JIRA)
Neha Narkhede created KAFKA-1057:


 Summary: Trim whitespaces from user specified configs
 Key: KAFKA-1057
 URL: https://issues.apache.org/jira/browse/KAFKA-1057
 Project: Kafka
  Issue Type: Bug
  Components: config
Reporter: Neha Narkhede
 Fix For: 0.8.1


Whitespace in configs is a common problem that leads to config errors. It
would be nice if Kafka could trim the whitespace from configs automatically.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
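
A minimal sketch of the kind of trimming being proposed, operating on a plain
java.util.Properties object; this is illustrative only, and where exactly it
would hook into Kafka's config loading is not decided here.

    import java.util.Properties
    import scala.collection.JavaConverters._

    // Return a copy of `props` with surrounding whitespace stripped from
    // every key and value, so "broker.id = 1 " behaves like "broker.id=1".
    def trimmed(props: Properties): Properties = {
      val clean = new Properties()
      for (name <- props.stringPropertyNames.asScala) {
        clean.setProperty(name.trim, props.getProperty(name).trim)
      }
      clean
    }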


Review Request 14188: Draft patch for KAFKA-1049

2013-09-17 Thread joel koshy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14188/
---

Review request for kafka.


Bugs: KAFKA-1049
https://issues.apache.org/jira/browse/KAFKA-1049


Repository: kafka


Description
---

This doesn't work yet; trying out the patch review tool.


Diffs
-

  core/src/main/scala/kafka/producer/Producer.scala 
f5829198ebe7555f1e80bcdd02b688c918050426 
  core/src/main/scala/kafka/serializer/Encoder.scala 
020e73c72a310e874ba07cf0691517a61c1fc35f 
  core/src/main/scala/kafka/utils/Utils.scala 
e0a5a27c72abf3560f68fc6c2dbfc67d90cc5cd9 
  core/src/main/scala/kafka/utils/VerifiableProperties.scala 
d694ba98522a0aa2fc9cac84ebcfc4bd51505300 

Diff: https://reviews.apache.org/r/14188/diff/


Testing
---


Thanks,

joel koshy



[jira] [Updated] (KAFKA-1049) Encoder implementations are required to provide an undocumented constructor.

2013-09-17 Thread Joel Koshy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Koshy updated KAFKA-1049:
--

Attachment: KAFKA-1049.patch

> Encoder implementations are required to provide an undocumented constructor.
> 
>
> Key: KAFKA-1049
> URL: https://issues.apache.org/jira/browse/KAFKA-1049
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Rosenberg
>Priority: Minor
> Attachments: KAFKA-1049.patch
>
>
> So, it seems that if I want to set a custom serializer class on the producer 
> (in 0.8), I have to use a class that includes a special constructor like:
> public class MyKafkaEncoder implements Encoder<MyType> {
>   // This constructor is expected by the kafka producer, used by reflection
>   public MyKafkaEncoder(VerifiableProperties props) {
> // what can I do with this?
>   }
>  @Override
>   public byte[] toBytes(MyType message) {
> return message.toByteArray();
>   }
> }
> It seems odd that this would be a requirement when implementing an interface. 
>  This seems not to have been the case in 0.7.
> What could my encoder class do with the VerifiableProperties?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
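
For context on the constructor requirement described above: the 0.8 producer
instantiates the configured serializer reflectively and looks up a
one-argument constructor taking VerifiableProperties, roughly as sketched
below. This is a simplification for illustration, not the actual Utils code.

    import kafka.serializer.Encoder
    import kafka.utils.VerifiableProperties

    // Roughly what the producer does with serializer.class: reflectively
    // find a constructor taking VerifiableProperties and pass the producer
    // config through. This is why every Encoder implementation must declare
    // that constructor even though the interface does not mention it.
    def createEncoder[T](className: String, props: VerifiableProperties): Encoder[T] = {
      Class.forName(className)
        .getConstructor(classOf[VerifiableProperties])
        .newInstance(props)
        .asInstanceOf[Encoder[T]]
    }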


[jira] [Commented] (KAFKA-1049) Encoder implementations are required to provide an undocumented constructor.

2013-09-17 Thread Joel Koshy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13770182#comment-13770182
 ] 

Joel Koshy commented on KAFKA-1049:
---

Created reviewboard https://reviews.apache.org/r/14188/


> Encoder implementations are required to provide an undocumented constructor.
> 
>
> Key: KAFKA-1049
> URL: https://issues.apache.org/jira/browse/KAFKA-1049
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Rosenberg
>Priority: Minor
> Attachments: KAFKA-1049.patch
>
>
> So, it seems that if I want to set a custom serializer class on the producer 
> (in 0.8), I have to use a class that includes a special constructor like:
> public class MyKafkaEncoder implements Encoder<MyType> {
>   // This constructor is expected by the kafka producer, used by reflection
>   public MyKafkaEncoder(VerifiableProperties props) {
> // what can I do with this?
>   }
>  @Override
>   public byte[] toBytes(MyType message) {
> return message.toByteArray();
>   }
> }
> It seems odd that this would be a requirement when implementing an interface. 
>  This seems not to have been the case in 0.7.
> What could my encoder class do with the VerifiableProperties?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: Review Request 14188: Draft patch for KAFKA-1049

2013-09-17 Thread joel koshy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14188/#review26193
---


Mainly wanted to try out the patch review tool. This draft patch does not work 
due to the issue mentioned below.


core/src/main/scala/kafka/producer/Producer.scala


This doesn't work with the createUtils method below as it will still try to 
find a constructor with an argument of type VerifiableProperties



core/src/main/scala/kafka/utils/Utils.scala


The following getConstructor call fails in this diff - e.g., try running 
AsyncProducerTest.


- joel koshy


On Sept. 17, 2013, 11:52 p.m., joel koshy wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/14188/
> ---
> 
> (Updated Sept. 17, 2013, 11:52 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1049
> https://issues.apache.org/jira/browse/KAFKA-1049
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> This doesn't work yet; trying out the patch review tool.
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/producer/Producer.scala 
> f5829198ebe7555f1e80bcdd02b688c918050426 
>   core/src/main/scala/kafka/serializer/Encoder.scala 
> 020e73c72a310e874ba07cf0691517a61c1fc35f 
>   core/src/main/scala/kafka/utils/Utils.scala 
> e0a5a27c72abf3560f68fc6c2dbfc67d90cc5cd9 
>   core/src/main/scala/kafka/utils/VerifiableProperties.scala 
> d694ba98522a0aa2fc9cac84ebcfc4bd51505300 
> 
> Diff: https://reviews.apache.org/r/14188/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> joel koshy
> 
>



[jira] [Commented] (KAFKA-1053) Kafka patch review tool

2013-09-17 Thread Joel Koshy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13770270#comment-13770270
 ] 

Joel Koshy commented on KAFKA-1053:
---

Nice - I tried this on KAFKA-1049 (as a test - that patch does not work) and it 
worked great!

+1

I did not get time to dig into the issue I ran into on Linux but the steps 
worked on my laptop. I can look into that and update the wiki with a 
work-around if I find one.

Minor comment: the direct Python API is interesting 
http://www.reviewboard.org/docs/rbtools/dev/api/overview (I'm in general wary 
of popen/subprocess); but it is probably more work than it's worth to interface 
with that and post-review likely wraps that anyway and is a well-maintained 
tool. Also, would prefer to have the tool create a os.tmpfile as opposed to 
leaving around a patch file but not a big deal.


> Kafka patch review tool
> ---
>
> Key: KAFKA-1053
> URL: https://issues.apache.org/jira/browse/KAFKA-1053
> Project: Kafka
>  Issue Type: New Feature
>  Components: tools
>Reporter: Neha Narkhede
>Assignee: Neha Narkhede
> Attachments: KAFKA-1053-2013-09-15_09:40:04.patch, 
> KAFKA-1053_2013-09-15_20:28:01.patch, KAFKA-1053_2013-09-16_14:40:15.patch, 
> KAFKA-1053-followup2.patch, KAFKA-1053-followup.patch, KAFKA-1053-v1.patch, 
> KAFKA-1053-v1.patch, KAFKA-1053-v1.patch, KAFKA-1053-v2.patch, 
> KAFKA-1053-v3.patch
>
>
> Created a new patch review tool that will integrate JIRA and reviewboard - 
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+patch+review+tool

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Work started] (KAFKA-1053) Kafka patch review tool

2013-09-17 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-1053 started by Neha Narkhede.

> Kafka patch review tool
> ---
>
> Key: KAFKA-1053
> URL: https://issues.apache.org/jira/browse/KAFKA-1053
> Project: Kafka
>  Issue Type: New Feature
>  Components: tools
>Reporter: Neha Narkhede
>Assignee: Neha Narkhede
> Attachments: KAFKA-1053-2013-09-15_09:40:04.patch, 
> KAFKA-1053_2013-09-15_20:28:01.patch, KAFKA-1053_2013-09-16_14:40:15.patch, 
> KAFKA-1053-followup2.patch, KAFKA-1053-followup.patch, KAFKA-1053-v1.patch, 
> KAFKA-1053-v1.patch, KAFKA-1053-v1.patch, KAFKA-1053-v2.patch, 
> KAFKA-1053-v3.patch
>
>
> Created a new patch review tool that will integrate JIRA and reviewboard - 
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+patch+review+tool

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (KAFKA-1053) Kafka patch review tool

2013-09-17 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1053:
-

Status: Patch Available  (was: In Progress)

> Kafka patch review tool
> ---
>
> Key: KAFKA-1053
> URL: https://issues.apache.org/jira/browse/KAFKA-1053
> Project: Kafka
>  Issue Type: New Feature
>  Components: tools
>Reporter: Neha Narkhede
>Assignee: Neha Narkhede
> Attachments: KAFKA-1053-2013-09-15_09:40:04.patch, 
> KAFKA-1053_2013-09-15_20:28:01.patch, KAFKA-1053_2013-09-16_14:40:15.patch, 
> KAFKA-1053-followup2.patch, KAFKA-1053-followup.patch, KAFKA-1053-v1.patch, 
> KAFKA-1053-v1.patch, KAFKA-1053-v1.patch, KAFKA-1053-v2.patch, 
> KAFKA-1053-v3.patch
>
>
> Created a new patch review tool that will integrate JIRA and reviewboard - 
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+patch+review+tool

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (KAFKA-1053) Kafka patch review tool

2013-09-17 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede closed KAFKA-1053.



> Kafka patch review tool
> ---
>
> Key: KAFKA-1053
> URL: https://issues.apache.org/jira/browse/KAFKA-1053
> Project: Kafka
>  Issue Type: New Feature
>  Components: tools
>Reporter: Neha Narkhede
>Assignee: Neha Narkhede
> Attachments: KAFKA-1053-2013-09-15_09:40:04.patch, 
> KAFKA-1053_2013-09-15_20:28:01.patch, KAFKA-1053_2013-09-16_14:40:15.patch, 
> KAFKA-1053-followup2.patch, KAFKA-1053-followup.patch, KAFKA-1053-v1.patch, 
> KAFKA-1053-v1.patch, KAFKA-1053-v1.patch, KAFKA-1053-v2.patch, 
> KAFKA-1053-v3.patch
>
>
> Created a new patch review tool that will integrate JIRA and reviewboard - 
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+patch+review+tool

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (KAFKA-1053) Kafka patch review tool

2013-09-17 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1053:
-

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the reviews. 

Joel -

Moved the patch to tempdir. Moving to the python API for reviewboard would be 
great, will file another JIRA to get that fixed.

Checked in the tool with the tempdir fix.

> Kafka patch review tool
> ---
>
> Key: KAFKA-1053
> URL: https://issues.apache.org/jira/browse/KAFKA-1053
> Project: Kafka
>  Issue Type: New Feature
>  Components: tools
>Reporter: Neha Narkhede
>Assignee: Neha Narkhede
> Attachments: KAFKA-1053-2013-09-15_09:40:04.patch, 
> KAFKA-1053_2013-09-15_20:28:01.patch, KAFKA-1053_2013-09-16_14:40:15.patch, 
> KAFKA-1053-followup2.patch, KAFKA-1053-followup.patch, KAFKA-1053-v1.patch, 
> KAFKA-1053-v1.patch, KAFKA-1053-v1.patch, KAFKA-1053-v2.patch, 
> KAFKA-1053-v3.patch
>
>
> Created a new patch review tool that will integrate JIRA and reviewboard - 
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+patch+review+tool

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (KAFKA-1058) Change the patch review tool to use the reviewboard python client

2013-09-17 Thread Neha Narkhede (JIRA)
Neha Narkhede created KAFKA-1058:


 Summary: Change the patch review tool to use the reviewboard 
python client
 Key: KAFKA-1058
 URL: https://issues.apache.org/jira/browse/KAFKA-1058
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Reporter: Neha Narkhede
Priority: Minor


Joel's suggestion on KAFKA-1053 -

The direct Python API is interesting 
http://www.reviewboard.org/docs/rbtools/dev/api/overview (I'm in general wary 
of popen/subprocess); but it is probably more work than it's worth to interface 
with that and post-review likely wraps that anyway and is a well-maintained 
tool. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (KAFKA-1053) Kafka patch review tool

2013-09-17 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13770396#comment-13770396
 ] 

Neha Narkhede edited comment on KAFKA-1053 at 9/18/13 3:59 AM:
---

Thanks for the reviews. 

Joel -

Moved the patch to tempdir. Moving to the python API for reviewboard would be 
great, filed KAFKA-1058 to address that.

Checked in the tool with the tempdir fix.

  was (Author: nehanarkhede):
Thanks for the reviews. 

Joel -

Moved the patch to tempdir. Moving to the python API for reviewboard would be 
great, will file another JIRA to get that fixed.

Checked in the tool with the tempdir fix.
  
> Kafka patch review tool
> ---
>
> Key: KAFKA-1053
> URL: https://issues.apache.org/jira/browse/KAFKA-1053
> Project: Kafka
>  Issue Type: New Feature
>  Components: tools
>Reporter: Neha Narkhede
>Assignee: Neha Narkhede
> Attachments: KAFKA-1053-2013-09-15_09:40:04.patch, 
> KAFKA-1053_2013-09-15_20:28:01.patch, KAFKA-1053_2013-09-16_14:40:15.patch, 
> KAFKA-1053-followup2.patch, KAFKA-1053-followup.patch, KAFKA-1053-v1.patch, 
> KAFKA-1053-v1.patch, KAFKA-1053-v1.patch, KAFKA-1053-v2.patch, 
> KAFKA-1053-v3.patch
>
>
> Created a new patch review tool that will integrate JIRA and reviewboard - 
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+patch+review+tool

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (KAFKA-1057) Trim whitespaces from user specified configs

2013-09-17 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede reassigned KAFKA-1057:


Assignee: Neha Narkhede

> Trim whitespaces from user specified configs
> 
>
> Key: KAFKA-1057
> URL: https://issues.apache.org/jira/browse/KAFKA-1057
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Reporter: Neha Narkhede
>Assignee: Neha Narkhede
> Fix For: 0.8.1
>
>
> Whitespace in configs is a common problem that leads to config errors. It 
> would be nice if Kafka could trim the whitespace from configs automatically.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira