[jira] [Commented] (KAFKA-1865) Add a flush() call to the new producer API

2015-02-22 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14332080#comment-14332080
 ] 

Jiangjie Qin commented on KAFKA-1865:
-

Yes you are right, Jay. This approach sounds good. Thanks.

> Add a flush() call to the new producer API
> --
>
> Key: KAFKA-1865
> URL: https://issues.apache.org/jira/browse/KAFKA-1865
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jay Kreps
>Assignee: Jay Kreps
> Attachments: KAFKA-1865.patch, KAFKA-1865_2015-02-21_15:36:54.patch
>
>
> The postcondition of this would be that any record enqueued prior to flush() 
> would have completed being sent (either successfully or not).
> An open question is whether you can continue sending new records while this 
> call is executing (on other threads).
> We should only do this if it doesn't add inefficiencies for people who don't 
> use it.
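The postcondition described above can be sketched as a minimal, hypothetical model. Names here (SketchProducer, pending) are illustrative, not the actual producer API:

```scala
import java.util.concurrent.{ConcurrentLinkedQueue, CountDownLatch}

// Hypothetical sketch: flush() returns only after every record enqueued
// before the call has completed (whether it succeeded or failed).
class SketchProducer {
  private val pending = new ConcurrentLinkedQueue[CountDownLatch]()

  def send(record: String): Unit = {
    val done = new CountDownLatch(1)
    pending.add(done)
    // In a real producer an I/O thread completes the send asynchronously;
    // here we complete it inline for illustration.
    done.countDown()
  }

  // Waits for every record enqueued before this call; returns how many it
  // drained. Sends started on other threads while flush() runs need not be
  // blocked -- they simply join the queue for a later flush.
  def flush(): Int = {
    var drained = 0
    var latch = pending.poll()
    while (latch != null) {
      latch.await()
      drained += 1
      latch = pending.poll()
    }
    drained
  }
}
```

This keeps the open question above visible in the sketch: concurrent senders are never blocked by an in-progress flush, they only enqueue work for the next one.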



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1811) ensuring registered broker host:port is unique

2015-02-22 Thread Chao Chu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14332106#comment-14332106
 ] 

Chao Chu commented on KAFKA-1811:
-

Many thanks for your reply! [~metadave], are you still working on this? 
Please let me know, thanks!

> ensuring registered broker host:port is unique
> --
>
> Key: KAFKA-1811
> URL: https://issues.apache.org/jira/browse/KAFKA-1811
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jun Rao
>  Labels: newbie
> Attachments: KAFKA_1811.patch
>
>
> Currently, we expect each of the registered broker to have a unique host:port 
> pair. However, we don't enforce that, which causes various weird problems. It 
> would be useful to ensure this during broker registration.





[jira] [Commented] (KAFKA-1811) ensuring registered broker host:port is unique

2015-02-22 Thread Dave Parfitt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14332167#comment-14332167
 ] 

Dave Parfitt commented on KAFKA-1811:
-

Chao Chu - sorry, I'm not able to finish working on this. I just had an 
addition to my family that's taking up all my free time.

Cheers -
Dave
  

> ensuring registered broker host:port is unique
> --
>
> Key: KAFKA-1811
> URL: https://issues.apache.org/jira/browse/KAFKA-1811
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jun Rao
>  Labels: newbie
> Attachments: KAFKA_1811.patch
>
>
> Currently, we expect each of the registered broker to have a unique host:port 
> pair. However, we don't enforce that, which causes various weird problems. It 
> would be useful to ensure this during broker registration.





Re: [kafka-clients] Re: [VOTE] 0.8.2.1 Candidate 1

2015-02-22 Thread Jun Rao
We identified at least one more blocker issue KAFKA-1971 during testing.
So, we will have to roll another RC for 0.8.2.1.

Thanks,

Jun

On Sat, Feb 21, 2015 at 6:04 PM, Joe Stein  wrote:

> Source verified, tests pass, quick start ok.
>
> Binaries verified, tests on scala
> https://github.com/stealthly/scala-kafka/pull/27 and go clients
> https://github.com/stealthly/go_kafka_client/pull/55 passing.
>
> If the release passes we should update the release notes to include the
> change from KAFKA-1729 please.
>
> +1 (binding)
>
> ~ Joe Stein
>
> On Fri, Feb 20, 2015 at 9:08 PM, ted won  wrote:
>
>> +1
>>
>> On Friday, February 20, 2015, Guozhang Wang  wrote:
>>
>> > +1 binding.
>> >
>> > Checked the md5, and quick start.
>> >
>> > Some minor comments:
>> >
>> > 1. The quickstart section should include the build step after the
>> > download and before starting the server.
>> >
>> > 2. There seems to be a bug in Gradle 1.1x with Java 8 causing the
>> "gradle"
>> > initialization to fail:
>> >
>> > -
>> >
>> > FAILURE: Build failed with an exception.
>> >
>> > * Where:
>> > Build file '/home/guwang/Workspace/temp/kafka/build.gradle' line: 199
>> >
>> > * What went wrong:
>> > A problem occurred evaluating root project 'kafka'.
>> > > Could not create task of type 'ScalaDoc'.
>> > --
>> >
>> > Downgrading Java to 1.7 resolves this issue.
>> >
>> > Guozhang
>> >
>> > On Wed, Feb 18, 2015 at 7:56 PM, Connie Yang > > > wrote:
>> >
>> > > +1
>> > > On Feb 18, 2015 7:23 PM, "Matt Narrell" > > > wrote:
>> > >
>> > > > +1
>> > > >
>> > > > > On Feb 18, 2015, at 7:56 PM, Jun Rao > > > wrote:
>> > > > >
>> > > > > This is the first candidate for release of Apache Kafka 0.8.2.1.
>> This
>> > > > > only fixes one critical issue (KAFKA-1952) in 0.8.2.0.
>> > > > >
>> > > > > Release Notes for the 0.8.2.1 release
>> > > > >
>> > > >
>> > >
>> >
>> https://people.apache.org/~junrao/kafka-0.8.2.1-candidate1/RELEASE_NOTES.html
>> > > > >
>> > > > > *** Please download, test and vote by Saturday, Feb 21, 7pm PT
>> > > > >
>> > > > > Kafka's KEYS file containing PGP keys we use to sign the release:
>> > > > > http://kafka.apache.org/KEYS in addition to the md5, sha1
>> > > > > and sha2 (SHA256) checksum.
>> > > > >
>> > > > > * Release artifacts to be voted upon (source and binary):
>> > > > > https://people.apache.org/~junrao/kafka-0.8.2.1-candidate1/
>> > > > >
>> > > > > * Maven artifacts to be voted upon prior to release:
>> > > > > https://repository.apache.org/content/groups/staging/
>> > > > >
>> > > > > * scala-doc
>> > > > >
>> https://people.apache.org/~junrao/kafka-0.8.2.1-candidate1/scaladoc/
>> > > > >
>> > > > > * java-doc
>> > > > >
>> https://people.apache.org/~junrao/kafka-0.8.2.1-candidate1/javadoc/
>> > > > >
>> > > > > * The tag to be voted upon (off the 0.8.2 branch) is the 0.8.2.1
>> tag
>> > > > >
>> > > >
>> > >
>> >
>> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=c1b4c58531343dce80232e0122d085fc687633f6
>> > > > >
>> > > > > /***
>> > > > >
>> > > > > Thanks,
>> > > > >
>> > > > > Jun
>> > > >
>> > > >
>> > >
>> >
>> >
>> >
>> > --
>> > -- Guozhang
>> >
>>
>
>  --
> You received this message because you are subscribed to the Google Groups
> "kafka-clients" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kafka-clients+unsubscr...@googlegroups.com.
> To post to this group, send email to kafka-clie...@googlegroups.com.
> Visit this group at http://groups.google.com/group/kafka-clients.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/kafka-clients/CAA7ooCDvUNQx2B351P3LaOYAejoxR9M_PbzfmWo5-ssgEJ_%2Bpw%40mail.gmail.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>


[jira] [Commented] (KAFKA-1961) Looks like its possible to delete _consumer_offsets topic

2015-02-22 Thread Ted Malaska (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14332268#comment-14332268
 ] 

Ted Malaska commented on KAFKA-1961:


Hey Gwen,

Thank you for the help. Here is the link to the Review Board request:

https://reviews.apache.org/r/31271/

I will add this to the JIRA and mark it as Patch Available.

> Looks like its possible to delete _consumer_offsets topic
> -
>
> Key: KAFKA-1961
> URL: https://issues.apache.org/jira/browse/KAFKA-1961
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
>  Labels: newbie
>
> Noticed that kafka-topics.sh --delete can successfully delete internal topics 
> (__consumer_offsets).
> I'm pretty sure we want to prevent that, to avoid users shooting themselves 
> in the foot.
> Topic admin command should check for internal topics, just like 
> ReplicaManager does and not let users delete them.





[jira] [Updated] (KAFKA-1961) Looks like its possible to delete _consumer_offsets topic

2015-02-22 Thread Ted Malaska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Malaska updated KAFKA-1961:
---
Attachment: KAFKA-1961.3.patch

> Looks like its possible to delete _consumer_offsets topic
> -
>
> Key: KAFKA-1961
> URL: https://issues.apache.org/jira/browse/KAFKA-1961
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
>  Labels: newbie
> Attachments: KAFKA-1961.3.patch
>
>
> Noticed that kafka-topics.sh --delete can successfully delete internal topics 
> (__consumer_offsets).
> I'm pretty sure we want to prevent that, to avoid users shooting themselves 
> in the foot.
> Topic admin command should check for internal topics, just like 
> ReplicaManager does and not let users delete them.





[jira] [Updated] (KAFKA-1961) Looks like its possible to delete _consumer_offsets topic

2015-02-22 Thread Ted Malaska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Malaska updated KAFKA-1961:
---
Status: Patch Available  (was: Open)

The change makes it so users cannot delete topics contained in 
Topic.InternalTopics.

If they try, they will be notified.

The patch also includes unit tests.

> Looks like its possible to delete _consumer_offsets topic
> -
>
> Key: KAFKA-1961
> URL: https://issues.apache.org/jira/browse/KAFKA-1961
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
>  Labels: newbie
> Attachments: KAFKA-1961.3.patch
>
>
> Noticed that kafka-topics.sh --delete can successfully delete internal topics 
> (__consumer_offsets).
> I'm pretty sure we want to prevent that, to avoid users shooting themselves 
> in the foot.
> Topic admin command should check for internal topics, just like 
> ReplicaManager does and not let users delete them.





[jira] [Commented] (KAFKA-1961) Looks like its possible to delete _consumer_offsets topic

2015-02-22 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14332270#comment-14332270
 ] 

Gwen Shapira commented on KAFKA-1961:
-

Thank you [~ted.m].

Patch and test look good to me.

[~nehanarkhede], [~jkreps] or [~jjkoshy] - can one of you take a look and see 
if it's ready for commit?

> Looks like its possible to delete _consumer_offsets topic
> -
>
> Key: KAFKA-1961
> URL: https://issues.apache.org/jira/browse/KAFKA-1961
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
>  Labels: newbie
> Attachments: KAFKA-1961.3.patch
>
>
> Noticed that kafka-topics.sh --delete can successfully delete internal topics 
> (__consumer_offsets).
> I'm pretty sure we want to prevent that, to avoid users shooting themselves 
> in the foot.
> Topic admin command should check for internal topics, just like 
> ReplicaManager does and not let users delete them.





Re: [DISCUSS] KIP-5 - Broker Configuration Management

2015-02-22 Thread Neha Narkhede
Andrii,

Thanks for updating the KIP. A few comments -

Would you mind updating the Compatibility, Migration Plan section as well?

We should discuss the new config store options - I'm not so sure that
zookeeper is such a slam dunk choice for storing the configs.

   1. Firstly, we have seen quite a few problems with zkClient, which is
   currently the client we use to talk to ZooKeeper. In our observation, the
   extra layer of indirection it provides seemed useful when we were
   starting off, but the more production experience we got, the more problems
   popped up. These include missed watches in some cases, no timeouts,
   limited APIs, etc. This makes it much trickier to depend on ZooKeeper for
   important state information that needs to propagate in the cluster. We may
   have to investigate writing our own utility for interacting with
   ZooKeeper, but even then it is not going to get much easier since we
   still have to solve the corner-case issues we ran into with zkClient.
   2. Secondly, this adds yet another piece of important state information
   we store in ZooKeeper assuming that zookeeper is the easiest alternative
   for key-value storage. However, we have gotten burnt by this in the past
   and still are. For example, admin commands like preferred replica election
   and partition reassignment spend quite some time updating the progress of
   the admin operation in ZooKeeper and hence slowing down the operation
   since ZooKeeper is not optimized for sudden spikes in write throughput.
   Also, I don't know if the argument that the information is necessarily
   safer in ZooKeeper is quite convincing, especially, with the successful
   move of consumer offsets from ZooKeeper to Kafka. It may not be a bad
   option to consider storing configs in a special durable Kafka topic. I'm
   not necessarily advocating this but would definitely like to discuss it.

Before we move forward on adding more important state information in
ZooKeeper, I think we should make an effort at reducing some technical debt
we have accumulated and clean that up first.

On Mon, Feb 16, 2015 at 9:18 AM, Andrii Biletskyi <
andrii.bilets...@stealth.ly> wrote:

> Hi all,
>
> I've added some details to Proposed Changes paragraph.
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-5+-+Broker+Configuration+Management
>
> Please let me know if anything else requires description / details.
>
> Thanks,
> Andrii
>
> On Sat, Jan 24, 2015 at 1:39 AM, Jay Kreps  wrote:
>
> > Cool. Yeah sorry to nag about these KIPs, and I hope it doesn't come
> across
> > the wrong way. But the hope I really have for these is that they are
> > complete enough that even highly involved users can see and understand
> the
> > change, motivation, etc. I think that will do a ton to help extend the
> > community.
> >
> > -Jay
> >
> > On Thu, Jan 22, 2015 at 10:22 PM, Joe Stein 
> wrote:
> >
> > > There are still some to-dos in
> > > https://reviews.apache.org/r/29513/diff/ to change it to use ConfigDef
> > > https://reviews.apache.org/r/30126/diff/ once that is in.
> > >
> > > We can get more written up on it, will do.
> > >
> > > On Fri, Jan 23, 2015 at 12:05 AM, Jay Kreps 
> wrote:
> > >
> > > > Hey Joe,
> > > >
> > > > Can you fill in this KIP? The purpose of these KIPs is to give a full
> > > > overview of the feature, how it will work, be implemented, the
> > > > considerations involved, etc. There is only like one sentence on this
> > > which
> > > > isn't enough for anyone to know what you are thinking.
> > > >
> > > > Moving off of configs to something that I'm guessing would be
> > > > Zookeeper-based (?) is a massive change so we really need to describe
> > > this
> > > > in a way that can be widely circulated.
> > > >
> > > > I actually think this would be a good idea. But there are a ton of
> > > > advantages to good old fashioned text files in terms of config
> > management
> > > > and change control. And trying to support both may or may not be
> > better.
> > > >
> > > > -Jay
> > > >
> > > >
> > > > On Wed, Jan 21, 2015 at 10:34 PM, Joe Stein 
> > > wrote:
> > > >
> > > > > Created a KIP
> > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-5+-+Broker+Configuration+Management
> > > > >
> > > > > JIRA https://issues.apache.org/jira/browse/KAFKA-1786
> > > > >
> > > > > /***
> > > > >  Joe Stein
> > > > >  Founder, Principal Consultant
> > > > >  Big Data Open Source Security LLC
> > > > >  http://www.stealth.ly
> > > > >  Twitter: @allthingshadoop  >
> > > > > /
> > > > >
> > > >
> > >
> >
>



-- 
Thanks,
Neha


[jira] [Commented] (KAFKA-1824) in ConsoleProducer - properties key.separator and parse.key no longer work

2015-02-22 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14332339#comment-14332339
 ] 

Neha Narkhede commented on KAFKA-1824:
--

[~gwenshap] I'm having trouble applying the patch on trunk -
{code}
nnarkhed-mn1:kafka nnarkhed$ git apply --check 1824.patch 
error: patch failed: core/src/main/scala/kafka/tools/ConsoleProducer.scala:36
error: core/src/main/scala/kafka/tools/ConsoleProducer.scala: patch does not 
apply
error: patch failed: core/src/main/scala/kafka/tools/ConsoleProducer.scala:34
error: core/src/main/scala/kafka/tools/ConsoleProducer.scala: patch does not 
apply
{code}

> in ConsoleProducer - properties key.separator and parse.key no longer work
> --
>
> Key: KAFKA-1824
> URL: https://issues.apache.org/jira/browse/KAFKA-1824
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
> Fix For: 0.8.3
>
> Attachments: KAFKA-1824.patch, KAFKA-1824.patch, 
> KAFKA-1824_2014-12-22_16:17:42.patch
>
>
> Looks like the change in kafka-1711 breaks them accidentally.
> reader.init is called with readerProps which is initialized with commandline 
> properties as defaults.
> the problem is that reader.init checks:
> if(props.containsKey("parse.key"))
> and defaults don't return true in this case.
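The containsKey-vs-defaults behavior described above is standard java.util.Properties semantics and can be reproduced in isolation. The property name mirrors the report; the snippet is illustrative, not the actual ConsoleProducer code:

```scala
import java.util.Properties

// Command-line properties end up as the *defaults* of readerProps,
// mirroring the situation described in the report.
val cmdLineProps = new Properties()
cmdLineProps.setProperty("parse.key", "true")

val readerProps = new Properties(cmdLineProps)

// containsKey is inherited from Hashtable and never consults defaults,
// so a check like the one in reader.init comes back false...
val viaContainsKey = readerProps.containsKey("parse.key")   // false

// ...while getProperty does fall back to defaults.
val viaGetProperty = readerProps.getProperty("parse.key")   // "true"
```

In other words, any check that must honor defaulted properties should go through getProperty rather than containsKey.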





[jira] [Updated] (KAFKA-1961) Looks like its possible to delete _consumer_offsets topic

2015-02-22 Thread Ted Malaska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Malaska updated KAFKA-1961:
---
Attachment: KAFKA-1961.4.patch

Changed the action from a print statement to an exception. Also updated the 
unit tests.
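A minimal sketch of the guard being described, with a local stand-in for Kafka's internal-topic list. The names here are illustrative, not the actual patch:

```scala
// Illustrative guard: refuse to delete internal topics, raising an
// exception instead of printing a message. InternalTopics here is a local
// stand-in, not Kafka's real Topic object.
object TopicGuard {
  val InternalTopics: Set[String] = Set("__consumer_offsets")

  def deleteTopic(topic: String): String = {
    if (InternalTopics.contains(topic))
      throw new IllegalArgumentException(
        s"Topic $topic is a Kafka internal topic and cannot be deleted.")
    // ...ordinary topic deletion would happen here...
    s"deleted $topic"
  }
}
```

Raising an exception (rather than printing) lets callers and unit tests observe the refusal programmatically.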

> Looks like its possible to delete _consumer_offsets topic
> -
>
> Key: KAFKA-1961
> URL: https://issues.apache.org/jira/browse/KAFKA-1961
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
>  Labels: newbie
> Attachments: KAFKA-1961.3.patch, KAFKA-1961.4.patch
>
>
> Noticed that kafka-topics.sh --delete can successfully delete internal topics 
> (__consumer_offsets).
> I'm pretty sure we want to prevent that, to avoid users shooting themselves 
> in the foot.
> Topic admin command should check for internal topics, just like 
> ReplicaManager does and not let users delete them.





[GitHub] kafka pull request: KAFKA-1972: JMXTool multiple attributes

2015-02-22 Thread rekhajoshm
GitHub user rekhajoshm opened a pull request:

https://github.com/apache/kafka/pull/45

KAFKA-1972: JMXTool multiple attributes

KAFKA-1972: JMX Tool output for CSV format does not handle attributes with 
comma in their value

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rekhajoshm/kafka KAFKA-1972

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/45.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #45


commit b599610ef512c21f5acb621c65168da03c8093c0
Author: Joshi 
Date:   2015-02-22T21:48:55Z

KAFKA-1972: JMXTool multiple attributes




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-1972) JMX Tool output for CSV format does not handle attributes with comma in their value

2015-02-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14332386#comment-14332386
 ] 

ASF GitHub Bot commented on KAFKA-1972:
---

GitHub user rekhajoshm opened a pull request:

https://github.com/apache/kafka/pull/45

KAFKA-1972: JMXTool multiple attributes

KAFKA-1972: JMX Tool output for CSV format does not handle attributes with 
comma in their value

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rekhajoshm/kafka KAFKA-1972

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/45.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #45


commit b599610ef512c21f5acb621c65168da03c8093c0
Author: Joshi 
Date:   2015-02-22T21:48:55Z

KAFKA-1972: JMXTool multiple attributes




> JMX Tool output for CSV format does not handle attributes with comma in their 
> value
> ---
>
> Key: KAFKA-1972
> URL: https://issues.apache.org/jira/browse/KAFKA-1972
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.8.1.1
>Reporter: Jonathan Rafalski
>Priority: Minor
>  Labels: newbie
>
> When JMXTool outputs all attributes using comma delimitation, it does not 
> have an escape character or any other way to handle attributes that contain 
> commas in their value. This could potentially limit the use of the output to 
> single-value attributes only.
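One conventional fix for this class of problem is RFC 4180-style quoting: wrap any field containing a comma, quote, or newline in double quotes, and double any embedded quotes. A sketch of that idea (not the actual JMXTool patch):

```scala
// RFC 4180-style CSV escaping: quote fields containing special characters
// and double embedded quotes, so comma-bearing attribute values survive.
def csvEscape(field: String): String =
  if (field.exists(c => c == ',' || c == '"' || c == '\n'))
    "\"" + field.replace("\"", "\"\"") + "\""
  else
    field

def toCsvRow(values: Seq[String]): String =
  values.map(csvEscape).mkString(",")
```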





[jira] [Commented] (KAFKA-1972) JMX Tool output for CSV format does not handle attributes with comma in their value

2015-02-22 Thread Rekha Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14332387#comment-14332387
 ] 

Rekha Joshi commented on KAFKA-1972:


[~jrafalski] Patch at https://github.com/apache/kafka/pull/45 Thanks!

> JMX Tool output for CSV format does not handle attributes with comma in their 
> value
> ---
>
> Key: KAFKA-1972
> URL: https://issues.apache.org/jira/browse/KAFKA-1972
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.8.1.1
>Reporter: Jonathan Rafalski
>Priority: Minor
>  Labels: newbie
>
> When JMXTool outputs all attributes using comma delimitation, it does not 
> have an escape character or any other way to handle attributes that contain 
> commas in their value. This could potentially limit the use of the output to 
> single-value attributes only.





Re: [kafka-clients] Re: [VOTE] 0.8.2.1 Candidate 1

2015-02-22 Thread Joe Stein
Jun,

Can we also add https://issues.apache.org/jira/browse/KAFKA-1724 to the
next RC please?

Thanks!

~ Joe Stein
- - - - - - - - - - - - - - - - -

  http://www.stealth.ly
- - - - - - - - - - - - - - - - -

On Sun, Feb 22, 2015 at 11:59 AM, Jun Rao  wrote:

> We identified at least one more blocker issue KAFKA-1971 during testing.
> So, we will have to roll another RC for 0.8.2.1.
>
> Thanks,
>
> Jun
>
> On Sat, Feb 21, 2015 at 6:04 PM, Joe Stein  wrote:
>
>> Source verified, tests pass, quick start ok.
>>
>> Binaries verified, tests on scala
>> https://github.com/stealthly/scala-kafka/pull/27 and go clients
>> https://github.com/stealthly/go_kafka_client/pull/55 passing.
>>
>> If the release passes we should update the release notes to include the
>> change from KAFKA-1729 please.
>>
>> +1 (binding)
>>
>> ~ Joe Stein
>>
>> On Fri, Feb 20, 2015 at 9:08 PM, ted won  wrote:
>>
>>> +1
>>>
>>> On Friday, February 20, 2015, Guozhang Wang  wrote:
>>>
>>> > +1 binding.
>>> >
>>> > Checked the md5, and quick start.
>>> >
>>> > Some minor comments:
>>> >
>>> > 1. The quickstart section should include the build step after the
>>> > download and before starting the server.
>>> >
>>> > 2. There seems to be a bug in Gradle 1.1x with Java 8 causing the
>>> "gradle"
>>> > initialization to fail:
>>> >
>>> > -
>>> >
>>> > FAILURE: Build failed with an exception.
>>> >
>>> > * Where:
>>> > Build file '/home/guwang/Workspace/temp/kafka/build.gradle' line: 199
>>> >
>>> > * What went wrong:
>>> > A problem occurred evaluating root project 'kafka'.
>>> > > Could not create task of type 'ScalaDoc'.
>>> > --
>>> >
>>> > Downgrading Java to 1.7 resolves this issue.
>>> >
>>> > Guozhang
>>> >
>>> > On Wed, Feb 18, 2015 at 7:56 PM, Connie Yang >> > > wrote:
>>> >
>>> > > +1
>>> > > On Feb 18, 2015 7:23 PM, "Matt Narrell" >> > > wrote:
>>> > >
>>> > > > +1
>>> > > >
>>> > > > > On Feb 18, 2015, at 7:56 PM, Jun Rao >> > > wrote:
>>> > > > >
>>> > > > > This is the first candidate for release of Apache Kafka 0.8.2.1.
>>> This
>>> > > > > only fixes one critical issue (KAFKA-1952) in 0.8.2.0.
>>> > > > >
>>> > > > > Release Notes for the 0.8.2.1 release
>>> > > > >
>>> > > >
>>> > >
>>> >
>>> https://people.apache.org/~junrao/kafka-0.8.2.1-candidate1/RELEASE_NOTES.html
>>> > > > >
>>> > > > > *** Please download, test and vote by Saturday, Feb 21, 7pm PT
>>> > > > >
>>> > > > > Kafka's KEYS file containing PGP keys we use to sign the release:
>>> > > > > http://kafka.apache.org/KEYS in addition to the md5, sha1
>>> > > > > and sha2 (SHA256) checksum.
>>> > > > >
>>> > > > > * Release artifacts to be voted upon (source and binary):
>>> > > > > https://people.apache.org/~junrao/kafka-0.8.2.1-candidate1/
>>> > > > >
>>> > > > > * Maven artifacts to be voted upon prior to release:
>>> > > > > https://repository.apache.org/content/groups/staging/
>>> > > > >
>>> > > > > * scala-doc
>>> > > > >
>>> https://people.apache.org/~junrao/kafka-0.8.2.1-candidate1/scaladoc/
>>> > > > >
>>> > > > > * java-doc
>>> > > > >
>>> https://people.apache.org/~junrao/kafka-0.8.2.1-candidate1/javadoc/
>>> > > > >
>>> > > > > * The tag to be voted upon (off the 0.8.2 branch) is the 0.8.2.1
>>> tag
>>> > > > >
>>> > > >
>>> > >
>>> >
>>> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=c1b4c58531343dce80232e0122d085fc687633f6
>>> > > > >
>>> > > > > /***
>>> > > > >
>>> > > > > Thanks,
>>> > > > >
>>> > > > > Jun
>>> > > >
>>> > > >
>>> > >
>>> >
>>> >
>>> >
>>> > --
>>> > -- Guozhang
>>> >
>>>
>>
>>  --
>> You received this message because you are subscribed to the Google Groups
>> "kafka-clients" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to kafka-clients+unsubscr...@googlegroups.com.
>> To post to this group, send email to kafka-clie...@googlegroups.com.
>> Visit this group at http://groups.google.com/group/kafka-clients.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/kafka-clients/CAA7ooCDvUNQx2B351P3LaOYAejoxR9M_PbzfmWo5-ssgEJ_%2Bpw%40mail.gmail.com
>> 
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>


[jira] [Commented] (KAFKA-1971) starting a broker with a conflicting id will delete the previous broker registration

2015-02-22 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14332414#comment-14332414
 ] 

Guozhang Wang commented on KAFKA-1971:
--

I think that is because when the controller node is shutting down while there 
are still in-flight requests to itself in the request channel, the shutdown 
process blocks waiting for the controller module to drain all the requests. 
Deleting the ZK path allows the controller listener to fire, clear the 
corresponding request channel, and resolve the deadlock.

> starting a broker with a conflicting id will delete the previous broker 
> registration
> 
>
> Key: KAFKA-1971
> URL: https://issues.apache.org/jira/browse/KAFKA-1971
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Jun Rao
>Assignee: Jun Rao
>Priority: Blocker
> Fix For: 0.8.2.1
>
>
> This issue can be easily reproduced by the following steps.
> 1. Start broker 1.
> 2. Start broker 2 with the same id as broker 1 (configure different port, log 
> dir).
> Broker 2's registration will fail. However, broker 1's registration in ZK is 
> now deleted.





[jira] [Updated] (KAFKA-1919) Metadata request issued with no backoff in new producer if there are no topics

2015-02-22 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-1919:
-
Assignee: Jay Kreps

> Metadata request issued with no backoff in new producer if there are no topics
> --
>
> Key: KAFKA-1919
> URL: https://issues.apache.org/jira/browse/KAFKA-1919
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Jay Kreps
>Assignee: Jay Kreps
>Priority: Blocker
> Fix For: 0.8.2.1
>
> Attachments: KAFKA-1919-v1.patch
>
>
> Original report:
> We have observed high cpu and high network traffic problem when
> 1) cluster (0.8.1.1) has no topic
> 2) KafkaProducer (0.8.2-beta) object is created without sending any traffic
> We have observed such a problem twice. In both cases, the problem went away
> immediately after one/any topic is created.
> Is this a known issue? Just want to check with the community first before I
> spend much time to reproduce it.
> I couldn't reproduce the issue with a similar setup with unit test code in
> the IDE: start two brokers with no topic locally on my laptop, create a
> KafkaProducer object without sending any msgs. But I only tested with
> 0.8.2-beta for both broker and producer.
> Issue exists in 0.8.2 as well:
> I have re-run my unit test with 0.8.2.0. same tight-loop problem happened
> after a few mins.
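The tight-loop behavior above suggests the shape of the fix: gate metadata requests behind a refresh backoff. A hedged sketch of that idea, with illustrative names rather than the actual NetworkClient change:

```scala
// Minimal backoff gate: a new metadata request is allowed only once
// refreshBackoffMs has elapsed since the last attempt, which breaks the
// tight retry loop when the cluster has no topics.
class MetadataBackoff(refreshBackoffMs: Long) {
  private var lastAttemptMs: Long = Long.MinValue / 2

  // Milliseconds until the next request may be sent (0 means "now").
  def timeToNextUpdate(nowMs: Long): Long =
    math.max(0L, lastAttemptMs + refreshBackoffMs - nowMs)

  def recordAttempt(nowMs: Long): Unit = lastAttemptMs = nowMs
}
```

The sender would poll timeToNextUpdate before issuing a metadata request and sleep (or select with a timeout) for the returned duration instead of retrying immediately.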





How to get a JIRA assigned

2015-02-22 Thread Jonathan Rafalski
Hello,

  I was wondering whether there are permissions I need in order to assign JIRA 
tickets to myself. I found what I think is a bug while working on 1679, so I 
opened a ticket and was going to post a Review Board request for both with my 
solution, but now someone else has attempted a patch. I just want to be able 
to assign a ticket to myself so time isn't wasted.

If it is something that needs to be granted after submitting a few accepted 
patches, can someone at least assign 1679 and 1972 to me so nobody else works 
on them while I am?

Thanks!

Jonathan.

Sent from my iPhone

[GitHub] kafka pull request: KAFKA-1621 : Standardize --messages option

2015-02-22 Thread rekhajoshm
GitHub user rekhajoshm opened a pull request:

https://github.com/apache/kafka/pull/46

KAFKA-1621 : Standardize --messages option

KAFKA-1621: Standardize --messages option in perf scripts

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rekhajoshm/kafka KAFKA-1621

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/46.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #46


commit d123f48f85604c765b464d6b9d5cee4b3ec0de25
Author: Joshi 
Date:   2015-02-22T23:21:55Z

KAFKA-1621 : Standardize --messages option






[jira] [Commented] (KAFKA-1621) Standardize --messages option in perf scripts

2015-02-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14332423#comment-14332423
 ] 

ASF GitHub Bot commented on KAFKA-1621:
---

GitHub user rekhajoshm opened a pull request:

https://github.com/apache/kafka/pull/46

KAFKA-1621 : Standardize --messages option

KAFKA-1621: Standardize --messages option in perf scripts

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rekhajoshm/kafka KAFKA-1621

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/46.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #46


commit d123f48f85604c765b464d6b9d5cee4b3ec0de25
Author: Joshi 
Date:   2015-02-22T23:21:55Z

KAFKA-1621 : Standardize --messages option




> Standardize --messages option in perf scripts
> -
>
> Key: KAFKA-1621
> URL: https://issues.apache.org/jira/browse/KAFKA-1621
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1.1
>Reporter: Jay Kreps
>  Labels: newbie
>
> This option is specified in PerfConfig and is used by the producer, consumer,
> and simple consumer perf commands. The docstring on the argument does not
> list it as required, but the producer performance test requires it; the
> others don't.
> We should standardize this so that either all the commands require the
> option and mark it as required in the docstring, or none of them list it as
> required.
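For illustration only (plain Java, not the actual joptsimple-based PerfConfig; `requiredLong` and `parse` are hypothetical helpers), standardizing a required option means both documenting it as required and failing fast when it is absent:

```java
import java.util.HashMap;
import java.util.Map;

public class RequiredOptionCheck {
    // Parse "--name value" pairs into a map of option names to raw values.
    public static Map<String, String> parse(String[] args) {
        Map<String, String> opts = new HashMap<>();
        for (int i = 0; i + 1 < args.length; i += 2) {
            opts.put(args[i].replaceFirst("^--", ""), args[i + 1]);
        }
        return opts;
    }

    // Fail fast when a required option is missing, so the documented
    // contract and the actual behavior agree across all commands.
    public static long requiredLong(Map<String, String> opts, String name) {
        String v = opts.get(name);
        if (v == null) {
            throw new IllegalArgumentException("Missing required option: --" + name);
        }
        return Long.parseLong(v);
    }
}
```

With this shape, every perf command that declares --messages as required rejects an invocation that omits it, instead of only one command enforcing it.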



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1621) Standardize --messages option in perf scripts

2015-02-22 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi updated KAFKA-1621:
---
Reviewer: Jay Kreps
  Status: Patch Available  (was: Open)

https://github.com/apache/kafka/pull/46

> Standardize --messages option in perf scripts
> -
>
> Key: KAFKA-1621
> URL: https://issues.apache.org/jira/browse/KAFKA-1621
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1.1
>Reporter: Jay Kreps
>  Labels: newbie
>
> This option is specified in PerfConfig and is used by the producer, consumer,
> and simple consumer perf commands. The docstring on the argument does not
> list it as required, but the producer performance test requires it; the
> others don't.
> We should standardize this so that either all the commands require the
> option and mark it as required in the docstring, or none of them list it as
> required.





[jira] [Updated] (KAFKA-1972) JMX Tool output for CSV format does not handle attributes with comma in their value

2015-02-22 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi updated KAFKA-1972:
---
Reviewer: Jonathan Rafalski
  Status: Patch Available  (was: Open)

Patch at https://github.com/apache/kafka/pull/45 Thanks!

> JMX Tool output for CSV format does not handle attributes with comma in their 
> value
> ---
>
> Key: KAFKA-1972
> URL: https://issues.apache.org/jira/browse/KAFKA-1972
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.8.1.1
>Reporter: Jonathan Rafalski
>Priority: Minor
>  Labels: newbie
>
> When JMXTool outputs all attributes using comma delimitation, it does not
> have an escape character or a way to handle attributes that contain commas
> in their value. This could potentially limit the uses of the output to
> single-value attributes only.
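One common way to address the issue described above (a hedged sketch, not the actual JMXTool code; `csvEscape` is a hypothetical helper) is RFC 4180-style quoting: wrap any field containing a comma, quote, or newline in double quotes, doubling embedded quotes:

```java
public class CsvEscape {
    // Quote a CSV field when it contains a delimiter, quote, or newline,
    // doubling any embedded quotes (RFC 4180 style). Plain fields pass
    // through unchanged, so single-value attributes are unaffected.
    public static String csvEscape(String value) {
        if (value.contains(",") || value.contains("\"") || value.contains("\n")) {
            return "\"" + value.replace("\"", "\"\"") + "\"";
        }
        return value;
    }
}
```

A consumer reading the CSV then only has to honor quoted fields, which most spreadsheet tools and CSV parsers already do.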





Re: How to get a JIRA assigned

2015-02-22 Thread Guozhang Wang
Hi Jonathan,

You need to be added to the "contributor" list before you can be assigned
JIRAs, and only committers can do that for you.

I have just added you to the list, so you should be able to assign yourself
now.

Guozhang

On Sun, Feb 22, 2015 at 3:08 PM, Jonathan Rafalski <
jonathan.rafal...@gmail.com> wrote:

> Hello,
>
>   Are there any rights I need in order to be able to assign JIRA
> tickets to myself? I found what I think is a bug while working on
> KAFKA-1679, so I opened a ticket and was going to post a review board
> for both with my solution, but now someone else has attempted a patch.
> I just want to be able to assign tickets to myself so no time is wasted.
>
> If it is something that is granted only after submitting a few accepted
> patches, can someone at least assign KAFKA-1679 and KAFKA-1972 to me so
> nobody else attempts the work while I am on it?
>
> Thanks!
>
> Jonathan.
>
> Sent from my iPhone




-- 
-- Guozhang


[GitHub] kafka pull request: KAFKA-1545: KafkaHealthcheck.register failure

2015-02-22 Thread rekhajoshm
GitHub user rekhajoshm opened a pull request:

https://github.com/apache/kafka/pull/47

KAFKA-1545: KafkaHealthcheck.register failure

KAFKA-1545: java.net.InetAddress.getLocalHost in KafkaHealthcheck.register 
may fail on some irregular hostnames

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rekhajoshm/kafka KAFKA-1545

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/47.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #47


commit d123f48f85604c765b464d6b9d5cee4b3ec0de25
Author: Joshi 
Date:   2015-02-22T23:21:55Z

KAFKA-1621 : Standardize --messages option

commit 262df13b91d86bee2c5fb937630c794830854947
Author: Joshi 
Date:   2015-02-22T23:54:43Z

KAFKA-1545: KafkaHealthcheck.register failure






[jira] [Commented] (KAFKA-1545) java.net.InetAddress.getLocalHost in KafkaHealthcheck.register may fail on some irregular hostnames

2015-02-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14332432#comment-14332432
 ] 

ASF GitHub Bot commented on KAFKA-1545:
---

GitHub user rekhajoshm opened a pull request:

https://github.com/apache/kafka/pull/47

KAFKA-1545: KafkaHealthcheck.register failure

KAFKA-1545: java.net.InetAddress.getLocalHost in KafkaHealthcheck.register 
may fail on some irregular hostnames

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rekhajoshm/kafka KAFKA-1545

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/47.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #47


commit d123f48f85604c765b464d6b9d5cee4b3ec0de25
Author: Joshi 
Date:   2015-02-22T23:21:55Z

KAFKA-1621 : Standardize --messages option

commit 262df13b91d86bee2c5fb937630c794830854947
Author: Joshi 
Date:   2015-02-22T23:54:43Z

KAFKA-1545: KafkaHealthcheck.register failure




> java.net.InetAddress.getLocalHost in KafkaHealthcheck.register may fail on 
> some irregular hostnames
> ---
>
> Key: KAFKA-1545
> URL: https://issues.apache.org/jira/browse/KAFKA-1545
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>  Labels: newbie
> Fix For: 0.9.0
>
>
> For example:
> kafka.server.LogOffsetTest > testGetOffsetsForUnknownTopic FAILED
> java.net.UnknownHostException: guwang-mn2: guwang-mn2: nodename nor 
> servname provided, or not known
> at java.net.InetAddress.getLocalHost(InetAddress.java:1473)
> at kafka.server.KafkaHealthcheck.register(KafkaHealthcheck.scala:59)
> at kafka.server.KafkaHealthcheck.startup(KafkaHealthcheck.scala:45)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:121)
> at kafka.utils.TestUtils$.createServer(TestUtils.scala:130)
> at kafka.server.LogOffsetTest.setUp(LogOffsetTest.scala:53)
> Caused by:
> java.net.UnknownHostException: guwang-mn2: nodename nor servname 
> provided, or not known
> at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
> at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:901)
> at 
> java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1293)
> at java.net.InetAddress.getLocalHost(InetAddress.java:1469)
> ... 5 more
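A hedged sketch of one possible mitigation (illustrative only, not the actual patch; `HostnameUtil` and `safeLocalHostName` are hypothetical names): catch the UnknownHostException and fall back to the loopback address instead of letting registration fail on an unresolvable local hostname:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class HostnameUtil {
    // Resolve the local hostname, falling back to the loopback address when
    // resolution fails (e.g. the "nodename nor servname provided, or not
    // known" error shown in the stack trace above).
    public static String safeLocalHostName() {
        try {
            return InetAddress.getLocalHost().getHostName();
        } catch (UnknownHostException e) {
            // getLoopbackAddress() never hits the resolver, so it cannot fail.
            return InetAddress.getLoopbackAddress().getHostAddress();
        }
    }
}
```

Whether falling back silently is acceptable for broker registration (as opposed to requiring an explicit advertised hostname in the config) is a separate design question.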





[GitHub] kafka pull request: KAFKA-1545: KafkaHealthcheck.register failure

2015-02-22 Thread rekhajoshm
Github user rekhajoshm closed the pull request at:

https://github.com/apache/kafka/pull/47




[jira] [Commented] (KAFKA-1545) java.net.InetAddress.getLocalHost in KafkaHealthcheck.register may fail on some irregular hostnames

2015-02-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14332433#comment-14332433
 ] 

ASF GitHub Bot commented on KAFKA-1545:
---

Github user rekhajoshm closed the pull request at:

https://github.com/apache/kafka/pull/47


> java.net.InetAddress.getLocalHost in KafkaHealthcheck.register may fail on 
> some irregular hostnames
> ---
>
> Key: KAFKA-1545
> URL: https://issues.apache.org/jira/browse/KAFKA-1545
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>  Labels: newbie
> Fix For: 0.9.0
>
>
> For example:
> kafka.server.LogOffsetTest > testGetOffsetsForUnknownTopic FAILED
> java.net.UnknownHostException: guwang-mn2: guwang-mn2: nodename nor 
> servname provided, or not known
> at java.net.InetAddress.getLocalHost(InetAddress.java:1473)
> at kafka.server.KafkaHealthcheck.register(KafkaHealthcheck.scala:59)
> at kafka.server.KafkaHealthcheck.startup(KafkaHealthcheck.scala:45)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:121)
> at kafka.utils.TestUtils$.createServer(TestUtils.scala:130)
> at kafka.server.LogOffsetTest.setUp(LogOffsetTest.scala:53)
> Caused by:
> java.net.UnknownHostException: guwang-mn2: nodename nor servname 
> provided, or not known
> at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
> at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:901)
> at 
> java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1293)
> at java.net.InetAddress.getLocalHost(InetAddress.java:1469)
> ... 5 more





[jira] [Updated] (KAFKA-1416) Unify sendMessages/getMessages in unit tests

2015-02-22 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-1416:
-
Assignee: Flutra Osmani

> Unify sendMessages/getMessages in unit tests
> 
>
> Key: KAFKA-1416
> URL: https://issues.apache.org/jira/browse/KAFKA-1416
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Flutra Osmani
>  Labels: newbie
>
> Multiple unit tests have their own internal functions to send/get messages
> from the brokers. For example:
> sendMessages in ZookeeperConsumerConnectorTest
> produceMessage in UncleanLeaderElectionTest
> sendMessages in FetcherTest
> etc.
> It would be better to unify them in TestUtils.





[jira] [Commented] (KAFKA-1416) Unify sendMessages/getMessages in unit tests

2015-02-22 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14332434#comment-14332434
 ] 

Guozhang Wang commented on KAFKA-1416:
--

[~futtre] I have assigned the ticket to you, thanks.

> Unify sendMessages/getMessages in unit tests
> 
>
> Key: KAFKA-1416
> URL: https://issues.apache.org/jira/browse/KAFKA-1416
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Flutra Osmani
>  Labels: newbie
>
> Multiple unit tests have their own internal functions to send/get messages
> from the brokers. For example:
> sendMessages in ZookeeperConsumerConnectorTest
> produceMessage in UncleanLeaderElectionTest
> sendMessages in FetcherTest
> etc.
> It would be better to unify them in TestUtils.





[GitHub] kafka pull request: KAFKA-1545: KafkaHealthcheck.register failure

2015-02-22 Thread rekhajoshm
GitHub user rekhajoshm opened a pull request:

https://github.com/apache/kafka/pull/48

KAFKA-1545: KafkaHealthcheck.register failure

KAFKA-1545: java.net.InetAddress.getLocalHost in KafkaHealthcheck.register 
may fail on some irregular hostnames

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rekhajoshm/kafka KAFKA-1545

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/48.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #48


commit 3127b9058b19916657c234635437edf8a93123d4
Author: Joshi 
Date:   2015-02-23T00:05:28Z

KAFKA-1545 : KafkaHealthcheck.register failure






[jira] [Commented] (KAFKA-1545) java.net.InetAddress.getLocalHost in KafkaHealthcheck.register may fail on some irregular hostnames

2015-02-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14332435#comment-14332435
 ] 

ASF GitHub Bot commented on KAFKA-1545:
---

GitHub user rekhajoshm opened a pull request:

https://github.com/apache/kafka/pull/48

KAFKA-1545: KafkaHealthcheck.register failure

KAFKA-1545: java.net.InetAddress.getLocalHost in KafkaHealthcheck.register 
may fail on some irregular hostnames

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rekhajoshm/kafka KAFKA-1545

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/48.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #48


commit 3127b9058b19916657c234635437edf8a93123d4
Author: Joshi 
Date:   2015-02-23T00:05:28Z

KAFKA-1545 : KafkaHealthcheck.register failure




> java.net.InetAddress.getLocalHost in KafkaHealthcheck.register may fail on 
> some irregular hostnames
> ---
>
> Key: KAFKA-1545
> URL: https://issues.apache.org/jira/browse/KAFKA-1545
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>  Labels: newbie
> Fix For: 0.9.0
>
>
> For example:
> kafka.server.LogOffsetTest > testGetOffsetsForUnknownTopic FAILED
> java.net.UnknownHostException: guwang-mn2: guwang-mn2: nodename nor 
> servname provided, or not known
> at java.net.InetAddress.getLocalHost(InetAddress.java:1473)
> at kafka.server.KafkaHealthcheck.register(KafkaHealthcheck.scala:59)
> at kafka.server.KafkaHealthcheck.startup(KafkaHealthcheck.scala:45)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:121)
> at kafka.utils.TestUtils$.createServer(TestUtils.scala:130)
> at kafka.server.LogOffsetTest.setUp(LogOffsetTest.scala:53)
> Caused by:
> java.net.UnknownHostException: guwang-mn2: nodename nor servname 
> provided, or not known
> at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
> at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:901)
> at 
> java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1293)
> at java.net.InetAddress.getLocalHost(InetAddress.java:1469)
> ... 5 more





[jira] [Updated] (KAFKA-1545) java.net.InetAddress.getLocalHost in KafkaHealthcheck.register may fail on some irregular hostnames

2015-02-22 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi updated KAFKA-1545:
---
 Reviewer: Guozhang Wang
Affects Version/s: 0.8.1.1
   Status: Patch Available  (was: Open)

> java.net.InetAddress.getLocalHost in KafkaHealthcheck.register may fail on 
> some irregular hostnames
> ---
>
> Key: KAFKA-1545
> URL: https://issues.apache.org/jira/browse/KAFKA-1545
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1.1
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>  Labels: newbie
> Fix For: 0.9.0
>
>
> For example:
> kafka.server.LogOffsetTest > testGetOffsetsForUnknownTopic FAILED
> java.net.UnknownHostException: guwang-mn2: guwang-mn2: nodename nor 
> servname provided, or not known
> at java.net.InetAddress.getLocalHost(InetAddress.java:1473)
> at kafka.server.KafkaHealthcheck.register(KafkaHealthcheck.scala:59)
> at kafka.server.KafkaHealthcheck.startup(KafkaHealthcheck.scala:45)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:121)
> at kafka.utils.TestUtils$.createServer(TestUtils.scala:130)
> at kafka.server.LogOffsetTest.setUp(LogOffsetTest.scala:53)
> Caused by:
> java.net.UnknownHostException: guwang-mn2: nodename nor servname 
> provided, or not known
> at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
> at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:901)
> at 
> java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1293)
> at java.net.InetAddress.getLocalHost(InetAddress.java:1469)
> ... 5 more





Re: Review Request 30763: Second attempt at flush()

2015-02-22 Thread Jay Kreps

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30763/
---

(Updated Feb. 23, 2015, 12:26 a.m.)


Review request for kafka.


Summary (updated)
-

Second attempt at flush()


Bugs: KAFKA-1865
https://issues.apache.org/jira/browse/KAFKA-1865


Repository: kafka


Description
---

KAFKA-1865 Add a flush() method to the producer.


Diffs (updated)
-

  clients/src/main/java/org/apache/kafka/clients/Metadata.java 
e8afecda956303a6ee116499fd443a54c018e17d 
  clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
1fd6917c8a5131254c740abad7f7228a47e3628c 
  clients/src/main/java/org/apache/kafka/clients/producer/MockProducer.java 
84530f2b948f9abd74203db48707e490dd9c81a5 
  clients/src/main/java/org/apache/kafka/clients/producer/Producer.java 
17fe541588d462c68c33f6209717cc4015e9b62f 
  clients/src/main/java/org/apache/kafka/clients/producer/ProducerRecord.java 
4990692efa6f01c62e1d7b05fbf31bec50e398c9 
  
clients/src/main/java/org/apache/kafka/clients/producer/internals/FutureRecordMetadata.java
 4a2da41f47994f778109e3c4107ffd90195f0bae 
  
clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java
 ecfe2144d778a5d9b614df5278b9f0a15637f10b 
  
clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordBatch.java
 dd0af8aee98abed5d4a0dc50989e37888bb353fe 
  clients/src/main/java/org/apache/kafka/common/errors/InterruptException.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/utils/SystemTime.java 
d682bd46ec3826f0a72388cc4ec30e1b1223d0f3 
  clients/src/test/java/org/apache/kafka/clients/producer/BufferPoolTest.java 
4ae43ed47e31ad8052b4348a731da11120968508 
  clients/src/test/java/org/apache/kafka/clients/producer/MetadataTest.java 
743aa7e523dd476949f484bfa4c7fb8a3afd7bf8 
  clients/src/test/java/org/apache/kafka/clients/producer/MockProducerTest.java 
75513b0bdd439329c5771d87436ef83fda853bfb 
  
clients/src/test/java/org/apache/kafka/clients/producer/RecordAccumulatorTest.java
 83338633717cfa4ef7cf2a590b5aa6b9c8cb1dd2 
  core/src/test/scala/integration/kafka/api/ConsumerTest.scala 
2802a399bf599e9530f53b7df72f12702a10d3c4 
  core/src/test/scala/integration/kafka/api/ProducerSendTest.scala 
b15237b76def3b234924280fa3fdb25dbb0cc0dc 
  core/src/test/scala/unit/kafka/utils/TestUtils.scala 
21d0ed2cb7c9459261d3cdc7c21dece5e2079698 

Diff: https://reviews.apache.org/r/30763/diff/


Testing
---


Thanks,

Jay Kreps



[jira] [Updated] (KAFKA-1865) Add a flush() call to the new producer API

2015-02-22 Thread Jay Kreps (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Kreps updated KAFKA-1865:
-
Attachment: KAFKA-1865_2015-02-22_16:26:46.patch

> Add a flush() call to the new producer API
> --
>
> Key: KAFKA-1865
> URL: https://issues.apache.org/jira/browse/KAFKA-1865
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jay Kreps
>Assignee: Jay Kreps
> Attachments: KAFKA-1865.patch, KAFKA-1865_2015-02-21_15:36:54.patch, 
> KAFKA-1865_2015-02-22_16:26:46.patch
>
>
> The postconditions of this would be that any record enqueued prior to flush() 
> would have completed being sent (either successfully or not).
> An open question is whether you can continue sending new records while this 
> call is executing (on other threads).
> We should only do this if it doesn't add inefficiencies for people who don't 
> use it.





[jira] [Commented] (KAFKA-1865) Add a flush() call to the new producer API

2015-02-22 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14332450#comment-14332450
 ] 

Jay Kreps commented on KAFKA-1865:
--

Updated reviewboard https://reviews.apache.org/r/30763/diff/
 against branch trunk

> Add a flush() call to the new producer API
> --
>
> Key: KAFKA-1865
> URL: https://issues.apache.org/jira/browse/KAFKA-1865
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jay Kreps
>Assignee: Jay Kreps
> Attachments: KAFKA-1865.patch, KAFKA-1865_2015-02-21_15:36:54.patch, 
> KAFKA-1865_2015-02-22_16:26:46.patch
>
>
> The postconditions of this would be that any record enqueued prior to flush() 
> would have completed being sent (either successfully or not).
> An open question is whether you can continue sending new records while this 
> call is executing (on other threads).
> We should only do this if it doesn't add inefficiencies for people who don't 
> use it.





[jira] [Commented] (KAFKA-1865) Add a flush() call to the new producer API

2015-02-22 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14332452#comment-14332452
 ] 

Jay Kreps commented on KAFKA-1865:
--

Uploaded a new patch that tracks all incomplete RecordBatches in the
RecordAccumulator and blocks on them in flush().

I was having trouble with test hangs, but I'm not sure whether they are
related to this patch, so I haven't yet validated the tests.

I also improved the producer javadoc while I was in there, since I was adding
docs for flush().
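The approach described here can be sketched roughly as follows (illustrative names, not the actual RecordAccumulator code): keep a thread-safe set of in-flight batches, have each batch remove itself on completion, and have flush() snapshot the set and wait on every member, which gives exactly the stated postcondition without blocking records enqueued after the call:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.concurrent.CompletableFuture;

public class IncompleteBatches {
    // Futures for batches that have been enqueued but not yet completed
    // (successfully or not).
    private final Set<CompletableFuture<Void>> incomplete =
            Collections.synchronizedSet(new HashSet<>());

    // Called when a new batch is created; the future completes when the
    // batch's send finishes, at which point it drops out of the set.
    public CompletableFuture<Void> track() {
        CompletableFuture<Void> done = new CompletableFuture<>();
        incomplete.add(done);
        done.whenComplete((v, t) -> incomplete.remove(done));
        return done;
    }

    // flush(): wait for every batch that was in flight at call time.
    // Batches enqueued afterwards are not waited on, so other threads can
    // keep sending while flush() is executing.
    public void awaitFlushCompletion() {
        List<CompletableFuture<Void>> snapshot;
        synchronized (incomplete) {
            // Copy under the set's own lock, the documented way to iterate
            // a Collections.synchronizedSet.
            snapshot = new ArrayList<>(incomplete);
        }
        for (CompletableFuture<Void> f : snapshot) {
            f.join();
        }
    }
}
```

The snapshot is what makes concurrent sends cheap for non-users: callers that never invoke flush() only pay for the set add/remove.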

> Add a flush() call to the new producer API
> --
>
> Key: KAFKA-1865
> URL: https://issues.apache.org/jira/browse/KAFKA-1865
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jay Kreps
>Assignee: Jay Kreps
> Attachments: KAFKA-1865.patch, KAFKA-1865_2015-02-21_15:36:54.patch, 
> KAFKA-1865_2015-02-22_16:26:46.patch
>
>
> The postconditions of this would be that any record enqueued prior to flush() 
> would have completed being sent (either successfully or not).
> An open question is whether you can continue sending new records while this 
> call is executing (on other threads).
> We should only do this if it doesn't add inefficiencies for people who don't 
> use it.





Re: How to get a JIRA assigned

2015-02-22 Thread Jay Kreps
Anyone know if there is a way to turn this off? Is it possible to configure
JIRA to let anyone assign tickets? Unlike the other lockdown measures, which
prevent spam, this doesn't seem like it could be a spam vector, and it would
be great to make it easier for people.

-Jay

On Sun, Feb 22, 2015 at 3:43 PM, Guozhang Wang  wrote:

> Hi Jonathan,
>
> You need to be added to the "contributor" list before you can be assigned
> JIRAs, and only committers can do that for you.
>
> I have just added you to the list, so you should be able to assign yourself
> now.
>
> Guozhang
>
> On Sun, Feb 22, 2015 at 3:08 PM, Jonathan Rafalski <
> jonathan.rafal...@gmail.com> wrote:
>
> > Hello,
> >
> >   Are there any rights I need in order to be able to assign JIRA
> > tickets to myself? I found what I think is a bug while working on
> > KAFKA-1679, so I opened a ticket and was going to post a review board
> > for both with my solution, but now someone else has attempted a patch.
> > I just want to be able to assign tickets to myself so no time is
> > wasted.
> >
> > If it is something that is granted only after submitting a few accepted
> > patches, can someone at least assign KAFKA-1679 and KAFKA-1972 to me so
> > nobody else attempts the work while I am on it?
> >
> > Thanks!
> >
> > Jonathan.
> >
> > Sent from my iPhone
>
>
>
>
> --
> -- Guozhang
>


[jira] [Commented] (KAFKA-1865) Add a flush() call to the new producer API

2015-02-22 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14332459#comment-14332459
 ] 

Guozhang Wang commented on KAFKA-1865:
--

Does it hang on ConsumerTest? Maybe we can disable it for now while I work on 
fixing the test.

> Add a flush() call to the new producer API
> --
>
> Key: KAFKA-1865
> URL: https://issues.apache.org/jira/browse/KAFKA-1865
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jay Kreps
>Assignee: Jay Kreps
> Attachments: KAFKA-1865.patch, KAFKA-1865_2015-02-21_15:36:54.patch, 
> KAFKA-1865_2015-02-22_16:26:46.patch
>
>
> The postconditions of this would be that any record enqueued prior to flush() 
> would have completed being sent (either successfully or not).
> An open question is whether you can continue sending new records while this 
> call is executing (on other threads).
> We should only do this if it doesn't add inefficiencies for people who don't 
> use it.





[GitHub] kafka pull request: KAFKA-269: run-test.sh async test

2015-02-22 Thread rekhajoshm
GitHub user rekhajoshm opened a pull request:

https://github.com/apache/kafka/pull/49

KAFKA-269: run-test.sh async test

KAFKA-269: run-test.sh async test

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rekhajoshm/kafka KAFKA-269

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/49.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #49


commit 99302738459c1be9166ca9808971643bc220f675
Author: Joshi 
Date:   2015-02-23T01:38:29Z

KAFKA-269: run-test.sh async test






[jira] [Commented] (KAFKA-269) ./system_test/producer_perf/bin/run-test.sh without --async flag does not run

2015-02-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14332475#comment-14332475
 ] 

ASF GitHub Bot commented on KAFKA-269:
--

GitHub user rekhajoshm opened a pull request:

https://github.com/apache/kafka/pull/49

KAFKA-269: run-test.sh async test

KAFKA-269: run-test.sh async test

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rekhajoshm/kafka KAFKA-269

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/49.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #49


commit 99302738459c1be9166ca9808971643bc220f675
Author: Joshi 
Date:   2015-02-23T01:38:29Z

KAFKA-269: run-test.sh async test




> ./system_test/producer_perf/bin/run-test.sh  without --async flag does not run
> --
>
> Key: KAFKA-269
> URL: https://issues.apache.org/jira/browse/KAFKA-269
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, core
>Affects Versions: 0.7
> Environment: Linux 2.6.18-238.1.1.el5 , x86_64 x86_64 x86_64 GNU/Linux
> ext3 file system with raid10 
>Reporter: Praveen Ramachandra
>  Labels: newbie, performance
>
> When I run the tests without the --async option, the tests don't produce
> even a single message.
> The following defaults were changed in server.properties:
> num.threads=tried with 8, 10, 100
> num.partitions=10





[jira] [Updated] (KAFKA-269) ./system_test/producer_perf/bin/run-test.sh without --async flag does not run

2015-02-22 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi updated KAFKA-269:
--
 Reviewer: Jay Kreps
Affects Version/s: 0.8.1.1
   Status: Patch Available  (was: Open)

[~praveen27]
By default, if no --sync option is provided, the producer runs in async mode.

I ran run-test.sh without the --async option and it works on 0.8.2. Or maybe
you can check your zk/topic/reporting-interval?
start producing 200 messages ...
start.time, end.time, compression, message.size, batch.size, 
total.data.sent.in.MB, MB.sec, total.data.sent.in.nMsg, nMsg.sec
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/Users/rjoshi2/Documents/code/kafka-fork/kafka/core/build/dependant-libs-2.10.4/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/Users/rjoshi2/Documents/code/kafka-fork/kafka/core/build/dependant-libs-2.10.4/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2015-02-22 17:35:18:471, 2015-02-22 17:35:24:331, 0, 200, 200, 381.47, 65.0972, 
200, 341296.9283
wait for data to be persisted

The patch: as far as I understand, '--async' is not a recognized option; only
--sync is.
Thanks.

> ./system_test/producer_perf/bin/run-test.sh  without --async flag does not run
> --
>
> Key: KAFKA-269
> URL: https://issues.apache.org/jira/browse/KAFKA-269
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, core
>Affects Versions: 0.8.1.1, 0.7
> Environment: Linux 2.6.18-238.1.1.el5 , x86_64 x86_64 x86_64 GNU/Linux
> ext3 file system with raid10 
>Reporter: Praveen Ramachandra
>  Labels: newbie, performance
>
> When I run the tests without the --async option, the tests don't produce
> even a single message.
> The following defaults were changed in server.properties:
> num.threads=tried with 8, 10, 100
> num.partitions=10





[GitHub] kafka pull request: KAFKA-724: auto socket buffer set

2015-02-22 Thread rekhajoshm
GitHub user rekhajoshm opened a pull request:

https://github.com/apache/kafka/pull/50

KAFKA-724: auto socket buffer set

KAFKA-724: Allow automatic socket.send.buffer from operating system.
If socket.receive.buffer.bytes/socket.send.buffer.bytes is set to -1,
the OS defaults are used and the buffers are not explicitly set.
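The conditional behavior the patch describes can be sketched like this (illustrative Java, not the actual Kafka change; the USE_OS_DEFAULT constant and method names are made up):

```java
import java.net.Socket;
import java.net.SocketException;

// Sketch of the KAFKA-724 proposal: treat -1 as "use the operating system
// default" and skip the explicit setSendBufferSize call entirely, so the
// kernel's (possibly autotuned) buffer size stays in effect.
public class SocketBufferConfig {
    static final int USE_OS_DEFAULT = -1;

    // Pure decision helper: should we override the OS-chosen buffer size?
    static boolean shouldSetBuffer(int configuredBytes) {
        return configuredBytes != USE_OS_DEFAULT;
    }

    static void applySendBuffer(Socket socket, int configuredBytes) throws SocketException {
        if (shouldSetBuffer(configuredBytes)) {
            socket.setSendBufferSize(configuredBytes);
        }
        // Otherwise do nothing: the OS default remains in place.
    }
}
```

Note that setSendBufferSize is only a hint to the kernel anyway; the point of the patch is to not send that hint at all when the operator asks for OS defaults.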

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rekhajoshm/kafka KAFKA-724

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/50.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #50


commit 118fdc3cdba2711d5d4389609da1b1fe759c5cab
Author: Joshi 
Date:   2015-02-23T02:01:58Z

KAFKA-724: auto socket buffer set




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-724) Allow automatic socket.send.buffer from operating system

2015-02-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14332482#comment-14332482
 ] 

ASF GitHub Bot commented on KAFKA-724:
--

GitHub user rekhajoshm opened a pull request:

https://github.com/apache/kafka/pull/50

KAFKA-724: auto socket buffer set

KAFKA-724: Allow automatic socket.send.buffer from operating system.
If socket.receive.buffer.bytes/socket.send.buffer.bytes is set to -1,
the OS defaults are used and the buffers are not explicitly set.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rekhajoshm/kafka KAFKA-724

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/50.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #50


commit 118fdc3cdba2711d5d4389609da1b1fe759c5cab
Author: Joshi 
Date:   2015-02-23T02:01:58Z

KAFKA-724: auto socket buffer set




> Allow automatic socket.send.buffer from operating system
> 
>
> Key: KAFKA-724
> URL: https://issues.apache.org/jira/browse/KAFKA-724
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Pablo Barrera
>  Labels: newbie
>
> To do this, don't call socket().setXXXBufferSize(). This can be 
> controlled by a configuration parameter: if the value of socket.send.buffer or 
> similar settings is -1, don't call socket().setXXXBufferSize()





[jira] [Updated] (KAFKA-724) Allow automatic socket.send.buffer from operating system

2015-02-22 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi updated KAFKA-724:
--
 Reviewer: Jay Kreps
Affects Version/s: 0.8.2.0
   Status: Patch Available  (was: Open)

> Allow automatic socket.send.buffer from operating system
> 
>
> Key: KAFKA-724
> URL: https://issues.apache.org/jira/browse/KAFKA-724
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.2.0
>Reporter: Pablo Barrera
>  Labels: newbie
>
> To do this, don't call socket().setXXXBufferSize(). This can be 
> controlled by a configuration parameter: if the value of socket.send.buffer or 
> similar settings is -1, don't call socket().setXXXBufferSize()





[jira] [Commented] (KAFKA-1907) ZkClient can block controlled shutdown indefinitely

2015-02-22 Thread jaikiran pai (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14332503#comment-14332503
 ] 

jaikiran pai commented on KAFKA-1907:
-

Hi [~nehanarkhede], I submitted a patch for this to the ZkClient project and 
they have merged it https://github.com/sgroschupf/zkclient/pull/29. They are 
willing to release a new version of that library. However, I've been away the 
past couple of weeks so couldn't take this forward. I'll look into this later 
this weekend.
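Since ZkClient's blocking calls offer no timeout of their own, one generic way to bound them from the caller's side is to run the call on a helper thread and time out the wait. This is only a sketch of that pattern (names are made up; it is not the fix merged into ZkClient):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Sketch: wrap a blocking call (e.g. ZkUtils.getController, which can wait
// forever in waitUntilConnected) with a caller-enforced timeout, so that
// controlled shutdown can give up instead of blocking indefinitely.
public class BoundedCall {
    static <T> T callWithTimeout(Callable<T> blockingCall, long timeoutMs, T fallback) {
        ExecutorService pool = Executors.newSingleThreadExecutor(r -> {
            Thread t = new Thread(r, "bounded-zk-call");
            t.setDaemon(true); // don't let the helper thread block JVM shutdown
            return t;
        });
        try {
            return pool.submit(blockingCall).get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            return fallback; // give up: proceed with shutdown without the answer
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdownNow(); // interrupt the still-blocked call, if any
        }
    }
}
```

The real fix adds timeouts inside the library, which is cleaner; the wrapper above is what a caller can do in the meantime.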


> ZkClient can block controlled shutdown indefinitely
> ---
>
> Key: KAFKA-1907
> URL: https://issues.apache.org/jira/browse/KAFKA-1907
> Project: Kafka
>  Issue Type: Bug
>  Components: core, zkclient
>Affects Versions: 0.8.2.0
>Reporter: Ewen Cheslack-Postava
>Assignee: jaikiran pai
> Attachments: KAFKA-1907.patch
>
>
> There are some calls to ZkClient via ZkUtils in 
> KafkaServer.controlledShutdown() that can block indefinitely because they 
> internally call waitUntilConnected. The ZkClient API doesn't provide an 
> alternative with timeouts, so fixing this will require enforcing timeouts in 
> some other way.
> This may be a more general issue if there are any non daemon threads that 
> also call ZkUtils methods.
> Stacktrace showing the issue:
> {code}
> "Thread-2" prio=10 tid=0xb3305000 nid=0x4758 waiting on condition [0x6ad69000]
>java.lang.Thread.State: TIMED_WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x70a93368> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.parkUntil(LockSupport.java:267)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitUntil(AbstractQueuedSynchronizer.java:2130)
> at org.I0Itec.zkclient.ZkClient.waitForKeeperState(ZkClient.java:636)
> at org.I0Itec.zkclient.ZkClient.waitUntilConnected(ZkClient.java:619)
> at org.I0Itec.zkclient.ZkClient.waitUntilConnected(ZkClient.java:615)
> at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:679)
> at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:766)
> at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:761)
> at kafka.utils.ZkUtils$.readDataMaybeNull(ZkUtils.scala:456)
> at kafka.utils.ZkUtils$.getController(ZkUtils.scala:65)
> at 
> kafka.server.KafkaServer.kafka$server$KafkaServer$$controlledShutdown(KafkaServer.scala:194)
> at 
> kafka.server.KafkaServer$$anonfun$shutdown$1.apply$mcV$sp(KafkaServer.scala:269)
> at kafka.utils.Utils$.swallow(Utils.scala:172)
> at kafka.utils.Logging$class.swallowWarn(Logging.scala:92)
> at kafka.utils.Utils$.swallowWarn(Utils.scala:45)
> at kafka.utils.Logging$class.swallow(Logging.scala:94)
> at kafka.utils.Utils$.swallow(Utils.scala:45)
> at kafka.server.KafkaServer.shutdown(KafkaServer.scala:269)
> at 
> kafka.server.KafkaServerStartable.shutdown(KafkaServerStartable.scala:42)
> at kafka.Kafka$$anon$1.run(Kafka.scala:42)
> {code}





[jira] [Created] (KAFKA-1976) transient unit test failure in ProducerFailureHandlingTest.testNotEnoughReplicasAfterBrokerShutdown

2015-02-22 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-1976:
--

 Summary: transient unit test failure in 
ProducerFailureHandlingTest.testNotEnoughReplicasAfterBrokerShutdown
 Key: KAFKA-1976
 URL: https://issues.apache.org/jira/browse/KAFKA-1976
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.3
Reporter: Jun Rao


Saw the following failure a few times.

kafka.api.test.ProducerFailureHandlingTest > 
testNotEnoughReplicasAfterBrokerShutdown FAILED
org.scalatest.junit.JUnitTestFailedError: Expected 
NotEnoughReplicasException when producing to topic with fewer brokers than 
min.insync.replicas
at 
org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:101)
at 
org.scalatest.junit.JUnit3Suite.newAssertionFailedException(JUnit3Suite.scala:149)
at org.scalatest.Assertions$class.fail(Assertions.scala:711)
at org.scalatest.junit.JUnit3Suite.fail(JUnit3Suite.scala:149)
at 
kafka.api.test.ProducerFailureHandlingTest.testNotEnoughReplicasAfterBrokerShutdown(ProducerFailureHandlingTest.scala:352)






[jira] [Commented] (KAFKA-1976) transient unit test failure in ProducerFailureHandlingTest.testNotEnoughReplicasAfterBrokerShutdown

2015-02-22 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14332513#comment-14332513
 ] 

Jun Rao commented on KAFKA-1976:


I suspect the issue is the following: Since broker failure is handled 
asynchronously, we can get either NotEnoughReplicasException or 
NotEnoughReplicasAfterAppendException.

> transient unit test failure in 
> ProducerFailureHandlingTest.testNotEnoughReplicasAfterBrokerShutdown
> ---
>
> Key: KAFKA-1976
> URL: https://issues.apache.org/jira/browse/KAFKA-1976
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.3
>Reporter: Jun Rao
>
> Saw the following failure a few times.
> kafka.api.test.ProducerFailureHandlingTest > 
> testNotEnoughReplicasAfterBrokerShutdown FAILED
> org.scalatest.junit.JUnitTestFailedError: Expected 
> NotEnoughReplicasException when producing to topic with fewer brokers than 
> min.insync.replicas
> at 
> org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:101)
> at 
> org.scalatest.junit.JUnit3Suite.newAssertionFailedException(JUnit3Suite.scala:149)
> at org.scalatest.Assertions$class.fail(Assertions.scala:711)
> at org.scalatest.junit.JUnit3Suite.fail(JUnit3Suite.scala:149)
> at 
> kafka.api.test.ProducerFailureHandlingTest.testNotEnoughReplicasAfterBrokerShutdown(ProducerFailureHandlingTest.scala:352)





[jira] [Commented] (KAFKA-1976) transient unit test failure in ProducerFailureHandlingTest.testNotEnoughReplicasAfterBrokerShutdown

2015-02-22 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14332514#comment-14332514
 ] 

Sriharsha Chintalapani commented on KAFKA-1976:
---

[~junrao] I covered this issue as part of this JIRA 
https://issues.apache.org/jira/browse/KAFKA-1887

> transient unit test failure in 
> ProducerFailureHandlingTest.testNotEnoughReplicasAfterBrokerShutdown
> ---
>
> Key: KAFKA-1976
> URL: https://issues.apache.org/jira/browse/KAFKA-1976
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.3
>Reporter: Jun Rao
>
> Saw the following failure a few times.
> kafka.api.test.ProducerFailureHandlingTest > 
> testNotEnoughReplicasAfterBrokerShutdown FAILED
> org.scalatest.junit.JUnitTestFailedError: Expected 
> NotEnoughReplicasException when producing to topic with fewer brokers than 
> min.insync.replicas
> at 
> org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:101)
> at 
> org.scalatest.junit.JUnit3Suite.newAssertionFailedException(JUnit3Suite.scala:149)
> at org.scalatest.Assertions$class.fail(Assertions.scala:711)
> at org.scalatest.junit.JUnit3Suite.fail(JUnit3Suite.scala:149)
> at 
> kafka.api.test.ProducerFailureHandlingTest.testNotEnoughReplicasAfterBrokerShutdown(ProducerFailureHandlingTest.scala:352)





[jira] [Commented] (KAFKA-1976) transient unit test failure in ProducerFailureHandlingTest.testNotEnoughReplicasAfterBrokerShutdown

2015-02-22 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14332515#comment-14332515
 ] 

Gwen Shapira commented on KAFKA-1976:
-

Thanks [~harsha_ch].

As I mentioned in KAFKA-1887, both are legitimate exceptions for the test, 
since they result from slightly different timings.
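The fix amounts to a test assertion that accepts either exception. A minimal sketch of that check, with stand-in exception classes rather than Kafka's real NotEnoughReplicasException / NotEnoughReplicasAfterAppendException:

```java
// Sketch of the test-side fix: because the broker shutdown is handled
// asynchronously, the produce failure can surface as either exception
// depending on timing, so the test should treat both as a pass.
public class EitherExceptionCheck {
    // Stand-ins for Kafka's real exception types.
    static class NotEnoughReplicas extends RuntimeException {}
    static class NotEnoughReplicasAfterAppend extends RuntimeException {}

    static boolean isAcceptableFailure(Throwable t) {
        return t instanceof NotEnoughReplicas
            || t instanceof NotEnoughReplicasAfterAppend;
    }
}
```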

> transient unit test failure in 
> ProducerFailureHandlingTest.testNotEnoughReplicasAfterBrokerShutdown
> ---
>
> Key: KAFKA-1976
> URL: https://issues.apache.org/jira/browse/KAFKA-1976
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.3
>Reporter: Jun Rao
>
> Saw the following failure a few times.
> kafka.api.test.ProducerFailureHandlingTest > 
> testNotEnoughReplicasAfterBrokerShutdown FAILED
> org.scalatest.junit.JUnitTestFailedError: Expected 
> NotEnoughReplicasException when producing to topic with fewer brokers than 
> min.insync.replicas
> at 
> org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:101)
> at 
> org.scalatest.junit.JUnit3Suite.newAssertionFailedException(JUnit3Suite.scala:149)
> at org.scalatest.Assertions$class.fail(Assertions.scala:711)
> at org.scalatest.junit.JUnit3Suite.fail(JUnit3Suite.scala:149)
> at 
> kafka.api.test.ProducerFailureHandlingTest.testNotEnoughReplicasAfterBrokerShutdown(ProducerFailureHandlingTest.scala:352)






Re: Review Request 30763: Second attempt at flush()

2015-02-22 Thread Jiangjie Qin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30763/#review73505
---

Ship it!


Thanks Jay. Looks good to me. Just a minor comment.


clients/src/test/java/org/apache/kafka/clients/producer/RecordAccumulatorTest.java


It probably does not matter, but here we are only sending 10 messages, which 
can fit into one batch. Should we also test the case where the accumulator has more 
than one batch for a partition?


- Jiangjie Qin


On Feb. 23, 2015, 12:26 a.m., Jay Kreps wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30763/
> ---
> 
> (Updated Feb. 23, 2015, 12:26 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1865
> https://issues.apache.org/jira/browse/KAFKA-1865
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-1865 Add a flush() method to the producer.
> 
> 
> Diffs
> -
> 
>   clients/src/main/java/org/apache/kafka/clients/Metadata.java 
> e8afecda956303a6ee116499fd443a54c018e17d 
>   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
> 1fd6917c8a5131254c740abad7f7228a47e3628c 
>   clients/src/main/java/org/apache/kafka/clients/producer/MockProducer.java 
> 84530f2b948f9abd74203db48707e490dd9c81a5 
>   clients/src/main/java/org/apache/kafka/clients/producer/Producer.java 
> 17fe541588d462c68c33f6209717cc4015e9b62f 
>   clients/src/main/java/org/apache/kafka/clients/producer/ProducerRecord.java 
> 4990692efa6f01c62e1d7b05fbf31bec50e398c9 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/FutureRecordMetadata.java
>  4a2da41f47994f778109e3c4107ffd90195f0bae 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java
>  ecfe2144d778a5d9b614df5278b9f0a15637f10b 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordBatch.java
>  dd0af8aee98abed5d4a0dc50989e37888bb353fe 
>   
> clients/src/main/java/org/apache/kafka/common/errors/InterruptException.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/common/utils/SystemTime.java 
> d682bd46ec3826f0a72388cc4ec30e1b1223d0f3 
>   clients/src/test/java/org/apache/kafka/clients/producer/BufferPoolTest.java 
> 4ae43ed47e31ad8052b4348a731da11120968508 
>   clients/src/test/java/org/apache/kafka/clients/producer/MetadataTest.java 
> 743aa7e523dd476949f484bfa4c7fb8a3afd7bf8 
>   
> clients/src/test/java/org/apache/kafka/clients/producer/MockProducerTest.java 
> 75513b0bdd439329c5771d87436ef83fda853bfb 
>   
> clients/src/test/java/org/apache/kafka/clients/producer/RecordAccumulatorTest.java
>  83338633717cfa4ef7cf2a590b5aa6b9c8cb1dd2 
>   core/src/test/scala/integration/kafka/api/ConsumerTest.scala 
> 2802a399bf599e9530f53b7df72f12702a10d3c4 
>   core/src/test/scala/integration/kafka/api/ProducerSendTest.scala 
> b15237b76def3b234924280fa3fdb25dbb0cc0dc 
>   core/src/test/scala/unit/kafka/utils/TestUtils.scala 
> 21d0ed2cb7c9459261d3cdc7c21dece5e2079698 
> 
> Diff: https://reviews.apache.org/r/30763/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Jay Kreps
> 
>



Re: Review Request 31260: Patch for kafka-1971

2015-02-22 Thread Jun Rao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31260/
---

(Updated Feb. 23, 2015, 5:11 a.m.)


Review request for kafka.


Bugs: kafka-1971
https://issues.apache.org/jira/browse/kafka-1971


Repository: kafka


Description (updated)
---

address review comments


remove unused util method


add unit test


Diffs (updated)
-

  core/src/main/scala/kafka/server/KafkaHealthcheck.scala 
4acdd70fe9c1ee78d6510741006c2ece65450671 
  core/src/main/scala/kafka/server/KafkaServer.scala 
7e5ddcb9be8fcef3df6ebc82a13ef44ef95f73ae 
  core/src/main/scala/kafka/utils/ZkUtils.scala 
c78a1b6ff4213e13cabccd21a7b40cfeddbfb237 
  core/src/test/scala/unit/kafka/server/ConflictBrokerRegistrationTest.scala 
PRE-CREATION 

Diff: https://reviews.apache.org/r/31260/diff/


Testing
---


Thanks,

Jun Rao



[jira] [Updated] (KAFKA-1971) starting a broker with a conflicting id will delete the previous broker registration

2015-02-22 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-1971:
---
Attachment: kafka-1971_2015-02-22_21:11:52.patch

> starting a broker with a conflicting id will delete the previous broker 
> registration
> 
>
> Key: KAFKA-1971
> URL: https://issues.apache.org/jira/browse/KAFKA-1971
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Jun Rao
>Assignee: Jun Rao
>Priority: Blocker
> Fix For: 0.8.2.1
>
> Attachments: kafka-1971_2015-02-22_21:11:52.patch
>
>
> This issue can be easily reproduced by the following steps.
> 1. Start broker 1.
> 2. Start broker 2 with the same id as broker 1 (configure different port, log 
> dir).
> Broker 2's registration will fail. However, broker 1's registration in ZK is 
> now deleted.





[jira] [Commented] (KAFKA-1971) starting a broker with a conflicting id will delete the previous broker registration

2015-02-22 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14333012#comment-14333012
 ] 

Jun Rao commented on KAFKA-1971:


Updated reviewboard https://reviews.apache.org/r/31260/diff/
 against branch origin/trunk

> starting a broker with a conflicting id will delete the previous broker 
> registration
> 
>
> Key: KAFKA-1971
> URL: https://issues.apache.org/jira/browse/KAFKA-1971
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Jun Rao
>Assignee: Jun Rao
>Priority: Blocker
> Fix For: 0.8.2.1
>
> Attachments: kafka-1971_2015-02-22_21:11:52.patch
>
>
> This issue can be easily reproduced by the following steps.
> 1. Start broker 1.
> 2. Start broker 2 with the same id as broker 1 (configure different port, log 
> dir).
> Broker 2's registration will fail. However, broker 1's registration in ZK is 
> now deleted.





[jira] [Commented] (KAFKA-1971) starting a broker with a conflicting id will delete the previous broker registration

2015-02-22 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14333015#comment-14333015
 ] 

Jun Rao commented on KAFKA-1971:


It seems that the reason why we explicitly delete the broker registration path 
in ZK is to avoid the deadlock in shutting down the controller, because the 
controller message queue is finite. We have since made the controller message 
queue unbounded. So, we don't need to explicitly deregister the broker any 
more. 

Attached is a new patch.

> starting a broker with a conflicting id will delete the previous broker 
> registration
> 
>
> Key: KAFKA-1971
> URL: https://issues.apache.org/jira/browse/KAFKA-1971
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Jun Rao
>Assignee: Jun Rao
>Priority: Blocker
> Fix For: 0.8.2.1
>
> Attachments: kafka-1971_2015-02-22_21:11:52.patch
>
>
> This issue can be easily reproduced by the following steps.
> 1. Start broker 1.
> 2. Start broker 2 with the same id as broker 1 (configure different port, log 
> dir).
> Broker 2's registration will fail. However, broker 1's registration in ZK is 
> now deleted.





[jira] [Commented] (KAFKA-1680) JmxTool exits if no arguments are given

2015-02-22 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14333020#comment-14333020
 ] 

Ewen Cheslack-Postava commented on KAFKA-1680:
--

Pretty sure this is still a bug. There's a default value for the JMX URL 
setting, so it should be valid to run without passing any parameters. Of course 
an alternative fix would be to remove that default value or make some other 
setting required.

That said, the patch I posted addresses a much larger, far-reaching bug across 
all the command line tools -- a bunch were incorrectly checking for zero 
command line arguments and exiting instead of allowing the options 
parser/validation to determine when there were errors in the command line 
arguments.
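The pattern Ewen describes can be sketched as follows (illustrative only: the option name and default URL are made up, and the real tools use an option-parsing library). Zero arguments is a valid invocation because defaults apply; the parser itself reports genuinely bad input:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: instead of `if (args.length == 0) exit(1)`, apply defaults and let
// option validation decide what is actually an error.
public class ToolArgs {
    static Map<String, String> parse(String[] args) {
        Map<String, String> opts = new HashMap<>();
        // Default value means running with no arguments at all is valid.
        opts.put("--jmx-url", "service:jmx:rmi:///jndi/rmi://:9999/jmxrmi");
        for (int i = 0; i < args.length; i += 2) {
            if (!opts.containsKey(args[i])) {
                throw new IllegalArgumentException("Unrecognized option: " + args[i]);
            }
            if (i + 1 >= args.length) {
                throw new IllegalArgumentException("Missing value for " + args[i]);
            }
            opts.put(args[i], args[i + 1]);
        }
        return opts;
    }
}
```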

> JmxTool exits if no arguments are given
> ---
>
> Key: KAFKA-1680
> URL: https://issues.apache.org/jira/browse/KAFKA-1680
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Ryan Berdeen
>Assignee: Ewen Cheslack-Postava
>Priority: Minor
>  Labels: newbie
> Attachments: KAFKA-1680.patch
>
>
> JmxTool has no required arguments, but it exits if no arguments are provided. 
> You can work around this by passing a non-option argument, which will be 
> ignored, e.g.{{./bin/kafka-run-class.sh kafka.tools.JmxTool xxx}}.
> It looks like this was broken in KAFKA-1291 / 
> 6b0ae4bba0d0f8e4c8da19de65a8f03f162bec39





[jira] [Commented] (KAFKA-1969) NPE in unit test for new consumer

2015-02-22 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14333034#comment-14333034
 ] 

Guozhang Wang commented on KAFKA-1969:
--

Thanks Jay. I am working on some of those issues as part of KAFKA-1910 and will 
definitely let you know if I got something.

> NPE in unit test for new consumer
> -
>
> Key: KAFKA-1969
> URL: https://issues.apache.org/jira/browse/KAFKA-1969
> Project: Kafka
>  Issue Type: Bug
>Reporter: Neha Narkhede
>  Labels: newbie
> Attachments: stack.out
>
>
> {code}
> kafka.api.ConsumerTest > testConsumptionWithBrokerFailures FAILED
> java.lang.NullPointerException
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.ensureCoordinatorReady(KafkaConsumer.java:1238)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.initiateCoordinatorRequest(KafkaConsumer.java:1189)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.commit(KafkaConsumer.java:777)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.commit(KafkaConsumer.java:816)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:704)
> at 
> kafka.api.ConsumerTest.consumeWithBrokerFailures(ConsumerTest.scala:167)
> at 
> kafka.api.ConsumerTest.testConsumptionWithBrokerFailures(ConsumerTest.scala:152)
> {code}





Re: Review Request 29379: Patch for KAFKA-1788

2015-02-22 Thread Ewen Cheslack-Postava


> On Jan. 6, 2015, 6:43 p.m., Parth Brahmbhatt wrote:
> > clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java,
> >  line 225
> > 
> >
> > sender.completeBatch() is only called as part of produce response 
> > handling or on disconnect, neither of which will ever be invoked when there is 
> > no broker. I could add the sender as a member of the record accumulator, or pass it 
> > as the callback arg as part of the ready() method. All of these options are too 
> > hacky.
> > 
> > Let me know if you see some other alternative.

Agree that those options are hacky. Maybe return the information in 
ReadyCheckResult? RecordAccumulator.ready() is only called from Sender.run(), 
which could then handle calling completeBatch() on any expired batches. This 
also has the benefit of integrating with the existing retry logic, although I'm 
not sure if we want to treat this as a retriable error or not.


- Ewen


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29379/#review66879
---


On Jan. 6, 2015, 6:44 p.m., Parth Brahmbhatt wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/29379/
> ---
> 
> (Updated Jan. 6, 2015, 6:44 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1788
> https://issues.apache.org/jira/browse/KAFKA-1788
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Merge remote-tracking branch 'origin/trunk' into KAFKA-1788
> 
> 
> KAFKA-1788: addressed Ewen's comments.
> 
> 
> Diffs
> -
> 
>   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
> f61efb35db7e0de590556e6a94a7b5cb850cdae9 
>   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
> a893d88c2f4e21509b6c70d6817b4b2cdd0fd657 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java
>  c15485d1af304ef53691d478f113f332fe67af77 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordBatch.java
>  dd0af8aee98abed5d4a0dc50989e37888bb353fe 
>   
> clients/src/test/java/org/apache/kafka/clients/producer/RecordAccumulatorTest.java
>  2c9932401d573549c40f16fda8c4e3e11309cb85 
>   clients/src/test/java/org/apache/kafka/clients/producer/SenderTest.java 
> ef2ca65cabe97b909f17b62027a1bb06827e88fe 
> 
> Diff: https://reviews.apache.org/r/29379/diff/
> 
> 
> Testing
> ---
> 
> Unit test added. 
> 
> 
> Thanks,
> 
> Parth Brahmbhatt
> 
>



Re: Review Request 29379: Patch for KAFKA-1788

2015-02-22 Thread Ewen Cheslack-Postava

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29379/#review73512
---


Minor comments, I think the biggest issue remaining is getting 
Sender.completeBatch called since that's the only way errors and retries will 
be properly handled. Left a suggestion about a possible approach to do that 
while still maintaining the current layering of Sender and RecordAccumulator.


clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java


An hour seems awfully long; what's the reasoning behind this default?



clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java


Add final


- Ewen Cheslack-Postava


On Jan. 6, 2015, 6:44 p.m., Parth Brahmbhatt wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/29379/
> ---
> 
> (Updated Jan. 6, 2015, 6:44 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1788
> https://issues.apache.org/jira/browse/KAFKA-1788
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Merge remote-tracking branch 'origin/trunk' into KAFKA-1788
> 
> 
> KAFKA-1788: addressed Ewen's comments.
> 
> 
> Diffs
> -
> 
>   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
> f61efb35db7e0de590556e6a94a7b5cb850cdae9 
>   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
> a893d88c2f4e21509b6c70d6817b4b2cdd0fd657 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java
>  c15485d1af304ef53691d478f113f332fe67af77 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordBatch.java
>  dd0af8aee98abed5d4a0dc50989e37888bb353fe 
>   
> clients/src/test/java/org/apache/kafka/clients/producer/RecordAccumulatorTest.java
>  2c9932401d573549c40f16fda8c4e3e11309cb85 
>   clients/src/test/java/org/apache/kafka/clients/producer/SenderTest.java 
> ef2ca65cabe97b909f17b62027a1bb06827e88fe 
> 
> Diff: https://reviews.apache.org/r/29379/diff/
> 
> 
> Testing
> ---
> 
> Unit test added. 
> 
> 
> Thanks,
> 
> Parth Brahmbhatt
> 
>



[jira] [Commented] (KAFKA-1788) producer record can stay in RecordAccumulator forever if leader is no available

2015-02-22 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14333041#comment-14333041
 ] 

Ewen Cheslack-Postava commented on KAFKA-1788:
--

Ok, I'll try to clear up a few issues.

I think that just making sure we make NetworkClient.leastLoadedNode eventually 
returns all nodes isn't sufficient. I was just raising another case where this 
issue could occur. The reason this isn't sufficient for the original case is 
due to the type of situation [~Bmis13] raises. If you have a temporary network 
outage to a single broker (e.g. due to firewall misconfiguration or just a 
network partition), it may still correctly be listed as leader. If 
holding the data in the RecordAccumulator only affected data sent to that one 
broker, then as [~jkreps] points out, we could potentially get away with just 
holding on to the messages indefinitely since errors should manifest in other 
ways. (I think it's *better* to have the timeouts, but not strictly necessary).

However, since the RecordAccumulator is a shared resource, holding onto these 
messages also means you're going to block sending data to other brokers once 
your buffer fills up with data for the unreachable broker. Adding timeouts at 
least ensures that messages for these other brokers will eventually get a chance 
to be sent, even if there are periods where new sends are automatically rejected 
because the buffer is already full. So [~parth.brahmbhatt], I think the 
approach you're trying to take in the patch is definitely the right thing to 
do, and I agree with [~Bmis13] that the error record metrics definitely should 
(eventually) be increasing.

More generally -- yes, pretty much everything that could potentially block 
things up for a long time/indefinitely *should* have a timeout. And in a lot of 
cases this is true even if the operation will eventually timeout "naturally", 
e.g. due to a TCP timeout. It's better to have control over the timeout (even 
if we highly recommend using the default values) than rely on settings from 
other systems, especially when they may be adjusted in unexpected ways outside 
of our control. This is a pervasive concern that we should keep an eye out for 
with new code, and try to file JIRAs for as we find missing timeouts in 
existing code.

Given the above, I think the options for controlling memory usage may not be 
very good for some use cases -- we've been saying people should use a single 
producer where possible since it's very fast and you actually benefit from 
sharing the network thread since you can collect all data for all 
topic-partitions destined for the same broker into a single request. But it 
turns out that sharing the underlying resources (the buffer) can lead to 
starvation for some topic-partitions when it shouldn't really be necessary. 
Would it make sense to allow a per-topic, or even per-partition limit on memory 
usage? So the effect would be similar to fetch.message.max.bytes for the 
consumer, where your actual memory usage cap is n times the value, where n is 
the number of topic-partitions you're working with? It could also be by broker, 
but I think that leads to much less intuitive and harder to predict behavior. 
If people think that's a good idea we can file an additional issue for that.
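As a hypothetical design sketch of the per-topic limit suggested above (no such config exists in Kafka today; PerTopicAccountant and maxBytesPerTopic are invented names), the accumulator would account buffer usage per topic and refuse an append for a capped topic instead of blocking the whole shared pool:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical per-topic memory accounting; not part of the Kafka producer.
public class PerTopicAccountant {
    private final long maxBytesPerTopic;
    private final Map<String, Long> used = new HashMap<>();

    PerTopicAccountant(long maxBytesPerTopic) {
        this.maxBytesPerTopic = maxBytesPerTopic;
    }

    // Reserve bytes for a topic; returns false once the topic hits its cap,
    // so one unreachable leader cannot starve unrelated topic-partitions.
    synchronized boolean tryReserve(String topic, long bytes) {
        long current = used.getOrDefault(topic, 0L);
        if (current + bytes > maxBytesPerTopic) return false;
        used.put(topic, current + bytes);
        return true;
    }

    // Return bytes when a batch is sent successfully, expired, or dropped.
    synchronized void release(String topic, long bytes) {
        used.merge(topic, -bytes, Long::sum);
    }

    public static void main(String[] args) {
        PerTopicAccountant acct = new PerTopicAccountant(100);
        System.out.println(acct.tryReserve("a", 60)); // true
        System.out.println(acct.tryReserve("a", 50)); // false: "a" is capped
        System.out.println(acct.tryReserve("b", 80)); // true: "b" unaffected
    }
}
```

This is the fairness property the comment is after: the cap isolates topics from each other rather than bounding only the aggregate.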

> producer record can stay in RecordAccumulator forever if leader is not 
> available
> ---
>
> Key: KAFKA-1788
> URL: https://issues.apache.org/jira/browse/KAFKA-1788
> Project: Kafka
>  Issue Type: Bug
>  Components: core, producer 
>Affects Versions: 0.8.2.0
>Reporter: Jun Rao
>Assignee: Jun Rao
>  Labels: newbie++
> Fix For: 0.8.3
>
> Attachments: KAFKA-1788.patch, KAFKA-1788_2015-01-06_13:42:37.patch, 
> KAFKA-1788_2015-01-06_13:44:41.patch
>
>
> In the new producer, when a partition has no leader for a long time (e.g., 
> all replicas are down), the records for that partition will stay in the 
> RecordAccumulator until the leader is available. This may cause the 
> bufferpool to be full and the callback for the produced message to block for 
> a long time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 30763: Second attempt at flush()

2015-02-22 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30763/#review73475
---



clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java


Add @throws KafkaException



clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordBatch.java


One possible optimization is to keep a RecordMetadata field in the 
FutureRecordMetadata, so that the value() call only creates the object once. 
Here we could then call

callback.onCompletion(thunk.future.value());
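The memoization being suggested can be sketched as follows; SimpleMetadata and MemoizedFuture are stand-ins for RecordMetadata and FutureRecordMetadata, and only the caching pattern is the point:

```java
// Stand-in for RecordMetadata: an immutable value built from the batch result.
class SimpleMetadata {
    final long offset;
    SimpleMetadata(long offset) { this.offset = offset; }
}

public class MemoizedFuture {
    private final long offset;
    private SimpleMetadata cached; // built lazily, at most once

    MemoizedFuture(long offset) { this.offset = offset; }

    // Repeated calls return the same object instead of allocating a fresh
    // metadata instance each time (e.g. once for the user's future.get()
    // path and once for the callback path).
    SimpleMetadata value() {
        if (cached == null) cached = new SimpleMetadata(offset);
        return cached;
    }

    public static void main(String[] args) {
        MemoizedFuture f = new MemoizedFuture(42L);
        System.out.println(f.value() == f.value()); // true: same instance
    }
}
```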


- Guozhang Wang


On Feb. 23, 2015, 12:26 a.m., Jay Kreps wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30763/
> ---
> 
> (Updated Feb. 23, 2015, 12:26 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1865
> https://issues.apache.org/jira/browse/KAFKA-1865
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-1865 Add a flush() method to the producer.
> 
> 
> Diffs
> -
> 
>   clients/src/main/java/org/apache/kafka/clients/Metadata.java 
> e8afecda956303a6ee116499fd443a54c018e17d 
>   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
> 1fd6917c8a5131254c740abad7f7228a47e3628c 
>   clients/src/main/java/org/apache/kafka/clients/producer/MockProducer.java 
> 84530f2b948f9abd74203db48707e490dd9c81a5 
>   clients/src/main/java/org/apache/kafka/clients/producer/Producer.java 
> 17fe541588d462c68c33f6209717cc4015e9b62f 
>   clients/src/main/java/org/apache/kafka/clients/producer/ProducerRecord.java 
> 4990692efa6f01c62e1d7b05fbf31bec50e398c9 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/FutureRecordMetadata.java
>  4a2da41f47994f778109e3c4107ffd90195f0bae 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java
>  ecfe2144d778a5d9b614df5278b9f0a15637f10b 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordBatch.java
>  dd0af8aee98abed5d4a0dc50989e37888bb353fe 
>   
> clients/src/main/java/org/apache/kafka/common/errors/InterruptException.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/common/utils/SystemTime.java 
> d682bd46ec3826f0a72388cc4ec30e1b1223d0f3 
>   clients/src/test/java/org/apache/kafka/clients/producer/BufferPoolTest.java 
> 4ae43ed47e31ad8052b4348a731da11120968508 
>   clients/src/test/java/org/apache/kafka/clients/producer/MetadataTest.java 
> 743aa7e523dd476949f484bfa4c7fb8a3afd7bf8 
>   
> clients/src/test/java/org/apache/kafka/clients/producer/MockProducerTest.java 
> 75513b0bdd439329c5771d87436ef83fda853bfb 
>   
> clients/src/test/java/org/apache/kafka/clients/producer/RecordAccumulatorTest.java
>  83338633717cfa4ef7cf2a590b5aa6b9c8cb1dd2 
>   core/src/test/scala/integration/kafka/api/ConsumerTest.scala 
> 2802a399bf599e9530f53b7df72f12702a10d3c4 
>   core/src/test/scala/integration/kafka/api/ProducerSendTest.scala 
> b15237b76def3b234924280fa3fdb25dbb0cc0dc 
>   core/src/test/scala/unit/kafka/utils/TestUtils.scala 
> 21d0ed2cb7c9459261d3cdc7c21dece5e2079698 
> 
> Diff: https://reviews.apache.org/r/30763/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Jay Kreps
> 
>