[jira] [Created] (KAFKA-2176) DefaultPartitioner doesn't perform consistent hashing based on

2015-05-07 Thread JIRA
Igor Maravić created KAFKA-2176:
---

 Summary: DefaultPartitioner doesn't perform consistent hashing 
based on 
 Key: KAFKA-2176
 URL: https://issues.apache.org/jira/browse/KAFKA-2176
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.1
Reporter: Igor Maravić
 Fix For: 0.8.1


While deploying MirrorMakers in production, we configured them to use 
kafka.producer.DefaultPartitioner. Since the topic had the same number of 
partitions in the local and the aggregation cluster, we expected the messages 
to be partitioned the same way in both clusters.

This wasn't the case. Messages were partitioned correctly with 
DefaultPartitioner on our local cluster, since the key was of type String.
On the MirrorMaker side, where the key is an Array[Byte], the messages were 
not partitioned correctly.

The problem is that Array[Byte] doesn't override hashCode, since it is a 
mutable collection, so arrays with identical contents hash differently.

The fix is to calculate the deep (content-based) hash code when the key is of 
Array type.
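
A minimal sketch of the idea (illustrative only, not the attached patch): route
array keys through java.util.Arrays content hashing so that two byte arrays with
the same contents pick the same partition. The object and method names below are
made up for illustration.

    object DeepHashPartitioning {
      // Sketch only: use a content-based hash for array keys, because Array[Byte]
      // relies on reference identity for hashCode.
      def partitionFor(key: Any, numPartitions: Int): Int = {
        val hash = key match {
          case bytes: Array[Byte] => java.util.Arrays.hashCode(bytes)
          case arr: Array[AnyRef] => java.util.Arrays.deepHashCode(arr)
          case other              => other.hashCode
        }
        (hash & 0x7fffffff) % numPartitions // mask the sign bit so the result is non-negative
      }
    }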



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2176) DefaultPartitioner doesn't perform consistent hashing based on

2015-05-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-2176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Maravić updated KAFKA-2176:

Status: Open  (was: Patch Available)

> DefaultPartitioner doesn't perform consistent hashing based on 
> ---
>
> Key: KAFKA-2176
> URL: https://issues.apache.org/jira/browse/KAFKA-2176
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1
>Reporter: Igor Maravić
>  Labels: easyfix, newbie
> Fix For: 0.8.1
>
>
> While deploying MirrorMakers in production, we configured them to use 
> kafka.producer.DefaultPartitioner. Since the topic had the same number of 
> partitions in the local and the aggregation cluster, we expected the messages 
> to be partitioned the same way in both clusters.
> This wasn't the case. Messages were partitioned correctly with 
> DefaultPartitioner on our local cluster, since the key was of type String.
> On the MirrorMaker side, where the key is an Array[Byte], the messages were 
> not partitioned correctly.
> The problem is that Array[Byte] doesn't override hashCode, since it is a 
> mutable collection, so arrays with identical contents hash differently.
> The fix is to calculate the deep (content-based) hash code when the key is of 
> Array type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2176) DefaultPartitioner doesn't perform consistent hashing based on

2015-05-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-2176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Maravić updated KAFKA-2176:

Status: Patch Available  (was: Open)

> DefaultPartitioner doesn't perform consistent hashing based on 
> ---
>
> Key: KAFKA-2176
> URL: https://issues.apache.org/jira/browse/KAFKA-2176
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1
>Reporter: Igor Maravić
>  Labels: easyfix, newbie
> Fix For: 0.8.1
>
>
> While deploying MirrorMakers in production, we configured them to use 
> kafka.producer.DefaultPartitioner. Since the topic had the same number of 
> partitions in the local and the aggregation cluster, we expected the messages 
> to be partitioned the same way in both clusters.
> This wasn't the case. Messages were partitioned correctly with 
> DefaultPartitioner on our local cluster, since the key was of type String.
> On the MirrorMaker side, where the key is an Array[Byte], the messages were 
> not partitioned correctly.
> The problem is that Array[Byte] doesn't override hashCode, since it is a 
> mutable collection, so arrays with identical contents hash differently.
> The fix is to calculate the deep (content-based) hash code when the key is of 
> Array type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 33939: Patch for KAFKA-2176

2015-05-07 Thread Igor Maravić

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33939/
---

Review request for kafka.


Bugs: KAFKA-2176
https://issues.apache.org/jira/browse/KAFKA-2176


Repository: kafka


Description
---

KAFKA-2176 DefaultPartitioner now produces consistent hashes for Arrays


Diffs
-

  core/src/main/scala/kafka/producer/DefaultPartitioner.scala 
1141ed16769b80e6ea8b0c975b94f5b543bf6a98 
  core/src/test/scala/unit/kafka/producer/DefaultPartitionerTest.scala 
PRE-CREATION 

Diff: https://reviews.apache.org/r/33939/diff/


Testing
---


Thanks,

Igor Maravić



[jira] [Commented] (KAFKA-2176) DefaultPartitioner doesn't perform consistent hashing based on

2015-05-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-2176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14532514#comment-14532514
 ] 

Igor Maravić commented on KAFKA-2176:
-

Created reviewboard https://reviews.apache.org/r/33939/diff/
 against branch origin/trunk

> DefaultPartitioner doesn't perform consistent hashing based on 
> ---
>
> Key: KAFKA-2176
> URL: https://issues.apache.org/jira/browse/KAFKA-2176
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1
>Reporter: Igor Maravić
>  Labels: easyfix, newbie
> Fix For: 0.8.1
>
> Attachments: KAFKA-2176.patch
>
>
> While deploying MirrorMakers in production, we configured them to use 
> kafka.producer.DefaultPartitioner. Since the topic had the same number of 
> partitions in the local and the aggregation cluster, we expected the messages 
> to be partitioned the same way in both clusters.
> This wasn't the case. Messages were partitioned correctly with 
> DefaultPartitioner on our local cluster, since the key was of type String.
> On the MirrorMaker side, where the key is an Array[Byte], the messages were 
> not partitioned correctly.
> The problem is that Array[Byte] doesn't override hashCode, since it is a 
> mutable collection, so arrays with identical contents hash differently.
> The fix is to calculate the deep (content-based) hash code when the key is of 
> Array type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2176) DefaultPartitioner doesn't perform consistent hashing based on

2015-05-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-2176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Maravić updated KAFKA-2176:

Attachment: KAFKA-2176.patch

> DefaultPartitioner doesn't perform consistent hashing based on 
> ---
>
> Key: KAFKA-2176
> URL: https://issues.apache.org/jira/browse/KAFKA-2176
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1
>Reporter: Igor Maravić
>  Labels: easyfix, newbie
> Fix For: 0.8.1
>
> Attachments: KAFKA-2176.patch
>
>
> While deploying MirrorMakers in production, we configured them to use 
> kafka.producer.DefaultPartitioner. Since the topic had the same number of 
> partitions in the local and the aggregation cluster, we expected the messages 
> to be partitioned the same way in both clusters.
> This wasn't the case. Messages were partitioned correctly with 
> DefaultPartitioner on our local cluster, since the key was of type String.
> On the MirrorMaker side, where the key is an Array[Byte], the messages were 
> not partitioned correctly.
> The problem is that Array[Byte] doesn't override hashCode, since it is a 
> mutable collection, so arrays with identical contents hash differently.
> The fix is to calculate the deep (content-based) hash code when the key is of 
> Array type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2176) DefaultPartitioner doesn't perform consistent hashing based on

2015-05-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-2176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Maravić updated KAFKA-2176:

Status: Patch Available  (was: Open)

> DefaultPartitioner doesn't perform consistent hashing based on 
> ---
>
> Key: KAFKA-2176
> URL: https://issues.apache.org/jira/browse/KAFKA-2176
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1
>Reporter: Igor Maravić
>  Labels: easyfix, newbie
> Fix For: 0.8.1
>
> Attachments: KAFKA-2176.patch
>
>
> While deploying MirrorMakers in production, we configured them to use 
> kafka.producer.DefaultPartitioner. Since the topic had the same number of 
> partitions in the local and the aggregation cluster, we expected the messages 
> to be partitioned the same way in both clusters.
> This wasn't the case. Messages were partitioned correctly with 
> DefaultPartitioner on our local cluster, since the key was of type String.
> On the MirrorMaker side, where the key is an Array[Byte], the messages were 
> not partitioned correctly.
> The problem is that Array[Byte] doesn't override hashCode, since it is a 
> mutable collection, so arrays with identical contents hash differently.
> The fix is to calculate the deep (content-based) hash code when the key is of 
> Array type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-21 Configuration Management

2015-05-07 Thread Jun Rao
Ashish,

3. This is true. However, using files has the same problem. You can't store
the location of the file in the file itself. The location of the file has
to be passed out of band into Kafka.

Thanks,

Jun

On Wed, May 6, 2015 at 6:34 PM, Ashish Singh  wrote:

> Hey Jun,
>
> Where does the broker get the info, which zk it needs to talk to?
>
> On Wednesday, May 6, 2015, Jun Rao  wrote:
>
> > Ashish,
> >
> > 3. Just want to clarify. Why can't you store ZK connection config in ZK?
> > This is a property for ZK clients, not ZK server.
> >
> > Thanks,
> >
> > Jun
> >
> > On Wed, May 6, 2015 at 5:48 PM, Ashish Singh  > > wrote:
> >
> > > I too would like to share some concerns that we came up with while
> > > discussing the effect that moving configs to zookeeper will have.
> > >
> > > 1. Kafka will start to become a configuration management tool to some
> > > degree, and be subject to all the things such tools are commonly asked
> to
> > > do. Kafka'll likely need to re-implement the role / group / service
> > > hierarchy that CM uses. Kafka'll need some way to conveniently dump its
> > > configs so they can be re-imported later, as a backup tool. People will
> > > want this to be audited, which means you'd need distinct logins for
> > > different people, and user management. You can try to push some of this
> > > stuff onto tools like CM, but this is Kafka going out of its way to be
> > > difficult to manage, and most projects don't want to do that. Being
> > unique
> > > in how configuration is done is strictly a bad thing for both
> integration
> > > and usability. Probably lots of other stuff. Seems like a bad
> direction.
> > >
> > > 2. Where would the default config live? If we decide on keeping the
> > config
> > > files around just for getting the default config, then I think on
> > restart,
> > > the config file will be ignored. This creates an obnoxious asymmetry
> for
> > > how to configure Kafka the first time and how you update it. You have
> to
> > > learn 2 ways of making config changes. If there was a mistake in your
> > > original config file, you can't just edit the config file and restart,
> > you
> > > have to go through the API. Reading configs is also more irritating.
> This
> > > all creates a learning curve for users of Kafka that will make it
> harder
> > to
> > > use than other projects. This is also a backwards-incompatible change.
> > >
> > > 3. All Kafka configs living in ZK is strictly impossible, since at the
> > very
> > > least ZK connection configs cannot be stored in ZK. So you will have a
> > file
> > > where some values are in effect but others are not, which is again
> > > confusing. Also, since you are still reading the config file on first
> > > start, there are still multiple sources of truth, or at least the
> > > appearance of such to the user.
> > >
> > > On Wed, May 6, 2015 at 5:33 PM, Jun Rao  >
> > wrote:
> > >
> > > > One of the Chef users confirmed that Chef integration could still
> work
> > if
> > > > all configs are moved to ZK. My rough understanding of how Chef works
> > is
> > > > that a user first registers a service host with a Chef server. After
> > > that,
> > > > a Chef client will be run on the service host. The user can then push
> > > > config changes intended for a service/host to the Chef server. The
> > server
> > > > is then responsible for pushing the changes to Chef clients. Chef
> > clients
> > > > support pluggable logic. For example, it can generate a config file
> > that
> > > > Kafka broker will take. If we move all configs to ZK, we can
> customize
> > > the
> > > > Chef client to use our config CLI to make the config changes in
> Kafka.
> > In
> > > > this model, one probably doesn't need to register every broker in
> Chef
> > > for
> > > > the config push. Not sure if Puppet works in a similar way.
> > > >
> > > > Also for storing the configs, we probably can't store the
> broker/global
> > > > level configs in Kafka itself (e.g. in a special topic). The reason
> is
> > > that
> > > > in order to start a broker, we likely need to make some broker level
> > > config
> > > > changes (e.g., the default log.dir may not be present, the default
> port
> > > may
> > > > not be available, etc). If we need a broker to be up to make those
> > > changes,
> > > > we get into this chicken and egg problem.
> > > >
> > > > Thanks,
> > > >
> > > > Jun
> > > >
> > > > On Tue, May 5, 2015 at 4:14 PM, Gwen Shapira  > >
> > > > wrote:
> > > >
> > > > > Sorry I missed the call today :)
> > > > >
> > > > > I think an additional requirement would be:
> > > > > Make sure that traditional deployment tools (Puppet, Chef, etc) are
> > > still
> > > > > capable of managing Kafka configuration.
> > > > >
> > > > > For this reason, I'd like the configuration refresh to be pretty
> > close
> > > to
> > > > > what most Linux services are doing to force a reload of
> > configuration.
> > > > > AFAIK, this involves handling HUP signal in the main thread to
> reload

Re: [VOTE] KIP-4 Admin Commands / Phase-1

2015-05-07 Thread Jun Rao
Hi, Andrii,

A few more comments.

1. We need to remove isr and replicaLags from TMR, right?

2. For TMR v1, we decided to keep the same auto topic creation logic as v0
initially. We can drop the auto topic creation logic later after the
producer client starts using createTopicRequest.

3. For the configs returned in TMR, we can keep this for now. We may need
some adjustment later depending on the outcome of KIP-21(configuration
management).

Other than those, this looks good.

Thanks,

Jun


On Tue, May 5, 2015 at 11:16 AM, Andrii Biletskyi <
andrii.bilets...@stealth.ly> wrote:

> Hi all,
>
> This is a voting thread for KIP-4 Phase-1. It will include Wire protocol
> changes
> and server side handling code.
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations
>
> Thanks,
> Andrii Biletskyi
>


Re: [DISCUSS] KIP-21 Configuration Management

2015-05-07 Thread Ashish Singh
Agreed :). However, the other concerns remain. Do you think just providing
the zk info to the broker will be sufficient? I will spend some time myself
looking into the existing required configs.

On Thursday, May 7, 2015, Jun Rao  wrote:

> Ashish,
>
> 3. This is true. However, using files has the same problem. You can't store
> the location of the file in the file itself. The location of the file has
> to be passed out of band into Kafka.
>
> Thanks,
>
> Jun
>
> On Wed, May 6, 2015 at 6:34 PM, Ashish Singh  > wrote:
>
> > Hey Jun,
> >
> > Where does the broker get the info, which zk it needs to talk to?
> >
> > On Wednesday, May 6, 2015, Jun Rao >
> wrote:
> >
> > > Ashish,
> > >
> > > 3. Just want to clarify. Why can't you store ZK connection config in
> ZK?
> > > This is a property for ZK clients, not ZK server.
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Wed, May 6, 2015 at 5:48 PM, Ashish Singh  
> > > > wrote:
> > >
> > > > I too would like to share some concerns that we came up with while
> > > > discussing the effect that moving configs to zookeeper will have.
> > > >
> > > > 1. Kafka will start to become a configuration management tool to some
> > > > degree, and be subject to all the things such tools are commonly
> asked
> > to
> > > > do. Kafka'll likely need to re-implement the role / group / service
> > > > hierarchy that CM uses. Kafka'll need some way to conveniently dump
> its
> > > > configs so they can be re-imported later, as a backup tool. People
> will
> > > > want this to be audited, which means you'd need distinct logins for
> > > > different people, and user management. You can try to push some of
> this
> > > > stuff onto tools like CM, but this is Kafka going out of its way to
> be
> > > > difficult to manage, and most projects don't want to do that. Being
> > > unique
> > > > in how configuration is done is strictly a bad thing for both
> > integration
> > > > and usability. Probably lots of other stuff. Seems like a bad
> > direction.
> > > >
> > > > 2. Where would the default config live? If we decide on keeping the
> > > config
> > > > files around just for getting the default config, then I think on
> > > restart,
> > > > the config file will be ignored. This creates an obnoxious asymmetry
> > for
> > > > how to configure Kafka the first time and how you update it. You have
> > to
> > > > learn 2 ways of making config changes. If there was a mistake in your
> > > > original config file, you can't just edit the config file and
> restart,
> > > you
> > > > have to go through the API. Reading configs is also more irritating.
> > This
> > > > all creates a learning curve for users of Kafka that will make it
> > harder
> > > to
> > > > use than other projects. This is also a backwards-incompatible
> change.
> > > >
> > > > 3. All Kafka configs living in ZK is strictly impossible, since at
> the
> > > very
> > > > least ZK connection configs cannot be stored in ZK. So you will have
> a
> > > file
> > > > where some values are in effect but others are not, which is again
> > > > confusing. Also, since you are still reading the config file on first
> > > > start, there are still multiple sources of truth, or at least the
> > > > appearance of such to the user.
> > > >
> > > > On Wed, May 6, 2015 at 5:33 PM, Jun Rao  
> > >
> > > wrote:
> > > >
> > > > > One of the Chef users confirmed that Chef integration could still
> > work
> > > if
> > > > > all configs are moved to ZK. My rough understanding of how Chef
> works
> > > is
> > > > > that a user first registers a service host with a Chef server.
> After
> > > > that,
> > > > > a Chef client will be run on the service host. The user can then
> push
> > > > > config changes intended for a service/host to the Chef server. The
> > > server
> > > > > is then responsible for pushing the changes to Chef clients. Chef
> > > clients
> > > > > support pluggable logic. For example, it can generate a config file
> > > that
> > > > > Kafka broker will take. If we move all configs to ZK, we can
> > customize
> > > > the
> > > > > Chef client to use our config CLI to make the config changes in
> > Kafka.
> > > In
> > > > > this model, one probably doesn't need to register every broker in
> > Chef
> > > > for
> > > > > the config push. Not sure if Puppet works in a similar way.
> > > > >
> > > > > Also for storing the configs, we probably can't store the
> > broker/global
> > > > > level configs in Kafka itself (e.g. in a special topic). The reason
> > is
> > > > that
> > > > > in order to start a broker, we likely need to make some broker
> level
> > > > config
> > > > > changes (e.g., the default log.dir may not be present, the default
> > port
> > > > may
> > > > > not be available, etc). If we need a broker to be up to make those
> > > > changes,
> > > > > we get into this chicken and egg problem.
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Jun
> > > > >
> > > > > On Tue, May 5, 2015 at 4:14 PM, Gwen Shapira <
> gshap...@cloudera.com 
> > > >
> > 

Re: [VOTE] KIP-4 Admin Commands / Phase-1

2015-05-07 Thread Andrii Biletskyi
Jun,

1. My bad, updated description, but forgot the schema itself. Fixed.

2. Okay, changed that.

3. Sure, we'll evolve TMR again once needed.

Thanks,
Andrii Biletskyi

On Thu, May 7, 2015 at 6:16 PM, Jun Rao  wrote:

> Hi, Andrii,
>
> A few more comments.
>
> 1. We need to remove isr and replicaLags from TMR, right?
>
> 2. For TMR v1, we decided to keep the same auto topic creation logic as v0
> initially. We can drop the auto topic creation logic later after the
> producer client starts using createTopicRequest.
>
> 3. For the configs returned in TMR, we can keep this for now. We may need
> some adjustment later depending on the outcome of KIP-21(configuration
> management).
>
> Other than those, this looks good.
>
> Thanks,
>
> Jun
>
>
> On Tue, May 5, 2015 at 11:16 AM, Andrii Biletskyi <
> andrii.bilets...@stealth.ly> wrote:
>
> > Hi all,
> >
> > This is a voting thread for KIP-4 Phase-1. It will include Wire protocol
> > changes
> > and server side handling code.
> >
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations
> >
> > Thanks,
> > Andrii Biletskyi
> >
>


Re: [DISCUSSION] Reuse o.a.k.clients.NetworkClient in controller.

2015-05-07 Thread Jay Kreps
Hey guys,

I haven't thought this all the way through, but I think having metadata in
the NetworkClient actually does make sense. After all it is the case that
all requests are directed to Kafka nodes so this is a higher level of
abstraction to work at. I think the feature you are looking for is to avoid
the automatic metadata update requests.

Wouldn't the simplest approach be to add a new parameter to the
NetworkClient constructor (something like autoRefreshMetadata) and also add
an updateMetadata(Cluster c) method for manual updates? Setting
autoRefreshMetadata=false would disable the metadata updates and depend
on the user to manually update metadata.

The existing usage would remain as is, but the client in the controller
would do its own updates.

I suppose this is just a question of whether or not to try to factor out
the metadata update code. If we want to add another ClientNetworkClient
layer I think that's reasonable but consider not making it a subsclass.
These subclass/superclass interwinings can be a bit of a mess.

-Jay
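
(For reference, a minimal sketch of the NoOpMetadata alternative Jun outlines in
the quoted thread below, assuming Metadata has first been extracted into an
interface; the trait and method names here are illustrative, not the actual
o.a.k.clients API.)

    import org.apache.kafka.common.Cluster

    // Illustrative only: assumes Metadata has been pulled out into a trait with
    // roughly these operations.
    trait MetadataLike {
      def update(cluster: Cluster, now: Long): Unit
      def requestUpdate(): Unit
      def timeToNextUpdate(now: Long): Long
      def fetch(): Cluster
    }

    // For the controller: never asks for a refresh and always reports an empty
    // cluster, so NetworkClient's metadata-refresh path becomes a no-op.
    class NoOpMetadata extends MetadataLike {
      override def update(cluster: Cluster, now: Long): Unit = {}    // ignore writes
      override def requestUpdate(): Unit = {}                        // ignore writes
      override def timeToNextUpdate(now: Long): Long = Long.MaxValue // "never"
      override def fetch(): Cluster = Cluster.empty()
    }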

On Tue, May 5, 2015 at 11:31 PM, Ewen Cheslack-Postava 
wrote:

> +1 on trying to reuse the NetworkClient code.
>
> I think Jun's approach could work, but I'm wondering if refactoring a bit
> could get better separation of concerns without a somewhat awkward nop
> implementation of Metadata. I'm not sure what combination of delegation or
> subclassing makes sense yet, but here's another approach that I think could
> work:
>
> * Get rid of metadata stuff from NetworkClient. Add a subclass that also
> manages all the metadata. (Since it's used for both producer and consumer,
> the obvious name that I first jumped to is ClientNetworkClient, but somehow
> I think we can come up with something less confusing.)
> * leastLoadedNode is the only caller of metadata.fetch() in that class,
> maybeUpdateMetadata is the only caller of leastLoadedNode,
> maybeUpdateMetadata is only called in poll when a combination of metadata
> related timeouts end up being 0. These can be safely refactored into the
> subclass with one override of poll(). Same with metadataFetchInProgress
> assuming the rest of the changes below.
> * Some of the default implementations (e.g. handleMetadataResponse) can be
> left nops in NetworkClient and moved to the subclass.
> * Others can be overridden to call the super method then take the
> additional action necessary (e.g., on disconnect, move the metadata update
> request to the subclass).
> * Making the timeout handling in poll() work for both NetworkClient and the
> new base class might be the messiest part and might require breaking down
> the implementation of poll into multiple methods.
> * isReady uses metadataFetchInProgress and gets a timeout from the Metadata
> class. We can just override this method as well, though I feel like there's
> probably a cleaner solution.
>
> -Ewen
>
>
> On Tue, May 5, 2015 at 4:54 PM, Jun Rao  wrote:
>
> > Hi, Jiangjie,
> >
> > Thanks for taking on this.
> >
> > I was thinking that one way to decouple the dependency on Metadata in
> > NetworkClient is the following.
> > 1. Make Metadata an interface.
> > 2. Rename current Metadata class to sth like KafkaMetadata that
> implements
> > the Metadata interface.
> > 3. Have a new NoOpMetadata class that implements the Metadata interface.
> > This class
> > 3.1 does nothing for any write method
> > 3.2 returns max long for any method that asks for a timestamp
> > 3.3. returns an empty Cluster for fetch().
> >
> > Then we can leave NetworkClient unchanged and just pass in a NoOpMetadata
> > when using NetworkClient in the controller. The consumer/producer client
> > will be using KafkaMetadata.
> >
> > As for replica fetchers, it may be possible to use KafkaConsumer.
> However,
> > we don't need the metadata and the offset management. So, perhaps it's
> > easier to just use NetworkClient. Also, currently, there is one replica
> > fetcher thread per source broker. By using NetworkClient, we can change
> > that to using a single thread for all source brokers. This is probably a
> > bigger change. So, maybe we can do it later.
> >
> > Jun
> >
> >
> > I think we probably need to replace replica fetcher with NetworkClient as
> > well. Replica fetcher gets leader from the controller and therefore
> doesn't
> >
> > On Tue, May 5, 2015 at 1:37 PM, Jiangjie Qin 
> > wrote:
> >
> > > I am trying to see if we can reuse the NetworkClient class to be used
> in
> > > controller to broker communication. (Also, we can probably use
> > > KafkaConsumer which is already using NetworkClient in replica
> fetchers).
> > > Currently NetworkClient does the following things in addition to
> sending
> > > requests.
> > >
> > >   1.  Connection state management.
> > >   2.  Flow control (inflight requests)
> > >   3.  Metadata refresh
> > >
> > > In controller we need (1) and (2) but not (3). NetworkClient is tightly
> > > coupled with metadata now and this is the major blocker of reusing
> > > NetworkClient 

[DISCUSS] KIP-23 - Add JSON/CSV output and looping options to ConsumerGroupCommand

2015-05-07 Thread Ashish Singh
Hi Guys,

I just added a KIP, KIP-23 - Add JSON/CSV output and looping options to
ConsumerGroupCommand, for KAFKA-313. The changes made as part of the JIRA can
be found here.

Comments and suggestions are welcome!

-- 

Regards,
Ashish


[jira] [Commented] (KAFKA-313) Add JSON/CSV output and looping options to ConsumerGroupCommand

2015-05-07 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14533166#comment-14533166
 ] 

Ashish K Singh commented on KAFKA-313:
--

[~nehanarkhede] thanks for the review! Makes sense to put out a KIP. Created 
[KIP-23|https://cwiki.apache.org/confluence/display/KAFKA/KIP-23].

> Add JSON/CSV output and looping options to ConsumerGroupCommand
> ---
>
> Key: KAFKA-313
> URL: https://issues.apache.org/jira/browse/KAFKA-313
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Dave DeMaagd
>Assignee: Ashish K Singh
>Priority: Minor
>  Labels: newbie, patch
> Fix For: 0.8.3
>
> Attachments: KAFKA-313-2012032200.diff, KAFKA-313.1.patch, 
> KAFKA-313.patch, KAFKA-313_2015-02-23_18:11:32.patch
>
>
> Adds:
> * '--loop N' - causes the program to loop forever, sleeping for up to N 
> seconds between loops (loop time minus collection time, unless that's less 
> than 0, at which point it will just run again immediately)
> * '--asjson' - display as a JSON string instead of the more human readable 
> output format.
> Neither of the above depends on the other (you can loop with the human-readable 
> output, or do a single-shot execution with JSON output). Existing 
> behavior/output is maintained if neither option is used. Diff attached.
> Impacted files:
> core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Need Access to Wiki Page To Create Page for Discussion

2015-05-07 Thread Bhavesh Mistry
Hi Guozhang,

Here is link to account
https://cwiki.apache.org/confluence/users/viewuserprofile.action?username=bmis13
I just created the wiki account. I was under the impression that the jira
account was the wiki account. Thanks in advance for the help!

Thanks,
Bhavesh

On Wed, May 6, 2015 at 8:33 PM, Guozhang Wang  wrote:

> Bhavesh,
>
> I could not find Bmis13 when adding you to the wiki permission. Could you
> double check the account id?
>
> Guozhang
>
> On Wed, May 6, 2015 at 6:47 PM, Bhavesh Mistry  >
> wrote:
>
> > Hi Jun,
> >
> > The account id is Bmis13.
> >
> > Thanks,
> > Bhavesh
> >
> > On Wed, May 6, 2015 at 4:52 PM, Jun Rao  wrote:
> >
> > > What your wiki user id?
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Wed, May 6, 2015 at 11:09 AM, Bhavesh Mistry <
> > > mistry.p.bhav...@gmail.com>
> > > wrote:
> > >
> > > > Hi All,
> > > >
> > > > I need access to create Discussion or KIP document.  Let me know what
> > is
> > > > process of getting access.
> > > >
> > > > Thanks,
> > > >
> > > > Bhavesh
> > > >
> > >
> >
>
>
>
> --
> -- Guozhang
>


[jira] [Created] (KAFKA-2177) Modify o.a.k.clients.NetworkClient to reuse it in controller/replica fetchers.

2015-05-07 Thread Jiangjie Qin (JIRA)
Jiangjie Qin created KAFKA-2177:
---

 Summary: Modify o.a.k.clients.NetworkClient to reuse it in 
controller/replica fetchers.
 Key: KAFKA-2177
 URL: https://issues.apache.org/jira/browse/KAFKA-2177
 Project: Kafka
  Issue Type: Task
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin


Trying to see if we can decouple it a little bit from metadata so we can reuse 
it in controller/replica fetchers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-23 - Add JSON/CSV output and looping options to ConsumerGroupCommand

2015-05-07 Thread Ashish Singh
Had to change the title of the page, and that surprisingly changed the link
as well. KIP-23 is now available here.

On Thu, May 7, 2015 at 11:34 AM, Ashish Singh  wrote:

> Hi Guys,
>
> I just added a KIP, KIP-23 - Add JSON/CSV output and looping options to
> ConsumerGroupCommand, for KAFKA-313. The changes made as part of the JIRA can
> be found here.
>
> Comments and suggestions are welcome!
>
> --
>
> Regards,
> Ashish
>



-- 

Regards,
Ashish


[jira] [Created] (KAFKA-2178) Loss of highwatermarks on incorrect cluster shutdown/restart

2015-05-07 Thread Alexey Ozeritskiy (JIRA)
Alexey Ozeritskiy created KAFKA-2178:


 Summary: Loss of highwatermarks on incorrect cluster 
shutdown/restart
 Key: KAFKA-2178
 URL: https://issues.apache.org/jira/browse/KAFKA-2178
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.2.1
Reporter: Alexey Ozeritskiy
 Attachments: KAFKA-2178.patch

ReplicaManager flushes highwatermarks only for the partitions it received 
from the controller.
If the controller sends an incomplete list of partitions, then ReplicaManager 
will write an incomplete list of highwatermarks.
As a result one can lose a lot of data during an incorrect broker restart.

We got this situation in real life on our cluster.
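
A rough sketch of one way to avoid shrinking the checkpoint file (illustrative
only, not the attached patch): merge the entries already on disk with the
highwatermarks currently known in memory before rewriting, so partitions the
controller did not mention keep their previously flushed values.

    object HighwatermarkCheckpointSketch {
      import kafka.common.TopicAndPartition

      // Sketch only: current values win, but partitions absent from `current`
      // keep their existing on-disk entry instead of being dropped.
      def mergedHighwatermarks(onDisk: Map[TopicAndPartition, Long],
                               current: Map[TopicAndPartition, Long]): Map[TopicAndPartition, Long] =
        onDisk ++ current
    }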



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2178) Loss of highwatermarks on incorrect cluster shutdown/restart

2015-05-07 Thread Alexey Ozeritskiy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Ozeritskiy updated KAFKA-2178:
-
Attachment: KAFKA-2178.patch

> Loss of highwatermarks on incorrect cluster shutdown/restart
> 
>
> Key: KAFKA-2178
> URL: https://issues.apache.org/jira/browse/KAFKA-2178
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.2.1
>Reporter: Alexey Ozeritskiy
> Attachments: KAFKA-2178.patch
>
>
> ReplicaManager flushes highwatermarks only for the partitions it received 
> from the controller.
> If the controller sends an incomplete list of partitions, then ReplicaManager 
> will write an incomplete list of highwatermarks.
> As a result one can lose a lot of data during an incorrect broker restart.
> We got this situation in real life on our cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2178) Loss of highwatermarks on incorrect cluster shutdown/restart

2015-05-07 Thread Alexey Ozeritskiy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Ozeritskiy updated KAFKA-2178:
-
Status: Patch Available  (was: Open)

> Loss of highwatermarks on incorrect cluster shutdown/restart
> 
>
> Key: KAFKA-2178
> URL: https://issues.apache.org/jira/browse/KAFKA-2178
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.2.1
>Reporter: Alexey Ozeritskiy
> Attachments: KAFKA-2178.patch
>
>
> ReplicaManager flushes highwatermarks only for the partitions it received 
> from the controller.
> If the controller sends an incomplete list of partitions, then ReplicaManager 
> will write an incomplete list of highwatermarks.
> As a result one can lose a lot of data during an incorrect broker restart.
> We got this situation in real life on our cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2179) no graceful nor fast way to shutdown every broker without killing them

2015-05-07 Thread Joe Stein (JIRA)
Joe Stein created KAFKA-2179:


 Summary: no graceful nor fast way to shutdown every broker without 
killing them
 Key: KAFKA-2179
 URL: https://issues.apache.org/jira/browse/KAFKA-2179
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1.2
Reporter: Joe Stein
Priority: Minor
 Fix For: 0.8.3


If you do a controlled shutdown of every broker at the same time, the controlled 
shutdown process spins out of control. Leaders have nowhere to move, because 
every broker is trying to shut itself down in a controlled way. The result is 
that the brokers take a long (and variable) time before they eventually do shut 
down.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2180) topics never create on brokers though it succeeds in tool and is in zookeeper

2015-05-07 Thread Joe Stein (JIRA)
Joe Stein created KAFKA-2180:


 Summary: topics never create on brokers though it succeeds in tool 
and is in zookeeper
 Key: KAFKA-2180
 URL: https://issues.apache.org/jira/browse/KAFKA-2180
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1.2
Reporter: Joe Stein
Priority: Critical
 Fix For: 0.8.3


Ran into an issue with a 0.8.2.1 cluster where topic creation was succeeding when 
running bin/kafka-topics.sh --create, and the topic was visible in ZooKeeper, but 
the brokers never got updated.

We ended up fixing this by deleting the /controller znode so that controller 
leader election would be triggered. We really should have a better way to force a 
controller failover ( KAFKA-1778 ) than rmr /controller in the zookeeper shell




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2169: Moving to zkClient 0.5 release.

2015-05-07 Thread Parth-Brahmbhatt
GitHub user Parth-Brahmbhatt opened a pull request:

https://github.com/apache/kafka/pull/61

KAFKA-2169: Moving to zkClient 0.5 release.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Parth-Brahmbhatt/kafka KAFKA-2169

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/61.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #61


commit e5eb373dcec7562292cec32f3962e42dda5cea24
Author: Parth Brahmbhatt 
Date:   2015-05-07T20:15:55Z

KAFKA-2169: Moving to zkClient 0.5 release.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2169) Upgrade to zkclient-0.5

2015-05-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14533309#comment-14533309
 ] 

ASF GitHub Bot commented on KAFKA-2169:
---

GitHub user Parth-Brahmbhatt opened a pull request:

https://github.com/apache/kafka/pull/61

KAFKA-2169: Moving to zkClient 0.5 release.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Parth-Brahmbhatt/kafka KAFKA-2169

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/61.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #61


commit e5eb373dcec7562292cec32f3962e42dda5cea24
Author: Parth Brahmbhatt 
Date:   2015-05-07T20:15:55Z

KAFKA-2169: Moving to zkClient 0.5 release.




> Upgrade to zkclient-0.5
> ---
>
> Key: KAFKA-2169
> URL: https://issues.apache.org/jira/browse/KAFKA-2169
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Neha Narkhede
>Assignee: Parth Brahmbhatt
>Priority: Critical
>
> zkclient-0.5 is released 
> http://mvnrepository.com/artifact/com.101tec/zkclient/0.5 and has the fix for 
> KAFKA-824



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Why we call callback.onCompletion in send()?

2015-05-07 Thread Jiangjie Qin
Hi,

I just noticed that in the new producer we actually call callback.onCompletion() 
in the catch clause of send(). This seems to break the callback execution order 
guarantee we provided. Any reason why we do this instead of just throwing an 
exception?

Jiangjie (Becket) Qin
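
(For reference, a simplified, self-contained sketch of the pattern in question;
this is not the actual KafkaProducer code, and the type names are made up.)

    trait SketchCallback {
      def onCompletion(offset: java.lang.Long, exception: Exception): Unit
    }

    class SketchProducer(enqueue: (Array[Byte], SketchCallback) => Unit) {
      def send(record: Array[Byte], callback: SketchCallback): Unit =
        try {
          enqueue(record, callback) // normal path: callback fires later, in send order
        } catch {
          case e: Exception =>
            // the questioned behavior: the callback is completed immediately on the
            // caller's thread, possibly ahead of callbacks for records sent earlier
            if (callback != null) callback.onCompletion(null, e)
        }
    }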


Re: Need Access to Wiki Page To Create Page for Discussion

2015-05-07 Thread Guozhang Wang
Done.

Cheers,
Guozhang

On Thu, May 7, 2015 at 12:00 PM, Bhavesh Mistry 
wrote:

> Hi Guozhang,
>
> Here is link to account
>
> https://cwiki.apache.org/confluence/users/viewuserprofile.action?username=bmis13
> I just created the wiki account. I was under the impression that the jira
> account was the wiki account. Thanks in advance for the help!
>
> Thanks,
> Bhavesh
>
> On Wed, May 6, 2015 at 8:33 PM, Guozhang Wang  wrote:
>
> > Bhavesh,
> >
> > I could not find Bmis13 when adding you to the wiki permission. Could you
> > double check the account id?
> >
> > Guozhang
> >
> > On Wed, May 6, 2015 at 6:47 PM, Bhavesh Mistry <
> mistry.p.bhav...@gmail.com
> > >
> > wrote:
> >
> > > Hi Jun,
> > >
> > > The account id is Bmis13.
> > >
> > > Thanks,
> > > Bhavesh
> > >
> > > On Wed, May 6, 2015 at 4:52 PM, Jun Rao  wrote:
> > >
> > > > What your wiki user id?
> > > >
> > > > Thanks,
> > > >
> > > > Jun
> > > >
> > > > On Wed, May 6, 2015 at 11:09 AM, Bhavesh Mistry <
> > > > mistry.p.bhav...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi All,
> > > > >
> > > > > I need access to create Discussion or KIP document.  Let me know
> what
> > > is
> > > > > process of getting access.
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Bhavesh
> > > > >
> > > >
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>



-- 
-- Guozhang


Re: Review Request 33731: Second Attempt to Fix KAFKA-2160

2015-05-07 Thread Onur Karaman

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33731/#review82911
---


I'm guessing the answer is yes, but is it too much of a performance hit to 
replace the purgatory's Pool with just a HashMap guarded by a striped lock? It 
seems like it would simplify purgatory synchronization logic in that we 
wouldn't have to worry about working with the Pool's synchronization logic and 
we could get rid of the Watchers operations list synchronization.

A basic version of striped locks: 
https://gist.github.com/onurkaraman/e8afb91154ec4a832234

- Onur Karaman
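
(A self-contained sketch of the striped-lock idea described above; this is not
the linked gist, and the names are illustrative.)

    import scala.collection.mutable

    // One plain map per lock stripe, with the stripe chosen by key hash, so a
    // compound read-modify-write on a key is serialized without a global lock.
    class StripedLockMap[K, V](numStripes: Int = 16) {
      private val stripes = Array.fill(numStripes)((new Object, mutable.HashMap.empty[K, V]))

      private def stripeFor(key: K) = stripes((key.hashCode & 0x7fffffff) % numStripes)

      def updateAndMaybeRemove(key: K)(f: Option[V] => Option[V]): Option[V] = {
        val (lock, map) = stripeFor(key)
        lock.synchronized {
          val updated = f(map.get(key))
          updated match {
            case Some(v) => map.put(key, v)
            case None    => map.remove(key)
          }
          updated
        }
      }

      def get(key: K): Option[V] = {
        val (lock, map) = stripeFor(key)
        lock.synchronized(map.get(key))
      }
    }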


On May 6, 2015, 11:31 p.m., Guozhang Wang wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/33731/
> ---
> 
> (Updated May 6, 2015, 11:31 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2160
> https://issues.apache.org/jira/browse/KAFKA-2160
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Add a per-key lock in Pool which is used for non-primitive 
> UpdateAndMaybeRemove function
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
> aa8d9404a3e78a365df06404b79d0d8f694b4bd6 
>   core/src/main/scala/kafka/server/DelayedOperation.scala 
> 2ed9b467c2865e5717d7f6fd933cd09a5c5b22c0 
>   core/src/main/scala/kafka/server/DelayedProduce.scala 
> 05078b24ef28f2f4e099afa943e43f1d00359fda 
>   core/src/main/scala/kafka/utils/Pool.scala 
> 9ddcde797341ddd961923cafca16472d84417b5a 
>   core/src/main/scala/kafka/utils/timer/Timer.scala 
> b8cde820a770a4e894804f1c268b24b529940650 
>   core/src/test/scala/unit/kafka/log/LogConfigTest.scala 
> f3546adee490891e0d8d0214bef00b1dd7f42227 
>   core/src/test/scala/unit/kafka/server/DelayedOperationTest.scala 
> f3ab3f4ff8eb1aa6b2ab87ba75f72eceb6649620 
> 
> Diff: https://reviews.apache.org/r/33731/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Guozhang Wang
> 
>



[jira] [Commented] (KAFKA-1387) Kafka getting stuck creating ephemeral node it has already created when two zookeeper sessions are established in a very short period of time

2015-05-07 Thread Abhishek Nigam (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14533402#comment-14533402
 ] 

Abhishek Nigam commented on KAFKA-1387:
---

I have seen the ephemeral node issue before and the fix made there was exactly 
what Thomas mentioned:
"It seems the simplest thing to do would be to just delete the conflicted node 
and write the truth about the process environment it knows."

Is there a reason why the approach outlined by Thomas does not work for kafka?
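
(A rough sketch of the approach Thomas describes, i.e. delete the conflicted
node and write what the process currently knows, assuming a plain ZkClient-style
API; this is illustrative, not the actual KafkaHealthcheck/ZkUtils code.)

    object EphemeralRegistrationSketch {
      import org.I0Itec.zkclient.ZkClient
      import org.I0Itec.zkclient.exception.ZkNodeExistsException

      // Sketch only: on a conflict, remove the stale ephemeral node left by a
      // previous session of this same process and re-register, instead of
      // looping forever waiting for the old node to go away.
      def registerEphemeral(zk: ZkClient, path: String, data: String): Unit =
        try {
          zk.createEphemeral(path, data)
        } catch {
          case _: ZkNodeExistsException =>
            zk.delete(path)
            zk.createEphemeral(path, data)
        }
    }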

> Kafka getting stuck creating ephemeral node it has already created when two 
> zookeeper sessions are established in a very short period of time
> -
>
> Key: KAFKA-1387
> URL: https://issues.apache.org/jira/browse/KAFKA-1387
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1.1
>Reporter: Fedor Korotkiy
>Priority: Blocker
>  Labels: newbie, patch, zkclient-problems
> Attachments: kafka-1387.patch
>
>
> Kafka broker re-registers itself in zookeeper every time handleNewSession() 
> callback is invoked.
> https://github.com/apache/kafka/blob/0.8.1/core/src/main/scala/kafka/server/KafkaHealthcheck.scala
>  
> Now imagine the following sequence of events.
> 1) Zookeeper session reestablishes. handleNewSession() callback is queued by 
> the zkClient, but not invoked yet.
> 2) Zookeeper session reestablishes again, queueing callback second time.
> 3) First callback is invoked, creating /broker/[id] ephemeral path.
> 4) Second callback is invoked and it tries to create /broker/[id] path using 
> createEphemeralPathExpectConflictHandleZKBug() function. But the path 
> already exists, so createEphemeralPathExpectConflictHandleZKBug() is getting 
> stuck in the infinite loop.
> Seems like the controller election code has the same issue.
> I'm able to reproduce this issue on the 0.8.1 branch from github using the 
> following configs.
> # zookeeper
> tickTime=10
> dataDir=/tmp/zk/
> clientPort=2101
> maxClientCnxns=0
> # kafka
> broker.id=1
> log.dir=/tmp/kafka
> zookeeper.connect=localhost:2101
> zookeeper.connection.timeout.ms=100
> zookeeper.sessiontimeout.ms=100
> Just start kafka and zookeeper and then pause zookeeper several times using 
> Ctrl-Z.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 33088: add heartbeat to coordinator

2015-05-07 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33088/#review82921
---

Ship it!


I think we can check this in if you agree with the following minor comments (I 
can do them while committing), and then move on to the rebalance logic / 
leftover works.


core/src/main/scala/kafka/coordinator/CoordinatorMetadata.scala


"private[Coordinator]" ?



core/src/main/scala/kafka/coordinator/CoordinatorMetadata.scala


Add @nonthreadsafe for this function.



core/src/main/scala/kafka/coordinator/CoordinatorMetadata.scala


We can probably merge these three functions into one:

updateTopicSubscriptionForGroup(groupId, topicsToAdd, topicsToRemove)

where topicsToAdd/Remove can be empty.


- Guozhang Wang


On May 5, 2015, 5:50 p.m., Onur Karaman wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/33088/
> ---
> 
> (Updated May 5, 2015, 5:50 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1334
> https://issues.apache.org/jira/browse/KAFKA-1334
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> add heartbeat to coordinator
> 
> todo:
> - see how it performs under real load
> - add error code handling on the consumer side
> 
> 
> Diffs
> -
> 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/internals/Coordinator.java
>  e55ab11df4db0b0084f841a74cbcf819caf780d5 
>   clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
> 36aa412404ff1458c7bef0feecaaa8bc45bed9c7 
>   core/src/main/scala/kafka/coordinator/ConsumerCoordinator.scala 
> 456b602245e111880e1b8b361319cabff38ee0e9 
>   core/src/main/scala/kafka/coordinator/ConsumerRegistry.scala 
> 2f5797064d4131ecfc9d2750d9345a9fa3972a9a 
>   core/src/main/scala/kafka/coordinator/CoordinatorMetadata.scala 
> PRE-CREATION 
>   core/src/main/scala/kafka/coordinator/DelayedHeartbeat.scala 
> 6a6bc7bc4ceb648b67332e789c2c33de88e4cd86 
>   core/src/main/scala/kafka/coordinator/DelayedJoinGroup.scala 
> df60cbc35d09937b4e9c737c67229889c69d8698 
>   core/src/main/scala/kafka/coordinator/DelayedRebalance.scala 
> 8defa2e41c92f1ebe255177679d275c70dae5b3e 
>   core/src/main/scala/kafka/coordinator/Group.scala PRE-CREATION 
>   core/src/main/scala/kafka/coordinator/GroupRegistry.scala 
> 94ef5829b3a616c90018af1db7627bfe42e259e5 
>   core/src/main/scala/kafka/coordinator/HeartbeatBucket.scala 
> 821e26e97eaa97b5f4520474fff0fedbf406c82a 
>   core/src/main/scala/kafka/coordinator/PartitionAssignor.scala PRE-CREATION 
>   core/src/main/scala/kafka/server/DelayedOperationKey.scala 
> b673e43b0ba401b2e22f27aef550e3ab0ef4323c 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> b4004aa3a1456d337199aa1245fb0ae61f6add46 
>   core/src/main/scala/kafka/server/KafkaServer.scala 
> c63f4ba9d622817ea8636d4e6135fba917ce085a 
>   core/src/main/scala/kafka/server/OffsetManager.scala 
> 18680ce100f10035175cc0263ba7787ab0f6a17a 
>   core/src/test/scala/unit/kafka/coordinator/CoordinatorMetadataTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/coordinator/GroupTest.scala PRE-CREATION 
>   core/src/test/scala/unit/kafka/coordinator/PartitionAssignorTest.scala 
> PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/33088/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Onur Karaman
> 
>



Re: Review Request 33088: add heartbeat to coordinator

2015-05-07 Thread Guozhang Wang


> On May 7, 2015, 10:07 p.m., Guozhang Wang wrote:
> > I think we can check this in if you agree with the following minor comments 
> > (I can do them while committing), and then move on to the rebalance logic / 
> > leftover works.

Also could you rebase since currently patch does not apply cleanly on trunk?


- Guozhang


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33088/#review82921
---


On May 5, 2015, 5:50 p.m., Onur Karaman wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/33088/
> ---
> 
> (Updated May 5, 2015, 5:50 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1334
> https://issues.apache.org/jira/browse/KAFKA-1334
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> add heartbeat to coordinator
> 
> todo:
> - see how it performs under real load
> - add error code handling on the consumer side
> 
> 
> Diffs
> -
> 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/internals/Coordinator.java
>  e55ab11df4db0b0084f841a74cbcf819caf780d5 
>   clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
> 36aa412404ff1458c7bef0feecaaa8bc45bed9c7 
>   core/src/main/scala/kafka/coordinator/ConsumerCoordinator.scala 
> 456b602245e111880e1b8b361319cabff38ee0e9 
>   core/src/main/scala/kafka/coordinator/ConsumerRegistry.scala 
> 2f5797064d4131ecfc9d2750d9345a9fa3972a9a 
>   core/src/main/scala/kafka/coordinator/CoordinatorMetadata.scala 
> PRE-CREATION 
>   core/src/main/scala/kafka/coordinator/DelayedHeartbeat.scala 
> 6a6bc7bc4ceb648b67332e789c2c33de88e4cd86 
>   core/src/main/scala/kafka/coordinator/DelayedJoinGroup.scala 
> df60cbc35d09937b4e9c737c67229889c69d8698 
>   core/src/main/scala/kafka/coordinator/DelayedRebalance.scala 
> 8defa2e41c92f1ebe255177679d275c70dae5b3e 
>   core/src/main/scala/kafka/coordinator/Group.scala PRE-CREATION 
>   core/src/main/scala/kafka/coordinator/GroupRegistry.scala 
> 94ef5829b3a616c90018af1db7627bfe42e259e5 
>   core/src/main/scala/kafka/coordinator/HeartbeatBucket.scala 
> 821e26e97eaa97b5f4520474fff0fedbf406c82a 
>   core/src/main/scala/kafka/coordinator/PartitionAssignor.scala PRE-CREATION 
>   core/src/main/scala/kafka/server/DelayedOperationKey.scala 
> b673e43b0ba401b2e22f27aef550e3ab0ef4323c 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> b4004aa3a1456d337199aa1245fb0ae61f6add46 
>   core/src/main/scala/kafka/server/KafkaServer.scala 
> c63f4ba9d622817ea8636d4e6135fba917ce085a 
>   core/src/main/scala/kafka/server/OffsetManager.scala 
> 18680ce100f10035175cc0263ba7787ab0f6a17a 
>   core/src/test/scala/unit/kafka/coordinator/CoordinatorMetadataTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/coordinator/GroupTest.scala PRE-CREATION 
>   core/src/test/scala/unit/kafka/coordinator/PartitionAssignorTest.scala 
> PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/33088/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Onur Karaman
> 
>



[jira] [Created] (KAFKA-2181) online doc issues

2015-05-07 Thread Jason Rosenberg (JIRA)
Jason Rosenberg created KAFKA-2181:
--

 Summary: online doc issues
 Key: KAFKA-2181
 URL: https://issues.apache.org/jira/browse/KAFKA-2181
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Rosenberg
Priority: Minor


On the page: http://kafka.apache.org/documentation.html

There are some typos, etc.  I'm unsure if this is the place to report issues 
(please advise alternate location). I'd be happy to send edits/suggestions 
directly.

Issues currently of mind:

1. The command to delete a topic config should use the flag --delete-config 
instead of --deleteConfig.

2. The consumer config option for 'partition.assignment.strategy' appears twice 
in the table (and only has the full detail in the second appearance at the end).





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 33088: add heartbeat to coordinator

2015-05-07 Thread Onur Karaman


> On May 7, 2015, 10:07 p.m., Guozhang Wang wrote:
> > I think we can check this in if you agree with the following minor comments 
> > (I can do them while committing), and then move on to the rebalance logic / 
> > leftover works.
> 
> Guozhang Wang wrote:
> Also could you rebase since currently patch does not apply cleanly on 
> trunk?

Will do.


> On May 7, 2015, 10:07 p.m., Guozhang Wang wrote:
> > core/src/main/scala/kafka/coordinator/CoordinatorMetadata.scala, line 35
> > 
> >
> > "private[Coordinator]" ?

private[coordinator] says to keep the class private to the kafka.coordinator 
package. Since there is no package called kafka.Coordinator with a capital C, 
that won't work.
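
(For reference, a tiny compilable illustration of the access modifier in
question; the class name is made up.)

    package kafka.coordinator

    // Visible anywhere under the kafka.coordinator package and hidden outside it.
    // The identifier in the brackets must name an enclosing (lowercase) package;
    // there is no package named "Coordinator", so private[Coordinator] would not compile.
    private[coordinator] class CoordinatorMetadataSketch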


> On May 7, 2015, 10:07 p.m., Guozhang Wang wrote:
> > core/src/main/scala/kafka/coordinator/CoordinatorMetadata.scala, lines 
> > 101-123
> > 
> >
> > We can probably merge these three functions into one:
> > 
> > updateTopicSubscriptionForGroup(groupId, topicsToAdd, topicsToRemove)
> > 
> > where topicsToAdd/Remove can be empty.

Yeah I originally only had the one combined bindAndUnbindGroupFromTopics 
function in an earlier draft (not in rb).

I found that the combined function with empty topicsToBind/topicsToUnbind params 
made addConsumer and removeConsumer harder to understand. It also made unit 
tests a lot harder to reason about in terms of figuring out which branches were 
covered.


- Onur


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33088/#review82921
---


On May 5, 2015, 5:50 p.m., Onur Karaman wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/33088/
> ---
> 
> (Updated May 5, 2015, 5:50 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1334
> https://issues.apache.org/jira/browse/KAFKA-1334
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> add heartbeat to coordinator
> 
> todo:
> - see how it performs under real load
> - add error code handling on the consumer side
> 
> 
> Diffs
> -
> 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/internals/Coordinator.java
>  e55ab11df4db0b0084f841a74cbcf819caf780d5 
>   clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
> 36aa412404ff1458c7bef0feecaaa8bc45bed9c7 
>   core/src/main/scala/kafka/coordinator/ConsumerCoordinator.scala 
> 456b602245e111880e1b8b361319cabff38ee0e9 
>   core/src/main/scala/kafka/coordinator/ConsumerRegistry.scala 
> 2f5797064d4131ecfc9d2750d9345a9fa3972a9a 
>   core/src/main/scala/kafka/coordinator/CoordinatorMetadata.scala 
> PRE-CREATION 
>   core/src/main/scala/kafka/coordinator/DelayedHeartbeat.scala 
> 6a6bc7bc4ceb648b67332e789c2c33de88e4cd86 
>   core/src/main/scala/kafka/coordinator/DelayedJoinGroup.scala 
> df60cbc35d09937b4e9c737c67229889c69d8698 
>   core/src/main/scala/kafka/coordinator/DelayedRebalance.scala 
> 8defa2e41c92f1ebe255177679d275c70dae5b3e 
>   core/src/main/scala/kafka/coordinator/Group.scala PRE-CREATION 
>   core/src/main/scala/kafka/coordinator/GroupRegistry.scala 
> 94ef5829b3a616c90018af1db7627bfe42e259e5 
>   core/src/main/scala/kafka/coordinator/HeartbeatBucket.scala 
> 821e26e97eaa97b5f4520474fff0fedbf406c82a 
>   core/src/main/scala/kafka/coordinator/PartitionAssignor.scala PRE-CREATION 
>   core/src/main/scala/kafka/server/DelayedOperationKey.scala 
> b673e43b0ba401b2e22f27aef550e3ab0ef4323c 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> b4004aa3a1456d337199aa1245fb0ae61f6add46 
>   core/src/main/scala/kafka/server/KafkaServer.scala 
> c63f4ba9d622817ea8636d4e6135fba917ce085a 
>   core/src/main/scala/kafka/server/OffsetManager.scala 
> 18680ce100f10035175cc0263ba7787ab0f6a17a 
>   core/src/test/scala/unit/kafka/coordinator/CoordinatorMetadataTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/coordinator/GroupTest.scala PRE-CREATION 
>   core/src/test/scala/unit/kafka/coordinator/PartitionAssignorTest.scala 
> PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/33088/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Onur Karaman
> 
>



Re: Review Request 31627: Patch for KAFKA-1884

2015-05-07 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31627/#review82926
---

Ship it!



clients/src/main/java/org/apache/kafka/clients/NetworkClient.java


It could happen that even if cluster.nodes().size() > 0, some transient 
error may still happen as expected while refreshing metadata, such as 
UNKNOWN_TOPIC, so it is better to move this out of the if-else statement.


- Guozhang Wang


On March 2, 2015, 3:57 p.m., Manikumar Reddy O wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/31627/
> ---
> 
> (Updated March 2, 2015, 3:57 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1884
> https://issues.apache.org/jira/browse/KAFKA-1884
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Added logs to print metadata response errors
> 
> 
> Diffs
> -
> 
>   clients/src/main/java/org/apache/kafka/clients/NetworkClient.java 
> a7fa4a9dfbcfbc4d9e9259630253cbcded158064 
> 
> Diff: https://reviews.apache.org/r/31627/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Manikumar Reddy O
> 
>



[jira] [Commented] (KAFKA-1884) Print metadata response errors

2015-05-07 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14533539#comment-14533539
 ] 

Guozhang Wang commented on KAFKA-1884:
--

[~omkreddy] Sorry for the late reply, committed to trunk with some minor 
changes.

> Print metadata response errors
> --
>
> Key: KAFKA-1884
> URL: https://issues.apache.org/jira/browse/KAFKA-1884
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.2.0
>Reporter: Manikumar Reddy
>Assignee: Manikumar Reddy
> Fix For: 0.8.3
>
> Attachments: KAFKA-1884.patch
>
>
> Print metadata response errors.
> producer logs:
> DEBUG [2015-01-20 12:46:13,406] NetworkClient: maybeUpdateMetadata(): Trying 
> to send metadata request to node -1
> DEBUG [2015-01-20 12:46:13,406] NetworkClient: maybeUpdateMetadata(): Sending 
> metadata request ClientRequest(expectResponse=true, payload=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=50845,client_id=my-producer},
>  body={topics=[TOPIC=]})) to node -1
> TRACE [2015-01-20 12:46:13,416] NetworkClient: handleMetadataResponse(): 
> Ignoring empty metadata response with correlation id 50845.
> DEBUG [2015-01-20 12:46:13,417] NetworkClient: maybeUpdateMetadata(): Trying 
> to send metadata request to node -1
> DEBUG [2015-01-20 12:46:13,417] NetworkClient: maybeUpdateMetadata(): Sending 
> metadata request ClientRequest(expectResponse=true, payload=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=50846,client_id=my-producer},
>  body={topics=[TOPIC=]})) to node -1
> TRACE [2015-01-20 12:46:13,417] NetworkClient: handleMetadataResponse(): 
> Ignoring empty metadata response with correlation id 50846.
> DEBUG [2015-01-20 12:46:13,417] NetworkClient: maybeUpdateMetadata(): Trying 
> to send metadata request to node -1
> DEBUG [2015-01-20 12:46:13,418] NetworkClient: maybeUpdateMetadata(): Sending 
> metadata request ClientRequest(expectResponse=true, payload=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=50847,client_id=my-producer},
>  body={topics=[TOPIC=]})) to node -1
> TRACE [2015-01-20 12:46:13,418] NetworkClient: handleMetadataResponse(): 
> Ignoring empty metadata response with correlation id 50847.
> Broker logs:
> [2015-01-20 12:46:14,074] ERROR [KafkaApi-0] error when handling request 
> Name: TopicMetadataRequest; Version: 0; CorrelationId: 51020; ClientId: 
> my-producer; Topics: TOPIC= (kafka.server.KafkaApis)
> kafka.common.InvalidTopicException: topic name TOPIC= is illegal, contains a 
> character other than ASCII alphanumerics, '.', '_' and '-'
>   at kafka.common.Topic$.validate(Topic.scala:42)
>   at 
> kafka.admin.AdminUtils$.createOrUpdateTopicPartitionAssignmentPathInZK(AdminUtils.scala:186)
>   at kafka.admin.AdminUtils$.createTopic(AdminUtils.scala:177)
>   at kafka.server.KafkaApis$$anonfun$5.apply(KafkaApis.scala:367)
>   at kafka.server.KafkaApis$$anonfun$5.apply(KafkaApis.scala:350)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>   at scala.collection.immutable.Set$Set1.foreach(Set.scala:74)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
>   at 
> scala.collection.AbstractSet.scala$collection$SetLike$$super$map(Set.scala:47)
>   at scala.collection.SetLike$class.map(SetLike.scala:93)
>   at scala.collection.AbstractSet.map(Set.scala:47)
>   at kafka.server.KafkaApis.getTopicMetadata(KafkaApis.scala:350)
>   at 
> kafka.server.KafkaApis.handleTopicMetadataRequest(KafkaApis.scala:389)
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:57)
>   at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:59)
>   at java.lang.Thread.run(Thread.java:722)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1884) Print metadata response errors

2015-05-07 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-1884:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Print metadata response errors
> --
>
> Key: KAFKA-1884
> URL: https://issues.apache.org/jira/browse/KAFKA-1884
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.2.0
>Reporter: Manikumar Reddy
>Assignee: Manikumar Reddy
> Fix For: 0.8.3
>
> Attachments: KAFKA-1884.patch
>
>
> Print metadata response errors.
> producer logs:
> DEBUG [2015-01-20 12:46:13,406] NetworkClient: maybeUpdateMetadata(): Trying 
> to send metadata request to node -1
> DEBUG [2015-01-20 12:46:13,406] NetworkClient: maybeUpdateMetadata(): Sending 
> metadata request ClientRequest(expectResponse=true, payload=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=50845,client_id=my-producer},
>  body={topics=[TOPIC=]})) to node -1
> TRACE [2015-01-20 12:46:13,416] NetworkClient: handleMetadataResponse(): 
> Ignoring empty metadata response with correlation id 50845.
> DEBUG [2015-01-20 12:46:13,417] NetworkClient: maybeUpdateMetadata(): Trying 
> to send metadata request to node -1
> DEBUG [2015-01-20 12:46:13,417] NetworkClient: maybeUpdateMetadata(): Sending 
> metadata request ClientRequest(expectResponse=true, payload=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=50846,client_id=my-producer},
>  body={topics=[TOPIC=]})) to node -1
> TRACE [2015-01-20 12:46:13,417] NetworkClient: handleMetadataResponse(): 
> Ignoring empty metadata response with correlation id 50846.
> DEBUG [2015-01-20 12:46:13,417] NetworkClient: maybeUpdateMetadata(): Trying 
> to send metadata request to node -1
> DEBUG [2015-01-20 12:46:13,418] NetworkClient: maybeUpdateMetadata(): Sending 
> metadata request ClientRequest(expectResponse=true, payload=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=50847,client_id=my-producer},
>  body={topics=[TOPIC=]})) to node -1
> TRACE [2015-01-20 12:46:13,418] NetworkClient: handleMetadataResponse(): 
> Ignoring empty metadata response with correlation id 50847.
> Broker logs:
> [2015-01-20 12:46:14,074] ERROR [KafkaApi-0] error when handling request 
> Name: TopicMetadataRequest; Version: 0; CorrelationId: 51020; ClientId: 
> my-producer; Topics: TOPIC= (kafka.server.KafkaApis)
> kafka.common.InvalidTopicException: topic name TOPIC= is illegal, contains a 
> character other than ASCII alphanumerics, '.', '_' and '-'
>   at kafka.common.Topic$.validate(Topic.scala:42)
>   at 
> kafka.admin.AdminUtils$.createOrUpdateTopicPartitionAssignmentPathInZK(AdminUtils.scala:186)
>   at kafka.admin.AdminUtils$.createTopic(AdminUtils.scala:177)
>   at kafka.server.KafkaApis$$anonfun$5.apply(KafkaApis.scala:367)
>   at kafka.server.KafkaApis$$anonfun$5.apply(KafkaApis.scala:350)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>   at scala.collection.immutable.Set$Set1.foreach(Set.scala:74)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
>   at 
> scala.collection.AbstractSet.scala$collection$SetLike$$super$map(Set.scala:47)
>   at scala.collection.SetLike$class.map(SetLike.scala:93)
>   at scala.collection.AbstractSet.map(Set.scala:47)
>   at kafka.server.KafkaApis.getTopicMetadata(KafkaApis.scala:350)
>   at 
> kafka.server.KafkaApis.handleTopicMetadataRequest(KafkaApis.scala:389)
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:57)
>   at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:59)
>   at java.lang.Thread.run(Thread.java:722)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Environment variables for network config

2015-05-07 Thread Lukas Steiblys
Hello Kafka developers,

We have built a prototype analytics processing pipeline using Kafka and Samza 
and are working on making it production ready. We are using Docker to 
containerize the Kafka service, and one problem we are having is specifying 
configuration options in properties files, specifically the 
advertised.host.name property. We cannot use the result returned from 
getCanonicalName() because it is not resolvable, and we would like to specify 
the value as an environment variable when starting the container, as is 
recommended in http://12factor.net/config . 

Currently there is no option to do it, as I understand. Any thoughts around 
implementing this? I can submit a patch for it myself, if needed, but I would 
like to get more input from the current developers before going forward.

Lukas
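
One possible shape for such support, sketched below purely as an illustration: 
overlay selected environment variables onto the broker's properties file at 
startup so that a value like advertised.host.name can be injected per container. 
The variable name KAFKA_ADVERTISED_HOST_NAME and the loader class are assumptions 
made for this example, not an existing Kafka facility.

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

// Hedged sketch only: overlay environment variables onto server.properties
// before handing the result to the broker. KAFKA_ADVERTISED_HOST_NAME is a
// hypothetical variable name chosen for this example.
final class EnvOverlayConfigLoader {
    static Properties load(String path) throws IOException {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(path)) {
            props.load(in);
        }
        String advertisedHost = System.getenv("KAFKA_ADVERTISED_HOST_NAME");
        if (advertisedHost != null && !advertisedHost.isEmpty())
            props.setProperty("advertised.host.name", advertisedHost);
        return props;
    }

    public static void main(String[] args) throws IOException {
        Properties props = load(args[0]);
        System.out.println("advertised.host.name = " + props.getProperty("advertised.host.name"));
    }
}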


Re: [VOTE] KIP-4 Admin Commands / Phase-1

2015-05-07 Thread Jun Rao
+1 on the vote.

Thanks,

Jun

On Thu, May 7, 2015 at 8:29 AM, Andrii Biletskyi <
andrii.bilets...@stealth.ly> wrote:

> Jun,
>
> 1. My bad, updated description, but forgot the schema itself. Fixed.
>
> 2. Okay, changed that.
>
> 3. Sure, we'll evolve TMR again once needed.
>
> Thanks,
> Andrii Biletskyi
>
> On Thu, May 7, 2015 at 6:16 PM, Jun Rao  wrote:
>
> > Hi, Andrii,
> >
> > A few more comments.
> >
> > 1. We need to remove isr and replicaLags from TMR, right?
> >
> > 2. For TMR v1, we decided to keep the same auto topic creation logic as
> v0
> > initially. We can drop the auto topic creation logic later after the
> > producer client starts using createTopicRequest.
> >
> > 3. For the configs returned in TMR, we can keep this for now. We may need
> > some adjustment later depending on the outcome of KIP-21(configuration
> > management).
> >
> > Other than those, this looks good.
> >
> > Thanks,
> >
> > Jun
> >
> >
> > On Tue, May 5, 2015 at 11:16 AM, Andrii Biletskyi <
> > andrii.bilets...@stealth.ly> wrote:
> >
> > > Hi all,
> > >
> > > This is a voting thread for KIP-4 Phase-1. It will include Wire
> protocol
> > > changes
> > > and server side handling code.
> > >
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations
> > >
> > > Thanks,
> > > Andrii Biletskyi
> > >
> >
>


RE: [DISCUSS] KIP-21 Configuration Management

2015-05-07 Thread Aditya Auradkar
Theoretically, using just the broker id and ZK connect string, it should be 
possible for the broker to read all configs from ZooKeeper. Like Ashish said, 
we should probably take a look and make sure.

Additionally, we've spoken about making config changes only through a broker 
API. However, we also need a way to change properties even if a specific 
broker, the controller, or the entire cluster is down or unable to accept 
config change requests for any reason. This implies that we need a mechanism 
to make config changes by talking to ZooKeeper directly and that we can't rely 
solely on the broker/controller API.

Aditya
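
As a rough illustration of that fallback path, the sketch below writes an 
entity's config payload into ZooKeeper directly with the plain ZooKeeper client, 
bypassing any broker or controller API. The znode path /config/topics/<topic> 
and the payload format are assumptions for this example rather than anything 
KIP-21 has settled on.

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

// Hedged sketch: change a topic-level config by talking to ZooKeeper directly,
// for the case where no broker/controller is reachable. The znode path and
// payload shape are assumptions for illustration, and parent znodes are assumed
// to already exist.
final class DirectZkConfigChange {
    static void setTopicConfig(String zkConnect, String topic, String configJson)
            throws Exception {
        ZooKeeper zk = new ZooKeeper(zkConnect, 30000, null);
        try {
            String path = "/config/topics/" + topic;
            byte[] data = configJson.getBytes("UTF-8");
            if (zk.exists(path, false) == null)
                zk.create(path, data, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
            else
                zk.setData(path, data, -1); // -1: skip the znode version check
        } finally {
            zk.close();
        }
    }
}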


From: Ashish Singh [asi...@cloudera.com]
Sent: Thursday, May 07, 2015 8:19 AM
To: dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-21 Configuration Management

Agreed :). However, the other concerns remain. Do you think just providing
the ZK info to the broker will be sufficient? I will spend some time myself
looking into the existing required configs.

On Thursday, May 7, 2015, Jun Rao  wrote:

> Ashish,
>
> 3. This is true. However, using files has the same problem. You can't store
> the location of the file in the file itself. The location of the file has
> to be passed out of band into Kafka.
>
> Thanks,
>
> Jun
>
> On Wed, May 6, 2015 at 6:34 PM, Ashish Singh  > wrote:
>
> > Hey Jun,
> >
> > Where does the broker get the info, which zk it needs to talk to?
> >
> > On Wednesday, May 6, 2015, Jun Rao >
> wrote:
> >
> > > Ashish,
> > >
> > > 3. Just want to clarify. Why can't you store ZK connection config in
> ZK?
> > > This is a property for ZK clients, not ZK server.
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Wed, May 6, 2015 at 5:48 PM, Ashish Singh  
> > > > wrote:
> > >
> > > > I too would like to share some concerns that we came up with while
> > > > discussing the effect of moving configs to zookeeper will have.
> > > >
> > > > 1. Kafka will start to become a configuration management tool to some
> > > > degree, and be subject to all the things such tools are commonly
> asked
> > to
> > > > do. Kafka'll likely need to re-implement the role / group / service
> > > > hierarchy that CM uses. Kafka'll need some way to conveniently dump
> its
> > > > configs so they can be re-imported later, as a backup tool. People
> will
> > > > want this to be audited, which means you'd need distinct logins for
> > > > different people, and user management. You can try to push some of
> this
> > > > stuff onto tools like CM, but this is Kafka going out of its way to
> be
> > > > difficult to manage, and most projects don't want to do that. Being
> > > unique
> > > > in how configuration is done is strictly a bad thing for both
> > integration
> > > > and usability. Probably lots of other stuff. Seems like a bad
> > direction.
> > > >
> > > > 2. Where would the default config live? If we decide on keeping the
> > > config
> > > > files around just for getting the default config, then I think on
> > > restart,
> > > > the config file will be ignored. This creates an obnoxious asymmetry
> > for
> > > > how to configure Kafka the first time and how you update it. You have
> > to
> > > > learn 2 ways of making config changes. If there was a mistake in your
> > > > original config file, you can't just edit the config file and
> restart,
> > > you
> > > > have to go through the API. Reading configs is also more irritating.
> > This
> > > > all creates a learning curve for users of Kafka that will make it
> > harder
> > > to
> > > > use than other projects. This is also a backwards-incompatible
> change.
> > > >
> > > > 3. All Kafka configs living in ZK is strictly impossible, since at
> the
> > > very
> > > > least ZK connection configs cannot be stored in ZK. So you will have
> a
> > > file
> > > > where some values are in effect but others are not, which is again
> > > > confusing. Also, since you are still reading the config file on first
> > > > start, there are still multiple sources of truth, or at least the
> > > > appearance of such to the user.
> > > >
> > > > On Wed, May 6, 2015 at 5:33 PM, Jun Rao  
> > >
> > > wrote:
> > > >
> > > > > One of the Chef users confirmed that Chef integration could still
> > work
> > > if
> > > > > all configs are moved to ZK. My rough understanding of how Chef
> works
> > > is
> > > > > that a user first registers a service host with a Chef server.
> After
> > > > that,
> > > > > a Chef client will be run on the service host. The user can then
> push
> > > > > config changes intended for a service/host to the Chef server. The
> > > server
> > > > > is then responsible for pushing the changes to Chef clients. Chef
> > > clients
> > > > > support pluggable logic. For example, it can generate a config file
> > > that
> > > > > Kafka broker will take. If we move all configs to ZK, we can
> > customize
> > > > the
> > > > > Chef client to use our config CLI to make the config changes in
> > Kafka.
> > > In
> > > > > this model, one probably

Build failed in Jenkins: KafkaPreCommit #96

2015-05-07 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-1884; Add logging upon metadata response errors; reviewed by 
Guozhang Wang

--
[...truncated 411 lines...]
org.apache.kafka.common.record.RecordTest > testEquality[58] PASSED

org.apache.kafka.common.record.RecordTest > testFields[59] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[59] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[59] PASSED

org.apache.kafka.common.record.RecordTest > testFields[60] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[60] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[60] PASSED

org.apache.kafka.common.record.RecordTest > testFields[61] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[61] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[61] PASSED

org.apache.kafka.common.record.RecordTest > testFields[62] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[62] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[62] PASSED

org.apache.kafka.common.record.RecordTest > testFields[63] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[63] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[63] PASSED

org.apache.kafka.common.record.MemoryRecordsTest > testIterator[0] PASSED

org.apache.kafka.common.record.MemoryRecordsTest > testIterator[1] PASSED

org.apache.kafka.common.record.MemoryRecordsTest > testIterator[2] PASSED

org.apache.kafka.common.record.MemoryRecordsTest > testIterator[3] PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > testSimple 
PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > testNulls 
PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > testDefault 
PASSED

org.apache.kafka.common.metrics.MetricsTest > testMetricName PASSED

org.apache.kafka.common.metrics.MetricsTest > testSimpleStats PASSED

org.apache.kafka.common.metrics.MetricsTest > testHierarchicalSensors PASSED

org.apache.kafka.common.metrics.MetricsTest > testBadSensorHiearchy PASSED

org.apache.kafka.common.metrics.MetricsTest > testEventWindowing PASSED

org.apache.kafka.common.metrics.MetricsTest > testTimeWindowing PASSED

org.apache.kafka.common.metrics.MetricsTest > testOldDataHasNoEffect PASSED

org.apache.kafka.common.metrics.MetricsTest > testDuplicateMetricName PASSED

org.apache.kafka.common.metrics.MetricsTest > testQuotas PASSED

org.apache.kafka.common.metrics.MetricsTest > testPercentiles PASSED

org.apache.kafka.common.metrics.JmxReporterTest > testJmxRegistration PASSED

org.apache.kafka.common.metrics.stats.HistogramTest > testHistogram PASSED

org.apache.kafka.common.metrics.stats.HistogramTest > testConstantBinScheme 
PASSED

org.apache.kafka.common.metrics.stats.HistogramTest > testLinearBinScheme PASSED

org.apache.kafka.common.config.ConfigDefTest > testBasicTypes PASSED

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefault PASSED

org.apache.kafka.common.config.ConfigDefTest > testNullDefault PASSED

org.apache.kafka.common.config.ConfigDefTest > testMissingRequired PASSED

org.apache.kafka.common.config.ConfigDefTest > testDefinedTwice PASSED

org.apache.kafka.common.config.ConfigDefTest > testBadInputs PASSED

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefaultRange PASSED

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefaultString PASSED

org.apache.kafka.common.config.ConfigDefTest > testValidators PASSED

org.apache.kafka.common.config.AbstractConfigTest > testConfiguredInstances 
PASSED

org.apache.kafka.common.serialization.SerializationTest > testStringSerializer 
PASSED

org.apache.kafka.common.serialization.SerializationTest > testIntegerSerializer 
PASSED

org.apache.kafka.common.network.SelectorTest > testServerDisconnect PASSED

org.apache.kafka.common.network.SelectorTest > testClientDisconnect PASSED

org.apache.kafka.common.network.SelectorTest > testCantSendWithInProgress PASSED

org.apache.kafka.common.network.SelectorTest > testCantSendWithoutConnecting 
PASSED

org.apache.kafka.common.network.SelectorTest > testNoRouteToHost PASSED

org.apache.kafka.common.network.SelectorTest > testConnectionRefused PASSED

org.apache.kafka.common.network.SelectorTest > testNormalOperation PASSED

org.apache.kafka.common.network.SelectorTest > testSendLargeRequest PASSED

org.apache.kafka.common.network.SelectorTest > testEmptyRequest PASSED

org.apache.kafka.common.network.SelectorTest > testExistingConnectionId PASSED

org.apache.kafka.common.network.SelectorTest > testMute PASSED

org.apache.kafka.common.requests.RequestResponseTest > testSerialization PASSED

org.apache.kafka.clients.NetworkClientTest > testReadyAndDisconnect PASSED

org.apache.kafka.clients.NetworkClientTest > testSendToUnreadyNode PASSED

org.apache.kafka.clients.NetworkClientTest > testSimpleRequestResp

[jira] [Commented] (KAFKA-2172) Round-robin partition assignment strategy too restrictive

2015-05-07 Thread Joel Koshy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14533651#comment-14533651
 ] 

Joel Koshy commented on KAFKA-2172:
---

The restriction really has to do with simplicity in the assignment code; i.e., 
it is definitely possible to remove it. The main motivation for round-robin was 
heavy consumers such as the mirror-maker. For these consumers it is (usually, 
but not always) less of an issue to take down all instances, reconfigure, and 
bring them back up if you want to change subscriptions. I agree this is too 
restrictive in practice, though.

[~bbaugher] Sure, we can consider alternate assignment algorithms. We don't need 
an optimal solution - in fact, optimal can be very complicated and very 
subjective. There are some obvious nice-to-haves though. E.g., all partitions of 
a topic should ideally not go to the same consumer instance if there are other 
instances willing to read from that topic.

It may be useful to come up with one or more approaches, run some simulations 
(with different assignments, consumer counts, partition counts, etc.), and see 
how well those approaches perform.
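
To make the discussion concrete, here is a small, self-contained sketch of one 
loosened round-robin variant: lay out all partitions and all consumer threads, 
walk the threads round-robin, and when the next thread's consumer is not 
subscribed to a partition's topic, skip ahead to the next thread that is. It is 
only a simulation-style sketch for comparing approaches, not a proposed patch, 
and the thread and topic names are invented for the example.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hedged sketch of a round-robin assignor that tolerates heterogeneous
// subscriptions: threads not subscribed to a topic are skipped rather than
// failing the whole rebalance. Partitions of a topic nobody subscribes to are
// simply left unassigned.
final class LooseRoundRobinAssignor {
    static Map<String, List<String>> assign(Map<String, Set<String>> threadSubscriptions,
                                            Map<String, Integer> partitionsPerTopic) {
        List<String> threads = new ArrayList<String>(threadSubscriptions.keySet());
        Map<String, List<String>> assignment = new LinkedHashMap<String, List<String>>();
        for (String thread : threads)
            assignment.put(thread, new ArrayList<String>());
        if (threads.isEmpty())
            return assignment;

        int cursor = 0;
        for (Map.Entry<String, Integer> topic : partitionsPerTopic.entrySet()) {
            for (int p = 0; p < topic.getValue(); p++) {
                // Advance to the next thread whose consumer selected this topic.
                for (int scanned = 0; scanned < threads.size(); scanned++) {
                    String candidate = threads.get(cursor % threads.size());
                    cursor++;
                    if (threadSubscriptions.get(candidate).contains(topic.getKey())) {
                        assignment.get(candidate).add(topic.getKey() + "-" + p);
                        break;
                    }
                }
            }
        }
        return assignment;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> subs = new HashMap<String, Set<String>>();
        subs.put("c1-thread0", new HashSet<String>(Arrays.asList("logs", "metrics")));
        subs.put("c2-thread0", new HashSet<String>(Arrays.asList("logs")));
        Map<String, Integer> partitions = new HashMap<String, Integer>();
        partitions.put("logs", 4);
        partitions.put("metrics", 2);
        System.out.println(assign(subs, partitions));
    }
}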


> Round-robin partition assignment strategy too restrictive
> -
>
> Key: KAFKA-2172
> URL: https://issues.apache.org/jira/browse/KAFKA-2172
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Rosenberg
>
> The round-robin partition assignment strategy was introduced for the 
> high-level consumer, starting with 0.8.2.1.  This appears to be a very 
> attractive feature, but it has an unfortunate restriction, which prevents it 
> from being easily utilized.  That is that it requires all consumers in the 
> consumer group have identical topic regex selectors, and that they have the 
> same number of consumer threads.
> It turns out this is not always the case for our deployments.  It's not 
> unusual to run multiple consumers within a single process (with different 
> topic selectors), or we might have multiple processes dedicated for different 
> topic subsets.  Agreed, we could change these to have separate group ids for 
> each sub topic selector (but unfortunately, that's easier said than done).  
> In several cases, we do at least have separate client.ids set for each 
> sub-consumer, so it would be incrementally better if we could at least loosen 
> the requirement such that each set of topics selected by a groupid/clientid 
> pair are the same.
> But, if we want to do a rolling restart for a new version of a consumer 
> config, the cluster will likely be in a state where it's not possible to have 
> a single config until the full rolling restart completes across all nodes.  
> This results in a consumer outage while the rolling restart is happening.
> Finally, it's especially problematic if we want to canary a new version for a 
> period before rolling to the whole cluster.
> I'm not sure why this restriction should exist (as it obviously does not 
> exist for the 'range' assignment strategy).  It seems it could be made to 
> work reasonably well with heterogeneous topic selection and heterogeneous 
> thread counts.  The documentation states that "The round-robin partition 
> assignor lays out all the available partitions and all the available consumer 
> threads. It then proceeds to do a round-robin assignment from partition to 
> consumer thread."
> If the assignor can "lay out all the available partitions and all the 
> available consumer threads", it should be able to uniformly assign partitions 
> to the available threads.  In each case, if a thread belongs to a consumer 
> that doesn't have that partition selected, just move to the next available 
> thread that does have the selection, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 33620: Patch for KAFKA-1690

2015-05-07 Thread Jun Rao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33620/#review82953
---


Thanks for the patch. Just a few quick comments.


clients/src/main/java/org/apache/kafka/common/network/DefaultAuthenticator.java


Do we want to return Principal or Subject? It seems that ssl only gives a 
Principal back. However, sasl can give back subject which has multiple 
principals?



clients/src/main/java/org/apache/kafka/common/network/SSLTransportLayer.java


Ssl can trigger rehandshake. How is the rehandshake triggered? Do we need 
to set the handshakeStatus accordingly?



clients/src/main/java/org/apache/kafka/common/network/TransportLayer.java


Shouldn't we standardize the return on either int or long?


- Jun Rao


On April 28, 2015, 7:31 a.m., Sriharsha Chintalapani wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/33620/
> ---
> 
> (Updated April 28, 2015, 7:31 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1690
> https://issues.apache.org/jira/browse/KAFKA-1690
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-1690. new java producer needs ssl support as a client.
> 
> 
> Diffs
> -
> 
>   checkstyle/checkstyle.xml a215ff36e9252879f1e0be5a86fef9a875bb8f38 
>   checkstyle/import-control.xml f2e6cec267e67ce8e261341e373718e14a8e8e03 
>   clients/src/main/java/org/apache/kafka/clients/ClientUtils.java 
> 0d68bf1e1e90fe9d5d4397ddf817b9a9af8d9f7a 
>   clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
> cf32e4e7c40738fe6d8adc36ae0cfad459ac5b0b 
>   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
> 42c72198a0325e234cf1d428b687663099de884e 
>   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
> 09ecb427c9f4482dd064428815128b1c426dc921 
>   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
> 42b12928781463b56fc4a45d96bb4da2745b6d95 
>   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
> 5a575553d30c1c0bda9ffef9e9b9eafae28deba5 
>   clients/src/main/java/org/apache/kafka/common/config/SecurityConfig.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/common/network/Authenticator.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/common/network/Channel.java 
> PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/network/DefaultAuthenticator.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/network/PlainTextTransportLayer.java
>  PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/common/network/SSLFactory.java 
> PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/network/SSLTransportLayer.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/common/network/Selectable.java 
> b5f8d83e89f9026dc0853e5f92c00b2d7f043e22 
>   clients/src/main/java/org/apache/kafka/common/network/Selector.java 
> 57de0585e5e9a53eb9dcd99cac1ab3eb2086a302 
>   clients/src/main/java/org/apache/kafka/common/network/TransportLayer.java 
> PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/protocol/SecurityProtocol.java 
> d3394ee669e1c2254403e95393511d0a17e6f250 
>   clients/src/main/java/org/apache/kafka/common/utils/Utils.java 
> f73eedb030987f018d8446bb1dcd98d19fa97331 
>   clients/src/test/java/org/apache/kafka/common/network/EchoServer.java 
> PRE-CREATION 
>   clients/src/test/java/org/apache/kafka/common/network/SSLSelectorTest.java 
> PRE-CREATION 
>   clients/src/test/java/org/apache/kafka/common/network/SelectorTest.java 
> d5b306b026e788b4e5479f3419805aa49ae889f3 
>   clients/src/test/java/org/apache/kafka/common/utils/UtilsTest.java 
> 2ebe3c21f611dc133a2dbb8c7dfb0845f8c21498 
>   clients/src/test/java/org/apache/kafka/test/TestSSLUtils.java PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/33620/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Sriharsha Chintalapani
> 
>



[jira] [Commented] (KAFKA-1690) new java producer needs ssl support as a client

2015-05-07 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14533747#comment-14533747
 ] 

Jun Rao commented on KAFKA-1690:


Joel mentioned the use case where returning just the peer principal in SSL may 
not be enough. Is the plan to allow a pluggable Authorizer such that one can 
put additional information into the returned Principal?

> new java producer needs ssl support as a client
> ---
>
> Key: KAFKA-1690
> URL: https://issues.apache.org/jira/browse/KAFKA-1690
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joe Stein
>Assignee: Sriharsha Chintalapani
> Fix For: 0.8.3
>
> Attachments: KAFKA-1690.patch, KAFKA-1690.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 33620: Patch for KAFKA-1690

2015-05-07 Thread Sriharsha Chintalapani


> On May 8, 2015, 1:41 a.m., Jun Rao wrote:
> > clients/src/main/java/org/apache/kafka/common/network/DefaultAuthenticator.java,
> >  lines 37-39
> > 
> >
> > Do we want to return Principal or Subject? It seems that ssl only gives 
> > a Principal back. However, sasl can give back subject which has multiple 
> > principals?

Thanks for the review Jun. I am in the process of sending a new patch. The 
current session in KAFKA-1683 takes in a UserPrincipal, hence the reason to 
return userPrincipal. Also, SASL doesn't return a Subject; it uses the subject 
on the client and server to do the authentication handshake. Once this is done 
on the server side, we can call saslServer.getAuthorizationID() to get the 
client's principal, which is just a string. More info here: 
http://docs.oracle.com/javase/7/docs/api/javax/security/sasl/SaslServer.html#getAuthorizationID%28%29
 . I am going to upload the SASL patch by next week, which will make this 
clearer.
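
For reference, a minimal sketch of the server-side step described above, using 
only the standard javax.security.sasl API; the anonymous Principal wrapper is a 
placeholder for illustration, not the UserPrincipal type from KAFKA-1683.

import java.security.Principal;
import javax.security.sasl.SaslServer;

// Hedged sketch: once the SASL handshake has completed on the broker side,
// the authenticated client identity is just a String from getAuthorizationID().
// Wrapping it in a Principal is a placeholder for whatever the session expects.
final class SaslPrincipalSketch {
    static Principal principalOf(SaslServer saslServer) {
        if (!saslServer.isComplete())
            throw new IllegalStateException("SASL handshake has not completed yet");
        final String authorizationId = saslServer.getAuthorizationID();
        return new Principal() {
            @Override
            public String getName() {
                return authorizationId;
            }
        };
    }
}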


- Sriharsha


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33620/#review82953
---


On April 28, 2015, 7:31 a.m., Sriharsha Chintalapani wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/33620/
> ---
> 
> (Updated April 28, 2015, 7:31 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1690
> https://issues.apache.org/jira/browse/KAFKA-1690
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-1690. new java producer needs ssl support as a client.
> 
> 
> Diffs
> -
> 
>   checkstyle/checkstyle.xml a215ff36e9252879f1e0be5a86fef9a875bb8f38 
>   checkstyle/import-control.xml f2e6cec267e67ce8e261341e373718e14a8e8e03 
>   clients/src/main/java/org/apache/kafka/clients/ClientUtils.java 
> 0d68bf1e1e90fe9d5d4397ddf817b9a9af8d9f7a 
>   clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
> cf32e4e7c40738fe6d8adc36ae0cfad459ac5b0b 
>   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
> 42c72198a0325e234cf1d428b687663099de884e 
>   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
> 09ecb427c9f4482dd064428815128b1c426dc921 
>   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
> 42b12928781463b56fc4a45d96bb4da2745b6d95 
>   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
> 5a575553d30c1c0bda9ffef9e9b9eafae28deba5 
>   clients/src/main/java/org/apache/kafka/common/config/SecurityConfig.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/common/network/Authenticator.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/common/network/Channel.java 
> PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/network/DefaultAuthenticator.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/network/PlainTextTransportLayer.java
>  PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/common/network/SSLFactory.java 
> PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/network/SSLTransportLayer.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/common/network/Selectable.java 
> b5f8d83e89f9026dc0853e5f92c00b2d7f043e22 
>   clients/src/main/java/org/apache/kafka/common/network/Selector.java 
> 57de0585e5e9a53eb9dcd99cac1ab3eb2086a302 
>   clients/src/main/java/org/apache/kafka/common/network/TransportLayer.java 
> PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/protocol/SecurityProtocol.java 
> d3394ee669e1c2254403e95393511d0a17e6f250 
>   clients/src/main/java/org/apache/kafka/common/utils/Utils.java 
> f73eedb030987f018d8446bb1dcd98d19fa97331 
>   clients/src/test/java/org/apache/kafka/common/network/EchoServer.java 
> PRE-CREATION 
>   clients/src/test/java/org/apache/kafka/common/network/SSLSelectorTest.java 
> PRE-CREATION 
>   clients/src/test/java/org/apache/kafka/common/network/SelectorTest.java 
> d5b306b026e788b4e5479f3419805aa49ae889f3 
>   clients/src/test/java/org/apache/kafka/common/utils/UtilsTest.java 
> 2ebe3c21f611dc133a2dbb8c7dfb0845f8c21498 
>   clients/src/test/java/org/apache/kafka/test/TestSSLUtils.java PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/33620/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Sriharsha Chintalapani
> 
>



Re: Review Request 33620: Patch for KAFKA-1690

2015-05-07 Thread Sriharsha Chintalapani


> On May 8, 2015, 1:41 a.m., Jun Rao wrote:
> > clients/src/main/java/org/apache/kafka/common/network/SSLTransportLayer.java,
> >  line 288
> > 
> >
> > Ssl can trigger rehandshake. How is the rehandshake triggered? Do we 
> > need to set the handshakeStatus accordingly?

For the purpose of our clients we don't need SSL re-negotiation. SSL 
re-negotiation is needed when a client makes a connection anonymously and later 
wants to present its identity; then it can start SSL re-negotiation to 
establish a new identity. Here is a bit of info: 
https://devcentral.f5.com/articles/ssl-profiles-part-6-ssl-renegotiation . If 
you feel this is necessary I can add it.


- Sriharsha


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33620/#review82953
---


On April 28, 2015, 7:31 a.m., Sriharsha Chintalapani wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/33620/
> ---
> 
> (Updated April 28, 2015, 7:31 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1690
> https://issues.apache.org/jira/browse/KAFKA-1690
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-1690. new java producer needs ssl support as a client.
> 
> 
> Diffs
> -
> 
>   checkstyle/checkstyle.xml a215ff36e9252879f1e0be5a86fef9a875bb8f38 
>   checkstyle/import-control.xml f2e6cec267e67ce8e261341e373718e14a8e8e03 
>   clients/src/main/java/org/apache/kafka/clients/ClientUtils.java 
> 0d68bf1e1e90fe9d5d4397ddf817b9a9af8d9f7a 
>   clients/src/main/java/org/apache/kafka/clients/CommonClientConfigs.java 
> cf32e4e7c40738fe6d8adc36ae0cfad459ac5b0b 
>   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
> 42c72198a0325e234cf1d428b687663099de884e 
>   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
> 09ecb427c9f4482dd064428815128b1c426dc921 
>   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
> 42b12928781463b56fc4a45d96bb4da2745b6d95 
>   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
> 5a575553d30c1c0bda9ffef9e9b9eafae28deba5 
>   clients/src/main/java/org/apache/kafka/common/config/SecurityConfig.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/common/network/Authenticator.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/common/network/Channel.java 
> PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/network/DefaultAuthenticator.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/network/PlainTextTransportLayer.java
>  PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/common/network/SSLFactory.java 
> PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/network/SSLTransportLayer.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/common/network/Selectable.java 
> b5f8d83e89f9026dc0853e5f92c00b2d7f043e22 
>   clients/src/main/java/org/apache/kafka/common/network/Selector.java 
> 57de0585e5e9a53eb9dcd99cac1ab3eb2086a302 
>   clients/src/main/java/org/apache/kafka/common/network/TransportLayer.java 
> PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/common/protocol/SecurityProtocol.java 
> d3394ee669e1c2254403e95393511d0a17e6f250 
>   clients/src/main/java/org/apache/kafka/common/utils/Utils.java 
> f73eedb030987f018d8446bb1dcd98d19fa97331 
>   clients/src/test/java/org/apache/kafka/common/network/EchoServer.java 
> PRE-CREATION 
>   clients/src/test/java/org/apache/kafka/common/network/SSLSelectorTest.java 
> PRE-CREATION 
>   clients/src/test/java/org/apache/kafka/common/network/SelectorTest.java 
> d5b306b026e788b4e5479f3419805aa49ae889f3 
>   clients/src/test/java/org/apache/kafka/common/utils/UtilsTest.java 
> 2ebe3c21f611dc133a2dbb8c7dfb0845f8c21498 
>   clients/src/test/java/org/apache/kafka/test/TestSSLUtils.java PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/33620/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Sriharsha Chintalapani
> 
>



[jira] [Commented] (KAFKA-1690) new java producer needs ssl support as a client

2015-05-07 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14533756#comment-14533756
 ] 

Sriharsha Chintalapani commented on KAFKA-1690:
---

[~junrao] [~jjkoshy] I do have Authenticator as an interface, and I can make 
it pluggable so that anyone can build their own form of UserPrincipal. 


> new java producer needs ssl support as a client
> ---
>
> Key: KAFKA-1690
> URL: https://issues.apache.org/jira/browse/KAFKA-1690
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Joe Stein
>Assignee: Sriharsha Chintalapani
> Fix For: 0.8.3
>
> Attachments: KAFKA-1690.patch, KAFKA-1690.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 33088: add heartbeat to coordinator

2015-05-07 Thread Onur Karaman


> On May 7, 2015, 10:07 p.m., Guozhang Wang wrote:
> > core/src/main/scala/kafka/coordinator/CoordinatorMetadata.scala, line 53
> > 
> >
> > Add @nonthreadsafe for this function.

It would be awkward to have the class claim it's @threadsafe yet have one of 
its methods marked as @nonthreadsafe. How should we address this?


- Onur


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33088/#review82921
---


On May 5, 2015, 5:50 p.m., Onur Karaman wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/33088/
> ---
> 
> (Updated May 5, 2015, 5:50 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1334
> https://issues.apache.org/jira/browse/KAFKA-1334
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> add heartbeat to coordinator
> 
> todo:
> - see how it performs under real load
> - add error code handling on the consumer side
> 
> 
> Diffs
> -
> 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/internals/Coordinator.java
>  e55ab11df4db0b0084f841a74cbcf819caf780d5 
>   clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
> 36aa412404ff1458c7bef0feecaaa8bc45bed9c7 
>   core/src/main/scala/kafka/coordinator/ConsumerCoordinator.scala 
> 456b602245e111880e1b8b361319cabff38ee0e9 
>   core/src/main/scala/kafka/coordinator/ConsumerRegistry.scala 
> 2f5797064d4131ecfc9d2750d9345a9fa3972a9a 
>   core/src/main/scala/kafka/coordinator/CoordinatorMetadata.scala 
> PRE-CREATION 
>   core/src/main/scala/kafka/coordinator/DelayedHeartbeat.scala 
> 6a6bc7bc4ceb648b67332e789c2c33de88e4cd86 
>   core/src/main/scala/kafka/coordinator/DelayedJoinGroup.scala 
> df60cbc35d09937b4e9c737c67229889c69d8698 
>   core/src/main/scala/kafka/coordinator/DelayedRebalance.scala 
> 8defa2e41c92f1ebe255177679d275c70dae5b3e 
>   core/src/main/scala/kafka/coordinator/Group.scala PRE-CREATION 
>   core/src/main/scala/kafka/coordinator/GroupRegistry.scala 
> 94ef5829b3a616c90018af1db7627bfe42e259e5 
>   core/src/main/scala/kafka/coordinator/HeartbeatBucket.scala 
> 821e26e97eaa97b5f4520474fff0fedbf406c82a 
>   core/src/main/scala/kafka/coordinator/PartitionAssignor.scala PRE-CREATION 
>   core/src/main/scala/kafka/server/DelayedOperationKey.scala 
> b673e43b0ba401b2e22f27aef550e3ab0ef4323c 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> b4004aa3a1456d337199aa1245fb0ae61f6add46 
>   core/src/main/scala/kafka/server/KafkaServer.scala 
> c63f4ba9d622817ea8636d4e6135fba917ce085a 
>   core/src/main/scala/kafka/server/OffsetManager.scala 
> 18680ce100f10035175cc0263ba7787ab0f6a17a 
>   core/src/test/scala/unit/kafka/coordinator/CoordinatorMetadataTest.scala 
> PRE-CREATION 
>   core/src/test/scala/unit/kafka/coordinator/GroupTest.scala PRE-CREATION 
>   core/src/test/scala/unit/kafka/coordinator/PartitionAssignorTest.scala 
> PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/33088/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Onur Karaman
> 
>



[jira] [Commented] (KAFKA-2121) prevent potential resource leak in KafkaProducer and KafkaConsumer

2015-05-07 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14533822#comment-14533822
 ] 

Guozhang Wang commented on KAFKA-2121:
--

Committed the followup patch to trunk.

> prevent potential resource leak in KafkaProducer and KafkaConsumer
> --
>
> Key: KAFKA-2121
> URL: https://issues.apache.org/jira/browse/KAFKA-2121
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.2.0
>Reporter: Steven Zhen Wu
>Assignee: Steven Zhen Wu
> Fix For: 0.8.3
>
> Attachments: KAFKA-2121.patch, KAFKA-2121.patch, KAFKA-2121.patch, 
> KAFKA-2121_2015-04-16_09:55:14.patch, KAFKA-2121_2015-04-16_10:43:55.patch, 
> KAFKA-2121_2015-04-18_20:09:20.patch, KAFKA-2121_2015-04-19_20:08:45.patch, 
> KAFKA-2121_2015-04-19_20:30:18.patch, KAFKA-2121_2015-04-20_09:06:09.patch, 
> KAFKA-2121_2015-04-20_09:51:51.patch, KAFKA-2121_2015-04-20_09:52:46.patch, 
> KAFKA-2121_2015-04-20_09:57:49.patch, KAFKA-2121_2015-04-20_22:48:31.patch, 
> KAFKA-2121_2015-05-01_15:42:30.patch
>
>
> On Mon, Apr 13, 2015 at 7:17 PM, Guozhang Wang  wrote:
> It is a valid problem and we should correct it as soon as possible, I'm
> with Ewen regarding the solution.
> On Mon, Apr 13, 2015 at 5:05 PM, Ewen Cheslack-Postava 
> wrote:
> > Steven,
> >
> > Looks like there is even more that could potentially be leaked -- since key
> > and value serializers are created and configured at the end, even the IO
> > thread allocated by the producer could leak. Given that, I think 1 isn't a
> > great option since, as you said, it doesn't really address the underlying
> > issue.
> >
> > 3 strikes me as bad from a user experience perspective. It's true we might
> > want to introduce additional constructors to make testing easier, but the
> > more components I need to allocate myself and inject into the producer's
> > constructor, the worse the default experience is. And since you would have
> > to inject the dependencies to get correct, non-leaking behavior, it will
> > always be more code than previously (and a backwards incompatible change).
> > Additionally, the code creating the producer would have to be more
> > complicated since it would have to deal with the cleanup carefully whereas
> > it previously just had to deal with the exception. Besides, for testing
> > specifically, you can avoid exposing more constructors just for testing by
> > using something like PowerMock that lets you mock private methods. That
> > requires a bit of code reorganization, but doesn't affect the public
> > interface at all.
> >
> > So my take is that a variant of 2 is probably best. I'd probably do two
> > things. First, make close() safe to call even if some fields haven't been
> > initialized, which presumably just means checking for null fields. (You
> > might also want to figure out if all the methods close() calls are
> > idempotent and decide whether some fields should be marked non-final and
> > cleared to null when close() is called). Second, add the try/catch as you
> > suggested, but just use close().
> >
> > -Ewen
> >
> >
> > On Mon, Apr 13, 2015 at 3:53 PM, Steven Wu  wrote:
> >
> > > Here is the resource leak problem that we have encountered when 0.8.2
> > java
> > > KafkaProducer failed in constructor. here is the code snippet of
> > > KafkaProducer to illustrate the problem.
> > >
> > > ---
> > > public KafkaProducer(ProducerConfig config, Serializer keySerializer,
> > > Serializer valueSerializer) {
> > >
> > > // create metrics reporter via reflection
> > > List reporters =
> > >
> > >
> > config.getConfiguredInstances(ProducerConfig.METRIC_REPORTER_CLASSES_CONFIG,
> > > MetricsReporter.class);
> > >
> > > // validate bootstrap servers
> > > List addresses =
> > >
> > >
> > ClientUtils.parseAndValidateAddresses(config.getList(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG));
> > >
> > > }
> > > ---
> > >
> > > let's say MyMetricsReporter creates a thread in constructor. if hostname
> > > validation threw an exception, constructor won't call the close method of
> > > MyMetricsReporter to clean up the resource. as a result, we created a
> > thread
> > > leak issue. this becomes worse when we try to auto-recover (i.e. keep
> > > creating KafkaProducer again -> failing again -> more thread leaks).
> > >
> > > there are multiple options of fixing this.
> > >
> > > 1) just move the hostname validation to the beginning. but this only
> > fixes
> > > one symptom. it didn't fix the fundamental problem. what if some other
> > lines
> > > throw an exception.
> > >
> > > 2) use try-catch. in the catch section, try to call close methods for any
> > > non-null objects constructed so far.
> > >
> > > 3) explici
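
The option discussed above, a constructor-level try/catch that falls back to a 
null-tolerant close(), can be sketched in isolation as follows. The 
ResourceOwner class and its fields are stand-ins invented for this example, not 
the actual KafkaProducer.

import java.io.Closeable;
import java.io.IOException;

// Hedged sketch of the cleanup pattern under discussion: if anything is thrown
// inside the constructor, close whatever has been created so far and rethrow.
// close() must tolerate partially initialized (null) fields.
final class ResourceOwner implements Closeable {
    private Closeable reporter;   // stand-in for a metrics reporter thread
    private Closeable ioThread;   // stand-in for the producer's sender thread

    ResourceOwner(boolean failHalfway) {
        try {
            this.reporter = new Closeable() {
                @Override public void close() { System.out.println("reporter closed"); }
            };
            if (failHalfway)
                throw new IllegalArgumentException("simulated config validation failure");
            this.ioThread = new Closeable() {
                @Override public void close() { System.out.println("io thread closed"); }
            };
        } catch (RuntimeException e) {
            close();   // release whatever was allocated before the failure
            throw e;
        }
    }

    @Override
    public void close() {
        closeQuietly(reporter);
        closeQuietly(ioThread);
    }

    private static void closeQuietly(Closeable c) {
        if (c == null) return;   // safe on partially constructed instances
        try { c.close(); } catch (IOException ignored) { }
    }
}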

[jira] [Updated] (KAFKA-2121) prevent potential resource leak in KafkaProducer and KafkaConsumer

2015-05-07 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2121:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> prevent potential resource leak in KafkaProducer and KafkaConsumer
> --
>
> Key: KAFKA-2121
> URL: https://issues.apache.org/jira/browse/KAFKA-2121
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.2.0
>Reporter: Steven Zhen Wu
>Assignee: Steven Zhen Wu
> Fix For: 0.8.3
>
> Attachments: KAFKA-2121.patch, KAFKA-2121.patch, KAFKA-2121.patch, 
> KAFKA-2121_2015-04-16_09:55:14.patch, KAFKA-2121_2015-04-16_10:43:55.patch, 
> KAFKA-2121_2015-04-18_20:09:20.patch, KAFKA-2121_2015-04-19_20:08:45.patch, 
> KAFKA-2121_2015-04-19_20:30:18.patch, KAFKA-2121_2015-04-20_09:06:09.patch, 
> KAFKA-2121_2015-04-20_09:51:51.patch, KAFKA-2121_2015-04-20_09:52:46.patch, 
> KAFKA-2121_2015-04-20_09:57:49.patch, KAFKA-2121_2015-04-20_22:48:31.patch, 
> KAFKA-2121_2015-05-01_15:42:30.patch
>
>
> On Mon, Apr 13, 2015 at 7:17 PM, Guozhang Wang  wrote:
> It is a valid problem and we should correct it as soon as possible, I'm
> with Ewen regarding the solution.
> On Mon, Apr 13, 2015 at 5:05 PM, Ewen Cheslack-Postava 
> wrote:
> > Steven,
> >
> > Looks like there is even more that could potentially be leaked -- since key
> > and value serializers are created and configured at the end, even the IO
> > thread allocated by the producer could leak. Given that, I think 1 isn't a
> > great option since, as you said, it doesn't really address the underlying
> > issue.
> >
> > 3 strikes me as bad from a user experience perspective. It's true we might
> > want to introduce additional constructors to make testing easier, but the
> > more components I need to allocate myself and inject into the producer's
> > constructor, the worse the default experience is. And since you would have
> > to inject the dependencies to get correct, non-leaking behavior, it will
> > always be more code than previously (and a backwards incompatible change).
> > Additionally, the code creating the producer would have to be more
> > complicated since it would have to deal with the cleanup carefully whereas
> > it previously just had to deal with the exception. Besides, for testing
> > specifically, you can avoid exposing more constructors just for testing by
> > using something like PowerMock that lets you mock private methods. That
> > requires a bit of code reorganization, but doesn't affect the public
> > interface at all.
> >
> > So my take is that a variant of 2 is probably best. I'd probably do two
> > things. First, make close() safe to call even if some fields haven't been
> > initialized, which presumably just means checking for null fields. (You
> > might also want to figure out if all the methods close() calls are
> > idempotent and decide whether some fields should be marked non-final and
> > cleared to null when close() is called). Second, add the try/catch as you
> > suggested, but just use close().
> >
> > -Ewen
> >
> >
> > On Mon, Apr 13, 2015 at 3:53 PM, Steven Wu  wrote:
> >
> > > Here is the resource leak problem that we have encountered when 0.8.2
> > java
> > > KafkaProducer failed in constructor. here is the code snippet of
> > > KafkaProducer to illustrate the problem.
> > >
> > > ---
> > > public KafkaProducer(ProducerConfig config, Serializer keySerializer,
> > > Serializer valueSerializer) {
> > >
> > > // create metrics reporter via reflection
> > > List reporters =
> > >
> > >
> > config.getConfiguredInstances(ProducerConfig.METRIC_REPORTER_CLASSES_CONFIG,
> > > MetricsReporter.class);
> > >
> > > // validate bootstrap servers
> > > List addresses =
> > >
> > >
> > ClientUtils.parseAndValidateAddresses(config.getList(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG));
> > >
> > > }
> > > ---
> > >
> > > let's say MyMetricsReporter creates a thread in constructor. if hostname
> > > validation threw an exception, constructor won't call the close method of
> > > MyMetricsReporter to clean up the resource. as a result, we created a
> > thread
> > > leak issue. this becomes worse when we try to auto-recover (i.e. keep
> > > creating KafkaProducer again -> failing again -> more thread leaks).
> > >
> > > there are multiple options of fixing this.
> > >
> > > 1) just move the hostname validation to the beginning. but this only
> > fixes
> > > one symptom. it didn't fix the fundamental problem. what if some other
> > lines
> > > throw an exception.
> > >
> > > 2) use try-catch. in the catch section, try to call close methods for any
> > > non-null objects constructed so far.
> > >
> > > 3) explicitly declare the dependency

Build failed in Jenkins: Kafka-trunk #485

2015-05-07 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2121; Fix Closeable backward-compatibility; reviewed by 
Guozhang Wang

--
Started by an SCM change
Building remotely on ubuntu3 (Ubuntu ubuntu legacy-ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 5a47ef8ecbdf574bb18bd9ee5397188097924558 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 5a47ef8ecbdf574bb18bd9ee5397188097924558
 > git rev-list 31dadf0242e2c4884035f9da1b67d2e916e33743 # timeout=10
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
[Kafka-trunk] $ /bin/bash -xe /tmp/hudson1770346994852455535.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.1/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 26.59 secs
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
[Kafka-trunk] $ /bin/bash -xe /tmp/hudson2723318643192003156.sh
+ ./gradlew -PscalaVersion=2.10.1 test
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.0/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.1
:clients:compileJava
:clients:processResources UP-TO-DATE
:clients:classes
:clients:checkstyleMain
:clients:compileTestJava/x1/jenkins/jenkins-slave/workspace/Kafka-trunk/clients/src/test/java/org/apache/kafka/clients/producer/KafkaProducerTest.java:22:
 cannot find symbol
symbol  : class MockSerializer
location: package org.apache.kafka.test
import org.apache.kafka.test.MockSerializer;
^
/x1/jenkins/jenkins-slave/workspace/Kafka-trunk/clients/src/test/java/org/apache/kafka/clients/producer/KafkaProducerTest.java:59:
 package MockSerializer does not exist
final int oldInitCount = MockSerializer.INIT_COUNT.get();
   ^
/x1/jenkins/jenkins-slave/workspace/Kafka-trunk/clients/src/test/java/org/apache/kafka/clients/producer/KafkaProducerTest.java:60:
 package MockSerializer does not exist
final int oldCloseCount = MockSerializer.CLOSE_COUNT.get();
^
/x1/jenkins/jenkins-slave/workspace/Kafka-trunk/clients/src/test/java/org/apache/kafka/clients/producer/KafkaProducerTest.java:63:
 cannot find symbol
symbol  : class MockSerializer
location: class org.apache.kafka.clients.producer.KafkaProducerTest
props, new MockSerializer(), new MockSerializer());
   ^
/x1/jenkins/jenkins-slave/workspace/Kafka-trunk/clients/src/test/java/org/apache/kafka/clients/producer/KafkaProducerTest.java:63:
 cannot find symbol
symbol  : class MockSerializer
location: class org.apache.kafka.clients.producer.KafkaProducerTest
props, new MockSerializer(), new MockSerializer());
 ^
/x1/jenkins/jenkins-slave/workspace/Kafka-trunk/clients/src/test/java/org/apache/kafka/clients/producer/KafkaProducerTest.java:64:
 package MockSerializer does not exist
Assert.assertEquals(oldInitCount + 2, MockSerializer.INIT_COUNT.get());
^
/x1/jenkins/jenkins-slave/workspace/Kafka-trunk/clients/src/test/java/org/apache/kafka/clients/producer/KafkaProducerTest.java:65:
 package MockSerializer does not exist
Assert.assertEquals(oldCloseCount, MockSerializer.CLOSE_COUNT.get());
 ^
/x1/jenkins/jenkins-slave/workspace/Kafka-trunk/clients/src/test/java/org/apache/kafka/clients/producer/KafkaProducerTest.java:68:
 package MockSerializer does not exist
Assert.assertEquals(oldInitCount + 2, MockSerializer.INIT_COUNT.get());
^
/x1/jenkins/jenkins-slave/workspace/Kafka-trunk/clients/src/test/java/org/apache/kafka/clients/producer/KafkaProducerTest.java:69:
 package MockSerializer does not exist
Assert.ass

Build failed in Jenkins: KafkaPreCommit #97

2015-05-07 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2121; Fix Closeable backward-compatibility; reviewed by 
Guozhang Wang

--
Started by an SCM change
Building remotely on ubuntu3 (Ubuntu ubuntu legacy-ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 5a47ef8ecbdf574bb18bd9ee5397188097924558 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 5a47ef8ecbdf574bb18bd9ee5397188097924558
 > git rev-list 31dadf0242e2c4884035f9da1b67d2e916e33743 # timeout=10
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
[KafkaPreCommit] $ /bin/bash -xe /tmp/hudson1874078695634367512.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.1/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 16.893 secs
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
[KafkaPreCommit] $ /bin/bash -xe /tmp/hudson4159885707429150413.sh
+ ./gradlew -PscalaVersion=2.10.1 test
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.0/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.1
:clients:compileJava
:clients:processResources UP-TO-DATE
:clients:classes
:clients:checkstyleMain
:clients:compileTestJava
/x1/jenkins/jenkins-slave/workspace/KafkaPreCommit/clients/src/test/java/org/apache/kafka/clients/producer/KafkaProducerTest.java:22:
 cannot find symbol
symbol  : class MockSerializer
location: package org.apache.kafka.test
import org.apache.kafka.test.MockSerializer;
^
/x1/jenkins/jenkins-slave/workspace/KafkaPreCommit/clients/src/test/java/org/apache/kafka/clients/producer/KafkaProducerTest.java:59:
 package MockSerializer does not exist
final int oldInitCount = MockSerializer.INIT_COUNT.get();
   ^
/x1/jenkins/jenkins-slave/workspace/KafkaPreCommit/clients/src/test/java/org/apache/kafka/clients/producer/KafkaProducerTest.java:60:
 package MockSerializer does not exist
final int oldCloseCount = MockSerializer.CLOSE_COUNT.get();
^
/x1/jenkins/jenkins-slave/workspace/KafkaPreCommit/clients/src/test/java/org/apache/kafka/clients/producer/KafkaProducerTest.java:63:
 cannot find symbol
symbol  : class MockSerializer
location: class org.apache.kafka.clients.producer.KafkaProducerTest
props, new MockSerializer(), new MockSerializer());
   ^
/x1/jenkins/jenkins-slave/workspace/KafkaPreCommit/clients/src/test/java/org/apache/kafka/clients/producer/KafkaProducerTest.java:63:
 cannot find symbol
symbol  : class MockSerializer
location: class org.apache.kafka.clients.producer.KafkaProducerTest
props, new MockSerializer(), new MockSerializer());
 ^
/x1/jenkins/jenkins-slave/workspace/KafkaPreCommit/clients/src/test/java/org/apache/kafka/clients/producer/KafkaProducerTest.java:64:
 package MockSerializer does not exist
Assert.assertEquals(oldInitCount + 2, MockSerializer.INIT_COUNT.get());
^
/x1/jenkins/jenkins-slave/workspace/KafkaPreCommit/clients/src/test/java/org/apache/kafka/clients/producer/KafkaProducerTest.java:65:
 package MockSerializer does not exist
Assert.assertEquals(oldCloseCount, MockSerializer.CLOSE_COUNT.get());
 ^
/x1/jenkins/jenkins-slave/workspace/KafkaPreCommit/clients/src/test/java/org/apache/kafka/clients/producer/KafkaProducerTest.java:68:
 package MockSerializer does not exist
Assert.assertEquals(oldInitCount + 2, MockSerializer.INIT_COUNT.get());
^
/x1/jenkins/jenkins-slave/workspace/KafkaPreCommit/clients/src/test/java/org/apache/kafka/clients/producer/KafkaProducerTest.java:69:
 package MockSeria
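
The failing assertions outline the test's intent: snapshot the counters, construct a producer 
with two mock serializers, and verify that both were initialized but not yet closed. A hedged 
reconstruction under those assumptions follows; the class name, the bootstrap address, and the 
assertions after producer.close() (which the truncated log cuts off) are illustrative, not the 
actual test in trunk.

// Hypothetical reconstruction of the failing test's pattern, not the real KafkaProducerTest.
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.test.MockSerializer;
import org.junit.Assert;
import org.junit.Test;

public class KafkaProducerLifecycleSketch {

    @Test
    public void serializersAreInitializedAndClosedWithTheProducer() {
        Properties props = new Properties();
        // the broker is never contacted during construction; the address only has to parse
        props.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9999");

        // snapshot the counters so the test is independent of earlier instantiations
        final int oldInitCount = MockSerializer.INIT_COUNT.get();
        final int oldCloseCount = MockSerializer.CLOSE_COUNT.get();

        KafkaProducer<byte[], byte[]> producer =
                new KafkaProducer<byte[], byte[]>(props, new MockSerializer(), new MockSerializer());

        // two serializer instances were constructed, none closed yet
        Assert.assertEquals(oldInitCount + 2, MockSerializer.INIT_COUNT.get());
        Assert.assertEquals(oldCloseCount, MockSerializer.CLOSE_COUNT.get());

        producer.close();

        // closing the producer is expected to close both serializers (assumed final assertions)
        Assert.assertEquals(oldInitCount + 2, MockSerializer.INIT_COUNT.get());
        Assert.assertEquals(oldCloseCount + 2, MockSerializer.CLOSE_COUNT.get());
    }
}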

Jenkins build is back to normal : Kafka-trunk #486

2015-05-07 Thread Apache Jenkins Server
See 



Build failed in Jenkins: KafkaPreCommit #98

2015-05-07 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2121; add missing file

--
[...truncated 1268 lines...]
kafka.consumer.TopicFilterTest > 
testWildcardTopicCountGetTopicCountMapEscapeJson PASSED

kafka.consumer.MetricsTest > testMetricsLeak PASSED

kafka.consumer.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.zk.ZKEphemeralTest > testEphemeralNodeCleanup PASSED

kafka.server.ReplicaManagerTest > testHighWaterMarkDirectoryMapping PASSED

kafka.server.ReplicaManagerTest > testHighwaterMarkRelativeDirectoryMapping 
PASSED

kafka.server.ReplicaManagerTest > testIllegalRequiredAcks PASSED

kafka.server.IsrExpirationTest > testIsrExpirationForStuckFollowers PASSED

kafka.server.IsrExpirationTest > testIsrExpirationIfNoFetchRequestMade PASSED

kafka.server.IsrExpirationTest > testIsrExpirationForSlowFollowers PASSED

kafka.server.SimpleFetchTest > testReadFromLog PASSED

kafka.server.ServerGenerateBrokerIdTest > testAutoGenerateBrokerId PASSED

kafka.server.ServerGenerateBrokerIdTest > testUserConfigAndGeneratedBrokerId 
PASSED

kafka.server.ServerGenerateBrokerIdTest > testMultipleLogDirsMetaProps PASSED

kafka.server.ServerGenerateBrokerIdTest > 
testConsistentBrokerIdFromUserConfigAndMetaProps PASSED

kafka.server.ServerShutdownTest > testCleanShutdown PASSED

kafka.server.ServerShutdownTest > testCleanShutdownWithDeleteTopicEnabled PASSED

kafka.server.ServerShutdownTest > testCleanShutdownAfterFailedStartup PASSED

kafka.server.ServerShutdownTest > testConsecutiveShutdown PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeHoursProvided PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeMinutesProvided PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeMsProvided PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeNoConfigProvided PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeBothMinutesAndHoursProvided 
PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeBothMinutesAndMsProvided 
PASSED

kafka.server.KafkaConfigTest > testLogRetentionUnlimited PASSED

kafka.server.KafkaConfigTest > testLogRetentionValid PASSED

kafka.server.KafkaConfigTest > testAdvertiseDefaults PASSED

kafka.server.KafkaConfigTest > testAdvertiseConfigured PASSED

kafka.server.KafkaConfigTest > testDuplicateListeners PASSED

kafka.server.KafkaConfigTest > testBadListenerProtocol PASSED

kafka.server.KafkaConfigTest > testListenerDefaults PASSED

kafka.server.KafkaConfigTest > testVersionConfiguration PASSED

kafka.server.KafkaConfigTest > testUncleanLeaderElectionDefault PASSED

kafka.server.KafkaConfigTest > testUncleanElectionDisabled PASSED

kafka.server.KafkaConfigTest > testUncleanElectionEnabled PASSED

kafka.server.KafkaConfigTest > testUncleanElectionInvalid PASSED

kafka.server.KafkaConfigTest > testLogRollTimeMsProvided PASSED

kafka.server.KafkaConfigTest > testLogRollTimeBothMsAndHoursProvided PASSED

kafka.server.KafkaConfigTest > testLogRollTimeNoConfigProvided PASSED

kafka.server.KafkaConfigTest > testDefaultCompressionType PASSED

kafka.server.KafkaConfigTest > testValidCompressionType PASSED

kafka.server.KafkaConfigTest > testInvalidCompressionType PASSED

kafka.server.OffsetCommitTest > testUpdateOffsets PASSED

kafka.server.OffsetCommitTest > testCommitAndFetchOffsets PASSED

kafka.server.OffsetCommitTest > testLargeMetadataPayload PASSED

kafka.server.OffsetCommitTest > testOffsetExpiration PASSED

kafka.server.OffsetCommitTest > testNonExistingTopicOffsetCommit PASSED

kafka.server.LogOffsetTest > testGetOffsetsForUnknownTopic PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeLatestTime PASSED

kafka.server.LogOffsetTest > testEmptyLogsGetOffsets PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeNow PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeEarliestTime PASSED

kafka.server.AdvertiseBrokerTest > testBrokerAdvertiseToZK PASSED

kafka.server.ServerStartupTest > testBrokerCreatesZKChroot PASSED

kafka.server.ServerStartupTest > testConflictBrokerRegistration PASSED

kafka.server.DelayedOperationTest > testRequestSatisfaction PASSED

kafka.server.DelayedOperationTest > testRequestExpiry PASSED

kafka.server.DelayedOperationTest > testRequestPurge PASSED

kafka.server.LeaderElectionTest > testLeaderElectionAndEpoch PASSED

kafka.server.LeaderElectionTest > testLeaderElectionWithStaleControllerEpoch 
PASSED

kafka.server.DynamicConfigChangeTest > testConfigChange PASSED

kafka.server.DynamicConfigChangeTest > testConfigChangeOnNonExistingTopic PASSED

kafka.server.HighwatermarkPersistenceTest > 
testHighWatermarkPersistenceSinglePartition PASSED

kafka.server.HighwatermarkPersistenceTest > 
testHighWatermarkPersistenceMultiplePartitions PASSED

kafka.server.ReplicaFetchTest > testReplicaFetcherThread PASSED

kafka.server.LogRecoveryTest > testHWCheckpointNoFailuresSingleLogSegment PASSED

kafka.server.LogRecoveryTest > testHWCheckpointWithFailure