[jira] [Created] (KAFKA-3736) Add http metrics reporter

2016-05-20 Thread Adrian Muraru (JIRA)
Adrian Muraru created KAFKA-3736:


 Summary: Add http metrics reporter
 Key: KAFKA-3736
 URL: https://issues.apache.org/jira/browse/KAFKA-3736
 Project: Kafka
  Issue Type: New Feature
  Components: core
Reporter: Adrian Muraru


The current builtin JMX metrics reporter is pretty heavy in terms of load and 
collection. A new lightweight HTTP reporter is proposed to expose the metrics 
via a local HTTP port.
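The PR itself isn't reproduced here, but the idea can be sketched with the JDK's built-in HTTP server (a standalone illustration, not the code in the pull request: class, metric names, and the plain-text format are illustrative assumptions; a real reporter would plug into Kafka's metrics-reporter mechanism):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch only: keep current metric values in a map and serve them as
// plain text on a local HTTP port, instead of exposing them via JMX.
public class HttpMetricsEndpoint {
    private final Map<String, Double> metrics = new ConcurrentHashMap<>();
    private HttpServer server;

    public void record(String name, double value) {
        metrics.put(name, value);
    }

    // one "name value" line per metric
    public String render() {
        StringBuilder sb = new StringBuilder();
        metrics.forEach((k, v) -> sb.append(k).append(' ').append(v).append('\n'));
        return sb.toString();
    }

    // bind 127.0.0.1 on an ephemeral port; returns the port actually chosen
    public int start() throws Exception {
        server = HttpServer.create(new InetSocketAddress("127.0.0.1", 0), 0);
        server.createContext("/metrics", exchange -> {
            byte[] body = render().getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server.getAddress().getPort();
    }

    public void stop() {
        if (server != null) server.stop(0);
    }

    public static void main(String[] args) throws Exception {
        HttpMetricsEndpoint endpoint = new HttpMetricsEndpoint();
        endpoint.record("kafka.server.messages-in-rate", 1234.5);
        System.out.print(endpoint.render());
        int port = endpoint.start();
        System.out.println("serving http://127.0.0.1:" + port + "/metrics");
        endpoint.stop();
    }
}
```

A scraper can then poll `/metrics` locally without opening a JMX connection.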



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3736: Add HTTP Metrics reporter

2016-05-20 Thread amuraru
GitHub user amuraru opened a pull request:

https://github.com/apache/kafka/pull/1412

KAFKA-3736: Add HTTP Metrics reporter



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hstack/kafka KAFKA-3736

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1412.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1412


commit 23eccd28156b08e4843ce97f3e64354c804a83c1
Author: Adrian Muraru 
Date:   2016-05-19T20:02:18Z

KAFKA-3736: Add HTTP Metrics reporter




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3736) Add http metrics reporter

2016-05-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15293002#comment-15293002
 ] 

ASF GitHub Bot commented on KAFKA-3736:
---

GitHub user amuraru opened a pull request:

https://github.com/apache/kafka/pull/1412

KAFKA-3736: Add HTTP Metrics reporter



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hstack/kafka KAFKA-3736

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1412.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1412


commit 23eccd28156b08e4843ce97f3e64354c804a83c1
Author: Adrian Muraru 
Date:   2016-05-19T20:02:18Z

KAFKA-3736: Add HTTP Metrics reporter




> Add http metrics reporter
> -
>
> Key: KAFKA-3736
> URL: https://issues.apache.org/jira/browse/KAFKA-3736
> Project: Kafka
>  Issue Type: New Feature
>  Components: core
>Reporter: Adrian Muraru
>
> The current builtin JMX metrics reporter is pretty heavy in terms of load and 
> collection. A new http lightweight reporter is proposed to expose the metrics 
> via a local http port.





[jira] [Commented] (KAFKA-3732) Add an auto accept option to kafka-acls.sh

2016-05-20 Thread Mickael Maison (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15293026#comment-15293026
 ] 

Mickael Maison commented on KAFKA-3732:
---

I considered removing the prompt altogether, but thought it might be better to 
keep the existing behaviour. Not sure how many people are using ACLs and have 
built scripts around kafka-acls.sh. 
Regarding the option name, I thought "yes" was more descriptive than "force" 
(and it's shorter!) but I'm happy to rename it if we decide to keep the 
prompt. 

Just let me know and I'll update the PR.

Also, I can't seem to assign this JIRA to myself - can someone do this for 
me or give me the permission?

> Add an auto accept option to kafka-acls.sh
> --
>
> Key: KAFKA-3732
> URL: https://issues.apache.org/jira/browse/KAFKA-3732
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin
>Affects Versions: 0.10.0.0
>Reporter: Mickael Maison
>Priority: Minor
>
> When removing ACLs, kafka-acls.sh always prompts the user to confirm the ACL 
> change. Having an option to auto accept would make it easier to use 
> kafka-acls.sh in scripts.
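The auto-accept behaviour described above can be sketched in shell (the flag and variable names here are illustrative assumptions, not the PR's actual implementation):

```shell
#!/bin/sh
# Sketch of an auto-accept bypass for the ACL-removal prompt. When the
# illustrative --yes flag sets AUTO_ACCEPT=true, the prompt is skipped
# entirely, so the tool can be driven from scripts.
confirm_removal() {
  if [ "$AUTO_ACCEPT" = "true" ]; then
    echo "auto-accepted"
    return 0
  fi
  printf 'Are you sure you want to remove the ACLs? (y/n): '
  read -r answer
  [ "$answer" = "y" ]
}

AUTO_ACCEPT=true   # e.g. set when --yes (or --force) is passed
confirm_removal
```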





[GitHub] kafka pull request: MINOR: set charset of Javadoc to UTF-8

2016-05-20 Thread sasakitoa
GitHub user sasakitoa opened a pull request:

https://github.com/apache/kafka/pull/1413

MINOR: set charset of Javadoc to UTF-8

Currently Javadoc doesn't specify a charset.
This pull request sets it to UTF-8.
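The change presumably amounts to something along these lines in build.gradle (a sketch; the exact option names used in the PR may differ):

```groovy
// Sketch: make Javadoc read sources and emit pages as UTF-8
allprojects {
    tasks.withType(Javadoc) {
        options.encoding = 'UTF-8'   // encoding of the source files
        options.charSet  = 'UTF-8'   // charset declared in the generated HTML
    }
}
```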

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sasakitoa/kafka javadoc_garbled

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1413.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1413


commit 50f1d5bec7904f1ae5d78f66cde67e22710ac18b
Author: Sasaki Toru 
Date:   2016-05-20T09:23:26Z

set charset to UTF-8 in build.gradle






[jira] [Updated] (KAFKA-3733) Avoid long command lines by setting CLASSPATH in environment

2016-05-20 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated KAFKA-3733:
-
Fix Version/s: 0.10.0.0
   Status: Patch Available  (was: Open)

> Avoid long command lines by setting CLASSPATH in environment
> 
>
> Key: KAFKA-3733
> URL: https://issues.apache.org/jira/browse/KAFKA-3733
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Adrian Muraru
>Priority: Minor
> Fix For: 0.10.0.0
>
>
> {{kafka-run-class.sh}} sets the JVM classpath in the command line via {{-cp}}.
> This generates long command lines that get trimmed by the shell in commands 
> like ps, pgrep, etc.
> An alternative is to set the CLASSPATH in the environment.
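The proposal can be sketched in shell as follows: join the jar paths once, export the result, and omit `-cp` so that `java` reads the `CLASSPATH` environment variable and the visible command line stays short (function name and paths are illustrative, not the actual patch):

```shell
#!/bin/sh
# Sketch of the proposed kafka-run-class.sh change (illustrative):
# build one classpath string from jar paths, export it, and let java
# pick it up from the environment instead of a very long -cp argument.
build_classpath() {
  cp=""
  for jar in "$@"; do
    cp="${cp:+$cp:}$jar"
  done
  printf '%s' "$cp"
}

CLASSPATH=$(build_classpath /opt/kafka/libs/kafka.jar /opt/kafka/libs/slf4j.jar)
export CLASSPATH
echo "$CLASSPATH"
# java kafka.Kafka "$@"   # would now see the classpath via the environment
```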





[jira] [Updated] (KAFKA-3736) Add http metrics reporter

2016-05-20 Thread Adrian Muraru (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Muraru updated KAFKA-3736:
-
Fix Version/s: 0.10.0.0
   Status: Patch Available  (was: Open)

> Add http metrics reporter
> -
>
> Key: KAFKA-3736
> URL: https://issues.apache.org/jira/browse/KAFKA-3736
> Project: Kafka
>  Issue Type: New Feature
>  Components: core
>Reporter: Adrian Muraru
> Fix For: 0.10.0.0
>
>
> The current builtin JMX metrics reporter is pretty heavy in terms of load and 
> collection. A new http lightweight reporter is proposed to expose the metrics 
> via a local http port.





[jira] [Commented] (KAFKA-3736) Add http metrics reporter

2016-05-20 Thread Ben Stopford (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15293234#comment-15293234
 ] 

Ben Stopford commented on KAFKA-3736:
-

Nice little idea! I think it might require a KIP as technically it's a public 
interface. [~ijuma] can confirm.

https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals

> Add http metrics reporter
> -
>
> Key: KAFKA-3736
> URL: https://issues.apache.org/jira/browse/KAFKA-3736
> Project: Kafka
>  Issue Type: New Feature
>  Components: core
>Reporter: Adrian Muraru
> Fix For: 0.10.0.0
>
>
> The current builtin JMX metrics reporter is pretty heavy in terms of load and 
> collection. A new http lightweight reporter is proposed to expose the metrics 
> via a local http port.





[jira] [Commented] (KAFKA-3733) Avoid long command lines by setting CLASSPATH in environment

2016-05-20 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15293244#comment-15293244
 ] 

Ismael Juma commented on KAFKA-3733:


Thanks for the JIRA and PR. It's too late for 0.10.0.0, so updated the fix 
version.

> Avoid long command lines by setting CLASSPATH in environment
> 
>
> Key: KAFKA-3733
> URL: https://issues.apache.org/jira/browse/KAFKA-3733
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Adrian Muraru
>Priority: Minor
> Fix For: 0.10.0.1
>
>
> {{kafka-run-class.sh}} sets the JVM classpath in the command line via {{-cp}}.
> This generates long command lines that gets trimmed by the shell in commands 
> like ps, pgrep,etc.
> An alternative is to set the CLASSPATH in environment.





[jira] [Updated] (KAFKA-3733) Avoid long command lines by setting CLASSPATH in environment

2016-05-20 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3733:
---
Fix Version/s: (was: 0.10.0.0)
   0.10.0.1

> Avoid long command lines by setting CLASSPATH in environment
> 
>
> Key: KAFKA-3733
> URL: https://issues.apache.org/jira/browse/KAFKA-3733
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Adrian Muraru
>Priority: Minor
> Fix For: 0.10.0.1
>
>
> {{kafka-run-class.sh}} sets the JVM classpath in the command line via {{-cp}}.
> This generates long command lines that gets trimmed by the shell in commands 
> like ps, pgrep,etc.
> An alternative is to set the CLASSPATH in environment.





[jira] [Updated] (KAFKA-3736) Add http metrics reporter

2016-05-20 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3736:
---
Fix Version/s: (was: 0.10.0.0)
   0.10.1.0

> Add http metrics reporter
> -
>
> Key: KAFKA-3736
> URL: https://issues.apache.org/jira/browse/KAFKA-3736
> Project: Kafka
>  Issue Type: New Feature
>  Components: core
>Reporter: Adrian Muraru
> Fix For: 0.10.1.0
>
>
> The current builtin JMX metrics reporter is pretty heavy in terms of load and 
> collection. A new http lightweight reporter is proposed to expose the metrics 
> via a local http port.





[jira] [Commented] (KAFKA-3736) Add http metrics reporter

2016-05-20 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15293249#comment-15293249
 ] 

Ismael Juma commented on KAFKA-3736:


Yes, we need a KIP indeed. It also adds new dependencies to the core jar, which 
we may want to avoid (we could introduce a new module perhaps).

> Add http metrics reporter
> -
>
> Key: KAFKA-3736
> URL: https://issues.apache.org/jira/browse/KAFKA-3736
> Project: Kafka
>  Issue Type: New Feature
>  Components: core
>Reporter: Adrian Muraru
> Fix For: 0.10.1.0
>
>
> The current builtin JMX metrics reporter is pretty heavy in terms of load and 
> collection. A new http lightweight reporter is proposed to expose the metrics 
> via a local http port.





[jira] [Updated] (KAFKA-3718) propagate all KafkaConfig __consumer_offsets configs to OffsetConfig instantiation

2016-05-20 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3718:
---
Fix Version/s: (was: 0.10.0.0)
   0.10.0.1

> propagate all KafkaConfig __consumer_offsets configs to OffsetConfig 
> instantiation
> --
>
> Key: KAFKA-3718
> URL: https://issues.apache.org/jira/browse/KAFKA-3718
> Project: Kafka
>  Issue Type: Bug
>Reporter: Onur Karaman
>Assignee: Onur Karaman
> Fix For: 0.10.0.1
>
>
> Kafka has two configurable compression codecs: the one used by the client 
> (source codec) and the one finally used when storing into the log (target 
> codec). The target codec defaults to KafkaConfig.compressionType and can be 
> dynamically configured through zookeeper.
> The GroupCoordinator appends group membership information into the 
> __consumer_offsets topic by:
> 1. making a message with group membership information
> 2. making a MessageSet with the single message compressed with the source 
> codec
> 3. doing a log.append on the MessageSet
> Without this patch, KafkaConfig.offsetsTopicCompressionCodec doesn't get 
> propagated to OffsetConfig instantiation, so GroupMetadataManager uses a 
> source codec of NoCompressionCodec when making the MessageSet. Let's say we 
> have enough group information such that the message formed exceeds 
> KafkaConfig.messageMaxBytes before compression but would fall below the 
> threshold after compression using our source codec. Even if we had 
> dynamically configured __consumer_offsets with our favorite compression 
> codec, the log.append will throw RecordTooLargeException during 
> analyzeAndValidateMessageSet since the message was unexpectedly uncompressed 
> instead of having been compressed with the source codec defined by 
> KafkaConfig.offsetsTopicCompressionCodec.
> NOTE: even after this issue is resolved, preliminary tests show that LinkedIn 
> will still hit RecordTooLargeException with large groups that consume many 
> topics (like MirrorMakers with wildcard consumption of .*) since fully 
> expanded subscription and assignment state for each member is put into a 
> single record. But this is a first step in the right direction.
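The size inversion described above is easy to demonstrate outside Kafka (a standalone illustration, not Kafka code; gzip stands in for the configured source codec, and the sizes are made up):

```java
import java.io.ByteArrayOutputStream;
import java.util.Arrays;
import java.util.zip.GZIPOutputStream;

// Illustration of the failure mode: a record that exceeds the max message
// size uncompressed, but would fit comfortably if the configured source
// codec had actually been applied before the append.
public class CompressionSizeDemo {
    public static byte[] gzip(byte[] data) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(data);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        int messageMaxBytes = 10_000;            // stand-in for messageMaxBytes
        byte[] membership = new byte[100_000];   // repetitive "group metadata"
        Arrays.fill(membership, (byte) 'a');

        byte[] compressed = gzip(membership);
        System.out.println(membership.length > messageMaxBytes);  // true: rejected as-is
        System.out.println(compressed.length < messageMaxBytes);  // true: would fit
    }
}
```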





[jira] [Commented] (KAFKA-3718) propagate all KafkaConfig __consumer_offsets configs to OffsetConfig instantiation

2016-05-20 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15293251#comment-15293251
 ] 

Ismael Juma commented on KAFKA-3718:


This is too late for 0.10.0.0, so updated the fix version.

> propagate all KafkaConfig __consumer_offsets configs to OffsetConfig 
> instantiation
> --
>
> Key: KAFKA-3718
> URL: https://issues.apache.org/jira/browse/KAFKA-3718
> Project: Kafka
>  Issue Type: Bug
>Reporter: Onur Karaman
>Assignee: Onur Karaman
> Fix For: 0.10.0.1
>
>
> Kafka has two configurable compression codecs: the one used by the client 
> (source codec) and the one finally used when storing into the log (target 
> codec). The target codec defaults to KafkaConfig.compressionType and can be 
> dynamically configured through zookeeper.
> The GroupCoordinator appends group membership information into the 
> __consumer_offsets topic by:
> 1. making a message with group membership information
> 2. making a MessageSet with the single message compressed with the source 
> codec
> 3. doing a log.append on the MessageSet
> Without this patch, KafkaConfig.offsetsTopicCompressionCodec doesn't get 
> propagated to OffsetConfig instantiation, so GroupMetadataManager uses a 
> source codec of NoCompressionCodec when making the MessageSet. Let's say we 
> have enough group information such that the message formed exceeds 
> KafkaConfig.messageMaxBytes before compression but would fall below the 
> threshold after compression using our source codec. Even if we had 
> dynamically configured __consumer_offsets with our favorite compression 
> codec, the log.append will throw RecordTooLargeException during 
> analyzeAndValidateMessageSet since the message was unexpectedly uncompressed 
> instead of having been compressed with the source codec defined by 
> KafkaConfig.offsetsTopicCompressionCodec.
> NOTE: even after this issue is resolved, preliminary tests show that LinkedIn 
> will still hit RecordTooLargeException with large groups that consume many 
> topics (like MirrorMakers with wildcard consumption of .*) since fully 
> expanded subscription and assignment state for each member is put into a 
> single record. But this is a first step in the right direction.





Re: [DISCUSS] KIP-58 - Make Log Compaction Point Configurable

2016-05-20 Thread Tom Crayford
Hi,

From our perspective (running thousands of Kafka clusters), the main issues
we see with compacted topics *aren't* disk space usage, or IO utilization
of the log cleaner.

Size matters a *lot* to the usability of consumers bootstrapping from the
beginning - in fact we've been debating tuning the log segment size for
compacted topics down to 100MB, because right now leaving 1GB of uncompacted
log makes some bootstrapping take way too long (especially for non-JVM
clients; even in fast languages like Go they're not as capable of high
throughput as the JVM clients). I'm wondering if that should be a default in
Kafka itself as well, and would be happy to contribute that kind of change
upstream.
Kafka already tunes the __consumer_offsets topic down to 100MB per segment
for this exact reason.

Secondly, the docs don't make it clear (and this has confused dozens of
well-intentioned, smart folk that we've talked to, and likely thousands of
Kafka users across the board) that compaction is an *alternative* to
time-based retention. Lots of folk used compaction assuming "it's like time
based retention, but with even less space usage". Switching between the two
is thankfully easy, but it's been a very confusing thing to understand. I'd
like to contribute back clearer docs to Kafka about this. Should I send a
PR? Would that be welcome?

Thirdly, most users *don't* want to tune Kafka's settings at all, or even
know how or when they should. Whilst some amount of tuning is inevitable,
the drive Gwen has towards "less tuning" is very positive from our
perspective. Most users of most software (including technical users of data
storage and messaging systems) want to "just use it" and not worry about
"do I need to monitor a thousand things and then tune another thousand
based on my metrics". Whilst some of that is unavoidable (for sure), it
feels like compaction tuning should be something the project provides
*great* general-purpose defaults for, covering most of the cases and
leaving tuning to the 1% of folk who really, really care. The current
defaults seem to be doing well here (barring the above note about log
compaction size), and any future changes here should keep this up.

Thanks

Tom Crayford
Heroku Kafka

On Fri, May 20, 2016 at 4:48 AM, Jay Kreps  wrote:

> Hey Gwen,
>
> Yeah specifying in bytes versus the utilization percent would have been
> easier to implement. The argument against that is that basically users are
> super terrible at predicting and updating data sizes as stuff grows and
> you'd have to really set this then for each individual log perhaps?
> Currently I think that the utilization number of 50% is pretty reasonable
> for most people and you only need to tune it if you really want to
> optimize. But if you set a fixed size compaction threshold in bytes then
> how aggressive this is and the resulting utilization totally depends on the
> compacted size of the data in the topic. i.e. if it defaults to 20GB then
> that becomes the minimum size of the log, so if you end up with a bunch of
> topics with 100mb of compacted data they all end up growing to 20GB. As a
> user if you think you've written 100*100mb worth of compacted partitions
> but Kafka has 100*20GB of data I think you'd be a bit shocked.
>
> Ben--I think your proposal attempts to minimize total I/O by waiting until
> the compaction buffer will be maxed out. Each unique key in the uncompacted
> log uses 24 bytes of compaction buffer iirc but since you don't know the
> number of unique keys it's a bit hard to guess this. You could assume they
> are all unique and only compact when you have N/24 messages in the
> uncompacted log where N is the compaction buffer size in bytes. The issue
> as with Gwen's proposal is that by doing this you really lose control of
> disk utilization which might be a bit unintuitive. Your idea of just using
> the free disk space might fix this though it might be somewhat complex in
> the mixed setting with both compacted and non-compacted topics.
>
> One other thing worth noting is that compaction isn't just for disk space.
> A consumer that bootstraps from the beginning (a la state restore in Kafka
> Streams) has to fully read and process the whole log so I think you want to
> compact even when you still have free space.
>
> -Jay
>
>
>
> On Wed, May 18, 2016 at 10:29 PM, Gwen Shapira  wrote:
>
> > Oops :)
> >
> > The docs are definitely not doing the feature any favors, but I didn't
> mean
> > to imply the feature is thoughtless.
> >
> > Here's the thing I'm not getting: You are trading off disk space for IO
> > efficiency. Thats reasonable. But why not allow users to specify space in
> > bytes?
> >
> > Basically tell the LogCompacter: Once I have X bytes of dirty data (or
> post
> > KIP-58, X bytes of data that needs cleaning), please compact it to the
> best
> > of your ability (which in steady state will be into almost nothing).
> >
> > Since we know how big the compaction buffer is and how 
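Jay's N/24 sizing rule above can be checked with quick arithmetic (the 128 MB buffer size is only an example value):

```java
// Back-of-envelope check of the rule cited above: each unique key costs
// about 24 bytes of compaction (dedupe) buffer, so a buffer of N bytes
// can track roughly N / 24 unique keys before it has to compact.
public class DedupeBufferSizing {
    public static long maxTrackableKeys(long bufferBytes) {
        final long BYTES_PER_KEY = 24;  // per-key cost cited in the thread
        return bufferBytes / BYTES_PER_KEY;
    }

    public static void main(String[] args) {
        long buffer = 128L * 1024 * 1024;          // e.g. a 128 MB dedupe buffer
        System.out.println(maxTrackableKeys(buffer));  // 5592405
    }
}
```

So even a modest buffer tracks millions of unique keys per cleaning pass.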

[jira] [Updated] (KAFKA-3728) EndToEndAuthorizationTest offsets_topic misconfigured

2016-05-20 Thread Edoardo Comar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edoardo Comar updated KAFKA-3728:
-
Summary: EndToEndAuthorizationTest offsets_topic misconfigured  (was: 
inconsistent behavior of Consumer.poll() when assigned vs subscribed)

> EndToEndAuthorizationTest offsets_topic misconfigured
> -
>
> Key: KAFKA-3728
> URL: https://issues.apache.org/jira/browse/KAFKA-3728
> Project: Kafka
>  Issue Type: Bug
>Reporter: Edoardo Comar
>
> A consumer that is manually assigned a topic-partition is able to consume 
> messages that a consumer that subscribes to the topic can not.
> To reproduce : take the test 
> EndToEndAuthorizationTest.testProduceConsume 
> (eg the SaslSslEndToEndAuthorizationTest implementation)
>  
> it passes ( = messages are consumed) 
> if the consumer is assigned the single topic-partition
>   consumers.head.assign(List(tp).asJava)
> but fails 
> if the consumer subscribes to the topic - changing the line to :
>   consumers.head.subscribe(List(topic).asJava)
> The failure when subscribed shows this error about synchronization:
>  org.apache.kafka.common.KafkaException: Unexpected error from SyncGroup: 
> Messages are rejected since there are fewer in-sync replicas than required.
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:455)
> The test passes in both cases (subscribe and assign) with the setting
>   this.serverConfig.setProperty(KafkaConfig.MinInSyncReplicasProp, "1")





[jira] [Commented] (KAFKA-3728) EndToEndAuthorizationTest offsets_topic misconfigured

2016-05-20 Thread Edoardo Comar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15293272#comment-15293272
 ] 

Edoardo Comar commented on KAFKA-3728:
--

Thanks [~rsivaram] - it wasn't obvious that acks=-1 requires a number of 
replicas equal to min.insync.replicas, even when the topic has fewer replicas 
than the ISR.

Possibly due to timeouts, even setting OffsetsTopicReplicationFactorProp to 
min.insync.replicas (=3) doesn't make the test pass reliably, so I also set 
OffsetCommitRequiredAcksProp to 1.

PR coming
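The settings described boil down to test-broker overrides along these lines (a sketch using the property names mentioned in this thread, not the merged diff):

```scala
// Sketch of the EndToEndAuthorizationTest setup change discussed above
this.serverConfig.setProperty(KafkaConfig.OffsetsTopicReplicationFactorProp, "3")
this.serverConfig.setProperty(KafkaConfig.OffsetCommitRequiredAcksProp, "1")
```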

> EndToEndAuthorizationTest offsets_topic misconfigured
> -
>
> Key: KAFKA-3728
> URL: https://issues.apache.org/jira/browse/KAFKA-3728
> Project: Kafka
>  Issue Type: Bug
>Reporter: Edoardo Comar
>
> A consumer that is manually assigned a topic-partition is able to consume 
> messages that a consumer that subscribes to the topic can not.
> To reproduce : take the test 
> EndToEndAuthorizationTest.testProduceConsume 
> (eg the SaslSslEndToEndAuthorizationTest implementation)
>  
> it passes ( = messages are consumed) 
> if the consumer is assigned the single topic-partition
>   consumers.head.assign(List(tp).asJava)
> but fails 
> if the consumer subscribes to the topic - changing the line to :
>   consumers.head.subscribe(List(topic).asJava)
> The failure when subscribed shows this error about synchronization:
>  org.apache.kafka.common.KafkaException: Unexpected error from SyncGroup: 
> Messages are rejected since there are fewer in-sync replicas than required.
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:455)
> The test passes in both cases (subscribe and assign) with the setting
>   this.serverConfig.setProperty(KafkaConfig.MinInSyncReplicasProp, "1")





[GitHub] kafka pull request: KAFKA-3728 EndToEndAuthorizationTest offsets_t...

2016-05-20 Thread edoardocomar
GitHub user edoardocomar opened a pull request:

https://github.com/apache/kafka/pull/1414

KAFKA-3728 EndToEndAuthorizationTest offsets_topic misconfigured

Set OffsetsTopicReplicationFactorProp to 3 like MinInSyncReplicasProp
and OffsetCommitRequiredAcksProp to 1 to avoid timeouts

unit test for consumer that subscribes added 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/edoardocomar/kafka KAFKA-3728

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1414.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1414


commit eab6d3f7aa1facc873332228fa32f35e98c02b2d
Author: Edoardo Comar 
Date:   2016-05-19T15:52:15Z

KAFKA-3728 EndToEndAuthorizationTest offsets_topic misconfigured

Set OffsetsTopicReplicationFactorProp to 3 like MinInSyncReplicasProp
and OffsetCommitRequiredAcksProp to 1 to avoid timeouts






[jira] [Commented] (KAFKA-3728) EndToEndAuthorizationTest offsets_topic misconfigured

2016-05-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15293278#comment-15293278
 ] 

ASF GitHub Bot commented on KAFKA-3728:
---

GitHub user edoardocomar opened a pull request:

https://github.com/apache/kafka/pull/1414

KAFKA-3728 EndToEndAuthorizationTest offsets_topic misconfigured

Set OffsetsTopicReplicationFactorProp to 3 like MinInSyncReplicasProp
and OffsetCommitRequiredAcksProp to 1 to avoid timeouts

unit test for consumer that subscribes added 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/edoardocomar/kafka KAFKA-3728

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1414.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1414


commit eab6d3f7aa1facc873332228fa32f35e98c02b2d
Author: Edoardo Comar 
Date:   2016-05-19T15:52:15Z

KAFKA-3728 EndToEndAuthorizationTest offsets_topic misconfigured

Set OffsetsTopicReplicationFactorProp to 3 like MinInSyncReplicasProp
and OffsetCommitRequiredAcksProp to 1 to avoid timeouts




> EndToEndAuthorizationTest offsets_topic misconfigured
> -
>
> Key: KAFKA-3728
> URL: https://issues.apache.org/jira/browse/KAFKA-3728
> Project: Kafka
>  Issue Type: Bug
>Reporter: Edoardo Comar
>
> A consumer that is manually assigned a topic-partition is able to consume 
> messages that a consumer that subscribes to the topic can not.
> To reproduce : take the test 
> EndToEndAuthorizationTest.testProduceConsume 
> (eg the SaslSslEndToEndAuthorizationTest implementation)
>  
> it passes ( = messages are consumed) 
> if the consumer is assigned the single topic-partition
>   consumers.head.assign(List(tp).asJava)
> but fails 
> if the consumer subscribes to the topic - changing the line to :
>   consumers.head.subscribe(List(topic).asJava)
> The failure when subscribed shows this error about synchronization:
>  org.apache.kafka.common.KafkaException: Unexpected error from SyncGroup: 
> Messages are rejected since there are fewer in-sync replicas than required.
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:455)
> The test passes in both cases (subscribe and assign) with the setting
>   this.serverConfig.setProperty(KafkaConfig.MinInSyncReplicasProp, "1")





[jira] [Commented] (KAFKA-3736) Add http metrics reporter

2016-05-20 Thread Adrian Muraru (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15293322#comment-15293322
 ] 

Adrian Muraru commented on KAFKA-3736:
--

Agree - let me see if I can extract this to a {{-metrics-reporter}} module.

> Add http metrics reporter
> -
>
> Key: KAFKA-3736
> URL: https://issues.apache.org/jira/browse/KAFKA-3736
> Project: Kafka
>  Issue Type: New Feature
>  Components: core
>Reporter: Adrian Muraru
> Fix For: 0.10.1.0
>
>
> The current builtin JMX metrics reporter is pretty heavy in terms of load and 
> collection. A new http lightweight reporter is proposed to expose the metrics 
> via a local http port.





[jira] [Commented] (KAFKA-3736) Add http metrics reporter

2016-05-20 Thread Adrian Muraru (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15293323#comment-15293323
 ] 

Adrian Muraru commented on KAFKA-3736:
--

I can add a KIP, no problem, but I was wondering whether this is really a new 
feature given that the support is already there and two reporters are already 
available: JMX and CSV.

> Add http metrics reporter
> -
>
> Key: KAFKA-3736
> URL: https://issues.apache.org/jira/browse/KAFKA-3736
> Project: Kafka
>  Issue Type: New Feature
>  Components: core
>Reporter: Adrian Muraru
> Fix For: 0.10.1.0
>
>
> The current builtin JMX metrics reporter is pretty heavy in terms of load and 
> collection. A new http lightweight reporter is proposed to expose the metrics 
> via a local http port.





[jira] [Commented] (KAFKA-3736) Add http metrics reporter

2016-05-20 Thread Ben Stopford (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15293328#comment-15293328
 ] 

Ben Stopford commented on KAFKA-3736:
-

I think it's the "Any change that impacts the public interfaces of the project" 
part that's triggering inclusion here. 






[jira] [Commented] (KAFKA-3736) Add http metrics reporter

2016-05-20 Thread Adrian Muraru (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15293377#comment-15293377
 ] 

Adrian Muraru commented on KAFKA-3736:
--

But does this change actually change the public interfaces?






Default log.roll.ms to the default retention or the retention in that topic

2016-05-20 Thread Tom Crayford
Hi,

Kafka has a configuration property, log.roll.ms (and log.roll.hours because
Kafka likes having lots of conflicting settings). By default it's set at
168 hours, which exactly matches the default retention of 168 hours.

# Reminder of what the setting does, please skip if you know
This setting controls when the broker will force a new log segment file to
be created. Retention works by looking at the older log segments and
deleting those whose file modification time is past the window specified.

The default of 168 hours becomes confusing when users configure different
retention windows themselves: if you set a retention window *lower* than
that limit on a low-volume topic, retention will only be applied once every
7 days. That can have nasty consequences - if, say, you have a compliance
reason to keep only 4 days of data, you now have to know to tune log.roll.ms
or log.roll.hours as well.

Instead, we could default `log.roll.ms` to:

a) the default retention window if no per topic one is set
b) the per topic retention setting if it's set

Clearly if `log.roll.ms` or `log.roll.hours` *are* explicitly set, we can
use them still, which avoids breaking backwards compatibility for the most
part.

There's only one complication here, which is that if you set your retention
super low (say you set it to 100ms), Kafka will now roll the log file that
often, which would lead to performance issues and problems from the sheer
number of segment files. I think we can, and maybe should, reject having that
setting be so low anyway (either in the topic creation command, or at broker
bootup), but finding a
good default lower bound there might be tricky. An alternative would be to
limit `log.roll.ms` to a certain lower bound, even with this defaulting
behaviour.
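The defaulting rule proposed above (a/b plus a lower-bound safeguard) can be sketched as follows; the method and constant names are hypothetical, not Kafka's actual configuration code:

```java
// Sketch of the proposed defaulting: honor an explicit log.roll.ms if
// configured (backwards compatible), otherwise fall back to the topic's
// retention (or the broker default retention), clamped to a lower bound so
// a very small retention value cannot force constant segment rolling.
public class LogRollDefault {
    static final long MIN_ROLL_MS = 60_000L; // illustrative lower bound

    static long effectiveRollMs(Long explicitRollMs, Long topicRetentionMs,
                                long defaultRetentionMs) {
        if (explicitRollMs != null)          // explicit setting always wins
            return explicitRollMs;
        long retention = topicRetentionMs != null ? topicRetentionMs
                                                  : defaultRetentionMs;
        return Math.max(retention, MIN_ROLL_MS);
    }

    public static void main(String[] args) {
        // No explicit roll configured, topic retention 4 days -> roll every 4 days.
        System.out.println(effectiveRollMs(null, 345_600_000L, 604_800_000L)); // prints 345600000
    }
}
```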

I think making log retention behave as intended, without users having to
understand log.roll.ms, would be a notable improvement for most users of
Kafka, and it has few drawbacks beyond "a small matter of coding".

I'd be happy to write up a KIP if y'all think this warrants it, and/or
write a pull request for this change.

Thanks

Tom Crayford
Heroku Kafka


[jira] [Created] (KAFKA-3737) Closing connection during produce request should be log with WARN level.

2016-05-20 Thread Florian Hussonnois (JIRA)
Florian Hussonnois created KAFKA-3737:
-

 Summary: Closing connection during produce request should be log 
with WARN level.
 Key: KAFKA-3737
 URL: https://issues.apache.org/jira/browse/KAFKA-3737
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.9.0.1
Reporter: Florian Hussonnois
Priority: Trivial


Currently, if an error occurs during a produce request, the exception is
logged at INFO level:

INFO [KafkaApi-0] Closing connection due to error during produce request with 
correlation id 24 from client id console-producer with ack=0
Topic and partition to exceptions: [test,0] -> 
kafka.common.MessageSizeTooLargeException (kafka.server.KafkaApis)

It would be more convenient to use the WARN level to ease the tracing of
these errors.
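The requested change amounts to raising this message from INFO to WARN so it stands out in the broker logs. Illustrated here with the JDK's own logging API (Kafka itself uses a Scala logging trait, not java.util.logging, and the `levelFor` helper is hypothetical); the level choice is the point:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrative only: errors that force a client connection to close
// deserve WARN, not INFO, so operators can filter for them.
public class ProduceErrorLogging {
    private static final Logger LOG = Logger.getLogger("KafkaApi-0");

    static Level levelFor(boolean closesConnection) {
        return closesConnection ? Level.WARNING : Level.INFO;
    }

    public static void main(String[] args) {
        LOG.log(levelFor(true),
                "Closing connection due to error during produce request with "
                + "correlation id 24 from client id console-producer with ack=0");
    }
}
```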





[GitHub] kafka pull request: KAFKA-3737: Change log level for error during ...

2016-05-20 Thread fhussonnois
GitHub user fhussonnois opened a pull request:

https://github.com/apache/kafka/pull/1415

KAFKA-3737: Change log level for error during produce request

Minor change for https://issues.apache.org/jira/browse/KAFKA-3737

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/fhussonnois/kafka kafka-3737

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1415.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1415


commit e2dac4a834c84808a8d171dbfdc149f6207e6d4a
Author: Florian Hussonnois 
Date:   2016-05-20T14:24:29Z

KAFKA-3737: Change log level for error during produce request






[jira] [Commented] (KAFKA-3737) Closing connection during produce request should be log with WARN level.

2016-05-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15293440#comment-15293440
 ] 

ASF GitHub Bot commented on KAFKA-3737:
---

GitHub user fhussonnois opened a pull request:

https://github.com/apache/kafka/pull/1415

KAFKA-3737: Change log level for error during produce request

Minor change for https://issues.apache.org/jira/browse/KAFKA-3737

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/fhussonnois/kafka kafka-3737

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1415.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1415


commit e2dac4a834c84808a8d171dbfdc149f6207e6d4a
Author: Florian Hussonnois 
Date:   2016-05-20T14:24:29Z

KAFKA-3737: Change log level for error during produce request









[jira] [Commented] (KAFKA-2450) Kafka 0.8.2.1 kafka-console-consumer.sh broken

2016-05-20 Thread Joseph Lawson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15293478#comment-15293478
 ] 

Joseph Lawson commented on KAFKA-2450:
--

I got this on kafka_2.10-0.8.2.1 and kafka_2.10-0.8.2.2

> Kafka 0.8.2.1 kafka-console-consumer.sh broken
> --
>
> Key: KAFKA-2450
> URL: https://issues.apache.org/jira/browse/KAFKA-2450
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
> Environment: Linux 3.13.0-48-generic #80-Ubuntu SMP Thu Mar 12 
> 11:16:15 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
> java version "1.7.0_79"
> OpenJDK Runtime Environment (IcedTea 2.5.5) (7u79-2.5.5-0ubuntu0.14.04.2)
> OpenJDK 64-Bit Server VM (build 24.79-b02, mixed mode)
>Reporter: Yu Jin
>Assignee: Manikumar Reddy
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> root@ip-172-31-21-168:/opt/kafka# bin/kafka-console-consumer.sh
> Exception in thread "main" java.lang.NoSuchMethodError: 
> kafka.utils.CommandLineUtils$.printUsageAndDie(Ljoptsimple/OptionParser;Ljava/lang/String;)V
>   at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:88)
>   at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)





[jira] [Commented] (KAFKA-3567) Add --security-protocol option to console consumer and producer

2016-05-20 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15293490#comment-15293490
 ] 

Bharat Viswanadham commented on KAFKA-3567:
---

I have opened a pull request for this Jira.
This is my first contribution. According to the code contribution guidelines,
an automated comment is posted when the test cases pass, but I am not sure
which step I have missed.

So I am copying the mail I received.

GitHub user bharatviswa504 opened a pull request:

https://github.com/apache/kafka/pull/1409

Kafka 3567:Add --security-protocol option to console consumer and producer

Creating a new pull request because the branch was out of date.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bharatviswa504/kafka bharatv/Kafka-3567-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1409.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1409


commit 4e21dc6567a36c30ee075005783cdf47145f4832

> Add --security-protocol option to console consumer and producer
> ---
>
> Key: KAFKA-3567
> URL: https://issues.apache.org/jira/browse/KAFKA-3567
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Sriharsha Chintalapani
>Assignee: Bharat Viswanadham
> Fix For: 0.9.0.0
>
>






[jira] [Commented] (KAFKA-2450) Kafka 0.8.2.1 kafka-console-consumer.sh broken

2016-05-20 Thread Joseph Lawson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15293497#comment-15293497
 ] 

Joseph Lawson commented on KAFKA-2450:
--

Turns out I had my $CLASSPATH set. Unsetting that fixed it.






[jira] [Commented] (KAFKA-2450) Kafka 0.8.2.1 kafka-console-consumer.sh broken

2016-05-20 Thread Joseph Lawson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15293483#comment-15293483
 ] 

Joseph Lawson commented on KAFKA-2450:
--

{quote}
./kafka-console-producer.sh --topic bidhistory --broker-list localhost:9092
Exception in thread "main" java.lang.NoSuchMethodError: 
kafka.utils.CommandLineUtils$.parseKeyValueArgs(Lscala/collection/Iterable;)Ljava/util/Properties;
at 
kafka.tools.ConsoleProducer$ProducerConfig.(ConsoleProducer.scala:245)
at kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:35)
at kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)
{quote}






[jira] [Created] (KAFKA-3738) Add system test against memory leak in Kafka Streams

2016-05-20 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-3738:


 Summary: Add system test against memory leak in Kafka Streams
 Key: KAFKA-3738
 URL: https://issues.apache.org/jira/browse/KAFKA-3738
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Guozhang Wang


Since Streams has external dependencies that originate from C++ code, it is
more likely to have memory leaks. We should consider adding a system test for
validating against object leaks.



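One minimal shape such a check could take (a hypothetical standalone harness, not Kafka's actual ducktape-based system-test framework) is a loop that runs the workload repeatedly and asserts that retained heap stays within a fixed budget:

```java
// Hypothetical leak smoke test: run a workload many times, force GC, and
// check that retained heap stays within a fixed budget. A real leak grows
// roughly linearly with the iteration count, so it blows the budget.
public class LeakSmokeTest {
    static long usedHeap() {
        System.gc(); // best-effort; good enough for a coarse smoke test
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    static boolean withinBudget(long beforeBytes, long afterBytes, long budgetBytes) {
        return afterBytes - beforeBytes <= budgetBytes;
    }

    public static void main(String[] args) {
        long before = usedHeap();
        for (int i = 0; i < 100_000; i++) {
            byte[] scratch = new byte[256]; // stand-in for the real workload
            scratch[0] = 1;                 // keep the allocation live briefly
        }
        long after = usedHeap();
        if (!withinBudget(before, after, 50L * 1024 * 1024))
            throw new AssertionError("possible leak: " + (after - before) + " bytes retained");
        System.out.println("no leak detected");
    }
}
```

For native allocations made behind JNI (the C++ dependency case), the heap check would need to be replaced by an RSS or native-memory-tracking measurement, since those bytes never appear on the Java heap.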


Re: [DISCUSS] KIP-58 - Make Log Compaction Point Configurable

2016-05-20 Thread Gwen Shapira
Tom,

Documentation improvements are always welcome. The docs are in /docs under
the main repository, just sent a PR for trunk and we are good :)

Segment sizes - I have some objections, but this can be discussed in its
own thread. I feel like I did enough hijacking and Eric may get annoyed at
some point.

Gwen

On Fri, May 20, 2016 at 5:19 AM, Tom Crayford  wrote:

> Hi,
>
> From our perspective (running thousands of Kafka clusters), the main issues
> we see with compacted topics *aren't* disk space usage, or IO utilization
> of the log cleaner.
>
> Size matters a *lot* to the usability of consumers bootstrapping from the
> beginning - in fact we've been debating tuning down the log segment size for
> compacted topics to 100MB, because right now leaving 1GB of uncompacted log
> makes some bootstrapping take way too long (especially for non JVM clients,
> even in fast languages like Go they're not as capable of high throughput as
> the JVM clients). I'm wondering if that should be a default in Kafka itself
> as well, and would be happy to contribute that kind of change upstream.
> Kafka already tunes the __consumer_offsets topic down to 100MB per segment
> for this exact reason.
>
> Secondly, the docs don't make it clear (and this has confused dozens of
> well intentioned, smart folk that we've talked to, and likely thousands of
> Kafka users across the board) that compaction is an *alternative* to time
> based retention. Lots of folk used compaction assuming "it's like time
> based retention, but with even less space usage". Switching between the two
> is thankfully easy, but it's been a very confusing thing to understand. I'd
> like to contribute back clearer docs to Kafka about this. Should I send a
> PR? Would that be welcome?
>
> Thirdly, most users *don't* want to tune Kafka's settings at all, or even
> know how or when they should. Whilst some amount of tuning is inevitable,
> the drive Gwen has towards "less tuning" is very positive from our
> perspective. Most users of most software (including technical users of data
> storage and messaging systems) want to "just use it" and not worry about
> "do I need to monitor a thousand things and then tune another thousand
> based on my metrics". Whilst some of that is unavoidable (for sure), it
> feels like compaction tuning should be something the project provides
> *great* general purpose defaults for most users, which cover most of the
> cases, which leave tuning just to the 1% of folk who really really care.
> The current defaults seem to be doing well here (barring the above note
> about log compaction size), and any future changes here should keep this
> up.
>
> Thanks
>
> Tom Crayford
> Heroku Kafka
>
> On Fri, May 20, 2016 at 4:48 AM, Jay Kreps  wrote:
>
> > Hey Gwen,
> >
> > Yeah specifying in bytes versus the utilization percent would have been
> > easier to implement. The argument against that is that basically users
> are
> > super terrible at predicting and updating data sizes as stuff grows and
> > you'd have to really set this then for each individual log perhaps?
> > Currently I think that the utilization number of 50% is pretty reasonable
> > for most people and you only need to tune it if you really want to
> > optimize. But if you set a fixed size compaction threshold in bytes then
> > how aggressive this is and the resulting utilization totally depends on
> the
> > compacted size of the data in the topic. i.e. if it defaults to 20GB then
> > that becomes the minimum size of the log, so if you end up with a bunch
> of
> > topics with 100mb of compacted data they all end up growing to 20GB. As a
> > user if you think you've written 100*100mb worth of compacted partitions
> > but Kafka has 100*20GB of data I think you'd be a bit shocked.
> >
> > Ben--I think your proposal attempts to minimize total I/O by waiting
> until
> > the compaction buffer will be maxed out. Each unique key in the
> uncompacted
> > log uses 24 bytes of compaction buffer iirc but since you don't know the
> > number of unique keys it's a bit hard to guess this. You could assume
> they
> > are all unique and only compact when you have N/24 messages in the
> > uncompacted log where N is the compaction buffer size in bytes. The issue
> > as with Gwen's proposal is that by doing this you really lose control of
> > disk utilization which might be a bit unintuitive. Your idea of just
> using
> > the free disk space might fix this though it might be somewhat complex in
> > the mixed setting with both compacted and non-compacted topics.
> >
> > One other thing worth noting is that compaction isn't just for disk
> space.
> > A consumer that bootstraps from the beginning (a la state restore in
> Kafka
> > Streams) has to fully read and process the whole log so I think you want
> to
> > compact even when you still have free space.
> >
> > -Jay
> >
> >
> >
> > On Wed, May 18, 2016 at 10:29 PM, Gwen Shapira 
> wrote:
> >
> > > Oops :)
> > >
> > > The docs are definitely not doing th
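Jay's back-of-the-envelope trigger quoted above (roughly 24 bytes of dedupe buffer per unique key; since the number of unique keys is unknown, treat every message as unique) amounts to something like this hypothetical check:

```java
// Sketch of the heuristic from the thread: with a dedupe buffer of N bytes
// and ~24 bytes consumed per unique key, compact once the uncompacted
// message count could fill the buffer (pessimistically assuming every
// message carries a distinct key).
public class CompactionTrigger {
    static final long BYTES_PER_KEY = 24L; // per-key buffer cost cited in the thread

    static boolean shouldCompact(long uncompactedMessages, long bufferBytes) {
        return uncompactedMessages >= bufferBytes / BYTES_PER_KEY;
    }

    public static void main(String[] args) {
        // A 24 MB buffer covers at most 1,000,000 assumed-unique keys.
        System.out.println(shouldCompact(1_000_000, 24_000_000L)); // prints true
    }
}
```

As Jay notes, the catch is that triggering purely on buffer fill gives up direct control of disk utilization.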

[jira] [Commented] (KAFKA-3730) Problem when updating from 0.8.2 to 0.9.0

2016-05-20 Thread Clint Hillerman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15293589#comment-15293589
 ] 

Clint Hillerman commented on KAFKA-3730:


Thanks for the response.

I'm fairly certain I updated the code. I actually tried both 0.9.0.0 and 
0.9.0.1. I removed the old install dir, including the bin dir I run Kafka 
from, stopped Kafka, and replaced it with the new version. I updated the 
config to have `inter.broker.protocol.version=0.8.2.X` and restarted Kafka 
on all the boxes, doing this one box at a time. Everything appeared to work 
fine. Then, when I tried to bump the version up to 0.9.0.0, it started 
printing that error.

I just checked, and all of the files in the lib dirs have the 0.9.0.0 version 
(or 0.9.0.1 when I try that version). I only have three nodes, so it's easy to 
check them all.

Would having the zookeepers on the same boxes cause any trouble? Do I need 
to change any settings on the zookeeper side?

The thing I noticed now is that the cluster says it's in sync with 4 nodes 
even though I only have 3.

I'm taking this cluster over from someone else, so I wasn't involved with the 
initial setup, but I'm fairly certain there is no fourth node.

In my configs I have the broker ids set to 100, 101, and 102, but when I do a 
describe I see a node 12. Node 12 is sometimes the leader and it's in sync. I 
made sure I don't have two versions of Kafka running on one of the nodes or 
something.

Is there a way for me to check what ZooKeeper thinks node 12 is? Or do you have 
any advice on figuring out why it thinks node 12 is in sync? Could it be that at 
one point 102 was typed as 12 and Kafka is just handling 102 and 12 the same?





> Problem when updating from 0.8.2 to 0.9.0
> -
>
> Key: KAFKA-3730
> URL: https://issues.apache.org/jira/browse/KAFKA-3730
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1, 0.9.0.0
> Environment: SUSE SLE 10.3 64bit
>Reporter: Clint Hillerman
>Priority: Critical
>  Labels: newbie
>
> Hello,
> I'm having trouble upgrading a 3 node kafka cluster from 0.8.2.1 to 0.9.0.0. 
> I have followed the steps in the upgrade guide here:
> http://kafka.apache.org/documentation.html
> Also, my zookeepers are on the same box as kafka. Each node is both a 
> zookeeper and a broker.
> Here's what I did:
> On each box one at a time I,
> - stopped kafka.
> - replaced the code with the new version. I just removed the old kafka dir and 
> untarred the new 0.9.0.0 version into its place. Note: the data dir is in a 
> different location and was not deleted.
> - copied the server.properties file from the 0.8.2.1 version to the 0.9.0.0 
> config dir.
> - added the "inter.broker.protocol.version=0.8.2.X" line to the 
> server.properties in 0.9.0.0's config dir.
> - restarted kafka
> After I completed that process on all 3 broker/zookeeper boxes, I switched 
> the version to 0.9.0.0 in the server.properties on one broker and restarted 
> kafka.
> This caused an error in my server.log. About one every few seconds:
> [2016-05-18 15:00:27,956] WARN [ReplicaFetcherThread-0-12], Error in fetch 
> kafka.server.ReplicaFetcherThread$FetchRequest@45597bba. Possible cause: 
> org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
> 'responses': Error reading field 'topic': java.nio.BufferUnderflowException 
> (kafka.server.ReplicaFetcherThread)
> A few other things I tried:
> Restarting the zookeepers. Their status was also correct when I ran "server 
> mapr-zookeeper" qstatus.
> The same process with 0.9.0.1 gave this error instead:
> [2016-05-18 14:07:15,545] WARN [ReplicaFetcherThread-0-12], Error in fetch 
> kafka.server.ReplicaFetcherThread$FetchRequest@484ad173. Possible cause: 
> org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
> 'responses': Error reading array of size 1078124, only 176 bytes available 
> (kafka.server.ReplicaFetcherThread)
> Restarting everything at once (all broker and zookeeper processes)
> Please let me know if I should provide more information or if posted this in 
> the wrong location. I'm also not sure if this is the right location to post 
> bugs like this. If there is a forum or something where this is more 
> appropriate please point in that direction.
> Thanks,
> cmhillerman





[jira] [Comment Edited] (KAFKA-3730) Problem when updating from 0.8.2 to 0.9.0

2016-05-20 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15293606#comment-15293606
 ] 

Ismael Juma edited comment on KAFKA-3730 at 5/20/16 4:01 PM:
-

ZooKeeper stores information about the brokers under:

{code}
/brokers/ids/{id}
{code}

If you could include that information for all brokers here, that would be 
helpful.


was (Author: ijuma):
ZooKeeper stores information about the brokers under:

`/brokers/ids/{id}`

If you could include that information for all brokers here, that would be 
helpful.






[jira] [Commented] (KAFKA-3730) Problem when updating from 0.8.2 to 0.9.0

2016-05-20 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15293606#comment-15293606
 ] 

Ismael Juma commented on KAFKA-3730:


ZooKeeper stores information about the brokers under:

`/brokers/ids/{id}`

If you could include that information for all brokers here, that would be 
helpful.






[jira] [Commented] (KAFKA-3730) Problem when updating from 0.8.2 to 0.9.0

2016-05-20 Thread Clint Hillerman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15293613#comment-15293613
 ] 

Clint Hillerman commented on KAFKA-3730:


Ok, I'll include that when I can. 

I don't have /brokers/. I tried cd /brokers and got nothing.

Is there a config location where I could have changed the location?

> Problem when updating from 0.8.2 to 0.9.0
> -
>
> Key: KAFKA-3730
> URL: https://issues.apache.org/jira/browse/KAFKA-3730
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1, 0.9.0.0
> Environment: SUSE SLE 10.3 64bit
>Reporter: Clint Hillerman
>Priority: Critical
>  Labels: newbie
>
> Hello,
> I'm having trouble upgrading a 3-node Kafka cluster from 0.8.2.1 to 0.9.0.0. 
> I have followed the steps in the upgrade guide here:
> http://kafka.apache.org/documentation.html
> Also, my ZooKeepers are on the same boxes as Kafka. Each node is both a 
> zookeeper and a broker.
> Here's what I did. On each box, one at a time, I:
> - stopped Kafka.
> - replaced the code with the new version. I just removed the old Kafka dir and 
> untarred the new 0.9.0.0 version into its place. Note: the data dir is in a 
> different location and was not deleted.
> - copied the server.properties file from the 0.8.2.1 version to the 0.9.0.0 
> config dir.
> - added the "inter.broker.protocol.version=0.8.2.X" line to the 
> server.properties in 0.9.0.0's config dir.
> - restarted Kafka.
> After I completed that process on all 3 broker/ZooKeeper boxes, I switched 
> the version to 0.9.0.0 in the server.properties on one broker and restarted 
> Kafka.
> This caused an error in my server.log, about one every few seconds:
> [2016-05-18 15:00:27,956] WARN [ReplicaFetcherThread-0-12], Error in fetch 
> kafka.server.ReplicaFetcherThread$FetchRequest@45597bba. Possible cause: 
> org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
> 'responses': Error reading field 'topic': java.nio.BufferUnderflowException 
> (kafka.server.ReplicaFetcherThread)
> A few other things I tried:
> Restarting the ZooKeepers. Their status was also correct when I ran "service 
> mapr-zookeeper qstatus".
> The same process with 9.1, which gave this error instead:
> [2016-05-18 14:07:15,545] WARN [ReplicaFetcherThread-0-12], Error in fetch 
> kafka.server.ReplicaFetcherThread$FetchRequest@484ad173. Possible cause: 
> org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
> 'responses': Error reading array of size 1078124, only 176 bytes available 
> (kafka.server.ReplicaFetcherThread)
> Restarting everything at once (all broker and ZooKeeper processes).
> Please let me know if I should provide more information or if I posted this in 
> the wrong location. I'm also not sure if this is the right place to post 
> bugs like this. If there is a forum or somewhere more appropriate, please 
> point me in that direction.
> Thanks,
> cmhillerman
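
The upgrade flow in the steps above is effectively a two-phase config change. As a rough sketch against a scratch properties file (file contents and paths are invented for illustration, not taken from the reporter's cluster):

```shell
# Sketch of the two-phase "inter.broker.protocol.version" bump described
# in the upgrade steps above, run against a throwaway copy of
# server.properties (broker settings here are made up for the demo).
cfg=$(mktemp)
printf 'broker.id=1\nlog.dirs=/tmp/kafka-logs\n' > "$cfg"

# Phase 1: before restarting each broker on the 0.9.0.0 binaries, pin
# the inter-broker protocol to the old version so mixed-version brokers
# can still replicate from each other.
echo 'inter.broker.protocol.version=0.8.2.X' >> "$cfg"

# Phase 2: only once *every* broker runs the 0.9.0.0 binaries, bump the
# protocol version and do one more rolling restart.
tmp=$(mktemp)
grep -v '^inter.broker.protocol.version' "$cfg" > "$tmp"
echo 'inter.broker.protocol.version=0.9.0.0' >> "$tmp"
mv "$tmp" "$cfg"

grep '^inter.broker.protocol.version' "$cfg"
```

In the real procedure each phase is a rolling restart of every broker; the errors above are what mixed protocol versions look like when phase 2 starts before phase 1 has covered all brokers.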





[jira] [Commented] (KAFKA-3730) Problem when updating from 0.8.2 to 0.9.0

2016-05-20 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15293624#comment-15293624
 ] 

Ismael Juma commented on KAFKA-3730:


That's a path inside ZooKeeper.



[jira] [Commented] (KAFKA-3730) Problem when updating from 0.8.2 to 0.9.0

2016-05-20 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15293626#comment-15293626
 ] 

Vahid Hashemian commented on KAFKA-3730:


Did you try it using the ZooKeeper shell (bin/zookeeper-shell.sh)?
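
The suggestion above would look something like the following session (hypothetical host, broker IDs, and output — it needs a live ZooKeeper ensemble, and a MapR install may ship the shell under a different path):

```
$ bin/zookeeper-shell.sh localhost:2181
ls /brokers/ids
[11, 12, 13]
get /brokers/ids/12
{"jmx_port":-1,"host":"broker2.example.com","version":1,"port":9092}
```

The key point is that /brokers/ids is a path inside ZooKeeper's own namespace, not on the filesystem, which is why `cd /brokers` on the box itself shows nothing.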



[jira] [Commented] (KAFKA-3730) Problem when updating from 0.8.2 to 0.9.0

2016-05-20 Thread Clint Hillerman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15293628#comment-15293628
 ] 

Clint Hillerman commented on KAFKA-3730:


Like it's a path that I can't change? That's really weird. Thanks again for the 
quick responses. I didn't expect an answer for weeks.



[jira] [Commented] (KAFKA-3730) Problem when updating from 0.8.2 to 0.9.0

2016-05-20 Thread Clint Hillerman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15293631#comment-15293631
 ] 

Clint Hillerman commented on KAFKA-3730:


Oh no I didn't. I've never used the zookeeper-shell before. Trying it now. 
Ignore the other comment with the "Like it's a path I can't change".



[jira] [Commented] (KAFKA-3730) Problem when updating from 0.8.2 to 0.9.0

2016-05-20 Thread Clint Hillerman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15293636#comment-15293636
 ] 

Clint Hillerman commented on KAFKA-3730:


I'm having trouble finding the script:

I look in:
/opt/mapr/zookeeper/zookeeper-3.4.5/bin

And an ls of that dir: 

README.txt zkCli.cmd  zkEnv.cmd  zkServer.cmd  zookeeper*
zkCleanup.sh*  zkCli.sh*  zkEnv.sh*  zkServer.sh*

I've tried running some of them, but I'm not having any luck finding the 
zookeeper-shell script.




[jira] [Commented] (KAFKA-3730) Problem when updating from 0.8.2 to 0.9.0

2016-05-20 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15293641#comment-15293641
 ] 

Ismael Juma commented on KAFKA-3730:


You can change it by using ZooKeeper's chroot functionality, but it wasn't 
clear to me from your response whether I had made it clear that it is a path 
within ZooKeeper rather than a filesystem directory.
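
For context, the chroot mentioned above is set by appending a path to the broker's `zookeeper.connect` value; a sketch with invented hostnames:

```
# server.properties (hosts are illustrative). With this setting, Kafka's
# znodes live under /kafka inside ZooKeeper, e.g. /kafka/brokers/ids.
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/kafka
```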



[jira] [Commented] (KAFKA-3730) Problem when updating from 0.8.2 to 0.9.0

2016-05-20 Thread Clint Hillerman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15293646#comment-15293646
 ] 

Clint Hillerman commented on KAFKA-3730:


OMG, I got it. You were right from the beginning. There was a 4th node 
running. Thanks a bunch. Is there anything I can do as thanks besides 
upvoting things on this page?



Re: [VOTE] 0.10.0.0 RC6

2016-05-20 Thread Harsha
+1. Ran a 3-node cluster with a few system tests on our side. Looks good.

-Harsha

On Thu, May 19, 2016, at 07:47 PM, Jun Rao wrote:
> Thanks for running the release. +1 from me. Verified the quickstart.
> 
> Jun
> 
> On Tue, May 17, 2016 at 10:00 PM, Gwen Shapira  wrote:
> 
> > Hello Kafka users, developers and client-developers,
> >
> > This is the seventh (!) candidate for release of Apache Kafka
> > 0.10.0.0. This is a major release that includes: (1) New message
> > format including timestamps (2) client interceptor API (3) Kafka
> > Streams.
> >
> > This RC was rolled out to fix an issue with our packaging that caused
> > dependencies to leak in ways that broke our licensing, and an issue
> > with protocol versions that broke upgrade for LinkedIn and others who
> > may run from trunk. Thanks to Ewen, Ismael, Becket and Jun for the
> > finding and fixing of issues.
> >
> > Release notes for the 0.10.0.0 release:
> > http://home.apache.org/~gwenshap/0.10.0.0-rc6/RELEASE_NOTES.html
> >
> > Let's try to vote within the 72h release vote window and get this baby
> > out already!
> >
> > *** Please download, test and vote by Friday, May 20, 23:59 PT
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > http://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > http://home.apache.org/~gwenshap/0.10.0.0-rc6/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/
> >
> > * java-doc
> > http://home.apache.org/~gwenshap/0.10.0.0-rc6/javadoc/
> >
> > * tag to be voted upon (off 0.10.0 branch) is the 0.10.0.0 tag:
> >
> > https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=065899a3bc330618e420673acf9504d123b800f3
> >
> > * Documentation:
> > http://kafka.apache.org/0100/documentation.html
> >
> > * Protocol:
> > http://kafka.apache.org/0100/protocol.html
> >
> > /**
> >
> > Thanks,
> >
> > Gwen
> >


[jira] [Commented] (KAFKA-3730) Problem when updating from 0.8.2 to 0.9.0

2016-05-20 Thread Clint Hillerman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15293678#comment-15293678
 ] 

Clint Hillerman commented on KAFKA-3730:


I don't know how to edit that horrible English.



[GitHub] kafka pull request: Specifiy keyalg RSA for SSL key generation.

2016-05-20 Thread harshach
GitHub user harshach opened a pull request:

https://github.com/apache/kafka/pull/1416

Specifiy keyalg RSA for SSL key generation.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/harshach/kafka ssl-doc-fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1416.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1416


commit 2131452fac8f7a7a037d1e09d4258995f5335d9b
Author: Sriharsha Chintalapani 
Date:   2016-05-20T16:46:04Z

Specifiy keyalg RSA for SSL key generation.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---
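
For context, this documentation fix concerns key generation for Kafka's SSL setup: without an explicit algorithm, `keytool` historically defaulted to DSA keys, which many TLS clients refuse to negotiate. A sketch of the kind of command the fix targets (keystore name, alias, and validity are illustrative):

```
keytool -keystore server.keystore.jks -alias localhost \
        -validity 365 -genkey -keyalg RSA
```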


Re: [VOTE] 0.10.0.0 RC6

2016-05-20 Thread Guozhang Wang
+1. Validated maven (should be
https://repository.apache.org/content/groups/staging/org/apache/kafka/ btw)
and binary libraries, quick start.

On Fri, May 20, 2016 at 9:36 AM, Harsha  wrote:

> +1 . Ran a 3-node cluster with few system tests on our side. Looks good.
>
> -Harsha
>



-- 
-- Guozhang


[jira] [Resolved] (KAFKA-3730) Problem when updating from 0.8.2 to 0.9.0

2016-05-20 Thread Clint Hillerman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Clint Hillerman resolved KAFKA-3730.

Resolution: Not A Problem

Operator error

> Problem when updating from 0.8.2 to 0.9.0
> -
>
> Key: KAFKA-3730
> URL: https://issues.apache.org/jira/browse/KAFKA-3730
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1, 0.9.0.0
> Environment: SUSE SLE 10.3 64bit
>Reporter: Clint Hillerman
>Priority: Critical
>  Labels: newbie
>
> Hello,
> I'm having trouble upgrading a 3 node kafka cluster from 0.8.2.1 to 0.9.0.0. 
> I have followed the steps in the upgrade guide here:
> http://kafka.apache.org/documentation.html
> Also, my zookeepers are on the same box as kafka. Each node is both a 
> zookeeper and a broker.
> Here's what I did:
> On each box one at a time I,
> - stopped kafka.
> replaced the code with the new version. Just removed the old kafka dir and 
> untarred the new 0.9.0.0 version into its place. Note: the data dir is in a 
> different location and was not deleted.
> - copied the server.properties file from the 0.8.2.1 version to the 0.9.0.0 
> config dir.
> - added the "inter.broker.protocol.version=0.8.2.X" line to the 
> server.properties in 0.9.0.0's config dir.
> - restarted kafka
> After I completed that process on all 3 broker/zookeeper boxes, I switched 
> the version to 0.9.0.0 in the server.properties on one broker and restarted 
> kafka.
> This caused an error in my server.log. About one every few seconds:
> [2016-05-18 15:00:27,956] WARN [ReplicaFetcherThread-0-12], Error in fetch 
> kafka.server.ReplicaFetcherThread$FetchRequest@45597bba. Possible cause: 
> org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
> 'responses': Error reading field 'topic': java.nio.BufferUnderflowException 
> (kafka.server.ReplicaFetcherThread)
> A few other things I tried:
> Restarting zookeepers. Their status was also correct when I ran "server 
> mapr-zookeeper" qstatus.
> The same process with 9.1 and got this error instead:
> [2016-05-18 14:07:15,545] WARN [ReplicaFetcherThread-0-12], Error in fetch 
> kafka.server.ReplicaFetcherThread$FetchRequest@484ad173. Possible cause: 
> org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
> 'responses': Error reading array of size 1078124, only 176 bytes available 
> (kafka.server.ReplicaFetcherThread)
> Restarting everything at once (all broker and zookeeper processes)
> Please let me know if I should provide more information or if posted this in 
> the wrong location. I'm also not sure if this is the right location to post 
> bugs like this. If there is a forum or something where this is more 
> appropriate please point in that direction.
> Thanks,
> cmhillerman
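For reference, the two-phase rolling upgrade the reporter describes maps onto a server.properties fragment like the following; the values are illustrative and follow the upgrade notes in the Kafka documentation.

```properties
# Phase 1: rolling restart onto the 0.9.0.0 binaries, while still speaking
# the old inter-broker protocol so mixed-version brokers can replicate:
inter.broker.protocol.version=0.8.2.X

# Phase 2: only after every broker runs the new binaries, bump the protocol
# version (again one broker at a time):
# inter.broker.protocol.version=0.9.0.0
```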



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] 0.10.0.0 RC6

2016-05-20 Thread Ewen Cheslack-Postava
+1 validated connect with a couple of simple connectors and console
producer/consumer.

-Ewen

On Fri, May 20, 2016 at 9:53 AM, Guozhang Wang  wrote:

> +1. Validated maven (should be
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
> btw)
> and binary libraries, quick start.



-- 
Thanks,
Ewen


[jira] [Created] (KAFKA-3739) Add no-arg constructor for library provided serdes

2016-05-20 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-3739:


 Summary: Add no-arg constructor for library provided serdes
 Key: KAFKA-3739
 URL: https://issues.apache.org/jira/browse/KAFKA-3739
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Guozhang Wang


We need to add the no-arg constructor explicitly for those library-provided 
serdes such as {{WindowedSerde}} that already have constructors with arguments. 
Otherwise they cannot be used through configs, which expect to construct 
them via reflection using the no-arg constructor.
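A minimal sketch of the constraint: config-driven instantiation reflects on the no-arg constructor, so a class that only offers argument-taking constructors (as {{WindowedSerde}} does today) cannot be constructed this way. The class names below are illustrative stand-ins, not Kafka's actual config machinery.

```java
public class ReflectionDemo {
    // Stand-in for a serde that only has an argument-taking constructor,
    // like the WindowedSerde case described above.
    static class ArgOnlySerde {
        ArgOnlySerde(String innerType) { }
    }

    // Stand-in for a serde that also provides a no-arg constructor.
    static class NoArgSerde {
        NoArgSerde() { }
    }

    public static void main(String[] args) throws Exception {
        // Config-driven instantiation typically does this:
        Object ok = NoArgSerde.class.getDeclaredConstructor().newInstance();
        System.out.println(ok != null);

        try {
            // No no-arg constructor exists, so the lookup itself fails.
            ArgOnlySerde.class.getDeclaredConstructor().newInstance();
        } catch (NoSuchMethodException e) {
            System.out.println("NoSuchMethodException");
        }
    }
}
```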



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] 0.10.0.0 RC6

2016-05-20 Thread Joe Stein
+1 ran quick start from source and binary release

On Fri, May 20, 2016 at 1:07 PM, Ewen Cheslack-Postava 
wrote:

> +1 validated connect with a couple of simple connectors and console
> producer/consumer.
>
> -Ewen


[jira] [Created] (KAFKA-3740) Add configs for RocksDBStores

2016-05-20 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-3740:


 Summary: Add configs for RocksDBStores
 Key: KAFKA-3740
 URL: https://issues.apache.org/jira/browse/KAFKA-3740
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Guozhang Wang


Today most of the RocksDB configs are hard-coded inside {{RocksDBStore}}, or 
the default values are used directly. We need to make them configurable for 
advanced users. For example, some default values may not work perfectly for 
some scenarios: 
https://github.com/HenryCaiHaiying/kafka/commit/ccc4e25b110cd33eea47b40a2f6bf17ba0924576

One way of doing that is to introduce a "RocksDBStoreConfigs" object similar 
to "StreamsConfig", which defines all related RocksDB options configs that can 
be passed as key-value pairs to "StreamsConfig".
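One hedged sketch of the pass-through idea: carry RocksDB tuning options as prefixed key-value pairs inside the normal streams configuration map, and strip the prefix when handing them to the store. The `rocksdb.config.` prefix and the class name here are invented for illustration; none of this is an existing Kafka API.

```java
import java.util.HashMap;
import java.util.Map;

public class RocksDbConfigSketch {
    static final String PREFIX = "rocksdb.config.";

    // Extract "rocksdb.config.*" entries, stripping the prefix, so a store
    // could apply them to its RocksDB Options object at open time.
    static Map<String, String> rocksDbOptions(Map<String, String> streamsProps) {
        Map<String, String> out = new HashMap<>();
        for (Map.Entry<String, String> e : streamsProps.entrySet()) {
            if (e.getKey().startsWith(PREFIX)) {
                out.put(e.getKey().substring(PREFIX.length()), e.getValue());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        props.put("application.id", "my-app");
        props.put(PREFIX + "block.cache.size", "104857600");
        props.put(PREFIX + "write.buffer.size", "33554432");

        Map<String, String> opts = rocksDbOptions(props);
        System.out.println(opts.size());
        System.out.println(opts.get("block.cache.size"));
    }
}
```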



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3740) Add configs for RocksDBStores

2016-05-20 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15293908#comment-15293908
 ] 

Guozhang Wang commented on KAFKA-3740:
--

[~h...@pinterest.com] Are you interested in picking it up?

> Add configs for RocksDBStores
> -
>
> Key: KAFKA-3740
> URL: https://issues.apache.org/jira/browse/KAFKA-3740
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>  Labels: api, newbie
>
> Today most of the RocksDB configs are hard-coded inside {{RocksDBStore}}, 
> or the default values are used directly. We need to make them configurable 
> for advanced users. For example, some default values may not work perfectly 
> for some scenarios: 
> https://github.com/HenryCaiHaiying/kafka/commit/ccc4e25b110cd33eea47b40a2f6bf17ba0924576
>  
> One way of doing that is to introduce a "RocksDBStoreConfigs" object similar 
> to "StreamsConfig", which defines all related RocksDB options configs that 
> can be passed as key-value pairs to "StreamsConfig".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-3739) Add no-arg constructor for library provided serdes

2016-05-20 Thread Liquan Pei (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liquan Pei reassigned KAFKA-3739:
-

Assignee: Liquan Pei

> Add no-arg constructor for library provided serdes
> --
>
> Key: KAFKA-3739
> URL: https://issues.apache.org/jira/browse/KAFKA-3739
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Liquan Pei
>  Labels: newbie, user-experience
>
> We need to add the no-arg constructor explicitly for those library-provided 
> serdes such as {{WindowedSerde}} that already have constructors with 
> arguments. Otherwise they cannot be used through configs, which expect to 
> construct them via reflection using the no-arg constructor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3720 : Deprecated BufferExhaustedExcepti...

2016-05-20 Thread MayureshGharat
GitHub user MayureshGharat opened a pull request:

https://github.com/apache/kafka/pull/1417

KAFKA-3720 : Deprecated BufferExhaustedException and also removed its use 
and the related sensor metric

BufferExhaustedException is no longer used and should be deprecated and the 
corresponding metrics should be removed.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/MayureshGharat/kafka kafka-3720

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1417.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1417


commit c5b726da480abe4a9e9d8d249c09730f9bb7ea78
Author: MayureshGharat 
Date:   2016-05-20T18:44:31Z

Deprecated BufferExhaustedException and also removed its use and the 
related sensor metric




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3720) Remove BufferExhaustException from doSend() in KafkaProducer

2016-05-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15293959#comment-15293959
 ] 

ASF GitHub Bot commented on KAFKA-3720:
---

GitHub user MayureshGharat opened a pull request:

https://github.com/apache/kafka/pull/1417

KAFKA-3720 : Deprecated BufferExhaustedException and also removed its use 
and the related sensor metric

BufferExhaustedException is no longer used and should be deprecated and the 
corresponding metrics should be removed.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/MayureshGharat/kafka kafka-3720

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1417.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1417


commit c5b726da480abe4a9e9d8d249c09730f9bb7ea78
Author: MayureshGharat 
Date:   2016-05-20T18:44:31Z

Deprecated BufferExhaustedException and also removed its use and the 
related sensor metric




> Remove BufferExhaustException from doSend() in KafkaProducer
> 
>
> Key: KAFKA-3720
> URL: https://issues.apache.org/jira/browse/KAFKA-3720
> Project: Kafka
>  Issue Type: Bug
>Reporter: Mayuresh Gharat
>Assignee: Mayuresh Gharat
>
> KafkaProducer no longer throws BufferExhaustedException. We should remove it 
> from the catch clause. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-3735) RocksDB objects needs to be disposed after usage

2016-05-20 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-3735.
--
   Resolution: Fixed
Fix Version/s: 0.10.0.1

Issue resolved by pull request 1411
[https://github.com/apache/kafka/pull/1411]

> RocksDB objects needs to be disposed after usage
> 
>
> Key: KAFKA-3735
> URL: https://issues.apache.org/jira/browse/KAFKA-3735
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>  Labels: architecture
> Fix For: 0.10.0.1
>
>
> The RocksDB JNI interface {{RocksObject}} has a dispose() function which needs 
> to be called explicitly once the object is no longer used; otherwise GC cannot 
> free the off-heap memory it references, effectively leading to a memory leak. 
> See: https://github.com/facebook/rocksdb/issues/752#issuecomment-146511412
> We need to make sure all library-controlled RocksDB objects are disposed 
> after usage, and also instruct users to {{close}} those objects outside the 
> library's control.
> Note that the RocksDB community is also going to replace the {{dispose}} API by 
> extending {{AutoCloseable}} in the future, so this ticket may need to be 
> revisited when upgrading RocksDB versions:
> https://www.facebook.com/groups/rocksdb.dev/permalink/870848569680325/
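The required pattern can be sketched with a stand-in class; the real native-backed class is org.rocksdb.RocksObject, but the stub below avoids the JNI dependency and only illustrates the try/finally discipline the ticket asks for.

```java
public class DisposeDemo {
    // Stand-in for a native-backed object with an explicit dispose().
    static class FakeRocksObject {
        private boolean disposed = false;
        void dispose() { disposed = true; }   // would free off-heap memory
        boolean isDisposed() { return disposed; }
    }

    public static void main(String[] args) {
        FakeRocksObject options = new FakeRocksObject();
        try {
            // ... use the object while reading/writing the store ...
        } finally {
            // Without this call the off-heap allocation would outlive GC
            // of the Java wrapper, leaking memory as described above.
            options.dispose();
        }
        System.out.println(options.isDisposed());
    }
}
```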



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3735: Dispose all RocksObejcts upon comp...

2016-05-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1411


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3735) RocksDB objects needs to be disposed after usage

2016-05-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15293968#comment-15293968
 ] 

ASF GitHub Bot commented on KAFKA-3735:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1411


> RocksDB objects needs to be disposed after usage
> 
>
> Key: KAFKA-3735
> URL: https://issues.apache.org/jira/browse/KAFKA-3735
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>  Labels: architecture
> Fix For: 0.10.0.1
>
>
> The RocksDB JNI interface {{RocksObject}} has a dispose() function which needs 
> to be called explicitly once the object is no longer used; otherwise GC cannot 
> free the off-heap memory it references, effectively leading to a memory leak. 
> See: https://github.com/facebook/rocksdb/issues/752#issuecomment-146511412
> We need to make sure all library-controlled RocksDB objects are disposed 
> after usage, and also instruct users to {{close}} those objects outside the 
> library's control.
> Note that the RocksDB community is also going to replace the {{dispose}} API by 
> extending {{AutoCloseable}} in the future, so this ticket may need to be 
> revisited when upgrading RocksDB versions:
> https://www.facebook.com/groups/rocksdb.dev/permalink/870848569680325/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] 0.10.0.0 RC6

2016-05-20 Thread Dana Powers
+1 -- passed kafka-python integration tests

-Dana

On Fri, May 20, 2016 at 11:16 AM, Joe Stein  wrote:
> +1 ran quick start from source and binary release


Build failed in Jenkins: kafka-trunk-jdk8 #642

2016-05-20 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-3735: Dispose all RocksObejcts upon completeness

--
[...truncated 4026 lines...]
kafka.log.LogConfigTest > testKafkaConfigToProps PASSED

kafka.log.LogConfigTest > testFromPropsInvalid PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.CleanerTest > testBuildOffsetMapFakeLarge PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGroupingWithSparseOffsets PASSED

kafka.log.CleanerTest > testRecoveryAfterCrash PASSED

kafka.log.CleanerTest > testLogToClean PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > truncate PASSED
:test_core_2_11
Building project 'core' with Scala version 2.11.8
:kafka-trunk-jdk8:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes UP-TO-DATE
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar UP-TO-DATE
:kafka-trunk-jdk8:clients:compileTestJava UP-TO-DATE
:kafka-trunk-jdk8:clients:processTestResources UP-TO-DATE
:kafka-trunk-jdk8:clients:testClasses UP-TO-DATE
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScala
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0

:79:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
:401:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (value.expireTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

^
:56:
 no valid targets for annotation on variable _file - it is discarded unused. 
You may specify targets with meta-annotations, e.g. @(volatile @param)
class OffsetIndex(@volatile private[this] var _file: File, val baseOffset: 
Long, val maxIndexSize: Int = -1) extends Logging {
   ^
:301:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (partitionData.timestamp == 
OffsetCommitRequest.DEFAULT_TIMESTAMP)
 ^
:246:
 method readLine in class DeprecatedConsole is deprecated: Use the method in 
scala.io.StdIn
Console.readLine().equalsIgnoreCase("y")
^
:378:
 method readLine in class DeprecatedConsole is deprecated: Use the method in 
scala.io.StdIn
if (!Console.readLine().equalsIgnoreCase("y")) {
   

[jira] [Commented] (KAFKA-3740) Add configs for RocksDBStores

2016-05-20 Thread Henry Cai (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15294204#comment-15294204
 ] 

Henry Cai commented on KAFKA-3740:
--

You can assign this one to me.

> Add configs for RocksDBStores
> -
>
> Key: KAFKA-3740
> URL: https://issues.apache.org/jira/browse/KAFKA-3740
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>  Labels: api, newbie
>
> Today most of the RocksDB configs are hard-coded inside {{RocksDBStore}}, 
> or the default values are used directly. We need to make them configurable 
> for advanced users. For example, some default values may not work perfectly 
> for some scenarios: 
> https://github.com/HenryCaiHaiying/kafka/commit/ccc4e25b110cd33eea47b40a2f6bf17ba0924576
>  
> One way of doing that is to introduce a "RocksDBStoreConfigs" object similar 
> to "StreamsConfig", which defines all related RocksDB options configs that 
> can be passed as key-value pairs to "StreamsConfig".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3740) Add configs for RocksDBStores

2016-05-20 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15294232#comment-15294232
 ] 

Guozhang Wang commented on KAFKA-3740:
--

I have added you to the contributor list, so you should be able to assign it to 
yourself. Let me know if it doesn't work for you.

> Add configs for RocksDBStores
> -
>
> Key: KAFKA-3740
> URL: https://issues.apache.org/jira/browse/KAFKA-3740
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>  Labels: api, newbie
>
> Today most of the RocksDB configs are hard-coded inside {{RocksDBStore}}, 
> or the default values are used directly. We need to make them configurable 
> for advanced users. For example, some default values may not work perfectly 
> for some scenarios: 
> https://github.com/HenryCaiHaiying/kafka/commit/ccc4e25b110cd33eea47b40a2f6bf17ba0924576
>  
> One way of doing that is to introduce a "RocksDBStoreConfigs" object similar 
> to "StreamsConfig", which defines all related RocksDB options configs that 
> can be passed as key-value pairs to "StreamsConfig".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3735) RocksDB objects needs to be disposed after usage

2016-05-20 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3735:
-
Fix Version/s: (was: 0.10.0.1)
   0.10.1.0

> RocksDB objects needs to be disposed after usage
> 
>
> Key: KAFKA-3735
> URL: https://issues.apache.org/jira/browse/KAFKA-3735
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>  Labels: architecture
> Fix For: 0.10.1.0
>
>
> The RocksDB JNI interface {{RocksObject}} has a dispose() function which needs 
> to be called explicitly once the object is no longer used; otherwise GC cannot 
> free the off-heap memory it references, effectively leading to a memory leak. 
> See: https://github.com/facebook/rocksdb/issues/752#issuecomment-146511412
> We need to make sure all library-controlled RocksDB objects are disposed 
> after usage, and also instruct users to {{close}} those objects outside the 
> library's control.
> Note that the RocksDB community is also going to replace the {{dispose}} API by 
> extending {{AutoCloseable}} in the future, so this ticket may need to be 
> revisited when upgrading RocksDB versions:
> https://www.facebook.com/groups/rocksdb.dev/permalink/870848569680325/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-742) Existing directories under the Kafka data directory without any data cause process to not start

2016-05-20 Thread David Tucker (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15294250#comment-15294250
 ] 

David Tucker commented on KAFKA-742:


And the "lost+found" issue still exists in Kafka 0.9 even when a non-root user 
is given ownership of the mount point.   There is a workaround : create a 
single sub-directory and point to that.


> Existing directories under the Kafka data directory without any data cause 
> process to not start
> ---
>
> Key: KAFKA-742
> URL: https://issues.apache.org/jira/browse/KAFKA-742
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 0.8.0
>Reporter: Chris Curtin
>Assignee: Ashish K Singh
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-742.1.patch, KAFKA-742.patch
>
>
> I incorrectly setup the configuration file to have the metrics go to 
> /var/kafka/metrics while the logs were in /var/kafka. On startup I received 
> the following error then the daemon exited:
> 30   [main] INFO  kafka.log.LogManager  - [Log Manager on Broker 0] Loading 
> log 'metrics'
> 32   [main] FATAL kafka.server.KafkaServerStartable  - Fatal error during 
> KafkaServerStable startup. Prepare to shutdown
> java.lang.StringIndexOutOfBoundsException: String index out of range: -1
> at java.lang.String.substring(String.java:1937)
> at 
> kafka.log.LogManager.kafka$log$LogManager$$parseTopicPartitionName(LogManager.scala:335)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$1$$anonfun$apply$3.apply(LogManager.scala:112)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$1$$anonfun$apply$3.apply(LogManager.scala:109)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
> at scala.collection.mutable.ArrayOps.foreach(ArrayOps.scala:34)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:109)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:101)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
> at 
> scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:32)
> at kafka.log.LogManager.loadLogs(LogManager.scala:101)
> at kafka.log.LogManager.<init>(LogManager.scala:62)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:59)
> at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:34)
> at kafka.Kafka$.main(Kafka.scala:46)
> at kafka.Kafka.main(Kafka.scala)
> 34   [main] INFO  kafka.server.KafkaServer  - [Kafka Server 0], shutting down
> This was on a brand new cluster so no data or metrics logs existed yet.
> Moving the metrics to their own directory (not a child of the logs) allowed 
> the daemon to start.
> Took a few minutes to figure out what was wrong.
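The parsing failure above can be mimicked in a few lines of shell. This is a sketch, not Kafka's actual code: it assumes, as the stack trace suggests, that `parseTopicPartitionName` derives the topic and partition by splitting a directory name on its last `-`, so a name with no dash (like `metrics`) yields an invalid index.

```shell
# Sketch only: mimics the split-on-last-dash logic implied by the trace.
# A directory name without a '-' (e.g. "metrics") has no valid split
# point, mirroring the StringIndexOutOfBoundsException above.
parse_dir() {
  name="$1"
  case "$name" in
    *-*) echo "topic=${name%-*} partition=${name##*-}" ;;
    *)   echo "error: '$name' is not <topic>-<partition>" >&2; return 1 ;;
  esac
}

parse_dir my-topic-0                      # → topic=my-topic partition=0
parse_dir metrics || echo "rejected: metrics"
```

Per the issue's Fix Version above, this was addressed for 0.9.0.0; until then, the safe workaround is the one the reporter used: keep the directories in log.dirs dedicated to Kafka data.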



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3740) Add configs for RocksDBStores

2016-05-20 Thread Henry Cai (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15294270#comment-15294270
 ] 

Henry Cai commented on KAFKA-3740:
--

Don't see that 'assign' button

> Add configs for RocksDBStores
> -
>
> Key: KAFKA-3740
> URL: https://issues.apache.org/jira/browse/KAFKA-3740
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>  Labels: api, newbie
>
> Today most of the rocksDB configs are hard-coded inside {{RocksDBStore}}, 
> or the default values are directly used. We need to make them configurable 
> for advanced users, since some default values may not work well for every 
> scenario: 
> https://github.com/HenryCaiHaiying/kafka/commit/ccc4e25b110cd33eea47b40a2f6bf17ba0924576
>  
> One way of doing that is to introduce a "RocksDBStoreConfigs" object similar 
> to "StreamsConfig", which defines all related rocksDB option configs that 
> can be passed as key-value pairs to "StreamsConfig".
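As a rough illustration of the proposed shape, such pass-through options might look like ordinary properties handed to StreamsConfig. Every key name below is hypothetical; the JIRA proposes only the mechanism, not these names:

```shell
# Hypothetical key names only -- KAFKA-3740 proposes the pass-through
# mechanism, not these specific configs.
props='rocksdb.config.block.cache.size=104857600
rocksdb.config.write.buffer.size=33554432
rocksdb.config.max.write.buffer.number=3'
printf '%s\n' "$props"
```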



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3741) KStream config for changelog min.in.sync.replicas

2016-05-20 Thread Roger Hoover (JIRA)
Roger Hoover created KAFKA-3741:
---

 Summary: KStream config for changelog min.in.sync.replicas
 Key: KAFKA-3741
 URL: https://issues.apache.org/jira/browse/KAFKA-3741
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Affects Versions: 0.10.0.0
Reporter: Roger Hoover
Assignee: Guozhang Wang


Kafka Streams currently allows you to specify a replication factor for 
changelog and repartition topics that it creates.  It should also allow you to 
specify min.in.sync.replicas.
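Until Streams exposes this, one stop-gap is to alter the auto-created changelog topic directly. The topic name below is an assumed example following the `<application.id>-<store name>-changelog` naming convention; the command is only echoed here, not executed:

```shell
# Assumed example: Streams changelog topics are typically named
# <application.id>-<store name>-changelog; adjust for your app.
topic="my-app-mystore-changelog"
cmd="kafka-configs.sh --zookeeper localhost:2181 --alter --entity-type topics --entity-name $topic --add-config min.insync.replicas=2"
echo "$cmd"
```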



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3741) KStream config for changelog min.in.sync.replicas

2016-05-20 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3741:
-
Labels: api  (was: )

> KStream config for changelog min.in.sync.replicas
> -
>
> Key: KAFKA-3741
> URL: https://issues.apache.org/jira/browse/KAFKA-3741
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.0.0
>Reporter: Roger Hoover
>Assignee: Guozhang Wang
>  Labels: api
>
> Kafka Streams currently allows you to specify a replication factor for 
> changelog and repartition topics that it creates.  It should also allow you 
> to specify min.in.sync.replicas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3740) Add configs for RocksDBStores

2016-05-20 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15294302#comment-15294302
 ] 

Guozhang Wang commented on KAFKA-3740:
--

That is because you can only assign to yourself, not to others.

On the "Assignee" field, when you hover over it, do you see an edit button, 
or an "Assign to me" link below it?

> Add configs for RocksDBStores
> -
>
> Key: KAFKA-3740
> URL: https://issues.apache.org/jira/browse/KAFKA-3740
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>  Labels: api, newbie
>
> Today most of the rocksDB configs are hard written inside {{RocksDBStore}}, 
> or the default values are directly used. We need to make them configurable 
> for advanced users. For example, some default values may not work perfectly 
> for some scenarios: 
> https://github.com/HenryCaiHaiying/kafka/commit/ccc4e25b110cd33eea47b40a2f6bf17ba0924576
>  
> One way of doing that is to introduce a "RocksDBStoreConfigs" objects similar 
> to "StreamsConfig", which defines all related rocksDB options configs, that 
> can be passed as key-value pairs to "StreamsConfig".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3740) Add configs for RocksDBStores

2016-05-20 Thread Henry Cai (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15294310#comment-15294310
 ] 

Henry Cai commented on KAFKA-3740:
--

Worked now.

> Add configs for RocksDBStores
> -
>
> Key: KAFKA-3740
> URL: https://issues.apache.org/jira/browse/KAFKA-3740
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Henry Cai
>  Labels: api, newbie
>
> Today most of the rocksDB configs are hard written inside {{RocksDBStore}}, 
> or the default values are directly used. We need to make them configurable 
> for advanced users. For example, some default values may not work perfectly 
> for some scenarios: 
> https://github.com/HenryCaiHaiying/kafka/commit/ccc4e25b110cd33eea47b40a2f6bf17ba0924576
>  
> One way of doing that is to introduce a "RocksDBStoreConfigs" objects similar 
> to "StreamsConfig", which defines all related rocksDB options configs, that 
> can be passed as key-value pairs to "StreamsConfig".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-3740) Add configs for RocksDBStores

2016-05-20 Thread Henry Cai (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Cai reassigned KAFKA-3740:


Assignee: Henry Cai

> Add configs for RocksDBStores
> -
>
> Key: KAFKA-3740
> URL: https://issues.apache.org/jira/browse/KAFKA-3740
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Henry Cai
>  Labels: api, newbie
>
> Today most of the rocksDB configs are hard written inside {{RocksDBStore}}, 
> or the default values are directly used. We need to make them configurable 
> for advanced users. For example, some default values may not work perfectly 
> for some scenarios: 
> https://github.com/HenryCaiHaiying/kafka/commit/ccc4e25b110cd33eea47b40a2f6bf17ba0924576
>  
> One way of doing that is to introduce a "RocksDBStoreConfigs" objects similar 
> to "StreamsConfig", which defines all related rocksDB options configs, that 
> can be passed as key-value pairs to "StreamsConfig".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3742) Can't run connect-distributed with -daemon flag

2016-05-20 Thread Geoff Anderson (JIRA)
Geoff Anderson created KAFKA-3742:
-

 Summary: Can't run connect-distributed with -daemon flag
 Key: KAFKA-3742
 URL: https://issues.apache.org/jira/browse/KAFKA-3742
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.10.0.0
Reporter: Geoff Anderson
Priority: Minor


Running from a deb package install on Ubuntu 14.04. Discovered while 
experimenting with various Kafka components. 

This error probably applies to other scripts as well.

Running connect-distributed thusly
{code}connect-distributed -daemon /tmp/connect-distributed.properties{code}

gives errors like this 
{code}
root@worker1:/home/vagrant# connect-distributed -daemon 
/tmp/connect-distributed.properties
Exception in thread "main" java.io.FileNotFoundException: -daemon (No such file 
or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:146)
at java.io.FileInputStream.<init>(FileInputStream.java:101)
at org.apache.kafka.common.utils.Utils.loadProps(Utils.java:446)
at 
org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:61)
{code}

Note that this runs:
connect-distributed /tmp/connect-distributed.properties -daemon
However, the daemon flag is not activated in this case

Underlying cause:
kafka-run-class assumes -daemon comes before the classpath

The scripts for which -daemon works use something like
{code}
EXTRA_ARGS="-name kafkaServer -loggc"

COMMAND=$1
case $COMMAND in
  -daemon)
EXTRA_ARGS="-daemon "$EXTRA_ARGS
shift
;;
  *)
;;
esac

exec $base_dir/kafka-run-class $EXTRA_ARGS 
io.confluent.support.metrics.SupportedKafka "$@"
{code}

but connect-distributed does this:
{code}
exec $(dirname $0)/kafka-run-class 
org.apache.kafka.connect.cli.ConnectDistributed "$@"
{code}
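A minimal fix along the lines of the working scripts quoted above could look like the following. This is a sketch, not the actual patch: `build_cmd` is a hypothetical helper used here so the argument handling can be shown (and tested) without exec'ing kafka-run-class.

```shell
# Sketch: peel off a leading -daemon (as kafka-server-start does) before
# passing the remaining args through. build_cmd is hypothetical; a real
# script would exec the printed command instead of echoing it.
build_cmd() {
  EXTRA_ARGS=""
  if [ "$1" = "-daemon" ]; then
    EXTRA_ARGS="-daemon "
    shift
  fi
  echo "kafka-run-class.sh ${EXTRA_ARGS}org.apache.kafka.connect.cli.ConnectDistributed $*"
}

build_cmd -daemon /tmp/connect-distributed.properties
# → kafka-run-class.sh -daemon org.apache.kafka.connect.cli.ConnectDistributed /tmp/connect-distributed.properties
```

With this shape, the properties file is the first argument the main class sees whether or not `-daemon` is given.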




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-3742) Can't run connect-distributed with -daemon flag

2016-05-20 Thread Liquan Pei (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liquan Pei reassigned KAFKA-3742:
-

Assignee: Liquan Pei

> Can't run connect-distributed with -daemon flag
> ---
>
> Key: KAFKA-3742
> URL: https://issues.apache.org/jira/browse/KAFKA-3742
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Geoff Anderson
>Assignee: Liquan Pei
>Priority: Minor
>
> Running on deb package install on ubuntu 14.04. Discovered while 
> experimenting various different kafka components. 
> This error probably applies to other scripts as well.
> Running connect-distributed thusly
> {code}connect-distributed -daemon /tmp/connect-distributed.properties{code}
> gives errors like this 
> {code}
> root@worker1:/home/vagrant# connect-distributed -daemon 
> /tmp/connect-distributed.properties
> Exception in thread "main" java.io.FileNotFoundException: -daemon (No such 
> file or directory)
>   at java.io.FileInputStream.open(Native Method)
>   at java.io.FileInputStream.<init>(FileInputStream.java:146)
>   at java.io.FileInputStream.<init>(FileInputStream.java:101)
>   at org.apache.kafka.common.utils.Utils.loadProps(Utils.java:446)
>   at 
> org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:61)
> {code}
> Note that this runs:
> connect-distributed /tmp/connect-distributed.properties -daemon
> However, the daemon flag is not activated in this case
> Underlying cause:
> kafka-run-class assumes -daemon comes before the classpath
> The scripts for which -daemon works use something like
> {code}
> EXTRA_ARGS="-name kafkaServer -loggc"
> COMMAND=$1
> case $COMMAND in
>   -daemon)
> EXTRA_ARGS="-daemon "$EXTRA_ARGS
> shift
> ;;
>   *)
> ;;
> esac
> exec $base_dir/kafka-run-class $EXTRA_ARGS 
> io.confluent.support.metrics.SupportedKafka "$@"
> {code}
> but connect-distributed does this:
> {code}
> exec $(dirname $0)/kafka-run-class 
> org.apache.kafka.connect.cli.ConnectDistributed "$@"
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3742) Can't run connect-distributed with -daemon flag

2016-05-20 Thread Geoff Anderson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoff Anderson updated KAFKA-3742:
--
Description: 
Running on ubuntu 14.04. Discovered while experimenting various different kafka 
components. 

This error probably applies to other scripts as well.

Running connect-distributed thusly
{code}connect-distributed -daemon /tmp/connect-distributed.properties{code}

gives errors like this 
{code}
root@worker1:/home/vagrant# connect-distributed -daemon 
/tmp/connect-distributed.properties
Exception in thread "main" java.io.FileNotFoundException: -daemon (No such file 
or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:146)
at java.io.FileInputStream.<init>(FileInputStream.java:101)
at org.apache.kafka.common.utils.Utils.loadProps(Utils.java:446)
at 
org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:61)
{code}

Note that this runs:
connect-distributed /tmp/connect-distributed.properties -daemon
However, the daemon flag is not activated in this case

Underlying cause:
kafka-run-class assumes -daemon comes before the classpath

The scripts for which -daemon works use something like
{code}
EXTRA_ARGS="-name kafkaServer -loggc"

COMMAND=$1
case $COMMAND in
  -daemon)
EXTRA_ARGS="-daemon "$EXTRA_ARGS
shift
;;
  *)
;;
esac

exec $base_dir/kafka-run-class $EXTRA_ARGS 
io.confluent.support.metrics.SupportedKafka "$@"
{code}

but connect-distributed does this:
{code}
exec $(dirname $0)/kafka-run-class 
org.apache.kafka.connect.cli.ConnectDistributed "$@"
{code}


  was:
Running on deb package install on ubuntu 14.04. Discovered while experimenting 
various different kafka components. 

This error probably applies to other scripts as well.

Running connect-distributed thusly
{code}connect-distributed -daemon /tmp/connect-distributed.properties{code}

gives errors like this 
{code}
root@worker1:/home/vagrant# connect-distributed -daemon 
/tmp/connect-distributed.properties
Exception in thread "main" java.io.FileNotFoundException: -daemon (No such file 
or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:146)
at java.io.FileInputStream.<init>(FileInputStream.java:101)
at org.apache.kafka.common.utils.Utils.loadProps(Utils.java:446)
at 
org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:61)
{code}

Note that this runs:
connect-distributed /tmp/connect-distributed.properties -daemon
However, the daemon flag is not activated in this case

Underlying cause:
kafka-run-class assumes -daemon comes before the classpath

The scripts for which -daemon works use something like
{code}
EXTRA_ARGS="-name kafkaServer -loggc"

COMMAND=$1
case $COMMAND in
  -daemon)
EXTRA_ARGS="-daemon "$EXTRA_ARGS
shift
;;
  *)
;;
esac

exec $base_dir/kafka-run-class $EXTRA_ARGS 
io.confluent.support.metrics.SupportedKafka "$@"
{code}

but connect-distributed does this:
{code}
exec $(dirname $0)/kafka-run-class 
org.apache.kafka.connect.cli.ConnectDistributed "$@"
{code}



> Can't run connect-distributed with -daemon flag
> ---
>
> Key: KAFKA-3742
> URL: https://issues.apache.org/jira/browse/KAFKA-3742
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Geoff Anderson
>Assignee: Liquan Pei
>Priority: Minor
>
> Running on ubuntu 14.04. Discovered while experimenting various different 
> kafka components. 
> This error probably applies to other scripts as well.
> Running connect-distributed thusly
> {code}connect-distributed -daemon /tmp/connect-distributed.properties{code}
> gives errors like this 
> {code}
> root@worker1:/home/vagrant# connect-distributed -daemon 
> /tmp/connect-distributed.properties
> Exception in thread "main" java.io.FileNotFoundException: -daemon (No such 
> file or directory)
>   at java.io.FileInputStream.open(Native Method)
>   at java.io.FileInputStream.<init>(FileInputStream.java:146)
>   at java.io.FileInputStream.<init>(FileInputStream.java:101)
>   at org.apache.kafka.common.utils.Utils.loadProps(Utils.java:446)
>   at 
> org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:61)
> {code}
> Note that this runs:
> connect-distributed /tmp/connect-distributed.properties -daemon
> However, the daemon flag is not activated in this case
> Underlying cause:
> kafka-run-class assumes -daemon comes before the classpath
> The scripts for which -daemon works use something like
> {code}
> EXTRA_ARGS="-name kafkaServer -loggc"
> COMMAND=$1
> case $COMMAND in
>   -daemon)
> EXTRA_ARGS="-daemon "$EXTRA_ARGS
> shift
> ;;
>   *)
> ;;
> esac
> exec $base_dir/kafka-r

[jira] [Updated] (KAFKA-3742) Can't run connect-distributed.sh with -daemon flag

2016-05-20 Thread Liquan Pei (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liquan Pei updated KAFKA-3742:
--
Summary: Can't run connect-distributed.sh with -daemon flag  (was: Can't 
run connect-distributed with -daemon flag)

> Can't run connect-distributed.sh with -daemon flag
> --
>
> Key: KAFKA-3742
> URL: https://issues.apache.org/jira/browse/KAFKA-3742
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Geoff Anderson
>Assignee: Liquan Pei
>Priority: Minor
>
> Running on ubuntu 14.04. Discovered while experimenting various different 
> kafka components. 
> This error probably applies to other scripts as well.
> Running connect-distributed thusly
> {code}connect-distributed -daemon /tmp/connect-distributed.properties{code}
> gives errors like this 
> {code}
> root@worker1:/home/vagrant# connect-distributed -daemon 
> /tmp/connect-distributed.properties
> Exception in thread "main" java.io.FileNotFoundException: -daemon (No such 
> file or directory)
>   at java.io.FileInputStream.open(Native Method)
>   at java.io.FileInputStream.<init>(FileInputStream.java:146)
>   at java.io.FileInputStream.<init>(FileInputStream.java:101)
>   at org.apache.kafka.common.utils.Utils.loadProps(Utils.java:446)
>   at 
> org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:61)
> {code}
> Note that this runs:
> connect-distributed /tmp/connect-distributed.properties -daemon
> However, the daemon flag is not activated in this case
> Underlying cause:
> kafka-run-class assumes -daemon comes before the classpath
> The scripts for which -daemon works use something like
> {code}
> EXTRA_ARGS="-name kafkaServer -loggc"
> COMMAND=$1
> case $COMMAND in
>   -daemon)
> EXTRA_ARGS="-daemon "$EXTRA_ARGS
> shift
> ;;
>   *)
> ;;
> esac
> exec $base_dir/kafka-run-class $EXTRA_ARGS 
> io.confluent.support.metrics.SupportedKafka "$@"
> {code}
> but connect-distributed does this:
> {code}
> exec $(dirname $0)/kafka-run-class 
> org.apache.kafka.connect.cli.ConnectDistributed "$@"
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] 0.10.0.0 RC6

2016-05-20 Thread Vahid S Hashemian
+1. I was able to successfully create a topic and run a producer and 
consumer against it from the source on Ubuntu 15.04, Mac OS X Yosemite, 
and Windows 7.
--Vahid 



From:   Dana Powers 
To: "dev@kafka.apache.org" 
Date:   05/20/2016 12:13 PM
Subject:Re: [VOTE] 0.10.0.0 RC6



+1 -- passed kafka-python integration tests

-Dana

On Fri, May 20, 2016 at 11:16 AM, Joe Stein  wrote:
> +1 ran quick start from source and binary release
>
> On Fri, May 20, 2016 at 1:07 PM, Ewen Cheslack-Postava 

> wrote:
>
>> +1 validated connect with a couple of simple connectors and console
>> producer/consumer.
>>
>> -Ewen
>>
>> On Fri, May 20, 2016 at 9:53 AM, Guozhang Wang  
wrote:
>>
>> > +1. Validated maven (should be
>> > 
https://repository.apache.org/content/groups/staging/org/apache/kafka/
>> > btw)
>> > and binary libraries, quick start.
>> >
>> > On Fri, May 20, 2016 at 9:36 AM, Harsha  wrote:
>> >
>> > > +1 . Ran a 3-node cluster with few system tests on our side. Looks
>> good.
>> > >
>> > > -Harsha
>> > >
>> > > On Thu, May 19, 2016, at 07:47 PM, Jun Rao wrote:
>> > > > Thanks for running the release. +1 from me. Verified the 
quickstart.
>> > > >
>> > > > Jun
>> > > >
>> > > > On Tue, May 17, 2016 at 10:00 PM, Gwen Shapira 

>> > > wrote:
>> > > >
>> > > > > Hello Kafka users, developers and client-developers,
>> > > > >
>> > > > > This is the seventh (!) candidate for release of Apache Kafka
>> > > > > 0.10.0.0. This is a major release that includes: (1) New 
message
>> > > > > format including timestamps (2) client interceptor API (3) 
Kafka
>> > > > > Streams.
>> > > > >
>> > > > > This RC was rolled out to fix an issue with our packaging that
>> caused
>> > > > > dependencies to leak in ways that broke our licensing, and an 
issue
>> > > > > with protocol versions that broke upgrade for LinkedIn and 
others
>> who
>> > > > > may run from trunk. Thanks to Ewen, Ismael, Becket and Jun for 
the
>> > > > > finding and fixing of issues.
>> > > > >
>> > > > > Release notes for the 0.10.0.0 release:
>> > > > > 
http://home.apache.org/~gwenshap/0.10.0.0-rc6/RELEASE_NOTES.html
>> > > > >
>> > > > > Lets try to vote within the 72h release vote window and get 
this
>> baby
>> > > > > out already!
>> > > > >
>> > > > > *** Please download, test and vote by Friday, May 20, 23:59 PT
>> > > > >
>> > > > > Kafka's KEYS file containing PGP keys we use to sign the 
release:
>> > > > > http://kafka.apache.org/KEYS
>> > > > >
>> > > > > * Release artifacts to be voted upon (source and binary):
>> > > > > http://home.apache.org/~gwenshap/0.10.0.0-rc6/
>> > > > >
>> > > > > * Maven artifacts to be voted upon:
>> > > > > https://repository.apache.org/content/groups/staging/
>> > > > >
>> > > > > * java-doc
>> > > > > http://home.apache.org/~gwenshap/0.10.0.0-rc6/javadoc/
>> > > > >
>> > > > > * tag to be voted upon (off 0.10.0 branch) is the 0.10.0.0 tag:
>> > > > >
>> > > > >
>> > >
>> >
>> 
https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=065899a3bc330618e420673acf9504d123b800f3

>> > > > >
>> > > > > * Documentation:
>> > > > > http://kafka.apache.org/0100/documentation.html
>> > > > >
>> > > > > * Protocol:
>> > > > > http://kafka.apache.org/0100/protocol.html
>> > > > >
>> > > > > /**
>> > > > >
>> > > > > Thanks,
>> > > > >
>> > > > > Gwen
>> > > > >
>> > >
>> >
>> >
>> >
>> > --
>> > -- Guozhang
>> >
>>
>>
>>
>> --
>> Thanks,
>> Ewen
>>







[jira] [Updated] (KAFKA-3742) Can't run connect-distributed.sh with -daemon flag

2016-05-20 Thread Liquan Pei (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liquan Pei updated KAFKA-3742:
--
Description: 
Running on ubuntu 14.04. Discovered while experimenting various different kafka 
components. 

This error probably applies to other scripts as well.

Running connect-distributed.sh thusly
{code}connect-distributed.sh -daemon /tmp/connect-distributed.properties{code}

gives errors like this 
{code}
root@worker1:/home/vagrant# connect-distributed.sh -daemon 
/tmp/connect-distributed.properties
Exception in thread "main" java.io.FileNotFoundException: -daemon (No such file 
or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:146)
at java.io.FileInputStream.<init>(FileInputStream.java:101)
at org.apache.kafka.common.utils.Utils.loadProps(Utils.java:446)
at 
org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:61)
{code}

Note that this runs:
connect-distributed.sh /tmp/connect-distributed.properties -daemon
However, the daemon flag is not activated in this case

Underlying cause:
kafka-run-class.sh assumes -daemon comes before the classpath

The scripts for which -daemon works use something like
{code}
EXTRA_ARGS="-name kafkaServer -loggc"

COMMAND=$1
case $COMMAND in
  -daemon)
EXTRA_ARGS="-daemon "$EXTRA_ARGS
shift
;;
  *)
;;
esac

exec $base_dir/kafka-run-class.sh $EXTRA_ARGS 
io.confluent.support.metrics.SupportedKafka "$@"
{code}

but connect-distributed does this:
{code}
exec $(dirname $0)/kafka-run-class.sh 
org.apache.kafka.connect.cli.ConnectDistributed "$@"
{code}


  was:
Running on ubuntu 14.04. Discovered while experimenting various different kafka 
components. 

This error probably applies to other scripts as well.

Running connect-distributed thusly
{code}connect-distributed -daemon /tmp/connect-distributed.properties{code}

gives errors like this 
{code}
root@worker1:/home/vagrant# connect-distributed -daemon 
/tmp/connect-distributed.properties
Exception in thread "main" java.io.FileNotFoundException: -daemon (No such file 
or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:146)
at java.io.FileInputStream.<init>(FileInputStream.java:101)
at org.apache.kafka.common.utils.Utils.loadProps(Utils.java:446)
at 
org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:61)
{code}

Note that this runs:
connect-distributed /tmp/connect-distributed.properties -daemon
However, the daemon flag is not activated in this case

Underlying cause:
kafka-run-class assumes -daemon comes before the classpath

The scripts for which -daemon works use something like
{code}
EXTRA_ARGS="-name kafkaServer -loggc"

COMMAND=$1
case $COMMAND in
  -daemon)
EXTRA_ARGS="-daemon "$EXTRA_ARGS
shift
;;
  *)
;;
esac

exec $base_dir/kafka-run-class $EXTRA_ARGS 
io.confluent.support.metrics.SupportedKafka "$@"
{code}

but connect-distributed does this:
{code}
exec $(dirname $0)/kafka-run-class 
org.apache.kafka.connect.cli.ConnectDistributed "$@"
{code}



> Can't run connect-distributed.sh with -daemon flag
> --
>
> Key: KAFKA-3742
> URL: https://issues.apache.org/jira/browse/KAFKA-3742
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Geoff Anderson
>Assignee: Liquan Pei
>Priority: Minor
>
> Running on ubuntu 14.04. Discovered while experimenting various different 
> kafka components. 
> This error probably applies to other scripts as well.
> Running connect-distributed.sh thusly
> {code}connect-distributed.sh -daemon /tmp/connect-distributed.properties{code}
> gives errors like this 
> {code}
> root@worker1:/home/vagrant# connect-distributed.sh -daemon 
> /tmp/connect-distributed.properties
> Exception in thread "main" java.io.FileNotFoundException: -daemon (No such 
> file or directory)
>   at java.io.FileInputStream.open(Native Method)
>   at java.io.FileInputStream.<init>(FileInputStream.java:146)
>   at java.io.FileInputStream.<init>(FileInputStream.java:101)
>   at org.apache.kafka.common.utils.Utils.loadProps(Utils.java:446)
>   at 
> org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:61)
> {code}
> Note that this runs:
> connect-distributed.sh /tmp/connect-distributed.properties -daemon
> However, the daemon flag is not activated in this case
> Underlying cause:
> kafka-run-class.sh assumes -daemon comes before the classpath
> The scripts for which -daemon works use something like
> {code}
> EXTRA_ARGS="-name kafkaServer -loggc"
> COMMAND=$1
> case $COMMAND in
>   -daemon)
> EXTRA_ARGS="-daemon "$EXTRA_ARGS
> shift
> ;;
>   *)
> ;;
> esac
> exec $base_

KAFKA-3722 : Discussion about custom PrincipalBuilder and Authorizer configs

2016-05-20 Thread Mayuresh Gharat
Hi All,

I came across an issue with plugging in a custom PrincipalBuilder class
using the config "principal.builder.class" along with a custom Authorizer
class using the config "authorizer.class.name".

Consider the following scenario :

For PlainText we don't supply any PrincipalBuilder. For SSL we want to
supply a PrincipalBuilder using the property "principal.builder.class".

a) Now consider we have a broker running on these 2 ports and supply that
custom principalBuilder class using that config.

b) The interbroker communication is using PlainText. I am using a single
broker cluster for testing.

c) Now we issue a produce request on the SSL port of the broker.

d) The controller tries to build a channel for plaintext with this broker
for the new topic instructions.

e) PlainText tries to use the principal builder specified in the
"principal.builder.class" config which was meant only for SSL port since
the code path is same "ChannelBuilders.createPrincipalBuilder(configs)".

f) If the custom PrincipalBuilder tries to do certificate checks or to
downcast the transportLayer to SSLTransportLayer so that we can use its
functionality, we get an error/exception at runtime.

The basic idea is the PlainText channel should not be using the
PrincipalBuilder meant for other types of channels.
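The scenario in steps a) through f) can be made concrete with a broker config fragment. The listener ports and class names below are hypothetical placeholders, chosen only to show that a single `principal.builder.class` entry applies to every listener, PLAINTEXT included:

```shell
# Hypothetical server.properties fragment (class names are placeholders):
# the single principal.builder.class line covers both listeners, so the
# PLAINTEXT controller channel ends up using a builder written for SSL.
cat > /tmp/server-snippet.properties <<'EOF'
listeners=PLAINTEXT://:9092,SSL://:9093
principal.builder.class=com.example.SslCertPrincipalBuilder
authorizer.class.name=com.example.CustomAuthorizer
EOF
grep 'principal.builder.class' /tmp/server-snippet.properties
# → principal.builder.class=com.example.SslCertPrincipalBuilder
```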

Now there are a few options/workarounds to avoid this:

1) Do instanceOf check in Authorizer.authorize() on TransportLayer instance
passed in and do the correct handling. This is not intuitive and imposes a
strict coding rule on the programmer.

2) TransportLayer should expose an API for telling the security protocol
type. This is not too intuitive either.

3) Add extra configs for Authorizer and PrincipalBuilder for each channel
type. This gives us a flexibility for the PrincipalBuilder and Authorizer
handle requests on different types of ports in a different way.

4) PrincipalBuilder.buildPrincipal() should take an extra parameter for the
type of protocol, and we should document in the javadoc how to use it to
handle the type of request. This is a little better than 1) and 2) but again
imposes a strict coding rule on the programmer.

Just wanted to know what the community thinks about this and get any
suggestions/feedback . There's some discussion about this here :
https://github.com/apache/kafka/pull/1403

Thanks,

Mayuresh


Re: [VOTE] 0.10.0.0 RC6

2016-05-20 Thread Ashish Singh
+1, verified quickstart with source and binary release.

On Saturday, May 21, 2016, Vahid S Hashemian 
wrote:

> +1. I was able to successfully create a topic and run a producer and
> consumer against it from the source on Ubuntu 15.04, Mac OS X Yosemite,
> and Windows 7.
> --Vahid
>
>
>
> From:   Dana Powers
> To: "dev@kafka.apache.org"
> Date:   05/20/2016 12:13 PM
> Subject:Re: [VOTE] 0.10.0.0 RC6
>
>
>
> +1 -- passed kafka-python integration tests
>
> -Dana
>
> On Fri, May 20, 2016 at 11:16 AM, Joe Stein  > wrote:
> > +1 ran quick start from source and binary release
> >
> > On Fri, May 20, 2016 at 1:07 PM, Ewen Cheslack-Postava
> >
> > wrote:
> >
> >> +1 validated connect with a couple of simple connectors and console
> >> producer/consumer.
> >>
> >> -Ewen
> >>
> >> On Fri, May 20, 2016 at 9:53 AM, Guozhang Wang  >
> wrote:
> >>
> >> > +1. Validated maven (should be
> >> >
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >> > btw)
> >> > and binary libraries, quick start.
> >> >
> >> > On Fri, May 20, 2016 at 9:36 AM, Harsha  > wrote:
> >> >
> >> > > +1 . Ran a 3-node cluster with few system tests on our side. Looks
> >> good.
> >> > >
> >> > > -Harsha
> >> > >
> >> > > On Thu, May 19, 2016, at 07:47 PM, Jun Rao wrote:
> >> > > > Thanks for running the release. +1 from me. Verified the quickstart.
> >> > > >
> >> > > > Jun
> >> > > >
> >> > > > On Tue, May 17, 2016 at 10:00 PM, Gwen Shapira wrote:
> >> > > >
> >> > > > > Hello Kafka users, developers and client-developers,
> >> > > > >
> >> > > > > This is the seventh (!) candidate for release of Apache Kafka
> >> > > > > 0.10.0.0. This is a major release that includes: (1) New message
> >> > > > > format including timestamps (2) client interceptor API (3) Kafka
> >> > > > > Streams.
> >> > > > >
> >> > > > > This RC was rolled out to fix an issue with our packaging that caused
> >> > > > > dependencies to leak in ways that broke our licensing, and an issue
> >> > > > > with protocol versions that broke upgrade for LinkedIn and others who
> >> > > > > may run from trunk. Thanks to Ewen, Ismael, Becket and Jun for the
> >> > > > > finding and fixing of issues.
> >> > > > >
> >> > > > > Release notes for the 0.10.0.0 release:
> >> > > > > http://home.apache.org/~gwenshap/0.10.0.0-rc6/RELEASE_NOTES.html
> >> > > > >
> >> > > > > Let's try to vote within the 72h release vote window and get this
> >> > > > > baby out already!
> >> > > > >
> >> > > > > *** Please download, test and vote by Friday, May 20, 23:59 PT
> >> > > > >
> >> > > > > Kafka's KEYS file containing PGP keys we use to sign the release:
> >> > > > > http://kafka.apache.org/KEYS
> >> > > > >
> >> > > > > * Release artifacts to be voted upon (source and binary):
> >> > > > > http://home.apache.org/~gwenshap/0.10.0.0-rc6/
> >> > > > >
> >> > > > > * Maven artifacts to be voted upon:
> >> > > > > https://repository.apache.org/content/groups/staging/
> >> > > > >
> >> > > > > * java-doc
> >> > > > > http://home.apache.org/~gwenshap/0.10.0.0-rc6/javadoc/
> >> > > > >
> >> > > > > * tag to be voted upon (off 0.10.0 branch) is the 0.10.0.0 tag:
> >> > > > > https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=065899a3bc330618e420673acf9504d123b800f3
> >> > > > >
> >> > > > > * Documentation:
> >> > > > > http://kafka.apache.org/0100/documentation.html
> >> > > > >
> >> > > > > * Protocol:
> >> > > > > http://kafka.apache.org/0100/protocol.html
> >> > > > >
> >> > > > > /**
> >> > > > >
> >> > > > > Thanks,
> >> > > > >
> >> > > > > Gwen
> >> > > > >
> >> > >
> >> >
> >> > --
> >> > -- Guozhang
> >>
> >> --
> >> Thanks,
> >> Ewen

-- 
Ashish Singh


Re: KAFKA-3722 : Discussion about custom PrincipalBuilder and Authorizer configs

2016-05-20 Thread Harsha
Mayuresh,
 Thanks for the write-up. With the principal builder, the idea is to
 reuse a single principal builder across all the security protocols
 where it's applicable. Given that the principal builder has access to
 the transportLayer and authenticator, it should be able to figure out
 what type of transportLayer it is, construct the principal based on
 that, and handle all the security protocols that we support.
Your options 1, 2 & 4 seem to be doing the same thing, i.e. checking
what security protocol a given transportLayer is using and building a
principal; correct me if I am wrong here. I like going with 4, as
others stated on the PR, since passing security_protocol makes it more
specific to the method where it needs to be handled. In the interest
of having less config, I think option 4 seems better even though it
breaks the interface.

Thanks,
Harsha
On Fri, May 20, 2016, at 05:00 PM, Mayuresh Gharat wrote:
> Hi All,
> 
> I came across an issue with plugging in a custom PrincipalBuilder class
> using the config "principal.builder.class" along with a custom Authorizer
> class using the config "authorizer.class.name".
> 
> Consider the following scenario :
> 
> For PlainText we don't supply any PrincipalBuilder. For SSL we want to
> supply a PrincipalBuilder using the property "principal.builder.class".
> 
> a) Now consider we have a broker running on these 2 ports and supply that
> custom principalBuilder class using that config.
> 
> b) The interbroker communication is using PlainText. I am using a single
> broker cluster for testing.
> 
> c) Now we issue a produce request on the SSL port of the broker.
> 
> d) The controller tries to build a channel for plaintext with this broker
> for the new topic instructions.
> 
> e) PlainText tries to use the principal builder specified in the
> "principal.builder.class" config, which was meant only for the SSL port,
> since the code path is the same ("ChannelBuilders.createPrincipalBuilder(configs)").
> 
> f) In the custom PrincipalBuilder, if we try to do some cert checks
> or downcast the transportLayer to SSLTransportLayer so that we can use
> its functionality, we get an error/exception at runtime.
> 
> The basic idea is the PlainText channel should not be using the
> PrincipalBuilder meant for other types of channels.
> 
> Now there are few options/workarounds to avoid this :
> 
> 1) Do an instanceof check in Authorizer.authorize() on the TransportLayer
> instance passed in and do the correct handling. This is not intuitive and
> imposes a strict coding rule on the programmer.
> 
> 2) TransportLayer should expose an API for telling the security protocol
> type. This is not too intuitive either.
> 
> 3) Add extra configs for Authorizer and PrincipalBuilder for each channel
> type. This gives us a flexibility for the PrincipalBuilder and Authorizer
> handle requests on different types of ports in a different way.
> 
> 4) PrincipalBuilder.buildPrincipal() should take an extra parameter for
> the type of protocol, and we should document in the javadoc that it is to
> be used to handle the type of request. This is a little better than 1)
> and 2), but again imposes a strict coding rule on the programmer.
> 
> Just wanted to know what the community thinks about this and get any
> suggestions/feedback. There's some discussion about this here:
> https://github.com/apache/kafka/pull/1403
> 
> Thanks,
> 
> Mayuresh
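To make option 4 concrete, here is a minimal, hypothetical sketch of what a protocol-aware builder could look like. This is not Kafka's actual PrincipalBuilder interface: the `SecurityProtocol` enum values mirror Kafka's listener types, but the method signature and the `NamedPrincipal` and `DemoBuilder` classes are illustrative stand-ins for the API change under discussion.

```java
import java.security.Principal;

public class PrincipalBuilderSketch {

    // Stand-in for Kafka's SecurityProtocol enum.
    enum SecurityProtocol { PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL }

    // Minimal Principal implementation used by the demo builder.
    static class NamedPrincipal implements Principal {
        private final String name;
        NamedPrincipal(String name) { this.name = name; }
        @Override public String getName() { return name; }
    }

    // Option 4: buildPrincipal() receives the security protocol explicitly,
    // so a single builder instance can serve every listener without having
    // to downcast the transport layer to discover how the client connected.
    interface ProtocolAwarePrincipalBuilder {
        Principal buildPrincipal(SecurityProtocol protocol, String authenticationId);
    }

    static class DemoBuilder implements ProtocolAwarePrincipalBuilder {
        @Override
        public Principal buildPrincipal(SecurityProtocol protocol, String authenticationId) {
            switch (protocol) {
                case PLAINTEXT:
                    // Plaintext carries no authentication information.
                    return new NamedPrincipal("ANONYMOUS");
                case SSL:
                    // For SSL, derive the principal from the peer's certificate identity.
                    return new NamedPrincipal("CN=" + authenticationId);
                default:
                    return new NamedPrincipal(authenticationId);
            }
        }
    }

    public static void main(String[] args) {
        ProtocolAwarePrincipalBuilder builder = new DemoBuilder();
        System.out.println(builder.buildPrincipal(SecurityProtocol.PLAINTEXT, "ignored").getName());
        System.out.println(builder.buildPrincipal(SecurityProtocol.SSL, "broker.example.com").getName());
    }
}
```

Branching on the explicit protocol parameter is what removes the need for the instanceof check in scenario f) above: the plaintext path never reaches the SSL-specific logic.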


[jira] [Updated] (KAFKA-2800) Update outdated dependencies

2016-05-20 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-2800:
---
Status: Reopened  (was: Closed)

> Update outdated dependencies
> 
>
> Key: KAFKA-2800
> URL: https://issues.apache.org/jira/browse/KAFKA-2800
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.2.2
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.10.1.0, 0.10.0.0
>
>
> See the relevant discussion here: 
> http://search-hadoop.com/m/uyzND1LAyyi2IB1wW1/Dependency+Updates&subj=Dependency+Updates



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2800) Update outdated dependencies

2016-05-20 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy resolved KAFKA-2800.

Resolution: Fixed




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Apache Kafka JIRA Workflow: Add Closed -> Reopen transition

2016-05-20 Thread Manikumar Reddy
Jun/Ismael,

I requested Apache Infra to change the JIRA workflow to add a Closed -> Reopen
transition.
https://issues.apache.org/jira/browse/INFRA-11857

Let me know if there are any concerns.

Manikumar


[jira] [Updated] (KAFKA-3219) Long topic names mess up broker topic state

2016-05-20 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-3219:
---
Status: Reopened  (was: Closed)

> Long topic names mess up broker topic state
> ---
>
> Key: KAFKA-3219
> URL: https://issues.apache.org/jira/browse/KAFKA-3219
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Magnus Edenhill
>Assignee: Vahid Hashemian
> Fix For: 0.10.0.0
>
>
> Seems like the broker doesn't like topic names of 254 chars or more when 
> created using kafka-topics.sh --create.
> The problem does not seem to arise when the topic is created through 
> automatic topic creation.
> How to reproduce:
> {code}
> TOPIC=$(printf 'd%.0s' {1..254} ) ; bin/kafka-topics.sh --zookeeper 0 
> --create --topic $TOPIC --partitions 1 --replication-factor 1
> {code}
> {code}
> [2016-02-06 22:00:01,943] INFO [ReplicaFetcherManager on broker 3] Removed 
> fetcher for partitions 
> [dd,0]
>  (kafka.server.ReplicaFetcherManager)
> [2016-02-06 22:00:01,944] ERROR [KafkaApi-3] Error when handling request 
> {controller_id=3,controller_epoch=12,partition_states=[{topic=dd,partition=0,controller_epoch=12,leader=3,leader_epoch=0,isr=[3],zk_version=0,replicas=[3]}],live_leaders=[{id=3,host=eden,port=9093}]}
>  (kafka.server.KafkaApis)
> java.lang.NullPointerException
> at 
> scala.collection.mutable.ArrayOps$ofRef$.length$extension(ArrayOps.scala:114)
> at scala.collection.mutable.ArrayOps$ofRef.length(ArrayOps.scala:114)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:32)
> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
> at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
> at kafka.log.Log.loadSegments(Log.scala:138)
> at kafka.log.Log.(Log.scala:92)
> at kafka.log.LogManager.createLog(LogManager.scala:357)
> at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:96)
> at 
> kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:176)
> at 
> kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:176)
> at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
> at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:176)
> at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:170)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:259)
> at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:267)
> at kafka.cluster.Partition.makeLeader(Partition.scala:170)
> at 
> kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:696)
> at 
> kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:695)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
> at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
> at kafka.server.ReplicaManager.makeLeaders(ReplicaManager.scala:695)
> at 
> kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:641)
> at 
> kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:142)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:79)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
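For context on why 254 characters is the tipping point: a likely explanation (an assumption here, not stated in the report) is that Kafka names each partition's log directory `<topic>-<partition>`, and most filesystems cap a single path component at 255 bytes, so a 254-char topic plus "-0" already yields 256 characters. A quick sketch of the arithmetic:

```java
public class TopicNameLengthCheck {

    // Common filesystem limit for a single path component, in bytes.
    static final int MAX_FILENAME_LENGTH = 255;

    // Kafka lays out a partition's log directory as "<topic>-<partition>".
    static String logDirName(String topic, int partition) {
        return topic + "-" + partition;
    }

    public static void main(String[] args) {
        // Same topic as the repro above: 254 'd' characters.
        String topic = "d".repeat(254);
        String dir = logDirName(topic, 0);
        System.out.println(dir.length());                        // 256
        System.out.println(dir.length() > MAX_FILENAME_LENGTH);  // true
    }
}
```

This would also explain why automatic topic creation rarely trips the bug: auto-created names are typically short enough that the partition suffix still fits.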


[jira] [Resolved] (KAFKA-3219) Long topic names mess up broker topic state

2016-05-20 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy resolved KAFKA-3219.

Resolution: Fixed




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2547) Make DynamicConfigManager to use the ZkNodeChangeNotificationListener introduced as part of KAFKA-2211

2016-05-20 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-2547:
---
Status: Reopened  (was: Closed)

> Make DynamicConfigManager to use the ZkNodeChangeNotificationListener 
> introduced as part of KAFKA-2211
> --
>
> Key: KAFKA-2547
> URL: https://issues.apache.org/jira/browse/KAFKA-2547
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Parth Brahmbhatt
>Assignee: Parth Brahmbhatt
> Fix For: 0.10.1.0, 0.10.0.0
>
>
> As part of KAFKA-2211 (https://github.com/apache/kafka/pull/195/files) we 
> introduced a reusable ZkNodeChangeNotificationListener to ensure node changes 
> can be processed in a lossless way. This was pretty much the same code as in 
> DynamicConfigManager, with a little bit of refactoring so it can be reused. We 
> now need to make DynamicConfigManager itself use this new class once 
> KAFKA-2211 is committed, to avoid code duplication.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2547) Make DynamicConfigManager to use the ZkNodeChangeNotificationListener introduced as part of KAFKA-2211

2016-05-20 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy resolved KAFKA-2547.

Resolution: Fixed




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Apache Kafka JIRA Workflow: Add Closed -> Reopen transition

2016-05-20 Thread Harsha
Manikumar,
Any reason for this? Previously the workflow was to open
a new JIRA if a JIRA was closed.
-Harsha

On Fri, May 20, 2016, at 08:54 PM, Manikumar Reddy wrote:
> Jun/Ismail,
> 
> I requested Apache Infra  to change JIRA workflow to add  Closed ->
> Reopen
> transition.
> https://issues.apache.org/jira/browse/INFRA-11857
> 
> Let me know, If any concerns
> 
> Manikumar


Re: Apache Kafka JIRA Workflow: Add Closed -> Reopen transition

2016-05-20 Thread Manikumar Reddy
Hi,

There were some JIRAs which were closed but not resolved.
I just wanted to close those JIRAs properly, so that they won't
appear in JIRA search. Without this new transition I was not able to close
them properly.

Manikumar
On May 21, 2016 11:23 AM, "Harsha"  wrote:

> Manikumar,
> Any reason for this. Before the workflow is to open
> a new JIRA if a JIRA closed.
> -Harsha
>
> On Fri, May 20, 2016, at 08:54 PM, Manikumar Reddy wrote:
> > Jun/Ismail,
> >
> > I requested Apache Infra  to change JIRA workflow to add  Closed ->
> > Reopen
> > transition.
> > https://issues.apache.org/jira/browse/INFRA-11857
> >
> > Let me know, If any concerns
> >
> > Manikumar
>