[GitHub] kafka pull request #2083: KAFKA-4318 Migrate ProducerSendTest to the new con...

2016-11-01 Thread baluchicken
GitHub user baluchicken opened a pull request:

https://github.com/apache/kafka/pull/2083

KAFKA-4318 Migrate ProducerSendTest to the new consumer



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/baluchicken/kafka-1 KAFKA-4318

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2083.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2083


commit 6376d7e4623d65c3af01c6dbf9225e8ac43a0b47
Author: Balint Molnar 
Date:   2016-11-01T09:16:17Z

KAFKA-4318 Migrate ProducerSendTest to the new consumer




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4318) Migrate ProducerSendTest to the new consumer

2016-11-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15624934#comment-15624934
 ] 

ASF GitHub Bot commented on KAFKA-4318:
---

GitHub user baluchicken opened a pull request:

https://github.com/apache/kafka/pull/2083

KAFKA-4318 Migrate ProducerSendTest to the new consumer



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/baluchicken/kafka-1 KAFKA-4318

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2083.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2083


commit 6376d7e4623d65c3af01c6dbf9225e8ac43a0b47
Author: Balint Molnar 
Date:   2016-11-01T09:16:17Z

KAFKA-4318 Migrate ProducerSendTest to the new consumer




> Migrate ProducerSendTest to the new consumer
> 
>
> Key: KAFKA-4318
> URL: https://issues.apache.org/jira/browse/KAFKA-4318
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Reporter: Balint Molnar
>Assignee: Balint Molnar
>Priority: Minor
>
> BaseProducerSendTest contains a 
> TODO: "we need to migrate to new consumers when 0.9 is final"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2084: KAFKA-4361: Streams does not respect user configs ...

2016-11-01 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/2084

KAFKA-4361: Streams does not respect user configs for "default" params

Enable user-provided consumer and producer configs to override the Streams 
default configs. 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka kafka-4361

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2084.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2084


commit 47f428940fa05d3c733832cf0e228d3a93a102e3
Author: Damian Guy 
Date:   2016-11-01T10:18:55Z

override StreamsConfig consumer and producer defaults




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4361) Streams does not respect user configs for "default" params

2016-11-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625026#comment-15625026
 ] 

ASF GitHub Bot commented on KAFKA-4361:
---

GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/2084

KAFKA-4361: Streams does not respect user configs for "default" params

Enable user-provided consumer and producer configs to override the Streams 
default configs. 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka kafka-4361

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2084.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2084


commit 47f428940fa05d3c733832cf0e228d3a93a102e3
Author: Damian Guy 
Date:   2016-11-01T10:18:55Z

override StreamsConfig consumer and producer defaults




> Streams does not respect user configs for "default" params
> --
>
> Key: KAFKA-4361
> URL: https://issues.apache.org/jira/browse/KAFKA-4361
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Roger Hoover
>Assignee: Damian Guy
>
> For the config params in CONSUMER_DEFAULT_OVERRIDES 
> (https://github.com/apache/kafka/blob/0.10.1/streams/src/main/java/org/apache/kafka/streams/StreamsConfig.java#L274)
> such as consumer.max.poll.records and consumer.auto.offset.reset, those 
> parameters are not used and are instead overridden by the defaults. The same 
> may apply to some producer config values.
> If your application sets those params in the StreamsConfig, they are not used 
> by the underlying Kafka consumers (and possibly producers).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
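
To make the reported behavior concrete, here is a minimal sketch of the kind
of configuration the fix is meant to honor; the application id, bootstrap
address, and values are illustrative, not taken from the report:

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class UserOverrideExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");    // illustrative
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative
        // User-supplied values for params that Streams also lists in its
        // CONSUMER_DEFAULT_OVERRIDES; before the fix, these were silently
        // replaced by the built-in defaults when the consumer configs were built.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 500);

        StreamsConfig config = new StreamsConfig(props);
        // With the fix, consumer configs derived from this object should carry
        // "earliest" and 500 rather than the Streams defaults.
        System.out.println(config.originals());
    }
}
```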


[GitHub] kafka pull request #2085: KAFKA-4360:Controller may deadLock when autoLead...

2016-11-01 Thread xiguantiaozhan
GitHub user xiguantiaozhan opened a pull request:

https://github.com/apache/kafka/pull/2085

KAFKA-4360:Controller may deadLock when autoLeaderRebalance encounter zk 
expired



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xiguantiaozhan/kafka rebalance_deadlock

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2085.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2085


commit 477bb3ddb6dc337ba68c7c585dc0cb3afa55e2be
Author: xiguantiaozhan 
Date:   2016-11-01T06:27:20Z

avoid deadlock in autoRebalanceScheduler shutdown

commit 980ec8c7a9d4ce4aa19479bf4d542666f237c9ce
Author: tuyang 
Date:   2016-11-01T12:25:12Z

avoid deadlock in ZookeeperLeaderElector




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4348) On Mac OS, KafkaConsumer.poll returns 0 when there are still messages on Kafka server

2016-11-01 Thread Yiquan Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625312#comment-15625312
 ] 

Yiquan Zhou commented on KAFKA-4348:


I think I hit the issue with the console consumer as well... Here is what I did 
with the kafka_2.11-0.9.0.1 distribution:

I started ZooKeeper and the Kafka server, then ran the verifiable producer to 
send 200k messages. Then I started the console consumer with the command:

$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --bootstrap-server 
localhost:9092 --topic connect-test --from-beginning --new-consumer 

I can see that messages got printed on the console, but then output froze for 
~5s before printing some more, exactly the same behavior as calling 
KafkaConsumer.poll.

But if I run the console consumer without the --new-consumer option, the issue 
doesn't seem to occur. Messages are printed out continuously.

I've run the tests on two MacBook Pros and hit the issue on both, although they 
have similar configurations... Is there any way the network settings could 
have an impact on this issue?

> On Mac OS, KafkaConsumer.poll returns 0 when there are still messages on 
> Kafka server
> -
>
> Key: KAFKA-4348
> URL: https://issues.apache.org/jira/browse/KAFKA-4348
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0, 0.9.0.1, 0.10.0.1
> Environment: Mac OS X El Capitan, Java 1.8.0_111
>Reporter: Yiquan Zhou
>  Labels: consumer, mac, polling
>
> Steps to reproduce:
> 1. Start ZooKeeper and the Kafka server using the default properties from the 
> distribution: 
> $ bin/zookeeper-server-start.sh config/zookeeper.properties
> $ bin/kafka-server-start.sh config/server.properties 
> 2. Create a Kafka consumer using the Java API KafkaConsumer.poll(long 
> timeout). It polls the records from the server every second (timeout set to 
> 1000) and prints the number of records polled. The code can be found here: 
> https://gist.github.com/yiquanzhou/a94569a2c4ec8992444c83f3c393f596
> 3. Use bin/kafka-verifiable-producer.sh to generate some messages: 
> $ bin/kafka-verifiable-producer.sh --topic connect-test --max-messages 20 
> --broker-list localhost:9092
> Wait until all 200k messages are generated and sent to the server. 
> 4. Run the consumer Java code. In the output console of the consumer, we can 
> see that the consumer starts to poll some records, then it polls 0 records 
> for several seconds before polling some more, like this:
> polled 27160 records
> polled 0 records
> polled 0 records
> polled 0 records
> polled 0 records
> polled 0 records
> polled 26886 records
> polled 26886 records
> polled 0 records
> polled 0 records
> polled 0 records
> polled 0 records
> polled 0 records
> polled 26701 records
> polled 26214 records
> The bug slows down the consumption of messages a lot, and in our use case 
> the consumer wrongly assumes that all messages have been read from the topic.
> It is only reproducible on Mac OS X, not on Linux or Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
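
For readers who don't follow the gist link, here is a minimal equivalent of
the poll loop described in step 2, using the era-appropriate
KafkaConsumer.poll(long) API; the topic name, group id, and broker address are
illustrative:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PollCounter {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // illustrative
        props.put("group.id", "poll-counter");              // illustrative
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("connect-test"));
            while (true) {
                // Poll with a 1000 ms timeout, as in the report, and print the
                // number of records returned by each call.
                ConsumerRecords<String, String> records = consumer.poll(1000L);
                System.out.println("polled " + records.count() + " records");
            }
        }
    }
}
```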


[jira] [Commented] (KAFKA-4360) Controller may deadLock when autoLeaderRebalance encounter zk expired

2016-11-01 Thread Json Tu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625317#comment-15625317
 ] 

Json Tu commented on KAFKA-4360:


I opened a pull request: https://github.com/apache/kafka/pull/2085. Can someone 
review it?

> Controller may deadLock when autoLeaderRebalance encounter zk expired
> -
>
> Key: KAFKA-4360
> URL: https://issues.apache.org/jira/browse/KAFKA-4360
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.9.0.0, 0.9.0.1, 0.10.0.0, 0.10.0.1
>Reporter: Json Tu
>  Labels: bugfix
> Attachments: deadlock_patch, yf-mafka2-common02_jstack.txt
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> When the controller has a checkAndTriggerPartitionRebalance task in the 
> autoRebalanceScheduler and the ZK session expires at that moment, it will 
> run into a deadlock.
> We can reconstruct the scenario as follows: when the ZK session expires, the 
> ZK thread calls handleNewSession, which is defined in 
> SessionExpirationListener. It acquires controllerContext.controllerLock and 
> then calls autoRebalanceScheduler.shutdown(), which must wait for all tasks 
> in the autoRebalanceScheduler to complete. But the scheduler's thread pool 
> also needs to acquire controllerContext.controllerLock, which is already 
> held by the ZK callback thread, so the two run into deadlock.
> This causes at least two problems. First, the broker's id cannot be 
> re-registered in ZooKeeper, so the broker will be considered dead by the new 
> controller. Second, the process cannot be stopped by kafka-server-stop.sh, 
> because the shutdown function also cannot acquire 
> controllerContext.controllerLock; we cannot shut down Kafka except by using 
> kill -9.
> In my attachment, I uploaded a jstack file, which was created when my Kafka 
> process could not be shut down by kafka-server-stop.sh.
> I have run into this scenario several times; I think this is a bug in Kafka 
> that has not yet been solved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
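
The reported cycle can be reproduced in isolation: a callback holds a lock
while waiting for an executor whose queued task needs the same lock. Below is
a minimal, self-contained Java sketch of that pattern; the class and variable
names are illustrative simplifications, not Kafka's actual controller code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class ControllerShutdownDeadlock {
    private static final ReentrantLock controllerLock = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        ExecutorService scheduler = Executors.newSingleThreadExecutor();

        // Stands in for the ZK session-expiry callback: it takes the lock...
        controllerLock.lock();
        try {
            // ...while a queued "rebalance" task (playing the role of
            // checkAndTriggerPartitionRebalance) also needs the lock.
            scheduler.submit(() -> {
                controllerLock.lock();
                try {
                    System.out.println("rebalance ran");
                } finally {
                    controllerLock.unlock();
                }
            });
            // shutdown() plus await stands in for autoRebalanceScheduler.shutdown():
            // the task can never acquire the lock we hold, so it never finishes.
            scheduler.shutdown();
            boolean drained = scheduler.awaitTermination(5, TimeUnit.SECONDS);
            System.out.println("scheduler drained: " + drained); // prints false
        } finally {
            controllerLock.unlock();
        }
    }
}
```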


[jira] [Comment Edited] (KAFKA-4360) Controller may deadLock when autoLeaderRebalance encounter zk expired

2016-11-01 Thread Json Tu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625317#comment-15625317
 ] 

Json Tu edited comment on KAFKA-4360 at 11/1/16 12:38 PM:
--

I opened a pull request: https://github.com/apache/kafka/pull/2085. Can someone 
review it?
[~becket_qin] [~junrao] [~guozhang]


was (Author: json tu):
I opened a pull request: https://github.com/apache/kafka/pull/2085. Can someone 
review it?

> Controller may deadLock when autoLeaderRebalance encounter zk expired
> -
>
> Key: KAFKA-4360
> URL: https://issues.apache.org/jira/browse/KAFKA-4360
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.9.0.0, 0.9.0.1, 0.10.0.0, 0.10.0.1
>Reporter: Json Tu
>  Labels: bugfix
> Attachments: deadlock_patch, yf-mafka2-common02_jstack.txt
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> When the controller has a checkAndTriggerPartitionRebalance task in the 
> autoRebalanceScheduler and the ZK session expires at that moment, it will 
> run into a deadlock.
> We can reconstruct the scenario as follows: when the ZK session expires, the 
> ZK thread calls handleNewSession, which is defined in 
> SessionExpirationListener. It acquires controllerContext.controllerLock and 
> then calls autoRebalanceScheduler.shutdown(), which must wait for all tasks 
> in the autoRebalanceScheduler to complete. But the scheduler's thread pool 
> also needs to acquire controllerContext.controllerLock, which is already 
> held by the ZK callback thread, so the two run into deadlock.
> This causes at least two problems. First, the broker's id cannot be 
> re-registered in ZooKeeper, so the broker will be considered dead by the new 
> controller. Second, the process cannot be stopped by kafka-server-stop.sh, 
> because the shutdown function also cannot acquire 
> controllerContext.controllerLock; we cannot shut down Kafka except by using 
> kill -9.
> In my attachment, I uploaded a jstack file, which was created when my Kafka 
> process could not be shut down by kafka-server-stop.sh.
> I have run into this scenario several times; I think this is a bug in Kafka 
> that has not yet been solved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2086: KAFKA-3751: SASL/SCRAM implementation

2016-11-01 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/2086

KAFKA-3751: SASL/SCRAM implementation

Implementation of KIP-84: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-84%3A+Support+SASL+SCRAM+mechanisms

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-3751

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2086.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2086


commit f0f80df586b95d55b12db7810ee180c21811fee0
Author: Rajini Sivaram 
Date:   2016-09-20T09:11:27Z

KAFKA-3751: SASL/SCRAM implementation




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3751) Add support for SASL mechanism SCRAM-SHA-256

2016-11-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625436#comment-15625436
 ] 

ASF GitHub Bot commented on KAFKA-3751:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/2086

KAFKA-3751: SASL/SCRAM implementation

Implementation of KIP-84: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-84%3A+Support+SASL+SCRAM+mechanisms

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-3751

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2086.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2086


commit f0f80df586b95d55b12db7810ee180c21811fee0
Author: Rajini Sivaram 
Date:   2016-09-20T09:11:27Z

KAFKA-3751: SASL/SCRAM implementation




> Add support for SASL mechanism SCRAM-SHA-256 
> -
>
> Key: KAFKA-3751
> URL: https://issues.apache.org/jira/browse/KAFKA-3751
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.10.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>
> Salted Challenge Response Authentication Mechanism (SCRAM) provides secure 
> authentication and is increasingly being adopted as an alternative to 
> Digest-MD5 which is now obsolete. SCRAM is described in the RFC 
> [https://tools.ietf.org/html/rfc5802]. It will be good to add support for 
> SCRAM-SHA-256 ([https://tools.ietf.org/html/rfc7677]) as a SASL mechanism for 
> Kafka.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
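
For background on what a SCRAM implementation stores per user, this is the
credential derivation defined in RFC 5802, sketched here with plain JDK
crypto; the password and iteration count are illustrative:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import javax.crypto.Mac;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

public class ScramCredentialSketch {
    public static void main(String[] args) throws Exception {
        char[] password = "alice-secret".toCharArray(); // illustrative
        int iterations = 4096;                          // illustrative
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);

        // SaltedPassword := Hi(password, salt, i); Hi is PBKDF2 with HMAC-SHA-256
        byte[] saltedPassword = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                .generateSecret(new PBEKeySpec(password, salt, iterations, 256))
                .getEncoded();

        // ClientKey := HMAC(SaltedPassword, "Client Key"); StoredKey := H(ClientKey)
        byte[] clientKey = hmac(saltedPassword, "Client Key");
        byte[] storedKey = MessageDigest.getInstance("SHA-256").digest(clientKey);

        // ServerKey := HMAC(SaltedPassword, "Server Key")
        byte[] serverKey = hmac(saltedPassword, "Server Key");

        // The server needs only salt, iterations, StoredKey, and ServerKey;
        // the plaintext password is never stored.
        System.out.println("StoredKey: " + storedKey.length + " bytes, ServerKey: "
                + serverKey.length + " bytes");
    }

    private static byte[] hmac(byte[] key, String message) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(message.getBytes(StandardCharsets.UTF_8));
    }
}
```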


[jira] [Updated] (KAFKA-3751) Add support for SASL mechanism SCRAM-SHA-256

2016-11-01 Thread Rajini Sivaram (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram updated KAFKA-3751:
--
Fix Version/s: 0.10.2.0
   Status: Patch Available  (was: Open)

The KIP corresponding to this JIRA is 
[KIP-84|https://cwiki.apache.org/confluence/display/KAFKA/KIP-84%3A+Support+SASL+SCRAM+mechanisms]

> Add support for SASL mechanism SCRAM-SHA-256 
> -
>
> Key: KAFKA-3751
> URL: https://issues.apache.org/jira/browse/KAFKA-3751
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.10.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.2.0
>
>
> Salted Challenge Response Authentication Mechanism (SCRAM) provides secure 
> authentication and is increasingly being adopted as an alternative to 
> Digest-MD5 which is now obsolete. SCRAM is described in the RFC 
> [https://tools.ietf.org/html/rfc5802]. It will be good to add support for 
> SCRAM-SHA-256 ([https://tools.ietf.org/html/rfc7677]) as a SASL mechanism for 
> Kafka.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3751) Add support for SASL/SCRAM mechanisms

2016-11-01 Thread Rajini Sivaram (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram updated KAFKA-3751:
--
Description: 
Salted Challenge Response Authentication Mechanism (SCRAM) provides secure 
authentication and is increasingly being adopted as an alternative to 
Digest-MD5 which is now obsolete. SCRAM is described in the RFC 
[https://tools.ietf.org/html/rfc5802]. It will be good to add support for 
SCRAM-SHA-256 ([https://tools.ietf.org/html/rfc7677]) as a SASL mechanism for 
Kafka.

See 
[KIP-84|https://cwiki.apache.org/confluence/display/KAFKA/KIP-84%3A+Support+SASL+SCRAM+mechanisms]
 for details.

  was:Salted Challenge Response Authentication Mechanism (SCRAM) provides 
secure authentication and is increasingly being adopted as an alternative to 
Digest-MD5 which is now obsolete. SCRAM is described in the RFC 
[https://tools.ietf.org/html/rfc5802]. It will be good to add support for 
SCRAM-SHA-256 ([https://tools.ietf.org/html/rfc7677]) as a SASL mechanism for 
Kafka.

Summary: Add support for SASL/SCRAM mechanisms  (was: Add support for 
SASL mechanism SCRAM-SHA-256 )

> Add support for SASL/SCRAM mechanisms
> -
>
> Key: KAFKA-3751
> URL: https://issues.apache.org/jira/browse/KAFKA-3751
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.10.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.2.0
>
>
> Salted Challenge Response Authentication Mechanism (SCRAM) provides secure 
> authentication and is increasingly being adopted as an alternative to 
> Digest-MD5 which is now obsolete. SCRAM is described in the RFC 
> [https://tools.ietf.org/html/rfc5802]. It will be good to add support for 
> SCRAM-SHA-256 ([https://tools.ietf.org/html/rfc7677]) as a SASL mechanism for 
> Kafka.
> See 
> [KIP-84|https://cwiki.apache.org/confluence/display/KAFKA/KIP-84%3A+Support+SASL+SCRAM+mechanisms]
>  for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2087: MINOR: Fix NPE when Connect offset contains non-pr...

2016-11-01 Thread mfenniak
GitHub user mfenniak opened a pull request:

https://github.com/apache/kafka/pull/2087

MINOR: Fix NPE when Connect offset contains non-primitive type

When storing a non-primitive type in a Connect offset, the following 
NullPointerException will occur:

```
07:18:23.702 [pool-3-thread-1] ERROR o.a.k.c.storage.OffsetStorageWriter - 
CRITICAL: Failed to serialize offset data, making it impossible to commit 
offsets under namespace tenant-db-bootstrap-source. This likely won't recover 
unless the unserializable partition or offset information is overwritten.
07:18:23.702 [pool-3-thread-1] ERROR o.a.k.c.storage.OffsetStorageWriter - 
Cause of serialization failure:
java.lang.NullPointerException: null
at 
org.apache.kafka.connect.storage.OffsetUtils.validateFormat(OffsetUtils.java:51)
at 
org.apache.kafka.connect.storage.OffsetStorageWriter.doFlush(OffsetStorageWriter.java:143)
at 
org.apache.kafka.connect.runtime.WorkerSourceTask.commitOffsets(WorkerSourceTask.java:319)
... snip ...
```

The attached patch fixes the specific case where OffsetUtils.validateFormat 
is attempting to provide a useful error message, but fails to do so because 
the schemaType method can return null.

This contribution is my original work and I license the work to the project 
under the project's open source license.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mfenniak/kafka 
fix-npr-with-clearer-error-message

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2087.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2087


commit 9f4fbefde16d01a6a3ff331be580ee3764675ede
Author: Mathieu Fenniak 
Date:   2016-11-01T13:31:58Z

Fix exception thrown when non-primitive type is stored in offset




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---
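
To illustrate the failure mode the patch addresses, here is a self-contained
sketch of a validateFormat-style check; `schemaTypeFor` is a hypothetical
stand-in for the internal schema-type lookup, which returns null for Java
classes with no corresponding Connect schema type and is what triggered the
NPE:

```java
import java.util.Map;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.errors.DataException;

public class OffsetValidationSketch {

    static void validateFormat(Map<?, ?> offsetData) {
        for (Map.Entry<?, ?> entry : offsetData.entrySet()) {
            Object value = entry.getValue();
            if (value == null)
                continue;
            Schema.Type schemaType = schemaTypeFor(value.getClass());
            // Without this null check, calling a method on schemaType below
            // throws the NullPointerException shown above instead of a useful
            // DataException naming the offending field and type.
            if (schemaType == null)
                throw new DataException("Offsets may only contain primitive types, but field "
                        + entry.getKey() + " has unsupported type " + value.getClass().getName());
            if (!schemaType.isPrimitive())
                throw new DataException("Offsets may only contain primitive types, but field "
                        + entry.getKey() + " has type " + schemaType);
        }
    }

    // Hypothetical stand-in for the real lookup; returns null for unknown classes.
    static Schema.Type schemaTypeFor(Class<?> cls) {
        if (cls == String.class)  return Schema.Type.STRING;
        if (cls == Long.class)    return Schema.Type.INT64;
        if (cls == Integer.class) return Schema.Type.INT32;
        return null;
    }
}
```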


Re: [VOTE] KIP-85: Dynamic JAAS configuration for Kafka clients

2016-11-01 Thread Rajini Sivaram
The KIP-85 vote has passed with 4 binding (Harsha, Gwen, Jason, Jun) and 4
non-binding (Mickael, Jim, Edo, me) votes.

Thank you all for your votes and comments. I will update the KIP page and
rebase the PR.

Many thanks,

Rajini



On Mon, Oct 31, 2016 at 11:29 AM, Edoardo Comar  wrote:

> +1 great KIP
> --
> Edoardo Comar
> IBM MessageHub
> eco...@uk.ibm.com
> IBM UK Ltd, Hursley Park, SO21 2JN
>
> IBM United Kingdom Limited Registered in England and Wales with number
> 741598 Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6
> 3AU
>
>
>
> From:   Rajini Sivaram 
> To: dev@kafka.apache.org
> Date:   26/10/2016 16:27
> Subject:[VOTE] KIP-85: Dynamic JAAS configuration for Kafka
> clients
>
>
>
> I would like to initiate the voting process for KIP-85: Dynamic JAAS
> configuration for Kafka Clients:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-85%3A+Dynamic+JAAS+
> configuration+for+Kafka+clients
>
>
> This KIP enables Java clients to connect to Kafka using SASL without a
> physical jaas.conf file. This will also be useful to configure multiple
> KafkaClient login contexts when multiple users are supported within a JVM.
>
> Thank you...
>
> Regards,
>
> Rajini
>
>
>
> Unless stated otherwise above:
> IBM United Kingdom Limited - Registered in England and Wales with number
> 741598.
> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
>



-- 
Regards,

Rajini


[jira] [Created] (KAFKA-4363) Add document for dynamic JAAS configuration property sasl.jaas.config

2016-11-01 Thread Rajini Sivaram (JIRA)
Rajini Sivaram created KAFKA-4363:
-

 Summary: Add document for dynamic JAAS configuration property 
sasl.jaas.config 
 Key: KAFKA-4363
 URL: https://issues.apache.org/jira/browse/KAFKA-4363
 Project: Kafka
  Issue Type: Task
  Components: documentation
Reporter: Rajini Sivaram
Assignee: Rajini Sivaram
 Fix For: 0.10.2.0


Documentation for the property added under 
[KIP-85|https://cwiki.apache.org/confluence/display/KAFKA/KIP-85%3A+Dynamic+JAAS+configuration+for+Kafka+clients].
The implementation is being added under KAFKA-4259.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
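
The property being documented lets a client supply its JAAS login context
inline instead of via a jaas.conf file. Below is a minimal sketch of client
properties using it; the broker address, mechanism choice, and credentials are
illustrative:

```java
import java.util.Properties;

public class SaslJaasConfigExample {
    public static Properties clientProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9093");  // illustrative
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        // KIP-85: the login context goes into a config value, so no physical
        // jaas.conf file (or java.security.auth.login.config flag) is needed.
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"alice\" password=\"alice-secret\";"); // illustrative
        return props;
    }
}
```

The same properties object can then be handed to a KafkaProducer or
KafkaConsumer constructor.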


Re: [DISCUSS] KIP-84: Support SASL/SCRAM mechanisms

2016-11-01 Thread Rajini Sivaram
If there are no more comments, I will start a vote on this KIP later this
week. In the meantime, please feel free to post any feedback or
suggestions. Initial implementation is here:
https://github.com/apache/kafka/pull/2086.

Thank you,

Rajini

On Thu, Oct 27, 2016 at 11:18 AM, Rajini Sivaram <
rajinisiva...@googlemail.com> wrote:

> Jun,
>
> 4) Agree, it does make the implementation simpler. Updated KIP.
> 5) Thank you, that looks neater. Updated KIP.
>
> On Wed, Oct 26, 2016 at 6:59 PM, Jun Rao  wrote:
>
>> Hi, Rajini,
>>
>> Thanks for the reply.
>>
>> 4. Implementation-wise, it seems to me that it's simpler to read from the
>> cache than reading directly from ZK, since the config manager already
>> propagates all config changes through ZK. Also, it's probably a good idea
>> to limit the places in the code base that directly access ZK.
>>
>> 5. Yes, it seems that it makes sense to add the new SCRAM configurations
>> to the existing /config/users/. I am not sure how one would remove the
>> SCRAM configurations in the example though, since the properties specified
>> in add-config are not the ones actually being stored. An alternative is to
>> do something like the following. It may still feel a bit weird and I am
>> wondering if there is a clearer approach.
>>
>> bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config
>> 'scram_sha-256=[password=alice-secret,iterations=4096],scram_sha-1=[password=alice-secret,iterations=4096]'
>> --entity-type users --entity-name alice
>>
>> bin/kafka-configs.sh --zookeeper localhost:2181 --alter --delete-config
>> 'scram_sha-256,scram_sha-1' --entity-type users --entity-name alice
>>
>> Thanks,
>>
>> Jun
>>
>>
>> On Wed, Oct 26, 2016 at 4:35 AM, Rajini Sivaram <
>> rajinisiva...@googlemail.com> wrote:
>>
>> > Hi Jun,
>> >
>> > Thank you for reviewing the KIP. Answers below:
>> >
>> >
>> >    1. Yes, agree, Updated KIP.
>> >    2. User specifies a password and iteration count. kafka-configs.sh
>> >    generates a random salt and then generates StoredKey and ServerKey for
>> >    that password using the same message formatter implementation used for
>> >    SCRAM authentication. I have added some more detail to the KIP
>> >    (https://cwiki.apache.org/confluence/display/KAFKA/KIP-84%3A+Support+SASL+SCRAM+mechanisms#KIP-84:SupportSASLSCRAMmechanisms-Tools).
>> >    Does that answer the question?
>> >    3. I started off thinking just one (SCRAM-SHA-256) and then thought
>> >    another one was required to make sure that the implementation can cope
>> >    with multiple SCRAM mechanisms. But actually you are right, we can
>> >    support all. I haven't added the old md2/md5 mechanisms that aren't
>> >    very secure, but I have included all the SHA algorithms.
>> >    4. Since credentials are only required when a connection is made, it
>> >    feels like we can just read the latest value from ZK rather than cache
>> >    all users and keep them updated. Having said that, we can always add
>> >    caching later if we find that the overhead of reading from ZK every
>> >    time is too expensive. Since caching doesn't change any externals, this
>> >    can be done in a JIRA later - would that be ok?
>> >    5. Thanks, updated. I have changed the property names to include
>> >    mechanism. To avoid four separate properties per mechanism in ZK, I
>> >    have changed the format to use a single property (lower-case mechanism
>> >    name) with four values concatenated in a format similar to SCRAM
>> >    messages.
>> >
>> > Do you think storing SCRAM credentials in /config/users/
>> > along with existing quota properties is okay? Or should they be under a
>> > different path (e.g. /config/credentials/)? Or should they be
>> > on a completely different path like ACLs, with a separate tool instead of
>> > reusing kafka-configs.sh?
>> >
>> > Thank you,
>> >
>> > Rajini
>> >
>> > On Tue, Oct 25, 2016 at 11:55 PM, Jun Rao  wrote:
>> >
>> > > Hi, Rajini,
>> > >
>> > > Thanks for the proposal. Looks good overall and seems quite useful
>> (e.g.
>> > > for supporting delegation tokens). A few comments/questions below.
>> > >
>> > > 1. For the ZK data format change, should we use the same convention as
>> > > in KIP-55 to use the encoded user name (i.e., /config/users/)?
>> > >
>> > > 2. For tooling, could you describe how a user typically generates
>> > > scram_server_key and scram_stored_key to be used by kafka-configs.sh?
>> > >
>> > > 3. Is there a particular reason to only support sha1 and sha256? Should
>> > > we support more of the hashes listed below in the future?
>> > > http://www.iana.org/assignments/hash-function-text-names/hash-function-text-names.xhtml
>> > >
>> > > 4. Is there a reason not to cache user credentials in the broker? The
>> > > dynamic config mechanism already supports loading configs into
>> broker's
>> > > cache. Checking credentials from broker's cache is more efficient th

[GitHub] kafka pull request #2073: MINOR: Fix issue in `AsyncProducerTest` where it e...

2016-11-01 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2073


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-4360) Controller may deadLock when autoLeaderRebalance encounter zk expired

2016-11-01 Thread Json Tu (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Json Tu updated KAFKA-4360:
---
Reviewer: Jiangjie Qin  (was: Json Tu)

> Controller may deadLock when autoLeaderRebalance encounter zk expired
> -
>
> Key: KAFKA-4360
> URL: https://issues.apache.org/jira/browse/KAFKA-4360
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.9.0.0, 0.9.0.1, 0.10.0.0, 0.10.0.1
>Reporter: Json Tu
>  Labels: bugfix
> Attachments: deadlock_patch, yf-mafka2-common02_jstack.txt
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> When the controller has a checkAndTriggerPartitionRebalance task in the 
> autoRebalanceScheduler and the ZK session expires at that moment, it will 
> run into a deadlock.
> We can reconstruct the scenario as follows: when the ZK session expires, the 
> ZK thread calls handleNewSession, which is defined in 
> SessionExpirationListener. It acquires controllerContext.controllerLock and 
> then calls autoRebalanceScheduler.shutdown(), which must wait for all tasks 
> in the autoRebalanceScheduler to complete. But the scheduler's thread pool 
> also needs to acquire controllerContext.controllerLock, which is already 
> held by the ZK callback thread, so the two run into deadlock.
> This causes at least two problems. First, the broker's id cannot be 
> re-registered in ZooKeeper, so the broker will be considered dead by the new 
> controller. Second, the process cannot be stopped by kafka-server-stop.sh, 
> because the shutdown function also cannot acquire 
> controllerContext.controllerLock; we cannot shut down Kafka except by using 
> kill -9.
> In my attachment, I uploaded a jstack file, which was created when my Kafka 
> process could not be shut down by kafka-server-stop.sh.
> I have run into this scenario several times; I think this is a bug in Kafka 
> that has not yet been solved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-84: Support SASL/SCRAM mechanisms

2016-11-01 Thread Jun Rao
Hi, Rajini,

One more thing. It seems that we should bump up the version of
SaslHandshakeRequest? This way, the client can figure out which SASL
mechanisms the broker is capable of supporting through ApiVersionRequest.
We discussed this briefly as part of KIP-43.

Thanks,

Jun



On Tue, Nov 1, 2016 at 7:41 AM, Rajini Sivaram  wrote:

> If there are no more comments, I will start a vote on this KIP later this
> week. In the meantime, please feel free to post any feedback or
> suggestions. Initial implementation is here:
> https://github.com/apache/kafka/pull/2086.
>
> Thank you,
>
> Rajini
>
> On Thu, Oct 27, 2016 at 11:18 AM, Rajini Sivaram <
> rajinisiva...@googlemail.com> wrote:
>
> > Jun,
> >
> > 4) Agree, it does make the implementation simpler. Updated KIP.
> > 5) Thank you, that looks neater. Updated KIP.
> >
> > On Wed, Oct 26, 2016 at 6:59 PM, Jun Rao  wrote:
> >
> >> Hi, Rajini,
> >>
> >> Thanks for the reply.
> >>
> >> 4. Implementation-wise, it seems to me that it's simpler to read from
> >> the cache than reading directly from ZK, since the config manager already
> >> propagates all config changes through ZK. Also, it's probably a good idea
> >> to limit the places in the code base that directly access ZK.
> >>
> >> 5. Yes, it seems that it makes sense to add the new SCRAM configurations
> >> to the existing /config/users/. I am not sure how one would remove the
> >> SCRAM configurations in the example though, since the properties
> >> specified in add-config are not the ones actually being stored. An
> >> alternative is to do something like the following. It may still feel a
> >> bit weird and I am wondering if there is a clearer approach.
> >>
> >> bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config
> >> 'scram_sha-256=[password=alice-secret,iterations=4096],scram_sha-1=[password=alice-secret,iterations=4096]'
> >> --entity-type users --entity-name alice
> >>
> >> bin/kafka-configs.sh --zookeeper localhost:2181 --alter --delete-config
> >> 'scram_sha-256,scram_sha-1' --entity-type users --entity-name alice
> >>
> >> Thanks,
> >>
> >> Jun
> >>
> >>
> >> On Wed, Oct 26, 2016 at 4:35 AM, Rajini Sivaram <
> >> rajinisiva...@googlemail.com> wrote:
> >>
> >> > Hi Jun,
> >> >
> >> > Thank you for reviewing the KIP. Answers below:
> >> >
> >> >
> >> >    1. Yes, agree, Updated KIP.
> >> >    2. User specifies a password and iteration count. kafka-configs.sh
> >> >    generates a random salt and then generates StoredKey and ServerKey
> >> >    for that password using the same message formatter implementation
> >> >    used for SCRAM authentication. I have added some more detail to the
> >> >    KIP (https://cwiki.apache.org/confluence/display/KAFKA/KIP-84%3A+Support+SASL+SCRAM+mechanisms#KIP-84:SupportSASLSCRAMmechanisms-Tools).
> >> >    Does that answer the question?
> >> >    3. I started off thinking just one (SCRAM-SHA-256) and then thought
> >> >    another one was required to make sure that the implementation can
> >> >    cope with multiple SCRAM mechanisms. But actually you are right, we
> >> >    can support all. I haven't added the old md2/md5 mechanisms that
> >> >    aren't very secure, but I have included all the SHA algorithms.
> >> >    4. Since credentials are only required when a connection is made, it
> >> >    feels like we can just read the latest value from ZK rather than
> >> >    cache all users and keep them updated. Having said that, we can
> >> >    always add caching later if we find that the overhead of reading from
> >> >    ZK every time is too expensive. Since caching doesn't change any
> >> >    externals, this can be done in a JIRA later - would that be ok?
> >> >    5. Thanks, updated. I have changed the property names to include
> >> >    mechanism. To avoid four separate properties per mechanism in ZK, I
> >> >    have changed the format to use a single property (lower-case
> >> >    mechanism name) with four values concatenated in a format similar to
> >> >    SCRAM messages.
> >> >
> >> > Do you think storing SCRAM credentials in /config/users/
> >> > along with existing quota properties is okay? Or should they be under a
> >> > different path (e.g. /config/credentials/)? Or should they be
> >> > on a completely different path like ACLs, with a separate tool instead
> >> > of reusing kafka-configs.sh?
> >> >
> >> > Thank you,
> >> >
> >> > Rajini
> >> >
> >> > On Tue, Oct 25, 2016 at 11:55 PM, Jun Rao  wrote:
> >> >
> >> > > Hi, Rajini,
> >> > >
> >> > > Thanks for the proposal. Looks good overall and seems quite useful
> >> (e.g.
> >> > > for supporting delegation tokens). A few comments/questions below.
> >> > >
> >> > > 1. For the ZK data format change, should we use the same convention
> >> > > as in KIP-55 to use the encoded user name (i.e., /config/users/)?
> >> > >
> >> > > 2. For tooling, could you describ

[jira] [Resolved] (KAFKA-4361) Streams does not respect user configs for "default" params

2016-11-01 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-4361.
--
   Resolution: Fixed
Fix Version/s: 0.10.2.0

Issue resolved by pull request 2084
[https://github.com/apache/kafka/pull/2084]

> Streams does not respect user configs for "default" params
> --
>
> Key: KAFKA-4361
> URL: https://issues.apache.org/jira/browse/KAFKA-4361
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Roger Hoover
>Assignee: Damian Guy
> Fix For: 0.10.2.0
>
>
> For the config params in CONSUMER_DEFAULT_OVERRIDES 
> (https://github.com/apache/kafka/blob/0.10.1/streams/src/main/java/org/apache/kafka/streams/StreamsConfig.java#L274)
> such as consumer.max.poll.records and consumer.auto.offset.reset, those 
> parameters are not used and are instead overridden by the defaults. The same 
> may apply to some producer config values.
> If your application sets those params in the StreamsConfig, they are not used 
> by the underlying Kafka consumers (and possibly producers).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2084: KAFKA-4361: Streams does not respect user configs ...

2016-11-01 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2084


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4361) Streams does not respect user configs for "default" params

2016-11-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626028#comment-15626028
 ] 

ASF GitHub Bot commented on KAFKA-4361:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2084


> Streams does not respect user configs for "default" params
> --
>
> Key: KAFKA-4361
> URL: https://issues.apache.org/jira/browse/KAFKA-4361
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Roger Hoover
>Assignee: Damian Guy
> Fix For: 0.10.2.0
>
>
> For the config params in CONSUMER_DEFAULT_OVERRIDES 
> (https://github.com/apache/kafka/blob/0.10.1/streams/src/main/java/org/apache/kafka/streams/StreamsConfig.java#L274)
> such as consumer.max.poll.records and consumer.auto.offset.reset, those 
> parameters are not used and are instead overridden by the defaults. The same 
> may apply to some producer config values.
> If your application sets those params in the StreamsConfig, they are not used 
> by the underlying Kafka consumers (and possibly producers).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1663

2016-11-01 Thread Apache Jenkins Server
See 

Changes:

[jason] MINOR: Fix issue in `AsyncProducerTest` where it expects the `port`

--
[...truncated 4123 lines...]

kafka.log.LogCleanerTest > testBuildPartialOffsetMap STARTED

kafka.log.LogCleanerTest > testBuildPartialOffsetMap PASSED

kafka.log.LogCleanerTest > testCleaningWithUnkeyedMessages STARTED

kafka.log.LogCleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.LogCleanerTest > testPartialSegmentClean STARTED

kafka.log.LogCleanerTest > testPartialSegmentClean PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic STARTED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > shouldNotDeleteTimeBasedSegmentsWhenNoneReadyToBeDeleted 
STARTED

kafka.log.LogTest > shouldNotDeleteTimeBasedSegmentsWhenNoneReadyToBeDeleted 
PASSED

kafka.log.LogTest > testReadWithMinMessage STARTED

kafka.log.LogTest > testReadWithMinMessage PASSED

kafka.log.LogTest > testIndexRebuild STARTED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testLogRolls STARTED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testMessageSizeCheck STARTED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testAsyncDelete STARTED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > shouldNotDeleteSegmentsWhenPolicyDoesNotIncludeDelete 
STARTED

kafka.log.LogTest > shouldNotDeleteSegmentsWhenPolicyDoesNotIncludeDelete PASSED

kafka.log.LogTest > testReadOutOfRange STARTED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testAppendWithOutOfOrderOffsetsThrowsException STARTED

kafka.log.LogTest > testAppendWithOutOfOrderOffsetsThrowsException PASSED

kafka.log.LogTest > 
shouldDeleteSegmentsReadyToBeDeletedWhenCleanupPolicyIsCompactAndDelete STARTED

kafka.log.LogTest > 
shouldDeleteSegmentsReadyToBeDeletedWhenCleanupPolicyIsCompactAndDelete PASSED

kafka.log.LogTest > testReadAtLogGap STARTED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testTimeBasedLogRoll STARTED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog STARTED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testMessageSetSizeCheck STARTED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testIndexResizingAtTruncation STARTED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testCompactedTopicConstraints STARTED

kafka.log.LogTest > testCompactedTopicConstraints PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset STARTED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED

kafka.log.LogTest > testAppendAndReadWithSequentialOffsets STARTED

kafka.log.LogTest > testAppendAndReadWithSequentialOffsets PASSED

kafka.log.LogTest > testDeleteOldSegmentsMethod STARTED

kafka.log.LogTest > testDeleteOldSegmentsMethod PASSED

kafka.log.LogTest > shouldDeleteSizeBasedSegments STARTED

kafka.log.LogTest > shouldDeleteSizeBasedSegments PASSED

kafka.log.LogTest > testParseTopicPartitionNameForNull STARTED

kafka.log.LogTest > testParseTopicPartitionNameForNull PASSED

kafka.log.LogTest > testAppendAndReadWithNonSequentialOffsets STARTED

kafka.log.LogTest > testAppendAndReadWithNonSequentialOffsets PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingSeparator STARTED

kafka.log.LogTest > testParseTopicPartitionNameForMissingSeparator PASSED

kafka.log.LogTest > testCorruptIndexRebuild STARTED

kafka.log.LogTest > testCorruptIndexRebuild PASSED

kafka.log.LogTest > shouldDeleteTimeBasedSegmentsReadyToBeDeleted STARTED

kafka.log.LogTest > shouldDeleteTimeBasedSegmentsReadyToBeDeleted PASSED

kafka.log.LogTest > testReadWithTooSmallMaxLength STARTED

kafka.log.LogTest > testReadWithTooSmallMaxLength PASSED

kafka.log.LogTest > testBogusIndexSegmentsAreRemoved STARTED

kafka.log.LogTest > testBogusIndexSegmentsAreRemoved PASSED

kafka.log.LogTest > testCompressedMessages STARTED

kafka.log.LogTest > testCompressedMessages PASSED

kafka.log.LogTest > testAppendMessageWithNullPayload STARTED

kafka.log.LogTest > testAppendMessageWithNullPayload PASSED

kafka.log.LogTest > testCorruptLog STARTED

kafka.log.LogTest > testCorruptLog PASSED

kafka.log.LogTest > testLogRecoversToCorrectOffset STARTED

kafka.log.LogTest > testLogRecoversToCorrectOffset PASSED

kafka.log.LogTest > testReopenThenTruncate STARTED

kafka.log.LogTest > testReopenThenTruncate PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingPartition STARTED

kafka.log.LogTest > testParseTopicPartitionNameForMissingPartition PASSED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName STARTED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName PASSED

kafka.log.LogTest > testOpenDeletesObsoleteFiles STARTED

kafka.log.LogTest > testOpenDeletesObsoleteFiles PASSED

ka

[jira] [Commented] (KAFKA-4352) Failure in org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset

2016-11-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626081#comment-15626081
 ] 

ASF GitHub Bot commented on KAFKA-4352:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2082


> Failure in 
> org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset
> --
>
> Key: KAFKA-4352
> URL: https://issues.apache.org/jira/browse/KAFKA-4352
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.10.2.0
>Reporter: Guozhang Wang
>Assignee: Matthias J. Sax
>
> We have seen the following scenario happening frequently. It looks similar to 
> KAFKA-2768, which was thought to be fixed.
> {code}
> Stacktrace
> org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
> 'version': java.nio.BufferUnderflowException
>   at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:73)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerProtocol.deserializeAssignment(ConsumerProtocol.java:106)
>   at kafka.admin.AdminClient$$anonfun$1.apply(AdminClient.scala:152)
>   at kafka.admin.AdminClient$$anonfun$1.apply(AdminClient.scala:151)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:105)
>   at kafka.admin.AdminClient.describeConsumerGroup(AdminClient.scala:151)
>   at 
> org.apache.kafka.streams.integration.ResetIntegrationTest$WaitUntilConsumerGroupGotClosed.conditionMet(ResetIntegrationTest.java:305)
>   at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:270)
>   at 
> org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset(ResetIntegrationTest.java:135)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>   at 
> su

[GitHub] kafka pull request #2082: KAFKA-4352: instable ResetTool integration test

2016-11-01 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2082


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk8 #1012

2016-11-01 Thread Apache Jenkins Server
See 

Changes:

[jason] MINOR: Fix issue in `AsyncProducerTest` where it expects the `port`

--
[...truncated 14592 lines...]
org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowProcessorStateExceptionOnInitializeOffsetsWhenKafkaException PASSED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowWakeupExceptionOnInitializeOffsetsWhenWakeupException STARTED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowWakeupExceptionOnInitializeOffsetsWhenWakeupException PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldThrowExceptionIfApplicationServerConfigIsNotHostPortPair STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldThrowExceptionIfApplicationServerConfigIsNotHostPortPair PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldMapUserEndPointToTopicPartitions STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldMapUserEndPointToTopicPartitions PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldAddUserDefinedEndPointToSubscription STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldAddUserDefinedEndPointToSubscription PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithStandbyReplicas STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithStandbyReplicas PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithNewTasks STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithNewTasks PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldExposeHostStateToTopicPartitionsOnAssignment STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldExposeHostStateToTopicPartitionsOnAssignment PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithStates STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithStates PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldThrowExceptionIfApplicationServerConfigPortIsNotAnInteger STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldThrowExceptionIfApplicationServerConfigPortIsNotAnInteger PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldReturnEmptyClusterMetadataIfItHasntBeenBuilt STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldReturnEmptyClusterMetadataIfItHasntBeenBuilt PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignBasic STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignBasic PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testOnAssignment STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testOnAssignment PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithInternalTopicThatsSourceIsAnotherInternalTopic STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithInternalTopicThatsSourceIsAnotherInternalTopic PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testSubscription STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testSubscription PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldSetClusterMetadataOnAssignment STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldSetClusterMetadataOnAssignment PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithInternalTopics STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithInternalTopics PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics 
STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSink STARTED

org.apache.kafka.st

[jira] [Resolved] (KAFKA-4352) Failure in org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset

2016-11-01 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-4352.
--
   Resolution: Fixed
Fix Version/s: 0.10.2.0

Issue resolved by pull request 2082
[https://github.com/apache/kafka/pull/2082]

> Failure in 
> org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset
> --
>
> Key: KAFKA-4352
> URL: https://issues.apache.org/jira/browse/KAFKA-4352
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.10.2.0
>Reporter: Guozhang Wang
>Assignee: Matthias J. Sax
> Fix For: 0.10.2.0
>
>
> We have seen the following scenario happening frequently. It looks similar to 
> KAFKA-2768, which was thought to be fixed.
> {code}
> Stacktrace
> org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
> 'version': java.nio.BufferUnderflowException
>   at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:73)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerProtocol.deserializeAssignment(ConsumerProtocol.java:106)
>   at kafka.admin.AdminClient$$anonfun$1.apply(AdminClient.scala:152)
>   at kafka.admin.AdminClient$$anonfun$1.apply(AdminClient.scala:151)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:105)
>   at kafka.admin.AdminClient.describeConsumerGroup(AdminClient.scala:151)
>   at 
> org.apache.kafka.streams.integration.ResetIntegrationTest$WaitUntilConsumerGroupGotClosed.conditionMet(ResetIntegrationTest.java:305)
>   at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:270)
>   at 
> org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset(ResetIntegrationTest.java:135)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>

[DISCUSS] KAFKA-4345: Run ducktape test for each pull request

2016-11-01 Thread Raghav Kumar Gautam
Hi,

I want to start a discussion about running ducktape tests for each pull
request. I have been working on KAFKA-4345 to enable this using Docker on
Travis CI.
Pull request: https://github.com/apache/kafka/pull/2064
Working POC: https://travis-ci.org/raghavgautam/kafka/builds/171502069

In the POC I am able to run 124/149 tests, of which 88 pass. The failures
are mostly timing issues. Running the same scripts on a laptop, I am able to
pass 138/149 tests.

For this to work we need to enable Travis CI for Kafka; I can open an INFRA
ticket to request it. Travis CI is already running tests for many Apache
projects like Storm, Hive, Flume, Thrift, etc.; see:
https://travis-ci.org/apache/.

Does this sound interesting? Please comment.

Thanks,
Raghav.
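
(For anyone who wants to try this locally before Travis CI is enabled: a
minimal sketch, assuming ducktape is installed via pip and that the system
tests live under tests/kafkatest as on trunk; exact paths may differ per
branch.)

{code}
# Illustrative commands, not the exact ones from the PR.
pip install ducktape
cd kafka
ducktape tests/kafkatest/tests   # point ducktape at a test directory or file
{code}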


[jira] [Assigned] (KAFKA-4320) Log compaction docs update

2016-11-01 Thread Ishita Mandhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishita Mandhan reassigned KAFKA-4320:
-

Assignee: Ishita Mandhan

> Log compaction docs update
> --
>
> Key: KAFKA-4320
> URL: https://issues.apache.org/jira/browse/KAFKA-4320
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Reporter: Dustin Cote
>Assignee: Ishita Mandhan
>Priority: Minor
>  Labels: newbie
>
> The log compaction docs are out of date. At a minimum, they say log compaction 
> is disabled by default, which has not been true since 0.9.0.1. The whole 
> section probably needs a once-over to make sure it matches current behavior. 
> This is the section:
> [http://kafka.apache.org/documentation#design_compactionconfig]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4307) Inconsistent parameters between console producer and consumer

2016-11-01 Thread Ishita Mandhan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626328#comment-15626328
 ] 

Ishita Mandhan commented on KAFKA-4307:
---

Are you working on this, [~manasvigupta]?

> Inconsistent parameters between console producer and consumer
> -
>
> Key: KAFKA-4307
> URL: https://issues.apache.org/jira/browse/KAFKA-4307
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Gwen Shapira
>Assignee: Manasvi Gupta
>  Labels: newbie
>
> kafka-console-producer uses --broker-list while kafka-console-consumer uses 
> --bootstrap-server.
> Let's add --bootstrap-server to the producer for some consistency?
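
For context, the inconsistency looks like this on the command line today (host
and topic values are illustrative):

{code}
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test
{code}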



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1664

2016-11-01 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-4361: Streams does not respect user configs for "default" 
params

[wangguoz] KAFKA-4352: instable ResetTool integration test

--
[...truncated 14306 lines...]
org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowProcessorStateExceptionOnInitializeOffsetsWhenKafkaException PASSED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowWakeupExceptionOnInitializeOffsetsWhenWakeupException STARTED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowWakeupExceptionOnInitializeOffsetsWhenWakeupException PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldThrowExceptionIfApplicationServerConfigIsNotHostPortPair STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldThrowExceptionIfApplicationServerConfigIsNotHostPortPair PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldMapUserEndPointToTopicPartitions STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldMapUserEndPointToTopicPartitions PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldAddUserDefinedEndPointToSubscription STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldAddUserDefinedEndPointToSubscription PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithStandbyReplicas STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithStandbyReplicas PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithNewTasks STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithNewTasks PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldExposeHostStateToTopicPartitionsOnAssignment STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldExposeHostStateToTopicPartitionsOnAssignment PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithStates STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithStates PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldThrowExceptionIfApplicationServerConfigPortIsNotAnInteger STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldThrowExceptionIfApplicationServerConfigPortIsNotAnInteger PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldReturnEmptyClusterMetadataIfItHasntBeenBuilt STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldReturnEmptyClusterMetadataIfItHasntBeenBuilt PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignBasic STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignBasic PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testOnAssignment STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testOnAssignment PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithInternalTopicThatsSourceIsAnotherInternalTopic STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithInternalTopicThatsSourceIsAnotherInternalTopic PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testSubscription STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testSubscription PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldSetClusterMetadataOnAssignment STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldSetClusterMetadataOnAssignment PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithInternalTopics STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithInternalTopics PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics 
STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest >

[jira] [Assigned] (KAFKA-2544) Replication tools wiki page needs to be updated

2016-11-01 Thread Ishita Mandhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishita Mandhan reassigned KAFKA-2544:
-

Assignee: Ishita Mandhan

> Replication tools wiki page needs to be updated
> ---
>
> Key: KAFKA-2544
> URL: https://issues.apache.org/jira/browse/KAFKA-2544
> Project: Kafka
>  Issue Type: Improvement
>  Components: website
>Affects Versions: 0.8.2.1
>Reporter: Stevo Slavic
>Assignee: Ishita Mandhan
>Priority: Minor
>  Labels: documentation, newbie
>
> https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools is 
> outdated, mentions tools which have been heavily refactored or replaced by 
> other tools, e.g. add partition tool, list/create topics tools, etc.
> Please have the replication tools wiki page updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4357) Consumer group describe exception when there is no active member (old consumer)

2016-11-01 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-4357:
---
   Resolution: Fixed
Fix Version/s: 0.10.2.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 2075
[https://github.com/apache/kafka/pull/2075]

> Consumer group describe exception when there is no active member (old 
> consumer)
> ---
>
> Key: KAFKA-4357
> URL: https://issues.apache.org/jira/browse/KAFKA-4357
> Project: Kafka
>  Issue Type: Bug
>Reporter: Vahid Hashemian
>Assignee: Vahid Hashemian
> Fix For: 0.10.2.0
>
>
> If a consumer group based on the old consumer has no active members, the 
> following error is raised:
> {code}
> Error: Executing consumer group command failed due to Expected a valid 
> consumer group state, but none found.
> {code}
> The command should instead report the existing offsets within the group (with 
> no data under the member column).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2075: KAFKA-4357: Fix consumer group describe output whe...

2016-11-01 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2075


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4357) Consumer group describe exception when there is no active member (old consumer)

2016-11-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626438#comment-15626438
 ] 

ASF GitHub Bot commented on KAFKA-4357:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2075


> Consumer group describe exception when there is no active member (old 
> consumer)
> ---
>
> Key: KAFKA-4357
> URL: https://issues.apache.org/jira/browse/KAFKA-4357
> Project: Kafka
>  Issue Type: Bug
>Reporter: Vahid Hashemian
>Assignee: Vahid Hashemian
> Fix For: 0.10.2.0
>
>
> If a consumer group based on the old consumer has no active members, the 
> following error is raised:
> {code}
> Error: Executing consumer group command failed due to Expected a valid 
> consumer group state, but none found.
> {code}
> The command should instead report the existing offsets within the group (with 
> no data under the member column).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-84: Support SASL/SCRAM mechanisms

2016-11-01 Thread Rajini Sivaram
Jun,

I have added the following text to the KIP. Does this match your
expectation?

*SaslHandshakeRequest version will be increased from 0 to 1 so that clients
can determine if the broker is capable of supporting SCRAM mechanisms using
ApiVersionsRequest. Java clients will not be updated to use
ApiVersionsRequest to choose SASL mechanism under this KIP. Java clients
will continue to use their configured SASL mechanism and will fail
connection if the requested mechanism is not enabled in the broker.*

Thank you,

Rajini

On Tue, Nov 1, 2016 at 4:54 PM, Jun Rao  wrote:

> Hi, Rajini,
>
> One more thing. It seems that we should bump up the version of
> SaslHandshakeRequest? This way, the client can figure out which SASL
> mechanisms the broker is capable of supporting through ApiVersionRequest.
> We discussed this briefly as part of KIP-43.
>
> Thanks,
>
> Jun
>
>
>
> On Tue, Nov 1, 2016 at 7:41 AM, Rajini Sivaram <
> rajinisiva...@googlemail.com
> > wrote:
>
> > If there are no more comments, I will start vote on this KIP later this
> > week. In the meantime, please feel free to post any feedback or
> > suggestions. Initial implementation is here:
> > https://github.com/apache/kafka/pull/2086.
> >
> > Thank you,
> >
> > Rajini
> >
> > On Thu, Oct 27, 2016 at 11:18 AM, Rajini Sivaram <
> > rajinisiva...@googlemail.com> wrote:
> >
> > > Jun,
> > >
> > > 4) Agree, it does make the implementation simpler. Updated KIP.
> > > 5) Thank you, that looks neater. Updated KIP.
> > >
> > > On Wed, Oct 26, 2016 at 6:59 PM, Jun Rao  wrote:
> > >
> > >> Hi, Rajini,
> > >>
> > >> Thanks for the reply.
> > >>
> > >> 4. Implementation wise, it seems to me that it's simpler to read from
> > the
> > >> cache than reading directly from ZK since the config manager already
> > >> propagates all config changes through ZK. Also, it's probably a good
> > idea
> > >> to limit the places in the code base that directly accesses ZK.
> > >>
> > >> 5. Yes, it seems that it makes sense to add the new SCRAM
> configurations
> > >> to
> > >> the existing /config/users/. I am not sure how one would
> > >> remove the SCRAM configurations in the example though, since the
> > >> properties specified in add-config are not the ones actually being
> > >> stored. An alternative is to do something like the following. It may
> > >> still feel a bit weird and I am wondering if there is a clearer approach.
> > >>
> > >> bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config
> > >> 'scram_sha-256=[password=alice-secret,iterations=4096],scram_sha-1=
> > >> [password=alice-secret,iterations=4096]' --entity-type users
> > >> --entity-name
> > >> alice
> > >>
> > >> bin/kafka-configs.sh --zookeeper localhost:2181 --alter
> --delete-config
> > >> 'scram_sha-256,scram_sha-1' --entity-type users --entity-name alice
> > >>
> > >> Thanks,
> > >>
> > >> Jun
> > >>
> > >>
> > >> On Wed, Oct 26, 2016 at 4:35 AM, Rajini Sivaram <
> > >> rajinisiva...@googlemail.com> wrote:
> > >>
> > >> > Hi Jun,
> > >> >
> > >> > Thank you for reviewing the KIP. Answers below:
> > >> >
> > >> >
> > >> >1. Yes, agree, Updated KIP.
> > >> >2. User specifies a password and iteration count. kafka-configs.sh
> > >> >generates a random salt and then generates StoredKey and
> ServerKey
> > >> for
> > >> > that
> > >> >password using the same message formatter implementation used for
> > >> SCRAM
> > >> >authentication. I have added some more detail to the KIP (
> > >> >https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > >> > 84%3A+Support+SASL+SCRAM+mechanisms#KIP-84:
> > SupportSASLSCRAMmechanisms-
> > >> > Tools).
> > >> >Does that answer the question?
> > >> >3. I started off thinking just one (SCRAM-SHA-256) and then
> thought
> > >> >another one is required to make sure that the implementation can
> > cope
> > >> > with
> > >> >multiple SCRAM mechanisms. But actually you are right, we can
> > support
> > >> > all.
> > >> >I haven't added the old md2/md5 mechanisms that aren't very
> secure,
> > >> but
> > >> > I
> > >> >have included all the SHA algorithms.
> > >> >4. Since credentials are only required when a connection is made,
> > it
> > >> >feels like we can just read the latest value from ZK rather than
> > >> cache
> > >> > all
> > >> >users and keep them updated. Having said that, we can always add
> > >> caching
> > >> >later if we find that the overhead of reading from ZK every time
> is
> > >> too
> > >> >expensive. Since caching doesn't change any externals, this can
> be
> > >> done
> > >> > in
> > >> >a JIRA later - would that be ok?
> > >> >5. Thanks, updated. I have changed the property names to include
> > >> >mechanism. To avoid four separate properties per mechanism in
> ZK, I
> > >> have
> > >> >changed the format to use a single property (lower-case mechanism
> > >> name)
> > >> >with four values concatenated in a format similar to SCRAM
> > messages.
> > >> 
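
To make the credential-generation step described in point 2 of the quoted
thread above concrete, here is a minimal sketch of the RFC 5802 computation of
StoredKey and ServerKey from a password, salt, and iteration count. This is
not the KIP's actual implementation; the class and method names are
illustrative.

{code}
import javax.crypto.Mac;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;

public class ScramCredentialSketch {
    public static void main(String[] args) throws Exception {
        String password = "alice-secret";
        int iterations = 4096;
        byte[] salt = new byte[32];
        new SecureRandom().nextBytes(salt);

        // SaltedPassword := Hi(password, salt, i), which is PBKDF2 with HMAC-SHA-256
        byte[] saltedPassword = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                .generateSecret(new PBEKeySpec(password.toCharArray(), salt, iterations, 256))
                .getEncoded();

        // ClientKey := HMAC(SaltedPassword, "Client Key"); StoredKey := H(ClientKey)
        byte[] clientKey = hmac(saltedPassword, "Client Key");
        byte[] storedKey = MessageDigest.getInstance("SHA-256").digest(clientKey);
        // ServerKey := HMAC(SaltedPassword, "Server Key")
        byte[] serverKey = hmac(saltedPassword, "Server Key");

        // The broker would persist salt, iterations, StoredKey, and ServerKey;
        // the cleartext password itself is never stored.
    }

    private static byte[] hmac(byte[] key, String message) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(message.getBytes(StandardCharsets.UTF_8));
    }
}
{code}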

Re: [DISCUSS] KIP-87 - Add Compaction Tombstone Flag

2016-11-01 Thread Becket Qin
My two cents on the magic value bumping.

Although the broker could infer from the request version whether it should
read the new attribute bit, the request version is currently not propagated
to the replica manager.
Bumping the magic value does not seem like a big overhead in this case, and
it also helps avoid the potential ambiguity, so it is probably better to
bump the magic value.

Thanks,

Jiangjie (Becket) Qin

On Mon, Oct 31, 2016 at 4:17 PM, Jay Kreps  wrote:

> Xavier,
>
> Yeah I think that post KIP-58 it is possible to depend on the delivery of
> messages in compacted topics, if you override the default compaction time.
> Prior to that it was true that you could control the delete retention, but
> any message including the tombstone could be compacted away prior to
> delivery. That was what I meant by non-deterministic. Now that we have that
> KIP I agree that at least it can be given an SLA like any other message
> (which was what I was meaning by deterministic).
>
> -Jay
>
> On Fri, Oct 28, 2016 at 10:15 AM, Xavier Léauté 
> wrote:
>
> > >
> > > I kind of agree with James that it is a bit questionable how valuable
> any
> > > data in a delete marker can be since it will be deleted somewhat
> > > nondeterministically.
> > >
> >
> > One could argue that even in normal topics, assuming a time-based log
> > retention policy is in place, any message will be deleted somewhat
> > nondeterministically, so why treat the compacted ones any differently?
> > To me at least, the retention setting for delete messages seems to be the
> > counterpart to the time-based retention setting for normal topics.
> >
> > > Currently the semantics of the messages are in the eye of the beholder:
> > > you can choose to interpret a stream as either appends or revisions.
> > > This proposal is changing that so that the semantics are determined by
> > > the sender.
> >
> >
> > Let's imagine someone wanted to augment this stream to include audit logs
> > for each record update, e.g. which user made the change. One would want
> to
> > include that information as part of the message, and have the ability to
> > mark a deletion.
> >
> > I don't think it changes the semantics in this case, you can still choose
> > to interpret the data as a stream of audit log entries (inserts),
> ignoring
> > the tombstone flag, or you can interpret it as a table modeling only the
> > latest version of each record. Whether a compacted or normal topic is
> used
> > shouldn't matter to the sender.
> >
>
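
To illustrate the ambiguity point: with a magic-value bump, a reader of the
log can decide from the message itself whether the tombstone bit is
meaningful, without needing the request version. A minimal sketch follows;
the bit position and magic number are hypothetical, not the KIP's final wire
format.

{code}
public final class TombstoneAttributeSketch {
    private static final byte MAGIC_WITH_TOMBSTONE = 2; // assumed bumped magic value
    private static final int TOMBSTONE_FLAG = 1 << 4;   // assumed free attribute bit

    public static boolean isTombstone(byte magic, byte attributes, boolean hasNullValue) {
        if (magic >= MAGIC_WITH_TOMBSTONE)
            return (attributes & TOMBSTONE_FLAG) != 0;  // explicit flag on the new format
        return hasNullValue;                            // old convention: null value means delete
    }
}
{code}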


[jira] [Commented] (KAFKA-4307) Inconsistent parameters between console producer and consumer

2016-11-01 Thread Manasvi Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626544#comment-15626544
 ] 

Manasvi Gupta commented on KAFKA-4307:
--

Yes.

Sorry, I was on holiday for Diwali. I will have a patch by tomorrow.



> Inconsistent parameters between console producer and consumer
> -
>
> Key: KAFKA-4307
> URL: https://issues.apache.org/jira/browse/KAFKA-4307
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Gwen Shapira
>Assignee: Manasvi Gupta
>  Labels: newbie
>
> kafka-console-producer uses --broker-list while kafka-console-consumer uses 
> --bootstrap-server.
> Let's add --bootstrap-server to the producer for some consistency?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-84: Support SASL/SCRAM mechanisms

2016-11-01 Thread Gwen Shapira
Wait, I thought SaslHandshakeResponse includes a list of mechanisms
supported, so I'm not sure why we need to bump the version?

I expect clients will send SaslHandshakeRequest_V0, see which mechanisms
are supported and make a call based on that? Which means KIP-35 is not
required in that case? Am I missing something?

On Tue, Nov 1, 2016 at 1:07 PM, Rajini Sivaram  wrote:

> Jun,
>
> I have added the following text to the KIP. Does this match your
> expectation?
>
> *SaslHandshakeRequest version will be increased from 0 to 1 so that clients
> can determine if the broker is capable of supporting SCRAM mechanisms using
> ApiVersionsRequest. Java clients will not be updated to use
> ApiVersionsRequest to choose SASL mechanism under this KIP. Java clients
> will continue to use their configured SASL mechanism and will fail
> connection if the requested mechanism is not enabled in the broker.*
>
> Thank you,
>
> Rajini
>
> On Tue, Nov 1, 2016 at 4:54 PM, Jun Rao  wrote:
>
> > Hi, Rajini,
> >
> > One more thing. It seems that we should bump up the version of
> > SaslHandshakeRequest? This way, the client can figure out which SASL
> > mechanisms the broker is capable of supporting through ApiVersionRequest.
> > We discussed this briefly as part of KIP-43.
> >
> > Thanks,
> >
> > Jun
> >
> >
> >
> > On Tue, Nov 1, 2016 at 7:41 AM, Rajini Sivaram <
> > rajinisiva...@googlemail.com
> > > wrote:
> >
> > > If there are no more comments, I will start vote on this KIP later this
> > > week. In the meantime, please feel free to post any feedback or
> > > suggestions. Initial implementation is here:
> > > https://github.com/apache/kafka/pull/2086.
> > >
> > > Thank you,
> > >
> > > Rajini
> > >
> > > On Thu, Oct 27, 2016 at 11:18 AM, Rajini Sivaram <
> > > rajinisiva...@googlemail.com> wrote:
> > >
> > > > Jun,
> > > >
> > > > 4) Agree, it does make the implementation simpler. Updated KIP.
> > > > 5) Thank you, that looks neater. Updated KIP.
> > > >
> > > > On Wed, Oct 26, 2016 at 6:59 PM, Jun Rao  wrote:
> > > >
> > > >> Hi, Rajini,
> > > >>
> > > >> Thanks for the reply.
> > > >>
> > > >> 4. Implementation wise, it seems to me that it's simpler to read
> from
> > > the
> > > >> cache than reading directly from ZK since the config manager already
> > > >> propagates all config changes through ZK. Also, it's probably a good
> > > idea
> > > >> to limit the places in the code base that directly accesses ZK.
> > > >>
> > > >> 5. Yes, it seems that it makes sense to add the new SCRAM
> > configurations
> > > >> to
> > > >> the existing /config/users/. I am not sure how one would
> > > >> remove the SCRAM configurations in the example though, since the
> > > >> properties specified in add-config are not the ones actually being
> > > >> stored. An alternative is to do something like the following. It may
> > > >> still feel a bit weird and I am wondering if there is a clearer
> > > >> approach.
> > > >>
> > > >> bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config
> > > >> 'scram_sha-256=[password=alice-secret,iterations=4096],scram_sha-1=
> > > >> [password=alice-secret,iterations=4096]' --entity-type users
> > > >> --entity-name
> > > >> alice
> > > >>
> > > >> bin/kafka-configs.sh --zookeeper localhost:2181 --alter
> > --delete-config
> > > >> 'scram_sha-256,scram_sha-1' --entity-type users --entity-name alice
> > > >>
> > > >> Thanks,
> > > >>
> > > >> Jun
> > > >>
> > > >>
> > > >> On Wed, Oct 26, 2016 at 4:35 AM, Rajini Sivaram <
> > > >> rajinisiva...@googlemail.com> wrote:
> > > >>
> > > >> > Hi Jun,
> > > >> >
> > > >> > Thank you for reviewing the KIP. Answers below:
> > > >> >
> > > >> >
> > > >> >1. Yes, agree, Updated KIP.
> > > >> >2. User specifies a password and iteration count.
> kafka-configs.sh
> > > >> >generates a random salt and then generates StoredKey and
> > ServerKey
> > > >> for
> > > >> > that
> > > >> >password using the same message formatter implementation used
> for
> > > >> SCRAM
> > > >> >authentication. I have added some more detail to the KIP (
> > > >> >https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > >> > 84%3A+Support+SASL+SCRAM+mechanisms#KIP-84:
> > > SupportSASLSCRAMmechanisms-
> > > >> > Tools).
> > > >> >Does that answer the question?
> > > >> >3. I started off thinking just one (SCRAM-SHA-256) and then
> > thought
> > > >> >another one is required to make sure that the implementation
> can
> > > cope
> > > >> > with
> > > >> >multiple SCRAM mechanisms. But actually you are right, we can
> > > support
> > > >> > all.
> > > >> >I haven't added the old md2/md5 mechanisms that aren't very
> > secure,
> > > >> but
> > > >> > I
> > > >> >have included all the SHA algorithms.
> > > >> >4. Since credentials are only required when a connection is
> made,
> > > it
> > > >> >feels like we can just read the latest value from ZK rather
> than
> > > >> cache
> > > >> > all
> > > >> >users and keep them updated.

Build failed in Jenkins: kafka-trunk-jdk8 #1013

2016-11-01 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-4361: Streams does not respect user configs for "default" 
params

[wangguoz] KAFKA-4352: instable ResetTool integration test

--
[...truncated 14363 lines...]
org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testSubscription PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldSetClusterMetadataOnAssignment STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldSetClusterMetadataOnAssignment PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithInternalTopics STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithInternalTopics PASSED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
testSpecificPartition STARTED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
testSpecificPartition PASSED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
shouldThrowStreamsExceptionAfterMaxAttempts STARTED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
shouldThrowStreamsExceptionAfterMaxAttempts PASSED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
shouldRetryWhenTimeoutExceptionOccursOnSend STARTED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
shouldRetryWhenTimeoutExceptionOccursOnSend PASSED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
testStreamPartitioner STARTED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
testStreamPartitioner PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldHaveCompactionPropSetIfSupplied STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldHaveCompactionPropSetIfSupplied PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldThrowIfNameIsNull STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldThrowIfNameIsNull PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldConfigureRetentionMsWithAdditionalRetentionWhenCompactAndDelete STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldConfigureRetentionMsWithAdditionalRetentionWhenCompactAndDelete PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldBeCompactedIfCleanupPolicyCompactOrCompactAndDelete STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldBeCompactedIfCleanupPolicyCompactOrCompactAndDelete PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldNotBeCompactedWhenCleanupPolicyIsDelete STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldNotBeCompactedWhenCleanupPolicyIsDelete PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldHavePropertiesSuppliedByUser STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldHavePropertiesSuppliedByUser PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldUseCleanupPolicyFromConfigIfSupplied STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldUseCleanupPolicyFromConfigIfSupplied PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldNotConfigureRetentionMsWhenCompact STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldNotConfigureRetentionMsWhenCompact PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldNotConfigureRetentionMsWhenDelete STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldNotConfigureRetentionMsWhenDelete PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics 
STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSink STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSink PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testNamedTopicMatchesAlreadyProvidedPattern STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testNamedTopicMatchesAlreadyProvidedPattern PASSED

org.apache.kafka.stream

[jira] [Created] (KAFKA-4364) Sink tasks expose secrets in DEBUG logging

2016-11-01 Thread Ryan P (JIRA)
Ryan P created KAFKA-4364:
-

 Summary: Sink tasks expose secrets in DEBUG logging
 Key: KAFKA-4364
 URL: https://issues.apache.org/jira/browse/KAFKA-4364
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Reporter: Ryan P
Assignee: Ewen Cheslack-Postava


As it stands today, worker tasks print secrets such as keystore/truststore 
passwords to their logs. 

https://github.com/confluentinc/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java#L213-L214

i.e.

[2016-11-01 12:50:59,254] DEBUG Initializing connector test-sink with config 
{consumer.ssl.truststore.password=password, 
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector, 
connection.password=password, producer.security.protocol=SSL, 
producer.ssl.truststore.password=password, topics=orders, tasks.max=1, 
consumer.ssl.truststore.location=/tmp/truststore/kafka.trustore.jks, 
producer.ssl.truststore.location=/tmp/truststore/kafka.trustore.jks, 
connection.user=connect, name=test-sink, auto.create=true, 
consumer.security.protocol=SSL, 
connection.url=jdbc:postgresql://localhost/test} 
(org.apache.kafka.connect.runtime.WorkerConnector:71)
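
One possible shape of a fix, sketched below: mask any config value whose key
looks like a secret before the map reaches a logger. The class name and key
heuristic are illustrative, not Connect's actual API. (Kafka's ConfigDef
already renders PASSWORD-typed values as [hidden], but these prefixed
client-override configs are passed through as plain strings.)

{code}
import java.util.LinkedHashMap;
import java.util.Map;

public final class ConfigRedactor {
    // Returns a copy of the config map with likely-secret values masked.
    public static Map<String, String> redact(Map<String, String> config) {
        Map<String, String> safe = new LinkedHashMap<>();
        for (Map.Entry<String, String> entry : config.entrySet()) {
            String key = entry.getKey().toLowerCase();
            boolean secret = key.endsWith("password") || key.contains("secret");
            safe.put(entry.getKey(), secret ? "[hidden]" : entry.getValue());
        }
        return safe;
    }
}
{code}

The DEBUG statement above would then log ConfigRedactor.redact(config) instead
of the raw map.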





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-84: Support SASL/SCRAM mechanisms

2016-11-01 Thread Rajini Sivaram
Gwen,

I had thought the same too and hence I am assuming that Java clients could
simply use SaslHandshakeRequest. SaslHandshakeRequest returns the list of
mechanisms enabled in the broker. I think Jun's point was that by
incrementing the version of SaslHandshakeRequest, clients can use
ApiVersionsRequest to figure out the mechanisms the broker is capable of
supporting and use that information to choose a mechanism to send in
SaslHandshakeRequest. Not sure how useful this actually is, so will wait
for Jun's response.



On Tue, Nov 1, 2016 at 8:18 PM, Gwen Shapira  wrote:

> Wait, I thought SaslHandshakeResponse includes a list of mechanisms
> supported, so I'm not sure why we need to bump the version?
>
> I expect clients will send SaslHandshakeRequest_V0, see which mechanisms
> are supported and make a call based on that? Which means KIP-35 is not
> required in that case? Am I missing something?
>
> On Tue, Nov 1, 2016 at 1:07 PM, Rajini Sivaram <
> rajinisiva...@googlemail.com
> > wrote:
>
> > Jun,
> >
> > I have added the following text to the KIP. Does this match your
> > expectation?
> >
> > *SaslHandshakeRequest version will be increased from 0 to 1 so that
> clients
> > can determine if the broker is capable of supporting SCRAM mechanisms
> using
> > ApiVersionsRequest. Java clients will not be updated to use
> > ApiVersionsRequest to choose SASL mechanism under this KIP. Java clients
> > will continue to use their configured SASL mechanism and will fail
> > connection if the requested mechanism is not enabled in the broker.*
> >
> > Thank you,
> >
> > Rajini
> >
> > On Tue, Nov 1, 2016 at 4:54 PM, Jun Rao  wrote:
> >
> > > Hi, Rajini,
> > >
> > > One more thing. It seems that we should bump up the version of
> > > SaslHandshakeRequest? This way, the client can figure out which SASL
> > > mechanisms the broker is capable of supporting through
> ApiVersionRequest.
> > > We discussed this briefly as part of KIP-43.
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > >
> > >
> > > On Tue, Nov 1, 2016 at 7:41 AM, Rajini Sivaram <
> > > rajinisiva...@googlemail.com
> > > > wrote:
> > >
> > > > If there are no more comments, I will start vote on this KIP later
> this
> > > > week. In the meantime, please feel free to post any feedback or
> > > > suggestions. Initial implementation is here:
> > > > https://github.com/apache/kafka/pull/2086.
> > > >
> > > > Thank you,
> > > >
> > > > Rajini
> > > >
> > > > On Thu, Oct 27, 2016 at 11:18 AM, Rajini Sivaram <
> > > > rajinisiva...@googlemail.com> wrote:
> > > >
> > > > > Jun,
> > > > >
> > > > > 4) Agree, it does make the implementation simpler. Updated KIP.
> > > > > 5) Thank you, that looks neater. Updated KIP.
> > > > >
> > > > > On Wed, Oct 26, 2016 at 6:59 PM, Jun Rao  wrote:
> > > > >
> > > > >> Hi, Rajini,
> > > > >>
> > > > >> Thanks for the reply.
> > > > >>
> > > > >> 4. Implementation wise, it seems to me that it's simpler to read
> > from
> > > > the
> > > > >> cache than reading directly from ZK since the config manager
> already
> > > > >> propagates all config changes through ZK. Also, it's probably a
> good
> > > > idea
> > > > >> to limit the places in the code base that directly accesses ZK.
> > > > >>
> > > > >> 5. Yes, it seems that it makes sense to add the new SCRAM
> > > configurations
> > > > >> to
> > > > >> the existing /config/users/. I am not sure how one would
> > > > >> remove the SCRAM configurations in the example though, since the
> > > > >> properties specified in add-config are not the ones actually being
> > > > >> stored. An alternative is to do something like the following. It may
> > > > >> still feel a bit weird and I am wondering if there is a clearer
> > > > >> approach.
> > > > >>
> > > > >> bin/kafka-configs.sh --zookeeper localhost:2181 --alter
> --add-config
> > > > >> 'scram_sha-256=[password=alice-secret,iterations=4096],
> scram_sha-1=
> > > > >> [password=alice-secret,iterations=4096]' --entity-type users
> > > > >> --entity-name
> > > > >> alice
> > > > >>
> > > > >> bin/kafka-configs.sh --zookeeper localhost:2181 --alter
> > > --delete-config
> > > > >> 'scram_sha-256,scram_sha-1' --entity-type users --entity-name
> alice
> > > > >>
> > > > >> Thanks,
> > > > >>
> > > > >> Jun
> > > > >>
> > > > >>
> > > > >> On Wed, Oct 26, 2016 at 4:35 AM, Rajini Sivaram <
> > > > >> rajinisiva...@googlemail.com> wrote:
> > > > >>
> > > > >> > Hi Jun,
> > > > >> >
> > > > >> > Thank you for reviewing the KIP. Answers below:
> > > > >> >
> > > > >> >
> > > > >> >1. Yes, agree, Updated KIP.
> > > > >> >2. User specifies a password and iteration count.
> > kafka-configs.sh
> > > > >> >generates a random salt and then generates StoredKey and
> > > ServerKey
> > > > >> for
> > > > >> > that
> > > > >> >password using the same message formatter implementation used
> > for
> > > > >> SCRAM
> > > > >> >authentication. I have added some more detail to the KIP (
> > > > >> >https://cwiki.apache.org/confluence/di

[jira] [Commented] (KAFKA-4307) Inconsistent parameters between console producer and consumer

2016-11-01 Thread Ishita Mandhan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626723#comment-15626723
 ] 

Ishita Mandhan commented on KAFKA-4307:
---

Sure, take your time! Was just checking to see if it was up for grabs :)

> Inconsistent parameters between console producer and consumer
> -
>
> Key: KAFKA-4307
> URL: https://issues.apache.org/jira/browse/KAFKA-4307
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Gwen Shapira
>Assignee: Manasvi Gupta
>  Labels: newbie
>
> kafka-console-producer uses --broker-list while kafka-console-consumer uses 
> --bootstrap-server.
> Let's add --bootstrap-server to the producer for some consistency?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2080: KAFKA-4302: Simplify KTableSource

2016-11-01 Thread mjsax
Github user mjsax closed the pull request at:

https://github.com/apache/kafka/pull/2080


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #2079: HOTFIX: improve error message on invalid input rec...

2016-11-01 Thread mjsax
Github user mjsax closed the pull request at:

https://github.com/apache/kafka/pull/2079


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4302) Simplify KTableSource

2016-11-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626727#comment-15626727
 ] 

ASF GitHub Bot commented on KAFKA-4302:
---

Github user mjsax closed the pull request at:

https://github.com/apache/kafka/pull/2080


> Simplify KTableSource
> -
>
> Key: KAFKA-4302
> URL: https://issues.apache.org/jira/browse/KAFKA-4302
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Minor
> Fix For: 0.10.2.0
>
>
> With the new "interactive queries" feature, source tables are always 
> materialized. Thus, we can remove the stale flag {{KTableSource#materialized}} 
> (which is always true now) to simplify the code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2062: Expose name of processor on KStream

2016-11-01 Thread thijsc
Github user thijsc closed the pull request at:

https://github.com/apache/kafka/pull/2062


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #2088: Cross compile to Scala 2.12.0-RC2

2016-11-01 Thread leachbj
GitHub user leachbj opened a pull request:

https://github.com/apache/kafka/pull/2088

Cross compile to Scala 2.12.0-RC2



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/leachbj/kafka scala-2.12.0-RC2-build

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2088.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2088


commit 8717b77e73237817c93719407a4af1b9d5f6e5c1
Author: Bernard Leach 
Date:   2016-11-01T22:01:34Z

Cross compile to Scala 2.12.0-RC2




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4360) Controller may deadLock when autoLeaderRebalance encounter zk expired

2016-11-01 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626869#comment-15626869
 ] 

Guozhang Wang commented on KAFKA-4360:
--

Thanks for the find [~Json Tu]. And I agree with Jiangjie that we could 
consider moving {{onControllerResignation}} out of the lock itself.

> Controller may deadLock when autoLeaderRebalance encounter zk expired
> -
>
> Key: KAFKA-4360
> URL: https://issues.apache.org/jira/browse/KAFKA-4360
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.9.0.0, 0.9.0.1, 0.10.0.0, 0.10.0.1
>Reporter: Json Tu
>  Labels: bugfix
> Attachments: deadlock_patch, yf-mafka2-common02_jstack.txt
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> When the controller has a checkAndTriggerPartitionRebalance task in the 
> autoRebalanceScheduler and the ZK session expires at that time, it can run 
> into a deadlock.
> The scenario is as follows. When the ZK session expires, the ZK thread calls 
> handleNewSession (defined in SessionExpirationListener), which acquires 
> controllerContext.controllerLock and then calls 
> autoRebalanceScheduler.shutdown(), which must wait for all tasks in the 
> scheduler to complete. But the scheduler's thread pool also needs to acquire 
> controllerContext.controllerLock, which is already held by the ZK callback 
> thread, so the two deadlock.
> This causes at least two problems. First, the broker's id cannot be 
> re-registered in ZooKeeper, so the new controller considers the broker dead. 
> Second, the process cannot be stopped by kafka-server-stop.sh, because the 
> shutdown function also needs controllerContext.controllerLock; the broker 
> can only be killed with kill -9.
> The attached jstack file was captured when my Kafka process could not be 
> shut down by kafka-server-stop.sh.
> I have hit this scenario several times, so I think this is an unresolved bug 
> in Kafka.
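
The reported cycle boils down to a lock held across a blocking shutdown, as in
this distilled sketch (names are illustrative, not the controller's actual
code):

{code}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ControllerDeadlockSketch {
    private final Object controllerLock = new Object();
    private final ScheduledExecutorService autoRebalanceScheduler =
            Executors.newSingleThreadScheduledExecutor();

    void scheduleRebalance() {
        autoRebalanceScheduler.execute(() -> {
            synchronized (controllerLock) {   // the queued task needs the lock...
                // checkAndTriggerPartitionRebalance()
            }
        });
    }

    // Invoked from the ZK session-expiration callback.
    void handleNewSession() throws InterruptedException {
        synchronized (controllerLock) {       // ...which this thread already holds
            autoRebalanceScheduler.shutdown();
            // Deadlock: we wait for the queued task to finish, but it is
            // blocked on controllerLock, which we hold until this block exits.
            autoRebalanceScheduler.awaitTermination(1, TimeUnit.DAYS);
        }
    }
}
{code}

Moving the shutdown (or the whole onControllerResignation) outside the lock,
as suggested above, breaks the cycle.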



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2089: Fix documentation of compaction

2016-11-01 Thread apurvam
GitHub user apurvam opened a pull request:

https://github.com/apache/kafka/pull/2089

Fix documentation of compaction

Also cleaned up some of the language around compaction guarantees.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apurvam/kafka fix-documentation-of-compaction

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2089.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2089


commit 0af1a864dda32cbf58d270c935dc5e75a38b7d18
Author: Apurva Mehta 
Date:   2016-11-01T22:21:30Z

MINOR: fix duplicate line in docs for compaction.

commit 03c5bddced47719178117e8c9e2b5b23f472e085
Author: Apurva Mehta 
Date:   2016-11-01T22:27:35Z

Fix line length to be consistent with the rest of the file




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-4362) Consumer can fail after reassignment of the offsets topic partition

2016-11-01 Thread Joel Koshy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Koshy updated KAFKA-4362:
--
Summary: Consumer can fail after reassignment of the offsets topic 
partition  (was: Offset commits fail after a partition reassignment)

Yes, definitely an issue, so I'm updating the title. The reassignment of the 
offsets topic will perpetually cause offset commits to fail. A new consumer 
joining the group will talk to the new coordinator and incorrectly form an 
isolated group. Any rebalance of the remaining instances of the actual group 
(which is still talking to the old coordinator) can hit this error and die:

{code}
[2016-11-01 15:37:56,120] WARN Auto offset commit failed for group testgroup: 
Unexpected error in commit: The server experienced an unexpected error when 
processing the request 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
...


...

[2016-11-01 15:37:56,120] INFO Revoking previously assigned partitions 
[testtopic-0, testtopic-1] for group testgroup 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2016-11-01 15:37:56,120] INFO (Re-)joining group testgroup 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2016-11-01 15:37:56,124] ERROR Error processing message, terminating consumer 
process:  (kafka.tools.ConsoleConsumer$)
org.apache.kafka.common.KafkaException: Unexpected error from SyncGroup: The 
server experienced an unexpected error when processing the request
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:518)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:485)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:742)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:722)
at 
org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:186)
at 
org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:149)
at 
org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:116)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:479)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:316)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:256)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:180)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:308)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:277)
at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:259)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1013)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:979)
at kafka.consumer.NewShinyConsumer.receive(BaseConsumer.scala:100)
at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:120)
at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:75)
at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:50)
at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
{code}

> Consumer can fail after reassignment of the offsets topic partition
> ---
>
> Key: KAFKA-4362
> URL: https://issues.apache.org/jira/browse/KAFKA-4362
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Joel Koshy
>Assignee: Jiangjie Qin
>
> When a consumer offsets topic partition reassignment completes, an offset 
> commit shows this:
> {code}
> java.lang.IllegalArgumentException: Message format version for partition 100 
> not found
> at 
> kafka.coordinator.GroupMetadataManager$$anonfun$14.apply(GroupMetadataManager.scala:633)
>  ~[kafka_2.10.jar:?]
> at 
> kafka.coordinator.GroupMetadataManager$$anonfun$14.apply(GroupMetadataManager.scala:633)
>  ~[kafka_2.10.jar:?]
> at scala.Option.getOrElse(Option.scala:120) ~[scala-library-2.10.4.jar:?]
> at 
> kafka.coordinator.GroupMetadataManager.kafka$coordinator$GroupMetadataManager$$getMessageFormatVersionAndTimestamp(GroupMetadataManager.scala:632)
>  ~[kafka_2.10.jar:?]
> at

[jira] [Issue Comment Deleted] (KAFKA-3554) Generate actual data with specific compression ratio and add multi-thread support in the ProducerPerformance tool.

2016-11-01 Thread David (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David updated KAFKA-3554:
-
Comment: was deleted

(was: Hey guys, Is there any ConsumerPerformance code in Java instead of Scala? 
I looked at the kafka repo and I only see ProducerPerformance code.)

> Generate actual data with specific compression ratio and add multi-thread 
> support in the ProducerPerformance tool.
> --
>
> Key: KAFKA-3554
> URL: https://issues.apache.org/jira/browse/KAFKA-3554
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.1
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.10.2.0
>
>
> Currently the ProducerPerformance tool always generates payloads with the 
> same bytes. This does not work well for testing compressed data, because the 
> payload is extremely compressible no matter how big it is.
> We can make some changes to make the tool more useful for compressed 
> messages. Currently I am generating payloads containing integers from a 
> given range; by adjusting the range of the integers, we can get different 
> compression ratios.
> API-wise, we can either let users specify the integer range or the expected 
> compression ratio (we would do some probing to find the corresponding range 
> for them).
> Besides that, in many cases it is useful to have multiple producer threads 
> when the producer threads themselves are the bottleneck. Admittedly, people 
> can run multiple ProducerPerformance instances to achieve a similar result, 
> but that is still different from the real case when people actually use the 
> producer.
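
A minimal sketch of the payload idea described above (draw integers from a
bounded range, so a narrower range yields a more compressible payload); the
class and parameter names are illustrative, not the tool's actual options:

{code}
import java.nio.charset.StandardCharsets;
import java.util.Random;

public class CompressiblePayloadSketch {
    // Fill sizeBytes bytes with random integers drawn from [0, range).
    public static byte[] generate(int sizeBytes, int range, Random random) {
        StringBuilder sb = new StringBuilder(sizeBytes + 16);
        while (sb.length() < sizeBytes)
            sb.append(random.nextInt(range)).append(' ');
        return sb.substring(0, sizeBytes).getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        Random random = new Random(42);
        byte[] veryCompressible = generate(1024, 10, random);        // few distinct tokens
        byte[] lessCompressible = generate(1024, 1_000_000, random); // many distinct tokens
    }
}
{code}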



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1665

2016-11-01 Thread Apache Jenkins Server
See 

Changes:

[jason] KAFKA-4357; Fix consumer group describe output when there is no active

--
[...truncated 7983 lines...]

kafka.api.AuthorizerIntegrationTest > testAuthorizationWithTopicNotExisting 
STARTED

kafka.api.AuthorizerIntegrationTest > testAuthorizationWithTopicNotExisting 
PASSED

kafka.api.AuthorizerIntegrationTest > testListOffsetsWithTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testListOffsetsWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteWithDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteWithDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicAndGroupRead STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicAndGroupRead PASSED

kafka.api.AuthorizerIntegrationTest > 
testPatternSubscriptionNotMatchingInternalTopic STARTED

kafka.api.AuthorizerIntegrationTest > 
testPatternSubscriptionNotMatchingInternalTopic PASSED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteWithoutDescribe 
STARTED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteWithoutDescribe 
PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicWrite STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoGroupAccess STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoAccess STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithoutTopicDescribeAccess 
STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithoutTopicDescribeAccess 
PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoGroupAccess STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoAccess STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithTopicAndGroupRead 
STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithTopicAndGroupRead 
PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testAuthorizationWithTopicExisting STARTED

kafka.api.AuthorizerIntegrationTest > testAuthorizationWithTopicExisting PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > 
testPatternSubscriptionMatchingInternalTopic STARTED

kafka.api.AuthorizerIntegrationTest > 
testPatternSubscriptionMatchingInternalTopic PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicAndGroupRead STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicAndGroupRead PASSED

kafka.api.AuthorizerIntegrationTest > 
testSimpleConsumeWithExplicitSeekAndNoGroupAccess STARTED

kafka.api.AuthorizerIntegrationTest > 
testSimpleConsumeWithExplicitSeekAndNoGroupAccess PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testCoordinatorFailover STARTED

kafka.api.SaslPlainPlaintextConsumerTest > testCoordinatorFailover PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testSimpleConsumption STARTED

kafka.api.SaslPlainPlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.RequestResponseSerializationTest > 
testSerializationAndDeserialization STARTED

kafka.api.RequestResponseSerializationTest > 
testSerializationAndDeserialization PASSED

kafka.api.RequestResponseSerializationTest > testFetchResponseVersion STARTED

kafka.api.RequestResponseSerializationTest > testFetchResponseVersion PASSED

kafka.api.RequestResponseSerializationTest > testProduceResponseVersion STARTED

kafka.api.RequestResponseSerializationTest > testProduceResponseVersion PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testN

[jira] [Commented] (KAFKA-4113) Allow KTable bootstrap

2016-11-01 Thread Greg Fodor (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15627097#comment-15627097
 ] 

Greg Fodor commented on KAFKA-4113:
---

Hey [~guozhang], I have been able to reproduce a bootstrapping issue on a fresh 
local node, and I think there is either something I need clarity on or an 
actual bug.

The root cause seems to be here:

https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/processor/internals/AbstractTask.java#L137

For a completely new node/topology reading a KTable topic with existing state, 
there is no committed consumer offset, so this initializes the offset limit to 
0, which means the state restoration loop basically consumes no records. I've 
only reproduced this locally, by sinking data to a KTable topic and then 
initializing the topology for the first time, which is a one-time event, but 
I wonder whether this default offset limit of zero could also cause issues 
later in the lifecycle of the topology.
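
A hedged Java sketch of that behaviour, with approximate names (not the actual 
AbstractTask code): a missing committed offset yields a limit of 0, and a 
restore loop bounded by that limit exits immediately.

{code}
import java.util.Collections;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

// Simplified illustration, not the actual Kafka Streams restoration code.
class RestoreSketch {

    // With no committed offset (a brand-new application), the limit falls
    // back to 0 ...
    static long offsetLimit(Consumer<byte[], byte[]> consumer, TopicPartition tp) {
        OffsetAndMetadata committed = consumer.committed(tp);
        return committed == null ? 0L : committed.offset();
    }

    // ... so a restore loop bounded by that limit never consumes a record,
    // even though the changelog topic has existing state.
    static void restoreTo(long limit, Consumer<byte[], byte[]> consumer, TopicPartition tp) {
        consumer.seekToBeginning(Collections.singletonList(tp));
        long restored = 0;
        while (restored < limit) {
            for (ConsumerRecord<byte[], byte[]> record : consumer.poll(100).records(tp)) {
                restored++; // apply the record to the local state store here
            }
        }
    }
}
{code}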

> Allow KTable bootstrap
> --
>
> Key: KAFKA-4113
> URL: https://issues.apache.org/jira/browse/KAFKA-4113
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Guozhang Wang
>
> On the mailing list, there are multiple requests about the possibility to 
> "fully populate" a KTable before actual stream processing starts.
> Even if it is somewhat difficult to define when the initial populating phase 
> should end, there are multiple possibilities.
> The main idea is that there is a rarely updated topic that contains the 
> data. Only after this topic has been read completely and the KTable is 
> ready should the application start processing. This would indicate that on 
> startup, the current partition sizes must be fetched and stored, and once 
> the KTable is populated up to those offsets, stream processing can start.
> Other discussed ideas are:
> 1) an initial fixed time period for populating
> (it might be hard for a user to estimate the correct value)
> 2) an "idle" period, i.e., if there is no update to a KTable for a certain 
> time, we consider it populated
> 3) a timestamp cut-off point, i.e., all records with an older timestamp 
> belong to the initial populating phase
> The API change is not decided yet, and the API design is part of this JIRA.
> One suggestion (for option (4)) was:
> {noformat}
> KTable table = builder.table("topic", 1000); // populate the table without 
> reading any other topics until seeing one record with timestamp 1000.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2090: KAFKA-4269: Follow up for 0.10.1 branch -update to...

2016-11-01 Thread bbejeck
GitHub user bbejeck opened a pull request:

https://github.com/apache/kafka/pull/2090

KAFKA-4269: Follow up for 0.10.1 branch -update topic subscriptions for 
regex 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbejeck/kafka 
KAFKA-4269_follow_up_for_updating_topic_groups_for_regex_subscription

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2090.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2090


commit 260a1378f07a739c9c84d2e87738e545072b5bc0
Author: Bill Bejeck 
Date:   2016-10-20T04:04:28Z

KAFKA-4269: Update topic subscription when regex pattern specified out of 
topicGroups method

…d out of topicGroups method. The topicGroups method is only called from 
StreamPartitionAssignor when the KafkaStreams object is the leader; the 
subscription update needs to be executed for all clients.

Author: bbejeck 

Reviewers: Damian Guy , Guozhang Wang 


Closes #2005 from 
bbejeck/KAFKA-4269_multiple_kstream_instances_mult_consumers_npe




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4269) Multiple KStream instances with at least one Regex source causes NPE when using multiple consumers

2016-11-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15627157#comment-15627157
 ] 

ASF GitHub Bot commented on KAFKA-4269:
---

GitHub user bbejeck opened a pull request:

https://github.com/apache/kafka/pull/2090

KAFKA-4269: Follow up for 0.10.1 branch -update topic subscriptions for 
regex 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbejeck/kafka 
KAFKA-4269_follow_up_for_updating_topic_groups_for_regex_subscription

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2090.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2090


commit 260a1378f07a739c9c84d2e87738e545072b5bc0
Author: Bill Bejeck 
Date:   2016-10-20T04:04:28Z

KAFKA-4269: Update topic subscription when regex pattern specified out of 
topicGroups method

…d out of topicGroups method. The topicGroups method is only called from 
StreamPartitionAssignor when the KafkaStreams object is the leader; the 
subscription update needs to be executed for all clients.

Author: bbejeck 

Reviewers: Damian Guy , Guozhang Wang 


Closes #2005 from 
bbejeck/KAFKA-4269_multiple_kstream_instances_mult_consumers_npe




> Multiple KStream instances with at least one Regex source causes NPE when 
> using multiple consumers
> --
>
> Key: KAFKA-4269
> URL: https://issues.apache.org/jira/browse/KAFKA-4269
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
> Fix For: 0.10.2.0
>
>
> I discovered this issue while doing testing for KAFKA-4114. 
> KAFKA-4131 fixed the issue of a _single_ KStream with a regex source on 
> partitioned topics across multiple consumers:
> // KAFKA-4131 fixed this case, assuming the "foo*" topics are partitioned
> KStream kstream = builder.stream(Pattern.compile("foo.*"));
> KafkaStreams streams = new KafkaStreams(builder, props);
> streams.start();
> This is a new issue where there are _multiple_ KStream instances (and one 
> has a regex source) within a single KafkaStreams object. When running the 
> second or "following" consumer, NPE errors are generated in the 
> RecordQueue.addRawRecords method when attempting to consume records. 
> For example:
> KStream kstream = builder.stream(Pattern.compile("foo.*"));
> KStream kstream2 = builder.stream(...); // can be a regex or named-topic source
> KafkaStreams streams = new KafkaStreams(builder, props);
> streams.start();
> Adding an additional KStream instance like the above (whether a regex or 
> named-topic source) causes an NPE when the instance runs as a "follower".
> From my initial debugging I can see the TopicPartition assignments being set 
> on the "follower" KafkaStreams instance, but I need to track down why and 
> where all assignments aren't being set.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4362) Consumer can fail after reassignment of the offsets topic partition

2016-11-01 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-4362:

Assignee: Mayuresh Gharat  (was: Jiangjie Qin)

> Consumer can fail after reassignment of the offsets topic partition
> ---
>
> Key: KAFKA-4362
> URL: https://issues.apache.org/jira/browse/KAFKA-4362
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Joel Koshy
>Assignee: Mayuresh Gharat
>
> When a consumer offsets topic partition reassignment completes, an offset 
> commit shows this:
> {code}
> java.lang.IllegalArgumentException: Message format version for partition 100 
> not found
> at 
> kafka.coordinator.GroupMetadataManager$$anonfun$14.apply(GroupMetadataManager.scala:633)
>  ~[kafka_2.10.jar:?]
> at 
> kafka.coordinator.GroupMetadataManager$$anonfun$14.apply(GroupMetadataManager.scala:633)
>  ~[kafka_2.10.jar:?]
> at scala.Option.getOrElse(Option.scala:120) ~[scala-library-2.10.4.jar:?]
> at 
> kafka.coordinator.GroupMetadataManager.kafka$coordinator$GroupMetadataManager$$getMessageFormatVersionAndTimestamp(GroupMetadataManager.scala:632)
>  ~[kafka_2.10.jar:?]
> at 
> ...
> {code}
> The issue is that the replica has been deleted, so 
> {{GroupMetadataManager.getMessageFormatVersionAndTimestamp}} throws this 
> exception instead, which propagates as an unknown error.
> Unfortunately, consumers don't handle this and will fail their offset 
> commits.
> One workaround in the above situation is to bounce the cluster - the consumer 
> will then be forced to rediscover the group coordinator.
> (Incidentally, the message incorrectly prints the number of partitions 
> instead of the actual partition.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2089: MINOR: Fix documentation of compaction

2016-11-01 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2089


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [DISCUSS] KIP-87 - Add Compaction Tombstone Flag

2016-11-01 Thread Mayuresh Gharat
On a side note, a use case came to mind: if we ever add headers to Kafka, we
could support compaction decisions based on headers and make the compaction
policy pluggable on the broker side (a rough interface sketch follows this
thread). This might give more flexibility for compaction in the future.

Thanks,

Mayuresh

On Tue, Nov 1, 2016 at 1:15 PM, Becket Qin  wrote:

> My two cents on the magic value bumping.
>
> Although the broker can infer from the request version whether it should
> read the new attribute bit or not, the request version is currently not
> propagated to the replica manager. Bumping up the magic value does not seem
> like a big overhead in this case, and it also helps avoid potential
> ambiguity, so it is probably better to bump up the magic value.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
> On Mon, Oct 31, 2016 at 4:17 PM, Jay Kreps  wrote:
>
> > Xavier,
> >
> > Yeah I think that post KIP-58 it is possible to depend on the delivery
> > of messages in compacted topics, if you override the default compaction
> > time. Prior to that it was true that you could control the delete
> > retention, but any message, including the tombstone, could be compacted
> > away prior to delivery. That was what I meant by non-deterministic. Now
> > that we have that KIP, I agree that at least it can be given an SLA like
> > any other message (which is what I meant by deterministic).
> >
> > -Jay
> >
> > On Fri, Oct 28, 2016 at 10:15 AM, Xavier Léauté 
> > wrote:
> >
> > > > I kind of agree with James that it is a bit questionable how
> > > > valuable any data in a delete marker can be since it will be deleted
> > > > somewhat nondeterministically.
> > >
> > > One could argue that even in normal topics, assuming a time-based log
> > > retention policy is in place, any message will be deleted somewhat
> > > nondeterministically, so why treat the compacted ones any differently?
> > > To me at least, the retention setting for delete messages seems to be
> > > the counterpart to the time-based retention setting for normal topics.
> > >
> > > > Currently the semantics of the messages are in the eye of the
> > > > beholder--you can choose to interpret a stream as either being
> > > > appends or revisions as you choose. This proposal is changing that
> > > > so that the semantics are determined by the sender.
> > >
> > > Let's imagine someone wanted to augment this stream to include audit
> > > logs for each record update, e.g. which user made the change. One
> > > would want to include that information as part of the message, and
> > > have the ability to mark a deletion.
> > >
> > > I don't think it changes the semantics in this case: you can still
> > > choose to interpret the data as a stream of audit log entries
> > > (inserts), ignoring the tombstone flag, or you can interpret it as a
> > > table modeling only the latest version of each record. Whether a
> > > compacted or normal topic is used shouldn't matter to the sender.



-- 
-Regards,
Mayuresh R. Gharat
(862) 250-7125
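
As a purely illustrative aside on the pluggable-policy idea raised at the top 
of this thread: none of the following types exist in Kafka; this is only a 
sketch of what a header-aware, broker-side compaction hook might look like in 
Java.

{code}
import java.util.Map;

// Hypothetical interface: Kafka has no pluggable compaction policy today.
public interface CompactionPolicy {

    // Decide whether a record is a delete marker for its key, e.g. based on
    // a header rather than on a null value.
    boolean isTombstone(byte[] key, byte[] value, Map<String, byte[]> headers);
}

// Example policy: treat any record carrying a "deleted" header as a tombstone.
class HeaderTombstonePolicy implements CompactionPolicy {
    @Override
    public boolean isTombstone(byte[] key, byte[] value, Map<String, byte[]> headers) {
        return headers != null && headers.containsKey("deleted");
    }
}
{code}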


Jenkins build is back to normal : kafka-trunk-jdk7 #1666

2016-11-01 Thread Apache Jenkins Server
See 



Build failed in Jenkins: kafka-trunk-jdk8 #1015

2016-11-01 Thread Apache Jenkins Server
See 

Changes:

[cshapi] MINOR: Fix documentation of compaction

--
[...truncated 7976 lines...]

kafka.producer.ProducerTest > testSendToNewTopic STARTED

kafka.producer.ProducerTest > testSendToNewTopic PASSED

kafka.producer.ProducerTest > testAsyncSendCanCorrectlyFailWithTimeout STARTED

kafka.producer.ProducerTest > testAsyncSendCanCorrectlyFailWithTimeout PASSED

kafka.producer.ProducerTest > testSendNullMessage STARTED

kafka.producer.ProducerTest > testSendNullMessage PASSED

kafka.producer.ProducerTest > testUpdateBrokerPartitionInfo STARTED

kafka.producer.ProducerTest > testUpdateBrokerPartitionInfo PASSED

kafka.producer.ProducerTest > testSendWithDeadBroker STARTED

kafka.producer.ProducerTest > testSendWithDeadBroker PASSED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit STARTED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidNewConsumerValidConfig STARTED

kafka.tools.ConsoleConsumerTest > shouldParseValidNewConsumerValidConfig PASSED

kafka.tools.ConsoleConsumerTest > shouldStopWhenOutputCheckErrorFails STARTED

kafka.tools.ConsoleConsumerTest > shouldStopWhenOutputCheckErrorFails PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithStringOffset STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithStringOffset PASSED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile STARTED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithNumericOffset STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithNumericOffset PASSED

kafka.tools.ConsoleConsumerTest > testDefaultConsumer STARTED

kafka.tools.ConsoleConsumerTest > testDefaultConsumer PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidOldConsumerValidConfig STARTED

kafka.tools.ConsoleConsumerTest > shouldParseValidOldConsumerValidConfig PASSED

kafka.tools.ConsoleProducerTest > testParseKeyProp STARTED

kafka.tools.ConsoleProducerTest > testParseKeyProp PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsOldProducer STARTED

kafka.tools.ConsoleProducerTest > testValidConfigsOldProducer PASSED

kafka.tools.ConsoleProducerTest > testInvalidConfigs STARTED

kafka.tools.ConsoleProducerTest > testInvalidConfigs PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsNewProducer STARTED

kafka.tools.ConsoleProducerTest > testValidConfigsNewProducer PASSED

kafka.tools.MirrorMakerTest > 
testDefaultMirrorMakerMessageHandlerWithNoTimestampInSourceMessage STARTED

kafka.tools.MirrorMakerTest > 
testDefaultMirrorMakerMessageHandlerWithNoTimestampInSourceMessage PASSED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandler STARTED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandler PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic STARTED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.common.ConfigTest > testInvalidGroupIds STARTED

kafka.common.ConfigTest > testInvalidGroupIds PASSED

kafka.common.ConfigTest > testInvalidClientIds STARTED

kafka.common.ConfigTest > testInvalidClientIds PASSED

kafka.common.ZkNodeChangeNotificationListenerTest > testProcessNotification 
STARTED

kafka.common.ZkNodeChangeNotificationListenerTest > testProcessNotification 
PASSED

kafka.common.TopicTest > testInvalidTopicNames STARTED

kafka.common.TopicTest > testInvalidTopicNames PASSED

kafka.common.TopicTest > testTopicHasCollision STARTED

kafka.common.TopicTest > testTopicHasCollision PASSED

kafka.common.TopicTest > 

[GitHub] kafka pull request #2050: Replaced unnecessary isDefined and get on option v...

2016-11-01 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2050


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4362) Consumer can fail after reassignment of the offsets topic partition

2016-11-01 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15627824#comment-15627824
 ] 

Jason Gustafson commented on KAFKA-4362:


[~jjkoshy] Good find. Can you clarify what you mean when you say this causes 
offset commits to perpetually fail? I would expect that offset commits issued 
after the reassignment completes would get the COORDINATOR_NOT_AVAILABLE 
error. Also, that seems like the proper error code to return, rather than 
trying to handle UNKNOWN on the client side?
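
For illustration only, a hedged Java sketch of that error mapping; the real 
GroupMetadataManager is Scala, these names are hypothetical, and in the 0.10.x 
protocol the enum constant is GROUP_COORDINATOR_NOT_AVAILABLE.

{code}
import org.apache.kafka.common.protocol.Errors;

// Hypothetical broker-side mapping, not the actual coordinator code:
// translate a missing offsets-topic replica into a retriable coordinator
// error instead of an UNKNOWN one.
class OffsetCommitErrorSketch {
    static short errorForOffsetsPartition(boolean messageFormatKnown) {
        return messageFormatKnown
                ? Errors.NONE.code()
                // Retriable: the client rediscovers the group coordinator
                // instead of failing commits until the cluster is bounced.
                : Errors.GROUP_COORDINATOR_NOT_AVAILABLE.code();
    }
}
{code}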

> Consumer can fail after reassignment of the offsets topic partition
> ---
>
> Key: KAFKA-4362
> URL: https://issues.apache.org/jira/browse/KAFKA-4362
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Joel Koshy
>Assignee: Mayuresh Gharat
>
> When a consumer offsets topic partition reassignment completes, an offset 
> commit shows this:
> {code}
> java.lang.IllegalArgumentException: Message format version for partition 100 
> not found
> at 
> kafka.coordinator.GroupMetadataManager$$anonfun$14.apply(GroupMetadataManager.scala:633)
>  ~[kafka_2.10.jar:?]
> at 
> kafka.coordinator.GroupMetadataManager$$anonfun$14.apply(GroupMetadataManager.scala:633)
>  ~[kafka_2.10.jar:?]
> at scala.Option.getOrElse(Option.scala:120) ~[scala-library-2.10.4.jar:?]
> at 
> kafka.coordinator.GroupMetadataManager.kafka$coordinator$GroupMetadataManager$$getMessageFormatVersionAndTimestamp(GroupMetadataManager.scala:632)
>  ~[kafka_2.10.jar:?]
> at 
> ...
> {code}
> The issue is that the replica has been deleted, so 
> {{GroupMetadataManager.getMessageFormatVersionAndTimestamp}} throws this 
> exception instead, which propagates as an unknown error.
> Unfortunately, consumers don't handle this and will fail their offset 
> commits.
> One workaround in the above situation is to bounce the cluster - the consumer 
> will then be forced to rediscover the group coordinator.
> (Incidentally, the message incorrectly prints the number of partitions 
> instead of the actual partition.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4348) On Mac OS, KafkaConsumer.poll returns 0 when there are still messages on Kafka server

2016-11-01 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15627866#comment-15627866
 ] 

Jason Gustafson commented on KAFKA-4348:


This sounds like the same issue as KAFKA-3135. We never really got to the 
bottom of it, but I think a workaround was increasing the size of the receive 
buffer (receive.buffer.bytes). Maybe we can close this issue and continue the 
discussion there?
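
A minimal sketch of that workaround, assuming the standard Java consumer; the 
1 MB value is an arbitrary example, not a tested recommendation.

{code}
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Raise the consumer's socket receive buffer via receive.buffer.bytes.
public class BufferWorkaround {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "test-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put(ConsumerConfig.RECEIVE_BUFFER_CONFIG, 1024 * 1024); // receive.buffer.bytes
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            // subscribe and poll as usual ...
        }
    }
}
{code}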

> On Mac OS, KafkaConsumer.poll returns 0 when there are still messages on 
> Kafka server
> -
>
> Key: KAFKA-4348
> URL: https://issues.apache.org/jira/browse/KAFKA-4348
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0, 0.9.0.1, 0.10.0.1
> Environment: Mac OS X EI Capitan, Java 1.8.0_111
>Reporter: Yiquan Zhou
>  Labels: consumer, mac, polling
>
> Steps to reproduce:
> 1. start the zookeeper and kafka server using the default properties from the 
> distribution: 
> $ bin/zookeeper-server-start.sh config/zookeeper.properties
> $ bin/kafka-server-start.sh config/server.properties 
> 2. create a Kafka consumer using the Java API KafkaConsumer.poll(long 
> timeout). It polls the records from the server every second (timeout set to 
> 1000) and prints the number of records polled. The code can be found here: 
> https://gist.github.com/yiquanzhou/a94569a2c4ec8992444c83f3c393f596
> 3. use bin/kafka-verifiable-producer.sh to generate some messages: 
> $ bin/kafka-verifiable-producer.sh --topic connect-test --max-messages 20 
> --broker-list localhost:9092
> wait until all 200k messages are generated and sent to the server. 
> 4. Run the consumer Java code. In the output console of the consumer, we can 
> see that the consumer starts to poll some records, then it polls 0 records 
> for several seconds before polling some more, like this:
> polled 27160 records
> polled 0 records
> polled 0 records
> polled 0 records
> polled 0 records
> polled 0 records
> polled 26886 records
> polled 26886 records
> polled 0 records
> polled 0 records
> polled 0 records
> polled 0 records
> polled 0 records
> polled 26701 records
> polled 26214 records
> The bug slows down the consumption of messages a lot, and in our use case 
> the consumer wrongly assumes that all messages have been read from the 
> topic.
> It is only reproducible on Mac OS X, not on Linux or Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3986) completedReceives can contain closed channels

2016-11-01 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15627907#comment-15627907
 ] 

Jason Gustafson commented on KAFKA-3986:


I think this is handled in the patch for KAFKA-3703: 
https://github.com/apache/kafka/pull/1836/.

> completedReceives can contain closed channels 
> --
>
> Key: KAFKA-3986
> URL: https://issues.apache.org/jira/browse/KAFKA-3986
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Reporter: Ryan P
> Fix For: 0.10.0.2, 0.10.1.1
>
>
> I'm not entirely sure why at this point, but it is possible to throw a 
> NullPointerException when processing completedReceives. This happens when a 
> fairly large number of connections are initiated with the server 
> simultaneously. 
> The processor thread does carry on, but it may be worth investigating how a 
> channel could be both closed and present in completedReceives. 
> The NPE in question is thrown here:
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/network/SocketServer.scala#L490
> It cannot be consistently reproduced either. 
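
A hedged Java sketch of the defensive pattern this suggests, using simplified 
stand-in types (the real SocketServer is Scala): skip any completed receive 
whose channel has been closed between selection and processing.

{code}
import java.util.Queue;

// Illustrative only: these types are stand-ins, not Kafka's network classes.
class ProcessorSketch {
    static class Channel { volatile boolean open = true; }
    static class Receive { final Channel channel; Receive(Channel c) { channel = c; } }

    void processCompletedReceives(Queue<Receive> completedReceives) {
        for (Receive receive; (receive = completedReceives.poll()) != null; ) {
            if (receive.channel == null || !receive.channel.open) {
                // The channel was closed after the receive completed; drop it
                // instead of dereferencing a stale channel (the NPE above).
                continue;
            }
            // ... hand the receive to the request handler here ...
        }
    }
}
{code}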



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)