[GitHub] kafka pull request #1887: HOTFIX: Added check for metadata unavailable

2016-09-20 Thread enothereska
GitHub user enothereska opened a pull request:

https://github.com/apache/kafka/pull/1887

HOTFIX: Added check for metadata unavailable



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/enothereska/kafka hotfix-metadata-unavailable

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1887.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1887


commit 872ad69c9f7cc23fb4001c4a05a50783a0f96ab2
Author: Eno Thereska 
Date:   2016-09-20T07:10:08Z

Added check for metadata unavailable




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #1888: MINOR: remove unused code from InternalTopicManage...

2016-09-20 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/1888

MINOR: remove unused code from InternalTopicManager

Remove isValidCleanupPolicy and related fields as they are never used. 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka minor-remove-unused

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1888.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1888


commit a3a4b4e349ca05a73395302caa5f6e36486960e8
Author: Damian Guy 
Date:   2016-09-20T08:40:01Z

remove unused code






[jira] [Updated] (KAFKA-4192) Update upgrade documentation to mention inter.broker.protocol.version

2016-09-20 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4192:
---
Fix Version/s: (was: 0.10.1.0)

> Update upgrade documentation to mention inter.broker.protocol.version
> -
>
> Key: KAFKA-4192
> URL: https://issues.apache.org/jira/browse/KAFKA-4192
> Project: Kafka
>  Issue Type: Task
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Critical
>
> Because of KIP-74, the upgrade instructions need to explain that 
> inter.broker.protocol.version must be set during the upgrade.
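For context, the documented rolling-upgrade procedure pins the protocol
version while brokers are upgraded one at a time, then bumps it in a second
rolling restart. A minimal server.properties sketch (the version numbers are
illustrative):

{code}
# Step 1: before upgrading, pin every broker to the old protocol version.
inter.broker.protocol.version=0.10.0
# Step 2: rolling-restart brokers onto the new binaries.
# Step 3: once all brokers run the new code, bump the version and
# perform a second rolling restart.
# inter.broker.protocol.version=0.10.1
{code}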



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-4192) Update upgrade documentation to mention inter.broker.protocol.version

2016-09-20 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-4192.

   Resolution: Fixed
 Assignee: Jiangjie Qin  (was: Ismael Juma)
Fix Version/s: 0.10.1.0

[~becket_qin] fixed this as part of KIP-79.

> Update upgrade documentation to mention inter.broker.protocol.version
> -
>
> Key: KAFKA-4192
> URL: https://issues.apache.org/jira/browse/KAFKA-4192
> Project: Kafka
>  Issue Type: Task
>Reporter: Ismael Juma
>Assignee: Jiangjie Qin
>Priority: Critical
> Fix For: 0.10.1.0
>
>
> Because of KIP-74, the upgrade instructions need to explain that 
> inter.broker.protocol.version must be set during the upgrade.





[jira] [Updated] (KAFKA-4192) Update upgrade documentation for 0.10.1.0 to mention inter.broker.protocol.version

2016-09-20 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4192:
---
Summary: Update upgrade documentation for 0.10.1.0 to mention 
inter.broker.protocol.version  (was: Update upgrade documentation to mention 
inter.broker.protocol.version)

> Update upgrade documentation for 0.10.1.0 to mention 
> inter.broker.protocol.version
> --
>
> Key: KAFKA-4192
> URL: https://issues.apache.org/jira/browse/KAFKA-4192
> Project: Kafka
>  Issue Type: Task
>Reporter: Ismael Juma
>Assignee: Jiangjie Qin
>Priority: Critical
> Fix For: 0.10.1.0
>
>
> Because of KIP-74, the upgrade instructions need to explain that 
> inter.broker.protocol.version must be set during the upgrade.





[jira] [Created] (KAFKA-4196) Transient test failure: DeleteConsumerGroupTest.testConsumptionOnRecreatedTopicAfterTopicWideDeleteInZK

2016-09-20 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-4196:
--

 Summary: Transient test failure: 
DeleteConsumerGroupTest.testConsumptionOnRecreatedTopicAfterTopicWideDeleteInZK
 Key: KAFKA-4196
 URL: https://issues.apache.org/jira/browse/KAFKA-4196
 Project: Kafka
  Issue Type: Sub-task
Reporter: Ismael Juma


The error:
{code}
java.lang.AssertionError: Admin path /admin/delete_topic/topic path not deleted 
even after a replica is restarted
at org.junit.Assert.fail(Assert.java:88)
at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:752)
at kafka.utils.TestUtils$.verifyTopicDeletion(TestUtils.scala:1017)
at 
kafka.admin.DeleteConsumerGroupTest.testConsumptionOnRecreatedTopicAfterTopicWideDeleteInZK(DeleteConsumerGroupTest.scala:156)
{code}

Caused by a broken invariant in the Controller: a partition exists in 
`controllerContext.partitionLeadershipInfo`, but not in 
`controllerContext.partitionReplicaAssignment`.

{code}
[2016-09-20 06:45:13,967] ERROR [BrokerChangeListener on Controller 1]: Error 
while handling broker changes 
(kafka.controller.ReplicaStateMachine$BrokerChangeListener:103)
java.util.NoSuchElementException: key not found: [topic,0]
at scala.collection.MapLike$class.default(MapLike.scala:228)
at scala.collection.AbstractMap.default(Map.scala:58)
at scala.collection.mutable.HashMap.apply(HashMap.scala:64)
at 
kafka.controller.ControllerBrokerRequestBatch.kafka$controller$ControllerBrokerRequestBatch$$updateMetadataRequestMapFor$1(ControllerChannelManager.scala:310)
at 
kafka.controller.ControllerBrokerRequestBatch$$anonfun$addUpdateMetadataRequestForBrokers$4.apply(ControllerChannelManager.scala:343)
at 
kafka.controller.ControllerBrokerRequestBatch$$anonfun$addUpdateMetadataRequestForBrokers$4.apply(ControllerChannelManager.scala:343)
at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
at 
kafka.controller.ControllerBrokerRequestBatch.addUpdateMetadataRequestForBrokers(ControllerChannelManager.scala:343)
at 
kafka.controller.KafkaController.sendUpdateMetadataRequest(KafkaController.scala:1030)
at 
kafka.controller.KafkaController.onBrokerFailure(KafkaController.scala:492)
at 
kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ReplicaStateMachine.scala:376)
at 
kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:358)
at 
kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:358)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at 
kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply$mcV$sp(ReplicaStateMachine.scala:357)
at 
kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:356)
at 
kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:356)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:234)
at 
kafka.controller.ReplicaStateMachine$BrokerChangeListener.handleChildChange(ReplicaStateMachine.scala:355)
at org.I0Itec.zkclient.ZkClient$10.run(ZkClient.java:843)
at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
{code}
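For readers unfamiliar with the failure mode: Scala's mutable `HashMap.apply`
throws `java.util.NoSuchElementException` on a missing key, which is what
surfaces in the trace above. A minimal self-contained sketch of the pattern
(the defensive `get` variant is illustrative, not the actual fix):

{code}
import scala.collection.mutable

val partitionReplicaAssignment = mutable.HashMap.empty[(String, Int), Seq[Int]]
val tp = ("topic", 0)

// Direct apply throws java.util.NoSuchElementException when the key is absent:
// val replicas = partitionReplicaAssignment(tp)

// Defensive lookup with get instead of apply:
partitionReplicaAssignment.get(tp) match {
  case Some(replicas) => () // build the UpdateMetadataRequest entry as usual
  case None           => () // invariant violated: log and skip the partition
}
{code}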





Build failed in Jenkins: kafka-trunk-jdk7 #1558

2016-09-20 Thread Apache Jenkins Server
See 

Changes:

[jason] MINOR: Bump to version 0.10.2

[jason] KAFKA-4193; Fix for intermittent failure in FetcherTest

--
[...truncated 12689 lines...]

org.apache.kafka.streams.kstream.internals.KGroupedStreamImplTest > 
shouldNotHaveNullWindowsOnWindowedAggregate PASSED

org.apache.kafka.streams.kstream.internals.KGroupedStreamImplTest > 
shouldNotHaveNullStoreNameOnWindowedAggregate STARTED

org.apache.kafka.streams.kstream.internals.KGroupedStreamImplTest > 
shouldNotHaveNullStoreNameOnWindowedAggregate PASSED

org.apache.kafka.streams.kstream.internals.KGroupedStreamImplTest > 
shouldNotHaveNullStoreNameWithWindowedReduce STARTED

org.apache.kafka.streams.kstream.internals.KGroupedStreamImplTest > 
shouldNotHaveNullStoreNameWithWindowedReduce PASSED

org.apache.kafka.streams.kstream.internals.KGroupedStreamImplTest > 
shouldNotHaveNullInitializerOnAggregate STARTED

org.apache.kafka.streams.kstream.internals.KGroupedStreamImplTest > 
shouldNotHaveNullInitializerOnAggregate PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamLeftJoinTest > 
testLeftJoin STARTED

org.apache.kafka.streams.kstream.internals.KStreamKStreamLeftJoinTest > 
testLeftJoin PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamLeftJoinTest > 
testWindowing STARTED

org.apache.kafka.streams.kstream.internals.KStreamKStreamLeftJoinTest > 
testWindowing PASSED

org.apache.kafka.streams.kstream.internals.KTableMapKeysTest > 
testMapKeysConvertingToStream STARTED

org.apache.kafka.streams.kstream.internals.KTableMapKeysTest > 
testMapKeysConvertingToStream PASSED

org.apache.kafka.streams.kstream.internals.KStreamBranchTest > 
testKStreamBranch STARTED

org.apache.kafka.streams.kstream.internals.KStreamBranchTest > 
testKStreamBranch PASSED

org.apache.kafka.streams.kstream.internals.KStreamMapValuesTest > 
testFlatMapValues STARTED

org.apache.kafka.streams.kstream.internals.KStreamMapValuesTest > 
testFlatMapValues PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > 
testSendingOldValue STARTED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > 
testSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > 
testNotSendingOldValue STARTED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > 
testNotSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > testKTable STARTED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > testKTable PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > testValueGetter 
STARTED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > testValueGetter 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamFlatMapTest > testFlatMap 
STARTED

org.apache.kafka.streams.kstream.internals.KStreamFlatMapTest > testFlatMap 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamFlatMapValuesTest > 
testFlatMapValues STARTED

org.apache.kafka.streams.kstream.internals.KStreamFlatMapValuesTest > 
testFlatMapValues PASSED

org.apache.kafka.streams.kstream.internals.KStreamFilterTest > testFilterNot 
STARTED

org.apache.kafka.streams.kstream.internals.KStreamFilterTest > testFilterNot 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamFilterTest > testFilter 
STARTED

org.apache.kafka.streams.kstream.internals.KStreamFilterTest > testFilter PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > testAggBasic 
STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > testAggBasic 
PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > testCount 
STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > testCount 
PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testAggCoalesced STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testAggCoalesced PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testAggRepartition STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testAggRepartition PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testRemoveOldBeforeAddNew STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testRemoveOldBeforeAddNew PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testCountCoalesced STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testCountCoalesced PASSED

org.apache.kafka.streams.kstream.internals.KTableImplTest > 
shouldNotAllowNullMapperOnMapValues STARTED

org.apache.kafka.streams.kstream.internals.KTableImplTest > 
shouldNotAllowNullMapperOnMapValues PASSED

org.apache.kafka.streams.kstream.internals.KTableImplTest > 
shouldNotAllowNullPredicateOnFilter STARTED

org.apache.kafka.streams.kstream.internals.KTableImplTest > 
shouldNotAllowNullPredicateOnFilter PASSED

org.

[jira] [Commented] (KAFKA-4196) Transient test failure: DeleteConsumerGroupTest.testConsumptionOnRecreatedTopicAfterTopicWideDeleteInZK

2016-09-20 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15506176#comment-15506176
 ] 

Ismael Juma commented on KAFKA-4196:


cc [~junrao] [~fpj]

> Transient test failure: 
> DeleteConsumerGroupTest.testConsumptionOnRecreatedTopicAfterTopicWideDeleteInZK
> ---
>
> Key: KAFKA-4196
> URL: https://issues.apache.org/jira/browse/KAFKA-4196
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>  Labels: transient-unit-test-failure
>
> The error:
> {code}
> java.lang.AssertionError: Admin path /admin/delete_topic/topic path not 
> deleted even after a replica is restarted
>   at org.junit.Assert.fail(Assert.java:88)
>   at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:752)
>   at kafka.utils.TestUtils$.verifyTopicDeletion(TestUtils.scala:1017)
>   at 
> kafka.admin.DeleteConsumerGroupTest.testConsumptionOnRecreatedTopicAfterTopicWideDeleteInZK(DeleteConsumerGroupTest.scala:156)
> {code}
> Caused by a broken invariant in the Controller: a partition exists in 
> `controllerContext.partitionLeadershipInfo`, but not in 
> `controllerContext.partitionReplicaAssignment`.
> {code}
> [2016-09-20 06:45:13,967] ERROR [BrokerChangeListener on Controller 1]: Error 
> while handling broker changes 
> (kafka.controller.ReplicaStateMachine$BrokerChangeListener:103)
> java.util.NoSuchElementException: key not found: [topic,0]
>   at scala.collection.MapLike$class.default(MapLike.scala:228)
>   at scala.collection.AbstractMap.default(Map.scala:58)
>   at scala.collection.mutable.HashMap.apply(HashMap.scala:64)
>   at 
> kafka.controller.ControllerBrokerRequestBatch.kafka$controller$ControllerBrokerRequestBatch$$updateMetadataRequestMapFor$1(ControllerChannelManager.scala:310)
>   at 
> kafka.controller.ControllerBrokerRequestBatch$$anonfun$addUpdateMetadataRequestForBrokers$4.apply(ControllerChannelManager.scala:343)
>   at 
> kafka.controller.ControllerBrokerRequestBatch$$anonfun$addUpdateMetadataRequestForBrokers$4.apply(ControllerChannelManager.scala:343)
>   at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
>   at 
> kafka.controller.ControllerBrokerRequestBatch.addUpdateMetadataRequestForBrokers(ControllerChannelManager.scala:343)
>   at 
> kafka.controller.KafkaController.sendUpdateMetadataRequest(KafkaController.scala:1030)
>   at 
> kafka.controller.KafkaController.onBrokerFailure(KafkaController.scala:492)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ReplicaStateMachine.scala:376)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:358)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:358)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply$mcV$sp(ReplicaStateMachine.scala:357)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:356)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:356)
>   at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:234)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener.handleChildChange(ReplicaStateMachine.scala:355)
>   at org.I0Itec.zkclient.ZkClient$10.run(ZkClient.java:843)
>   at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
> {code}





[jira] [Commented] (KAFKA-4133) Provide a configuration to control consumer max in-flight fetches

2016-09-20 Thread Mickael Maison (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15506187#comment-15506187
 ] 

Mickael Maison commented on KAFKA-4133:
---

Due to a KIP # collision, it is now KIP-81: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-81%3A+Max+in-flight+fetches
 

> Provide a configuration to control consumer max in-flight fetches
> -
>
> Key: KAFKA-4133
> URL: https://issues.apache.org/jira/browse/KAFKA-4133
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Mickael Maison
>
> With KIP-74, we now have a good way to limit the size of fetch responses, but 
> it may still be difficult for users to control overall memory since the 
> consumer will send fetches in parallel to all the brokers which own 
> partitions that it is subscribed to. To give users finer control, it might 
> make sense to add a `max.in.flight.fetches` setting to limit the total number 
> of concurrent fetches at any time. This would require a KIP since it's a new 
> configuration.
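To make the memory bound concrete: worst-case fetch memory is roughly the
number of concurrent fetches times the per-fetch limit. A sketch assuming the
proposed name (`max.in.flight.fetches` is hypothetical, taken from the KIP
draft; `fetch.max.bytes` is the KIP-74 response limit):

{code}
val props = new java.util.Properties()
props.setProperty("fetch.max.bytes", "52428800") // KIP-74: cap each fetch response at 50 MB
props.setProperty("max.in.flight.fetches", "3")  // hypothetical setting proposed by this KIP

// Worst-case fetch memory is then bounded by roughly:
//   max.in.flight.fetches * fetch.max.bytes = 3 * 50 MB = 150 MB
{code}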





[GitHub] kafka pull request #1889: MINOR: Increase `zkConnectionTimeout` and timeout ...

2016-09-20 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/1889

MINOR: Increase `zkConnectionTimeout` and timeout in `testReachableServer`

We had a number of failures recently due to these timeouts being too low. 
It's a particular problem if multiple forks are used while running the tests.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka increase-zk-timeout-in-tests

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1889.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1889


commit 6314b9a9c8eb2cb52bbfd1b5cbf7b8163161ceaf
Author: Ismael Juma 
Date:   2016-09-20T09:57:39Z

MINOR: Increase `zkConnectionTimeout` and timeout in `testReachableServer`






[GitHub] kafka pull request #1890: MINOR: Use `nanoTime` in `testRequestExpiry` to fi...

2016-09-20 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/1890

MINOR: Use `nanoTime` in `testRequestExpiry` to fix transient test failure

We recently switched `SystemTimer` to use `nanoTime`, but we were still
using `System.currentTimeMillis` in the test. That would sometimes mean
that we would measure elapsed time as lower than expected.
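A minimal sketch of the pitfall (illustrative, not the patch itself; `doWork`
stands in for the timed operation): elapsed time should be measured with the
same monotonic clock the timer uses.

{code}
import java.util.concurrent.TimeUnit

def doWork(): Unit = Thread.sleep(100)

// Monotonic measurement, matching SystemTimer's nanoTime clock:
val start = System.nanoTime()
doWork()
val elapsedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start)

// Mixing clocks (e.g. currentTimeMillis around a nanoTime-based timer)
// can report less elapsed time than actually passed.
{code}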

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
fix-test-request-satisfaction-transient-failure

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1890.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1890


commit 75c6d6d8e1c2ae3e741f4243410278770edc2c9f
Author: Ismael Juma 
Date:   2016-09-20T10:17:06Z

MINOR: Use `nanoTime` in `testRequestExpiry` to fix transient test failure

We recently switched `SystemTimer` to use `nanoTime`, but we were still
using `System.currentTimeMillis` in the test. That would sometimes mean
that we would measure elapsed time as lower than expected.






[GitHub] kafka pull request #1887: HOTFIX: Added check for metadata unavailable

2016-09-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1887




[GitHub] kafka pull request #1891: MINOR: add javadoc comment to PersistentKeyValueFac...

2016-09-20 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/1891

MINOR: add javadoc comment to PersistentKeyValueFactory.enableCaching

Missing javadoc on the public API method PersistentKeyValueFactory.enableCaching.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka minor-java-doc

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1891.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1891


commit 74efa35f9b79119734e75b8ca63d5671617a9cba
Author: Damian Guy 
Date:   2016-09-20T10:44:44Z

add javadoc comment to PersistentKeyValueFactory.enableCaching






Build failed in Jenkins: kafka-trunk-jdk8 #900

2016-09-20 Thread Apache Jenkins Server
See 

Changes:

[jason] KAFKA-4193; Fix for intermittent failure in FetcherTest

--
[...truncated 13489 lines...]
org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testRestore STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testRestore PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRange STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRange PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRangeWithDefaultSerdes STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRangeWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldPeekNext STARTED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldPeekNext PASSED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldPeekAndIterate STARTED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldPeekAndIterate PASSED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldThrowNoSuchElementWhenNoMoreItemsLeftAndPeekNextCalled STARTED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldThrowNoSuchElementWhenNoMoreItemsLeftAndPeekNextCalled PASSED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldThrowNoSuchElementWhenNoMoreItemsLeftAndNextCalled STARTED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldThrowNoSuchElementWhenNoMoreItemsLeftAndNextCalled PASSED

org.apache.kafka.streams.state.internals.MergedSortedCacheWindowStoreIteratorTest
 > shouldIterateOverValueFromBothIterators STARTED

org.apache.kafka.streams.state.internals.MergedSortedCacheWindowStoreIteratorTest
 > shouldIterateOverValueFromBothIterators PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldReturnEmptyListIfNoStoresFoundWithName STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldReturnEmptyListIfNoStoresFoundWithName PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfKVStoreClosed STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfKVStoreClosed PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfNotAllStoresAvailable STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfNotAllStoresAvailable PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldFindWindowStores STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldFindWindowStores PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldReturnEmptyListIfStoreExistsButIsNotOfTypeValueStore STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldReturnEmptyListIfStoreExistsButIsNotOfTypeValueStore PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldFindKeyValueStores STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldFindKeyValueStores PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfWindowStoreClosed STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfWindowStoreClosed PASSED

org.apache.kafka.streams.state.StoresTest > 
shouldCreateInMemoryStoreSupplierWithLoggedConfig STARTED

org.apache.kafka.streams.state.StoresTest > 
shouldCreateInMemoryStoreSupplierWithLoggedConfig PASSED

org.apache.kafka.streams.state.StoresTest > 
shouldCreatePersistenStoreSupplierNotLogged STARTED

org.apache.kafka.streams.state.StoresTest > 
shouldCreatePersistenStoreSupplierNotLogged PASSED

org.apache.kafka.streams.state.StoresTest > 
shouldCreatePersistenStoreSupplierWithLoggedConfig STARTED

org.apache.kafka.streams.state.StoresTest > 
shouldCreatePersistenStoreSupplierWithLoggedConfig PASSED

org.apache.kafka.streams.state.StoresTest > 
shouldCreateInMemoryStoreSupplierNotLogged STARTED

org.apache.kafka.streams.state.StoresTest > 
shouldCreateInMemoryStoreSupplierNotLogged PASSED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldReduce STARTED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldReduce PASSED

org.apache.kafka.streams.integration.KStreamAgg

Jenkins build is back to normal : kafka-trunk-jdk8 #901

2016-09-20 Thread Apache Jenkins Server
See 



[GitHub] kafka pull request #1892: KAFKA-[TBD]: Make ReassignPartitionsTest System Te...

2016-09-20 Thread benstopford
GitHub user benstopford opened a pull request:

https://github.com/apache/kafka/pull/1892

KAFKA-[TBD]: Make ReassignPartitionsTest System Test move data

The ReassignPartitionsTest system test doesn't reassign any replicas (i.e. 
move data). 

This is a simple issue. The test uses a 3-node cluster with a replication 
factor of 3, so every broker already hosts a replica of every partition; the 
reassignment merely reorders the replica lists, and nothing is actually moved 
from machine to machine when it is executed. 

This fix ups the number of nodes to 4 so that data actually moves (see the 
JSON sketch after this description). 

Tests pass locally. 
There are runs pending on the two branch builders (#94, #551).

No JIRA is attached, as JIRA is currently down. Will update. 
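For context, a reassignment is executed from a JSON plan like the sketch below
(the topic name and broker ids are illustrative). With only 3 brokers and a
replication factor of 3, every candidate replica list is a permutation of
brokers that already hold the partition; a 4th broker gives the plan somewhere
new to put a replica:

{code}
{"version": 1, "partitions": [
  {"topic": "test_topic", "partition": 0, "replicas": [2, 3, 4]}
]}
{code}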

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/benstopford/kafka fix_reassignment_test

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1892.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1892


commit 0327f1bc70ce37e1f82d519d2a3a95de0062e45a
Author: Ben Stopford 
Date:   2016-09-20T12:39:00Z

KAFKA-TBD: Tiny, but important, fix to reassignment test, which currently 
doesn't reassign any data.






[jira] [Created] (KAFKA-4197) Make ReassignPartitionsTest System Test move data

2016-09-20 Thread Ben Stopford (JIRA)
Ben Stopford created KAFKA-4197:
---

 Summary: Make ReassignPartitionsTest System Test move data
 Key: KAFKA-4197
 URL: https://issues.apache.org/jira/browse/KAFKA-4197
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.10.1.0
Reporter: Ben Stopford
Priority: Minor


The ReassignPartitionsTest system test doesn't reassign any replicas (i.e. 
move data).

This is a simple issue. The test uses a 3-node cluster with a replication 
factor of 3, so every broker already hosts a replica of every partition; the 
reassignment merely reorders the replica lists, and nothing is actually moved 
from machine to machine when it is executed.

This fix ups the number of nodes to 4 so that data actually moves.





[jira] [Created] (KAFKA-4198) Transient test failure: ConsumerBounceTest.testConsumptionWithBrokerFailures

2016-09-20 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-4198:
--

 Summary: Transient test failure: 
ConsumerBounceTest.testConsumptionWithBrokerFailures
 Key: KAFKA-4198
 URL: https://issues.apache.org/jira/browse/KAFKA-4198
 Project: Kafka
  Issue Type: Sub-task
Reporter: Ismael Juma


The issue seems to be that we call `startup` while `shutdown` is still taking 
place.

{code}
java.lang.AssertionError: expected:<107> but was:<0>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
kafka.api.ConsumerBounceTest$$anonfun$consumeWithBrokerFailures$2.apply(ConsumerBounceTest.scala:91)
at 
kafka.api.ConsumerBounceTest$$anonfun$consumeWithBrokerFailures$2.apply(ConsumerBounceTest.scala:90)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at 
kafka.api.ConsumerBounceTest.consumeWithBrokerFailures(ConsumerBounceTest.scala:90)
at 
kafka.api.ConsumerBounceTest.testConsumptionWithBrokerFailures(ConsumerBounceTest.scala:70)
{code}

{code}
java.lang.IllegalStateException: Kafka server is still shutting down, cannot 
re-start!
at kafka.server.KafkaServer.startup(KafkaServer.scala:184)
at 
kafka.integration.KafkaServerTestHarness$$anonfun$restartDeadBrokers$2.apply$mcVI$sp(KafkaServerTestHarness.scala:117)
at 
kafka.integration.KafkaServerTestHarness$$anonfun$restartDeadBrokers$2.apply(KafkaServerTestHarness.scala:116)
at 
kafka.integration.KafkaServerTestHarness$$anonfun$restartDeadBrokers$2.apply(KafkaServerTestHarness.scala:116)
at 
scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at scala.collection.immutable.Range.foreach(Range.scala:160)
at 
scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at 
kafka.integration.KafkaServerTestHarness$class.restartDeadBrokers(KafkaServerTestHarness.scala:116)
at 
kafka.api.ConsumerBounceTest.restartDeadBrokers(ConsumerBounceTest.scala:34)
at 
kafka.api.ConsumerBounceTest$BounceBrokerScheduler.doWork(ConsumerBounceTest.scala:158)
{code}
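A sketch of the ordering a broker bounce needs to respect;
`KafkaServer.awaitShutdown()` blocks until an in-flight shutdown completes
(the `server` value is assumed to be a running `kafka.server.KafkaServer`):

{code}
// Illustrative only: make sure shutdown has finished before restarting.
server.shutdown()
server.awaitShutdown() // blocks until the shutdown latch is released
server.startup()       // no longer races the in-flight shutdown
{code}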





[jira] [Commented] (KAFKA-4197) Make ReassignPartitionsTest System Test move data

2016-09-20 Thread Ben Stopford (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15506446#comment-15506446
 ] 

Ben Stopford commented on KAFKA-4197:
-

https://github.com/apache/kafka/pull/1892

> Make ReassignPartitionsTest System Test move data
> -
>
> Key: KAFKA-4197
> URL: https://issues.apache.org/jira/browse/KAFKA-4197
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Ben Stopford
>Assignee: Ben Stopford
>Priority: Minor
>
> The ReassignPartitionsTest system test doesn't reassign any replicas (i.e. 
> move data).
> This is a simple issue. The test uses a 3-node cluster with a replication 
> factor of 3, so every broker already hosts a replica of every partition; the 
> reassignment merely reorders the replica lists, and nothing is actually moved 
> from machine to machine when it is executed.
> This fix ups the number of nodes to 4 so that data actually moves.





[jira] [Updated] (KAFKA-4197) Make ReassignPartitionsTest System Test move data

2016-09-20 Thread Ben Stopford (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Stopford updated KAFKA-4197:

Assignee: Ben Stopford
  Status: Patch Available  (was: Open)

> Make ReassignPartitionsTest System Test move data
> -
>
> Key: KAFKA-4197
> URL: https://issues.apache.org/jira/browse/KAFKA-4197
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Ben Stopford
>Assignee: Ben Stopford
>Priority: Minor
>
> The ReassignPartitionsTest system test doesn't reassign any replicas (i.e. 
> move data).
> This is a simple issue. The test uses a 3-node cluster with a replication 
> factor of 3, so every broker already hosts a replica of every partition; the 
> reassignment merely reorders the replica lists, and nothing is actually moved 
> from machine to machine when it is executed.
> This fix ups the number of nodes to 4 so that data actually moves.





Kafka duplicate offset at Consumer

2016-09-20 Thread Ghosh, Achintya (Contractor)
Hi there,

I see the Kafka consumer receiving the same offset values many times, which 
creates a lot of duplicate messages. What could be the reason, and how can we 
solve this issue?

Thanks
Achintya


Build failed in Jenkins: kafka-trunk-jdk7 #1559

2016-09-20 Thread Apache Jenkins Server
See 

Changes:

[ismael] HOTFIX: Added check for metadata unavailable

--
[...truncated 7259 lines...]

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne STARTED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne PASSED

kafka.api.ProducerFailureHandlingTest > testWrongBrokerList STARTED

kafka.api.ProducerFailureHandlingTest > testWrongBrokerList PASSED

kafka.api.ProducerFailureHandlingTest > testNotEnoughReplicas STARTED

kafka.api.ProducerFailureHandlingTest > testNotEnoughReplicas PASSED

kafka.api.ProducerFailureHandlingTest > 
testResponseTooLargeForReplicationWithAckAll STARTED

kafka.api.ProducerFailureHandlingTest > 
testResponseTooLargeForReplicationWithAckAll PASSED

kafka.api.ProducerFailureHandlingTest > testNonExistentTopic STARTED

kafka.api.ProducerFailureHandlingTest > testNonExistentTopic PASSED

kafka.api.ProducerFailureHandlingTest > testInvalidPartition STARTED

kafka.api.ProducerFailureHandlingTest > testInvalidPartition PASSED

kafka.api.ProducerFailureHandlingTest > testSendAfterClosed STARTED

kafka.api.ProducerFailureHandlingTest > testSendAfterClosed PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckZero STARTED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckZero PASSED

kafka.api.ProducerFailureHandlingTest > 
testPartitionTooLargeForReplicationWithAckAll STARTED

kafka.api.ProducerFailureHandlingTest > 
testPartitionTooLargeForReplicationWithAckAll PASSED

kafka.api.ProducerFailureHandlingTest > 
testNotEnoughReplicasAfterBrokerShutdown STARTED

kafka.api.ProducerFailureHandlingTest > 
testNotEnoughReplicasAfterBrokerShutdown PASSED

kafka.api.RackAwareAutoTopicCreationTest > testAutoCreateTopic STARTED

kafka.api.RackAwareAutoTopicCreationTest > testAutoCreateTopic PASSED

kafka.api.SaslPlaintextConsumerTest > testCoordinatorFailover STARTED

kafka.api.SaslPlaintextConsumerTest > testCoordinatorFailover PASSED

kafka.api.SaslPlaintextConsumerTest > testSimpleConsumption STARTED

kafka.api.SaslPlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.SslConsumerTest > testCoordinatorFailover STARTED

kafka.api.SslConsumerTest > testCoordinatorFailover PASSED

kafka.api.SslConsumerTest > testSimpleConsumption STARTED

kafka.api.SslConsumerTest > testSimpleConsumption PASSED

kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures STARTED

kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures PASSED

kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures STARTED

kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicWrite STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededToReadFromNonExistentTopic STARTED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededToReadFromNonExistentTopic PASSED

kafka.api.AuthorizerIntegrationTest > testDeleteWithWildCardAuth STARTED

kafka.api.AuthorizerIntegrationTest > testDeleteWithWildCardAuth PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoAccess STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testListOfsetsWithTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testListOfsetsWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicRead STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicRead PASSED

kafka.api.AuthorizerIntegrationTest > testListOffsetsWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testListOffsetsWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededForWritingToNonExistentTopic STARTED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededForWritingToNonExistentTopic PASSED

kafka.api.AuthorizerIntegrationTest > 
testSimpleConsumeWithOffsetLookupAndNoGroupAccess STARTED

kafka.api.AuthorizerIntegrationTest > 
testSimpleConsumeWithOffsetLookupAndNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithNoTopicAccess PASSED

kafka.api.A

[jira] [Created] (KAFKA-4199) When a window store segment is dropped we should also clear any corresponding cached entries

2016-09-20 Thread Damian Guy (JIRA)
Damian Guy created KAFKA-4199:
-

 Summary: When a window store segment is dropped we should also 
clear any corresponding cached entries
 Key: KAFKA-4199
 URL: https://issues.apache.org/jira/browse/KAFKA-4199
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Affects Versions: 0.10.1.0
Reporter: Damian Guy
Assignee: Guozhang Wang
Priority: Minor


In KIP-63 we introduced a CachingWindowStore, but it currently has no way to 
be informed when the underlying store drops a segment. Ideally, when a 
segment is dropped we'd also remove the corresponding entries from the cache.

Firstly, we need to understand whether it is actually a problem if they don't 
get dropped: they will naturally be evicted when the cache becomes full, but 
until then they could crowd out entries for other stores in the thread. In 
other words, what performance impact, if any, exists?

If we find there is an unacceptable performance penalty, we might need to add 
a callback to the WindowStore API so that we can be notified when segments 
are removed (see the sketch below).
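A rough sketch of what such a notification hook might look like (entirely
hypothetical; the trait and method names below are invented for illustration):

{code}
// Hypothetical callback; no such API exists in the WindowStore interface today.
trait SegmentDropListener {
  def onSegmentDropped(segmentId: Long): Unit
}

// CachingWindowStore could register a listener and evict any cached entries
// whose keys fall within the dropped segment's time range.
{code}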





[jira] [Commented] (KAFKA-4184) Test failure in ReplicationQuotasTest.shouldBootstrapTwoBrokersWithFollowerThrottle

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15506763#comment-15506763
 ] 

ASF GitHub Bot commented on KAFKA-4184:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1873


> Test failure in 
> ReplicationQuotasTest.shouldBootstrapTwoBrokersWithFollowerThrottle
> ---
>
> Key: KAFKA-4184
> URL: https://issues.apache.org/jira/browse/KAFKA-4184
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Reporter: Jason Gustafson
>Assignee: Ben Stopford
>
> Seen here: https://builds.apache.org/job/kafka-trunk-jdk7/1544/.
> {code}
> unit.kafka.server.ReplicationQuotasTest > 
> shouldBootstrapTwoBrokersWithFollowerThrottle FAILED
> java.lang.AssertionError: expected:<3.0E7> but was:<2.060725619683527E7>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:834)
> at org.junit.Assert.assertEquals(Assert.java:553)
> at org.junit.Assert.assertEquals(Assert.java:683)
> at 
> unit.kafka.server.ReplicationQuotasTest$$anonfun$shouldMatchQuotaReplicatingThroughAnAsymmetricTopology$14.apply(ReplicationQuotasTest.scala:172)
> at 
> unit.kafka.server.ReplicationQuotasTest$$anonfun$shouldMatchQuotaReplicatingThroughAnAsymmetricTopology$14.apply(ReplicationQuotasTest.scala:168)
> at scala.collection.Iterator$class.foreach(Iterator.scala:727)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at 
> unit.kafka.server.ReplicationQuotasTest.shouldMatchQuotaReplicatingThroughAnAsymmetricTopology(ReplicationQuotasTest.scala:168)
> at 
> unit.kafka.server.ReplicationQuotasTest.shouldBootstrapTwoBrokersWithFollowerThrottle(ReplicationQuotasTest.scala:74)
> {code}





[GitHub] kafka pull request #1873: KAFKA-4184: Intermittent failures in ReplicationQuo...

2016-09-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1873




[GitHub] kafka pull request #1892: KAFKA-4197: Make ReassignPartitionsTest System Tes...

2016-09-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1892




[jira] [Resolved] (KAFKA-4184) Test failure in ReplicationQuotasTest.shouldBootstrapTwoBrokersWithFollowerThrottle

2016-09-20 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-4184.

Resolution: Fixed
  Reviewer: Ismael Juma

> Test failure in 
> ReplicationQuotasTest.shouldBootstrapTwoBrokersWithFollowerThrottle
> ---
>
> Key: KAFKA-4184
> URL: https://issues.apache.org/jira/browse/KAFKA-4184
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Reporter: Jason Gustafson
>Assignee: Ben Stopford
> Fix For: 0.10.1.0
>
>
> Seen here: https://builds.apache.org/job/kafka-trunk-jdk7/1544/.
> {code}
> unit.kafka.server.ReplicationQuotasTest > 
> shouldBootstrapTwoBrokersWithFollowerThrottle FAILED
> java.lang.AssertionError: expected:<3.0E7> but was:<2.060725619683527E7>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:834)
> at org.junit.Assert.assertEquals(Assert.java:553)
> at org.junit.Assert.assertEquals(Assert.java:683)
> at 
> unit.kafka.server.ReplicationQuotasTest$$anonfun$shouldMatchQuotaReplicatingThroughAnAsymmetricTopology$14.apply(ReplicationQuotasTest.scala:172)
> at 
> unit.kafka.server.ReplicationQuotasTest$$anonfun$shouldMatchQuotaReplicatingThroughAnAsymmetricTopology$14.apply(ReplicationQuotasTest.scala:168)
> at scala.collection.Iterator$class.foreach(Iterator.scala:727)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at 
> unit.kafka.server.ReplicationQuotasTest.shouldMatchQuotaReplicatingThroughAnAsymmetricTopology(ReplicationQuotasTest.scala:168)
> at 
> unit.kafka.server.ReplicationQuotasTest.shouldBootstrapTwoBrokersWithFollowerThrottle(ReplicationQuotasTest.scala:74)
> {code}





[jira] [Updated] (KAFKA-4184) Test failure in ReplicationQuotasTest.shouldBootstrapTwoBrokersWithFollowerThrottle

2016-09-20 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4184:
---
Fix Version/s: 0.10.1.0

> Test failure in 
> ReplicationQuotasTest.shouldBootstrapTwoBrokersWithFollowerThrottle
> ---
>
> Key: KAFKA-4184
> URL: https://issues.apache.org/jira/browse/KAFKA-4184
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Reporter: Jason Gustafson
>Assignee: Ben Stopford
> Fix For: 0.10.1.0
>
>
> Seen here: https://builds.apache.org/job/kafka-trunk-jdk7/1544/.
> {code}
> unit.kafka.server.ReplicationQuotasTest > 
> shouldBootstrapTwoBrokersWithFollowerThrottle FAILED
> java.lang.AssertionError: expected:<3.0E7> but was:<2.060725619683527E7>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:834)
> at org.junit.Assert.assertEquals(Assert.java:553)
> at org.junit.Assert.assertEquals(Assert.java:683)
> at 
> unit.kafka.server.ReplicationQuotasTest$$anonfun$shouldMatchQuotaReplicatingThroughAnAsymmetricTopology$14.apply(ReplicationQuotasTest.scala:172)
> at 
> unit.kafka.server.ReplicationQuotasTest$$anonfun$shouldMatchQuotaReplicatingThroughAnAsymmetricTopology$14.apply(ReplicationQuotasTest.scala:168)
> at scala.collection.Iterator$class.foreach(Iterator.scala:727)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at 
> unit.kafka.server.ReplicationQuotasTest.shouldMatchQuotaReplicatingThroughAnAsymmetricTopology(ReplicationQuotasTest.scala:168)
> at 
> unit.kafka.server.ReplicationQuotasTest.shouldBootstrapTwoBrokersWithFollowerThrottle(ReplicationQuotasTest.scala:74)
> {code}





[jira] [Commented] (KAFKA-4197) Make ReassignPartitionsTest System Test move data

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15506767#comment-15506767
 ] 

ASF GitHub Bot commented on KAFKA-4197:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1892


> Make ReassignPartitionsTest System Test move data
> -
>
> Key: KAFKA-4197
> URL: https://issues.apache.org/jira/browse/KAFKA-4197
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Ben Stopford
>Assignee: Ben Stopford
>Priority: Minor
> Fix For: 0.10.1.0
>
>
> The ReassignPartitionsTest system test doesn't reassign any replicas (i.e. 
> move data).
> This is a simple issue. The test uses a 3-node cluster with a replication 
> factor of 3, so every broker already hosts a replica of every partition; the 
> reassignment merely reorders the replica lists, and nothing is actually moved 
> from machine to machine when it is executed.
> This fix ups the number of nodes to 4 so that data actually moves.





[jira] [Updated] (KAFKA-4197) Make ReassignPartitionsTest System Test move data

2016-09-20 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4197:
---
   Resolution: Fixed
Fix Version/s: 0.10.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1892
[https://github.com/apache/kafka/pull/1892]

> Make ReassignPartitionsTest System Test move data
> -
>
> Key: KAFKA-4197
> URL: https://issues.apache.org/jira/browse/KAFKA-4197
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Ben Stopford
>Assignee: Ben Stopford
>Priority: Minor
> Fix For: 0.10.1.0
>
>
> The ReassignPartitionsTest system test doesn't reassign any replicas (i.e. 
> move data).
> This is a simple issue. The test uses a 3-node cluster with a replication 
> factor of 3, so every broker already hosts a replica of every partition; the 
> reassignment merely reorders the replica lists, and nothing is actually moved 
> from machine to machine when it is executed.
> This fix ups the number of nodes to 4 so that data actually moves.





Build failed in Jenkins: kafka-0.10.1-jdk7 #2

2016-09-20 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-4184; Intermittent failures in

[ismael] KAFKA-4197; Make ReassignPartitionsTest System Test move data

--
[...truncated 3904 lines...]

kafka.log.LogTest > testReadAtLogGap STARTED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testTimeBasedLogRoll STARTED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog STARTED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testMessageSetSizeCheck STARTED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testIndexResizingAtTruncation STARTED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testCompactedTopicConstraints STARTED

kafka.log.LogTest > testCompactedTopicConstraints PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset STARTED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED

kafka.log.LogTest > testAppendAndReadWithSequentialOffsets STARTED

kafka.log.LogTest > testAppendAndReadWithSequentialOffsets PASSED

kafka.log.LogTest > testDeleteOldSegmentsMethod STARTED

kafka.log.LogTest > testDeleteOldSegmentsMethod PASSED

kafka.log.LogTest > shouldDeleteSizeBasedSegments STARTED

kafka.log.LogTest > shouldDeleteSizeBasedSegments PASSED

kafka.log.LogTest > testParseTopicPartitionNameForNull STARTED

kafka.log.LogTest > testParseTopicPartitionNameForNull PASSED

kafka.log.LogTest > testAppendAndReadWithNonSequentialOffsets STARTED

kafka.log.LogTest > testAppendAndReadWithNonSequentialOffsets PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingSeparator STARTED

kafka.log.LogTest > testParseTopicPartitionNameForMissingSeparator PASSED

kafka.log.LogTest > testCorruptIndexRebuild STARTED

kafka.log.LogTest > testCorruptIndexRebuild PASSED

kafka.log.LogTest > shouldDeleteTimeBasedSegmentsReadyToBeDeleted STARTED

kafka.log.LogTest > shouldDeleteTimeBasedSegmentsReadyToBeDeleted PASSED

kafka.log.LogTest > testReadWithTooSmallMaxLength STARTED

kafka.log.LogTest > testReadWithTooSmallMaxLength PASSED

kafka.log.LogTest > testBogusIndexSegmentsAreRemoved STARTED

kafka.log.LogTest > testBogusIndexSegmentsAreRemoved PASSED

kafka.log.LogTest > testCompressedMessages STARTED

kafka.log.LogTest > testCompressedMessages PASSED

kafka.log.LogTest > testAppendMessageWithNullPayload STARTED

kafka.log.LogTest > testAppendMessageWithNullPayload PASSED

kafka.log.LogTest > testCorruptLog STARTED

kafka.log.LogTest > testCorruptLog PASSED

kafka.log.LogTest > testLogRecoversToCorrectOffset STARTED

kafka.log.LogTest > testLogRecoversToCorrectOffset PASSED

kafka.log.LogTest > testReopenThenTruncate STARTED

kafka.log.LogTest > testReopenThenTruncate PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingPartition STARTED

kafka.log.LogTest > testParseTopicPartitionNameForMissingPartition PASSED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName STARTED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName PASSED

kafka.log.LogTest > testOpenDeletesObsoleteFiles STARTED

kafka.log.LogTest > testOpenDeletesObsoleteFiles PASSED

kafka.log.LogTest > testRebuildTimeIndexForOldMessages STARTED

kafka.log.LogTest > testRebuildTimeIndexForOldMessages PASSED

kafka.log.LogTest > testSizeBasedLogRoll STARTED

kafka.log.LogTest > testSizeBasedLogRoll PASSED

kafka.log.LogTest > shouldNotDeleteSizeBasedSegmentsWhenUnderRetentionSize 
STARTED

kafka.log.LogTest > shouldNotDeleteSizeBasedSegmentsWhenUnderRetentionSize 
PASSED

kafka.log.LogTest > testTimeBasedLogRollJitter STARTED

kafka.log.LogTest > testTimeBasedLogRollJitter PASSED

kafka.log.LogTest > testParseTopicPartitionName STARTED

kafka.log.LogTest > testParseTopicPartitionName PASSED

kafka.log.LogTest > testTruncateTo STARTED

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile STARTED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.LogTest > testBuildTimeIndexWhenNotAssigningOffsets STARTED

kafka.log.LogTest > testBuildTimeIndexWhenNotAssigningOffsets PASSED

kafka.log.LogConfigTest > shouldValidateThrottledReplicasConfig STARTED

kafka.log.LogConfigTest > shouldValidateThrottledReplicasConfig PASSED

kafka.log.LogConfigTest > testFromPropsEmpty STARTED

kafka.log.LogConfigTest > testFromPropsEmpty PASSED

kafka.log.LogConfigTest > testKafkaConfigToProps STARTED

kafka.log.LogConfigTest > testKafkaConfigToProps PASSED

kafka.log.LogConfigTest > testFromPropsInvalid STARTED

kafka.log.LogConfigTest > testFromPropsInvalid PASSED

kafka.log.CleanerTest > testBuildOffsetMap STARTED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.CleanerTest > testBuildOffsetMapFakeLarge STARTED

kafka.log.CleanerTest > testBuildOffsetMapFakeLarge PASSED

kafka.log.CleanerTest > testSegmentGrouping STARTED

kafka.log.CleanerTest > testSegm

Build failed in Jenkins: kafka-trunk-jdk8 #902

2016-09-20 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-4184; Intermittent failures in

[ismael] KAFKA-4197; Make ReassignPartitionsTest System Test move data

--
[...truncated 3606 lines...]

kafka.api.AuthorizerIntegrationTest > testListOfsetsWithTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testListOfsetsWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicRead STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicRead PASSED

kafka.api.AuthorizerIntegrationTest > testListOffsetsWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testListOffsetsWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededForWritingToNonExistentTopic STARTED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededForWritingToNonExistentTopic PASSED

kafka.api.AuthorizerIntegrationTest > 
testSimpleConsumeWithOffsetLookupAndNoGroupAccess STARTED

kafka.api.AuthorizerIntegrationTest > 
testSimpleConsumeWithOffsetLookupAndNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicWrite STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testAuthorization STARTED

kafka.api.AuthorizerIntegrationTest > testAuthorization PASSED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteWithDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteWithDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicAndGroupRead STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicAndGroupRead PASSED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteWithoutDescribe 
STARTED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteWithoutDescribe 
PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicWrite STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoGroupAccess STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoAccess STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoGroupAccess STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoAccess STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithTopicAndGroupRead 
STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithTopicAndGroupRead 
PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicAndGroupRead STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicAndGroupRead PASSED

kafka.api.AuthorizerIntegrationTest > 
testSimpleConsumeWithExplicitSeekAndNoGroupAccess STARTED

kafka.api.AuthorizerIntegrationTest > 
testSimpleConsumeWithExplicitSeekAndNoGroupAccess PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testCoordinatorFailover STARTED

kafka.api.SaslPlainPlaintextConsumerTest > testCoordinatorFailover PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testSimpleConsumption STARTED

kafka.api.SaslPlainPlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.RequestResponseSerializationTest > 
testSerializationAndDeserialization STARTED

kafka.api.RequestResponseSerializationTest > 
testSerializationAndDeserialization PASSED

kafka.api.RequestResponseSerializationTest > testFetchResponseVersion STARTED

kafka.api.RequestResponseSerializationTest > testFetchResponseVersion PASSED

kafka.api.RequestResponseSerializationTest > testProduceResponseVersion STARTED

kafka.api.RequestResponseSerializationTest > testProduceResponseVersion P

Jenkins build is back to normal : kafka-trunk-jdk7 #1560

2016-09-20 Thread Apache Jenkins Server
See 



[GitHub] kafka pull request #1893: KAFKA-4178: Windows in Rate Calculation

2016-09-20 Thread benstopford
GitHub user benstopford opened a pull request:

https://github.com/apache/kafka/pull/1893

KAFKA-4178: Windows in Rate Calculation

ReplicationQuotas included a new Rate class which changed the way the 
window is calculated. @junrao asked that we look to consolidate this. On 
balance I believe there is a good case for both, so this PR pulls out two 
policies for calculating the window, and attempts to explain what they are and 
why we might need them. 

They are:
**Elapsed**: Set the window to the difference between the oldest and newest 
measurement time. Handle NaN. _(Replication Quotas use this)_
**Fixed**: Fix the window to the full duration (10s by default). _(Client 
Quotas use this)_

Replication Quotas use **Elapsed** because **Fixed** underestimates the rate 
during the first window. Underestimates are best avoided as they create a load 
spike when replication starts. So it's preferable to overestimate the rate, and 
hence increase the throttled rate slowly. Elapsed is also significantly easier 
to test. 

I'm not totally sure why Client Quotas were changed to use **Fixed**. I know 
there was a NaN issue, but that should really be tangential (it is handled in 
both policies). However it seems sensible to slowly throttle a client's 
connection down to the desired rate when you initialise or change quotas. 

So I think there is a requirement for both types of rate, and hence I've 
included both in this PR. **Elapsed** suits throttled replication, where the 
general use case is to initiate some rebalance and immediately apply a 
conservative throttle. **Fixed** suits client quotas where we want to gently 
throttle a client down to the desired rate over a period of seconds. 
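For illustration, here is a minimal runnable sketch of the two policies (class 
and method names are made up; this is not the actual Rate code):

```java
// Minimal sketch of the two window policies; names are illustrative only.
public class RateWindows {

    // Elapsed: window spans from the oldest observation to now.
    static double elapsedWindowMs(long oldestObsMs, long nowMs) {
        // Guard the zero-length window (a single sample), which would
        // otherwise yield NaN/Infinity when dividing by it.
        return Math.max(nowMs - oldestObsMs, 1);
    }

    // Fixed: window is always the full configured duration.
    static double fixedWindowMs(int numSamples, long sampleLengthMs) {
        return (double) numSamples * sampleLengthMs;
    }

    public static void main(String[] args) {
        double bytes = 1000.0;             // bytes observed so far
        long firstObsMs = 0, nowMs = 1000; // only 1s of a 10s window has elapsed
        System.out.println(bytes / elapsedWindowMs(firstObsMs, nowMs)); // 1.0 B/ms
        System.out.println(bytes / fixedWindowMs(10, 1000));            // 0.1 B/ms
    }
}
```

Early in the first window **Fixed** divides by the full 10s even though only 1s 
of measurements exists, hence the underestimate; **Elapsed** divides by the 
observed span.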


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/benstopford/kafka KAFKA-4178

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1893.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1893






---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4178) Replication Throttling: Consolidate Rate Classes

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15507247#comment-15507247
 ] 

ASF GitHub Bot commented on KAFKA-4178:
---

GitHub user benstopford opened a pull request:

https://github.com/apache/kafka/pull/1893

KAFKA-4178: Windows in Rate Calculation

ReplicationQuotas included a new Rate class which changed the way the 
window is calculated. @junrao asked that we look to consolidate this. On 
balance I believe there is a good case for both, so this PR pulls out two 
policies for calculating the window, and attempts to explain what they are and 
why we might need them. 

They are:
**Elapsed**: Set the window to the difference between the oldest and newest 
measurement time. Handle NaN. _(Replication Quotas use this)_
**Fixed**: Fix the window to the full duration (10s by default). _(Client 
Quotas use this)_

Replication Quotas use **Elapsed** because **Fixed** underestimates the rate 
during the first window. Underestimates are best avoided as they create a load 
spike when replication starts. So it's preferable to overestimate the rate, and 
hence increase the throttled rate slowly. Elapsed is also significantly easier 
to test. 

I'm not totally sure why Client Quotas were changed to use **Fixed**. I know 
there was a NaN issue, but that should really be tangential (it is handled in 
both policies). However it seems sensible to slowly throttle a client's 
connection down to the desired rate when you initialise or change quotas. 

So I think there is a requirement for both types of rate, and hence I've 
included both in this PR. **Elapsed** suits throttled replication, where the 
general use case is to initiate some rebalance and immediately apply a 
conservative throttle. **Fixed** suits client quotas where we want to gently 
throttle a client down to the desired rate over a period of seconds. 


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/benstopford/kafka KAFKA-4178

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1893.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1893






> Replication Throttling: Consolidate Rate Classes
> 
>
> Key: KAFKA-4178
> URL: https://issues.apache.org/jira/browse/KAFKA-4178
> Project: Kafka
>  Issue Type: Improvement
>  Components: replication
>Affects Versions: 0.10.1.0
>Reporter: Ben Stopford
>
> Replication throttling is using a different implementation of Rate to client 
> throttling (Rate & SimpleRate). These should be consolidated so both use the 
> same approach. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4200) Minor issue with throttle argument in kafka-reassign-partitions.sh

2016-09-20 Thread Ben Stopford (JIRA)
Ben Stopford created KAFKA-4200:
---

 Summary: Minor issue with throttle argument in 
kafka-reassign-partitions.sh
 Key: KAFKA-4200
 URL: https://issues.apache.org/jira/browse/KAFKA-4200
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.10.1.0
Reporter: Ben Stopford


1.
kafka-reassign-partitions --verify prints "Throttle was removed" regardless of 
whether a throttle was applied. It should only print this if the value was 
actually changed. 

2.
--verify should fail with an error if the --throttle argument is supplied 
(check --generate too)
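A minimal sketch of the second check (hypothetical names, not the tool's actual 
code):

{code}
// Hypothetical sketch: reject --throttle when it accompanies --verify or
// --generate, since it only makes sense together with --execute.
static void validateThrottleArg(boolean verify, boolean generate, Long throttle) {
    if (throttle != null && (verify || generate))
        throw new IllegalArgumentException(
            "The --throttle option is only valid together with --execute");
}
{code}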



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-4200) Minor issue with throttle argument in kafka-reassign-partitions.sh

2016-09-20 Thread Ben Stopford (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Stopford reassigned KAFKA-4200:
---

Assignee: Ben Stopford

> Minor issue with throttle argument in kafka-reassign-partitions.sh
> --
>
> Key: KAFKA-4200
> URL: https://issues.apache.org/jira/browse/KAFKA-4200
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Ben Stopford
>Assignee: Ben Stopford
>
> 1.
> kafka-reassign-partitions --verify prints "Throttle was removed" regardless of 
> whether a throttle was applied. It should only print this if the value was 
> actually changed. 
> 2.
> --verify should fail with an error if the --throttle argument is supplied 
> (check --generate too)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4196) Transient test failure: DeleteConsumerGroupTest.testConsumptionOnRecreatedTopicAfterTopicWideDeleteInZK

2016-09-20 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15507287#comment-15507287
 ] 

Flavio Junqueira commented on KAFKA-4196:
-

[~ijuma] is this from trunk?

> Transient test failure: 
> DeleteConsumerGroupTest.testConsumptionOnRecreatedTopicAfterTopicWideDeleteInZK
> ---
>
> Key: KAFKA-4196
> URL: https://issues.apache.org/jira/browse/KAFKA-4196
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>  Labels: transient-unit-test-failure
>
> The error:
> {code}
> java.lang.AssertionError: Admin path /admin/delete_topic/topic path not 
> deleted even after a replica is restarted
>   at org.junit.Assert.fail(Assert.java:88)
>   at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:752)
>   at kafka.utils.TestUtils$.verifyTopicDeletion(TestUtils.scala:1017)
>   at 
> kafka.admin.DeleteConsumerGroupTest.testConsumptionOnRecreatedTopicAfterTopicWideDeleteInZK(DeleteConsumerGroupTest.scala:156)
> {code}
> Caused by a broken invariant in the Controller: a partition exists in 
> `ControllerContext.partitionLeadershipInfo`, but not 
> `controllerContext.partitionReplicaAssignment`.
> {code}
> [2016-09-20 06:45:13,967] ERROR [BrokerChangeListener on Controller 1]: Error 
> while handling broker changes 
> (kafka.controller.ReplicaStateMachine$BrokerChangeListener:103)
> java.util.NoSuchElementException: key not found: [topic,0]
>   at scala.collection.MapLike$class.default(MapLike.scala:228)
>   at scala.collection.AbstractMap.default(Map.scala:58)
>   at scala.collection.mutable.HashMap.apply(HashMap.scala:64)
>   at 
> kafka.controller.ControllerBrokerRequestBatch.kafka$controller$ControllerBrokerRequestBatch$$updateMetadataRequestMapFor$1(ControllerChannelManager.scala:310)
>   at 
> kafka.controller.ControllerBrokerRequestBatch$$anonfun$addUpdateMetadataRequestForBrokers$4.apply(ControllerChannelManager.scala:343)
>   at 
> kafka.controller.ControllerBrokerRequestBatch$$anonfun$addUpdateMetadataRequestForBrokers$4.apply(ControllerChannelManager.scala:343)
>   at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
>   at 
> kafka.controller.ControllerBrokerRequestBatch.addUpdateMetadataRequestForBrokers(ControllerChannelManager.scala:343)
>   at 
> kafka.controller.KafkaController.sendUpdateMetadataRequest(KafkaController.scala:1030)
>   at 
> kafka.controller.KafkaController.onBrokerFailure(KafkaController.scala:492)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ReplicaStateMachine.scala:376)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:358)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:358)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply$mcV$sp(ReplicaStateMachine.scala:357)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:356)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:356)
>   at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:234)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener.handleChildChange(ReplicaStateMachine.scala:355)
>   at org.I0Itec.zkclient.ZkClient$10.run(ZkClient.java:843)
>   at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3888) Allow consumer to send heartbeats in background thread (KIP-62)

2016-09-20 Thread Deepika Kakrania (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15507290#comment-15507290
 ] 

Deepika Kakrania commented on KAFKA-3888:
-

Where can I download 0.10.1.0 from? 

> Allow consumer to send heartbeats in background thread (KIP-62)
> ---
>
> Key: KAFKA-3888
> URL: https://issues.apache.org/jira/browse/KAFKA-3888
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Affects Versions: 0.10.1.0
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.1.0
>
>
> This ticket covers the implementation of KIP-62 as documented here: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-62%3A+Allow+consumer+to+send+heartbeats+from+a+background+thread.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4196) Transient test failure: DeleteConsumerGroupTest.testConsumptionOnRecreatedTopicAfterTopicWideDeleteInZK

2016-09-20 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15507305#comment-15507305
 ] 

Ismael Juma commented on KAFKA-4196:


Yes. It seems to be rare, so not urgent. I cc'd you in case the problem was 
easy to spot.

> Transient test failure: 
> DeleteConsumerGroupTest.testConsumptionOnRecreatedTopicAfterTopicWideDeleteInZK
> ---
>
> Key: KAFKA-4196
> URL: https://issues.apache.org/jira/browse/KAFKA-4196
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>  Labels: transient-unit-test-failure
>
> The error:
> {code}
> java.lang.AssertionError: Admin path /admin/delete_topic/topic path not 
> deleted even after a replica is restarted
>   at org.junit.Assert.fail(Assert.java:88)
>   at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:752)
>   at kafka.utils.TestUtils$.verifyTopicDeletion(TestUtils.scala:1017)
>   at 
> kafka.admin.DeleteConsumerGroupTest.testConsumptionOnRecreatedTopicAfterTopicWideDeleteInZK(DeleteConsumerGroupTest.scala:156)
> {code}
> Caused by a broken invariant in the Controller: a partition exists in 
> `ControllerContext.partitionLeadershipInfo`, but not 
> `controllerContext.partitionReplicaAssignment`.
> {code}
> [2016-09-20 06:45:13,967] ERROR [BrokerChangeListener on Controller 1]: Error 
> while handling broker changes 
> (kafka.controller.ReplicaStateMachine$BrokerChangeListener:103)
> java.util.NoSuchElementException: key not found: [topic,0]
>   at scala.collection.MapLike$class.default(MapLike.scala:228)
>   at scala.collection.AbstractMap.default(Map.scala:58)
>   at scala.collection.mutable.HashMap.apply(HashMap.scala:64)
>   at 
> kafka.controller.ControllerBrokerRequestBatch.kafka$controller$ControllerBrokerRequestBatch$$updateMetadataRequestMapFor$1(ControllerChannelManager.scala:310)
>   at 
> kafka.controller.ControllerBrokerRequestBatch$$anonfun$addUpdateMetadataRequestForBrokers$4.apply(ControllerChannelManager.scala:343)
>   at 
> kafka.controller.ControllerBrokerRequestBatch$$anonfun$addUpdateMetadataRequestForBrokers$4.apply(ControllerChannelManager.scala:343)
>   at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
>   at 
> kafka.controller.ControllerBrokerRequestBatch.addUpdateMetadataRequestForBrokers(ControllerChannelManager.scala:343)
>   at 
> kafka.controller.KafkaController.sendUpdateMetadataRequest(KafkaController.scala:1030)
>   at 
> kafka.controller.KafkaController.onBrokerFailure(KafkaController.scala:492)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ReplicaStateMachine.scala:376)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:358)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:358)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply$mcV$sp(ReplicaStateMachine.scala:357)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:356)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:356)
>   at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:234)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener.handleChildChange(ReplicaStateMachine.scala:355)
>   at org.I0Itec.zkclient.ZkClient$10.run(ZkClient.java:843)
>   at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-3888) Allow consumer to send heartbeats in background thread (KIP-62)

2016-09-20 Thread Deepika Kakrania (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15507290#comment-15507290
 ] 

Deepika Kakrania edited comment on KAFKA-3888 at 9/20/16 6:26 PM:
--

Where can I download 0.10.1.0 from? We badly need this feature.


was (Author: dk402):
Where can I download 0.10.1.0 from? 

> Allow consumer to send heartbeats in background thread (KIP-62)
> ---
>
> Key: KAFKA-3888
> URL: https://issues.apache.org/jira/browse/KAFKA-3888
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Affects Versions: 0.10.1.0
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.1.0
>
>
> This ticket covers the implementation of KIP-62 as documented here: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-62%3A+Allow+consumer+to+send+heartbeats+from+a+background+thread.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1954) Speed Up The Unit Tests

2016-09-20 Thread Balint Molnar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15507430#comment-15507430
 ] 

Balint Molnar commented on KAFKA-1954:
--

[~sriharsha] if you are not working on this, do you mind if I give it a try?

> Speed Up The Unit Tests
> ---
>
> Key: KAFKA-1954
> URL: https://issues.apache.org/jira/browse/KAFKA-1954
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jay Kreps
>Assignee: Sriharsha Chintalapani
>  Labels: newbie++
> Attachments: KAFKA-1954.patch
>
>
> The server unit tests are pretty slow. They take about 8m40s on my machine. 
> Combined with slow scala compile time this is kind of painful.
> Almost all of this time comes from the integration tests which start one or 
> more brokers and then shut them down.
> Our finding has been that these integration tests are actually quite useful 
> so we probably can't just get rid of them.
> Here are some times:
> Zk startup: 100ms
> Kafka server startup: 600ms
> Kafka server shutdown: 500ms
>  
> So you can see that an integration test suite with 10 tests that starts and 
> stops a 3 node cluster for each test will take ~34 seconds even if the tests 
> themselves are instantaneous.
> I think the best solution to this is to get the test harness classes in shape 
> and then performance tune them a bit as this would potentially speed 
> everything up. There are several test harness classes:
> - ZooKeeperTestHarness
> - KafkaServerTestHarness
> - ProducerConsumerTestHarness
> - IntegrationTestHarness (similar to ProducerConsumerTestHarness but using 
> new clients)
> Unfortunately often tests don't use the right harness, they often use a 
> lower-level harness than they should and manually create stuff. Usually the 
> cause of this is that the harness is missing some feature.
> I think the right thing to do here is
> 1. Get the tests converted to the best possible harness. If you are testing 
> producers and consumers then you should use the harness that creates all that 
> and shuts it down for you.
> 2. Optimize the harnesses to be faster.
> How can we optimize the harnesses? I'm not sure, I would solicit ideas. Here 
> are a few:
> 1. It's worth analyzing the logging to see what is taking up time in the 
> startup and shutdown.
> 2. There may be things like controlled shutdown that we can disable (since we 
> are anyway going to discard the brokers after shutdown).
> 3. The harnesses could probably start all the servers and all the clients in 
> parallel.
> 4. We may be able to tune down the resource usage in the server config for 
> test cases a bit.
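As a quick sanity check of the ~34 second figure quoted above, using the 
per-component times from the description:

{code}
// Back-of-envelope check of the ~34s estimate (times from the ticket).
public class HarnessCost {
    public static void main(String[] args) {
        int tests = 10, brokers = 3;
        long zkStartMs = 100, brokerStartMs = 600, brokerStopMs = 500;
        long perTestMs = zkStartMs + brokers * (brokerStartMs + brokerStopMs);
        System.out.println(tests * perTestMs / 1000.0 + " s"); // prints "34.0 s"
    }
}
{code}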



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1954) Speed Up The Unit Tests

2016-09-20 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15507444#comment-15507444
 ] 

Sriharsha Chintalapani commented on KAFKA-1954:
---

[~baluchicken] feel free to take it over.

> Speed Up The Unit Tests
> ---
>
> Key: KAFKA-1954
> URL: https://issues.apache.org/jira/browse/KAFKA-1954
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jay Kreps
>Assignee: Sriharsha Chintalapani
>  Labels: newbie++
> Attachments: KAFKA-1954.patch
>
>
> The server unit tests are pretty slow. They take about 8m40s on my machine. 
> Combined with slow scala compile time this is kind of painful.
> Almost all of this time comes from the integration tests which start one or 
> more brokers and then shut them down.
> Our finding has been that these integration tests are actually quite useful 
> so we probably can't just get rid of them.
> Here are some times:
> Zk startup: 100ms
> Kafka server startup: 600ms
> Kafka server shutdown: 500ms
>  
> So you can see that an integration test suite with 10 tests that starts and 
> stops a 3 node cluster for each test will take ~34 seconds even if the tests 
> themselves are instantaneous.
> I think the best solution to this is to get the test harness classes in shape 
> and then performance tune them a bit as this would potentially speed 
> everything up. There are several test harness classes:
> - ZooKeeperTestHarness
> - KafkaServerTestHarness
> - ProducerConsumerTestHarness
> - IntegrationTestHarness (similar to ProducerConsumerTestHarness but using 
> new clients)
> Unfortunately often tests don't use the right harness, they often use a 
> lower-level harness than they should and manually create stuff. Usually the 
> cause of this is that the harness is missing some feature.
> I think the right thing to do here is
> 1. Get the tests converted to the best possible harness. If you are testing 
> producers and consumers then you should use the harness that creates all that 
> and shuts it down for you.
> 2. Optimize the harnesses to be faster.
> How can we optimize the harnesses? I'm not sure, I would solicit ideas. Here 
> are a few:
> 1. It's worth analyzing the logging to see what is taking up time in the 
> startup and shutdown.
> 2. There may be things like controlled shutdown that we can disable (since we 
> are anyway going to discard the brokers after shutdown).
> 3. The harnesses could probably start all the servers and all the clients in 
> parallel.
> 4. We may be able to tune down the resource usage in the server config for 
> test cases a bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4196) Transient test failure: DeleteConsumerGroupTest.testConsumptionOnRecreatedTopicAfterTopicWideDeleteInZK

2016-09-20 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15507443#comment-15507443
 ] 

Flavio Junqueira commented on KAFKA-4196:
-

After a cursory look, it looks like we could have this behavior if:

# There is an event that triggers {{TopicsChangeListener.handleChildChange}}.
# The previous event is followed by a broker change: {{BrokerChangeListener}}.

According to the description, we do have the broker change event. The topics 
event only happens after the topic has been deleted from under 
{{/broker/topics}} in zk, though. If the controller instance that triggers the 
first is the same that deletes the topic, then it doesn't look like we can have 
the behavior above because: 1) all those events are processed under the 
controller context lock; 2) the controller deletes the topic znodes and updates 
{{ControllerContext.partitionLeadershipInfo}} and 
{{controllerContext.partitionReplicaAssignment}}. Consequently, one possibility 
is a race between two controllers. One puzzling point is that the delete znode 
for the topic isn't going away, which indicates that no controller instance is 
successfully completing the delete operation.

I'd need to investigate some more to find the culprit. If it happens again and 
you have a chance, please upload the logs. I'll see if I can repro locally.
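
For clarity, the broken invariant amounts to something like the following (a 
hypothetical Java sketch, not the controller's actual Scala code):

{code}
import java.util.List;
import java.util.Map;
import org.apache.kafka.common.TopicPartition;

// Every partition with leadership info must also have a replica assignment;
// the "key not found" stack trace below is what happens when it doesn't.
class ControllerInvariant {
    static List<Integer> replicasFor(Map<TopicPartition, List<Integer>> assignment,
                                     TopicPartition tp) {
        List<Integer> replicas = assignment.get(tp);
        if (replicas == null)
            throw new IllegalStateException("Invariant violated: " + tp
                + " has leadership info but no replica assignment");
        return replicas;
    }
}
{code}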

> Transient test failure: 
> DeleteConsumerGroupTest.testConsumptionOnRecreatedTopicAfterTopicWideDeleteInZK
> ---
>
> Key: KAFKA-4196
> URL: https://issues.apache.org/jira/browse/KAFKA-4196
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>  Labels: transient-unit-test-failure
>
> The error:
> {code}
> java.lang.AssertionError: Admin path /admin/delete_topic/topic path not 
> deleted even after a replica is restarted
>   at org.junit.Assert.fail(Assert.java:88)
>   at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:752)
>   at kafka.utils.TestUtils$.verifyTopicDeletion(TestUtils.scala:1017)
>   at 
> kafka.admin.DeleteConsumerGroupTest.testConsumptionOnRecreatedTopicAfterTopicWideDeleteInZK(DeleteConsumerGroupTest.scala:156)
> {code}
> Caused by a broken invariant in the Controller: a partition exists in 
> `ControllerContext.partitionLeadershipInfo`, but not 
> `controllerContext.partitionReplicaAssignment`.
> {code}
> [2016-09-20 06:45:13,967] ERROR [BrokerChangeListener on Controller 1]: Error 
> while handling broker changes 
> (kafka.controller.ReplicaStateMachine$BrokerChangeListener:103)
> java.util.NoSuchElementException: key not found: [topic,0]
>   at scala.collection.MapLike$class.default(MapLike.scala:228)
>   at scala.collection.AbstractMap.default(Map.scala:58)
>   at scala.collection.mutable.HashMap.apply(HashMap.scala:64)
>   at 
> kafka.controller.ControllerBrokerRequestBatch.kafka$controller$ControllerBrokerRequestBatch$$updateMetadataRequestMapFor$1(ControllerChannelManager.scala:310)
>   at 
> kafka.controller.ControllerBrokerRequestBatch$$anonfun$addUpdateMetadataRequestForBrokers$4.apply(ControllerChannelManager.scala:343)
>   at 
> kafka.controller.ControllerBrokerRequestBatch$$anonfun$addUpdateMetadataRequestForBrokers$4.apply(ControllerChannelManager.scala:343)
>   at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
>   at 
> kafka.controller.ControllerBrokerRequestBatch.addUpdateMetadataRequestForBrokers(ControllerChannelManager.scala:343)
>   at 
> kafka.controller.KafkaController.sendUpdateMetadataRequest(KafkaController.scala:1030)
>   at 
> kafka.controller.KafkaController.onBrokerFailure(KafkaController.scala:492)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ReplicaStateMachine.scala:376)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:358)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:358)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply$mcV$sp(ReplicaStateMachine.scala:357)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:356)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:356)
>   at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:234)
>   at 
> kafka.controller.ReplicaStateMachine$BrokerChangeListener.han

[jira] [Commented] (KAFKA-1954) Speed Up The Unit Tests

2016-09-20 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15507511#comment-15507511
 ] 

Ismael Juma commented on KAFKA-1954:


It's worth mentioning that we use one gradle fork per core by default when 
running the test suite. So, adding parallelism at the individual test level 
will help less in that case. However, it will still help when running tests 
individually or if the fork count is overridden (we set it to 1 in Jenkins for 
better test stability).

> Speed Up The Unit Tests
> ---
>
> Key: KAFKA-1954
> URL: https://issues.apache.org/jira/browse/KAFKA-1954
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jay Kreps
>Assignee: Sriharsha Chintalapani
>  Labels: newbie++
> Attachments: KAFKA-1954.patch
>
>
> The server unit tests are pretty slow. They take about 8m40s on my machine. 
> Combined with slow scala compile time this is kind of painful.
> Almost all of this time comes from the integration tests which start one or 
> more brokers and then shut them down.
> Our finding has been that these integration tests are actually quite useful 
> so we probably can't just get rid of them.
> Here are some times:
> Zk startup: 100ms
> Kafka server startup: 600ms
> Kafka server shutdown: 500ms
>  
> So you can see that an integration test suite with 10 tests that starts and 
> stops a 3 node cluster for each test will take ~34 seconds even if the tests 
> themselves are instantaneous.
> I think the best solution to this is to get the test harness classes in shape 
> and then performance tune them a bit as this would potentially speed 
> everything up. There are several test harness classes:
> - ZooKeeperTestHarness
> - KafkaServerTestHarness
> - ProducerConsumerTestHarness
> - IntegrationTestHarness (similar to ProducerConsumerTestHarness but using 
> new clients)
> Unfortunately often tests don't use the right harness, they often use a 
> lower-level harness than they should and manually create stuff. Usually the 
> cause of this is that the harness is missing some feature.
> I think the right thing to do here is
> 1. Get the tests converted to the best possible harness. If you are testing 
> producers and consumers then you should use the harness that creates all that 
> and shuts it down for you.
> 2. Optimize the harnesses to be faster.
> How can we optimize the harnesses? I'm not sure, I would solicit ideas. Here 
> are a few:
> 1. It's worth analyzing the logging to see what is taking up time in the 
> startup and shutdown.
> 2. There may be things like controlled shutdown that we can disable (since we 
> are anyway going to discard the brokers after shutdown).
> 3. The harnesses could probably start all the servers and all the clients in 
> parallel.
> 4. We may be able to tune down the resource usage in the server config for 
> test cases a bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1954) Speed Up The Unit Tests

2016-09-20 Thread Balint Molnar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15507561#comment-15507561
 ] 

Balint Molnar commented on KAFKA-1954:
--

Thanks, [~ijuma].
I realized nearly every test case recreates the server infra (kafka/zookeeper) 
before it runs, even when that's not needed, so first I would like to refactor 
the classes so the infra is restarted only when required.

> Speed Up The Unit Tests
> ---
>
> Key: KAFKA-1954
> URL: https://issues.apache.org/jira/browse/KAFKA-1954
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jay Kreps
>Assignee: Sriharsha Chintalapani
>  Labels: newbie++
> Attachments: KAFKA-1954.patch
>
>
> The server unit tests are pretty slow. They take about 8m40s on my machine. 
> Combined with slow scala compile time this is kind of painful.
> Almost all of this time comes from the integration tests which start one or 
> more brokers and then shut them down.
> Our finding has been that these integration tests are actually quite useful 
> so we probably can't just get rid of them.
> Here are some times:
> Zk startup: 100ms
> Kafka server startup: 600ms
> Kafka server shutdown: 500ms
>  
> So you can see that an integration test suite with 10 tests that starts and 
> stops a 3 node cluster for each test will take ~34 seconds even if the tests 
> themselves are instantaneous.
> I think the best solution to this is to get the test harness classes in shape 
> and then performance tune them a bit as this would potentially speed 
> everything up. There are several test harness classes:
> - ZooKeeperTestHarness
> - KafkaServerTestHarness
> - ProducerConsumerTestHarness
> - IntegrationTestHarness (similar to ProducerConsumerTestHarness but using 
> new clients)
> Unfortunately often tests don't use the right harness, they often use a 
> lower-level harness than they should and manually create stuff. Usually the 
> cause of this is that the harness is missing some feature.
> I think the right thing to do here is
> 1. Get the tests converted to the best possible harness. If you are testing 
> producers and consumers then you should use the harness that creates all that 
> and shuts it down for you.
> 2. Optimize the harnesses to be faster.
> How can we optimize the harnesses? I'm not sure, I would solicit ideas. Here 
> are a few:
> 1. It's worth analyzing the logging to see what is taking up time in the 
> startup and shutdown.
> 2. There may be things like controlled shutdown that we can disable (since we 
> are anyway going to discard the brokers after shutdown).
> 3. The harnesses could probably start all the servers and all the clients in 
> parallel.
> 4. We may be able to tune down the resource usage in the server config for 
> test cases a bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


RE: Kafka duplicate offset at Consumer

2016-09-20 Thread Tauzell, Dave
Are you using the new java consumer?   What method are you using to commit 
offsets?

-Dave

-Original Message-
From: Ghosh, Achintya (Contractor) [mailto:achintya_gh...@comcast.com]
Sent: Tuesday, September 20, 2016 8:56 AM
To: us...@kafka.apache.org
Cc: dev@kafka.apache.org
Subject: Kafka duplicate offset at Consumer

Hi there,

I see the Kafka consumer receive the same offset value many times, which 
creates a lot of duplicate messages. What could be the reason, and how can we 
solve this issue?

Thanks
Achintya
This e-mail and any files transmitted with it are confidential, may contain 
sensitive information, and are intended solely for the use of the individual or 
entity to whom they are addressed. If you have received this e-mail in error, 
please notify the sender by reply e-mail immediately and destroy all copies of 
the e-mail and any attachments.
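
For reference, a minimal sketch of committing offsets manually with the new
Java consumer (broker address, group id, and topic name are made up), since
auto-commit timing is a common source of replays like the ones described above:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class CommitAfterProcessing {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "example-group");
        props.put("enable.auto.commit", "false"); // commit manually instead
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("example-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records)
                    System.out.printf("offset=%d value=%s%n",
                                      record.offset(), record.value());
                // Commit only after the batch is processed; offsets that were
                // never committed are what get redelivered after a rebalance
                // or restart, showing up as duplicates.
                consumer.commitSync();
            }
        }
    }
}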


Status of KIP-68: Add a consumed log retention

2016-09-20 Thread Renu Tewari
Hi All,
  Wanted to find out more on the status of KIP-68. There are no details in
the discussion thread and the pointers are not updated. Is this being
actively worked on? This is clearly needed to support smarter and more
cost-effective retention policies.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-68+Add+a+consumed+log+retention+before+log+retention

best
Renu


Re: [UPDATE] 0.10.1 Release Progress

2016-09-20 Thread Becket Qin
Awesome!

On Mon, Sep 19, 2016 at 11:42 PM, Neha Narkhede  wrote:

> Nice!
> On Mon, Sep 19, 2016 at 11:33 PM Ismael Juma  wrote:
>
> > Well done everyone. :)
> >
> > On 20 Sep 2016 5:23 am, "Jason Gustafson"  wrote:
> >
> > > Thanks everyone for the hard work! The 0.10.1 release branch has been
> > > created. We're now entering the stabilization phase of this release
> which
> > > means we'll focus on bug fixes and testing.
> > >
> > > -Jason
> > >
> > > On Fri, Sep 16, 2016 at 5:00 PM, Jason Gustafson 
> > > wrote:
> > >
> > > > Hi All,
> > > >
> > > > Thanks everyone for the hard work! Here's an update on the remaining
> > KIPs
> > > > that we are hoping to include:
> > > >
> > > > KIP-78 (clusterId): Review is basically complete. Assuming no
> problems
> > > > emerge, Ismael is planning to merge today.
> > > > KIP-74 (max fetch size): Review is nearing completion, just a few
> minor
> > > > issues remain. This will probably be merged tomorrow or Sunday.
> > > > KIP-55 (secure quotas): The patch has been rebased and probably needs
> > one
> > > > more review pass before merging. Jun is confident it can get in
> before
> > > > Monday.
> > > >
> > > > As for KIP-79, I've made one review pass, but to make it in, we'll
> need
> > > 1)
> > > > some more votes on the vote thread, and 2) a few review iterations.
> > It's
> > > > looking a bit doubtful, but let's see how it goes!
> > > >
> > > > Since we are nearing the feature freeze, it would be helpful if
> people
> > > > begin setting some priorities on the bugs that need to get in before
> > the
> > > > code freeze. I am going to make an effort to prune the list early
> next
> > > > week, so if there are any critical issues you know about, make sure
> > they
> > > > are marked as such.
> > > >
> > > > Thanks,
> > > > Jason
> > > >
> > >
> >
> --
> Thanks,
> Neha
>


Re: [UPDATE] 0.10.1 Release Progress

2016-09-20 Thread Mayuresh Gharat
Great !

Thanks,

Mayuresh

On Tue, Sep 20, 2016 at 2:43 PM, Becket Qin  wrote:

> Awesome!
>
> On Mon, Sep 19, 2016 at 11:42 PM, Neha Narkhede  wrote:
>
> > Nice!
> > On Mon, Sep 19, 2016 at 11:33 PM Ismael Juma  wrote:
> >
> > > Well done everyone. :)
> > >
> > > On 20 Sep 2016 5:23 am, "Jason Gustafson"  wrote:
> > >
> > > > Thanks everyone for the hard work! The 0.10.1 release branch has been
> > > > created. We're now entering the stabilization phase of this release
> > which
> > > > means we'll focus on bug fixes and testing.
> > > >
> > > > -Jason
> > > >
> > > > On Fri, Sep 16, 2016 at 5:00 PM, Jason Gustafson  >
> > > > wrote:
> > > >
> > > > > Hi All,
> > > > >
> > > > > Thanks everyone for the hard work! Here's an update on the
> remaining
> > > KIPs
> > > > > that we are hoping to include:
> > > > >
> > > > > KIP-78 (clusterId): Review is basically complete. Assuming no
> > problems
> > > > > emerge, Ismael is planning to merge today.
> > > > > KIP-74 (max fetch size): Review is nearing completion, just a few
> > minor
> > > > > issues remain. This will probably be merged tomorrow or Sunday.
> > > > > KIP-55 (secure quotas): The patch has been rebased and probably
> needs
> > > one
> > > > > more review pass before merging. Jun is confident it can get in
> > before
> > > > > Monday.
> > > > >
> > > > > As for KIP-79, I've made one review pass, but to make it in, we'll
> > need
> > > > 1)
> > > > > some more votes on the vote thread, and 2) a few review iterations.
> > > It's
> > > > > looking a bit doubtful, but let's see how it goes!
> > > > >
> > > > > Since we are nearing the feature freeze, it would be helpful if
> > people
> > > > > begin setting some priorities on the bugs that need to get in
> before
> > > the
> > > > > code freeze. I am going to make an effort to prune the list early
> > next
> > > > > week, so if there are any critical issues you know about, make sure
> > > they
> > > > > are marked as such.
> > > > >
> > > > > Thanks,
> > > > > Jason
> > > > >
> > > >
> > >
> > --
> > Thanks,
> > Neha
> >
>



-- 
-Regards,
Mayuresh R. Gharat
(862) 250-7125


[GitHub] kafka pull request #1894: KAFKA-3438: Rack-aware replica assignment should w...

2016-09-20 Thread vahidhashemian
GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/1894

KAFKA-3438: Rack-aware replica assignment should warn of overloaded brokers

When using the `--generate` option of the replica reassignment tool, the 
balance of replicas across brokers is checked (only if rack awareness is 
enabled), and if an imbalance is detected, a warning message is appended to 
the output of the tool (the suggested assignment).
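
For illustration, a minimal sketch of such an imbalance check (hypothetical 
names, not the patch itself):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class RackBalanceCheck {
    // replicasPerBroker: brokerId -> replica count in the proposed assignment.
    static void warnIfImbalanced(Map<Integer, Integer> replicasPerBroker) {
        int max = Collections.max(replicasPerBroker.values());
        int min = Math.max(Collections.min(replicasPerBroker.values()), 1);
        double ratio = (double) max / min;
        if (ratio > 1.0)
            System.out.printf("Warning: the most loaded broker has %.1fx as many "
                + "replicas as the least loaded broker, likely due to an uneven "
                + "distribution of brokers across racks.%n", ratio);
    }

    public static void main(String[] args) {
        Map<Integer, Integer> counts = new HashMap<>();
        for (int b = 0; b < 12; b++) counts.put(b, 10); // 12 existing brokers
        counts.put(12, 60); // a new broker on its own rack gets ~6x the replicas
        warnIfImbalanced(counts); // prints a 6.0x warning
    }
}
```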

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-3438

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1894.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1894


commit 8e71e7f3aaafa189f95228b6f2bb41d765d1bed0
Author: Vahid Hashemian 
Date:   2016-09-20T23:21:02Z

KAFKA-3438: Rack-aware replica assignment should warn of overloaded brokers

When using the `--generate` option of the replica reassignment tool the 
balance
of replicas across brokers is checked (only if rack-aware is enabled) and 
in case
an imbalance is detected, a proper warning message is appended to the 
suggested
reassignment.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3438) Rack Aware Replica Reassignment should warn of overloaded brokers

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15508184#comment-15508184
 ] 

ASF GitHub Bot commented on KAFKA-3438:
---

GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/1894

KAFKA-3438: Rack-aware replica assignment should warn of overloaded brokers

When using the `--generate` option of the replica reassignment tool, the 
balance of replicas across brokers is checked (only if rack awareness is 
enabled), and if an imbalance is detected, a warning message is appended to 
the output of the tool (the suggested assignment).

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-3438

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1894.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1894


commit 8e71e7f3aaafa189f95228b6f2bb41d765d1bed0
Author: Vahid Hashemian 
Date:   2016-09-20T23:21:02Z

KAFKA-3438: Rack-aware replica assignment should warn of overloaded brokers

When using the `--generate` option of the replica reassignment tool the 
balance
of replicas across brokers is checked (only if rack-aware is enabled) and 
in case
an imbalance is detected, a proper warning message is appended to the 
suggested
reassignment.




> Rack Aware Replica Reassignment should warn of overloaded brokers
> -
>
> Key: KAFKA-3438
> URL: https://issues.apache.org/jira/browse/KAFKA-3438
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.10.0.0
>Reporter: Ben Stopford
>Assignee: Vahid Hashemian
> Fix For: 0.10.1.0
>
>
> We've changed the replica reassignment code to be rack aware.
> One problem that might catch users out would be that they rebalance the 
> cluster using kafka-reassign-partitions.sh but their rack configuration means 
> that some high proportion of replicas are pushed onto a single, or small 
> number of, brokers. 
> This should be an easy problem to avoid, by changing the rack assignment 
> information, but we should probably warn users if they are going to create 
> something that is unbalanced. 
> So imagine I have a Kafka cluster of 12 nodes spread over two racks with rack 
> awareness enabled. If I add a 13th machine, on a new rack, and run the 
> rebalance tool, that new machine will get ~6x as many replicas as the least 
> loaded broker. 
> Suggest a warning be added to the tool output when --generate is called. 
> "The most loaded broker has 2.3x as many replicas as the least loaded 
> broker. This is likely due to an uneven distribution of brokers across racks. 
> You're advised to alter the rack config so there are approximately the same 
> number of brokers per rack" and displays the individual rack→#brokers and 
> broker→#replicas data for the proposed move.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3438) Rack Aware Replica Reassignment should warn of overloaded brokers

2016-09-20 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-3438:
---
Fix Version/s: (was: 0.10.1.0)
   Status: Patch Available  (was: Open)

> Rack Aware Replica Reassignment should warn of overloaded brokers
> -
>
> Key: KAFKA-3438
> URL: https://issues.apache.org/jira/browse/KAFKA-3438
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.10.0.0
>Reporter: Ben Stopford
>Assignee: Vahid Hashemian
>
> We've changed the replica reassignment code to be rack aware.
> One problem that might catch users out would be that they rebalance the 
> cluster using kafka-reassign-partitions.sh but their rack configuration means 
> that some high proportion of replicas are pushed onto a single, or small 
> number of, brokers. 
> This should be an easy problem to avoid, by changing the rack assignment 
> information, but we should probably warn users if they are going to create 
> something that is unbalanced. 
> So imagine I have a Kafka cluster of 12 nodes spread over two racks with rack 
> awareness enabled. If I add a 13th machine, on a new rack, and run the 
> rebalance tool, that new machine will get ~6x as many replicas as the least 
> loaded broker. 
> Suggest a warning be added to the tool output when --generate is called. 
> "The most loaded broker has 2.3x as many replicas as the least loaded 
> broker. This is likely due to an uneven distribution of brokers across racks. 
> You're advised to alter the rack config so there are approximately the same 
> number of brokers per rack" and displays the individual rack→#brokers and 
> broker→#replicas data for the proposed move.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4178) Replication Throttling: Consolidate Rate Classes

2016-09-20 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15508193#comment-15508193
 ] 

Jun Rao commented on KAFKA-4178:


[~jjkoshy], [~aauradkar], could you provide some context on the changes to the 
window size in Rate made in KAFKA-2443? Do you think we can just consolidate on 
the new window calculation that Ben is proposing here? It will be a lot simpler 
if there is only one type of Rate that every metric uses.

> Replication Throttling: Consolidate Rate Classes
> 
>
> Key: KAFKA-4178
> URL: https://issues.apache.org/jira/browse/KAFKA-4178
> Project: Kafka
>  Issue Type: Improvement
>  Components: replication
>Affects Versions: 0.10.1.0
>Reporter: Ben Stopford
>
> Replication throttling is using a different implementation of Rate to client 
> throttling (Rate & SimpleRate). These should be consolidated so both use the 
> same approach. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4201) Add a --assignment-strategy option to Mirror Maker

2016-09-20 Thread Vahid Hashemian (JIRA)
Vahid Hashemian created KAFKA-4201:
--

 Summary: Add a --assignment-strategy option to Mirror Maker
 Key: KAFKA-4201
 URL: https://issues.apache.org/jira/browse/KAFKA-4201
 Project: Kafka
  Issue Type: Improvement
Reporter: Vahid Hashemian
Assignee: Vahid Hashemian


The default assignment strategy in mirror maker will be changed from range to 
round robin in an upcoming release ([see 
KAFKA-3818|https://issues.apache.org/jira/browse/KAFKA-3818]). In order to make 
it easier for users to change the assignment strategy, add an 
{{--assignment-strategy}} option to the Mirror Maker command line tool.
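
A minimal sketch of how such an option might look with joptsimple, the parser 
Kafka's command line tools already use (the default shown is an assumption):

{code}
import joptsimple.OptionParser;
import joptsimple.OptionSet;
import joptsimple.OptionSpec;

public class AssignmentStrategyOption {
    public static void main(String[] args) {
        OptionParser parser = new OptionParser();
        OptionSpec<String> strategy = parser
            .accepts("assignment-strategy",
                     "Partition assignment strategy for the mirror maker consumers")
            .withRequiredArg()
            .describedAs("class name")
            .ofType(String.class)
            .defaultsTo("org.apache.kafka.clients.consumer.RangeAssignor");
        OptionSet options = parser.parse(args);
        // The value would feed the consumers' partition.assignment.strategy config.
        System.out.println("assignment strategy: " + options.valueOf(strategy));
    }
}
{code}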



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4201) Add an --assignment-strategy option to Mirror Maker

2016-09-20 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-4201:
---
Summary: Add an --assignment-strategy option to Mirror Maker  (was: Add a 
--assignment-strategy option to Mirror Maker)

> Add an --assignment-strategy option to Mirror Maker
> ---
>
> Key: KAFKA-4201
> URL: https://issues.apache.org/jira/browse/KAFKA-4201
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Vahid Hashemian
>Assignee: Vahid Hashemian
>
> The default assignment strategy in mirror maker will be changed from range to 
> round robin in an upcoming release ([see 
> KAFKA-3818|https://issues.apache.org/jira/browse/KAFKA-3818]). In order to 
> make it easier for users to change the assignment strategy, add an 
> {{--assignment-strategy}} option to the Mirror Maker command line tool.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)