[jira] [Work started] (KAFKA-4801) Transient test failure (part 2): ConsumerBounceTest.testConsumptionWithBrokerFailures

2017-05-06 Thread Armin Braun (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-4801 started by Armin Braun.
--
> Transient test failure (part 2): 
> ConsumerBounceTest.testConsumptionWithBrokerFailures
> -
>
> Key: KAFKA-4801
> URL: https://issues.apache.org/jira/browse/KAFKA-4801
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Armin Braun
>Assignee: Armin Braun
>Priority: Minor
>  Labels: transient-system-test-failure
>
> There is still some instability left in the test (though very little: statistically, 
> reproducing it takes more than 100 runs in half the cases):
> {code}
> ConsumerBounceTest.testConsumptionWithBrokerFailures
> {code}
> This results in the following exception being thrown at a relatively low rate (I'd say 
> definitely less than 0.5% of all runs on my machine):
> {code}
> kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures FAILED
> java.lang.IllegalArgumentException: You can only check the position for 
> partitions assigned to this consumer.
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.position(KafkaConsumer.java:1271)
> at 
> kafka.api.ConsumerBounceTest.consumeWithBrokerFailures(ConsumerBounceTest.scala:96)
> at 
> kafka.api.ConsumerBounceTest.testConsumptionWithBrokerFailures(ConsumerBounceTest.scala:69)
> {code}
> This was also reported in a comment on the original KAFKA-4198:
> https://issues.apache.org/jira/browse/KAFKA-4198?focusedCommentId=15765468&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15765468
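For context, here is a minimal stand-alone sketch (not the actual ConsumerBounceTest code) of the guard that avoids this exception: only query position() for partitions that are currently assigned, since a rebalance during a broker bounce can revoke partitions between poll() and position(). The broker address and topic name below are assumptions.

{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class PositionGuardSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: a local broker
        props.put("group.id", "bounce-test-sketch");
        props.put("key.deserializer", ByteArrayDeserializer.class.getName());
        props.put("value.deserializer", ByteArrayDeserializer.class.getName());

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("topic"));
            for (int i = 0; i < 10; i++) {
                ConsumerRecords<byte[], byte[]> records = consumer.poll(100);
                System.out.println("polled " + records.count() + " records");
                // Only ask for positions of partitions that are still assigned; querying an
                // unassigned partition is exactly what throws the IllegalArgumentException above.
                for (TopicPartition tp : consumer.assignment()) {
                    System.out.println(tp + " -> " + consumer.position(tp));
                }
            }
        }
    }
}
{code}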



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (KAFKA-2910) Failure in kafka.api.SslEndToEndAuthorizationTest.testNoGroupAcl

2017-05-06 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-2910.

Resolution: Fixed

Closing again as that issue is being tracked via KAFKA-5173.

> Failure in kafka.api.SslEndToEndAuthorizationTest.testNoGroupAcl
> 
>
> Key: KAFKA-2910
> URL: https://issues.apache.org/jira/browse/KAFKA-2910
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Rajini Sivaram
> Fix For: 0.10.0.0
>
>
> {code}
> java.lang.SecurityException: zkEnableSecureAcls is true, but the verification 
> of the JAAS login file failed.
>   at kafka.server.KafkaServer.initZk(KafkaServer.scala:265)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:168)
>   at kafka.utils.TestUtils$.createServer(TestUtils.scala:143)
>   at 
> kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:66)
>   at 
> kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:66)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:742)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at 
> kafka.integration.KafkaServerTestHarness$class.setUp(KafkaServerTestHarness.scala:66)
>   at 
> kafka.api.SslEndToEndAuthorizationTest.kafka$api$IntegrationTestHarness$$super$setUp(SslEndToEndAuthorizationTest.scala:24)
>   at 
> kafka.api.IntegrationTestHarness$class.setUp(IntegrationTestHarness.scala:58)
>   at 
> kafka.api.SslEndToEndAuthorizationTest.kafka$api$EndToEndAuthorizationTest$$super$setUp(SslEndToEndAuthorizationTest.scala:24)
>   at 
> kafka.api.EndToEndAuthorizationTest$class.setUp(EndToEndAuthorizationTest.scala:141)
>   at 
> kafka.api.SslEndToEndAuthorizationTest.setUp(SslEndToEndAuthorizationTest.scala:24)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
> 

[jira] [Commented] (KAFKA-5184) Transient failure: MultipleListenersWithAdditionalJaasContextTest.testProduceConsume

2017-05-06 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15999371#comment-15999371
 ] 

Ismael Juma commented on KAFKA-5184:


cc [~baluchicken] [~rsivaram]

> Transient failure: 
> MultipleListenersWithAdditionalJaasContextTest.testProduceConsume
> 
>
> Key: KAFKA-5184
> URL: https://issues.apache.org/jira/browse/KAFKA-5184
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Xavier Léauté
>
> https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/3574/testReport/junit/kafka.server/MultipleListenersWithAdditionalJaasContextTest/testProduceConsume/
> {code}
> Error Message
> java.lang.AssertionError: Partition [SECURE_INTERNAL,1] metadata not 
> propagated after 15000 ms
> Stacktrace
> java.lang.AssertionError: Partition [SECURE_INTERNAL,1] metadata not 
> propagated after 15000 ms
>   at kafka.utils.TestUtils$.fail(TestUtils.scala:311)
>   at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:811)
>   at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:857)
>   at kafka.utils.TestUtils$.$anonfun$createTopic$1(TestUtils.scala:254)
>   at 
> kafka.utils.TestUtils$.$anonfun$createTopic$1$adapted(TestUtils.scala:253)
>   at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:234)
>   at scala.collection.immutable.Range.foreach(Range.scala:156)
>   at scala.collection.TraversableLike.map(TraversableLike.scala:234)
>   at scala.collection.TraversableLike.map$(TraversableLike.scala:227)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at kafka.utils.TestUtils$.createTopic(TestUtils.scala:253)
>   at 
> kafka.server.MultipleListenersWithSameSecurityProtocolBaseTest.$anonfun$setUp$3(MultipleListenersWithSameSecurityProtocolBaseTest.scala:109)
>   at 
> kafka.server.MultipleListenersWithSameSecurityProtocolBaseTest.$anonfun$setUp$3$adapted(MultipleListenersWithSameSecurityProtocolBaseTest.scala:106)
>   at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:59)
>   at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:52)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>   at 
> kafka.server.MultipleListenersWithSameSecurityProtocolBaseTest.setUp(MultipleListenersWithSameSecurityProtocolBaseTest.scala:106)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.R

[jira] [Commented] (KAFKA-5184) Transient failure: MultipleListenersWithAdditionalJaasContextTest.testProduceConsume

2017-05-06 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15999372#comment-15999372
 ] 

Ismael Juma commented on KAFKA-5184:


The relevant part is:

{code}
java.lang.IllegalStateException: 
kafka.common.BrokerEndPointNotAvailableException: End point with protocol label 
ListenerName(PLAINTEXT) not found for broker 0
at 
kafka.controller.ControllerBrokerRequestBatch.sendRequestsToBrokers(ControllerChannelManager.scala:459)
at 
kafka.controller.ReplicaStateMachine.handleStateChanges(ReplicaStateMachine.scala:88)
at 
kafka.controller.KafkaController.onNewPartitionCreation(KafkaController.scala:443)
at 
kafka.controller.KafkaController.onNewTopicCreation(KafkaController.scala:431)
at 
kafka.controller.KafkaController$TopicChange.process(KafkaController.scala:1200)
at 
kafka.controller.KafkaController$ControllerEventThread.doWork(KafkaController.scala:1147)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:64)
Caused by: kafka.common.BrokerEndPointNotAvailableException: End point with 
protocol label ListenerName(PLAINTEXT) not found for broker 0
at kafka.cluster.Broker.$anonfun$getNode$1(Broker.scala:172)
at scala.collection.MapLike.getOrElse(MapLike.scala:128)
at scala.collection.MapLike.getOrElse$(MapLike.scala:126)
at scala.collection.AbstractMap.getOrElse(Map.scala:59)
at kafka.cluster.Broker.getNode(Broker.scala:172)
at 
kafka.controller.ControllerBrokerRequestBatch.$anonfun$sendRequestsToBrokers$6(ControllerChannelManager.scala:361)
at 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:234)
at scala.collection.immutable.Set$Set1.foreach(Set.scala:95)
at scala.collection.TraversableLike.map(TraversableLike.scala:234)
at scala.collection.TraversableLike.map$(TraversableLike.scala:227)
{code}
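To make the failure mode concrete, a small stand-alone illustration (not Kafka's actual classes, and the listener names are assumptions taken from the test) of the lookup that fails: broker endpoints are keyed by listener name, and asking a broker for a listener it never registered finds no entry, which Broker.getNode turns into the exception above.

{code}
import java.util.HashMap;
import java.util.Map;

public class ListenerLookupSketch {
    public static void main(String[] args) {
        // Hypothetical endpoint table for broker 0.
        Map<String, String> endpointsByListener = new HashMap<>();
        endpointsByListener.put("SECURE_INTERNAL", "localhost:9093");
        endpointsByListener.put("SECURE_EXTERNAL", "localhost:9094");

        // The controller asks for a listener that was never registered on this broker.
        String wanted = "PLAINTEXT";
        String endpoint = endpointsByListener.get(wanted);
        if (endpoint == null) {
            // Mirrors the getOrElse { throw ... } in Broker.getNode seen in the stack trace
            // (the real code throws BrokerEndPointNotAvailableException).
            throw new IllegalStateException(
                "End point with protocol label ListenerName(" + wanted + ") not found for broker 0");
        }
        System.out.println(endpoint);
    }
}
{code}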

> Transient failure: 
> MultipleListenersWithAdditionalJaasContextTest.testProduceConsume
> 
>
> Key: KAFKA-5184
> URL: https://issues.apache.org/jira/browse/KAFKA-5184
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Xavier Léauté
>
> https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/3574/testReport/junit/kafka.server/MultipleListenersWithAdditionalJaasContextTest/testProduceConsume/
> {code}
> Error Message
> java.lang.AssertionError: Partition [SECURE_INTERNAL,1] metadata not 
> propagated after 15000 ms
> Stacktrace
> java.lang.AssertionError: Partition [SECURE_INTERNAL,1] metadata not 
> propagated after 15000 ms
>   at kafka.utils.TestUtils$.fail(TestUtils.scala:311)
>   at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:811)
>   at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:857)
>   at kafka.utils.TestUtils$.$anonfun$createTopic$1(TestUtils.scala:254)
>   at 
> kafka.utils.TestUtils$.$anonfun$createTopic$1$adapted(TestUtils.scala:253)
>   at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:234)
>   at scala.collection.immutable.Range.foreach(Range.scala:156)
>   at scala.collection.TraversableLike.map(TraversableLike.scala:234)
>   at scala.collection.TraversableLike.map$(TraversableLike.scala:227)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at kafka.utils.TestUtils$.createTopic(TestUtils.scala:253)
>   at 
> kafka.server.MultipleListenersWithSameSecurityProtocolBaseTest.$anonfun$setUp$3(MultipleListenersWithSameSecurityProtocolBaseTest.scala:109)
>   at 
> kafka.server.MultipleListenersWithSameSecurityProtocolBaseTest.$anonfun$setUp$3$adapted(MultipleListenersWithSameSecurityProtocolBaseTest.scala:106)
>   at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:59)
>   at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:52)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>   at 
> kafka.server.MultipleListenersWithSameSecurityProtocolBaseTest.setUp(MultipleListenersWithSameSecurityProtocolBaseTest.scala:106)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runn

Re: [DISCUSS] KIP-153 : Include only client traffic in BytesOutPerSec metric

2017-05-06 Thread Edoardo Comar
Thanks for the KIP, Jun
We're constantly reminded of this inconsistency when we look at the 
traffic on the dashboards!
--
Edoardo Comar
IBM MessageHub
eco...@uk.ibm.com
IBM UK Ltd, Hursley Park, SO21 2JN

IBM United Kingdom Limited Registered in England and Wales with number 
741598 Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 
3AU



From:   Jun Rao 
To: "dev@kafka.apache.org" 
Date:   05/05/2017 18:11
Subject:[DISCUSS] KIP-153 : Include only client traffic in 
BytesOutPerSec metric



Hi, Everyone,

We created "KIP-153 : Include only client traffic in BytesOutPerSec 
metric".

https://cwiki.apache.org/confluence/display/KAFKA/KIP-153+%3A+Include+only+client+traffic+in+BytesOutPerSec+metric


Please take a look and provide your feedback.

Thanks,

Jun
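For anyone who wants to watch the metric being discussed, a minimal JMX sketch for reading the broker-level meter (it assumes the broker exposes JMX on localhost:9999; the attribute name is the usual Yammer meter rate attribute):

{code}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class BytesOutPerSecSketch {
    public static void main(String[] args) throws Exception {
        // Assumption: the broker was started with JMX enabled on localhost:9999.
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection server = connector.getMBeanServerConnection();
            ObjectName bytesOut = new ObjectName("kafka.server:type=BrokerTopicMetrics,name=BytesOutPerSec");
            // Today this meter includes replication traffic; KIP-153 proposes restricting it to client fetches.
            Object rate = server.getAttribute(bytesOut, "OneMinuteRate");
            System.out.println("BytesOutPerSec (1-min rate): " + rate);
        } finally {
            connector.close();
        }
    }
}
{code}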





Re: [VOTE] KIP-144: Exponential backoff for broker reconnect attempts

2017-05-06 Thread Edoardo Comar
+1 (non binding)
thanks
--
Edoardo Comar
IBM MessageHub
eco...@uk.ibm.com
IBM UK Ltd, Hursley Park, SO21 2JN

IBM United Kingdom Limited Registered in England and Wales with number 
741598 Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 
3AU



From:   Jay Kreps 
To: dev@kafka.apache.org
Date:   05/05/2017 23:19
Subject:Re: [VOTE] KIP-144: Exponential backoff for broker 
reconnect attempts



+1
On Fri, May 5, 2017 at 7:29 PM Sriram Subramanian  
wrote:

> +1
>
> On Fri, May 5, 2017 at 6:04 PM, Gwen Shapira  wrote:
>
> > +1
> >
> > On Fri, May 5, 2017 at 3:32 PM, Ismael Juma  wrote:
> >
> > > Hi all,
> > >
> > > Given the simple and non controversial nature of the KIP, I would 
like
> to
> > > start the voting process for KIP-144: Exponential backoff for broker
> > > reconnect attempts:
> > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 144%3A+Exponential+
> > > backoff+for+broker+reconnect+attempts
> > >
> > > The vote will run for a minimum of 72 hours.
> > >
> > > Thanks,
> > > Ismael
> > >
> >
> >
> >
> > --
> > *Gwen Shapira*
> > Product Manager | Confluent
> > 650.450.2760 | @gwenshap
> > Follow us: Twitter  | blog
> > 
> >
>



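For reference, a minimal client-side sketch of the settings involved in this KIP (the upper-bound config name is taken from the KIP page and may still change before release; the broker address is an assumption):

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.serialization.StringSerializer;

public class ReconnectBackoffSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");          // assumption: a local broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        props.put("reconnect.backoff.ms", "50");        // existing config: initial backoff
        props.put("reconnect.backoff.max.ms", "10000"); // proposed cap; name per the KIP page

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // With the KIP, repeated connection failures back off exponentially
            // from 50 ms up to 10 s (with jitter) instead of retrying every 50 ms.
        }
    }
}
{code}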


[jira] [Updated] (KAFKA-5099) Replica Deletion Regression from KIP-101

2017-05-06 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-5099:
---
Status: Patch Available  (was: Open)

> Replica Deletion Regression from KIP-101
> 
>
> Key: KAFKA-5099
> URL: https://issues.apache.org/jira/browse/KAFKA-5099
> Project: Kafka
>  Issue Type: Bug
>Reporter: Onur Karaman
>Assignee: Onur Karaman
>Priority: Blocker
> Fix For: 0.11.0.0
>
>
> It appears that replica deletion regressed from KIP-101. Replica deletion 
> happens when a broker receives a StopReplicaRequest with delete=true. Ever 
> since KAFKA-1911, replica deletion has been async, meaning the broker 
> responds with a StopReplicaResponse simply after marking the replica 
> directory as staged for deletion. This marking happens by moving a data log 
> directory and its contents such as /tmp/kafka-logs1/t1-0 to a marked 
> directory like /tmp/kafka-logs1/t1-0.8c9c4c0c61c44cc59ebeb00075a2a07f-delete, 
> acting as a soft-delete. A scheduled thread later actually deletes the data. 
> It appears that the regression occurs while the scheduled thread is actually 
> trying to delete the data, which means the controller considers operations 
> such as partition reassignment and topic deletion complete. But if you look 
> at the log4j logs and data logs, you'll find that the soft-deleted data logs 
> haven't actually been deleted and won't get deleted. It seems that restarting 
> the broker does allow the soft-deleted directories to get deleted.
> Here's the setup:
> {code}
> > ./bin/zookeeper-server-start.sh config/zookeeper.properties
> > export LOG_DIR=logs0 && ./bin/kafka-server-start.sh 
> > config/server0.properties
> > export LOG_DIR=logs1 && ./bin/kafka-server-start.sh 
> > config/server1.properties
> > ./bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic t0 
> > --replica-assignment 1:0
> > ./bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic t1 
> > --replica-assignment 1:0
> > ./bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic t0
> > cat p.txt
> {"partitions":
>  [
>   {"topic": "t1", "partition": 0, "replicas": [0] }
>  ],
> "version":1
> }
> > ./bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 
> > --reassignment-json-file p.txt --execute
> {code}
> Here are sample logs:
> {code}
> [2017-04-20 17:46:54,801] INFO [ReplicaFetcherManager on broker 1] Removed 
> fetcher for partitions t0-0 (kafka.server.ReplicaFetcherManager)
> [2017-04-20 17:46:54,814] INFO Log for partition t0-0 is renamed to 
> /tmp/kafka-logs1/t0-0.bbc8fa126e3e4ff787f6b68d158ab771-delete and is 
> scheduled for deletion (kafka.log.LogManager)
> [2017-04-20 17:47:27,585] INFO Deleting index 
> /tmp/kafka-logs1/t0-0.bbc8fa126e3e4ff787f6b68d158ab771-delete/.index
>  (kafka.log.OffsetIndex)
> [2017-04-20 17:47:27,586] INFO Deleting index 
> /tmp/kafka-logs1/t0-0/.timeindex (kafka.log.TimeIndex)
> [2017-04-20 17:47:27,587] ERROR Exception in deleting 
> Log(/tmp/kafka-logs1/t0-0.bbc8fa126e3e4ff787f6b68d158ab771-delete). Moving it 
> to the end of the queue. (kafka.log.LogManager)
> java.io.FileNotFoundException: 
> /tmp/kafka-logs1/t0-0/leader-epoch-checkpoint.tmp (No such file or directory)
>   at java.io.FileOutputStream.open0(Native Method)
>   at java.io.FileOutputStream.open(FileOutputStream.java:270)
>   at java.io.FileOutputStream.(FileOutputStream.java:213)
>   at java.io.FileOutputStream.(FileOutputStream.java:162)
>   at kafka.server.checkpoints.CheckpointFile.write(CheckpointFile.scala:41)
>   at 
> kafka.server.checkpoints.LeaderEpochCheckpointFile.write(LeaderEpochCheckpointFile.scala:61)
>   at 
> kafka.server.epoch.LeaderEpochFileCache.kafka$server$epoch$LeaderEpochFileCache$$flush(LeaderEpochFileCache.scala:178)
>   at 
> kafka.server.epoch.LeaderEpochFileCache$$anonfun$clear$1.apply$mcV$sp(LeaderEpochFileCache.scala:161)
>   at 
> kafka.server.epoch.LeaderEpochFileCache$$anonfun$clear$1.apply(LeaderEpochFileCache.scala:159)
>   at 
> kafka.server.epoch.LeaderEpochFileCache$$anonfun$clear$1.apply(LeaderEpochFileCache.scala:159)
>   at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:213)
>   at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:221)
>   at 
> kafka.server.epoch.LeaderEpochFileCache.clear(LeaderEpochFileCache.scala:159)
>   at kafka.log.Log.delete(Log.scala:1051)
>   at 
> kafka.log.LogManager.kafka$log$LogManager$$deleteLogs(LogManager.scala:442)
>   at 
> kafka.log.LogManager$$anonfun$startup$5.apply$mcV$sp(LogManager.scala:241)
>   at 
> kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110)
>   at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:57)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.r
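The stack trace above suggests that the leader-epoch checkpoint writer still holds the pre-rename path (t0-0) after the directory has been soft-deleted (renamed with a -delete suffix). A stand-alone sketch of that pitfall, not the broker code itself:

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class StalePathSketch {
    public static void main(String[] args) throws IOException {
        Path logDir = Files.createTempDirectory("t0-0");                    // stands in for /tmp/kafka-logs1/t0-0
        Path checkpoint = logDir.resolve("leader-epoch-checkpoint.tmp");    // path captured before the rename

        // Soft delete: rename the whole directory, as LogManager does with a "-delete" suffix.
        Path renamed = logDir.resolveSibling(logDir.getFileName() + ".some-uuid-delete");
        Files.move(logDir, renamed);

        // A later flush that still uses the old path fails the same way the broker does.
        try {
            Files.write(checkpoint, "0 0".getBytes());
        } catch (IOException e) {
            System.out.println("Write failed against the stale pre-rename path: " + e);
        }
    }
}
{code}

If that reading is right, the scheduled deletion keeps failing and requeueing, which would match the "Moving it to the end of the queue" errors and the observation that only a broker restart clears the directories.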

Re: [DISCUSS] KIP-133: List and Alter Configs Admin APIs

2017-05-06 Thread Ismael Juma
Hi James,

Yes, that's right, it will return all config values. For topic configs,
that means falling back to the respective broker config value, which could
also be a default. If we fall back to the broker config (whether it's a
default or not), is_default will be true. Does this make sense? And do you
have any concerns related to this?

Ismael
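As an illustration of those semantics (all names here are hypothetical, not the proposed wire format or client API): a topic config resolves to the topic-level override if one exists, otherwise to the broker's value, and is_default is true whenever the returned value did not come from a topic-level override.

{code}
import java.util.HashMap;
import java.util.Map;

public class ConfigFallbackSketch {
    // Hypothetical holder, not the KIP's response format.
    static final class ResolvedConfig {
        final String value;
        final boolean isDefault;
        ResolvedConfig(String value, boolean isDefault) { this.value = value; this.isDefault = isDefault; }
    }

    static ResolvedConfig resolveTopicConfig(String key,
                                             Map<String, String> topicOverrides,
                                             Map<String, String> brokerConfig) {
        if (topicOverrides.containsKey(key)) {
            return new ResolvedConfig(topicOverrides.get(key), false);  // explicit topic override
        }
        // Fall back to the broker value (which may itself be a broker default); from the
        // topic's point of view this is reported with is_default = true.
        return new ResolvedConfig(brokerConfig.get(key), true);
    }

    public static void main(String[] args) {
        Map<String, String> topicOverrides = new HashMap<>();
        topicOverrides.put("retention.ms", "86400000");
        Map<String, String> brokerConfig = new HashMap<>();
        brokerConfig.put("retention.ms", "604800000");
        brokerConfig.put("cleanup.policy", "delete");

        System.out.println(resolveTopicConfig("retention.ms", topicOverrides, brokerConfig).isDefault);   // false
        System.out.println(resolveTopicConfig("cleanup.policy", topicOverrides, brokerConfig).isDefault); // true
    }
}
{code}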

On Sat, May 6, 2017 at 6:19 AM, James Cheng  wrote:

> Hi Ismael,
>
> Thanks for the KIP.
>
> I see that in the ListConfigs Response protocol, that configs have an
> is_default field. Does that mean that it will include *all* config values,
> instead of just overridden ones?
>
> As an example, kafka-config.sh --describe on a topic will, right now, only
> show overridden configs. With ListConfigs, will it show all default configs
> for the topic, which includes the configs that were inherited from the
> broker configs (which themselves, might also be defaults)?
>
> Thanks,
> -James
>
> > On May 4, 2017, at 7:32 PM, Ismael Juma  wrote:
> >
> > Hi all,
> >
> > We've posted "KIP-133: List and Alter Configs Admin APIs" for discussion:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 133%3A+List+and+Alter+Configs+Admin+APIs
> >
> > This completes the first batch of AdminClient APIs so that topic, config
> > and ACL management is supported.
> >
> > Please take a look. Your feedback is appreciated.
> >
> > Thanks,
> > Ismael
>
>


[VOTE] KIP-138: Change punctuate semantics

2017-05-06 Thread Michal Borowiecki

Hi all,

Given I'm not seeing any contentious issues remaining on the discussion 
thread, I'd like to initiate the vote for:


KIP-138: Change punctuate semantics

https://cwiki.apache.org/confluence/display/KAFKA/KIP-138%3A+Change+punctuate+semantics


Thanks,
Michał
--
Signature
 Michal Borowiecki
Senior Software Engineer L4
T:  +44 208 742 1600


+44 203 249 8448



E:  michal.borowie...@openbet.com
W:  www.openbet.com 


OpenBet Ltd

Chiswick Park Building 9

566 Chiswick High Rd

London

W4 5XT

UK








Re: [VOTE] KIP-144: Exponential backoff for broker reconnect attempts

2017-05-06 Thread Dana Powers
+1 !

On May 6, 2017 4:49 AM, "Edoardo Comar"  wrote:

> +1 (non binding)
> thanks
> --
> Edoardo Comar
> IBM MessageHub
> eco...@uk.ibm.com
> IBM UK Ltd, Hursley Park, SO21 2JN
>
> IBM United Kingdom Limited Registered in England and Wales with number
> 741598 Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6
> 3AU
>
>
>
> From:   Jay Kreps 
> To: dev@kafka.apache.org
> Date:   05/05/2017 23:19
> Subject:Re: [VOTE] KIP-144: Exponential backoff for broker
> reconnect attempts
>
>
>
> +1
> On Fri, May 5, 2017 at 7:29 PM Sriram Subramanian 
> wrote:
>
> > +1
> >
> > On Fri, May 5, 2017 at 6:04 PM, Gwen Shapira  wrote:
> >
> > > +1
> > >
> > > On Fri, May 5, 2017 at 3:32 PM, Ismael Juma  wrote:
> > >
> > > > Hi all,
> > > >
> > > > Given the simple and non controversial nature of the KIP, I would
> like
> > to
> > > > start the voting process for KIP-144: Exponential backoff for broker
> > > > reconnect attempts:
> > > >
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 144%3A+Exponential+
> > > > backoff+for+broker+reconnect+attempts
> > > >
> > > > The vote will run for a minimum of 72 hours.
> > > >
> > > > Thanks,
> > > > Ismael
> > > >
> > >
> > >
> > >
> > > --
> > > *Gwen Shapira*
> > > Product Manager | Confluent
> > > 650.450.2760 | @gwenshap
> > > Follow us: Twitter  | blog
> > > 
> > >
> >
>
>
>
>


[jira] [Updated] (KAFKA-3353) Remove deprecated producer configs.

2017-05-06 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3353:
---
Assignee: Ismael Juma  (was: Ashish Singh)
  Status: Patch Available  (was: Open)

> Remove deprecated producer configs.
> ---
>
> Key: KAFKA-3353
> URL: https://issues.apache.org/jira/browse/KAFKA-3353
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.2.0
>Reporter: Ashish Singh
>Assignee: Ismael Juma
> Fix For: 0.11.0.0
>
>
> The following producer configs were deprecated in 0.9, so it would be a good idea 
> to remove them in 0.11. Removing in 0.10 was not an option, as 0.10 was a short 
> release and direct upgrades from 0.8 to 0.10 are supported.
> * block.on.buffer.full
> * metadata.fetch.timeout.ms
> * timeout.ms
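For anyone still setting the removed configs, a minimal sketch of a producer using what I understand to be the replacements (max.block.ms for block.on.buffer.full and metadata.fetch.timeout.ms, request.timeout.ms for timeout.ms; double-check against the upgrade notes):

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class NewProducerTimeoutsSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption: a local broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // Replacements for the removed configs:
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "60000");       // was block.on.buffer.full / metadata.fetch.timeout.ms
        props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, "30000"); // was timeout.ms

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // send records as usual
        }
    }
}
{code}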



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-3353) Remove deprecated producer configs.

2017-05-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15999519#comment-15999519
 ] 

ASF GitHub Bot commented on KAFKA-3353:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/2987

KAFKA-3353: Remove deprecated producer configs

These configs have been deprecated since 0.9.0.0:
block.on.buffer.full, metadata.fetch.timeout.ms and timeout.ms

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-3353-remove-deprecated-producer-configs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2987.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2987


commit 9beaaeeb5c791a807cbdac44d58ebfd44711
Author: Ismael Juma 
Date:   2017-05-06T18:10:48Z

KAFKA-3353; Remove deprecated producer configs

These configs have been deprecated since 0.9.0.0:
block.on.buffer.full, metadata.fetch.timeout.ms and timeout.ms




> Remove deprecated producer configs.
> ---
>
> Key: KAFKA-3353
> URL: https://issues.apache.org/jira/browse/KAFKA-3353
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.2.0
>Reporter: Ashish Singh
>Assignee: Ismael Juma
> Fix For: 0.11.0.0
>
>
> The following producer configs were deprecated in 0.9, so it would be a good idea 
> to remove them in 0.11. Removing in 0.10 was not an option, as 0.10 was a short 
> release and direct upgrades from 0.8 to 0.10 are supported.
> * block.on.buffer.full
> * metadata.fetch.timeout.ms
> * timeout.ms



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #2987: KAFKA-3353: Remove deprecated producer configs

2017-05-06 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/2987

KAFKA-3353: Remove deprecated producer configs

These configs have been deprecated since 0.9.0.0:
block.on.buffer.full, metadata.fetch.timeout.ms and timeout.ms

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-3353-remove-deprecated-producer-configs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2987.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2987


commit 9beaaeeb5c791a807cbdac44d58ebfd44711
Author: Ismael Juma 
Date:   2017-05-06T18:10:48Z

KAFKA-3353; Remove deprecated producer configs

These configs have been deprecated since 0.9.0.0:
block.on.buffer.full, metadata.fetch.timeout.ms and timeout.ms




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-5121) Implement transaction index for KIP-98

2017-05-06 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-5121.

Resolution: Fixed

Issue resolved by pull request 2910
[https://github.com/apache/kafka/pull/2910]

> Implement transaction index for KIP-98
> --
>
> Key: KAFKA-5121
> URL: https://issues.apache.org/jira/browse/KAFKA-5121
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.11.0.0
>
>
> As documented in the KIP-98 proposal, the broker will maintain an index 
> containing all of the aborted transactions for each partition. This index is 
> used to respond to fetches with READ_COMMITTED isolation. This requires the 
> broker maintain the last stable offset (LSO).
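As a consumer-side sketch of what the index enables (assuming the isolation.level config described in KIP-98; names may still shift before release, and the broker address and topic are assumptions):

{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ReadCommittedSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption: a local broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "read-committed-sketch");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // With READ_COMMITTED, fetches are served only up to the last stable offset (LSO),
        // and records from aborted transactions are filtered out using the transaction index.
        props.put("isolation.level", "read_committed");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("transactional-topic"));
            ConsumerRecords<String, String> records = consumer.poll(1000);
            System.out.println("committed records: " + records.count());
        }
    }
}
{code}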



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #2910: KAFKA-5121: Implement transaction index for KIP-98

2017-05-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2910


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-5121) Implement transaction index for KIP-98

2017-05-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15999524#comment-15999524
 ] 

ASF GitHub Bot commented on KAFKA-5121:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2910


> Implement transaction index for KIP-98
> --
>
> Key: KAFKA-5121
> URL: https://issues.apache.org/jira/browse/KAFKA-5121
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.11.0.0
>
>
> As documented in the KIP-98 proposal, the broker will maintain an index 
> containing all of the aborted transactions for each partition. This index is 
> used to respond to fetches with READ_COMMITTED isolation. This requires the 
> broker maintain the last stable offset (LSO).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Build failed in Jenkins: kafka-trunk-jdk8 #1489

2017-05-06 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-5121; Implement transaction index for KIP-98

--
[...truncated 857.66 KB...]
kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata 
STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest STARTED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopicWithCollision 
STARTED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopicWithCollision 
PASSED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokerListWithNoTopics 
STARTED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokerListWithNoTopics 
PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.PlaintextTopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.PlaintextTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.PlaintextTopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.PlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete STARTED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffs

Build failed in Jenkins: kafka-trunk-jdk7 #2155

2017-05-06 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-5121; Implement transaction index for KIP-98

--
[...truncated 1.67 MB...]

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[0] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[0] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[0] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[0] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[0] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[1] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[1] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[1] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[1] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[1] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[2] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[2] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[2] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[2] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[2] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[3] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[3] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[3] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[3] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithM

Re: [DISCUSS] KIP-147: Add missing type parameters to StateStoreSupplier factories and KGroupedStream/Table methods

2017-05-06 Thread Michal Borowiecki

Hi Matthias,

Agreed. I tried your proposal and indeed it would work.

However, I think to maintain full backward compatibility we would also 
need to deprecate Stores.create() and leave it unchanged, while 
providing a new method that returns the more strongly typed Factories.


( This is because PersistentWindowFactory and PersistentSessionFactory 
cannot extend the existing PersistentKeyValueFactory interface, since 
their build() methods will be returning 
TypedStateStoreSupplier<WindowStore<K, V>> and 
TypedStateStoreSupplier<SessionStore<K, V>> respectively, which are NOT 
subclasses of TypedStateStoreSupplier<KeyValueStore<K, V>>. I do not see 
another way around it. Admittedly, my type covariance skills are 
rudimentary. Does anyone see a better way around this? )
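To spell out why the narrowed return types cannot fit under the existing interface, here is a small stand-alone sketch of Java's generic invariance (simplified stand-in types, not the actual Streams interfaces):

{code}
import java.util.function.Supplier;

public class InvarianceSketch {
    // Simplified stand-ins, not the real Streams types.
    interface StateStore {}
    interface KeyValueStore extends StateStore {}
    interface WindowStore extends StateStore {}

    interface StoreFactory {
        Supplier<KeyValueStore> build();   // the parent fixes the supplied store type...
    }

    // ...so a window-store factory cannot implement it by narrowing the return type:
    // Supplier<WindowStore> is NOT a subtype of Supplier<KeyValueStore> (generics are
    // invariant), and WindowStore is not a KeyValueStore in the first place.
    //
    // static class BadWindowFactory implements StoreFactory {
    //     public Supplier<WindowStore> build() { ... }   // does not compile
    // }

    // What does work is making the factory itself generic in the store type it builds:
    interface TypedStoreFactory<S extends StateStore> {
        Supplier<S> build();
    }

    static class WindowFactory implements TypedStoreFactory<WindowStore> {
        @Override
        public Supplier<WindowStore> build() { return () -> new WindowStore() {}; }
    }

    public static void main(String[] args) {
        StateStore store = new WindowFactory().build().get();
        System.out.println(store.getClass());
    }
}
{code}

This is in line with the point above that the new window/session factories cannot simply extend PersistentKeyValueFactory.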


Since create() takes only the store name as argument, and I don't see 
what we could overload it with, the new method would need to have a 
different name.


Alternatively, since create(String) is the only method in Stores, we 
could deprecate the entire class and provide a new one. That would be my 
preference. Any ideas what to call it?



All comments and suggestions appreciated.


Cheers,

Michał


On 04/05/17 21:48, Matthias J. Sax wrote:

I had a quick look into this.

With regard to backward compatibility, I think it would be required to
introduce a new type `TypedStateStoreSupplier` (that extends
`StateStoreSupplier`) and to overload all methods that take a
`StateStoreSupplier` that accept the new type instead of the current one.

This would allow `.build` to return a `TypedStateStoreSupplier` and
thus, would not break any code. At least if I did not miss anything with
regard to some magic of type inference using generics (I am not an
expert in this field).


-Matthias

On 5/4/17 11:32 AM, Matthias J. Sax wrote:

Did not have time to have a look. But backward compatibility is a must
from my point of view.

-Matthias


On 5/4/17 12:56 AM, Michal Borowiecki wrote:

Hello,

I've updated the KIP with missing information.

I would especially appreciate some comments on the compatibility aspects
of this as the proposed change is not fully backwards-compatible.

In the absence of comments I shall call for a vote in the next few days.

Thanks,

Michal


On 30/04/17 23:11, Michal Borowiecki wrote:

Hi community!

I have just drafted KIP-147: Add missing type parameters to
StateStoreSupplier factories and KGroupedStream/Table methods


Please let me know if this a step in the right direction.

All comments welcome.

Thanks,
Michal
--
Signature
 Michal Borowiecki
Senior Software Engineer L4
T:  +44 208 742 1600


+44 203 249 8448



E:  michal.borowie...@openbet.com
W:  www.openbet.com 


OpenBet Ltd

Chiswick Park Building 9

566 Chiswick High Rd

London

W4 5XT

UK






--
Signature
 Michal Borowiecki
Senior Software Engineer L4
T:  +44 208 742 1600


+44 203 249 8448



E:  michal.borowie...@openbet.com
W:  www.openbet.com 


OpenBet Ltd

Chiswick Park Building 9

566 Chiswick High Rd

London

W4 5XT

UK







--
Signature
 Michal Borowiecki
Senior Software Engineer L4
T:  +44 208 742 1600


+44 203 249 8448



E:  michal.borowi

Re: [VOTE] KIP-138: Change punctuate semantics

2017-05-06 Thread Matthias J. Sax
+1

Thanks a lot for this KIP!

-Matthias

On 5/6/17 10:18 AM, Michal Borowiecki wrote:
> Hi all,
> 
> Given I'm not seeing any contentious issues remaining on the discussion
> thread, I'd like to initiate the vote for:
> 
> KIP-138: Change punctuate semantics
> 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-138%3A+Change+punctuate+semantics
> 
> 
> Thanks,
> Michał
> -- 
> Signature
>  Michal Borowiecki
> Senior Software Engineer L4
>   T:  +44 208 742 1600
> 
>   
>   +44 203 249 8448
> 
>   
>
>   E:  michal.borowie...@openbet.com
>   W:  www.openbet.com 
> 
>   
>   OpenBet Ltd
> 
>   Chiswick Park Building 9
> 
>   566 Chiswick High Rd
> 
>   London
> 
>   W4 5XT
> 
>   UK
> 
>   
> 
> 
> 



signature.asc
Description: OpenPGP digital signature


Re: [VOTE] KIP-138: Change punctuate semantics

2017-05-06 Thread Bill Bejeck
+1

Thanks,
Bill

On Sat, May 6, 2017 at 5:58 PM, Matthias J. Sax 
wrote:

> +1
>
> Thanks a lot for this KIP!
>
> -Matthias
>
> On 5/6/17 10:18 AM, Michal Borowiecki wrote:
> > Hi all,
> >
> > Given I'm not seeing any contentious issues remaining on the discussion
> > thread, I'd like to initiate the vote for:
> >
> > KIP-138: Change punctuate semantics
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 138%3A+Change+punctuate+semantics
> >
> >
> > Thanks,
> > Michał
> > --
> > Signature
> >  Michal Borowiecki
> > Senior Software Engineer L4
> >   T:  +44 208 742 1600
> >
> >
> >   +44 203 249 8448
> >
> >
> >
> >   E:  michal.borowie...@openbet.com
> >   W:  www.openbet.com 
> >
> >
> >   OpenBet Ltd
> >
> >   Chiswick Park Building 9
> >
> >   566 Chiswick High Rd
> >
> >   London
> >
> >   W4 5XT
> >
> >   UK
> >
> >
> > 
> >
> >
>
>


[GitHub] kafka pull request #2025: KAFKA-4293 - improve ByteBufferMessageSet.deepIter...

2017-05-06 Thread radai-rosenblatt
Github user radai-rosenblatt closed the pull request at:

https://github.com/apache/kafka/pull/2025


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4293) ByteBufferMessageSet.deepIterator burns CPU catching EOFExceptions

2017-05-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15999626#comment-15999626
 ] 

ASF GitHub Bot commented on KAFKA-4293:
---

Github user radai-rosenblatt closed the pull request at:

https://github.com/apache/kafka/pull/2025


> ByteBufferMessageSet.deepIterator burns CPU catching EOFExceptions
> --
>
> Key: KAFKA-4293
> URL: https://issues.apache.org/jira/browse/KAFKA-4293
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.0.1
>Reporter: radai rosenblatt
>Assignee: radai rosenblatt
>
> around line 110:
> {noformat}
> try {
> while (true)
> innerMessageAndOffsets.add(readMessageFromStream(compressed))
> } catch {
> case eofe: EOFException =>
> // we don't do anything at all here, because the finally
> // will close the compressed input stream, and we simply
> // want to return the innerMessageAndOffsets
> {noformat}
> The only indication the code has that the end of the iteration was reached is 
> by catching EOFException (which will be thrown inside 
> readMessageFromStream()).
> Profiling runs performed at LinkedIn show 10% of the total broker CPU time 
> taken up by Throwable.fillInStackTrace() because of this behaviour.
> Unfortunately, InputStream.available() cannot be relied upon (concrete example: 
> GZIPInputStream will not correctly return 0), so the fix would probably be a 
> wire format change to also encode the number of messages.
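For reference, one common mitigation for this kind of exception-driven loop termination (illustrative only; the ticket itself leans towards a wire-format change): throw a pre-allocated exception that never fills in its stack trace, so the control-flow exception costs almost nothing.

{code}
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

public class CheapEofSketch {
    // Pre-allocated, stackless EOF marker: overriding fillInStackTrace() avoids the
    // expensive stack capture that the profiles attribute the CPU cost to.
    private static final EOFException EOF = new EOFException("end of message set") {
        @Override
        public synchronized Throwable fillInStackTrace() { return this; }
    };

    static int readMessage(DataInputStream in) throws IOException {
        int b = in.read();
        if (b < 0) {
            throw EOF;   // cheap: no stack trace is recorded
        }
        return b;
    }

    public static void main(String[] args) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(new byte[] {1, 2, 3}));
        try {
            while (true) {
                System.out.println(readMessage(in));
            }
        } catch (EOFException e) {
            System.out.println("done");  // loop terminated without paying for a stack trace
        }
    }
}
{code}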



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [DISCUSS]: KIP-149: Enabling key access in ValueTransformer, ValueMapper, and ValueJoiner

2017-05-06 Thread Jeyhun Karimov
Hi,

Thanks for the comments. I extended the PR and KIP to include rich functions. I
will still have to evaluate the cost of deep copying the keys.

Cheers,
Jeyhun
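For readers following along, a small stand-alone sketch of the trade-off under discussion (the interface name here is hypothetical, not the KIP's API): handing user code the actual key lets a mutable key such as byte[] be modified in place, which would corrupt partitioning, and the defensive copy that prevents this is exactly the per-record cost being evaluated.

{code}
import java.util.Arrays;

public class KeyCopySketch {
    // Hypothetical shape of a key-aware value mapper; the real KIP-149 interface may differ.
    interface ValueMapperWithKey<K, V, VR> {
        VR apply(K readOnlyKey, V value);
    }

    public static void main(String[] args) {
        byte[] key = {1, 2, 3};

        ValueMapperWithKey<byte[], String, String> badMapper = (readOnlyKey, value) -> {
            readOnlyKey[0] = 0;   // naming conventions and "final" cannot prevent this mutation
            return value;
        };

        // Without a defensive copy, the mutation leaks back into the record's key.
        badMapper.apply(key, "v");
        System.out.println(Arrays.toString(key));                  // [0, 2, 3]

        // With a defensive copy, the caller's key is safe, at the cost of one copy per record.
        byte[] key2 = {1, 2, 3};
        badMapper.apply(Arrays.copyOf(key2, key2.length), "v");
        System.out.println(Arrays.toString(key2));                 // [1, 2, 3]
    }
}
{code}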

On Fri, May 5, 2017 at 8:02 PM Mathieu Fenniak 
wrote:

> Hey Matthias,
>
> My opinion would be that documenting the immutability of the key is the
> best approach available.  Better than requiring the key to be serializable
> (as with Jeyhun's last pass at the PR), no performance risk.
>
> It'd be different if Java had immutable type constraints of some kind. :-)
>
> Mathieu
>
>
> On Fri, May 5, 2017 at 11:31 AM, Matthias J. Sax 
> wrote:
>
> > Agreed about RichFunction. If we follow this path, it should cover
> > all(?) DSL interfaces.
> >
> > About guarding the key -- I am still not sure what to do about it...
> > Maybe it might be enough to document it (and name the key parameter like
> > `readOnlyKey` to make it very clear). Ultimately, I would prefer to
> > guard against any modification, but I have no good idea how to do this.
> > Not sure what others think about the risk of corrupted partitioning
> > (which would be a user error and we could say, well, bad luck, you got a
> > bug in your code, that's not our fault), vs deep copy with a potential
> > performance hit (that we can't quantify atm without any performance
> test).
> >
> > We do have a performance system test. Maybe it's worth for you to apply
> > the deep copy strategy and run the test. It's very basic performance
> > test only, but might give some insight. If you want to do this, look
> > into folder "tests" for general test setup, and into
> > "tests/kafaktests/benchmarks/streams" to find find the perf test.
> >
> >
> > -Matthias
> >
> > On 5/5/17 8:55 AM, Jeyhun Karimov wrote:
> > > Hi Matthias,
> > >
> > > I think extending KIP to include RichFunctions totally  makes sense.
> So,
> > >  we don't want to guard the keys because it is costly.
> > > If we introduce RichFunctions I think it should not be limited only
> the 3
> > > interfaces proposed in KIP but to wide range of interfaces.
> > > Please correct me if I am wrong.
> > >
> > > Cheers,
> > > Jeyhun
> > >
> > > On Fri, May 5, 2017 at 12:04 AM Matthias J. Sax  >
> > > wrote:
> > >
> > >> One follow up. There was this email on the user list:
> > >>
> > >>
> > >> http://search-hadoop.com/m/Kafka/uyzND17KhCaBzPSZ1?subj=
> > Shouldn+t+the+initializer+of+a+stream+aggregate+accept+the+key+
> > >>
> > >> It might make sense to include the Initializer, Adder, and Subtractor
> > >> interfaces, too.
> > >>
> > >> And we should double check if there are other interfaces we might be
> > >> missing atm.
> > >>
> > >>
> > >> -Matthias
> > >>
> > >>
> > >> On 5/4/17 1:31 PM, Matthias J. Sax wrote:
> > >>> Thanks for updating the KIP.
> > >>>
> > >>> Deep copying the key will work for sure, but I am actually a little
> > >>> bit worried about the performance impact... We might want to do some
> > >>> tests to quantify this impact.
> > >>>
> > >>>
> > >>> Btw: this reminds me of the idea of a `RichFunction` interface that
> > >>> would allow users to access record metadata (like timestamp, offset,
> > >>> partition etc.) within the DSL. This would be a similar concept. Thus,
> > >>> I am wondering if it would make sense to enlarge the scope of this KIP
> > >>> by that? WDYT?
> > >>>
> > >>>
> > >>>
> > >>> -Matthias
> > >>>
> > >>>
> > >>> On 5/3/17 2:08 AM, Jeyhun Karimov wrote:
> >  Hi Mathieu,
> > 
> >  Thanks for the feedback. I followed a similar approach and updated the
> >  PR and KIP accordingly. I tried to guard the key in Processors by
> >  sending a copy of the actual key.
> >  Because I am doing a deep copy of the object, I think memory can be a
> >  bottleneck in some use cases.
> > 
> >  Cheers,
> >  Jeyhun
> > 
> >  On Tue, May 2, 2017 at 5:10 PM Mathieu Fenniak <
> > >> mathieu.fenn...@replicon.com>
> >  wrote:
> > 
> > > Hi Jeyhun,
> > >
> > > This approach would change ValueMapper (...etc) to be classes, rather
> > > than interfaces, which is also a backwards incompatible change.  An
> > > alternative approach that would be backwards compatible would be to
> > > define new interfaces, and provide overrides where those interfaces
> > > are used.
> > >
> > > Unfortunately, making the key parameter "final" doesn't change much
> > > about guarding against key change.  It only prevents the parameter
> > > variable from being reassigned.  If the key type is a mutable object
> > > (eg. byte[]), it can still be mutated (eg. key[0] = 0).  But I'm not
> > > really sure there's much that can be done about that.
> > >
> > > Mathieu
> > >
> > >
> > > On Mon, May 1, 2017 at 5:39 PM, Jeyhun Karimov <
> je.kari...@gmail.com
> > >
> > > wrote:
> > >
> > >> Thanks for the comments.
> > >>
> > >> The concerns make sense. Although we can guard for immutable keys in
> > >> current i
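To make the two points being debated concrete, here is a minimal, self-contained Java sketch assuming a hypothetical key-aware mapper interface (the names below are illustrative, not the final KIP-149 API). It shows that marking the key parameter final does not protect a mutable key such as byte[], while handing user code a deep copy does, at the cost of an allocation per record.

```java
import java.util.Arrays;

// Illustrative sketch only: the interface and parameter names are assumptions,
// not the API adopted by KIP-149.
public class KeyAccessSketch {

    // A key-aware variant of ValueMapper: the key is passed in for reading,
    // and only the new value is returned.
    interface ValueMapperWithKey<K, V, VR> {
        VR apply(K readOnlyKey, V value);
    }

    public static void main(String[] args) {
        ValueMapperWithKey<byte[], String, String> mapper = (readOnlyKey, value) -> {
            readOnlyKey[0] = 0; // compiles fine: "final" on the caller's side cannot prevent this
            return value + ":" + readOnlyKey.length;
        };

        // "final" only stops the variable from being reassigned; the array it
        // points to can still be mutated by the user function.
        final byte[] key = {1, 2, 3};
        mapper.apply(key, "v");
        System.out.println(Arrays.toString(key)); // [0, 2, 3] -- key was corrupted

        // Guarding by deep copy: the user function mutates only the copy, so
        // the original key (and thus partitioning) stays intact.
        final byte[] guardedKey = {1, 2, 3};
        mapper.apply(Arrays.copyOf(guardedKey, guardedKey.length), "v");
        System.out.println(Arrays.toString(guardedKey)); // [1, 2, 3] -- unchanged
    }
}
```

The deep copy avoids corrupted partitioning but pays that per-record allocation, which is exactly the trade-off the thread suggests measuring with the streams benchmark.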

[GitHub] kafka pull request #2988: Added the producer record metadata to the SourceTa...

2017-05-06 Thread GeoSmith
GitHub user GeoSmith opened a pull request:

https://github.com/apache/kafka/pull/2988

Added the producer record metadata to the SourceTask commitRecord

- Added the producer's RecordMetadata object to the commitRecord method on 
the SourceTask class, so more data is provided to those who override and wish to 
hook into the commitRecord call from the producer. 
- If it's a transformation, it will return null, which is explained in the 
javadoc. 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/GeoSmith/kafka AddRecordMetaDataSourceTask

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2988.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2988


commit a852d7bf651cf60f4fbcfaed7dbf4a42d4badef7
Author: George Smith 
Date:   2017-05-07T03:42:10Z

Added the producer record metadata that is returned from the producer to 
the signature of the commitRecord method on the SourceTask

commit e025ec02040482511d066029f8d90d311e99aff7
Author: George Smith 
Date:   2017-05-07T03:53:16Z

Updated test cases for the SourceTask and the tools




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #2988: Added the producer record metadata to the SourceTa...

2017-05-06 Thread GeoSmith
Github user GeoSmith closed the pull request at:

https://github.com/apache/kafka/pull/2988




[GitHub] kafka pull request #2989: MINOR: Adding the RecordMetadata that is returned ...

2017-05-06 Thread GeoSmith
GitHub user GeoSmith opened a pull request:

https://github.com/apache/kafka/pull/2989

MINOR: Adding the RecordMetadata that is returned by the producer to the 
commitRecord method for SourceTask

**Included:**
- Added the producer's RecordMetadata object to the commitRecord method on 
the SourceTask class so more data is provided from the producer, and it allows 
anyone overriding and hooking into the commitRecord method to receive more 
information about where the record was produced to.

- If it's a transformation, it will send in null, which is explained in 
the javadoc.
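A rough sketch of how a connector task could use the proposed hook is below. The two-argument commitRecord overload is the change this PR introduces (it may not exist in the SourceTask base class of your Kafka version), and the class name and logging are purely illustrative.

```java
import java.util.List;
import java.util.Map;

import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

// Sketch of a source task that reacts to producer acknowledgements via the
// proposed commitRecord(SourceRecord, RecordMetadata) hook.
public class LoggingSourceTask extends SourceTask {

    @Override
    public String version() {
        return "0.0.1-sketch";
    }

    @Override
    public void start(Map<String, String> props) {
        // no-op for this sketch
    }

    @Override
    public List<SourceRecord> poll() throws InterruptedException {
        return null; // a real task would return records here
    }

    @Override
    public void stop() {
        // no-op for this sketch
    }

    // Proposed hook: the framework passes the RecordMetadata returned by the
    // producer, or null if the record was handled by a transformation.
    public void commitRecord(SourceRecord record, RecordMetadata metadata)
            throws InterruptedException {
        if (metadata != null) {
            System.out.printf("record acked at %s-%d@%d%n",
                    metadata.topic(), metadata.partition(), metadata.offset());
        }
    }
}
```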

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/GeoSmith/kafka AddRecordMetadataToSourceTask

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2989.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2989


commit 69d74550af36d888b66466766f0854a4ee8da792
Author: George Smith 
Date:   2017-05-07T05:14:52Z

Adding the RecordMetadata that is returned by the producer to the 
commitRecord method for SourceTask






Re: [DISCUSS] KIP 151 - Expose Connector type in REST API

2017-05-06 Thread dan
thanks for the feedback, it all sounds good. i have made the changes to the
pr and the kip.

dan

On Fri, May 5, 2017 at 9:29 AM, Konstantine Karantasis <
konstant...@confluent.io> wrote:

> Thank you for the KIP. It's a nice improvement.
>
> Two small suggestions:
>
> 1) Let's not use all caps to describe the type of the connector. "Source"
> and "Sink" seem more appropriate (but even all lower case would be better).
> 2) It's been discussed in other contexts recently, but I believe finally
> exposing a connector's version here makes more sense than anywhere else at
> the moment. There's an existing interface method to grab the version, and
> publishing it through REST is not affected by any conventions made with
> respect to versioning format (also sorting based on name and version I
> guess is a concern that can be postponed to when we support multiple
> versions of the same connector and this doesn't have to be addressed on a
> KIP anyways).
>
> Let me know what you think. I'll add comments to the PR as well.
> Thanks again.
>
> -Konstantine
>
> On Thu, May 4, 2017 at 4:20 PM, Gwen Shapira  wrote:
>
> > YES PLEASE!
> >
> > On Tue, May 2, 2017 at 1:48 PM, dan  wrote:
> >
> > > hello.
> > >
> > > in an attempt to make the connect rest endpoints more useful i'd like
> to
> > > add the Connector type (Sink/Source) in our rest endpoints to make them
> > > more self descriptive.
> > >
> > > KIP here:
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 151+Expose+Connector+type+in+REST+API
> > > initial pr: https://github.com/apache/kafka/pull/2960
> > >
> > > thanks
> > > dan
> > >
> >
> >
> >
> > --
> > *Gwen Shapira*
> > Product Manager | Confluent
> > 650.450.2760 | @gwenshap
> > Follow us: Twitter  | blog
> > 
> >
>
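To make the proposal concrete, here is a small, illustrative Java sketch of how a connector's type could be derived for the REST response; the enum and helper names below are assumptions for illustration, not the exact types added by KIP-151.

```java
import org.apache.kafka.connect.connector.Connector;
import org.apache.kafka.connect.sink.SinkConnector;
import org.apache.kafka.connect.source.SourceConnector;

// Illustrative sketch only: derives a connector's type from its class so the
// REST layer could include it alongside the existing connector info fields.
public final class ConnectorTypeSketch {

    // Lower-case values in the REST payload, per the review feedback above.
    enum ConnectorType {
        SOURCE, SINK, UNKNOWN;

        @Override
        public String toString() {
            return name().toLowerCase();
        }
    }

    static ConnectorType typeOf(Class<? extends Connector> connectorClass) {
        if (SourceConnector.class.isAssignableFrom(connectorClass)) {
            return ConnectorType.SOURCE;
        }
        if (SinkConnector.class.isAssignableFrom(connectorClass)) {
            return ConnectorType.SINK;
        }
        return ConnectorType.UNKNOWN;
    }

    public static void main(String[] args) {
        // A GET /connectors/{name} response could then include, for example:
        //   { "name": "my-connector", "type": "source", ... }
        System.out.println(typeOf(SourceConnector.class)); // prints "source"
    }
}
```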


[jira] [Commented] (KAFKA-4477) Node reduces its ISR to itself, and doesn't recover. Other nodes do not take leadership, cluster remains sick until node is restarted.

2017-05-06 Thread Arpan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15999690#comment-15999690
 ] 

Arpan commented on KAFKA-4477:
--

I have opened a separate issue, KAFKA-5153.

> Node reduces its ISR to itself, and doesn't recover. Other nodes do not take 
> leadership, cluster remains sick until node is restarted.
> --
>
> Key: KAFKA-4477
> URL: https://issues.apache.org/jira/browse/KAFKA-4477
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.1.0
> Environment: RHEL7
> java version "1.8.0_66"
> Java(TM) SE Runtime Environment (build 1.8.0_66-b17)
> Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)
>Reporter: Michael Andre Pearce (IG)
>Assignee: Apurva Mehta
>Priority: Critical
>  Labels: reliability
> Fix For: 0.10.1.1
>
> Attachments: 2016_12_15.zip, 72_Server_Thread_Dump.txt, 
> 73_Server_Thread_Dump.txt, 74_Server_Thread_Dump, issue_node_1001_ext.log, 
> issue_node_1001.log, issue_node_1002_ext.log, issue_node_1002.log, 
> issue_node_1003_ext.log, issue_node_1003.log, kafka.jstack, 
> server_1_72server.log, server_2_73_server.log, server_3_74Server.log, 
> state_change_controller.tar.gz
>
>
> We have encountered a critical issue that has recurred in different 
> physical environments. We haven't worked out what is going on. We do, though, 
> have a nasty workaround to keep the service alive. 
> We have not had this issue on clusters still running 0.9.0.1.
> We have noticed a node randomly shrinking the ISRs for the partitions it owns 
> down to itself; moments later we see other nodes having disconnects, 
> followed finally by app issues, where producing to these partitions is 
> blocked.
> It seems that only restarting the Kafka Java process resolves the 
> issues.
> We have had this occur multiple times, and from all network and machine 
> monitoring, the machine never left the network or had any other glitches.
> Below are seen logs from the issue.
> Node 7:
> [2016-12-01 07:01:28,112] INFO Partition 
> [com_ig_trade_v1_position_event--demo--compacted,10] on broker 7: Shrinking 
> ISR for partition [com_ig_trade_v1_position_event--demo--compacted,10] from 
> 1,2,7 to 7 (kafka.cluster.Partition)
> All other nodes:
> [2016-12-01 07:01:38,172] WARN [ReplicaFetcherThread-0-7], Error in fetch 
> kafka.server.ReplicaFetcherThread$FetchRequest@5aae6d42 
> (kafka.server.ReplicaFetcherThread)
> java.io.IOException: Connection to 7 was disconnected before the response was 
> read
> All clients:
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.NetworkException: The server disconnected 
> before a response was received.
> After this occurs, we then suddenly see an increasing number of close_waits 
> and open file descriptors on the sick machine.
> As a workaround to keep the service up, we are currently putting in an 
> automated process that tails the logs and matches the regex below; where 
> new_partitions is just the node itself, we restart the node. 
> "\[(?P.+)\] INFO Partition \[.*\] on broker .* Shrinking ISR for 
> partition \[.*\] from (?P.+) to (?P.+) 
> \(kafka.cluster.Partition\)"
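For illustration only, a minimal Java sketch of the watch-and-restart workaround described above could look like the following; the log path, broker id, and restart command are assumptions, not part of the reporter's actual tooling.

{code:java}
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of the workaround: tail the broker log, and when the ISR for any
// partition shrinks to just this broker, restart the Kafka process.
public class IsrShrinkWatcher {

    private static final Pattern SHRINK_LINE = Pattern.compile(
            "\\[(?<date>.+)\\] INFO Partition \\[.*\\] on broker .* Shrinking ISR for "
                    + "partition \\[.*\\] from (?<oldIsr>.+) to (?<newIsr>.+) "
                    + "\\(kafka.cluster.Partition\\)");

    public static void main(String[] args) throws IOException, InterruptedException {
        String brokerId = "7"; // assumed id of the local broker
        try (BufferedReader log = Files.newBufferedReader(
                Paths.get("/var/log/kafka/server.log"), StandardCharsets.UTF_8)) {
            while (true) {
                String line = log.readLine();
                if (line == null) {
                    Thread.sleep(1000); // wait for new log lines (tail -F style)
                    continue;
                }
                Matcher m = SHRINK_LINE.matcher(line);
                // Restart only when the new ISR has collapsed to this broker alone.
                if (m.matches() && m.group("newIsr").trim().equals(brokerId)) {
                    System.err.println("ISR shrank to self at " + m.group("date")
                            + "; restarting broker");
                    new ProcessBuilder("systemctl", "restart", "kafka")
                            .inheritIO().start().waitFor();
                    break;
                }
            }
        }
    }
}
{code}

In a real deployment the log location, broker id, and restart command would come from configuration; the sketch only shows the detection logic the reporter describes.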





How do I generate/view the site docs, when developing?

2017-05-06 Thread James Cheng
Hi,

I'm working on some site docs changes, as part of 
https://issues.apache.org/jira/browse/KAFKA-3480

How do I build the docs, so that I can view them locally and make sure they are 
right? It used to be that I could "./gradlew siteDocsTar" and then view the 
created website. But it appears that with the recent redesign, that the local 
website now refers to javascript and server-side-include files that are not 
part of the github repo.

Here is what I see when I open the html file in web browser
http://imgur.com/a/njsU8

Here is what I see when I use a web server that supports Server Side Includes, 
and load the local "website":
http://imgur.com/a/ce5Gc

Does anyone have any guidance?

Thanks,
-James