[jira] [Commented] (KAFKA-8030) Flaky Test TopicCommandWithAdminClientTest#testDescribeUnderMinIsrPartitionsMixed

2019-03-11 Thread Viktor Somogyi-Vass (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16789241#comment-16789241
 ] 

Viktor Somogyi-Vass commented on KAFKA-8030:


I'll take a look

> Flaky Test 
> TopicCommandWithAdminClientTest#testDescribeUnderMinIsrPartitionsMixed
> -
>
> Key: KAFKA-8030
> URL: https://issues.apache.org/jira/browse/KAFKA-8030
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/2830/testReport/junit/kafka.admin/TopicCommandWithAdminClientTest/testDescribeUnderMinIsrPartitionsMixed/]
> {quote}java.lang.AssertionError at org.junit.Assert.fail(Assert.java:87) at 
> org.junit.Assert.assertTrue(Assert.java:42) at 
> org.junit.Assert.assertTrue(Assert.java:53) at 
> kafka.admin.TopicCommandWithAdminClientTest.testDescribeUnderMinIsrPartitionsMixed(TopicCommandWithAdminClientTest.scala:602){quote}
> STDERR
> {quote}Option "[replica-assignment]" can't be used with option 
> "[partitions]"{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-8030) Flaky Test TopicCommandWithAdminClientTest#testDescribeUnderMinIsrPartitionsMixed

2019-03-11 Thread Viktor Somogyi-Vass (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viktor Somogyi-Vass reassigned KAFKA-8030:
--

Assignee: Viktor Somogyi-Vass

> Flaky Test 
> TopicCommandWithAdminClientTest#testDescribeUnderMinIsrPartitionsMixed
> -
>
> Key: KAFKA-8030
> URL: https://issues.apache.org/jira/browse/KAFKA-8030
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Viktor Somogyi-Vass
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/2830/testReport/junit/kafka.admin/TopicCommandWithAdminClientTest/testDescribeUnderMinIsrPartitionsMixed/]
> {quote}java.lang.AssertionError at org.junit.Assert.fail(Assert.java:87) at 
> org.junit.Assert.assertTrue(Assert.java:42) at 
> org.junit.Assert.assertTrue(Assert.java:53) at 
> kafka.admin.TopicCommandWithAdminClientTest.testDescribeUnderMinIsrPartitionsMixed(TopicCommandWithAdminClientTest.scala:602){quote}
> STDERR
> {quote}Option "[replica-assignment]" can't be used with option 
> "[partitions]"{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7925) Constant 100% cpu usage by all kafka brokers

2019-03-11 Thread Abhi (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16789251#comment-16789251
 ] 

Abhi commented on KAFKA-7925:
-

[~rsivaram] [~ijuma] This issue is a blocker for our deployment. Can this be 
included in the next v2.2.0 release?

> Constant 100% cpu usage by all kafka brokers
> 
>
> Key: KAFKA-7925
> URL: https://issues.apache.org/jira/browse/KAFKA-7925
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.1.0, 2.1.1
> Environment: Java 11, Kafka v2.1.0, Kafka v2.1.1
>Reporter: Abhi
>Priority: Critical
> Attachments: jira-server.log-1, jira-server.log-2, jira-server.log-3, 
> jira-server.log-4, jira-server.log-5, jira-server.log-6, 
> jira_prod.producer.log, threadump20190212.txt
>
>
> Hi,
> I am seeing constant 100% cpu usage on all brokers in our kafka cluster even 
> without any clients connected to any broker.
> This is a bug that we have seen multiple times in our kafka setup that is not 
> yet open to clients. It is becoming a blocker for our deployment now.
> I am seeing a lot of connections to other brokers in CLOSE_WAIT state (see 
> below). In thread usage, I am seeing these threads 
> 'kafka-network-thread-6-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0,kafka-network-thread-6-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1,kafka-network-thread-6-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-2'
>  taking up more than 90% of the cpu time in a 60s interval.
> I have attached a thread dump of one of the brokers in the cluster.
> *Java version:*
> openjdk 11.0.2 2019-01-15
> OpenJDK Runtime Environment 18.9 (build 11.0.2+9)
> OpenJDK 64-Bit Server VM 18.9 (build 11.0.2+9, mixed mode)
> *Kafka version:* v2.1.0
>  
> *connections:*
> java 144319 kafkagod 88u IPv4 3063266 0t0 TCP *:35395 (LISTEN)
> java 144319 kafkagod 89u IPv4 3063267 0t0 TCP *:9144 (LISTEN)
> java 144319 kafkagod 104u IPv4 3064219 0t0 TCP 
> mwkafka-prod-02.tbd:47292->mwkafka-zk-prod-05.tbd:2181 (ESTABLISHED)
> java 144319 kafkagod 2003u IPv4 3055115 0t0 TCP *:9092 (LISTEN)
> java 144319 kafkagod 2013u IPv4 7220110 0t0 TCP 
> mwkafka-prod-02.tbd:60724->mwkafka-zk-prod-04.dr:2181 (ESTABLISHED)
> java 144319 kafkagod 2020u IPv4 30012904 0t0 TCP 
> mwkafka-prod-02.tbd:38988->mwkafka-prod-02.nyc:9092 (ESTABLISHED)
> java 144319 kafkagod 2021u IPv4 30012961 0t0 TCP 
> mwkafka-prod-02.tbd:58420->mwkafka-prod-01.nyc:9092 (ESTABLISHED)
> java 144319 kafkagod 2027u IPv4 30015723 0t0 TCP 
> mwkafka-prod-02.tbd:58398->mwkafka-prod-01.nyc:9092 (ESTABLISHED)
> java 144319 kafkagod 2028u IPv4 30015630 0t0 TCP 
> mwkafka-prod-02.tbd:36248->mwkafka-prod-02.dr:9092 (ESTABLISHED)
> java 144319 kafkagod 2030u IPv4 30015726 0t0 TCP 
> mwkafka-prod-02.tbd:39012->mwkafka-prod-02.nyc:9092 (ESTABLISHED)
> java 144319 kafkagod 2031u IPv4 30013619 0t0 TCP 
> mwkafka-prod-02.tbd:38986->mwkafka-prod-02.nyc:9092 (ESTABLISHED)
> java 144319 kafkagod 2032u IPv4 30015604 0t0 TCP 
> mwkafka-prod-02.tbd:36246->mwkafka-prod-02.dr:9092 (ESTABLISHED)
> java 144319 kafkagod 2033u IPv4 30012981 0t0 TCP 
> mwkafka-prod-02.tbd:36924->mwkafka-prod-01.dr:9092 (ESTABLISHED)
> java 144319 kafkagod 2034u IPv4 30012967 0t0 TCP 
> mwkafka-prod-02.tbd:39036->mwkafka-prod-02.nyc:9092 (ESTABLISHED)
> java 144319 kafkagod 2035u IPv4 30012898 0t0 TCP 
> mwkafka-prod-02.tbd:36866->mwkafka-prod-01.dr:9092 (FIN_WAIT2)
> java 144319 kafkagod 2036u IPv4 30004729 0t0 TCP 
> mwkafka-prod-02.tbd:36882->mwkafka-prod-01.dr:9092 (ESTABLISHED)
> java 144319 kafkagod 2037u IPv4 30004914 0t0 TCP 
> mwkafka-prod-02.tbd:58426->mwkafka-prod-01.nyc:9092 (ESTABLISHED)
> java 144319 kafkagod 2038u IPv4 30015651 0t0 TCP 
> mwkafka-prod-02.tbd:36884->mwkafka-prod-01.dr:9092 (ESTABLISHED)
> java 144319 kafkagod 2039u IPv4 30012966 0t0 TCP 
> mwkafka-prod-02.tbd:58422->mwkafka-prod-01.nyc:9092 (ESTABLISHED)
> java 144319 kafkagod 2040u IPv4 30005643 0t0 TCP 
> mwkafka-prod-02.tbd:36252->mwkafka-prod-02.dr:9092 (ESTABLISHED)
> java 144319 kafkagod 2041u IPv4 30012944 0t0 TCP 
> mwkafka-prod-02.tbd:36286->mwkafka-prod-02.dr:9092 (ESTABLISHED)
> java 144319 kafkagod 2042u IPv4 30012973 0t0 TCP 
> mwkafka-prod-02.tbd:9092->mwkafka-prod-01.nyc:51924 (ESTABLISHED)
> java 144319 kafkagod 2043u sock 0,7 0t0 30012463 protocol: TCP
> java 144319 kafkagod 2044u IPv4 30012979 0t0 TCP 
> mwkafka-prod-02.tbd:9092->mwkafka-prod-01.dr:39994 (ESTABLISHED)
> java 144319 kafkagod 2045u IPv4 30012899 0t0 TCP 
> mwkafka-prod-02.tbd:9092->mwkafka-prod-02.nyc:34548 (ESTABLISHED)
> java 144319 kafkagod 2046u sock 0,7 0t0 30003437 protocol: TCP
> java 144319 kafkagod 2047u IPv4 30012980 0t0 TCP 
> mwkafka-prod-02.tbd:9092->mwkafka-prod-02.dr:38120 (ESTABLISHED)
> java 144319 kafkagod 2048u sock 0,7 0t0 30012546 protocol: TCP
> java 144319 kafkagod 2049

[jira] [Resolved] (KAFKA-8072) Transient failure in SslSelectorTest.testCloseOldestConnectionWithMultipleStagedReceives

2019-03-11 Thread Rajini Sivaram (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-8072.
---
   Resolution: Fixed
Fix Version/s: (was: 2.2.1)
   (was: 2.3.0)
   2.2.0

This is fixed by the change made for KAFKA-7288.

> Transient failure in 
> SslSelectorTest.testCloseOldestConnectionWithMultipleStagedReceives
> 
>
> Key: KAFKA-8072
> URL: https://issues.apache.org/jira/browse/KAFKA-8072
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Bill Bejeck
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.2.0
>
>
> Failed in build [https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/20134/]
> Stacktrace
> {noformat}
> Error Message
> java.lang.AssertionError: Channel has bytes buffered
> Stacktrace
> java.lang.AssertionError: Channel has bytes buffered
>   at org.junit.Assert.fail(Assert.java:89)
>   at org.junit.Assert.assertTrue(Assert.java:42)
>   at org.junit.Assert.assertFalse(Assert.java:65)
>   at 
> org.apache.kafka.common.network.SelectorTest.createConnectionWithStagedReceives(SelectorTest.java:499)
>   at 
> org.apache.kafka.common.network.SelectorTest.verifyCloseOldestConnectionWithStagedReceives(SelectorTest.java:505)
>   at 
> org.apache.kafka.common.network.SelectorTest.testCloseOldestConnectionWithMultipleStagedReceives(SelectorTest.java:474)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>   at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.prox

[jira] [Created] (KAFKA-8090) Replace ControlledShutdown request/response with automated protocol

2019-03-11 Thread Mickael Maison (JIRA)
Mickael Maison created KAFKA-8090:
-

 Summary: Replace ControlledShutdown request/response with 
automated protocol
 Key: KAFKA-8090
 URL: https://issues.apache.org/jira/browse/KAFKA-8090
 Project: Kafka
  Issue Type: Sub-task
Reporter: Mickael Maison
Assignee: Mickael Maison






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6755) MaskField SMT should optionally take a literal value to use instead of using null

2019-03-11 Thread Valeria Vasylieva (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16789383#comment-16789383
 ] 

Valeria Vasylieva commented on KAFKA-6755:
--

[~rhauch] thanks for the review! I have introduced the suggested changes and 
started the discussion on the KIP. Hope it will not take a long time.

> MaskField SMT should optionally take a literal value to use instead of using 
> null
> -
>
> Key: KAFKA-6755
> URL: https://issues.apache.org/jira/browse/KAFKA-6755
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Randall Hauch
>Assignee: Valeria Vasylieva
>Priority: Major
>  Labels: needs-kip, newbie
>   Original Estimate: 8h
>  Remaining Estimate: 8h
>
> The existing {{org.apache.kafka.connect.transforms.MaskField}} SMT always 
> uses the null value for the field's type. It'd be nice to *optionally* be 
> able to specify a literal value instead, where the SMT would convert the 
> literal string value in the configuration to the desired type (using the new 
> {{Values}} methods).
> Use cases: mask out the IP address, SSN, or other personally identifiable 
> information (PII).
> Since this changes the API, it will require a KIP.
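For illustration, a hypothetical connector configuration using the proposed option might look like the following; the property name ("replacement") and the masking value are assumptions for this sketch, not the wording decided in the KIP:
{noformat}
# Hypothetical config sketch; "replacement" is an assumed property name, not a released option.
transforms=mask
transforms.mask.type=org.apache.kafka.connect.transforms.MaskField$Value
transforms.mask.fields=ssn,ip_address
transforms.mask.replacement=****-**-****
{noformat}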



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8091) Flaky test DynamicBrokerReconfigurationTest#testAddRemoveSaslListener

2019-03-11 Thread Rajini Sivaram (JIRA)
Rajini Sivaram created KAFKA-8091:
-

 Summary: Flaky test  
DynamicBrokerReconfigurationTest#testAddRemoveSaslListener 
 Key: KAFKA-8091
 URL: https://issues.apache.org/jira/browse/KAFKA-8091
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.2.0
Reporter: Rajini Sivaram
Assignee: Rajini Sivaram
 Fix For: 2.3.0, 2.2.1


See KAFKA-6824 for details. Since the SSL version of the test is currently 
skipped using @Ignore, fix this for SASL first and wait for that to be 
stable before re-enabling the SSL tests under KAFKA-6824. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8090) Replace ControlledShutdown request/response with automated protocol

2019-03-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16789416#comment-16789416
 ] 

ASF GitHub Bot commented on KAFKA-8090:
---

mimaison commented on pull request #6423: KAFKA-8090: Use automatic RPC 
generation in ControlledShutdown
URL: https://github.com/apache/kafka/pull/6423
 
 
   
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Replace ControlledShutdown request/response with automated protocol
> ---
>
> Key: KAFKA-8090
> URL: https://issues.apache.org/jira/browse/KAFKA-8090
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7875) Add KStream#flatTransformValues

2019-03-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16789429#comment-16789429
 ] 

ASF GitHub Bot commented on KAFKA-7875:
---

cadonna commented on pull request #6424: KAFKA-7875: Add 
KStream.flatTransformValues
URL: https://github.com/apache/kafka/pull/6424
 
 
   - Adds flatTransformValues methods in KStream
   - Adds processor supplier and processor for flatTransformValues
   
   This contribution is my original work and I license the work to the project 
under the project's open source license.
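For context, a rough usage sketch of the new operation follows; at the time of this PR the exact method and supplier signatures are defined by KIP-313 and the patch itself, so the types below are assumptions rather than the final API:
{noformat}
// Rough usage sketch only; signatures assumed per KIP-313 and may differ from the merged API.
import java.util.Arrays;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.ValueTransformer;
import org.apache.kafka.streams.kstream.ValueTransformerSupplier;
import org.apache.kafka.streams.processor.ProcessorContext;

public class FlatTransformValuesSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> sentences = builder.stream("sentences");

        // One input value may produce zero or more output values.
        ValueTransformerSupplier<String, Iterable<String>> splitter =
                () -> new ValueTransformer<String, Iterable<String>>() {
                    @Override
                    public void init(ProcessorContext context) { }

                    @Override
                    public Iterable<String> transform(String value) {
                        return Arrays.asList(value.split(" "));
                    }

                    @Override
                    public void close() { }
                };

        KStream<String, String> words = sentences.flatTransformValues(splitter);
        words.to("words");
    }
}
{noformat}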
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add KStream#flatTransformValues
> ---
>
> Key: KAFKA-7875
> URL: https://issues.apache.org/jira/browse/KAFKA-7875
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Bruno Cadonna
>Priority: Major
>  Labels: kip
>
> Part of KIP-313: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-313%3A+Add+KStream.flatTransform+and+KStream.flatTransformValues]
>  
> Compare https://issues.apache.org/jira/browse/KAFKA-4217



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8091) Flaky test DynamicBrokerReconfigurationTest#testAddRemoveSaslListener

2019-03-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16789444#comment-16789444
 ] 

ASF GitHub Bot commented on KAFKA-8091:
---

rajinisivaram commented on pull request #6425: KAFKA-8091; Wait for processor 
shutdown before testing removed listeners
URL: https://github.com/apache/kafka/pull/6425
 
 
   `DynamicBrokerReconfigurationTest.testAddRemoveSaslListeners` removes a 
listener, waits for the config to be propagated to all brokers and then 
validates that connections to the removed listener fail. But there is a small 
timing window between config update and Processor shutdown. Before validating 
that connections to a removed listener fail, this PR waits for all metrics of 
the removed listener to be deleted, ensuring that the Processors of the 
listener have been shut down.
   
   Ran the test with the fix 1000 times over the weekend without any failures. 
It failed after ~200 runs without the fix.
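A minimal sketch of the wait-for-condition pattern the fix relies on is shown below; this is illustrative only, not the actual TestUtils.waitUntilTrue from Kafka's test code, and the metricsForListener lookup is an assumed helper:
{noformat}
// Illustrative wait-for-condition helper, not Kafka's actual test utility.
import java.util.function.BooleanSupplier;

public class WaitUntil {
    static void waitUntilTrue(BooleanSupplier condition, String message, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                throw new AssertionError(message);
            }
            Thread.sleep(100);   // keep polling until the condition holds or the deadline passes
        }
    }

    // Usage in the spirit of the fix (metricsForListener is an assumed helper): block until the
    // removed listener's metrics are deleted, then assert that new connections to it fail.
    // waitUntilTrue(() -> metricsForListener("SASL_PLAINTEXT_EXTERNAL").isEmpty(),
    //               "Metrics for removed listener not deleted", 15_000);
}
{noformat}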
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Flaky test  DynamicBrokerReconfigurationTest#testAddRemoveSaslListener 
> ---
>
> Key: KAFKA-8091
> URL: https://issues.apache.org/jira/browse/KAFKA-8091
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.2.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 2.3.0, 2.2.1
>
>
> See KAFKA-6824 for details. Since the SSL version of the test is currently 
> skipped using @Ignore, fix this for SASL first and wait for that to be 
> stable before re-enabling the SSL tests under KAFKA-6824. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7976) Flaky Test DynamicBrokerReconfigurationTest#testUncleanLeaderElectionEnable

2019-03-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16789559#comment-16789559
 ] 

ASF GitHub Bot commented on KAFKA-7976:
---

rajinisivaram commented on pull request #6426: KAFKA-7976; Update config before 
notifying controller of unclean leader update
URL: https://github.com/apache/kafka/pull/6426
 
 
   When unclean leader election is enabled dynamically on brokers, we notify the 
controller of the update before updating KafkaConfig. When processing this 
event, the controller's decision to elect unclean leaders is based on the current 
KafkaConfig, so there is a small timing window in which the controller may not 
elect an unclean leader because the server's `KafkaConfig` has not yet been updated. 
The PR fixes this timing window by using the existing `BrokerReconfigurable` 
interface used by other classes which rely on the current value of 
`KafkaConfig`.
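A minimal sketch of the ordering involved follows, using illustrative names rather than Kafka's actual classes: the new config value is published before the controller is notified, so the notification handler always observes the updated value.
{noformat}
// Illustrative sketch of the timing window described above; not Kafka's actual classes.
public class DynamicConfigSketch {
    private volatile boolean uncleanLeaderElectionEnable = false;
    private final Runnable controllerNotifier;

    public DynamicConfigSketch(Runnable controllerNotifier) {
        this.controllerNotifier = controllerNotifier;
    }

    public void update(boolean newValue) {
        uncleanLeaderElectionEnable = newValue;   // publish the new config value first...
        controllerNotifier.run();                 // ...then notify the controller
    }

    public boolean uncleanLeaderElectionEnable() {
        return uncleanLeaderElectionEnable;
    }
}
{noformat}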
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Flaky Test DynamicBrokerReconfigurationTest#testUncleanLeaderElectionEnable
> ---
>
> Key: KAFKA-7976
> URL: https://issues.apache.org/jira/browse/KAFKA-7976
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Rajini Sivaram
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for `2.2` release, I create tickets for all 
> observed test failures.
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.2-jdk8/detail/kafka-2.2-jdk8/28/]
> {quote}java.lang.AssertionError: Unclean leader not elected
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at 
> kafka.server.DynamicBrokerReconfigurationTest.testUncleanLeaderElectionEnable(DynamicBrokerReconfigurationTest.scala:488){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7939) Flaky Test KafkaAdminClientTest#testCreateTopicsRetryBackoff

2019-03-11 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7939.
--
Resolution: Fixed

Issue resolved by pull request 6418
[https://github.com/apache/kafka/pull/6418]

> Flaky Test KafkaAdminClientTest#testCreateTopicsRetryBackoff
> 
>
> Key: KAFKA-7939
> URL: https://issues.apache.org/jira/browse/KAFKA-7939
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.2.1, 2.3.0
>
>
> To get stable nightly builds for `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/12/]
> {quote}org.junit.runners.model.TestTimedOutException: test timed out after 
> 12 milliseconds at java.lang.Object.wait(Native Method) at 
> java.lang.Object.wait(Object.java:502) at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:92)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
>  at 
> org.apache.kafka.clients.admin.KafkaAdminClientTest.testCreateTopicsRetryBackoff(KafkaAdminClientTest.java:347){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7939) Flaky Test KafkaAdminClientTest#testCreateTopicsRetryBackoff

2019-03-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16789624#comment-16789624
 ] 

ASF GitHub Bot commented on KAFKA-7939:
---

omkreddy commented on pull request #6418: KAFKA-7939: Fix timing issue in 
KafkaAdminClientTest.testCreateTopicsRetryBackoff
URL: https://github.com/apache/kafka/pull/6418
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Flaky Test KafkaAdminClientTest#testCreateTopicsRetryBackoff
> 
>
> Key: KAFKA-7939
> URL: https://issues.apache.org/jira/browse/KAFKA-7939
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/12/]
> {quote}org.junit.runners.model.TestTimedOutException: test timed out after 
> 12 milliseconds at java.lang.Object.wait(Native Method) at 
> java.lang.Object.wait(Object.java:502) at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:92)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
>  at 
> org.apache.kafka.clients.admin.KafkaAdminClientTest.testCreateTopicsRetryBackoff(KafkaAdminClientTest.java:347){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-8089) High level consumer from MirrorMaker is slow to deal with SSL certification expiration

2019-03-11 Thread Rajini Sivaram (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram reassigned KAFKA-8089:
-

Assignee: Rajini Sivaram

> High level consumer from MirrorMaker is slow to deal with SSL certification 
> expiration
> --
>
> Key: KAFKA-8089
> URL: https://issues.apache.org/jira/browse/KAFKA-8089
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Affects Versions: 2.0.0
>Reporter: Henry Cai
>Assignee: Rajini Sivaram
>Priority: Critical
>
> We have been using Kafka 2.0's mirror maker (which uses the high-level 
> consumer) to do replication. The topic is SSL enabled and the certificate will 
> expire at a random time within 12 hours. When the certificate expires we see 
> many SSL-related exceptions in the log:
>  
> [2019-03-07 18:02:54,128] ERROR [Consumer 
> clientId=kafkamirror-euw1-use1-m10nkafka03-1, 
> groupId=kafkamirror-euw1-use1-m10nkafka03] Connection to node 3005 failed 
> authentication due to: SSL handshake failed 
> (org.apache.kafka.clients.NetworkClient)
> This error will repeat for several hours.
> However, even with the SSL error, the preexisting socket connections still 
> work, so the main fetching activity is actually not affected, but the 
> metadata operations from the client and the heartbeats from the heartbeat 
> thread are affected since they might open new socket connections. I think 
> those errors most likely originate from those side activities.
> The situation lasts several hours until the main fetcher thread tries to 
> open a new connection (usually due to a consumer rebalance); the SSL 
> authentication exception then aborts the operation and the mirror maker exits.
> During those several hours, the client cannot get the latest 
> metadata and heartbeats also falter (we see rebalancing triggered because of 
> this).
> In NetworkClient.processDisconnection(), when the above method prints the 
> ERROR message, can it just throw the AuthenticationException up? This would 
> kill KafkaConsumer.poll() and speed up the certificate recycling 
> (in our case, we would restart the mirror maker with the new certificate).
>  
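As a rough sketch of the client-side behaviour being requested (assuming the exception were propagated out of poll()), the consumer loop could fail fast and let the process restart with fresh certificates; the poll loop, the process() hook, and the shutdown strategy below are assumptions, not the proposed patch:
{noformat}
// Hedged sketch only: illustrates how a mirror-maker-like loop could react if poll() surfaced
// the AuthenticationException promptly. Setup, processing, and shutdown are assumed.
import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.AuthenticationException;

public class FailFastPollLoop {
    public static void pollLoop(KafkaConsumer<byte[], byte[]> consumer) {
        try {
            while (true) {
                ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofSeconds(1));
                // process(records);   // application-specific replication logic (assumed)
            }
        } catch (AuthenticationException e) {
            // Certificate expired: exit so the process can be restarted with new certificates.
            System.err.println("SSL authentication failed, shutting down: " + e);
            System.exit(1);
        }
    }
}
{noformat}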



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-8062) StateListener is not notified when StreamThread dies

2019-03-11 Thread Bill Bejeck (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck reassigned KAFKA-8062:
--

Assignee: Bill Bejeck

> StateListener is not notified when StreamThread dies
> 
>
> Key: KAFKA-8062
> URL: https://issues.apache.org/jira/browse/KAFKA-8062
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.1
> Environment: Kafka 2.1.1, kafka-streams-scala version 2.1.1
>Reporter: Andrey Volkov
>Assignee: Bill Bejeck
>Priority: Minor
>
> I want my application to react when streams die, so I am trying to use 
> KafkaStreams.setStateListener and also checking KafkaStreams.state() from time 
> to time.
> The test scenario: Kafka is available, but there are no topics that my 
> Topology is supposed to use.
> I expect streams to crash and the state listener to be notified about that, 
> with the new state ERROR. KafkaStreams.state() should also return ERROR.
> In reality the streams crash, but the KafkaStreams.state() method always 
> returns REBALANCING and the last time the StateListener was called, the new 
> state was also REBALANCING. 
>  
> I believe the reason for this is in the methods:
> org.apache.kafka.streams.KafkaStreams.StreamStateListener.onChange(), which 
> does not react to the state StreamsThread.State.PENDING_SHUTDOWN,
> and
> org.apache.kafka.streams.processor.internals.StreamThread.RebalanceListener.onPartitionsAssigned,
>  which calls shutdown(), setting the state to PENDING_SHUTDOWN, and then
> streamThread.setStateListener(null), effectively removing the state listener, 
> so that the DEAD state of the thread never reaches the KafkaStreams object.
> Here is an extract from the logs:
> {{14:57:03.272 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> ERROR o.a.k.s.p.i.StreamsPartitionAssignor - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer] 
> test-input-topic is unknown yet during rebalance, please make sure they have 
> been pre-created before starting the Streams application.}}
> {{14:57:03.283 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.c.i.AbstractCoordinator - [Consumer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer, 
> groupId=Test] Successfully joined group with generation 1}}
> {{14:57:03.284 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.c.i.ConsumerCoordinator - [Consumer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer, 
> groupId=Test] Setting newly assigned partitions []}}
> {{14:57:03.285 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] Informed to shut 
> down}}
> {{14:57:03.285 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] State transition 
> from PARTITIONS_REVOKED to PENDING_SHUTDOWN}}
> {{14:57:03.316 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] Shutting down}}
> {{14:57:03.317 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.c.KafkaConsumer - [Consumer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-restore-consumer,
>  groupId=] Unsubscribed all topics or patterns and assigned partitions}}
> {{14:57:03.317 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.p.KafkaProducer - [Producer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-producer] 
> Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.}}
> {{14:57:03.332 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] State transition 
> from PENDING_SHUTDOWN to DEAD}}
> {{14:57:03.332 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] Shutdown complete}}
> After this, calls to KafkaStreams.state() still return REBALANCING.
> There is a workaround with requesting KafkaStreams.localThreadsMetadata() and 
> checking each thread's state manually, but that seems very wrong.
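A minimal sketch of that workaround, assuming the kafka-streams 2.1 API, is shown below; the shutdown hook is a hypothetical application callback:
{noformat}
// Sketch of the workaround: poll localThreadsMetadata() and treat "all threads DEAD" as an
// error, since KafkaStreams.state() keeps reporting REBALANCING in this scenario.
import org.apache.kafka.streams.KafkaStreams;

public class DeadThreadCheck {
    static void checkThreads(KafkaStreams streams, Runnable shutdownApplication) {
        boolean allThreadsDead = streams.localThreadsMetadata().stream()
                .allMatch(meta -> "DEAD".equals(meta.threadState()));
        if (allThreadsDead) {
            shutdownApplication.run();   // react as if the state listener had reported ERROR
        }
    }
}
{noformat}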



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7996) KafkaStreams does not pass timeout when closing Producer

2019-03-11 Thread Lee Dongjin (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16789769#comment-16789769
 ] 

Lee Dongjin commented on KAFKA-7996:


I double checked this issue, and found the following:

1. All of `Producer`, `Consumer`, and `AdminClient` have 
`CommonClientConfigs.REQUEST_TIMEOUT_MS_CONFIG`, which controls the network call 
timeout. However, `Consumer` has an additional 
`ConsumerConfig#DEFAULT_API_TIMEOUT_MS_CONFIG` and uses it as the request timeout 
instead of `CommonClientConfigs.REQUEST_TIMEOUT_MS_CONFIG`.
2. When closing `Consumer` without any timeout duration, it closes with a 
timeout of `Consumer#DEFAULT_CLOSE_TIMEOUT_MS`. But in the case of `Producer` 
and `AdminClient`, there is no similar default - `Producer#close`'s default timeout 
is `Duration.ofMillis(Long.MAX_VALUE)`.

As far as I understand, there is no consistent approach to handling 
close timeouts - and if that is true, we have three choices:

1. Add [Producer, AdminClient]#DEFAULT_CLOSE_TIMEOUT_MS and close with this 
default value. This approach doesn't require a KIP.
2. Use `ConsumerConfig#DEFAULT_API_TIMEOUT_MS_CONFIG` as a close timeout for 
all of `Producer`, `Consumer`, and `AdminClient`. This approach also doesn't 
require a KIP.
3. Provide additional timeout options for closing [Producer, Consumer, 
AdminClient] in `KafkaStreams` like the draft PR. This approach provides users 
a way to control the behavior, but it is an API change so requires a KIP.

What do you think? cc/ [~mjsax] [~hachikuji] [~Yohan123] [~guozhang] [~bbejeck]
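For reference, a minimal sketch of the asymmetry discussed above; it assumes a client version that offers Producer#close(Duration) (older clients use close(long, TimeUnit)):
{noformat}
// Minimal sketch: close() without arguments blocks for up to Long.MAX_VALUE ms, while an
// explicit bound behaves the way a KafkaStreams#close(Duration) caller would expect.
import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerCloseSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        // producer.close();                     // default: timeoutMillis = Long.MAX_VALUE
        producer.close(Duration.ofSeconds(30));  // bounded close (close(long, TimeUnit) on older clients)
    }
}
{noformat}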

> KafkaStreams does not pass timeout when closing Producer
> 
>
> Key: KAFKA-7996
> URL: https://issues.apache.org/jira/browse/KAFKA-7996
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0
>Reporter: Patrik Kleindl
>Assignee: Lee Dongjin
>Priority: Major
>  Labels: needs-kip
>
> [https://confluentcommunity.slack.com/messages/C48AHTCUQ/convo/C48AHTCUQ-1550831721.026100/]
> We are running 2.1 and have a case where the shutdown of a streams 
> application takes several minutes
> I noticed that although we call streams.close with a timeout of 30 seconds 
> the log says
> [Producer 
> clientId=…-8be49feb-8a2e-4088-bdd7-3c197f6107bb-StreamThread-1-producer] 
> Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.
> Matthias J Sax [vor 3 Tagen]
> I just checked the code, and yes, we don't provide a timeout for the producer 
> on close()...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8091) Flaky test DynamicBrokerReconfigurationTest#testAddRemoveSaslListener

2019-03-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16789858#comment-16789858
 ] 

ASF GitHub Bot commented on KAFKA-8091:
---

rajinisivaram commented on pull request #6425: KAFKA-8091; Wait for processor 
shutdown before testing removed listeners
URL: https://github.com/apache/kafka/pull/6425
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Flaky test  DynamicBrokerReconfigurationTest#testAddRemoveSaslListener 
> ---
>
> Key: KAFKA-8091
> URL: https://issues.apache.org/jira/browse/KAFKA-8091
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.2.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 2.3.0, 2.2.1
>
>
> See KAFKA-6824 for details. Since the SSL version of the test is currently 
> skipped using @Ignore, fix this for SASL first and wait for that to be 
> stable before re-enabling the SSL tests under KAFKA-6824. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7971) Producer in Streams environment

2019-03-11 Thread Maciej Lizewski (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16789856#comment-16789856
 ] 

Maciej Lizewski commented on KAFKA-7971:


[~mjsax] this issue is cross-posted as 
[https://stackoverflow.com/questions/54794953/detecting-abandoned-processess-in-kafka-streams-2-0]

not the one you provided... just to clarify.

> Producer in Streams environment
> ---
>
> Key: KAFKA-7971
> URL: https://issues.apache.org/jira/browse/KAFKA-7971
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Maciej Lizewski
>Priority: Minor
>  Labels: newbie
>
> It would be nice to have Producers that can emit messages to a topic just like any 
> producer but also have access to local stores from the Streams environment in 
> Spring.
> Consider this case: I have an event-sourced ordering process like this:
> [EVENTS QUEUE] -> [MERGING PROCESS] -> [ORDERS CHANGELOG/KTABLE]
> The merging process uses the local store "opened orders" to easily apply new 
> changes.
> Now I want to implement a process for closing abandoned orders (orders that were 
> started, but have had no change for too long and hang in the beginning 
> status). The easiest way is to periodically scan the "opened orders" store and 
> produce an "abandon" event for every order that meets the criteria. The only way 
> now is to create a Transformer with a punctuator and connect its output to [EVENTS 
> QUEUE]. That is obvious, but the Transformer must also be connected to some input 
> stream, and these events must be dropped as we want only the punctuator 
> results. This causes unnecessary overhead in processing input messages 
> (although they are just dropped) and it is not very elegant.
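A rough, self-contained sketch of the Transformer-with-punctuator workaround described above follows; the "opened-orders" store name, the simplified String/Long types, and the one-hour abandonment threshold are placeholders, not real application code:
{noformat}
// Illustrative workaround: a Transformer that drops its input and emits "abandon" events from a
// wall-clock punctuator scanning an "opened-orders" store (assumed to be maintained elsewhere
// in the topology).
import java.time.Duration;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.PunctuationType;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.KeyValueStore;

public class AbandonedOrdersTransformer implements Transformer<String, String, KeyValue<String, String>> {
    private static final long MAX_AGE_MS = Duration.ofHours(1).toMillis();

    @Override
    @SuppressWarnings("unchecked")
    public void init(ProcessorContext context) {
        KeyValueStore<String, Long> openedOrders =
                (KeyValueStore<String, Long>) context.getStateStore("opened-orders");
        // Periodically scan the store and forward an "abandon" event for stale orders.
        context.schedule(Duration.ofMinutes(5), PunctuationType.WALL_CLOCK_TIME, now -> {
            try (KeyValueIterator<String, Long> it = openedOrders.all()) {
                while (it.hasNext()) {
                    KeyValue<String, Long> entry = it.next();
                    if (now - entry.value > MAX_AGE_MS) {
                        context.forward(entry.key, "ABANDONED");
                    }
                }
            }
        });
    }

    @Override
    public KeyValue<String, String> transform(String key, String value) {
        return null;   // drop the input records; only the punctuator output is wanted
    }

    @Override
    public void close() { }
}
{noformat}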



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-8091) Flaky test DynamicBrokerReconfigurationTest#testAddRemoveSaslListener

2019-03-11 Thread Rajini Sivaram (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-8091.
---
Resolution: Fixed
  Reviewer: Manikumar

> Flaky test  DynamicBrokerReconfigurationTest#testAddRemoveSaslListener 
> ---
>
> Key: KAFKA-8091
> URL: https://issues.apache.org/jira/browse/KAFKA-8091
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.2.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 2.3.0, 2.2.1
>
>
> See KAFKA-6824 for details. Since the SSL version of the test is currently 
> skipped using @Ignore, fix this for SASL first and wait for that to be 
> stable before re-enabling the SSL tests under KAFKA-6824. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7565) NPE in KafkaConsumer

2019-03-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16789868#comment-16789868
 ] 

ASF GitHub Bot commented on KAFKA-7565:
---

jsancio commented on pull request #6427: KAFKA-7565: Better messaging for 
invalid fetch response
URL: https://github.com/apache/kafka/pull/6427
 
 
   Users have reported that when consumer poll wake up is used, it is
   possible to receive fetch responses that don't match the copied topic
   partitions collection for the session when the fetch request was created.
   
   This commit improves the error handling here by throwing an
   IllegalStateException instead of a NullPointerException, and by
   generating an exception message that includes a bit more
   information.
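An illustrative sketch of this kind of defensive check follows; the map and node names are placeholders, not the actual Fetcher fields touched by the patch:
{noformat}
// Illustrative only: surface a missing session handler as an IllegalStateException with
// context, instead of letting it fail later as a NullPointerException.
import java.util.Map;

public class FetchResponseCheck {
    static <T> T handlerForNode(Map<Integer, T> sessionHandlers, int nodeId) {
        T handler = sessionHandlers.get(nodeId);
        if (handler == null) {
            throw new IllegalStateException("Unable to find session handler for node " + nodeId
                    + "; valid nodes are " + sessionHandlers.keySet());
        }
        return handler;
    }
}
{noformat}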
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> NPE in KafkaConsumer
> 
>
> Key: KAFKA-7565
> URL: https://issues.apache.org/jira/browse/KAFKA-7565
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 1.1.1
>Reporter: Alexey Vakhrenev
>Assignee: Jose Armando Garcia Sancio
>Priority: Critical
>
> The stacktrace is
> {noformat}
> java.lang.NullPointerException
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher$1.onSuccess(Fetcher.java:221)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher$1.onSuccess(Fetcher.java:202)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:563)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:390)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:244)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1171)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1115)
> {noformat}
> Couldn't find minimal reproducer, but it happens quite often in our system. 
> We use {{pause()}} and {{wakeup()}} methods quite extensively, maybe it is 
> somehow related.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7565) NPE in KafkaConsumer

2019-03-11 Thread Jose Armando Garcia Sancio (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16789872#comment-16789872
 ] 

Jose Armando Garcia Sancio commented on KAFKA-7565:
---

The PR/patch above doesn't address the issue but instead improves the 
messaging. It also provides slightly more information that would allow us to 
debug this better in the future.

> NPE in KafkaConsumer
> 
>
> Key: KAFKA-7565
> URL: https://issues.apache.org/jira/browse/KAFKA-7565
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 1.1.1
>Reporter: Alexey Vakhrenev
>Assignee: Jose Armando Garcia Sancio
>Priority: Critical
>
> The stacktrace is
> {noformat}
> java.lang.NullPointerException
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher$1.onSuccess(Fetcher.java:221)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher$1.onSuccess(Fetcher.java:202)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:563)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:390)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:244)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1171)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1115)
> {noformat}
> Couldn't find minimal reproducer, but it happens quite often in our system. 
> We use {{pause()}} and {{wakeup()}} methods quite extensively, maybe it is 
> somehow related.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7224) KIP-328: Add spill-to-disk for Suppression

2019-03-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16789906#comment-16789906
 ] 

ASF GitHub Bot commented on KAFKA-7224:
---

vvcephei commented on pull request #6428: KAFKA-7224: [WIP] Persistent Suppress 
[WIP]
URL: https://github.com/apache/kafka/pull/6428
 
 
   WIP - no need to review. I'm just getting a copy of this onto github.
   
   I'll call for reviews once I think it's ready.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> KIP-328: Add spill-to-disk for Suppression
> --
>
> Key: KAFKA-7224
> URL: https://issues.apache.org/jira/browse/KAFKA-7224
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Major
>
> As described in 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-328%3A+Ability+to+suppress+updates+for+KTables]
> Following on KAFKA-7223, implement the spill-to-disk buffering strategy.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8092) Flaky Test GroupAuthorizerIntegrationTest#testSendOffsetsWithNoConsumerGroupDescribeAccess

2019-03-11 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8092:
--

 Summary: Flaky Test 
GroupAuthorizerIntegrationTest#testSendOffsetsWithNoConsumerGroupDescribeAccess
 Key: KAFKA-8092
 URL: https://issues.apache.org/jira/browse/KAFKA-8092
 Project: Kafka
  Issue Type: Bug
  Components: core, unit tests
Affects Versions: 2.2.0
Reporter: Matthias J. Sax
 Fix For: 2.3.0, 2.2.1


[https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/64/testReport/junit/kafka.api/GroupAuthorizerIntegrationTest/testSendOffsetsWithNoConsumerGroupDescribeAccess/]
{quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata not 
propagated after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) 
at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
scala.collection.immutable.Range.foreach(Range.scala:158) at 
scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
kafka.api.AuthorizerIntegrationTest.setUp(AuthorizerIntegrationTest.scala:242){quote}
STDOUT
{quote}[2019-03-11 16:08:29,319] ERROR [KafkaApi-0] Error when handling 
request: clientId=0, correlationId=0, api=UPDATE_METADATA, 
body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=38324,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
 (kafka.server.KafkaApis:76) 
org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
Request(processor=0, connectionId=127.0.0.1:38324-127.0.0.1:59458-0, 
session=Session(Group:testGroup,/127.0.0.1), 
listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, buffer=null) 
is not authorized. [2019-03-11 16:08:29,933] ERROR [Consumer 
clientId=consumer-99, groupId=my-group] Offset commit failed on partition 
topic-0 at offset 5: Not authorized to access topics: [Topic authorization 
failed.] (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:812) 
[2019-03-11 16:08:29,933] ERROR [Consumer clientId=consumer-99, 
groupId=my-group] Not authorized to commit to topics [topic] 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:850) 
[2019-03-11 16:08:31,370] ERROR [KafkaApi-0] Error when handling request: 
clientId=0, correlationId=0, api=UPDATE_METADATA, 
body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=33310,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
 (kafka.server.KafkaApis:76) 
org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
Request(processor=0, connectionId=127.0.0.1:33310-127.0.0.1:49676-0, 
session=Session(Group:testGroup,/127.0.0.1), 
listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, buffer=null) 
is not authorized. [2019-03-11 16:08:34,437] ERROR [KafkaApi-0] Error when 
handling request: clientId=0, correlationId=0, api=UPDATE_METADATA, 
body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=35999,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
 (kafka.server.KafkaApis:76) 
org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
Request(processor=0, connectionId=127.0.0.1:35999-127.0.0.1:48268-0, 
session=Session(Group:testGroup,/127.0.0.1), 
listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, buffer=null) 
is not authorized. [2019-03-11 16:08:40,978] ERROR [KafkaApi-0] Error when 
handling request: clientId=0, correlationId=0, api=UPDATE_METADATA, 
body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=38267,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
 (kafka.server.KafkaApis:76) 
org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
Request(processor=0, connectionId=127.0.0.1:38267-127.0.0.1:53148-0, 
session=Session(Group:testGroup,/127.0.0.1), 
listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, buffer=null) 
is not authorized. [2019-03-11 16:08:44,446] WARN Ignoring unexpected runtime 
exception (org.apache.zookeeper.server.NIOServerCnxnFactory:236) 
java.nio.channels.CancelledKeyException at 
sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:73) at 
sun.nio.ch.SelectionKeyImpl.readyOps(Selecti

[jira] [Updated] (KAFKA-8063) Flaky Test WorkerTest#testConverterOverrides

2019-03-11 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8063:
---
Affects Version/s: 2.2.0

> Flaky Test WorkerTest#testConverterOverrides
> 
>
> Key: KAFKA-8063
> URL: https://issues.apache.org/jira/browse/KAFKA-8063
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect, unit tests
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/20068/testReport/junit/org.apache.kafka.connect.runtime/WorkerTest/testConverterOverrides/]
> {quote}java.lang.AssertionError: Expectation failure on verify: 
> WorkerSourceTask.run(): expected: 1, actual: 1 at 
> org.easymock.internal.MocksControl.verify(MocksControl.java:242){quote}
> STDOUT
> {quote}[2019-03-07 02:28:25,482] (Test worker) ERROR Failed to start 
> connector test-connector (org.apache.kafka.connect.runtime.Worker:234) 
> org.apache.kafka.connect.errors.ConnectException: Failed to find Connector at 
> org.easymock.internal.MockInvocationHandler.invoke(MockInvocationHandler.java:46)
>  at 
> org.easymock.internal.ObjectMethodsFilter.invoke(ObjectMethodsFilter.java:101)
>  at 
> org.easymock.internal.ClassProxyFactory$MockMethodInterceptor.intercept(ClassProxyFactory.java:97)
>  at 
> org.apache.kafka.connect.runtime.isolation.Plugins$$EnhancerByCGLIB$$205db954.newConnector()
>  at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:226) 
> at 
> org.apache.kafka.connect.runtime.WorkerTest.testStartConnectorFailure(WorkerTest.java:256)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.internal.runners.TestMethod.invoke(TestMethod.java:68) at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:326)
>  at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.java:89) at 
> org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:97)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.executeTest(PowerMockJUnit44RunnerDelegateImpl.java:310)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTestInSuper(PowerMockJUnit47RunnerDelegateImpl.java:131)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.access$100(PowerMockJUnit47RunnerDelegateImpl.java:59)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner$TestExecutorStatement.evaluate(PowerMockJUnit47RunnerDelegateImpl.java:147)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.evaluateStatement(PowerMockJUnit47RunnerDelegateImpl.java:107)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTest(PowerMockJUnit47RunnerDelegateImpl.java:82)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runBeforesThenTestThenAfters(PowerMockJUnit44RunnerDelegateImpl.java:298)
>  at org.junit.internal.runners.MethodRoadie.runTest(MethodRoadie.java:87) at 
> org.junit.internal.runners.MethodRoadie.run(MethodRoadie.java:50) at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.invokeTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:218)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.runMethods(PowerMockJUnit44RunnerDelegateImpl.java:160)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$1.run(PowerMockJUnit44RunnerDelegateImpl.java:134)
>  at 
> org.junit.internal.runners.ClassRoadie.runUnprotected(ClassRoadie.java:34) at 
> org.junit.internal.runners.ClassRoadie.runProtected(ClassRoadie.java:44) at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.run(PowerMockJUnit44RunnerDelegateImpl.java:136)
>  at 
> org.powermock.modules.junit4.common.internal.impl.JUnit4TestSuiteChunkerImpl.run(JUnit4TestSuiteChunkerImpl.java:117)
>  at 
> org.powermock.modules.junit4.common.internal.impl.AbstractCommonPowerMockRunner.run(AbstractCommonPowerMockRunner.java:57)
>  at org.powermock.modules.junit4.PowerMockRunn

[jira] [Commented] (KAFKA-8063) Flaky Test WorkerTest#testConverterOverrides

2019-03-11 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16789925#comment-16789925
 ] 

Matthias J. Sax commented on KAFKA-8063:


Failed again: 
[https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/65/testReport/junit/org.apache.kafka.connect.runtime/WorkerTest/testConverterOverrides/]

> Flaky Test WorkerTest#testConverterOverrides
> 
>
> Key: KAFKA-8063
> URL: https://issues.apache.org/jira/browse/KAFKA-8063
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/20068/testReport/junit/org.apache.kafka.connect.runtime/WorkerTest/testConverterOverrides/]
> {quote}java.lang.AssertionError: Expectation failure on verify: 
> WorkerSourceTask.run(): expected: 1, actual: 1 at 
> org.easymock.internal.MocksControl.verify(MocksControl.java:242){quote}
> STDOUT
> {quote}[2019-03-07 02:28:25,482] (Test worker) ERROR Failed to start 
> connector test-connector (org.apache.kafka.connect.runtime.Worker:234) 
> org.apache.kafka.connect.errors.ConnectException: Failed to find Connector at 
> org.easymock.internal.MockInvocationHandler.invoke(MockInvocationHandler.java:46)
>  at 
> org.easymock.internal.ObjectMethodsFilter.invoke(ObjectMethodsFilter.java:101)
>  at 
> org.easymock.internal.ClassProxyFactory$MockMethodInterceptor.intercept(ClassProxyFactory.java:97)
>  at 
> org.apache.kafka.connect.runtime.isolation.Plugins$$EnhancerByCGLIB$$205db954.newConnector()
>  at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:226) 
> at 
> org.apache.kafka.connect.runtime.WorkerTest.testStartConnectorFailure(WorkerTest.java:256)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.internal.runners.TestMethod.invoke(TestMethod.java:68) at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:326)
>  at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.java:89) at 
> org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:97)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.executeTest(PowerMockJUnit44RunnerDelegateImpl.java:310)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTestInSuper(PowerMockJUnit47RunnerDelegateImpl.java:131)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.access$100(PowerMockJUnit47RunnerDelegateImpl.java:59)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner$TestExecutorStatement.evaluate(PowerMockJUnit47RunnerDelegateImpl.java:147)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.evaluateStatement(PowerMockJUnit47RunnerDelegateImpl.java:107)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTest(PowerMockJUnit47RunnerDelegateImpl.java:82)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runBeforesThenTestThenAfters(PowerMockJUnit44RunnerDelegateImpl.java:298)
>  at org.junit.internal.runners.MethodRoadie.runTest(MethodRoadie.java:87) at 
> org.junit.internal.runners.MethodRoadie.run(MethodRoadie.java:50) at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.invokeTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:218)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.runMethods(PowerMockJUnit44RunnerDelegateImpl.java:160)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$1.run(PowerMockJUnit44RunnerDelegateImpl.java:134)
>  at 
> org.junit.internal.runners.ClassRoadie.runUnprotected(ClassRoadie.java:34) at 
> org.junit.internal.runners.ClassRoadie.runProtected(ClassRoadie.java:44) at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.run(PowerMockJUnit44RunnerDelegateImpl.java:136)
>  at 
> org.powermock.modules.junit4.common.internal.impl.JUnit4TestSuiteChunkerImpl.run(JUnit4TestSuiteChunkerImpl.java

[jira] [Updated] (KAFKA-8063) Flaky Test WorkerTest#testConverterOverrides

2019-03-11 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8063:
---
Fix Version/s: 2.2.1

> Flaky Test WorkerTest#testConverterOverrides
> 
>
> Key: KAFKA-8063
> URL: https://issues.apache.org/jira/browse/KAFKA-8063
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect, unit tests
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> [https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/20068/testReport/junit/org.apache.kafka.connect.runtime/WorkerTest/testConverterOverrides/]
> {quote}java.lang.AssertionError: Expectation failure on verify: 
> WorkerSourceTask.run(): expected: 1, actual: 1 at 
> org.easymock.internal.MocksControl.verify(MocksControl.java:242){quote}
> STDOUT
> {quote}[2019-03-07 02:28:25,482] (Test worker) ERROR Failed to start 
> connector test-connector (org.apache.kafka.connect.runtime.Worker:234) 
> org.apache.kafka.connect.errors.ConnectException: Failed to find Connector at 
> org.easymock.internal.MockInvocationHandler.invoke(MockInvocationHandler.java:46)
>  at 
> org.easymock.internal.ObjectMethodsFilter.invoke(ObjectMethodsFilter.java:101)
>  at 
> org.easymock.internal.ClassProxyFactory$MockMethodInterceptor.intercept(ClassProxyFactory.java:97)
>  at 
> org.apache.kafka.connect.runtime.isolation.Plugins$$EnhancerByCGLIB$$205db954.newConnector()
>  at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:226) 
> at 
> org.apache.kafka.connect.runtime.WorkerTest.testStartConnectorFailure(WorkerTest.java:256)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.internal.runners.TestMethod.invoke(TestMethod.java:68) at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:326)
>  at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.java:89) at 
> org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:97)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.executeTest(PowerMockJUnit44RunnerDelegateImpl.java:310)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTestInSuper(PowerMockJUnit47RunnerDelegateImpl.java:131)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.access$100(PowerMockJUnit47RunnerDelegateImpl.java:59)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner$TestExecutorStatement.evaluate(PowerMockJUnit47RunnerDelegateImpl.java:147)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.evaluateStatement(PowerMockJUnit47RunnerDelegateImpl.java:107)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTest(PowerMockJUnit47RunnerDelegateImpl.java:82)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runBeforesThenTestThenAfters(PowerMockJUnit44RunnerDelegateImpl.java:298)
>  at org.junit.internal.runners.MethodRoadie.runTest(MethodRoadie.java:87) at 
> org.junit.internal.runners.MethodRoadie.run(MethodRoadie.java:50) at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.invokeTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:218)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.runMethods(PowerMockJUnit44RunnerDelegateImpl.java:160)
>  at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$1.run(PowerMockJUnit44RunnerDelegateImpl.java:134)
>  at 
> org.junit.internal.runners.ClassRoadie.runUnprotected(ClassRoadie.java:34) at 
> org.junit.internal.runners.ClassRoadie.runProtected(ClassRoadie.java:44) at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.run(PowerMockJUnit44RunnerDelegateImpl.java:136)
>  at 
> org.powermock.modules.junit4.common.internal.impl.JUnit4TestSuiteChunkerImpl.run(JUnit4TestSuiteChunkerImpl.java:117)
>  at 
> org.powermock.modules.junit4.common.internal.impl.AbstractCommonPowerMockRunner.run(AbstractCommonPowerMockRunner.java:57)
>  at org.powermock.modules.junit4.PowerMockR

[jira] [Commented] (KAFKA-7971) Producer in Streams environment

2019-03-11 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16789930#comment-16789930
 ] 

Matthias J. Sax commented on KAFKA-7971:


Thanks [~redguy666] – seems I copied the wrong link.

> Producer in Streams environment
> ---
>
> Key: KAFKA-7971
> URL: https://issues.apache.org/jira/browse/KAFKA-7971
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Maciej Lizewski
>Priority: Minor
>  Labels: newbie
>
> It would be nice to have producers that can emit messages to a topic just like any 
> producer, but that also have access to the local stores from the Streams 
> environment in Spring.
> Consider this case: I have an event-sourced ordering process like this:
> [EVENTS QUEUE] -> [MERGING PROCESS] -> [ORDERS CHANGELOG/KTABLE]
> The merging process uses the local store "opened orders" to easily apply new 
> changes.
> Now I want to implement a process that closes abandoned orders (orders that were 
> started, but have seen no change for too long and hang in the beginning 
> status). The easiest way is to periodically scan the "opened orders" store and 
> produce an "abandon event" for every order that meets the criteria. The only way 
> to do that now is to create a Transformer with a punctuator and connect its 
> output to [EVENTS QUEUE]. That is obvious, but the Transformer must also be 
> connected to some input stream, and those events must be dropped because we want 
> only the punctuator results. This causes unnecessary overhead in processing the 
> input messages (although they are just dropped) and it is not very elegant.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Issue Comment Deleted] (KAFKA-7971) Producer in Streams environment

2019-03-11 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-7971:
---
Comment: was deleted

(was: Cross posted as question on SO: 
[https://stackoverflow.com/questions/54796333/using-kafka-streams-just-as-a-state-store-in-a-kafka-consumer-app]

 )

> Producer in Streams environment
> ---
>
> Key: KAFKA-7971
> URL: https://issues.apache.org/jira/browse/KAFKA-7971
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Maciej Lizewski
>Priority: Minor
>  Labels: newbie
>
> It would be nice to have producers that can emit messages to a topic just like any 
> producer, but that also have access to the local stores from the Streams 
> environment in Spring.
> Consider this case: I have an event-sourced ordering process like this:
> [EVENTS QUEUE] -> [MERGING PROCESS] -> [ORDERS CHANGELOG/KTABLE]
> The merging process uses the local store "opened orders" to easily apply new 
> changes.
> Now I want to implement a process that closes abandoned orders (orders that were 
> started, but have seen no change for too long and hang in the beginning 
> status). The easiest way is to periodically scan the "opened orders" store and 
> produce an "abandon event" for every order that meets the criteria. The only way 
> to do that now is to create a Transformer with a punctuator and connect its 
> output to [EVENTS QUEUE]. That is obvious, but the Transformer must also be 
> connected to some input stream, and those events must be dropped because we want 
> only the punctuator results. This causes unnecessary overhead in processing the 
> input messages (although they are just dropped) and it is not very elegant.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8093) Fix JavaDoc markup

2019-03-11 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8093:
--

 Summary: Fix JavaDoc markup
 Key: KAFKA-8093
 URL: https://issues.apache.org/jira/browse/KAFKA-8093
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Reporter: Matthias J. Sax


Running `./gradlew install` gives the following warning
{code:java}
/Users/matthias/IdeaProjects/kafka/connect/api/src/main/java/org/apache/kafka/connect/source/SourceTask.java:87:
 warning - Tag @link: reference not found: java.nio.channels.Selector
/Users/matthias/IdeaProjects/kafka/connect/api/src/main/java/org/apache/kafka/connect/source/SourceTask.java:87:
 warning - Tag @link: reference not found: java.nio.channels.Selector#wakeup() 
wakeup()
/Users/matthias/IdeaProjects/kafka/connect/api/src/main/java/org/apache/kafka/connect/connector/ConnectRecord.java:34:
 warning - Tag @link: reference not found: 
org.apache.kafka.clients.producer.ProducerRecord ProducerRecord
/Users/matthias/IdeaProjects/kafka/connect/api/src/main/java/org/apache/kafka/connect/header/Headers.java:261:
 warning - @Header is an unknown tag.
/Users/matthias/IdeaProjects/kafka/connect/api/src/main/java/org/apache/kafka/connect/header/Headers.java:261:
 warning - @Header is an unknown tag.
/Users/matthias/IdeaProjects/kafka/connect/api/src/main/java/org/apache/kafka/connect/header/Headers.java:261:
 warning - @Header is an unknown tag.
/Users/matthias/IdeaProjects/kafka/connect/api/src/main/java/org/apache/kafka/connect/header/Headers.java:261:
 warning - @Header is an unknown tag.
/Users/matthias/IdeaProjects/kafka/connect/api/src/main/java/org/apache/kafka/connect/header/Headers.java:261:
 warning - @Header is an unknown tag.
/Users/matthias/IdeaProjects/kafka/connect/api/src/main/java/org/apache/kafka/connect/header/Headers.java:261:
 warning - @Header is an unknown tag.

{code}
Those should be fixed
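
The warnings fall into two groups: `{@link}` targets that the Connect javadoc run cannot 
resolve, and a bare @Header in prose that javadoc parses as an unknown block tag. A hedged 
sketch of the kind of markup change that would silence them (the actual fix for this ticket 
may differ):
{code:java}
/** Illustrative only; not necessarily the patch that ends up being merged. */
public class JavadocMarkupExample {

    /**
     * Blocks until data is available or until {@code java.nio.channels.Selector#wakeup()}
     * is invoked. Using an inline code tag avoids the "reference not found" warning that
     * an unresolvable link target produces.
     *
     * <p>Adds a {@literal @}Header to the record, replacing any existing header with the
     * same key; escaping the at-sign stops javadoc from reporting an unknown tag.
     */
    public void poll() {
    }
}
{code}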



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8094) Iterating over cache with get(key) is inefficient

2019-03-11 Thread Sophie Blee-Goldman (JIRA)
Sophie Blee-Goldman created KAFKA-8094:
--

 Summary: Iterating over cache with get(key) is inefficient 
 Key: KAFKA-8094
 URL: https://issues.apache.org/jira/browse/KAFKA-8094
 Project: Kafka
  Issue Type: Improvement
Reporter: Sophie Blee-Goldman


Currently, range queries in the caching layer are implemented by creating an 
iterator over the subset of keys in the range, and calling get() on the 
underlying TreeMap for each key. While this protects against 
ConcurrentModificationException, we can improve performance by replacing the 
TreeMap with a concurrent data structure such as ConcurrentSkipListMap and then 
just iterating over a subMap.
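
A minimal sketch of the two access patterns (illustrative only; the key and value types in 
the real ThreadCache differ, and the names here are made up):
{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.ConcurrentNavigableMap;
import java.util.concurrent.ConcurrentSkipListMap;

public class RangeQuerySketch {

    // Current style: snapshot the keys in range, then call get() for each of them,
    // paying a second O(log N) lookup per entry (but avoiding ConcurrentModificationException).
    static List<byte[]> rangeViaGet(TreeMap<String, byte[]> cache, String from, String to) {
        List<String> keys = new ArrayList<>(cache.navigableKeySet().subSet(from, true, to, true));
        List<byte[]> values = new ArrayList<>(keys.size());
        for (String key : keys) {
            values.add(cache.get(key)); // extra lookup for every key in the range
        }
        return values;
    }

    // Proposed style: iterate the subMap view of a concurrent map directly;
    // one traversal, no per-key get(), no ConcurrentModificationException.
    static List<byte[]> rangeViaSubMap(ConcurrentSkipListMap<String, byte[]> cache, String from, String to) {
        ConcurrentNavigableMap<String, byte[]> view = cache.subMap(from, true, to, true);
        List<byte[]> values = new ArrayList<>();
        for (Map.Entry<String, byte[]> entry : view.entrySet()) {
            values.add(entry.getValue());
        }
        return values;
    }
}
{code}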



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8040) Streams needs to handle timeout in initTransactions

2019-03-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790015#comment-16790015
 ] 

ASF GitHub Bot commented on KAFKA-8040:
---

bbejeck commented on pull request #6416: KAFKA-8040: Streams handle 
initTransactions timeout
URL: https://github.com/apache/kafka/pull/6416
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Streams needs to handle timeout in initTransactions
> ---
>
> Key: KAFKA-8040
> URL: https://issues.apache.org/jira/browse/KAFKA-8040
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 2.0.0, 2.0.1, 2.1.0, 2.2.0, 2.1.1
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Critical
> Fix For: 2.2.0, 2.0.2, 2.1.2
>
>
> Following on KAFKA-6446, Streams needs to handle the new behavior.
> `initTxn` can throw TimeoutException now: default `MAX_BLOCK_MS_CONFIG` in 
> producer is 60 seconds, so I ([~guozhang]) think just wrapping it as 
> StreamsException should be reasonable, similar to what we do for 
> `producer#send`'s TimeoutException 
> ([https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/processor/internals/RecordCollectorImpl.java#L220-L225]
>  ).
>  
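
For illustration, a minimal sketch of the wrapping suggested above (assumed handling only, 
not the merged patch; the task id parameter is made up):
{code:java}
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.common.errors.TimeoutException;
import org.apache.kafka.streams.errors.StreamsException;

public class InitTransactionsSketch {

    // Wrap a TimeoutException from initTransactions() into a StreamsException,
    // mirroring the handling referenced above for producer#send.
    static void initTransactions(Producer<byte[], byte[]> producer, String taskId) {
        try {
            producer.initTransactions();
        } catch (TimeoutException e) {
            throw new StreamsException("Timed out while initializing transactions for task "
                    + taskId + "; consider increasing max.block.ms on the producer", e);
        }
    }
}
{code}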



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7157) Connect TimestampConverter SMT doesn't handle null values

2019-03-11 Thread Jeff Beagley (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790069#comment-16790069
 ] 

Jeff Beagley commented on KAFKA-7157:
-

I believe, through writing my own SMT, I have found a workaround (will post to 
github shortly). My transform accepts all date fields and immediately sets the 
schema of each field to OPTIONAL_STRING_SCHEMA, and then I format appropriately 
if the value is other than null. 
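
For reference, a minimal, schemaless-only sketch in the spirit of that workaround (the class 
name, config keys, and field handling are illustrative, not the actual transform; it assumes 
the raw field value is a java.util.Date or an epoch-millis Long):
{code:java}
import java.text.SimpleDateFormat;
import java.util.HashMap;
import java.util.Map;
import java.util.TimeZone;

import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.connect.connector.ConnectRecord;
import org.apache.kafka.connect.transforms.Transformation;

public class NullSafeDateToString<R extends ConnectRecord<R>> implements Transformation<R> {

    private String fieldName;
    private SimpleDateFormat format;

    @Override
    public void configure(Map<String, ?> configs) {
        // Hypothetical config keys; both are assumed to be present.
        fieldName = (String) configs.get("field");
        format = new SimpleDateFormat((String) configs.get("format"));
        format.setTimeZone(TimeZone.getTimeZone("UTC"));
    }

    @Override
    @SuppressWarnings("unchecked")
    public R apply(R record) {
        if (record.value() == null) {
            return record; // the crux of KAFKA-7157: pass null values through untouched
        }
        Map<String, Object> value = new HashMap<>((Map<String, Object>) record.value());
        Object raw = value.get(fieldName);
        value.put(fieldName, raw == null ? null : format.format(raw)); // a null field stays null
        return record.newRecord(record.topic(), record.kafkaPartition(), record.keySchema(),
                record.key(), null, value, record.timestamp());
    }

    @Override
    public ConfigDef config() {
        return new ConfigDef();
    }

    @Override
    public void close() {
    }
}
{code}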

> Connect TimestampConverter SMT doesn't handle null values
> -
>
> Key: KAFKA-7157
> URL: https://issues.apache.org/jira/browse/KAFKA-7157
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.2.0
>Reporter: Randall Hauch
>Assignee: Valeria Vasylieva
>Priority: Major
>
> TimestampConverter SMT is not able to handle null values (in any version), 
> so it's always trying to apply the transformation to the value. Instead, it 
> needs to check for null and use the default value for the new schema's field.
> {noformat}
> [2018-07-03 02:31:52,490] ERROR Task MySourceConnector-2 threw an uncaught 
> and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask) 
> java.lang.NullPointerException 
> at 
> org.apache.kafka.connect.transforms.TimestampConverter$2.toRaw(TimestampConverter.java:137)
>  
> at 
> org.apache.kafka.connect.transforms.TimestampConverter.convertTimestamp(TimestampConverter.java:440)
>  
> at 
> org.apache.kafka.connect.transforms.TimestampConverter.applyValueWithSchema(TimestampConverter.java:368)
>  
> at 
> org.apache.kafka.connect.transforms.TimestampConverter.applyWithSchema(TimestampConverter.java:358)
>  
> at 
> org.apache.kafka.connect.transforms.TimestampConverter.apply(TimestampConverter.java:275)
>  
> at 
> org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:38)
>  
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:435)
>  
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:264) 
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:182)
>  
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:150)
>  
> at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:146) 
> at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:190) 
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  
> at java.lang.Thread.run(Thread.java:748) 
> [2018-07-03 02:31:52,491] ERROR Task is being killed and will not recover 
> until manually restarted (org.apache.kafka.connect.runtime.WorkerTask) 
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8094) Iterating over cache with get(key) is inefficient

2019-03-11 Thread Sophie Blee-Goldman (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790070#comment-16790070
 ] 

Sophie Blee-Goldman commented on KAFKA-8094:


After some investigation, the solution is not as straightforward as simply 
replacing TreeMap -> ConcurrentSkipListMap and using a subMap iterator. 
Iterators are only weakly consistent, and changes to the underlying map are not 
guaranteed to be reflected in the iterator. Implementing it this way may cause, 
for example, evicted entries to still be returned by the iterator (see 
ThreadCacheTest#shouldSkipEntriesWhereValueHasBeenEvictedFromCache).

> Iterating over cache with get(key) is inefficient 
> --
>
> Key: KAFKA-8094
> URL: https://issues.apache.org/jira/browse/KAFKA-8094
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Sophie Blee-Goldman
>Priority: Major
>  Labels: streams
>
> Currently, range queries in the caching layer are implemented by creating an 
> iterator over the subset of keys in the range, and calling get() on the 
> underlying TreeMap for each key. While this protects against 
> ConcurrentModificationException, we can improve performance by replacing the 
> TreeMap with a concurrent data structure such as ConcurrentSkipListMap and 
> then just iterating over a subMap.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8094) Iterating over cache with get(key) is inefficient

2019-03-11 Thread Sophie Blee-Goldman (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790080#comment-16790080
 ] 

Sophie Blee-Goldman commented on KAFKA-8094:


That said, maybe this is another occasion to question whether a returned 
iterator *should* reflect the current state of the cache/store, or the state at 
the time it was created (i.e. when it was queried), as a snapshot.

Personally I still believe the snapshot is more appropriate, and if it allows 
us to make this improvement I am all the more in favor of it (of course this 
might not be a *huge* improvement, as it only saves us a factor of log(N)). 
WDYT, [~guozhang]?
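
A sketch of what the snapshot semantics could look like with the skip list (illustrative 
only): materialize the range eagerly when the iterator is created, so later evictions are 
not reflected in what it returns.
{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;

public class SnapshotRangeSketch {

    // Copy the subMap entries at query time; the returned list is a point-in-time
    // snapshot, so entries evicted (or added) afterwards do not affect it.
    static <V> List<Map.Entry<String, V>> snapshotRange(ConcurrentSkipListMap<String, V> cache,
                                                        String from, String to) {
        return new ArrayList<>(cache.subMap(from, true, to, true).entrySet());
    }
}
{code}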

> Iterating over cache with get(key) is inefficient 
> --
>
> Key: KAFKA-8094
> URL: https://issues.apache.org/jira/browse/KAFKA-8094
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Sophie Blee-Goldman
>Priority: Major
>  Labels: streams
>
> Currently, range queries in the caching layer are implemented by creating an 
> iterator over the subset of keys in the range, and calling get() on the 
> underlying TreeMap for each key. While this protects against 
> ConcurrentModificationException, we can improve performance by replacing the 
> TreeMap with a concurrent data structure such as ConcurrentSkipListMap and 
> then just iterating over a subMap.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8020) Consider changing design of ThreadCache

2019-03-11 Thread Richard Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790171#comment-16790171
 ] 

Richard Yu commented on KAFKA-8020:
---

Hi [~mjsax]

Well, elements which have been inserted usually have a certain lifetime, right? 
So we want to take advantage of that. While elements in a cache are within 
their useful lifespan, they might potentially be queried. Some entries might be 
queried more than others, but once an entry's lifespan expires, it is of no 
use. Let's consider an example: an entry could be queried right before its 
lifespan expires. This moves it to the front of the LRU cache, yet its useful 
lifespan is exceeded shortly after. That basically means we could have a bunch 
of dead entries, which are of no use, sitting at the head of the queue, and we 
would have to wait for them to cycle back to the tail before they are evicted. 
In contrast, time-aware LRU caches eliminate this problem: entries, no matter 
their location in the queue, are evicted efficiently once their lifespan 
expires (hence the need for hierarchical timing wheels, which are the most 
efficient way of keeping track of when an element has run out of time).

In terms of a KIP, I don't think this requires one. There shouldn't be any 
changes to public API, and we are offering a performance enhancement, so a 
configuration which chooses between different caching policies shouldn't be 
necessary.
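
To make the idea concrete, a toy sketch of a time-aware LRU (illustrative only, not a 
proposal for the actual ThreadCache internals): entries carry an expiry deadline, lookups 
treat expired entries as misses, and a sweep evicts them even if they sit at the head of 
the LRU order; a hierarchical timing wheel would replace the linear sweep.
{code:java}
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

public class TimeAwareLruSketch<K, V> {

    private static class Entry<V> {
        final V value;
        final long expiresAtMs;
        Entry(V value, long expiresAtMs) { this.value = value; this.expiresAtMs = expiresAtMs; }
    }

    // Access-ordered map gives classic LRU behaviour; expiry is layered on top.
    private final LinkedHashMap<K, Entry<V>> map = new LinkedHashMap<>(16, 0.75f, true);

    public void put(K key, V value, long ttlMs) {
        map.put(key, new Entry<>(value, System.currentTimeMillis() + ttlMs));
    }

    public V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null || e.expiresAtMs <= System.currentTimeMillis()) {
            return null; // expired entries are treated as misses
        }
        return e.value;
    }

    // Linear sweep for clarity; a hierarchical timing wheel would locate expired
    // entries without scanning the whole map.
    public void evictExpired() {
        long now = System.currentTimeMillis();
        Iterator<Map.Entry<K, Entry<V>>> it = map.entrySet().iterator();
        while (it.hasNext()) {
            if (it.next().getValue().expiresAtMs <= now) {
                it.remove();
            }
        }
    }
}
{code}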

> Consider changing design of ThreadCache 
> 
>
> Key: KAFKA-8020
> URL: https://issues.apache.org/jira/browse/KAFKA-8020
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Richard Yu
>Priority: Major
>
> In distributed systems, time-aware LRU caches offer an eviction policy superior 
> to traditional LRU models, yielding more cache hits than 
> misses. In this new policy, if an item is stored beyond its useful lifespan, 
> then it is removed. For example, in {{CachingWindowStore}}, a window usually 
> is of limited size. After it expires, it would no longer be queried for, but 
> it potentially could stay in the ThreadCache for an unnecessary amount of 
> time if it is not evicted (i.e. the number of entries being inserted is few). 
> For better allocation of memory, it would be better if we implement a 
> time-aware LRU Cache which takes into account the lifespan of an entry and 
> removes it once it has expired.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8020) Consider changing design of ThreadCache

2019-03-11 Thread Richard Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790173#comment-16790173
 ] 

Richard Yu commented on KAFKA-8020:
---

Oh, about implementing this policy for all caches: I'm not too sure about that. 
I was only planning on implementing this policy for ThreadCache, since I'm 
somewhat familiar with this part of Kafka Streams. Other caches would have to 
wait, I guess, since it's out of the scope of this particular issue. (I think)

> Consider changing design of ThreadCache 
> 
>
> Key: KAFKA-8020
> URL: https://issues.apache.org/jira/browse/KAFKA-8020
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Richard Yu
>Priority: Major
>
> In distributed systems, time-aware LRU caches offer an eviction policy superior 
> to traditional LRU models, yielding more cache hits than 
> misses. In this new policy, if an item is stored beyond its useful lifespan, 
> then it is removed. For example, in {{CachingWindowStore}}, a window usually 
> is of limited size. After it expires, it would no longer be queried for, but 
> it potentially could stay in the ThreadCache for an unnecessary amount of 
> time if it is not evicted (i.e. the number of entries being inserted is few). 
> For better allocation of memory, it would be better if we implement a 
> time-aware LRU Cache which takes into account the lifespan of an entry and 
> removes it once it has expired.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-8020) Consider changing design of ThreadCache

2019-03-11 Thread Richard Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790173#comment-16790173
 ] 

Richard Yu edited comment on KAFKA-8020 at 3/12/19 3:09 AM:


Oh, about implementing this policy for all caches: I'm not too sure about that. 
I was only planning on implementing this policy for ThreadCache, since I'm 
somewhat familiar with this part of Kafka Streams. 


was (Author: yohan123):
Oh about implementing this policy for all caches. I'm not too sure about that. 
I was only planning on implementing this policy for ThreadCache, since I'm 
somewhat familiar with this part of Kafka Streams. Other caches would've to 
wait I guess, since its out of the scope of this particular issue. (I think)

> Consider changing design of ThreadCache 
> 
>
> Key: KAFKA-8020
> URL: https://issues.apache.org/jira/browse/KAFKA-8020
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Richard Yu
>Priority: Major
>
> In distributed systems, time-aware LRU caches offer an eviction policy superior 
> to traditional LRU models, yielding more cache hits than 
> misses. In this new policy, if an item is stored beyond its useful lifespan, 
> then it is removed. For example, in {{CachingWindowStore}}, a window usually 
> is of limited size. After it expires, it would no longer be queried for, but 
> it potentially could stay in the ThreadCache for an unnecessary amount of 
> time if it is not evicted (i.e. the number of entries being inserted is few). 
> For better allocation of memory, it would be better if we implement a 
> time-aware LRU Cache which takes into account the lifespan of an entry and 
> removes it once it has expired.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7937) Flaky Test ResetConsumerGroupOffsetTest.testResetOffsetsNotExistingGroup

2019-03-11 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790234#comment-16790234
 ] 

Matthias J. Sax commented on KAFKA-7937:


[https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/20240/testReport/junit/kafka.admin/ResetConsumerGroupOffsetTest/testResetOffsetsNotExistingGroup/]

> Flaky Test ResetConsumerGroupOffsetTest.testResetOffsetsNotExistingGroup
> 
>
> Key: KAFKA-7937
> URL: https://issues.apache.org/jira/browse/KAFKA-7937
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, clients, unit tests
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Gwen Shapira
>Priority: Critical
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for `2.2` release, I create tickets for all 
> observed test failures.
> https://builds.apache.org/blue/organizations/jenkins/kafka-2.2-jdk8/detail/kafka-2.2-jdk8/19/pipeline
> {quote}kafka.admin.ResetConsumerGroupOffsetTest > 
> testResetOffsetsNotExistingGroup FAILED 
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.CoordinatorNotAvailableException: The 
> coordinator is not available. at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
>  at 
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService.resetOffsets(ConsumerGroupCommand.scala:306)
>  at 
> kafka.admin.ResetConsumerGroupOffsetTest.testResetOffsetsNotExistingGroup(ResetConsumerGroupOffsetTest.scala:89)
>  Caused by: org.apache.kafka.common.errors.CoordinatorNotAvailableException: 
> The coordinator is not available.{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7965) Flaky Test ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup

2019-03-11 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790233#comment-16790233
 ] 

Matthias J. Sax commented on KAFKA-7965:


Failed again: 
[https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3122/testReport/junit/kafka.api/ConsumerBounceTest/testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup/]

> Flaky Test 
> ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup
> 
>
> Key: KAFKA-7965
> URL: https://issues.apache.org/jira/browse/KAFKA-7965
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, unit tests
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Stanislav Kozlovski
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/21/]
> {quote}java.lang.AssertionError: Received 0, expected at least 68 at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> kafka.api.ConsumerBounceTest.receiveAndCommit(ConsumerBounceTest.scala:557) 
> at 
> kafka.api.ConsumerBounceTest.$anonfun$testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup$1(ConsumerBounceTest.scala:320)
>  at 
> kafka.api.ConsumerBounceTest.$anonfun$testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup$1$adapted(ConsumerBounceTest.scala:319)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) 
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
> kafka.api.ConsumerBounceTest.testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup(ConsumerBounceTest.scala:319){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7273) Converters should have access to headers.

2019-03-11 Thread Yaroslav Tkachenko (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790244#comment-16790244
 ] 

Yaroslav Tkachenko commented on KAFKA-7273:
---

Created KIP: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-440%3A+Extend+Connect+Converter+to+support+headers

> Converters should have access to headers.
> -
>
> Key: KAFKA-7273
> URL: https://issues.apache.org/jira/browse/KAFKA-7273
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Jeremy Custenborder
>Assignee: Jeremy Custenborder
>Priority: Major
>
> I found myself wanting to build a converter that stored additional type 
> information within headers. The converter interface does not allow a 
> developer to access the headers in a Converter. I'm not suggesting that we 
> change the method for serializing them, rather that 
> *org.apache.kafka.connect.header.Headers* be passed in for *fromConnectData* 
> and *toConnectData*. For example something like this.
> {code:java}
> import org.apache.kafka.connect.data.Schema;
> import org.apache.kafka.connect.data.SchemaAndValue;
> import org.apache.kafka.connect.header.Headers;
> import org.apache.kafka.connect.storage.Converter;
> public interface Converter {
>   default byte[] fromConnectData(String topic, Headers headers, Schema 
> schema, Object object) {
> return fromConnectData(topic, schema, object);
>   }
>   default SchemaAndValue toConnectData(String topic, Headers headers, byte[] 
> payload) {
> return toConnectData(topic, payload);
>   }
>   void configure(Map var1, boolean var2);
>   byte[] fromConnectData(String var1, Schema var2, Object var3);
>   SchemaAndValue toConnectData(String var1, byte[] var2);
> }
> {code}
> This would be a similar approach to what was already done with 
> ExtendedDeserializer and ExtendedSerializer in the Kafka client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7965) Flaky Test ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup

2019-03-11 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790251#comment-16790251
 ] 

Matthias J. Sax commented on KAFKA-7965:


Failed with new error message: 
[https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk11/detail/kafka-trunk-jdk11/363/tests]
{quote}org.apache.kafka.common.errors.GroupMaxSizeReachedException: Consumer 
group group2 already has the configured maximum number of members.{quote}
STDOUT
{quote}[2019-03-11 22:16:52,736] ERROR [ReplicaFetcher replicaId=2, leaderId=1, 
fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-11 22:16:52,736] ERROR [ReplicaFetcher replicaId=0, leaderId=1, 
fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-11 22:16:52,927] ERROR [ReplicaFetcher replicaId=0, leaderId=1, 
fetcherId=0] Error for partition topic-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-11 22:16:52,927] ERROR [ReplicaFetcher replicaId=2, leaderId=1, 
fetcherId=0] Error for partition topic-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-11 22:16:53,272] ERROR [ReplicaFetcher replicaId=0, leaderId=1, 
fetcherId=0] Error for partition closetest-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-11 22:16:53,273] ERROR [ReplicaFetcher replicaId=0, leaderId=1, 
fetcherId=0] Error for partition closetest-3 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-11 22:16:53,273] ERROR [ReplicaFetcher replicaId=0, leaderId=1, 
fetcherId=0] Error for partition closetest-6 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-11 22:16:53,273] ERROR [ReplicaFetcher replicaId=0, leaderId=1, 
fetcherId=0] Error for partition closetest-9 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-11 22:16:53,275] ERROR [ReplicaFetcher replicaId=0, leaderId=2, 
fetcherId=0] Error for partition closetest-7 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-11 22:16:53,275] ERROR [ReplicaFetcher replicaId=0, leaderId=2, 
fetcherId=0] Error for partition closetest-1 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-11 22:16:53,275] ERROR [ReplicaFetcher replicaId=0, leaderId=2, 
fetcherId=0] Error for partition closetest-4 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-11 22:16:53,317] ERROR [ReplicaFetcher replicaId=2, leaderId=1, 
fetcherId=0] Error for partition closetest-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-11 22:16:53,317] ERROR [ReplicaFetcher replicaId=2, leaderId=1, 
fetcherId=0] Error for partition closetest-3 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-11 22:16:53,317] ERROR [ReplicaFetcher replicaId=2, leaderId=1, 
fetcherId=0] Error for partition closetest-6 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-11 22:16:53,317] ERROR [ReplicaFetcher replicaId=2, leaderId=1, 
fetcherId=0] Error for partition closetest-9 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-11 22:17:06,049] ERROR [ReplicaFetcher replicaId=2, leaderId=0, 
fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
(kafka.s

[jira] [Updated] (KAFKA-8093) Fix JavaDoc markup

2019-03-11 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8093:
---
Description: 
Running `./gradlew install` gives the following warning
{code:java}
/Users/matthias/IdeaProjects/kafka/connect/api/src/main/java/org/apache/kafka/connect/source/SourceTask.java:87:
 warning - Tag @link: reference not found: java.nio.channels.Selector
/Users/matthias/IdeaProjects/kafka/connect/api/src/main/java/org/apache/kafka/connect/source/SourceTask.java:87:
 warning - Tag @link: reference not found: java.nio.channels.Selector#wakeup() 
wakeup()
/Users/matthias/IdeaProjects/kafka/connect/api/src/main/java/org/apache/kafka/connect/connector/ConnectRecord.java:34:
 warning - Tag @link: reference not found: 
org.apache.kafka.clients.producer.ProducerRecord ProducerRecord
/Users/matthias/IdeaProjects/kafka/connect/api/src/main/java/org/apache/kafka/connect/header/Headers.java:261:
 warning - @Header is an unknown tag.
/Users/matthias/IdeaProjects/kafka/connect/api/src/main/java/org/apache/kafka/connect/header/Headers.java:261:
 warning - @Header is an unknown tag.
/Users/matthias/IdeaProjects/kafka/connect/api/src/main/java/org/apache/kafka/connect/header/Headers.java:261:
 warning - @Header is an unknown tag.
/Users/matthias/IdeaProjects/kafka/connect/api/src/main/java/org/apache/kafka/connect/header/Headers.java:261:
 warning - @Header is an unknown tag.
/Users/matthias/IdeaProjects/kafka/connect/api/src/main/java/org/apache/kafka/connect/header/Headers.java:261:
 warning - @Header is an unknown tag.
/Users/matthias/IdeaProjects/kafka/connect/api/src/main/java/org/apache/kafka/connect/header/Headers.java:261:
 warning - @Header is an unknown tag.

{code}
Those should be fixed

  was:
Running `./gradlew install` give the following warning
{code:java}
/Users/matthias/IdeaProjects/kafka/connect/api/src/main/java/org/apache/kafka/connect/source/SourceTask.java:87:
 warning - Tag @link: reference not found: java.nio.channels.Selector
/Users/matthias/IdeaProjects/kafka/connect/api/src/main/java/org/apache/kafka/connect/source/SourceTask.java:87:
 warning - Tag @link: reference not found: java.nio.channels.Selector#wakeup() 
wakeup()
/Users/matthias/IdeaProjects/kafka/connect/api/src/main/java/org/apache/kafka/connect/connector/ConnectRecord.java:34:
 warning - Tag @link: reference not found: 
org.apache.kafka.clients.producer.ProducerRecord ProducerRecord
/Users/matthias/IdeaProjects/kafka/connect/api/src/main/java/org/apache/kafka/connect/header/Headers.java:261:
 warning - @Header is an unknown tag.
/Users/matthias/IdeaProjects/kafka/connect/api/src/main/java/org/apache/kafka/connect/header/Headers.java:261:
 warning - @Header is an unknown tag.
/Users/matthias/IdeaProjects/kafka/connect/api/src/main/java/org/apache/kafka/connect/header/Headers.java:261:
 warning - @Header is an unknown tag.
/Users/matthias/IdeaProjects/kafka/connect/api/src/main/java/org/apache/kafka/connect/header/Headers.java:261:
 warning - @Header is an unknown tag.
/Users/matthias/IdeaProjects/kafka/connect/api/src/main/java/org/apache/kafka/connect/header/Headers.java:261:
 warning - @Header is an unknown tag.
/Users/matthias/IdeaProjects/kafka/connect/api/src/main/java/org/apache/kafka/connect/header/Headers.java:261:
 warning - @Header is an unknown tag.

{code}
Those should be fixed


> Fix JavaDoc markup
> --
>
> Key: KAFKA-8093
> URL: https://issues.apache.org/jira/browse/KAFKA-8093
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Matthias J. Sax
>Priority: Trivial
>
> Running `./gradlew install` gives the following warning
> {code:java}
> /Users/matthias/IdeaProjects/kafka/connect/api/src/main/java/org/apache/kafka/connect/source/SourceTask.java:87:
>  warning - Tag @link: reference not found: java.nio.channels.Selector
> /Users/matthias/IdeaProjects/kafka/connect/api/src/main/java/org/apache/kafka/connect/source/SourceTask.java:87:
>  warning - Tag @link: reference not found: 
> java.nio.channels.Selector#wakeup() wakeup()
> /Users/matthias/IdeaProjects/kafka/connect/api/src/main/java/org/apache/kafka/connect/connector/ConnectRecord.java:34:
>  warning - Tag @link: reference not found: 
> org.apache.kafka.clients.producer.ProducerRecord ProducerRecord
> /Users/matthias/IdeaProjects/kafka/connect/api/src/main/java/org/apache/kafka/connect/header/Headers.java:261:
>  warning - @Header is an unknown tag.
> /Users/matthias/IdeaProjects/kafka/connect/api/src/main/java/org/apache/kafka/connect/header/Headers.java:261:
>  warning - @Header is an unknown tag.
> /Users/matthias/IdeaProjects/kafka/connect/api/src/main/java/org/apache/kafka/connect/header/Headers.java:261:
>  warning - @Header is an unknown tag.
> /Users/matthias/IdeaProjects/kafka/connect/api/src/main/java/org/apache/kafka/connect/he

[jira] [Updated] (KAFKA-8094) Iterating over cache with get(key) is inefficient

2019-03-11 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8094:
---
Component/s: streams

> Iterating over cache with get(key) is inefficient 
> --
>
> Key: KAFKA-8094
> URL: https://issues.apache.org/jira/browse/KAFKA-8094
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Sophie Blee-Goldman
>Priority: Major
>  Labels: streams
>
> Currently, range queries in the caching layer are implemented by creating an 
> iterator over the subset of keys in the range, and calling get() on the 
> underlying TreeMap for each key. While this protects against 
> ConcurrentModificationException, we can improve performance by replacing the 
> TreeMap with a concurrent data structure such as ConcurrentSkipListMap and 
> then just iterating over a subMap.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8020) Consider changing design of ThreadCache

2019-03-11 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790263#comment-16790263
 ] 

Matthias J. Sax commented on KAFKA-8020:


I think I understand your intention better now. However, I am wondering whether 
you have any data that backs your claim? It's tricky to rewrite the caches, and 
we should only do it if we get a reasonable performance improvement (otherwise, 
we risk introducing bugs with no real benefit).
{quote}elements which have been inserted usually have a certain lifetime
{quote}
What would this lifetime be? Do you refer to windowed and session store 
retention time? KeyValue stores don't have a retention time though.
{quote}In terms of a KIP, I don't think this requires one. There shouldn't 
be any changes to public API, and we are offering a performance enhancement, so 
a configuration which chooses between different caching policies shouldn't be 
necessary. 

Oh about implementing this policy for all caches. I'm not too sure about that. 
I was only planning on implementing this policy for ThreadCache, since I'm 
somewhat familiar with this part of Kafka Streams. 
{quote}
I see. If you target ThreadCache, I agree that a KIP is not required. We need 
to evaluate whether there is a measurable performance improvement, though, 
before we would merge this change.
 
 

> Consider changing design of ThreadCache 
> 
>
> Key: KAFKA-8020
> URL: https://issues.apache.org/jira/browse/KAFKA-8020
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Richard Yu
>Priority: Major
>
> In distributed systems, time-aware LRU caches offer an eviction policy superior 
> to traditional LRU models, yielding more cache hits than 
> misses. In this new policy, if an item is stored beyond its useful lifespan, 
> then it is removed. For example, in {{CachingWindowStore}}, a window usually 
> is of limited size. After it expires, it would no longer be queried for, but 
> it potentially could stay in the ThreadCache for an unnecessary amount of 
> time if it is not evicted (i.e. the number of entries being inserted is few). 
> For better allocation of memory, it would be better if we implement a 
> time-aware LRU Cache which takes into account the lifespan of an entry and 
> removes it once it has expired.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7801) TopicCommand should not be able to alter transaction topic partition count

2019-03-11 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7801.
--
   Resolution: Fixed
Fix Version/s: 2.3.0

Issue resolved by pull request 6109
[https://github.com/apache/kafka/pull/6109]

> TopicCommand should not be able to alter transaction topic partition count
> --
>
> Key: KAFKA-7801
> URL: https://issues.apache.org/jira/browse/KAFKA-7801
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 2.1.0
>Reporter: huxihx
>Assignee: huxihx
>Priority: Major
> Fix For: 2.3.0
>
>
> To keep aligned with the way it handles the offset topic, TopicCommand should 
> not be able to alter the transaction topic's partition count.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7801) TopicCommand should not be able to alter transaction topic partition count

2019-03-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790262#comment-16790262
 ] 

ASF GitHub Bot commented on KAFKA-7801:
---

omkreddy commented on pull request #6109: KAFKA-7801: TopicCommand should not 
be able to alter transaction topic partition count
URL: https://github.com/apache/kafka/pull/6109
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> TopicCommand should not be able to alter transaction topic partition count
> --
>
> Key: KAFKA-7801
> URL: https://issues.apache.org/jira/browse/KAFKA-7801
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 2.1.0
>Reporter: huxihx
>Assignee: huxihx
>Priority: Major
> Fix For: 2.3.0
>
>
> To keep aligned with the way it handles the offset topic, TopicCommand should 
> not be able to alter the transaction topic's partition count.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)