[jira] [Commented] (KAFKA-7950) Kafka tools GetOffsetShell -time description

2019-03-02 Thread Kartik (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782384#comment-16782384
 ] 

Kartik commented on KAFKA-7950:
---

Added a note to the description telling the user that no offset is returned 
if the timestamp provided is greater than the most recently committed record timestamp.
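
The same ListOffsets semantics can be observed from a plain consumer through 
{{offsetsForTimes}}; below is a minimal sketch, assuming a broker on 
localhost:9092 and a hypothetical topic named "my-topic":
{code:java}
import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.TopicPartition;

public class OffsetForTimeExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 0); // hypothetical topic
            // A timestamp later than any record in the partition.
            long futureTimestamp = System.currentTimeMillis() + 3_600_000L;

            Map<TopicPartition, OffsetAndTimestamp> result =
                    consumer.offsetsForTimes(Collections.singletonMap(tp, futureTimestamp));

            // When the timestamp is greater than the latest record's timestamp,
            // the lookup yields null for that partition, i.e. no offset is returned.
            OffsetAndTimestamp offset = result.get(tp);
            System.out.println(offset == null ? "no offset returned" : "offset " + offset.offset());
        }
    }
}
{code}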

> Kafka tools GetOffsetShell -time description 
> -
>
> Key: KAFKA-7950
> URL: https://issues.apache.org/jira/browse/KAFKA-7950
> Project: Kafka
>  Issue Type: Wish
>  Components: tools
>Affects Versions: 2.1.0
>Reporter: Kartik
>Assignee: Kartik
>Priority: Trivial
>
> In the Kafka GetOffsetShell tool, the --time description should explain what 
> happens when the timestamp value given is greater than the most recently 
> committed timestamp.
>  
> Expected: "If the timestamp value provided is greater than the most recently 
> committed timestamp, then no offset is returned."
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7950) Kafka tools GetOffsetShell -time description

2019-03-02 Thread Kartik (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782385#comment-16782385
 ] 

Kartik commented on KAFKA-7950:
---

Ref: https://issues.apache.org/jira/browse/KAFKA-7794

> Kafka tools GetOffsetShell -time description 
> -
>
> Key: KAFKA-7950
> URL: https://issues.apache.org/jira/browse/KAFKA-7950
> Project: Kafka
>  Issue Type: Wish
>  Components: tools
>Affects Versions: 2.1.0
>Reporter: Kartik
>Assignee: Kartik
>Priority: Trivial
>
> In the Kafka GetOffsetShell tool, the --time description should explain what 
> happens when the timestamp value given is greater than the most recently 
> committed timestamp.
>  
> Expected: "If the timestamp value provided is greater than the most recently 
> committed timestamp, then no offset is returned."
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7950) Kafka tools GetOffsetShell -time description

2019-03-02 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782387#comment-16782387
 ] 

ASF GitHub Bot commented on KAFKA-7950:
---

Kartikvk1996 commented on pull request #6357: KAFKA-7950: Kafka tools 
GetOffsetShell -time description
URL: https://github.com/apache/kafka/pull/6357
 
 
   Added additional description for the "time" parameter of GetOffsetShell, 
appending "No offset is returned if the timestamp provided is greater than the 
most recently committed record timestamp." to the option's help text.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Kafka tools GetOffsetShell -time description 
> -
>
> Key: KAFKA-7950
> URL: https://issues.apache.org/jira/browse/KAFKA-7950
> Project: Kafka
>  Issue Type: Wish
>  Components: tools
>Affects Versions: 2.1.0
>Reporter: Kartik
>Assignee: Kartik
>Priority: Trivial
>
> In the Kafka GetOffsetShell tool, the --time description should explain what 
> happens when the timestamp value given is greater than the most recently 
> committed timestamp.
>  
> Expected: "If the timestamp value provided is greater than the most recently 
> committed timestamp, then no offset is returned."
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7976) Flaky Test DynamicBrokerReconfigurationTest#testUncleanLeaderElectionEnable

2019-03-02 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782401#comment-16782401
 ] 

ASF GitHub Bot commented on KAFKA-7976:
---

omkreddy commented on pull request #6358: [Do not merge]KAFKA-7976: Increase 
TestUtils DEFAULT_MAX_WAIT_MS to 20 seconds 
URL: https://github.com/apache/kafka/pull/6358
 
 
   
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Flaky Test DynamicBrokerReconfigurationTest#testUncleanLeaderElectionEnable
> ---
>
> Key: KAFKA-7976
> URL: https://issues.apache.org/jira/browse/KAFKA-7976
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Stanislav Kozlovski
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for `2.2` release, I create tickets for all 
> observed test failures.
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.2-jdk8/detail/kafka-2.2-jdk8/28/]
> {quote}java.lang.AssertionError: Unclean leader not elected
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at 
> kafka.server.DynamicBrokerReconfigurationTest.testUncleanLeaderElectionEnable(DynamicBrokerReconfigurationTest.scala:488){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7997) Replace SaslAuthenticate request/response with automated protocol

2019-03-02 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7997.
--
   Resolution: Fixed
Fix Version/s: 2.3.0

Issue resolved by pull request 6324
[https://github.com/apache/kafka/pull/6324]

> Replace SaslAuthenticate request/response with automated protocol
> -
>
> Key: KAFKA-7997
> URL: https://issues.apache.org/jira/browse/KAFKA-7997
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Major
> Fix For: 2.3.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7997) Replace SaslAuthenticate request/response with automated protocol

2019-03-02 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782428#comment-16782428
 ] 

ASF GitHub Bot commented on KAFKA-7997:
---

omkreddy commented on pull request #6324: KAFKA-7997: Use automatic RPC 
generation in SaslAuthenticate
URL: https://github.com/apache/kafka/pull/6324
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Replace SaslAuthenticate request/response with automated protocol
> -
>
> Key: KAFKA-7997
> URL: https://issues.apache.org/jira/browse/KAFKA-7997
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Major
> Fix For: 2.3.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7996) KafkaStreams does not pass timeout when closing Producer

2019-03-02 Thread Lee Dongjin (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782468#comment-16782468
 ] 

Lee Dongjin commented on KAFKA-7996:


[~guozhang] [~mjsax] Sorry for being late. To sum up, the initial issue 
description,
{quote}"{{KafkaStreams#close}} is working incorrectly since {{Producer#close}} 
is called without a timeout."
{quote}
was incorrect; rather,
{quote}"Some components in {{KafkaStreams}} ({{Producer}}, {{AdminClient}}) are 
not closed properly for lack of a timeout."
{quote}
is correct, right? If so, it would be better to update the issue description 
along the lines of the latter - I agree with [~mjsax]'s opinion that this 
ticket is still valuable.

For the solution - as of now, {{Producer}} and {{AdminClient}} don't have a 
default close timeout like {{Consumer#DEFAULT_CLOSE_TIMEOUT_MS}}. There are two 
possible approaches:

1. Add {{[Producer, AdminClient]#DEFAULT_CLOSE_TIMEOUT_MS}} and close with this 
value in {{KafkaStreams}}. This approach doesn't require a KIP (a rough sketch 
follows after this comment).
 2. Provide additional timeout options for closing {{[Producer, Consumer, 
AdminClient]}} in {{KafkaStreams}}. This approach gives users a way to control 
the behavior, but it is an API change, so it requires a KIP.

What do you think? I will follow your decision.

cc/ [~pkleindl]
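
A minimal sketch of approach 1 (the constant and the helper method are 
hypothetical, not existing KafkaStreams code); both {{Producer}} and 
{{AdminClient}} already expose close variants that take a timeout, so the 
change is mainly about passing a bounded value instead of the implicit 
Long.MAX_VALUE:
{code:java}
import java.util.concurrent.TimeUnit;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.producer.Producer;

public class StreamsCloseSketch {

    // Hypothetical default, mirroring Consumer#DEFAULT_CLOSE_TIMEOUT_MS (30 seconds).
    static final long DEFAULT_CLOSE_TIMEOUT_MS = 30_000L;

    // Sketch: close the KafkaStreams-owned clients with a bounded timeout
    // instead of the effectively infinite default used by close() today.
    static void closeClients(Producer<byte[], byte[]> producer, AdminClient adminClient) {
        producer.close(DEFAULT_CLOSE_TIMEOUT_MS, TimeUnit.MILLISECONDS);
        adminClient.close(DEFAULT_CLOSE_TIMEOUT_MS, TimeUnit.MILLISECONDS);
    }
}
{code}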

> KafkaStreams does not pass timeout when closing Producer
> 
>
> Key: KAFKA-7996
> URL: https://issues.apache.org/jira/browse/KAFKA-7996
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0
>Reporter: Patrik Kleindl
>Assignee: Lee Dongjin
>Priority: Major
>  Labels: needs-kip
>
> [https://confluentcommunity.slack.com/messages/C48AHTCUQ/convo/C48AHTCUQ-1550831721.026100/]
> We are running 2.1 and have a case where the shutdown of a streams 
> application takes several minutes
> I noticed that although we call streams.close with a timeout of 30 seconds 
> the log says
> [Producer 
> clientId=…-8be49feb-8a2e-4088-bdd7-3c197f6107bb-StreamThread-1-producer] 
> Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.
> Matthias J Sax [vor 3 Tagen]
> I just checked the code, and yes, we don't provide a timeout for the producer 
> on close()...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-5998) /.checkpoint.tmp Not found exception

2019-03-02 Thread Dmitry Minkovsky (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782481#comment-16782481
 ] 

Dmitry Minkovsky commented on KAFKA-5998:
-

This seems to be happening in streams applications with session stores. And it 
seems to happen after retention periods expire. Maybe something is being 
deleted?

> /.checkpoint.tmp Not found exception
> 
>
> Key: KAFKA-5998
> URL: https://issues.apache.org/jira/browse/KAFKA-5998
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0, 0.11.0.1
>Reporter: Yogesh BG
>Priority: Major
> Attachments: 5998.v1.txt, 5998.v2.txt, Topology.txt, exc.txt, 
> props.txt, streams.txt
>
>
> I have one Kafka broker and one Kafka Streams instance running... I have been 
> running them for two days under a load of around 2500 msgs per second. On the 
> third day I am getting the below exception for some of the partitions; I have 
> 16 partitions and only 0_0 and 0_1 give this error:
> {{09:43:25.955 [ks_0_inst-StreamThread-6] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:322)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:415)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:314)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:700)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:683)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:523)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> 09:43:25.974 [ks_0_inst-StreamThread-15] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-wi

[jira] [Updated] (KAFKA-8020) Consider making ThreadCache a time-aware LRU Cache

2019-03-02 Thread Richard Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Yu updated KAFKA-8020:
--
Description: In distributed systems, time-aware LRU Caches offer a superior 
eviction policy then tradition LRU models, having more cache hits than misses. 
In this new policy, if an item which is stored beyond its useful lifespan, then 
it is removed. For example, in {{CachingWindowStore}}, a window usually is of 
limited size. After it expires, it would no longer be queried for, but it 
potentially could stay in the ThreadCache for an unnecessary amount of time if 
it is not evicted (i.e. the number of entries being inserted is few). For 
better allocation of memory, it would be better if we implement a time-aware 
LRU Cache which takes into account the lifespan of an entry and removes it once 
it has expired.  (was: Currently, in Kafka Streams, ThreadCache is used to 
store {{InternalProcessorContext}}s. Typically, an entry is only needed for a 
certain interval of time. For example, in {{CachingWindowStore}}, a window is 
of fixed size. After it expires, it would no longer be queried for, but it 
potentially could stay in the ThreadCache for an unnecessary amount of time if 
it is not evicted (i.e. the number of entries being inserted is few). For 
better allocation of memory, it would be better if we implement a time-aware 
LRU Cache which takes into account the lifespan of an entry and removes it once 
it has expired.)

> Consider making ThreadCache a time-aware LRU Cache
> --
>
> Key: KAFKA-8020
> URL: https://issues.apache.org/jira/browse/KAFKA-8020
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Richard Yu
>Priority: Major
>
> In distributed systems, time-aware LRU Caches offer a superior eviction 
> policy then tradition LRU models, having more cache hits than misses. In this 
> new policy, if an item which is stored beyond its useful lifespan, then it is 
> removed. For example, in {{CachingWindowStore}}, a window usually is of 
> limited size. After it expires, it would no longer be queried for, but it 
> potentially could stay in the ThreadCache for an unnecessary amount of time 
> if it is not evicted (i.e. the number of entries being inserted is few). For 
> better allocation of memory, it would be better if we implement a time-aware 
> LRU Cache which takes into account the lifespan of an entry and removes it 
> once it has expired.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8020) Consider making ThreadCache a time-aware LRU Cache

2019-03-02 Thread Richard Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Yu updated KAFKA-8020:
--
Description: In distributed systems, time-aware LRU Caches offers a 
superior eviction policy better than traditional LRU models, having more cache 
hits than misses. In this new policy, if an item which is stored beyond its 
useful lifespan, then it is removed. For example, in {{CachingWindowStore}}, a 
window usually is of limited size. After it expires, it would no longer be 
queried for, but it potentially could stay in the ThreadCache for an 
unnecessary amount of time if it is not evicted (i.e. the number of entries 
being inserted is few). For better allocation of memory, it would be better if 
we implement a time-aware LRU Cache which takes into account the lifespan of an 
entry and removes it once it has expired.  (was: In distributed systems, 
time-aware LRU Caches offer a superior eviction policy then tradition LRU 
models, having more cache hits than misses. In this new policy, if an item 
which is stored beyond its useful lifespan, then it is removed. For example, in 
{{CachingWindowStore}}, a window usually is of limited size. After it expires, 
it would no longer be queried for, but it potentially could stay in the 
ThreadCache for an unnecessary amount of time if it is not evicted (i.e. the 
number of entries being inserted is few). For better allocation of memory, it 
would be better if we implement a time-aware LRU Cache which takes into account 
the lifespan of an entry and removes it once it has expired.)

> Consider making ThreadCache a time-aware LRU Cache
> --
>
> Key: KAFKA-8020
> URL: https://issues.apache.org/jira/browse/KAFKA-8020
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Richard Yu
>Priority: Major
>
> In distributed systems, time-aware LRU Caches offers a superior eviction 
> policy better than traditional LRU models, having more cache hits than 
> misses. In this new policy, if an item which is stored beyond its useful 
> lifespan, then it is removed. For example, in {{CachingWindowStore}}, a 
> window usually is of limited size. After it expires, it would no longer be 
> queried for, but it potentially could stay in the ThreadCache for an 
> unnecessary amount of time if it is not evicted (i.e. the number of entries 
> being inserted is few). For better allocation of memory, it would be better 
> if we implement a time-aware LRU Cache which takes into account the lifespan 
> of an entry and removes it once it has expired.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8020) Consider making ThreadCache a time-aware LRU Cache

2019-03-02 Thread Richard Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Yu updated KAFKA-8020:
--
Description: In distributed systems, time-aware LRU Caches offers a 
superior eviction policy better than traditional LRU models, having more cache 
hits than misses. In this new policy, if an item is stored beyond its useful 
lifespan, then it is removed. For example, in {{CachingWindowStore}}, a window 
usually is of limited size. After it expires, it would no longer be queried 
for, but it potentially could stay in the ThreadCache for an unnecessary amount 
of time if it is not evicted (i.e. the number of entries being inserted is 
few). For better allocation of memory, it would be better if we implement a 
time-aware LRU Cache which takes into account the lifespan of an entry and 
removes it once it has expired.  (was: In distributed systems, time-aware LRU 
Caches offers a superior eviction policy better than traditional LRU models, 
having more cache hits than misses. In this new policy, if an item which is 
stored beyond its useful lifespan, then it is removed. For example, in 
{{CachingWindowStore}}, a window usually is of limited size. After it expires, 
it would no longer be queried for, but it potentially could stay in the 
ThreadCache for an unnecessary amount of time if it is not evicted (i.e. the 
number of entries being inserted is few). For better allocation of memory, it 
would be better if we implement a time-aware LRU Cache which takes into account 
the lifespan of an entry and removes it once it has expired.)

> Consider making ThreadCache a time-aware LRU Cache
> --
>
> Key: KAFKA-8020
> URL: https://issues.apache.org/jira/browse/KAFKA-8020
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Richard Yu
>Priority: Major
>
> In distributed systems, time-aware LRU Caches offers a superior eviction 
> policy better than traditional LRU models, having more cache hits than 
> misses. In this new policy, if an item is stored beyond its useful lifespan, 
> then it is removed. For example, in {{CachingWindowStore}}, a window usually 
> is of limited size. After it expires, it would no longer be queried for, but 
> it potentially could stay in the ThreadCache for an unnecessary amount of 
> time if it is not evicted (i.e. the number of entries being inserted is few). 
> For better allocation of memory, it would be better if we implement a 
> time-aware LRU Cache which takes into account the lifespan of an entry and 
> removes it once it has expired.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8020) Consider making ThreadCache a time-aware LRU Cache

2019-03-02 Thread Richard Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782531#comment-16782531
 ] 

Richard Yu commented on KAFKA-8020:
---

For optimal insertion times, we would probably use hierarchical timing wheels, 
as used in Kafka's request purgatory. They would be faster than a priority 
queue and could be created on demand.
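
As a concrete illustration of the time-aware eviction policy itself (not the 
timing-wheel implementation), here is a minimal sketch with a fixed per-entry 
TTL; the class and method names are hypothetical and unrelated to the actual 
ThreadCache API:
{code:java}
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// LRU cache whose entries also expire after a fixed lifespan, so stale entries
// are removed even when few new entries are being inserted.
public class TimeAwareLruCache<K, V> {

    private static final class Entry<V> {
        final V value;
        final long expiresAtMs;
        Entry(V value, long expiresAtMs) {
            this.value = value;
            this.expiresAtMs = expiresAtMs;
        }
    }

    private final long ttlMs;
    private final int maxEntries;
    private final LinkedHashMap<K, Entry<V>> map;

    public TimeAwareLruCache(int maxEntries, long ttlMs) {
        this.maxEntries = maxEntries;
        this.ttlMs = ttlMs;
        // accessOrder = true gives classic LRU ordering; removeEldestEntry caps the size.
        this.map = new LinkedHashMap<K, Entry<V>>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, Entry<V>> eldest) {
                return size() > TimeAwareLruCache.this.maxEntries;
            }
        };
    }

    public synchronized void put(K key, V value) {
        evictExpired();
        map.put(key, new Entry<>(value, System.currentTimeMillis() + ttlMs));
    }

    public synchronized V get(K key) {
        evictExpired();
        Entry<V> entry = map.get(key);
        return entry == null ? null : entry.value;
    }

    // Drop entries that have outlived their useful lifespan, regardless of recency.
    private void evictExpired() {
        long now = System.currentTimeMillis();
        Iterator<Map.Entry<K, Entry<V>>> it = map.entrySet().iterator();
        while (it.hasNext()) {
            if (it.next().getValue().expiresAtMs <= now) {
                it.remove();
            }
        }
    }
}
{code}
A timing-wheel-backed variant would replace the linear evictExpired() scan with 
bucketed expiry, which is what makes it attractive for high insertion rates.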

> Consider making ThreadCache a time-aware LRU Cache
> --
>
> Key: KAFKA-8020
> URL: https://issues.apache.org/jira/browse/KAFKA-8020
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Richard Yu
>Priority: Major
>
> In distributed systems, time-aware LRU Caches offers a superior eviction 
> policy better than traditional LRU models, having more cache hits than 
> misses. In this new policy, if an item is stored beyond its useful lifespan, 
> then it is removed. For example, in {{CachingWindowStore}}, a window usually 
> is of limited size. After it expires, it would no longer be queried for, but 
> it potentially could stay in the ThreadCache for an unnecessary amount of 
> time if it is not evicted (i.e. the number of entries being inserted is few). 
> For better allocation of memory, it would be better if we implement a 
> time-aware LRU Cache which takes into account the lifespan of an entry and 
> removes it once it has expired.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7651) Flaky test SaslSslAdminClientIntegrationTest.testMinimumRequestTimeouts

2019-03-02 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782594#comment-16782594
 ] 

Matthias J. Sax commented on KAFKA-7651:


Happened again: 
[https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/19939/testReport/junit/kafka.api/SaslSslAdminClientIntegrationTest/testMinimumRequestTimeouts/]

> Flaky test SaslSslAdminClientIntegrationTest.testMinimumRequestTimeouts
> ---
>
> Key: KAFKA-7651
> URL: https://issues.apache.org/jira/browse/KAFKA-7651
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Dong Lin
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> Here is stacktrace from 
> https://builds.apache.org/job/kafka-2.1-jdk8/51/testReport/junit/kafka.api/SaslSslAdminClientIntegrationTest/testMinimumRequestTimeouts/
> {code}
> Error Message
> java.lang.AssertionError: Expected an exception of type 
> org.apache.kafka.common.errors.TimeoutException; got type 
> org.apache.kafka.common.errors.SslAuthenticationException
> Stacktrace
> java.lang.AssertionError: Expected an exception of type 
> org.apache.kafka.common.errors.TimeoutException; got type 
> org.apache.kafka.common.errors.SslAuthenticationException
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> kafka.utils.TestUtils$.assertFutureExceptionTypeEquals(TestUtils.scala:1404)
>   at 
> kafka.api.AdminClientIntegrationTest.testMinimumRequestTimeouts(AdminClientIntegrationTest.scala:1080)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8031) Flaky Test UserClientIdQuotaTest#testQuotaOverrideDelete

2019-03-02 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8031:
--

 Summary: Flaky Test UserClientIdQuotaTest#testQuotaOverrideDelete
 Key: KAFKA-8031
 URL: https://issues.apache.org/jira/browse/KAFKA-8031
 Project: Kafka
  Issue Type: Bug
  Components: core, unit tests
Affects Versions: 2.3.0
Reporter: Matthias J. Sax
 Fix For: 2.3.0


[https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/2830/testReport/junit/kafka.api/UserClientIdQuotaTest/testQuotaOverrideDelete/]
{quote}java.lang.AssertionError: Client with id=QuotasTestConsumer-!@#$%^&*() 
should have been throttled at org.junit.Assert.fail(Assert.java:89) at 
org.junit.Assert.assertTrue(Assert.java:42) at 
kafka.api.QuotaTestClients.verifyThrottleTimeMetric(BaseQuotaTest.scala:229) at 
kafka.api.QuotaTestClients.verifyConsumeThrottle(BaseQuotaTest.scala:221) at 
kafka.api.BaseQuotaTest.testQuotaOverrideDelete(BaseQuotaTest.scala:130){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8032) Flaky Test UserQuotaTest#testQuotaOverrideDelete

2019-03-02 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8032:
--

 Summary: Flaky Test UserQuotaTest#testQuotaOverrideDelete
 Key: KAFKA-8032
 URL: https://issues.apache.org/jira/browse/KAFKA-8032
 Project: Kafka
  Issue Type: Bug
  Components: core, unit tests
Affects Versions: 2.3.0
Reporter: Matthias J. Sax
 Fix For: 2.3.0


[https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/2830/testReport/junit/kafka.api/UserQuotaTest/testQuotaOverrideDelete/]
{quote}java.lang.AssertionError: Client with id=QuotasTestProducer-1 should 
have been throttled at org.junit.Assert.fail(Assert.java:89) at 
org.junit.Assert.assertTrue(Assert.java:42) at 
kafka.api.QuotaTestClients.verifyThrottleTimeMetric(BaseQuotaTest.scala:229) at 
kafka.api.QuotaTestClients.verifyProduceThrottle(BaseQuotaTest.scala:215) at 
kafka.api.BaseQuotaTest.testQuotaOverrideDelete(BaseQuotaTest.scala:124){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6824) Flaky Test DynamicBrokerReconfigurationTest#testAddRemoveSslListener

2019-03-02 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782597#comment-16782597
 ] 

Matthias J. Sax commented on KAFKA-6824:


Happened again: 
[https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/2830/testReport/junit/kafka.server/DynamicBrokerReconfigurationTest/testAddRemoveSaslListeners/]

> Flaky Test DynamicBrokerReconfigurationTest#testAddRemoveSslListener
> 
>
> Key: KAFKA-6824
> URL: https://issues.apache.org/jira/browse/KAFKA-6824
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Anna Povzner
>Assignee: Rajini Sivaram
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> Observed two failures of this test (both in PR builds) :(
>  
> *Failure #1: (JDK 7 and Scala 2.11 )*
> *17:20:49* kafka.server.DynamicBrokerReconfigurationTest > 
> testAddRemoveSslListener FAILED
> *17:20:49*     java.lang.AssertionError: expected:<10> but was:<12>
> *17:20:49*         at org.junit.Assert.fail(Assert.java:88)
> *17:20:49*         at org.junit.Assert.failNotEquals(Assert.java:834)
> *17:20:49*         at org.junit.Assert.assertEquals(Assert.java:645)
> *17:20:49*         at org.junit.Assert.assertEquals(Assert.java:631)
> *17:20:49*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyProduceConsume(DynamicBrokerReconfigurationTest.scala:959)
> *17:20:49*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyRemoveListener(DynamicBrokerReconfigurationTest.scala:784)
> *17:20:49*         at 
> kafka.server.DynamicBrokerReconfigurationTest.testAddRemoveSslListener(DynamicBrokerReconfigurationTest.scala:705)
>  
> *Failure #2: (JDK 8)*
> *18:46:23* kafka.server.DynamicBrokerReconfigurationTest > 
> testAddRemoveSslListener FAILED
> *18:46:23*     java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
> not the leader for that topic-partition.
> *18:46:23*         at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:94)
> *18:46:23*         at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:77)
> *18:46:23*         at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:29)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.$anonfun$verifyProduceConsume$3(DynamicBrokerReconfigurationTest.scala:953)
> *18:46:23*         at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:234)
> *18:46:23*         at scala.collection.Iterator.foreach(Iterator.scala:929)
> *18:46:23*         at scala.collection.Iterator.foreach$(Iterator.scala:929)
> *18:46:23*         at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1417)
> *18:46:23*         at 
> scala.collection.IterableLike.foreach(IterableLike.scala:71)
> *18:46:23*         at 
> scala.collection.IterableLike.foreach$(IterableLike.scala:70)
> *18:46:23*         at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> *18:46:23*         at 
> scala.collection.TraversableLike.map(TraversableLike.scala:234)
> *18:46:23*         at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:227)
> *18:46:23*         at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyProduceConsume(DynamicBrokerReconfigurationTest.scala:953)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyRemoveListener(DynamicBrokerReconfigurationTest.scala:816)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.testAddRemoveSslListener(DynamicBrokerReconfigurationTest.scala:705)
> *18:46:23*
> *18:46:23*         Caused by:
> *18:46:23*         
> org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
> not the leader for that topic-partition.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8025) Flaky Test RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapterTest#shouldForwardAllDbOptionsCalls

2019-03-02 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782598#comment-16782598
 ] 

Matthias J. Sax commented on KAFKA-8025:


Failed again: 
[https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/2827/testReport/junit/org.apache.kafka.streams.state.internals/RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapterTest/shouldForwardAllDbOptionsCalls/]

Different stack trace:
{quote}java.lang.AssertionError: Expected: a string matching the pattern 
'Unexpected method call DBOptions\.setBaseBackgroundCompactions((.* *)*):' but: 
was "Unexpected method call DBOptions.setBaseBackgroundCompactions(0 (int)):\n 
DBOptions.close(): expected: 2, actual: 0" at 
org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18) at 
org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:6) at 
org.apache.kafka.streams.state.internals.RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapterTest.verifyDBOptionsMethodCall(RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapterTest.java:121)
 at 
org.apache.kafka.streams.state.internals.RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapterTest.shouldForwardAllDbOptionsCalls(RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapterTest.java:101){quote}
 

> Flaky Test 
> RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapterTest#shouldForwardAllDbOptionsCalls
> 
>
> Key: KAFKA-8025
> URL: https://issues.apache.org/jira/browse/KAFKA-8025
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Affects Versions: 2.3.0
>Reporter: Konstantine Karantasis
>Assignee: Guozhang Wang
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> At least one occurence where the following unit test case failed on a jenkins 
> job that didn't involve any related changes. 
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/2783/consoleFull]
> I have not been able to reproduce it locally on Linux. (For instance 20 
> consecutive runs of this class pass all test cases)
> {code:java}
> 14:06:13 
> org.apache.kafka.streams.state.internals.RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapterTest
>  > shouldForwardAllDbOptionsCalls STARTED 14:06:14 
> org.apache.kafka.streams.state.internals.RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapterTest.shouldForwardAllDbOptionsCalls
>  failed, log available in 
> /home/jenkins/jenkins-slave/workspace/kafka-pr-jdk11-scala2.12/streams/build/reports/testOutput/org.apache.kafka.streams.state.internals.RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapterTest.shouldForwardAllDbOptionsCalls.test.stdout
>  14:06:14 14:06:14 
> org.apache.kafka.streams.state.internals.RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapterTest
>  > shouldForwardAllDbOptionsCalls FAILED 14:06:14     
> java.lang.AssertionError: 14:06:14     Expected: a string matching the 
> pattern 'Unexpected method call DBOptions\.baseBackgroundCompactions((.* 
> 14:06:14     *)*):' 14:06:14          but: was "Unexpected method call 
> DBOptions.baseBackgroundCompactions():\n    DBOptions.close(): expected: 3, 
> actual: 0" 14:06:14         at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18) 14:06:14         
> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:6) 14:06:14       
>   at 
> org.apache.kafka.streams.state.internals.RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapterTest.verifyDBOptionsMethodCall(RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapterTest.java:121)
>  14:06:14         at 
> org.apache.kafka.streams.state.internals.RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapterTest.shouldForwardAllDbOptionsCalls(RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapterTest.java:101)
>  14:06:14 14:06:14 
> org.apache.kafka.streams.state.internals.RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapterTest
>  > shouldForwardAllColumnFamilyCalls STARTED 14:06:14 14:06:14 
> org.apache.kafka.streams.state.internals.RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapterTest
>  > shouldForwardAllColumnFamilyCalls PASSED
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7651) Flaky test SaslSslAdminClientIntegrationTest.testMinimumRequestTimeouts

2019-03-02 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782599#comment-16782599
 ] 

Matthias J. Sax commented on KAFKA-7651:


One more: 
[https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/19936/testReport/junit/kafka.api/AdminClientIntegrationTest/testMinimumRequestTimeouts/]

> Flaky test SaslSslAdminClientIntegrationTest.testMinimumRequestTimeouts
> ---
>
> Key: KAFKA-7651
> URL: https://issues.apache.org/jira/browse/KAFKA-7651
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Dong Lin
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> Here is stacktrace from 
> https://builds.apache.org/job/kafka-2.1-jdk8/51/testReport/junit/kafka.api/SaslSslAdminClientIntegrationTest/testMinimumRequestTimeouts/
> {code}
> Error Message
> java.lang.AssertionError: Expected an exception of type 
> org.apache.kafka.common.errors.TimeoutException; got type 
> org.apache.kafka.common.errors.SslAuthenticationException
> Stacktrace
> java.lang.AssertionError: Expected an exception of type 
> org.apache.kafka.common.errors.TimeoutException; got type 
> org.apache.kafka.common.errors.SslAuthenticationException
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> kafka.utils.TestUtils$.assertFutureExceptionTypeEquals(TestUtils.scala:1404)
>   at 
> kafka.api.AdminClientIntegrationTest.testMinimumRequestTimeouts(AdminClientIntegrationTest.scala:1080)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8030) Flaky Test TopicCommandWithAdminClientTest#testDescribeUnderMinIsrPartitionsMixed

2019-03-02 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8030:
--

 Summary: Flaky Test 
TopicCommandWithAdminClientTest#testDescribeUnderMinIsrPartitionsMixed
 Key: KAFKA-8030
 URL: https://issues.apache.org/jira/browse/KAFKA-8030
 Project: Kafka
  Issue Type: Bug
  Components: admin, unit tests
Affects Versions: 2.3.0
Reporter: Matthias J. Sax
 Fix For: 2.3.0


[https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/2830/testReport/junit/kafka.admin/TopicCommandWithAdminClientTest/testDescribeUnderMinIsrPartitionsMixed/]
{quote}java.lang.AssertionError at org.junit.Assert.fail(Assert.java:87) at 
org.junit.Assert.assertTrue(Assert.java:42) at 
org.junit.Assert.assertTrue(Assert.java:53) at 
kafka.admin.TopicCommandWithAdminClientTest.testDescribeUnderMinIsrPartitionsMixed(TopicCommandWithAdminClientTest.scala:602){quote}
STDERR
{quote}Option "[replica-assignment]" can't be used with option 
"[partitions]"{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7965) Flaky Test ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup

2019-03-02 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782600#comment-16782600
 ] 

Matthias J. Sax commented on KAFKA-7965:


One more: 
[https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/19936/testReport/junit/kafka.api/ConsumerBounceTest/testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup/]

> Flaky Test 
> ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup
> 
>
> Key: KAFKA-7965
> URL: https://issues.apache.org/jira/browse/KAFKA-7965
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, unit tests
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Stanislav Kozlovski
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/21/]
> {quote}java.lang.AssertionError: Received 0, expected at least 68 at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> kafka.api.ConsumerBounceTest.receiveAndCommit(ConsumerBounceTest.scala:557) 
> at 
> kafka.api.ConsumerBounceTest.$anonfun$testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup$1(ConsumerBounceTest.scala:320)
>  at 
> kafka.api.ConsumerBounceTest.$anonfun$testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup$1$adapted(ConsumerBounceTest.scala:319)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) 
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
> kafka.api.ConsumerBounceTest.testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup(ConsumerBounceTest.scala:319){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-4332) kafka.api.UserQuotaTest.testThrottledProducerConsumer transient unit test failure

2019-03-02 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-4332:
---
Affects Version/s: 2.3.0

> kafka.api.UserQuotaTest.testThrottledProducerConsumer transient unit test 
> failure
> -
>
> Key: KAFKA-4332
> URL: https://issues.apache.org/jira/browse/KAFKA-4332
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.10.1.0, 2.3.0
>Reporter: Jun Rao
>Priority: Critical
>
> kafka.api.UserQuotaTest > testThrottledProducerConsumer FAILED
> java.lang.AssertionError: Should have been throttled



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-4332) kafka.api.UserQuotaTest.testThrottledProducerConsumer transient unit test failure

2019-03-02 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-4332:
---
Component/s: unit tests
 core

> kafka.api.UserQuotaTest.testThrottledProducerConsumer transient unit test 
> failure
> -
>
> Key: KAFKA-4332
> URL: https://issues.apache.org/jira/browse/KAFKA-4332
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core, unit tests
>Affects Versions: 0.10.1.0, 2.3.0
>Reporter: Jun Rao
>Priority: Critical
> Fix For: 2.3.0
>
>
> kafka.api.UserQuotaTest > testThrottledProducerConsumer FAILED
> java.lang.AssertionError: Should have been throttled



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-4332) kafka.api.UserQuotaTest.testThrottledProducerConsumer transient unit test failure

2019-03-02 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-4332:
---
Fix Version/s: 2.3.0

> kafka.api.UserQuotaTest.testThrottledProducerConsumer transient unit test 
> failure
> -
>
> Key: KAFKA-4332
> URL: https://issues.apache.org/jira/browse/KAFKA-4332
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.10.1.0, 2.3.0
>Reporter: Jun Rao
>Priority: Critical
> Fix For: 2.3.0
>
>
> kafka.api.UserQuotaTest > testThrottledProducerConsumer FAILED
> java.lang.AssertionError: Should have been throttled



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-4332) kafka.api.UserQuotaTest.testThrottledProducerConsumer transient unit test failure

2019-03-02 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782602#comment-16782602
 ] 

Matthias J. Sax commented on KAFKA-4332:


Happened again: 
[https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/2829/testReport/junit/kafka.api/UserQuotaTest/testThrottledProducerConsumer/]

STDOUT
{quote}[2019-03-02 03:48:16,694] WARN SASL configuration failed: 
javax.security.auth.login.LoginException: No JAAS configuration section named 
'Client' was found in specified JAAS configuration file: 
'/tmp/kafka13407197977524686174.tmp'. Will continue connection to Zookeeper 
server without SASL authentication, if Zookeeper server allows it. 
(org.apache.zookeeper.ClientCnxn:1011) [2019-03-02 03:48:16,703] ERROR 
[ZooKeeperClient] Auth failed. (kafka.zookeeper.ZooKeeperClient:74) [2019-03-02 
03:48:17,999] WARN SASL configuration failed: 
javax.security.auth.login.LoginException: No JAAS configuration section named 
'Client' was found in specified JAAS configuration file: 
'/tmp/kafka13407197977524686174.tmp'. Will continue connection to Zookeeper 
server without SASL authentication, if Zookeeper server allows it. 
(org.apache.zookeeper.ClientCnxn:1011) [2019-03-02 03:48:18,013] ERROR 
[ZooKeeperClient] Auth failed. (kafka.zookeeper.ZooKeeperClient:74) Debug is 
true storeKey true useTicketCache false useKeyTab true doNotPrompt false 
ticketCache is null isInitiator true KeyTab is 
/tmp/kafka10922548166674984360.tmp refreshKrb5Config is false principal is 
kafka/localh...@example.com tryFirstPass is false useFirstPass is false 
storePass is false clearPass is false principal is kafka/localh...@example.com 
Will use keytab Commit Succeeded [2019-03-02 03:48:18,768] WARN SASL 
configuration failed: javax.security.auth.login.LoginException: No JAAS 
configuration section named 'Client' was found in specified JAAS configuration 
file: '/tmp/kafka13407197977524686174.tmp'. Will continue connection to 
Zookeeper server without SASL authentication, if Zookeeper server allows it. 
(org.apache.zookeeper.ClientCnxn:1011) [2019-03-02 03:48:18,776] ERROR 
[ZooKeeperClient] Auth failed. (kafka.zookeeper.ZooKeeperClient:74) Debug is 
true storeKey true useTicketCache false useKeyTab true doNotPrompt false 
ticketCache is null isInitiator true KeyTab is 
/tmp/kafka15170262955234845182.tmp refreshKrb5Config is false principal is 
clie...@example.com tryFirstPass is false useFirstPass is false storePass is 
false clearPass is false principal is clie...@example.com Will use keytab 
Commit Succeeded [2019-03-02 03:48:59,228] WARN SASL configuration failed: 
javax.security.auth.login.LoginException: No JAAS configuration section named 
'Client' was found in specified JAAS configuration file: 
'/tmp/kafka684991191649714515.tmp'. Will continue connection to Zookeeper 
server without SASL authentication, if Zookeeper server allows it. 
(org.apache.zookeeper.ClientCnxn:1011) [2019-03-02 03:48:59,240] ERROR 
[ZooKeeperClient] Auth failed. (kafka.zookeeper.ZooKeeperClient:74) [2019-03-02 
03:49:01,348] WARN SASL configuration failed: 
javax.security.auth.login.LoginException: No JAAS configuration section named 
'Client' was found in specified JAAS configuration file: 
'/tmp/kafka684991191649714515.tmp'. Will continue connection to Zookeeper 
server without SASL authentication, if Zookeeper server allows it. 
(org.apache.zookeeper.ClientCnxn:1011) [2019-03-02 03:49:01,349] ERROR 
[ZooKeeperClient] Auth failed. (kafka.zookeeper.ZooKeeperClient:74) Debug is 
true storeKey true useTicketCache false useKeyTab true doNotPrompt false 
ticketCache is null isInitiator true KeyTab is 
/tmp/kafka3350555149809806324.tmp refreshKrb5Config is false principal is 
kafka/localh...@example.com tryFirstPass is false useFirstPass is false 
storePass is false clearPass is false principal is kafka/localh...@example.com 
Will use keytab Commit Succeeded [2019-03-02 03:49:02,868] WARN SASL 
configuration failed: javax.security.auth.login.LoginException: No JAAS 
configuration section named 'Client' was found in specified JAAS configuration 
file: '/tmp/kafka684991191649714515.tmp'. Will continue connection to Zookeeper 
server without SASL authentication, if Zookeeper server allows it. 
(org.apache.zookeeper.ClientCnxn:1011) [2019-03-02 03:49:02,904] ERROR 
[ZooKeeperClient] Auth failed. (kafka.zookeeper.ZooKeeperClient:74) Debug is 
true storeKey true useTicketCache false useKeyTab true doNotPrompt false 
ticketCache is null isInitiator true KeyTab is 
/tmp/kafka3547373607790368490.tmp refreshKrb5Config is false principal is 
clie...@example.com tryFirstPass is false useFirstPass is false storePass is 
false clearPass is false principal is clie...@example.com Will use keytab 
Commit Succeeded [2019-03-02 03:50:03,185] WARN SASL configuration failed: 
javax.security.auth.login.LoginException: No JAAS configuration section named 
'Client' was 

[jira] [Created] (KAFKA-8033) Flaky Test PlaintextConsumerTest#testFetchInvalidOffset

2019-03-02 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8033:
--

 Summary: Flaky Test PlaintextConsumerTest#testFetchInvalidOffset
 Key: KAFKA-8033
 URL: https://issues.apache.org/jira/browse/KAFKA-8033
 Project: Kafka
  Issue Type: Bug
  Components: core, unit tests
Affects Versions: 2.3.0
Reporter: Matthias J. Sax
 Fix For: 2.3.0


[https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/2829/testReport/junit/kafka.api/PlaintextConsumerTest/testFetchInvalidOffset/]
{quote}org.scalatest.junit.JUnitTestFailedError: Expected exception 
org.apache.kafka.clients.consumer.NoOffsetForPartitionException to be thrown, 
but no exception was thrown{quote}
STDOUT prints this over and over again:
{quote}[2019-03-02 04:01:25,576] ERROR [ReplicaFetcher replicaId=0, leaderId=1, 
fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7980) Flaky Test SocketServerTest#testConnectionRateLimit

2019-03-02 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782603#comment-16782603
 ] 

Matthias J. Sax commented on KAFKA-7980:


One more: 
[https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/2829/testReport/junit/kafka.network/SocketServerTest/testConnectionRateLimit/]

> Flaky Test SocketServerTest#testConnectionRateLimit
> ---
>
> Key: KAFKA-7980
> URL: https://issues.apache.org/jira/browse/KAFKA-7980
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/25/]
> {quote}java.lang.AssertionError: Connections created too quickly: 4 at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> kafka.network.SocketServerTest.testConnectionRateLimit(SocketServerTest.scala:1122){quote}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-4332) kafka.api.UserQuotaTest.testThrottledProducerConsumer transient unit test failure

2019-03-02 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-4332:
---
Priority: Critical  (was: Major)

> kafka.api.UserQuotaTest.testThrottledProducerConsumer transient unit test 
> failure
> -
>
> Key: KAFKA-4332
> URL: https://issues.apache.org/jira/browse/KAFKA-4332
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.10.1.0
>Reporter: Jun Rao
>Priority: Critical
>
> kafka.api.UserQuotaTest > testThrottledProducerConsumer FAILED
> java.lang.AssertionError: Should have been throttled



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-4332) kafka.api.UserQuotaTest.testThrottledProducerConsumer transient unit test failure

2019-03-02 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-4332:
---
Labels: flaky-test  (was: )

> kafka.api.UserQuotaTest.testThrottledProducerConsumer transient unit test 
> failure
> -
>
> Key: KAFKA-4332
> URL: https://issues.apache.org/jira/browse/KAFKA-4332
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core, unit tests
>Affects Versions: 0.10.1.0, 2.3.0
>Reporter: Jun Rao
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> kafka.api.UserQuotaTest > testThrottledProducerConsumer FAILED
> java.lang.AssertionError: Should have been throttled



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8010) kafka-configs.sh does not allow setting config with an equal in the value

2019-03-02 Thread Kartik (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782622#comment-16782622
 ] 

Kartik commented on KAFKA-8010:
---

Hi [~mimaison],

If you check the *ConfigCommand.scala* code under 
*kafka\core\src\main\scala\kafka\admin*, it expects the add-config value to be 
provided in single quotes. Since you are providing add-config 
"sasl.jaas.config=KafkaServer " in double quotes, it fails.

Command:

./kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers 
--entity-name 59 --alter --add-config 'sasl.jaas.config=KafkaServer '

Can you try the same?

Thanks.

> kafka-configs.sh does not allow setting config with an equal in the value
> -
>
> Key: KAFKA-8010
> URL: https://issues.apache.org/jira/browse/KAFKA-8010
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Mickael Maison
>Priority: Major
>
> The sasl.jaas.config typically includes equals in its value. Unfortunately 
> the kafka-configs tool does not parse such values correctly and hits an error:
> ./kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers 
> --entity-name 59 --alter --add-config "sasl.jaas.config=KafkaServer \{\n  
> org.apache.kafka.common.security.plain.PlainLoginModule required\n  
> username=\"myuser\"\n  password=\"mypassword\";\n};\nClient \{\n  
> org.apache.zookeeper.server.auth.DigestLoginModule required\n  
> username=\"myuser2\"\n  password=\"mypassword2\;\n};"
> requirement failed: Invalid entity config: all configs to be added must be in 
> the format "key=val"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7312) Transient failure in kafka.api.AdminClientIntegrationTest.testMinimumRequestTimeouts

2019-03-02 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782624#comment-16782624
 ] 

ASF GitHub Bot commented on KAFKA-7312:
---

omkreddy commented on pull request #6360: KAFKA-7312: Change the broker port in 
AdminClientIntegrationTest (testMinimumRequestTimeouts, testForceClose) tests
URL: https://github.com/apache/kafka/pull/6360
 
 
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Transient failure in 
> kafka.api.AdminClientIntegrationTest.testMinimumRequestTimeouts
> 
>
> Key: KAFKA-7312
> URL: https://issues.apache.org/jira/browse/KAFKA-7312
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.3.0
>Reporter: Guozhang Wang
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> {code}
> Error Message
> org.junit.runners.model.TestTimedOutException: test timed out after 12 
> milliseconds
> Stacktrace
> org.junit.runners.model.TestTimedOutException: test timed out after 12 
> milliseconds
>   at java.lang.Object.wait(Native Method)
>   at java.lang.Object.wait(Object.java:502)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:92)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:262)
>   at 
> kafka.utils.TestUtils$.assertFutureExceptionTypeEquals(TestUtils.scala:1345)
>   at 
> kafka.api.AdminClientIntegrationTest.testMinimumRequestTimeouts(AdminClientIntegrationTest.scala:1080)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)