[jira] [Created] (KAFKA-8314) Managing the doc field in case of schema projection - kafka connect

2019-05-02 Thread kaushik srinivas (JIRA)
kaushik srinivas created KAFKA-8314:
---

 Summary: Managing the doc field in case of schema projection - 
kafka connect
 Key: KAFKA-8314
 URL: https://issues.apache.org/jira/browse/KAFKA-8314
 Project: Kafka
  Issue Type: Bug
Reporter: kaushik srinivas


A doc field change in the schema while writing to HDFS using the HDFS sink connector 
via the Connect framework causes failures in schema projection.

 

java.lang.RuntimeException: 
org.apache.kafka.connect.errors.SchemaProjectorException: Schema parameters not 
equal. source parameters: \{connect.record.doc=xxx} and target parameters: 
\{connect.record.doc=yyy} 
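
For illustration, a minimal sketch (the schema name, field, and doc values below are made up) of two Connect schemas that differ only in their doc parameter and trip this check when projected:

{code:java}
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.SchemaProjector;
import org.apache.kafka.connect.data.Struct;

class DocMismatchSketch {
    public static void main(String[] args) {
        // Two otherwise identical schemas; only the connect.record.doc parameter differs.
        Schema source = SchemaBuilder.struct().name("com.example.Record")
                .field("id", Schema.STRING_SCHEMA)
                .parameter("connect.record.doc", "xxx")
                .build();
        Schema target = SchemaBuilder.struct().name("com.example.Record")
                .field("id", Schema.STRING_SCHEMA)
                .parameter("connect.record.doc", "yyy")
                .build();

        // The projector compares the full parameter maps, so this throws
        // SchemaProjectorException: "Schema parameters not equal. ..."
        SchemaProjector.project(source, new Struct(source).put("id", "a"), target);
    }
}
{code}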

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8314) Managing the doc field in case of schema projection - kafka connect

2019-05-02 Thread kaushik srinivas (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831549#comment-16831549
 ] 

kaushik srinivas commented on KAFKA-8314:
-

We have added a check in 
connect/api/src/main/java/org/apache/kafka/connect/data/SchemaProjector.java, 
as below, to exclude non-mandatory schema items in our local build of Kafka.
{code:java}
private static void checkMaybeCompatible(Schema source, Schema target) {
    if (source.type() != target.type() && !isPromotable(source.type(), target.type())) {
        throw new SchemaProjectorException("Schema type mismatch. source type: " + source.type() + " and target type: " + target.type());
    } else if (!Objects.equals(source.name(), target.name())) {
        throw new SchemaProjectorException("Schema name mismatch. source name: " + source.name() + " and target name: " + target.name());
    } else if (!Objects.equals(source.parameters(), target.parameters())) {

        // Create copies of the source and target schema parameter maps.
        Map<String, String> sourceParameters = new HashMap<>(source.parameters());
        Map<String, String> targetParameters = new HashMap<>(target.parameters());

        // Exclude the doc, aliases and namespace fields from the schema compatibility check.
        sourceParameters.remove("connect.record.doc");
        sourceParameters.remove("connect.record.aliases");
        sourceParameters.remove("connect.record.namespace");
        targetParameters.remove("connect.record.doc");
        targetParameters.remove("connect.record.aliases");
        targetParameters.remove("connect.record.namespace");

        // Throw SchemaProjectorException if the comparison fails for the remaining parameters.
        if (!Objects.equals(sourceParameters, targetParameters)) {
            throw new SchemaProjectorException("Schema parameters not equal. source parameters: " + source.parameters() + " and target parameters: " + target.parameters());
        }
    }
}
{code}
 

> Managing the doc field in case of schema projection - kafka connect
> ---
>
> Key: KAFKA-8314
> URL: https://issues.apache.org/jira/browse/KAFKA-8314
> Project: Kafka
>  Issue Type: Bug
>Reporter: kaushik srinivas
>Priority: Major
>
> A doc field change in the schema while writing to HDFS using the HDFS sink 
> connector via the Connect framework causes failures in schema projection.
>  
> java.lang.RuntimeException: 
> org.apache.kafka.connect.errors.SchemaProjectorException: Schema parameters 
> not equal. source parameters: \{connect.record.doc=xxx} and target 
> parameters: \{connect.record.doc=yyy} 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8315) Cannot pass Materialized into a join operation

2019-05-02 Thread Andrew (JIRA)
Andrew created KAFKA-8315:
-

 Summary: Cannot pass Materialized into a join operation
 Key: KAFKA-8315
 URL: https://issues.apache.org/jira/browse/KAFKA-8315
 Project: Kafka
  Issue Type: Bug
Reporter: Andrew


The documentation says to use `Materialized` not `JoinWindows.until()` 
(https://kafka.apache.org/22/javadoc/org/apache/kafka/streams/kstream/JoinWindows.html#until-long-), 
but there is nowhere to pass a `Materialized` instance to the join operation; 
only the group operation seems to support it.

 

Slack conversation here : 
https://confluentcommunity.slack.com/archives/C48AHTCUQ/p1556799561287300



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8315) Cannot pass Materialized into a join operation

2019-05-02 Thread John Roesler (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831619#comment-16831619
 ] 

John Roesler commented on KAFKA-8315:
-

Hey [~the4thamigo_uk], 

Sorry about that. It looks like the javadoc got messed up in this change: 
https://github.com/apache/kafka/pull/5911/files#diff-35e3523474fa277a63e36a3fe9e22af8

I'll submit a PR to fix it.

In summary, you should use the JoinWindows "grace period" instead of "until". 
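
For reference, a rough sketch (the window and grace durations here are arbitrary examples, not values from this issue) of the deprecated call versus its replacement:

{code:java}
import java.time.Duration;
import org.apache.kafka.streams.kstream.JoinWindows;

class JoinWindowsSketch {
    // Deprecated: retention-style configuration via until().
    @SuppressWarnings("deprecation")
    static JoinWindows oldStyle() {
        return JoinWindows.of(Duration.ofDays(2)).until(Duration.ofDays(4).toMillis());
    }

    // Current: bound how late records may arrive with grace() instead.
    static JoinWindows newStyle() {
        return JoinWindows.of(Duration.ofDays(2)).grace(Duration.ofDays(2));
    }
}
{code}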

> Cannot pass Materialized into a join operation
> --
>
> Key: KAFKA-8315
> URL: https://issues.apache.org/jira/browse/KAFKA-8315
> Project: Kafka
>  Issue Type: Bug
>Reporter: Andrew
>Priority: Major
>
> The documentation says to use `Materialized` not `JoinWindows.until()` 
> (https://kafka.apache.org/22/javadoc/org/apache/kafka/streams/kstream/JoinWindows.html#until-long-),
>  but there is no where to pass a `Materialized` instance to the join 
> operation, only to the group operation is supported it seems.
>  
> Slack conversation here : 
> https://confluentcommunity.slack.com/archives/C48AHTCUQ/p1556799561287300



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8315) Cannot pass Materialized into a join operation

2019-05-02 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831624#comment-16831624
 ] 

ASF GitHub Bot commented on KAFKA-8315:
---

vvcephei commented on pull request #6664: KAFKA-8315: fix the JoinWindows 
retention deprecation doc
URL: https://github.com/apache/kafka/pull/6664
 
 
   Fix a javadoc mistake introduced in 
https://github.com/apache/kafka/pull/5911/files#diff-35e3523474fa277a63e36a3fe9e22af8
 .
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Cannot pass Materialized into a join operation
> --
>
> Key: KAFKA-8315
> URL: https://issues.apache.org/jira/browse/KAFKA-8315
> Project: Kafka
>  Issue Type: Bug
>Reporter: Andrew
>Priority: Major
>
> The documentation says to use `Materialized` not `JoinWindows.until()` 
> (https://kafka.apache.org/22/javadoc/org/apache/kafka/streams/kstream/JoinWindows.html#until-long-),
>  but there is no where to pass a `Materialized` instance to the join 
> operation, only to the group operation is supported it seems.
>  
> Slack conversation here : 
> https://confluentcommunity.slack.com/archives/C48AHTCUQ/p1556799561287300



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-8315) Cannot pass Materialized into a join operation

2019-05-02 Thread John Roesler (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Roesler reassigned KAFKA-8315:
---

Assignee: John Roesler

> Cannot pass Materialized into a join operation
> --
>
> Key: KAFKA-8315
> URL: https://issues.apache.org/jira/browse/KAFKA-8315
> Project: Kafka
>  Issue Type: Bug
>Reporter: Andrew
>Assignee: John Roesler
>Priority: Major
>
> The documentation says to use `Materialized` not `JoinWindows.until()` 
> (https://kafka.apache.org/22/javadoc/org/apache/kafka/streams/kstream/JoinWindows.html#until-long-),
>  but there is no where to pass a `Materialized` instance to the join 
> operation, only to the group operation is supported it seems.
>  
> Slack conversation here : 
> https://confluentcommunity.slack.com/archives/C48AHTCUQ/p1556799561287300



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8315) Cannot pass Materialized into a join operation

2019-05-02 Thread John Roesler (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831625#comment-16831625
 ] 

John Roesler commented on KAFKA-8315:
-

Ok, submitted the PR https://github.com/apache/kafka/pull/6664 . Sorry for the 
confusion!

> Cannot pass Materialized into a join operation
> --
>
> Key: KAFKA-8315
> URL: https://issues.apache.org/jira/browse/KAFKA-8315
> Project: Kafka
>  Issue Type: Bug
>Reporter: Andrew
>Priority: Major
>
> The documentation says to use `Materialized` not `JoinWindows.until()` 
> (https://kafka.apache.org/22/javadoc/org/apache/kafka/streams/kstream/JoinWindows.html#until-long-),
>  but there is no where to pass a `Materialized` instance to the join 
> operation, only to the group operation is supported it seems.
>  
> Slack conversation here : 
> https://confluentcommunity.slack.com/archives/C48AHTCUQ/p1556799561287300



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8316) Remove deprecated usage of Slf4jRequestLog, SslContextFactory

2019-05-02 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-8316:
--

 Summary: Remove deprecated usage of Slf4jRequestLog, 
SslContextFactory
 Key: KAFKA-8316
 URL: https://issues.apache.org/jira/browse/KAFKA-8316
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma


Jetty recently deprecated a few classes we use. The following commit suppresses 
the deprecation warnings:

https://github.com/apache/kafka/commit/e66bc6255b2ee42481b54b7fd1d256b9e4ff5741

We should remove the suppressions and use the suggested alternatives.
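
For reference, a hedged sketch (not the actual Kafka patch; the configuration values are illustrative) of the replacements Jetty suggests:

{code:java}
import org.eclipse.jetty.server.CustomRequestLog;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.Slf4jRequestLogWriter;
import org.eclipse.jetty.util.ssl.SslContextFactory;

class JettyReplacementsSketch {
    static void configure(Server server) {
        // Slf4jRequestLog -> CustomRequestLog backed by an SLF4J writer.
        server.setRequestLog(new CustomRequestLog(new Slf4jRequestLogWriter(),
                CustomRequestLog.EXTENDED_NCSA_FORMAT));

        // SslContextFactory -> the role-specific SslContextFactory.Server (or .Client).
        SslContextFactory.Server sslContextFactory = new SslContextFactory.Server();
        sslContextFactory.setKeyStorePath("/path/to/keystore.jks"); // illustrative
    }
}
{code}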



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-4304) Extend Interactive Queries for return latest update timestamp per key

2019-05-02 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-4304.

Resolution: Duplicate

> Extend Interactive Queries for return latest update timestamp per key
> -
>
> Key: KAFKA-4304
> URL: https://issues.apache.org/jira/browse/KAFKA-4304
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Minor
>  Labels: kip
>
> Currently, when querying a state store, it is not clear when the key was 
> updated last. The idea of this JIRA is to make the latest update timestamp 
> for each key-value pair of the state store accessible.
> For example, this might be useful to
>  * check if a value was updated but did not change (just compare the update 
> TS)
>  * if you want to consider only recently updated keys
> Covered via KIP-258: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-258%3A+Allow+to+Store+Record+Timestamps+in+RocksDB]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7697) Possible deadlock in kafka.cluster.Partition

2019-05-02 Thread Jackson Westeen (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jackson Westeen updated KAFKA-7697:
---
Attachment: kafka.log

> Possible deadlock in kafka.cluster.Partition
> 
>
> Key: KAFKA-7697
> URL: https://issues.apache.org/jira/browse/KAFKA-7697
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Gian Merlino
>Assignee: Rajini Sivaram
>Priority: Blocker
> Fix For: 2.2.0, 2.1.1
>
> Attachments: kafka.log, threaddump.txt
>
>
> After upgrading a fairly busy broker from 0.10.2.0 to 2.1.0, it locked up 
> within a few minutes (by "locked up" I mean that all request handler threads 
> were busy, and other brokers reported that they couldn't communicate with 
> it). I restarted it a few times and it did the same thing each time. After 
> downgrading to 0.10.2.0, the broker was stable. I attached a thread dump from 
> the last attempt on 2.1.0 that shows lots of kafka-request-handler- threads 
> trying to acquire the leaderIsrUpdateLock lock in kafka.cluster.Partition.
> It jumps out that there are two threads that already have some read lock 
> (can't tell which one) and are trying to acquire a second one (on two 
> different read locks: 0x000708184b88 and 0x00070821f188): 
> kafka-request-handler-1 and kafka-request-handler-4. Both are handling a 
> produce request, and in the process of doing so, are calling 
> Partition.fetchOffsetSnapshot while trying to complete a DelayedFetch. At the 
> same time, both of those locks have writers from other threads waiting on 
> them (kafka-request-handler-2 and kafka-scheduler-6). Neither of those locks 
> appear to have writers that hold them (if only because no threads in the dump 
> are deep enough in inWriteLock to indicate that).
> ReentrantReadWriteLock in nonfair mode prioritizes waiting writers over 
> readers. Is it possible that kafka-request-handler-1 and 
> kafka-request-handler-4 are each trying to read-lock the partition that is 
> currently locked by the other one, and they're both parked waiting for 
> kafka-request-handler-2 and kafka-scheduler-6 to get write locks, which they 
> never will, because the former two threads own read locks and aren't giving 
> them up?
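
To make the suspected interleaving concrete, here is a standalone sketch (plain JDK code, not Kafka's; the thread roles in the comments are only an analogy to the handlers above) of two readers and two queued writers on a pair of non-fair ReentrantReadWriteLocks:

{code:java}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Each reader holds one lock and then requests the other while a writer is queued on it.
// In non-fair mode a queued writer blocks new readers, so both readers park and neither
// ever releases its first lock: a deadlock with no thread holding a write lock.
public class ReadWriteLockDeadlockSketch {
    public static void main(String[] args) throws InterruptedException {
        final ReentrantReadWriteLock lock1 = new ReentrantReadWriteLock(); // non-fair by default
        final ReentrantReadWriteLock lock2 = new ReentrantReadWriteLock();
        final CountDownLatch firstLocksHeld = new CountDownLatch(2);
        final CountDownLatch writersQueued = new CountDownLatch(1);

        Runnable readerA = () -> {                 // plays the role of kafka-request-handler-1
            lock1.readLock().lock();
            firstLocksHeld.countDown();
            awaitQuietly(writersQueued);
            lock2.readLock().lock();               // parks behind the writer queued on lock2
        };
        Runnable readerB = () -> {                 // plays the role of kafka-request-handler-4
            lock2.readLock().lock();
            firstLocksHeld.countDown();
            awaitQuietly(writersQueued);
            lock1.readLock().lock();               // parks behind the writer queued on lock1
        };
        new Thread(readerA).start();
        new Thread(readerB).start();

        firstLocksHeld.await();
        new Thread(() -> lock1.writeLock().lock()).start(); // like kafka-request-handler-2
        new Thread(() -> lock2.writeLock().lock()).start(); // like kafka-scheduler-6
        Thread.sleep(500);                                  // give the writers time to queue
        writersQueued.countDown();                          // both readers now hang on their second lock
    }

    private static void awaitQuietly(CountDownLatch latch) {
        try {
            latch.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
{code}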



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7697) Possible deadlock in kafka.cluster.Partition

2019-05-02 Thread N S N MURTHY (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831714#comment-16831714
 ] 

N S N MURTHY commented on KAFKA-7697:
-

[~rsivaram]  We also encountered the same issue with 2.1.0 in our production 
stack.

Below is a sample stack trace. If we want to upgrade/downgrade Kafka in our 
setup, which version should we go to?

kafka-request-handler-3 tid=53 [WAITING] [DAEMON]
java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock() 
ReentrantReadWriteLock.java:727
kafka.utils.CoreUtils$.inLock(Lock, Function0) CoreUtils.scala:249
kafka.utils.CoreUtils$.inReadLock(ReadWriteLock, Function0) CoreUtils.scala:257
kafka.cluster.Partition.fetchOffsetSnapshot(Optional, boolean) 
Partition.scala:832
kafka.server.DelayedFetch.$anonfun$tryComplete$1(DelayedFetch, IntRef, Object, 
Tuple2) DelayedFetch.scala:87
kafka.server.DelayedFetch.$anonfun$tryComplete$1$adapted(DelayedFetch, IntRef, 
Object, Tuple2) DelayedFetch.scala:79
kafka.server.DelayedFetch$$Lambda$969.apply(Object)
scala.collection.mutable.ResizableArray.foreach(Function1) 
ResizableArray.scala:58
scala.collection.mutable.ResizableArray.foreach$(ResizableArray, Function1) 
ResizableArray.scala:51
scala.collection.mutable.ArrayBuffer.foreach(Function1) ArrayBuffer.scala:47
kafka.server.DelayedFetch.tryComplete() DelayedFetch.scala:79
kafka.server.DelayedOperation.maybeTryComplete() DelayedOperation.scala:121
kafka.server.DelayedOperationPurgatory$Watchers.tryCompleteWatched() 
DelayedOperation.scala:371
kafka.server.DelayedOperationPurgatory.checkAndComplete(Object) 
DelayedOperation.scala:277
kafka.server.ReplicaManager.tryCompleteDelayedFetch(DelayedOperationKey) 
ReplicaManager.scala:307
kafka.cluster.Partition.$anonfun$appendRecordsToLeader$1(Partition, 
MemoryRecords, boolean, int) Partition.scala:743
kafka.cluster.Partition$$Lambda$856.apply()
kafka.utils.CoreUtils$.inLock(Lock, Function0) CoreUtils.scala:251
kafka.utils.CoreUtils$.inReadLock(ReadWriteLock, Function0) CoreUtils.scala:257
kafka.cluster.Partition.appendRecordsToLeader(MemoryRecords, boolean, int) 
Partition.scala:729
kafka.server.ReplicaManager.$anonfun$appendToLocalLog$2(ReplicaManager, 
boolean, boolean, short, Tuple2) ReplicaManager.scala:735
kafka.server.ReplicaManager$$Lambda$844.apply(Object)
scala.collection.TraversableLike.$anonfun$map$1(Function1, Builder, Object) 
TraversableLike.scala:233
scala.collection.TraversableLike$$Lambda$10.apply(Object)
scala.collection.mutable.HashMap.$anonfun$foreach$1(Function1, DefaultEntry) 
HashMap.scala:145
scala.collection.mutable.HashMap$$Lambda$22.apply(Object)
scala.collection.mutable.HashTable.foreachEntry(Function1) HashTable.scala:235
scala.collection.mutable.HashTable.foreachEntry$(HashTable, Function1) 
HashTable.scala:228
scala.collection.mutable.HashMap.foreachEntry(Function1) HashMap.scala:40
scala.collection.mutable.HashMap.foreach(Function1) HashMap.scala:145
scala.collection.TraversableLike.map(Function1, CanBuildFrom) 
TraversableLike.scala:233
scala.collection.TraversableLike.map$(TraversableLike, Function1, CanBuildFrom) 
TraversableLike.scala:226
scala.collection.AbstractTraversable.map(Function1, CanBuildFrom) 
Traversable.scala:104
kafka.server.ReplicaManager.appendToLocalLog(boolean, boolean, Map, short) 
ReplicaManager.scala:723
kafka.server.ReplicaManager.appendRecords(long, short, boolean, boolean, Map, 
Function1, Option, Function1) ReplicaManager.scala:470
kafka.server.KafkaApis.handleProduceRequest(RequestChannel$Request) 
KafkaApis.scala:482
kafka.server.KafkaApis.handle(RequestChannel$Request) KafkaApis.scala:106
kafka.server.KafkaRequestHandler.run() KafkaRequestHandler.scala:69
java.lang.Thread.run() Thread.java:748

-Murthy

> Possible deadlock in kafka.cluster.Partition
> 
>
> Key: KAFKA-7697
> URL: https://issues.apache.org/jira/browse/KAFKA-7697
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Gian Merlino
>Assignee: Rajini Sivaram
>Priority: Blocker
> Fix For: 2.2.0, 2.1.1
>
> Attachments: kafka.log, threaddump.txt
>
>
> After upgrading a fairly busy broker from 0.10.2.0 to 2.1.0, it locked up 
> within a few minutes (by "locked up" I mean that all request handler threads 
> were busy, and other brokers reported that they couldn't communicate with 
> it). I restarted it a few times and it did the same thing each time. After 
> downgrading to 0.10.2.0, the broker was stable. I attached a thread dump from 
> the last attempt on 2.1.0 that shows lots of kafka-request-handler- threads 
> trying to acquire the leaderIsrUpdateLock lock in kafka.cluster.Partition.
> It jumps out that there are two threads that already have some read lock 
> (can't tell which one) and are trying to acquire a second one (on tw

[jira] [Commented] (KAFKA-7697) Possible deadlock in kafka.cluster.Partition

2019-05-02 Thread Jackson Westeen (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831713#comment-16831713
 ] 

Jackson Westeen commented on KAFKA-7697:


Currently experiencing this on Kafka 2.1.0; I happen to have a thread dump handy 
if it's useful! We're noticing it occur seemingly at random, but partition 
reassignment and heavy produce bursts seem to exacerbate this issue. kill -9 
has been the only solution for us thus far, as tons of open file descriptors start 
to accumulate.

[^kafka.log]

> Possible deadlock in kafka.cluster.Partition
> 
>
> Key: KAFKA-7697
> URL: https://issues.apache.org/jira/browse/KAFKA-7697
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Gian Merlino
>Assignee: Rajini Sivaram
>Priority: Blocker
> Fix For: 2.2.0, 2.1.1
>
> Attachments: kafka.log, threaddump.txt
>
>
> After upgrading a fairly busy broker from 0.10.2.0 to 2.1.0, it locked up 
> within a few minutes (by "locked up" I mean that all request handler threads 
> were busy, and other brokers reported that they couldn't communicate with 
> it). I restarted it a few times and it did the same thing each time. After 
> downgrading to 0.10.2.0, the broker was stable. I attached a thread dump from 
> the last attempt on 2.1.0 that shows lots of kafka-request-handler- threads 
> trying to acquire the leaderIsrUpdateLock lock in kafka.cluster.Partition.
> It jumps out that there are two threads that already have some read lock 
> (can't tell which one) and are trying to acquire a second one (on two 
> different read locks: 0x000708184b88 and 0x00070821f188): 
> kafka-request-handler-1 and kafka-request-handler-4. Both are handling a 
> produce request, and in the process of doing so, are calling 
> Partition.fetchOffsetSnapshot while trying to complete a DelayedFetch. At the 
> same time, both of those locks have writers from other threads waiting on 
> them (kafka-request-handler-2 and kafka-scheduler-6). Neither of those locks 
> appear to have writers that hold them (if only because no threads in the dump 
> are deep enough in inWriteLock to indicate that).
> ReentrantReadWriteLock in nonfair mode prioritizes waiting writers over 
> readers. Is it possible that kafka-request-handler-1 and 
> kafka-request-handler-4 are each trying to read-lock the partition that is 
> currently locked by the other one, and they're both parked waiting for 
> kafka-request-handler-2 and kafka-scheduler-6 to get write locks, which they 
> never will, because the former two threads own read locks and aren't giving 
> them up?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8315) Cannot pass Materialized into a join operation

2019-05-02 Thread Andrew (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831735#comment-16831735
 ] 

Andrew commented on KAFKA-8315:
---

Ok, thanks. 

However, I thought retention period was meant to be independent of grace 
period, like it is for grouped aggregations? I can see from the code though 
that the retention is explicitly set to the grace period.

{code:java}
@SuppressWarnings("deprecation") // continuing to support Windows#maintainMs/segmentInterval in fallback mode
private static <K, V> StoreBuilder<WindowStore<K, V>> joinWindowStoreBuilder(final String joinName,
                                                                              final JoinWindows windows,
                                                                              final Serde<K> keySerde,
                                                                              final Serde<V> valueSerde) {
    return Stores.windowStoreBuilder(
        Stores.persistentWindowStore(
            joinName + "-store",
            Duration.ofMillis(windows.size() + windows.gracePeriodMs()),  // <------
            Duration.ofMillis(windows.size()),
            true
        ),
        keySerde,
        valueSerde
    );
}
{code}


Andy Smith   [1 minute ago]
so it looks like the grace period defines the retention period...

> Cannot pass Materialized into a join operation
> --
>
> Key: KAFKA-8315
> URL: https://issues.apache.org/jira/browse/KAFKA-8315
> Project: Kafka
>  Issue Type: Bug
>Reporter: Andrew
>Assignee: John Roesler
>Priority: Major
>
> The documentation says to use `Materialized` not `JoinWindows.until()` 
> (https://kafka.apache.org/22/javadoc/org/apache/kafka/streams/kstream/JoinWindows.html#until-long-),
>  but there is no where to pass a `Materialized` instance to the join 
> operation, only to the group operation is supported it seems.
>  
> Slack conversation here : 
> https://confluentcommunity.slack.com/archives/C48AHTCUQ/p1556799561287300



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-8315) Cannot pass Materialized into a join operation

2019-05-02 Thread Andrew (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831735#comment-16831735
 ] 

Andrew edited comment on KAFKA-8315 at 5/2/19 4:27 PM:
---

Ok, thanks.

However, I thought retention period was meant to be independent of grace 
period, like it is for grouped aggregations? I can see from the code though 
that the retention is explicitly set to the grace period.

{code:java}
@SuppressWarnings("deprecation") // continuing to support Windows#maintainMs/segmentInterval in fallback mode
private static <K, V> StoreBuilder<WindowStore<K, V>> joinWindowStoreBuilder(final String joinName,
                                                                              final JoinWindows windows,
                                                                              final Serde<K> keySerde,
                                                                              final Serde<V> valueSerde) {
    return Stores.windowStoreBuilder(
        Stores.persistentWindowStore(
            joinName + "-store",
            Duration.ofMillis(windows.size() + windows.gracePeriodMs()),  // <------
            Duration.ofMillis(windows.size()),
            true
        ),
        keySerde,
        valueSerde
    );
}
{code}


 so it looks like the grace period defines the retention period...


was (Author: the4thamigo_uk):
Ok, thanks. 

However, I thought retention period was meant to be independent of grace 
period, like it is for grouped aggregations? I can see from the code though 
that the retention is explicitly set to the grace period.

   \{Code}
@SuppressWarnings("deprecation") // continuing to support 
Windows#maintainMs/segmentInterval in fallback mode
    private static  StoreBuilder> 
joinWindowStoreBuilder(final String joinName,

 final JoinWindows windows,

 final Serde keySerde,

 final Serde valueSerde) {
    return Stores.windowStoreBuilder(
    Stores.persistentWindowStore(
    joinName + "-store",
    Duration.ofMillis(windows.size() + windows.gracePeriodMs()),  
<
    Duration.ofMillis(windows.size()),
    true
    ),
    keySerde,
    valueSerde
    );
    }
{Code}


Andy Smith   [1 minute ago]
so it looks like the grace period defines the retention period...

> Cannot pass Materialized into a join operation
> --
>
> Key: KAFKA-8315
> URL: https://issues.apache.org/jira/browse/KAFKA-8315
> Project: Kafka
>  Issue Type: Bug
>Reporter: Andrew
>Assignee: John Roesler
>Priority: Major
>
> The documentation says to use `Materialized` not `JoinWindows.until()` 
> (https://kafka.apache.org/22/javadoc/org/apache/kafka/streams/kstream/JoinWindows.html#until-long-),
>  but there is no where to pass a `Materialized` instance to the join 
> operation, only to the group operation is supported it seems.
>  
> Slack conversation here : 
> https://confluentcommunity.slack.com/archives/C48AHTCUQ/p1556799561287300



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-8315) Cannot pass Materialized into a join operation

2019-05-02 Thread Andrew (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831735#comment-16831735
 ] 

Andrew edited comment on KAFKA-8315 at 5/2/19 4:28 PM:
---

Ok, thanks.

However, I thought retention period was meant to be independent of grace 
period, like it is for grouped aggregations? I can see from the code though 
that the retention is explicitly set to the grace period.
{code:java}
@SuppressWarnings("deprecation") // continuing to support Windows#maintainMs/segmentInterval in fallback mode
private static <K, V> StoreBuilder<WindowStore<K, V>> joinWindowStoreBuilder(final String joinName,
                                                                              final JoinWindows windows,
                                                                              final Serde<K> keySerde,
                                                                              final Serde<V> valueSerde) {
    return Stores.windowStoreBuilder(
        Stores.persistentWindowStore(
            joinName + "-store",
            Duration.ofMillis(windows.size() + windows.gracePeriodMs()),
            Duration.ofMillis(windows.size()),
            true
        ),
        keySerde,
        valueSerde
    );
}
{code}
so it looks like the grace period defines the retention period...


was (Author: the4thamigo_uk):
Ok, thanks.

However, I thought retention period was meant to be independent of grace 
period, like it is for grouped aggregations? I can see from the code though 
that the retention is explicitly set to the grace period.

{Code}
 @SuppressWarnings("deprecation") // continuing to support 
Windows#maintainMs/segmentInterval in fallback mode
     private static  StoreBuilder> 
joinWindowStoreBuilder(final String joinName,
    
  final JoinWindows windows,
    
  final Serde keySerde,
    
  final Serde valueSerde)

{     return Stores.windowStoreBuilder(     
Stores.persistentWindowStore(     joinName + "-store",  
   Duration.ofMillis(windows.size() + windows.gracePeriodMs()),  
<     Duration.ofMillis(windows.size()),    
 true     ),     keySerde,     valueSerde   
  );     }

 

{Code}


 so it looks like the grace period defines the retention period...

> Cannot pass Materialized into a join operation
> --
>
> Key: KAFKA-8315
> URL: https://issues.apache.org/jira/browse/KAFKA-8315
> Project: Kafka
>  Issue Type: Bug
>Reporter: Andrew
>Assignee: John Roesler
>Priority: Major
>
> The documentation says to use `Materialized` not `JoinWindows.until()` 
> (https://kafka.apache.org/22/javadoc/org/apache/kafka/streams/kstream/JoinWindows.html#until-long-),
>  but there is no where to pass a `Materialized` instance to the join 
> operation, only to the group operation is supported it seems.
>  
> Slack conversation here : 
> https://confluentcommunity.slack.com/archives/C48AHTCUQ/p1556799561287300



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8248) Producer may fail IllegalStateException

2019-05-02 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831738#comment-16831738
 ] 

ASF GitHub Bot commented on KAFKA-8248:
---

hachikuji commented on pull request #6613: KAFKA-8248; Ensure time updated 
before sending transactional request
URL: https://github.com/apache/kafka/pull/6613
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Producer may fail IllegalStateException
> ---
>
> Key: KAFKA-8248
> URL: https://issues.apache.org/jira/browse/KAFKA-8248
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 2.0.0
>Reporter: Matthias J. Sax
>Assignee: Jason Gustafson
>Priority: Major
>
> In a Kafka Streams application, we observed the following log from the 
> producer:
> {quote}2019-04-17T01:58:25.898Z 17466081 [kafka-producer-network-thread | 
> client-id-enrichment-gcoint-StreamThread-7-0_10-producer] ERROR 
> org.apache.kafka.clients.producer.internals.Sender - [Producer 
> clientId=client-id-enrichment-gcoint-StreamThread-7-0_10-producer, 
> transactionalId=application-id-enrichment-gcoint-0_10] Uncaught error in 
> kafka producer I/O thread: 
> 2019-04-17T01:58:25.898Z java.lang.IllegalStateException: Attempt to send a 
> request to node 1 which is not ready.
> 2019-04-17T01:58:25.898Z at 
> org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:430)
> 2019-04-17T01:58:25.898Z at 
> org.apache.kafka.clients.NetworkClient.send(NetworkClient.java:411)
> 2019-04-17T01:58:25.898Z at 
> org.apache.kafka.clients.producer.internals.Sender.maybeSendTransactionalRequest(Sender.java:362)
> 2019-04-17T01:58:25.898Z at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:214)
> 2019-04-17T01:58:25.898Z at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:163)
> 2019-04-17T01:58:25.898Z at java.lang.Thread.run(Thread.java:748)
> {quote}
> Later, Kafka Streams (running with EOS enabled) shuts down with a 
> `TimeoutException` that occurs during rebalance. It seems that the above error 
> results in this `TimeoutException`. However, an `IllegalStateException` seems 
> to indicate a bug in the producer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8315) Cannot pass Materialized into a join operation

2019-05-02 Thread Andrew (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831747#comment-16831747
 ] 

Andrew commented on KAFKA-8315:
---

I should also mention my use case from the original slack community 
conversation : 

I am performing a large historical inner join (2 years) of two streams (using 
event time), followed by an aggregation. 

For the join, I have : 2 days window into the past only and a grace period of 2 
days (I don't want to accept updates to the aggregation beyond this grace 
period).
For the grouped aggregation I have : a tumbling window of 1 second and a grace 
of 4 days.

For the grouped aggregation if I also set the group retention using 
Materialized, I can see that this affects the retention period of the 
underlying KSTREAM-AGGREGATE-STATE-STORE topics. This seems to be independent 
of the grace period.

However, using `until()` for the JoinWindows does not do the same for the 
KSTREAM-JOINTHIS and KSTREAM-JOINOTHER topics, as I would have expected. These 
topics always have a 120-hour retention period set on the topic.

What I see is that I get no aggregation records other than for the most recent 
120 hour period. So the vast majority of my 2 years fails to be 
joined/aggregated, and outputs nothing.
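
For concreteness, a minimal sketch (the store name, key/value types, and the one-year retention are illustrative assumptions) of the knob that exists on the aggregation side but has no counterpart on the join side:

{code:java}
import java.time.Duration;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.WindowStore;

class RetentionSketch {
    // Sketch only: on a grouped/windowed aggregation, store retention can be set via
    // Materialized.withRetention(), independently of the window's grace period.
    static Materialized<String, Long, WindowStore<Bytes, byte[]>> aggregateStore() {
        return Materialized.<String, Long, WindowStore<Bytes, byte[]>>as("aggregate-store") // hypothetical store name
                .withRetention(Duration.ofDays(365));
    }
    // There is no equivalent Materialized parameter on KStream#join(...), so the join
    // store's retention is derived from the JoinWindows (size + grace) alone.
}
{code}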

> Cannot pass Materialized into a join operation
> --
>
> Key: KAFKA-8315
> URL: https://issues.apache.org/jira/browse/KAFKA-8315
> Project: Kafka
>  Issue Type: Bug
>Reporter: Andrew
>Assignee: John Roesler
>Priority: Major
>
> The documentation says to use `Materialized` not `JoinWindows.until()` 
> (https://kafka.apache.org/22/javadoc/org/apache/kafka/streams/kstream/JoinWindows.html#until-long-),
>  but there is no where to pass a `Materialized` instance to the join 
> operation, only to the group operation is supported it seems.
>  
> Slack conversation here : 
> https://confluentcommunity.slack.com/archives/C48AHTCUQ/p1556799561287300



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-8315) Cannot pass Materialized into a join operation

2019-05-02 Thread Andrew (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831747#comment-16831747
 ] 

Andrew edited comment on KAFKA-8315 at 5/2/19 4:39 PM:
---

I should also mention my use case from the original slack community 
conversation :

I am performing a large historical inner join (2 years) of two streams (using 
event time), followed by an aggregation.

For the join, I have : 2 days of join window into the past only, with a grace 
period of 2 days (I don't want to accept updates to the aggregation beyond this 
grace period).
For the grouped aggregation I have : a tumbling window of 1 second and a grace 
of 4 days.

For the grouped aggregation if I also set the group retention using 
Materialized, I can see that this affects the retention period of the 
underlying KSTREAM-AGGREGATE-STATE-STORE topics. This seems to be independent 
of the grace period.

However, using `until()` for the JoinWindows does not do the same for the 
KSTREAM-JOINTHIS and KSTREAM-JOINOTHER topics, as I would have expected. These 
topics always have a 120-hour retention period set on the topic.

What I see is that I get no aggregation records other than for the most recent 
120 hour period. So the vast majority of my 2 years fails to be 
joined/aggregated, and outputs nothing.


was (Author: the4thamigo_uk):
I should also mention my use case from the original slack community 
conversation : 

I am performing a large historical inner join (2 years) of two streams (using 
event time), followed by an aggregation. 

For the join, I have : 2 days window into the past only and a grace period of 2 
days ( I dont want to accept updates to the aggregation beyond this grace 
period).
For the grouped aggregation I have : a tumbling window of 1 second and a grace 
of 4 days

For the grouped aggregation if I also set the group retention using 
Materialized, I can see that this affects the retention period of the 
underlying KSTREAM-AGGREGATE-STATE-STORE topics. This seems to be independent 
of the grace period.

However, using `until()` for the JoinWindows does not do the same for the 
KSTREAM-JOINTHIS and KSTREAM-JOINOTHER topics, as I would have expected. These 
topics always have 120 hours retention period set on the topic.

What I see is that I get no aggregation records other than for the most recent 
120 hour period. So the vast majority of my 2 years fails to be 
joined/aggregated, and outputs nothing.

> Cannot pass Materialized into a join operation
> --
>
> Key: KAFKA-8315
> URL: https://issues.apache.org/jira/browse/KAFKA-8315
> Project: Kafka
>  Issue Type: Bug
>Reporter: Andrew
>Assignee: John Roesler
>Priority: Major
>
> The documentation says to use `Materialized` not `JoinWindows.until()` 
> (https://kafka.apache.org/22/javadoc/org/apache/kafka/streams/kstream/JoinWindows.html#until-long-),
>  but there is no where to pass a `Materialized` instance to the join 
> operation, only to the group operation is supported it seems.
>  
> Slack conversation here : 
> https://confluentcommunity.slack.com/archives/C48AHTCUQ/p1556799561287300



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-8315) Cannot pass Materialized into a join operation

2019-05-02 Thread Andrew (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831747#comment-16831747
 ] 

Andrew edited comment on KAFKA-8315 at 5/2/19 4:40 PM:
---

I should also mention my use case from the original slack community 
conversation :

I am performing a large historical inner join (2 years) of two streams (using 
event time), followed by an aggregation.

For the join, I have : 2 days of join window into the past only, with a grace 
period of 2 days (I don't want to accept updates to the aggregation beyond this 
grace period).
For the grouped aggregation I have : a tumbling window of 1 second and a grace 
of 4 days.

For the grouped aggregation if I also set the group retention using 
Materialized, I can see that this affects the retention period of the 
underlying KSTREAM-AGGREGATE-STATE-STORE topics. This seems to be independent 
of the grace period.

However, using `until()` for the JoinWindows does not do the equivalent for the 
KSTREAM-JOINTHIS and KSTREAM-JOINOTHER topics, as I would have expected. These 
topics always have a 120-hour retention period set on the topic.

What I see is that I get no aggregation records other than for the most recent 
120 hour period. So the vast majority of my 2 years fails to be 
joined/aggregated, and outputs nothing.


was (Author: the4thamigo_uk):
I should also mention my use case from the original slack community 
conversation :

I am performing a large historical inner join (2 years) of two streams (using 
event time), followed by an aggregation.

For the join, I have : 2 days of join window into the past only, with a grace 
period of 2 days ( I dont want to accept updates to the aggregation beyond this 
grace period).
 For the grouped aggregation I have : a tumbling window of 1 second and a grace 
of 4 days

For the grouped aggregation if I also set the group retention using 
Materialized, I can see that this affects the retention period of the 
underlying KSTREAM-AGGREGATE-STATE-STORE topics. This seems to be independent 
of the grace period.

However, using `until()` for the JoinWindows does not do the same for the 
KSTREAM-JOINTHIS and KSTREAM-JOINOTHER topics, as I would have expected. These 
topics always have 120 hours retention period set on the topic.

What I see is that I get no aggregation records other than for the most recent 
120 hour period. So the vast majority of my 2 years fails to be 
joined/aggregated, and outputs nothing.

> Cannot pass Materialized into a join operation
> --
>
> Key: KAFKA-8315
> URL: https://issues.apache.org/jira/browse/KAFKA-8315
> Project: Kafka
>  Issue Type: Bug
>Reporter: Andrew
>Assignee: John Roesler
>Priority: Major
>
> The documentation says to use `Materialized` not `JoinWindows.until()` 
> (https://kafka.apache.org/22/javadoc/org/apache/kafka/streams/kstream/JoinWindows.html#until-long-),
>  but there is no where to pass a `Materialized` instance to the join 
> operation, only to the group operation is supported it seems.
>  
> Slack conversation here : 
> https://confluentcommunity.slack.com/archives/C48AHTCUQ/p1556799561287300



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-8315) Cannot pass Materialized into a join operation

2019-05-02 Thread Andrew (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831747#comment-16831747
 ] 

Andrew edited comment on KAFKA-8315 at 5/2/19 4:53 PM:
---

I should also mention my use case from the original slack community 
conversation :

I am performing a large historical inner join (2 years) of two streams (using 
event time), followed by an aggregation.

For the join, I have : 2 days of join window into the past only, with a grace 
period of 2 days (I don't want to accept updates to the aggregation beyond this 
grace period).
For the grouped aggregation I have : a tumbling window of 1 second and a grace 
of 4 days.

For the grouped aggregation if I also set the group retention using 
Materialized, I can see that this affects the retention period of the 
underlying KSTREAM-AGGREGATE-STATE-STORE topics. This seems to be independent 
of the grace period.

However, using `until()` for the JoinWindows does not do the equivalent for the 
KSTREAM-JOINTHIS and KSTREAM-JOINOTHER topics, as I would have expected. These 
topics always have a 120-hour retention period set on the topic.

What I see is that I get no aggregation records other than for the most recent 
120 hour period. So the vast majority of my 2 years fails to be 
joined/aggregated, and outputs nothing.

{Quote}
Andy Smith   [1 minute ago]
So, it _is_ right, that I should be able to set the retention period 
independently of the grace period?

e.g. the use case is :
Perform a windowed aggregation, say, with a window of 1 day, and a grace of 1 
day, over the whole of 1 year.
I don't want to update my windows if data arrives more than 1 day after each 
window expires as stream-time progresses (hence the grace period is 1 day)
I want to ensure I output joined and aggregated values for all days over the 
entire year

In order to do this I should set :

join window = 1 day
join grace = 1 day
join retention = 1 year
group window = 1 day
group grace = 1 day
group retention = 1 year

This right? (edited)

Matthias J Sax   [< 1 minute ago]
Yes.
{Quote}

 


was (Author: the4thamigo_uk):
I should also mention my use case from the original slack community 
conversation :

I am performing a large historical inner join (2 years) of two streams (using 
event time), followed by an aggregation.

For the join, I have : 2 days of join window into the past only, with a grace 
period of 2 days ( I dont want to accept updates to the aggregation beyond this 
grace period).
 For the grouped aggregation I have : a tumbling window of 1 second and a grace 
of 4 days

For the grouped aggregation if I also set the group retention using 
Materialized, I can see that this affects the retention period of the 
underlying KSTREAM-AGGREGATE-STATE-STORE topics. This seems to be independent 
of the grace period.

However, using `until()` for the JoinWindows does not do the equvalent for the 
KSTREAM-JOINTHIS and KSTREAM-JOINOTHER topics, as I would have expected. These 
topics always have 120 hours retention period set on the topic.

What I see is that I get no aggregation records other than for the most recent 
120 hour period. So the vast majority of my 2 years fails to be 
joined/aggregated, and outputs nothing.

> Cannot pass Materialized into a join operation
> --
>
> Key: KAFKA-8315
> URL: https://issues.apache.org/jira/browse/KAFKA-8315
> Project: Kafka
>  Issue Type: Bug
>Reporter: Andrew
>Assignee: John Roesler
>Priority: Major
>
> The documentation says to use `Materialized` not `JoinWindows.until()` 
> (https://kafka.apache.org/22/javadoc/org/apache/kafka/streams/kstream/JoinWindows.html#until-long-),
>  but there is no where to pass a `Materialized` instance to the join 
> operation, only to the group operation is supported it seems.
>  
> Slack conversation here : 
> https://confluentcommunity.slack.com/archives/C48AHTCUQ/p1556799561287300



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8317) ClassCastException using KTable.suppress()

2019-05-02 Thread Andrew (JIRA)
Andrew created KAFKA-8317:
-

 Summary: ClassCastException using KTable.suppress()
 Key: KAFKA-8317
 URL: https://issues.apache.org/jira/browse/KAFKA-8317
 Project: Kafka
  Issue Type: Bug
Reporter: Andrew


I am trying to use `KTable.suppress()` and I am getting the following error :

{code}
java.lang.ClassCastException: org.apache.kafka.streams.kstream.Windowed cannot be cast to java.lang.String
    at org.apache.kafka.common.serialization.StringSerializer.serialize(StringSerializer.java:28)
    at org.apache.kafka.streams.kstream.internals.suppress.KTableSuppressProcessor.buffer(KTableSuppressProcessor.java:95)
    at org.apache.kafka.streams.kstream.internals.suppress.KTableSuppressProcessor.process(KTableSuppressProcessor.java:87)
    at org.apache.kafka.streams.kstream.internals.suppress.KTableSuppressProcessor.process(KTableSuppressProcessor.java:40)
    at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:117)
{code}

My code is as follows :

{code:java}
final KTable<Windowed<String>, GenericRecord> groupTable = groupedStream
        .aggregate(lastAggregator, lastAggregator, materialized);

final KTable<Windowed<String>, GenericRecord> suppressedTable =
        groupTable.suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()));

// write the change-log stream to the topic
suppressedTable.toStream((k, v) -> k.key())
        .mapValues(joinValueMapper::apply)
        .to(props.joinTopic());
{code}


The code without using `suppressedTable` works... What am I doing wrong?


Someone else has encountered the same issue :

https://gist.github.com/robie2011/1caa4772b60b5a6f993e6f98e792a380


Slack conversation : 
https://confluentcommunity.slack.com/archives/C48AHTCUQ/p1556633088239800



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8315) Cannot pass Materialized into a join operation

2019-05-02 Thread Andrew (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew updated KAFKA-8315:
--
Description: 
The documentation says to use `Materialized` not `JoinWindows.until()` 
([https://kafka.apache.org/22/javadoc/org/apache/kafka/streams/kstream/JoinWindows.html#until-long-]), 
but there is nowhere to pass a `Materialized` instance to the join operation; 
only the group operation seems to support it.

Slack conversation here : 
[https://confluentcommunity.slack.com/archives/C48AHTCUQ/p1556799561287300]

[Additional]

From what I understand, the retention period should be independent of the 
grace period, so I think this is more than a documentation fix (see comments 
below)

  was:
The documentation says to use `Materialized` not `JoinWindows.until()` 
(https://kafka.apache.org/22/javadoc/org/apache/kafka/streams/kstream/JoinWindows.html#until-long-),
 but there is no where to pass a `Materialized` instance to the join operation, 
only to the group operation is supported it seems.

 

Slack conversation here : 
https://confluentcommunity.slack.com/archives/C48AHTCUQ/p1556799561287300


> Cannot pass Materialized into a join operation
> --
>
> Key: KAFKA-8315
> URL: https://issues.apache.org/jira/browse/KAFKA-8315
> Project: Kafka
>  Issue Type: Bug
>Reporter: Andrew
>Assignee: John Roesler
>Priority: Major
>
> The documentation says to use `Materialized` not `JoinWindows.until()` 
> ([https://kafka.apache.org/22/javadoc/org/apache/kafka/streams/kstream/JoinWindows.html#until-long-]),
>  but there is no where to pass a `Materialized` instance to the join 
> operation, only to the group operation is supported it seems.
>  
> Slack conversation here : 
> [https://confluentcommunity.slack.com/archives/C48AHTCUQ/p1556799561287300
> ]
> [Additional]
> From what I understand, the retention period should be independent of the 
> grace period, so I think this is more than a documentation fix (see comments 
> below)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8315) Cannot pass Materialized into a join operation

2019-05-02 Thread Andrew (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew updated KAFKA-8315:
--
Description: 
The documentation says to use `Materialized` not `JoinWindows.until()` 
([https://kafka.apache.org/22/javadoc/org/apache/kafka/streams/kstream/JoinWindows.html#until-long-]), 
but there is nowhere to pass a `Materialized` instance to the join operation; 
only the group operation seems to support it.

Slack conversation here : 
[https://confluentcommunity.slack.com/archives/C48AHTCUQ/p1556799561287300]

[Additional]

From what I understand, the retention period should be independent of the 
grace period, so I think this is more than a documentation fix (see comments 
below)

  was:
The documentation says to use `Materialized` not `JoinWindows.until()` 
([https://kafka.apache.org/22/javadoc/org/apache/kafka/streams/kstream/JoinWindows.html#until-long-]),
 but there is no where to pass a `Materialized` instance to the join operation, 
only to the group operation is supported it seems.

 

Slack conversation here : 
[https://confluentcommunity.slack.com/archives/C48AHTCUQ/p1556799561287300
]

[Additional]

>From what I understand, the retention period should be independent of the 
>grace period, so I think this is more than a documentation fix (see comments 
>below)


> Cannot pass Materialized into a join operation
> --
>
> Key: KAFKA-8315
> URL: https://issues.apache.org/jira/browse/KAFKA-8315
> Project: Kafka
>  Issue Type: Bug
>Reporter: Andrew
>Assignee: John Roesler
>Priority: Major
>
> The documentation says to use `Materialized` not `JoinWindows.until()` 
> ([https://kafka.apache.org/22/javadoc/org/apache/kafka/streams/kstream/JoinWindows.html#until-long-]),
>  but there is no where to pass a `Materialized` instance to the join 
> operation, only to the group operation is supported it seems.
>  
> Slack conversation here : 
> [https://confluentcommunity.slack.com/archives/C48AHTCUQ/p1556799561287300]
> [Additional]
> From what I understand, the retention period should be independent of the 
> grace period, so I think this is more than a documentation fix (see comments 
> below)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8315) Cannot pass Materialized into a join operation - hence can't set retention period independent of grace

2019-05-02 Thread Andrew (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew updated KAFKA-8315:
--
Summary: Cannot pass Materialized into a join operation - hence can't set 
retention period independent of grace  (was: Cannot pass Materialized into a 
join operation)

> Cannot pass Materialized into a join operation - hence can't set retention 
> period independent of grace
> -
>
> Key: KAFKA-8315
> URL: https://issues.apache.org/jira/browse/KAFKA-8315
> Project: Kafka
>  Issue Type: Bug
>Reporter: Andrew
>Assignee: John Roesler
>Priority: Major
>
> The documentation says to use `Materialized` not `JoinWindows.until()` 
> ([https://kafka.apache.org/22/javadoc/org/apache/kafka/streams/kstream/JoinWindows.html#until-long-]),
>  but there is no where to pass a `Materialized` instance to the join 
> operation, only to the group operation is supported it seems.
>  
> Slack conversation here : 
> [https://confluentcommunity.slack.com/archives/C48AHTCUQ/p1556799561287300]
> [Additional]
> From what I understand, the retention period should be independent of the 
> grace period, so I think this is more than a documentation fix (see comments 
> below)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8313) KafkaStreams state not being updated properly after shutdown

2019-05-02 Thread Eric (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric updated KAFKA-8313:

Attachment: log.txt

> KafkaStreams state not being updated properly after shutdown
> 
>
> Key: KAFKA-8313
> URL: https://issues.apache.org/jira/browse/KAFKA-8313
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.0
> Environment: Single broker running on Ubuntu Linux.
>Reporter: Eric
>Priority: Minor
> Attachments: log.txt
>
>
> I am running a KafkaStreams inside a DropWizard server and I am trying to 
> detect when my stream shuts down (in case a non-recoverable error occurs).  I 
> was hoping I could use KafkaStreams.setStateListener() to be notified when a 
> state change occurs.  When I query the state, KafkaStreams is stuck in the 
> REBALANCING state even though its threads are all DEAD.
>  
> You can easily reproduce this by doing the following:
>  # Create a topic (I have one with 5 partitions)
>  # Create a simple Kafka stream consuming from that topic
>  # Create a StateListener and register it on that KafkaStreams
>  # Start the Kafka stream
>  # Once everything runs, delete the topic using kafka-topics.sh
> When deleting the topic, you will see the StreamThreads' state transition 
> from RUNNING to PARTITION_REVOKED and you will be notified with the 
> KafkaStreams REBALANCING state.  That's all good and expected.  Then the 
> StreamThreads transition to PENDING_SHUTDOWN and eventually to DEAD and the 
> KafkaStreams state is stuck into the REBALANCING thread.  I was expecting to 
> see a NOT_RUNNING state eventually... am I right?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8313) KafkaStreams state not being updated properly after shutdown

2019-05-02 Thread Eric (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric updated KAFKA-8313:

Attachment: kafka-8313-src.tgz

> KafkaStreams state not being updated properly after shutdown
> 
>
> Key: KAFKA-8313
> URL: https://issues.apache.org/jira/browse/KAFKA-8313
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.0
> Environment: Single broker running on Ubuntu Linux.
>Reporter: Eric
>Priority: Minor
> Attachments: kafka-8313-src.tgz, log.txt
>
>
> I am running a KafkaStreams inside a DropWizard server and I am trying to 
> detect when my stream shuts down (in case a non-recoverable error occurs).  I 
> was hoping I could use KafkaStreams.setStateListener() to be notified when a 
> state change occurs.  When I query the state, KafkaStreams is stuck in the 
> REBALANCING state even though its threads are all DEAD.
>  
> You can easily reproduce this by doing the following:
>  # Create a topic (I have one with 5 partitions)
>  # Create a simple Kafka stream consuming from that topic
>  # Create a StateListener and register it on that KafkaStreams
>  # Start the Kafka stream
>  # Once everything runs, delete the topic using kafka-topics.sh
> When deleting the topic, you will see the StreamThreads' state transition 
> from RUNNING to PARTITION_REVOKED and you will be notified with the 
> KafkaStreams REBALANCING state.  That's all good and expected.  Then the 
> StreamThreads transition to PENDING_SHUTDOWN and eventually to DEAD and the 
> KafkaStreams state is stuck into the REBALANCING thread.  I was expecting to 
> see a NOT_RUNNING state eventually... am I right?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8313) KafkaStreams state not being updated properly after shutdown

2019-05-02 Thread Eric (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831841#comment-16831841
 ] 

Eric commented on KAFKA-8313:
-

I have added two files to the issue:
 * kafka-8313-src.tgz: contains all the source code and maven setup to 
replicate the issue
 * log.txt: full execution log in DEBUG of the source code above

As you can see in the code, I have added a 
KafkaStreams#setUncaughtExceptionHandler(), but it is never called.  Every line 
in the log from my source code is prefixed by '##' so you can easily see 
what is going on.  

As I explained, you'll need to create a topic named 'input' before running the 
code.  When the application starts, you have 30 seconds to delete the topic.  
The Kafka stream will be closed when that timer is over. 

I tried to debug the issue, but I couldn't run Kafka in debug mode from my 
Eclipse.  Let me know if you need more details.
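
For reference, a minimal sketch of the wiring described in this report; the application id, bootstrap servers, and topology are placeholders rather than the attached source:
{code:java}
import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class StateListenerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "kafka-8313-demo");      // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");    // placeholder

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input").foreach((k, v) -> { });  // trivially consume the 'input' topic

        KafkaStreams streams = new KafkaStreams(builder.build(), props);

        // Log every state transition; per the report, NOT_RUNNING is expected
        // once all stream threads are DEAD, but REBALANCING is observed instead.
        streams.setStateListener((newState, oldState) ->
                System.out.println("## state change: " + oldState + " -> " + newState));

        // Handler for unrecoverable errors on a stream thread.
        streams.setUncaughtExceptionHandler((t, e) ->
                System.out.println("## uncaught exception on " + t.getName() + ": " + e));

        streams.start();
    }
}
{code}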

> KafkaStreams state not being updated properly after shutdown
> 
>
> Key: KAFKA-8313
> URL: https://issues.apache.org/jira/browse/KAFKA-8313
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.0
> Environment: Single broker running on Ubuntu Linux.
>Reporter: Eric
>Priority: Minor
> Attachments: kafka-8313-src.tgz, log.txt
>
>
> I am running a KafkaStreams inside a DropWizard server and I am trying to 
> detect when my stream shuts down (in case a non-recoverable error occurs).  I 
> was hoping I could use KafkaStreams.setStateListener() to be notified when a 
> state change occurs.  When I query the state, KafkaStreams is stuck in the 
> REBALANCING state even though its threads are all DEAD.
>  
> You can easily reproduce this by doing the following:
>  # Create a topic (I have one with 5 partitions)
>  # Create a simple Kafka stream consuming from that topic
>  # Create a StateListener and register it on that KafkaStreams
>  # Start the Kafka stream
>  # Once everything runs, delete the topic using kafka-topics.sh
> When deleting the topic, you will see the StreamThreads' state transition 
> from RUNNING to PARTITION_REVOKED and you will be notified with the 
> KafkaStreams REBALANCING state.  That's all good and expected.  Then the 
> StreamThreads transition to PENDING_SHUTDOWN and eventually to DEAD and the 
> KafkaStreams state is stuck in the REBALANCING state.  I was expecting to 
> see a NOT_RUNNING state eventually... am I right?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8317) ClassCastException using KTable.suppress()

2019-05-02 Thread Bill Bejeck (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831842#comment-16831842
 ] 

Bill Bejeck commented on KAFKA-8317:


Hi [~the4thamigo_uk],

I don't think you are doing anything wrong; it looks like you have uncovered a use 
case that isn't supported at the moment.  Currently, there is a PR 
[https://github.com/apache/kafka/pull/6646] attempting to fix a similar issue. 
Are you interested in submitting a PR to fix the problem?

 

 

> ClassCastException using KTable.suppress()
> --
>
> Key: KAFKA-8317
> URL: https://issues.apache.org/jira/browse/KAFKA-8317
> Project: Kafka
>  Issue Type: Bug
>Reporter: Andrew
>Priority: Major
>
> I am trying to use `KTable.suppress()` and I am getting the following error :
> {Code}
> java.lang.ClassCastException: org.apache.kafka.streams.kstream.Windowed 
> cannot be cast to java.lang.String
>     at 
> org.apache.kafka.common.serialization.StringSerializer.serialize(StringSerializer.java:28)
>     at 
> org.apache.kafka.streams.kstream.internals.suppress.KTableSuppressProcessor.buffer(KTableSuppressProcessor.java:95)
>     at 
> org.apache.kafka.streams.kstream.internals.suppress.KTableSuppressProcessor.process(KTableSuppressProcessor.java:87)
>     at 
> org.apache.kafka.streams.kstream.internals.suppress.KTableSuppressProcessor.process(KTableSuppressProcessor.java:40)
>     at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:117)
> {Code}
> My code is as follows :
> {Code}
>     final KTable, GenericRecord> groupTable = 
> groupedStream
>     .aggregate(lastAggregator, lastAggregator, materialized);
>     final KTable, GenericRecord> suppressedTable = 
> groupTable.suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()));
>     // write the change-log stream to the topic
>     suppressedTable.toStream((k, v) -> k.key())
>     .mapValues(joinValueMapper::apply)
>     .to(props.joinTopic());
> {Code}
> The code without using `suppressedTable` works... what am i doing wrong.
> Someone else has encountered the same issue :
> https://gist.github.com/robie2011/1caa4772b60b5a6f993e6f98e792a380
> Slack conversation : 
> https://confluentcommunity.slack.com/archives/C48AHTCUQ/p1556633088239800



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Issue Comment Deleted] (KAFKA-8317) ClassCastException using KTable.suppress()

2019-05-02 Thread Bill Bejeck (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-8317:
---
Comment: was deleted

(was: Hi [~the4thamigo_uk],

I don't think you are doing anything wrong, looks like you have uncovered a use 
case that isn't supported at the moment.  Currently, there is a PR 
[https://github.com/apache/kafka/pull/6646] attempting to fix a similar issue. 
Are you interested in submitting a PR to fix the problem?

 

 )

> ClassCastException using KTable.suppress()
> --
>
> Key: KAFKA-8317
> URL: https://issues.apache.org/jira/browse/KAFKA-8317
> Project: Kafka
>  Issue Type: Bug
>Reporter: Andrew
>Priority: Major
>
> I am trying to use `KTable.suppress()` and I am getting the following error :
> {Code}
> java.lang.ClassCastException: org.apache.kafka.streams.kstream.Windowed 
> cannot be cast to java.lang.String
>     at 
> org.apache.kafka.common.serialization.StringSerializer.serialize(StringSerializer.java:28)
>     at 
> org.apache.kafka.streams.kstream.internals.suppress.KTableSuppressProcessor.buffer(KTableSuppressProcessor.java:95)
>     at 
> org.apache.kafka.streams.kstream.internals.suppress.KTableSuppressProcessor.process(KTableSuppressProcessor.java:87)
>     at 
> org.apache.kafka.streams.kstream.internals.suppress.KTableSuppressProcessor.process(KTableSuppressProcessor.java:40)
>     at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:117)
> {Code}
> My code is as follows :
> {Code}
>     final KTable, GenericRecord> groupTable = 
> groupedStream
>     .aggregate(lastAggregator, lastAggregator, materialized);
>     final KTable, GenericRecord> suppressedTable = 
> groupTable.suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()));
>     // write the change-log stream to the topic
>     suppressedTable.toStream((k, v) -> k.key())
>     .mapValues(joinValueMapper::apply)
>     .to(props.joinTopic());
> {Code}
> The code without using `suppressedTable` works... what am i doing wrong.
> Someone else has encountered the same issue :
> https://gist.github.com/robie2011/1caa4772b60b5a6f993e6f98e792a380
> Slack conversation : 
> https://confluentcommunity.slack.com/archives/C48AHTCUQ/p1556633088239800



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-8317) ClassCastException using KTable.suppress()

2019-05-02 Thread Bill Bejeck (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831864#comment-16831864
 ] 

Bill Bejeck edited comment on KAFKA-8317 at 5/2/19 6:58 PM:


Hi [~the4thamigo_uk],

Thanks for reporting this. What version are you using?

In the meantime would something like the following suit your needs?
{noformat}
.suppress(untilTimeLimit(ofMillis(), maxRecords().emitEarlyWhenFull())){noformat}
-Bill


was (Author: bbejeck):
Hi [~the4thamigo_uk],

Thanks for reporting this. What version are you using?

In the meantime would something like the following work?
{noformat}
.suppress(untilTimeLimit(ofMillis(), maxRecords().emitEarlyWhenFull())){noformat}
-Bill

> ClassCastException using KTable.suppress()
> --
>
> Key: KAFKA-8317
> URL: https://issues.apache.org/jira/browse/KAFKA-8317
> Project: Kafka
>  Issue Type: Bug
>Reporter: Andrew
>Priority: Major
>
> I am trying to use `KTable.suppress()` and I am getting the following error :
> {Code}
> java.lang.ClassCastException: org.apache.kafka.streams.kstream.Windowed 
> cannot be cast to java.lang.String
>     at 
> org.apache.kafka.common.serialization.StringSerializer.serialize(StringSerializer.java:28)
>     at 
> org.apache.kafka.streams.kstream.internals.suppress.KTableSuppressProcessor.buffer(KTableSuppressProcessor.java:95)
>     at 
> org.apache.kafka.streams.kstream.internals.suppress.KTableSuppressProcessor.process(KTableSuppressProcessor.java:87)
>     at 
> org.apache.kafka.streams.kstream.internals.suppress.KTableSuppressProcessor.process(KTableSuppressProcessor.java:40)
>     at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:117)
> {Code}
> My code is as follows :
> {Code}
>     final KTable, GenericRecord> groupTable = 
> groupedStream
>     .aggregate(lastAggregator, lastAggregator, materialized);
>     final KTable, GenericRecord> suppressedTable = 
> groupTable.suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()));
>     // write the change-log stream to the topic
>     suppressedTable.toStream((k, v) -> k.key())
>     .mapValues(joinValueMapper::apply)
>     .to(props.joinTopic());
> {Code}
> The code without using `suppressedTable` works... what am i doing wrong.
> Someone else has encountered the same issue :
> https://gist.github.com/robie2011/1caa4772b60b5a6f993e6f98e792a380
> Slack conversation : 
> https://confluentcommunity.slack.com/archives/C48AHTCUQ/p1556633088239800



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8317) ClassCastException using KTable.suppress()

2019-05-02 Thread Bill Bejeck (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831864#comment-16831864
 ] 

Bill Bejeck commented on KAFKA-8317:


Hi [~the4thamigo_uk],

Thanks for reporting this. What version are you using?

In the meantime would something like the following work?
{noformat}
.suppress(untilTimeLimit(ofMillis(), maxRecords().emitEarlyWhenFull())){noformat}
-Bill
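
For readers skimming the digest, a filled-in sketch of that suggestion; the 30-second limit, 1000-record cap, and the generic helper are illustrative placeholders, not values from the discussion:
{code:java}
import static java.time.Duration.ofSeconds;
import static org.apache.kafka.streams.kstream.Suppressed.untilTimeLimit;
import static org.apache.kafka.streams.kstream.Suppressed.BufferConfig.maxRecords;

import org.apache.kafka.streams.kstream.KTable;

public class SuppressWorkaroundSketch {
    // Emit at most one update per key every 30 seconds, flushing early once
    // more than 1000 records are buffered, instead of waiting for window close.
    static <K, V> KTable<K, V> throttle(KTable<K, V> table) {
        return table.suppress(untilTimeLimit(ofSeconds(30), maxRecords(1000).emitEarlyWhenFull()));
    }
}
{code}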

> ClassCastException using KTable.suppress()
> --
>
> Key: KAFKA-8317
> URL: https://issues.apache.org/jira/browse/KAFKA-8317
> Project: Kafka
>  Issue Type: Bug
>Reporter: Andrew
>Priority: Major
>
> I am trying to use `KTable.suppress()` and I am getting the following error :
> {Code}
> java.lang.ClassCastException: org.apache.kafka.streams.kstream.Windowed 
> cannot be cast to java.lang.String
>     at 
> org.apache.kafka.common.serialization.StringSerializer.serialize(StringSerializer.java:28)
>     at 
> org.apache.kafka.streams.kstream.internals.suppress.KTableSuppressProcessor.buffer(KTableSuppressProcessor.java:95)
>     at 
> org.apache.kafka.streams.kstream.internals.suppress.KTableSuppressProcessor.process(KTableSuppressProcessor.java:87)
>     at 
> org.apache.kafka.streams.kstream.internals.suppress.KTableSuppressProcessor.process(KTableSuppressProcessor.java:40)
>     at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:117)
> {Code}
> My code is as follows :
> {Code}
>     final KTable, GenericRecord> groupTable = 
> groupedStream
>     .aggregate(lastAggregator, lastAggregator, materialized);
>     final KTable, GenericRecord> suppressedTable = 
> groupTable.suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()));
>     // write the change-log stream to the topic
>     suppressedTable.toStream((k, v) -> k.key())
>     .mapValues(joinValueMapper::apply)
>     .to(props.joinTopic());
> {Code}
> The code without using `suppressedTable` works... what am i doing wrong.
> Someone else has encountered the same issue :
> https://gist.github.com/robie2011/1caa4772b60b5a6f993e6f98e792a380
> Slack conversation : 
> https://confluentcommunity.slack.com/archives/C48AHTCUQ/p1556633088239800



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8220) Avoid kicking out members through rebalance timeout

2019-05-02 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831865#comment-16831865
 ] 

ASF GitHub Bot commented on KAFKA-8220:
---

abbccdda commented on pull request #: KAFKA-8220: Avoid kicking out members 
through rebalance timeout
URL: https://github.com/apache/kafka/pull/
 
 
   To make consumer group members more persistent, we want to avoid kicking out 
unjoined members through the rebalance timeout. The only exception is when the 
leader fails to join, because then we are at risk of having no assignment computed 
during the sync stage. The choice will be to kick out the non-responsive leader 
and choose a new leader if possible.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Avoid kicking out members through rebalance timeout
> ---
>
> Key: KAFKA-8220
> URL: https://issues.apache.org/jira/browse/KAFKA-8220
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
>
> As stated in KIP-345, we will no longer evict unjoined members out of the 
> group. We need to take care of the edge case where the leader fails to rejoin, and 
> switch to a new leader in that case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8317) ClassCastException using KTable.suppress()

2019-05-02 Thread John Roesler (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831897#comment-16831897
 ] 

John Roesler commented on KAFKA-8317:
-

It looks like the windowed aggregation isn't properly setting the serde for 
downstream. I'll take a look. I thought I'd already fixed this.

> ClassCastException using KTable.suppress()
> --
>
> Key: KAFKA-8317
> URL: https://issues.apache.org/jira/browse/KAFKA-8317
> Project: Kafka
>  Issue Type: Bug
>Reporter: Andrew
>Priority: Major
>
> I am trying to use `KTable.suppress()` and I am getting the following error :
> {Code}
> java.lang.ClassCastException: org.apache.kafka.streams.kstream.Windowed 
> cannot be cast to java.lang.String
>     at 
> org.apache.kafka.common.serialization.StringSerializer.serialize(StringSerializer.java:28)
>     at 
> org.apache.kafka.streams.kstream.internals.suppress.KTableSuppressProcessor.buffer(KTableSuppressProcessor.java:95)
>     at 
> org.apache.kafka.streams.kstream.internals.suppress.KTableSuppressProcessor.process(KTableSuppressProcessor.java:87)
>     at 
> org.apache.kafka.streams.kstream.internals.suppress.KTableSuppressProcessor.process(KTableSuppressProcessor.java:40)
>     at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:117)
> {Code}
> My code is as follows :
> {Code}
>     final KTable, GenericRecord> groupTable = 
> groupedStream
>     .aggregate(lastAggregator, lastAggregator, materialized);
>     final KTable, GenericRecord> suppressedTable = 
> groupTable.suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()));
>     // write the change-log stream to the topic
>     suppressedTable.toStream((k, v) -> k.key())
>     .mapValues(joinValueMapper::apply)
>     .to(props.joinTopic());
> {Code}
> The code without using `suppressedTable` works... what am i doing wrong.
> Someone else has encountered the same issue :
> https://gist.github.com/robie2011/1caa4772b60b5a6f993e6f98e792a380
> Slack conversation : 
> https://confluentcommunity.slack.com/archives/C48AHTCUQ/p1556633088239800



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-8317) ClassCastException using KTable.suppress()

2019-05-02 Thread John Roesler (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831897#comment-16831897
 ] 

John Roesler edited comment on KAFKA-8317 at 5/2/19 7:41 PM:
-

It looks like the windowed aggregation isn't properly setting the serde for 
downstream. I thought I'd already fixed this, but there might be another edge 
case...

Can you provide a more complete code snippet? In particular, we'd need to see 
how you're building the grouped stream before aggregate. Also, how you're 
building `materialized`.

Thanks for the report!
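
For anyone following along, a sketch of the kind of snippet being requested, with the serdes spelled out explicitly at each step; the String grouping key, window sizes, store name, and value serde are assumptions, not the reporter's actual code:
{code:java}
import java.time.Duration;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.kstream.*;
import org.apache.kafka.streams.state.WindowStore;

public class WindowedAggregateSketch {
    // Sketch only: a windowed aggregation with the key and value serdes passed
    // explicitly via Materialized, so nothing downstream has to fall back to the
    // default (String) serdes.
    static KTable<Windowed<String>, GenericRecord> aggregate(
            KGroupedStream<String, GenericRecord> groupedStream,
            Initializer<GenericRecord> initializer,
            Aggregator<String, GenericRecord, GenericRecord> aggregator,
            Serde<GenericRecord> valueSerde) {
        Materialized<String, GenericRecord, WindowStore<Bytes, byte[]>> materialized =
                Materialized.<String, GenericRecord, WindowStore<Bytes, byte[]>>as("last-agg-store")
                        .withKeySerde(Serdes.String())
                        .withValueSerde(valueSerde);
        return groupedStream
                .windowedBy(TimeWindows.of(Duration.ofDays(2)).grace(Duration.ofDays(2)))
                .aggregate(initializer, aggregator, materialized);
    }
}
{code}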


was (Author: vvcephei):
It looks like the windowed aggregation isn't properly setting the serde for 
downstream. I'll take a look. I thought I'd already fixed this.

> ClassCastException using KTable.suppress()
> --
>
> Key: KAFKA-8317
> URL: https://issues.apache.org/jira/browse/KAFKA-8317
> Project: Kafka
>  Issue Type: Bug
>Reporter: Andrew
>Priority: Major
>
> I am trying to use `KTable.suppress()` and I am getting the following error :
> {Code}
> java.lang.ClassCastException: org.apache.kafka.streams.kstream.Windowed 
> cannot be cast to java.lang.String
>     at 
> org.apache.kafka.common.serialization.StringSerializer.serialize(StringSerializer.java:28)
>     at 
> org.apache.kafka.streams.kstream.internals.suppress.KTableSuppressProcessor.buffer(KTableSuppressProcessor.java:95)
>     at 
> org.apache.kafka.streams.kstream.internals.suppress.KTableSuppressProcessor.process(KTableSuppressProcessor.java:87)
>     at 
> org.apache.kafka.streams.kstream.internals.suppress.KTableSuppressProcessor.process(KTableSuppressProcessor.java:40)
>     at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:117)
> {Code}
> My code is as follows :
> {Code}
>     final KTable, GenericRecord> groupTable = 
> groupedStream
>     .aggregate(lastAggregator, lastAggregator, materialized);
>     final KTable, GenericRecord> suppressedTable = 
> groupTable.suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()));
>     // write the change-log stream to the topic
>     suppressedTable.toStream((k, v) -> k.key())
>     .mapValues(joinValueMapper::apply)
>     .to(props.joinTopic());
> {Code}
> The code without using `suppressedTable` works... what am i doing wrong.
> Someone else has encountered the same issue :
> https://gist.github.com/robie2011/1caa4772b60b5a6f993e6f98e792a380
> Slack conversation : 
> https://confluentcommunity.slack.com/archives/C48AHTCUQ/p1556633088239800



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8315) Cannot pass Materialized into a join operation - hence cant set retention period independent of grace

2019-05-02 Thread John Roesler (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831927#comment-16831927
 ] 

John Roesler commented on KAFKA-8315:
-

Interesting. Thanks for the context. For some reason, the slack link doesn't 
take me to the thread.

Your understanding is almost spot-on.

The grace period defines how long after the window ends it will accept 
late-arriving records and update the results. The retention time defines how 
long we keep the state of the window in storage. Clearly, the retention time 
must be at least big enough to support the updates that may happen during the 
grace period, but it could be much larger, to support Interactive Queries even 
after the window is closed to updates.

Join windows are a little bit different, because they are not queryable. 
Therefore, there is no reason to have any retention beyond the grace period. 
This is also why there's no `Materialized` parameter. The state for the join is 
purely bookkeeping, not a "materialized view" in the data processing sense. 
Since you mention the apparent similarity with grouped aggregations, the fact 
that JoinWindows shares a class hierarchy with Windows, and the fact that it 
uses a "normal" WindowStore internally, was mostly for implementation 
convenience. It's actually a little semantically abusive if you really get into 
it, and I've heard a few times that people would like to break joins out and 
clean the whole situation up.

Things get really complicated when we need to compute defaults, though. The 
concepts of "grace" and "retention" used to be coupled into just "retention" 
(aka "until" aka "maintainMs"), and the default was set to 24h. So, if we pick 
any default grace period shorter than 24h, then some apps may start to drop 
late data that didn't before. Also, the "retention" configuration in Windows is 
only deprecated, not removed, so someone may set a retention time on the 
deprecated methods, but not a grace period, and we also need to do the "right 
thing" in that case. This is just in the way of justifying why the code is so 
complicated. Hopefully, we can drop the deprecated methods soonish and clean 
the whole thing up.

So, back to your actual behavior, you *should* see that the window stores for 
join windows use `Duration.ofMillis(windows.size() + windows.gracePeriodMs())`, 
as you pointed out above. The deprecated `until` should be ignored. It's 
possible the topic retention doesn't get updated when you change your configs, 
which would be a bug.

One thing I didn't understand is the arithmetic from your conversation. I'll 
take a shot and maybe you can set me straight...
You want to join 2 years of historical data.
For each join candidate, you only want to look back 2 days, so you set the join 
window to size=2 days.
You want to emit updated join results in the case of time-disordered records, 
but not indefinitely. Specifically, you only want to emit updated results up to 
2 days after the fact, so you set the grace period to 2 days as well.
With these configurations, you should see the retention time on the topic set 
to at least 4 days. 120 hours is 5 days, so this seems about right to me (there 
might be some fudge factor, I'm not sure).

I guess the big problem is that your data fails to join. I'd start with 
identifying some pair of records that you think *should* join, and then 
identify why they don't (could be a join window too small, or it could be the 
grace period too small).
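
To make the arithmetic above concrete, a small sketch against the Kafka Streams 2.2 API; the 1-day before/after split is an assumption chosen so the total window size comes out to the 2 days discussed:
{code:java}
import java.time.Duration;
import org.apache.kafka.streams.kstream.JoinWindows;

public class JoinRetentionSketch {
    public static void main(String[] args) {
        // A window whose total size is 2 days (1 day before + 1 day after),
        // still accepting time-disordered records for a further 2 days.
        JoinWindows joinWindows = JoinWindows.of(Duration.ofDays(1)).grace(Duration.ofDays(2));

        // As noted above, the join stores use size() + gracePeriodMs() as retention.
        long retentionMs = joinWindows.size() + joinWindows.gracePeriodMs();
        System.out.println("join store retention: "
                + Duration.ofMillis(retentionMs).toDays() + " days"); // prints 4
    }
}
{code}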

> Cannot pass Materialized into a join operation - hence cant set retention 
> period independent of grace
> -
>
> Key: KAFKA-8315
> URL: https://issues.apache.org/jira/browse/KAFKA-8315
> Project: Kafka
>  Issue Type: Bug
>Reporter: Andrew
>Assignee: John Roesler
>Priority: Major
>
> The documentation says to use `Materialized` not `JoinWindows.until()` 
> ([https://kafka.apache.org/22/javadoc/org/apache/kafka/streams/kstream/JoinWindows.html#until-long-]),
>  but there is no where to pass a `Materialized` instance to the join 
> operation, only to the group operation is supported it seems.
>  
> Slack conversation here : 
> [https://confluentcommunity.slack.com/archives/C48AHTCUQ/p1556799561287300]
> [Additional]
> From what I understand, the retention period should be independent of the 
> grace period, so I think this is more than a documentation fix (see comments 
> below)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6521) Store record timestamps in KTable stores

2019-05-02 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831988#comment-16831988
 ] 

ASF GitHub Bot commented on KAFKA-6521:
---

mjsax commented on pull request #6667: KAFKA-6521: Use timestamped stores for 
KTables
URL: https://github.com/apache/kafka/pull/6667
 
 
   *More detailed description of your change,
   if necessary. The PR title and PR message become
   the squashed commit message, so use a separate
   comment to ping reviewers.*
   
   *Summary of testing strategy (including rationale)
   for the feature or bug fix. Unit and/or integration
   tests are expected for any behaviour change and
   system tests should be considered for larger changes.*
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Store record timestamps in KTable stores
> 
>
> Key: KAFKA-6521
> URL: https://issues.apache.org/jira/browse/KAFKA-6521
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Major
>
> Currently, KTables store plain key-value pairs. However, it is desirable to 
> also store a timestamp for the record.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8285) Handle thread-id random switch on JVM for KStream

2019-05-02 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832114#comment-16832114
 ] 

ASF GitHub Bot commented on KAFKA-8285:
---

guozhangwang commented on pull request #6632: KAFKA-8285: enable localized 
thread IDs in Kafka Streams
URL: https://github.com/apache/kafka/pull/6632
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Handle thread-id random switch on JVM for KStream
> -
>
> Key: KAFKA-8285
> URL: https://issues.apache.org/jira/browse/KAFKA-8285
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
>
> Currently we are potentially at risk of being bitten by interleaving stream 
> thread ids. This is because we share the same initial config number when two 
> stream instances happen to be scheduled under one JVM. This would be a bad 
> scenario because we could end up with different thread-ids across restarts, 
> which invalidates static membership.
> For example, at one point the thread ids assigned were 1,2,3,4 for instance A and 
> 5, 6, 7, 8 for instance B. On a restart of both instances, the same atomic 
> update could assign 1,3,5,7 to A and 2,4,6,8 to B, which changes 
> their group.instance.ids.
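
A rough sketch of the difference; the counters and thread-name format below are simplified stand-ins for the Streams internals, purely for illustration:
{code:java}
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadIdSketch {
    // A single JVM-wide counter shared by every instance in the process hands out
    // indices that depend on construction order; for example, A gets 1,2,3,4 and
    // B gets 5,6,7,8 on one run, but A gets 1,3,5,7 and B gets 2,4,6,8 on the next.
    static final AtomicInteger SHARED_COUNTER = new AtomicInteger(0);

    // The fix described in this ticket amounts to numbering threads per instance,
    // so the indices are stable across restarts regardless of scheduling order.
    final AtomicInteger perInstanceCounter = new AtomicInteger(0);

    String nextSharedThreadName(String clientId) {
        return clientId + "-StreamThread-" + SHARED_COUNTER.incrementAndGet();
    }

    String nextLocalThreadName(String clientId) {
        return clientId + "-StreamThread-" + perInstanceCounter.incrementAndGet();
    }
}
{code}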



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-8285) Handle thread-id random switch on JVM for KStream

2019-05-02 Thread Guozhang Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-8285.
--
   Resolution: Fixed
Fix Version/s: 2.3.0

> Handle thread-id random switch on JVM for KStream
> -
>
> Key: KAFKA-8285
> URL: https://issues.apache.org/jira/browse/KAFKA-8285
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
> Fix For: 2.3.0
>
>
> Currently we are potentially at risk of being bitten by interleaving stream 
> thread ids. This is because we share the same initial config number when two 
> stream instances happen to be scheduled under one JVM. This would be a bad 
> scenario because we could end up with different thread-ids across restarts, 
> which invalidates static membership.
> For example, at one point the thread ids assigned were 1,2,3,4 for instance A and 
> 5, 6, 7, 8 for instance B. On a restart of both instances, the same atomic 
> update could assign 1,3,5,7 to A and 2,4,6,8 to B, which changes 
> their group.instance.ids.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8317) ClassCastException using KTable.suppress()

2019-05-02 Thread Guozhang Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832117#comment-16832117
 ] 

Guozhang Wang commented on KAFKA-8317:
--

Seems like https://issues.apache.org/jira/browse/KAFKA-8199 reports the same issue. 
Are they indeed blocking https://issues.apache.org/jira/browse/KAFKA-8289?

> ClassCastException using KTable.suppress()
> --
>
> Key: KAFKA-8317
> URL: https://issues.apache.org/jira/browse/KAFKA-8317
> Project: Kafka
>  Issue Type: Bug
>Reporter: Andrew
>Priority: Major
>
> I am trying to use `KTable.suppress()` and I am getting the following error :
> {Code}
> java.lang.ClassCastException: org.apache.kafka.streams.kstream.Windowed 
> cannot be cast to java.lang.String
>     at 
> org.apache.kafka.common.serialization.StringSerializer.serialize(StringSerializer.java:28)
>     at 
> org.apache.kafka.streams.kstream.internals.suppress.KTableSuppressProcessor.buffer(KTableSuppressProcessor.java:95)
>     at 
> org.apache.kafka.streams.kstream.internals.suppress.KTableSuppressProcessor.process(KTableSuppressProcessor.java:87)
>     at 
> org.apache.kafka.streams.kstream.internals.suppress.KTableSuppressProcessor.process(KTableSuppressProcessor.java:40)
>     at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:117)
> {Code}
> My code is as follows :
> {Code}
>     final KTable, GenericRecord> groupTable = 
> groupedStream
>     .aggregate(lastAggregator, lastAggregator, materialized);
>     final KTable, GenericRecord> suppressedTable = 
> groupTable.suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()));
>     // write the change-log stream to the topic
>     suppressedTable.toStream((k, v) -> k.key())
>     .mapValues(joinValueMapper::apply)
>     .to(props.joinTopic());
> {Code}
> The code without using `suppressedTable` works... what am i doing wrong.
> Someone else has encountered the same issue :
> https://gist.github.com/robie2011/1caa4772b60b5a6f993e6f98e792a380
> Slack conversation : 
> https://confluentcommunity.slack.com/archives/C48AHTCUQ/p1556633088239800



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8289) KTable<Windowed<String>, Long> can't be suppressed

2019-05-02 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832134#comment-16832134
 ] 

ASF GitHub Bot commented on KAFKA-8289:
---

guozhangwang commented on pull request #6654: KAFKA-8289: Fix Session 
Expiration and Suppression
URL: https://github.com/apache/kafka/pull/6654
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> KTable<Windowed<String>, Long>  can't be suppressed
> ---
>
> Key: KAFKA-8289
> URL: https://issues.apache.org/jira/browse/KAFKA-8289
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0, 2.2.0, 2.1.1
> Environment: Broker on a Linux, stream app on my win10 laptop. 
> I add one row log.message.timestamp.type=LogAppendTime to my broker's 
> server.properties. stream app all default config.
>Reporter: Xiaolin Jia
>Assignee: John Roesler
>Priority: Blocker
> Fix For: 2.3.0, 2.1.2, 2.2.1
>
>
> I wrote a simple stream app following the official developer guide [Stream 
> DSL|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#window-final-results],
>  but I got more than one [Window Final 
> Results|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31]
>  from a session time window.
> time ticker A -> (4,A) / 25s,
> time ticker B -> (4, B) / 25s  all send to the same topic 
> below is my stream app code 
> {code:java}
> kstreams[0]
> .peek((k, v) -> log.info("--> ping, k={},v={}", k, v))
> .groupBy((k, v) -> v, Grouped.with(Serdes.String(), Serdes.String()))
> .windowedBy(SessionWindows.with(Duration.ofSeconds(100)).grace(Duration.ofMillis(20)))
> .count()
> .suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
> .toStream().peek((k, v) -> log.info("window={},k={},v={}", k.window(), 
> k.key(), v));
> {code}
> {{here is my log print}}
> {noformat}
> 2019-04-24 20:00:26.142  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:00:47.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556106587744, 
> endMs=1556107129191},k=A,v=20
> 2019-04-24 20:00:51.071  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:16.065  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:41.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:06.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:31.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:56.208  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:21.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:46.078  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:04.684  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:11.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:19.371  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107426409},k=B,v=9
> 2019-04-24 20:04:19.372  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107445012},k=A,v=1
> 2019-04-24 20:04:29.604  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:36.067  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:49.715  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107451397},k=B,v=10
> 2019-04-24 20:04:49.716  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107469935},k=A,v=2
> 2019-04-24 20:04:54.593  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:01.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:05:19.599  INFO --- [-StreamThread-1] 

[jira] [Commented] (KAFKA-7601) Handle message format downgrades during upgrade of message format version

2019-05-02 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832135#comment-16832135
 ] 

ASF GitHub Bot commented on KAFKA-7601:
---

hachikuji commented on pull request #6568: KAFKA-7601; Clear leader epoch cache 
on downgraded format in append
URL: https://github.com/apache/kafka/pull/6568
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Handle message format downgrades during upgrade of message format version
> -
>
> Key: KAFKA-7601
> URL: https://issues.apache.org/jira/browse/KAFKA-7601
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Priority: Major
>
> During an upgrade of the message format, there is a short time during which 
> the configured message format version is not consistent across all replicas 
> of a partition. As long as all brokers are using the same binary version 
> (i.e. all have been updated to the latest code), this typically does not 
> cause any problems. Followers will take whatever message format is used by 
> the leader. However, it is possible for leadership to change several times 
> between brokers which support the new format and those which support the old 
> format. This can cause the version used in the log to flap between the 
> different formats until the upgrade is complete. 
> For example, suppose broker 1 has been updated to use v2 and broker 2 is 
> still on v1. When broker 1 is the leader, all new messages will be written in 
> the v2 format. When broker 2 is leader, v1 will be used. If there is any 
> instability in the cluster or if completion of the update is delayed, then 
> the log will be seen to switch back and forth between v1 and v2. Once the 
> update is completed and broker 1 begins using v2, then the message format 
> will stabilize and everything will generally be ok.
> Downgrades of the message format are problematic, even if they are just 
> temporary. There are basically two issues:
> 1. We use the configured message format version to tell whether 
> down-conversion is needed. We assume that this is always the maximum 
> version used in the log, but that assumption fails in the case of a 
> downgrade. In the worst case, old clients will see the new format and likely 
> fail.
> 2. The logic we use for finding the truncation offset during the become 
> follower transition does not handle flapping between message formats. When 
> the new format is used by the leader, then the epoch cache will be updated 
> correctly. When the old format is in use, the epoch cache won't be updated. 
> This can lead to an incorrect result to OffsetsForLeaderEpoch queries.
> We have actually observed the second problem. The scenario went something 
> like this. Broker 1 is the leader of epoch 0 and writes some messages to the 
> log using the v2 message format. Broker 2 then becomes the leader for epoch 1 
> and writes some messages in the v2 format. On broker 2, the last entry in the 
> epoch cache is epoch 0. No entry is written for epoch 1 because it uses the 
> old format. When broker 1 became a follower, it sent an OffsetsForLeaderEpoch 
> query to broker 2 for epoch 0. Since epoch 0 was the last entry in the cache, 
> the log end offset was returned. This resulted in localized log divergence.
> There are a few options to fix this problem. From a high level, we can either 
> be stricter about preventing downgrades of the message format, or we can add 
> additional logic to make downgrades safe. 
> (Disallow downgrades): As an example of the first approach, the leader could 
> always use the maximum of the last version written to the log and the 
> configured message format version. 
> (Allow downgrades): If we want to allow downgrades, then it makes sense 
> to invalidate and remove all entries in the epoch cache following the message 
> format downgrade. This would basically force us to revert to truncation to 
> the high watermark, which is what you'd expect from a downgrade.  We would 
> also need a solution for the problem of detecting when down-conversion is 
> needed for a fetch request. One option I've been thinking about is enforcing 
> the invariant that each segment uses only one message format version. 
> Whenever the message format changes, we need to roll a new segment. Then we 
> can simply remember which format is in use by each segment to tell whether 
> down-conversion is needed for a given fetch request.
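
As a rough illustration of the "allow downgrades" option, the check below sketches the idea of invalidating the epoch cache whenever an old-format batch is appended; the types and method names are invented for illustration and are not the broker internals:
{code:java}
// Message formats v0/v1 (magic < 2) carry no leader-epoch information, so once
// such a batch is appended any previously cached epochs can no longer be trusted;
// clearing the cache forces followers back to high-watermark truncation.
final class EpochCacheGuard {
    interface LeaderEpochCache {
        void clearAll();
    }

    void onAppend(byte batchMagic, LeaderEpochCache cache) {
        if (batchMagic < 2) {
            cache.clearAll();
        }
    }
}
{code}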



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7601) Handle message format downgrades during upgrade of message format version

2019-05-02 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-7601.

   Resolution: Fixed
Fix Version/s: 2.2.1
   2.1.2

The patch fixes the main problem of ensuring the safety of leader election in 
the presence of unexpected regressions of the message format. It does not solve 
the problem of down-conversion with mixed message formats. 

> Handle message format downgrades during upgrade of message format version
> -
>
> Key: KAFKA-7601
> URL: https://issues.apache.org/jira/browse/KAFKA-7601
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Priority: Major
> Fix For: 2.1.2, 2.2.1
>
>
> During an upgrade of the message format, there is a short time during which 
> the configured message format version is not consistent across all replicas 
> of a partition. As long as all brokers are using the same binary version 
> (i.e. all have been updated to the latest code), this typically does not 
> cause any problems. Followers will take whatever message format is used by 
> the leader. However, it is possible for leadership to change several times 
> between brokers which support the new format and those which support the old 
> format. This can cause the version used in the log to flap between the 
> different formats until the upgrade is complete. 
> For example, suppose broker 1 has been updated to use v2 and broker 2 is 
> still on v1. When broker 1 is the leader, all new messages will be written in 
> the v2 format. When broker 2 is leader, v1 will be used. If there is any 
> instability in the cluster or if completion of the update is delayed, then 
> the log will be seen to switch back and forth between v1 and v2. Once the 
> update is completed and broker 1 begins using v2, then the message format 
> will stabilize and everything will generally be ok.
> Downgrades of the message format are problematic, even if they are just 
> temporary. There are basically two issues:
> 1. We use the configured message format version to tell whether 
> down-conversion is needed. We assume that this is always the maximum 
> version used in the log, but that assumption fails in the case of a 
> downgrade. In the worst case, old clients will see the new format and likely 
> fail.
> 2. The logic we use for finding the truncation offset during the become 
> follower transition does not handle flapping between message formats. When 
> the new format is used by the leader, then the epoch cache will be updated 
> correctly. When the old format is in use, the epoch cache won't be updated. 
> This can lead to an incorrect result to OffsetsForLeaderEpoch queries.
> We have actually observed the second problem. The scenario went something 
> like this. Broker 1 is the leader of epoch 0 and writes some messages to the 
> log using the v2 message format. Broker 2 then becomes the leader for epoch 1 
> and writes some messages in the v2 format. On broker 2, the last entry in the 
> epoch cache is epoch 0. No entry is written for epoch 1 because it uses the 
> old format. When broker 1 became a follower, it sent an OffsetsForLeaderEpoch 
> query to broker 2 for epoch 0. Since epoch 0 was the last entry in the cache, 
> the log end offset was returned. This resulted in localized log divergence.
> There are a few options to fix this problem. From a high level, we can either 
> be stricter about preventing downgrades of the message format, or we can add 
> additional logic to make downgrades safe. 
> (Disallow downgrades): As an example of the first approach, the leader could 
> always use the maximum of the last version written to the log and the 
> configured message format version. 
> (Allow downgrades): If we want to allow downgrades, then it makes sense 
> to invalidate and remove all entries in the epoch cache following the message 
> format downgrade. This would basically force us to revert to truncation to 
> the high watermark, which is what you'd expect from a downgrade.  We would 
> also need a solution for the problem of detecting when down-conversion is 
> needed for a fetch request. One option I've been thinking about is enforcing 
> the invariant that each segment uses only one message format version. 
> Whenever the message format changes, we need to roll a new segment. Then we 
> can simply remember which format is in use by each segment to tell whether 
> down-conversion is needed for a given fetch request.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8289) KTable<Windowed<String>, Long> can't be suppressed

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8289:
---
Fix Version/s: (was: 2.2.1)

> KTable<Windowed<String>, Long>  can't be suppressed
> ---
>
> Key: KAFKA-8289
> URL: https://issues.apache.org/jira/browse/KAFKA-8289
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0, 2.2.0, 2.1.1
> Environment: Broker on a Linux, stream app on my win10 laptop. 
> I add one row log.message.timestamp.type=LogAppendTime to my broker's 
> server.properties. stream app all default config.
>Reporter: Xiaolin Jia
>Assignee: John Roesler
>Priority: Blocker
> Fix For: 2.3.0, 2.1.2
>
>
> I wrote a simple stream app following the official developer guide [Stream 
> DSL|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#window-final-results],
>  but I got more than one [Window Final 
> Results|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31]
>  from a session time window.
> time ticker A -> (4,A) / 25s,
> time ticker B -> (4, B) / 25s  all send to the same topic 
> below is my stream app code 
> {code:java}
> kstreams[0]
> .peek((k, v) -> log.info("--> ping, k={},v={}", k, v))
> .groupBy((k, v) -> v, Grouped.with(Serdes.String(), Serdes.String()))
> .windowedBy(SessionWindows.with(Duration.ofSeconds(100)).grace(Duration.ofMillis(20)))
> .count()
> .suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
> .toStream().peek((k, v) -> log.info("window={},k={},v={}", k.window(), 
> k.key(), v));
> {code}
> {{here is my log print}}
> {noformat}
> 2019-04-24 20:00:26.142  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:00:47.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556106587744, 
> endMs=1556107129191},k=A,v=20
> 2019-04-24 20:00:51.071  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:16.065  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:41.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:06.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:31.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:56.208  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:21.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:46.078  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:04.684  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:11.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:19.371  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107426409},k=B,v=9
> 2019-04-24 20:04:19.372  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107445012},k=A,v=1
> 2019-04-24 20:04:29.604  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:36.067  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:49.715  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107451397},k=B,v=10
> 2019-04-24 20:04:49.716  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107469935},k=A,v=2
> 2019-04-24 20:04:54.593  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:01.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:05:19.599  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:20.045  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107476398},k=B,v=11
> 2019-04-24 20:05:20.047  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107501398},k=B,v=12
> 2019-04-24 20:05:26.075  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : -

[jira] [Commented] (KAFKA-8289) KTable<Windowed<String>, Long> can't be suppressed

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832176#comment-16832176
 ] 

Vahid Hashemian commented on KAFKA-8289:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> KTable<Windowed<String>, Long>  can't be suppressed
> ---
>
> Key: KAFKA-8289
> URL: https://issues.apache.org/jira/browse/KAFKA-8289
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0, 2.2.0, 2.1.1
> Environment: Broker on a Linux, stream app on my win10 laptop. 
> I add one row log.message.timestamp.type=LogAppendTime to my broker's 
> server.properties. stream app all default config.
>Reporter: Xiaolin Jia
>Assignee: John Roesler
>Priority: Blocker
> Fix For: 2.3.0, 2.1.2
>
>
> I wrote a simple stream app following the official developer guide [Stream 
> DSL|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#window-final-results],
>  but I got more than one [Window Final 
> Results|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31]
>  from a session time window.
> time ticker A -> (4,A) / 25s,
> time ticker B -> (4, B) / 25s  all send to the same topic 
> below is my stream app code 
> {code:java}
> kstreams[0]
> .peek((k, v) -> log.info("--> ping, k={},v={}", k, v))
> .groupBy((k, v) -> v, Grouped.with(Serdes.String(), Serdes.String()))
> .windowedBy(SessionWindows.with(Duration.ofSeconds(100)).grace(Duration.ofMillis(20)))
> .count()
> .suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
> .toStream().peek((k, v) -> log.info("window={},k={},v={}", k.window(), 
> k.key(), v));
> {code}
> {{here is my log print}}
> {noformat}
> 2019-04-24 20:00:26.142  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:00:47.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556106587744, 
> endMs=1556107129191},k=A,v=20
> 2019-04-24 20:00:51.071  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:16.065  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:41.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:06.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:31.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:56.208  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:21.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:46.078  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:04.684  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:11.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:19.371  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107426409},k=B,v=9
> 2019-04-24 20:04:19.372  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107445012},k=A,v=1
> 2019-04-24 20:04:29.604  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:36.067  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:49.715  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107451397},k=B,v=10
> 2019-04-24 20:04:49.716  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107469935},k=A,v=2
> 2019-04-24 20:04:54.593  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:01.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:05:19.599  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:20.045  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107476398},k=B,v=11
> 2019-04-24 20:05:20.047  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107501398},k=B,v=12
> 2019-0

[jira] [Updated] (KAFKA-8229) Connect Sink Task updates nextCommit when commitRequest is true

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8229:
---
Fix Version/s: (was: 2.2.1)
   2.3.0

> Connect Sink Task updates nextCommit when commitRequest is true
> ---
>
> Key: KAFKA-8229
> URL: https://issues.apache.org/jira/browse/KAFKA-8229
> Project: Kafka
>  Issue Type: Bug
>Reporter: Scott Reynolds
>Priority: Major
> Fix For: 2.3.0
>
>
> Today, when a WorkerSinkTask uses context.requestCommit(), the next call to 
> iteration() will cause the commit to happen. As part of the commit execution it 
> will also change the nextCommit milliseconds.
> This creates some weird behaviors when a SinkTask calls context.requestCommit 
> multiple times. In our case, we were calling requestCommit when the number of 
> Kafka records we processed exceeded a threshold. This resulted in the 
> nextCommit being several days in the future and caused it to only commit when 
> the record threshold was reached.
> We expected the task to commit when the record threshold was reached OR when 
> the timer went off.
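
A minimal sketch of the usage pattern described above; the class name and threshold are illustrative, not from the report:
{code:java}
import java.util.Collection;
import java.util.Map;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

public class ThresholdCommitSinkTask extends SinkTask {
    private static final int COMMIT_THRESHOLD = 10_000;  // illustrative threshold
    private long recordsSinceCommit = 0;

    @Override
    public void start(Map<String, String> props) { }

    @Override
    public void put(Collection<SinkRecord> records) {
        // ... deliver the records to the external system here ...
        recordsSinceCommit += records.size();
        if (recordsSinceCommit >= COMMIT_THRESHOLD) {
            // Per the report, each triggered commit also pushes nextCommit out,
            // so the periodic time-based commit effectively stops firing.
            context.requestCommit();
            recordsSinceCommit = 0;
        }
    }

    @Override
    public void stop() { }

    @Override
    public String version() { return "sketch"; }
}
{code}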



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8229) Connect Sink Task updates nextCommit when commitRequest is true

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832177#comment-16832177
 ] 

Vahid Hashemian commented on KAFKA-8229:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Connect Sink Task updates nextCommit when commitRequest is true
> ---
>
> Key: KAFKA-8229
> URL: https://issues.apache.org/jira/browse/KAFKA-8229
> Project: Kafka
>  Issue Type: Bug
>Reporter: Scott Reynolds
>Priority: Major
> Fix For: 2.3.0
>
>
> Today, when a WorkerSinkTask uses context.requestCommit(), the next call to 
> iteration() will cause the commit to happen. As part of the commit execution it 
> will also change the nextCommit milliseconds.
> This creates some weird behaviors when a SinkTask calls context.requestCommit 
> multiple times. In our case, we were calling requestCommit when the number of 
> Kafka records we processed exceeded a threshold. This resulted in the 
> nextCommit being several days in the future and caused it to only commit when 
> the record threshold was reached.
> We expected the task to commit when the record threshold was reached OR when 
> the timer went off.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8141) Flaky Test FetchRequestDownConversionConfigTest#testV1FetchWithDownConversionDisabled

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832178#comment-16832178
 ] 

Vahid Hashemian commented on KAFKA-8141:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test 
> FetchRequestDownConversionConfigTest#testV1FetchWithDownConversionDisabled
> -
>
> Key: KAFKA-8141
> URL: https://issues.apache.org/jira/browse/KAFKA-8141
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.server/FetchRequestDownConversionConfigTest/testV1FetchWithDownConversionDisabled/]
> {quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata 
> not propagated after 15000 ms at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
> kafka.api.IntegrationTestHarness.doSetup(IntegrationTestHarness.scala:95) at 
> kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:73){quote}
>  
>  
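For context on this failure mode: TestUtils.waitUntilMetadataIsPropagated repeatedly polls a condition until a timeout expires, and fails with the "metadata not propagated" assertion when the timeout is reached. Below is a condensed Java sketch of that wait-until-true pattern (the Kafka test utility itself is Scala; the 15000 ms figure and the error message are taken from the trace above, and the 100 ms poll interval is an assumption):
{code:java}
import java.util.function.BooleanSupplier;

// Sketch of the polling pattern only, not the Kafka test code itself.
public final class WaitUntil {
    public static void waitUntilTrue(BooleanSupplier condition, String message, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs; // e.g. 15000 ms in the trace above
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                // e.g. "Partition [__consumer_offsets,0] metadata not propagated after 15000 ms"
                throw new AssertionError(message);
            }
            Thread.sleep(100L); // small poll interval (assumed)
        }
    }
}
{code}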



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8140) Flaky Test SaslSslAdminClientIntegrationTest#testDescribeAndAlterConfigs

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8140:
---
Fix Version/s: (was: 2.2.1)

> Flaky Test SaslSslAdminClientIntegrationTest#testDescribeAndAlterConfigs
> 
>
> Key: KAFKA-8140
> URL: https://issues.apache.org/jira/browse/KAFKA-8140
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.api/SaslSslAdminClientIntegrationTest/testDescribeAndAlterConfigs/]
> {quote}java.lang.IllegalArgumentException: Could not find a 'KafkaServer' or 
> 'sasl_ssl.KafkaServer' entry in the JAAS configuration. System property 
> 'java.security.auth.login.config' is not set at 
> org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:133)
>  at org.apache.kafka.common.security.JaasContext.load(JaasContext.java:98) at 
> org.apache.kafka.common.security.JaasContext.loadServerContext(JaasContext.java:70)
>  at 
> org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:121)
>  at 
> org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:85)
>  at kafka.network.Processor.<init>(SocketServer.scala:694) at 
> kafka.network.SocketServer.newProcessor(SocketServer.scala:344) at 
> kafka.network.SocketServer.$anonfun$addDataPlaneProcessors$1(SocketServer.scala:253)
>  at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158) at 
> kafka.network.SocketServer.addDataPlaneProcessors(SocketServer.scala:252) at 
> kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1(SocketServer.scala:216)
>  at 
> kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1$adapted(SocketServer.scala:214)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) 
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
> kafka.network.SocketServer.createDataPlaneAcceptorsAndProcessors(SocketServer.scala:214)
>  at kafka.network.SocketServer.startup(SocketServer.scala:114) at 
> kafka.server.KafkaServer.startup(KafkaServer.scala:253) at 
> kafka.utils.TestUtils$.createServer(TestUtils.scala:140) at 
> kafka.integration.KafkaServerTestHarness.$anonfun$setUp$1(KafkaServerTestHarness.scala:101)
>  at scala.collection.Iterator.foreach(Iterator.scala:941) at 
> scala.collection.Iterator.foreach$(Iterator.scala:941) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1429) at 
> scala.collection.IterableLike.foreach(IterableLike.scala:74) at 
> scala.collection.IterableLike.foreach$(IterableLike.scala:73) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:56) at 
> kafka.integration.KafkaServerTestHarness.setUp(KafkaServerTestHarness.scala:100)
>  at kafka.api.IntegrationTestHarness.doSetup(IntegrationTestHarness.scala:81) 
> at kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:73) at 
> kafka.api.AdminClientIntegrationTest.setUp(AdminClientIntegrationTest.scala:79)
>  at 
> kafka.api.SaslSslAdminClientIntegrationTest.setUp(SaslSslAdminClientIntegrationTest.scala:64){quote}
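The root cause in this trace is that the broker-side JAAS context cannot be resolved in the test JVM. As a reference point only (the file path and credentials below are illustrative assumptions, not values from the test harness), a 'KafkaServer' / 'sasl_ssl.KafkaServer' entry is typically supplied in one of two ways:
{code:java}
// Sketch only; path and credentials are illustrative assumptions.
//
// Option 1: a static JAAS file referenced via the system property named in the
// error message. Example contents of /tmp/kafka_server_jaas.conf:
//
//   KafkaServer {
//       org.apache.kafka.common.security.plain.PlainLoginModule required
//       username="admin"
//       password="admin-secret"
//       user_admin="admin-secret";
//   };
public class JaasSetupSketch {
    public static void main(String[] args) {
        // Must be set before the broker's SocketServer is created; otherwise
        // JaasContext.defaultContext throws the IllegalArgumentException shown above.
        System.setProperty("java.security.auth.login.config", "/tmp/kafka_server_jaas.conf");

        // Option 2 (per-listener, no external file): supply the JAAS entry inline via the
        // broker property listener.name.<listener>.<mechanism>.sasl.jaas.config, e.g.
        //   listener.name.sasl_ssl.plain.sasl.jaas.config=\
        //     org.apache.kafka.common.security.plain.PlainLoginModule required \
        //     username="admin" password="admin-secret" user_admin="admin-secret";
    }
}
{code}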



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8140) Flaky Test SaslSslAdminClientIntegrationTest#testDescribeAndAlterConfigs

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832179#comment-16832179
 ] 

Vahid Hashemian commented on KAFKA-8140:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test SaslSslAdminClientIntegrationTest#testDescribeAndAlterConfigs
> 
>
> Key: KAFKA-8140
> URL: https://issues.apache.org/jira/browse/KAFKA-8140
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.api/SaslSslAdminClientIntegrationTest/testDescribeAndAlterConfigs/]
> {quote}java.lang.IllegalArgumentException: Could not find a 'KafkaServer' or 
> 'sasl_ssl.KafkaServer' entry in the JAAS configuration. System property 
> 'java.security.auth.login.config' is not set at 
> org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:133)
>  at org.apache.kafka.common.security.JaasContext.load(JaasContext.java:98) at 
> org.apache.kafka.common.security.JaasContext.loadServerContext(JaasContext.java:70)
>  at 
> org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:121)
>  at 
> org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:85)
>  at kafka.network.Processor.<init>(SocketServer.scala:694) at 
> kafka.network.SocketServer.newProcessor(SocketServer.scala:344) at 
> kafka.network.SocketServer.$anonfun$addDataPlaneProcessors$1(SocketServer.scala:253)
>  at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158) at 
> kafka.network.SocketServer.addDataPlaneProcessors(SocketServer.scala:252) at 
> kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1(SocketServer.scala:216)
>  at 
> kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1$adapted(SocketServer.scala:214)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) 
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
> kafka.network.SocketServer.createDataPlaneAcceptorsAndProcessors(SocketServer.scala:214)
>  at kafka.network.SocketServer.startup(SocketServer.scala:114) at 
> kafka.server.KafkaServer.startup(KafkaServer.scala:253) at 
> kafka.utils.TestUtils$.createServer(TestUtils.scala:140) at 
> kafka.integration.KafkaServerTestHarness.$anonfun$setUp$1(KafkaServerTestHarness.scala:101)
>  at scala.collection.Iterator.foreach(Iterator.scala:941) at 
> scala.collection.Iterator.foreach$(Iterator.scala:941) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1429) at 
> scala.collection.IterableLike.foreach(IterableLike.scala:74) at 
> scala.collection.IterableLike.foreach$(IterableLike.scala:73) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:56) at 
> kafka.integration.KafkaServerTestHarness.setUp(KafkaServerTestHarness.scala:100)
>  at kafka.api.IntegrationTestHarness.doSetup(IntegrationTestHarness.scala:81) 
> at kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:73) at 
> kafka.api.AdminClientIntegrationTest.setUp(AdminClientIntegrationTest.scala:79)
>  at 
> kafka.api.SaslSslAdminClientIntegrationTest.setUp(SaslSslAdminClientIntegrationTest.scala:64){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8141) Flaky Test FetchRequestDownConversionConfigTest#testV1FetchWithDownConversionDisabled

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8141:
---
Fix Version/s: (was: 2.2.1)

> Flaky Test 
> FetchRequestDownConversionConfigTest#testV1FetchWithDownConversionDisabled
> -
>
> Key: KAFKA-8141
> URL: https://issues.apache.org/jira/browse/KAFKA-8141
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.server/FetchRequestDownConversionConfigTest/testV1FetchWithDownConversionDisabled/]
> {quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata 
> not propagated after 15000 ms at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
> kafka.api.IntegrationTestHarness.doSetup(IntegrationTestHarness.scala:95) at 
> kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:73){quote}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8139) Flaky Test SaslSslAdminClientIntegrationTest#testMetadataRefresh

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8139:
---
Fix Version/s: (was: 2.2.1)

> Flaky Test SaslSslAdminClientIntegrationTest#testMetadataRefresh
> 
>
> Key: KAFKA-8139
> URL: https://issues.apache.org/jira/browse/KAFKA-8139
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.api/SaslSslAdminClientIntegrationTest/testMetadataRefresh/]
> {quote}org.junit.runners.model.TestTimedOutException: test timed out after 
> 12 milliseconds at java.lang.Object.wait(Native Method) at 
> java.util.concurrent.ForkJoinTask.externalAwaitDone(ForkJoinTask.java:334) at 
> java.util.concurrent.ForkJoinTask.doJoin(ForkJoinTask.java:391) at 
> java.util.concurrent.ForkJoinTask.join(ForkJoinTask.java:719) at 
> scala.collection.parallel.ForkJoinTasks$WrappedTask.sync(Tasks.scala:379) at 
> scala.collection.parallel.ForkJoinTasks$WrappedTask.sync$(Tasks.scala:379) at 
> scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.sync(Tasks.scala:440)
>  at 
> scala.collection.parallel.ForkJoinTasks.executeAndWaitResult(Tasks.scala:423) 
> at 
> scala.collection.parallel.ForkJoinTasks.executeAndWaitResult$(Tasks.scala:416)
>  at 
> scala.collection.parallel.ForkJoinTaskSupport.executeAndWaitResult(TaskSupport.scala:60)
>  at 
> scala.collection.parallel.ExecutionContextTasks.executeAndWaitResult(Tasks.scala:555)
>  at 
> scala.collection.parallel.ExecutionContextTasks.executeAndWaitResult$(Tasks.scala:555)
>  at 
> scala.collection.parallel.ExecutionContextTaskSupport.executeAndWaitResult(TaskSupport.scala:84)
>  at 
> scala.collection.parallel.ParIterableLike.foreach(ParIterableLike.scala:465) 
> at 
> scala.collection.parallel.ParIterableLike.foreach$(ParIterableLike.scala:464) 
> at scala.collection.parallel.mutable.ParArray.foreach(ParArray.scala:58) at 
> kafka.utils.TestUtils$.shutdownServers(TestUtils.scala:201) at 
> kafka.integration.KafkaServerTestHarness.tearDown(KafkaServerTestHarness.scala:113)
>  at 
> kafka.api.IntegrationTestHarness.tearDown(IntegrationTestHarness.scala:134) 
> at 
> kafka.api.AdminClientIntegrationTest.tearDown(AdminClientIntegrationTest.scala:87)
>  at 
> kafka.api.SaslSslAdminClientIntegrationTest.tearDown(SaslSslAdminClientIntegrationTest.scala:90){quote}
> STDOUT
> {quote}[2019-03-20 16:30:35,739] ERROR [KafkaServer id=0] Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer:159) 
> java.lang.IllegalArgumentException: Could not find a 'KafkaServer' or 
> 'sasl_ssl.KafkaServer' entry in the JAAS configuration. System property 
> 'java.security.auth.login.config' is not set at 
> org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:133)
>  at org.apache.kafka.common.security.JaasContext.load(JaasContext.java:98) at 
> org.apache.kafka.common.security.JaasContext.loadServerContext(JaasContext.java:70)
>  at 
> org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:121)
>  at 
> org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:85)
>  at kafka.network.Processor.<init>(SocketServer.scala:694) at 
> kafka.network.SocketServer.newProcessor(SocketServer.scala:344) at 
> kafka.network.SocketServer.$anonfun$addDataPlaneProcessors$1(SocketServer.scala:253)
>  at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158) at 
> kafka.network.SocketServer.addDataPlaneProcessors(SocketServer.scala:252) at 
> kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1(SocketServer.scala:216)
>  at 
> kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1$adapted(SocketServer.scala:214)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) 
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
> kafka.network.SocketServer.createDataPlaneAcceptorsAndProcessors(SocketServer.scala:214)
>  at kafka.network.SocketServer.startup(SocketServer.scala:114) at 
> kafka.server.KafkaServer.startup(KafkaServer.scala:253) at 
> kafka.utils.TestUtils$.createServer(TestUtils.scala:140) at 
> kafka.integration.KafkaServerTestHarness.$anonfun$setUp$1(KafkaServerTestHarness.scala:101)
>  at scala.collection.Iterator.foreach(Iterator.scala:941) at 
> scala.collection.Iterator.foreach$(Iterator.scala:941) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1429) at 
> scala.collecti

[jira] [Updated] (KAFKA-8138) Flaky Test PlaintextConsumerTest#testFetchRecordLargerThanFetchMaxBytes

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8138:
---
Fix Version/s: (was: 2.2.1)

> Flaky Test PlaintextConsumerTest#testFetchRecordLargerThanFetchMaxBytes
> ---
>
> Key: KAFKA-8138
> URL: https://issues.apache.org/jira/browse/KAFKA-8138
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.api/PlaintextConsumerTest/testFetchRecordLargerThanFetchMaxBytes/]
> {quote}java.lang.AssertionError: Partition [topic,0] metadata not propagated 
> after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.integration.KafkaServerTestHarness.createTopic(KafkaServerTestHarness.scala:125)
>  at kafka.api.BaseConsumerTest.setUp(BaseConsumerTest.scala:69){quote}
> STDOUT (truncated)
> {quote}[2019-03-20 16:10:19,759] ERROR [ReplicaFetcher replicaId=2, 
> leaderId=0, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:10:19,760] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:10:19,963] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:10:19,964] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:10:19,975] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8139) Flaky Test SaslSslAdminClientIntegrationTest#testMetadataRefresh

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832180#comment-16832180
 ] 

Vahid Hashemian commented on KAFKA-8139:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test SaslSslAdminClientIntegrationTest#testMetadataRefresh
> 
>
> Key: KAFKA-8139
> URL: https://issues.apache.org/jira/browse/KAFKA-8139
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.api/SaslSslAdminClientIntegrationTest/testMetadataRefresh/]
> {quote}org.junit.runners.model.TestTimedOutException: test timed out after 
> 12 milliseconds at java.lang.Object.wait(Native Method) at 
> java.util.concurrent.ForkJoinTask.externalAwaitDone(ForkJoinTask.java:334) at 
> java.util.concurrent.ForkJoinTask.doJoin(ForkJoinTask.java:391) at 
> java.util.concurrent.ForkJoinTask.join(ForkJoinTask.java:719) at 
> scala.collection.parallel.ForkJoinTasks$WrappedTask.sync(Tasks.scala:379) at 
> scala.collection.parallel.ForkJoinTasks$WrappedTask.sync$(Tasks.scala:379) at 
> scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.sync(Tasks.scala:440)
>  at 
> scala.collection.parallel.ForkJoinTasks.executeAndWaitResult(Tasks.scala:423) 
> at 
> scala.collection.parallel.ForkJoinTasks.executeAndWaitResult$(Tasks.scala:416)
>  at 
> scala.collection.parallel.ForkJoinTaskSupport.executeAndWaitResult(TaskSupport.scala:60)
>  at 
> scala.collection.parallel.ExecutionContextTasks.executeAndWaitResult(Tasks.scala:555)
>  at 
> scala.collection.parallel.ExecutionContextTasks.executeAndWaitResult$(Tasks.scala:555)
>  at 
> scala.collection.parallel.ExecutionContextTaskSupport.executeAndWaitResult(TaskSupport.scala:84)
>  at 
> scala.collection.parallel.ParIterableLike.foreach(ParIterableLike.scala:465) 
> at 
> scala.collection.parallel.ParIterableLike.foreach$(ParIterableLike.scala:464) 
> at scala.collection.parallel.mutable.ParArray.foreach(ParArray.scala:58) at 
> kafka.utils.TestUtils$.shutdownServers(TestUtils.scala:201) at 
> kafka.integration.KafkaServerTestHarness.tearDown(KafkaServerTestHarness.scala:113)
>  at 
> kafka.api.IntegrationTestHarness.tearDown(IntegrationTestHarness.scala:134) 
> at 
> kafka.api.AdminClientIntegrationTest.tearDown(AdminClientIntegrationTest.scala:87)
>  at 
> kafka.api.SaslSslAdminClientIntegrationTest.tearDown(SaslSslAdminClientIntegrationTest.scala:90){quote}
> STDOUT
> {quote}[2019-03-20 16:30:35,739] ERROR [KafkaServer id=0] Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer:159) 
> java.lang.IllegalArgumentException: Could not find a 'KafkaServer' or 
> 'sasl_ssl.KafkaServer' entry in the JAAS configuration. System property 
> 'java.security.auth.login.config' is not set at 
> org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:133)
>  at org.apache.kafka.common.security.JaasContext.load(JaasContext.java:98) at 
> org.apache.kafka.common.security.JaasContext.loadServerContext(JaasContext.java:70)
>  at 
> org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:121)
>  at 
> org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:85)
>  at kafka.network.Processor.<init>(SocketServer.scala:694) at 
> kafka.network.SocketServer.newProcessor(SocketServer.scala:344) at 
> kafka.network.SocketServer.$anonfun$addDataPlaneProcessors$1(SocketServer.scala:253)
>  at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158) at 
> kafka.network.SocketServer.addDataPlaneProcessors(SocketServer.scala:252) at 
> kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1(SocketServer.scala:216)
>  at 
> kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1$adapted(SocketServer.scala:214)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) 
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
> kafka.network.SocketServer.createDataPlaneAcceptorsAndProcessors(SocketServer.scala:214)
>  at kafka.network.SocketServer.startup(SocketServer.scala:114) at 
> kafka.server.KafkaServer.startup(KafkaServer.scala:253) at 
> kafka.utils.TestUtils$.createServer(TestUtils.scala:140) at 
> kafka.integration.KafkaServerTestHarness.$anonfun$setUp$1(KafkaServerTestHarness.scala:101)
>  at scala.collection.Iterator.foreach(Iterator.scala:941) at 
> scala.collection.Iterator.foreach$(Iterator.scala:94

[jira] [Updated] (KAFKA-8137) Flaky Test LegacyAdminClientTest#testOffsetsForTimesWhenOffsetNotFound

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8137:
---
Fix Version/s: (was: 2.2.1)

> Flaky Test LegacyAdminClientTest#testOffsetsForTimesWhenOffsetNotFound
> --
>
> Key: KAFKA-8137
> URL: https://issues.apache.org/jira/browse/KAFKA-8137
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.api/LegacyAdminClientTest/testOffsetsForTimesWhenOffsetNotFound/]
> {quote}java.lang.AssertionError: Partition [topic,0] metadata not propagated 
> after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.integration.KafkaServerTestHarness.createTopic(KafkaServerTestHarness.scala:125)
>  at 
> kafka.api.LegacyAdminClientTest.setUp(LegacyAdminClientTest.scala:73){quote}
> STDOUT
> {quote}[2019-03-20 16:28:10,089] ERROR [ReplicaFetcher replicaId=1, 
> leaderId=0, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:10,093] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:10,303] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:10,303] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:14,493] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:14,724] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:21,388] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:21,394] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:29:48,224] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:29:48,249] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:29:49,255] ERROR 
> [

[jira] [Commented] (KAFKA-8137) Flaky Test LegacyAdminClientTest#testOffsetsForTimesWhenOffsetNotFound

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832182#comment-16832182
 ] 

Vahid Hashemian commented on KAFKA-8137:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test LegacyAdminClientTest#testOffsetsForTimesWhenOffsetNotFound
> --
>
> Key: KAFKA-8137
> URL: https://issues.apache.org/jira/browse/KAFKA-8137
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.api/LegacyAdminClientTest/testOffsetsForTimesWhenOffsetNotFound/]
> {quote}java.lang.AssertionError: Partition [topic,0] metadata not propagated 
> after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.integration.KafkaServerTestHarness.createTopic(KafkaServerTestHarness.scala:125)
>  at 
> kafka.api.LegacyAdminClientTest.setUp(LegacyAdminClientTest.scala:73){quote}
> STDOUT
> {quote}[2019-03-20 16:28:10,089] ERROR [ReplicaFetcher replicaId=1, 
> leaderId=0, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:10,093] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:10,303] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:10,303] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:14,493] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:14,724] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:21,388] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:21,394] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:29:48,224] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:29:48,249] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionExcept

[jira] [Commented] (KAFKA-8138) Flaky Test PlaintextConsumerTest#testFetchRecordLargerThanFetchMaxBytes

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832181#comment-16832181
 ] 

Vahid Hashemian commented on KAFKA-8138:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test PlaintextConsumerTest#testFetchRecordLargerThanFetchMaxBytes
> ---
>
> Key: KAFKA-8138
> URL: https://issues.apache.org/jira/browse/KAFKA-8138
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.api/PlaintextConsumerTest/testFetchRecordLargerThanFetchMaxBytes/]
> {quote}java.lang.AssertionError: Partition [topic,0] metadata not propagated 
> after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.integration.KafkaServerTestHarness.createTopic(KafkaServerTestHarness.scala:125)
>  at kafka.api.BaseConsumerTest.setUp(BaseConsumerTest.scala:69){quote}
> STDOUT (truncated)
> {quote}[2019-03-20 16:10:19,759] ERROR [ReplicaFetcher replicaId=2, 
> leaderId=0, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:10:19,760] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:10:19,963] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:10:19,964] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:10:19,975] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8136) Flaky Test MetadataRequestTest#testAllTopicsRequest

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8136:
---
Fix Version/s: (was: 2.2.1)

> Flaky Test MetadataRequestTest#testAllTopicsRequest
> ---
>
> Key: KAFKA-8136
> URL: https://issues.apache.org/jira/browse/KAFKA-8136
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/78/testReport/junit/kafka.server/MetadataRequestTest/testAllTopicsRequest/]
> {quote}java.lang.AssertionError: Partition [t2,0] metadata not propagated 
> after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.integration.KafkaServerTestHarness.createTopic(KafkaServerTestHarness.scala:125)
>  at 
> kafka.server.MetadataRequestTest.testAllTopicsRequest(MetadataRequestTest.scala:201){quote}
> STDOUT
> {quote}[2019-03-20 00:05:17,921] ERROR [ReplicaFetcher replicaId=1, 
> leaderId=0, fetcherId=0] Error for partition replicaDown-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 00:05:23,520] WARN Unable to 
> read additional data from client sessionid 0x10033b4d8c6, likely client 
> has closed socket (org.apache.zookeeper.server.NIOServerCnxn:376) [2019-03-20 
> 00:05:23,794] ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] 
> Error for partition testAutoCreate_Topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 00:05:30,735] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> __consumer_offsets-2 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 00:05:31,156] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> notInternal-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 00:05:31,156] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Error for partition 
> notInternal-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 00:05:37,817] WARN Unable to 
> read additional data from client sessionid 0x10033b51c370002, likely client 
> has closed socket (org.apache.zookeeper.server.NIOServerCnxn:376) [2019-03-20 
> 00:05:51,571] ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] 
> Error for partition t1-2 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 00:05:51,571] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> t1-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 00:06:22,153] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> t1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 00:06:22,622] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> t2-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 00:06:35,106] ERROR 
> [ReplicaFe

[jira] [Commented] (KAFKA-8133) Flaky Test MetadataRequestTest#testNoTopicsRequest

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832184#comment-16832184
 ] 

Vahid Hashemian commented on KAFKA-8133:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test MetadataRequestTest#testNoTopicsRequest
> --
>
> Key: KAFKA-8133
> URL: https://issues.apache.org/jira/browse/KAFKA-8133
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.1.1
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.1.2
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.1-jdk8/detail/kafka-2.1-jdk8/151/tests]
> {quote}org.apache.kafka.common.errors.TopicExistsException: Topic 't1' 
> already exists.{quote}
> STDOUT:
> {quote}[2019-03-20 03:49:00,982] ERROR [ReplicaFetcher replicaId=1, 
> leaderId=0, fetcherId=0] Error for partition isr-after-broker-shutdown-0 at 
> offset 0 (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:00,982] ERROR [ReplicaFetcher replicaId=2, leaderId=0, 
> fetcherId=0] Error for partition isr-after-broker-shutdown-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:15,319] ERROR [ReplicaFetcher replicaId=1, leaderId=2, 
> fetcherId=0] Error for partition replicaDown-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:15,319] ERROR [ReplicaFetcher replicaId=0, leaderId=2, 
> fetcherId=0] Error for partition replicaDown-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:20,049] ERROR [ReplicaFetcher replicaId=0, leaderId=1, 
> fetcherId=0] Error for partition testAutoCreate_Topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:27,080] ERROR [ReplicaFetcher replicaId=0, leaderId=2, 
> fetcherId=0] Error for partition __consumer_offsets-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:27,080] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition __consumer_offsets-2 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:27,080] ERROR [ReplicaFetcher replicaId=2, leaderId=1, 
> fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:27,538] ERROR [ReplicaFetcher replicaId=2, leaderId=1, 
> fetcherId=0] Error for partition notInternal-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:27,538] ERROR [ReplicaFetcher replicaId=0, leaderId=2, 
> fetcherId=0] Error for partition notInternal-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:28,863] WARN Unable to read additional data from client 
> sessionid 0x102fbd81b150003, likely client has closed socket 
> (org.apache.zookeeper.server.NIOServerCnxn:376)
> [2019-03-20 03:49:40,478] ERROR [ReplicaFetcher replicaId=2, leaderId=1, 
> fetcherId=0] Error for partition t1-2 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:40,921] ERROR [ReplicaFetcher replicaId=0, leaderId=1, 
> fetcherId=0] Error for partition t2-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:40,922] ERROR [ReplicaFetcher replicaId=1, leaderId=2, 
> fetcherId=0] Error for partition t2-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)

[jira] [Updated] (KAFKA-8133) Flaky Test MetadataRequestTest#testNoTopicsRequest

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8133:
---
Fix Version/s: (was: 2.2.1)

> Flaky Test MetadataRequestTest#testNoTopicsRequest
> --
>
> Key: KAFKA-8133
> URL: https://issues.apache.org/jira/browse/KAFKA-8133
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.1.1
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.1.2
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.1-jdk8/detail/kafka-2.1-jdk8/151/tests]
> {quote}org.apache.kafka.common.errors.TopicExistsException: Topic 't1' 
> already exists.{quote}
> STDOUT:
> {quote}[2019-03-20 03:49:00,982] ERROR [ReplicaFetcher replicaId=1, 
> leaderId=0, fetcherId=0] Error for partition isr-after-broker-shutdown-0 at 
> offset 0 (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:00,982] ERROR [ReplicaFetcher replicaId=2, leaderId=0, 
> fetcherId=0] Error for partition isr-after-broker-shutdown-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:15,319] ERROR [ReplicaFetcher replicaId=1, leaderId=2, 
> fetcherId=0] Error for partition replicaDown-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:15,319] ERROR [ReplicaFetcher replicaId=0, leaderId=2, 
> fetcherId=0] Error for partition replicaDown-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:20,049] ERROR [ReplicaFetcher replicaId=0, leaderId=1, 
> fetcherId=0] Error for partition testAutoCreate_Topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:27,080] ERROR [ReplicaFetcher replicaId=0, leaderId=2, 
> fetcherId=0] Error for partition __consumer_offsets-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:27,080] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition __consumer_offsets-2 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:27,080] ERROR [ReplicaFetcher replicaId=2, leaderId=1, 
> fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:27,538] ERROR [ReplicaFetcher replicaId=2, leaderId=1, 
> fetcherId=0] Error for partition notInternal-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:27,538] ERROR [ReplicaFetcher replicaId=0, leaderId=2, 
> fetcherId=0] Error for partition notInternal-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:28,863] WARN Unable to read additional data from client 
> sessionid 0x102fbd81b150003, likely client has closed socket 
> (org.apache.zookeeper.server.NIOServerCnxn:376)
> [2019-03-20 03:49:40,478] ERROR [ReplicaFetcher replicaId=2, leaderId=1, 
> fetcherId=0] Error for partition t1-2 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:40,921] ERROR [ReplicaFetcher replicaId=0, leaderId=1, 
> fetcherId=0] Error for partition t2-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:40,922] ERROR [ReplicaFetcher replicaId=1, leaderId=2, 
> fetcherId=0] Error for partition t2-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not h

[jira] [Commented] (KAFKA-8136) Flaky Test MetadataRequestTest#testAllTopicsRequest

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832183#comment-16832183
 ] 

Vahid Hashemian commented on KAFKA-8136:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test MetadataRequestTest#testAllTopicsRequest
> ---
>
> Key: KAFKA-8136
> URL: https://issues.apache.org/jira/browse/KAFKA-8136
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/78/testReport/junit/kafka.server/MetadataRequestTest/testAllTopicsRequest/]
> {quote}java.lang.AssertionError: Partition [t2,0] metadata not propagated 
> after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.integration.KafkaServerTestHarness.createTopic(KafkaServerTestHarness.scala:125)
>  at 
> kafka.server.MetadataRequestTest.testAllTopicsRequest(MetadataRequestTest.scala:201){quote}
> STDOUT
> {quote}[2019-03-20 00:05:17,921] ERROR [ReplicaFetcher replicaId=1, 
> leaderId=0, fetcherId=0] Error for partition replicaDown-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 00:05:23,520] WARN Unable to 
> read additional data from client sessionid 0x10033b4d8c6, likely client 
> has closed socket (org.apache.zookeeper.server.NIOServerCnxn:376) [2019-03-20 
> 00:05:23,794] ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] 
> Error for partition testAutoCreate_Topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 00:05:30,735] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> __consumer_offsets-2 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 00:05:31,156] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> notInternal-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 00:05:31,156] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Error for partition 
> notInternal-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 00:05:37,817] WARN Unable to 
> read additional data from client sessionid 0x10033b51c370002, likely client 
> has closed socket (org.apache.zookeeper.server.NIOServerCnxn:376) [2019-03-20 
> 00:05:51,571] ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] 
> Error for partition t1-2 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 00:05:51,571] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> t1-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 00:06:22,153] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> t1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 00:06:22,622] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> t2-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This

[jira] [Updated] (KAFKA-8132) Flaky Test MirrorMakerIntegrationTest#testCommaSeparatedRegex

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8132:
---
Fix Version/s: (was: 2.2.1)

> Flaky Test MirrorMakerIntegrationTest#testCommaSeparatedRegex
> --
>
> Key: KAFKA-8132
> URL: https://issues.apache.org/jira/browse/KAFKA-8132
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker, unit tests
>Affects Versions: 2.1.1
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.1.2
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.1-jdk8/detail/kafka-2.1-jdk8/150/tests]
> {quote}kafka.tools.MirrorMaker$NoRecordsException
> at kafka.tools.MirrorMaker$ConsumerWrapper.receive(MirrorMaker.scala:483)
> at 
> kafka.tools.MirrorMakerIntegrationTest$$anonfun$testCommaSeparatedRegex$1.apply$mcZ$sp(MirrorMakerIntegrationTest.scala:92)
> at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:742)
> at 
> kafka.tools.MirrorMakerIntegrationTest.testCommaSeparatedRegex(MirrorMakerIntegrationTest.scala:90){quote}
> STDOUT (repeatable outputs):
> {quote}[2019-03-19 21:47:06,115] ERROR [Consumer clientId=consumer-1029, 
> groupId=test-group] Offset commit failed on partition nonexistent-topic1-0 at 
> offset 0: This server does not host this topic-partition. 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:812){quote} 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8132) Flaky Test MirrorMakerIntegrationTest#testCommaSeparatedRegex

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832185#comment-16832185
 ] 

Vahid Hashemian commented on KAFKA-8132:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test MirrorMakerIntegrationTest#testCommaSeparatedRegex
> --
>
> Key: KAFKA-8132
> URL: https://issues.apache.org/jira/browse/KAFKA-8132
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker, unit tests
>Affects Versions: 2.1.1
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.1.2
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.1-jdk8/detail/kafka-2.1-jdk8/150/tests]
> {quote}kafka.tools.MirrorMaker$NoRecordsException
> at kafka.tools.MirrorMaker$ConsumerWrapper.receive(MirrorMaker.scala:483)
> at 
> kafka.tools.MirrorMakerIntegrationTest$$anonfun$testCommaSeparatedRegex$1.apply$mcZ$sp(MirrorMakerIntegrationTest.scala:92)
> at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:742)
> at 
> kafka.tools.MirrorMakerIntegrationTest.testCommaSeparatedRegex(MirrorMakerIntegrationTest.scala:90){quote}
> STDOUT (repeatable outputs):
> {quote}[2019-03-19 21:47:06,115] ERROR [Consumer clientId=consumer-1029, 
> groupId=test-group] Offset commit failed on partition nonexistent-topic1-0 at 
> offset 0: This server does not host this topic-partition. 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:812){quote} 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (KAFKA-8123) Flaky Test RequestQuotaTest#testResponseThrottleTimeWhenBothProduceAndRequestQuotasViolated

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian closed KAFKA-8123.
--

Closing as duplicate.

> Flaky Test 
> RequestQuotaTest#testResponseThrottleTimeWhenBothProduceAndRequestQuotasViolated
>  
> 
>
> Key: KAFKA-8123
> URL: https://issues.apache.org/jira/browse/KAFKA-8123
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Anna Povzner
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3474/tests]
> {quote}java.util.concurrent.ExecutionException: java.lang.AssertionError: 
> Throttle time metrics for produce quota not updated: Client 
> small-quota-producer-client apiKey PRODUCE requests 1 requestTime 
> 0.015790873650539786 throttleTime 1000.0
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:206)
> at 
> kafka.server.RequestQuotaTest$$anonfun$waitAndCheckResults$1.apply(RequestQuotaTest.scala:423)
> at 
> kafka.server.RequestQuotaTest$$anonfun$waitAndCheckResults$1.apply(RequestQuotaTest.scala:421)
> at scala.collection.immutable.List.foreach(List.scala:392)
> at 
> scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
> at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:45)
> at 
> kafka.server.RequestQuotaTest.waitAndCheckResults(RequestQuotaTest.scala:421)
> at 
> kafka.server.RequestQuotaTest.testResponseThrottleTimeWhenBothProduceAndRequestQuotasViolated(RequestQuotaTest.scala:130){quote}
> STDOUT
> {quote}[2019-03-18 21:42:16,637] ERROR [KafkaApi-0] Error when handling 
> request: clientId=unauthorized-CONTROLLED_SHUTDOWN, correlationId=1, 
> api=CONTROLLED_SHUTDOWN, body=\{broker_id=0,broker_epoch=9223372036854775807} 
> (kafka.server.KafkaApis:76)
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=2, connectionId=127.0.0.1:42118-127.0.0.1:47612-1, 
> session=Session(User:Unauthorized,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized.
> [2019-03-18 21:42:16,655] ERROR [KafkaApi-0] Error when handling request: 
> clientId=unauthorized-STOP_REPLICA, correlationId=1, api=STOP_REPLICA, 
> body=\{controller_id=0,controller_epoch=2147483647,broker_epoch=9223372036854775807,delete_partitions=true,partitions=[{topic=topic-1,partition_ids=[0]}]}
>  (kafka.server.KafkaApis:76)
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:42118-127.0.0.1:47614-2, 
> session=Session(User:Unauthorized,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized.
> [2019-03-18 21:42:16,657] ERROR [KafkaApi-0] Error when handling request: 
> clientId=unauthorized-LEADER_AND_ISR, correlationId=1, api=LEADER_AND_ISR, 
> body=\{controller_id=0,controller_epoch=2147483647,broker_epoch=9223372036854775807,topic_states=[{topic=topic-1,partition_states=[{partition=0,controller_epoch=2147483647,leader=0,leader_epoch=2147483647,isr=[0],zk_version=2,replicas=[0],is_new=true}]}],live_leaders=[\{id=0,host=localhost,port=0}]}
>  (kafka.server.KafkaApis:76)
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=1, connectionId=127.0.0.1:42118-127.0.0.1:47616-2, 
> session=Session(User:Unauthorized,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized.
> [2019-03-18 21:42:16,668] ERROR [KafkaApi-0] Error when handling request: 
> clientId=unauthorized-UPDATE_METADATA, correlationId=1, api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=2147483647,broker_epoch=9223372036854775807,topic_states=[{topic=topic-1,partition_states=[{partition=0,controller_epoch=2147483647,leader=0,leader_epoch=2147483647,isr=[0],zk_version=2,replicas=[0],offline_replicas=[]}]}],live_brokers=[\{id=0,end_points=[{port=0,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76)
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=2, connectionId=127.0.0.1:42118-127.0.0.1:47618-2, 
> session=Session(User:Unauthorized,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized.
> [2019-03-18 21:42:16,725] ERROR [KafkaApi-0] Error when handling request: 
> clientId=unauthorized-STOP_REPLICA, correlationId=2, api=STOP_REPLICA, 
> body=\

[jira] [Updated] (KAFKA-8119) KafkaConfig listener accessors may fail during dynamic update

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8119:
---
Fix Version/s: (was: 2.2.1)

> KafkaConfig listener accessors may fail during dynamic update
> -
>
> Key: KAFKA-8119
> URL: https://issues.apache.org/jira/browse/KAFKA-8119
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.2.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 2.3.0
>
>
> Noticed a test failure in DynamicBrokerReconfigurationTest where the test 
> accessing `KafkaConfig#listeners` during dynamic update of listeners threw an 
> exception. In general, most dynamic configs can be updated independently, but 
> listeners and listener security protocol map need to be updated together when 
> new listeners that are not in the map are added or an entry is removed from 
> the map along with the listener. We don't expect to see this failure in the 
> implementation code because dynamic config updates are on a single thread and 
> SocketServer processes the full update together and validates the full config 
> prior to applying the changes. But we should ensure that 
> KafkaConfig#listeners, KafkaConfig#advertisedListeners etc. work even if a 
> dynamic update occurs during the call since these methods are used in tests 
> and could potentially be used in implementation code in the future from different
> threads.
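
The sketch below is a minimal, hypothetical illustration of the race described above; it is not the actual kafka.server.KafkaConfig or SocketServer code. The point is that "listeners" and "listener.security.protocol.map" are only meaningful as a pair: an accessor that reads them at two different moments during a dynamic update can observe a listener name with no protocol mapping, while updating and reading both under one lock always yields a consistent snapshot.
{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical, simplified model of the two related dynamic properties.
public class ListenerConfigSketch {
    private final Map<String, String> props = new ConcurrentHashMap<>();
    private final Object lock = new Object();

    public ListenerConfigSketch() {
        props.put("listeners", "PLAINTEXT://:9092");
        props.put("listener.security.protocol.map", "PLAINTEXT:PLAINTEXT");
    }

    // Unsafe: reads the two properties at different times, so a concurrent
    // update that renames the listener can make the lookup below fail.
    public String protocolForFirstListenerUnsafe() {
        String listenerName = props.get("listeners").split("://")[0];
        String protocolMap = props.get("listener.security.protocol.map");
        for (String entry : protocolMap.split(",")) {
            String[] kv = entry.split(":");
            if (kv[0].equals(listenerName)) {
                return kv[1];
            }
        }
        throw new IllegalStateException("No protocol mapped for " + listenerName);
    }

    // Safer: the pair is updated and read under the same lock, so callers
    // always observe a consistent snapshot (the "update together" idea).
    public void updateListeners(String listeners, String protocolMap) {
        synchronized (lock) {
            props.put("listeners", listeners);
            props.put("listener.security.protocol.map", protocolMap);
        }
    }

    public String protocolForFirstListenerSafe() {
        synchronized (lock) {
            return protocolForFirstListenerUnsafe();
        }
    }
}
{code}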



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8119) KafkaConfig listener accessors may fail during dynamic update

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832187#comment-16832187
 ] 

Vahid Hashemian commented on KAFKA-8119:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> KafkaConfig listener accessors may fail during dynamic update
> -
>
> Key: KAFKA-8119
> URL: https://issues.apache.org/jira/browse/KAFKA-8119
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.2.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 2.3.0
>
>
> Noticed a test failure in DynamicBrokerReconfigurationTest where the test 
> accessing `KafkaConfig#listeners` during dynamic update of listeners threw an 
> exception. In general, most dynamic configs can be updated independently, but 
> listeners and listener security protocol map need to be updated together when 
> new listeners that are not in the map are added or an entry is removed from 
> the map along with the listener. We don't expect to see this failure in the 
> implementation code because dynamic config updates are on a single thread and 
> SocketServer processes the full update together and validates the full config 
> prior to applying the changes. But we should ensure that 
> KafkaConfig#listeners, KafkaConfig#advertisedListeners etc. work even if a 
> dynamic update occurs during the call since these methods are used in tests 
> and could potentially be used in implementation code in the future from different
> threads.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8092) Flaky Test GroupAuthorizerIntegrationTest#testSendOffsetsWithNoConsumerGroupDescribeAccess

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832190#comment-16832190
 ] 

Vahid Hashemian commented on KAFKA-8092:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test 
> GroupAuthorizerIntegrationTest#testSendOffsetsWithNoConsumerGroupDescribeAccess
> --
>
> Key: KAFKA-8092
> URL: https://issues.apache.org/jira/browse/KAFKA-8092
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/64/testReport/junit/kafka.api/GroupAuthorizerIntegrationTest/testSendOffsetsWithNoConsumerGroupDescribeAccess/]
> {quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata 
> not propagated after 15000 ms at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
> kafka.api.AuthorizerIntegrationTest.setUp(AuthorizerIntegrationTest.scala:242){quote}
> STDOUT
> {quote}[2019-03-11 16:08:29,319] ERROR [KafkaApi-0] Error when handling 
> request: clientId=0, correlationId=0, api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=38324,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:38324-127.0.0.1:59458-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-11 16:08:29,933] ERROR [Consumer 
> clientId=consumer-99, groupId=my-group] Offset commit failed on partition 
> topic-0 at offset 5: Not authorized to access topics: [Topic authorization 
> failed.] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:812) 
> [2019-03-11 16:08:29,933] ERROR [Consumer clientId=consumer-99, 
> groupId=my-group] Not authorized to commit to topics [topic] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:850) 
> [2019-03-11 16:08:31,370] ERROR [KafkaApi-0] Error when handling request: 
> clientId=0, correlationId=0, api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=33310,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:33310-127.0.0.1:49676-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-11 16:08:34,437] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=35999,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:35999-127.0.0.1:48268-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-11 16:08:40,978] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=38267,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationExc

[jira] [Commented] (KAFKA-8110) Flaky Test DescribeConsumerGroupTest#testDescribeMembersWithConsumersWithoutAssignedPartitions

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832188#comment-16832188
 ] 

Vahid Hashemian commented on KAFKA-8110:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test 
> DescribeConsumerGroupTest#testDescribeMembersWithConsumersWithoutAssignedPartitions
> --
>
> Key: KAFKA-8110
> URL: https://issues.apache.org/jira/browse/KAFKA-8110
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/67/testReport/junit/kafka.admin/DescribeConsumerGroupTest/testDescribeMembersWithConsumersWithoutAssignedPartitions/]
> {quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata 
> not propagated after 15000 ms at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
> kafka.admin.DescribeConsumerGroupTest.testDescribeMembersWithConsumersWithoutAssignedPartitions(DescribeConsumerGroupTest.scala:372){quote}
> STDOUT
> {quote}[2019-03-14 20:01:52,347] WARN Ignoring unexpected runtime exception 
> (org.apache.zookeeper.server.NIOServerCnxnFactory:236) 
> java.nio.channels.CancelledKeyException at 
> sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:73) at 
> sun.nio.ch.SelectionKeyImpl.readyOps(SelectionKeyImpl.java:87) at 
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:205)
>  at java.lang.Thread.run(Thread.java:748)
> TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
> foo   0         0              0              0   -           -    -
> TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
> foo   0         0              0              0   -           -    -
> COORDINATOR (ID)    ASSIGNMENT-STRATEGY STATE #MEMBERS
> localhost:44669 (0){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8110) Flaky Test DescribeConsumerGroupTest#testDescribeMembersWithConsumersWithoutAssignedPartitions

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8110:
---
Fix Version/s: (was: 2.2.1)

> Flaky Test 
> DescribeConsumerGroupTest#testDescribeMembersWithConsumersWithoutAssignedPartitions
> --
>
> Key: KAFKA-8110
> URL: https://issues.apache.org/jira/browse/KAFKA-8110
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/67/testReport/junit/kafka.admin/DescribeConsumerGroupTest/testDescribeMembersWithConsumersWithoutAssignedPartitions/]
> {quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata 
> not propagated after 15000 ms at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
> kafka.admin.DescribeConsumerGroupTest.testDescribeMembersWithConsumersWithoutAssignedPartitions(DescribeConsumerGroupTest.scala:372){quote}
> STDOUT
> {quote}[2019-03-14 20:01:52,347] WARN Ignoring unexpected runtime exception 
> (org.apache.zookeeper.server.NIOServerCnxnFactory:236) 
> java.nio.channels.CancelledKeyException at 
> sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:73) at 
> sun.nio.ch.SelectionKeyImpl.readyOps(SelectionKeyImpl.java:87) at 
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:205)
>  at java.lang.Thread.run(Thread.java:748)
> TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
> foo   0         0              0              0   -           -    -
> TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
> foo   0         0              0              0   -           -    -
> COORDINATOR (ID)    ASSIGNMENT-STRATEGY STATE #MEMBERS
> localhost:44669 (0){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8092) Flaky Test GroupAuthorizerIntegrationTest#testSendOffsetsWithNoConsumerGroupDescribeAccess

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8092:
---
Fix Version/s: (was: 2.2.1)

> Flaky Test 
> GroupAuthorizerIntegrationTest#testSendOffsetsWithNoConsumerGroupDescribeAccess
> --
>
> Key: KAFKA-8092
> URL: https://issues.apache.org/jira/browse/KAFKA-8092
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/64/testReport/junit/kafka.api/GroupAuthorizerIntegrationTest/testSendOffsetsWithNoConsumerGroupDescribeAccess/]
> {quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata 
> not propagated after 15000 ms at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
> kafka.api.AuthorizerIntegrationTest.setUp(AuthorizerIntegrationTest.scala:242){quote}
> STDOUT
> {quote}[2019-03-11 16:08:29,319] ERROR [KafkaApi-0] Error when handling 
> request: clientId=0, correlationId=0, api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=38324,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:38324-127.0.0.1:59458-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-11 16:08:29,933] ERROR [Consumer 
> clientId=consumer-99, groupId=my-group] Offset commit failed on partition 
> topic-0 at offset 5: Not authorized to access topics: [Topic authorization 
> failed.] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:812) 
> [2019-03-11 16:08:29,933] ERROR [Consumer clientId=consumer-99, 
> groupId=my-group] Not authorized to commit to topics [topic] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:850) 
> [2019-03-11 16:08:31,370] ERROR [KafkaApi-0] Error when handling request: 
> clientId=0, correlationId=0, api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=33310,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:33310-127.0.0.1:49676-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-11 16:08:34,437] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=35999,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:35999-127.0.0.1:48268-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-11 16:08:40,978] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=38267,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:38267-127.0.0.1:53148-0, 
> s

[jira] [Commented] (KAFKA-8087) Flaky Test PlaintextConsumerTest#testConsumingWithNullGroupId

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832193#comment-16832193
 ] 

Vahid Hashemian commented on KAFKA-8087:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test PlaintextConsumerTest#testConsumingWithNullGroupId
> -
>
> Key: KAFKA-8087
> URL: https://issues.apache.org/jira/browse/KAFKA-8087
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/62/testReport/junit/kafka.api/PlaintextConsumerTest/testConsumingWithNullGroupId/]
> {quote}java.lang.AssertionError: Partition [topic,0] metadata not propagated 
> after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.integration.KafkaServerTestHarness.createTopic(KafkaServerTestHarness.scala:125)
>  at kafka.api.BaseConsumerTest.setUp(BaseConsumerTest.scala:69){quote}
> STDOUT
> {quote}[2019-03-09 08:39:02,022] ERROR [ReplicaFetcher replicaId=1, 
> leaderId=2, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,022] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,202] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,204] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,236] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,236] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,511] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topicWithNewMessageFormat-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,512] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> topicWithNewMessageFormat-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:06,568] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:09,582] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic

[jira] [Updated] (KAFKA-8087) Flaky Test PlaintextConsumerTest#testConsumingWithNullGroupId

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8087:
---
Fix Version/s: (was: 2.2.1)

> Flaky Test PlaintextConsumerTest#testConsumingWithNullGroupId
> -
>
> Key: KAFKA-8087
> URL: https://issues.apache.org/jira/browse/KAFKA-8087
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/62/testReport/junit/kafka.api/PlaintextConsumerTest/testConsumingWithNullGroupId/]
> {quote}java.lang.AssertionError: Partition [topic,0] metadata not propagated 
> after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.integration.KafkaServerTestHarness.createTopic(KafkaServerTestHarness.scala:125)
>  at kafka.api.BaseConsumerTest.setUp(BaseConsumerTest.scala:69){quote}
> STDOUT
> {quote}[2019-03-09 08:39:02,022] ERROR [ReplicaFetcher replicaId=1, 
> leaderId=2, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,022] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,202] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,204] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,236] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,236] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,511] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topicWithNewMessageFormat-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,512] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> topicWithNewMessageFormat-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:06,568] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:09,582] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:09,787] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetc

[jira] [Updated] (KAFKA-8086) Flaky Test GroupAuthorizerIntegrationTest#testPatternSubscriptionWithTopicAndGroupRead

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8086:
---
Fix Version/s: (was: 2.2.1)

> Flaky Test 
> GroupAuthorizerIntegrationTest#testPatternSubscriptionWithTopicAndGroupRead
> --
>
> Key: KAFKA-8086
> URL: https://issues.apache.org/jira/browse/KAFKA-8086
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/62/testReport/junit/kafka.api/GroupAuthorizerIntegrationTest/testPatternSubscriptionWithTopicAndGroupRead/]
> {quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata 
> not propagated after 15000 ms at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
> kafka.api.AuthorizerIntegrationTest.setUp(AuthorizerIntegrationTest.scala:242){quote}
> STDOUT
> {quote}[2019-03-09 08:40:34,220] ERROR [KafkaApi-0] Error when handling 
> request: clientId=0, correlationId=0, api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=41020,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:41020-127.0.0.1:52304-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-09 08:40:35,336] ERROR [Consumer 
> clientId=consumer-98, groupId=my-group] Offset commit failed on partition 
> topic-0 at offset 5: Not authorized to access topics: [Topic authorization 
> failed.] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:812) 
> [2019-03-09 08:40:35,336] ERROR [Consumer clientId=consumer-98, 
> groupId=my-group] Not authorized to commit to topics [topic] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:850) 
> [2019-03-09 08:40:41,649] ERROR [KafkaApi-0] Error when handling request: 
> clientId=0, correlationId=0, api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=36903,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:36903-127.0.0.1:44978-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-09 08:40:53,898] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=41067,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:41067-127.0.0.1:40882-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-09 08:42:07,717] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=46276,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:46276-127.0.0.1:41362-0, 
> session=Sess

[jira] [Commented] (KAFKA-8085) Flaky Test ResetConsumerGroupOffsetTest#testResetOffsetsByDuration

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832195#comment-16832195
 ] 

Vahid Hashemian commented on KAFKA-8085:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test ResetConsumerGroupOffsetTest#testResetOffsetsByDuration
> --
>
> Key: KAFKA-8085
> URL: https://issues.apache.org/jira/browse/KAFKA-8085
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/62/testReport/junit/kafka.admin/ResetConsumerGroupOffsetTest/testResetOffsetsByDuration/]
> {quote}java.lang.AssertionError: Expected that consumer group has consumed 
> all messages from topic/partition. at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.admin.ResetConsumerGroupOffsetTest.awaitConsumerProgress(ResetConsumerGroupOffsetTest.scala:364)
>  at 
> kafka.admin.ResetConsumerGroupOffsetTest.produceConsumeAndShutdown(ResetConsumerGroupOffsetTest.scala:359)
>  at 
> kafka.admin.ResetConsumerGroupOffsetTest.testResetOffsetsByDuration(ResetConsumerGroupOffsetTest.scala:146){quote}
> STDOUT
> {quote}[2019-03-09 08:39:29,856] WARN Unable to read additional data from 
> client sessionid 0x105f6adb208, likely client has closed socket 
> (org.apache.zookeeper.server.NIOServerCnxn:376) [2019-03-09 08:39:46,373] 
> WARN Unable to read additional data from client sessionid 0x105f6adf4c50001, 
> likely client has closed socket 
> (org.apache.zookeeper.server.NIOServerCnxn:376){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8086) Flaky Test GroupAuthorizerIntegrationTest#testPatternSubscriptionWithTopicAndGroupRead

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832194#comment-16832194
 ] 

Vahid Hashemian commented on KAFKA-8086:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test 
> GroupAuthorizerIntegrationTest#testPatternSubscriptionWithTopicAndGroupRead
> --
>
> Key: KAFKA-8086
> URL: https://issues.apache.org/jira/browse/KAFKA-8086
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/62/testReport/junit/kafka.api/GroupAuthorizerIntegrationTest/testPatternSubscriptionWithTopicAndGroupRead/]
> {quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata 
> not propagated after 15000 ms at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
> kafka.api.AuthorizerIntegrationTest.setUp(AuthorizerIntegrationTest.scala:242){quote}
> STDOUT
> {quote}[2019-03-09 08:40:34,220] ERROR [KafkaApi-0] Error when handling 
> request: clientId=0, correlationId=0, api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=41020,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:41020-127.0.0.1:52304-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-09 08:40:35,336] ERROR [Consumer 
> clientId=consumer-98, groupId=my-group] Offset commit failed on partition 
> topic-0 at offset 5: Not authorized to access topics: [Topic authorization 
> failed.] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:812) 
> [2019-03-09 08:40:35,336] ERROR [Consumer clientId=consumer-98, 
> groupId=my-group] Not authorized to commit to topics [topic] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:850) 
> [2019-03-09 08:40:41,649] ERROR [KafkaApi-0] Error when handling request: 
> clientId=0, correlationId=0, api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=36903,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:36903-127.0.0.1:44978-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-09 08:40:53,898] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=41067,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:41067-127.0.0.1:40882-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-09 08:42:07,717] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=46276,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Req

[jira] [Updated] (KAFKA-8085) Flaky Test ResetConsumerGroupOffsetTest#testResetOffsetsByDuration

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8085:
---
Fix Version/s: (was: 2.2.1)

> Flaky Test ResetConsumerGroupOffsetTest#testResetOffsetsByDuration
> --
>
> Key: KAFKA-8085
> URL: https://issues.apache.org/jira/browse/KAFKA-8085
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/62/testReport/junit/kafka.admin/ResetConsumerGroupOffsetTest/testResetOffsetsByDuration/]
> {quote}java.lang.AssertionError: Expected that consumer group has consumed 
> all messages from topic/partition. at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.admin.ResetConsumerGroupOffsetTest.awaitConsumerProgress(ResetConsumerGroupOffsetTest.scala:364)
>  at 
> kafka.admin.ResetConsumerGroupOffsetTest.produceConsumeAndShutdown(ResetConsumerGroupOffsetTest.scala:359)
>  at 
> kafka.admin.ResetConsumerGroupOffsetTest.testResetOffsetsByDuration(ResetConsumerGroupOffsetTest.scala:146){quote}
> STDOUT
> {quote}[2019-03-09 08:39:29,856] WARN Unable to read additional data from 
> client sessionid 0x105f6adb208, likely client has closed socket 
> (org.apache.zookeeper.server.NIOServerCnxn:376) [2019-03-09 08:39:46,373] 
> WARN Unable to read additional data from client sessionid 0x105f6adf4c50001, 
> likely client has closed socket 
> (org.apache.zookeeper.server.NIOServerCnxn:376){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8083) Flaky Test DelegationTokenRequestsTest#testDelegationTokenRequests

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8083:
---
Fix Version/s: (was: 2.2.1)

> Flaky Test DelegationTokenRequestsTest#testDelegationTokenRequests
> --
>
> Key: KAFKA-8083
> URL: https://issues.apache.org/jira/browse/KAFKA-8083
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/61/testReport/junit/kafka.server/DelegationTokenRequestsTest/testDelegationTokenRequests/]
> {quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata 
> not propagated after 15000 ms at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
> kafka.api.IntegrationTestHarness.doSetup(IntegrationTestHarness.scala:95) at 
> kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:73) at 
> kafka.server.DelegationTokenRequestsTest.setUp(DelegationTokenRequestsTest.scala:46){quote}
> STDOUT
> {quote}[2019-03-09 04:01:31,789] WARN SASL configuration failed: 
> javax.security.auth.login.LoginException: No JAAS configuration section named 
> 'Client' was found in specified JAAS configuration file: 
> '/tmp/kafka1872564121337557452.tmp'. Will continue connection to Zookeeper 
> server without SASL authentication, if Zookeeper server allows it. 
> (org.apache.zookeeper.ClientCnxn:1011) [2019-03-09 04:01:31,789] ERROR 
> [ZooKeeperClient] Auth failed. (kafka.zookeeper.ZooKeeperClient:74) 
> [2019-03-09 04:01:31,793] WARN SASL configuration failed: 
> javax.security.auth.login.LoginException: No JAAS configuration section named 
> 'Client' was found in specified JAAS configuration file: 
> '/tmp/kafka1872564121337557452.tmp'. Will continue connection to Zookeeper 
> server without SASL authentication, if Zookeeper server allows it. 
> (org.apache.zookeeper.ClientCnxn:1011) [2019-03-09 04:01:31,794] ERROR 
> [ZooKeeperClient] Auth failed. (kafka.zookeeper.ZooKeeperClient:74){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8084) Flaky Test DescribeConsumerGroupTest#testDescribeMembersOfExistingGroupWithNoMembers

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8084:
---
Fix Version/s: (was: 2.2.1)

> Flaky Test 
> DescribeConsumerGroupTest#testDescribeMembersOfExistingGroupWithNoMembers
> 
>
> Key: KAFKA-8084
> URL: https://issues.apache.org/jira/browse/KAFKA-8084
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/62/testReport/junit/kafka.admin/DescribeConsumerGroupTest/testDescribeMembersOfExistingGroupWithNoMembers/]
> {quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata 
> not propagated after 15000 ms at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
> kafka.admin.DescribeConsumerGroupTest.testDescribeMembersOfExistingGroupWithNoMembers(DescribeConsumerGroupTest.scala:283){quote}
> STDOUT
> {quote}TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
> foo   0         0              0              0   -           -    -
> TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
> foo   0         0              0              0   -           -    -
> COORDINATOR (ID)    ASSIGNMENT-STRATEGY STATE #MEMBERS
> localhost:45812 (0)                     Empty 0{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8084) Flaky Test DescribeConsumerGroupTest#testDescribeMembersOfExistingGroupWithNoMembers

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832196#comment-16832196
 ] 

Vahid Hashemian commented on KAFKA-8084:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test 
> DescribeConsumerGroupTest#testDescribeMembersOfExistingGroupWithNoMembers
> 
>
> Key: KAFKA-8084
> URL: https://issues.apache.org/jira/browse/KAFKA-8084
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/62/testReport/junit/kafka.admin/DescribeConsumerGroupTest/testDescribeMembersOfExistingGroupWithNoMembers/]
> {quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata 
> not propagated after 15000 ms at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
> kafka.admin.DescribeConsumerGroupTest.testDescribeMembersOfExistingGroupWithNoMembers(DescribeConsumerGroupTest.scala:283){quote}
> STDOUT
> {quote}TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
> foo 0 0 0 0 - - -
> TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
> foo 0 0 0 0 - - -
> COORDINATOR (ID) ASSIGNMENT-STRATEGY STATE #MEMBERS
> localhost:45812 (0) Empty 0{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8082) Flaky Test ProducerFailureHandlingTest#testNotEnoughReplicasAfterBrokerShutdown

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8082:
---
Fix Version/s: (was: 2.2.1)

> Flaky Test 
> ProducerFailureHandlingTest#testNotEnoughReplicasAfterBrokerShutdown
> ---
>
> Key: KAFKA-8082
> URL: https://issues.apache.org/jira/browse/KAFKA-8082
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/61/testReport/junit/kafka.api/ProducerFailureHandlingTest/testNotEnoughReplicasAfterBrokerShutdown/]
> {quote}java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.NotEnoughReplicasAfterAppendException: 
> Messages are written to the log, but to fewer in-sync replicas than required. 
> at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:98)
>  at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:67)
>  at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:30)
>  at 
> kafka.api.ProducerFailureHandlingTest.testNotEnoughReplicasAfterBrokerShutdown(ProducerFailureHandlingTest.scala:270){quote}
> STDOUT
> {quote}[2019-03-09 03:59:24,897] ERROR [ReplicaFetcher replicaId=0, 
> leaderId=1, fetcherId=0] Error for partition topic-1-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 03:59:28,028] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition 
> topic-1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 03:59:42,046] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition 
> minisrtest-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 03:59:42,245] ERROR 
> [ReplicaManager broker=1] Error processing append operation on partition 
> minisrtest-0 (kafka.server.ReplicaManager:76) 
> org.apache.kafka.common.errors.NotEnoughReplicasException: The size of the 
> current ISR Set(1, 0) is insufficient to satisfy the min.isr requirement of 3 
> for partition minisrtest-0 [2019-03-09 04:00:01,212] ERROR [ReplicaFetcher 
> replicaId=1, leaderId=0, fetcherId=0] Error for partition topic-1-0 at offset 
> 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 04:00:02,214] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 04:00:03,216] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 04:00:23,144] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition 
> topic-1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 04:00:24,146] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition 
> topic-1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 04:00:25,148] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition 
> topic-1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 04:00:44,607] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> minisrtest2-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.{quote}
>  
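For context on the failure mode quoted above: with acks=all, a produce request is failed with NotEnoughReplicasException or NotEnoughReplicasAfterAppendException whenever the partition's in-sync replica set shrinks below min.insync.replicas (here a min.isr of 3 against an ISR of Set(1, 0)). A minimal sketch of reproducing that condition outside the test harness; the bootstrap address is an assumption, and "minisrtest" is used only because it appears in the log, assuming a topic configured with min.insync.replicas=3 while brokers are shut down.
{code:java}
import java.util.Properties;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.NotEnoughReplicasAfterAppendException;
import org.apache.kafka.common.errors.NotEnoughReplicasException;
import org.apache.kafka.common.serialization.StringSerializer;

public class MinIsrProduceSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
        props.put(ProducerConfig.ACKS_CONFIG, "all"); // wait for all in-sync replicas to acknowledge
        props.put(ProducerConfig.RETRIES_CONFIG, 0);  // surface the error instead of retrying
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // Assumed: "minisrtest" has min.insync.replicas=3 and fewer than 3 replicas are in sync,
        // e.g. after a broker shutdown, so the send below completes exceptionally.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("minisrtest", "key", "value")).get();
        } catch (ExecutionException e) {
            if (e.getCause() instanceof NotEnoughReplicasException
                    || e.getCause() instanceof NotEnoughReplicasAfterAppendException) {
                System.err.println("ISR smaller than min.insync.replicas: " + e.getCause().getMessage());
            } else {
                throw e;
            }
        }
    }
}
{code}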



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8077) Flaky Test AdminClientIntegrationTest#testConsumeAfterDeleteRecords

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8077:
---
Fix Version/s: (was: 2.2.1)

> Flaky Test AdminClientIntegrationTest#testConsumeAfterDeleteRecords
> ---
>
> Key: KAFKA-8077
> URL: https://issues.apache.org/jira/browse/KAFKA-8077
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.0.1
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.0.2, 2.3.0, 2.1.2
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.0-jdk8/detail/kafka-2.0-jdk8/237/tests]
> {quote}java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:94)
> at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:64)
> at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:29)
> at 
> kafka.api.AdminClientIntegrationTest$$anonfun$sendRecords$1.apply(AdminClientIntegrationTest.scala:994)
> at 
> kafka.api.AdminClientIntegrationTest$$anonfun$sendRecords$1.apply(AdminClientIntegrationTest.scala:994)
> at scala.collection.Iterator$class.foreach(Iterator.scala:891)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at 
> kafka.api.AdminClientIntegrationTest.sendRecords(AdminClientIntegrationTest.scala:994)
> at 
> kafka.api.AdminClientIntegrationTest.testConsumeAfterDeleteRecords(AdminClientIntegrationTest.scala:909)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.kafka.common.errors.UnknownTopicOrPartitionException: 
> This server does not host this topic-partition.{quote}
> STDERR
> {quote}Exception in thread "Thread-1638" 
> org.apache.kafka.common.errors.InterruptException: 
> java.lang.InterruptedException
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.maybeThrowInterruptException(ConsumerNetworkClient.java:504)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:287)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:242)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1247)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1187)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1115)
> at 
> kafka.api.AdminClientIntegrationTest$$anon$1.run(AdminClientIntegrationTest.scala:1132)
> Caused by: java.lang.InterruptedException
> ... 7 more{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8083) Flaky Test DelegationTokenRequestsTest#testDelegationTokenRequests

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832199#comment-16832199
 ] 

Vahid Hashemian commented on KAFKA-8083:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test DelegationTokenRequestsTest#testDelegationTokenRequests
> --
>
> Key: KAFKA-8083
> URL: https://issues.apache.org/jira/browse/KAFKA-8083
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/61/testReport/junit/kafka.server/DelegationTokenRequestsTest/testDelegationTokenRequests/]
> {quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata 
> not propagated after 15000 ms at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
> kafka.api.IntegrationTestHarness.doSetup(IntegrationTestHarness.scala:95) at 
> kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:73) at 
> kafka.server.DelegationTokenRequestsTest.setUp(DelegationTokenRequestsTest.scala:46){quote}
> STDOUT
> {quote}[2019-03-09 04:01:31,789] WARN SASL configuration failed: 
> javax.security.auth.login.LoginException: No JAAS configuration section named 
> 'Client' was found in specified JAAS configuration file: 
> '/tmp/kafka1872564121337557452.tmp'. Will continue connection to Zookeeper 
> server without SASL authentication, if Zookeeper server allows it. 
> (org.apache.zookeeper.ClientCnxn:1011) [2019-03-09 04:01:31,789] ERROR 
> [ZooKeeperClient] Auth failed. (kafka.zookeeper.ZooKeeperClient:74) 
> [2019-03-09 04:01:31,793] WARN SASL configuration failed: 
> javax.security.auth.login.LoginException: No JAAS configuration section named 
> 'Client' was found in specified JAAS configuration file: 
> '/tmp/kafka1872564121337557452.tmp'. Will continue connection to Zookeeper 
> server without SASL authentication, if Zookeeper server allows it. 
> (org.apache.zookeeper.ClientCnxn:1011) [2019-03-09 04:01:31,794] ERROR 
> [ZooKeeperClient] Auth failed. (kafka.zookeeper.ZooKeeperClient:74){quote}
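The SASL warnings quoted above come from the ZooKeeper client looking for a JAAS login section named 'Client' and not finding one in the file the test generated, so it falls back to an unauthenticated connection and the subsequent "Auth failed" ERROR follows. A minimal sketch of how a JVM is normally pointed at such a section; the file path, login module choice, and credentials are illustrative assumptions, not values taken from this ticket.
{code:java}
public class ZkSaslClientSketch {
    public static void main(String[] args) {
        // Tell the JVM which JAAS configuration file to load. The ZooKeeper client then
        // looks for a login section literally named "Client" inside that file.
        System.setProperty("java.security.auth.login.config",
                "/etc/kafka/zookeeper_client_jaas.conf"); // assumed path

        // The referenced file would contain a section shaped roughly like this
        // (DigestLoginModule shown as one possible choice; credentials are placeholders):
        //
        // Client {
        //     org.apache.zookeeper.server.auth.DigestLoginModule required
        //     username="zk-client"
        //     password="zk-client-secret";
        // };
        //
        // Without such a section the client logs the WARN quoted above and continues
        // without SASL authentication, if the ZooKeeper server allows it.
    }
}
{code}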



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8082) Flaky Test ProducerFailureHandlingTest#testNotEnoughReplicasAfterBrokerShutdown

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832200#comment-16832200
 ] 

Vahid Hashemian commented on KAFKA-8082:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test 
> ProducerFailureHandlingTest#testNotEnoughReplicasAfterBrokerShutdown
> ---
>
> Key: KAFKA-8082
> URL: https://issues.apache.org/jira/browse/KAFKA-8082
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/61/testReport/junit/kafka.api/ProducerFailureHandlingTest/testNotEnoughReplicasAfterBrokerShutdown/]
> {quote}java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.NotEnoughReplicasAfterAppendException: 
> Messages are written to the log, but to fewer in-sync replicas than required. 
> at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:98)
>  at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:67)
>  at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:30)
>  at 
> kafka.api.ProducerFailureHandlingTest.testNotEnoughReplicasAfterBrokerShutdown(ProducerFailureHandlingTest.scala:270){quote}
> STDOUT
> {quote}[2019-03-09 03:59:24,897] ERROR [ReplicaFetcher replicaId=0, 
> leaderId=1, fetcherId=0] Error for partition topic-1-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 03:59:28,028] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition 
> topic-1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 03:59:42,046] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition 
> minisrtest-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 03:59:42,245] ERROR 
> [ReplicaManager broker=1] Error processing append operation on partition 
> minisrtest-0 (kafka.server.ReplicaManager:76) 
> org.apache.kafka.common.errors.NotEnoughReplicasException: The size of the 
> current ISR Set(1, 0) is insufficient to satisfy the min.isr requirement of 3 
> for partition minisrtest-0 [2019-03-09 04:00:01,212] ERROR [ReplicaFetcher 
> replicaId=1, leaderId=0, fetcherId=0] Error for partition topic-1-0 at offset 
> 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 04:00:02,214] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 04:00:03,216] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 04:00:23,144] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition 
> topic-1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 04:00:24,146] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition 
> topic-1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 04:00:25,148] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition 
> topic-1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 04:00:44,607] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> minisrtest2-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.{quote}

[jira] [Commented] (KAFKA-8077) Flaky Test AdminClientIntegrationTest#testConsumeAfterDeleteRecords

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832201#comment-16832201
 ] 

Vahid Hashemian commented on KAFKA-8077:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test AdminClientIntegrationTest#testConsumeAfterDeleteRecords
> ---
>
> Key: KAFKA-8077
> URL: https://issues.apache.org/jira/browse/KAFKA-8077
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.0.1
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.0.2, 2.3.0, 2.1.2
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.0-jdk8/detail/kafka-2.0-jdk8/237/tests]
> {quote}java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:94)
> at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:64)
> at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:29)
> at 
> kafka.api.AdminClientIntegrationTest$$anonfun$sendRecords$1.apply(AdminClientIntegrationTest.scala:994)
> at 
> kafka.api.AdminClientIntegrationTest$$anonfun$sendRecords$1.apply(AdminClientIntegrationTest.scala:994)
> at scala.collection.Iterator$class.foreach(Iterator.scala:891)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at 
> kafka.api.AdminClientIntegrationTest.sendRecords(AdminClientIntegrationTest.scala:994)
> at 
> kafka.api.AdminClientIntegrationTest.testConsumeAfterDeleteRecords(AdminClientIntegrationTest.scala:909)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.kafka.common.errors.UnknownTopicOrPartitionException: 
> This server does not host this topic-partition.{quote}
> STDERR
> {quote}Exception in thread "Thread-1638" 
> org.apache.kafka.common.errors.InterruptException: 
> java.lang.InterruptedException
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.maybeThrowInterruptException(ConsumerNetworkClient.java:504)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:287)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:242)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1247)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1187)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1115)
> at 
> kafka.api.AdminClientIntegrationTest$$anon$1.run(AdminClientIntegrationTest.scala:1132)
> Caused by: java.lang.InterruptedException
> ... 7 more{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-2933) Failure in kafka.api.PlaintextConsumerTest.testMultiConsumerDefaultAssignment

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-2933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-2933:
---
Fix Version/s: (was: 2.2.1)
   2.2.2

> Failure in kafka.api.PlaintextConsumerTest.testMultiConsumerDefaultAssignment
> -
>
> Key: KAFKA-2933
> URL: https://issues.apache.org/jira/browse/KAFKA-2933
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, unit tests
>Affects Versions: 0.10.0.0, 2.2.0
>Reporter: Guozhang Wang
>Assignee: Jason Gustafson
>Priority: Critical
>  Labels: flaky-test, transient-unit-test-failure
> Fix For: 2.3.0, 2.2.2
>
>
> {code}
> kafka.api.PlaintextConsumerTest > testMultiConsumerDefaultAssignment FAILED
> java.lang.AssertionError: Did not get valid assignment for partitions 
> [topic1-2, topic2-0, topic1-4, topic-1, topic-0, topic2-1, topic1-0, 
> topic1-3, topic1-1, topic2-2] after we changed subscription
> at org.junit.Assert.fail(Assert.java:88)
> at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:747)
> at 
> kafka.api.PlaintextConsumerTest.validateGroupAssignment(PlaintextConsumerTest.scala:644)
> at 
> kafka.api.PlaintextConsumerTest.changeConsumerGroupSubscriptionAndValidateAssignment(PlaintextConsumerTest.scala:663)
> at 
> kafka.api.PlaintextConsumerTest.testMultiConsumerDefaultAssignment(PlaintextConsumerTest.scala:461)
> java.util.ConcurrentModificationException: KafkaConsumer is not safe for 
> multi-threaded access
> {code}
> Example: https://builds.apache.org/job/kafka-trunk-git-pr-jdk7/1582/console
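The ConcurrentModificationException in the failure above reflects a documented constraint: KafkaConsumer is not thread-safe, and wakeup() is the only method that may be called from another thread. A minimal sketch of the supported cross-thread shutdown pattern (topic, group id, and bootstrap address are illustrative assumptions):
{code:java}
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SingleThreadedConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "sketch-group");            // assumed group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        final Thread mainThread = Thread.currentThread();

        // wakeup() is the only KafkaConsumer method safe to call from another thread;
        // the shutdown hook uses it to break the poll loop, then waits for the main
        // thread to finish closing the consumer.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            consumer.wakeup();
            try {
                mainThread.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }));

        try {
            consumer.subscribe(Collections.singletonList("topic1")); // assumed topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("%s-%d@%d: %s%n",
                            record.topic(), record.partition(), record.offset(), record.value());
                }
            }
        } catch (WakeupException e) {
            // Expected on shutdown: poll() throws once wakeup() has been called.
        } finally {
            consumer.close(); // all other consumer calls stay on this single thread
        }
    }
}
{code}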



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-2933) Failure in kafka.api.PlaintextConsumerTest.testMultiConsumerDefaultAssignment

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-2933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832202#comment-16832202
 ] 

Vahid Hashemian commented on KAFKA-2933:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Failure in kafka.api.PlaintextConsumerTest.testMultiConsumerDefaultAssignment
> -
>
> Key: KAFKA-2933
> URL: https://issues.apache.org/jira/browse/KAFKA-2933
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, unit tests
>Affects Versions: 0.10.0.0, 2.2.0
>Reporter: Guozhang Wang
>Assignee: Jason Gustafson
>Priority: Critical
>  Labels: flaky-test, transient-unit-test-failure
> Fix For: 2.3.0, 2.2.2
>
>
> {code}
> kafka.api.PlaintextConsumerTest > testMultiConsumerDefaultAssignment FAILED
> java.lang.AssertionError: Did not get valid assignment for partitions 
> [topic1-2, topic2-0, topic1-4, topic-1, topic-0, topic2-1, topic1-0, 
> topic1-3, topic1-1, topic2-2] after we changed subscription
> at org.junit.Assert.fail(Assert.java:88)
> at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:747)
> at 
> kafka.api.PlaintextConsumerTest.validateGroupAssignment(PlaintextConsumerTest.scala:644)
> at 
> kafka.api.PlaintextConsumerTest.changeConsumerGroupSubscriptionAndValidateAssignment(PlaintextConsumerTest.scala:663)
> at 
> kafka.api.PlaintextConsumerTest.testMultiConsumerDefaultAssignment(PlaintextConsumerTest.scala:461)
> java.util.ConcurrentModificationException: KafkaConsumer is not safe for 
> multi-threaded access
> {code}
> Example: https://builds.apache.org/job/kafka-trunk-git-pr-jdk7/1582/console



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6824) Flaky Test DynamicBrokerReconfigurationTest#testAddRemoveSslListener

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-6824:
---
Fix Version/s: (was: 2.2.1)
   2.2.2

> Flaky Test DynamicBrokerReconfigurationTest#testAddRemoveSslListener
> 
>
> Key: KAFKA-6824
> URL: https://issues.apache.org/jira/browse/KAFKA-6824
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Anna Povzner
>Assignee: Rajini Sivaram
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> Observed two failures of this test (both in PR builds) :(
>  
> *Failure #1: (JDK 7 and Scala 2.11 )*
> *17:20:49* kafka.server.DynamicBrokerReconfigurationTest > 
> testAddRemoveSslListener FAILED
> *17:20:49*     java.lang.AssertionError: expected:<10> but was:<12>
> *17:20:49*         at org.junit.Assert.fail(Assert.java:88)
> *17:20:49*         at org.junit.Assert.failNotEquals(Assert.java:834)
> *17:20:49*         at org.junit.Assert.assertEquals(Assert.java:645)
> *17:20:49*         at org.junit.Assert.assertEquals(Assert.java:631)
> *17:20:49*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyProduceConsume(DynamicBrokerReconfigurationTest.scala:959)
> *17:20:49*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyRemoveListener(DynamicBrokerReconfigurationTest.scala:784)
> *17:20:49*         at 
> kafka.server.DynamicBrokerReconfigurationTest.testAddRemoveSslListener(DynamicBrokerReconfigurationTest.scala:705)
>  
> *Failure #2: (JDK 8)*
> *18:46:23* kafka.server.DynamicBrokerReconfigurationTest > 
> testAddRemoveSslListener FAILED
> *18:46:23*     java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
> not the leader for that topic-partition.
> *18:46:23*         at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:94)
> *18:46:23*         at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:77)
> *18:46:23*         at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:29)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.$anonfun$verifyProduceConsume$3(DynamicBrokerReconfigurationTest.scala:953)
> *18:46:23*         at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:234)
> *18:46:23*         at scala.collection.Iterator.foreach(Iterator.scala:929)
> *18:46:23*         at scala.collection.Iterator.foreach$(Iterator.scala:929)
> *18:46:23*         at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1417)
> *18:46:23*         at 
> scala.collection.IterableLike.foreach(IterableLike.scala:71)
> *18:46:23*         at 
> scala.collection.IterableLike.foreach$(IterableLike.scala:70)
> *18:46:23*         at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> *18:46:23*         at 
> scala.collection.TraversableLike.map(TraversableLike.scala:234)
> *18:46:23*         at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:227)
> *18:46:23*         at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyProduceConsume(DynamicBrokerReconfigurationTest.scala:953)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyRemoveListener(DynamicBrokerReconfigurationTest.scala:816)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.testAddRemoveSslListener(DynamicBrokerReconfigurationTest.scala:705)
> *18:46:23*
> *18:46:23*         Caused by:
> *18:46:23*         
> org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
> not the leader for that topic-partition.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6824) Flaky Test DynamicBrokerReconfigurationTest#testAddRemoveSslListener

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832203#comment-16832203
 ] 

Vahid Hashemian commented on KAFKA-6824:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test DynamicBrokerReconfigurationTest#testAddRemoveSslListener
> 
>
> Key: KAFKA-6824
> URL: https://issues.apache.org/jira/browse/KAFKA-6824
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Anna Povzner
>Assignee: Rajini Sivaram
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> Observed two failures of this test (both in PR builds) :(
>  
> *Failure #1: (JDK 7 and Scala 2.11 )*
> *17:20:49* kafka.server.DynamicBrokerReconfigurationTest > 
> testAddRemoveSslListener FAILED
> *17:20:49*     java.lang.AssertionError: expected:<10> but was:<12>
> *17:20:49*         at org.junit.Assert.fail(Assert.java:88)
> *17:20:49*         at org.junit.Assert.failNotEquals(Assert.java:834)
> *17:20:49*         at org.junit.Assert.assertEquals(Assert.java:645)
> *17:20:49*         at org.junit.Assert.assertEquals(Assert.java:631)
> *17:20:49*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyProduceConsume(DynamicBrokerReconfigurationTest.scala:959)
> *17:20:49*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyRemoveListener(DynamicBrokerReconfigurationTest.scala:784)
> *17:20:49*         at 
> kafka.server.DynamicBrokerReconfigurationTest.testAddRemoveSslListener(DynamicBrokerReconfigurationTest.scala:705)
>  
> *Failure #2: (JDK 8)*
> *18:46:23* kafka.server.DynamicBrokerReconfigurationTest > 
> testAddRemoveSslListener FAILED
> *18:46:23*     java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
> not the leader for that topic-partition.
> *18:46:23*         at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:94)
> *18:46:23*         at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:77)
> *18:46:23*         at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:29)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.$anonfun$verifyProduceConsume$3(DynamicBrokerReconfigurationTest.scala:953)
> *18:46:23*         at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:234)
> *18:46:23*         at scala.collection.Iterator.foreach(Iterator.scala:929)
> *18:46:23*         at scala.collection.Iterator.foreach$(Iterator.scala:929)
> *18:46:23*         at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1417)
> *18:46:23*         at 
> scala.collection.IterableLike.foreach(IterableLike.scala:71)
> *18:46:23*         at 
> scala.collection.IterableLike.foreach$(IterableLike.scala:70)
> *18:46:23*         at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> *18:46:23*         at 
> scala.collection.TraversableLike.map(TraversableLike.scala:234)
> *18:46:23*         at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:227)
> *18:46:23*         at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyProduceConsume(DynamicBrokerReconfigurationTest.scala:953)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyRemoveListener(DynamicBrokerReconfigurationTest.scala:816)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.testAddRemoveSslListener(DynamicBrokerReconfigurationTest.scala:705)
> *18:46:23*
> *18:46:23*         Caused by:
> *18:46:23*         
> org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
> not the leader for that topic-partition.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7540) Flaky Test ConsumerBounceTest#testClose

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-7540:
---
Fix Version/s: (was: 2.2.1)
   2.2.2

> Flaky Test ConsumerBounceTest#testClose
> ---
>
> Key: KAFKA-7540
> URL: https://issues.apache.org/jira/browse/KAFKA-7540
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, unit tests
>Affects Versions: 2.2.0
>Reporter: John Roesler
>Assignee: Jason Gustafson
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> Observed on Java 8: 
> [https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/17314/testReport/junit/kafka.api/ConsumerBounceTest/testClose/]
>  
> Stacktrace:
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: -1
>   at 
> kafka.integration.KafkaServerTestHarness.killBroker(KafkaServerTestHarness.scala:146)
>   at 
> kafka.api.ConsumerBounceTest.checkCloseWithCoordinatorFailure(ConsumerBounceTest.scala:238)
>   at kafka.api.ConsumerBounceTest.testClose(ConsumerBounceTest.scala:211)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:106)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>   at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:117)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at

[jira] [Updated] (KAFKA-7937) Flaky Test ResetConsumerGroupOffsetTest.testResetOffsetsNotExistingGroup

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-7937:
---
Fix Version/s: (was: 2.2.1)
   2.2.2

> Flaky Test ResetConsumerGroupOffsetTest.testResetOffsetsNotExistingGroup
> 
>
> Key: KAFKA-7937
> URL: https://issues.apache.org/jira/browse/KAFKA-7937
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, clients, unit tests
>Affects Versions: 2.2.0, 2.1.1, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Gwen Shapira
>Priority: Critical
> Fix For: 2.3.0, 2.1.2, 2.2.2
>
>
> To get stable nightly builds for the `2.2` release, I create tickets for all 
> observed test failures.
> https://builds.apache.org/blue/organizations/jenkins/kafka-2.2-jdk8/detail/kafka-2.2-jdk8/19/pipeline
> {quote}kafka.admin.ResetConsumerGroupOffsetTest > 
> testResetOffsetsNotExistingGroup FAILED 
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.CoordinatorNotAvailableException: The 
> coordinator is not available. at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
>  at 
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService.resetOffsets(ConsumerGroupCommand.scala:306)
>  at 
> kafka.admin.ResetConsumerGroupOffsetTest.testResetOffsetsNotExistingGroup(ResetConsumerGroupOffsetTest.scala:89)
>  Caused by: org.apache.kafka.common.errors.CoordinatorNotAvailableException: 
> The coordinator is not available.{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7540) Flaky Test ConsumerBounceTest#testClose

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832204#comment-16832204
 ] 

Vahid Hashemian commented on KAFKA-7540:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test ConsumerBounceTest#testClose
> ---
>
> Key: KAFKA-7540
> URL: https://issues.apache.org/jira/browse/KAFKA-7540
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, unit tests
>Affects Versions: 2.2.0
>Reporter: John Roesler
>Assignee: Jason Gustafson
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> Observed on Java 8: 
> [https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/17314/testReport/junit/kafka.api/ConsumerBounceTest/testClose/]
>  
> Stacktrace:
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: -1
>   at 
> kafka.integration.KafkaServerTestHarness.killBroker(KafkaServerTestHarness.scala:146)
>   at 
> kafka.api.ConsumerBounceTest.checkCloseWithCoordinatorFailure(ConsumerBounceTest.scala:238)
>   at kafka.api.ConsumerBounceTest.testClose(ConsumerBounceTest.scala:211)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:106)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>   at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:117)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch

[jira] [Updated] (KAFKA-7946) Flaky Test DeleteConsumerGroupsTest#testDeleteNonEmptyGroup

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-7946:
---
Fix Version/s: (was: 2.2.1)
   2.2.2

> Flaky Test DeleteConsumerGroupsTest#testDeleteNonEmptyGroup
> ---
>
> Key: KAFKA-7946
> URL: https://issues.apache.org/jira/browse/KAFKA-7946
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Assignee: Gwen Shapira
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> To get stable nightly builds for the `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/17/]
> {quote}java.lang.NullPointerException at 
> kafka.admin.DeleteConsumerGroupsTest.testDeleteNonEmptyGroup(DeleteConsumerGroupsTest.scala:96){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7946) Flaky Test DeleteConsumerGroupsTest#testDeleteNonEmptyGroup

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832206#comment-16832206
 ] 

Vahid Hashemian commented on KAFKA-7946:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test DeleteConsumerGroupsTest#testDeleteNonEmptyGroup
> ---
>
> Key: KAFKA-7946
> URL: https://issues.apache.org/jira/browse/KAFKA-7946
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Assignee: Gwen Shapira
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> To get stable nightly builds for the `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/17/]
> {quote}java.lang.NullPointerException at 
> kafka.admin.DeleteConsumerGroupsTest.testDeleteNonEmptyGroup(DeleteConsumerGroupsTest.scala:96){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7937) Flaky Test ResetConsumerGroupOffsetTest.testResetOffsetsNotExistingGroup

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832205#comment-16832205
 ] 

Vahid Hashemian commented on KAFKA-7937:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test ResetConsumerGroupOffsetTest.testResetOffsetsNotExistingGroup
> 
>
> Key: KAFKA-7937
> URL: https://issues.apache.org/jira/browse/KAFKA-7937
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, clients, unit tests
>Affects Versions: 2.2.0, 2.1.1, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Gwen Shapira
>Priority: Critical
> Fix For: 2.3.0, 2.1.2, 2.2.2
>
>
> To get stable nightly builds for the `2.2` release, I create tickets for all 
> observed test failures.
> https://builds.apache.org/blue/organizations/jenkins/kafka-2.2-jdk8/detail/kafka-2.2-jdk8/19/pipeline
> {quote}kafka.admin.ResetConsumerGroupOffsetTest > 
> testResetOffsetsNotExistingGroup FAILED 
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.CoordinatorNotAvailableException: The 
> coordinator is not available. at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
>  at 
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService.resetOffsets(ConsumerGroupCommand.scala:306)
>  at 
> kafka.admin.ResetConsumerGroupOffsetTest.testResetOffsetsNotExistingGroup(ResetConsumerGroupOffsetTest.scala:89)
>  Caused by: org.apache.kafka.common.errors.CoordinatorNotAvailableException: 
> The coordinator is not available.{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7947) Flaky Test EpochDrivenReplicationProtocolAcceptanceTest#shouldFollowLeaderEpochBasicWorkflow

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-7947:
---
Fix Version/s: (was: 2.2.1)
   2.2.2

> Flaky Test 
> EpochDrivenReplicationProtocolAcceptanceTest#shouldFollowLeaderEpochBasicWorkflow
> 
>
> Key: KAFKA-7947
> URL: https://issues.apache.org/jira/browse/KAFKA-7947
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> To get stable nightly builds for the `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/17/]
> {quote}java.lang.AssertionError: expected: startOffset=0), EpochEntry(epoch=1, startOffset=1))> but 
> was: startOffset=1))> at org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotEquals(Assert.java:834) at 
> org.junit.Assert.assertEquals(Assert.java:118) at 
> org.junit.Assert.assertEquals(Assert.java:144) at 
> kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest.shouldFollowLeaderEpochBasicWorkflow(EpochDrivenReplicationProtocolAcceptanceTest.scala:101){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7947) Flaky Test EpochDrivenReplicationProtocolAcceptanceTest#shouldFollowLeaderEpochBasicWorkflow

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832207#comment-16832207
 ] 

Vahid Hashemian commented on KAFKA-7947:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test 
> EpochDrivenReplicationProtocolAcceptanceTest#shouldFollowLeaderEpochBasicWorkflow
> 
>
> Key: KAFKA-7947
> URL: https://issues.apache.org/jira/browse/KAFKA-7947
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> To get stable nightly builds for the `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/17/]
> {quote}java.lang.AssertionError: expected: startOffset=0), EpochEntry(epoch=1, startOffset=1))> but 
> was: startOffset=1))> at org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotEquals(Assert.java:834) at 
> org.junit.Assert.assertEquals(Assert.java:118) at 
> org.junit.Assert.assertEquals(Assert.java:144) at 
> kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest.shouldFollowLeaderEpochBasicWorkflow(EpochDrivenReplicationProtocolAcceptanceTest.scala:101){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7957) Flaky Test DynamicBrokerReconfigurationTest#testMetricsReporterUpdate

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-7957:
---
Fix Version/s: (was: 2.2.1)
   2.2.2

> Flaky Test DynamicBrokerReconfigurationTest#testMetricsReporterUpdate
> -
>
> Key: KAFKA-7957
> URL: https://issues.apache.org/jira/browse/KAFKA-7957
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> To get stable nightly builds for the `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/18/]
> {quote}java.lang.AssertionError: Messages not sent at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:356) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:766) at 
> kafka.server.DynamicBrokerReconfigurationTest.startProduceConsume(DynamicBrokerReconfigurationTest.scala:1270)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.testMetricsReporterUpdate(DynamicBrokerReconfigurationTest.scala:650){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7964) Flaky Test ConsumerBounceTest#testConsumerReceivesFatalExceptionWhenGroupPassesMaxSize

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-7964:
---
Fix Version/s: (was: 2.2.1)
   2.2.2

> Flaky Test 
> ConsumerBounceTest#testConsumerReceivesFatalExceptionWhenGroupPassesMaxSize
> --
>
> Key: KAFKA-7964
> URL: https://issues.apache.org/jira/browse/KAFKA-7964
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> To get stable nightly builds for the `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/21/]
> {quote}java.lang.AssertionError: expected:<100> but was:<0> at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotEquals(Assert.java:834) at 
> org.junit.Assert.assertEquals(Assert.java:645) at 
> org.junit.Assert.assertEquals(Assert.java:631) at 
> kafka.api.ConsumerBounceTest.receiveExactRecords(ConsumerBounceTest.scala:551)
>  at 
> kafka.api.ConsumerBounceTest.$anonfun$testConsumerReceivesFatalExceptionWhenGroupPassesMaxSize$2(ConsumerBounceTest.scala:409)
>  at 
> kafka.api.ConsumerBounceTest.$anonfun$testConsumerReceivesFatalExceptionWhenGroupPassesMaxSize$2$adapted(ConsumerBounceTest.scala:408)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) 
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
> kafka.api.ConsumerBounceTest.testConsumerReceivesFatalExceptionWhenGroupPassesMaxSize(ConsumerBounceTest.scala:408){quote}
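
The failing check in receiveExactRecords compares the number of records actually 
consumed with the expected count (100), and in the run above nothing was received 
before the assertion fired. Below is a hedged sketch of that 
consume-until-count-or-deadline pattern using the public Java consumer API; the 
broker address, topic, group id, and timeouts are invented for illustration and 
this is not the test's own code:
{code:java}
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class ReceiveExactRecordsSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // hypothetical broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "bounce-test-group");         // hypothetical group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        int expected = 100;                                      // count the assertion expects
        int received = 0;
        long deadline = System.currentTimeMillis() + 30_000;     // arbitrary 30s budget

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test-topic"));        // hypothetical topic
            while (received < expected && System.currentTimeMillis() < deadline) {
                ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofMillis(500));
                received += records.count();                     // accumulate across polls
            }
        }

        if (received != expected) {
            // Mirrors the JUnit failure shape: expected:<100> but was:<received>
            throw new AssertionError("expected:<" + expected + "> but was:<" + received + ">");
        }
    }
}
{code}
If the consumer spends the whole polling window rebalancing, or has been removed 
from the group (the scenario this test exercises, judging by its name), every 
poll() returns nothing and the comparison fails with was:<0>, which would explain 
the intermittent failures seen here.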



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7957) Flaky Test DynamicBrokerReconfigurationTest#testMetricsReporterUpdate

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832208#comment-16832208
 ] 

Vahid Hashemian commented on KAFKA-7957:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test DynamicBrokerReconfigurationTest#testMetricsReporterUpdate
> -
>
> Key: KAFKA-7957
> URL: https://issues.apache.org/jira/browse/KAFKA-7957
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> To get stable nightly builds for the `2.2` release, I am creating tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/18/]
> {quote}java.lang.AssertionError: Messages not sent at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:356) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:766) at 
> kafka.server.DynamicBrokerReconfigurationTest.startProduceConsume(DynamicBrokerReconfigurationTest.scala:1270)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.testMetricsReporterUpdate(DynamicBrokerReconfigurationTest.scala:650){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7969) Flaky Test DescribeConsumerGroupTest#testDescribeOffsetsOfExistingGroupWithNoMembers

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-7969:
---
Fix Version/s: (was: 2.2.1)
   2.2.2

> Flaky Test 
> DescribeConsumerGroupTest#testDescribeOffsetsOfExistingGroupWithNoMembers
> 
>
> Key: KAFKA-7969
> URL: https://issues.apache.org/jira/browse/KAFKA-7969
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> To get stable nightly builds for the `2.2` release, I am creating tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/24/]
> {quote}java.lang.AssertionError: Expected no active member in describe group 
> results, state: Some(Empty), assignments: Some(List()) at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> kafka.admin.DescribeConsumerGroupTest.testDescribeOffsetsOfExistingGroupWithNoMembers(DescribeConsumerGroupTest.scala:278){quote}
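
The test describes an existing consumer group after its members have left and 
asserts on what the describe output reports for active members; the failure above 
shows the group in state Empty with an empty assignment list. A hedged sketch of 
describing a group with the Java AdminClient and waiting until it reports state 
Empty with zero members follows; the group id, bootstrap address, and retry budget 
are illustrative, not the test's own values:
{code:java}
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ConsumerGroupDescription;
import org.apache.kafka.common.ConsumerGroupState;

import java.util.Collections;
import java.util.Properties;

public class DescribeEmptyGroupSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical broker

        String groupId = "test-group";                            // hypothetical group id
        long deadline = System.currentTimeMillis() + 15_000;      // arbitrary retry budget

        try (AdminClient admin = AdminClient.create(props)) {
            while (true) {
                ConsumerGroupDescription description = admin
                        .describeConsumerGroups(Collections.singletonList(groupId))
                        .describedGroups().get(groupId).get();

                boolean emptyWithNoMembers =
                        description.state() == ConsumerGroupState.EMPTY
                                && description.members().isEmpty();

                if (emptyWithNoMembers) {
                    break;                                        // the condition the test waits for
                }
                if (System.currentTimeMillis() > deadline) {
                    throw new AssertionError("Expected no active member in describe group results, "
                            + "state: " + description.state() + ", members: " + description.members());
                }
                Thread.sleep(200);                                // short pause before re-describing
            }
        }
    }
}
{code}
Retrying the describe call until the broker reflects the departed members is one 
common way to make this kind of assertion robust against propagation delay, rather 
than checking the group state only once.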



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (KAFKA-7978) Flaky Test SaslSslAdminClientIntegrationTest#testConsumerGroups

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian closed KAFKA-7978.
--

Closing as duplicate.

> Flaky Test SaslSslAdminClientIntegrationTest#testConsumerGroups
> ---
>
> Key: KAFKA-7978
> URL: https://issues.apache.org/jira/browse/KAFKA-7978
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for the `2.2` release, I am creating tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/25/]
> {quote}java.lang.AssertionError: expected:<2> but was:<0> at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotEquals(Assert.java:834) at 
> org.junit.Assert.assertEquals(Assert.java:645) at 
> org.junit.Assert.assertEquals(Assert.java:631) at 
> kafka.api.AdminClientIntegrationTest.testConsumerGroups(AdminClientIntegrationTest.scala:1157)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

