[jira] [Commented] (KAFKA-9308) Misses SAN after certificate creation

2020-01-28 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17025096#comment-17025096
 ] 

ASF GitHub Bot commented on KAFKA-9308:
---

soenkeliebau commented on pull request #8009: KAFKA-9308: Reworded the ssl part 
of the security documentation 
URL: https://github.com/apache/kafka/pull/8009
 
 
   This is to fix various issues (mainly, as noted by this jira, the problem 
that SAN extension values are not copied to certificates) and to add some 
recommendations.
   
   Built the page and reviewed it; used the IntelliJ HTML syntax checker to 
ensure valid HTML syntax.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Misses SAN after certificate creation
> -
>
> Key: KAFKA-9308
> URL: https://issues.apache.org/jira/browse/KAFKA-9308
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.3.1
>Reporter: Agostino Sarubbo
>Priority: Minor
>
> Hello,
> I followed the documentation to use Kafka with SSL; however, the entire 
> procedure loses the specified SAN at the end.
> To test, run (after the first keytool command and after the last):
>  
> {code:bash}
> keytool -list -v -keystore server.keystore.jks
> {code}
> Reference:
>  [http://kafka.apache.org/documentation.html#security_ssl]
>  
> {code:bash}
> #!/bin/bash
> #Step 1
> keytool -keystore server.keystore.jks -alias localhost -validity 365 -keyalg RSA -genkey -ext SAN=DNS:test.test.com
> #Step 2
> openssl req -new -x509 -keyout ca-key -out ca-cert -days 365
> keytool -keystore server.truststore.jks -alias CARoot -import -file ca-cert
> keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert
> #Step 3
> keytool -keystore server.keystore.jks -alias localhost -certreq -file cert-file
> openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 365 -CAcreateserial -passin pass:test1234
> keytool -keystore server.keystore.jks -alias CARoot -import -file ca-cert
> keytool -keystore server.keystore.jks -alias localhost -import -file cert-signed
> {code}
>  
> In detail, the SAN is lost after:
> {code:bash}
> keytool -keystore server.keystore.jks -alias localhost -import -file cert-signed
> {code}
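
A minimal sketch of the likely fix, assuming the SAN is dropped at the CA-signing step: `openssl x509 -req` does not copy extensions from the CSR by default, so the SAN has to be restated via an extension file when signing (the file name here is illustrative):
{code:bash}
# openssl x509 -req ignores extensions carried in the CSR unless an
# extension file is supplied, so re-declare the SAN at signing time.
cat > san.cnf <<'EOF'
subjectAltName = DNS:test.test.com
EOF

openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed \
  -days 365 -CAcreateserial -passin pass:test1234 -extfile san.cnf

# Verify the signed certificate kept the SAN before importing it:
openssl x509 -in cert-signed -noout -text | grep -A1 'Subject Alternative Name'
{code}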



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9315) The Kafka Metrics class should clear the mbeans map when closing

2020-01-28 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17025170#comment-17025170
 ] 

ASF GitHub Bot commented on KAFKA-9315:
---

cmccabe commented on pull request #7851: KAFKA-9315: The Kafka Metrics class 
should clear the mbeans map when closing
URL: https://github.com/apache/kafka/pull/7851
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> The Kafka Metrics class should clear the mbeans map when closing
> 
>
> Key: KAFKA-9315
> URL: https://issues.apache.org/jira/browse/KAFKA-9315
> Project: Kafka
>  Issue Type: Bug
>  Components: metrics
>Reporter: Colin McCabe
>Assignee: Colin McCabe
>Priority: Major
>
> The JmxReporter should clear the mbeans map when closing.  Otherwise, metrics 
> may be incorrectly re-registered if the JmxReporter class is used after it is 
> closed.
> For example, calling JmxReporter#close followed by JmxReporter#unregister 
> could result in some of the mbeans that were removed in the close operation 
> being re-registered.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9478) Controller may stop reacting to partition reassignment commands in ZooKeeper

2020-01-28 Thread Ivan Yurchenko (Jira)
Ivan Yurchenko created KAFKA-9478:
-

 Summary: Controller may stop reacting to partition reassignment 
commands in ZooKeeper
 Key: KAFKA-9478
 URL: https://issues.apache.org/jira/browse/KAFKA-9478
 Project: Kafka
  Issue Type: Bug
  Components: controller, core
Affects Versions: 2.4.0, 2.4.1
Reporter: Ivan Yurchenko
Assignee: Ivan Yurchenko


Seemingly after 
[bdf2446ccce592f3c000290f11de88520327aa19|https://github.com/apache/kafka/commit/bdf2446ccce592f3c000290f11de88520327aa19],
 the controller may stop watching the {{/admin/reassign_partitions}} node in 
ZooKeeper and consequently stop accepting partition reassignment commands via 
ZooKeeper.

I'm not 100% sure that bdf2446ccce592f3c000290f11de88520327aa19 causes this, 
but it doesn't reproduce on 
[3fe6b5e951db8f7184a4098f8ad8a1afb2b2c1a0|https://github.com/apache/kafka/commit/3fe6b5e951db8f7184a4098f8ad8a1afb2b2c1a0]
 - the one right before it.

It also reproduces on the trunk HEAD 
[a87decb9e4df5bfa092c26ae4346f65c426f1321|https://github.com/apache/kafka/commit/a87decb9e4df5bfa092c26ae4346f65c426f1321].
h1. How to reproduce

1. Run ZooKeeper and two Kafka brokers.

2. Create a topic with 100 partitions and place them on Broker 0:
{code:bash}
distro/bin/kafka-topics.sh --bootstrap-server localhost:9092,localhost:9093 --create \
    --topic foo \
    --replica-assignment $(for i in {0..99}; do echo -n "0,"; done | sed 's/.$//')
{code}
3. Add some data:
{code:bash}
seq 1 100 | bin/kafka-console-producer.sh --broker-list localhost:9092,localhost:9093 --topic foo
{code}
4. Create the partition reassignment node {{/admin/reassign_partitions}} in 
ZooKeeper and shortly after that update the data in the node (even the same 
value will do). I made a simple Python script for this:
{code:python}
import time
import json
from kazoo.client import KazooClient

zk = KazooClient(hosts='127.0.0.1:2181')
zk.start()

reassign = {
    "version": 1,
    "partitions": []
}
for p in range(100):
    reassign["partitions"].append({"topic": "foo", "partition": p, "replicas": [1]})

zk.create("/admin/reassign_partitions", json.dumps(reassign).encode())

time.sleep(0.05)

zk.set("/admin/reassign_partitions", json.dumps(reassign).encode())
{code}
5. Observe that the controller doesn't react to further updates to 
{{/admin/reassign_partitions}} and doesn't delete the node.

Also, it can be confirmed with
{code:bash}
echo wchc | nc 127.0.0.1 2181
{code}
that there is no watch on the node in ZooKeeper (for this, you should run 
ZooKeeper with {{4lw.commands.whitelist=*}}).
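
As an additional (assumed) check, the stuck znode can be inspected with the shell that ships with Kafka; it keeps existing because the controller never deletes it:
{code:bash}
# The node lingers: 'get' succeeds, but the controller no longer reacts to it.
bin/zookeeper-shell.sh 127.0.0.1:2181 get /admin/reassign_partitions
{code}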

Since it's about timing, it might not work on the first attempt, so you might 
need to repeat step 4 a couple of times. However, the reproducibility rate is 
pretty high.

The data in the topic and the large number of partitions are not needed per se; 
they only make the timing more favourable.

Controller re-election will solve the issue, but a new controller can be put in 
this state the same way.
h1. Proposed solution

TBD, suggestions are welcome.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9478) Controller may stop reacting to partition reassignment commands in ZooKeeper

2020-01-28 Thread Ismael Juma (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17025265#comment-17025265
 ] 

Ismael Juma commented on KAFKA-9478:


Is this different than KAFKA-7854?

> Controller may stop reacting to partition reassignment commands in ZooKeeper
> 
>
> Key: KAFKA-9478
> URL: https://issues.apache.org/jira/browse/KAFKA-9478
> Project: Kafka
>  Issue Type: Bug
>  Components: controller, core
>Affects Versions: 2.4.0, 2.4.1
>Reporter: Ivan Yurchenko
>Assignee: Ivan Yurchenko
>Priority: Major
>
> Seemingly after 
> [bdf2446ccce592f3c000290f11de88520327aa19|https://github.com/apache/kafka/commit/bdf2446ccce592f3c000290f11de88520327aa19],
>  the controller may stop watching the {{/admin/reassign_partitions}} node in 
> ZooKeeper and consequently stop accepting partition reassignment commands via 
> ZooKeeper.
> I'm not 100% sure that bdf2446ccce592f3c000290f11de88520327aa19 causes this, 
> but it doesn't reproduce on 
> [3fe6b5e951db8f7184a4098f8ad8a1afb2b2c1a0|https://github.com/apache/kafka/commit/3fe6b5e951db8f7184a4098f8ad8a1afb2b2c1a0]
>  - the one right before it.
> It also reproduces on the trunk HEAD 
> [a87decb9e4df5bfa092c26ae4346f65c426f1321|https://github.com/apache/kafka/commit/a87decb9e4df5bfa092c26ae4346f65c426f1321].
> h1. How to reproduce
> 1. Run ZooKeeper and two Kafka brokers.
> 2. Create a topic with 100 partitions and place them on Broker 0:
> {code:bash}
> distro/bin/kafka-topics.sh --bootstrap-server localhost:9092,localhost:9093 --create \
>     --topic foo \
>     --replica-assignment $(for i in {0..99}; do echo -n "0,"; done | sed 's/.$//')
> {code}
> 3. Add some data:
> {code:bash}
> seq 1 100 | bin/kafka-console-producer.sh --broker-list localhost:9092,localhost:9093 --topic foo
> {code}
> 4. Create the partition reassignment node {{/admin/reassign_partitions}} in 
> ZooKeeper and shortly after that update the data in the node (even the same 
> value will do). I made a simple Python script for this:
> {code:python}
> import time
> import json
> from kazoo.client import KazooClient
> zk = KazooClient(hosts='127.0.0.1:2181')
> zk.start()
> reassign = {
>     "version": 1,
>     "partitions": []
> }
> for p in range(100):
>     reassign["partitions"].append({"topic": "foo", "partition": p, "replicas": [1]})
> zk.create("/admin/reassign_partitions", json.dumps(reassign).encode())
> time.sleep(0.05)
> zk.set("/admin/reassign_partitions", json.dumps(reassign).encode())
> {code}
> 5. Observe that the controller doesn't react to further updates to 
> {{/admin/reassign_partitions}} and doesn't delete the node.
> Also, it can be confirmed with
> {code:bash}
> echo wchc | nc 127.0.0.1 2181
> {code}
> that there is no watch on the node in ZooKeeper (for this, you should run 
> ZooKeeper with {{4lw.commands.whitelist=*}}).
> Since it's about timing, it might not work on the first attempt, so you might 
> need to repeat step 4 a couple of times. However, the reproducibility rate is 
> pretty high.
> The data in the topic and the large number of partitions are not needed per 
> se; they only make the timing more favourable.
> Controller re-election will solve the issue, but a new controller can be put 
> in this state the same way.
> h1. Proposed solution
> TBD, suggestions are welcome.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-7854) Behavior change in controller picking up partition reassignment tasks since 1.1.0

2020-01-28 Thread Ismael Juma (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-7854.

Resolution: Won't Fix

I'm marking this as "Won't fix" since KIP-455 introduced a Kafka protocol API 
that provides the desired functionality. Setting reassignment via the znode is 
deprecated and will be removed in a future release.
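
For reference, a sketch of the KIP-455-style flow that replaces direct znode writes, assuming the --bootstrap-server mode of the reassignment tool available since 2.4:
{code:bash}
# Submit a reassignment through the AlterPartitionReassignments API
# (KIP-455) instead of writing /admin/reassign_partitions directly.
cat > reassign.json <<'EOF'
{"version": 1,
 "partitions": [
   {"topic": "foo", "partition": 0, "replicas": [1]}
 ]}
EOF

bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \
  --reassignment-json-file reassign.json --execute

# Further reassignments can be submitted incrementally and progress checked
# without touching ZooKeeper:
bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \
  --reassignment-json-file reassign.json --verify
{code}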

> Behavior change in controller picking up partition reassignment tasks since 
> 1.1.0
> -
>
> Key: KAFKA-7854
> URL: https://issues.apache.org/jira/browse/KAFKA-7854
> Project: Kafka
>  Issue Type: Improvement
>  Components: controller
>Reporter: Zhanxiang (Patrick) Huang
>Priority: Major
>
> After [https://github.com/apache/kafka/pull/4143], the controller no longer 
> subscribes to data changes on /admin/reassign_partitions (in order to avoid 
> unnecessarily reloading the reassignment data after the controller updates 
> the znode), as opposed to previous Kafka versions. However, there are systems 
> built around Kafka that rely on the previous behavior to incrementally update 
> the list of partition reassignments, since Kafka does not natively support 
> that.
>  
> For example, [cruise control|https://github.com/linkedin/cruise-control] can 
> rely on the previous behavior (the controller listening to data changes) to 
> maintain the reassignment concurrency by dynamically updating the data in the 
> reassignment znode instead of waiting for the current batch to finish and 
> doing reassignment batch by batch, which can significantly reduce the 
> rebalance time in production clusters. Although directly updating the znode 
> can be viewed as an anti-pattern in the long term, it is necessary since 
> Kafka does not natively support incrementally submitting more reassignment 
> tasks. However, after our Kafka clusters migrated from 0.11 to 2.0, Cruise 
> Control no longer works because the controller behavior has changed. This 
> reveals the following problems:
>  * These behavior changes may be viewed as internal changes, so compatibility 
> is not guaranteed, but I think by convention people do view these as public 
> interfaces and rely on the compatibility. In this case, I think we should 
> clearly document the data contract for the partition reassignment task to 
> avoid misuse and to avoid controller changes that break the defined data 
> contract. There may be other cases (e.g. topic deletion) whose data contracts 
> need to be clearly defined, and we should keep this in mind when making 
> controller changes.
>  * Kafka does not natively support incrementally submitting more reassignment 
> tasks. If we do want to support that nicely, we should consider changing how 
> we store the reassignment data: store it in child nodes and let the 
> controller listen for child-node changes, similar to what we do for 
> /admin/delete_topics.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8503) AdminClient should ignore retries config if a custom timeout is provided

2020-01-28 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17025304#comment-17025304
 ] 

ASF GitHub Bot commented on KAFKA-8503:
---

hachikuji commented on pull request #8011: KAFKA-8503; Add default api timeout 
to AdminClient (KIP-533)
URL: https://github.com/apache/kafka/pull/8011
 
 
   This PR implements `default.api.timeout.ms` as documented by KIP-533. This 
is a rebased version of #6913 with some additional test cases and small 
cleanups.
   
   Co-authored-by: huxi 
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> AdminClient should ignore retries config if a custom timeout is provided
> 
>
> Key: KAFKA-8503
> URL: https://issues.apache.org/jira/browse/KAFKA-8503
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: huxihx
>Priority: Major
>
> The admin client takes a `retries` config similar to the producer. The 
> default value is 5. Individual APIs also accept an optional timeout, which 
> defaults to `request.timeout.ms`. The call will fail if either `retries` or 
> the API timeout is exceeded. This is not very intuitive. I think a user would 
> expect to wait if they provided a timeout and the operation cannot be 
> completed. In general, timeouts are much easier for users to work with and 
> reason about.
> A couple of options are either to ignore `retries` in this case or to increase 
> the default value of `retries` to something large and not likely to be 
> exceeded. I propose to do the first. Longer term, we could consider 
> deprecating `retries` and avoiding the overloading of `request.timeout.ms` by 
> providing a `default.api.timeout.ms` similar to the consumer.
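
A sketch of the longer-term shape this proposes, assuming the property name and 60-second default from KIP-533:
{code:bash}
# Assumed admin client configuration once default.api.timeout.ms exists:
# it bounds the total time of an API call (spanning retries), while
# request.timeout.ms continues to bound each individual request.
cat >> admin-client.properties <<'EOF'
default.api.timeout.ms=60000
request.timeout.ms=30000
EOF
{code}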



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9389) Document how to use kafka-reassign-partitions.sh to change log dirs for a partition

2020-01-28 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17025366#comment-17025366
 ] 

ASF GitHub Bot commented on KAFKA-9389:
---

mitchell-h commented on pull request #7916: KAFKA-9389 - Document how to use 
kafka-reassign-partitions.sh to change log dirs for a partition
URL: https://github.com/apache/kafka/pull/7916
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Document how to use kafka-reassign-partitions.sh to change log dirs for a 
> partition
> ---
>
> Key: KAFKA-9389
> URL: https://issues.apache.org/jira/browse/KAFKA-9389
> Project: Kafka
>  Issue Type: Improvement
>Reporter: James Cheng
>Assignee: Mitchell
>Priority: Major
>  Labels: newbie
>
> KIP-113 introduced support for moving replicas between log directories. As 
> part of it, support was added to kafka-reassign-partitions.sh so that users 
> can move replicas between log directories. Specifically, when you call 
> "kafka-reassign-partitions.sh --topics-to-move-json-file 
> topics-to-move.json", you can specify a "log_dirs" key in the 
> topics-to-move.json file, and kafka-reassign-partitions.sh will then move 
> those replicas to those directories.
>  
> However, when working on that KIP, we didn't update the docs on 
> kafka.apache.org to describe how to use the new functionality. We should add 
> documentation on that.
>  
> I haven't used it before, but whoever works on this Jira can probably figure 
> it out by experimenting with kafka-reassign-partitions.sh, or by reading the 
> KIP-113 page or the associated JIRAs.
>  * 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-113%3A+Support+replicas+movement+between+log+directories]
>  * KAFKA-5163
>  * KAFKA-5694
>  
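
As a starting point for the documentation, a hedged sketch of the JSON based on KIP-113 (values are illustrative; per the KIP, "log_dirs" must have one entry per replica, and "any" lets the broker choose the directory):
{code:bash}
# Illustrative reassignment pinning partition 0's replica on broker 1 to a
# specific log dir while leaving broker 2's placement unconstrained.
cat > reassign.json <<'EOF'
{"version": 1,
 "partitions": [
   {"topic": "foo", "partition": 0,
    "replicas": [1, 2],
    "log_dirs": ["/data/kafka-logs-1", "any"]}
 ]}
EOF

bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \
  --reassignment-json-file reassign.json --execute
{code}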



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9460) Enable TLSv1.2 by default and disable all other protocol versions

2020-01-28 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17025372#comment-17025372
 ] 

ASF GitHub Bot commented on KAFKA-9460:
---

rajinisivaram commented on pull request #7998: KAFKA-9460: Enable TLSv1.2 by 
default and disable all other protocol versions
URL: https://github.com/apache/kafka/pull/7998
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Enable TLSv1.2 by default and disable all other protocol versions
> --
>
> Key: KAFKA-9460
> URL: https://issues.apache.org/jira/browse/KAFKA-9460
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Nikolay Izhikov
>Assignee: Nikolay Izhikov
>Priority: Major
>  Labels: needs-kip
>
> In KAFKA-7251, support for TLSv1.3 was introduced.
> For now, only TLSv1.2 and TLSv1.3 are recommended for use; other TLS versions 
> are considered obsolete:
> https://www.rfc-editor.org/info/rfc8446
> https://en.wikipedia.org/wiki/Transport_Layer_Security#History_and_development
> But testing of TLSv1.3 is incomplete for now.
> We should enable only current versions of the TLS protocol by default, so 
> that users get only secure implementations.
> Users can still enable obsolete TLS versions via configuration if they want 
> to.
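
A sketch of what the resulting defaults might look like, expressed with the existing SSL config names (the exact protocol list is for the KIP to decide):
{code:bash}
# Assumed settings once only current TLS versions are enabled by default;
# deployments needing an obsolete version could still opt back in
# explicitly via ssl.enabled.protocols.
cat >> server.properties <<'EOF'
ssl.enabled.protocols=TLSv1.2,TLSv1.3
ssl.protocol=TLSv1.2
EOF
{code}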



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9320) Enable TLSv1.3 by default and disable some of the older protocols

2020-01-28 Thread Rajini Sivaram (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram updated KAFKA-9320:
--
Fix Version/s: (was: 2.5.0)

> Enable TLSv1.3 by default and disable some of the older protocols
> -
>
> Key: KAFKA-9320
> URL: https://issues.apache.org/jira/browse/KAFKA-9320
> Project: Kafka
>  Issue Type: New Feature
>  Components: security
>Reporter: Rajini Sivaram
>Assignee: Nikolay Izhikov
>Priority: Major
>  Labels: needs-kip
>
> KAFKA-7251 added support for TLSv1.3. We should include this in the list of 
> protocols that are enabled by default. We should also disable some of the 
> older protocols that are not secure. This change requires a KIP.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9474) Kafka RPC protocol should support type 'double'

2020-01-28 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17025388#comment-17025388
 ] 

ASF GitHub Bot commented on KAFKA-9474:
---

bdbyrne commented on pull request #8012: KAFKA-9474: Adds 'double' to the RPC 
protocol types.
URL: https://github.com/apache/kafka/pull/8012
 
 
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Kafka RPC protocol should support type 'double'
> ---
>
> Key: KAFKA-9474
> URL: https://issues.apache.org/jira/browse/KAFKA-9474
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Brian Byrne
>Assignee: Brian Byrne
>Priority: Minor
>
> Should be fairly straightforward. Useful for KIP-546.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9389) Document how to use kafka-reassign-partitions.sh to change log dirs for a partition

2020-01-28 Thread James Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17025451#comment-17025451
 ] 

James Cheng commented on KAFKA-9389:


[~mitchellh], I noticed you closed this pull request. What's your status? Are 
you waiting for a code review approval at this point?

> Document how to use kafka-reassign-partitions.sh to change log dirs for a 
> partition
> ---
>
> Key: KAFKA-9389
> URL: https://issues.apache.org/jira/browse/KAFKA-9389
> Project: Kafka
>  Issue Type: Improvement
>Reporter: James Cheng
>Assignee: Mitchell
>Priority: Major
>  Labels: newbie
>
> KIP-113 introduced support for moving replicas between log directories. As 
> part of it, support was added to kafka-reassign-partitions.sh so that users 
> can move replicas between log directories. Specifically, when you call 
> "kafka-reassign-partitions.sh --topics-to-move-json-file 
> topics-to-move.json", you can specify a "log_dirs" key in the 
> topics-to-move.json file, and kafka-reassign-partitions.sh will then move 
> those replicas to those directories.
>  
> However, when working on that KIP, we didn't update the docs on 
> kafka.apache.org to describe how to use the new functionality. We should add 
> documentation on that.
>  
> I haven't used it before, but whoever works on this Jira can probably figure 
> it out by experimenting with kafka-reassign-partitions.sh, or by reading the 
> KIP-113 page or the associated JIRAs.
>  * 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-113%3A+Support+replicas+movement+between+log+directories]
>  * KAFKA-5163
>  * KAFKA-5694
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9389) Document how to use kafka-reassign-partitions.sh to change log dirs for a partition

2020-01-28 Thread Mitchell (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17025455#comment-17025455
 ] 

Mitchell commented on KAFKA-9389:
-

Hi James,

The PR I submitted had enough issues that I decided to pull it back and 
resubmit. Expect a new PR tomorrow or Thursday.

> Document how to use kafka-reassign-partitions.sh to change log dirs for a 
> partition
> ---
>
> Key: KAFKA-9389
> URL: https://issues.apache.org/jira/browse/KAFKA-9389
> Project: Kafka
>  Issue Type: Improvement
>Reporter: James Cheng
>Assignee: Mitchell
>Priority: Major
>  Labels: newbie
>
> KIP-113 introduced support for moving replicas between log directories. As 
> part of it, support was added to kafka-reassign-partitions.sh so that users 
> can move replicas between log directories. Specifically, when you call 
> "kafka-reassign-partitions.sh --topics-to-move-json-file 
> topics-to-move.json", you can specify a "log_dirs" key in the 
> topics-to-move.json file, and kafka-reassign-partitions.sh will then move 
> those replicas to those directories.
>  
> However, when working on that KIP, we didn't update the docs on 
> kafka.apache.org to describe how to use the new functionality. We should add 
> documentation on that.
>  
> I haven't used it before, but whoever works on this Jira can probably figure 
> it out by experimenting with kafka-reassign-partitions.sh, or by reading the 
> KIP-113 page or the associated JIRAs.
>  * 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-113%3A+Support+replicas+movement+between+log+directories]
>  * KAFKA-5163
>  * KAFKA-5694
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9451) Pass consumer group metadata to producer on commit

2020-01-28 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17025464#comment-17025464
 ] 

ASF GitHub Bot commented on KAFKA-9451:
---

mjsax commented on pull request #8014: KAFKA-9451: Pass group metadata into 
producer in Kafka Streams
URL: https://github.com/apache/kafka/pull/8014
 
 
   **DO NOT MERGE**
   
   Follow-up to #7977 (note that the first commit is the squashed version of 
7977; these commits need to be rebased after 7977 is merged).
   
   Second commit (still need to add tests):
   
   Part of KIP-447: when EOS is enabled in Kafka Streams, we need to pass the 
consumer's group metadata into the producer to use the new GroupCoordinator 
fencing mechanism.
   We also need to first commit all tasks individually before we commit the 
transaction or write the local checkpoint file.
   
   During an upgrade, we will still use a producer per task (and rely on 
transactional fencing); however, we will already pass the metadata to the 
producer to enable GroupCoordinator fencing (in parallel to transactional 
fencing), to prepare for a second round of rebalancing that switches to the 
thread producer model and thus disables transactional fencing.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Pass consumer group metadata to producer on commit
> --
>
> Key: KAFKA-9451
> URL: https://issues.apache.org/jira/browse/KAFKA-9451
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Major
>
> Using the producer-per-thread EOS design, we need to pass the consumer group 
> metadata into `producer.sendOffsetsToTransaction()` to use the new consumer 
> group coordinator fencing mechanism. We should also reduce the default 
> transaction timeout to 10 seconds (see the KIP for details).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9479) Describe consumer group --all-groups shows header for each entry

2020-01-28 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-9479:
--

 Summary: Describe consumer group --all-groups shows header for 
each entry
 Key: KAFKA-9479
 URL: https://issues.apache.org/jira/browse/KAFKA-9479
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson


When using `bin/kafka-consumer-groups.sh --describe --state --all-groups`, we 
print output like the following:

{code}
GROUP   COORDINATOR (ID)    ASSIGNMENT-STRATEGY  STATE   #MEMBERS
group1  localhost:9092 (3)  range                Stable  1

GROUP   COORDINATOR (ID)    ASSIGNMENT-STRATEGY  STATE   #MEMBERS
group2  localhost:9092 (3)  range                Stable  1
{code}

It would be nice if we did not show the header for every entry.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9480) Value for Task-level Metric process-rate is Constant Zero

2020-01-28 Thread Bruno Cadonna (Jira)
Bruno Cadonna created KAFKA-9480:


 Summary: Value for Task-level Metric process-rate is Constant Zero 
 Key: KAFKA-9480
 URL: https://issues.apache.org/jira/browse/KAFKA-9480
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.4.0
Reporter: Bruno Cadonna


The value of the task-level metric process-rate is constant zero. It should 
reflect the number of calls to {{process()}} on source processors, which 
clearly cannot be constant zero.
This behavior applies to built-in metrics version {{latest}}.
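
One way to observe the bug from the outside, sketched with JmxTool (the MBean pattern is an assumption based on the task-level metric naming; the JMX port is illustrative):
{code:bash}
# Poll the task-level process-rate over JMX; with the bug it stays at 0.0.
bin/kafka-run-class.sh kafka.tools.JmxTool \
  --jmx-url service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi \
  --object-name 'kafka.streams:type=stream-task-metrics,*' \
  --attributes process-rate \
  --reporting-interval 1000
{code}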



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-9480) Value for Task-level Metric process-rate is Constant Zero

2020-01-28 Thread Bruno Cadonna (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Cadonna reassigned KAFKA-9480:


Assignee: Bruno Cadonna

> Value for Task-level Metric process-rate is Constant Zero 
> --
>
> Key: KAFKA-9480
> URL: https://issues.apache.org/jira/browse/KAFKA-9480
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.4.0
>Reporter: Bruno Cadonna
>Assignee: Bruno Cadonna
>Priority: Major
>
> The value of the task-level metric process-rate is constant zero. It should 
> reflect the number of calls to {{process()}} on source processors, which 
> clearly cannot be constant zero.
> This behavior applies to built-in metrics version {{latest}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9389) Document how to use kafka-reassign-partitions.sh to change log dirs for a partition

2020-01-28 Thread Mitchell (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17025501#comment-17025501
 ] 

Mitchell commented on KAFKA-9389:
-

New, correct PR located at: https://github.com/apache/kafka/pull/8016

> Document how to use kafka-reassign-partitions.sh to change log dirs for a 
> partition
> ---
>
> Key: KAFKA-9389
> URL: https://issues.apache.org/jira/browse/KAFKA-9389
> Project: Kafka
>  Issue Type: Improvement
>Reporter: James Cheng
>Assignee: Mitchell
>Priority: Major
>  Labels: newbie
>
> KIP-113 introduced support for moving replicas between log directories. As 
> part of it, support was added to kafka-reassign-partitions.sh so that users 
> can move replicas between log directories. Specifically, when you call 
> "kafka-reassign-partitions.sh --topics-to-move-json-file 
> topics-to-move.json", you can specify a "log_dirs" key in the 
> topics-to-move.json file, and kafka-reassign-partitions.sh will then move 
> those replicas to those directories.
>  
> However, when working on that KIP, we didn't update the docs on 
> kafka.apache.org to describe how to use the new functionality. We should add 
> documentation on that.
>  
> I haven't used it before, but whoever works on this Jira can probably figure 
> it out by experimenting with kafka-reassign-partitions.sh, or by reading the 
> KIP-113 page or the associated JIRAs.
>  * 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-113%3A+Support+replicas+movement+between+log+directories]
>  * KAFKA-5163
>  * KAFKA-5694
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9479) Describe consumer group --all-groups shows header for each entry

2020-01-28 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-9479:
---
Labels: newbie  (was: )

> Describe consumer group --all-groups shows header for each entry
> 
>
> Key: KAFKA-9479
> URL: https://issues.apache.org/jira/browse/KAFKA-9479
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Priority: Major
>  Labels: newbie
>
> When using `bin/kafka-consumer-groups.sh --describe --state --all-groups`, we 
> print output like the following:
> {code}
> GROUP   COORDINATOR (ID)    ASSIGNMENT-STRATEGY  STATE   #MEMBERS
> group1  localhost:9092 (3)  range                Stable  1
>
> GROUP   COORDINATOR (ID)    ASSIGNMENT-STRATEGY  STATE   #MEMBERS
> group2  localhost:9092 (3)  range                Stable  1
> {code}
> It would be nice if we did not show the header for every entry.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9450) Decouple inner state flushing from committing with EOS

2020-01-28 Thread Navinder Brar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17025550#comment-17025550
 ] 

Navinder Brar commented on KAFKA-9450:
--

Checked the RocksDB code; event listeners are not available in the JNI. They 
are probably planned, but not available in any released version yet.

> Decouple inner state flushing from committing with EOS
> --
>
> Key: KAFKA-9450
> URL: https://issues.apache.org/jira/browse/KAFKA-9450
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Sophie Blee-Goldman
>Priority: Major
>
> When EOS is turned on, the commit interval is set quite low (100ms) and all 
> the store layers are flushed during a commit. This is necessary for 
> forwarding records in the cache to the changelog, but unfortunately also 
> forces rocksdb to flush the current memtable before it's full. The result is 
> a large number of small writes to disk, losing the benefits of batching, and 
> a large number of very small L0 files that are likely to slow compaction.
> Since we have to delete the stores to recreate from scratch anyways during an 
> unclean shutdown with EOS, we may as well skip flushing the innermost 
> StateStore during a commit and only do so during a graceful shutdown, before 
> a rebalance, etc. This is currently blocked on a refactoring of the state 
> store layers to allow decoupling the flush of the caching layer from the 
> actual state store.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9422) Track the set of topics a connector is using

2020-01-28 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17025551#comment-17025551
 ] 

ASF GitHub Bot commented on KAFKA-9422:
---

kkonstantine commented on pull request #8017: KAFKA-9422: Track the set of 
topics a connector is using (KIP-558)
URL: https://github.com/apache/kafka/pull/8017
 
 
   Implementation is complete and current tests have been extended to account 
for the new functionality. 
   
   Will add some more tests (specific to the new feature) shortly and will 
update the description of the commit message here. 
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Track the set of topics a connector is using
> 
>
> Key: KAFKA-9422
> URL: https://issues.apache.org/jira/browse/KAFKA-9422
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 2.5.0
>Reporter: Konstantine Karantasis
>Assignee: Konstantine Karantasis
>Priority: Major
> Fix For: 2.5.0
>
>
> Soon (after 
> [KIP-158|https://cwiki.apache.org/confluence/display/KAFKA/KIP-158%3A+Kafka+Connect+should+allow+source+connectors+to+set+topic-specific+settings+for+new+topics]
>  is implemented), source connectors will be able to create topics at runtime 
> with custom topic-specific properties, in ways beyond what automatic topic 
> creation could allow. A nice new feature would be to also keep track of which 
> topics are actually used per connector, after such a connector is created. 
> This information could be exposed by extending the Connect REST API to add a 
> topics endpoint under the connector endpoint (similar to the status or config 
> endpoints).
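
For illustration, the kind of REST calls the KIP describes (paths per KIP-558; host, port, and connector name are placeholders):
{code:bash}
# Query the set of topics a connector has actually used (KIP-558):
curl -s http://localhost:8083/connectors/my-connector/topics

# KIP-558 also describes resetting the recorded set:
curl -s -X PUT http://localhost:8083/connectors/my-connector/topics/reset
{code}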



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9480) Value for Task-level Metric process-rate is Constant Zero

2020-01-28 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17025570#comment-17025570
 ] 

ASF GitHub Bot commented on KAFKA-9480:
---

cadonna commented on pull request #8018: KAFKA-9480: Fix bug that prevented to 
measure task-level process-rate
URL: https://github.com/apache/kafka/pull/8018
 
 
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Value for Task-level Metric process-rate is Constant Zero 
> --
>
> Key: KAFKA-9480
> URL: https://issues.apache.org/jira/browse/KAFKA-9480
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.4.0
>Reporter: Bruno Cadonna
>Assignee: Bruno Cadonna
>Priority: Major
>
> The value of the task-level metric process-rate is constant zero. It should 
> reflect the number of calls to {{process()}} on source processors, which 
> clearly cannot be constant zero.
> This behavior applies to built-in metrics version {{latest}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8085) Flaky Test ResetConsumerGroupOffsetTest#testResetOffsetsByDuration

2020-01-28 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17025632#comment-17025632
 ] 

Matthias J. Sax commented on KAFKA-8085:


A different test within the same class: 
[https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/4393/testReport/junit/kafka.admin/ResetConsumerGroupOffsetTest/testResetOffsetsExportImportPlanSingleGroupArg/]
{quote}java.lang.AssertionError: expected: 2, bar-1 -> 2)> but 
was: at org.junit.Assert.fail(Assert.java:89) at 
org.junit.Assert.failNotEquals(Assert.java:835) at 
org.junit.Assert.assertEquals(Assert.java:120) at 
org.junit.Assert.assertEquals(Assert.java:146) at 
kafka.admin.ResetConsumerGroupOffsetTest.testResetOffsetsExportImportPlanSingleGroupArg(ResetConsumerGroupOffsetTest.scala:388){quote}
And one more: 
[https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/4393/testReport/junit/kafka.admin/ResetConsumerGroupOffsetTest/testResetOffsetsExportImportPlan/]
{quote}java.lang.AssertionError: expected: 2, bar1-1 -> 2)> but 
was: at org.junit.Assert.fail(Assert.java:89) at 
org.junit.Assert.failNotEquals(Assert.java:835) at 
org.junit.Assert.assertEquals(Assert.java:120) at 
org.junit.Assert.assertEquals(Assert.java:146) at 
kafka.admin.ResetConsumerGroupOffsetTest.testResetOffsetsExportImportPlan(ResetConsumerGroupOffsetTest.scala:426){quote}
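
For anyone trying to reproduce these locally, a sketch using Gradle's test filter (module path, cleanTest to defeat result caching, and loop count are assumptions):
{code:bash}
# Re-run the suspect test class until it fails; cleanTest forces Gradle to
# actually re-execute the tests instead of reusing cached results.
for i in {1..20}; do
  ./gradlew :core:cleanTest :core:test \
    --tests 'kafka.admin.ResetConsumerGroupOffsetTest' || break
done
{code}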

> Flaky Test ResetConsumerGroupOffsetTest#testResetOffsetsByDuration
> --
>
> Key: KAFKA-8085
> URL: https://issues.apache.org/jira/browse/KAFKA-8085
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.5.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/62/testReport/junit/kafka.admin/ResetConsumerGroupOffsetTest/testResetOffsetsByDuration/]
> {quote}java.lang.AssertionError: Expected that consumer group has consumed 
> all messages from topic/partition. at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.admin.ResetConsumerGroupOffsetTest.awaitConsumerProgress(ResetConsumerGroupOffsetTest.scala:364)
>  at 
> kafka.admin.ResetConsumerGroupOffsetTest.produceConsumeAndShutdown(ResetConsumerGroupOffsetTest.scala:359)
>  at 
> kafka.admin.ResetConsumerGroupOffsetTest.testResetOffsetsByDuration(ResetConsumerGroupOffsetTest.scala:146){quote}
> STDOUT
> {quote}[2019-03-09 08:39:29,856] WARN Unable to read additional data from 
> client sessionid 0x105f6adb208, likely client has closed socket 
> (org.apache.zookeeper.server.NIOServerCnxn:376) [2019-03-09 08:39:46,373] 
> WARN Unable to read additional data from client sessionid 0x105f6adf4c50001, 
> likely client has closed socket 
> (org.apache.zookeeper.server.NIOServerCnxn:376){quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8967) Flaky test kafka.api.SaslSslAdminClientIntegrationTest.testCreateTopicsResponseMetadataAndConfig

2020-01-28 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17025633#comment-17025633
 ] 

Matthias J. Sax commented on KAFKA-8967:


[https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/4393/testReport/junit/kafka.api/SaslSslAdminIntegrationTest/testCreateTopicsResponseMetadataAndConfig/]
{quote}org.scalatest.exceptions.TestFailedException: Expected 
CompletableFuture.get to return an exception at 
org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:530) at 
org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:529) at 
org.scalatest.Assertions$.newAssertionFailedException(Assertions.scala:1389) at 
org.scalatest.Assertions.fail(Assertions.scala:1091) at 
org.scalatest.Assertions.fail$(Assertions.scala:1087) at 
org.scalatest.Assertions$.fail(Assertions.scala:1389) at 
kafka.utils.TestUtils$.assertFutureExceptionTypeEquals(TestUtils.scala:1610) at 
kafka.api.SaslSslAdminIntegrationTest.validateMetadataAndConfigs$1(SaslSslAdminIntegrationTest.scala:418)
 at 
kafka.api.SaslSslAdminIntegrationTest.testCreateTopicsResponseMetadataAndConfig(SaslSslAdminIntegrationTest.scala:422){quote}

> Flaky test 
> kafka.api.SaslSslAdminClientIntegrationTest.testCreateTopicsResponseMetadataAndConfig
> 
>
> Key: KAFKA-8967
> URL: https://issues.apache.org/jira/browse/KAFKA-8967
> Project: Kafka
>  Issue Type: Test
>  Components: core, security, unit tests
>Reporter: Stanislav Kozlovski
>Priority: Major
>  Labels: flaky-test
>
> {code:java}
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
>   at 
> kafka.api.SaslSslAdminClientIntegrationTest.testCreateTopicsResponseMetadataAndConfig(SaslSslAdminClientIntegrationTest.scala:452)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:288)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:282)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.kafka.common.errors.UnknownTopicOrPartitionException: 
> This server does not host this topic-partition.{code}
> Failed in [https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/25374]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-7965) Flaky Test ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup

2020-01-28 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-7965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17025634#comment-17025634
 ] 

Matthias J. Sax commented on KAFKA-7965:


[https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/388/testReport/kafka.api/ConsumerBounceTest/testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup/]
{quote}org.scalatest.exceptions.TestFailedException: The remaining consumers in 
the group could not fetch the expected records at 
org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:530) at 
org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:529) at 
org.scalatest.Assertions$.newAssertionFailedException(Assertions.scala:1389) at 
org.scalatest.Assertions.fail(Assertions.scala:1091) at 
org.scalatest.Assertions.fail$(Assertions.scala:1087) at 
org.scalatest.Assertions$.fail(Assertions.scala:1389) at 
kafka.api.ConsumerBounceTest.testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup(ConsumerBounceTest.scala:329){quote}

> Flaky Test 
> ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup
> 
>
> Key: KAFKA-7965
> URL: https://issues.apache.org/jira/browse/KAFKA-7965
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, unit tests
>Affects Versions: 1.1.1, 2.2.0, 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> To get stable nightly builds for `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/21/]
> {quote}java.lang.AssertionError: Received 0, expected at least 68 at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> kafka.api.ConsumerBounceTest.receiveAndCommit(ConsumerBounceTest.scala:557) 
> at 
> kafka.api.ConsumerBounceTest.$anonfun$testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup$1(ConsumerBounceTest.scala:320)
>  at 
> kafka.api.ConsumerBounceTest.$anonfun$testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup$1$adapted(ConsumerBounceTest.scala:319)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) 
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
> kafka.api.ConsumerBounceTest.testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup(ConsumerBounceTest.scala:319){quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-7965) Flaky Test ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup

2020-01-28 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-7965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17025635#comment-17025635
 ] 

Matthias J. Sax commented on KAFKA-7965:


[https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/4400/testReport/junit/kafka.api/ConsumerBounceTest/testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup/]

> Flaky Test 
> ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup
> 
>
> Key: KAFKA-7965
> URL: https://issues.apache.org/jira/browse/KAFKA-7965
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, unit tests
>Affects Versions: 1.1.1, 2.2.0, 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> To get stable nightly builds for `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/21/]
> {quote}java.lang.AssertionError: Received 0, expected at least 68 at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> kafka.api.ConsumerBounceTest.receiveAndCommit(ConsumerBounceTest.scala:557) 
> at 
> kafka.api.ConsumerBounceTest.$anonfun$testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup$1(ConsumerBounceTest.scala:320)
>  at 
> kafka.api.ConsumerBounceTest.$anonfun$testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup$1$adapted(ConsumerBounceTest.scala:319)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) 
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
> kafka.api.ConsumerBounceTest.testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup(ConsumerBounceTest.scala:319){quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)