[jira] [Commented] (KAFKA-10277) Relax non-null key requirement for KStream-GlobalKTable joins

2020-07-28 Thread Joel Wee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166211#comment-17166211
 ] 

Joel Wee commented on KAFKA-10277:
--

[~mjsax] Could I pick this up?

At the moment, it also looks like there's no check to enforce that the 
{{keySelector}} returns a non-null join key. So I think it makes sense to relax 
the requirement on the record key and enforce it on the {{keySelector}}'s 
result instead. 

From the code, it looks like all of this is done in the {{process}} method in 
{{KStreamKTableJoinProcessor.java}}, so the change should be as simple as 
changing the check for {{key == null}} there to 
{{keyMapper.apply(key, value) == null}} (the default keyMapper just returns 
the key) and then adding some tests. Does that sound right? The method is 
copied below for reference:
{code:java}
public void process(final K1 key, final V1 value) {
    // Proposed: change `key == null` to `keyMapper.apply(key, value) == null`
    if (key == null || value == null) {
        LOG.warn(
            "Skipping record due to null key or value. key=[{}] value=[{}] topic=[{}] partition=[{}] offset=[{}]",
            key, value, context().topic(), context().partition(), context().offset()
        );
        droppedRecordsSensor.record();
    } else {
        final K2 mappedKey = keyMapper.apply(key, value);
        // This null check could then be removed
        final V2 value2 = mappedKey == null ? null : getValueOrNull(valueGetter.get(mappedKey));
        if (leftJoin || value2 != null) {
            context().forward(key, joiner.apply(value, value2));
        }
    }
}
{code}
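To make the proposal concrete, here is a minimal, self-contained sketch of the proposed check. The names are hypothetical: a plain map and string concatenation stand in for the real value getter and {{ValueJoiner}}, so this is an illustration of the idea, not the actual Streams internals.

```java
import java.util.Map;
import java.util.function.BiFunction;

public class JoinKeyCheckSketch {

    // Returns the joined value, or null if the record would be dropped.
    static <K1, V1, K2, V2> String process(final K1 key,
                                           final V1 value,
                                           final BiFunction<K1, V1, K2> keyMapper,
                                           final Map<K2, V2> table,
                                           final boolean leftJoin) {
        // Proposed check: drop on a null *mapped* key (or null value),
        // not on a null stream record key.
        if (value == null || keyMapper.apply(key, value) == null) {
            return null; // record dropped (droppedRecordsSensor.record() in the real code)
        }
        final K2 mappedKey = keyMapper.apply(key, value);
        final V2 value2 = table.get(mappedKey); // stands in for valueGetter.get(mappedKey)
        if (leftJoin || value2 != null) {
            return value + "+" + value2; // stands in for joiner.apply(value, value2)
        }
        return null;
    }

    public static void main(final String[] args) {
        final Map<String, String> table = Map.of("joinKey", "tableValue");
        // A null stream key is now fine as long as the selector derives a non-null join key.
        System.out.println(process(null, "joinKey", (k, v) -> v, table, false));
    }
}
```

With this shape, a record whose stream key is null still joins successfully whenever the selector produces a non-null join key.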
 

 

> Relax non-null key requirement for KStream-GlobalKTable joins
> -
>
> Key: KAFKA-10277
> URL: https://issues.apache.org/jira/browse/KAFKA-10277
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Priority: Major
>  Labels: beginner, newbie
>
> In general, Kafka Streams requires that key and value are non-null for 
> aggregations and join input records. While this requirement is reasonable in 
> general, for KStream-GlobalKTable joins it's questionable for the stream 
> input record key. The join is based on a `keySelector` and it seems to be 
> sufficient to require that the `keySelector` returns a not-null join-key for 
> the stream record.
> We should consider to relax the non-null key requirement for stream-side 
> input records.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-10311) Flaky test KafkaAdminClientTest#testMetadataRetries

2020-07-28 Thread Chia-Ping Tsai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166369#comment-17166369
 ] 

Chia-Ping Tsai commented on KAFKA-10311:


[https://github.com/apache/kafka/pull/9091] has covered this issue.

> Flaky test KafkaAdminClientTest#testMetadataRetries
> ---
>
> Key: KAFKA-10311
> URL: https://issues.apache.org/jira/browse/KAFKA-10311
> Project: Kafka
>  Issue Type: Bug
>Reporter: Boyang Chen
>Priority: Major
>
> [https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/3545/testReport/junit/org.apache.kafka.clients.admin/KafkaAdminClientTest/testMetadataRetries/]
>  
> h3. Error Message
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TimeoutException: 
> Call(callName=describeTopics, deadlineMs=1595694629113, tries=1, 
> nextAllowedTryMs=1595694629217) timed out at 1595694629117 after 1 attempt(s)
> h3. Stacktrace
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TimeoutException: 
> Call(callName=describeTopics, deadlineMs=1595694629113, tries=1, 
> nextAllowedTryMs=1595694629217) timed out at 1595694629117 after 1 attempt(s) 
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
>  at 
> org.apache.kafka.clients.admin.KafkaAdminClientTest.testMetadataRetries(KafkaAdminClientTest.java:995)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:288)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:282)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
> java.lang.Thread.run(Thread.java:748) Caused by: 
> org.apache.kafka.common.errors.TimeoutException: 
> Call(callName=describeTopics, deadlineMs=1595694629113, tries=1, 
> nextAllowedTryMs=1595694629217) timed out at 1595694629117 after 1 attempt(s) 
> Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting 
> for a node assignment.





[jira] [Commented] (KAFKA-10318) Default API timeout must be enforced to be greater than request timeout just like in AdminClient

2020-07-28 Thread Gabor Somogyi (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166472#comment-17166472
 ] 

Gabor Somogyi commented on KAFKA-10318:
---

cc [~viktorsomogyi]

> Default API timeout must be enforced to be greater than request timeout just 
> like in AdminClient
> 
>
> Key: KAFKA-10318
> URL: https://issues.apache.org/jira/browse/KAFKA-10318
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Gabor Somogyi
>Priority: Major
>
> https://github.com/apache/kafka/blob/66563e712b0b9f84f673b262f2fb87c03110084d/clients/src/main/java/org/apache/kafka/clients/admin/KafkaAdminClient.java#L545-L555





[jira] [Created] (KAFKA-10318) Default API timeout must be enforced to be greater than request timeout just like in AdminClient

2020-07-28 Thread Gabor Somogyi (Jira)
Gabor Somogyi created KAFKA-10318:
-

 Summary: Default API timeout must be enforced to be greater than 
request timeout just like in AdminClient
 Key: KAFKA-10318
 URL: https://issues.apache.org/jira/browse/KAFKA-10318
 Project: Kafka
  Issue Type: Bug
Reporter: Gabor Somogyi


https://github.com/apache/kafka/blob/66563e712b0b9f84f673b262f2fb87c03110084d/clients/src/main/java/org/apache/kafka/clients/admin/KafkaAdminClient.java#L545-L555
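For context, the linked AdminClient code enforces roughly the following relationship between the two timeouts. This is a simplified, self-contained sketch with hypothetical names, not the actual Kafka method:

```java
public class TimeoutValidationSketch {

    // Returns the effective default API timeout, enforcing that it is not
    // smaller than the request timeout.
    static int effectiveDefaultApiTimeoutMs(final int requestTimeoutMs,
                                            final int defaultApiTimeoutMs,
                                            final boolean explicitlyConfigured) {
        if (defaultApiTimeoutMs < requestTimeoutMs) {
            if (explicitlyConfigured) {
                // An explicitly configured value below the request timeout is rejected.
                throw new IllegalArgumentException(
                        "default.api.timeout.ms must be >= request.timeout.ms");
            }
            // A default that was never set explicitly is silently bumped up.
            return requestTimeoutMs;
        }
        return defaultApiTimeoutMs;
    }

    public static void main(final String[] args) {
        System.out.println(effectiveDefaultApiTimeoutMs(30_000, 60_000, true));  // prints 60000
        System.out.println(effectiveDefaultApiTimeoutMs(30_000, 10_000, false)); // prints 30000
    }
}
```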






[jira] [Updated] (KAFKA-10318) Default API timeout must be enforced to be greater than request timeout just like in AdminClient

2020-07-28 Thread Gabor Somogyi (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Somogyi updated KAFKA-10318:
--
Affects Version/s: 2.5.0

> Default API timeout must be enforced to be greater than request timeout just 
> like in AdminClient
> 
>
> Key: KAFKA-10318
> URL: https://issues.apache.org/jira/browse/KAFKA-10318
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Gabor Somogyi
>Priority: Major
>
> https://github.com/apache/kafka/blob/66563e712b0b9f84f673b262f2fb87c03110084d/clients/src/main/java/org/apache/kafka/clients/admin/KafkaAdminClient.java#L545-L555





[jira] [Updated] (KAFKA-10268) dynamic config like "--delete-config log.retention.ms" doesn't work

2020-07-28 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch updated KAFKA-10268:
--
Fix Version/s: 2.6.0

> dynamic config like "--delete-config log.retention.ms" doesn't work
> ---
>
> Key: KAFKA-10268
> URL: https://issues.apache.org/jira/browse/KAFKA-10268
> Project: Kafka
>  Issue Type: Bug
>  Components: log, log cleaner
>Affects Versions: 2.1.1
>Reporter: zhifeng.peng
>Assignee: huxihx
>Priority: Major
> Fix For: 2.6.0, 2.7.0
>
> Attachments: server.log.2020-07-13-14
>
>
> After I set "log.retention.ms=301000" to clean the data, I used the command
> "bin/kafka-configs.sh --bootstrap-server 10.129.104.15:9092 --entity-type 
> brokers --entity-default --alter --delete-config log.retention.ms" to reset it 
> to the default.
> The static broker configuration log.retention.hours is 168h, and there is no 
> topic-level configuration like retention.ms.
> It did not actually take effect, although server.log printed the broker 
> configuration like this:
> log.retention.check.interval.ms = 30
>  log.retention.hours = 168
>  log.retention.minutes = null
>  {color:#ff}log.retention.ms = null{color}
>  log.roll.hours = 168
>  log.roll.jitter.hours = 0
>  log.roll.jitter.ms = null
>  log.roll.ms = null
>  log.segment.bytes = 1073741824
>  log.segment.delete.delay.ms = 6
>  
> Then we can see from the server.log that the retention time is still 301000ms 
> and segments have been deleted.
> [2020-07-13 14:30:00,958] INFO [Log partition=test_retention-2, 
> dir=/data/kafka_logs-test] Found deletable segments with base offsets 
> [5005329,6040360] due to retention time 301000ms breach (kafka.log.Log)
>  [2020-07-13 14:30:00,959] INFO [Log partition=test_retention-2, 
> dir=/data/kafka_logs-test] Scheduling log segment [baseOffset 5005329, size 
> 1073741222] for deletion. (kafka.log.Log)
>  [2020-07-13 14:30:00,959] INFO [Log partition=test_retention-2, 
> dir=/data/kafka_logs-test] Scheduling log segment [baseOffset 6040360, size 
> 1073728116] for deletion. (kafka.log.Log)
>  [2020-07-13 14:30:00,959] INFO [Log partition=test_retention-2, 
> dir=/data/kafka_logs-test] Incrementing log start offset to 7075648 
> (kafka.log.Log)
>  [2020-07-13 14:30:00,960] INFO [Log partition=test_retention-0, 
> dir=/data/kafka_logs-test] Found deletable segments with base offsets 
> [5005330,6040410] {color:#FF}due to retention time 301000ms{color} breach 
> (kafka.log.Log)
>  [2020-07-13 14:30:00,960] INFO [Log partition=test_retention-0, 
> dir=/data/kafka_logs-test] Scheduling log segment [baseOffset 5005330, size 
> 1073732368] for deletion. (kafka.log.Log)
>  [2020-07-13 14:30:00,961] INFO [Log partition=test_retention-0, 
> dir=/data/kafka_logs-test] Scheduling log segment [baseOffset 6040410, size 
> 1073735366] for deletion. (kafka.log.Log)
>  [2020-07-13 14:30:00,961] INFO [Log partition=test_retention-0, 
> dir=/data/kafka_logs-test] Incrementing log start offset to 7075685 
> (kafka.log.Log)
>  [2020-07-13 14:31:00,959] INFO [Log partition=test_retention-2, 
> dir=/data/kafka_logs-test] Deleting segment 5005329 (kafka.log.Log)
>  [2020-07-13 14:31:00,959] INFO [Log partition=test_retention-2, 
> dir=/data/kafka_logs-test] Deleting segment 6040360 (kafka.log.Log)
>  [2020-07-13 14:31:00,961] INFO [Log partition=test_retention-0, 
> dir=/data/kafka_logs-test] Deleting segment 5005330 (kafka.log.Log)
>  [2020-07-13 14:31:00,961] INFO [Log partition=test_retention-0, 
> dir=/data/kafka_logs-test] Deleting segment 6040410 (kafka.log.Log)
>  [2020-07-13 14:31:01,144] INFO Deleted log 
> /data/kafka_logs-test/test_retention-2/06040360.log.deleted. 
> (kafka.log.LogSegment)
>  [2020-07-13 14:31:01,144] INFO Deleted offset index 
> /data/kafka_logs-test/test_retention-2/06040360.index.deleted. 
> (kafka.log.LogSegment)
>  [2020-07-13 14:31:01,144] INFO Deleted time index 
> /data/kafka_logs-test/test_retention-2/06040360.timeindex.deleted.
>  (kafka.log.LogSegment)
>  
> Here are a few steps to reproduce it.
> 1. Set log.retention.ms=301000:
> bin/kafka-configs.sh --bootstrap-server 10.129.104.15:9092 --entity-type 
> brokers --entity-default --alter --add-config log.retention.ms=301000
> 2. Produce messages to the topic:
> bin/kafka-producer-perf-test.sh --topic test_retention --num-records 1000 
> --throughput -1 --producer-props bootstrap.servers=10.129.104.15:9092 
> --record-size 1024
> 3. Reset log.retention.ms to the default:
> bin/kafka-configs.sh --bootstrap-server 10.129.104.15:9092 --entity-type 
> brokers --entity-default --alter --delete-config log.retention.ms
>  
> I have attached server.log. You can see the log from row 238 to row 731. 




[jira] [Commented] (KAFKA-10268) dynamic config like "--delete-config log.retention.ms" doesn't work

2020-07-28 Thread Randall Hauch (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166513#comment-17166513
 ] 

Randall Hauch commented on KAFKA-10268:
---

This was also cherry-picked to {{2.6}}, but that branch has been frozen while 
we try to release AK 2.6.0. However, given that this is low-risk, I'll leave it 
on {{2.6}} and have updated the "Fix Versions" field above to include {{2.6.0}}.

> dynamic config like "--delete-config log.retention.ms" doesn't work
> ---
>
> Key: KAFKA-10268
> URL: https://issues.apache.org/jira/browse/KAFKA-10268
> Project: Kafka
>  Issue Type: Bug
>  Components: log, log cleaner
>Affects Versions: 2.1.1
>Reporter: zhifeng.peng
>Assignee: huxihx
>Priority: Major
> Fix For: 2.6.0, 2.7.0
>
> Attachments: server.log.2020-07-13-14
>
>

[jira] [Created] (KAFKA-10319) Fix unknown offset sum on trunk

2020-07-28 Thread John Roesler (Jira)
John Roesler created KAFKA-10319:


 Summary: Fix unknown offset sum on trunk
 Key: KAFKA-10319
 URL: https://issues.apache.org/jira/browse/KAFKA-10319
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Affects Versions: 2.7.0
Reporter: John Roesler
Assignee: Bruno Cadonna


Port [https://github.com/apache/kafka/pull/9066] to trunk





[jira] [Commented] (KAFKA-10287) fix flaky streams/streams_standby_replica_test.py

2020-07-28 Thread John Roesler (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166517#comment-17166517
 ] 

John Roesler commented on KAFKA-10287:
--

Created https://issues.apache.org/jira/browse/KAFKA-10319 to track the trunk 
migration so we can close this ticket for 2.6

> fix flaky streams/streams_standby_replica_test.py
> -
>
> Key: KAFKA-10287
> URL: https://issues.apache.org/jira/browse/KAFKA-10287
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, system tests
>Reporter: Chia-Ping Tsai
>Assignee: Bruno Cadonna
>Priority: Blocker
> Fix For: 2.6.0
>
>
> {quote}
> Module: kafkatest.tests.streams.streams_standby_replica_test
> Class:  StreamsStandbyTask
> Method: test_standby_tasks_rebalance
> {quote}
> It passes occasionally on my local machine.





[jira] [Updated] (KAFKA-10132) Kafka Connect JMX MBeans with String values have type double

2020-07-28 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch updated KAFKA-10132:
--
Fix Version/s: (was: 2.6.0)

> Kafka Connect JMX MBeans with String values have type double
> 
>
> Key: KAFKA-10132
> URL: https://issues.apache.org/jira/browse/KAFKA-10132
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.5.0
>Reporter: Tom Malaher
>Assignee: Rens Groothuijsen
>Priority: Major
>  Labels: pull-request-available
>
> There are quite a few metrics available for source/sink connectors, and many 
> of them are numeric (JMX type "double"), but there are a few attributes that 
> have string values that are still tagged as "double".
> For example:
> Bean: kafka.connect:connector=my-source,type=connector-metrics Attribute: 
> status
> The Attribute Description says: "The status of the connector task. One of 
> 'unassigned', 'running', 'paused', 'failed', or 'destroyed'."
> The value is currently "running" on my instance.
> This causes difficulty for anything that tries to introspect the JMX 
> attribute metadata and then parse/display the data.
> See also 
> [https://stackoverflow.com/questions/50291157/which-jmx-metric-should-be-used-to-monitor-the-status-of-a-connector-in-kafka-co]
>  where this problem is mentioned in one of the answers (dating back to 2018).
> The attribute metadata should be updated to indicate the correct type.
> I suspect the problem lies at line 220 of 
> `org.apache.kafka.common.metrics.JmxReporter` (in version 2.5.0) where a 
> hardcoded `double.class.getName()` is used as the mbean data type even for 
> metrics with a type of String.
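A minimal sketch of the suspected fix, assuming the attribute type is derived from the metric value itself rather than hardcoded. The helper below is hypothetical, not the actual {{JmxReporter}} code:

```java
public class MBeanTypeSketch {

    // Derives the MBean attribute type name from the metric value itself
    // instead of hardcoding double.class.getName().
    static String attributeType(final Object metricValue) {
        if (metricValue instanceof Number) {
            return double.class.getName(); // numeric metrics keep the "double" type
        }
        // String-valued metrics (e.g. the connector "status" attribute)
        // would then be reported as java.lang.String.
        return metricValue == null ? Object.class.getName()
                                   : metricValue.getClass().getName();
    }

    public static void main(final String[] args) {
        System.out.println(attributeType(1.0));       // prints double
        System.out.println(attributeType("running")); // prints java.lang.String
    }
}
```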





[jira] [Created] (KAFKA-10320) Log metrics for future logs never have the is-future tag removed

2020-07-28 Thread Bob Barrett (Jira)
Bob Barrett created KAFKA-10320:
---

 Summary: Log metrics for future logs never have the is-future tag 
removed
 Key: KAFKA-10320
 URL: https://issues.apache.org/jira/browse/KAFKA-10320
 Project: Kafka
  Issue Type: Bug
Reporter: Bob Barrett








[jira] [Updated] (KAFKA-10320) Log metrics for future logs never have the is-future tag removed

2020-07-28 Thread Bob Barrett (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Barrett updated KAFKA-10320:

Description: When we create future logs as part of moving replicas between 
log dirs, the Log metrics are registered with an `is-future` tag. When we 
convert these future logs to current logs, the metrics are not re-registered, 
so the `is-future` tag persists. We should fix these metrics at the time that 
they are converted to current logs.
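A minimal sketch of the proposed fix, using a plain map as a stand-in for the real metrics registry. The names are hypothetical; this is not the actual Log code:

```java
import java.util.HashMap;
import java.util.Map;

public class FutureLogMetricsSketch {

    // metric name -> tag map, standing in for a real metrics registry
    private final Map<String, Map<String, String>> registry = new HashMap<>();

    void registerFutureLogMetrics(final String topicPartition) {
        final Map<String, String> tags = new HashMap<>();
        tags.put("topic-partition", topicPartition);
        tags.put("is-future", "true"); // set while the replica move is in flight
        registry.put("Size", tags);
    }

    // The proposed fix: on conversion, re-register the metrics without the tag.
    void convertToCurrentLog() {
        for (final Map<String, String> tags : registry.values()) {
            tags.remove("is-future");
        }
    }

    boolean hasFutureTag(final String metricName) {
        return registry.get(metricName).containsKey("is-future");
    }

    public static void main(final String[] args) {
        final FutureLogMetricsSketch log = new FutureLogMetricsSketch();
        log.registerFutureLogMetrics("topic-0");
        log.convertToCurrentLog();
        System.out.println(log.hasFutureTag("Size")); // prints false
    }
}
```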

> Log metrics for future logs never have the is-future tag removed
> 
>
> Key: KAFKA-10320
> URL: https://issues.apache.org/jira/browse/KAFKA-10320
> Project: Kafka
>  Issue Type: Bug
>Reporter: Bob Barrett
>Priority: Major
>
> When we create future logs as part of moving replicas between log dirs, the 
> Log metrics are registered with an `is-future` tag. When we convert these 
> future logs to current logs, the metrics are not re-registered, so the 
> `is-future` tag persists. We should fix these metrics at the time that they 
> are converted to current logs.





[jira] [Updated] (KAFKA-10320) Log metrics for future logs never have the is-future tag removed

2020-07-28 Thread Bob Barrett (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Barrett updated KAFKA-10320:

Priority: Minor  (was: Major)

> Log metrics for future logs never have the is-future tag removed
> 
>
> Key: KAFKA-10320
> URL: https://issues.apache.org/jira/browse/KAFKA-10320
> Project: Kafka
>  Issue Type: Bug
>Reporter: Bob Barrett
>Priority: Minor
>
> When we create future logs as part of moving replicas between log dirs, the 
> Log metrics are registered with an `is-future` tag. When we convert these 
> future logs to current logs, the metrics are not re-registered, so the 
> `is-future` tag persists. We should fix these metrics at the time that they 
> are converted to current logs.





[jira] [Updated] (KAFKA-10224) The license term about jersey is not correct

2020-07-28 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch updated KAFKA-10224:
--
Description: 
Kafka 2.3.0 and later bundle jersey 2.28. Since 2.28, jersey changed the 
license type from CDDL/GPLv2+CPE to EPLv2. But in Kafka 2.5.0's LICENSE file 
[https://github.com/apache/kafka/blob/2.5/LICENSE], it still said

"This distribution has a binary dependency on jersey, which is available under 
the CDDL".

This should be corrected ASAP.

  was:
Kafka 2.5.0 bundles jersey 2.28. Since 2.28, jersey changed the license type 
from CDDL/GPLv2+CPE to EPLv2. But in Kafka 2.5.0's LICENSE file 
[https://github.com/apache/kafka/blob/2.5/LICENSE], it still said

"This distribution has a binary dependency on jersey, which is available under 
the CDDL".

This should be corrected ASAP.


> The license term about jersey is not correct
> 
>
> Key: KAFKA-10224
> URL: https://issues.apache.org/jira/browse/KAFKA-10224
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.5.0
>Reporter: jaredli2020
>Assignee: Rens Groothuijsen
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 2.6.0
>
>
> Kafka 2.3.0 and later bundle jersey 2.28. Since 2.28, jersey changed the 
> license type from CDDL/GPLv2+CPE to EPLv2. But in Kafka 2.5.0's LICENSE file 
> [https://github.com/apache/kafka/blob/2.5/LICENSE], it still said
> "This distribution has a binary dependency on jersey, which is available 
> under the CDDL".
> This should be corrected ASAP.





[jira] [Updated] (KAFKA-10224) The license term about jersey is not correct

2020-07-28 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch updated KAFKA-10224:
--
Affects Version/s: (was: 2.5.0)
   2.3.0

> The license term about jersey is not correct
> 
>
> Key: KAFKA-10224
> URL: https://issues.apache.org/jira/browse/KAFKA-10224
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.3.0
>Reporter: jaredli2020
>Assignee: Rens Groothuijsen
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 2.6.0
>
>
> Kafka 2.3.0 and later bundle jersey 2.28. Since 2.28, jersey changed the 
> license type from CDDL/GPLv2+CPE to EPLv2. But in Kafka 2.5.0's LICENSE file 
> [https://github.com/apache/kafka/blob/2.5/LICENSE], it still said
> "This distribution has a binary dependency on jersey, which is available 
> under the CDDL".
> This should be corrected ASAP.





[jira] [Assigned] (KAFKA-10277) Relax non-null key requirement for KStream-GlobalKTable joins

2020-07-28 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-10277:
---

Assignee: Joel Wee

> Relax non-null key requirement for KStream-GlobalKTable joins
> -
>
> Key: KAFKA-10277
> URL: https://issues.apache.org/jira/browse/KAFKA-10277
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Joel Wee
>Priority: Major
>  Labels: beginner, newbie
>
> In general, Kafka Streams requires that key and value are non-null for 
> aggregations and join input records. While this requirement is reasonable in 
> general, for KStream-GlobalKTable joins it's questionable for the stream 
> input record key. The join is based on a `keySelector` and it seems to be 
> sufficient to require that the `keySelector` returns a not-null join-key for 
> the stream record.
> We should consider to relax the non-null key requirement for stream-side 
> input records.





[jira] [Commented] (KAFKA-10277) Relax non-null key requirement for KStream-GlobalKTable joins

2020-07-28 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166561#comment-17166561
 ] 

Matthias J. Sax commented on KAFKA-10277:
-

{quote}Could I pick this up?
{quote}
Absolutely.
{quote} it also looks like there's no check to enforce that the {{keySelector}} 
returns a non-null join key.
{quote}
Well, there is (at least partly): exactly the line you highlighted. If the 
returned `mappedKey` is `null`, we don't call `getValueOrNull()`, i.e., if the 
join key is `null` we don't do the table lookup. And I think we need to keep 
this check. But you are right that with a leftJoin we would still emit the 
input record, which sounds incorrect.

Also note that the Processor is used for both the stream-table and the 
stream-globalTable join, so we need to ensure that it works correctly for both 
cases. For the stream-table join, the provided `keyMapper` is just the 
key-identity function `(k,v) -> k`.

But I would defer the detailed discussion to the PR review?

 

> Relax non-null key requirement for KStream-GlobalKTable joins
> -
>
> Key: KAFKA-10277
> URL: https://issues.apache.org/jira/browse/KAFKA-10277
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Joel Wee
>Priority: Major
>  Labels: beginner, newbie
>
> In general, Kafka Streams requires that key and value are non-null for 
> aggregations and join input records. While this requirement is reasonable in 
> general, for KStream-GlobalKTable joins it's questionable for the stream 
> input record key. The join is based on a `keySelector` and it seems to be 
> sufficient to require that the `keySelector` returns a not-null join-key for 
> the stream record.
> We should consider to relax the non-null key requirement for stream-side 
> input records.





[jira] [Commented] (KAFKA-10224) The license term about jersey is not correct

2020-07-28 Thread Randall Hauch (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166574#comment-17166574
 ] 

Randall Hauch commented on KAFKA-10224:
---

Kafka upgraded to Jersey 2.28 starting in 2.3.0 via 
[https://github.com/apache/kafka/pull/6665]. 

Also, just to clarify, per 
[https://www.apache.org/legal/resolved.html#weak-copyleft-licenses] it is still 
valid for Apache Kafka to include Jersey in binary form:

{quote}
Software under the following licenses may be included in binary form within an 
Apache product if the inclusion is appropriately labeled (see above):

 Common Development and Distribution Licenses: CDDL 1.0 and CDDL 1.1
 Common Public License: CPL 1.0
 Eclipse Public License: EPL 1.0
 IBM Public License: IPL 1.0
 Mozilla Public Licenses: MPL 1.0, MPL 1.1, and MPL 2.0
 Sun Public License: SPL 1.0
 Open Software License 3.0
 Erlang Public License
 UnRAR License (only for unarchiving)
 SIL Open Font License
 Ubuntu Font License Version 1.0
 IPA Font License Agreement v1.0
 Ruby License (including the older version when GPLv2 was a listed alternative 
Ruby 1.9.2 license)
 Eclipse Public License 2.0: EPL 2.0
{quote}

> The license term about jersey is not correct
> 
>
> Key: KAFKA-10224
> URL: https://issues.apache.org/jira/browse/KAFKA-10224
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.3.0
>Reporter: jaredli2020
>Assignee: Rens Groothuijsen
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 2.6.0
>
>
> Kafka 2.3.0 and later bundle jersey 2.28. Since 2.28, jersey changed the 
> license type from CDDL/GPLv2+CPE to EPLv2. But in Kafka 2.5.0's LICENSE file 
> [https://github.com/apache/kafka/blob/2.5/LICENSE], it still said
> "This distribution has a binary dependency on jersey, which is available 
> under the CDDL".
> This should be corrected ASAP.





[jira] [Updated] (KAFKA-10224) The license term about jersey is not correct

2020-07-28 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch updated KAFKA-10224:
--
Fix Version/s: 2.5.2
   2.7.0
   2.4.2
   2.3.2

> The license term about jersey is not correct
> 
>
> Key: KAFKA-10224
> URL: https://issues.apache.org/jira/browse/KAFKA-10224
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.3.0
>Reporter: jaredli2020
>Assignee: Rens Groothuijsen
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 2.3.2, 2.6.0, 2.4.2, 2.7.0, 2.5.2
>
>
> Kafka 2.3.0 and later bundle jersey 2.28. Since 2.28, jersey changed the 
> license type from CDDL/GPLv2+CPE to EPLv2. But in Kafka 2.5.0's LICENSE file 
> [https://github.com/apache/kafka/blob/2.5/LICENSE], it still said
> "This distribution has a binary dependency on jersey, which is available 
> under the CDDL".
> This should be corrected ASAP.





[jira] [Assigned] (KAFKA-10316) Consider renaming getter method for Interactive Queries

2020-07-28 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-10316:
---

Assignee: John Thomas

> Consider renaming getter method for Interactive Queries
> ---
>
> Key: KAFKA-10316
> URL: https://issues.apache.org/jira/browse/KAFKA-10316
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: John Thomas
>Priority: Minor
>  Labels: beginner, need-kip, newbie
>
> In the 2.5 release, we introduce new classes for Interactive Queries via 
> KIP-535 (cf https://issues.apache.org/jira/browse/KAFKA-6144). The KIP did 
> not specify the names for getter methods of `KeyQueryMetadata` explicitly and 
> they were added in the PR as `getActiveHost()`, `getStandbyHosts()`, and 
> `getPartition()`.
> However, in Kafka it is custom to not use the `get` prefix for getters and 
> thus the methods should have been added as `activeHost()`, `standbyHosts()`, 
> and `partition()`, respectively.
> We should consider renaming the methods accordingly, by deprecating the 
> existing ones and adding the new ones in parallel.
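The deprecate-in-parallel approach described above can be sketched in a few lines. This is an illustrative stand-in class, not the actual `KeyQueryMetadata` code; the field and constructor are assumptions:

```java
public class KeyQueryMetadataSketch {
    private final String activeHost;

    public KeyQueryMetadataSketch(final String activeHost) {
        this.activeHost = activeHost;
    }

    // New accessor following the Kafka naming convention (no `get` prefix).
    public String activeHost() {
        return activeHost;
    }

    // Old accessor kept in parallel and deprecated, so existing callers
    // keep compiling while being steered toward activeHost().
    @Deprecated
    public String getActiveHost() {
        return activeHost();
    }

    public static void main(final String[] args) {
        final KeyQueryMetadataSketch m = new KeyQueryMetadataSketch("host-1:9092");
        System.out.println(m.activeHost());
    }
}
```

Both accessors return the same value, so the rename is purely a naming cleanup with a deprecation cycle.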





[jira] [Commented] (KAFKA-10316) Consider renaming getter method for Interactive Queries

2020-07-28 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166583#comment-17166583
 ] 

Matthias J. Sax commented on KAFKA-10316:
-

Cool. Seems Boyang granted permissions on the wiki. Just added you to the list 
of contributors on Jira and assigned the ticket to you. You can now also 
self-assign tickets.

> Consider renaming getter method for Interactive Queries
> ---
>
> Key: KAFKA-10316
> URL: https://issues.apache.org/jira/browse/KAFKA-10316
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: John Thomas
>Priority: Minor
>  Labels: beginner, need-kip, newbie
>
> In the 2.5 release, we introduce new classes for Interactive Queries via 
> KIP-535 (cf https://issues.apache.org/jira/browse/KAFKA-6144). The KIP did 
> not specify the names for getter methods of `KeyQueryMetadata` explicitly and 
> they were added in the PR as `getActiveHost()`, `getStandbyHosts()`, and 
> `getPartition()`.
> However, in Kafka it is custom to not use the `get` prefix for getters and 
> thus the methods should have been added as `activeHost()`, `standbyHosts()`, 
> and `partition()`, respectively.
> We should consider renaming the methods accordingly, by deprecating the 
> existing ones and adding the new ones in parallel.





[jira] [Commented] (KAFKA-8037) KTable restore may load bad data

2020-07-28 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166592#comment-17166592
 ] 

Matthias J. Sax commented on KAFKA-8037:


Thanks [~guozhang]. Overall, SGTM.
{quote}During IQ the object deserialized from Materialized serde will be 
returned.{quote}
I did not double check the code, but from the comment above, it seems that 
there is actually no such thing? The serde from Consumed seems to be used for 
the store, as well, and whatever is specified in Materialized is ignored? We 
should verify this.

Can you elaborate (or maybe this will be discussed on the KIP) how 
`topology.optimization` and the per-store config interact?

> KTable restore may load bad data
> 
>
> Key: KAFKA-8037
> URL: https://issues.apache.org/jira/browse/KAFKA-8037
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Priority: Minor
>  Labels: pull-request-available
>
> If an input topic contains bad data, users can specify a 
> `deserialization.exception.handler` to drop corrupted records on read. 
> However, this mechanism may be by-passed on restore. Assume a 
> `builder.table()` call reads and drops a corrupted record. If the table state 
> is lost and restored from the changelog topic, the corrupted record may be 
> copied into the store, because on restore plain bytes are copied.
> If the KTable is used in a join, an internal `store.get()` call to lookup the 
> record would fail with a deserialization exception if the value part cannot 
> be deserialized.
> GlobalKTables are affected, too (cf. KAFKA-7663 that may allow a fix for 
> GlobalKTable case). It's unclear to me atm, how this issue could be addressed 
> for KTables though.
> Note, that user state stores are not affected, because they always have a 
> dedicated changelog topic (and don't reuse an input topic) and thus the 
> corrupted record would not be written into the changelog.
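For reference, the read-path handler mentioned above is enabled via the Kafka Streams `default.deserialization.exception.handler` config. A minimal sketch (application id and broker address are placeholders; per the report, this protects only the read path, not changelog restore):

```java
import java.util.Properties;

public class HandlerConfigSketch {
    public static Properties streamsConfig() {
        final Properties props = new Properties();
        props.put("application.id", "ktable-restore-demo");   // placeholder
        props.put("bootstrap.servers", "localhost:9092");     // placeholder
        // Drop corrupted records on read instead of failing the task; note
        // that restore copies plain bytes and bypasses this handler.
        props.put("default.deserialization.exception.handler",
                "org.apache.kafka.streams.errors.LogAndContinueExceptionHandler");
        return props;
    }

    public static void main(final String[] args) {
        System.out.println(
                streamsConfig().getProperty("default.deserialization.exception.handler"));
    }
}
```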





[jira] [Commented] (KAFKA-3992) InstanceAlreadyExistsException Error for Consumers Starting in Parallel

2020-07-28 Thread Preeti Gupta (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-3992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166601#comment-17166601
 ] 

Preeti Gupta commented on KAFKA-3992:
-

I am also facing the same issue. Please suggest if you have any solution.

> InstanceAlreadyExistsException Error for Consumers Starting in Parallel
> ---
>
> Key: KAFKA-3992
> URL: https://issues.apache.org/jira/browse/KAFKA-3992
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0, 0.10.0.0
>Reporter: Alexander Cook
>Assignee: Ewen Cheslack-Postava
>Priority: Major
>
> I see the following error sometimes when I start multiple consumers at about 
> the same time in the same process (separate threads). Everything seems to 
> work fine afterwards, so should this not actually be an ERROR level message, 
> or could there be something going wrong that I don't see? 
> Let me know if I can provide any more info! 
> Error processing messages: Error registering mbean 
> kafka.consumer:type=consumer-node-metrics,client-id=consumer-1,node-id=node--1
> org.apache.kafka.common.KafkaException: Error registering mbean 
> kafka.consumer:type=consumer-node-metrics,client-id=consumer-1,node-id=node--1
>  
> Caused by: javax.management.InstanceAlreadyExistsException: 
> kafka.consumer:type=consumer-node-metrics,client-id=consumer-1,node-id=node--1
> Here is the full stack trace: 
> M[?:com.ibm.streamsx.messaging.kafka.KafkaConsumerV9.produceTuples:-1]  - 
> Error processing messages: Error registering mbean 
> kafka.consumer:type=consumer-node-metrics,client-id=consumer-1,node-id=node--1
> org.apache.kafka.common.KafkaException: Error registering mbean 
> kafka.consumer:type=consumer-node-metrics,client-id=consumer-1,node-id=node--1
>   at 
> org.apache.kafka.common.metrics.JmxReporter.reregister(JmxReporter.java:159)
>   at 
> org.apache.kafka.common.metrics.JmxReporter.metricChange(JmxReporter.java:77)
>   at 
> org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:288)
>   at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:177)
>   at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:162)
>   at 
> org.apache.kafka.common.network.Selector$SelectorMetrics.maybeRegisterConnectionMetrics(Selector.java:641)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:268)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:270)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:303)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:197)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:187)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:126)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorKnown(AbstractCoordinator.java:186)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:857)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:829)
>   at 
> com.ibm.streamsx.messaging.kafka.KafkaConsumerV9.produceTuples(KafkaConsumerV9.java:129)
>   at 
> com.ibm.streamsx.messaging.kafka.KafkaConsumerV9$1.run(KafkaConsumerV9.java:70)
>   at java.lang.Thread.run(Thread.java:785)
>   at 
> com.ibm.streams.operator.internal.runtime.OperatorThreadFactory$2.run(OperatorThreadFactory.java:137)
> Caused by: javax.management.InstanceAlreadyExistsException: 
> kafka.consumer:type=consumer-node-metrics,client-id=consumer-1,node-id=node--1
>   at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:449)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1910)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:978)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:912)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:336)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:534)
>   at 
> org.apache.kafka.common.metrics.JmxReporter.reregister(JmxReporter.java:157)
>   ... 18 more
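The mbean name in the error embeds `client-id=consumer-1`, so two consumers in the same JVM ended up registering under the same client id. A common workaround is to assign an explicit, unique `client.id` per consumer. This sketch only builds the config properties (it does not start real consumers; the group id and broker address are placeholders):

```java
import java.util.Properties;
import java.util.concurrent.atomic.AtomicInteger;

public class UniqueClientIdSketch {
    private static final AtomicInteger COUNTER = new AtomicInteger();

    public static Properties consumerConfig(final String appName) {
        final Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", appName);                   // placeholder
        // A unique per-consumer client id avoids the duplicate JMX
        // mbean registration seen in the stack trace above.
        props.put("client.id", appName + "-consumer-" + COUNTER.incrementAndGet());
        return props;
    }

    public static void main(final String[] args) {
        System.out.println(consumerConfig("my-app").getProperty("client.id"));
        System.out.println(consumerConfig("my-app").getProperty("client.id"));
    }
}
```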





[jira] [Commented] (KAFKA-8037) KTable restore may load bad data

2020-07-28 Thread Guozhang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166602#comment-17166602
 ] 

Guozhang Wang commented on KAFKA-8037:
--

Per the interaction: I'm actually suggesting that the source-topic reuse would 
NOT be part of the topology optimization moving forward and would ONLY be 
controlled by the per-store APIs, and hence it would not be related to 
`topology.optimization` any more.

Per the serde: the current rule is, if only `Consumed` is specified, use that 
for both the source topic and the state store, and similarly if only 
`Materialized` is specified, use that for both the source topic and the state 
store; if both are specified, the `Consumed` serde will override the 
`Materialized` serde. So today we are guaranteed that only one serde will be 
used. My previous comment was not very clear; what I meant is that for IQ we 
just use that one serde, whatever it is, to deserialize the bytes (note they 
are just raw bytes from source topics directly).
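The precedence rule above can be modeled in a few lines. This is a toy model with serde names as strings, not actual Streams code:

```java
public class SerdePrecedenceSketch {
    // Returns the single serde used for both the source topic and the state
    // store: Consumed wins when both are specified, otherwise whichever one
    // was given (null means "not specified").
    public static String effectiveSerde(final String consumedSerde,
                                        final String materializedSerde) {
        return consumedSerde != null ? consumedSerde : materializedSerde;
    }

    public static void main(final String[] args) {
        System.out.println(effectiveSerde("consumed", "materialized")); // consumed
        System.out.println(effectiveSerde(null, "materialized"));       // materialized
    }
}
```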

> KTable restore may load bad data
> 
>
> Key: KAFKA-8037
> URL: https://issues.apache.org/jira/browse/KAFKA-8037
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Priority: Minor
>  Labels: pull-request-available
>
> If an input topic contains bad data, users can specify a 
> `deserialization.exception.handler` to drop corrupted records on read. 
> However, this mechanism may be by-passed on restore. Assume a 
> `builder.table()` call reads and drops a corrupted record. If the table state 
> is lost and restored from the changelog topic, the corrupted record may be 
> copied into the store, because on restore plain bytes are copied.
> If the KTable is used in a join, an internal `store.get()` call to lookup the 
> record would fail with a deserialization exception if the value part cannot 
> be deserialized.
> GlobalKTables are affected, too (cf. KAFKA-7663 that may allow a fix for 
> GlobalKTable case). It's unclear to me atm, how this issue could be addressed 
> for KTables though.
> Note, that user state stores are not affected, because they always have a 
> dedicated changelog topic (and don't reuse an input topic) and thus the 
> corrupted record would not be written into the changelog.





[jira] [Commented] (KAFKA-10132) Kafka Connect JMX MBeans with String values have type double

2020-07-28 Thread Randall Hauch (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166609#comment-17166609
 ] 

Randall Hauch commented on KAFKA-10132:
---

I changed the "Fix Versions" to remove `2.6.0`, since we're currently blocked 
on that branch while we release 2.6.0.

> Kafka Connect JMX MBeans with String values have type double
> 
>
> Key: KAFKA-10132
> URL: https://issues.apache.org/jira/browse/KAFKA-10132
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.5.0
>Reporter: Tom Malaher
>Assignee: Rens Groothuijsen
>Priority: Major
>  Labels: pull-request-available
>
> There are quite a few metrics available for source/sink connectors, and many 
> of them are numeric (JMX type "double"), but there are a few attributes that 
> have string values that are still tagged as "double".
> For example:
> Bean: kafka.connect:connector=my-source,type=connector-metrics Attribute: 
> status
> The Attribute Description says: "The status of the connector task. One of 
> 'unassigned', 'running', 'paused', 'failed', or 'destroyed'."
> The value is currently "running" on my instance.
> This causes difficulty for anything that tries to introspect the JMX 
> attribute metadata and then parse/display the data.
> See also 
> [https://stackoverflow.com/questions/50291157/which-jmx-metric-should-be-used-to-monitor-the-status-of-a-connector-in-kafka-co]
>  where this problem is mentioned in one of the answers (dating back to 2018).
> The attribute metadata should be updated to indicate the correct type.
> I suspect the problem lies at line 220 of 
> `org.apache.kafka.common.metrics.JmxReporter` (in version 2.5.0) where a 
> hardcoded `double.class.getName()` is used as the mbean data type even for 
> metrics with a type of String.
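One possible direction for the fix suspected above: derive the mbean attribute type from the metric value instead of hardcoding `double.class.getName()`. This is an illustrative sketch, not the actual `JmxReporter` change; the helper name and mapping are assumptions:

```java
public class MBeanTypeSketch {
    // Hypothetical helper: numeric metric values keep the double type,
    // while string-valued metrics (e.g. a connector's `status`) report String.
    public static String attributeType(final Object metricValue) {
        if (metricValue instanceof Number) {
            return double.class.getName();
        }
        return String.class.getName();
    }

    public static void main(final String[] args) {
        System.out.println(attributeType(1.0d));      // double
        System.out.println(attributeType("running")); // java.lang.String
    }
}
```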





[jira] [Commented] (KAFKA-10316) Consider renaming getter method for Interactive Queries

2020-07-28 Thread John Thomas (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166613#comment-17166613
 ] 

John Thomas commented on KAFKA-10316:
-

[~mjsax] I got access to create KIPs and have created 
[KIP-648|https://cwiki.apache.org/confluence/display/KAFKA/KIP-648%3A+Renaming+getter+method+for+Interactive+Queries].

It's my first KIP document :) I followed the guidelines and I hope I'm 
conveying the right thing. Could you do a quick check?

A couple of questions:
 * In the KIP process, the next steps are to start a discussion thread and 
vote. Would this KIP need a discussion thread?
 * Since it's under Kafka Streams, I've made an entry under 
[Kafka+Streams|https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Streams]. 
Do we also need to make an entry under 
[Kafka+Improvement+Proposals|https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals]?

> Consider renaming getter method for Interactive Queries
> ---
>
> Key: KAFKA-10316
> URL: https://issues.apache.org/jira/browse/KAFKA-10316
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: John Thomas
>Priority: Minor
>  Labels: beginner, need-kip, newbie
>
> In the 2.5 release, we introduce new classes for Interactive Queries via 
> KIP-535 (cf https://issues.apache.org/jira/browse/KAFKA-6144). The KIP did 
> not specify the names for getter methods of `KeyQueryMetadata` explicitly and 
> they were added in the PR as `getActiveHost()`, `getStandbyHosts()`, and 
> `getPartition()`.
> However, in Kafka it is custom to not use the `get` prefix for getters and 
> thus the methods should have been added as `activeHost()`, `standbyHosts()`, 
> and `partition()`, respectively.
> We should consider renaming the methods accordingly, by deprecating the 
> existing ones and adding the new ones in parallel.





[jira] [Comment Edited] (KAFKA-10316) Consider renaming getter method for Interactive Queries

2020-07-28 Thread John Thomas (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166613#comment-17166613
 ] 

John Thomas edited comment on KAFKA-10316 at 7/28/20, 6:17 PM:
---

[~mjsax] Yeah! I got access to create KIPs and have created 
[KIP-648|https://cwiki.apache.org/confluence/display/KAFKA/KIP-648%3A+Renaming+getter+method+for+Interactive+Queries].

It's my first KIP document :) I followed the guidelines and I hope I'm 
conveying the right thing. Could you do a quick check?

A couple of questions:
 * In the KIP process, the next steps are to start a discussion thread and 
vote. Would this KIP need a discussion thread?
 * Since it's under Kafka Streams, I've made an entry under 
[Kafka+Streams|https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Streams]. 
Do we also need to make an entry under 
[Kafka+Improvement+Proposals|https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals]?


was (Author: johnthotekat):
[~mjsax] I got the access to create KIP and have created KIP : 
[KIP-648|https://cwiki.apache.org/confluence/display/KAFKA/KIP-648%3A+Renaming+getter+method+for+Interactive+Queries]

Its my first KIP documentation :)  I followed the guidelines and I hope I'm 
conveying the right thing. Could you do a quick check ?

A couple of questions :
 * In the KIP process, the next steps are to start a discussion thread and 
vote. Would this KIP need discussion ? 
 * Since its under Kafka Streams, I've made an entry under [Kafka+Streams 
|https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Streams] Do we need to 
make an entry under 
[Kafka+Improvement+Proposals|https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals]
 also.

> Consider renaming getter method for Interactive Queries
> ---
>
> Key: KAFKA-10316
> URL: https://issues.apache.org/jira/browse/KAFKA-10316
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: John Thomas
>Priority: Minor
>  Labels: beginner, need-kip, newbie
>
> In the 2.5 release, we introduce new classes for Interactive Queries via 
> KIP-535 (cf https://issues.apache.org/jira/browse/KAFKA-6144). The KIP did 
> not specify the names for getter methods of `KeyQueryMetadata` explicitly and 
> they were added in the PR as `getActiveHost()`, `getStandbyHosts()`, and 
> `getPartition()`.
> However, in Kafka it is custom to not use the `get` prefix for getters and 
> thus the methods should have been added as `activeHost()`, `standbyHosts()`, 
> and `partition()`, respectively.
> We should consider renaming the methods accordingly, by deprecating the 
> existing ones and adding the new ones in parallel.





[jira] [Commented] (KAFKA-10284) Group membership update due to static member rejoin should be persisted

2020-07-28 Thread Boyang Chen (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166681#comment-17166681
 ] 

Boyang Chen commented on KAFKA-10284:
-

That's a good observation; I forgot that static members don't actually do 
another offset fetch immediately after they rejoin the group. You are likely 
right about this, and I hadn't thought about the scenario where EOS and static 
membership are both turned on.

> Group membership update due to static member rejoin should be persisted
> ---
>
> Key: KAFKA-10284
> URL: https://issues.apache.org/jira/browse/KAFKA-10284
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 2.3.0, 2.4.0, 2.5.0, 2.6.0
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
> Fix For: 2.6.1
>
>
> For known static members rejoin, we would update its corresponding member.id 
> without triggering a new rebalance. This serves the purpose for avoiding 
> unnecessary rebalance for static membership, as well as fencing purpose if 
> some still uses the old member.id. 
> The bug is that we don't actually persist the membership update, so if no 
> upcoming rebalance gets triggered, this new member.id information will get 
> lost during group coordinator immigration, thus bringing up the zombie member 
> identity.
> The bug find credit goes to [~hachikuji] 





[jira] [Commented] (KAFKA-8037) KTable restore may load bad data

2020-07-28 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166703#comment-17166703
 ] 

Matthias J. Sax commented on KAFKA-8037:


{quote}Per the interaction: I'm actually suggesting that the source-topic reuse 
would NOT be part of the topology optimization moving forward and are ONLY be 
controlled by the per-store APIs, and hence it would not be related to the 
`topology.optimization` any more.
{quote}
Sure, this would hold if a user specifies the new value `2.6`. But what 
happens if the user specifies the deprecated `OPTIMIZE_ALL` (it seems we would 
still need to do the optimization in this case for backward compatibility 
reasons) and in addition updates the code to enable/disable it on a per-topic 
basis?

Thanks for clarifying the serde question. Makes sense.

> KTable restore may load bad data
> 
>
> Key: KAFKA-8037
> URL: https://issues.apache.org/jira/browse/KAFKA-8037
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Priority: Minor
>  Labels: pull-request-available
>
> If an input topic contains bad data, users can specify a 
> `deserialization.exception.handler` to drop corrupted records on read. 
> However, this mechanism may be by-passed on restore. Assume a 
> `builder.table()` call reads and drops a corrupted record. If the table state 
> is lost and restored from the changelog topic, the corrupted record may be 
> copied into the store, because on restore plain bytes are copied.
> If the KTable is used in a join, an internal `store.get()` call to lookup the 
> record would fail with a deserialization exception if the value part cannot 
> be deserialized.
> GlobalKTables are affected, too (cf. KAFKA-7663 that may allow a fix for 
> GlobalKTable case). It's unclear to me atm, how this issue could be addressed 
> for KTables though.
> Note, that user state stores are not affected, because they always have a 
> dedicated changelog topic (and don't reuse an input topic) and thus the 
> corrupted record would not be written into the changelog.





[jira] [Commented] (KAFKA-10316) Consider renaming getter method for Interactive Queries

2020-07-28 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166706#comment-17166706
 ] 

Matthias J. Sax commented on KAFKA-10316:
-

Thanks. I glanced over the KIP quickly and it looks good. Please also add it 
to the main KIP table, as that table should list all KIPs.

As this KIP is rather straightforward, I agree that we can skip the discussion 
thread and you can call for a vote right away.

> Consider renaming getter method for Interactive Queries
> ---
>
> Key: KAFKA-10316
> URL: https://issues.apache.org/jira/browse/KAFKA-10316
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: John Thomas
>Priority: Minor
>  Labels: beginner, need-kip, newbie
>
> In the 2.5 release, we introduce new classes for Interactive Queries via 
> KIP-535 (cf https://issues.apache.org/jira/browse/KAFKA-6144). The KIP did 
> not specify the names for getter methods of `KeyQueryMetadata` explicitly and 
> they were added in the PR as `getActiveHost()`, `getStandbyHosts()`, and 
> `getPartition()`.
> However, in Kafka it is custom to not use the `get` prefix for getters and 
> thus the methods should have been added as `activeHost()`, `standbyHosts()`, 
> and `partition()`, respectively.
> We should consider renaming the methods accordingly, by deprecating the 
> existing ones and adding the new ones in parallel.





[jira] [Commented] (KAFKA-8037) KTable restore may load bad data

2020-07-28 Thread Guozhang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166762#comment-17166762
 ] 

Guozhang Wang commented on KAFKA-8037:
--

{quote}
Sure, this would hold if a user specifies the new value `2.6`. But what 
happens if the user specifies the deprecated `OPTIMIZE_ALL` (it seems we would 
still need to do the optimization in this case for backward compatibility 
reasons) and in addition updates the code to enable/disable it on a per-topic 
basis?
{quote}

Yes, that's right. In that case the per-store API takes precedence to override 
the effects of `OPTIMIZE_ALL`.

> KTable restore may load bad data
> 
>
> Key: KAFKA-8037
> URL: https://issues.apache.org/jira/browse/KAFKA-8037
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Priority: Minor
>  Labels: pull-request-available
>
> If an input topic contains bad data, users can specify a 
> `deserialization.exception.handler` to drop corrupted records on read. 
> However, this mechanism may be by-passed on restore. Assume a 
> `builder.table()` call reads and drops a corrupted record. If the table state 
> is lost and restored from the changelog topic, the corrupted record may be 
> copied into the store, because on restore plain bytes are copied.
> If the KTable is used in a join, an internal `store.get()` call to lookup the 
> record would fail with a deserialization exception if the value part cannot 
> be deserialized.
> GlobalKTables are affected, too (cf. KAFKA-7663 that may allow a fix for 
> GlobalKTable case). It's unclear to me atm, how this issue could be addressed 
> for KTables though.
> Note, that user state stores are not affected, because they always have a 
> dedicated changelog topic (and don't reuse an input topic) and thus the 
> corrupted record would not be written into the changelog.





[jira] [Commented] (KAFKA-10316) Consider renaming getter method for Interactive Queries

2020-07-28 Thread John Thomas (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166863#comment-17166863
 ] 

John Thomas commented on KAFKA-10316:
-

Thanks, Matthias. Added the KIP to the main table and called for a vote.

> Consider renaming getter method for Interactive Queries
> ---
>
> Key: KAFKA-10316
> URL: https://issues.apache.org/jira/browse/KAFKA-10316
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: John Thomas
>Priority: Minor
>  Labels: beginner, need-kip, newbie
>
> In the 2.5 release, we introduce new classes for Interactive Queries via 
> KIP-535 (cf https://issues.apache.org/jira/browse/KAFKA-6144). The KIP did 
> not specify the names for getter methods of `KeyQueryMetadata` explicitly and 
> they were added in the PR as `getActiveHost()`, `getStandbyHosts()`, and 
> `getPartition()`.
> However, in Kafka it is custom to not use the `get` prefix for getters and 
> thus the methods should have been added as `activeHost()`, `standbyHosts()`, 
> and `partition()`, respectively.
> We should consider renaming the methods accordingly, by deprecating the 
> existing ones and adding the new ones in parallel.





[jira] [Created] (KAFKA-10321) shouldDieOnInvalidOffsetExceptionWhileRunning would block forever on JDK11

2020-07-28 Thread Boyang Chen (Jira)
Boyang Chen created KAFKA-10321:
---

 Summary: shouldDieOnInvalidOffsetExceptionWhileRunning would block 
forever on JDK11
 Key: KAFKA-10321
 URL: https://issues.apache.org/jira/browse/KAFKA-10321
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Boyang Chen


I have spotted two definite cases where the test 
shouldDieOnInvalidOffsetExceptionWhileRunning fails to stop, blocking the 
whole test suite:
[https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/7604/consoleFull]

[https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/7602/console]





[jira] [Commented] (KAFKA-10321) shouldDieOnInvalidOffsetExceptionWhileRunning would block forever on JDK11

2020-07-28 Thread Boyang Chen (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166909#comment-17166909
 ] 

Boyang Chen commented on KAFKA-10321:
-

Looks like we have infinite blocking logic in `GlobalStreamThread`:
{code:java}
@Override
public synchronized void start() {
super.start();
while (!stillRunning()) {
Utils.sleep(1);
if (startupException != null) {
throw startupException;
}
}
}
{code}
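One way to avoid the hang, sketched in plain Java (not the actual `GlobalStreamThread` code; the method name, suppliers, and timeout are illustrative): bound the spin loop with a deadline so a startup that never reaches RUNNING and never records an exception fails fast instead of blocking the suite.

```java
import java.util.function.BooleanSupplier;
import java.util.function.Supplier;

public class BoundedStartupWait {
    public static void awaitStartup(final BooleanSupplier stillRunning,
                                    final Supplier<RuntimeException> startupException,
                                    final long timeoutMs) {
        final long deadline = System.currentTimeMillis() + timeoutMs;
        while (!stillRunning.getAsBoolean()) {
            final RuntimeException e = startupException.get();
            if (e != null) {
                throw e;                      // propagate the startup failure
            }
            if (System.currentTimeMillis() > deadline) {
                throw new IllegalStateException(
                        "global thread did not start within " + timeoutMs + " ms");
            }
            try {
                Thread.sleep(1);              // mirrors the Utils.sleep(1) above
            } catch (final InterruptedException ie) {
                Thread.currentThread().interrupt();
                throw new IllegalStateException("interrupted while awaiting startup", ie);
            }
        }
    }

    public static void main(final String[] args) {
        awaitStartup(() -> true, () -> null, 100); // already running: returns at once
        System.out.println("started");
    }
}
```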

> shouldDieOnInvalidOffsetExceptionWhileRunning would block forever on JDK11
> --
>
> Key: KAFKA-10321
> URL: https://issues.apache.org/jira/browse/KAFKA-10321
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Boyang Chen
>Priority: Major
>
> Have spotted two definite cases where the test  
> shouldDieOnInvalidOffsetExceptionWhileRunning fails to stop during the whole 
> test suite:
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/7604/consoleFull]
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/7602/console]





[jira] [Commented] (KAFKA-10321) shouldDieOnInvalidOffsetExceptionWhileRunning would block forever on JDK11

2020-07-28 Thread Boyang Chen (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166911#comment-17166911
 ] 

Boyang Chen commented on KAFKA-10321:
-

cc [~mjsax] [~vvcephei] who added this test in the PR: 
[https://github.com/apache/kafka/pull/9075]

> shouldDieOnInvalidOffsetExceptionWhileRunning would block forever on JDK11
> --
>
> Key: KAFKA-10321
> URL: https://issues.apache.org/jira/browse/KAFKA-10321
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Boyang Chen
>Priority: Major
>
> Have spotted two definite cases where the test  
> shouldDieOnInvalidOffsetExceptionWhileRunning fails to stop during the whole 
> test suite:
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/7604/consoleFull]
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/7602/console]





[jira] [Updated] (KAFKA-10321) shouldDieOnInvalidOffsetExceptionDuringStartup would block forever on JDK11

2020-07-28 Thread Boyang Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boyang Chen updated KAFKA-10321:

Summary: shouldDieOnInvalidOffsetExceptionDuringStartup would block forever 
on JDK11  (was: shouldDieOnInvalidOffsetExceptionWhileRunning would block 
forever on JDK11)

> shouldDieOnInvalidOffsetExceptionDuringStartup would block forever on JDK11
> ---
>
> Key: KAFKA-10321
> URL: https://issues.apache.org/jira/browse/KAFKA-10321
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Boyang Chen
>Priority: Major
>
> Have spotted two definite cases where the test  
> shouldDieOnInvalidOffsetExceptionWhileRunning fails to stop during the whole 
> test suite:
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/7604/consoleFull]
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/7602/console]





[jira] [Updated] (KAFKA-10321) shouldDieOnInvalidOffsetExceptionDuringStartup would block forever on JDK11

2020-07-28 Thread Boyang Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boyang Chen updated KAFKA-10321:

Description: 
Have spotted two definite cases where the test
shouldDieOnInvalidOffsetExceptionDuringStartup hangs and blocks the rest of the
test suite:
 [https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/7604/consoleFull]

[https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/7602/console]

  was:
Have spotted two definite cases where the test  
shouldDieOnInvalidOffsetExceptionWhileRunning fails to stop during the whole 
test suite:
[https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/7604/consoleFull]

[https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/7602/console]


> shouldDieOnInvalidOffsetExceptionDuringStartup would block forever on JDK11
> ---
>
> Key: KAFKA-10321
> URL: https://issues.apache.org/jira/browse/KAFKA-10321
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Boyang Chen
>Priority: Major
>
> Have spotted two definite cases where the test 
> shouldDieOnInvalidOffsetExceptionDuringStartup
> fails to stop during the whole test suite:
>  [https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/7604/consoleFull]
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/7602/console]





[jira] [Commented] (KAFKA-10321) shouldDieOnInvalidOffsetExceptionDuringStartup would block forever on JDK11

2020-07-28 Thread Boyang Chen (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166928#comment-17166928
 ] 

Boyang Chen commented on KAFKA-10321:
-

Actually, I think I found the bug; check out the PR here: 
[https://github.com/apache/kafka/pull/9095]

> shouldDieOnInvalidOffsetExceptionDuringStartup would block forever on JDK11
> ---
>
> Key: KAFKA-10321
> URL: https://issues.apache.org/jira/browse/KAFKA-10321
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Boyang Chen
>Priority: Major
>
> Have spotted two definite cases where the test 
> shouldDieOnInvalidOffsetExceptionDuringStartup
> fails to stop during the whole test suite:
>  [https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/7604/consoleFull]
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/7602/console]


